query
stringlengths 23.7k
193k
| response
stringlengths 2.64k
4.69k
|
---|---|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Explaining V1 Properties with a Biologically Constrained Deep Learning Architecture
Galen Pogoncheff
Department of Computer Science
University of California, Santa Barbara
Santa Barbara, CA 93106
[email protected]
Jacob Granley
Department of Computer Science
University of California, Santa Barbara
Santa Barbara, CA 93106
[email protected]
Michael Beyeler
Department of Computer Science
Department of Psychological & Brain Sciences
University of California, Santa Barbara
Santa Barbara, CA 93106
[email protected]
###### Abstract
Convolutional neural networks (CNNs) have recently emerged as promising models of the ventral visual stream, despite their lack of biological specificity. While current state-of-the-art models of the primary visual cortex (V1) have surfaced from training with adversarial examples and extensively augmented data, these models are still unable to explain key neural properties observed in V1 that arise from biological circuitry. To address this gap, we systematically incorporated neuroscience-derived architectural components into CNNs to identify a set of mechanisms and architectures that more comprehensively explain V1 activity. Upon enhancing task-driven CNNs with architectural components that simulate center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification, we uncover models with latent representations that yield state-of-the-art explanation of V1 neural activity and tuning properties. Moreover, analyses of the learned parameters of these components and stimuli that maximally activate neurons of the evaluated networks provide support for their role in explaining neural properties of V1. Our results highlight an important advancement in the field of NeuroAI, as we systematically establish a set of architectural components that contribute to unprecedented explanation of V1. The neuroscience insights that could be gleaned from increasingly accurate in-silico models of the brain have the potential to greatly advance the fields of both neuroscience and artificial intelligence.
## 1 Introduction
Many influential deep learning architectures and mechanisms that are widely used today, such as convolutional neural networks (CNNs) [1] and mechanisms of attention [2, 3, 4, 5], draw inspiration from biological intelligence. Despite decades of research into computational models of the visual system, our understanding of its complexities remains far from complete. Existing neuroscientific models of the visual system (e.g., generalized linear-nonlinear models [6, 7, 8, 9]) are often founded upon empirical observations from relatively small datasets, and are therefore unlikely to capture the true complexity of the visual system. While these models have successfully explained many properties of neural response to simple stimuli, their simplicity does not generalize to complex image stimuli [10].
Following their astounding success in computer vision, task-driven CNNs have recently been proposed as candidate models of the ventral stream in primate visual cortex [11, 12, 13, 14, 15], offering a path towards models that can explain hidden complexities of the visual system and generalize to complex visual stimuli. Through task-driven training alone (and in some cases, training a linear read-out layer [12, 13, 14, 15, 16, 17]), representations that resemble neural activity at multiple levels of the visual hierarchy have been observed in these models [16]. With the emergence of such properties, CNNs are already being used to enhance our knowledge of processing in the ventral stream [18].
Despite these advancements, CNNs that achieve state-of-the-art brain alignment are still unable to explain many properties of the visual system. Most traditional CNNs omit many well known architectural and processing hallmarks of the primate ventral stream that are likely key to the development of artificial neural networks (ANNs) that help us decipher the neural code. The development of these mechanisms remains an open challenge. A comprehensive understanding of neural processing in the brain (for instance, in the ventral stream) could in turn contribute to significant leaps in artifical intelligence (AI) - an established goal of NeuroAI research [19, 20].
In this work, we take a systematic approach to analyzing the hallmarks of the primate ventral stream that improve model-brain similarity of CNNs. We formulate architectural components that simulate these processing hallmarks within CNNs and analyze the population-level and neuron-level response properties of these networks, as compared to empirical data recorded in primates. Specifically:
* We enrich the classic ResNet50 architecture with architectural components based on neuroscience foundations that simulate cortical magnification, center-surround antagonism, local filtering, and tuned divisive normalization and show that the resulting network achieves top V1-overall score on the integrative Brain-Score benchmark suite [16].
* Although some of these components have been studied before in isolation, here we demonstrate their synergistic nature through a series of ablation studies that reveal the importance of each component and the benefits of combining them into a single neuro-constrained CNN.
* We analyze the network parameters and stimuli that activate neurons to provide insights into how these architectural components contribute to explaining primary visual cortex (V1) activity in non-human primates.
## 2 Background and Related Work
Model-Brain AlignmentOne central challenge in the field of NeuroAI is the development of computational models that can effectively explain the neural code. To achieve this goal, artificial neural networks must be capable of accurately predicting the behavior of individual neurons and neural populations in the brain. The primary visual cortex (V1) is one of the most well studies areas of the visual system, with modeling efforts dating back to at least 1962 [21]--yet many deep learning models still fall short in explaining its neural activity.
The Brain-Score integrative benchmark [16] has recently emerged as a valuable tool for assessing the capabilities of deep learning models to explain neural activity in the visual system. This suite of benchmarks integrates neural recording and behavioral data from a collection of previous studies and provides standardized metrics for evaluating model explainability of visual areas V1, V2, V4, and IT, as well as additional behavioral and engineering benchmarks.
Although CNNs draw high-level inspiration from neuroscience, current architectures (e.g., ResNet [22] and EfficientNet [23]) bear little resemblance to neural circuits in the visual system. While such differences may not necessarily hinder object recognition performance, these networks still fall short in mimicking many properties of highly capable visual systems. Although there may be many paths towards next-generation AI, foundational studies that have successfully merged foundations of neuroscience and AI have shown promising improvements to traditional ANNs [24, 25, 26].
Center-Surround AntagonismAs early as in the retina, lateral inhibitory connections establish a center-surround antagonism in the receptive field (RF) of many retinal cell types, which is preserved by neurons in the lateral geniculate nucleus and the visual cortex. In the primate visual system, this center-surround antagonism is thought to facilitate edge detection, figure-ground segregation, depth perception, and cue-invariant object perception [27, 28, 29, 30], and is therefore a fundamental property of visual processing.
Center-surround RFs are a common component of classical neuroscience models [31, 32, 33], where they are typically implemented using a Difference of Gaussian (DoG) that produces an excitatory peak at the RF center with an inhibitory surround (Fig. 1A). Although deep CNNs have the capacity to learn center-surround antagonism, supplementing traditional convolutional kernels with fixed-weight DoG kernels has been demonstrated to improve object recognition in the context of varied lighting, occlusion, and noise [34, 35].
Local Receptive FieldsThe composition of convolutional operations in CNNs enables hierarchical processing and translation equivariance, both of which are fundamental to core object recognition in the primate ventral visual stream. However, the underlying mechanism through which this is achieved is biologically implausible, as kernel weights are shared among downstream neurons. Though locally connected neural network layers can theoretically learn the same operation, traditional convolutions are typically favored in practice for their computational efficiency and performance benefits. However, local connectivity is a ubiquitous pattern in the ventral stream (Fig. 1B), and visual processing phenomena (e.g., orientation preference maps [36]) have been attributed to this circuitry pattern. In artificial neural systems, Lee _et al._[37] observed the emergence of topographic hallmarks in the inferior temporal cortex when encouraging local connectivity in CNNs. Pogodin _et al._[38] considered the biological implausibility of CNNs and demonstrated a neuro-inspired approach to reducing the performance gap between traditional CNNs and locally-connected networks, meanwhile achieving better alignment with neural activity in primates.
Divisive NormalizationDivisive normalization is wide-spread across neural systems and species [39]. In early visual cortex, it is theorized to give rise to well-documented physiological phenomena, such as response saturation, sublinear summation of stimulus responses, and cross-orientation suppression [40].
In 2021, Burg and colleagues [41] introduced an image-computable divisive normalization model in which each artificial neuron was normalized by weighted responses of neurons with the same receptive field. In comparison to a simple 3-layer CNN trained to predict the same stimulus responses, their analyses revealed that cross-orientation suppression was more prevalent in the divisive normalization model than in the CNN, suggesting that divisive normalization may not be inherently learned by task-driven CNNs. In a separate study, Cirincione _et al._[42] showed that simulating divisive normalization within a CNN can improve object recognition robustness to image corruptions and enhance alignment with certain tuning properties of primate V1.
Tuned Normalization/Cross-Channel InhibitionWhile it is not entirely clear whether divisive normalization should be performed across space and/or across channels in computational models (implementations vary widely), Rust _et al._[43] demonstrated that many response properties of motion-selective cells in the middle temporal area, such as motion-opponent suppression and response normalization, emerge from a mechanism they termed "tuned normalization". In this scheme, a given neuron is normalized by a pool of neurons that share the same receptive field but occupy a different region in feature space. We adopt this idea in the present work (Fig. 1C), hypothesizing that enforcing feature-specific weights in the pooling signal might enable a deep net to learn "opponent suppression" signals, much like cross-orientation signals found in biological V1 [44, 45].
Figure 1: Design patterns of neuro-constrained architectural components. A) Difference of Gaussian implements a center-surround receptive field. B) Local receptive fields of two neurons without weight sharing. C) Tuned divisive normalization inhibits each feature map by a Gaussian-weighted average of competing features. D) Log-polar transform simulating cortical magnification
Cortical MagnificationIn many sensory systems, a disproportionately large area of the cortex is dedicated to processing the most important information. This phenomenon, known as cortical magnification, reflects the degree to which the brain dedicates resources to processing sensory information accompanying a specific sense. In the primary visual cortex, a larger proportion of cortical area processes visual stimuli presented at the center of the visual field as compared to stimuli at greater spatial eccentricities [46]. The relationship between locations in the visual field and corresponding processing regions in the visual cortex has commonly been modeled with a log-polar mapping (Fig. 1D) or derivations thereof [47; 48; 49; 50].
Layers of artificial neurons of traditional CNNs have uniform receptive field sizes and do not exhibit any sort of cortical magnification, failing to capture these distinctive properties of neuronal organization in the primary visual cortex. Recent works have demonstrated that introducing log polar-space sampling into CNNs can give rise to improved invariance and equivariance to spatial transformations [51; 52] and adversarial robustness [53].
## 3 Methods
### Neuro-Constrained CNN Architecture
Given the previous state-of-the-art V1 alignment scores achieved with ResNet50 [25], we adopted this architecture as our baseline and test platform. However, the architectural components that we considered in the work are modular and can be integrated into general CNNs architectures. The remainder of this subsection details the implementation and integration of each architectural component within a neuro-constrained ResNet. In all experiments, we treated the output units from ResNet50 layer 1 as "artificial V1" neurons (refer to Section 3.2 for layer selection criteria). Fig. 2 depicts ResNet50 layer 1 after enhancement with neuroscience-based architectural components. Code and materials required to reproduce the presented work are available at github.com/bionicvisionlab/2023-Pogoncheff-Explaining-V1-Properties.
Center-Surround AntagonismCenter-surround ANN layers are composed of DoG kernels of shape \((c_{i}\times c_{o}\times k\times k)\), where \(c_{i}\) and \(c_{o}\) denote the number of input and output channels, respectively, and \(k\) reflects the height and width of each kernel. These DoG kernels (Fig. 1A) are convolved with the pre-activation output of a standard convolution. Each DoG kernel, \(DoG_{i}\) is of the form
\[\mathrm{DoG}_{i}(x,y)=\frac{\alpha}{2\pi\sigma_{i,\mathrm{center}}^{2}}\exp \Big{(}-\frac{x^{2}+y^{2}}{2\sigma_{i,\mathrm{center}}^{2}}\Big{)}-\frac{ \alpha}{2\pi\sigma_{i,\mathrm{surround}}^{2}}\exp\Big{(}-\frac{x^{2}+y^{2}}{2 \sigma_{i,\mathrm{surround}}^{2}}\Big{)}, \tag{1}\]
where \(\sigma_{i,\mathrm{center}}\) and \(\sigma_{i,\mathrm{surround}}\) were the Gaussian widths of the center and surround, respectively (\(\sigma_{i,\mathrm{center}}<\sigma_{i,\mathrm{surround}}\)), \(\alpha\) was a scaling factor, and \((x,y):=(0,0)\) at the kernel center. For \(\alpha>0\) the kernel will have an excitatory center and inhibitory surround, while \(\alpha<0\) results in a kernel with inhibitory center and excitatory surround. Novel to this implementation, each DoG kernel has learnable parameters, better accommodating the diverse tuning properties of neurons within the network. As in [34; 35], these DoG convolutions were only applied to a fraction of the input feature
Figure 2: ResNet50 layer 1, supplemented with neuro-constrained architectural components. Throughout the the modified layer 1, primary visual cortex (V1) activity is modeled with cortical magnification, center-surround convolutions, tuned normalization, and local receptive field layers. Layer 1 output units are treated as artificial V1 neurons.
map. Specifically, we applied this center-surround convolution to one quarter of all \(3\times 3\) convolutions in layer 1 of our neuro-constrained ResNet50.
Local Receptive FieldsTo untangle the effects of local connectivity on brain alignment, we modified the artificial V1 layer by substituting the final \(3\times 3\) convolution of ResNet50 layer 1 with a \(3\times 3\) locally connected layer in isolation. This substitution assigns each downstream neuron its own filter while preserving its connection to upstream neurons (Fig. 1B), following the pattern in [38].
Divisive NormalizationWe consider the divisive normalization block proposed in [42] which performs normalization both spatially and across feature maps using learned normalization pools. Following our experimental design principle of selectively modifying the network in the vicinity of the artificial V1 neurons, we added this divisive normalization block after the non-linear activation of each residual block in ResNet50 layer 1.
Tuned NormalizationWe devised a novel implementation of tuned normalization inspired by models of opponent suppression [31, 43, 44]. In this scheme, a given neuron is normalized by a pool of neurons that share the same receptive field but occupy a different region in feature space (Fig. 1C), as in [41, 42]. Unlike the learned, weighted normalization proposed in [41], tuned inhibition was encouraged in our implementation by enforcing that each neuron was maximally suppressed by a neuron in a different region of feature space, and that no other neuron is maximally inhibited by activity in this feature space. Letting \(x_{i,j}^{c}\) denote the activity of the neuron at spatial location \((i,j)\) and channel \(c\in[1,C]\) after application of a non-linear activation function. The post-normalization state of this neuron, \(x_{i,j}^{rc}\), is given by:
\[x_{i,j}^{rc}=\frac{x_{i,j}^{c}}{1+\sum_{k}p_{k}x_{i,j}^{c_{k}}}, \tag{2}\]
where \(p_{c,1},\dots,p_{c,C}\) defines a Gaussian distribution with variance \(\sigma_{c}^{2}\) centered at channel \((c+\frac{C}{2})\) mod \(C\). By defining \(\sigma_{c}^{2}\) as a trainable parameter, task-driven training would optimize whether each neuron should be normalized acutely or broadly across the feature space.
As this mechanism preserves the dimension of the input feature map, it can follow any non-linear activation function of the core network without further modification to the architecture. Similar to the divisive normalization block, tuned normalization was added after the non-linear activation of each residual block in ResNet50 layer 1 in our experiments.
Cortical MagnificationCortical magnification and non-uniform receptive field sampling was simulated in CNNs using a differentiable polar sampling module (Fig. 1D). In this module, the spatial dimension of an input feature map are divided into polar regions defined by discrete radial and angular divisions of polar space. In particular, we defined a discrete polar coordinate system partitioned in the first dimension by radial partitions \(r_{0},r_{1},...,r_{m}\) and along the second dimension by angular partitions \(\theta_{0},\theta_{1},...,\theta_{n}\). Pixels of the input feature map that are located within the same polar region (i.e., are within the same radial bounds \([r_{i},r_{i+1})\) and angular bounds \([\theta_{j},\theta_{j+1})\)) are pooled and mapped to coordinate \((i,j)\) of the original pixel space (Fig. 1D) [54]. Pixels in the output feature map with no associated polar region were replaced with interpolated pixel values from the same radial bin. By defining the spacing between each concentric radial bin to be monotonically increasing (i.e., for all \(i\in[1,m-1]\), \((r_{i}-r_{i-1})\leq(r_{i+1}-r_{i})\)), visual information at lower spatial eccentricities with respect to the center of the input feature map consumes a larger proportion of the transformed feature map than information at greater eccentricities (Fig. F.1).
A notable result of this transformation is that any standard 2D convolution, with a kernel of size \(k\times k\), that is applied to the the transformed features space is equivalent to performing a convolution in which the kernel covers a \(k\times k\) contiguous region of polar space and strides along the angular and radial axes. Furthermore, downstream artificial neurons which process information at greater spatial eccentricities obtain larger receptive fields. Treating the CNN as a model of the ventral visual stream, this polar transformation immediately preceded ResNet50 layer 1 (replacing the first max-pooling layer), where V1 representations were assumed to be learned.
### Training and Evaluation
Training ProcedureV1 alignment was evaluated for ImageNet-trained models [55]. For all models, training and validation images were downsampled to a resolution of \(64\times 64\) in consideration of computational constraints. Each model of this evaluation was randomly initialized and trained for 100 epochs with an initial learning rate of \(0.1\) (reduced by a factor of \(10\) at epochs \(60\) and \(80\), where validation set performance was typically observed to plateau) and a batch size of \(128\).
We additionally benchmarked each neuro-constrained model on the Tiny-ImageNet-C dataset to study the effect of V1 alignment on object recognition robustness [56] (evaluation details provided in Appendix H). Tiny-ImageNet-C was used as an alternative to ImageNet-C given that the models trained here expected \(64\times 64\) input images and downsampling the corrupted images of ImageNet-C would have biased our evaluations. ImageNet pre-trained models were fine-tuned on Tiny-ImageNet prior to this evaluation. As a given model will learn alternative representations when trained on different datasets (thereby resulting in V1 alignment differences), we methodologically froze all parameters of each ImageNet trained model, with the exception of the classification head, prior to 40 epochs of fine tuning with a learning rate of \(0.01\) and a batch size of \(128\).
Validation loss and accuracy were monitored during both training procedures. The model state that enabled the greatest validation accuracy during training was restored for evaluations that followed. Training data augmentations were limited to horizontal flipping (ImageNet and Tiny-ImageNet) and random cropping (ImageNet).
Training was performed using single NVIDIA 3090 and A100 GPUs. Each model took approximately 12 hours to train on ImageNet and less than 30 minutes to fine-tune on Tiny-ImageNet.
Evaluating V1 AlignmentWe evaluated the similarity between neuro-constrained models of V1 and the primate primary visual cortex using the Brain-Score V1 benchmark [16]. The V1 benchmark score is an average of two sub-metrics: 'V1 FreemanZiemba2013' and 'V1 Marques2020', which we refer to as V1 Predictivity and V1 Property scores in what follows. For each metric, the activity of artificial neurons in a given neural network layer is computed using in-silico neurophysiology experiments. The V1 Predictivity score reflects the degree to which the model can explain the variance in stimulus-driven responses of V1 neurons, as determined by partial least squares regression mapping. The V1 Property score measures how closely the distribution of \(22\) different neural properties, from \(7\) neural tuning categories (orientation, spatial frequency, response selectivity, receptive field size, surround modulation, texture modulation, and response magnitude), matches between the model's artificial neural responses and empirical data from macaque V1. Together, these two scores provide a comprehensive view of stimulus response similarity between artificial and primate V1 neurons.
Brain-Score evaluations assume a defined mapping between units of an ANN layer and a given brain region. In all analyses of V1 alignment that follow, we systematically fixed the output neurons of ResNet50 layer 1 as the artificial V1 neurons. Note that this is a more strict rule than most models submitted to the Brain-Score leaderboard, as researchers are able to choose which layer in the deep net should correspond to the V1 readout. In baseline analyses, among multiple evaluated layers, we observed highest V1 alignment between artificial units and primate V1 activity from layer 1, establishing it as a strong baseline. Alternative layer V1 scores are presented in Appendix B.
## 4 Results
### Architectural Components in Isolation
Patterns of neural activity observed in the brain can be attributed to the interplay of multiple specialized processes. Through an isolated analysis, our initial investigations revealed the contribution of specialized mechanisms to explaining patterns of neural activity in V1. Tables 1 and 2 present the results of this analysis, including ImageNet validation accuracy, V1 Overall, V1 Predictivity, and V1 Property scores.
Among the five modules evaluated in this analysis, cortical magnification emerged as the most influential factor in enhancing V1 alignment. This mechanism substantially improved the ResNet's ability to explain the variance in stimulus responses, and the artificial neurons exhibited tuning properties that were more closely aligned with those of biological neurons, particularly in terms oforientation tuning, spatial frequency tuning, response selectivity, and most of all, stimulus response magnitude. However, the artificial neuronal responses of the cortical magnification network showed lower resemblance to those observed in primate V1 with regard to surround modulation, as compared to the baseline network.
Simulating neural normalization within the ResNet resulted in artificial neurons that displayed improved alignment with primate V1 in terms of response properties. Noteworthy enhancements were observed in the spatial frequency, receptive field size, surround modulation, and response magnitude properties of neurons within the modified network, leading to improvements in the V1 Property score. These results applied to both tuned and untuned forms of normalization.
In contrast, the introduction of center-surround convolutions yielded minimal improvements in neural predictivity and slight reductions in overall neuron property similarity. Surprisingly, the surround modulation properties of the artificial neurons decreased compared to the baseline model, contrary to our expectations.
Finally, replacing the final \(3\times 3\) convolution preceding the artificial V1 readout with a locally connected layer resulted in modest changes in V1 alignment. This was one of the two mechanisms that led to improvements in the surround modulation response property score (tuned normalization being the other).
These findings collectively provide valuable insights into the individual contributions of each specialized mechanism. Although mechanisms simulating center-surround antagonism (i.e., DoG convolution) and local connectivity provide little benefit to overall predictivity and property scores in isolation, we observed that they reduce the property dissimilarity gap among tuning properties that are nonetheless important and complement alignment scores where divisive normalization and cortical magnification do not.
### Complementary Components Explain V1 Activity
Constraining a general-purpose deep learning model with a single architectural component is likely insufficient to explain primate V1 activity given our knowledge that a composition of known circuits play pivotal roles in visual processing. This was empirically observed in Section 4.1, wherein cortical
\begin{table}
\begin{tabular}{l r r r r r} & \multicolumn{1}{c}{ImageNet Acc} & \multicolumn{1}{c}{V1 Overall} & \multicolumn{1}{c}{V1 Predictivity} & \multicolumn{1}{c}{V1 Property} \\ \hline Center-surround antagonism & \(.610\pm.001\) & \(.545\pm.002\) & \(.304\pm.016\) & \(.786\pm.018\) \\ Local receptive fields & \(.609\pm.001\) & \(.550\pm.006\) & \(.300\pm.002\) & \(.799\pm.012\) \\ Divisive normalization & \(.606\pm.001\) & \(.543\pm.003\) & \(.271\pm.014\) & \(.815\pm.011\) \\ Tuned normalization & \(.608\pm.002\) & \(.547\pm.004\) & \(.274\pm.004\) & \(.820\pm.009\) \\ Cortical magnification & \(.548\pm.008\) & \(.587\pm.014\) & \(.370\pm.008\) & \(.805\pm.021\) \\ ResNet50 (Baseline) & \(.613\pm.002\) & \(.550\pm.004\) & \(.295\pm.003\) & \(.805\pm.011\) \\ \end{tabular}
\end{table}
Table 1: ImageNet object recognition classification performance (\(64\times 64\) images) and primary visual cortex (V1) alignment scores of ResNet50 augmented with each architectural component. Mean and standard deviations are reported across three runs (random initialization, training, and evaluating) of each architecture. Scores higher than baseline are presented in green and those lower are presented in red (the more saturated the color is, the greater the difference from baseline).
\begin{table}
\begin{tabular}{l r r r r r r} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ & \multicolumn{1}{c}{Orientation} & \multicolumn{1}{c}{frequency} & \multicolumn{1}{c}{selectivity} & \multicolumn{1}{c}{RF size} & \multicolumn{1}{c}{modulation} & \multicolumn{1}{c}{modulation} & \multicolumn{1}{c}{magnitude} \\ \hline Center-surround & \(.876\pm.027\) & \(.831\pm.030\) & \(.632\pm.012\) & \(.853\pm.046\) & \(.743\pm.027\) & \(.757\pm.025\) & \(.783\pm.024\) \\ Local receptive fields & \(.904\pm.021\) & \(.817\pm.016\) & \(.648\pm.008\) & \(.852\pm.054\) & \(.847\pm.083\) & \(.743\pm.036\) & \(.780\pm.022\) \\ Divisive normalization & \(.908\pm.017\) & \(.840\pm.014\) & \(.689\pm.007\) & \(.858\pm.046\) & \(.860\pm.070\) & \(.746\pm.030\) & \(.86\pm.019\) \\ Tuned normalization & \(.907\pm.035\) & \(.841\pm.013\) & \(.689\pm.023\) & \(.865\pm.031\) & \(.852\pm.020\) & \(.742\pm.029\) & \(.844\pm.015\) \\ Cortical magnification & \(.907\pm.037\) & \(.848\pm.039\) & \(.708\pm.011\) & \(.808\pm.044\) & \(.858\pm.020\) & \(.789\pm.058\) & \(.915\pm.041\) \\ ResNet50 (Baseline) & \(.803\pm.023\) & \(.826\pm.048\) & \(.684\pm.059\) & \(.832\pm.080\) & \(.820\pm.009\) & \(.786\pm.058\) & \(.790\pm.042\) \\ \end{tabular}
\end{table}
Table 2: Model alignment across the seven primary visual cortex (V1) tuning properties that constitute the V1 Property score. Mean and standard deviation of scores observed across three trials of model training and evaluation are reported.
magnification was the only architectural component found to improve the overall V1 alignment score. Taking inspiration from this design principle, we supplemented a ResNet50 with each architectural component and discern the necessary components to achieve optimal V1 alignment in an ablation study. We omitted the architectural component implementing divisive normalization, however, as it cannot be integrated simultaneously with tuned normalization, which was observed to yield slightly higher V1 Predictivity and Property scores in isolated component evaluation. Starting with a ResNet which features all of these architectural components, we employed a greedy approach reminiscent of backward elimination feature selection to deduce critical components without having to evaluate every permutation of component subsets. In each round of this iterative approach, we selectively removed the architectural component that reduced overall V1 alignment the most until only one feature remained. This analysis allowed us to identify the subset of components that collectively yielded the most significant improvements in V1 alignment, and unraveled the intricate relationship between these specialized features and their combined explanation of V1.
The results of the ablation study are presented in Table 3. With the exception of center-surround antagonism, removing any neural mechanisms from the modified residual network reduced overall V1 alignment, suggesting that (1) each architectural component contributed to V1 alignment (the utility of center-surround antagonism is detailed in Section 4.5) and (2) nontrivial interactions between these mechanisms explain V1 more than what is possible with any single mechanism. Seven of the eight models evaluated in this ablation study substantially outperformed all existing models on the Brain-Score platform in modeling V1 tuning property distributions. Furthermore, four models were observed to achieve state-of-the-art V1 Overall scores, explaining both V1 stimulus response activity and neural response properties with high fidelity.
Whether or not feed-forward, ImageNet-trained ANNs can fully approximate activity in primate V1 has stood as an open question. Previous studies have argued that no current model is capable of explaining all behavioral properties using neurons from a single readout layer [17]. The top performing models of the current evaluation stand out as the first examples of CNNs with neural representations that accurately approximate all evaluated V1 tuning properties (Appendix C), offering positive evidence for the efficacy of explaining primate V1 with neuro-inspired deep learning architectures.
Regarding local receptive fields, we hypothesized that removing weight sharing would enable a greater diversity of response selectivity patterns to be learned. While multi-component ablation studies revealed a reduction in response selectivity property scores when local filtering was omitted, the same finding was surprisingly not observed in the single-component analyses of Section 4.1.
Given the inter-neuron competition enforced by tuned normalization, one would expect networks with this component to learn more diverse artificial V1 representations. Analyses of the visual stimuli that maximally activated artificial neurons of these networks (i.e., optimized visual inputs that maximally excite neurons of each channel of the artificial V1 layer, computed via stochastic gradient ascent) provide evidence for this. Quantitatively, we found that the mean Learned Perceptual Image Patch Similarity (LPIPS) [57] between pairs of maximally activating stimuli of the tuned normalization network was less than that of the baseline ResNet50 network (\(p<0.01\), one-way ANOVA [58]). We suggest that this learned feature diversity contributed to the improvements in spatial frequency, response selectivity, receptive field size, surround modulation, and response magnitude tuning property scores when tuned normalization was present.
Finally, given the retinotopic organization of V1, we hypothesized that cortical magnification would give rise to better-aligned response selectivity and receptive field size tuning distributions, meanwhile improving V1 neuron predictivity. In each trial of our ablation studies for which cortical magnification was removed, these respective scores dropped, supporting this hypothesis.
### Object Recognition Robustness to Corrupted Images
In contrast with the human visual system, typical CNNs generalize poorly to out-of-distribution data. Small perturbations to an image can cause a model to output drastically different predictions than it would on the in-tact image. Recent studies have demonstrated a positive correlation between model-brain similarity and robustness to image corruptions [24; 25; 26; 42; 59] After fine-tuning each model's classification head on Tiny-ImageNet (see Section 3.2), we evaluated the object recognition accuracy of each model from Section 4.1 and the top two overall models from Section 4.2 on the Tiny-ImageNet-C dataset. The results of these evaluations for each category of corruption and corruption strength are provided in Appendix H.
Among the evaluated components, only tuned normalization was observed to yield improved corrupt image classification accuracy over the entire test set, albeit slight, beating the baseline accuracy (\(0.278\)) by \(0.005\) (i.e., an absolute improvement of \(.5\%\)). More substantial improvements were observed on 'brightness', 'defocus_blur', 'elastic_transform', and 'pixelate' corruptions (improvements over the baseline of.00986,.00989,.0105, and.0133, respectively).
### Adversarially Training Neuro-Constrained ResNets
Adversarial training has previously been shown to enhance the brain-similarity of artificial neural representations without any modification to the underlying network [60; 25]. Curious as to whether adversarial training would further align the neuro-constrained ResNet50s with V1 activity, we selectively trained the two networks most aligned with V1 (one model with all architectural components and the other with all components except center-surround convolution) from Section 4.2 using "Free" adversarial training [61] (Appendix G). The results are shown in Table 4. Despite the drop in object recognition accuracy, the artificial neural representations that emerged in each network were drastically better predictors of stimulus response variance representations. Tuning property alignment dropped in the process, but remained above previous state-of-the-art regardless. Interestingly, we found that the main difference in V1 scores between these two models can be traced to surround modulation tuning alignment. Center-surround convolutions indeed contributed to improved surround
\begin{table}
\begin{tabular}{c c c c c c c c} Center- & Local & Tuned Nor- & Cortical Mag- & Adversarial & & & \\ Surround & RF & malization & nification & Training & ImageNet Acc & V1 Overall & V1 Productivity & V1 Property \\ \hline ✓ & ✓ & ✓ & ✓ & ✓ & & &.629 &.430 &.829 \\ & ✓ & ✓ & ✓ & ✓ & & &.625 &.430 &.819 \\ & ✓ & ✓ & ✓ & ✓ & & &.555 &.581 &.352 &.809 \\ \end{tabular}
\end{table}
Table 4: Adversarial training was performed on the two models that tied for the top V1 Overall Score. Checkmarks denote whether or not the architectural component was included in the model.
modulation tuning learned while training with on corrupted images, contrasting its apparent lack of contribution to the overall network suggested in the ablation study.
In sum, both networks achieved Rank-1 V1 Overall, Predictivity, and Property scores by large margins, setting a new standard in this breed of brain-aligned CNNs. At the time of writing, the previous Rank-1 V1 Overall, Predictivity, and Property scores were.594,.409, and.816, respectively, and all achieved by separate models.
## 5 Discussion
Throughout this work we presented a systematic evaluation of four architectural components derived from neuroscience principles and their influence on model-V1 similarity. Specifically, we studied task-driven CNNs enhanced with ANN layers that simulate principle processing mechanisms of the primate visual system including center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification. Through an ablation study and isolated component analyses, we found that each component contributed to the production of latent ANN representations that better resemble those of primate V1, as compared to a traditional baseline CNN. When these four components were assembled together within a neuro-constrained ResNet50, V1 tuning properties were explained better than any previous deep learning model that we are aware of. Furthermore, this neuro-constrained model exhibited state-of-the-art explanation of V1 neural activity and is the first of its kind to do so, by a large margin nonetheless, highlighting a promising direction in biologically constrained ANNs. Training this model with "free" adversarial training greatly improved its ability to predict primate neural response to image stimuli at a minor sacrifice to tuning property similarity, establishing an even larger gap between previous state of the art.
Among all architectural components examined in this work, cortical magnification was the most influential to improving V1 alignment. This mechanism on its own could not explain the neural activity as comprehensively as the top models of this study, however. Our implementation of tuned normalization provided substantial improvement to V1 tuning property alignment, and was the only component that contributed to model robustness. The importance of center-surround antagonism seemed to be training data-dependent. In our ablation study, for which all models were trained on ImageNet, center-surround convolutional layers did not contribute to overall V1 scores. This did not surprise us, as deep CNNs have the capacity to learn similar representations without these specialized layers. When training on adversarially perturbed data, however, the center-surround antagonism provided by this layer appeared to improve surround modulation tuning properties of artificial V1 neurons. While previous attempts at improving model-brain similarity have been highly dataset dependent, our results highlight the importance of artificial network design.
A notable limitation to our work is the reduction in ImageNet classification performance that was observed upon the introduction of cortical magnification. While maintaining baseline model accuracy was not a motivation of this work, we can imagine situations in which object recognition performance needs to be preserved alongside these improvements in brain-model alignment. The implementation of cortical magnification in this work assumed that the model's gaze was focused at the center of each image. Consequently, visual stimuli at greater eccentricities from the image center were under-sampled during the polar transformation (Fig. F.1), making images for which the object of interest was not located at the image center (common in ImageNet) more challenging to classify. One scope of future work involves implementing saliency- or attention-driven polar transformations that dynamically focus the center of the polar map on, or near, an object of interest as opposed to being fixed at the image center. We demonstrate the efficacy of this strategy with a naive proof of concept in which classification is performed by averaging predictions from the five static crops of each image [62]. This simple strategy improved the validation accuracy of the network with all components from 55.1% to 59.9% without affecting V1 scores. A more sophisticated, dynamic strategy could further reduce this accuracy drop. We additionally plan to extend this work to model architectures other than ResNet to validate the widespread application of each of these neuro-constrained components.
This work highlights an important advancement in the field of NeuroAI, as we systematically establish a set of neuro-constrained architectural components that contribute to state-of-the-art V1 alignment. We argue that our architecture-driven approach can be further generalized to additional areas of the brain as well. The neuroscience insights that could be gleaned from increasingly accurate in-silico models of the brain have the potential to transform the fields of both neuroscience and AI.
## References
* [1] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. _Nature_, 521(7553):436-444, May 2015.
* [2] Laurent Itti, Christof Koch, and Ernst Niebur. A model of saliency-based visual attention for rapid scene analysis. _IEEE Transactions on pattern analysis and machine intelligence_, 20(11):1254-1259, 1998.
* [3] Hugo Larochelle and Geoffrey E Hinton. Learning to combine foveal glimpses with a third-order boltzmann machine. _Advances in neural information processing systems_, 23, 2010.
* [4] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In _International conference on machine learning_, pages 2048-2057. PMLR, 2015.
* [5] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [6] David J Heeger, Eero P Simoncelli, and J Anthony Movshon. Computational models of cortical visual processing. _Proceedings of the National Academy of Sciences_, 93(2):623-627, 1996.
* [7] Matteo Carandini, David J Heeger, and J Anthony Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. _Journal of Neuroscience_, 17(21):8621-8644, 1997.
* [8] Nicole C Rust, Odelia Schwartz, J Anthony Movshon, and Eero P Simoncelli. Spatiotemporal elements of macaque v1 receptive fields. _Neuron_, 46(6):945-956, 2005.
* [9] Brett Vinitch, J Anthony Movshon, and Eero P Simoncelli. A convolutional subunit model for neuronal responses in macaque v1. _Journal of Neuroscience_, 35(44):14829-14841, 2015.
* [10] Matteo Carandini, Jonathan B. Demb, Valerio Mante, David J. Tolhurst, Yang Dan, Bruno A. Olshausen, Jack L. Gallant, and Nicole C. Rust. Do We Know What the Early Visual System Does? _Journal of Neuroscience_, 25(46):10577-10597, November 2005.
* [11] Daniel L Yamins, Ha Hong, Charles Cadieu, and James J DiCarlo. Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream. In _Advances in Neural Information Processing Systems_, volume 26. Curran Associates, Inc., 2013.
* [12] Daniel L. K. Yamins, Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. _Proceedings of the National Academy of Sciences_, 111(23):8619-8624, June 2014.
* [13] Daniel L. K. Yamins and James J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex. _Nature Neuroscience_, 19(3):356-365, March 2016. Number: 3 Publisher: Nature Publishing Group.
* [14] Najib J. Majaj, Ha Hong, Ethan A. Solomon, and James J. DiCarlo. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance. _Journal of Neuroscience_, 35(39):13402-13418, September 2015. Publisher: Society for Neuroscience Section: Articles.
* [15] Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Leon A. Gatys, Andreas S. Tolias, Matthias Bethge, and Alexander S. Ecker. Deep convolutional models improve predictions of macaque V1 responses to natural images. _PLOS Computational Biology_, 15(4):e1006897, April 2019. Publisher: Public Library of Science.
* [16] Martin Schrimpf, Jonas Kubilius, Ha Hong, Najib J. Majaj, Rishi Rajalingham, Elias B. Issa, Kohitij Kar, Pouya Bashivan, Jonathan Prescott-Roy, Franziska Geiger, Kailyn Schmidt, Daniel L. K. Yamins, and James J. DiCarlo. Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?, January 2020. Pages: 407007 Section: New Results.
* [17] Tiago Marques, Martin Schrimpf, and James J. DiCarlo. Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior, August 2021. Pages: 2021.03.01.433495 Section: New Results.
* [18] Pouya Bashivan, Kohitij Kar, and James J. DiCarlo. Neural population control via deep image synthesis. _Science_, 364(6439):eaav9436, May 2019. Publisher: American Association for the Advancement of Science.
* Hassabis et al. [2017] Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Neuroscience-inspired artificial intelligence. _Neuron_, 95(2):245-258, 2017.
* Zador et al. [2019] Anthony Zador, Sean Escola, Blake Richards, Bence Olveczky, Yoshua Bengio, Kwabena Boahen, Matthew Botvinick, Dmitri Chklovskii, Anne Churchland, Claudia Clopath, James DiCarlo, Surya Ganguli, Jeff Hawkins, Konrad Kording, Alexei Koulakov, Yann LeCun, Timothy Lillicrap, Adam Marblestone, Bruno Olshausen, Alexandre Pouget, Cristina Savin, Terrence Sejnowski, Eero Simoncelli, Sara Solla, David Sussillo, Andreas S. Tolias, and Doris Tsao. Catalyzing next-generation artificial intelligence through neuroai. _Nature Communications_, 14(1):1597, Mar 2023.
* Hubel and Wiesel [1962] David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. _The Journal of physiology_, 160(1):106, 1962. Publisher: Wiley-Blackwell.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* Tan and Le [2019] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In _International conference on machine learning_, pages 6105-6114. PMLR, 2019.
* Li et al. [2019] Zhe Li, Wieland Brendel, Edgar Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian Sinz, Zachary Pitkow, and Andreas Tolias. Learning from brains how to regularize machines. In _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* Dapello et al. [2020] Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David Cox, and James J DiCarlo. Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations. In _Advances in Neural Information Processing Systems_, volume 33, pages 13073-13087. Curran Associates, Inc., 2020.
* Reddy et al. [2020] Manish V. Reddy, Andrzej Banburski, Nishka Pant, and Tomaso Poggio. Biologically Inspired Mechanisms for Adversarial Robustness, June 2020. arXiv:2006.16427 [cs, stat].
* Allman et al. [1985] J. Allman, F. Miezin, and E. McGuinness. Stimulus specific responses from beyond the classical receptive field: neurophysiological mechanisms for local-global comparisons in visual neurons. _Annual Review of Neuroscience_, 8:407-430, 1985.
* Knierim and van Essen [1992] J. J. Knierim and D. C. van Essen. Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. _Journal of Neurophysiology_, 67(4):961-980, April 1992. Publisher: American Physiological Society.
* Walker et al. [1999] Gary A. Walker, Izumi Ohzawa, and Ralph D. Freeman. Asymmetric Suppression Outside the Classical Receptive Field of the Visual Cortex. _Journal of Neuroscience_, 19(23):10536-10553, December 1999. Publisher: Society for Neuroscience.
* Shen et al. [2007] Zhi-Ming Shen, Wei-Feng Xu, and Chao-Yi Li. Cue-invariant detection of centre-surround discontinuity by V1 neurons in awake macaque monkey. _The Journal of Physiology_, 583(Pt 2):581-592, September 2007.
* DeAngelis et al. [1994] Gregory C DeAngelis, RALPH D Freeman, and IZUMI Ohzawa. Length and width tuning of neurons in the cat's primary visual cortex. _Journal of neurophysiology_, 71(1):347-374, 1994.
* Sceniak et al. [1999] Michael P Sceniak, Dario L Ringach, Michael J Hawken, and Robert Shapley. Contrast's effect on spatial summation by macaque V1 neurons. _Nature neuroscience_, 2(8):733-739, 1999. Publisher: Nature Publishing Group.
* Sceniak et al. [2001] Michael P Sceniak, Michael J Hawken, and Robert Shapley. Visual spatial characterization of macaque V1 neurons. _Journal of neurophysiology_, 85(5):1873-1887, 2001. Publisher: American Physiological Society Bethesda, MD.
* Hasani et al. [2019] Hosein Hasani, Mahdieh Soleymani, and Hamid Aghajan. Surround Modulation: A Bio-inspired Connectivity Structure for Convolutional Neural Networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d' Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* Babaiee et al. [2021] Zahra Babaiee, Ramin Hasani, Mathias Lechner, Daniela Rus, and Radu Grosu. On-off center-surround receptive fields for accurate and robust image classification. In _International Conference on Machine Learning_, pages 478-489. PMLR, 2021.
* Koulakov and Chklovskii [2001] Alexei A Koulakov and Dmitri B Chklovskii. Orientation preference patterns in mammalian visual cortex: a wire length minimization approach. _Neuron_, 29(2):519-527, 2001. Publisher: Elsevier.
* [37] Hyodong Lee, Eshed Margalit, Ramila M. Jozwik, Michael A. Cohen, Nancy Kanwisher, Daniel L. K. Yamins, and James J. DiCarlo. Topographic deep artificial neural networks reproduce the hallmarks of the primate inferior temporal cortex face processing network. preprint, Neuroscience, July 2020.
* [38] Roman Pogodin, Yash Mehta, Timothy Lillicrap, and Peter E Latham. Towards Biologically Plausible Convolutional Networks. In _Advances in Neural Information Processing Systems_, volume 34, pages 13924-13936. Curran Associates, Inc., 2021.
* [39] Matteo Carandini and David J. Heeger. Normalization as a canonical neural computation. _Nature Reviews Neuroscience_, 13(1):51-62, January 2012. Number: 1 Publisher: Nature Publishing Group.
* [40] David J. Heeger and Klavdia O. Zemlianova. A recurrent circuit implements normalization, simulating the dynamics of V1 activity. _Proceedings of the National Academy of Sciences_, 117(36):22494-22505, September 2020. Publisher: Proceedings of the National Academy of Sciences.
* [41] Max F Burg, Santiago A Cadena, George H Denfield, Edgar Y Walker, Andreas S Tolias, Matthias Bethge, and Alexander S Ecker. Learning divisive normalization in primary visual cortex. _PLOS Computational Biology_, 17(6):e1009028, 2021. Publisher: Public Library of Science San Francisco, CA USA.
* [42] Andrew Cirincione, Reginald Verrier, Ariom Bic, Stephanie Olaiya, James J DiCarlo, Lawrence Udeigwe, and Tiago Marques. Implementing Divisive Normalization in CNNs Improves Robustness to Common Image Corruptions.
* [43] Nicole C. Rust, Valerio Mante, Eero P. Simoncelli, and J. Anthony Movshon. How MT cells analyze the motion of visual patterns. _Nature Neuroscience_, 9(11):1421-1431, November 2006. Number: 11 Publisher: Nature Publishing Group.
* [44] M Concetta Morrone, DC Burr, and Lamberto Maffei. Functional implications of cross-orientation inhibition of cortical visual cells. I. Neurophysiological evidence. _Proceedings of the Royal Society of London. Series B. Biological Sciences_, 216(1204):335-354, 1982. Publisher: The Royal Society London.
* [45] GC DeAngelis, JG Robson, I Ohzawa, and RD Freeman. Organization of suppression in receptive fields of neurons in cat visual cortex. _Journal of Neurophysiology_, 68(1):144-163, 1992.
* [46] PM Daniel and D Whitteridge. The representation of the visual field on the cerebral cortex in monkeys. _The Journal of physiology_, 159(2):203, 1961. Publisher: Wiley-Blackwell.
* [47] Eric L Schwartz. Spatial mapping in the primate sensory projection: analytic structure and relevance to perception. _Biological cybernetics_, 25(4):181-194, 1977. Publisher: Springer.
* [48] Eric L Schwartz. Computational anatomy and functional architecture of striate cortex: a spatial mapping approach to perceptual coding. _Vision research_, 20(8):645-669, 1980. Publisher: Elsevier.
* [49] Eric L Schwartz. Computational studies of the spatial architecture of primate visual cortex: columns, maps, and protomaps. _Primary visual cortex in primates_, pages 359-411, 1994. Publisher: Springer.
* [50] Jonathan R Polimeni, Mukund Balasubramanian, and Eric L Schwartz. Multi-area visuotopic map complexes in macaque striate and extra-striate cortex. _Vision research_, 46(20):3336-3359, 2006. Publisher: Elsevier.
* [51] Carlos Esteves, Christine Allen-Blanchette, Xiaowei Zhou, and Kostas Daniilidis. Polar Transformer Networks. _International Conference on Learning Representations_, 2018.
* [52] Joao F. Henriques and Andrea Vedaldi. Warped Convolutions: Efficient Invariance to Spatial Transformations, 2021. _eprint: 1609.04382.
* [53] Taro Kiritani and Koji Ono. Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks, 2020. _eprint: 2002.05388.
* [54] M.R. Blackburn. A Simple Computational Model of Center-Surround Receptive Fields in the Retina. Technical report. Section: Technical Reports.
* [55] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_, pages 248-255, June 2009. ISSN: 1063-6919.
* [56] Dan Hendrycks and Thomas Dietterich. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, March 2019. arXiv:1903.12261 [cs, stat].
* [57] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 586-595, 2018.
* [58] Amanda Ross and Victor L. Willson. _One-Way Anova_, pages 21-24. SensePublishers, Rotterdam, 2017.
* [59] Shahd Safarani, Arne Nix, Konstantin Willeke, Santiago Cadena, Kelli Restivo, George Denfield, Andreas Tolias, and Fabian Sinz. Towards robust vision by multi-task learning on monkey visual cortex. In _Advances in Neural Information Processing Systems_, volume 34, pages 739-751. Curran Associates, Inc., 2021.
* [60] Alexander Riedel. Bag of Tricks for Training Brain-Like Deep Neural Networks. March 2022.
* [61] Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! In _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* [62] Manish Reddy Vuyyuru, Andrzej Banburski, Nishka Pant, and Tomaso Poggio. Biologically inspired mechanisms for adversarial robustness. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 2135-2146. Curran Associates, Inc., 2020.
## Appendix A Supplemental Model Diagrams
Fig. A.1 depicts the modifications made to ResNet50 residual layer 1 in the isolated component analyses of section 4.1. All multi-component (composite) models analyzed in Section 4.2 relied on combinations of these modifications (as exemplified in Fig. 2).
Figure A.1: ResNet50 residual layer 1, supplemented with individual neuro-constrained architectural components, as in section 4.1. (A) No modification (baseline ResNet50 layer 1), (B) with center-surround antagonism, (C) with local receptive field (RF), (D) with divisive normalization, (E) with tuned normalization, (F) with cortical magnification.
V1 Scores of Alternate Layers of Baseline Network
When evaluating a model on Brain-Score, users are permitted to commit a mapping between model layers and areas of the ventral stream. Model-brain alignment is computed for each mapped pair in the Brain-Score evaluation. To promote a fair evaluation, we sought to find the layer that yielded optimal V1 alignment from the baseline ResNet50 model and fix this layer as the artificial V1 readout layer in all of our tested models. It is worth noting that after supplementing the base ResNet50 with neuro-constrained components, this layer may no longer offer optimal V1 alignment in the augmented network. In spite of this, we maintain this layer as our artificial V1 readout layer for fair evaluation.
To find the ResNet50 layer with the best V1 Overall, Predictivity, and Property scores, we compared a total of 20 different hidden layers (Fig. B.1). 16 of these layers corresponded to the post-activation hidden states of the network. The remaining 4 were downsampling layers of the first bottleneck block of each residual layer in the network, as these have previously demonstrated good V1 alignment [25]. Aside from these downsampling layers, hidden layers that did not follow a ReLU activation were omitted from this evaluation as the activities of these states can take on negative values and are therefore less interpretable as neural activities. Among all evaluated layers, the final output of ResNet50 residual layer 1 (i.e., the output of the third residual block of ResNet50) offered the highest V1 Overall score, and was therefore selected as the artificial V1 readout layer in all of our experiments.
Figure B.1: V1 alignment Brain-Scores for 20 different hidden layers of ResNet50. In the plot above, readout location ‘X.Y’ denotes that artificial V1 activity was evaluated from residual block ‘Y’ of residual layer ‘X’. Readout location suffixed with ‘.d’ correspond to downsampling layers of the associated residual bottleneck. Highest V1 overall score came from block 3 of residual layer 1.
Expanded Model Tuning Properties
Primary visual cortex (V1) tuning property alignments for each composite model evaluated in Section 4.2 are presented in Table 5. Tuning property similarities are computed as coiled Kolmogorov-Smirnov distance between artificial neural response distributions from the model and empirical distributions recorded in primates [16; 17].
## Appendix D V1 Brain-Scores of Untrained Models
## Appendix E V2, V4, and IT Brain-Scores of Top Model
Table 7 shows the Brain-Scores of our top performing V1 model (the adversarially trained ResNet50 with all architectural components) for brain areas V2, V4, and IT. Network layers were mapped to visual areas V2, V4, and IT by finding the layers that achieve the best scores on these visual area benchmarks, as evaluated on Brain-Score's publicly available evaluation set.
## Appendix E
\begin{table}
\begin{tabular}{l r r r} & V1 Overall & V1 Predictivity & V1 Property \\ \hline Center-surround antagonism & \(.298\) & \(.245\) & \(.551\) \\ Local receptive fields & \(.477\) & \(.210\) & \(.743\) \\ Divisive normalization & \(.499\) & \(.207\) & \(.792\) \\ Tuned normalization & \(.471\) & \(.218\) & \(.724\) \\ Cortical magnification & \(.497\) & \(.276\) & \(.718\) \\ All Components & \(.483\) & \(.225\) & \(.741\) \\ ResNet50 & \(.466\) & \(.223\) & \(.710\) \\ \end{tabular}
\end{table}
Table 6: primary visual cortex (V1) alignment scores of untrained ResNet50 model variants.
\begin{table}
\begin{tabular}{l r r|r r r r r r r r} & Visual Area Brain-Score & V2 & V4 & IT \\ \hline & \(.298\) & \(.245\) & \(.551\) \\ \end{tabular}
\end{table}
Table 7: V2, V4, and IT Brain-Scores of adversarially trained ResNet50 with all architectural components.
\begin{table}
\begin{tabular}{l r r r|r r r r r r r} & Visual Area Brain-Score & V2 & V4 & IT \\ \hline & & \(.298\) & \(.245\) & \(.551\) \\ \end{tabular}
\end{table}
Table 7: V2, V4, and IT Brain-Scores of adversarially trained ResNet50 with all architectural components.
\begin{table}
\begin{tabular}{l r r r|r r r r r r r} & Visual Area Brain-Score & V2 & V4 & IT \\ \hline & & \(.298\) & \(.245\) & \(.551\) \\ \end{tabular}
\end{table}
Table 8: V2, V4, and IT Brain-Scores of adversarially trained ResNet50 with all architectural components.
\begin{table}
\begin{tabular}{l r r r|r r r r r r r} & Visual Area Brain-Score & V2 & V4 & IT \\ \hline & & \(.298\) & \(.245\) & \(.551\) \\ \end{tabular}
\end{table}
Table 9: V2, V4, and IT Brain-Scores of adversarially trained ResNet50 with all architectural components.
Supplemental VisualizationsAdversarial Training
The neuro-constrained ResNets discussed in Section 4.5 were trained using the "Free" adversarial training method proposed by Shafahi _et al._[61]. In Projected Gradient Descent (PGD)-based adversarial training (a typical approach to adversarially training robust classifiers), a network is trained on adversarial samples that are generated on the fly during training. Specifically, in PGD-based adversarial training, a batch of adversarial images is first generated through a series of iterative perturbations to an original image batch, at which point the parameters of the network are finally updated according to the network's loss, as evaluated on the adversarial examples. "Free" adversarial training generates adversarial training images with a similar approach, but the parameters of the network are simultaneously updated with every iteration of image perturbation, significantly reducing training time. The authors refer to these mini-batch updates as "replays", and refer to the number of replays of each mini-batch with the parameter \(m\).
The adversarially trained models of Section 4.5 were trained with \(m=4\) replays and perturbation clipping of \(\epsilon=\frac{2}{255}\). These models were trained for \(120\) epochs using a stochastic gradient descent optimizer with an initial learning rate of \(0.1\), which was reduced by a factor of \(10\) every \(40\) epochs, momentum of \(0.9\), and weight decay of \(1\times 10^{-5}\). Each model was initialized with the weights that were learned during traditional ImageNet training for the analyses in Section 4.2. "Free" adversarial training was performed using code provided by the authors of this method ([https://github.com/mahyarnajibi/FreeAdversarialTraining](https://github.com/mahyarnajibi/FreeAdversarialTraining)).
## Appendix H Robustness to Common Image Corruptions
### Dataset Description
We evaluated image classification robustness to common image corruptions using the Tiny-ImageNet-C dataset [56]. Recall that Tiny-ImageNet-C was used instead of ImageNet-C, because our models were trained on \(64\times 64\) input images. Downscaling ImageNet-C images would have potentially altered the intended corruptions and biased our evaluations.
Tiny-ImageNet-C is among a collection of corrupted datasets (e.g., ImageNet-C, CIFAR-10-C, CIFAR-100-C) that feature a diverse set of corruptions to typical benchmark datasets. Hendrycks and Dietterich [56] suggest that given the diversity of corruptions featured in these datasets, performance on these datasets can be seen as a general indicator of model robustness. The Tiny-ImageNet-C evaluation dataset consists of images from that Tiny-ImageNet validation dataset that have been corrupted according to \(15\) types of image corruption, each of which is categorized as a 'noise', 'blur', 'weather', or 'digital' corruption. The \(15\) corruption types include: Gaussian noise, shot noise, impulse noise, defocus blur, frosted glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic transformation, pixelation, and JPEG compression. Each corruption is depicted in Fig. H.1. Every image of this evaluation dataset is also corrupted at five levels of severity (the higher the corruption severity, the more the original image had been corrupted). Corruption severities for Gaussian noise are exemplified in Fig. H.2.
Figure H.1: \(15\) corruptions of the Tiny-ImageNet-C dataset, applied to a sample image from Tiny-ImageNet-C. First row: noise corruptions, second row: blur corruptions, third row: weather corruptions, bottom row: digital corruptions. All corruptions shown at severity level 3.
Figure H.2: Gaussian noise corruption, shown at corruption severity levels 1-5.
### Corrupted Image Robustness
A detailed breakdown of Tiny-ImageNet-C image classification accuracy for each single-component, neuro-constrained ResNet-50 and the composite models that achieved top V1 Overall score without adversarial training are provided in Tables 8, 9, and 10.
## Appendix I Code Availability
Code and materials required to reproduce the work presented in this paper are available at github.com/bionicvisionlab/2023-Pogoncheff-Explaining-V1-Properties.
\begin{table}
\begin{tabular}{l r r r r r} \multicolumn{5}{c}{Noise Corruptions} \\ & Gaussian Noise & Impulse Noise & Shot Noise & Avg. \\ \hline ResNet50 (Baseline) & \(\mathbf{.197}\pm.011\) & \(.191\pm.010\) & \(\mathbf{.232}\pm.013\) & \(\mathbf{.207}\pm.011\) \\ Center-surround antagonism & \(.195\pm.010\) & \(.186\pm.009\) & \(\mathbf{.232}\pm.012\) & \(.204\pm.010\) \\ Local Receptive Fields & \(.185\pm.006\) & \(.184\pm 009\) & \(.219\pm.010\) & \(.196\pm.008\) \\ Tuned Normalization & \(.195\pm.008\) & \(\mathbf{.192}\pm.004\) & \(.228\pm.007\) & \(.205\pm.006\) \\ Cortical Magnification & \(.150\pm.008\) & \(.157\pm.007\) & \(.180\pm.011\) & \(.162\pm.008\) \\ Composite Model A & \(.151\) & \(.156\) & \(.184\) & \(.164\) \\ Composite Model B & \(.144\) & \(.149\) & \(.177\) & \(.157\) \\ & & & & \\ \multicolumn{5}{c}{Blur Corruptions} \\ & Defocus Blur & Glass Blur & Motion Blur & Zoom Blur & Avg. \\ \hline ResNet50 (Baseline) & \(.224\pm.003\) & \(.182\pm.001\) & \(.272\pm.003\) & \(.241\pm.004\) & \(.230\pm.002\) \\ Center-surround antagonism & \(.223\pm.009\) & \(.184\pm.004\) & \(.274\pm.012\) & \(.243\pm.011\) & \(.231\pm.009\) \\ Local Receptive Fields & \(.228\pm.006\) & \(.183\pm.004\) & \(.273\pm.005\) & \(.243\pm.008\) & \(.232\pm.005\) \\ Tuned Normalization & \(\mathbf{.234}\pm.009\) & \(\mathbf{.188}\pm.002\) & \(\mathbf{.277}\pm.009\) & \(\mathbf{.248}\pm.010\) & \(\mathbf{.237}\pm.007\) \\ Cortical Magnification & \(.174\pm.010\) & \(.162\pm.008\) & \(.222\pm.007\) & \(.190\pm.006\) & \(.187\pm.008\) \\ Composite Model A & \(.186\) & \(.167\) & \(.236\) & \(.200\) & \(.197\) \\ Composite Model B & \(.196\) & \(.174\) & \(.249\) & \(.222\) & \(.210\) \\ & & & & & \\ \multicolumn{5}{c}{Weather Corruptions} \\ & Brightness & Fog & Frost & Snow & Avg. \\ \hline ResNet50 (Baseline) & \(.401\pm.005\) & \(\mathbf{.282}\pm.003\) & \(.360\pm.006\) & \(.310\pm.004\) & \(.338\pm.004\) \\ Center-surround antagonism & \(.399\pm.008\) & \(.270\pm.008\) & \(.357\pm.012\) & \(.302\pm.003\) & \(.332\pm.007\) \\ Local Receptive Fields & \(.398\pm.008\) & \(.275\pm.005\) & \(.351\pm.006\) & \(.298\pm.004\) & \(.331\pm.003\) \\ Tuned Normalization & \(\mathbf{.410}\pm.008\) & \(\mathbf{.282}\pm.011\) & \(\mathbf{.361}\pm.006\) & \(\mathbf{.311}\pm.010\) & \(\mathbf{.341}\pm.008\) \\ Cortical Magnification & \(.327\pm.011\) & \(.211\pm.013\) & \(.283\pm.014\) & \(.248\pm.010\) & \(.267\pm.011\) \\ Composite Model A & \(.338\) & \(.220\) & \(.286\) & \(.258\) & \(.275\) \\ Composite Model B & \(.327\) & \(.225\) & \(.284\) & \(.255\) & \(.273\) \\ & & & & & \\ \multicolumn{5}{c}{Digital Corruptions} \\ & Contrast & Elastic & JPEG & Pixelate & Avg. \\ \hline ResNet50 (Baseline) & \(.125\pm.001\) & \(.331\pm.007\) & \(.454\pm.007\) & \(.374\pm.003\) & \(.321\pm.003\) \\ Center-surround antagonism & \(.122\pm.002\) & \(.331\pm.014\) & \(.455\pm.007\) & \(.374\pm.004\) & \(.321\pm.006\) \\ Local Receptive Fields & \(.120\pm.004\) & \(.329\pm.003\) & \(.457\pm.005\) & \(.375\pm.002\) & \(.320\pm.001\) \\ Tuned Normalization & \(\mathbf{.128}\pm.008\) & \(\mathbf{.342}\pm.010\) & \(\mathbf{.463}\pm.006\) & \(\mathbf{.387}\pm.006\) & \(\mathbf{.330}\pm.007\) \\ Cortical Magnification & \(.082\pm.005\) & \(.287\pm.007\) & \(.374\pm.013\) & \(.287\pm.014\) & \(.257\pm.010\) \\ Composite Model A & \(.081\) & \(.305\) & \(.397\) & \(.303\) & \(.272\) \\ Composite Model B & \(.086\) & \(.314\) & \(.383\) & \(.293\) & \(.269\) \\ \end{tabular}
\end{table}
Table 10: Corrupted image classification accuracy by corruption type. Composite Model A includes all 4 neuro-constrained architectural components (center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification). Composite Model B contained all architectural components, with the exception of center-surround antagonism. For baseline and single-component models, mean accuracies (\(\pm\) one standard deviation) are reported, where each trial was associated with a distinct base model from the repeated trials of section 4.1. | ## Review
### Summary
This paper proposes the incorporation of biologically inspired components, including center-surround receptive fields, local receptive fields, tuned divisive normalization, and cortical magnification, into deep convolutional networks (DCNs) to enhance alignment with V1 neural responses. The authors conduct a systematic ablation analysis to evaluate the contributions of these components, ultimately demonstrating that their approach achieves significant improvements in Brain-Score V1 alignment, particularly with the addition of adversarial training. However, the study also notes a reduction in task performance on ImageNet, which raises questions about the trade-offs involved in enhancing V1 alignment. Overall, the work fits well within the current neuroscience and AI intersection, addressing important questions about model design and performance.
### Strengths
- The paper provides a thorough discussion about biologically inspired computations in V1, integrating them into deep neural networks as modules with learnable components.
- The systematic evaluation of individual contributions of architectural components is well done, enhancing understanding of their roles in capturing V1 properties.
- The approach of combining multiple V1 components in one model is novel and addresses a high-interest topic for both the CS and neuroscience communities.
- The paper presents interesting findings about cortical magnification's role in improving alignment, adding value to the current literature.
### Weaknesses
- The novelty of biologically inspired components in neural networks is somewhat diminished as similar efforts have been made previously, notably in works like VOneNet; a clearer distinction from prior art is needed.
- Clarity issues exist regarding the organization of background and methods, making it difficult to follow the integration of previous work.
- The paper lacks detailed discussion on why specific components yield varying effects on alignment, which is crucial for understanding their contributions.
- The significant drop in ImageNet classification performance raises concerns; the authors should explore the reasons behind this trade-off more comprehensively.
- Some results show only small improvements in Brain-Score, leading to questions about the significance of claims made regarding performance enhancements.
### Questions
- What sets this work apart from previous attempts at integrating biological components into neural networks, especially in context to VOneNet?
- Could the authors clarify the greedy backwards elimination process used in their analysis?
- Is there a theoretical framework explaining why certain architectural features enhance V1 alignment while others do not?
- Can the authors include results for other brain areas (V2, V4, IT) to facilitate comparisons with existing models?
- What are the implications of downsampling ImageNet images to 64x64, and could training on larger images improve accuracy?
### Soundness
**Score:** 3
**Description:** 3 = good: The paper presents a technically sound approach with systematic evaluations, though it lacks some critical discussions and clarity on results.
### Presentation
**Score:** 3
**Description:** 3 = good: The overall presentation is clear, but there are areas needing restructuring for better clarity and flow.
### Contribution
**Score:** 3
**Description:** 3 = good: The paper makes a meaningful contribution to the field by exploring the integration of biological components in DCNs, despite some limitations in novelty.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and presents moderate-to-high impact findings, but it requires further exploration of certain weaknesses.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents original research within a timely and significant area, demonstrating soundness and capturing the interest of the neuroscience and AI communities. Despite some clarity issues and a notable drop in classification performance, the strengths and contributions outweigh the weaknesses, justifying an acceptance with minor revisions.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Boosting with Tempered Exponential Measures
Richard Nock
Google Research
[email protected] &Ehsan Amid
Google DeepMind
[email protected] &Manfred K. Warmuth
Google Research
[email protected]
###### Abstract
One of the most popular ML algorithms, AdaBoost, can be derived from the dual of a relative entropy minimization problem subject to the fact that the positive weights on the examples sum to one. Essentially, harder examples receive higher probabilities. We generalize this setup to the recently introduced _tempered exponential measures_ (tems) where normalization is enforced on a specific power of the measure and not the measure itself. tems are indexed by a parameter \(t\) and generalize exponential families (\(t=1\)). Our algorithm, \(t\)-AdaBoost, recovers AdaBoost as a special case (\(t=1\)). We show that \(t\)-AdaBoost retains AdaBoost's celebrated exponential convergence rate on margins when \(t\in[0,1)\) while allowing a slight improvement of the rate's hidden constant compared to \(t=1\). \(t\)-AdaBoost partially computes on a generalization of classical arithmetic over the reals and brings notable properties like guaranteed bounded leveraging coefficients for \(t\in[0,1)\). From the loss that \(t\)-AdaBoost minimizes (a generalization of the exponential loss), we show how to derive a new family of _tempered_ losses for the induction of domain-partitioning classifiers like decision trees. Crucially, strict properness is ensured for all while their boosting rates span the full known spectrum of boosting rates. Experiments using \(t\)-AdaBoost+trees display that significant leverage can be achieved by tuning \(t\).
## 1 Introduction
AdaBoost is one of the most popular ML algorithms [8, 30]. It efficiently aggregates weak hypotheses into a highly accurate linear combination [10]. The common motivations of boosting algorithms focus on choosing good linear weights (the leveraging coefficients) for combining the weak hypotheses. A dual view of boosting highlights the dual parameters, which are the weights on the examples. These weights define a distribution, and AdaBoost can be viewed as minimizing a relative entropy to the last distribution subject to a linear constraint introduced by the current hypothesis [12]. For this reason (more in SS 2), AdaBoost's weights define an exponential family.
**In this paper**, we go beyond weighing the examples with a discrete exponential family distribution, relaxing the constraint that the total mass be unit but instead requiring it for the measure's \(1/(2-t)\)'th power, where \(t\) is a temperature parameter. Such measures, called _tempered exponential measures_ (tems), have been recently introduced [4]. Here we apply the discrete version of these tems for deriving a novel boosting algorithm called \(t\)-AdaBoost. Again the measures are solutions to a relative entropy minimization problem, but the relative entropy is built from Tsallis entropy and "tempered" by a parameter \(t\). As \(t\to 1\)tems become standard exponential family distributions and our new algorithm merges into AdaBoost. As much as AdaBoost minimizes the exponential loss, \(t\)-AdaBoost minimizes a generalization of this loss we denote as the _tempered exponential loss_.
tems were introduced in the context of clustering, where they were shown to improve the robustness to outliers of clustering's population minimizers [4]. They have also been shown to bring low-level sparsity features to optimal transport [3]. Boosting is a high-precision machinery: AdaBoost is known to achieve near-optimal boosting rates under the weak learning assumption [1], but it haslong been known that numerical issues can derail it, in particular, because of the unbounded weight update rule [14]. So the question of what the tem setting can bring for boosting is of primordial importance. As we show, \(t\)-AdaBoost can suffer no rate setback as boosting's exponential rate of convergence on _margins_ can be preserved for all \(t\in[0,1)\). Several interesting features emerge: the weight update becomes bounded, margin optimization can be _tuned_ with \(t\) to focus on examples with very low margin and besides linear separators, the algorithm can also learn _progressively_ clipped models1. Finally, the weight update makes appear a new regime whereby weights can "switch off and on": an example's weight can become zero if too well classified by the current linear separator, and later on revert to non-zero if badly classified by a next iterate. \(t\)-AdaBoost makes use of a generalization of classical arithmetic over the reals introduced decades ago [18].
Footnote 1: Traditionally, clipping a sum is done after it has been fully computed. In our case, it is clipped after each new summand is added.
Boosting algorithms for linear models like AdaBoost bring more than just learning good linear separators: it is known that (ada)boosting linear models can be used to emulate the training of _decision trees_ (DT) [16], which are models known to lead to some of the best of-the-shelf classifiers when linearly combined [9]. Unsurprisingly, the algorithm obtained emulates the classical top-down induction of a tree found in major packages like CART [6] and C4.5 [23]. The _loss_ equivalently minimized, which is, _e.g._, Matusita's loss for AdaBoost[30, Section 4.1], is a lot more consequential. Contrary to losses for real-valued classification, losses to train DTs rely on the estimates of the posterior learned by the model; they are usually called _losses for Class Probability Estimation_ (CPE [25]). The CPE loss is crucial to elicit because (i) it is possible to check whether it is "good" from the standpoint of properness (Bayes rule is optimal for the loss [28]), and (ii) it conditions boosting rates, only a handful of them being known, for the most popular CPE losses [11; 22; 31].
**In this paper**, we show that this emulation scheme on \(t\)-AdaBoost provides a new family of CPE losses with remarkable constancy with respect to properness: losses are _strictly_ proper (Bayes rule is the _sole_ optimum) for any \(t\in(-\infty,2)\) and proper for \(t=-\infty\). Furthermore, over the range \(t\in[-\infty,1]\), the range of boosting rates spans the full spectrum of known boosting rates [11].
We provide experiments displaying the boosting ability of \(t\)-AdaBoost over a range of \(t\) encompassing potentially more than the set of values covered by our theory, and highlight the potential of using \(t\) as a parameter for efficient tuning the loss [25, Section 8]. Due to a lack of space, proofs are relegated to the appendix (APP). A primer on tems is also given in APP., Section I.
## 2 Related work
Boosting refers to the ability of an algorithm to combine the outputs of moderately accurate, "weak" hypotheses into a highly accurate, "strong" ensemble. Originally, boosting was introduced in the context of Valiant's PAC learning model as a way to circumvent the then-existing amount of related negative results [10; 34]. After the first formal proof that boosting is indeed achievable [29], AdaBoost became the first practical and proof-checked boosting algorithm [8; 30]. Boosting was thus born in a machine learning context, but later on, it also emerged in statistics as a way to learn from class residuals computed using the gradient of the loss [9; 21], resulting this time in a flurry of computationally efficient algorithms, still called boosting algorithms, but for which the connection with the original weak/strong learning framework is in general not known.
Our paper draws its boosting connections with AdaBoost's formal lineage. AdaBoost has spurred a long line of work alongside different directions, including statistical consistency [5], noise handling [15; 16], low-resource optimization [22], _etc_. The starting point of our work is a fascinating result in convex optimization establishing a duality between the algorithm and its memory of past iteration's performances, a probability distribution of so-called _weights_ over examples [12]. From this standpoint, AdaBoost solves the dual of the optimization of a Bregman divergence (constructed from the negative Shannon entropy as the generator) between weights subject to zero correlation with the last weak classifier's performance. As a consequence, weights define an exponential family. Indeed, whenever a relative entropy is minimized subject to linear constraints, then the solution is a member of an exponential family of distributions (see _e.g._[2, Section 2.8.1] for an axiomatization of exponential families). AdaBoost's distribution on the examples is a member of a discrete exponential family where the training examples are the finite support of the distribution, sufficient statistics are defined from the weak learners, and the leveraging coefficients are the natural parameters. In summary, there is an intimate relationship between boosting a-la-AdaBoost, exponential families, and Bregman divergences [7; 12; 20] and our work "elevates" these methods above exponential families.
## 3 Definitions
We define the \(t\)-logarithm and \(t\)-exponential [17; Chapter 7],
\[\log_{t}(z)\doteq\frac{1}{1-t}\cdot\left(z^{1-t}-1\right)\quad,\quad\exp_{t}(z )\doteq[1+(1-t)z]_{+}^{1/(1-t)}\quad([z]_{+}\doteq\max\{0,z\}), \tag{1}\]
where the case \(t=1\) is supposed to be the extension by continuity to the \(\log\) and \(\exp\) functions, respectively. To preserve the concavity of \(\log_{t}\) and the convexity of \(\exp_{t}\), we need \(t\geq 0\). In the general case, we also note the asymmetry of the composition: while \(\exp_{t}\log_{t}(z)=z,\forall t\in\mathbb{R}\), we have \(\log_{t}\exp_{t}(z)=z\) for \(t=1\) (\(\forall z\in\mathbb{R}\)), but
\[\log_{t}\exp_{t}(z)=\max\left\{-\frac{1}{1-t},z\right\}\quad(t<1)\quad\mathrm{ and}\quad\log_{t}\exp_{t}(z)=\min\left\{\frac{1}{t-1},z\right\}\quad(t>1).\]
Comparisons between vectors and real-valued functions written on vectors are assumed component-wise. We assume \(t\neq 2\) and define notation \(t\)* \(\doteq 1/(2-t)\). We now define the key set in which we model our weights (boldfaces denote vector notation).
**Definition 3.1**.: _The co-simplex of \(\mathbb{R}^{m}\), \(\tilde{\Delta}_{m}\) is defined as \(\tilde{\Delta}_{m}\doteq\{\mathbf{q}\in\mathbb{R}^{m}:\mathbf{q}\geq\mathbf{0}\wedge\mathbf{1}^ {\top}\mathbf{q}^{1/t\text{*}}=1\}\)._
The letters \(\mathbf{q}\) will be used to denote tems in \(\tilde{\Delta}_{m}\) while \(\mathbf{p}\) denote the co-density \(\mathbf{q}^{\frac{1}{t\text{*}}}\) or any element of the probability simplex. We define the general tempered relative entropy as
\[D_{t}(\mathbf{q}^{\prime}\|\mathbf{q}) = \sum_{i\in[m]}q^{\prime}_{i}\cdot\left(\log_{t}q^{\prime}_{i}-\log _{t}q_{i}\right)-\log_{t-1}q^{\prime}_{i}+\log_{t-1}q_{i}, \tag{2}\]
where \([m]\doteq\{1,...,m\}\). The tempered relative entropy is a Bregman divergence with convex generator \(\varphi_{t}(z)\doteq z\log_{t}z-\log_{t-1}(z)\) (for \(t\in\mathbb{R}\)) and \(\varphi_{t}(z)^{\prime}=\log_{t}(x)\). As \(t\to 1\), \(D_{t}(\mathbf{q},\mathbf{q}^{\prime})\) becomes the relative entropy with generator \(\varphi_{1}(x)=x\log(x)-x\).
## 4 Tempered boosting as tempered entropy projection
We start with a fixed sample \(\mathcal{S}=\{(\mathbf{x}_{i},y_{i}):i\in[m]\}\) where observations \(\mathbf{x}_{i}\) lie in some domain \(\mathcal{X}\) and labels \(y_{i}\) are \(\pm 1\). AdaBoost maintains a distribution \(\mathbf{p}\) over the sample. At the current iteration, this distribution is updated based on a current _weak hypothesis_\(h\in\mathbb{R}^{\mathcal{X}}\) using an exponential update:
\[p^{\prime}_{i}=\frac{p_{i}\cdot\exp(-\mu u_{i})}{\sum_{k}p_{k}\cdot\exp(-\mu u _{k})},\;\text{where}\;u_{i}\doteq y_{i}h(\mathbf{x}_{i}),\mu\in\mathbb{R}.\]
In [12] this update is motivated as minimizing a relative entropy subject to the constraint that \(\mathbf{p}^{\prime}\) is a distribution summing to 1 and \(\mathbf{p}^{\prime\top}\mathbf{u}=0\). Following this blueprint, we create a boosting algorithm maintaining a discrete tem over the sample which is motivated as a constrained minimization of the tempered relative entropy, with a normalization constraint on the co-simplex of \(\mathbb{R}^{m}\):
\[\mathbf{q}^{\prime} \doteq \arg\min_{\mathbf{\tilde{q}}\in\tilde{\Delta}_{m}}\quad D_{t}(\mathbf{ \tilde{q}}\|\mathbf{q}),\quad\text{ with }\mathbf{u}\in\mathbb{R}^{m}.\] \[\mathbf{\tilde{q}}^{\top}\mathbf{u}=0\]
We now show that the solution \(\mathbf{q}^{\prime}\) is a tempered generalization of AdaBoost's exponential update.
**Theorem 1**.: _For all \(t\in\mathbb{R}\backslash\{2\}\), all solutions to (3) have the form_
\[q^{\prime}_{i}=\frac{\exp_{t}(\log_{t}q_{i}-\mu u_{i})}{Z_{t}}\quad\left(- \frac{q_{i}\otimes_{t}\exp_{t}(-\mu u_{i})}{Z_{t}},\;\text{with}\;a\otimes_{t}b \doteq[a^{1-t}+b^{1-t}-1]_{+}^{\frac{1}{1-t}}\right), \tag{4}\]
_where \(Z_{t}\) ensures co-simplex normalization of the co-density. Furthermore, the unknown \(\mu\) satisfies_
\[\mu\in\arg\max-\log_{t}(Z_{t}(\mu))\quad(=\arg\min Z_{t}(\mu)), \tag{5}\]or equivalently is a solution to the nonlinear equation_
\[\mathbf{q}^{\prime}(\mu)^{\top}\mathbf{u} = 0. \tag{6}\]
_Finally, if either (i) \(t\in\mathbb{R}_{>0}\backslash\{2\}\) or (ii) \(t=0\) and \(\mathbf{q}\) is not collinear to \(\mathbf{u}\), then \(Z_{t}(\mu)\) is strictly convex: the solution to (3) is thus unique, and can be found from expression (4) by finding the unique minimizer of (5) or (equivalently) the unique solution to (6)._
(Proof in APP., Section II.1) The \(t\)-product \(\otimes_{t}\), which satisfies \(\exp_{t}(a+b)=\exp_{t}(a)\otimes_{t}\exp_{t}(b)\), was introduced in [18]. Collinearity never happens in our ML setting because \(\mathbf{u}\) contains the edges of a weak classifier: \(\mathbf{q}>0\) and collinearity would imply that \(\pm\) the weak classifier performs perfect classification, and thus defeats the purpose of training an ensemble. \(\forall t\in\mathbb{R}\backslash\{2\}\), we have the simplified expression for the normalization coefficient of the tem and the co-density \(\mathbf{p}^{\prime}\) of \(\mathbf{q}^{\prime}\):
\[Z_{t}=\left\|\exp_{t}\left(\log_{t}\mathbf{q}-\mu\cdot\mathbf{u}\right)\right\|_{1/t \ast}\ ;\ \ p^{\prime}_{i}=\frac{p_{i}\otimes_{t\ast}\exp_{t\ast}\left(- \frac{\mu u_{i}}{t^{\ast}}\right)}{Z^{\prime}_{t}}\ \ \Big{(}\ \text{with}\ Z^{\prime}_{t}\doteq Z^{1/t\ast}_{t}\Big{)}. \tag{7}\]
## 5 Tempered boosting for linear classifiers and clipped linear classifiers
ModelsA model (or classifier) is an element of \(\mathbb{R}^{\mathcal{X}}\). For any model \(H\), its empirical risk over \(\mathcal{S}\) is \(F_{\nicefrac{{0}}{{1}}}(H,\mathcal{S})\doteq(1/m)\cdot\sum_{i}[y_{i}\neq\mathrm{ sign}(H(\mathbf{x}_{i}))]\) where \([.]\), Iverson's bracket [13], is the Boolean value of the inner predicate. We learn linear separators and _clipped_ linear separators. Let \((v_{j})_{j\geq 1}\) be the terms of a series and \(\delta\geq 0\). The clipped sum of the series is:
\[\begin{array}{rcl}\stackrel{{(\delta)}}{{(-\delta)}}\!\!\!\! \sum_{j\in[J]}v_{j}&=&\min\left\{\delta,\max\left\{-\delta,v_{J}+\begin{array}{c} \stackrel{{(\delta)}}{{(-\delta)}}\!\!\!\!\sum_{j\in[J-1]}v_{j} \\ \end{array}\right\}\right\}&\quad(\in[-\delta,\delta]),\ \text{for}\ J>1,\end{array}\]
and we define the base case \(J=1\) by replacing the inner clipped sum with 0. Note that clipped summation is non-commutative, and so is different from clipping in \([-\delta,\delta]\) the whole sum itself2. Given a set of so-called weak hypotheses \(h_{j}\in\mathbb{R}^{\mathcal{X}}\) and leveraging coefficients \(\alpha_{j}\in\mathbb{R}\) (for \(j\in[J]\)), the corresponding linear separators and clipped linear separators are
Footnote 2: Fix for example \(a=-1,b=3,\delta=2\). For \(v_{1}=a,v_{2}=b\), the clipped sum is \(2=-1+3\), but for \(v_{1}=b,v_{2}=a\), the clipped sum becomes \(1=\mathbf{2}-1\).
\[H_{J}(\mathbf{x})\doteq\sum_{j\in[J]}\alpha_{j}h_{j}(\mathbf{x})\quad;\quad H^{(\delta )}_{J}(\mathbf{x})\doteq\begin{array}{c}\stackrel{{(\delta)}}{{(- \delta)}}\!\!\!\sum_{j\in[J]}\alpha_{j}h_{j}(\mathbf{x}).\end{array} \tag{9}\]
Tampered boosting and its general convergenceOur algorithm, \(t\)-AdaBoost, is presented in Algorithm 1, using presentation conventions from [30]. Before analyzing its convergence, several properties are to be noted for \(t\)-AdaBoost: first, it keeps the appealing property, introduced by AdaBoost, that examples receiving the wrong class by the current weak classifier are reweightedhigher (if \(\mu_{j}>0\)). Second, the leveraging coefficients for weak classifiers in the final classifier (\(\alpha_{j}\)s) are not the same as the ones used to update the weights (\(\mu_{j}\)s), unless \(t=1\). Third and last, because of the definition of \(\exp_{t}\) (1), if \(t<1\), tempered weights can switch off and on, _i.e._, become 0 if an example is "too well classified" and then revert back to being \(>0\) if the example becomes wrongly classified by the current weak classifier (if \(\mu_{j}>0\)). To take into account those zeroing weights, we denote \([m]_{j}^{\dagger}\doteq\{i:q_{ji}=0\}\) and \(m_{j}^{\dagger}\doteq\mathrm{Card}([m]_{j}^{\dagger})\) (\(\forall j\in[J]\) and \(\mathrm{Card}\) denotes the cardinal). Let \(R_{j}\doteq\max_{i\notin[m]_{j}^{\dagger}}|y_{i}h_{j}(\boldsymbol{x}_{i})|/q_{ ji}^{1-t}\) and \(q_{j}^{\dagger}\doteq\max_{i\in[m]_{j}^{\dagger}}|y_{i}h_{j}(\boldsymbol{x}_{i}) |^{1/(1-t)}/R_{j}^{1/(1-t)}\). It is worth noting that \(q_{j}^{\dagger}\) is homogeneous to a tempered weight.
**Theorem 2**.: _At iteration \(j\), define the weight function \(q_{ji}^{\prime}=q_{ji}\) if \(i\notin[m]_{j}^{\dagger}\) and \(q_{j}^{\dagger}\) otherwise; set_
\[\rho_{j} \doteq \frac{1}{(1+{m_{j}^{\dagger}}{q_{j}^{\dagger}}^{2-t})R_{j}}\cdot \sum_{i\in[m]}q_{ji}^{\prime}y_{i}h_{j}(\boldsymbol{x}_{i})\quad(\in[-1,1]). \tag{10}\]
_In algorithm \(t\)-AdaBoost, consider the choices (with the convention \(\prod_{k=1}^{0}v_{k}\doteq 1\))_
\[\mu_{j}\doteq-\frac{1}{R_{j}}\cdot\log_{t}\left(\frac{1-\rho_{j}}{M_{1-t}(1- \rho_{j},1+\rho_{j})}\right)\quad,\quad\alpha_{j}\doteq{m^{1-t}}^{*}\cdot\left( \prod_{k=1}^{j-1}Z_{k}\right)^{1-t}\cdot\mu_{j}, \tag{11}\]
_where \(M_{q}(a,b)\doteq((a^{q}+b^{q})/2)^{1/q}\) is the \(q\)-power mean. Then for any \(H\in\{H_{J},H_{J}^{(\nicefrac{{1}}{{1-t}})}\}\), its empirical risk is upperbounded as:_
\[F_{\nicefrac{{0}}{{1}}}(H,\mathcal{S})\leqslant\prod_{j=1}^{J}Z_{tj}^{2-t} \leqslant\prod_{j=1}^{J}\left(1+{m_{j}^{\dagger}}{q_{j}^{\dagger}}^{2-t} \right)\cdot K_{t}(\rho_{j})\quad\left(K_{t}(z)\doteq\frac{1-z^{2}}{M_{1-t}(1- z,1+z)}\right). \tag{12}\]
(Proof in APP., Section II.2) We jointly comment \(t\)-AdaBoost and Theorem 2 in two parts.
**Case \(t\to 1^{-}\):**\(t\)-AdaBoost converges to AdaBoost and Theorem 2 to its convergence analysis: \(t\)-AdaBoost converges to AdaBoost as presented in [30, Figure 1]: the tempered simplex becomes the probability simplex, \(\otimes_{t}\) converges to regular multiplication, weight update (8) becomes AdaBoost's, \(\alpha_{j}\rightarrow\mu_{j}\) in (11) and finally the expression of \(\mu_{j}\) converges to AdaBoost's leveraging coefficient in [30] (\(\lim_{t\to 1}M_{1-t}(a,b)=\sqrt{ab}\)). Even guarantee (12) converges to AdaBoost's popular guarantee of [30, Corollary 1] (\(\lim_{t\to 1}K_{t}(z)=\sqrt{1-z^{2}}\), \(m_{j}^{\dagger}=0\)). Also, in this case, we learn only the unclipped classifier since \(\lim_{t\to 1^{-}}H_{J}^{(\nicefrac{{1}}{{1-t}})}=H_{J}\).
**Case \(t<1\):** Let us first comment on the convergence rate. The proof of Theorem 2 shows that \(K_{t}(z)\leqslant\exp(-z^{2}/(2t^{*}))\). Suppose there is no weight switching, so \(m_{j}^{\dagger}=0,\forall j\) (see Section 7) and, as in the boosting model, suppose there exists \(\gamma>0\) such that \(|\rho_{j}|\geqslant\gamma,\forall j\). Then \(t\)-AdaBoost is guaranteed to attain empirical risk below some \(\varepsilon>0\) after a number of iterations equal to \(J=(2t^{*}/\gamma^{2})\cdot\log(1/\varepsilon)\). \(t^{*}\) being an increasing function of \(t\in[0,1]\), we see that \(t\)-AdaBoost is able to slightly improve upon AdaBoost's celebrated rate [32]. However, \(t^{*}=1/2\) for \(t=0\) so the improvement is just on the hidden constant. This analysis is suited for small values of \(|\rho_{j}|\) and does not reveal an interesting phenomenon for better weak hypotheses. Figure 1 compares \(K_{t}(z)\) curves (\(K_{1}(z)\doteq\lim_{t\to 1}K_{t}(z)=\sqrt{1-z^{2}}\) for AdaBoost, see [30, Corollary 1]), showing the case \(t<1\) can be substantially better, especially when weak hypotheses are not "too weak". If \(m_{j}^{\dagger}>0\), switching weights can impede our convergence _analysis_, though exponential convergence is always possible if \({m_{j}^{\dagger}}{q_{j}^{\dagger}}^{2-t}\) is small enough; also, when it is not, we may in fact have converged to a good model (see APP., Remark 1). A good criterion to train weak hypotheses is then the optimization of the edge \(\rho_{j}\), thus using \(\boldsymbol{q}_{j}^{\prime}\) normalized in the simplex. Other key features of \(t\)-AdaBoost are as follows. First, the weight update and leveraging coefficients of weak classifiers are bounded because \(|\mu_{j}|<1/(R_{j}(1-t))\) (APP., Lemma H) (this is not the case for \(t\to 1^{-}\)). This guarantees that new weights are bounded before normalization (unlike for \(t\to 1^{-}\)). Second, we remark that \(\mu_{j}\neq\alpha_{j}\) if \(t\neq 1\). Factor \(m^{1-t^{*}}\) is added for convergence analysis purposes; we can discard it to train the unclipped classifier: it does not change its empirical risk. This is, however, different for factor \(\prod_{k=1}^{j-1}Z_{k}\): from (12), we conclude that this is an indication of how well the past ensemble performs.
phenomenon that does not occur in boosting, where an excellent weak hypothesis on the current weights can have a leveraging coefficient so large that it wipes out the classification of the past ones. This can be useful to control numerical instabilities.
Extension to marginsA key property of boosting algorithms like AdaBoost is to be able to boost not just the empirical risk but more generally _margins_[19, 30], where a margin integrates both the accuracy of label prediction but also the confidence in prediction (say \(|H|\)). We generalize the margin notion of [19] to the tempered arithmetic and let \(\nu_{t}((\boldsymbol{x},y),H)\doteq\tanh_{t}(yH(\boldsymbol{x})/2)\) denote the margin of \(H\) on example \((\boldsymbol{x},y)\), where \(\tanh_{t}(z)\doteq(1-\exp_{t}(-2z))/(1+\exp_{t}(-2z))(\in[-1,1])\) is the tempered hyperbolic tangent. The objective of minimizing the empirical risk is generalized to minimizing the margin risk, \(F_{t,\theta}(H,\mathcal{S})\doteq(1/m)\cdot\sum_{i}[\nu_{t}((\boldsymbol{x}_{ i},y_{i}),H)\leq\theta]\), where \(\theta\in(-1,1)\). Guarantees on the empirical risk are guarantees on the margin risk for \(\theta=0\) only. In just a few steps, we can generalize Theorem 2 to _all_\(\theta\in(-1,1)\). For space reason, we state the core part of the generalization, from which extending it to a generalization of Theorem 2 is simple.
**Theorem 3**.: _For any \(\theta\in(-1,1)\) and \(t\in[0,1]\), the guarantee of algorithm \(t\)-AdaBoost in Theorem 2 extends to the margin risk, with notations from Theorem 2, via:_
\[F_{t,\theta}(H,\mathcal{S}) \leq \left(\frac{1+\theta}{1-\theta}\right)^{2-t}\prod_{j=1}^{J}Z_{tj} ^{2-t}. \tag{13}\]
(Proof in APP., Section II.3) At a high level, \(t\)-AdaBoost brings similar margin maximization properties as AdaBoost. Digging a bit in (13) reveals an interesting phenomenon for \(t\neq 1\) on how margins are optimized compared to \(t=1\). Pick \(\theta<0\), so we focus on those examples for which the classifier \(H\) has high confidence in its _wrong_ classification. In this case, factor \(((1+\theta)/(1-\theta))^{2-t}\) is increasing as a function of \(t\in[0,1]\) (and this pattern is reversed for \(\theta>0\)). In words, the smaller we pick \(t\in[0,1]\) and the better is the bound in (13), suggesting increased "focus" of \(t\)-AdaBoost on increasing the margins of examples _with low negative margin_ (_e.g._ the most difficult ones) compared to the case \(t=1\).
The tempered exponential lossIn the same way as AdaBoost introduced the now famous exponential loss, (12) recommends to minimize the normalization coefficient, following (7),
\[Z_{tj}^{2-t}(\mu) = \left\|\exp_{t}\left(\log_{t}\boldsymbol{q}_{j}-\mu\cdot \boldsymbol{u}_{j}\right)\right\|_{1/t^{\theta}}^{1/t^{\theta}}\quad\left( \text{with }u_{ji}\doteq y_{i}h_{j}(\boldsymbol{x}_{i})\right). \tag{14}\]
We cannot easily unravel the normalization coefficient to make appear an equivalent generalization of the exponential loss, unless we make several assumptions, one being \(\max_{i}|h_{j}(\boldsymbol{x}_{i})|\) is small enough for any \(j\in[J]\). In this case, we end up with an equivalent criterion to minimize which looks like
\[F_{t}(H,\mathcal{S}) = \frac{1}{m}\cdot\sum_{i}\exp_{t}^{2-t}\left(-y_{i}H(\boldsymbol{x }_{i})\right), \tag{15}\]
Figure 1: Plot of \(K_{t}(z)\) in (12), \(t\in[0,1]\) (the smaller, the better for convergence).
where we have absorbed in \(H\) the factor \(m^{1-t^{\mathsf{R}}}\) appearing in the \(\exp_{t}\) (scaling \(H\) by a positive value does not change its empirical risk). This defines a generalization of the exponential loss which we call the _tempered exponential loss_. Notice that one can choose to minimize \(F_{t}(H,\mathcal{S})\) disregarding any constraint on \(|H|\).
## 6 A broad family of boosting-compliant proper losses for decision trees
Losses for class probability estimationWhen it comes to tabular data, it has long been known that some of the best models to linearly combine with boosting are decision trees (DT, [9]). Decision trees, like other domain-partitioning classifiers, are not trained by minimizing a _surrogate loss_ defined over real-valued predictions, but defined over _class probability estimation_ (CPE, [26]), those estimators being posterior estimation computed at the leaves. Let us introduce a few definitions for those. A CPE loss \(\ell:\{-1,1\}\times[0,1]\rightarrow\mathbb{R}\) is
\[\ell(y,u) \doteq \llbracket y=1\rrbracket\cdot\ell_{1}(u)+\llbracket y=-1 \rrbracket\cdot\ell_{-1}(u). \tag{16}\]
Functions \(\ell_{1},\ell_{-1}\) are called _partial_ losses. The pointwise conditional risk of local guess \(u\in[0,1]\) with respect to a ground truth \(v\in[0,1]\) is:
\[\mathsf{L}\left(u,v\right) \doteq v\cdot\ell_{1}(u)+(1-v)\cdot\ell_{-1}(u). \tag{17}\]
A loss is _proper_ iff for any ground truth \(v\in[0,1]\), \(\mathsf{L}\left(v,v\right)=\inf_{u}\mathsf{L}\left(u,v\right)\), and strictly proper iff \(u=v\) is the sole minimizer [26]. The (pointwise) _Bayes_ risk is \(\underline{L}(v)\doteq\inf_{u}\mathsf{L}\left(u,v\right)\). The log/cross-entropy-loss, square-loss, Matusita loss are examples of CPE losses. One then trains a DT minimizing the expectation of this loss over leaves' posteriors, \(\mathbb{E}_{\lambda}[\underline{L}(p_{\lambda})]\), \(p_{\lambda}\) being the local proportion of positive examples at leaf \(\lambda\) - or equivalently, the local posterior.
Deriving CPE losses from (ada)boostingRecently, it was shown how to derive in a general way a CPE loss to train a DT from the minimization of a surrogate loss with a boosting algorithm [16]. In our case, the surrogate would be \(\tilde{Z}_{tj}\) (14) and the boosting algorithm, \(t\)-AdaBoost. The principle is simple and fits in four steps: (i) show that a DT can equivalently perform simple linear classifications, (ii) use a weak learner that designs splits and the boosting algorithm to fit the leveraging coefficient and compute those in closed form, (iii) simplify the expression of the loss using those, (iv) show that the expression simplified is, in fact, a CPE loss. To get (i), we remark that a DT contains a tree (graph). One can associate to each node a real value. To classify an observation, we sum all reals from the root to a leaf and decide on the class based on the sign of the prediction, just like for any real-valued predictor. Suppose we are at a leaf. What kind of weak hypotheses can create splits "in disguise"? Those can be of the form
\[h_{j}(\mathbf{x}) \doteq \llbracket x_{k}\geq a_{j}\rrbracket\cdot b_{j},\quad a_{j},b_{j }\in\mathbb{R},\]
where the observation variable \(x_{k}\) is assumed real valued for simplicity and the test \(\llbracket x_{k}\geq a_{j}\rrbracket\) splits the leaf's domain in two non-empty subsets. This creates half of the split. \(\overline{h}_{j}(\mathbf{x})\doteq\llbracket x_{k}<a_{j}\rrbracket\cdot-b_{j}\) creates the other half of the split. Remarkably \(h_{j}\) satisfies the weak learning assumption iff \(\overline{h}_{j}\) does [16]. So we get the split design part of (ii). We compute the leveraging coefficients at the new leaves from the surrogate's minimization / boosting algorithm, end up with new real predictions at the new leaves (instead of the original \(b_{j},-b_{j}\)), push those predictions in the surrogate loss for (iii), simplify it and, quite remarkably end up with a loss of the form \(\mathbb{E}_{\lambda}[\mathsf{L}(p_{\lambda})]\), where \(\mathsf{L}\) turns out to be the pointwise Bayes risk \(\underline{L}\) of a proper loss [16] (notation from [26]).
In the case of [16], it is, in fact, granted that we end up with such a "nice" CPE loss because of the choice of the surrogates at the start. In our case, however, nothing grants this _a priori_ if we start from the tempered exponential loss \(\tilde{Z}_{tj}\) (14) so it is legitimate to wonder whether such a chain of derivations (summarized) can happen to reverse engineer an interesting CPE loss:
\[\tilde{Z}_{tj}\stackrel{{?}}{{\mapsto}}\mathsf{L}\stackrel{{?}}{{\mapsto}}\mathsf{L}^{(t)}\stackrel{{?}}{{\mapsto}} \ell_{1}^{(t)};\ell_{-1}^{(t)}\quad(\text{proper? strictly proper? for which $t$s?,...)} \tag{18}\]
When such a complete derivation happens until the partial losses \(\ell_{1};\ell_{-1}\) and their properties, we shall write that minimizing \(\tilde{Z}_{tj}\)_elicits_ the corresponding loss and partial losses.
**Theorem 4**.: _Minimizing \(\tilde{Z}_{tj}\) elicits the CPE loss we define as the **tempered loss**, with partial losses:_
\[\ell_{1}^{(t)}(u)\doteq\left(\frac{1-u}{M_{1-t}(u,1-u)}\right)^{2-t}\quad,\quad \ell_{-1}^{(t)}(u)\doteq\ell_{1}^{(t)}(1-u),\quad(t\in[-\infty,2]). \tag{19}\]
_The tempered loss is symmetric, differentiable, strictly proper for \(t\in(-\infty,2)\) and proper for \(t=-\infty\)._Differentiability means the partial losses are differentiable, and symmetry follows from the relationship between partial losses [20] (the proof, in APP., Section II.4, derives the infinite case, \(\ell_{1}^{(-\infty)}(u)=2\cdot\left[u\leqslant 1/2\right]\)) Let us explicit the Bayes risk of the tempered loss and a key property.
**Lemma 1**.: _The Bayes risk of the tempered loss is (\(M_{q}\) defined in Theorem 2):_
\[\underline{L}^{(t)}(u) = \frac{2u(1-u)}{M_{1-t}(u,1-u)}, \tag{20}\]
_and it satisfies \(\forall u\in[0,1],\forall z\in[2\cdot\min\{u,1-u\},1]\), \(2t\in[-\infty,2]\) such that \(\underline{L}^{(t)}(u)=z\)._
Lemma 1, whose proof is trivial, allows to show a key boosting result: \(t=1\) retrieves Matusita's loss, for which a near-optimal boosting rate is known [11] while \(t=-\infty\) retrieves the empirical risk, which yields the worst possible guarantee [11]. In between, we have, for example, CART's Gini criterion for \(t=0\), which yields an intermediate boosting guarantee. Continuity with respect to \(t\) of the Bayes risks in between the empirical risk and Matusita's loss means the boosting ranges of the tempered loss cover _the full known spectrum of boosting rates_ for \(t\in[-\infty,1]\). We know of no (differentiable and) proper CPE loss with such coverage. Note that (i) this is a non-constructive result as we do not associate a specific \(t\) for a specific rate, and (ii) the state-of-the-art boosting rates for DT induction do not seem to cover the case \(t\in(1,2)\), thus left as an open question.
## 7 Experiments
We have performed experiments on a testbed of 10 UCI domains, whose details are given in APP. (Section A3). Experiments were carried out using a 10-fold stratified cross-validation procedure.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & \multicolumn{1}{c}{perr (not clipped)} & \multicolumn{1}{c}{perr (clipped)} & \multicolumn{1}{c}{min weight} & \multicolumn{1}{c}{max weight} \\ \hline & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table 1: Experiments on \(t\)-AdaBoost comparing with AdaBoost (\(t=1\), bullets) on three domains (rows), displaying from left to right the estimated true error of non-clipped and clipped models, and the min and max codensity weights. These domains were chosen to give an example of three different situations: small values of \(t\) perform well (abalone), the best performance is achieved by the largest \(t\) (_e.g._ AdaBoost, qsar), and the worst performance is achieved by the largest \(t\) (adult). Topmost row is without noise (\(\eta=0\)) while the others are with \(10\%\) training noise; \(t\) scale displayed with varying color and width (colormap indicated on each plot). Averages shown for readability: see Table 2 for exhaustive statistical tests.
To compare \(t\)-AdaBoost with AdaBoost, we ran \(t\)-AdaBoost with a first range of values of \(t\in\{0.0,0.2,0.4,0.6,0.8,0.9\}\). This is in the range of values covered by our convergence result for linear separators in Theorem 2. Our results on decision tree induction cover a much wider range, in particular for \(t\in(1,2)\). To assess whether this can be an interesting range to study, we added \(t=1.1\) to the set of tested \(t\) values. When \(t>1\), some extra care is to be put into computations because the weight update becomes unbounded, in a way that is worse than AdaBoost. Indeed, as can be seen from (8), if \(\mu_{j}y_{i}h_{j}(\mathbf{x}_{i})\leq-1/(t-1)\) (the example is badly classified by the current weak hypothesis, assuming wlog \(\mu_{j}>0\)), the weight becomes infinity before renormalization. In our experiments, picking a value of \(t\) close to \(2\) clearly shows this problem, so to be able to still explore whether \(t>1\) can be useful, we picked a value close to \(1\), namely \(t=1.1\), and checked that in our experiments this produced no such numerical issue. We also considered training clipped and not clipped models.
All boosting models were trained for a number of \(J=20\) decision trees (The appendix provides experiments on training bigger sets). Each decision tree is induced using the tempered loss with the corresponding value of \(t\) (see Theorem 4) following the classical top-down template, which consists in growing the current heaviest leaf in the tree and picking the best split for the leaf chosen. We implemented \(t\)-AdaBoost exactly as in Section 5, including computing leveraging coefficients as suggested. Thus, we do not scale models. More details are provided in the appendix. In our experiments, we also included experiments on a phenomenon highlighted more than a decade ago [15] and fine-tuned more recently [16], the fact that a convex booster's model is the weakest link when it has to deal with noise in training data. This is an important task because while the tempered exponential loss is convex, it does not fit into the blueprint loss of [15, Definition 1] because it is not \(C^{1}\) if \(t\neq 1\). One might thus wonder how \(t\)-AdaBoost behaves when training data is affected by noise. Letting \(\eta\) denote the proportion of noisy data in the training sample, we tried \(\eta\in\{0.0,0.1\}\) (The appendix provides experiments on more noise levels). We follow the noise model of [15] and thus independently flip the true label of each example with probability \(\eta\).
For each run, we recorded the average test error and the average maximum and minimum co-density weight. Table 1 presents a subset of the results obtained on three domains. Table 2 presents a more synthetic view in terms of statistical significance of the results for \(t\neq 1\) vs. \(t=1\) (AdaBoost). The table reports only results for \(t\geq 0.6\) for synthesis purposes. Values \(t<0.6\) performed on average slightly worse than the others _but_ on some domains, as the example of abalone suggests in Table 2 (the plots include all values of \(t\) tested in \([0,1.1]\)), we clearly got above-par results for such small values of \(t\), both in terms of final test error but also fast early convergence to low test error. This comment can be generalized to all values of \(t\).
The weights reveal interesting patterns as well. First, perhaps surprisingly, we never encountered the case where weights switch off, regardless of the value of \(t\). The average minimum weight curves of Table 1 generalize to all our tests (see the appendix). This does not rule out the fact that boosting for a much longer number of iterations might lead to weights switching off/on, but the fact that this does not happen at least early during boosting probably comes from the fact that the leveraging coefficients for weights (\(\mu\).) are bounded. Furthermore, their maximal absolute value is all the smaller as \(t\) decreases to 0. Second, there is a pattern that also repeats on the maximum weights, not on all domains but on a large majority of them and can be seen in abalone and adult in Table 1: the maximum weight of AdaBoost tends to increase much more rapidly compared to \(t\)-AdaBoost with \(t<1\). In the latter case, we almost systematically observe that the maximum weight tends to be upperbounded, which is not the case for AdaBoost (the growth of the maximal weight looks almost linear). Having bounded weights could be of help to handle numerical issues of (ada)boosting [14].
Our experiments certainly confirm the boosting nature of \(t\)-AdaBoost if we compare its convergence to that of AdaBoost: more often than not, it is in fact comparable to that of AdaBoost. While this applies broadly for \(t\geq 0.6\), we observed examples where much smaller values (even \(t=0.0\)) could yield such fast convergence. Importantly, this applies to clipped models as well and it is important to notice because it means attaining a low "boosted" error does not come at the price of learning models with large range. This is an interesting property: for \(t=0.0\), we would be guaranteed that the computation of the clipped prediction is always in \([-1,1]\). Generalizing our comment on small values of \(t\) above, we observed that an efficient tuning algorithm for \(t\) could be able to get very substantial leverage over AdaBoost. Table 2 was crafted for a standard limit \(p\)-val of 0.1 and "blurs" the best results that can be obtained. On several domains (winered, abalone, eeg, creditcard, adult), applicable \(p\)-values for which we would conclude that some \(t\neq 1\) performs better than \(t=1\) drop in between \(7E-4\) and \(0.05\). Unsurprisingly, AdaBoost also manages to beat significantly alternative values of \(t\) in several cases. Our experiments with training noise (\(\eta=0.1\)) go in the same direction. Looking at Table 1, one could eventually be tempted to conclude that \(t\) slightly smaller than 1.0 may be a better choice than adaboosting (\(t=1\)), as suggested by our results for \(t=0.9\), but we do not think this produces a general "rule-of-thumb". There is also no apparent "noise-dependent" pattern that would obviously separate the cases \(t<1\) from \(t=1\) even when the tempered exponential loss does not fit to [15]'s theory. Finally, looking at the results for \(t>1\) also yields the same basic conclusions, which suggests that boosting can be attainable outside the range covered by our theory (in particular Theorem 2).
All this brings us to the experimental conclusion that the question does not reside on opposing the case \(t\neq 1\) to the case \(t=1\). Rather, our experiments suggest - pretty much like our theory does - that the actual question resides in how to efficiently _learn_\(t\) on a domain-dependent basis. Our experiments indeed demonstrate that substantial gains could be obtained, to handle overfitting or noise.
## 8 Discussion, limitations and conclusion
AdaBoost is one of the original and simplest Boosting algorithms. In this paper we generalized AdaBoost to maintaining a tempered measure over the examples by minimizing a tempered relative entropy. We kept the setup as simple as possible and therefore focused on generalizing AdaBoost. However more advanced boosting algorithms have been designed based on relative entropy minimization subject to linear constraints. There are versions that constrain the edges of all past hypotheses to be zero [36]. Also, when the maximum margin of the game is larger than zero, then AdaBoost cycles over non-optimal solutions [27]. Later Boosting algorithms provably optimize the margin of the solution by adjusting the constraint value on the dual edge away from zero (see e.g. [24]). Finally, the ELRP-Boost algorithm optimizes a trade off between relative entropy and the edge [35]. We conjecture that all of these orthogonal direction have generalizations to the tempered case as well and are worth exploring.
These are theoretical directions that, if successful, would contribute to bring more tools to the design of rigorous boosting algorithms. This is important because boosting suffers several impediments, not all of which we have mentioned: for example, to get statistical consistency for AdaBoost, it is known that early stopping is mandatory [5]. More generally, non-Lipschitz losses like the exponential loss seem to be harder to handle compared to Lipschitz losses [33] (but they yield in general better convergence rates). The validity of the weak learning assumption of boosting can also be discussed, in particular regarding the negative result of [15] which advocates, beyond just better (ada)boosting, for boosting for _more_ classes of models / architectures [16]. Alongside this direction, we feel that our experiments on noise handling give a preliminary account of the fact that there is no "one \(t\) fits all" case, but a much more in depth analysis is required to elicit / tune a "good" \(t\). This is a crucial issue for noise handling [16], but as we explain in Section 7, this could bring benefits in much wider contexts as well.
\begin{table}
\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(\eta\) & \multicolumn{8}{c|}{\(0.0\)} & \multicolumn{8}{c|}{\(0.1\)} \\ \(t\) & \(0.6\) & \(0.8\) & \(0.9\) & \(1.1\) & \(0.6\) & \(0.8\) & \(0.9\) & \(1.1\) & \(0.6\) & \(0.8\) & \(0.9\) & \(1.1\) \\
[clipped] & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ \hline \hline \(\#\)better & 2 & 3 & 1 & 2 & 1 & 3 & & & 1 & 1 & 1 & 2 & 2 & 1 & & & \\ \hline \(\#\)equivalent & 5 & 5 & 6 & 6 & 7 & 7 & 6 & 7 & 4 & 8 & 8 & 7 & 8 & 9 & 8 & 10 \\ \hline \(\#\)worse & 3 & 2 & 3 & 2 & 2 & & 4 & 3 & 5 & 1 & 1 & 1 & & & 2 & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Outcomes of student paired \(t\)-tests over 10 UCI domains, with training noise \(\eta\in\{0.0,0.1\}\), for \(t\in\{0.6,0.8,0.9,1.0,1.1\}\) and with / without clipped models. For each triple (\(\eta\), \(t\), [clipped]), we give the number of domains for which the corresponding setting of \(t\)-AdaBoost is statistically better than AdaBoost(\(\#\)better), the number for which it is statistically worse (\(\#\)worse) and the number for which we cannot reject the assumption of identical performances. Threshold \(p-\)val = 0.1.
## Acknowledgments
The authors thank the reviewers for numerous comments that helped improving the paper's content.
## References
* [1] N. Alon, A. Gonen, E. Hazan, and S. Moran. Boosting simple learners. In _STOC'21_, 2021.
* [2] S.-I. Amari. _Information Geometry and Its Applications_. Springer-Verlag, Berlin, 2016.
* [3] E. Amid, F. Nielsen, R. Nock, and M.-K. Warmuth. Optimal transport with tempered exponential measures. _CoRR_, abs/2309.04015, 2023.
* [4] E. Amid, R. Nock, and M.-K. Warmuth. Clustering above exponential families with tempered exponential measures. In _26\({}^{th}\) AISTATS_, 2023.
* [5] P. Bartlett and M. Traskin. Adaboost is consistent. In _NIPS*19_, 2006.
* [6] L. Breiman, J. H. Freidman, R. A. Olshen, and C. J. Stone. _Classification and regression trees_. Wadsworth, 1984.
* [7] M. Collins, R. Schapire, and Y. Singer. Logistic regression, adaboost and Bregman distances. In _Proc. of the 13 \({}^{th}\) International Conference on Computational Learning Theory_, pages 158-169, 2000.
* [8] Y. Freund and R. E. Schapire. A Decision-Theoretic generalization of on-line learning and an application to Boosting. _J. Comp. Syst. Sc._, 55:119-139, 1997.
* [9] J. Friedman, T. Hastie, and R. Tibshirani. Additive Logistic Regression : a Statistical View of Boosting. _Ann. of Stat._, 28:337-374, 2000.
* [10] M.J. Kearns. Thoughts on hypothesis boosting, 1988. ML class project.
* [11] M.J. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. In _Proc. of the 28 \({}^{th}\) ACM STOC_, pages 459-468, 1996.
* [12] J. Kivinen and M.-K. Warmuth. Boosting as entropy projection. In _COLT'99_, pages 134-144, 1999.
* [13] D.-E. Knuth. Two notes on notation. _The American Mathematical Monthly_, 99(5):403-422, 1992.
* [14] R. Kohavi. Improving accuracy by voting classification algorithms: Boosting, bagging, and variants. In _Workshop on Computation-Intensive Machine Learning Techniques_, 1998.
* [15] P.-M. Long and R.-A. Servedio. Random classification noise defeats all convex potential boosters. _MLJ_, 78(3):287-304, 2010.
* [16] Y. Mansour, R. Nock, and R.-C. Williamson. Random classification noise does not defeat all convex potential boosters irrespective of model choice. In _40\({}^{th}\) ICML_, 2023.
* [17] J. Naudts. _Generalized thermostatistics_. Springer, 2011.
* [18] L. Nivanen, A. Le Mehaute, and Q.-A. Wang. Generalized algebra within a nonextensive statistics. _Reports on Mathematical Physics_, 52:437-444, 2003.
* [19] R. Nock and F. Nielsen. A Real Generalization of discrete AdaBoost. _Artificial Intelligence_, 171:25-41, 2007.
* [20] R. Nock and F. Nielsen. On the efficient minimization of classification-calibrated surrogates. In _NIPS*21_, pages 1201-1208, 2008.
* [21] R. Nock and F. Nielsen. The phylogenetic tree of Boosting has a bushy carriage but a single trunk. _PNAS_, 117:8692-8693, 2020.
* [22] R. Nock and R.-C. Williamson. Lossless or quantized boosting with integer arithmetic. In _36\({}^{th}\) ICML_, pages 4829-4838, 2019.
* [23] J. R. Quinlan. _C4.5 : programs for machine learning_. Morgan Kaufmann, 1993.
* [24] G. Ratsch and M.-K. Warmuth. Efficient margin maximizing with boosting. _JMLR_, pages 2131-2152, december 2005.
* [25] M.-D. Reid and R.-C. Williamson. Composite binary losses. _JMLR_, 11:2387-2422, 2010.
* [26] M.-D. Reid and R.-C. Williamson. Information, divergence and risk for binary experiments. _JMLR_, 12:731-817, 2011.
* [27] C. Rudin, I. Daubechies, and R.-E. Schapire. Dynamics of adaboost: cyclic behavior and convergence of margins. _JMLR_, pages 1557-1595, December 2004.
* [28] L.-J. Savage. Elicitation of personal probabilities and expectations. _J. of the Am. Stat. Assoc._, pages 783-801, 1971.
* [29] R. E. Schapire. The strength of weak learnability. _MLJ_, pages 197-227, 1990.
* [30] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. _MLJ_, 37:297-336, 1999.
* [31] T. Sypherd, R. Nock, and L. Sankar. Being properly improper. In _39\({}^{th}\) ICML_, 2022.
* [32] M. Telgarsky. A primal-dual convergence analysis of boosting. _JMLR_, 13:561-606, 2012.
* [33] M. Telgarsky. Boosting with the logistic loss is consistent. In _26 \({}^{th}\) COLT_, pages 911-965, 2013.
* [34] L. G. Valiant. A theory of the learnable. _Communications of the ACM_, 27:1134-1142, 1984.
* [35] M.-K. Warmuth, K.-A. Glocer, and S.-V.-N. Vishwanathan. Entropy regularized LPBoost. In _Algorithmic Learning Theory_, pages 256-271. Springer Berlin Heidelberg, 2008.
* [36] M.-K. Warmuth, J. Liao, and G. Ratsch. Totally corrective boosting algorithms that maximize the margin. In _icml '06: proceedings of the 23rd international conference on machine learning_, pages 1001-1008, 2006.
**Appendix**
**Abstract**
This is the Appendix to Paper "Boosting with Tempered Exponential Measures". To differentiate with the numberings in the main file, the numbering of Theorems, Lemmata, Definitions is letter-based (A, B,...).
## Table of contents
**A short primer on Tempered Exponential Measures**
**Supplementary material on proofs**
\(\leftrightarrow\) Proof of Theorem 1
\(\leftrightarrow\) Proof of Theorem 2
\(\leftrightarrow\) Proof of Theorem 3
\(\leftrightarrow\) Proof of Theorem 4
\(\leftrightarrow\)
**Supplementary material on experiments**
Pg 15
\(\leftrightarrow\) Proof of Theorem 1
\(\leftrightarrow\) Proof of Theorem 4
Pg 32A short primer on Tempered Exponential Measures
We describe here the minimal amount of material necessary to understand how our approach to boosting connects to these measures. We refer to [4] for more details. With a slight abuse of notation, we define the perspective transforms \((\log_{t})^{*}(z)\doteq t^{*}.\log_{t*}(z/t^{*})\) and \((\exp_{t})^{*}(z)=t^{*}.\exp_{t*}(z/t^{*})\). Recall that \(t^{*}\doteq 1/(2-t)\).
**Definition A**.: _[_4_]_ _A tempered exponential measure (tem) family is a set of unnormalized densities in which each element admits the following canonical expression:_
\[q_{t|\boldsymbol{\theta}}(\boldsymbol{x})\doteq\frac{\exp_{t}( \boldsymbol{\theta}^{\top}\boldsymbol{\varphi}(\boldsymbol{x}))}{\exp_{t}(G_{t }(\boldsymbol{\theta}))}=\exp_{t}(\boldsymbol{\theta}^{\top}\boldsymbol{ \varphi}(\boldsymbol{x})\ominus_{t}G_{t}(\boldsymbol{\theta}))\quad\left(a \ominus_{t}b\doteq\frac{a-b}{1+(1-t)b}\right), \tag{21}\]
_where \(\boldsymbol{\theta}\) is the element's natural parameter, \(\boldsymbol{\varphi}(\boldsymbol{x})\) is the sufficient statistics and_
\[G_{t}(\boldsymbol{\theta}) = (\log_{t})^{*}\int(\exp_{t})^{*}(\boldsymbol{\theta}^{\top} \boldsymbol{\varphi}(\boldsymbol{x}))\mathrm{d}\xi\]
_is the (convex) cumulant, \(\xi\) being a base measure (implicit)._
Except for \(t=1\) (which reduces a tem family to a classical exponential family), the total mass of a tem is not 1 (but it has an elegant closed form expression [4]). However, the exponentiated \(q_{t|\boldsymbol{\theta}}^{1/t^{*}}\) does sum to 1. In the discrete case, this justifies extending the classical simplex to what we denote as the co-simplex.
**Definition B**.: _The co-simplex of \(\mathbb{R}^{m}\), \(\tilde{\Delta}_{m}\) is defined as \(\tilde{\Delta}_{m}\doteq\{\boldsymbol{q}\in\mathbb{R}^{m}:\boldsymbol{q}\geq \boldsymbol{0}\wedge\boldsymbol{1}^{\top}\boldsymbol{q}^{1/t^{*}}=1\}\)._
The connection between \(t\)-AdaBoost's update and tem's is immediate from the equation's update ((4) in mf). We can show that \(\tilde{\Delta}_{m}\) can also be represented as tems.
**Lemma A**.: \(\tilde{\Delta}_{m}\) _is a (discrete) family of tempered exponential measures._
Proof.: We proceed as in [2, Section 2.2.2] for exponential families: let \(\boldsymbol{q}\in\tilde{\Delta}_{m}\), which we write
\[q(n) \doteq \sum_{i\in[m]}q_{i}\cdot\llbracket i=n\rrbracket,n\in[m]. \tag{22}\]
\(\llbracket\pi\rrbracket\), the Iverson bracket [13], takes value 1 if Boolean predicate \(\pi\) is true (and 0 otherwise). We create \(m-1\) natural parameters and the cumulant,
\[\theta_{i}\doteq\log_{t}\frac{q_{i}}{q_{m}},i\in[m-1]\quad;\quad G_{t}( \boldsymbol{\theta})\doteq\log_{t}\frac{1}{q_{m}},\]
and end up with (22) also matching the atom mass function
\[q(n) = \frac{\exp_{t}\left(\sum_{i\in[m-1]}\theta_{i}\cdot\llbracket i=n \rrbracket\right)}{\exp_{t}G_{t}(\boldsymbol{\theta})},\]
which clearly defines a tempered exponential measure over \([m]\). This ends the proof of Lemma A.
Supplementary material on proofs
### Proof of Theorem 1
To improve readability, we remove dependency in \(t\) in normalization coefficient \(Z\). We use notations from [4, proof of Theorem 3.2] and denote the Lagrangian
\[\mathcal{L} = \Delta(\tilde{\mathbf{q}}\|\mathbf{q})+\lambda\left(\sum_{i}\tilde{q}_{i}^ {1/t^{\mathbf{*}}}-1\right)-\sum_{i}\nu_{i}\tilde{q}_{i}+\mu\sum_{i}\tilde{q}_{i}u _{i}, \tag{23}\]
which yields \(\partial\mathcal{L}/\partial\tilde{q}_{i}=\log_{t}\tilde{q}_{i}-\log_{t}q_{i} +\lambda\tilde{q}_{i}^{1-t}-\nu_{i}+\mu u_{i}\) (\(\lambda\) absorbs factor \(2-t\)), and, rearranging (absorbing factor \(1-t\) in \(\nu_{i}\)),
\[(1+(1-t)\lambda)\tilde{q}_{i}^{1-t} = \nu_{i}+1+(1-t)(\log_{t}q_{i}-\mu u_{i}),\forall i\in[m]. \tag{24}\]
We see that \(\lambda\neq-1/(1-t)\) otherwise the Lagrangian drops its dependence in the unknown. In fact, the solution necessarily has \(1+(1-t)\lambda>0\). To see this, we distinguish two cases: (i) if some \(u_{k}=0\), then since \(\log_{t}q_{k}\geq-1/(1-t)\) there would be no solution to (24) if \(1+(1-t)\lambda<0\) because of the KKT conditions \(\nu_{i}\geq 0,\forall i\in[m]\); (ii) otherwise, if all \(u_{k}\neq 0,\forall k\in[m]\), then there must be two coordinates of different signs otherwise there is no solution to our problem (3) (main file, we must have indeed \(\tilde{\mathbf{q}}\geq 0\) because of the co-simplex constraint). Thus, there exists at least one coordinate \(k\in[m]\) for which \(-(1-t)\mu u_{k}>0\) and since \(\log_{t}q_{k}\geq-1/(1-t)\) (definition of \(\log_{t}\)) and \(\nu_{k}\geq 0\) (KKT conditions), the RHS of (24) for \(i=k\) is \(>0\), preventing \(1+(1-t)\lambda<0\) in the LHS.
We thus have \(1+(1-t)\lambda>0\). The KKT conditions \((\nu_{i}\geq 0,\nu_{i}\tilde{q}_{i}=0,\forall i\in[m])\) yield the following: \(1+(1-t)(\log_{t}q_{i}-\mu u_{i})>0\) imply \(\nu_{i}=0\) and \(1+(1-t)(\log_{t}q_{i}-\mu u_{i})\leq 0\) imply \(\tilde{q}_{i}^{1-t}=0\) so we get the necessary form for the optimum:
\[\tilde{q}_{i} = \frac{\exp_{t}\left(\log_{t}q_{i}-\mu u_{i}\right)}{\exp_{t} \lambda} \tag{25}\] \[= \frac{q_{i}\otimes_{t}\exp_{t}(-\mu u_{i})}{Z_{t}},\]
where \(\lambda\) or \(Z_{t}\doteq\exp_{t}\lambda\) ensures normalisation for the co-density. Note that we have a simplified expression for the co-density:
\[p_{i} = \frac{p_{ji}\otimes_{t}\ast\,\exp_{t}\ast\,(-\mu u_{i}/t^{\ast}) }{Z_{t}^{\infty}}, \tag{26}\]
with \(Z_{t}^{\infty}\doteq Z_{t}^{1/t^{\mathbf{*}}}=\sum_{i}p_{ji}\otimes_{t}\ast\,\exp _{t}\ast\,(-\mu u_{i}/t^{\ast})\). For the analytic form in (25), we can simplify the Lagrangian to a dual form that depends on \(\mu\) solely:
\[\mathcal{D}(\mu) = \Delta(\tilde{\mathbf{q}}(\mu)\|\mathbf{q})+\mu\sum_{i}\tilde{q}_{i}(\mu )u_{i}. \tag{27}\]
The proof of (5) (main file) is based on a key Lemma.
**Lemma B**.: _For any \(\tilde{\mathbf{q}}\) having form (25) such that \(\tilde{\mathbf{q}}^{\top}\mathbf{u}=0\), \(\mathcal{D}(\mu)=-\log_{t}Z_{t}(\mu)\)._
Proof.: For any \(\tilde{\mathbf{q}}\) having form (25), denote
\[[m]_{\ast} \doteq \{i:\tilde{q}_{i}\neq 0\}. \tag{28}\]We first compute (still using \(\lambda\doteq\log_{t}Z_{t}(\mu)\) for short):
\[A \doteq \sum_{i}\tilde{q}_{i}\cdot\log_{t}\tilde{q}_{i} \tag{29}\] \[= \sum_{i\in[m]_{\bullet}}\tilde{q}_{i}\cdot\log_{t}\left(\frac{ \exp_{t}\left(\log_{t}q_{i}-\mu u_{i}\right)}{\exp_{t}\lambda}\right)\] \[= \sum_{i\in[m]_{\bullet}}\tilde{q}_{i}\cdot\left(\frac{1}{1-t} \cdot\left[\frac{1+(1-t)(\log_{t}q_{i}-\mu u_{i})}{1+(1-t)\lambda}-1\right]\right)\] \[= \frac{1}{1-t}\cdot\sum_{i\in[m]_{\bullet}}\tilde{q}_{i}\cdot \left(\frac{q_{i}^{1-t}-(1-t)\mu u_{i}}{1+(1-t)\lambda}\right)-\frac{1}{1-t} \cdot\sum_{i\in[m]_{\bullet}}\tilde{q}_{i}\] \[= -\frac{\mu}{1+(1-t)\lambda}\cdot\sum_{i\in[m]_{\bullet}}\tilde{q }_{i}u_{i}+\frac{1}{(1-t)(1+(1-t)\lambda)}\cdot\sum_{i\in[m]_{\bullet}}\tilde {q}_{i}q_{i}^{1-t}-\frac{1}{1-t}\cdot\sum_{i\in[m]_{\bullet}}\tilde{q}_{i}\] \[= \underbrace{-\frac{\mu}{1+(1-t)\lambda}\cdot\tilde{\boldsymbol{q }}^{\top}\boldsymbol{u}}_{\doteq B}+\underbrace{\frac{1}{(1-t)(1+(1-t)\lambda )}\cdot\sum_{i\in[m]}\tilde{q}_{i}q_{i}^{1-t}}_{\doteq C}-\underbrace{\frac{1} {1-t}\cdot\sum_{i\in[m]}\tilde{q}_{i}}_{\doteq D}.\]
Remark that in the last identity, we have put back summations over the complete set \([m]\) of indices. We note that \(B=0\) because \(\tilde{\boldsymbol{q}}^{\top}\boldsymbol{u}=0\). We then remark that without replacing the expression of \(\tilde{\boldsymbol{q}}\), we have in general for any \(\tilde{\boldsymbol{q}}\in\bar{\Delta}_{m}\):
\[E \doteq \sum_{i\in[m]}\tilde{q}_{i}\cdot\left(\log_{t}\tilde{q}_{i}-\log _{t}q_{i}\right)\] \[= \sum_{i\in[m]}\tilde{q}_{i}\cdot\left(\frac{1}{1-t}\cdot\left( \tilde{q}_{i}^{1-t}-1\right)-\frac{1}{1-t}\cdot\left(q_{i}^{1-t}-1\right)\right)\] \[= \frac{1}{1-t}\cdot\sum_{i\in[m]}\tilde{q}_{i}^{2-t}-\frac{1}{1-t }\cdot\sum_{i\in[m]}\tilde{q}_{i}q_{i}^{1-t}\] \[= \frac{1}{1-t}\cdot\left(1-\sum_{i\in[m]}\tilde{q}_{i}q_{i}^{1-t} \right)\!,\]
and we can check that for any \(\tilde{\boldsymbol{q}},\boldsymbol{q}\in\bar{\Delta}_{m}\), \(E=\Delta(\tilde{\boldsymbol{q}}\|\boldsymbol{q})\). We then develop \(\Delta(\tilde{\boldsymbol{q}}\|\boldsymbol{q})\) with a partial replacement of \(\tilde{\boldsymbol{q}}\) by its expression:
\[\Delta(\tilde{\boldsymbol{q}}\|\boldsymbol{q}) = A-\sum_{i}\tilde{q}_{i}\log_{t}q_{i}\] \[= A-\frac{1}{1-t}\cdot\sum_{i}\tilde{q}_{i}q_{i}^{1-t}+\frac{1}{1 -t}\cdot\sum_{i}\tilde{q}_{i}\] \[= C-\frac{1}{1-t}\cdot\sum_{i}\tilde{q}_{i}q_{i}^{1-t}\] \[= \frac{1}{1-t}\cdot\left(\frac{1}{1+(1-t)\lambda}-1\right)\cdot \sum_{i}\tilde{q}_{i}q_{i}^{1-t}\] \[= -\frac{\lambda}{1+(1-t)\lambda}\cdot\sum_{i}\tilde{q}_{i}q_{i}^{1 -t}\] \[= -\frac{\lambda}{1+(1-t)\lambda}\cdot\left(1-(1-t)\cdot\Delta( \tilde{\boldsymbol{q}}\|\boldsymbol{q})\right).\]
Rearranging gives that for any \(\tilde{\boldsymbol{q}},\boldsymbol{q}\in\bar{\Delta}_{m}\) such that (i) \(\tilde{\boldsymbol{q}}\) has the form (25) for some \(\mu\in\mathbb{R}\) and (ii) \(\tilde{\boldsymbol{q}}^{\top}\boldsymbol{u}=0\),
\[\Delta(\tilde{\boldsymbol{q}}\|\boldsymbol{q}) = -\lambda\] \[= -\log_{t}(Z_{t}),\]as claimed. This ends the proof of Lemma B.
We thus get from the definition of the dual that \(\mu=\arg\max-\log_{t}Z_{t}(\mu)=\arg\min Z_{t}(\mu)\). We have the explicit form for \(Z_{t}\):
\[Z_{t}(\mu) = \left(\sum_{i}\exp_{t}^{2-t}\left(\log_{t}q_{i}-\mu u_{i}\right) \right)^{\frac{1}{2-t}}\] \[= \left(\sum_{i\in[m]_{\bullet}}\exp_{t}^{2-t}\left(\log_{t}q_{i}- \mu u_{i}\right)\right)^{\frac{1}{2-t}},\]
where \([m]_{\bullet}\) is defined in (28). We remark that the last expression is differentiable in \(\mu\), and get
\[Z_{t}^{\prime}(\mu) = \frac{1}{2-t}\cdot\left(\sum_{i\in[m]_{\bullet}}\exp_{t}^{2-t} \left(\log_{t}q_{i}-\mu u_{i}\right)\right)^{-\frac{1-t}{2-t}} \tag{30}\] \[\cdot(2-t)\sum_{i\in[m]_{\bullet}}\exp_{t}^{1-t}\left(\log_{t}q_{ i}-\mu u_{i}\right)\cdot\exp_{t}^{t}\left(\log_{t}q_{i}-\mu u_{i}\right)\cdot- u_{i}\] \[= -Z_{t}^{t-1}\cdot\sum_{i\in[m]_{\bullet}}\exp_{t}\left(\log_{t}q_ {i}-\mu u_{i}\right)\cdot u_{i}\] \[= -Z_{t}^{t}\cdot\sum_{i\in[m]_{\bullet}}\tilde{q}_{i}u_{i}\] \[= -Z_{t}^{t}\cdot\tilde{\boldsymbol{q}}^{\top}\boldsymbol{u},\]
so
\[\frac{\partial-\log_{t}(Z_{t})}{\partial\mu} = -Z_{t}^{-t}Z_{t}^{\prime}\] \[= \tilde{\boldsymbol{q}}(\mu)^{\top}\boldsymbol{u},\]
and we get that any critical point of \(Z_{t}(\mu)\) satisfies \(\tilde{\boldsymbol{q}}(\mu)^{\top}\boldsymbol{u}=0\). A sufficient condition to have just one critical point, being the minimum sought is the strict convexity of \(Z_{t}(\mu)\). The next Lemma provides the proof that it is for all \(t>0\).
**Lemma C**.: \(Z_{t}^{\prime\prime}(\mu)\geqslant t\cdot Z_{t}(\mu)^{2t-1}(\tilde{\boldsymbol {q}}(\mu)^{\top}\boldsymbol{u})^{2}\)_._
Proof.: After simplifications, we have
\[Z_{t}^{3-2t}\cdot Z_{t}^{\prime\prime} = (t-1)\cdot\left(\sum_{i\in[m]}\exp_{t}\left(\log_{t}q_{i}-\mu u_{ i}\right)\cdot u_{i}\right)^{2}\] \[+\left(\sum_{i\in[m]}\exp_{t}^{2-t}\left(\log_{t}q_{i}-\mu u_{i} \right)\right)\cdot\left(\sum_{i\in[m]}\exp_{t}^{t}\left(\log_{t}q_{i}-\mu u_ {i}\right)\cdot u_{i}^{2}\right)\] \[= (t-1)\cdot\sum_{i,k\in[m]}Q_{i}Q_{k}u_{i}u_{k}+\sum_{i,k\in[m]}Q_ {i}^{2-t}Q_{k}^{t}u_{k}^{2}, \tag{33}\]
where we have let \(Q_{i}\doteq\exp_{t}\left(\log_{t}q_{i}-\mu u_{i}\right)\geqslant 0\). Since \(a^{2}+b^{2}\geqslant 2ab\), we note that for any \(i\neq k\),
\[Q_{i}^{2-t}Q_{k}^{t}u_{k}^{2}+Q_{k}^{2-t}Q_{i}^{t}u_{i}^{2} \geqslant 2\sqrt{Q_{i}^{2-t}Q_{k}^{t}Q_{k}^{2-t}Q_{i}^{t}}u_{i}u_{k} \tag{34}\] \[=2Q_{i}Q_{k}u_{i}u_{k},\]so we split (33) in two terms and get
\[Z_{t}^{3-2t}\cdot Z_{t}^{\prime\prime} = (t-1)\cdot\sum_{i\in[m]}Q_{i}^{2}u_{i}^{2}+\sum_{i\in[m]}Q_{i}^{2-t} Q_{i}^{t}u_{i}^{2} \tag{35}\] \[+\sum_{i,k\in[m],i<k}2(t-1)Q_{i}Q_{k}u_{i}u_{k}+\sum_{i,k\in[m],i< k}Q_{i}^{2-t}Q_{k}^{t}u_{k}^{2}+Q_{k}^{2-t}Q_{i}^{t}u_{i}^{2}\] \[= t\cdot\sum_{i\in[m]}Q_{i}^{2}u_{i}^{2}\] \[+\sum_{i,k\in[m],i<k}2(t-1)Q_{i}Q_{k}u_{i}u_{k}+\sum_{i,k\in[m],i< k}Q_{i}^{2-t}Q_{k}^{t}u_{k}^{2}+Q_{k}^{2-t}Q_{i}^{t}u_{i}^{2}\] \[\geqslant t\cdot\sum_{i\in[m]}Q_{i}^{2}u_{i}^{2}+2t\cdot\sum_{i,k\in[m],i <k}Q_{i}Q_{k}u_{i}u_{k}\] \[=t\cdot\left(\sum_{i\in[m]}\exp_{t}\left(\log_{t}q_{i}-\mu u_{i} \right)\cdot u_{i}\right)^{2}\] \[= tZ_{t}^{2}\cdot(\tilde{\mathbf{q}}^{\top}\mathbf{u})^{2},\]
where we have used (34) in (35). Since \(Z_{t}(\mu)>0\), we get the statement of Lemma C after reorganising (36).
Lemma C shows the strict convexity of \(Z_{t}(\mu)\) for any \(t>0\). The case \(t=0\) follows by direct differentiation: we get after simplification
\[Z_{t}^{\prime\prime}(\mu) = \frac{\left(\sum_{i\in[m]}u_{i}^{2}\right)\cdot\left(\sum_{i\in[ m]}(q_{i}-\mu u_{i})^{2}\right)-\left(\sum_{i\in[m]}(q_{i}-\mu u_{i})u_{i} \right)^{2}}{\left(\sum_{i\in[m]}(q_{i}-\mu u_{i})^{2}\right)^{\frac{3}{2}}}.\]
Cauchy-Schwartz inequality allows to conclude that \(Z_{t}^{\prime\prime}(\mu)\geqslant 0\) and is in fact \(>0\)_unless_\(\tilde{\mathbf{q}}\) is collinear to \(\mathbf{u}\). This completes the proof of Theorem 1.
### Proof of Theorem 2
The proof involves several arguments, organized into several subsections. Some are more general than what is strictly needed for the proof of the Theorem, on purpose.
#### ii.i.2.1 Clipped summations
For any \(\delta\geqslant 0\), we define clipped summations of the sequence of ordered elements \(v_{1},v_{2},...,v_{J}\): if \(J>1\),
\[{}^{(\delta)}\!\sum_{j=1}^{J}v_{j}\doteq\min\left\{v_{J}+\sum_{j=1}^{(\delta)} \!\sum_{j=1}^{J-1}v_{j},\delta\right\}\quad,\quad{}_{(-\delta)}\!\sum_{j=1}^{ J}v_{j}=\max\left\{v_{J}+\sum_{(-\delta)}\!\sum_{j=1}^{J-1}v_{j},-\delta\right\}, \tag{37}\]
and the base case (\(J=1\)) is obtained by replacing the inner sum by 0. We also define the doubly clipped summation:
\[{}^{(\delta)}\!\sum_{(-\delta)}\!\sum_{j=1}^{J}v_{j}=\max\left\{\min\left\{v_{ J}+\sum_{(-\delta)}^{(\delta)}\!\sum_{j=1}^{J-1}v_{j},\delta\right\},-\delta \right\},\]
with the same convention for the base case. We prove a series of simple but useful properties of the clipped summation.
**Lemma D**.: _The following properties hold true for clipped summation:_
1. _(doubly) clipped summations are noncommutative;_2. _(doubly) clipped summations are ordinary summation in the limit: for any_ \(J\geqslant 1\) _and any sequence_ \(v_{1},v_{2},...,v_{J}\)_,_ \[\lim_{\delta\rightarrow+\infty}\,^{(\delta)}\sum_{j=1}^{J}v_{j}=\lim_{\delta \rightarrow+\infty}\,^{J}\sum_{(-\delta)}^{J}v_{j}=\lim_{\delta\rightarrow+ \infty}\,^{(\delta)}\sum_{(-\delta)}^{J}v_{j}=\sum_{j=1}^{J}v_{j}\]
3. _clipped summations sandwich ordinary summation and the doubly clipped summation: for any_ \(\delta\geqslant 0\)_, any_ \(J\geqslant 1\) _and any sequence_ \(v_{1},v_{2},...,v_{J}\)_,_ \[\,^{(\delta)}\!\!\sum_{j=1}^{J}v_{j}\leqslant\sum_{j=1}^{J}v_{j}\leqslant\,_{ (-\delta)}\!\!\sum_{j=1}^{J}v_{j}\quad;\quad\sum_{j=1}^{(\delta)}\!\!\sum_{(- \delta)}^{J}\sum_{j=1}^{J}v_{j}\leqslant\,_{(-\delta)}\!\!\sum_{j=1}^{J}v_{j}\]
Proof.: Noncommutativity follows from simple counterexamples: for example, for \(v\doteq-1\) and \(w\doteq 2\), if we fix \(v_{1}\doteq v,v_{2}\doteq w\), then \(\,^{(0)}\!\!\sum_{j=1}^{2}v_{j}=1\) while \(\,^{(0)}\!\!\sum_{j=1}^{2}v_{3-j}=-1\). Property [2.] is trivial. The set of leftmost inequalities of property [3.] can be shown by induction, noting the base case is trivial and otherwise, using the induction hypothesis in the leftmost inequality,
\[\,^{(\delta)}\!\!\sum_{j=1}^{J+1}v_{j}\doteq\min\left\{v_{J+1}+\,\,\,^{(\delta) }\!\!\sum_{j=1}^{J}v_{j},\delta\right\}\leqslant\min\left\{v_{J+1}+\sum_{j=1}^ {J}v_{j},\delta\right\}\leqslant v_{J+1}+\sum_{j=1}^{J}v_{j}=\sum_{j=1}^{J+1}v _{j},\]
and similarly
\[\,^{(-\delta)}\!\!\sum_{j=1}^{J+1}v_{j} \doteq \max\left\{v_{J+1}+\,\,\,^{J}\!\!\sum_{(-\delta)}^{J}\!\!\sum_{j} ^{J},-\delta\right\}\] \[\geqslant \max\left\{v_{J+1}+\sum_{j=1}^{J}v_{j},-\delta\right\}\geqslant v _{J+1}+\sum_{j=1}^{J}v_{j}=\sum_{j=1}^{J+1}v_{j}.\]
A similar argument holds for the set of rightmost inequalities: for example, the induction's general case holds
\[\,^{(\delta)}\!\!\sum_{j=1}^{J+1}v_{j} \doteq \min\left\{v_{J+1}+\,\,\,^{(\delta)}\!\!\sum_{j=1}^{J}v_{j},\delta\right\}\] \[\leqslant \min\left\{v_{J+1}+\,\,\,^{(\delta)}\!\!\sum_{(-\delta)}^{J}\!\! \sum_{j}^{J}v_{j},\delta\right\}\] \[\leqslant \max\left\{\min\left\{v_{J+1}+\,\,\,^{(\delta)}\!\!\sum_{j=1}^{J} v_{j},\delta\right\},-\delta\right\}=\,\,^{(\delta)}\!\!\sum_{(-\delta)}^{J}\!\! \sum_{j=1}^{J}v_{j}.\]
for the leftmost inequality. This ends the proof of Lemma D.
#### ii.ii.2.2 Unravelling weights
**Lemma E**.: _Define_
\[v_{j} \doteq m^{1-t^{\ast}}\cdot\left(\prod_{k=1}^{j-1}Z_{tk}\right)^{1-t} \cdot\mu_{j}\quad\left(\text{convention}:\prod_{k=1}^{0}u_{k}\doteq 1\right). \tag{38}\]
_Then \(\forall J\geqslant 1\), weights unravel as:_
\[q_{(J+1)i} = \left\{\begin{array}{cc}\frac{1}{m^{t^{\ast}}\prod_{j=1}^{J}Z _{tj}}\cdot\exp_{t}\left(-\,^{(\nicefrac{{1}}{{1}}-\,^{J})}\!\sum_{j=1}^{J}v_{ j}u_{ji}\right)&\text{ if }\quad t<1\\ \frac{1}{m^{t^{\ast}}\prod_{j=1}^{J}Z_{tj}}\cdot\exp_{t}\left(-\,^{(-\nicefrac {{1}}{{1}}-\,^{J})}\!\sum_{j=1}^{J}v_{j}u_{ji}\right)&\text{ if }\quad t>1\end{array}\right..\]Proof.: We start for the case \(t<1\). We proceed by induction, noting first that the normalization constraint for the initial weights imposes \(q_{1i}=1/m^{1/(2-t)}=1/m^{t^{\bullet}}\) and so (using \((1-t)t^{\bullet}=1-t^{\bullet}\))
\[q_{2i} = \frac{\exp_{t}(\log_{t}q_{1i}-\mu_{1}u_{1i})}{Z_{1}}\] \[= \frac{1}{Z_{1}}\cdot\left[1+(1-t)\cdot\left(\frac{1}{1-t}\cdot \left(\frac{1}{m^{\frac{1-t}{2-t}}}-1\right)-\mu_{1}u_{1i}\right)\right]_{+}^{ \frac{1}{1-t}}\] \[= \frac{1}{Z_{1}}\cdot\left[\frac{1}{m^{1-t^{\bullet}}}-(1-t)\mu_{ 1}u_{1i}\right]_{+}^{\frac{1}{1-t}}\] \[= \frac{1}{m^{t^{\bullet}}Z_{1}}\cdot\left[1-(1-t)m^{1-t^{\bullet} }\mu_{1}u_{1i}\right]_{+}^{\frac{1}{1-t}}\] \[= \frac{1}{m^{t^{\bullet}}Z_{1}}\cdot\exp_{t}\left(-\sum_{j=1}^{1} v_{j}u_{ji}\right),\]
completing the base case. Using the induction hypothesis, we unravel at iteration \(J+1\):
\[q_{(J+1)i}\] \[= \frac{\exp_{t}(\log_{t}q_{Ji}-\mu_{J}u_{Ji})}{Z_{J}}\] \[= \frac{\exp_{t}\left(\log_{t}\left(\frac{1}{m^{t^{\bullet}}\prod_{ j=1}^{J-1}Z_{ij}}\cdot\exp_{t}\left(-\sum_{j=1}^{J-1}v_{j}u_{ji}\right)\right)- \mu_{J}u_{Ji}\right)}{Z_{J}}\] \[= \frac{1}{Z_{J}}\cdot\exp_{t}\left(\frac{\max\left\{-\frac{1}{1-t },-\frac{(\nicefrac{{1}}{{1-t}})}{\sum_{j=1}^{J-1}v_{j}u_{ji}}\right\}-\log_{t }\left(m^{t^{\bullet}}\prod_{j=1}^{J-1}Z_{tj}\right)}{1+(1-t)\log_{t}\left(m^{ t^{\bullet}}\prod_{j=1}^{J-1}Z_{tj}\right)}-\mu_{J}u_{Ji}\right)\] \[= \frac{1}{Z_{J}}\cdot\left[\begin{array}{c}1+\frac{(1-t)\cdot \max\left\{-\frac{1}{1-t},-\frac{(\nicefrac{{1}}{{1-t}})}{\sum_{j=1}^{J-1}v_{j }u_{ji}}\right\}-(1-t)\log_{t}\left(m^{t^{\bullet}}\prod_{j=1}^{J-1}Z_{tj} \right)}{1+(1-t)\log_{t}\left(m^{t^{\bullet}}\prod_{j=1}^{J-1}Z_{tj}\right)} \\ -(1-t)\mu_{J}u_{Ji}\end{array}\right]\] \[= \frac{1}{Z_{J}}\cdot\left[\begin{array}{c}1+\frac{(1-t)\cdot \max\left\{-\frac{1}{1-t},-\frac{(\nicefrac{{1}}{{1-t}})}{\sum_{j=1}^{J-1}v_{j }u_{ji}}\right\}-\left(\left(m^{t^{\bullet}}\prod_{j=1}^{J-1}Z_{tj}\right)^{1- t}-1\right)}{\left(m^{t^{\bullet}}\prod_{j=1}^{J-1}Z_{tj}\right)^{1-t}}\end{array} \right]_{+}^{\frac{1}{1-t}},\]
which simplifies into (using \((1-t)t^{\bullet}=1-t^{\bullet}\))
\[q_{(J+1)i}\] \[= \frac{1}{m^{t^{\bullet}}\prod_{j=1}^{J}Z_{tj}}\cdot\exp_{t}\left( -S_{J}\right),\]with
\[S_{J} \doteq \min\left\{-\max\left\{-\frac{1}{1-t},-\sum_{j=1}^{J-1}v_{j}u_{ji} \right\}+v_{J}u_{Ji},\frac{1}{1-t}\right\}\] \[= \min\left\{v_{J}u_{Ji}+\min\left\{\frac{1}{1-t},\sum_{j=1}^{J-1}v _{j}u_{ji}\right\},\frac{1}{1-t}\right\}\] \[= \min\left\{v_{J}u_{Ji}+\sqrt[(\nicefrac{{1}}{{1-t}})\sum_{j=1}^{J -1}v_{j}u_{ji},\frac{1}{1-t}\right\}\] \[\doteq \sqrt[(\nicefrac{{1}}{{1-t}})\sum_{j=1}^{J}v_{j}u_{ji}\]
(we used twice the definition of clipped summation), which completes the proof of Lemma E for \(t<1\).
We now treat the case \(t>1\). The base induction is equivalent, while unraveling gives, instead of (39):
\[q_{(J+1)i}\] \[= \frac{1}{m^{t*}\prod_{j=1}^{J}Z_{tj}}\cdot\left[1+(1-t)\cdot \left(\min\left\{-\frac{1}{1-t},-\sum_{-(\nicefrac{{1}}{{t-1}})\sum_{j=1}^{J -1}v_{j}u_{ji}\right\}-v_{J}u_{Ji}\right)\right]_{+}^{\frac{1}{1-t}}\] \[= \frac{1}{m^{t*}\prod_{j=1}^{J}Z_{tj}}\cdot\exp_{t}\left(-S_{J} \right),\]
and, this time,
\[S_{J} \doteq \max\left\{-\min\left\{-\frac{1}{1-t},-\sum_{-(\nicefrac{{1}}{{t -1}})\sum_{j=1}^{J-1}v_{j}u_{ji}\right\}+v_{J}u_{Ji},-\frac{1}{t-1}\right\} \tag{40}\] \[= \max\left\{v_{J}u_{Ji}+\max\left\{-\frac{1}{t-1},\ -\sum_{-( \nicefrac{{1}}{{t-1}})\sum_{j=1}^{J-1}v_{j}u_{ji}\right\},-\frac{1}{t-1}\right\}\] (41) \[= \max\left\{v_{J}u_{Ji}+\sum_{-(\nicefrac{{1}}{{t-1}})\sum_{j=1}^{ J-1}v_{j}u_{ji},-\frac{1}{t-1}\right\}\] (42) \[\doteq \sqrt[(\nicefrac{{1}}{{t-1}})\sum_{j=1}^{J}v_{j}u_{ji}, \tag{43}\]
which completes the proof of Lemma E.
#### ii.ii.2.3 Introducing classifiers
Ordinary linear separatorsSuppose we have a classifier
\[H_{J}(\boldsymbol{x}) \doteq \sum_{j=1}^{J}\beta_{j}^{1-t}\mu_{j}\cdot h_{j}(\boldsymbol{x}), \quad\beta_{j}\doteq m^{t*}\prod_{k=1}^{j-1}Z_{tk},\]
where \(\mu_{j}\in\mathbb{R},\forall j\in[J]\). We remark that \([\![z\neq r]\!]\leq\exp_{t}^{2-t}(-zr)\) for any \(t\leq 2\) and \(z,r\in\mathbb{R}\), and \(z\mapsto\exp_{t}^{2-t}(-z)\) is decreasing for any \(t\leq 2\), so using [3.] in Lemma D, we get for our training sample \(\mathcal{S}\doteq\{(\mathbf{x}_{i},y_{i}),i\in[m]\}\) and any \(t<1\) (from Lemma E),
\[\frac{1}{m}\cdot\sum_{i\in[m]}\llbracket\mathrm{sign}(H_{J}(\mathbf{x} _{i}))\neq y_{i}\rrbracket\] \[\leq \sum_{i\in[m]}\frac{\exp_{t}^{2-t}\left(-\sum_{j=1}^{J}m^{1-t^{ \mathfrak{k}}}\left(\prod_{k=1}^{j-1}Z_{tk}\right)^{1-t}\mu_{j}\cdot y_{i}h_{j }(\mathbf{x}_{i})\right)}{m}\] \[\leq \sum_{i\in[m]}\frac{\exp_{t}^{2-t}\left(-\sfrac{\sfrac{\sfrac{ \sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{\sfrac{ \sfrac{\sfrac{\s{\s{\s{\s{\s{\s{\s{\s{\s{\s{\s{\s}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} {\right.\right\right\right\}} \right\right\right\right\} \right\right\}{\]\]\]\]\]\]\]\]\]\]\]\] \] \] \] \[\begin{\begin{}{}\begin{}\begin{}{}\{}\{}\{}\}{} \{}\{}\{}\{}}{}\{}}{}\{}}{}{}}{}{}}{}{}}{}{}}{}{}{}}{}{}{}{}}{}{}{}{}{}}{}{}{}{}{}}{}{}{}{}{}}{}{}{}{}{}{}}{}{}{}{}{}}{}{}{}{}{}}{}{}{}{}}{}{}{}{}{}}{}{}{}{}{}{}}{}{}{}}{}{}{}{}{}}{}{}{}{}}{}{}{}{}}{}{}{}{}{}}{}{}{}{}{}}{}{}{}{}{}{}}{}{}{}{}{}{}{}{}}{}{}{}{}{}{}{}}{}{}{}{}{}{}{}{}}{}{}{}{}{}{}{}{}{}{}}{}{}{}{}{}{}{}{}{}{}{}{}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}}{{}}{}{}{}{}{}{}{}{}{}{}{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}{{}}}{{{}}}{{{}}}{{}}{{}}{{}}{{}}{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}}{{{{}}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{}}}{{{}}}{{{}}}{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{}}}{{{}}}{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{{}}}}{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{}}}}{{{{{}}}
We can now replace (44) by
\[\frac{1}{m}\cdot\sum_{i\in[m]}\big{[}\mathrm{sign}(H^{(\nicefrac{{ 1}}{{1-i}})}_{J}(\mathbf{x}_{i}))\neq y_{i}\big{]}\] \[\leq \sum_{i\in[m]}\frac{\exp_{t}^{2-t}\left(-y_{i}\cdot\frac{\binom{ \nicefrac{{ 1}}{{1-i}}}{{-i}}}{\sum_{j=1}^{J}m^{1-t}\mathfrak{*}\left(\prod_{k=1}^{j-1}Z_{ tk}\right)^{1-t}\mu_{j}\cdot h_{j}(\mathbf{x}_{i})\right)}}{m}\] \[=\sum_{i\in[m]}\frac{\exp_{t}^{2-t}\left(-\frac{\binom{ \nicefrac{{ 1}}{{1-i}}}{{-(\nicefrac{{ 1}}{{1-i}})}}}{\sum_{j=1}^{J}m^{1-t}\mathfrak{*}\left(\prod_{k=1}^{j-1}Z_{tk} \right)^{1-t}\mu_{j}\cdot y_{i}h_{j}(\mathbf{x}_{i})\right)}}{m}\] \[\leq \sum_{i\in[m]}\frac{\exp_{t}^{2-t}\left(-\frac{\binom{ \nicefrac{{ 1}}{{1-i}}}{{-1}}}{\sum_{j=1}^{J}m^{1-t}\mathfrak{*}\left(\prod_{k=1}^{j-1}Z_{ tk}\right)^{1-t}\mu_{j}\cdot y_{i}h_{j}(\mathbf{x}_{i})\right)}}{m}\] \[=\sum_{i\in[m]}\frac{\exp_{t}^{2-t}\left(-\frac{\binom{ \nicefrac{{ 1}}{{1-i}}}{{-1}}}{\sum_{j=1}^{J}v_{j}u_{ji}}\right)}{m}.\]
The first identity has used the fact that \(y_{i}\in\{-1,1\}\), so it can be folded in the doubly clipped summation without changing its value, and the second inequality used [3.] in Lemma D. This directly leads us to the following Lemma.
**Lemma G**.: _For any \(t<1\) and any clipped linear separator_
\[H^{(\nicefrac{{ 1}}{{1-i}})}_{J}(\mathbf{x}) \doteq \stackrel{{\binom{\nicefrac{{ 1}}{{1-i}}}{{-i}}}}{\sum_{j=1}^{J}\beta_{j}^{1-t}\mu_{j}\cdot h_{j}(\mathbf{x})}, \quad\left(\beta_{j}=m^{t\ast}\prod_{k=1}^{j-1}Z_{tk},\mu_{j}\in\mathbb{R},h_{ j}\in\mathbb{R}^{\mathcal{X}},\forall j\in[J]\right),\]
_where \(Z_{tk}\) is the normalization coefficient of \(\mathbf{q}\) in (25) with \(u_{ji}\doteq y_{i}h_{j}(\mathbf{x}_{i})\),_
\[\frac{1}{m}\cdot\sum_{i\in[m]}\big{[}\mathrm{sign}(H^{(\nicefrac{ { 1}}{{1-i}})}_{J}(\mathbf{x}_{i}))\neq y_{i}\big{]} \leq \prod_{j=1}^{J}Z_{tj}^{2-t}. \tag{49}\]
#### ii.ii.2.4 Geometric convergence of the empirical risk
To get the right-hand side of (47) and (49) as small as possible, we can independently compute each \(\mu_{j}\) so as to minimize
\[Z_{tj}^{2-t}(\mu) \doteq \sum_{i\in[m]}\exp_{t}^{2-t}\left(\log_{t}q_{ji}-\mu u_{ji}\right). \tag{50}\]
We proceed in two steps, first computing a convenient upperbound for (50), and then finding the \(\mu\) that minimizes this upperbound.
**Step 1**: We distinguish two cases depending on weight \(q_{ji}\). Let \([m]_{j}^{+}\doteq\{i:q_{ji}>0\}\) and \([m]_{j}^{+}\doteq\{i:q_{ji}=0\}\):
**Case 1**: \(i\in[m]_{j}^{+}\). Let \(r_{ji}=u_{ji}/q_{ji}^{1-t}\) and suppose \(R_{j}>0\) is a real that satisfies
\[|r_{ji}| \leq R_{j},\forall i\in[m]_{j}^{+}. \tag{51}\]
For any convex function \(f\) defined on \([-1,1]\), we have \(f(z)\leq((1+z)/2)\cdot f(1)+((1-z)/2)\cdot f(-1),\forall z\in[-1,1]\) (the straight line is the chord crossing \(f\) at \(z=-1,1\)). Because\[z\mapsto[1-z]_{+}^{\frac{2-t}{t}}\text{ is convex for }t\leq 2,\text{ for any }i\in[m]_{j}^{+}\] \[\exp_{t}^{2-t}\left(\log_{t}q_{ji}-\mu u_{ji}\right)\] \[= \left[q_{ji}^{1-t}-(1-t)\mu u_{ji}\right]_{+}^{\frac{2-t}{t}}\] \[= q_{ji}^{2-t}\cdot\left[1-(1-t)\mu R_{j}\cdot\frac{r_{ji}}{R_{j} }\right]_{+}^{\frac{2-t}{t}}\] \[\leq q_{ji}^{2-t}\cdot\frac{R_{j}+r_{ji}}{2R_{j}}\left[1-(1-t)\mu R_{ j}\right]_{+}^{\frac{2-t}{t}}+q_{ji}^{2-t}\cdot\frac{R_{j}-r_{ji}}{2R_{j}} \left[1+(1-t)\mu R_{j}\right]_{+}^{\frac{2-t}{t}}\] \[=\frac{q_{ji}^{2-t}R_{j}+q_{ji}u_{ji}}{2R_{j}}\left[1-(1-t)\mu R_{ j}\right]_{+}^{\frac{2-t}{t}}+\frac{q_{ji}^{2-t}R_{j}-q_{ji}u_{ji}}{2R_{j}} \left[1+(1-t)\mu R_{j}\right]_{+}^{\frac{2-t}{t}}.\]
**Case 2**: \(i\in[m]_{j}^{\dagger}\). Let \(q_{j}^{\dagger}>0\) be a real that satisfies
\[\frac{|u_{ji}|}{q_{j}^{\dagger 1-t}}<R_{j},\forall i\in[m]_{j}^{\dagger}. \tag{52}\]
Using the same technique as in case 1, we find for any \(i\in[m]_{j}^{\dagger}\)
\[\exp_{t}^{2-t}\left(\log_{t}q_{ji}-\mu u_{ji}\right)\] \[= \exp_{t}^{2-t}\left(-\frac{1}{1-t}-\mu u_{ji}\right)\] \[= \left[-(1-t)\mu u_{ji}\right]_{+}^{\frac{2-t}{t-t}}\] \[\leq \left[q_{j}^{\dagger 1-t}-(1-t)\mu u_{ji}\right]_{+}^{\frac{2-t}{t -t}}\] \[\leq \frac{q_{j}^{\dagger 2-t}R_{j}+q_{ji}^{\dagger}u_{ji}}{2R_{j}} \left[1-(1-t)\mu R_{j}\right]_{+}^{\frac{2-t}{t-t}}+\frac{q_{j}^{\dagger 2-t}R_{j}-q_{j}^{ \dagger}u_{ji}}{2R_{j}}\left[1+(1-t)\mu R_{j}\right]_{+}^{\frac{2-t}{t-t}}.\]
Folding both cases into one and letting
\[q^{\prime}{}_{ji} \doteq \left\{\begin{array}{ccc}q_{ji}&\text{ if }&i\in[m]_{j}^{+}\\ q_{j}^{\dagger}&\text{ if }&i\in[m]_{j}^{\dagger}\end{array}\right., \tag{53}\]
we get after summation, using \(m_{j}^{\dagger}\doteq\mathrm{Card}([m]_{j}^{\dagger})\) and
\[\rho_{j} \doteq \frac{1}{(1+m_{j}^{\dagger}q_{j}^{\dagger 2-t})R_{j}}\cdot\sum_{i \in[m]}q^{\prime}{}_{ji}u_{ji}\quad(\in[-1,1]), \tag{54}\]
that
\[Z_{tj}^{2-t}(\mu)\] \[\leq \frac{(1+m_{j}^{\dagger}q_{j}^{\dagger 2-t})R_{j}}{2R_{j}} \cdot\left((1+\rho_{j})\left[1-(1-t)\mu R_{j}\right]_{+}^{\frac{2-t}{t}}+(1- \rho_{j})\left[1+(1-t)\mu R_{j}\right]_{+}^{\frac{2-t}{t}}\right) \tag{55}\] \[=\frac{1+m_{j}^{\dagger}q_{j}^{\dagger 2-t}}{2}\cdot\left((1+ \rho_{j})\cdot\exp_{t}^{2-t}(-\mu R_{j})+(1-\rho_{j})\cdot\exp_{t}^{2-t}\left( \mu R_{j}\right)\right).\]
**Step 2**: we have our upperbound for (50). We now compute the minimizer \(\mu^{\ast}\) of (55). If this minimizer satisfies
\[|\mu^{\ast}| < \frac{1}{R_{j}|1-t|}, \tag{56}\]
then it can be found by ordinary differentiation, as the solution to
\[(1-\rho_{j})\cdot\exp_{t}\left(\mu^{\ast}R_{j}\right)-(1+\rho_{j}) \cdot\exp_{t}\left(-\mu^{\ast}R_{j}\right) = 0,\]which is equivalently
\[\frac{\exp_{t}\left(-\mu^{\mbox{*}}R_{j}\right)}{\exp_{t}\left(\mu^{ \mbox{*}}R_{j}\right)} = \exp_{t}\left(-\mu^{\mbox{*}}R_{j}\ominus_{t}\mu^{\mbox{*}}R_{j}\right)\] \[= \frac{1-\rho_{j}}{1+\rho_{j}},\]
where we recall \(a\ominus_{t}b\doteq(a-b)/(1+(1-t)b)\). Solving it yields
\[\mu^{\mbox{*}} = \frac{1}{R_{j}}\cdot-\frac{1}{1-t}\cdot\left(\frac{(1-\rho_{j})^{ 1-t}-(1+\rho_{j})^{1-t}}{(1-\rho_{j})^{1-t}+(1+\rho_{j})^{1-t}}\right)\] \[= \frac{1}{R_{j}}\cdot-\frac{1}{1-t}\cdot\left(\frac{2(1-\rho_{j}) ^{1-t}}{(1-\rho_{j})^{1-t}+(1+\rho_{j})^{1-t}}-1\right)\] \[= -\frac{1}{R_{j}}\cdot\log_{t}\left(\frac{1-\rho_{j}}{M_{1-t}(1- \rho_{j},1+\rho_{j})}\right),\]
where \(M_{q}(a,b)\doteq((a^{q}+b^{q})/2)^{1/q}\) is the power mean with exponent \(q\). We now check (56).
**Lemma H**.: _For any \(t\in\mathbb{R}\), let_
\[\mu_{j} \doteq -\frac{1}{R_{j}}\cdot\log_{t}\left(\frac{1-\rho_{j}}{M_{1-t}(1- \rho_{j},1+\rho_{j})}\right). \tag{57}\]
_Then \(|\mu_{j}|\leqslant 1/(R_{j}|1-t|)\)._
Proof.: Equivalently, we must show
\[\left|\log_{t}\left(\frac{1-z}{M_{1-t}(1-z,1+z)}\right)\right| \leqslant \frac{1}{|1-t|},\forall z\in[-1,1],\]
which is equivalent to showing
\[\left|\frac{2(1-z)^{1-t}}{(1-z)^{1-t}+(1+z)^{1-t}}-1\right|\left( =\left|\frac{1-\left(\frac{1+z}{1-z}\right)^{1-t}}{1+\left(\frac{1+z}{1-z} \right)^{1-t}}\right|\right) \leqslant 1,\forall z\in[-1,1].\]
Define function \(f(z,t)\doteq(1-z^{1-t})/(1+z^{1-t})\) over \(\mathbb{R}_{\geqslant 0}\times\mathbb{R}\): it is easy to check that for \(t\leqslant 1,f(z,t)\in[-1,1]\), and the symmetry \(f(z,t)=-f(z,2-t)\) also allows to conclude that for \(t\geqslant 1,f(z,t)\in[-1,1]\). This ends the proof of Lemma H.
For the expression of \(\mu_{j}\) in (57), we get from (55) the upperbound on \(Z_{tj}^{2-t}(\mu_{j})\):
\[Z_{tj}^{2-t}(\mu_{j}) \leqslant \frac{1+m_{j}^{\dagger}q_{j}^{\dagger^{2-t}}}{2}\cdot\left((1+ \rho_{j})\cdot\exp_{t}^{2-t}\left(-\mu_{j}R_{j}\right)+(1-\rho_{j})\cdot\exp _{t}^{2-t}\left(\mu_{j}R_{j}\right)\right)\] \[=\frac{1+m_{j}^{\dagger}q_{j}^{\dagger^{2-t}}}{2}\cdot\left(\frac {(1+\rho_{j})(1-\rho_{j})^{2-t}}{M_{1-t}^{2-t}(1-\rho_{j},1+\rho_{j})}+\frac{( 1-\rho_{j})(1+\rho_{j})^{2-t}}{M_{1-t}^{2-t}(1-\rho_{j},1+\rho_{j})}\right)\] \[= \left(1+m_{j}^{\dagger}q_{j}^{\dagger^{2-t}}\right)\cdot\frac{(1- \rho_{j}^{2})M_{1-t}^{1-t}(1-\rho_{j},1+\rho_{j})}{M_{1-t}^{2-t}(1-\rho_{j},1 +\rho_{j})}\] \[= \left(1+m_{j}^{\dagger}q_{j}^{\dagger^{2-t}}\right)\cdot\frac{1- \rho_{j}^{2}}{M_{1-t}(1-\rho_{j},1+\rho_{j})}.\]
We conclude that for both sets of classifiers defined in Lemmata F and G, with the choice of \(\mu_{j}\) in (57), we get
\[\frac{1}{m}\cdot\sum_{i\in[m]}[\mbox{sign}(H(\mathbf{x}_ {i}))\neq y_{i}] \leqslant \prod_{j=1}^{J}\left(1+m_{j}^{\dagger}q_{j}^{\dagger^{2-t}}\right) \cdot\frac{1-\rho_{j}^{2}}{M_{1-t}(1-\rho_{j},1+\rho_{j})},\forall H\in\{H_{J}, H_{J}^{\left(j_{1-t}\right)}\}.\]To complete the proof of Theorem 2, we just need to elicit the best \(R_{j}\) (51) and \(q_{j}^{\dagger}\) (52); looking at their constraints suggests
\[R_{j} \doteq \max_{i\notin[m]_{j}^{\dagger}}\frac{|y_{i}h_{j}(\boldsymbol{x}_{i })|}{q_{ji}^{1-t}},\] \[q_{j}^{\dagger} \doteq \frac{\max_{i\in[m]_{j}^{\dagger}}|y_{i}h_{j}(\boldsymbol{x}_{i}) |^{1/(1-t)}}{R_{j}^{1/(1-t)}}.\]
This completes the proof of Theorem 2. We complete the proof by two Lemmata of additional useful results in the context of Algorithm \(t\)-AdaBoost, and finally an important remark on the interpretation of Theorem 2.
**Lemma I**.: _The following holds true: (i) \(\rho_{j}\in[-1,1]\); (ii) if, among indexes not in \([m]_{j}^{\dagger}\), there exists at least one index with \(u_{ji}>0\) and one index with \(u_{ji}<0\), then for any \(\mu\neq 0\), \(Z_{ij}^{2-t}(\mu)>0\) in (50) (in words, the new weigh vector \(\boldsymbol{q}_{j+1}\) cannot be the null vector before normalization)._
Proof.: To show (i) for \(\rho_{j}\leqslant 1\), we write (using \(u_{ji}\doteq y_{i}h_{j}(\boldsymbol{x}_{i}),\forall i\in[m]\) for short),
\[(1+m_{j}^{\dagger}q_{j}^{\dagger^{2-t}})R_{j}\cdot\rho_{j} = \sum_{i\in[m]}q_{j}^{\prime}u_{ji}u_{ji}\] \[\leqslant \sum_{i\in[m]}q_{j}^{\prime}{}_{ji}|u_{ji}|\] \[=\sum_{i\in[m]_{j}^{+}}q_{ji}^{2-t}\cdot\frac{|u_{ji}|}{q_{ji}^{1 -t}}+q_{j}^{\dagger^{2-t}}\cdot\frac{\sum_{i\in[m]_{j}^{\dagger}}|u_{ji}|}{q_ {j}^{\dagger^{1-t}}}\] \[\leqslant R_{j}\cdot\underbrace{\sum_{i\in[m]_{j}^{+}}q_{ji}^{2-t}}_{=1}+ q_{j}^{\dagger^{2-t}}\cdot\frac{R_{j}\sum_{i\in[m]_{j}^{\dagger}}|u_{ji}|}{ \max_{i\in[m]_{j}^{\dagger}}|u_{ji}|}\] \[\leqslant R_{j}+q_{j}^{\dagger^{2-t}}m_{j}^{\dagger}R_{j}=(1+m_{j}^{ \dagger}q_{j}^{\dagger^{2-t}})R_{j},\]
showing \(\rho_{j}\leqslant 1\). Showing \(\rho_{j}\geqslant-1\) proceeds in the same way. Property (ii) is trivial.
**Lemma J**.: \[K_{t}(z) \leqslant \exp\left(-\left(1-\frac{t}{2}\right)\cdot z^{2}\right).\]
Proof.: We remark that for \(t\in[0,1),z\geqslant 0\), \(K_{t}^{\prime}(z)\) is concave and \(K_{t}^{\prime\prime}(0)=-(2-t)\), so \(K_{t}^{\prime}(z)\leqslant-(2-t)z,\forall z\geqslant 0\), from which it follows by integration
\[K_{t}(z) \leqslant 1-\left(1-\frac{t}{2}\right)\cdot z^{2},\]
and since \(1-z\leqslant\exp(-z)\), we get the statement of the Lemma.
**Remark 1**.: _The interpretation of Theorem 2 for \(t<1\) are simplified to the case where there is no weight switching, i.e. \(m_{j}^{\dagger}=0,\forall j\). While we have never observed weight switching in our experiments - perhaps owing to the fact that we did never boost for a very long number of iterations or just because our weak classifiers, decision trees, were in fact not so weak -, it is interesting, from a theoretical standpoint, to comment on convergence when this happens. Let \(Q_{j}=1+m_{j}^{\dagger}(q_{j}^{\dagger})^{2-t}\) and \(\tilde{\rho}_{j}=Q_{j}\rho_{j}\) (Notations from Theorem 2). We note that \(\tilde{\rho}_{j}\approx\beta\cdot\mathbb{E}_{\boldsymbol{p}_{j}}[y_{i}h_{j}( \boldsymbol{x}_{i})]\), where \(\boldsymbol{p}_{j}\) lives on the simplex and \(|yh|\leqslant 1,\beta\leqslant 1\). Using Lemma J and (12) (main file), to keep geometric convergence, it is roughly sufficient that \(Q_{j}\log Q_{j}\leqslant(\tilde{\rho}_{j})^{2}/(2t^{*})\). Since \(q_{j}^{\dagger}\) is homogeneous to a tempered weight, one would expect in general \(m_{j}^{\dagger}(q_{j}^{\dagger})^{2-t}\leqslant 1\), so using the Taylor approximation \(Q_{j}\log Q_{j}\approx-1+Q_{j}\), one gets the refined sufficient condition for geometric convergence_
\[m_{j}^{\dagger}(q_{j}^{\dagger})^{2-t} \leqslant (\tilde{\rho}_{j})^{2}/(2t^{*})=O((\tilde{\rho}_{j})^{2}).\]
_What does that imply? We have two cases:_* _If this holds, then we have geometric convergence;_
* _if it does not hold, then for a "large" number of training examples, we must have_ \(q_{ji}=0\) _which, because of the formula for_ \(\boldsymbol{q}\)__(_8_) implies that all these examples receive the right class with a sufficiently large margin. Breaking geometric convergence in this case is not an issue: we already have a good ensemble._
### Proof of Theorem 3
Starting from the proof of Theorem 2, we indicate the additional steps to get to the proof of Theorem 3. The key is to remark that our margin formulation has the following logical convenience:
\[\llbracket\nu_{t}((\boldsymbol{x}_{i},y_{i}),H)\leq\theta\rrbracket = \llbracket-yH(\boldsymbol{x})+\log_{t}\left(\frac{1+\theta}{1- \theta}\right)-(1-t)yH(\boldsymbol{x})\log_{t}\left(\frac{1+\theta}{1-\theta} \right)\geq 0\rrbracket\] \[= \llbracket(-yH(\boldsymbol{x}))\oplus_{t}\log_{t}\left(\frac{1+ \theta}{1-\theta}\right)\geq 0\rrbracket.\]
We then remark that since \(\llbracket z\geq 0\rrbracket\leq\exp_{t}^{2-t}(z)\), we get
\[\llbracket\nu_{t}((\boldsymbol{x}_{i},y_{i}),H)\leq\theta\rrbracket \leq \exp_{t}^{2-t}\left(\left(-yH(\boldsymbol{x})\right)\oplus_{t} \log_{t}\left(\frac{1+\theta}{1-\theta}\right)\right)\] \[=\exp_{t}^{2-t}\left(\log_{t}\left(\frac{1+\theta}{1-\theta} \right)\right)\cdot\exp_{t}^{2-t}(-yH(\boldsymbol{x}))\] \[= \left(\frac{1+\theta}{1-\theta}\right)^{2-t}\cdot\exp_{t}^{2-t}( -yH(\boldsymbol{x})).\]
We then just have to branch to (44), replacing the \(\llbracket\text{sign}(H_{J}(\boldsymbol{x}_{i}))\neq y_{i}\rrbracket\)s by \(\llbracket\nu_{t}((\boldsymbol{x}_{i},y_{i}),H)\leq\theta\rrbracket\), which yields in lieu of (46) the sought inequality:
\[F_{t,\theta}(H,\mathcal{S}) \leq \left(\frac{1+\theta}{1-\theta}\right)^{2-t}\prod_{j=1}^{J}Z_{tj }^{2-t}. \tag{58}\]
### Proof of Theorem 4
The proof proceeds in three parts. Part **(A)** makes a brief recall on encoding linear classifiers with decision trees. Part **(B)** solves (6) in mf, _i.e._ finds boosting's leveraging coefficients as solution of:
\[\boldsymbol{q}(\mu)^{\top}\boldsymbol{u} = 0. \tag{59}\]
we then simplify the loss obtained and elicit the conditional Bayes risk of the tempered loss, _i.e._ (20) in mf. Part **(C)** elicits the partial losses and shows properness and related properties.
Part **(A): encoding linear models with a tree architecture**We use the reduction trick of **(author?)**[16] to design a decision tree (DT) boosting procedure, find out the (concave) loss equivalently minimized, just like in classical top-down DT induction algorithms [6]. The trick is simple: a DT can be thought of as a set of constant linear classifiers. The prediction is the sum of predictions put at all nodes. Boosting fits those predictions at the nodes and percolating those to leaves gets a standard DT with real predictions at the leaves. Figure 2 provides a detailed description of the procedure. Let \(\lambda\) denote a leaf node of the current tree \(H\), with \(H_{\lambda}\in\mathbb{R}\) the function it implements for leaf \(\lambda\). If \(\mathrm{parent}(\lambda)\) denotes its parent node (assuming wlog it is not the root node), we have
\[H_{\lambda} \doteq H_{\mathrm{parent}(\lambda)}+\mu_{\lambda}h_{\lambda}, \tag{60}\]
Part **(B): eliciting the Bayes risk of the tempered loss**With our simple classifiers at hand, the tempered exponential loss \(Z_{tj}^{2-t}\) in (14) (mf) can be simplified to loss
\[L(H) \doteq \sum_{i}\exp_{t}^{2-t}\left(\log_{t}q_{1i}-y_{i}H_{\lambda( \boldsymbol{x}_{i})}\right) \tag{61}\] \[= \sum_{\lambda\in\Lambda(H)}m_{\lambda}^{+}\exp_{t}^{2-t}\left( \log_{t}q_{1i}-H_{\lambda}\right)+m_{\lambda}^{-}\exp_{t}^{2-t}\left(\log_{t} q_{1i}+H_{\lambda}\right),\]where \(\lambda(\mathbf{x})\) is the leaf reached by observation \(\mathbf{x}\) and \(\lambda(H)\) its set of leaf nodes of \(H\), and \(H_{\lambda}\) sums all relevant values in (60). Also, \(m_{\lambda}^{+},m_{\lambda}^{-}\) denote the cardinal of positive and negative examples at \(\lambda\) and \(p_{\lambda}\doteq m_{\lambda}^{+}/(m_{\lambda}^{+}+m_{\lambda}^{-})\) the local proportion of positive examples at \(\lambda\), and finally \(r_{\lambda}\doteq(m_{\lambda}^{+}+m_{\lambda}^{-})/m\) the total proportion of examples reaching \(\lambda\).
**Theorem A**.: _If we compute \(\mu_{\lambda}\) the solution of (59), we end up with the prediction \(H_{\lambda}\):_
\[H_{\lambda} = \frac{q_{1i}^{1-t}}{1-t}\cdot\frac{\left(\frac{m_{\lambda}^{+}}{ m_{\lambda}^{-}}\right)^{1-t}-1}{\left(\frac{m_{\lambda}^{+}}{m_{\lambda}^{-}} \right)^{1-t}+1} \tag{62}\] \[= \frac{q_{1i}^{1-t}}{1-t}\cdot\frac{p_{\lambda}^{1-t}-(1-p_{ \lambda})^{1-t}}{p_{\lambda}^{1-t}+(1-p_{\lambda})^{1-t}}, \tag{63}\]
_and the loss of the decision tree equals:_
\[L(H) = \sum_{\lambda\in\Lambda(H)}r_{\lambda}\cdot\frac{2p_{\lambda}(1-p _{\lambda})}{M_{1-t}(p_{\lambda},1-p_{\lambda})}, \tag{64}\] \[= \mathbb{E}_{\lambda}[\underline{L}^{(t)}(p_{\lambda})]. \tag{65}\]
Proof.: To compute \(\mu_{\lambda}\), (6) is reduced to the examples reaching \(\lambda\), that is, it simplifies to
\[m_{\lambda}^{+}\exp_{t}\left(\log_{t}q_{1i}-H_{\mathrm{parent}( \lambda)}-R_{\lambda}\mu_{\lambda}h_{\lambda}\right) = m_{\lambda}^{-}\exp_{t}\left(\log_{t}q_{1i}+H_{\mathrm{parent}( \lambda)}+R_{\lambda}\mu_{\lambda}h_{\lambda}\right), \tag{66}\]
that we solve for \(\mu_{\lambda}\). Equivalently,
\[\frac{\exp_{t}\left(\log_{t}q_{1i}+H_{\mathrm{parent}(\lambda)}+ R_{\lambda}\mu_{\lambda}h_{\lambda}\right)}{\exp_{t}\left(\log_{t}q_{1i}-H_{ \mathrm{parent}(\lambda)}-R_{\lambda}\mu_{\lambda}h_{\lambda}\right)} = \frac{m_{\lambda}^{+}}{m_{\lambda}^{-}},\]
or, using \(\exp_{t}(u)/\exp_{t}(v)=\exp_{t}(u\ominus_{t}v)\),
\[\frac{2H_{\mathrm{parent}(\lambda)}+2R_{\lambda}\mu_{\lambda}h_ {\lambda}}{1+(1-t)(\log_{t}q_{1i}-H_{\mathrm{parent}(\lambda)}-R_{\lambda}\mu_ {\lambda}h_{\lambda})} = \log_{t}\left(\frac{m_{\lambda}^{+}}{m_{\lambda}^{-}}\right),\]
after reorganizing:
\[R_{\lambda}\mu_{\lambda}h_{\lambda} = \frac{(1+(1-t)(\log_{t}q_{1i}-H_{\mathrm{parent}(\lambda)}))\cdot \log_{t}\left(\frac{m_{\lambda}^{+}}{m_{\lambda}^{-}}\right)-2H_{\mathrm{ parent}(\lambda)}}{2+(1-t)\log_{t}\left(\frac{m_{\lambda}^{+}}{m_{\lambda}^{-}} \right)},\]
Figure 2: The weak learner provides weak hypotheses of the form \([\![x_{k}\geq a_{j}]\!]\cdot b_{j}\). From the boosting standpoint, this weak hypothesis is "as good as" the weak hypothesis \(\overline{h}_{j}(\mathbf{x})\doteq[x_{k}<a_{j}]\!]\cdot-b_{j}\). The predicates of both are used to craft a split, _e.g._ for the root (in our depiction, \(b_{3}=-b_{2}\)) and then solving (59) provides the leveraging coefficients \(\mu_{\cdot}\). We then repeat this for as many splits as necessary. At the end, we can "percolate" nodes reals towards the leaves below and get an equivalent classifier that resembles a decision tree (right). See [16] for further details.
which yields the prediction at \(\lambda\):
\[H_{\lambda} = H_{\rm parent(\lambda)}+\frac{(1+(1-t)(\log_{t}q_{1i}-H_{\rm parent( \lambda)}))\cdot\log_{t}\left(\frac{m_{\lambda}^{+}}{m_{\lambda}^{-}}\right)-2H_ {\rm parent(\lambda)}}{2+(1-t)\log_{t}\left(\frac{m_{\lambda}^{+}}{m_{\lambda} ^{-}}\right)} \tag{67}\] \[= \frac{(1+(1-t)\log_{t}q_{1i})\cdot\log_{t}\left(\frac{m_{\lambda}^ {+}}{m_{\lambda}^{-}}\right)}{2+(1-t)\log_{t}\left(\frac{m_{\lambda}^{+}}{m_{ \lambda}^{-}}\right)}\] \[= q_{1i}^{1-t}\cdot\frac{\log_{t}\left(\frac{m_{\lambda}^{+}}{m_{ \lambda}^{-}}\right)}{2+(1-t)\log_{t}\left(\frac{m_{\lambda}^{+}}{m_{\lambda} ^{-}}\right)}\] (69) \[= \frac{q_{1i}^{1-t}}{1-t}\cdot\frac{\left(\frac{m_{\lambda}^{+}}{m _{\lambda}^{-}}\right)^{1-t}-1}{\left(\frac{m_{\lambda}^{+}}{m_{\lambda}^{+}} \right)^{1-t}+1}\] (70) \[= \frac{q_{1i}^{1-t}}{1-t}\cdot\frac{p_{\lambda}^{1-t}-(1-p_{ \lambda})^{1-t}}{p_{\lambda}^{1-t}+(1-p_{\lambda})^{1-t}}. \tag{71}\]
We plug \(H_{\lambda}\) back in the loss for all leaves and get, using \(q_{1i}=1/m^{1/(2-t)}\):
\[L(H) = \sum_{\lambda\in\Lambda(H)}\left\{\begin{array}{c}m_{\lambda}^{ +}\exp_{t}^{2-t}\left(\log_{t}q_{1i}-\frac{q_{1i}^{1-t}}{1-t}\cdot\frac{p_{ \lambda}^{1-t}-(1-p_{\lambda})^{1-t}}{p_{\lambda}^{1-t}+(1-p_{\lambda})^{1-t} }\right)\\ +m_{\lambda}^{-}\exp_{t}^{2-t}\left(\log_{t}q_{1i}+\frac{q_{1i}^{1-t}}{1-t} \cdot\frac{p_{\lambda}^{1-t}-(1-p_{\lambda})^{1-t}}{p_{\lambda}^{1-t}+(1-p_{ \lambda})^{1-t}}\right)\end{array}\right.. \tag{72}\]
We simplify. First,
\[m_{\lambda}^{+}\exp_{t}^{2-t}\left(\log_{t}q_{1i}-\frac{q_{1i}^{ 1-t}}{1-t}\cdot\frac{p_{\lambda}^{1-t}-(1-p_{\lambda})^{1-t}}{p_{\lambda}^{1- t}+(1-p_{\lambda})^{1-t}}\right) \tag{73}\] \[= m_{\lambda}^{+}\left[q_{1i}^{1-t}\cdot\left(1-\frac{p_{\lambda} ^{1-t}-(1-p_{\lambda})^{1-t}}{p_{\lambda}^{1-t}+(1-p_{\lambda})^{1-t}}\right) \right]_{+}^{\frac{2-t}{1-t}}\] \[= \frac{m_{\lambda}^{+}}{m}\cdot\left[\frac{2(1-p_{\lambda})^{1-t} }{p_{\lambda}^{1-t}+(1-p_{\lambda})^{1-t}}\right]_{+}^{\frac{2-t}{1-t}}\] \[= \frac{m_{\lambda}^{+}}{m}\cdot\left(\frac{1-p_{\lambda}}{M_{1-t} (p_{\lambda},1-p_{\lambda})}\right)^{2-t}, \tag{74}\]
and then
\[m_{\lambda}^{-}\exp_{t}^{2-t}\left(\log_{t}q_{1i}+\frac{q_{1i}^{ 1-t}}{1-t}\cdot\frac{p_{\lambda}^{1-t}-(1-p_{\lambda})^{1-t}}{p_{\lambda}^{1- t}+(1-p_{\lambda})^{1-t}}\right) \tag{75}\] \[= m_{\lambda}^{-}\left[q_{1i}^{1-t}\cdot\left(1+\frac{p_{\lambda} ^{1-t}-(1-p_{\lambda})^{1-t}}{p_{\lambda}^{1-t}+(1-p_{\lambda})^{1-t}}\right) \right]_{+}^{\frac{2-t}{1-t}}\] \[= \frac{m_{\lambda}^{-}}{m}\cdot\left[\frac{2p_{\lambda}^{1-t}}{p_ {\lambda}^{1-t}+(1-p_{\lambda})^{1-t}}\right]_{+}^{\frac{2-t}{1-t}}\] \[= \frac{m_{\lambda}^{-}}{m}\cdot\left(\frac{p_{\lambda}}{M_{1-t}(p_ {\lambda},1-p_{\lambda})}\right)^{2-t}, \tag{76}\]and we can simplify the loss,
\[L(H) = \sum_{\lambda\in\Lambda(H)}r_{\lambda}p_{\lambda}\left(\frac{1-p_{ \lambda}}{M_{1-t}(p_{\lambda},1-p_{\lambda})}\right)^{2-t}+r_{\lambda}(1-p_{ \lambda})\left(\frac{p_{\lambda}}{M_{1-t}(p_{\lambda},1-p_{\lambda})}\right)^{2 -t} \tag{77}\] \[= \sum_{\lambda\in\Lambda(H)}r_{\lambda}\cdot\frac{p_{\lambda}(1-p_{ \lambda})^{2-t}+(1-p_{\lambda})p_{\lambda}^{2-t}}{M_{1-t}^{2-t}(p_{\lambda},1- p_{\lambda})}\] (78) \[= \sum_{\lambda\in\Lambda(H)}r_{\lambda}\cdot\frac{p_{\lambda}(1-p_{ \lambda})\cdot(p_{\lambda}^{1-t}+(1-p_{\lambda})^{1-t})}{M_{1-t}^{2-t}(p_{ \lambda},1-p_{\lambda})}\] (79) \[= \sum_{\lambda\in\Lambda(H)}r_{\lambda}\cdot\frac{2p_{\lambda}(1-p _{\lambda})\cdot M_{1-t}^{1-t}(p_{\lambda},1-p_{\lambda})}{M_{1-t}^{2-t}(p_{ \lambda},1-p_{\lambda})}\] (80) \[= \sum_{\lambda\in\Lambda(H)}r_{\lambda}\cdot\frac{2p_{\lambda}(1-p _{\lambda})}{M_{1-t}(p_{\lambda},1-p_{\lambda})}, \tag{81}\]
as claimed. This ends the proof of Theorem A.
Part (C): partial losses and their propertiesThe proof relies on the following Theorem. We recall that a loss is symmetric iff its partial losses satisfy \(\ell_{1}(u)=\ell_{-1}(1-u),\forall u\in[0,1]\)[20] and differentiable iff its partial losses are differentiable.
**Theorem B**.: _Suppose \(t<2\). A set of partial losses having the conditional Bayes risk \(\underline{L}^{(t)}\) in (65) are_
\[\ell_{1}^{(t)}(u)\doteq\left(\frac{1-u}{M_{1-t}(u,1-u)}\right)^{2- t}\quad,\quad\ell_{-1}^{(t)}(u)\doteq\ell_{1}^{(t)}(1-u). \tag{82}\]
_The tempered loss is then symmetric and differentiable. It is strictly proper for any \(t\in(-\infty,2)\) and proper for \(t=-\infty\)._
Proof.: Symmetry and differentiability are straightforward. To check strict properness, we analyze the cases for \(t\neq 1\) (otherwise, it is Matusita's loss, and thus strictly proper), we compute the solution \(u\) to
\[\frac{\partial}{\partial u}L\left(u,v\right) = 0. \tag{83}\]
To this end, let \(N(u)\doteq v(1-u)^{2-t}+(1-v)u^{2-t}\) and the \(q\)-sum
\[S_{q}(a,b) \doteq (a^{q}+b^{q})^{1/q}=2^{1/q}\cdot M_{q}(a,b). \tag{84}\]
We also let \(D(u)\doteq S_{1-t}^{2-t}(u,1-u)\). Noting \(L^{(t)}(u,v)=2^{\frac{2-t}{1-t}}\cdot N(u)/D(u)\) and \(D(u)\neq 0,\forall u\in[0,1]\), the set of solutions of (83) are the set of solutions to \(N^{\prime}(u)D(u)=N(u)D^{\prime}(u)\), which boils down, after simplification, to
\[((1-v)u^{1-t}-v(1-u)^{1-t})S_{1-t}^{2-t}(u,1-u)\] \[= (v(1-u)^{2-t}+(1-v)u^{2-t})(u^{-t}-(1-u)^{-t})S_{1-t}(u,1-u),\]
developing and simplifying yields a first simplified expression \((1-2v)(u(1-u))^{1-t}=vu^{-1}(1-u)^{2-t}-(1-v)(1-u)^{-t}u^{2-t}\), which, after reorganising to isolate expressions depending on \(v\), yields
\[(u(1-u))^{1-t}+(1-u)^{-t}u^{2-t} = v\cdot\left(u^{-t}(1-u)^{2-t}+(1-u)^{-t}u^{2-t}+2(u(1-u))^{1-t}\right) \tag{85}\]
Assuming \(v\in(0,1)\), we multiply by \((u(1-u))^{1-t}\) (we shall check \(u\in(0,1)\)) and simplify, which yields \(u(1-u)+u^{2}=v((1-u)^{2}+u^{2}+2u(1-u))\), and indeed yields
\[u = v, \tag{86}\]
and we check from (85) that if \(v=0\) (resp. \(v=1\)), then necessarily \(u=0\) (resp. \(u=1\)). To complete the proof, using the previous derivations, we can then simplify
\[\frac{\partial}{\partial u}L\left(u,v\right) = (2-t)\cdot 2^{\frac{2-t}{1-t}}\cdot\frac{u-v}{(u(1-u))^{t}\cdot S_{1-t}^{3-2t}(u,1-u)}, \tag{87}\]which shows that if \(2-t>0\) but \(t\neq-\infty\), \(u=v\) is a strict minimum of the pointwise conditional risk, completing the proof for strict properness. Strict properness is sufficient to show by a simple computation that \(\underline{L}^{(t)}\) is (65). For \(t=-\infty\), we pass to the limit and use the fact that we can also write
\[\ell_{1}^{(t)}(u) = \frac{1}{M_{1-t^{\mathfrak{k}}}\left(1,\left(\frac{u}{1-u}\right) ^{\frac{1}{t^{\mathfrak{k}}}}\right)}\quad(\mbox{we recall }t^{\mathfrak{k}} \doteq 1/(2-t)) \tag{88}\]
\(t\rightarrow-\infty\) is equivalent to \(t^{\mathfrak{s}}\to 0^{+}\). If \(u<1/2\), \(u/(1-u)<1\) and so we see that
\[\lim_{t^{\mathfrak{s}}\to 0^{+}}M_{1-t^{\mathfrak{s}}}\left(1, \left(\frac{u}{1-u}\right)^{\frac{1}{t^{\mathfrak{k}}}}\right) = \frac{1}{2},\]
because \(M_{1}\) is the arithmetic mean. When \(u>1/2\), \(u/(1-u)>1\) and so this time
\[\lim_{t^{\mathfrak{s}}\to 0^{+}}M_{1-t^{\mathfrak{k}}}\left(1, \left(\frac{u}{1-u}\right)^{\frac{1}{t^{\mathfrak{k}}}}\right) = +\infty.\]
Hence,
\[\ell_{1}^{(-\infty)}(u) = 2\cdot[\![u\leqslant 1/2]\!], \tag{89}\]
which is (twice) the partial loss of the 0/1 loss [26].
This ends the proof of Theorem 4.
## III Supplementary material on experiments
### Domains
Table A3 presents the 10 domains we used for our experiments.
### Implementation details and full set of experiments on linear combinations of decision trees
SummaryThis Section depicts the full set of experiments summarized in Table 2 (mf), from Table A4 to Table A15. Tables are ordered in increasing size of the domain (Table A3). In all cases, up to \(J=20\) trees have been trained, of size 15 (total number of nodes, except the two biggest domains, for which the size is 5). For all datasets, except creditcard and adult, we have tested \(t\) in the complete range, \(t\in\{0.0,0.2,0.4,0.6,0.8,0.9,1.0,1.1\}\) (the mf only reports results for \(t\geqslant 0.6\)), and in all cases, models both clipped and not clipped. For each dataset, we have set a 10-fold stratified cross-validation experiment, and report the averages for readability (Table 2 in mf gives the results of a Student paired \(t\)-test on error averages for comparison, limit \(p\)-val = 0.1). We also provide two examples of training error averages for domains hillnoise and hillnonoise (Tables A10 and A12).
Implementation details of \(t\)-AdaBoostFirst, regarding file format, we only input a.csv file to \(t\)-AdaBoost. We do not specify a file with feature types as in ARFF files. \(t\)-AdaBoost recognizes the type of each feature from its column content and distinguishes two main types of features: numerical and categorical. The distinction is important to design the splits during decision tree induction: for numerical values, splits are midpoints between two successive observed values. For categorical, splits are partitions of the feature values in two non-empty subsets. Our implementation of \(t\)-AdaBoost (programmed in Java) makes it possible to choose \(t\) not just in the range of values for which we have shown that boosting-compliant convergence is possible (\(t\in[0,1]\)), but also \(t>1\). Because we thus implement AdaBoost (\(t=1\)) but also for \(t>1\), weights can fairly easily become infinite, we have implemented a safe-check during training, counting the number of times the weights become infinite or zero (note that in this latter case, this really is a problem just for AdaBoost because in theory this should never happen unless the weak classifiers achieve perfect (or perfectly wrong) classification), but also making sure leveraging coefficients for classifiers do not become infinite for AdaBoost, a situation that can happen because of numerical approximations in encoding. In our experiments, we have observed that none of these problematic cases did occur (notice that this could not be the case if we were to boost for a large number of iterations). We have implemented algorithm \(t\)-AdaBoost exactly as specified in mf. The weak learner is implemented to train a decision tree in which the stopping criterion is the size of the tree reaching a user-fixed number of nodes. There is thus no pruning. Also, the top-down induction algorithm proceeds by iteratively picking the heaviest leaf in the tree and then choosing the split that minimizes the expected Bayes risk of the tempered loss, computing using the same \(t\) values as for \(t\)-AdaBoost, and with the constraint to not get pure leaves (otherwise, the real prediction at the leaves, which relies on thelink of the loss, would be infinite for AdaBoost). In our implementation of decision-tree induction, when the number of possible splits exceeds a fixed number \(S\) (currently, 2 000), we pick the best split in a subset of \(S\) splits picked at random.
ResultsFirst, one may notice in several plots that the average test error increases with the number of trees. This turns out to be a sign of overfitting, as exemplified for domains hillnonoise and hillnoise, for which we provide the training curves. If we align the training curves at \(T=1\) (the value is different because the splitting criterion for training the tree is different), we notice that the experimental convergence on training is similar for all values of \(t\) (Tables A10 and A12). The other key experimental result, already visible from Table 2 (mf), is that pretty much all tested values of \(t\) are necessary to get the best results. One could be tempted to conclude that \(t\) slightly smaller than \(1.0\) seems to be a good fit from Table 2 (mf), but the curves show that this is more a consequence of the Table being computed for \(J=20\) trees. The case of eeg illustrates this phenomenon best: while small \(t\)-values are clearly the best when there is no noise, the picture is completely reversed when there is training noise. Notice that this ordering is almost reversed on creditcard and adult: when there is noise, small values of \(t\) tend to give better results. Hence, in addition to getting (i) a pruning mechanism that works for all instances of the tempered loss and (ii) a way to guess the right number of models in the ensemble, a good problem to investigate is in fact appropriately tuning \(t\) in a domain-dependent way. Looking at all plots reveals that substantial gains could be obtained with an accurate procedure (over the strategy that would be to always pick a fixed \(t\), _e.g._\(t=1\)).
* clamped models can be very useful to handle overfitting (sonar for \(\eta=0.4\), qsar for \(\eta\geq 0.2\)); this provides another justification to learn clamped models;
* the overall diversity of curves as a function of \(t\) supports the idea that good strategies could in fact tune \(t\) at training time and change its value with iterations.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table A7: Experiments on \(t\)-AdaBoost comparing with AdaBoost (\(t=1\), bullets) on domain qsar. Conventions follow Table A4.
\begin{table}
\begin{tabular}{l l} \hline \hline training err (not clipped) & training err (clipped) \\ \hline \hline training err (clipped) & training err (clipped) \\ \hline \hline \end{tabular}
\end{table}
Table A10: Experiments on \(t\)-AdaBoost comparing with AdaBoost (\(t=1\), bullets) on domain hillnonoise: training errors displayed for all algorithms using conventions from Table A4. See text for details.
\begin{table}
\begin{tabular}{l l l} \hline \hline & training err (not clipped) & training err (clipped) \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline & & \\ \hline \hline & & \\ \hline \hline & & \\ \hline \hline & & \\ \hline \hline & & \\ \hline \hline & & \\ \hline \hline & & \\ \hline \hline \end{tabular}
\end{table}
Table 11: Experiments on \(t\)-AdaBoost comparing with AdaBoost (\(t=1\), bullets) on domain hillnoise. Conventions follow Table 4.
\begin{table}
\begin{tabular}{c c c c} & \multicolumn{2}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table A13: Experiments on \(t\)-AdaBoost comparing with AdaBoost (\(t=1\), bullets) on domain eeg. Conventions follow Table A4.
\begin{table}
\begin{tabular}{| | ## Review
### Summary
The paper introduces a novel generalization of the AdaBoost algorithm, termed t-AdaBoost, which incorporates a tempering parameter, t, to enhance its performance by utilizing tempered exponential measures. This extension modifies the weight optimization process while retaining the foundational principles of AdaBoost. The authors provide both theoretical insights and experimental validations, indicating that different values of t can yield varying results across datasets. Their findings suggest that tuning the parameter t can lead to improved performance in specific scenarios, thereby highlighting the practical implications of their work in machine learning applications.
### Strengths
- The paper presents a well-structured and original extension of the AdaBoost algorithm.
- Theoretical results demonstrate sound convergence and empirical performance, showing significant improvements over standard AdaBoost in some datasets.
- The integration of tempered exponential measures provides a robust solution to numerical instabilities commonly associated with AdaBoost.
- The paper is well-written and logically organized, making complex concepts accessible.
- The comprehensive analysis includes both theoretical grounding and experimental evidence, enhancing the credibility of the findings.
### Weaknesses
- Certain sections, particularly Algorithms and notations, are confusing and may hinder reader comprehension.
- The choice of notations is inconsistent, leading to ambiguity in understanding key elements of the algorithm.
- There is a lack of clarity regarding the generalization capabilities of the proposed algorithm compared to existing boosting methods.
- Performance sensitivity to the choice of t may pose challenges for practical implementation without established tuning mechanisms.
- Some experimental results lack adequate evaluation of the impact of new loss functions on performance.
### Questions
- Could the authors explore the extension of t-AdaBoost to multiclass settings, similar to original AdaBoost?
- What insights can be drawn regarding the impact of different t values on exponential decay and overfitting, particularly when t < 1?
- Could the authors clarify the significance of certain notations and equations, especially regarding their definitions and implications?
- What methods could be implemented to effectively tune the parameter t in practice?
- Is there a potential relationship between dataset characteristics (e.g., number of examples and features) and the optimal t value?
### Soundness
**Score:** 3
**Description:** 3 = good. The theoretical and empirical analyses are largely sound, although there are some areas of confusion in notation and clarity.
### Presentation
**Score:** 3
**Description:** 3 = good. The overall structure is coherent, but notational inconsistencies and some unclear explanations detract from readability.
### Contribution
**Score:** 3
**Description:** 3 = good. The paper presents a valuable extension to an established algorithm, though the practical implications and generalization capabilities could be better articulated.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and has moderate-to-high impact potential, with some areas requiring refinement.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel and relevant contribution to the field of machine learning by extending the AdaBoost algorithm through the introduction of a tempering parameter. While there are notable weaknesses in presentation and clarity, the sound theoretical foundation and empirical evidence support the significance of the findings. The work's potential to influence practical applications in boosting algorithms justifies an acceptance, albeit with recommendations for clearer exposition and further exploration of tuning methodologies.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Unexpected Improvements to Expected Improvement
for Bayesian Optimization
Sebastian Ament
Meta
[email protected]
Samuel Daulton
Meta
[email protected]
David Eriksson
Meta
[email protected]
Maximilian Balandat
Meta
[email protected]
Eytan Bakshy
Meta
[email protected]
###### Abstract
Expected Improvement (EI) is arguably the most popular acquisition function in Bayesian optimization and has found countless successful applications, but its performance is often exceeded by that of more recent methods. Notably, EI and its variants, including for the parallel and multi-objective settings, are challenging to optimize because their acquisition values vanish numerically in many regions. This difficulty generally increases as the number of observations, dimensionality of the search space, or the number of constraints grow, resulting in performance that is inconsistent across the literature and most often sub-optimal. Herein, we propose LogEI, a new family of acquisition functions whose members either have identical or approximately equal optima as their canonical counterparts, but are substantially easier to optimize numerically. We demonstrate that numerical pathologies manifest themselves in "classic" analytic EI, Expected Hypervolume Improvement (EHVI), as well as their constrained, noisy, and parallel variants, and propose corresponding reformulations that remedy these pathologies. Our empirical results show that members of the LogEI family of acquisition functions substantially improve on the optimization performance of their canonical counterparts and surprisingly, are on par with or exceed the performance of recent state-of-the-art acquisition functions, highlighting the understated role of numerical optimization in the literature.
## 1 Introduction
Bayesian Optimization (BO) is a widely used and effective approach for sample-efficient optimization of expensive-to-evaluate black-box functions [25, 28], with applications ranging widely between aerospace engineering [48], biology and medicine [49], materials science [3], civil engineering [4], and machine learning hyperparameter optimization [66, 72]. BO leverages a probabilistic _surrogate model_ in conjunction with an _acquisition function_ to determine where to query the underlying objective function. Improvement-based acquisition functions, such as Expected Improvement (EI) and Probability of Improvement (PI), are among the earliest and most widely used acquisition functions for efficient global optimization of non-convex functions [42, 58]. EI has been extended to the constrained [27, 29], noisy [52], and multi-objective [20] setting, as well as their respective batch variants [6, 13, 77], and is a standard baseline in the BO literature [25, 66]. While much of the literature has focused on developing new sophisticated acquisition functions, subtle yet critical implementation details of foundational BO methods are often overlooked. Importantly, the performance of EI and its variants is inconsistent even for _mathematically identical_ formulations and, as we show in this work, most often sub-optimal.
Although the problem of optimizing EI effectively has been discussed in various works, e.g. [25; 31; 77], prior focus has been on optimization algorithms and initialization strategies, rather than the fundamental issue of computing EI.
In this work, we identify pathologies in the computation of improvement-based acquisition functions that give rise to numerically vanishing values and gradients, which - to our knowledge - are present in _all existing implementations of EI_, and propose reformulations that lead to increases in the associated optimization performance which often match or exceed that of recent methods.
#### Contributions
1. We introduce LogEI, a new family of acquisition functions whose members either have identical or approximately equal optima as their canonical counterparts, but are substantially easier to optimize numerically. Notably, the analytic variant of LogEI, which _mathematically_ results in the same BO policy as EI, empirically shows significantly improved optimization performance.
2. We extend the ideas behind analytical LogEI to other members of the EI family, including constrained EI (CEI), Expected Hypervolume Improvement (EHVI), as well as their respective batch variants for parallel BO, qEI and qEHVI, using smooth approximations of the acquisition utilities to obtain non-vanishing gradients. All of our methods are available as part of BoTorch [6].
3. We demonstrate that our newly proposed acquisition functions substantially outperform their respective analogues on a broad range of benchmarks without incurring meaningful additional computational cost, and often match or exceed the performance of recent methods.
#### Motivation
Maximizing acquisition functions for BO is a challenging problem, which is generally non-convex and often contains numerous local maxima, see the lower right panel of Figure 1. While zeroth-order methods are sometimes used, gradient-based methods tend to be far more effective at optimizing acquisition functions on continuous domains, especially in higher dimensions.
In addition to the challenges stemming from non-convexity that are shared across acquisition functions, the values and gradients of improvement-based acquisition functions are frequently minuscule in large swaths of the domain. Although EI is never _mathematically_ zero under a Gaussian posterior distribution,1 it often vanishes, even becoming _exactly_ zero in floating point precision. The same
Figure 1: **Left:** Fraction of points sampled from the domain for which the magnitude of the gradient of EI vanishes to \(<\!10^{-10}\) as a function of the number of randomly generated data points \(n\) for different dimensions \(d\) on the Ackley function. As \(n\) increases, EI and its gradients become numerically zero across most of the domain, see App. D.2 for details. **Right:** Values of EI and LogEI on a quadratic objective. EI takes on extremely small values on points for which the likelihood of improving over the incumbent is small and is numerically _exactly_ zero in double precision for a large part of the domain (\(\approx[5,13.5]\)). The left plot shows that this tends to worsen as the dimensionality of the problem and the number of data points grow, rendering gradient-based optimization of EI futile.
applies to its gradient, making EI (and PI, see Appendix A) exceptionally difficult to optimize via gradient-based methods. The right panels of Figure 1 illustrate this behavior on a simple one-dimensional quadratic function.
To increase the chance of finding the global optimum of non-convex functions, gradient-based optimization is typically performed from multiple starting points, which can help avoid getting stuck in local optima [70]. For improvement-based acquisition functions however, optimization becomes increasingly challenging as more data is collected and the likelihood of improving over the incumbent diminishes, see our theoretical results in Section 3 and the empirical illustration in Figure 1 and Appendix D.2. As a result, gradient-based optimization with multiple random starting points will eventually degenerate into random search when the gradients at the starting points are numerically zero. This problem is particularly acute in high dimensions and for objectives with a large range.
Various initialization heuristics have been proposed to address this behavior by modifying the random-restart strategy. Rather than starting from random candidates, an alternative naive approach would be to use initial conditions close to the best previously observed inputs. However, doing that alone inherently limits the acquisition optimization to a type of local search, which cannot have global guarantees. To attain such guarantees, it is necessary to use an asymptotically space-filling heuristic; even if not random, this will entail evaluating the acquisition function in regions where no prior observation lies. Ideally, these regions should permit gradient-based optimization of the objective for efficient acquisition function optimization, which necessitates the gradients to be non-zero. In this work, we show that this can be achieved for a large number of improvement-based acquisition functions, and demonstrate empirically how this leads to substantially improved BO performance.
## 2 Background
We consider the problem of maximizing an expensive-to-evaluate black-box function \(\mathbf{f}_{\mathrm{true}}:\mathbb{X}\mapsto\mathbb{R}^{M}\) over some feasible set \(\mathbb{X}\subseteq\mathbb{R}^{d}\). Suppose we have collected data \(\mathcal{D}_{n}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathbb{X}\) and \(\mathbf{y}_{i}=\mathbf{f}_{\mathrm{true}}(\mathbf{x}_{i})+\mathbf{v}_{i}( \mathbf{x}_{i})\) and \(\mathbf{v}_{i}\) is a noise corrupting the true function value \(\mathbf{f}_{\mathrm{true}}(\mathbf{x}_{i})\). The response \(\mathbf{f}_{\mathrm{true}}\) may be multi-output as is the case for multiple objectives or black-box constraints, in which case \(\mathbf{y}_{i},\mathbf{v}_{i}\in\mathbb{R}^{M}\). We use Bayesian optimization (BO), which relies on a surrogate model \(\mathbf{f}\) that for any _batch_\(\mathbf{X}:=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{q}\}\) of candidate points provides a probability distribution over the outputs \(f(\mathbf{X}):=(f(\mathbf{x}_{1}),\ldots,f(\mathbf{x}_{q}))\). The acquisition function \(\alpha\) then utilizes this posterior prediction to assign an acquisition value to \(\mathbf{x}\) that quantifies the value of evaluating the points in \(\mathbf{x}\), trading off exploration and exploitation.
### Gaussian Processes
Gaussian Processes (GP) [65] are the most widely used surrogates in BO, due to their high data efficiency and good uncertainty quantification. For our purposes, it suffices to consider a GP as a mapping that provides a multivariate Normal distribution over the outputs \(f(\mathbf{x})\) for any \(\mathbf{x}\):
\[f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\mathbf{\Sigma}(\mathbf{x})),\qquad \mathbf{\mu}:\mathbb{X}^{q}\to\mathbb{R}^{qM},\quad\mathbf{\Sigma}:\mathbb{X}^{q}\to \mathcal{S}_{+}^{qM}. \tag{1}\]
In the single-outcome (\(M=1\)) setting, \(f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\Sigma(\mathbf{x}))\) with \(\mu:\mathbb{X}^{q}\to\mathbb{R}^{q}\) and \(\Sigma:\mathbb{X}^{q}\to\mathcal{S}_{+}^{q}\). In the sequential (\(q=1\)) case, this further reduces to a univariate Normal distribution: \(f(\mathbf{x})\sim\mathcal{N}(\mu(\mathbf{x}),\sigma^{2}(\mathbf{x}))\) with \(\mu:\mathbb{X}\to\mathbb{R}\) and \(\sigma:\mathbb{X}\to\mathbb{R}_{+}\).
### Improvement-based Acquisition Functions
Expected ImprovementFor the fully-sequential (\(q=1\)), single-outcome (\(M=1\)) setting, "classic" EI [59] is defined as
\[\text{EI}_{y^{*}}(\mathbf{x})=\mathbb{E}_{f(\mathbf{x})}\big{[}[f(\mathbf{x}) -y^{*}]_{+}\big{]}=\sigma(\mathbf{x})\;h\left(\frac{\mu(\mathbf{x})-y^{*}}{ \sigma(\mathbf{x})}\right), \tag{2}\]
where \([\cdot]_{+}\) denotes the \(\max(0,\cdot)\) operation, \(y^{*}=\max_{i}y_{i}\) is the best function value observed so far, also referred to as the _incumbent_, \(h(z)=\phi(z)+z\Phi(z)\), and \(\phi,\Phi\) are the standard Normal density and distribution functions, respectively. This formulation is arguably the most widely used acquisition function in BO, and the default in many popular software packages.
Constrained Expected Improvement_Constrained BO_ involves one or more black-box constraints and is typically formulated as finding \(\max_{\mathbf{x}\in\mathbb{X}}f_{\text{true},1}(\mathbf{x})\) such that \(f_{\text{true},i}(\mathbf{x})\leq 0\) for \(i\in\{2,\ldots,M\}\). Feasibility-weighting the improvement [27; 29] is a natural approach for this class of problems:
\[\text{CEI}_{y^{*}}(\mathbf{x})=\mathbb{E}_{\mathbf{f}(\mathbf{x})}\left[[f_{1}( \mathbf{x})-y^{*}]_{+}\ \prod_{i=2}^{M}\mathbb{1}_{f_{i}(\mathbf{x})\leq 0}\right], \tag{3}\]
where \(1\) is the indicator function. If the constraints \(\{f_{i}\}_{i\geq 2}\) are modeled as conditionally independent of the objective \(f_{1}\) this can be simplified as the product of EI and the probability of feasibility.
Parallel Expected ImprovementIn many settings, one may evaluate \(f_{\text{true}}\) on \(q>1\) candidates in parallel to increase throughput. The associated parallel or batch analogue of EI [30; 75] is given by
\[\text{qEI}_{y^{*}}(\mathbf{X})=\mathbb{E}_{f(\mathbf{X})}\left[\max_{j=1, \ldots,q}\bigl{\{}[f(\mathbf{x}_{j})-y^{*}]_{+}\bigr{\}}\right]. \tag{4}\]
Unlike EI, qEI does not admit a closed-form expression and is thus typically computed via Monte Carlo sampling, which also extends to non-Gaussian posterior distributions [6; 75]:
\[\text{qEI}_{y^{*}}(\mathbf{X})\approx\sum_{i=1}^{N}\max_{j=1,\ldots,q}\bigl{\{} [\xi^{i}(\mathbf{x}_{j})-y^{*}]_{+}\bigr{\}}, \tag{5}\]
where \(\xi^{i}(\mathbf{x})\sim f(\mathbf{x})\) are random samples drawn from the joint model posterior at \(\mathbf{x}\).
Expected Hypervolume ImprovementIn multi-objective optimization (MOO), there generally is no single best solution; instead the goal is to explore the Pareto Frontier between multiple competing objectives, the set of mutually-optimal objective vectors. A common measure of the quality of a finitely approximated Pareto Frontier \(\mathcal{P}\) between \(M\) objectives with respect to a specified reference point \(\mathbf{r}\in\mathbb{R}^{M}\) is its _hypervolume_\(\text{HV}(\mathcal{P},\mathbf{r}):=\lambda\bigl{(}\bigcup_{\mathbf{y}_{j}\in \mathcal{P}}[\mathbf{r},\mathbf{y}_{i}]\bigr{)}\), where \([\mathbf{r},\mathbf{y}_{i}]\) denotes the hyperrectangle bounded by vertices \(\mathbf{r}\) and \(\mathbf{y}_{i}\), and \(\lambda\) is the Lebesgue measure. An apt acquisition function for multi-objective optimization problems is therefore the expected hypervolume improvement
\[\text{EHVI}(\mathbf{x})=\mathbb{E}_{\mathbf{f}(\mathbf{x})}\left[[\text{HV}( \mathcal{P}\cup\mathbf{f}(\mathbf{X}),\mathbf{r})-\text{HV}(\mathcal{P}, \mathbf{r})]_{+}\right], \tag{6}\]
due to observing a batch \(\mathbf{f}(\mathbf{X}):=[\mathbf{f}(\mathbf{x}_{1}),\cdots,\mathbf{f}(\mathbf{ x}_{q})]\) of \(q\) new observations. EHVI can be expressed in closed form if \(q=1\) and the objectives are modeled with independent GPs [80], but Monte Carlo approximations are required for the general case (qEHVI) [13].
### Optimizing Acquisition Functions
Optimizing an acquisition function (AF) is a challenging task that amounts to solving a non-convex optimization problem, to which multiple approaches and heuristics have been applied. These include gradient-free methods such as divided rectangles [41], evolutionary methods such as CMA-ES [32], first-order methods such as stochastic gradient ascent, see e.g., Daulton et al. [15], Wang et al. [75], and (quasi-)second order methods [25] such as L-BFGS-B [10]. Multi-start optimization is commonly employed with gradient-based methods to mitigate the risk of getting stuck in local minima. Initial points for optimization are selected via various heuristics with different levels of complexity, ranging from simple uniform random selection to BoTorch's initialization heuristic, which selects initial points by performing Boltzmann sampling on a set of random points according to their acquisition function value [6]. See Appendix B for a more complete account of initialization strategies and optimization procedures used by popular implementations. We focus on gradient-based optimization as often leveraging gradients results in faster and more performant optimization [13].
Optimizing AFs for parallel BO that quantify the value of a batch of \(q>1\) points is more challenging than optimizing their sequential counterparts due to the higher dimensionality of the optimization problem - \(qd\) instead of \(d\) - and the more challenging optimization surface. A common approach to simplify the problem is to use a _sequential greedy_ strategy that greedily solves a sequence of single point selection problems. For \(i=1,\ldots,q\), candidate \(\mathbf{x}_{i}\) is selected by optimizing the AF for \(q=1\), conditional on the previously selected designs \(\{\mathbf{x}_{1},...,\mathbf{x}_{i-1}\}\) and their unknown observations, e.g. by fantasizing the values at those designs [77]. For submodular AFs, including EI, PI, and EHVI, a sequential greedy strategy will attain a regret within a factor of \(1/e\) compared to the joint optimum, and previous works have found that sequential greedy optimization yields _improved_ BO performance compared to joint optimization [13; 77]. Herein, we find that our reformulations enable joint batch optimization to be competitive with the sequential greedy strategy, especially for larger batches.
### Related Work
While there is a substantial body of work introducing a large variety of different AFs, much less focus has been on the question of how to effectively implement and optimize these AFs. Zhan and Xing [81] provide a comprehensive review of a large number of different variants of the EI family, but do not discuss any numerical or optimization challenges. Zhao et al. [82] propose combining a variety of different initialization strategies to select initial conditions for optimization of acquisition functions and show empirically that this improves optimization performance. However, they do not address any potential issues or degeneracies with the acquisition functions themselves. Recent works have considered effective gradient-based approaches for acquisition optimization. Wilson et al. [77] demonstrates how stochastic first-order methods can be leveraged for optimizing Monte Carlo acquisition functions. Balandat et al. [6] build on this work and put forth sample average approximations for MC acquisition functions that admit gradient-based optimization using deterministic higher-order optimizers such as L-BFGS-B.
Another line of work proposes to switch from BO to local optimization based on some stopping criterion to achieve faster local convergence, using either zeroth order [60] or gradient-based [57] optimization. While McLeod et al. [57] are also concerned with numerical issues, we emphasize that those issues arise due to ill-conditioned covariance matrices and are orthogonal to the numerical pathologies of improvement-based acquisition functions.
## 3 Theoretical Analysis of Expected Improvement's Vanishing Gradients
In this section, we shed light on the conditions on the objective function and surrogate model that give rise to the numerically vanishing gradients in EI, as seen in Figure 1. In particular, we show that as a BO algorithm closes the optimality gap \(f^{*}-y^{*}\), where \(f^{*}\) is the global maximum of the function \(f_{\text{true}}\), and the associated GP surrogate's uncertainty decreases, EI is exceedingly likely to exhibit numerically vanishing gradients.
Let \(P_{\mathbf{x}}\) be a distribution over the inputs \(\mathbf{x}\), and \(f\sim P_{f}\) be an objective drawn from a Gaussian process. Then with high probability over the particular instantiation \(f\) of the objective, the probability that an input \(\mathbf{x}\sim P_{\mathbf{x}}\) gives rise to an argument \((\mu(\mathbf{x})-y^{*})/\sigma(\mathbf{x})\) to \(h\) in Eq. (2) that is smaller than a threshold \(B\) exceeds \(P_{\mathbf{x}}(f(\mathbf{x})<f^{*}-\epsilon_{n})\), where \(\epsilon_{n}\) depends on the optimality gap \(f^{*}-y^{*}\) and the maximum posterior uncertainty \(\max_{\mathbf{x}}\sigma_{n}(\mathbf{x})\). This pertains to EI's numerically vanishing values and gradients, since the numerical support \(\mathcal{S}_{\eta}(h)=\{\mathbf{x}:|h(\mathbf{x})|>\eta\}\) of a naive implementation of \(h\) in (2) is limited by a lower bound \(B(\eta)\) that depends on the floating point precision \(\eta\). Formally, \(\mathcal{S}_{\eta}(h)\subset[B(\eta),\infty)\) even though \(\mathcal{S}_{0}(h)=\mathbb{R}\) mathematically. As a consequence, the following result can be seen as a bound on the probability of encountering numerically vanishing values and gradients in EI using samples from the distribution \(P_{\mathbf{x}}\) to initialize the optimization of the acquisition function.
**Theorem 1**.: _Suppose \(f\) is drawn from a Gaussian process prior \(P_{f}\), \(y^{*}\leq f^{*}\), \(\mu_{n},\sigma_{n}\) are the mean and standard deviation of the posterior \(P_{f}(f|\mathcal{D}_{n})\) and \(B\in\mathbb{R}\). Then with probability \(1-\delta\),_
\[P_{\mathbf{x}}\left(\frac{\mu_{n}(\mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})}<B \right)\geq P_{\mathbf{x}}\left(f(\mathbf{x})<f^{*}-\epsilon_{n}\right) \tag{7}\]
_where \(\epsilon_{n}=(f^{*}-y^{*})+\left(\sqrt{-2\log(2\delta)}-B\right)\max_{\mathbf{ x}}\sigma_{n}(\mathbf{x})\)._
For any given - and especially early - iteration, \(\epsilon_{n}\) does not have to be small, as both the optimality gap and the maximal posterior standard deviation can be large initially. Note that under certain technical conditions on the kernel function and the asymptotic distribution of the training data \(\mathcal{D}_{n}\), the maximum posterior variance vanishes guaranteedably as \(n\) increases, see [50, Corollary 3.2]. On its own, Theorem 1 gives insight into the non-asymptotic behavior by exposing a dependence to the distribution of objective values \(f\). In particular, if the set of inputs that give rise to high objective values (\(\approx f^{*}\)) is concentrated, \(P(f(\mathbf{x})<f^{*}-\epsilon)\) will decay very slowly as \(\epsilon\) increases, thereby maintaining a lower bound on the probability of close to 1. As an example, this is the case for the Ackley function, especially as the dimensionality increases, which explains the behavior in Figure 1.
Unexpected Improvements
In this section, we propose re-formulations of analytic and MC-based improvement-based acquisition functions that render them significantly easier to optimize. We will use differing fonts, e.g. \(\log\) and \(\log\), to differentiate between the mathematical functions and their numerical implementations.
### Analytic LogEI
Mathematically, EI's values and gradients are nonzero on the entire real line, except in the noiseless case for points that are perfectly correlated with previous observations. However, naive implementations of \(h\) are _numerically_ zero when \(z=(\mu(\mathbf{x})-y^{*})/\sigma(\mathbf{x})\) is small, which happens when the model has high confidence that little improvement can be achieved at \(\mathbf{x}\). We propose an implementation of \(\log\circ h\) that can be accurately computed for a much larger range of inputs. Specifically, we compute
\[\text{LogEI}_{y^{*}}(\mathbf{x})=\texttt{log\_h}((\mu(\mathbf{x})-y^{*})/\sigma (\mathbf{x}))+\texttt{log}(\sigma(\mathbf{x})), \tag{8}\]
where log_h is mathematically equivalent to \(\log\circ h\) and can be stably and accurately computed by
\[\texttt{log\_h}(z)=\begin{cases}\texttt{log}(\phi(z)+z\Phi(z))&z>-1\\ -z^{2}/2-c_{1}+\texttt{log\_1}\texttt{mexp}(\log(\texttt{erfcx}(-z/\sqrt{2})|z |)+c_{2})&-1/\sqrt{\epsilon}<z\leq-1\\ -z^{2}/2-c_{1}-2\log(|z|)&z\leq-1/\sqrt{\epsilon}\end{cases} \tag{9}\]
where \(c_{1}=\log(2\pi)/2\), and \(c_{2}=\log(\pi/2)/2\), \(\epsilon\) is the numerical precision, and log1mexp, erfcx are numerically stable implementations of \(\log(1-\exp(z))\) and \(\exp(z^{2})\text{erfc}(z)\), respectively, see App. A. Progenitors of Eq. (9) are found in SMAC 1.0 [35] and RoBO [46] which contain a log-transformed analytic EI implementation that is much improved, but can still exhibit instabilities as \(z\) grows negative, see App. Fig. 10. To remedy similar instabilities, we put forth the third, asymptotic case in Eq. (9), ensuring numerical stability throughout, see App. A.2 for details. The asymptotically quadratic behavior of log_h becomes apparent in the last two cases, making the function particularly amenable to gradient-based optimization with _significant_ practical implications for EI-based BO.
### Monte Carlo Parallel LogEI
Beyond analytic EI, Monte Carlo formulations of parallel EI that perform differentiation on the level of MC samples, don't just exhibit numerically, but mathematically zero gradients for a significant proportion of practically relevant inputs. For qEI, the primary issue is the discrete maximum over the \(q\) outcomes for each MC sample in (5). In particular, the acquisition utility of expected improvement in Eq. 4 on a single sample \(\xi_{i}\) of \(f\) is \(\max_{j}[\xi_{i}(\mathbf{x}_{j})-y^{*}]_{+}\). Mathematically, we smoothly approximate the acquisition utility in two stages: 1) \(u_{ij}=\operatorname{softplus}_{\tau_{0}}(\xi_{i}(\mathbf{x}_{j})-y^{*})\approx [\xi_{i}(\mathbf{x}_{j})-y^{*}]_{+}\) and 2) \(\left\lVert u_{i}.\right\rVert_{1/\tau_{\max}}\approx\max_{j}u_{ij}\). Notably, while we use canonical \(\operatorname{softplus}\) and p-norm approximations here, specialized fat-tailed non-linearities are required to scale to large batches, see Appendix A.4. Since the resulting quantities are strictly positive, they can be transformed to log-space permitting an implementation of qLogEI that is numerically stable and can be optimized effectively. In particular,
\[\begin{split}\text{qLogEI}_{y^{*}}(\mathbf{X})&= \log\int\left(\sum_{j=1}^{q}\operatorname{softplus}_{\tau_{0}}(f(\mathbf{x}_{j })-y^{*})^{1/\tau_{\max}}\right)^{\tau_{\max}}df\\ &\approx\texttt{logsumexp}_{i}(\tau_{\max}\texttt{logsumexp}_{j}( \texttt{logsoftmax}_{\tau_{0}}(\xi^{i}(\mathbf{x}_{j})-y^{*}))/\tau_{\max})), \end{split} \tag{10}\]
where \(i\) is the index of the Monte Carlo draws from the GP posterior, \(j=1,\ldots,q\) is the index for the candidate in the batch, and logsoftplus is a numerically stable implementation of \(\log(\log(1+\exp(z)))\). See Appendix A.3 for details, including the novel fat-tailed non-linearities like fatplus.
While the smoothing in (10) approximates the canonical qEI formulation, the following result shows that the associated relative approximation error can be quantified and bounded tightly as a function of the temperature parameters \(\tau_{0}\), \(\tau_{\max}\) and the batch size \(q\). See Appendix C for the proof.
**Lemma 2**.: _[Relative Approximation Guarantee] Given \(\tau_{0},\tau_{\max}>0\), the approximation error of qLogEI to qEI is bounded by_
\[\left|e^{\text{qLogEI}(\mathbf{X})}-\text{qEI}(\mathbf{X})\right|\leq(q^{\tau_ {\max}}-1)\;\text{qEI}(\mathbf{X})+\log(2)\tau_{0}q^{\tau_{\max}}. \tag{11}\]
In Appendix D.10, we show the importance of setting the temperatures sufficiently low for qLogEI to achieve good optimization characteristics, something that only becomes possible by transforming all involved computations to log-space. Otherwise, the smooth approximation to the acquisition utility would exhibit vanishing gradients numerically, as the discrete \(\max\) operator does mathematically.
### Constrained EI
Both analytic and Monte Carlo variants of LogEI can be extended for optimization problems with black-box constraints. For analytic CEI with independent constraints of the form \(f_{i}(\mathbf{x})\leq 0\), the constrained formulation in Eq. (3) simplifies to \(\text{LogEI}(\mathbf{x})=\text{LogEI}(\mathbf{x})+\sum_{i}\log(P(f_{i}(\mathbf{x })\leq 0))\), which can be readily and stably computed using LogEI in Eq. (8) and, if \(f_{i}\) is modelled by a GP, a stable implementation of the Gaussian log cumulative distribution function. For the Monte Carlo variant, we apply a similar strategy as for Eq. (10) to the constraint indicators in Eq. (3): 1) a smooth approximation and 2) an accurate and stable implementation of its log value, see Appendix A.
### Monte Carlo Parallel LogEHVI
The numerical difficulties of qEHVI in (6) are similar to those of qEI, and the basic ingredients of smoothing and log-transformations still apply, but the details are significantly more complex since qEHVI uses many operations that have mathematically zero gradients with respect to some of the inputs. Our implementation is based on the differentiable inclusion-exclusion formulation of the hypervolume improvement [13]. As a by-product, the implementation also readily allows for the differentiable computation of the expected log hypervolume, instead of the log expected hypervolume, note the order, which can be preferable in certain applications of multi-objective optimization [26].
## 5 Empirical Results
We compare standard versions of analytic EI (EI) and constrained EI (CEI), Monte Carlo parallel EI (qEI), as well as Monte Carlo EHVI (qEHVI), in addition to other state-of-the-art baselines like lower-bound Max-Value Entropy Search (GIBBON) [61] and single- and multi-objective Joint Entropy Search (JES) [36, 71]. All experiments are implemented using BoTorch [6] and utilize multi-start optimization of the AF with scipy's L-BFGS-B optimizer. In order to avoid conflating the effect of BoTorch's default initialization strategy with those of our contributions, we use 16 initial points chosen uniformly at random from which to start the L-BFGS-B optimization. For a comparison with other initialization strategies, see Appendix D. We run multiple replicates and report mean and error bars of \(\pm 2\) standard errors of the mean. Appendix D.1 contains additional details.
Single-objective sequential BOWe compare EI and LogEI on the 10-dimensional convex Sum-of-Squares (SoS) function \(f(\mathbf{x})=\sum_{i=1}^{10}{(x_{i}-0.5)^{2}}\), using 20 restarts seeded from 1024 pseudo-random samples through BoTorch's default initialization heuristic. Figure 2 shows that due to vanishing gradients, EI is unable to make progress even on this trivial problem.
In Figure 3, we compare performance on the Ackley and Michalewicz test functions [67]. Notably, LogEI substantially outperforms EI on Ackley as the dimensionality increases. Ackley is a challenging multimodal function for which it is critical to trade off local exploitation with global exploration, a task made exceedingly difficult by the numerically vanishing gradients of EI in a large fraction of the search space. We see a similar albeit less pronounced behavior on Michalewicz, which reflects the fact that Michalewicz is a somewhat less challenging problem than Ackley.
Figure 2: Regret and EI acquisition value for the candidates selected by maximizing EI and LogEI on the convex Sum-of-Squares problem. Optimization stalls out for EI after about 75 observations due to vanishing gradients (indicated by the jagged behavior of the acquisition value), while LogEI continues to make steady progress.
BO with Black Box ConstraintsFigure 4 shows results on four engineering design problems with black box constraints that were also considered in [22]. We apply the same bilog transform as the trust region-based SCBO method [22] to all constraints to make them easier to model with a GP. We see that LogCEI outperforms the naive CEI implementation and converges faster than SCBO. Similar to the unconstrained problems, the performance gains of LogCEI over CEI grow with increasing problem dimensionality and the number of constraints. Notably, we found that for some problems, LogCEI in fact _improved upon some of the best results quoted in the original literature_, while using three orders of magnitude fewer function evaluations, see Appendix D.7 for details.
Parallel Expected Improvement with qLogEIFigure 5 reports the optimization performance of parallel BO on the 16-dimensional Ackley function for both sequential greedy and joint batch optimization using the fat-tailed non-linearities of App. A.4. In addition to the apparent advantages of qLogEI over qEI, a key finding is that jointly optimizing the candidates of batch acquisition functions can yield highly competitive optimization performance, see App. D.3 for extended results.
High-dimensional BO with qLogEIFigure 6 shows the performance of LogEI on three high-dimensional problems: the \(6\)-dimensional Hartmann function embedded in a \(100\)-dimensional space, a \(100\)-dimensional rover trajectory planning problem, and a \(103\)-dimensional SVM hyperparameter tuning problem. We use a \(103\)-dimensional version of the \(388\)-dimensional SVM problem considered by Eriksson and Jankowiak [21], where the \(100\) most important features were selected using Xgboost.
Figure 4: Best feasible objective value as a function of number of function evaluations (iterations) on four engineering design problems with black-box constraints after an initial \(2d\) pseudo-random evaluations.
Figure 3: Best objective value as a function of iterations on the moderately and severely non-convex Michalewicz and Ackley problems for varying numbers of input dimensions. LogEI substantially outperforms both EI and GIBBON, and this gap widens as the problem dimensionality increases. JES performs slightly better than LogEI on Ackley, but for some reason fails on Michalewicz. Notably, JES is almost two orders of magnitude slower than the other acquisition functions (see Appendix D).
Figure 6 shows that the optimization exhibits varying degrees of improvement from the inclusion of qLogEI, both when combined with SAASBO [21] and a standard GP. In particular, qLogEI leads to significant improvements on the embedded Hartmann problem, even leading BO with the canonical GP to ultimately catch up with the SAAS-prior-equipped model. On the other hand, the differences on the SVM and Rover problems are not significant, see Section 6 for a discussion.
Multi-Objective optimization with qLogEHVIFigure 7 compares qLogEHVI and qEHVI on two multi-objective test problems with varying batch sizes, including the real-world-inspired cell network design for optimizing coverage and capacity [19]. The results are consistent with our findings in the single-objective and constrained cases: qLogEHVI consistently outperforms qEHVI and even JES [71] for all batch sizes. Curiously, for the largest batch size and DTLZ2, qLogNEHVI's improvement over the reference point (HV \(>0\)) occurs around three batches after the other methods, but dominates their performance in later batches. See Appendix D.5 for results on additional synthetic and real-world-inspired multi-objective problems such as the laser plasma acceleration optimization [38], and vehicle design optimization [54, 68]
## 6 Discussion
To recap, EI exhibits vanishing gradients 1) when high objective values are highly concentrated in the search space, and 2) as the optimization progresses. In this section, we highlight that these conditions are not met for all BO applications, and that LogEI's performance depends on the surrogate's quality.
On problem dimensionalityWhile our experimental results show that advantages of LogEI generally grow larger as the dimensionality of the problem grows, we stress that this is fundamentally due to the concentration of high objective values in the search space, not the dimensionality itself. Indeed, we have observed problems with high ambient dimensionality but low intrinsic dimensionality, where LogEI does not lead to significant improvements over EI, e.g. the SVM problem in Figure 6.
Figure 5: Best objective value for parallel BO as a function of the number evaluations for single-objective optimization on the 16-dimensional Ackley function with varying batch sizes \(q\). Notably, joint optimization of the batch outperforms sequential greedy optimization.
Figure 6: Best objective value as a function of number of function evaluations (iterations) on three high-dimensional problems, including Eriksson and Jankowiak [21]’s SAAS prior.
On asymptotic improvementsWhile members of the LogEI family can generally be optimized better, leading to higher acquisition values, improvements in optimization performance might be small in magnitude, e.g. the log-objective results on the convex 10D sum of squares in Fig. 2, or only begin to materialize in later iterations, like for \(q=16\) on DTLZ2 in Figure 7.
On model qualityEven if good objective values are concentrated in a small volume of the search space and many iterations are run, LogEI might still not outperform EI if the surrogate's predictions are poor, or its uncertainties are not indicative of the surrogate's mismatch to the objective, see Rover in Fig. 6. In these cases, better acquisition values do not necessarily lead to better BO performance.
Replacing EIDespite these limitation, we strongly suggest replacing variants of EI with their LogEI counterparts. If LogEI were dominated by EI on some problem, it would be an indication that the EI family itself is sub-optimal, and improvements in performance can be attributed to the exploratory quality of randomly distributed candidates, which could be incorporated explicitly.
## 7 Conclusion
Our results demonstrate that the problem of vanishing gradients is a major source of the difficulty of optimizing improvement-based acquisition functions and that we can mitigate this issue through careful reformulations and implementations. As a result, we see substantially improved optimization performance across a variety of modified EI variants across a broad range of problems. In particular, we demonstrate that joint batch optimization for parallel BO can be competitive with, and at times exceed the sequential greedy approach typically used in practice, which also benefits from our modifications. Besides the convincing performance improvements, one of the key advantages of our modified acquisition functions is that they are much less dependent on heuristic and potentially brittle initialization strategies. Moreover, our proposed modifications do not meaningfully increase the computational complexity of the respective original acquisition function.
While our contributions may not apply verbatim to other classes of acquisition functions, our key insights and strategies do translate and could help with e.g. improving information-based [34; 76], cost-aware [51; 66], and other types of acquisition functions that are prone to similar numerical challenges. Further, combining the proposed methods with gradient-aware first-order BO methods [5; 16; 23] could lead to particularly effective high-dimensional applications of BO, since the advantages of both methods tend to increase with the dimensionality of the search space. Overall, we hope that our findings will increase awareness in the community for the importance of optimizing acquisition functions well, and in particular, for the care that the involved numerics demand.
Figure 7: Batch optimization performance on two multi-objective problems, as measured by the hypervolume of the Pareto frontier across observed points. This plot includes JES [71]. Similar to the single-objective case, the LogEI variant qLogEHVI significantly outperforms the baselines.
## Acknowledgments and Disclosure of Funding
The authors thank Frank Hutter for valuable references about prior work on numerically stable computations of analytic EI, David Bindel for insightful conversations about the difficulty of optimizing EI, as well as the anonymous reviewers for their knowledgeable feedback.
## References
* Abadi et al. [2015] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL [https://www.tensorflow.org/](https://www.tensorflow.org/).
* Ament and O'Neil [2018] Sebastian Ament and Michael O'Neil. Accurate and efficient numerical calculation of stable densities via optimized quadrature and asymptotics. _Statistics and Computing_, 28:171-185, 2018.
* Ament et al. [2021] Sebastian Ament, Maximilian Amsler, Duncan R. Sutherland, Ming-Chiang Chang, Dan Guevarra, Aine B. Connolly, John M. Gregoire, Michael O. Thompson, Carla P. Gomes, and R. Bruce van Dover. Autonomous materials synthesis via hierarchical active learning of nonequilibrium phase diagrams. _Science Advances_, 7(51):eabg4930, 2021. doi: 10.1126/sciadv.abg4930. URL [https://www.science.org/doi/abs/10.1126/sciadv.abg4930](https://www.science.org/doi/abs/10.1126/sciadv.abg4930).
* Ament et al. [2023] Sebastian Ament, Andrew Witte, Nishant Garg, and Julius Kusuma. Sustainable concrete via bayesian optimization, 2023. URL [https://arxiv.org/abs/2310.18288](https://arxiv.org/abs/2310.18288). NeurIPS 2023 Workshop on Adaptive Experimentation in the Real World.
* Ament and Gomes [2022] Sebastian E Ament and Carla P Gomes. Scalable first-order Bayesian optimization via structured automatic differentiation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 500-516. PMLR, 17-23 Jul 2022. URL [https://proceedings.mlr.press/v162/ament22a.html](https://proceedings.mlr.press/v162/ament22a.html).
* Balandat et al. [2020] Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. In _Advances in Neural Information Processing Systems 33_, 2020.
* Baptista and Poloczek [2018] Ricardo Baptista and Matthias Poloczek. Bayesian optimization of combinatorial structures, 2018.
* Belakaria et al. [2020] Syrine Belakaria, Aryan Deshwal, and Janardhan Rao Doppa. Max-value entropy search for multi-objective bayesian optimization with constraints, 2020.
* Bradbury et al. [2018] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL [http://github.com/google/jax](http://github.com/google/jax).
* Byrd et al. [1995] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. _SIAM Journal on Scientific Computing_, 16(5):1190-1208, 1995.
* Coello and Montes [2002] Carlos A Coello Coello and Efren Mezura Montes. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. _Advanced Engineering Informatics_, 16(3):193-203, 2002.
* Cowen-Rivers et al. [2022] Alexander I. Cowen-Rivers, Wenlong Lyu, Rasul Tutunov, Zhi Wang, Antoine Grosnit, Ryan Rhys Griffiths, Alexandre Max Maraval, Hao Jianye, Jun Wang, Jan Peters, and Haitham Bou Ammar. Hebo pushing the limits of sample-efficient hyperparameter optimisation, 2022.
* Daulton et al. [2020] Samuel Daulton, Maximilian Balandat, and Eytan Bakshy. Differentiable expected hypervolume improvement for parallel multi-objective bayesian optimization. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 9851-9864. Curran Associates, Inc., 2020. URL [https://proceedings.neurips.cc/paper/2020/file/6fec24eac8f18ed793f5eaad3dd7977c-Paper.pdf](https://proceedings.neurips.cc/paper/2020/file/6fec24eac8f18ed793f5eaad3dd7977c-Paper.pdf).
* Daulton et al. [2022] Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, and Eytan Bakshy. Robust multi-objective Bayesian optimization under input noise. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 4831-4866. PMLR, 17-23 Jul 2022. URL [https://proceedings.mlr.press/v162/daulton22a.html](https://proceedings.mlr.press/v162/daulton22a.html).
* Daulton et al. [2022] Samuel Daulton, Xingchen Wan, David Eriksson, Maximilian Balandat, Michael A. Osborne, and Eytan Bakshy. Bayesian optimization over discrete and mixed spaces via probabilistic reparameterization. In _Advances in Neural Information Processing Systems 35_, 2022.
* De Roos et al. [2021] Filip De Roos, Alexandra Gessner, and Philipp Hennig. High-dimensional gaussian process inference with derivatives. In _International Conference on Machine Learning_, pages 2535-2545. PMLR, 2021.
* Deb et al. [2002] Kalyan Deb, L. Thiele, Marco Laumanns, and Eckart Zitzler. Scalable multi-objective optimization test problems. volume 1, pages 825-830, 06 2002. ISBN 0-7803-7282-4. doi: 10.1109/CEC.2002.1007032.
* Deshwal et al. [2023] Aryan Deshwal, Sebastian Ament, Maximilian Balandat, Eytan Bakshy, Janardhan Rao Doppa, and David Eriksson. Bayesian optimization over high-dimensional combinatorial spaces via dictionary-based embeddings. In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, editors, _Proceedings of The 26th International Conference on Artificial Intelligence and Statistics_, volume 206 of _Proceedings of Machine Learning Research_, pages 7021-7039. PMLR, 25-27 Apr 2023.
* Dreifuerst et al. [2021] Ryan M. Dreifuerst, Samuel Daulton, Yuchen Qian, Paul Varkey, Maximilian Balandat, Sanjay Kasturia, Anoop Tomar, Ali Yazdan, Vish Ponnampalam, and Robert W. Heath. Optimizing coverage and capacity in cellular networks using machine learning, 2021.
* Emmerich et al. [2006] M. T. M. Emmerich, K. C. Giannakoglou, and B. Naujoks. Single- and multiobjective evolutionary optimization assisted by gaussian random field metamodels. _IEEE Transactions on Evolutionary Computation_, 10(4):421-439, 2006.
* Eriksson and Jankowiak [2021] David Eriksson and Martin Jankowiak. High-dimensional Bayesian optimization with sparse axis-aligned subspaces. In _Uncertainty in Artificial Intelligence_. PMLR, 2021.
* Eriksson and Poloczek [2021] David Eriksson and Matthias Poloczek. Scalable constrained Bayesian optimization. In _International Conference on Artificial Intelligence and Statistics_. PMLR, 2021.
* Eriksson et al. [2018] David Eriksson, Kun Dong, Eric Lee, David Bindel, and Andrew G Wilson. Scaling gaussian process regression with derivatives. _Advances in neural information processing systems_, 31, 2018.
* Eriksson et al. [2019] David Eriksson, Michael Pearce, Jacob Gardner, Ryan D Turner, and Matthias Poloczek. Scalable global optimization via local Bayesian optimization. In _Advances in Neural Information Processing Systems 32_, NeurIPS, 2019.
* Frazier [2018] Peter I Frazier. A tutorial on bayesian optimization. _arXiv preprint arXiv:1807.02811_, 2018.
* Friedrich et al. [2011] Tobias Friedrich, Karl Bringmann, Thomas Voss, and Christian Igel. The logarithmic hypervolume indicator. In _Proceedings of the 11th workshop proceedings on Foundations of genetic algorithms_, pages 81-92, 2011.
* Gardner et al. [2014] Jacob Gardner, Matt Kusner, Zhixiang, Kilian Weinberger, and John Cunningham. Bayesian optimization with inequality constraints. In _Proceedings of the 31st International Conference on Machine Learning_, volume 32 of _Proceedings of Machine Learning Research_, pages 937-945, Beijing, China, 22-24 Jun 2014. PMLR.
* Garnett [2023] Roman Garnett. _Bayesian Optimization_. Cambridge University Press, 2023. to appear.
* Gelbart et al. [2014] Michael A. Gelbart, Jasper Snoek, and Ryan P. Adams. Bayesian optimization with unknown constraints. In _Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence_, UAI, 2014.
* Ginsbourger et al. [2008] David Ginsbourger, Rodolphe Le Riche, and Laurent Carraro. A Multi-points Criterion for Deterministic Parallel Global Optimization based on Gaussian Processes. Technical report, March 2008. URL [https://hal.science/hal-00260579](https://hal.science/hal-00260579).
* Gramacy et al. [2022] Robert B Gramacy, Annie Sauer, and Nathan Wycoff. Triangulation candidates for bayesian optimization. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, _Advances in Neural Information Processing Systems_, volume 35, pages 35933-35945. Curran Associates, Inc., 2022.
* Hansen et al. [2003] Nikolaus Hansen, Sibylle D. Muller, and Petros Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (cma-es). _Evolutionary Computation_, 11(1):1-18, 2003. doi: 10.1162/106365603321828970.
* Head et al. [2021] Tim Head, Manoj Kumar, Holger Nahrstaedt, Gilles Louppe, and Iaroslav Shcherbatyi. scikit-optimize/scikit-optimize, October 2021. URL [https://doi.org/10.5281/zenodo.5565057](https://doi.org/10.5281/zenodo.5565057).
* Volume 1_, NIPS'14, pages 918-926, Cambridge, MA, USA, 2014. MIT Press.
* Hutter et al. [2011] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In _Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. Selected Papers 5_, pages 507-523. Springer, 2011.
* Hvarfner et al. [2022] Carl Hvarfner, Frank Hutter, and Luigi Nardi. Joint entropy search for maximally-informed bayesian optimization. In _Advances in Neural Information Processing Systems 35_, 2022.
* Research [2023] Wolfram Research, Inc. Wolfram alpha, 2023. URL [https://www.wolframalpha.com/](https://www.wolframalpha.com/).
* Irshad et al. [2023] F. Irshad, S. Karsch, and A. Dopp. Multi-objective and multi-fidelity bayesian optimization of laser-plasma acceleration. _Phys. Rev. Res._, 5:013063, Jan 2023. doi: 10.1103/PhysRevResearch.5.013063. URL [https://link.aps.org/doi/10.1103/PhysRevResearch.5.013063](https://link.aps.org/doi/10.1103/PhysRevResearch.5.013063).
* Irshad et al. [2023] Faran Irshad, Stefan Karsch, and Andreas Doepp. Reference dataset of multi-objective and multi- fidelity optimization in laser-plasma acceleration, January 2023. URL [https://doi.org/10.5281/zenodo.7565882](https://doi.org/10.5281/zenodo.7565882).
* Jiang et al. [2020] Shali Jiang, Daniel Jiang, Maximilian Balandat, Brian Karrer, Jacob Gardner, and Roman Garnett. Efficient nonmyopic bayesian optimization via one-shot multi-step trees. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 18039-18049. Curran Associates, Inc., 2020.
* Jones et al. [1993] Donald Jones, C. Perttunen, and B. Stuckman. Lipschitzian optimisation without the lipschitz constant. _Journal of Optimization Theory and Applications_, 79:157-181, 01 1993. doi: 10.1007/BF00941892.
* Jones et al. [1998] Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. _Journal of Global Optimization_, 13:455-492, 1998.
* Kandasamy et al. [2020] Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, and Eric P. Xing. Tuning hyperparameters without grad students: Scalable and robust bayesian optimisation with dragonfly. _J. Mach. Learn. Res._, 21(1), jan 2020.
* Kim et al. [2022] Jungtaek Kim, Seungjin Choi, and Minsu Cho. Combinatorial bayesian optimization with random mapping functions to convex polytopes. In James Cussens and Kun Zhang, editors, _Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence_, volume 180 of _Proceedings of Machine Learning Research_, pages 1001-1011. PMLR, 01-05 Aug 2022.
* Kingma and Welling [2013] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. _arXiv e-prints_, page arXiv:1312.6114, Dec 2013.
* Klein et al. [2017] A. Klein, S. Falkner, N. Mansur, and F. Hutter. Robo: A flexible and robust bayesian optimization framework in python. In _NIPS 2017 Bayesian Optimization Workshop_, December 2017.
* Kraft [1988] Dieter Kraft. A software package for sequential quadratic programming. _Forschungsberich-Deutsche Forschungs- und Versuchsanstalt fur Luft- und Raumfahrt_, 1988.
* Lam et al. [2018] Remi Lam, Matthias Poloczek, Peter Frazier, and Karen E Willcox. Advances in bayesian optimization with applications in aerospace engineering. In _2018 AIAA Non-Deterministic Approaches Conference_, page 1656, 2018.
* Langer and Tirrell [2004] Robert Langer and David Tirrell. Designing materials for biology and medicine. _Nature_, 428, 04 2004.
* Lederer et al. [2019] Armin Lederer, Jonas Umlauft, and Sandra Hirche. Posterior variance analysis of gaussian processes with application to average learning curves. _arXiv preprint arXiv:1906.01404_, 2019.
* Lee et al. [2020] Eric Hans Lee, Valerio Perrone, Cedric Archambeau, and Matthias Seeger. Cost-aware Bayesian Optimization. _arXiv e-prints_, page arXiv:2003.10870, March 2020.
* Letham et al. [2019] Benjamin Letham, Brian Karrer, Guilherme Ottoni, and Eytan Bakshy. Constrained bayesian optimization with noisy experiments. _Bayesian Analysis_, 14(2):495-519, 06 2019. doi: 10.1214/18-BA1110.
* Liang et al. [2021] Qiaohao Liang, Aldair E. Gongora, Zekun Ren, Armi Tiihonen, Zhe Liu, Shijing Sun, James R. Deneault, Daniil Bash, Flore Mekki-Berrada, Saif A. Khan, Kedar Hippalgaonkar, Benji Maruyama, Keith A. Brown, John Fisher III, and Tonio Buonassisi. Benchmarking the performance of bayesian optimization across multiple experimental materials science domains. _npj Computational Materials_, 7(1):188, 2021.
* Liao et al. [2008] Xingtao Liao, Qing Li, Xujing Yang, Weigang Zhang, and Wei Li. Multiobjective optimization for crash safety design of vehicles using stepwise regression model. _Structural and Multidisciplinary Optimization_, 35:561-569, 06 2008. doi: 10.1007/s00158-007-0163-x.
* Lyu et al. [2018] Wenlong Lyu, Fan Yang, Changhao Yan, Dian Zhou, and Xuan Zeng. Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, volume 80 of _Proceedings of Machine Learning Research_, pages 3306-3314. PMLR, 10-15 Jul 2018. URL [https://proceedings.mlr.press/v80/lyu18a.html](https://proceedings.mlr.press/v80/lyu18a.html).
* Machler [2012] Martin Machler. Accurately computing log (1- exp (-l al)) assessed by the rmpfr package. Technical report, Technical report, 2012.
* McLeod et al. [2018] Mark McLeod, Stephen Roberts, and Michael A. Osborne. Optimization, fast and slow: optimally switching between local and Bayesian optimization. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, volume 80 of _Proceedings of Machine Learning Research_, pages 3443-3452. PMLR, 10-15 Jul 2018.
* Mockus [1975] Jonas Mockus. On bayesian methods for seeking the extremum. In _Optimization Techniques IFIP Technical Conference: Novosibirsk, July 1-7, 1974_, pages 400-404. Springer, 1975.
* Mockus [1978] Jonas Mockus. The application of bayesian methods for seeking the extremum. _Towards global optimization_, 2:117-129, 1978.
* Mohammadi et al. [2015] Hossein Mohammadi, Rodolphe Le Riche, and Eric Touboul. Making ego and cma-es complementary for global optimization. In Clarisse Dhaenens, Laettiia Jourdan, and Marie-Eleonore Marmon, editors, _Learning and Intelligent Optimization_, pages 287-292, Cham, 2015. Springer International Publishing.
* Moss et al. [2021] Henry B. Moss, David S. Leslie, Javier Gonzalez, and Paul Rayson. Gibbon: General-purpose information-based bayesian optimisation. _J. Mach. Learn. Res._, 22(1), jan 2021.
* Oh et al. [2019] Changyong Oh, Jakub Tomczak, Efstratios Gavves, and Max Welling. Combinatorial bayesian optimization using the graph cartesian product. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_, pages 2914-2924. Curran Associates, Inc., 2019.
* Paszke et al. [2023] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* Picheny et al. [2023] Victor Picheny, Joel Berkeley, Henry B. Moss, Hrvoje Stojic, Uri Grants, Sebastian W. Ober, Artem Artemev, Khurram Ghani, Alexander Goodall, Andrei Paleyes, Sattar Vakili, Sergio Pascual-Diaz, Stratis Markou, Jixiang Qing, Nasrulloh R. B. S Loka, and Ivo Couckuyt. Trieste: Efficiently exploring the depths of black-box functions with tensorflow, 2023. URL [https://arxiv.org/abs/2302.08436](https://arxiv.org/abs/2302.08436).
* Rasmussen [2004] Carl Edward Rasmussen. _Gaussian Processes in Machine Learning_, pages 63-71. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004.
* Snoek et al. [2012] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In _Advances in neural information processing systems_, pages 2951-2959, 2012.
* Surjanovic and Bingham [2023] S. Surjanovic and D. Bingham. Virtual library of simulation experiments: Test functions and datasets. Retrieved May 14, 2023, from [http://www.sfu.ca/~ssurjano](http://www.sfu.ca/~ssurjano).
* Tanabe and Ishibuchi [2020] Ryoji Tanabe and Hisao Ishibuchi. An easy-to-use real-world multi-objective optimization problem suite. _Applied Soft Computing_, 89:106078, 2020. ISSN 1568-4946.
* [69] The GPyOpt authors. GPyOpt: A bayesian optimization framework in python. [http://github.com/SheffieldML/GPyOpt](http://github.com/SheffieldML/GPyOpt), 2016.
* Torn and Zilinskas [1989] Aimo Torn and Antanas Zilinskas. _Global optimization_, volume 350. Springer, 1989.
* Tu et al. [2022] Ben Tu, Axel Gandy, Nikolas Kantas, and Behrang Shafei. Joint entropy search for multi-objective bayesian optimization. In _Advances in Neural Information Processing Systems 35_, 2022.
* Turner et al. [2021] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In _NeurIPS 2020 Competition and Demonstration Track_, 2021.
* Wachter and Biegler [2006] Andreas Wachter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. _Mathematical Programming_, 106(1):25-57, 2006.
* Wan et al. [2021] Xingchen Wan, Vu Nguyen, Huong Ha, Binxin Ru, Cong Lu, and Michael A. Osborne. Think global and act local: Bayesian optimisation over high-dimensional categorical and mixed search spaces. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_, volume 139, pages 10663-10674. PMLR, 18-24 Jul 2021.
* Wang et al. [2016] Jialei Wang, Scott C. Clark, Eric Liu, and Peter I. Frazier. Parallel bayesian global optimization of expensive functions, 2016.
* Wang and Jegelka [2017] Zi Wang and Stefanie Jegelka. Max-value Entropy Search for Efficient Bayesian Optimization. _ArXiv e-prints_, page arXiv:1703.01968, March 2017.
* Wilson et al. [2018] James Wilson, Frank Hutter, and Marc Deisenroth. Maximizing acquisition functions for bayesian optimization. In _Advances in Neural Information Processing Systems 31_, pages 9905-9916. 2018.
* Wu and Frazier [2016] Jian Wu and Peter I. Frazier. The parallel knowledge gradient method for batch bayesian optimization. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_, NIPS'16, page 3134-3142. Curran Associates Inc., 2016.
* Wu et al. [2017] Jian Wu, Matthias Poloczek, Andrew Gordon Wilson, and Peter I Frazier. Bayesian optimization with gradients. In _Advances in Neural Information Processing Systems_, pages 5267-5278, 2017.
* 956, 2019.
* Zhan and Xing [2020] Dawei Zhan and Huanlai Xing. Expected improvement for expensive optimization: a review. _Journal of Global Optimization_, 78(3):507-544, 2020.
* Zhao et al. [2023] Jiayu Zhao, Renyu Yang, Shenghao Qiu, and Zheng Wang. Enhancing high-dimensional bayesian optimization by optimizing the acquisition function maximizer initialization. _arXiv preprint arXiv:2302.08298_, 2023.
* Zitzler et al. [2000] Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of multiobjective evolutionary algorithms: Empirical results. _Evol. Comput._, 8(2):173-195, jun 2000. ISSN 1063-6560. doi: 10.1162/106365600568202. URL [https://doi.org/10.1162/106365600568202](https://doi.org/10.1162/106365600568202).
Acquisition Function Details
### Analytic Expected Improvement
Recall that the main challenge with computing analytic LogEI is to accurately compute \(\log h\), where \(h(z)=\phi(z)+z\Phi(z)\), with \(\phi(z)=\exp(-z^{2}/2)/\sqrt{2\pi}\) and \(\Phi(z)=\int_{-\infty}^{z}\phi(u)du\). To express \(\log h\) in a numerically stable form as \(z\) becomes increasingly negative, we first take the log and multiply \(\phi\) out of the argument to the logarithm:
\[\log h(z)=z^{2}/2-\log(2\pi)/2+\log\left(1+z\frac{\Phi(z)}{\phi(z)}\right). \tag{12}\]
Fortunately, this form exposes the quadratic factor, \(\Phi(z)/\phi(z)\) can be computed via standard implementations of the scaled complementary error function erfcx, and \(\log\left(1+z\Phi(z)/\phi(z)\right)\), the last term of Eq. (12), can be computed stably with the loglmexp implementation proposed in [56]:
\[\texttt{loglmexp}(x)=\begin{cases}\texttt{log}(-\texttt{expm}1(x))&-\log 2<x\\ \texttt{loglp}(-\texttt{exp}(x))&-\log 2\geq x\end{cases}, \tag{13}\]
where expm1, loglp are canonical stable implementations of \(\exp(x)-1\) and \(\log(1+x)\), respectively. In particular, we compute
\[\texttt{log\_h}(z)=\begin{cases}\texttt{log}(\phi(z)+z\Phi(z))&z>-1\\ -z^{2}/2-c_{1}+\texttt{loglmexp}(\log(\texttt{erfcx}(-z/\sqrt{2})|z|)+c_{2})&1/ \sqrt{\epsilon}<z\leq-1\\ -z^{2}/2-c_{1}-2\log(|z|)&z\leq-1/\sqrt{\epsilon}.\end{cases} \tag{14}\]
where \(c_{1}=\log(2\pi)/2\), \(c_{2}=\log(\pi/2)/2\), and \(\epsilon\) is the floating point precision. We detail the reasoning behind the third asymptotic case of Equation (14) in Lemma 4 of Section A.2 below.
Numerical Study of Acquisition ValuesFigure 8 shows both the numerical failure mode of a naive implementation of EI, which becomes _exactly_ zero numerically for moderately small \(z\), while the evaluation via log_h in Eq. (14) exhibits quadratic asymptotic behavior that is particularly amenable to numerical optimization routines.
Equivalence of OptimizersLemma 3 states that if the maximum of EI is greater than 0, LogEI and EI have the same set of maximizers. Furthermore, if \(\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})=0\), then \(\mathbb{X}=\arg\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})\). In this case, LogEI is undefined everywhere, so it has no maximizers, which we note would yield the same BO policy as EI, for which every point is a maximizer.
**Lemma 3**.: _If \(\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})>0\), then \(\arg\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})=\arg\max_{\mathbf{x} \in\mathbb{X},\text{EI}(\mathbf{x})>0}\text{LogEI}(\mathbf{x})\)._
Proof.: Suppose \(\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})>0\). Then \(\arg\max_{\mathbf{x}\in\mathbb{X}}\text{EI}(\mathbf{x})=\arg\max_{\mathbf{x} \in\mathbb{X},\text{EI}(\mathbf{x})>0}\text{EI}(\mathbf{x})\). For all \(\mathbf{x}\in\mathbb{X}\) such that \(\text{EI}(\mathbf{x})>0\), \(\text{LogEI}(\mathbf{x})=\log(\text{EI}(\mathbf{x}))\). Since \(\log\) is monotonic, we have that \(\arg\max_{z\in\mathbb{R}_{>0}}z=\arg\max_{z\in\mathbb{R}_{>0}}\log(z)\). Hence, \(\arg\max_{\mathbf{x}\in\mathbb{X},\text{EI}(\mathbf{x})>0}\text{EI}(\mathbf{x })=\arg\max_{\mathbf{x}\in\mathbb{X},\text{EI}(\mathbf{x})>0}\text{LogEI}( \mathbf{x})\).
Figure 8: Plot of the \(\log h\), computed via log \(\circ\)\(h\) and log_h in Eq. (14). Crucially, the naive implementation fails as \(z=(\mu(\mathbf{x})-f^{*})/\sigma(\mathbf{x})\) becomes increasingly negative, due to being exactly numerically zero, while our proposed implementation exhibits quadratic asymptotic behavior.
### Analytical LogEI's Asymptotics
As \(z\) grows negative and large, even the more robust second branch in Eq. (14), as well as the implementation of Hutter et al. [35] and Klein et al. [46] can suffer from numerical instabilities (Fig. 10, left). In our case, the computation of the last term of Eq. (12) is problematic for large negative \(z\). For this reason, we propose an approximate asymptotic computation based on a Laurent expansion at \(-\infty\). As a result of the full analysis in the following, we also attain a particularly simple formula with inverse quadratic convergence in \(z\), which is the basis of the third branch of Eq. (14):
\[\log\left(1+\frac{z\Phi(z)}{\phi(z)}\right)=-2\log(|z|)+\mathcal{O}(|z^{-2}|). \tag{15}\]
In full generality, the asymptotic behavior of the last term can be characterized by the following result.
**Lemma 4** (Asymptotic Expansion).: _Let \(z<-1\) and \(K\in\mathbb{N}\), then_
\[\log\left(1+\frac{z\Phi(z)}{\phi(z)}\right)=\log\left(\sum_{k=1}^{K}(-1)^{k+1 }\left[\prod_{j=0}^{k-1}(2j+1)\right]z^{-2k}\right)+\mathcal{O}(|z^{-2(K-1)}|). \tag{16}\]
Proof.: We first derived a Laurent expansion of the non-log-transformed \(z\Phi(z)/\phi(z)\), a key quantity in the last term of Eq. (12), with the help of Wolfram Alpha [37]:
\[\frac{z\Phi(z)}{\phi(z)}=-1-\sum_{k=1}^{K}(-1)^{k}\left[\prod_{j=0}^{k-1}(2j+ 1)\right]z^{-2k}+\mathcal{O}(|z|^{-2K}). \tag{17}\]
It remains to derive the asymptotic error bound through the log-transformation of the above expansion. Letting \(L(z,K)=\sum_{k=1}^{K}(-1)^{k+1}\left[\prod_{j=0}^{k-1}(2j+1)\right]z^{-2k}\), we get
\[\log\left(1+\frac{z\Phi(z)}{\phi(z)}\right) =\log\left(L(z,K)+\mathcal{O}(|z|^{-2K})\right) \tag{18}\] \[=\log L(z,K)+\log(1+\mathcal{O}(|z|^{-2K})/L(z,K))\] \[=\log L(z,K)+\mathcal{O}(\mathcal{O}(|z|^{-2K})/L(z,K))\] \[=\log L(z,K)+\mathcal{O}(|z|^{-2(K-1)}).\]
The penultimate equality is due to \(\log(1+x)=x+\mathcal{O}(x^{2})\), the last due to \(L(z,K)=\Theta(|z|^{-2})\).
Figure 9: Convergence behavior of the asymptotic Laurent expansion Eq. (16) of different orders.
SMAC 1.0 and RoBO's Analytic LogEITo our knowledge, SMAC 1.0's implementation of the logarithm of analytic EI due to Hutter et al. [35], later translated to RoBO [46], was the first to improve the numerical stability of analytic EI through careful numerics. The associated implementation is mathematically identical to \(\log\circ\) EI, and greatly improves the numerical stability of the computation. For large negative \(z\) however, the implementation still exhibits instabilities that gives rise to floating point infinities through which useful gradients cannot be propagated (Fig. 10, left). The implementation proposed herein remedies this problem by switching to the asymptotic approximation of Eq. (15) once it is accurate to machine precision \(\epsilon\). This is similar to the use of asymptotic expansions for the computation of \(\alpha\)-stable densities proposed by Ament and O'Neil [2].
HEBO's Approximate Analytic LogEIHEBO [12] contains an approximation to the logarithm of analytical EI as part of its implementation of the MACE acquisition function [55], which - at the time of writing - is missing the log normalization constant of the Gaussian density, leading to a large discontinuity at the chosen cut-off point of \(z=-6\) below which the approximation takes effect, see here for the HEBO implementation. Notably, HEBO does not implement an _exact_ stable computation of LogEI like the one of Hutter et al. [35], Klein et al. [46] and the non-asymptotic branches of the current work. Instead, it applies the approximation for all \(z<-6\), where the implementation exhibits a maximum error of \(>1.02\), or if the implementation's normalization constant were corrected, a maximum error of \(>0.1\). By comparison, the implementation put forth herein is mathematically exact for the non-asymptotic regime \(z>-1/\sqrt{\epsilon}\) and accurate to numerical precision in the asymptotic regime due to the design of the threshold value.
### Monte-Carlo Expected Improvement
For Monte-Carlo, we cannot directly apply similar numerical improvements as for the analytical version, because the utility values, the integrand of Eq. (4), on the sample level are likely to be _mathematically_ zero. For this reason, we first smoothly approximate the acquisition utility and subsequently apply log transformations to the approximate acquisition function.
To this end, a natural choice is \(\mathrm{softplus}_{\tau_{0}}(x)=\tau_{0}\log(1+\exp(x/\tau_{0}))\) for smoothing the \(\max(0,x)\), where \(\tau_{0}\) is a temperature parameter governing the approximation error. Further, we approximate the \(\max_{i}\) over the \(q\) candidates by the norm \(\|\cdot\|_{1/\tau_{\mathrm{max}}}\) and note that the approximation error introduced by both smooth approximations can be bound tightly as a function of two "temperature" parameters \(\tau_{0}\) and \(\tau_{\mathrm{max}}\), see Lemma 2.
Figure 10: Left: Comparison of LogEI values in single-precision floating point arithmetic, as a function of \(z=(\mu(x)-f^{*})/\sigma(x)\) to RoBO’s LogEI implementation [46]. Notably, RoBO’s LogEI improves greatly on the naive implementation (Fig. 8), but still exhibits failure points (red) well above floating point underflow. The implementation of Eq. (14) continues to be stable in this regime. Right: Comparison of LogEI values to HEBO’s approximate LogEI implementation [12]. At the time of writing, HEBO’s implementation exhibits a discontinuity and error of \(>1.02\) at the threshold \(z=-6\), below which the approximation takes effect. The discontinuity could be ameliorated, though not removed, by correcting the log normalization constant (turquoise). The figure also shows that the naive implementation used by HEBO for \(z>-6\) starts to become unstable well before \(z=-6\).
Importantly, the smoothing alone only solves the problem of having mathematically zero gradients, not that of having numerically vanishing gradients, as we have shown for the analytical case above. For this reason, we transform all smoothed computations to log space and thus need the following special implementation of \(\log\circ\operatorname{softplus}\) that can be evaluated stably for a very large range of inputs:
\[\texttt{logsoftplus}_{\tau}(x)=\begin{cases}[\texttt{log}\circ\texttt{softplus}_ {\tau}](x)&x/\tau>l\\ x/\tau+\texttt{log}(\tau)&x/\tau\leq l\end{cases}\]
where \(\tau\) is a temperature parameter and \(l\) depends on the floating point precision used, around \(-35\) for double precision in our implementation.
Note that the lower branch of logsoftplus is approximate. Using a Taylor expansion of \(\log(1+z)=z-z^{2}/2+\mathcal{O}(z^{3})\) around \(z=0\), we can see that the approximation error is \(\mathcal{O}(z^{2})\), and therefore, \(\log(\log(1+\exp(x)))=x+\mathcal{O}(\exp(x)^{2})\), which converges to \(x\) exponentially quickly as \(x\to-\infty\). In our implementation, \(l\) is chosen so that no significant digit is lost in dropping the second order term from the lower branch.
Having defined logsoftplus, we further note that
\[\log\|\mathbf{x}\|_{1/\tau_{\max}} =\log\left(\sum_{i}x_{i}^{1/\tau_{\max}}\right)^{\tau_{\max}}\] \[=\tau_{\max}\log\left(\sum_{i}\exp(\log(x_{i})/\tau_{\max})\right)\] \[=\tau_{\max}\texttt{logsumexp}_{i}\left(\log(x_{i})/\tau_{\max}\right)\]
Therefore, we express the logarithm of the smoothed acquisition utility for \(q\) candidates as
\[\tau_{\max}\texttt{logsumexp}_{j}^{q}(\texttt{logsoftplus}_{\tau_{0}}((\xi^{i} (x_{j})-y^{*})/\tau_{\max}).\]
Applying another logsumexp to compute the logarithm of the mean of acquisition utilities over a set of Monte Carlo samples \(\{\xi_{i}\}_{i}\) gives rise to the expression in Eq. (10).
In particular for large batches (large \(q\)), this expression can still give rise to vanishing gradients for some candidates, which is due to the large dynamic range of the outputs of the logsoftplus when \(x<<0\). To solve this problem, we propose a new class of smooth approximations to the "hard" non-linearities that decay as \(\mathcal{O}(1/x^{2})\) as \(x\to-\infty\) in the next section.
### A Class of Smooth Approximations with Fat Tails for Larger Batches
A regular \(\operatorname{softplus}(x)=\log(1+\exp(x))\) function smoothly approximates the ReLU non-linearity and - in conjunction with the log transformations - is sufficient to achieve good numerical behavior for small batches of the Monte Carlo acquisition functions. However, as more candidates are added, \(\log\operatorname{softplus}(x)=\log(\log(1+\exp(x)))\) is increasingly likely to have a high dynamic range as for \(x\ll 0\), \(\log\operatorname{softplus}_{\tau}(x)\sim-x/\tau\). If \(\tau>0\) is chosen to be small, \((-x/\tau)\) can vary orders of magnitude within a single batch. This becomes problematic when we approximate the maximum utility over the batch of candidates, since logsumexp only propagates numerically non-zero gradients to inputs that are no smaller than approximately \((\max_{j}x_{j}-700)\) in double precision, another source of vanishing gradients.
To solve this problem, we propose a new smooth approximation to the ReLU, maximum, and indicator functions that decay only polynomially as \(x\to-\infty\), instead of exponentially, like the canonical softplus. The high level idea is to use \((1+x^{2})^{-1}\), which is proportional to the Cauchy density function (and is also known as a Lorentzian), in ways that maintain key properties of existing smooth approximations - convexity, positivity, etc - while changing the asymptotic behavior of the functions from exponential to \(\mathcal{O}(1/x^{2})\) as \(x\to-\infty\), also known as a "fat tail". Further, we will show that the proposed smooth approximations satisfy similar maximum error bounds as their exponentially decaying counterparts, thereby permitting a similar approximation guarantee as Lemma 2 with minor adjustments to the involved constants. While the derivations herein are based on the Cauchy density with inverse quadratic decay, it is possible to generalize the derivations to e.g. \(\alpha\)-stable distribution whose symmetric variants permit accurate and efficient numerical computation [2].
Fat SoftplusWe define
\[\varphi_{+}(x)=\alpha(1+x^{2})^{-1}+\log(1+\exp(x)), \tag{19}\]
for a positive scalar \(\alpha\). The following result shows that we can ensure the monotonicity and convexity - both important properties of the ReLU that we would like to maintain in our approximation - of \(g\) by carefully choosing \(\alpha\).
**Lemma 5** (Monotonicity and Convexity).: \(\varphi_{+}(x)\) _is positive, monotonically increasing, and strictly convex for \(\alpha\) satisfying_
\[0\leq\alpha<\frac{e^{1/\sqrt{3}}}{2\left(1+e^{1/\sqrt{3}}\right)}.\]
Proof.: Positivity follows due to \(\alpha\geq 0\), and both summands being positive. Monotonicity and convexity can be shown via canonical differential calculus and bounding relevant quantities.
In particular, regarding monotonicity, we want to select \(\alpha\) so that the first derivative is bounded below by zero:
\[\partial_{x}\varphi_{+}(x)=\frac{e^{x}}{1+e^{x}}-\alpha\frac{2x}{(1+x^{2})^{2}}\]
First, we note that \(\partial_{x}\varphi_{+}(x)\) is positive for \(x<0\) and any \(\alpha\), since both terms are positive in this regime. For \(x\geq 0\), \(\frac{e^{x}}{1+e^{x}}=(1+e^{-x})^{-1}\geq 1/2\), and \(-1/(1+x^{2})^{2}\geq-1/(1+x^{2})\), so that
\[\partial_{x}\varphi_{+}(x)\geq\frac{1}{2}-\alpha\frac{2x}{(1+x^{2})}\]
Forcing \(\frac{1}{2}-\alpha\frac{2x}{(1+x^{2})}>0\), and multiplying by \((1+x^{2})\) gives rise to a quadratic equation whose roots are \(x=2\alpha\pm\sqrt{4\alpha^{2}-1}\). Thus, there are no real roots for \(\alpha<1/2\). Since the derivative is certainly positive for the negative reals and the guaranteed non-existence of roots implies that the derivative cannot cross zero elsewhere, \(0\leq\alpha<1/2\) is a sufficient condition for monotonicity of \(\varphi_{+}\).
Regarding convexity, our goal is to prove a similar condition on \(\alpha\) that guarantees the positivity of the second derivative:
\[\partial_{x}^{2}\varphi_{+}(x)=\alpha\frac{6x^{2}-2}{(1+x^{2})^{3}}+\frac{e^{- x}}{(1+e^{-x})^{2}}\]
Note that \(\frac{6x^{2}-2}{(1+x^{2})^{3}}\) is symmetric around \(0\), is negative in \((-\sqrt{1/3},\sqrt{1/3})\) and has a minimum of \(-2\) at \(0\). \(\frac{e^{-x}}{(1+e^{-x})^{2}}\) is symmetric around zero and decreasing away from zero. Since the rational polynomial is only negative in \((-\sqrt{1/3},\sqrt{1/3})\), we can lower bound \(\frac{e^{-x}}{(1+e^{-x})^{2}}>\frac{e^{-\sqrt{1/3}}}{(1+e^{-\sqrt{1/3}})^{2}}\) in \((-\sqrt{1/3},\sqrt{1/3})\). Therefore,
\[\partial_{x}^{2}\varphi_{+}(x)\geq\frac{e^{-x}}{(1+e^{-x})^{2}}-2\alpha\]
Forcing \(\frac{e^{-\sqrt{1/3}}}{(1+e^{-\sqrt{1/3}})^{2}}-2\alpha>0\) and rearranging yields the result. Since \(\frac{e^{-\sqrt{1/3}}}{(1+e^{-\sqrt{1/3}})^{2}}/2\sim 0.115135\), the convexity condition is stronger than the monotonicity condition and therefore subsumes it.
Importantly \(\varphi\) decays only polynomially for increasingly negative inputs, and therefore \(\log\varphi\) only logarithmically, which keeps the range of \(\varphi\) constrained to values that are more manageable numerically. Similar to Lemma 7, one can show that
\[|\tau\varphi_{+}(x/\tau)-\mathtt{ReLU}(x)|\leq\left(\alpha+\log(2)\right)\tau. \tag{20}\]
There are a large number of approximations or variants of the ReLU that have been proposed as activation functions of artificial neural networks, but to our knowledge, none satisfy the properties that we seek here: (1) smoothness, (2) positivity, (3) monotonicity, (4) convexity, and (5) polynomial decay. For example, the leaky ReLU does not satisfy (1) and (2), and the ELU does not satisfy (5).
Fat MaximumThe canonical logsumexp approximation to \(\max_{i}x_{i}\) suffers from numerically vanishing gradients if \(\max_{i}x_{i}-\min_{j}x_{j}\) is larger a moderate threshold, around 760 in double precision, depending on the floating point implementation. In particular, while elements close to the maximum receive numerically non-zero gradients, elements far away are increasingly likely to have a numerically zero gradient. To fix this behavior for the smooth maximum approximation, we propose
\[\varphi_{\max}(\mathbf{x})=\max_{j}x_{j}+\tau\log\sum_{i}\left[1+\left(\frac{x _{i}-\max_{j}x_{j}}{\tau}\right)^{2}\right]^{-1}. \tag{21}\]
This approximation to the maximum has the same error bound to the true maximum as the logsumexp approximation:
**Lemma 6**.: _Given \(\tau>0\)_
\[\max_{i}x_{i}\leq\tau\;\phi_{\max}(x/\tau)\leq\max_{i}x_{i}+\tau\log(d). \tag{22}\]
Proof.: Regarding the lower bound, let \(i=\arg\max_{j}x_{j}\). For this index, the associated summand in (21) is \(1\). Since all summands are positive, the entire sum is lower bounded by \(1\), hence
\[\tau\log\sum_{i}\left[1+\left(\frac{x_{i}-\max_{j}x_{j}}{\tau}\right)^{2} \right]^{-1}>\tau\log(1)=0\]
Adding \(\max_{j}x_{j}\) to the inequality finishes the proof for the lower bound.
Regarding the upper bound, (21) can be maximized when \(x_{i}=\max_{j}x_{j}\) for all \(i\), in which case each \((x_{i}-\max_{j}x_{j})^{2}\) is minimized, and hence each summand is maximized. In this case,
\[\tau\log\sum_{i}\left[1+\left(\frac{x_{i}-\max_{j}x_{j}}{\tau}\right)^{2} \right]^{-1}\leq\tau\log\left(\sum_{i}1\right)=\tau\log(d).\]
Adding \(\max_{j}x_{j}\) to the inequality finishes the proof for the upper bound.
Fat SigmoidNotably, we encountered a similar problem using regular (log)-sigmoids to smooth the constraint indicators for EI with black-box constraints. In principle the Cauchy cummulative distribution function would satisfy these conditions, but requires the computation of \(\arctan\), a special function that requires more floating point operations to compute numerically than the following function. Here, we want the smooth approximation \(\iota\) to satisfy 1) positivity, 2) monotonicity, 3) polynomial decay, and 4) \(\iota(x)=1/2-\iota(-x)\). Let \(\gamma=\sqrt{1/3}\), then we define
\[\iota(x)=\begin{cases}\frac{2}{3}\left(1+(x-\gamma)^{2}\right)^{-1}&x<0,\\ 1-\frac{2}{3}\left(1+(x+\gamma)^{2}\right)^{-1}&x\geq 0.\end{cases}\]\(\iota\) is monotonically increasing, satisfies \(\iota(x)\to 1\) as \(x\to\infty\), \(\iota(0)=1/2\), and \(\iota(x)=\mathcal{O}(1/x^{2})\) as \(x\to-\infty\). Further, we note that the asymptotics are primarily important here, but that we can also make the approximation tighter by introducing a temperature parameter \(\tau\), and letting \(\iota_{\tau}(x)=\iota(x/\tau)\). The approximation error of \(\iota_{\tau}(x)\) to the Heaviside step function becomes tighter point-wise as \(\tau\to 0+\), except for at the origin where \(\iota_{\tau}(x)=1/2\), similar to the canonical sigmoid.
### Constrained Expected Improvement
For the analytical case, many computational frameworks already provide a numerically stable implementation of the logarithm of the Gaussian cummulative distribution function, in the case of PyTorch, torch.special.log_ndtr, which can be readily used in conjunction with our implementation of LogEI, as described in Sec. 4.3.
For the case of Monte-Carlo parallel EI, we implemented the fat-tailed \(\iota\) function from Sec. A.4 to approximate the constraint indicator and compute the per-candidate, per-sample acquisition utility using
\[(\texttt{logsoftplus}_{\tau_{0}}(\xi_{i}(\mathbf{x}_{j})-y^{*})+\sum_{k} \texttt{log}\circ\iota\left(-\frac{\xi_{i}^{(k)}(\mathbf{x}_{j})}{\tau_{\text {cons}}}\right),\]
where \(\xi_{i}^{(k)}\) is the \(i\)th sample of the \(k\)th constraint model, and \(\tau_{\text{cons}}\) is the temperature parameter controlling the approximation to the constraint indicator. While this functionality is in our implementation, our benchmark results use the analytical version.
### Parallel Expected Hypervolume Improvement
The hypervolume improvement can be computed via the inclusion-exclusion principle, see [13] for details, we focus on the numerical issues concerning qEHVI here. To this end, we define
\[z_{k,i_{1},\ldots,i_{j}}^{(m)}=\min\left[\mathbf{u}_{k},\mathbf{f}(\mathbf{x} _{i_{1}}),\ldots,\mathbf{f}(\mathbf{x}_{i_{j}})\right],\]
where \(\mathbf{f}\) is the vector-valued objective function, and \(\mathbf{u}_{k}\) is the vector of upper bounds of one of \(K\) hyper-rectangles that partition the non-Pareto-dominated space, see [13] for details on the partitioning. Letting \(\mathbf{l}_{k}\) be the corresponding lower bounds of the hyper-rectangles, the hypervolume improvement can then be computed as
\[\text{HVI}(\{\mathbf{f}(\mathbf{x}_{i})\}_{i=1}^{q}=\sum_{k=1}^{K}\sum_{j=1}^{ q}\sum_{X_{j}\in\mathcal{X}_{j}}(-1)^{j+1}\prod_{m=1}^{M}[z_{k,X_{j}}^{(m)}-l_{k }^{(m)}]_{+}, \tag{23}\]
where \(\mathcal{X}_{j}=\{X_{j}\subset\mathcal{X}_{\text{cand}}:|X_{j}|=j\}\) is the superset of all subsets of \(\mathcal{X}_{\text{cand}}\) of size \(j\) and \(z_{k,X_{j}}^{(m)}=z_{k,i_{1},\ldots,i_{j}}^{(m)}\) for \(X_{j}=\{\mathbf{x}_{i_{1}},\ldots,\mathbf{x}_{i_{j}}\}\).
To find a numerically stable formulation of the logarithm of this expression, we first re-purpose the \(\varphi_{\max}\) function to compute the minimum in the expression of \(z^{(m)}_{k,i_{1},...,i_{q}}\), like so \(\varphi_{\min}(x)=-\varphi_{\max}(-x)\). Further, we use the \(\varphi_{+}\) function of Sec. A.4 as for the single objective case to approximate \([z^{(m)}_{k,X_{j}}-l^{(m)}_{k}]_{+}\). We then have
\[\log\prod_{m=1}^{M}\varphi_{+}[z^{(m)}_{k,X_{j}}-l^{(m)}_{k}]=\sum_{m=1}^{M} \log\varphi_{+}[z^{(m)}_{k,X_{j}}-l^{(m)}_{k}] \tag{24}\]
Since we can only transform positive quantities to log space, we split the sum in Eq. (23) into positive and negative components, depending on the sign of \((-1)^{j+1}\), and compute the result using a numerically stable implementation of \(\log(\exp(\log\text{ of positive terms})-\exp(\log\text{ of negative terms})\). The remaining sums over \(k\) and \(q\) can be carried out by applying logsumexp to the resulting quantity. Finally, applying logsumexp to reduce over an additional Monte-Carlo sample dimension yields the formulation of qLogEHVI that we use in our multi-objective benchmarks.
### Probability of Improvement
Numerical improvements for the probability of improvement acquisition which is defined as \(\alpha(x)=\Phi\left(\frac{\mu(x)-y^{*}}{\sigma(x)}\right)\), where \(\Phi\) is the standard Normal CDF, can be obtained simply by taking the logarithm using a numerically stable implementation of \(\log(\Phi(z))=\texttt{logerfc}\Big{(}-\frac{1}{\sqrt{2}}z\Big{)}-\log(2)\), where logerfc is computed as
\[\texttt{logerfc}(x)=\begin{cases}\texttt{log}(\texttt{erfc}(x))&x\leq 0\\ \texttt{log}(\texttt{erfcx}(x))-x^{2}&x>0.\end{cases}\]
### q-Noisy Expected Improvement
The same numerical improvements used by qLogEI to improve Monte-Carlo expected improvement (qEI) in Appendix A.3 can be applied to improve the fully Monte Carlo Noisy Expected Improvement [6, 52] acquisition function. As in qLogEI, we can (i) approximate the \(\max(0,\mathbf{x})\) using a softplus to smooth the sample-level improvements to ensure that they are mathematically positive, (ii) approximate the maximum over the \(q\) candidate designs by norm \(||\cdot||_{\frac{1}{\max}}\), and (iii) take the logarithm to of the resulting smoothed value to mitigate vanishing gradients. To further mitigate vanishing gradients, we can again leverage the Fat Softplus and Fat Maximum approximations. The only notable difference in the \(q\)EI and \(q\)NEI acquisition functions is the choice of incumbent, and similarly only a change of incumbent is required to obtain qLogNEI from qLogEI. Specifically, when the scalar \(y^{*}\) in Equation (10) is replaced with a vector containing the new incumbent under each sample we obtain the qLogNEI acquisition value. The \(i^{\text{th}}\) element of the incumbent vector for qLogNEI is \(\max_{j^{\prime}=q+1}^{n+q}\xi^{i}(\mathbf{x}_{j^{\prime}})\), where \(\mathbf{x}_{q+1},...,\mathbf{x}_{n+q}\) are the previously evaluated designs and \(\xi^{i}(\mathbf{x}_{j^{\prime}})\) is the value of the \(j^{\text{th}}\) point under the \(i^{\text{th}}\) sample from the joint posterior over \(\mathbf{x}_{1},...,\mathbf{x}_{n+q}\). We note that we use a hard maximum to compute the incumbent for each sample because we do not need to compute gradients with respect to the previously evaluated designs \(\mathbf{x}_{q+1},...,\mathbf{x}_{n+q}\).
We note that we obtain computational speed ups by (i) pruning the set of previously evaluated points that considered for being the best incumbent to include only those designs with non-zero probability of being the best design and (ii) caching the Cholesky decomposition of the posterior covariance over the resulting pruned set of previously evaluated designs and using low-rank updates for efficient sampling [14].
For experimental results of \(q\)LogNEI see Section D.4.
## Appendix B Strategies for Optimizing Acquisition Functions
As discussed in Section 2.3, a variety of different approaches and heuristics have been applied to the problem of optimizing acquisition functions. For the purpose of this work, we only consider continuous domains \(\mathbb{X}\). While discrete and/or mixed domains are also relevant in practice and have received substantial attention in recent years - see e.g. Baptista and Poloczek [7], Daulton et al. [15], Deshwal et al. [18], Kim et al. [44], Oh et al. [62], Wan et al. [74] - our work here on improving acquisition functions is largely orthogonal to this (though the largest gains should be expected when using gradient-based optimizers, as is done in mixed-variable BO when conditioning on discrete variables, or when performing discrete or mixed BO using continuous relaxations, probabilistic reparameterization, or straight-through estimators [15]).
Arguably the simplest approach to optimizing acquisition functions is by grid search or random search. While variants of this combined with local descent can make sense in the context of optimizing over discrete or mixed spaces and when acquisition functions can be evaluated efficiently in batch (e.g. on GPUs), this clearly does not scale to higher-dimensional continuous domains due to the exponential growth of space to cover.
Another relatively straightforward approach is to use zeroth-order methods such as DIRECT[41] (used e.g. by Dragonfly[43]) or the popular CMA-ES[32]. These approaches are easy implement as they avoid the need to compute gradients of acquisition functions. However, not relying on gradients is also what renders their optimization performance inferior to gradient based methods, especially for higher-dimensional problems and/or joint batch optimization in parallel Bayesian optimization.
The most common approach to optimizing acquisition functions on continuous domains is using gradient descent-type algorithms. Gradients are either be computed based on analytically derived closed-form expressions, or via auto-differentiation capabilities of modern ML systems such as PyTorch[63], Tensorflow[1], or JAX[9].
For analytic acquisition functions, a common choice of optimizer is L-BFGS-B[10], a quasi-second order method that uses gradient information to approximate the Hessian and supports box constraints. If other, more general constraints are imposed on the domain, other general purpose nonlinear optimizers such as SLSQP[47] or IPOPT[73] are used (e.g. by BoTorch). For Monte Carlo (MC) acquisition functions, Wilson et al. [77] proposes using stochastic gradient ascent (SGA) based on stochastic gradient estimates obtained via the reparameterization trick[45]. Stochastic first-order algorithms are also used by others, including e.g. Wang et al. [75] and Daulton et al. [15]. Balandat et al. [6] build on the work by Wilson et al. [77] and show how sample average approximation (SAA) can be employed to obtain deterministic gradient estimates for MC acquisition functions, which has the advantage of being able to leverage the improved convergence rates of optimization algorithms designed for deterministic functions such as L-BFGS-B. This general approach has since been used for a variety of other acquisition functions, including e.g. Daulton et al. [13] and Jiang et al. [40].
Very few implementations of Bayesian Optimization actually use higher-order derivative information, as this either requires complex derivations of analytical expressions and their custom implementation, or computation of second-order derivatives via automated differentiation, which is less well supported and computationally much more costly than computing only first-order derivatives. One notable exception is Cornell-MOE[78, 79], which supports Newton's method (though this is limited to the acquisition functions implemented in C++ within the library and not easily extensible to other acquisition functions).
### Common initialization heuristics for multi-start gradient-descent
One of the key issues to deal with gradient-based optimization in the context of optimizing acquisition functions is the optimizer getting stuck in local optima due to the generally highly non-convex objective. This is typically addressed by means of restarting the optimizer from a number of different initial conditions distributed across the domain.
A variety of different heuristics have been proposed for this. The most basic one is to restart from random points uniformly sampled from the domain (for instance, scikit-optimize[33] uses this strategy). However, as we have argued in this paper, acquisition functions can be (numerically) zero in large parts of the domain, and so purely random restarts can become ineffective, especially in higher dimensions and with more data points. A common strategy is therefore to either augment or bias the restart point selection to include initial conditions that are closer to "promising points". GPyOpt[69] augments random restarts with the best points observed so far, or alternatively points generated via Thompson sampling. Spearmint[66] initializes starting points based on Gaussian perturbations of the current best point. BoTorch[6] selects initial points by performing Boltzmann sampling on a set of random points according to their acquisition function value; the goal of this strategy is to achieve a biased random sampling across the domain that is likely to generate more points around regions with high acquisition value, but remains asymptotically space-filling. The initialization strategy used by Trieste[64] works similarly to the one in BoTorch, but instead of using soft-randomization via Boltzmann sampling, it simply selects the top-\(k\) points. Most recently, Gramacy et al. [31] proposed distributing initial conditions using a Delaunay triangulation of previously observed data points. This is an interesting approach that generalizes the idea of initializing "in between" observed points from the single-dimensional case. However, this approach does not scale well with the problem dimension and the number of observed data points due to the complexity of computing the triangulation (with wall time empirically found to be exponential in the dimension, see [31, Fig. 3] and worst-case quadratic in the number of observed points).
However, while these initialization strategies can help substantially with better optimizing acquisition functions, they ultimately cannot resolve foundational issues with acquisition functions themselves. Ensuring that acquisition functions provides enough gradient information (not just mathematically but also numerically) is therefore key to be able to optimize it effectively, especially in higher dimensions and with more observed data points.
## Appendix C Proofs
See 1
Proof.: We begin by expanding the argument to the \(h\) function in Eq. (2) as a sum of 1) the standardized error of the posterior mean \(\mu_{n}\) to the true objective \(f\) and 2) the standardized difference of the value of the true objective \(f\) at \(x\) to the best previously observed value \(y^{*}=\max_{i}^{n}y_{i}\):
\[\frac{\mu_{n}(\mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})}=\frac{\mu_{n}( \mathbf{x})-f(\mathbf{x})}{\sigma_{n}(\mathbf{x})}+\frac{f(\mathbf{x})-f^{*}} {\sigma_{n}(\mathbf{x})} \tag{25}\]
We proceed by bounding the first term on the right hand side. Note that by assumption, \(f(\mathbf{x})\sim\mathcal{N}(\mu_{n}(\mathbf{x}),\sigma_{n}(\mathbf{x})^{2})\) and thus \((\mu_{n}(\mathbf{x})-f(\mathbf{x}))/\)\(\sigma_{n}(\mathbf{x})\sim\mathcal{N}(0,1)\). For a positive \(C>0\) then, we use a standard bound on the Gaussian tail probability to attain
\[P\left(\frac{\mu_{n}(\mathbf{x})-f(\mathbf{x})}{\sigma_{n}(\mathbf{x})}>C \right)\leq e^{-C^{2}/2}/2. \tag{26}\]
Therefore, \((\mu(\mathbf{x})-f(\mathbf{x}))/\sigma_{n}(\mathbf{x})<C\) with probability \(1-\delta\) if \(C=\sqrt{-2\log(2\delta)}\).
Using the bound just derived, and forcing the resulting upper bound to be less than \(B\) yields a sufficient condition to imply \(\mu_{n}(\mathbf{x})-y^{*}<B\sigma_{n}(\mathbf{x})\):
\[\frac{\mu_{n}(\mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})}\leq C+\frac{f( \mathbf{x})-y^{*}}{\sigma_{n}(\mathbf{x})}<B \tag{27}\]
Letting \(f^{*}:=f(\mathbf{x}^{*})\), re-arranging and using \(y^{*}=f^{*}+(y^{*}-f^{*})\) we get with probability \(1-\delta\),
\[f(\mathbf{x})\leq f^{*}-(f^{*}-y^{*})-(\sqrt{-2\log(2\delta)}-B)\sigma_{n}( \mathbf{x}). \tag{28}\]
Thus, we get
\[P_{\mathbf{x}}\left(\frac{\mu_{n}(\mathbf{x})-y^{*}}{\sigma_{n}( \mathbf{x})}<B\right) \geq P_{\mathbf{x}}\left(f(\mathbf{x})\leq f^{*}-(f^{*}-y^{*})-( \sqrt{-2\log(2\delta)}-B)\sigma_{n}(x)\right) \tag{29}\] \[\geq P_{x}\left(f(\mathbf{x})\leq f^{*}-(f^{*}-y^{*})-(\sqrt{-2 \log(2\delta)}-B)\max_{\mathbf{x}}\sigma_{n}(\mathbf{x})\right).\]
Note that the last inequality gives a bound that is not directly dependent on the evaluation of the posterior statistics of the surrogate at any specific \(\mathbf{x}\). Rather, it is dependent on the optimality gap \(f^{*}-y^{*}\) and the maximal posterior standard deviation, or a bound thereof. Letting \(\epsilon_{n}=(f^{*}-y^{*})-(\sqrt{-2\log(2\delta)}-B)\max_{\mathbf{x}}\sigma_ {n}(\mathbf{x})\) finishes the proof.
**Lemma 2**.: _[Relative Approximation Guarantee] Given \(\tau_{0},\tau_{\max}>0\), the approximation error of \(\operatorname{qLogEI}\) to \(\operatorname{qEI}\) is bounded by_
\[\left|e^{\operatorname{qLogEI}(\mathbf{X})}-\operatorname{qEI}(\mathbf{X}) \right|\leq(q^{\tau_{\max}}-1)\;\operatorname{qEI}(\mathbf{X})+\log(2)\tau_{0 }q^{\tau_{\max}}. \tag{11}\]
Proof.: Let \(z_{iq}=\xi_{i}(\mathbf{x}_{q})-y^{*}\), where \(i\in\{1,...,n\}\), and for brevity of notation, and let \(\mathtt{lse}\), \(\mathtt{lsp}\) refer to the \(\mathtt{logsumexp}\) and \(\mathtt{logsoftplus}\) functions, respectively, and \(\mathtt{ReLU}(x)=\max(x,0)\). We then bound \(n|e^{\operatorname{qLogEI}(\mathbf{X})}-\operatorname{qEI}(\mathbf{X})|\) by
\[\left|\exp(\mathtt{lse}_{i}(\tau_{\max}\mathtt{lse}_{q}(\mathtt{ lsp}_{\tau_{0}}(z_{iq})/\tau_{\max})))-\sum_{i}\max_{q}\mathtt{ReLU}(z_{iq})\right|\] \[\leq\sum_{i}\left|\exp(\tau_{\max}\mathtt{lse}_{q}(\mathtt{lsp}_{ \tau_{0}}(z_{iq})/\tau_{\max}))-\max_{q}\mathtt{ReLU}(z_{iq})\right|\] \[=\sum_{i}\left|\|\mathtt{softplus}_{\tau_{0}}(z_{i}.)\|_{1/\tau_{ \max}}-\max_{q}\mathtt{ReLU}(z_{iq})\right| \tag{30}\] \[\leq\sum_{i}\left|\|\mathtt{softplus}_{\tau_{0}}(z_{i}.)\|_{1/ \tau_{\max}}-\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})\right|\] \[\qquad+\left|\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})-\max_{q }\mathtt{ReLU}(z_{iq})\right|\]
First and second inequalities are due to the triangle inequality, where for the second we used \(|a-c|\leq|a-b|+|b-c|\) with \(b=\max_{q}\mathtt{softplus}(z_{iq})\).
To bound the first term in the sum, note that \(\|\mathbf{x}\|_{\infty}\leq\|\mathbf{x}\|_{q}\leq\|\mathbf{x}\|_{\infty}d^{1/q}\), thus \(0\leq(\|\mathbf{x}\|_{q}-\|\mathbf{x}\|_{\infty})\leq d^{1/q}-1\|\mathbf{x}\|_{\infty}\), and therefore
\[\left\|\mathtt{softplus}_{\tau_{0}}(z_{i}.)\|_{1/\tau_{\max}}- \max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})\right| \leq(q^{\tau_{\max}}-1)\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})\] \[\leq(q^{\tau_{\max}}-1)(\max_{q}\mathtt{ReLU}(z_{iq})+\log(2)\tau _{0})\]
The second term in the sum can be bound due to \(|\mathtt{softplus}_{\tau_{0}}(x)-\mathtt{ReLU}(x)|\leq\log(2)\tau_{0}\) (see Lemma 7 below) and therefore,
\[\left|\max_{q}\mathtt{softplus}_{\tau_{0}}(z_{iq})-\max_{q}\mathtt{ReLU}_{ \tau_{0}}(z_{iq})\right|\leq\log(2)\tau_{0}.\]
Dividing Eq. (30) by \(n\) to compute the sample mean finishes the proof for the Monte-Carlo approximations to the acquisition value. Taking \(n\to\infty\) further proves the result for the mathematical definitions of the parallel acquisition values, i.e. Eq. (4).
Approximating the \(\mathtt{ReLU}\) using the \(\mathtt{softplus}_{\tau}(x)=\tau\log(1+\exp(x/\tau))\) function leads to an approximation error that is at most \(\tau\) in the infinity norm, i.e. \(\|\mathtt{softplus}_{\tau}-\mathtt{ReLU}\|_{\infty}=\log(2)\tau\). The following lemma formally proves this.
**Lemma 7**.: _Given \(\tau>0\), we have for all \(x\in\mathbb{R}\),_
\[|\mathtt{softplus}_{\tau}(x)-\mathtt{ReLU}(x)|\leq\log(2)\;\tau. \tag{31}\]
Proof.: Taking the (sub-)derivative of \(\mathtt{softplus}_{\tau}-\mathtt{ReLU}\), we get
\[\partial_{x}\mathtt{softplus}_{\tau}(x)-\mathtt{ReLU}(x)=(1+e^{-x/\tau})^{-1} -\begin{cases}1&x>0\\ 0&x\leq 0\end{cases}\]
which is positive for all \(x<0\) and negative for all \(x>0\), hence the extremum must be at \(x\), at which point \(\mathtt{softplus}_{\tau}(0)-\mathtt{ReLU}(0)=\log(2)\tau\). Analyzing the asymptotic behavior, \(\lim_{x\to\pm\infty}(\mathtt{softplus}_{\tau}(x)-\mathtt{ReLU}(x))=0\), and therefore \(\mathtt{softplus}_{\tau}(x)>\mathtt{ReLU}(x)\) for \(x\in\mathbb{R}\).
Approximation guarantees for the fat-tailed non-linearities of App. A.4 can be derived similarly.
Additional Empirical Details and Results
### Experimental details
All algorithms are implemented in BoTorch. The analytic EI, qEI, cEI utilize the standard BoTorch implementations. We utilize the original authors' implementations of single objective JES [36], GIBBON [61], and multi-objective JES [71], which are all available in the main BoTorch repository. All simulations are ran with 32 replicates and error bars represent \(\pm 2\) times the standard error of the mean. We use a Matern-5/2 kernel with automatic relevance determination (ARD), i.e. separate length-scales for each input dimension, and a top-hat prior on the length-scales in \([0.01,100]\). The input spaces are normalized to the unit hyper-cube and the objective values are standardized during each optimization iteration.
### Additional Empirical Results on Vanishing Values and Gradients
The left plot of Figure 1 in the main text shows that for a large fraction of points across the domain the gradients of EI are numerically essentially zero. In this section we provide additional detail on these simulations as well as intuition for results.
The data generating process (DGP) for the training data used for the left plot of Figure 1 is the following: 80% of training points are sampled uniformly at random from the domain, while 20% are sampled according to a multivariate Gaussian centered at the function maximum with a standard deviation of 25% of the length of the domain. The idea behind this DGP is to mimic the kind of data one would see during a Bayesian Optimization loop (without having to run thousands of BO loops to generate the Figure 1). Under this DGP with the chosen test problem, the incumbent (best observed point) is typically better than the values at the random test locations, and this becomes increasingly the case as the dimensionality of the problem increases and the number of training points grows. This is exactly the situation that is typical when conducting Bayesian Optimization.
For a particular replicate, Figure 15 shows the model fits in-sample (black), out-of-sample (blue), and the best point identified so far (red) with 60 training points and a random subset of 50 (out of 2000) test points. One can see that the model produces decent mean predictions for out-of-sample data, and that the uncertainty estimates appear reasonably well-calibrated (e.g., the credible intervals typically cover the true value). A practitioner would consider this a good model for the purposes of Bayesian Optimization. However, while there is ample uncertainty in the predictions of the model away from the training points, for the vast majority of points, the mean prediction is many standard deviations away from the incumbent value (the error bars are \(\pm\) 2 standard deviations). This is the key reason for EI taking on zero (or vanishingly small) values and having vanishing gradients.
Figure 16: Histogram of \(z(x)\), the argument to \(h\) in (2), corresponding to Figure 15. Vertical lines are the thresholds corresponding to values \(z\) below which \(h(z)\) is less than the respective threshold. The majority of the test points fall below these threshold values.
Figure 15: Model fits for a typical replicate used in generating the Fig. 1 (left). While there is ample uncertainty in the test point predictions (blue, chosen uniformly at random), the mean prediction for the majority of points is many standard deviations away from the incumbent value (red).
To illustrate this, Figure 16 shows the histogram of \(z(x)\) values, the argument to the function \(h\) in (2). It also contains the thresholds corresponding to the values \(z\) below which \(h(z)\) is less than the respective threshold. Since \(\sigma(x)\) is close to 1 for most test points (mean: 0.87, std: 0.07), this is more or less the same as saying that EI(\(z(x)\)) is less than the threshold. It is evident from the histogram that the majority of the test points fall below these threshold values (especially for larger thresholds), showing that the associated acquisition function values (and similarly the gradients) are numerically almost zero and causing issues during acquisition function optimization.
### Parallel Expected Improvement
Figure 17 reports optimization performance of parallel BO on the 16-dimensional Ackley and Levy functions for both sequential greedy and joint batch optimization. Besides the apparent substantial advantages of qLogEI over qEI on Ackley, a key observation here is that jointly optimizing the candidates of batch acquisition functions can yield highly competitive optimization performance, especially as the batch size increases. Notably, joint optimization of the batch with qLogEI starts out performing worse in terms of BO performance than sequential on the Levy function, but outperforms all sequential methods as the batch size increases. See also Figure 18 for the scaling of each method with respect to the batch size \(q\).
Figure 17: Parallel optimization performance on the Ackley and Levy functions in 16 dimensions. qLogEI outperforms all baselines on Ackley, where joint optimization of the batch also improves on the sequential greedy. On Levy, joint optimization of the batch with qLogEI starts out performing worse in terms of BO performance than sequential, but wins out over all sequential methods as the batch size increases.
Figure 18: Breakdown of the parallel optimization performance of Figure 17 per method, rather than per batch size. On Levy, qLogEI exhibits a much smaller deterioration in BO performance due to increases in parallelism than the methods relying on sequentially optimized batches of candidates.
### Noisy Expected Improvement
Figure 19 benchmarks the "noisy" variant, qLogNEI. Similar to the noiseless case, the advantage of the LogEI versions over the canonical counterparts grows as with the dimensionality of the problem, and the noisy version improves on the canonical versions for larger noise levels.
### Multi-Objective optimization with qLogEHVI
Figure 20 compares qLogEHVI and qEHVI on 6 different test problems with 2 or 3 objectives, and ranging from 2-30 dimensions. This includes 3 real world inspired problems: cell network design for optimizing coverage and capacity [19], laser plasma acceleration optimization [38], and vehicle design optimization [54, 68]. The results are consistent with our findings in the single-objective and constrained cases: qLogEHVI consistently outperforms qEHVI, and the gap is larger on higher dimensional problems.
### Combining LogEI with TuRB0 for High-Dimensional Bayesian Optimization
In the main text, we show how LogEI performs particularly well relative to other baselines in high dimensional spaces. Here, we show how LogEI can work synergistically with trust region based methods for high-dimensional BO, such as TuRBO [24].
Fig. 21 compares the performance of LogEI, TuRBO-1 + LogEI, TuRBO-1 + EI, as well as the original Thompson-sampling based implementation for the 50d Ackley test problem. Combining TuRBO-1 with LogEI results in substantially better performance than the baselines when using a small number of function evaluations. Thompson sampling (TS) ultimately performs better after \(10,000\) evaluations, but this experiment shows the promise of combining TuRBO and LogEI in settings where we cannot do thousands of function evaluations. Since we optimize batches of \(q=50\) candidates jointly, we also increase the number of Monte-Carlo samples from the Gaussian process
Figure 19: Optimization performance with noisy observations on Hartmann 6d (top), Ackley 8d (mid), and Ackley 16 (bottom) for varying noise levels and \(q=1\). We set the noise level as a proportion of the maximum range of the function, which is \(\approx 3.3\) for Hartmann and \(\approx 20\) for Ackley. That is, the \(1\%\) noise level corresponds to a standard deviation of \(0.2\) for Ackley. qLogNEI outperforms both canonical EI counterparts and Gibbon significantly in most cases, especially in higher dimensions.
from \(128\), the BoTorch default, to \(512\), and use the fat-tailed smooth approximations of Sec. A.4 to ensure a strong gradient signal to all candidates of the batch.
Regarding the ultimate out-performance of TS over qLogEI, we think this is due to a model misspecification since the smoothness of a GP with the Matern-5/2 kernel cannot express the non-differentiability of Ackley at the optimum.
### Constrained Problems
While running the benchmarks using CEI in section 5, we found that we in fact improved upon a best known result from the literature. We compare with the results in Coello and Montes [11], which are generated using 30 runs of **80,000 function evaluations** each.
Figure 21: Combining LogEI with TuRBO on the high-dimensional on the 50d Ackley problem yields significant improvement in objective value for a small number of function evaluations. Unlike for qEI, no random restarts are necessary to achieve good performance when performing joint optimization of the batch with qLogEI (\(q=50\)).
Figure 20: Sequential (\(q=1\)) optimization performance on multi-objective problems, as measured by the hypervolume of the Pareto frontier across observed points. This plot includes JES [71]. Similar to the single-objective case, qLogEHVI significantly outperforms all baselines on all test problems.
* For the pressure vessel design problem, Coello and Montes [11] quote a best-case feasible objective of \(6059.946341\). Out of just 16 different runs, LogEI achieves a worst-case feasible objective of \(5659.1108\)**after only 110 evaluations**, and a best case of \(5651.8862\), a notable reduction in objective value using almost three orders of magnitude fewer function evaluations.
* For the welded beam problem, Coello and Montes [11] quote \(1.728226\), whereas LogEI found a best case of \(1.7496\) after 110 evaluations, which is lightly worse, but we stress that this is using three orders of magnitude fewer evaluations.
* For the tension-compression problem, LogEI found a feasible solution with value \(0.0129\) after 110 evaluations compared to the \(0.012681\) reported in in [11].
We emphasize that genetic algorithms and BO are generally concerned with distinct problem classes: BO focuses heavily on sample efficiency and the small-data regime, while genetic algorithms often utilize a substantially larger number of function evaluations. The results here show that in this case BO is competitive with and can even outperform a genetic algorithm, using only a tiny fraction of the sample budget, see App. D.7 for details. Sample efficiency is particularly relevant for physical simulators whose evaluation takes significant computational effort, often rendering several tens of thousands of evaluations infeasible.
### Parallel Bayesian Optimization with cross-batch constraints
In some parallel Bayesian optimization settings, batch optimization is subject to non-trivial constraints across the batch elements. A natural example for this are budget constraints. For instance, in the context of experimental material science, consider the case where each manufactured compound requires a certain amount of different materials (as described by its parameters), but there is only a fixed total amount of material available (e.g., because the stock is limited due to cost and/or storage capacity). In such a situation, batch generation will be subject to a budget constraint that is not separable across the elements of the batch. Importantly, in that case sequential greedy batch generation is not an option since it is not able to incorporate the budget constraint. Therefore, joint batch optimization is required.
Here we give one such example in the context of Bayesian Optimization for sequential experimental design. We consider the five-dimensional silver nanoparticle flow synthesis problem from Liang et al. [53]. In this problem, to goal is to optimize the absorbance spectrum score of the synthesized nanoparticles over five parameters: four flow rate ratios of different components (silver, silver nitrate, trisodium citrate, polyvinyl alcohol) and a total flow rate \(Q_{tot}\).
The original problem was optimized over a discrete set of parameterizations. For our purposes we created a continuous surrogate model based on the experimental dataset (available from [https://github.com/PV-Lab/Benchmarking](https://github.com/PV-Lab/Benchmarking)) by fitting an RBF interpolator (smoothing factor of 0.01) in scipy on the (negative) loss. We use the same search space as Liang et al. [53], but in addition to the box bounds on the parameters we also impose an additional constraint on the total flow rate \(Q_{tot}^{max}=2000\) mL/min across the batch: \(\sum_{i=1}^{q}Q_{tot}^{i}\leq Q_{tot}^{max}\) (the maximum flow rate per syringe / batch element is 1000mL/min). This constraint expresses the maximum throughput limit of the microfluidic experimentation setup. The result of this constraint is that we cannot consider the batch elements (in this case automated syringe pumps) have all elements of a batch of experiments operate in the high-flow regime at the same time.
In our experiment, we use a batch size of \(q=3\) and start the optimization from 5 randomly sampled points from the domain. We run 75 replicates with random initial conditions (shared across the different methods), error bars show \(\pm\) two times the standard error of the mean. Our baseline is uniform random sampling from the domain (we use a hit-and-run sampler to sample uniformly from the constraint polytope \(\sum_{i=1}^{q}Q_{tot}^{i}\leq Q_{tot}^{max}\)). We compare qEI vs. qLogEI, and for each of the two we evaluate (i) the version with the batch constraint imposed explicitly in the optimizer (the optimization in this case uses scipy's SLSQP solver), and (ii) a heuristic that first samples the total flow rates \(\{Q_{tot}^{i}\}_{i=1}^{q}\) uniformly from the constraint set, and then optimizes the acquisition function with the flow rates fixed to the sampled values.
The results in Figure 22 show that while both the heuristic ("random \(Q_{tot}\)") and the proper constrained optimization ("batch-constrained \(Q_{tot}\)") substantially outperform the purely random baseline, it requires uisng both LogEI _and_ proper constraints to achieve additional performance gains over the other 3 combinations. Importantly, this approach is only possible by performing joint optimization of the batch, which underlines the importance of qLogEI and its siblings being able to achieve superior joint batch optimization in settings like this.
### Details on Multi-Objective Problems
We consider a variety of multi-objective benchmark problems. We evaluate performance on three synthetic biobjective problems Branin-Currin (\(d=2\)) [8], ZDT1 (\(d=6\)) [83], and DTLZ2 (\(d=6\)) [17]. As described in 5, we also evaluated performance on three real world inspired problems. For the laser plasma acceleration problem, we used the public data available at Irshad et al. [39] to fit an independent GP surrogate model to each objective. We only queried te surrogate at the highest fidelity to create a single fidelity benchmark.
### Effect of Temperature Parameter
In Figure 23, we examine the effect of fixed \(\tau\) for the softplus operator on optimization performance. We find that smaller values typically work better.
### Effect of the initialization strategy
Packages and frameworks commonly utilize smart initialization heuristics to improve acquisition function optimization performance. In Figure 24, we compare simple random restart optimization, where initial points are selected uniformly at random, with BoTorch's default initialization strategy, which evaluates the acquisition function on a large number of points selected from a scrambled Sobol sequence, and selects \(n\) points at random via Boltzman sampling (e.g., sampling using probabilities computed by taking a softmax over the acquisition values [6]. Here we consider 1024 initial candidates. We find that the BoTorch initialization strategy improves regret for all cases, and that qLogEI, followed by UCB show less sensitivity to the choice of initializations strategy. Figure25 examines the sensitivity of qEI to the number of initial starting points when performing standard random restart optimization and jointly optimizing the \(q\) points in the batch. We find that, consistent with our empirical and theoretical results in the main text, qEI often gets stuck in local minima for the Ackley test function, and additional random restarts often improve results but do not compensate for the fundamental optimality gap. The performance of qLogEI also improves as the number of starting points increases.
Figure 22: Optimization results on the nanomaterial synthesis material science problem with cross-batch constraints. While qLogEI outperforms qEI under the proper constrained (“batch-constrained \(Q_{tot}\)”) optimization, this is not the case for the the heuristic (“random \(Q_{tot}\)”), demonstrating the value of both joint batch optimization with constraints and LogEI.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & Cell Network & Branin-Currin & DTLZ2 & Laser Plasma & ZDT1 & Vehicle Safety \\ \hline JES & 21.6 (+/- 1.1) & 89.6 (+/- 3.3) & 33.6 (+/- 1.0) & 57.3 (+/- 0.7) & 72.7 (+/- 1.0) & 47.0 (+/- 1.6) \\ qEHVI & 0.6 (+/- 0.0) & 0.7 (+/- 0.0) & 1.0 (+/- 0.0) & 3.0 (+/- 0.1) & 0.6 (+/- 0.0) & 0.6 (+/- 0.0) \\ qLogEHVI & 9.2 (+/- 0.8) & 10.0 (+/- 0.4) & 5.8 (+/- 0.2) & 31.6 (+/- 1.7) & 7.2 (+/- 0.7) & 2.1 (+/- 0.1) \\ Rand & 0.2 (+/- 0.0) & 0.2 (+/- 0.0) & 0.2 (+/- 0.0) & 0.3 (+/- 0.0) & 0.3 (+/- 0.0) & 0.3 (+/- 0.0) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Multi-objectve acquisition function optimization wall time in seconds on CPU (2x Intel Xeon E5-2680 v4 @ 2.40GHz). We report the mean and \(\pm\) 2 standard errors.
Figure 23: Ablation study on the convergence characteristics of LogEI on Ackley and sum of squares (SOS) problems in 2 and 16 dimensions. The study shows that it is important to choose a small \(\tau_{0}\) for the best convergence properties, which results in a very tight approximation to the original ReLU non-linearity in the integrand. Critically, setting \(\tau_{0}\) as low as \(10^{-6}\) is only possible due to the transformation of all computations into log-space. Otherwise, the smoothed acquisition utility would exhibit similarly numerically vanishing gradients as the original ReLU non-linearity.
Figure 24: Sensitivity to the initialization strategy. Random selects random restart points from the design space uniformly at random, whereas Boltzmann initialization is the default BoTorch initialization strategy which selects points with higher acquisition function values with a higher probability via Boltzmann sampling.
Figure 25: Sensitivity to number of starting points with multi-start optimization for the 16D Ackley and Levy test problems. Note: We plot negative regret, so higher is better. | ## Review
### Summary
This paper addresses a critical issue in Bayesian optimization related to the Expected Improvement (EI) acquisition function by identifying and proposing solutions to numerical pathologies such as vanishing gradients. The authors introduce LogEI, a reformulated acquisition function that not only mitigates these issues but also maintains performance comparable to state-of-the-art methods. Extensive theoretical analysis and empirical benchmarks demonstrate LogEI's superior performance and robustness across various settings, including constrained, parallel, and multi-objective optimization. Overall, the work presents a significant advancement in making acquisition functions more reliable for practical applications in fields like hyperparameter tuning and materials science.
### Strengths
- Well-written and organized manuscript.
- Addresses a previously neglected aspect of Bayesian optimization, focusing on the optimization of acquisition functions.
- The proposed LogEI and its variants show significant improvements over the canonical EI function with minimal computational overhead.
- Extensive numerical experiments and ablation studies support all claims made in the paper.
- The paper emphasizes the interaction between acquisition function design and optimization algorithms, highlighting its practical implications.
### Weaknesses
- The proposed solution primarily addresses acquisition functions with vanishing gradient issues, limiting its applicability.
- The similarity between LogEI and EI is not thoroughly supported by evidence, warranting further clarification.
- The paper lacks discussion on existing LogEI implementations and related work, which could enhance its novelty.
- The justification for LogEI's importance in high-dimensional problems is not convincingly presented.
- Minor concerns include a lack of attention to noisy tasks and conventional benchmarks.
### Questions
- What scenarios exist where LogEI cannot replace EI or may perform worse?
- Have the authors tested LogEI in non-continuous search spaces, and what were the outcomes?
- Can the authors elaborate on the expected performance of batch acquisition optimization under strong locality constraints?
### Soundness
**Score:** 4
**Description:** 4 = excellent; the theoretical foundations and empirical results are robust and well-supported.
### Presentation
**Score:** 4
**Description:** 4 = excellent; the manuscript is well-structured, clear, and effectively communicates complex ideas.
### Contribution
**Score:** 4
**Description:** 4 = excellent; the paper presents novel insights into acquisition function optimization, with significant potential impact on the field.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically sound with high impact potential but could benefit from additional clarity and context.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper is original and provides significant contributions to the field of Bayesian optimization by addressing critical numerical issues associated with the EI acquisition function. Its soundness, presentation, and contribution scores reflect a strong manuscript that, while needing some minor clarifications, is likely to have a substantial impact on both academic research and practical applications.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Rethinking Bias Mitigation: Fairer Architectures
Make for Fairer Face Recognition
Samuel Dooley\({}^{*}\)
University of Maryland, Abacus.AI
[email protected] &Rhea Sanjay Sukthanker\({}^{*}\)
University of Freiburg
[email protected] &John P. Dickerson
University of Maryland, Arthur AI
[email protected] &Colin White
Caltech, Abacus.AI
[email protected] &Frank Hutter
University of Freiburg
[email protected] &Micah Goldblum
New York University
[email protected]
# indicates equal contribution
###### Abstract
Face recognition systems are widely deployed in safety-critical applications, including law enforcement, yet they exhibit bias across a range of socio-demographic dimensions, such as gender and race. Conventional wisdom dictates that model biases arise from biased training data. As a consequence, previous works on bias mitigation largely focused on pre-processing the training data, adding penalties to prevent bias from effecting the model during training, or post-processing predictions to debias them, yet these approaches have shown limited success on hard problems such as face recognition. In our work, we discover that biases are actually inherent to neural network architectures themselves. Following this reframing, we conduct the first neural architecture search for fairness, jointly with a search for hyperparameters. Our search outputs a suite of models which Pareto-dominate all other high-performance architectures and existing bias mitigation methods in terms of accuracy and fairness, often by large margins, on the two most widely used datasets for face identification, CelebA and VGGFace2. Furthermore, these models generalize to other datasets and sensitive attributes. We release our code, models and raw data files at [https://github.com/dooleys/FR-NAS](https://github.com/dooleys/FR-NAS).
## 1 Introduction
Machine learning is applied to a wide variety of socially-consequential domains, e.g., credit scoring, fraud detection, hiring decisions, criminal recidivism, loan repayment, and face recognition [78, 81, 61, 3], with many of these applications significantly impacting people's lives, often in discriminatory ways [5, 55, 114]. Dozens of formal definitions of fairness have been proposed [80], and many algorithmic techniques have been developed for debiasing according to these definitions [106]. Existing debiasing algorithms broadly fit into three (or arguably four [96]) categories: pre-processing [e.g., 32, 93, 89, 110], in-processing [e.g., 123, 124, 25, 35, 83, 110, 73, 79, 24, 59], or post-processing [e.g., 44, 114].
Conventional wisdom is that in order to effectively mitigate bias, we should start by selecting a model architecture and set of hyperparameters which are optimal in terms of accuracy and then apply a mitigation strategy to reduce bias. This strategy has yielded little success in hard problems such as face recognition [14]. Moreover, even randomly initialized face recognition models exhibit bias and in the same ways and extents as trained models, indicating that these biases are baked in to the architectures already [13]. While existing methods for debiasing machine learning systems use a fixed neural architecture and hyperparameter setting, we instead, ask a fundamental question which has received little attention: _Does model bias arise from the architecture and hyperparameters?_ Following an affirmative answer to this question, we exploit advances in neural architecture search (NAS) [30] and hyperparameter optimization (HPO) [33] to search for inherently fair models.
We demonstrate our results on face identification systems where pre-, post-, and in-processing techniques have fallen short of debiasing face recognition systems. Training fair models in this setting demands addressing several technical challenges [14]. Face identification is a type of face recognition deployed worldwide by government agencies for tasks including surveillance, employment, and housing decisions. Face recognition systems exhibit disparity in accuracy based on race and gender [37, 92, 91, 61]. For example, some face recognition models are 10 to 100 times more likely to give false positives for Black or Asian people, compared to white people [2]. This bias has already led to multiple false arrests and jail time for innocent Black men in the USA [48].
In this work, we begin by conducting the first large-scale analysis of the impact of architectures and hyperparameters on bias. We train a diverse set of 29 architectures, ranging from ResNets [47] to vision transformers [28, 68] to Gluon Inception V3 [103] to MobileNetV3 [50] on the two most widely used datasets in face identification that have socio-demographic labels: CelebA [69] and VGGFace2 [8]. In doing so, we discover that architectures and hyperparameters have a significant impact on fairness, across fairness definitions.
Motivated by this discovery, we design architectures that are simultaneously fair and accurate. To this end, we initiate the study of NAS for fairness by conducting the first use of NAS+HPO to jointly optimize fairness and accuracy. We construct a search space informed by the highest-performing architecture from our large-scale analysis, and we adapt the existing Sequential Model-based Algorithm Configuration method (SMAC) [66] for multi-objective architecture and hyperparameter search. We discover a Pareto frontier of face recognition models that outperform existing state-of-the-art models on both test accuracy and multiple fairness metrics, often by large margins. An outline of our methodology can be found in Figure 1.
We summarize our primary contributions below:
* By conducting an exhaustive evaluation of architectures and hyperparameters, we uncover their strong influence on fairness. Bias is inherent to a model's inductive bias, leading to a substantial difference in fairness across different architectures. We conclude that the implicit convention of choosing standard architectures designed for high accuracy is a losing strategy for fairness.
* Inspired by these findings, we propose a new way to mitigate biases. We build an architecture and hyperparameter search space, and we apply existing tools from NAS and HPO to automatically design a fair face recognition system.
* Our approach finds architectures which are Pareto-optimal on a variety of fairness metrics on both CelebA and VGGFace2. Moreover, our approach is Pareto-optimal compared to other previous bias mitigation techniques, finding the fairest model.
Figure 1: Overview of our methodology.
* The architectures we synthesize via NAS and HPO generalize to other datasets and sensitive attributes. Notably, these architectures also reduce the linear separability of protected attributes, indicating their effectiveness in mitigating bias across different contexts.
We release our code and raw results at [https://github.com/dooleys/FR-NAS](https://github.com/dooleys/FR-NAS), so that users can easily adapt our approach to any bias metric or dataset.
## 2 Background and Related Work
Face Identification.Face recognition tasks can be broadly categorized into two distinct categories: _verification_ and _identification_. Our specific focus lies in face _identification_ tasks which ask whether a given person in a source image appears within a gallery composed of many target identities and their associated images; this is a one-to-many comparison. Novel techniques in face recognition tasks, such as ArcFace [108], CosFace [23], and MagFace [75], use deep networks (often called the _backbone_) to extract feature representations of faces and then compare those to match individuals (with mechanisms called the _head_). Generally, _backbones_ take the form of image feature extractors and _heads_ resemble MLPs with specialized loss functions. Often, the term "head" refers to both the last layer of the network and the loss function. Our analysis primarily centers around the face identification task, and we focus our evaluation on examining how close images of similar identities are in the feature space of trained models, since the technology relies on this feature representation to differentiate individuals. An overview of these topics can be found in Wang and Deng [109].
Bias Mitigation in Face Recognition.The existence of differential performance of face recognition on population groups and subgroups has been explored in a variety of settings. Earlier work [e.g., 57, 82] focuses on single-demographic effects (specifically, race and gender) in pre-deep-learning face detection and recognition. Buolamwini and Gebru [5] uncover unequal performance at the phenotypic subgroup level in, specifically, a gender classification task powered by commercial systems. Raji and Buolamwini [90] provide a follow-up analysis - exploring the impact of the public disclosures of Buolamwini and Gebru [5] - where they discovered that named companies (IBM, Microsoft, and Megvij) updated their APIs within a year to address some concerns that had surfaced. Further research continues to show that commercial face recognition systems still have socio-demographic disparities in many complex and pernicious ways [29, 27, 54, 54, 26].
Facial recognition is a large and complex space with many different individual technologies, some with bias mitigation strategies designed just for them [63, 118]. The main bias mitigation strategies for facial identification are described in Section 4.2.
Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO).Deep learning derives its success from the manually designed feature extractors which automate the feature engineering process. Neural Architecture Search (NAS) [30, 116], on the other hand, aims at automating the very design of network architectures for a task at hand. NAS can be seen as a subset of HPO [33], which refers to the automated search for optimal hyperparameters, such as learning rate, batch size, dropout, loss function, optimizer, and architectural choices. Rapid and extensive research on NAS for image classification and object detection has been witnessed as of late [67, 125, 121, 88, 6]. Deploying NAS techniques in face recognition systems has also seen a growing interest [129, 113]. For example, reinforcement learning-based NAS strategies [121] and one-shot NAS methods [113] have been deployed to search for an efficient architecture for face recognition with low _error_. However, in a majority of these methods, the training hyperparameters for the architectures are _fixed_. We observe that this practice should be reconsidered in order to obtain the fairest possible face recognition systems. Moreover, one-shot NAS methods have also been applied for multi-objective optimization [39, 7], e.g., optimizing accuracy and parameter size. However, none of these methods can be applied for a joint architecture and hyperparameter search, and none of them have been used to optimize _fairness_.
For the case of tabular datasets, a few works have applied hyperparameter optimization to mitigate bias in models. Perrone et al. [87] introduced a Bayesian optimization framework to optimize accuracy of models while satisfying a bias constraint. Schmucker et al. [97] and Cruz et al. [17] extended Hyperband [64] to the multi-objective setting and showed its applications to fairness. Lin et al. [65] proposed de-biasing face recognition models through model pruning. However, they only considered two architectures and just one set of fixed hyperparameters. To the best of our knowledge,no prior work uses any AutoML technique (NAS, HPO, or joint NAS and HPO) to design fair face recognition models, and no prior work uses NAS to design fair models for any application.
## 3 Are Architectures and Hyperparameters Important for Fairness?
In this section, we study the question _"Are architectures and hyperparameters important for fairness?"_ and report an extensive exploration of the effect of model architectures and hyperparameters.
Experimental Setup.We train and evaluate each model configuration on a gender-balanced subset of the two most popular face identification datasets: CelebA and VGGFace2. CelebA [69] is a large-scale face attributes dataset with more than 200K celebrity images and a total of 10 177 gender-labeled identities. VGGFace2 [8] is a much larger dataset designed specifically for face identification and comprises over 3.1 million images and a total of 9 131 gender-labeled identities. While this work analyzes phenotypic metadata (perceived gender), the reader should not interpret our findings absent a social lens of what these demographic groups mean inside society. We guide the reader to Hamidi et al. [40] and Keyes [56] for a look at these concepts for gender.
To study the importance of architectures and hyperparameters for fairness, we use the following training pipeline - ultimately conducting 355 training runs with different combinations of 29 architectures from the Pytorch Image Model (timm) database [117] and hyperparameters. For each model, we use the default learning rate and optimizer that was published with that model. We then train the model with these hyperparameters for each of three heads, ArcFace [108], CosFace [23], and MagFace [75]. Next, we use the model's default learning rate with both AdamW [70] and SGD optimizers (again with each head choice). Finally, we also train with AdamW and SGD with unified learning rates (SGD with learning_rate=0.1 and AdamW with learning_rate=0.001). In total, we thus evaluate a single architecture between 9 and 13 times (9 times if the default optimizer and learning rates are the same as the standardized, and 13 times otherwise). All other hyperparameters are held constant fortraining of the model.
Evaluation procedure.As is commonplace in face identification tasks [12, 13], we evaluate the performance of the learned representations. Recall that face recognition models usually learn representations with an image backbone and then learn a mapping from those representations onto identities of individuals with the head of the model. We pass each test image through a trained model and save the learned representation. To compute the representation error (which we will henceforth simply refer to as _Error_), we merely ask, for a given probe image/identity, whether the closest image in feature space is _not_ of the same person based on \(l_{2}\) distance. We split each dataset into train, validation, and test sets. We conduct our search for novel architectures using the train and validation splits, and then show the improvement of our model on the test set.
The most widely used fairness metric in face identification is _rank disparity_, which is explored in the NIST FRVT [38]. To compute the rank of a given image/identity, we ask how many images of a different identity are closer to the image in feature space. We define this index as the rank of a given image under consideration. Thus, \(\text{Rank(image)}=0\) if and only if \(\text{Error(image)}=0\); \(\text{Rank(image)}>0\) if and only if \(\text{Error(image)}=1\). We examine the **rank disparity**: the absolute difference of the average ranks for each perceived gender in a dataset \(\mathcal{D}\):
\[\bigg{|}\frac{1}{|\mathcal{D}_{\text{male}}|}\sum_{x\in\mathcal{D}_{\text{ male}}}\text{Rank }(x)-\frac{1}{|\mathcal{D}_{\text{female}}|}\sum_{x\in\mathcal{D}_{\text{ female}}}\text{Rank}(x)\bigg{|}. \tag{1}\]
We focus on rank disparity throughout the main body of this paper as it is the most widely used in face identification, but we explore other forms of fairness metrics in face recognition in Appendix C.4.
Results and Discussion.By plotting the performance of each training run on the validation set with the error on the \(x\)-axis and rank disparity on the \(y\)-axis in Figure 2, we can easily conclude two main points. First, optimizing for error does not always optimize for fairness, and second, different architectures have different fairness properties. We also find the DPN architecture has the lowest error and is Pareto-optimal on both datasets; hence, we use that architecture to design our search space in Section 4.
We note that in general there is a low correlation between error and rank disparity (e.g., for models with error < 0.3, \(\rho=.113\) for CelebA and \(\rho=.291\) for VGGFace2). However, there are differences between the two datasets at the most extreme low errors. First, for VGGFace2, the baseline models already have very low error, with there being 10 models with error < 0.05; CelebA only has three such models. Additionally, models with low error also have low rank disparity on VGGFace2 but this is not the case for CelebA. This can be seen by looking at the Pareto curves in Figure 2.
The Pareto-optimal models also differ across datasets: on CelebA, they are versions of DPN, TNT, ReXNet, VovNet, and ResNets, whereas on VGGFace2 they are DPN and ReXNet. Finally, we note that different architectures exhibit different optimal hyperparameters. For example, on CelebA, for the Xception65 architecture finds the combinations of (SGD, ArcFace) and (AdamW, ArcFace) as Pareto-optimal, whereas the Inception-ResNet architecture finds the combinations (SGD, MagFace) and (SGD, CosFace) Pareto-optimal.
## 4 Neural Architecture Search for Bias Mitigation
Inspired by our findings on the importance of architecture and hyperparameters for fairness in Section 3, we now initiate the first joint study of NAS for fairness in face recognition, also simultaneously optimizing hyperparameters. We start by describing our search space and search strategy. We then compare the results of our NAS+HPO-based bias mitigation strategy against other popular face recognition bias mitigation strategies. We conclude that our strategy indeed discovers simultaneously accurate and fair architectures.
### Search Space Design and Search Strategy
We design our search space based on our analysis in Section 3, specifically around the Dual Path Networks[10] architecture which has the lowest error and is Pareto-optimal on both datasets, yielding the best trade-off between rank disparity and accuracy as seen in Figure 2.
Hyperparameter Search Space Design.We optimize two categorical hyperparameters (the architecture head/loss and the optimizer) and one continuous one (the learning rate). The learning rate's range is conditional on the choice of optimizer; the exact ranges are listed in Table 6 in the appendix.
Architecture Search Space Design.Dual Path Networks [10] for image classification share common features (like ResNets [46]) while possessing the flexibility to explore new features [52] through a dual path architecture. We replace the repeating 1x1_conv-3x3_conv-1x1_conv block with a simple recurring searchable block. Furthermore, we stack multiple such searched blocks to closely follow the architecture of Dual Path Networks. We have nine possible choices for each of the three operations in the DPN block, each of which we give a number 0 through 8. The choices include a vanilla convolution, a convolution with pre-normalization and a convolution with post-normalization, each of them paired with kernel sizes 1\(\times\)1, 3\(\times\)3, or 5\(\times\)5 (see Appendix C.2 for full details). We thus have 729 possible architectures (in addition to an infinite number of hyperparameter configurations). We denote each of these architectures by XYZ where \(X,Y,Z\in\{0,\dots,8\}\); e.g., architecture 180 represents the architecture which has operation 1, followed by operation 8, followed by operation 0.
Figure 2: (Left) CelebA (Right) VGGFace2. Error-Rank Disparity Pareto front of the architectures with lowest error (< 0.3). Models in the lower left corner are better. The Pareto front is denoted with a dashed line. Other points are architecture and hyperparameter combinations which are not Pareto-optimal.
Search strategy.To navigate this search space we have the following desiderata:
* **Joint NAS+HPO.** Since there are interaction effects between architectures and hyperparameters, we require an approach that can jointly optimize both of these.
* **Multi-objective optimization.** We want to explore the trade-off between the accuracy of the face recognition system and the fairness objective of choice, so our joint NAS+HPO algorithm needs to supports multi-objective optimization [84; 21; 71].
* **Efficiency.** A single function evaluation for our problem corresponds to training a deep neural network on a given dataset. As this can be quite expensive on large datasets, we would like to use cheaper approximations with multi-fidelity optimization techniques [98; 64; 31].
To satisfy these desiderata, we employ the multi-fidelity Bayesian optimization method SMAC3 [66] (using the SMAC4MF facade), casting architectural choices as additional hyperparameters. We choose Hyperband [64] for cheaper approximations with the initial and maximum fidelities set to 25 and 100 epochs, respectively, and \(\eta=2\). Every architecture-hyperparameter configuration evaluation is trained using the same training pipeline as in Section 3. For multi-objective optimization, we use the ParEGO [21] algorithm with \(\rho\) set to 0.05.
### Empirical Evaluation
We now report the results of our NAS+HPO-based bias mitigation strategy. First, we discuss the models found with our approach, and then we compare their performance to other mitigation baselines.
Setup.We conducted one NAS+HPO search for each dataset by searching on the train and validation sets. After running these searches, we identified three new candidate architectures for CelebA (SMAC_000, SMAC_010, and SMAC_680), and one candidate for VGGFace2 (SMAC_301) where the naming convention follows that described in Section 4.1. We then retrained each of these models and those high performing models from Section 3 for three seeds to study the robustness of error and disparity for these models; we evaluated their performance on the validation and test sets for each dataset, where we follow the evaluation scheme of Section 3.
Comparison against timm models.On CelebA (Figure 3), our models Pareto-dominate all of the timm models with nontrivial accuracy on the validation set. On the test set, our models still Pareto-dominate all highly competitive models (with Error<0.1), but one of the original configurations (DPN with Magface) also becomes Pareto-optimal. However, the error of this architecture is 0.13, which is significantly higher than our models (0.03-0.04). Also, some models (e.g., VoVNet and DenseNet) show very large standard errors across seeds. Hence, it becomes important to also study
Figure 3: Pareto front of the models discovered by SMAC and the rank-1 models from timm for the _(a)_ validation and _(b)_ test sets on CelebA. Each point corresponds to the mean and standard error of an architecture after training for 3 seeds. The SMAC models Pareto-dominate the top performing timm models (\(Error<0.1\)).
the robustness of models across seeds along with the accuracy and disparity Pareto front. Finally, on VGGFace2 (Figure 4), our models are also Pareto-optimal for both the validation and test sets.
Novel Architectures Outperform the State of the Art.Comparing the results of our automatically-found models to the current state of the art baseline ArcFace [23] in terms of error demonstrates that our strategy clearly establishes a new state of the art. While ArcFace [23] achieves an error of 4.35% with our training pipeline on CelebA, our best-performing novel architecture achieves a much lower error of 3.10%. Similarly, the current VGGFace2 state of the art baseline [112] achieves an error of 4.5%, whereas our best performing novel architecture achieves a much lower error of 3.66%.
Novel Architectures Pareto-Dominate other Bias Mitigation Strategies.There are three common pre-, post-, and in-processing bias mitigation strategies in face identification. First, Chang et al. [9] demonstrated that randomly flipping labels in the training data of the subgroup with superior accuracy can yield fairer systems; we call this technique Flipped. Next, Wang and Deng [110] use different angular margins during training and therefore promote better feature discrimination for the minority class; we call this technique Angular. Finally, Morales et al. [76] introduced SensitiveNets which is a sensitive information removal network trained on top of a pre-trained feature extractor with an adversarial sensitive regularizer. While other bias mitigation techniques exist in face recognition, these three are the most used and pertinent to _face identification_. See Cherepanova et al. [14] for an overview of the technical challenges of bias mitigation in face recognition. We take the top performing, Pareto-optimal timm models from the previous section and apply the three bias mitigation techniques (Flipped, Angular, and SensitiveNets). We also apply these same techniques to the novel architectures that we found. The results in Table 1 show that the novel architectures from our NAS+HPO-based mitigation strategy Pareto-dominate the bias-mitigated models. In VGGFace2, the SMAC_301 model achieves the best performance, both in terms of error and fairness, compared to the bias-mitigated models. On CelebA, the same is true for the SMAC_680 model.
NAS+HPO-Based Bias Mitigation can be Combined with other Bias Mitigation Strategies.Additionally, we combined the three other bias mitigation methods with the SMAC models that resulted from our NAS+HPO-based bias mitigation strategy. More precisely, we first conducted our NAS+HPO approach and then applied the Flipped, Angular, and SensitiveNets approach afterwards. On both datasets, the resulting models continue to Pareto-dominate the other bias mitigation strategies used by themselves and ultimately yield the model with the lowest rank disparity of all the models (0.18 on VGGFace2 and 0.03 on CelebA). In particular, the bias improvement of SMAC_000+Flipped model is notable, achieving a score of 0.03 whereas the lowest rank disparity of any model from Figure 3 is 2.63, a 98.9% improvement. In Appendix C.6, we demonstrate that this result is robust to the fairness metric -- specifically our bias mitigation strategy Pareto-dominates the other approaches on all five fairness metrics.
Figure 4: Pareto front of the models discovered by SMAC and the rank-1 models from timm for the _(a)_ validation and _(b)_ test sets on VGGFace2. Each point corresponds to the mean and standard error of an architecture after training for 3 seeds. The SMAC models are Pareto-optimal the top performing timm models (Error<0.1).
Novel Architectures Generalize to Other Datasets.We observed that when transferring our novel architectures to other facial recognition datasets that focus on fairness-related aspects, our architectures consistently outperform other existing architectures by a significant margin. We take the state-of-the-art models from our experiments and test the weights from training on CelebA and VGGFace2 on different datasets which the models did not see during training. Specifically, we transfer the evaluation of the trained model weights from CelebA and VGGFace2 onto the following datasets: LFW [53], CFP_FF [100], CFP_FP [100], AgeDB [77], CALFW [128], CPLFW [127]. Table 2 demonstrates that our approach consistently achieves the highest performance among various architectures when transferred to other datasets. This finding indicates that our approach exhibits exceptional generalizability compared to state-of-the-art face recognition models in terms of transfer learning to diverse datasets.
Novel Architectures Generalize to Other Sensitive Attributes.The superiority of our novel architectures even goes beyond accuracy-related metrics when transferring to other datasets -- our novel architectures have superior fairness properties compared to the existing architectures _even on datasets which have completely different protected attributes than were used in the architecture search_. Specifically, to inspect the generalizability of our approach to other protected attributes, we transferred our models pre-trained on CelebA and VGGFace2 (which have a gender presentation category) to the RFW dataset [111] which includes a protected attribute for race and the AgeDB dataset [77] which includes a protected attribute for age. The results detailed in Appendix C.7 show that our novel architectures always outperforms the existing architectures, across all five fairness metrics studied in this work on both datasets.
Novel Architectures Have Less Linear-Separability of Protected Attributes.Our comprehensive evaluation of multiple face recognition benchmarks establishes the importance of architectures for fairness in face-recognition. However, it is natural to wonder: _"What makes the discovered architectures fair in the first place?_ To answer this question, we use linear probing to dissect the
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**Trained on VGGFace2**} & \multicolumn{4}{c}{**Trained on CelebA**} \\
**Model** & **Baseline** & **Flipped** & **Angular** & **SensitiveNets** & **Model** & **Baseline** & **Flipped** & **Angular** & **SensitiveNets** \\ \hline SMAC\_301 & **(3.66:0.23)** & **(4.95:0.18)** & (4.14:0.25) & (6.20:0.41) & SMAC\_000 & (3.25:2.18) & **(5.20:0.03)** & (3.45:2.28) & (3.45:2.18) \\ DPN & (3.56:0.27) & (5.87:0.32) & (6.06:0.36) & (4.76:0.34) & SMAC\_010 & (4.44:2.27) & (12.72:5.46) & (45.4:2.50) & (3.99:2.12) \\ REXNet & (4.69:0.27) & (5.73:0.45) & (5.47:0.26) & (4.75:0.25) & SMAC\_680 & **(3.21:9.16)** & (12.42:4.50) & (3.80:1.16) & (3.29:2.09) \\ Swin & (5.47:0.38) & (5.75:0.44) & (5.23:0.25) & (5.03:0.30) & ArcFace & (11.30:4.6) & (13.56:2.70) & (9.90:5.60) & (9.10:3.00) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of bias mitigation techniques where the SMAC models were found with our NAS+HPO bias mitigation technique and the other three techniques are standard in facial recognition: Flipped [9], Angular [76], and SensitiveNets [110]. Items in bold are Pareto-optimal. The values show (Error;Rank Disparity). Other metrics are reported in Appendix C.6 and Table 8.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Architecture (trained on VGGFace2)** & **LFW** & **CFP\_FF** & **CFP\_FP** & **AgeDB** & **CALFW** & **CPLFW** \\ \hline \hline & 82.60 & 80.91 & 65.51 & 59.18 & 68.23 & 62.15 \\ DPN\_SGD & 93.0 & 91.81 & 78.96 & 71.87 & 78.27 & 72.97 \\ DPN\_AdamW & 78.66 & 77.17 & 64.35 & 61.32 & 64.78 & 60.30 \\ SMAC\_301 & **96.63** & **95.10** & **86.63** & **79.97** & **86.07** & **81.43** \\ \hline \hline
**Architecture (trained on CelebA)** & **LFW** & **CFP\_FF** & **CFP\_FP** & **AgeDB** & **CALFW** & **CPLFW** \\ \hline DPN\_CosFace & 87.78 & 90.73 & 69.97 & 65.55 & 75.50 & 62.77 \\ DPN\_MagFace & 91.13 & 92.16 & 70.58 & 68.17 & 76.98 & 60.80 \\ SMAC\_000 & **94.98** & 95.60 & **74.24** & 80.23 & 84.73 & 64.22 \\ SMAC\_010 & 94.30 & 94.63 & 73.83 & **80.37** & 84.73 & **65.48** \\ SMAC\_680 & 94.16 & **95.68** & 72.67 & 79.88 & **84.78** & 63.96 \\ \hline \hline \end{tabular}
\end{table}
Table 2: We transfer the evaluation of top performing models on VGGFace2 and CelebA onto six other common face recognition datasets: LFW [53], CFP_FF [100], CFP_FP [100], AgeDB [77], CALFW [128], CPLPW [127]. The novel architectures found with our bias mitigation strategy significantly outperform other models in terms of accuracy. Refer Table 9 for the complete results.
intermediate features of our searched architectures and DPNs, which our search space is based upon. Intuitively, given that our networks are trained only on the task of face recognititon, we do not want the intermediate feature representations to implicitly exploit knowledge about protected attributes (e.g., gender). To this end we insert linear probes[1] at the last two layers of different Pareto-optimal DPNs and the model obtained by our NAS+HPO-based bias mitigation. Specifically, we train an MLP on the feature representations extracted from the pre-trained models and the protected attributes as labels and compute the gender-classification accuracy on a held-out set. We consider only the last two layers, so k assumes the values of \(N\) and \(N-1\) with \(N\) being the number of layers in DPNs (and the searched models). We represent the classification probabilities for the genders by \(gp_{k}=softmax(W_{k}+b)\), where \(W_{k}\) is the weight matrix of the \(k\)-th layer and \(b\) is a bias. We provide the classification accuracies for the different pre-trained models on VGGFace2 in Table 3. This demonstrates that, as desired, our searched architectures maintain a lower classification accuracy for the protected attribute. In line with this observation, in the t-SNE plots in Figure 18 in the appendix, the DPN displays a higher degree of separability of features.
Comparison between different NAS+HPO techniquesWe also perform an ablation across different multi-objective NAS+HPO techniques. Specifically we compare the architecture derived by SMAC with architectures derived by the evolutionary multi-objective optimization algorithm NSGA-II [22] and multi-objective asynchronous successive halving (MO-ASHA) [98]. We obesrv that the architecture derived by SMAC Pareto-dominates the other NAS methods in terms of accuracy and diverse fairness metrics Table 4. We use the implementation of NSGA-II and MO-ASHA from the syne-tune library [95] to perform an ablation across different baselines.
## 5 Conclusion, Future Work and Limitations
Conclusion.Our approach studies a novel direction for bias mitigation by altering network topology instead of loss functions or model parameters. We conduct the first large-scale analysis of the relationship among hyperparameters and architectural properties, and accuracy, bias, and disparity in predictions across large-scale datasets like CelebA and VGGFace2. Our bias mitigation technique centering around Neural Architecture Search and Hyperparameter Optimization is very competitive compared to other common bias mitigation techniques in facial recognition.
Our findings present a paradigm shift by challenging conventional practices and suggesting that seeking a fairer architecture through search is more advantageous than attempting to rectify an unfair one through adjustments. The architectures obtained by our joint NAS and HPO generalize across different face recognition benchmarks, different protected attributes, and exhibit lower linear-separability of protected attributes.
Future Work.Since our work lays the foundation for studying NAS+HPO for fairness, it opens up a plethora of opportunities for future work. We expect the future work in this direction to focus on
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**NAS Method** & **Accuracy \(\uparrow\)** & **Rank Disparity\(\downarrow\)** & **Disparity\(\downarrow\)** & **Ratio\(\downarrow\)** & **Rank Ratio \(\downarrow\)** & **Error Ratio\(\downarrow\)** \\ \hline MO-ASHA\_108 & 95.212 & 0.408 & 0.038 & 0.041 & 0.470 & 0.572 \\ \hline NSGA-IL\_728 & 86.811 & 0.599 & 0.086 & 0.104 & 0.490 & **0.491** \\ \hline SMAC\_301 & **96.337** & **0.230** & **0.030** & **0.032** & **0.367** & 0.582 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison between architectures derived by SMAC and other NAS baselines
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Architecture (trained on VGGFace2)** & **Accuracy on Layer N \(\downarrow\)** & **Accuracy on Layer N-1 \(\downarrow\)** \\ \hline DPN\_MagFace\_SGD & 86.042\% & 95.461\% \\ DPN\_CosFace\_SGD & 90.719\% & 93.787\% \\ DPN\_CosFace\_AdamW & 87.385\% & 94.444\% \\ SMAC\_301 & **69.980\%** & **68.240\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Linear Probes on architectures. Lower gender classification accuracy is better studying different multi-objective algorithms [34, 60] and NAS techniques [67, 125, 115] to search for inherently fairer models. Further, it would be interesting to study how the properties of the architectures discovered translate across different demographics and populations. Another potential avenue for future work is incorporating priors and beliefs about fairness in the society from experts to further improve and aid NAS+HPO methods for fairness. Given the societal importance, it would be interesting to study how our findings translate to real-life face recognition systems under deployment. Finally, it would also be interesting to study the degree to which NAS+HPO can serve as a general bias mitigation strategy beyond the case of facial recognition.
Limitations.While our work is a step forward in both studying the relationship among architectures, hyperparameters, and bias, and in using NAS techniques to mitigate bias in face recognition models, there are important limitations to keep in mind. Since we only studied a few datasets, our results may not generalize to other datasets and fairness metrics. Second, since face recognition applications span government surveillance [49], target identification from drones [72], and identification in personal photo repositories [36], our findings need to be studied thoroughly across different demographics before they could be deployed in real-life face recognition systems. Furthermore, it is important to consider how the mathematical notions of fairness used in research translate to those actually impacted [94], which is a broad concept without a concise definition. Before deploying a particular system that is meant to improve fairness in a real-life application, we should always critically ask ourselves whether doing so would indeed prove beneficial to those impacted by the given sociotechnical system under consideration or whether it falls into one of the traps described by Selbst et al. [99]. Additionally, work in bias mitigation, writ-large and including our work, can be certainly encourage techno-solutionism which views the reduction of statistical bias from algorithms as a justification for their deployment, use, and proliferation. This of course can have benefits, but being able to reduce the bias in a technical system is a _different question_ from whether a technical solution _should_ be used on a given problem. We caution that our work should not be interpreted through a normative lens on the appropriateness of using facial recognition technology.
In contrast to some other works, we do, however, feel, that our work helps to overcome the portability trap [99] since it empowers domain experts to optimize for the right fairness metric, in connection with public policy experts, for the problem at hand rather than only narrowly optimizing one specific metric. Additionally, the bias mitigation strategy which we propose here can be used in other domains and applied to applications which have more widespread and socially acceptable algorithmic applications [19].
#### Acknowledgments
This research was partially supported by the following sources: NSF CAREER Award IIS-1846237, NSF D-ISN Award #2039862, NSF Award CCF-1852352, NIH R01 Award NLM-013039-01, NIST MSE Award #20126334, DARPA GARD #HR00112020007, DoD WHS Award #HQ003420F0035, ARPA-E Award #4334192; TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA No 952215; the German Federal Ministry of Education and Research (BMBF, grant RenormalizedFlows 01IS19077C); the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828; the European Research Council (ERC) Consolidator Grant "Deep Learning 2.0" (grant no. 101045765). Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the ERC. Neither the European Union nor the ERC can be held responsible for them.
## References
* [1] Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. _arXiv preprint arXiv:1610.01644_, 2016.
* [2] Bobby Allyn. 'The computer got it wrong': How facial recognition led to false arrest of black man. _NPR, June_, 24, 2020.
* Barocas et al. [2017] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. _NIPS Tutorial_, 2017.
* Bello et al. [2021] Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, and Barret Zoph. Revisiting resnets: Improved training and scaling strategies. _Advances in Neural Information Processing Systems_, 34:22614-22627, 2021.
* Buolamwini and Gebru [2018] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In _Proceedings of the 1st Conference on Fairness, Accountability and Transparency_, volume 81, pages 77-91, 2018. URL [http://proceedings.mlr.press/v81/buolamwini18a.html](http://proceedings.mlr.press/v81/buolamwini18a.html).
* Cai et al. [2018] Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. _arXiv preprint arXiv:1812.00332_, 2018.
* Cai et al. [2019] Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. _arXiv preprint arXiv:1908.09791_, 2019.
* Cao et al. [2018] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In _2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018)_, pages 67-74. IEEE, 2018.
* Chang et al. [2020] Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri. On adversarial bias and the robustness of fair machine learning. _arXiv preprint arXiv:2006.08669_, 2020.
* Chen et al. [2017] Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, and Jiashi Feng. Dual path networks. _Advances in neural information processing systems_, 30, 2017.
* Chen et al. [2021] Zhengsu Chen, Lingxi Xie, Jianwei Niu, Xuefeng Liu, Longhui Wei, and Qi Tian. Visformer: The vision-friendly transformer. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 589-598, 2021.
* Cherepanova et al. [2021] Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John P. Dickerson, Gavin Taylor, and Tom Goldstein. Lowkey: leveraging adversarial attacks to protect social media users from facial recognition. In _International Conference on Learning Representations (ICLR)_, 2021.
* Cherepanova et al. [2022] Valeriia Cherepanova, Steven Reich, Samuel Dooley, Hossein Souri, Micah Goldblum, and Tom Goldstein. A deep dive into dataset imbalance and bias in face identification. _arXiv preprint arXiv:2203.08235_, 2022.
* Cherepanova et al. [2023] Valeriia Cherepanova, Vedant Nanda, Micah Goldblum, John P Dickerson, and Tom Goldstein. Technical challenges for training fair neural networks. _6th AAAI/ACM Conference on AI, Ethics, and Society_, 2023.
* Chollet [2017] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1251-1258, 2017.
* Chu et al. [2021] Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. _Advances in Neural Information Processing Systems_, 34:9355-9366, 2021.
* Cruz et al. [2020] Andre F Cruz, Pedro Saleiro, Catarina Belem, Carlos Soares, and Pedro Bizarro. A bandit-based algorithm for fairness-aware hyperparameter optimization. _arXiv preprint arXiv:2010.03665_, 2020.
* Dai et al. [2021] Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, et al. Fbnetv3: Joint architecture-recipe search using predictor pretraining. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 16276-16285, 2021.
* [19] Richeek Das and Samuel Dooley. Fairer and more accurate tabular models through nas. _arXiv preprint arXiv:2310.12145_, 2023.
* [20] Stephane d'Ascoli, Hugo Touvron, Matthew Leavitt, Ari Morcos, Giulio Biroli, and Levent Sagun. Convit: Improving vision transformers with soft convolutional inductive biases. _arXiv preprint arXiv:2103.10697_, 2021.
* [21] Joan Davins-Valldaura, Said Moussaoui, Guillermo Pita-Gil, and Franck Plestan. Parego extensions for multi-objective optimization of expensive evaluation functions. _Journal of Global Optimization_, 67(1):79-96, 2017.
* [22] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii. _IEEE transactions on evolutionary computation_, 6(2):182-197, 2002.
* [23] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4690-4699, 2019.
* [24] Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, and Aaron Roth. Convergent algorithms for (relaxed) minimax fairness. _arXiv preprint arXiv:2011.03108_, 2020.
* [25] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. In _Advances in Neural Information Processing Systems_, pages 2791-2801, 2018.
* [26] Samuel Dooley, Ryan Downing, George Wei, Nathan Shankar, Bradon Thymes, Gudrun Thorkelsdottir, Tiye Kurtz-Miott, Rachel Mattson, Olufemi Obiwumi, Valeria Cherepanova, et al. Comparing human and machine bias in face recognition. _arXiv preprint arXiv:2110.08396_, 2021.
* [27] Samuel Dooley, George Z Wei, Tom Goldstein, and John Dickerson. Robustness disparities in face detection. _Advances in Neural Information Processing Systems_, 35:38245-38259, 2022.
* [28] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2020.
* [29] Pawel Drozdowski, Christian Rathgeb, Antitza Dantcheva, Naser Damer, and Christoph Busch. Demographic bias in biometrics: A survey on an emerging challenge. _IEEE Transactions on Technology and Society_, 1(2):89-103, 2020.
* [30] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. _The Journal of Machine Learning Research_, 20(1):1997-2017, 2019.
* [31] Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter optimization at scale. In _International Conference on Machine Learning_, pages 1437-1446. PMLR, 2018.
* [32] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In _Proceedings of the Annual Conference on Knowledge Discovery and Data Mining (KDD)_, pages 259-268, 2015.
* [33] Matthias Feurer and Frank Hutter. Hyperparameter optimization. In _Automated machine learning_, pages 3-33. Springer, Cham, 2019.
* [34] HC Fu and P Liu. A multi-objective optimization model based on non-dominated sorting genetic algorithm. _International Journal of Simulation Modelling_, 18(3):510-520, 2019.
* [35] Naman Goel, Mohammad Yaghini, and Boi Faltings. Non-discriminatory machine learning through convex fairness criteria. _Proceedings of the AAAI Conference on Artificial Intelligence_, 32(1), 2018. URL [https://ojs.aaai.org/index.php/AAAI/article/view/11662](https://ojs.aaai.org/index.php/AAAI/article/view/11662).
* Google [2021] Google. How google uses pattern recognition to make sense of images, 2021. URL [https://policies.google.com/technologies/pattern-recognition?hl=en-US](https://policies.google.com/technologies/pattern-recognition?hl=en-US).
* Grother et al. [2019] Patrick Grother, Mei Ngan, and Kayee Hanaoka. _Face Recognition Vendor Test (FVRT): Part 3, Demographic Effects_. National Institute of Standards and Technology, 2019.
* Grother et al. [2010] Patrick J. Grother, George W. Quinn, and P J. Phillips. Report on the evaluation of 2d still-image face recognition algorithms. _NIST Interagency/Internal Report (NISTIR)_, 2010. URL [https://doi.org/10.6028/NIST.IR.7709](https://doi.org/10.6028/NIST.IR.7709).
* Guo et al. [2020] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In _European conference on computer vision_, pages 544-560. Springer, 2020.
* Hamidi et al. [2018] Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M Branham. Gender recognition or gender reductionism? the social implications of embedded gender recognition systems. In _Proceedings of the 2018 chi conference on human factors in computing systems_, pages 1-13, 2018.
* Han et al. [2020] Dongyoon Han, Sangdoo Yun, Byeongho Heo, and YoungJoon Yoo. Rexnet: Diminishing representational bottleneck on convolutional neural network. _arXiv preprint arXiv:2007.00992_, 6, 2020.
* Han et al. [2020] Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, and Chang Xu. Ghostnet: More features from cheap operations. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 1580-1589, 2020.
* Han et al. [2021] Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. _Advances in Neural Information Processing Systems_, 34:15908-15919, 2021.
* Hardt et al. [2016] Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In _Advances in neural information processing systems_, pages 3315-3323, 2016.
* Hazirbas et al. [2021] Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacqueline Pan, Albert Gordo, and Cristian Canton Ferrer. Towards measuring fairness in ai: the casual conversations dataset. _arXiv preprint arXiv:2104.02821_, 2021.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* Hill [2020] Kashmir Hill. Another arrest, and jail time, due to a bad facial recognition match. _The New York Times_, 29, 2020.
* Hill [2020] Kashmir Hill. The secretive company that might end privacy as we know it. In _Ethics of Data and Analytics_, pages 170-177. Auerbach Publications, 2020.
* Howard et al. [2019] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 1314-1324, 2019.
* Hu et al. [2018] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 7132-7141, 2018.
* Huang et al. [2017] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 2261-2269, 2017. doi: 10.1109/CVPR.2017.243.
* Huang et al. [2008] Gary B Huang, Marwan Mattar, Tamara Berg, and Eric Learned-Miller. Labeled faces in the wild: A database forstudying face recognition in unconstrained environments. In _Workshop on faces in'Real-Life'Images: detection, alignment, and recognition_, 2008.
* Jaiswal et al. [2022] Siddharth Jaiswal, Karthikeya Duggirala, Abhisek Dash, and Animesh Mukherjee. Two-face: Adversarial audit of commercial face recognition systems. In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 16, pages 381-392, 2022.
* Joo and Karkkainen [2020] Jungseock Joo and Kimmo Karkkainen. Gender slopes: Counterfactual fairness for computer vision models by attribute manipulation. _arXiv preprint arXiv:2005.10430_, 2020.
* Keyes [2018] Os Keyes. The misgendering machines: Trans/hci implications of automatic gender recognition. _Proceedings of the ACM on human-computer interaction_, 2(CSCW):1-22, 2018.
* Klare et al. [2012] Brendan F Klare, Mark J Burge, Joshua C Klontz, Richard W Vorder Bruegge, and Anil K Jain. Face recognition performance: Role of demographic information. _IEEE Transactions on Information Forensics and Security_, 7(6):1789-1801, 2012.
* Lacoste et al. [2019] Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. _arXiv preprint arXiv:1910.09700_, 2019.
* Lahoti et al. [2020] Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed H. Chi. Fairness without demographics through adversarially reweighted learning. _arXiv preprint arXiv:2006.13114_, 2020.
* Laumanns and Ocenasek [2002] Marco Laumanns and Jiri Ocenasek. Bayesian optimization algorithms for multi-objective optimization. In _International Conference on Parallel Problem Solving from Nature_, pages 298-307. Springer, 2002.
* Learned-Miller et al. [2020] Erik Learned-Miller, Vicente Ordonez, Jamie Morgenstern, and Joy Buolamwini. Facial recognition technologies in the wild, 2020.
* Lee and Park [2020] Youngwan Lee and Jongyoul Park. Centermask: Real-time anchor-free instance segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 13906-13915, 2020.
* Leslie [2020] David Leslie. Understanding bias in facial recognition technologies. _arXiv preprint arXiv:2010.07023_, 2020.
* Li et al. [2017] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. _The Journal of Machine Learning Research_, 18(1):6765-6816, 2017.
* Lin et al. [2022] Xiaofeng Lin, Seungbae Kim, and Jungseock Joo. Fairgrape: Fairness-aware gradient pruning method for face attribute classification. _arXiv preprint arXiv:2207.10888_, 2022.
* Lindauer et al. [2022] Marius Lindauer, Katharina Eggensperger, Matthias Feurer, Andre Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, Rene Sass, and Frank Hutter. Smac3: A versatile bayesian optimization package for hyperparameter optimization. _Journal of Machine Learning Research_, 23(54):1-9, 2022. URL [http://jmlr.org/papers/v23/21-0888.html](http://jmlr.org/papers/v23/21-0888.html).
* Liu et al. [2018] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. _arXiv preprint arXiv:1806.09055_, 2018.
* Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 10012-10022, 2021.
* Liu et al. [2015] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In _Proceedings of the IEEE international conference on computer vision_, pages 3730-3738, 2015.
* Loshchilov and Hutter [2019] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _International Conference on Learning Representations_, 2019. URL [https://openreview.net/forum?id=Bkg6RiCqY7](https://openreview.net/forum?id=Bkg6RiCqY7).
* [71] Gong Mao-Guo, Jiao Li-Cheng, Yang Dong-Dong, and Ma Wen-Ping. Evolutionary multi-objective optimization algorithms. _Journal of Software_, 20(2), 2009.
* [72] James Marson and Brett Forrest. Armed low-cost drones, made by turkey, reshape battlefields and geopolitics. _The Wall Street Journal, Jun_, 2021.
* [73] Natalia Martinez, Martin Bertran, and Guillermo Sapiro. Minimax pareto fairness: A multi objective perspective. In _Proceedings of the 37th International Conference on Machine Learning_, volume 119, pages 6755-6764, 2020. URL [http://proceedings.mlr.press/v119/martinez20a.html](http://proceedings.mlr.press/v119/martinez20a.html).
* [74] Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Mohamed Elgharib, Pascal Fua, Hans-Peter Seidel, Helge Rhodin, Gerard Pons-Moll, and Christian Theobalt. Xnect: Real-time multi-person 3d human pose estimation with a single rgb camera. _arXiv preprint arXiv:1907.00837_, 2019.
* [75] Qiang Meng, Shichao Zhao, Zhida Huang, and Feng Zhou. Magface: A universal representation for face recognition and quality assessment. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 14225-14234, 2021.
* [76] Aythami Morales, Julian Fierrez, Ruben Vera-Rodriguez, and Ruben Tolosana. Sensitivityets: Learning agnostic representations with application to face images. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2020.
* [77] Stylianos Moschoglou, Athanasios Papaioannou, Christos Sagonas, Jiankang Deng, Irene Kotsia, and Stefanos Zafeiriou. Agedb: the first manually collected, in-the-wild age database. In _proceedings of the IEEE conference on computer vision and pattern recognition workshops_, pages 51-59, 2017.
* [78] Amitabha Mukerjee, Rita Biswas, Kalyanmoy Deb, and Amrit P Mathur. Multi-objective evolutionary algorithms for the risk-return trade-off in bank loan management. _International Transactions in operational research_, 2002.
* [79] Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, and John P Dickerson. Fairness through robustness: Investigating robustness disparity in deep learning. In _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_, pages 466-477, 2021.
* [80] Arvind Narayanan. Translation tutorial: 21 fairness definitions and their politics. In _Proc. Conf. Fairness Accountability Transp., New York, USA_, 2018.
* [81] Eric WT Ngai, Yong Hu, Yiu Hing Wong, Yijun Chen, and Xin Sun. The application of data mining techniques in financial fraud detection: A classification framework and an academic review of literature. _Decision support systems_, 50(3):559-569, 2011.
* [82] Alice J O'Toole, P Jonathon Phillips, Xiaobo An, and Joseph Dunlop. Demographic effects on estimates of automatic face recognition performance. _Image and Vision Computing_, 30(3):169-176, 2012.
* [83] Manisha Padala and Sujit Gujar. Fnnc: Achieving fairness through neural networks. In _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20_, pages 2277-2283. International Joint Conferences on Artificial Intelligence Organization, 7 2020. doi: 10.24963/ijcai.2020/315. URL [https://doi.org/10.24963/ijcai.2020/315](https://doi.org/10.24963/ijcai.2020/315).
* [84] Biswajit Paria, Kirthevasan Kandasamy, and Barnabas Poczos. A flexible framework for multi-objective bayesian optimization using random scalarizations. In _Uncertainty in Artificial Intelligence_, pages 766-776. PMLR, 2020.
* [85] Amandalynne Paullada, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. Data and its (dis) contents: A survey of dataset development and use in machine learning research. _Patterns_, 2(11):100336, 2021.
* [86] Kenny Peng, Arunesh Mathur, and Arvind Narayanan. Mitigating dataset harms requires stewardship: Lessons from 1000 papers. _arXiv preprint arXiv:2108.02922_, 2021.
* Perrone et al. [2021] Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, and Cedric Archambeau. Fair bayesian optimization. In _Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society_, pages 854-863, 2021.
* Pham et al. [2018] Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In _International conference on machine learning_, pages 4095-4104. PMLR, 2018.
* Quadrianto et al. [2019] Novi Quadrianto, Viktoriia Sharmanska, and Oliver Thomas. Discovering fair representations in the data domain. In _IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019_, pages 8227-8236. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00842. URL [http://openaccess.thecvf.com/content_CVPR.2019/html/Quadrianto_Discovering_Fair_Representations_in_the_Data_Domain_CVPR.2019_paper.html](http://openaccess.thecvf.com/content_CVPR.2019/html/Quadrianto_Discovering_Fair_Representations_in_the_Data_Domain_CVPR.2019_paper.html).
* Raji and Buolamwini [2019] Inioluwa Deborah Raji and Joy Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 429-435, 2019.
* Raji and Fried [2021] Inioluwa Deborah Raji and Genevieve Fried. About face: A survey of facial recognition evaluation. _arXiv preprint arXiv:2102.00813_, 2021.
* Raji et al. [2020] Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. Saving face: Investigating the ethical concerns of facial recognition auditing. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_, pages 145-151, 2020.
* Ryu et al. [2018] Hee Jung Ryu, Hartwig Adam, and Margaret Mitchell. Inclusivefacenet: Improving face attribute detection with race and gender diversity. _arXiv preprint arXiv:1712.00193_, 2018.
* Saha et al. [2020] Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L Mazurek, and Michael Carl Tschantz. Measuring non-expert comprehension of machine learning fairness metrics. In _Proceedings of the International Conference on Machine Learning (ICML)_, 2020.
* Salinas et al. [2022] David Salinas, Matthias Seeger, Aaron Klein, Valerio Perrone, Martin Wistuba, and Cedric Archambeau. Syne tune: A library for large scale hyperparameter tuning and reproducible research. In _International Conference on Automated Machine Learning_, pages 16-1. PMLR, 2022.
* Savani et al. [2020] Yash Savani, Colin White, and Naveen Sundar Govindarajulu. Intra-processing methods for debiasing neural networks. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2020.
* Schmucker et al. [2020] Robin Schmucker, Michele Donini, Valerio Perrone, Muhammad Bilal Zafar, and Cedric Archambeau. Multi-objective multi-fidelity hyperparameter optimization with application to fairness. In _NeurIPS Workshop on Meta-Learning_, volume 2, 2020.
* Schmucker et al. [2021] Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, and Cedric Archambeau. Multi-objective asynchronous successive halving. _arXiv preprint arXiv:2106.12639_, 2021.
* Selbst et al. [2019] Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. Fairness and abstraction in sociotechnical systems. In _Proceedings of the Conference on Fairness, Accountability, and Transparency_, FAT* '19, page 59-68, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450361255. doi: 10.1145/3287560.3287598. URL [https://doi.org/10.1145/3287560.3287598](https://doi.org/10.1145/3287560.3287598).
* Sengupta et al. [2016] Soumyadip Sengupta, Jun-Cheng Chen, Carlos Castillo, Vishal M Patel, Rama Chellappa, and David W Jacobs. Frontal to profile face verification in the wild. In _2016 IEEE winter conference on applications of computer vision (WACV)_, pages 1-9. IEEE, 2016.
* Simonyan and Zisserman [2014] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_, 2014.
* Sun et al. [2019] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In _CVPR_, 2019.
* Szegedy et al. [2016] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2818-2826, 2016.
* Szegedy et al. [2017] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In _Thirty-first AAAI conference on artificial intelligence_, 2017.
* Tan and Le [2019] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In _International conference on machine learning_, pages 6105-6114. PMLR, 2019.
* Verma and Rubin [2018] Sahil Verma and Julia Rubin. Fairness definitions explained. In _2018 IEEE/ACM International Workshop on Software Fairness (FairWare)_, pages 1-7. IEEE, 2018.
* Wang et al. [2020] Chien-Yao Wang, Hong-Yuan Mark Liao, Yueh-Hua Wu, Ping-Yang Chen, Jun-Wei Hsieh, and I-Hau Yeh. Cspnet: A new backbone that can enhance learning capability of cnn. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops_, pages 390-391, 2020.
* Wang et al. [2018] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 5265-5274, 2018.
* Wang and Deng [2018] Mei Wang and Weihong Deng. Deep face recognition: A survey. _arXiv preprint arXiv:1804.06655_, 2018.
* Wang and Deng [2020] Mei Wang and Weihong Deng. Mitigating bias in face recognition using skewness-aware reinforcement learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 9322-9331, 2020.
* Wang et al. [2019] Mei Wang, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang. Racial faces in the wild: Reducing racial bias by information maximization adaptation network. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 692-702, 2019.
* Wang et al. [2021] Qingzhong Wang, Pengfei Zhang, Haoyi Xiong, and Jian Zhao. Face. evolve: A high-performance face recognition library. _arXiv preprint arXiv:2107.08621_, 2021.
* Wang [2021] Xiaobo Wang. Teacher guided neural architecture search for face recognition. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 2817-2825, 2021.
* Wang et al. [2020] Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation, 2020.
* White et al. [2021] Colin White, Willie Neiswanger, and Yash Savani. Bananas: Bayesian optimization with neural architectures for neural architecture search. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 10293-10301, 2021.
* White et al. [2023] Colin White, Mahmoud Safari, Rhea Sukthanker, Binxin Ru, Thomas Elsken, Arber Zela, Debadeepta Dey, and Frank Hutter. Neural architecture search: Insights from 1000 papers. _arXiv preprint arXiv:2301.08727_, 2023.
* Wightman [2019] Ross Wightman. Pytorch image models. [https://github.com/rwightman/pytorch-image-models](https://github.com/rwightman/pytorch-image-models), 2019.
* Wu et al. [2020] Wenying Wu, Pavlos Protopapas, Zheng Yang, and Panagiotis Michalatos. Gender classification and bias mitigation in facial images. In _12th acm conference on web science_, pages 106-114, 2020.
* Xie et al. [2016] Saining Xie, Ross B. Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. _CoRR_, abs/1611.05431, 2016. URL [http://arxiv.org/abs/1611.05431](http://arxiv.org/abs/1611.05431).
* Xu et al. [2021] Weijian Xu, Yifan Xu, Tyler Chang, and Zhuowen Tu. Co-scale conv-attentional image transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 9981-9990, 2021.
* Xu et al. [2019] Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong. Pc-darts: Partial channel connections for memory-efficient architecture search. _arXiv preprint arXiv:1907.05737_, 2019.
* Yu et al. [2018] Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2403-2412, 2018.
* Zafar et al. [2017] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. In _Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA_, volume 54 of _Proceedings of Machine Learning Research_, pages 962-970. PMLR, 2017. URL [http://proceedings.mlr.press/v54/zafar17a.html](http://proceedings.mlr.press/v54/zafar17a.html).
* Zafar et al. [2019] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P. Gummadi. Fairness constraints: A flexible approach for fair classification. _Journal of Machine Learning Research_, 20(75):1-42, 2019. URL [http://jmlr.org/papers/v20/18-262.html](http://jmlr.org/papers/v20/18-262.html).
* Zela et al. [2019] Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank Hutter. Understanding and robustifying differentiable architecture search. _arXiv preprint arXiv:1909.09656_, 2019.
* Zhang et al. [2021] Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, and Tomas Pfister. Aggregating nested transformers. _arXiv preprint arXiv:2105.12723_, 2021.
* Zheng and Deng [2018] Tianyue Zheng and Weihong Deng. Cross-pose lfw: A database for studying cross-pose face recognition in unconstrained environments. _Beijing University of Posts and Telecommunications, Tech. Rep_, 5(7), 2018.
* Zheng et al. [2017] Tianyue Zheng, Weihong Deng, and Jiani Hu. Cross-age lfw: A database for studying cross-age face recognition in unconstrained environments. _arXiv preprint arXiv:1708.08197_, 2017.
* Zhu [2019] Ning Zhu. Neural architecture search for deep face recognition. _arXiv preprint arXiv:1904.09523_, 2019.
Ethics Statement
Face recognition systems are being used for more and more parts of daily lives, from government surveillance [49], to target identification from drones [72], to identification in personal photo repositories [36]. It is also increasingly evident that many of these models are biased based on race and gender [37, 92, 91]. If left unchecked, these technologies, which make biased decision for life-changing events, will only deepen existing societal harms. Our work seeks to better understand and mitigate the negative effects that biased face recognition models have on society. By conducting the first large-scale study of the effect of architectures and hyperparameters on bias, and by developing and open-sourcing face recognition models that are more fair than all other competitive models, we provide a resource for practitioners to understand inequalities inherent in face recognition systems and ultimately advance fundamental understandings of the harms and technological ills of these systems.
That said, we would like to address potential ethical challenges of our work. We believe that the main ethical challenge of this work centers on our use of certain datasets. We acknowledge that the common academic datasets which we used to evaluate our research questions -- CelebA [69] and VGGFace2 [8] -- are all datasets of images scraped from the web without the informed consent of those whom are depicted. This also includes the datasets we transfer to, including LFW, CFP_FF, CFP_FP, AgeDB, CALFW, CPLFW, and RFW. This ethical challenge is one that has plagued the research and computer vision community for the last decade [86, 85] and we are excited to see datasets being released which have fully informed consent of the subjects, such as the Casual Conversations Dataset [45]. Unfortunately, this dataset in particular has a rather restrictive license, much more restrictive than similar datasets, which prohibited its use in our study. Additionally, these datasets all exhibit representational bias where the categories of people that are included in the datasets are not equally represented. This can cause many problems; see [14] for a detailed look at reprsentational bias's impact in facial recognition specifically. At least during training, we did address representaional bias by balancing the training data between gender presentations appropriately.
We also acknowledge that while our study is intended to be constructive in performing the first neural architecture search experiments with fairness considerations, the specific ethical challenge we highlight is that of unequal or unfair treatment by the technologies. We note that our work could be taken as a litmus test which could lead to the further proliferation of facial recognition technology which could cause other harms. If a system demonstrates that it is less biased than other systems, this could be used as a reason for the further deployment of facial technologies and could further impinge upon unwitting individual's freedoms and perpetuate other technological harms. We explicitly caution against this form of techno-solutionism and do not want our work to contribute to a conversation about whether facial recognition technologies _should_ be used by individuals.
Experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.373 kgCO\({}_{2}\)eq/kWh. A cumulative of 88 493 hours of computation was performed on hardware of type RTX 2080 Ti (TDP of 250W). Total emissions are estimated to be 8,251.97 kgCO\({}_{2}\)eq of which 0% was directly offset. Estimations were conducted using the MachineLearning Impact calculator presented in [58]. By releasing all of our raw results, code, and models, we hope that our results will be widely beneficial to researchers and practitioners with respect to designing fair face recognition systems.
## Appendix B Reproducibility Statement
We ensure that all of our experiments are reproducible by releasing our code and raw data files at [https://github.com/dooleys/FR-NAS](https://github.com/dooleys/FR-NAS). We also release the instructions to reproduce our results with the code. Furthermore, we release all of the configuration files for all of the models trained. Our experimental setup is described in Section 3 and Appendix C.1. We provide clear documentation on the installation and system requirements in order to reproduce our work. This includes information about the computing environment, package requirements, dataset download procedures, and license information. We have independently verified that the experimental framework is reproducible which should make our work and results and experiments easily accessible to future researchers and the community.
## Appendix C Further Details on Experimental Design and Results
### Experimental Setup
The list of the models we study from timm are: coat_lite_small [120], convit_base [20], cspdarknet53 [107], dla102x2 [122], dpn107 [10], ese_vovnet39b [62], fbnetv3_g [18], ghostnet_100 [42], gluon_inception_v3 [103], gluon_xception65 [15], hrnet_w64 [102], ig_resnet101_32x8d [119], inception_resnet_v2 [104], inception_v4 [104], jx_nest_base [126], legacy_senet154 [51], mobilenetv3_large_100 [50], resnetrs101 [4], rexnet_200 [41], selects60b [74], swin, base_patch_window7_224 [68], tf_efficientnet_b7_ns' [105], 'tnt_s_patch16_224[43], twins_svt_large [16], vgg19 [101], vgg19_bn [101], vissformer_small [11], xception and xception65 [15].
We study at most 13 configurations per model ie 1 default configuration corresponding to the original model hyperparameters with CosFace as head. Further, we have at most 12 configs consisting of the 3 heads (CosFace, ArcFace, MagFace) \(\times\) 2 learning rates(0.1,0.001) \(\times\) 2 optimizers (SGD, AdamW). All the other hyperparameters are held constant for training all the models. All model configurations are trained with a total batch size of 64 on 8 RTX2080 GPUS for 100 epochs each.
We study these models across five important fairness metrics in face identification: Rank Disparity, Disparity, Ratio, Rank Ratio, and Error Ratio. Each of these metrics is defined in Table 5.
### Additional details on NAS+HPO Search Space
We replace the repeating 1x1_conv-3x3_conv-1x1_conv block in Dual Path Networks with a simple recurring searchable block. depicted in Figure 6. Furthermore, we stack multiple such searched blocks to closely follow the architecture of Dual Path Networks. We have nine possible choices for each of the three operations in the DPN block, each of which we give a number 0 through 8, depicted in Figure 6. The choices include a vanilla convolution, a convolution with pre-normalization and a convolution with post-normalization Table 7. To ensure that all the architectures are tractable in terms of memory consumption during search we keep the final projection layer (to 1000 dimensionality) in timm.
### Obtained architectures and hyperparameter configurations from Black-Box-Optimization
In Figure 5 we present the architectures and hyperparameters discovered by SMAC. Particularly we observe that conv 3x3 followed batch norm is a preferred operation and CosFace is the preferred head/loss choice.
### Analysis of the Pareto front of different Fairness Metrics
In this section, we include additional plots that support and expand on the main paper. Primarily, we provide further context of the Figures in the main body in two ways. First, we provide replication plots of the figures in the main body but for all models. Recall, the plots in the main body only show models with Error<0.3, since high performing models are the most of interest to the community.
\begin{table}
\begin{tabular}{l l} \multicolumn{1}{c}{**Fairness Metric**} & \multicolumn{1}{c}{**Equation**} \\ \hline Rank Disparity & \(|\text{Rank}(male)-\text{Rank}(female)|\) \\ Disparity & \(|\text{Accuracy}(male)-\text{Accuracy}(female)|\) \\ Ratio & \(|1-\frac{\text{Accuracy}(male)}{\text{Accuracy}(female)}|\) \\ Rank Ratio & \(|1-\frac{\text{Rank}(male)}{\text{Rank}(female)}|\) \\ Error Ratio & \(|1-\frac{\text{Error}(male)}{\text{Error}(female)}|\) \\ \end{tabular}
\end{table}
Table 5: The fairness metrics explored in this paper. Rank Disparity is explored in the main paper and the other metrics are reported in Appendix C.4
## 6 Conclusion
\begin{table}
\begin{tabular}{c c}
**Hyperparameter** & **Choices** \\ \hline Architecture Head/Loss & MagFace, ArcFace, CosFace \\ Optimizer Type & Adam, AdamW, SGD \\ Learning rate (conditional) & Adam/AdamW \(\rightarrow[1e-4,1e-2]\), \\ SGD \(\rightarrow[0.09,0.8]\) \\ \hline \end{tabular}
\end{table}
Table 6: Searchable hyperparameter choices.
\begin{table}
\begin{tabular}{c c c}
**Operation Index** & **Operation** & **Definition** \\ \hline
0 & BnConv1x1 & Batch Normalization \(\rightarrow\) Convolution with 1x1 kernel \\
1 & Conv1x1Bin & Convolution with 1x1 kernel \(\rightarrow\) Batch Normalization \\
2 & Conv1x1 & Convolution with 1x1 kernel \\ \hline
3 & BnConv3x3 & Batch Normalization \(\rightarrow\) Convolution with 3x3 kernel \\
4 & Conv3x3Bin & Convolution with 3x3 kernel \(\rightarrow\) Batch Normalization \\
5 & Conv3x3 & Convolution with 3x3 kernel \\ \hline
6 & BnConv5x5 & Batch Normalization \(\rightarrow\) Convolution with 5x5 kernel \\
7 & Conv5x5Bin & Convolution with 5x5 kernel \(\rightarrow\) Batch Normalization \\
8 & Conv5x5 & Convolution with 5x5 kernel \\ \hline \end{tabular}
\end{table}
Table 7: Operation choices and definitions.
Figure 5: SMAC discovers the above building blocks with (a) corresponding to architecture with CosFace, with SGD optimizer and learning rate of 0.2813 as hyperparamters (b) corresponding to CosFace, with SGD as optimizer and learning rate of 0.32348 and (c) corresponding to CosFace, with AdamW as optimizer and learning rate of 0.0006
Figure 6: DPN block (left) vs. our searchable block (right).
Second, we also show figures which depict other fairness metrics used in facial identification. The formulas for these additional fairness metrics can be found in Table 5.
We replicate Figure 2 in Figure 7 and Figure 8. We add additional metrics for CelebA in Figure 9-Figure 11 and for VGGFace2 in Figure 12-Figure 15.
correlations between parameter sizes and different fairness metrics Figure 16. This observation supports the claim that increases in accuracy and decreases in disparity are very closely tied to the architectures and feature representations of the model, irrespective of the parameter size of the model. Hence, we do not constraint model parameter sizes to help our NAS+HPO approach search in a much richer search space.
### Comparison to other Bias Mitigation Techniques on all Fairness Metrics
We have shown that our bias mitigation approach Pareto-dominates the existing bias mitigation techniques in face identification on the Rank Disparity metric. Here, we perform the same experiments but evaluate on the four other metrics discussed in the face identification literature: Disparity, Rank Ratio, Ratio, and Error Ratio.
Figure 10: Replication of Figure 7 on the CelebA validation dataset with the Disparity in accuracy metric.
Figure 9: Replication of Figure 7 on the CelebA validation dataset with Ratio of Ranks (left) and Ratio of Errors (right) metrics.
Recall, we take top performing, Pareto-optimal models from Section 4 and apply the three bias mitigation techniques: Flipped, Angular, and SensitiveNets. We also apply these same techniques to the novel architectures that we found. We report results in Table 1.
In Table 8, we see that in every metric, the SMAC_301 architecture is Pareto-dominant and that the SMAC_301, demonstrating the robustness of our approach.
### Transferability to other Sensitive Attributes
The superiority of our novel architectures even goes beyond accuracy when transfering to other datasets -- our novel architectures have superior fairness property compared to the existing architectures **even on datasets which have completely different protected attributes than were used in the architecture search**. To inspect the generalizability of our approach to other protected attributes, we transferred our models pre-trained on CelebA and VGGFace2 (which have a gender presentation category) to the RFW dataset [111] which includes a protected attribute for race. We see that our novel architectures always outperforms the existing architectures across all five fairness metrics studied in this work. See Table 10 for more details on each metric. We see example Pareto fronts for
Figure 11: Replication of Figure 7 on the CelebA validation dataset with the Ratio in accuracy metric.
Figure 12: Replication of Figure 8 on the VGGFace2 validation dataset with Ratio of Ranks metric.
these transfers in Figure 17. They are always on the Pareto front for all fairness metrics considered, and mostly Pareto-dominate all other architectures on this task. In this setting, since the race label in RFW is not binary, the Rank Disparity metric considered in Table 10 and Figure 17 is computed as the maximum rank disparity between pairs of race labels.
Furthermore, we also evaluate the transfer of fair properties of our models across different age groups on the AgeDB dataset [77]. In this case we use age as a protected attribute. We group the faces into 4 age groups 1-25, 26-50, 51-75 and 76-110. Then we compute the max disparity across age groups. As observed in Table 11 the models discovered by NAS and HPO, Pareto-dominate other competitive hand-crafted models. This further emphasise the generalizability of the fair features learnt by these models.
Figure 14: Replication of Figure 8 on the VGGFace2 validation dataset with the Disparity in accuracy metric.
Figure 13: Replication of Figure 8 on the VGGFace2 validation dataset with Ratio of Errors metric.
Figure 16: Correlation map between different fairness metrics and architecture statistics. We find no significant correlation between these objectives, e.g. between fairness metrics and parameter count.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Architecture (trained on VGGFace2)** & **Overall Accuracy \(\uparrow\)** & **Disparity \(\downarrow\)** \\ \hline \hline & 59.18 & 28.9150 \\ DPN\_SGD & 71.87 & 22.4633 \\ DPN\_AdamW & 61.32 & 21.1437 \\ SMAC\_301 & **79.97** & **18.827** \\ \hline \hline
**Architecture (trained on CelebA)** & **Accuracy** & **Disparity** \\ DPN\_CosFace & 65.55 & 27.2434 \\ DPN\_MagFace & 68.17 & 31.2903 \\ SMAC\_000 & 80.23 & **19.6481** \\ SMAC\_010 & **80.37** & 26.5103 \\ SMAC\_680 & 79.88 & 20.0586 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Taking the highest performing models from the Pareto front of both VGGFace2 and CelebA, we transfer their evaluation onto a dataset with a different protected attribute – age – on the AgeDB dataset [77]. The novel architectures which we found with our bias mitigation strategy are Pareto-dominant with respect to the Accuracy and Disparity metrics
Figure 15: Replication of Figure 8 on the VGGFace2 validation dataset with the Ratio in accuracy metric.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{Architecture (trained on VGGFace2)} & LFW & CFP\_FF & CFP\_FP & AgeDB & CALFW & CPLPW \\ \hline \hline & 82.60 & 80.91 & 65.51 & 59.18 & 68.23 & 62.15 \\ DPN\_SGD & 93.00 & 91.81 & 78.96 & 71.87 & 78.27 & 72.97 \\ DPN\_AdamW & 78.66 & 77.17 & 64.35 & 61.32 & 64.78 & 60.30 \\ SMAC\_301 & **96.63** & **95.10** & **86.63** & **79.97** & **86.07** & **81.43** \\ \hline \hline & & & & & & \\ \hline \hline Architecture (trained on CelebA) & LFW & CFP\_FF & CFP\_FP & AgeDB & CALFW & CPLFW \\ \hline & & & & & & \\ \hline & & & & & & \\ \hline & & & & & & \\ \hline & & & & & & \\ \end{tabular}
\end{table}
Table 10: Taking the highest performing models from the Pareto front of both VGGFace2 and CelebA, we transfer their evaluation onto a dataset with a different protected attribute – race – on the RFW dataset [111]. The novel architectures which we found with our bias mitigation strategy are always on the Pareto front, and mostly Pareto-dominant of the traditional architectures.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Rank Disparity} & \multicolumn{4}{c}{Disparity} \\ Model & Baseline & Flipped & Angular & SensitiveNets & Baseline & Flipped & Angular & SensitiveNets \\ \hline SMAC\_301 & **(3.66;0.23)** & **(4.95;0.18)** & (4.14.0;25) & (6.20;0.41) & **(3.66;0.03)** & **(4.95;0.02)** & (4.14;0.04) & (6.14;0.04) \\ DPN & \((3.56;0.27)\) & (5.87;0.32) & (6.06;0.36) & (4.76;0.34) & (3.98;0.04) & (5.87;0.05) & (6.06;0.05) & (4.78;0.05) \\ ReXNet & \((4.09.27)\) & (5.73.0.45) & (5.47;0.26) & (4.75;0.25) & (4.09;0.03) & (5.73;0.05) & (5.47;0.05) & (4.75;0.04) \\ Swin & \((5.47;0.38)\) & (5.75;0.44) & (5.23;0.25) & (5.03;0.30) & (5.47;0.05) & (5.75;0.05) & (5.23;0.04) & (5.03;0.04) \\ \hline & \multicolumn{4}{c}{Rank Ratio} & \multicolumn{4}{c}{Ratio} \\ Model & Baseline & Flipped & Angular & SensitiveNets & Baseline & Flipped & Angular & SensitiveNets \\ \hline SMAC\_301 & **(3.66;0.37)** & **(4.95;0.21)** & (4.14.0;3.09) & (6.14;0.41) & **(3.66;0.03)** & **(4.95;0.02)** & (4.14;0.04) & (6.14;0.05) \\ DPN & \((3.98;0.49)\) & (5.87;0.49) & (6.06;0.54) & (4.78;0.49) & (3.98;0.04) & (5.87;0.06) & (6.06;0.06) & (4.78;0.05) \\ ReXNet & \((4.09;0.41)\) & (5.73;0.53) & (5.47;0.38) & (4.75;0.34) & (4.09;0.04) & (5.73;0.05) & (5.47;0.05) & (4.75;0.04) \\ Swin & \((5.47;0.47)\) & (5.75;0.47) & (5.23;0.42) & (5.03;0.43) & (5.47;0.05) & (5.75;0.05) & (5.23;0.05) & (5.03;0.05) \\ \hline & \multicolumn{4}{c}{Error Ratio} & \multicolumn{4}{c}{Error Ratio} \\ Model & Baseline & Flipped & Angular & SensitiveNets & & & \\ \hline SMAC\_301 & **(3.66;0.58)** & **(4.95;0.29)** & (4.14.0;60) & (6.14;0.52) & & & \\ DPN & \((3.98;0.65)\) & (5.87;0.62) & (6.06;0.62) & (4.78;0.69) & & & \\ ReXNet & \((4.09;0.60)\) & (5.73;0.57) & (5.47;0.59) & (4.75;0.58) & & & \\ Swin & \((5.47;0.60)\) & (5.75;0.56) & (5.23;0.60) & (5.03;0.60) & & & \\ \hline \hline & & & & & & \\ \end{tabular}
\end{table}
Table 8: Comparison bias mitigation techniques where the SMAC models were found on VGGFace2 with NAS bias mitigation technique and the other three techniques are standard in facial recognition: Flipped [9], Angular [76], and Discriminator [110]. Items in bold are Pareto-optimal. The values show (Error,_metric_).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{Architecture (trained on VGGFace2)} & LFW & CFP\_FF & CFP\_FP & AgeDB & CALFW & CPLPW \\ \hline & 82.60 & 80.91 & 65.51 & 59.18 & 68.23 & 62.15 \\ DPN\_SGD & 93.00 & 91.81 & 78.96 & 71.87 & 78.27 & 72.97 \\ DPN\_AdamW & 78.66 & 77.17 & 64.35 & 61.32 & 64.78 & 60.30 \\ SMAC\_301 & **96.63** & **95.10** & **86.63** & **79.97** & **86.07** & **81.43** \\ \hline \hline & & & & & \\ \end{tabular}
\end{table}
Table 9: Taking the highest performing models from the Pareto front of both VGGFace2 and CelebA, we transfer their evaluation onto six other common face recognition datasets: LFW [53], CFP\(\_\)FF [100], CFP\(\_\)FP [100], AgeDB [77], CALFW [128], CPLPW [127]. The novel architectures which we found with our bias mitigation strategy significantly out perform all other models.
Figure 17: Models trained on CelebA (left) and VGGFace2 (right) evaluated on a dataset with a different protected attribute, specifically on RFW with the racial attribute, and with the Rank Disparity metric. The novel architectures out perform the existing architectures in both settings.
Figure 18: TSNE plots for models pretrained on VGGFace2 on the test-set _(a)_ SMAC model last layer _(b)_ DPN MagFace on the last layer _(b)_ SMAC model second last layer _(b)_ DPN MagFace on the second last layer. Note the better linear separability for DPN MagFace in comparison with the SMAC model | ## Review
### Summary
This paper presents a novel framework combining Neural Architecture Search (NAS) and Hyperparameter Optimization (HPO) to mitigate biases in face recognition models. It challenges traditional bias mitigation approaches by demonstrating that identifying fairer architectures can yield better outcomes than merely enhancing existing high-performing models. The authors conduct extensive experiments on popular datasets, CelebA and VGGFace2, revealing that specific architectures and hyperparameter configurations significantly improve fairness without sacrificing performance. The paper emphasizes the importance of architecture choice in achieving robust performance across various identities, making a valuable contribution to the fairness and bias community.
### Strengths
- The paper introduces a new paradigm for bias mitigation in face recognition by utilizing NAS and HPO.
- Extensive empirical analysis shows that specific architectures and hyperparameters significantly improve fairness.
- The authors provide their source code, enhancing reproducibility.
- The results demonstrate the generalizability of the proposed method across different datasets.
- The paper is well-motivated and addresses an important issue in machine learning.
- The presentation is clear and easy to follow.
### Weaknesses
- Lack of discussion on the impact of pretraining on model performance and generalization.
- The theoretical analysis of why NAS models outperform traditional methods is insufficient.
- Some experimental design choices need further justification, such as the selection of the SMAC3 optimization method.
- Visualization of results could be improved for better insights into error analysis.
- The reported performance in large-scale datasets remains unverified.
### Questions
- Have you considered alternatives to SMAC3 or ParEGO for optimization?
- What patterns in neural architecture contribute to Pareto-optimal performance?
- Can the authors clarify the impact of training on gender-balanced versus gender-imbalanced datasets?
### Soundness
**Score:** 3
**Description:** 3 = good: The methodology is solid, but some theoretical aspects and justifications could be strengthened.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-organized and clear, though some figures and tables need improvement in resolution and clarity.
### Contribution
**Score:** 4
**Description:** 4 = excellent: The paper makes a significant contribution to the field, presenting novel methods and practical insights into bias mitigation.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements: The paper is technically solid with high impact and good evaluation, but some weaknesses need addressing.
### Paper Decision
**Decision:** Accept (oral)
**Reasons:** The paper is original and tackles an important issue in machine learning regarding bias mitigation. It presents a novel approach that shows soundness in methodology and contributes significantly to the field. While there are some weaknesses and areas for improvement, particularly in theoretical explanations and validations on larger datasets, the overall quality and relevance of the work justify acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Hierarchical VAEs provide a normative account of motion processing in the primate brain
Hadi Vafaii\({}^{1}\)
[email protected]
&Jacob L. Yates\({}^{2}\)
[email protected]
&Daniel A. Butts\({}^{1}\)
[email protected]
\({}^{1}\)University of Maryland, College Park
\({}^{2}\)UC Berkeley
###### Abstract
The relationship between perception and inference, as postulated by Helmholtz in the 19th century, is paralleled in modern machine learning by generative models like Variational Autoencoders (VAEs) and their hierarchical variants. Here, we evaluate the role of hierarchical inference and its alignment with brain function in the domain of motion perception. We first introduce a novel synthetic data framework, Retinal Optic Flow Learning (ROFL), which enables control over motion statistics and their causes. We then present a new hierarchical VAE and test it against alternative models on two downstream tasks: (i) predicting ground truth causes of retinal optic flow (e.g., self-motion); and (ii) predicting the responses of neurons in the motion processing pathway of primates. We manipulate the model architectures (hierarchical versus non-hierarchical), loss functions, and the causal structure of the motion stimuli. We find that hierarchical latent structure in the model leads to several improvements. First, it improves the linear decodability of ground truth factors and does so in a sparse and disentangled manner. Second, our hierarchical VAE outperforms previous state-of-the-art models in predicting neuronal responses and exhibits sparse latent-to-neuron relationships. These results depend on the causal structure of the world, indicating that alignment between brains and artificial neural networks depends not only on architecture but also on matching ecologically relevant stimulus statistics. Taken together, our results suggest that hierarchical Bayesian inference underlines the brain's understanding of the world, and hierarchical VAEs can effectively model this understanding.
## 1 Introduction
Intelligent interactions with the world require representation of its underlying composition. This inferential process has long been postulated to underlie human perception [1, 2, 3, 4, 5, 6, 7, 8, 9], and is paralleled in modern machine learning by generative models [10, 11, 12, 13, 14, 15, 16, 17], which learn latent representations of their sensory inputs. The question of what constitutes a "good" representation has no clear answer [18, 19], but several desirable features have been proposed. In the field of neuroscience, studies focused on object recognition have suggested that effective representations "_untangle_" the various factors of variation in the input, rendering them linearly decodable [20, 21]. This intuitive notion of linear decodability has emerged in the machine learning community under different names such as "_informativeness_" [22] or "_explicitness_" [23]. Additionally, it has been suggested that "_disentangled_" representations are desirable, wherein distinct, informative factors of variations in the data are separated [24, 25, 26, 27, 28, 29]. Artificial neural networks (ANNs) are also increasingly evaluated based on their alignment with biological neural processing [30, 31, 32, 33, 34, 35, 36, 37, 38], because of the shared goals of ANNs and the brain's sensory processing [25, 39, 40]. Such alignment also provides the possibility of gaining insights into the brain by understanding the operations within an ANN [41, 42, 43, 44, 45, 46, 47].
In this work, we investigate how the combination of (i) model architecture, (ii) loss function, and (iii) training dataset, affects learned representations, and whether this is related to the brain-alignment of the ANN [41; 44]. We focus specifically on understanding the representation of motion because large sections of the visual cortex are devoted to processing motion [34], and the causes of retinal motion (moving objects and self-motion [48]) can be manipulated systematically. Crucially, motion in an image can be described irrespective of the identity and specific visual features that are moving, just as the identity of objects is invariant to how they are moving. This separation of motion and object processing mirrors the division of primate visual processing into dorsal (motion) and ventral (object) streams [49; 50; 51].
We designed a _naturalistic_ motion simulation based on distributions of ground truth factors corresponding to the location and depth of objects, motion of these objects, motion of the observer, and observer's direction of gaze (i.e., the fixation point; Fig. 1a). We then trained and evaluated an ensemble of autoencoder-based models using our simulated retinal flow data. We based our evaluation on (1) whether the models untangle and disentangle the ground truth factors in our simulation; and (2) the degree to which their latent spaces could be directly related to neural data recorded in the dorsal stream of primates (area MT).
We introduce a new hierarchical variational autoencoder, the "compressed" Nouveau VAE (cNVAE) [52]. The cNVAE exhibited superior performance compared to other models across our multiple evaluation metrics. First, it discovered latent factors that accurately captured the ground truth factors in the simulation in a more disentangled manner than other models. Second, it achieved significant improvements in predicting neural responses compared to the previous state-of-the-art model [34], doubling the performance, with sparse mapping from its latent space to neural responses.
Taken together, these observations demonstrate the power of the synthetic data framework and show that a single inductive bias--hierarchical latent structure--leads to many desirable features of representations, including brain alignment.
## 2 Background & Related Work
Neuroscience and VAEs.It has long been argued that perception reflects unconscious inference of the structure of the world constructed from sensory inputs. The concept of "perception as unconscious inference" has existed since at least the 19th century [1; 2], and more recently inspired Mumford [3] to conjecture that brains engage in hierarchical Bayesian inference to comprehend the world [3; 4]. These ideas led to the development of Predictive Coding [5; 9; 53; 54; 55; 56; 57; 58], Bayesian Brain Hypothesis [59; 60; 61; 62; 63; 64; 65], and Analysis-by-Synthesis [7], collectively suggesting that brains contain an internal generative model of the world [7; 8; 66; 67]. A similar idea underlies modern generative models [68; 69; 70; 15; 16; 17], especially hierarchical variants of VAEs [71; 72; 73; 52].
The Nouveau VAE (NVAE) [52] and very deep VAE (vdvae) [71] demonstrated that deep hierarchical VAEs can generate realistic high-resolution images, overcoming the limitations of their non-hierarchical predecessors. However, neither work evaluated how the hierarchical latent structure changed the quality of learned representations. Additionally, both NVAE and vdvae have an undesirable property: their convolutional latents result in a latent space that is several orders of magnitude larger than the input space, defeating a main purpose of autoencoders: compression. Indeed, Hazami et al. [74] showed that a tiny subset (around \(3\%\)) of the vdvae latent space is sufficient for comparable input reconstruction. Here, we demonstrate that it is possible to compress hierarchical VAEs and focus on investigating their latent representations with applications to neuroscience data.
Evaluating ANNs on predicting biological neurons.Several studies have focused on evaluating ANNs on their performance in predicting brain responses, but almost entirely on describing static ("ventral stream") image processing [30; 33; 36]. In contrast, motion processing (corresponding to the dorsal stream) has only been considered thus far in Mineault et al. [34], who used a 3D ResNet ("DorsalNet") to extract ground truth factors about self-motion from drone footage ("AirSim", [75]) in a supervised manner. DorsalNet learned representations with receptive fields that matched known features of the primate dorsal stream and achieved state-of-the-art on predicting neural responses on the dataset that we consider here. In addition to our model architecture and training set, a fundamental difference between our approach and Mineault et al. [34] is that they trained their models using direct supervision. As such, their models have access to the ground truth factors at all times.
Here, we demonstrate that it is possible to obtain ground truth factors "for free", in a completely unsupervised manner, while achieving better performance in predicting biological neuronal responses.
Using synthetic data to train ANNs.A core component of a reductionist approach to studying the brain is to characterize neurons based on their selectivity to a particular subset of pre-identified visual "features", usually by presenting sets of "feature-isolating" stimuli [76]. In the extreme, stimuli are designed that remove all other features except the one under investigation [77]. While these approaches can inform how pre-selected feature sets are represented by neural networks, it is often difficult to generalize this understanding to more natural stimuli, which are not necessarily well-described by any one feature set. As a result, here we generate synthetic data representing a _naturalistic_ distribution of natural motion stimuli. Such synthetic datasets allow us to manipulate the causal structure of the world, in order to make hypotheses about what aspects of the world matter for the representations learned by brains and ANNs [78]. Like previous work on synthesized textures [15], here we specifically manipulate the data generative structure to contain factors of variation due to known ground truth factors.
## 3 Approach: Data & Models
Retinal Optic Flow Learning (ROFL).Our synthetic dataset framework, ROFL, generates the resulting optic flow from different world structures, self-motion trajectories, and object motion (Fig. 1a, see also [79]).
ROFL can be used to generate _naturalistic_ flow fields that share key elements with those experienced in navigation through 3-D environments. Specifically, each frame contains global patterns that are due to self-motion, including rotation that can arise due to eye or head movement [80, 81]. In addition, local motion patterns can be present due to objects that move independently of the observer [48]. The overall flow pattern is also affected by the observer's direction of gaze (fixation point [82], Fig. 1a).
ROFL generates flow vectors that are instantaneous in time, representing the velocity across the visual field resulting from the spatial configuration of the scene and motion vectors of self and object. Ignoring the time-evolution of a given scene (which can arguably be considered separably [83]) dramatically reduces the input space from \([3\times H\times W\times T]\) to \([2\times H\times W]\), and allows a broader sampling of configurations without introducing changes in luminance and texture. As a result, we can explore the role of different causal structures in representation learning in ANNs.
The retinal flow patterns generated by a moving object depend on both the observer's self-motion and the rotation of their eyes as they maintain fixation in the world, in addition to the motion of the object itself. For example, Fig. 1c demonstrates a situation where the observer is moving forward, and the object is moving to the right, with different object positions: an object on the left side will have its flow patterns distorted, while an object on the right will have its flow patterns largely unaffected because its flow vectors are parallel with that of the self-motion. In summary, ROFL allows us to simulate retinal optic flow with a known ground truth structure driven by object and self-motion.
The compressed NVAE (cNVAE).The latent space of the NVAE is partitioned into groups, \(\mathbf{z}=\{\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{L}\}\), where \(L\) is the number of groups. The latent groups are serially dependent, meaning that the distribution of a given latent group depends on the value of the preceding latents,
\begin{table}
\begin{tabular}{l c c} \hline \hline Category & Description & Dimensionality \\ \hline \multirow{2}{*}{fixate-1} & A moving observer maintains fixation on a background point. & \multirow{2}{*}{\(11=2+3+6\)} \\ & In addition, the scene contains one independently moving object. & \\ \hline \multirow{2}{*}{fixate-0} & Same as fixate-1 but without the object. & \multirow{2}{*}{\(5=2+3\)} \\ & A single moving object, stationary observer. & \\ \hline \hline \end{tabular}
\end{table}
Table 1: ROFL categories used in this paper. ground truth factors include fixation point (\(+2\)); velocity of the observer when self-motion is present (\(+3\)); and, object position & velocity (\(+6\)). Figure 1b showcases a few example frames for each category. The stimuli can be rendered at any given spatial scale \(N\), yielding an input shape of \(2\times N\times N\). Here we work with \(N=17\).
such that the prior is given by \(p(\mathbf{z})=p(\mathbf{z}_{1})\cdot\prod_{\ell=2}^{L}p(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell})\), and approximate posterior is given by \(q(\mathbf{z}|\mathbf{x})=\prod_{\ell=1}^{L}q(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\mathbf{x})\) (more details in section 9.1). Additionally, different latent groups in the NVAE operate at different spatial scales (Fig. 2, left), with multiple groups per scale. Crucially, such scale-dependent grouping is absent from non-hierarchical VAEs (Fig. 2, right).
The cNVAE closely follows the NVAE [52], with one important difference: the original NVAE latent space is convolutional, and ours is not. We modified the _sampler_ layers (grey trapezoids, Fig. 2) such that their receptive field sizes match the spatial scale they operate on. Thus, sampler layers integrate over spatial information before sampling from the approximate posterior. The spatial patterns of each latent dimension are then determined by _expand_ modules (yellow trapezoids, Fig. 2), based on a deconvolution step. Further details about the processing of the sampler and expand layers are provided in Supplementary section 9.2.
Our modification of the NVAE serves two purposes. First, it decouples spatial information from the functionality of latent variables, allowing them to capture abstract features that are invariant to particular spatial locations. Second, it has the effect of compressing the input space into a lower-dimensional latent code. We explain this in more detail in Supplementary section 9.3.
Our model has the following structure: 3 latent groups operating at the scale of \(2\times 2\); 6 groups at the scale of \(4\times 4\); and 12 groups at the scale of \(8\times 8\) (Table 4, Fig. 2). Therefore, the model has \(3+6+12=21\) hierarchical latent groups in total. Each latent group has \(20\) latent variables, which results in an overall latent dimensionality of \(21\times 20=420\). See Table 4 and Supplementary section 9.3 for more details.
Alternative models.We evaluated a range of unsupervised models alongside cNVAE, including standard (non-hierarchical) VAEs [11; 12], a hierarchical autoencoder with identical architecture as
Figure 1: Retinal Optic Flow Learning (ROFL): a simulation platform for synthesizing naturalistic optic flow patterns. **(a)** The general setup includes a moving or stationary observer and a solid background, with optional moving object(s) in the scene. More details are provided in the appendix (section 13). **(b)** Example frames showcasing different categories (see Table 1 for definitions). **(c, d)** Demonstrating the causal effects of varying a single ground truth factor while keeping all others fixed: **(c)**\(X_{obj}\), the \(x\) component of object position (measured in retinal coordinates, orange), and **(d)**\(F_{x}\), the \(X\) component of the fixation point (measured in fixed coordinates, gray).
the cNVAE but trained only with reconstruction loss (cNAE), and an autoencoder (AE) counterpart for the VAE (Table 2). All models had the same latent dimensionality (Table 4), and approximately the same number of parameters and convolutional layers. We used endpoint error as our measure of reconstruction loss, which is the Euclidean norm of the difference between actual and reconstructed flow vectors. This metric works well with optical flow data [84].
Model representations.We define a model's internal representation to be either the mean of each Gaussian for variational models (i.e., samples drawn from \(q\left(\mathbf{z}|\mathbf{x}\right)\) at zero temperature), or the bottleneck activations for autoencoders. For hierarchical models (cNVAE, cNAE), we concatenate representations across all levels (Table 4).
Training details.Models were trained for \(160,000\) steps at an input scale of \(17\times 17\), requiring slightly over a day on Quadro RTX 5000 GPUs. Please refer to Supplementary section 9.4 for additional details.
Disentanglement and \(\beta\)-VAEs.A critical decision when optimizing VAEs involves determining the weight assigned to the KL term in the loss function compared to the reconstruction loss. Prior research has demonstrated that modifying a single parameter, denoted as \(\beta\), which scales the KL term, can lead to the emergence of disentangled representations [85, 86]. Most studies employing VAEs for image reconstruction typically optimize the standard evidence lower bound (ELBO) loss, where \(\beta\) is fixed at a value of 1 [11, 52, 71]. However, it should be noted that due to the dependence of the reconstruction loss on the input size, any changes in the dimensionality of the input will inevitably alter the relative contribution of the KL term, and thus the "effective" \(\beta\)[85].
Furthermore, Higgins et al. [16] recently established a strong correspondence between the generative factors discovered by \(\beta\)-VAEs and the factors encoded by inferotemporal (IT) neurons in the primate ventral stream. The alignment between these factors and IT neurons exhibited a linear relationship with the value of \(\beta\). In light of these findings, we explicitly manipulate the parameter \(\beta\) within a range spanning from \(0.01\) to \(10\) to investigate the extent to which our results depend on its value.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Model & Architecture & Loss & Kullback–Leibler term (KL) \\ \hline \multirow{2}{*}{cNVAE} & \multirow{2}{*}{Hierarchical} & \multirow{2}{*}{EPE \(+\beta*\mathrm{KL}\)} & \(\mathrm{KL}=\sum_{\ell=1}^{L}\mathbb{E}_{q\left(\mathbf{z}_{<\ell}|\mathbf{x}\right)} \left[\mathrm{KL}_{\ell}\right],\) where \\ & & & \(\mathrm{KL}_{\ell}\coloneqq\mathcal{D}_{\mathrm{KL}}\left[q\left(\mathbf{z}_{\ell}| \mathbf{x},\mathbf{z}_{<\ell}\right)\|\,p\left(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell}\right)\right]\) \\ \hline VAE & Non-hierarchical & \(\mathrm{EPE}+\beta*\mathrm{KL}\) & \(\mathrm{KL}=\mathcal{D}_{\mathrm{KL}}\left[q\left(\mathbf{z}|\mathbf{x}\right)\|\,p \left(\mathbf{z}\right)\right]\) \\ \hline cNAE & Hierarchical & EPE & - \\ \hline AE & Non-hierarchical & EPE & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Model details. Here, _hierarchical_ means that there are parallel pathways for information to flow from the encoder to the decoder (Fig. 2), which is slightly different from the conventional notion. For variational models, this implies hierarchical dependencies between latents in a statistical sense [71]. This hierarchical dependence is reflected in the KL term for the cNVAE, where \(L\) is the number of hierarchical latent groups. See Supplementary section 9.3 for more details and section 9.1 for a derivation. All models have an equal # of latent dimensions (\(420\), see Table 4), approximately the same # of convolutional layers, and # of parameters (\(\sim 24\)\(M\)). EPE, endpoint error.
Figure 2: Architecture comparison. Left, compressed NVAE (cNVAE); right, non-hierarchical VAE. We modified the NVAE _sampler_ layer (grey trapezoids) and introduced a deconvolution _expand_ layer (yellow trapezoids). The encoder (inference) and decoder (generation) pathways are depicted in red and blue, respectively. \(r\), residual block; \(h\), trainable parameter; \(+\), feature combination.
Results
Our approach is based on the premise that the visual world contains a hierarchical structure. We use a simulation containing a hierarchical structure (ROFL, described above) and a hierarchical VAE (the cNVAE, above) to investigate how these choices affect the learned latent representations. While we are using a relatively simple simulation generated from a small number of ground truth factors, \(\mathbf{g}\), we do not specify how \(\mathbf{g}\) should be represented in our model or include \(\mathbf{g}\) in the loss. Rather, we allow the model to develop its own latent representation in a purely unsupervised manner. See Supplementary section 9.6 for more details on our approach.
We first consider hierarchical and non-hierarchical VAEs trained on the fixate-1 condition (see Table 1; throughout this work, fixate-1 is used unless stated otherwise). We extracted latent representations from each model and estimated the mutual information (MI) between the representations and ground truth factors such as self-motion, etc. For fixate-1, each data sample is uniquely determined using 11 ground truth factors (Table 1), and the models have latent dimensionality of \(420\) (Table 4). Thus, the resulting MI matrix has shape \(11\times 420\), where each entry shows how much information is contained in that latent variable about a given ground truth factor.
Functional specialization emerges in the cNVAE.Figure 3 shows the MI matrix for the latent space of cNVAE (top) and VAE (bottom). While both models achieved a good reconstruction of validation data (Fig. 14), the MI matrix for cNVAE exhibits clusters corresponding to distinct ground truth factors at different levels of the hierarchy. Specifically, object-related factors of variation are largely captured at the top \(2\times 2\) scale, while information about fixation point can be found across the hierarchy, and self-motion is largely captured by \(8\times 8\) latent groups. In contrast, non-hierarchical VAE has no such structure, suggesting that the inductive bias of hierarchy enhances the quality of latent spaces, which we quantify next.
Evaluating the latent code.To demonstrate the relationship between ground truth factors and latent representations discovered by the cNVAE visible in Fig. 3, we apply metrics referred to as "untangling" and "disentengling". Additionally, in a separate set of experiments, we also evaluate model representations by relating them to MT neuron responses, which we call "brain-alignment". We discuss each of these in detail in the following sections.
Untangling: the cNVAE untangles factors of variation.One desirable feature of a latent representation is whether it makes information about ground truth factors easily (linearly) decodable [20; 21; 87]. This concept has been introduced in the context of core object recognition as "_untangling_". Information about object identity that is "tangled" in the retinal input is untangled through successive nonlinear transforms, thus making it linearly available for higher brain regions to extract [20]. This concept is closely related to the "_informativeness_" metric of Eastwood and Williams [22] and "_explicitness_" metric of Ridgeway and Mozer [23].
To assess the performance of our models, we evaluated the linear decodability of the ground truth factors, \(\mathbf{g}\), from model latent codes, \(\mathbf{z}\). Based on the \(R^{2}\) scores obtained by predicting \(\mathbf{g}\) from \(\mathbf{z}\) using linear regression (Fig. 4), the cNVAE greatly outperforms competing models, faithfully capturing all ground truth factors. In contrast, the non-hierarchical VAE fails to capture object-related variables. Notably, the cNVAE can recover the fixation point location (\(F_{X}\), \(F_{Y}\)) in physical space almost
Figure 3: Mutual information between latent variables (x-axis) and ground truth factors (y-axis) is shown for cNVAE (top) and VAE (bottom). Dashed lines indicate \(21\) hierarchical latent groups of \(20\) latents each, comprising a \(420\)-dimensional latent space. These groups operate at three different spatial scales, as indicated. In contrast, the VAE latent space lacks such grouping and operates solely at the spatial scale of \(2\times 2\) (see Fig. 2 and Table 4 for details on model latent configurations).
perfectly. The fixation location has a highly nontrivial effect on the flow patterns, and varying it causes both global and local changes in the flow patterns (Fig. 1d).
Furthermore, cNVAE is the only model that reliably captures object position and velocity: especially note \(V_{obj,z}\) (last column in Fig. 4). Inferring object motion from complex optic flow patterns involves two key components. First, the model must extract self-motion from flow patterns. Second, the model must understand how self-motion influences flow patterns globally. Only then can the model subtract self-motion from global flow vectors to obtain object motion. In vision science, this is known as the _"flow-parsing hypothesis"_[88, 89, 90, 91]. Such flow-parsing is achieved by the cNVAE but none of the other models. See Supplementary section 11 for further discussion of this result and its implications.
Disentanglement: the cNVAE produces more disentangled representations.The pursuit of disentanglement in neural representations has garnered considerable attention [23, 85, 92, 93, 94, 95, 96, 97, 98, 99, 100]. In particular, Locatello et al. [19] established that learning fully disentangled representations is fundamentally impossible without inductive biases. Prior efforts such as \(\beta\)-VAE [85] demonstrated that increasing the weight of the KL loss (indicated by \(\beta\)) promotes disentanglement in VAEs. More recently, Whittington et al. [92] demonstrated that simple biologically inspired constraints such as non-negativity and energy efficiency encourage disentanglement. Here, we demonstrate that another biological inductive bias, hierarchy in the latent space, will promote disentanglement of the latent representations learned by VAEs.
To evaluate the role of hierarchy, we adopted the DCI framework [22] which offers a well-rounded evaluation of latent representations. The approach involves training a simple decoder (e.g., lasso regression) that predicts data generative factors \(\mathbf{g}\) from a latent code \(\mathbf{z}\); followed by computing a matrix of relative importances (e.g., based on lasso weights) which is then used to evaluate different aspects of the code quality: _Informativeness_--measures whether \(\mathbf{z}\) contains easily accessible information about \(\mathbf{g}\) (similar to untangling from above). _Disentanglement_--measures whether individual latents correspond to individual generative factors. _Completeness_--measures how many \(z_{i}\) are required to capture any single \(g_{j}\). If a single latent contributes to \(g_{j}\)'s prediction, the score will be 1 (complete). If all latent variables equally contribute to \(g_{j}\)'s prediction, the score will be 0 (maximally overcomplete). Note that "_completeness_" is also referred to as "_compactness_" [23]. See Fig. 9 and Supplementary
Figure 4: Hierarchical VAE untangles underlying factors of variation in data. The linear decodability of ground truth factors (x-axis) from different latent codes is shown. Untangling scores averaged across all ground truth factors are \(\text{cNVAE}=0.898\), \(\text{NVAE}=0.639\), \(\text{VAE}=0.548\), \(\text{cNAE}=0.456\), \(\text{AE}=0.477\), \(\text{PCA}=0.236\), and \(\text{Raw}=0.235\). For variational models, the best performing \(\beta\) values were selected: cNVAE, \(\beta=0.15\); VAE, \(\beta=1.5\) (see Supplementary section 9.5 for more details).
Figure 5: Evaluating the learned latent codes using the DCI framework [22]. Larger values are better for all metrics. Note that _informativeness_ is closely related to _untangling_[20, 21]. See also Fig. 9.
section 9.7.1 for more details, ref. [101] for a review, and ref. [102] for a recent extension of the DCI framework.
We follow the methods outlined by Eastwood and Williams [22] with two modifications: (1) we replaced lasso with linear regression to avoid the strong dependence on the lasso coefficient that we observed, and (2) we estimate the matrix of relative importances using a feature permutation-based algorithm (sklearn.inspection.permutation_importance), which measures the relative performance drop that results from shuffling a given latent.
We found that cNVAE outperforms competing models across all metrics for a broad range of \(\beta\) values (Fig. 5). The observed pattern of an inverted U shape is consistent with previous work [85], which suggests that there is an optimal \(\beta\) that can be empirically determined. In this case, cNVAE with \(\beta=0.5\) achieved the best average DCI score. Further, we found that VAEs lacking hierarchical structure learn highly overcomplete codes, such that many latents contribute to predicting a single ground truth factor. In conclusion, the simple inductive bias of hierarchy in the latent space led to a substantial improvement in VAE performance across all components of the DCI metric.
Brain-alignment: the cNVAE aligns more closely with MT neurons.To evaluate the performance of models in predicting neuronal activity in response to motion stimuli, we used an existing dataset of \(N=141\) MT neurons recorded while presented with random dot kinematograms representing smoothly changing combinations of optic flow velocity fields [103, 104]. A subset of these neurons (\(N=84\)) are publicly available on crcns.org, and were recently used in Mineault et al. [34] that we compare to.
To measure neuronal alignment, we first determined the mapping from each model's latent representation to MT neuron responses (binned spike counts, Fig. 6a). Here, the latent representation is defined as the mean of predicted Gaussian distributions for VAEs, and the bottleneck activations for AEs. We learn this linear latent-to-neuron mapping using ridge regression. Figure 6b shows the average firing rate of an example neuron along with model predictions. Because sensory neurons have a nonzero response latency, we determined each neuron's optimal response latency, which maximized cross-validated performance. The resulting distribution of best-selected latencies (Fig. 6c) peaked around \(100~{}ms\): consistent with known MT latencies [103]. We also empirically optimized ridge coefficients to ensure each neuron has its best fit. Figure 6d shows that the models capture the receptive field properties of MT neurons as measured by the spike-triggered average stimulus. To evaluate performance, we follow methods established by Mineault et al. [34]: whenever repeated trials were available, we report Pearson's \(R\) on that held-out data, normalized by maximum explainable variance [105]. When repeats were not available, we performed 5-fold cross-validation and reported the held-out performance using Pearson's \(R\) between model prediction and spike trains.
Evaluating brain alignment.We use two measures of brain alignment: the success at predicting the neural response (Pearson's \(R\), Fig. 7, Table 3); and, the "_alignment_" between neurons and individual model latents (Fig. 8, [16]). These mirror the untangling and completeness metrics described above (more details are provided below).
Figure 6: **(a)** Experimental setup form [103, 104]. **(b)** Both models explain MT neural variability well. **(c)** Distribution of best estimated latencies. **(d)** Spike-triggered averages (STA) are shown.
All models predict MT neuron responses well.After training a large ensemble of unsupervised models on fixate-1 and learning the neural mapping, we found that both hierarchical (cNVAE & cNAE) and non-hierarchical (VAE & AE) variants had similar ability to predict neural responses (Fig. 7). The performance did depend on the loss function itself, with the variational loss outperforming simple autoencoder reconstruction loss (Table 3).
Hierarchical VAEs are more aligned with MT neurons.We next tested how these factors affect neural alignment, i.e., how closely neurons are related to individual latents in the model. Figure 8a demonstrates what we mean by "alignment": a sparse latent-to-neuron relationship means larger alignment, indicative of a similar representational "form" [16]. See Fig. 10 for an illustration of this idea. To formalize this notion, we use feature permutation importance (described above), applied to the ridge regression models. This yields a \(420\)-dimensional vector per neuron. Each dimension of this vector captures the importance of a given latent variable in predicting the responses of the neuron. We normalize these vectors and interpret them as the probability of importance. We then define alignment score \(a_{i}\) of neuron \(i\) as \(a_{i}=1+\sum_{k=1}^{K}p_{ik}\log_{K}p_{ik}\), where \(p_{ik}\) is interpreted as the importance of \(k-\)th latent variable in predicting neuron \(i\) (Fig. 8a). This concept is closely related to the "_completeness_" score from the DCI framework as discussed above.
Figure 8: Hierarchical models (cNVAE, cNAE) are more aligned with MT neurons since they enable sparse latent-to-neuron relationships. **(a)** Alignment score measures the sparsity of permutation feature importances. \(a_{i}=0\) when all latents are equally important in predicting neuron \(i\); and, \(a_{i}=1\) when a single latent predicts the neuron. **(b)** Feature importances are plotted for an example neuron (same as in Fig. 6b). cNVAE (\(\beta=0.01\)) predicts this neuron’s response in a much sparser manner compared to non-hierarchical VAE (\(\beta=5\)). Supplementary section 9.5 contains a discussion of our rationale in choosing these \(\beta\) values. **(c)** Alignment across \(\beta\) values, and autoencoders (ae).
Figure 7: All models (pretrained on fixate-1) perform comparably in predicting MT neuron responses. Dashed line corresponds to the previous state-of-the-art on this data [106].
For almost all \(\beta\) values, the cNVAE exhibited a greater brain alignment than non-hierarchical VAE (Fig. 8c; cNVAE > VAE, paired \(t-\)test; see Fig. 16 and Table 5). Similarly, for the autoencoders, we found that the hierarchical variant outperformed the non-hierarchical one (cNAE > AE). Based on these observations, we conclude that higher brain alignment is primarily due to hierarchical latent structure. However, note that hierarchy in the traditional sense did not matter: all these models had approximately the same number of convolutional layers and parameters.
Factors leading to brain-alignment.To test the effect of the training dataset (i.e., category of ROFL) on model performance, we trained cNVAE models using fixate-0, fixate-1, and obj-1 categories (Table 1), while also exploring a variety of \(\beta\) values. We found that fixate-1 clearly outperformed the other two ROFL categories (Table 3), suggesting that both global (e.g., self-motion) and local (e.g., object motion) sources of variation are necessary for learning MT-like representations. The effect of loss function was also visible: some \(\beta\) values led to more alignment. But this effect was small compared to the effect of hierarchical architecture (Fig. 8c).
## 5 Discussion
We introduced a new framework for understanding and evaluating the representation of visual motion learned by artificial and biological neural networks. This framework provides a way to manipulate causes in the world and evaluate whether learned representations untangle and disentangle those causes. In particular, our framework makes it possible to test the influence of architecture (Fig. 2), loss function (Table 2), and training set (Table 1) on the learned representations, encompassing 3 out of the 4 core components of a recently proposed neuroconnectionist research programme [41]. Our framework brings hypothesis-testing to understand [biological] neural processing of vision and provides an interpretive framework to understand neurophysiological data.
The goal of the present work was to establish our framework and demonstrate its potential. To this end, we made several simplifying choices, such as training on individual flow frames rather than time-evolving videos. We provide a detailed discussion of study limitations in Supplementary section 8. Future work will address these by rendering images in simulations and using image-computable models, incorporating real eye-tracking and scene data in ROFL [83, 109], testing our approach on more data from other brain areas such as MST [110, 111], and using more sophisticated methods to measure representational alignment between ANNs and brains [112, 113, 114, 115].
Conclusion.We used synthetic data to test how causal structure in the world affects the representations learned by autoencoder-based models and evaluated the learned representations based on how they represent ground truth factors and how well they align with biological brains. We found that a single inductive bias, hierarchical latent structure, leads to desirable representations and increased brain alignment.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & Pretraining dataset & \multicolumn{3}{c}{Performance, \(R\)\((\mu\pm se;\ N=141)\)} \\ \cline{3-6} & \(\beta=0.5\) & \(\beta=0.8\) & \(\beta=1\) & \(\beta=5\) \\ \hline \multirow{3}{*}{cNVAE} & fixate-1 & \(\mathbf{.506\pm.018}\) & \(\mathbf{.517\pm.017}\) & \(\mathbf{.494\pm.018}\) & \(\mathbf{.486\pm.016}\) \\ & fixate-0 & \(\mathbf{.428\pm.018}\) & \(\mathbf{.450\pm.019}\) & \(\mathbf{.442\pm.019}\) & \(\mathbf{.469\pm.018}\) \\ & obj-1 & \(\mathbf{.471\pm.018}\) & \(\mathbf{.465\pm.018}\) & \(\mathbf{.477\pm.017}\) & \(\mathbf{.468\pm.018}\) \\ \hline VAE & fixate-1 & \(\mathbf{.508\pm.019}\) & \(\mathbf{.481\pm.018}\) & \(\mathbf{.494\pm.018}\) & \(\mathbf{.509\pm.018}\) \\ \hline cNAE & fixate-1 & \multicolumn{3}{c}{\(\mathbf{.476\pm.018}\)} \\ \hline AE & fixate-1 & \multicolumn{3}{c}{\(\mathbf{.495\pm.019}\)} \\ \hline CPC [108] & AirSim [75] & \multicolumn{3}{c}{\(\mathbf{.250\pm.020}\) (Mineault et al. [34])} \\ \hline DorsalNet & AirSim [75] & \multicolumn{3}{c}{\(\mathbf{.251\pm.019}\) (Mineault et al. [34])} \\ \hline \hline \end{tabular}
\end{table}
Table 3: Both cNVAE and VAE perform well in predicting MT neuron responses, surpassing previous state-of-the-art models by more than a twofold improvement. Moreover, the clear gap between fixate-1 and other categories highlights the importance of pretraining data [107].
Code & Data
Our code and model checkpoints are available here: [https://github.com/hadivafaii/ROFL-cNVAE](https://github.com/hadivafaii/ROFL-cNVAE).
## 7 Acknowledgments
This work was supported by NSF IIS-2113197 (HV and DAB), NSF DGE-1632976 (HV), and NIH R00EY032179 (JLY). We thank our anonymous reviewers for their helpful comments, and the developers of the software packages used in this project, including PyTorch [116], NumPy [117], SciPy [118], scikit-learn [119], pandas [120], matplotlib [121], and seaborn [122].
## References
* [1] Hermann Von Helmholtz. _Handbuch der physiologischen Optik_. Vol. 9. Voss, 1867.
* [2] Ibn al-Haytham. _Book of optics (Kitab Al-Manazir)_. 1011-1021 AD.
* [3] David Mumford. "On the computational architecture of the neocortex: II The role of corticocortical loops". In: _Biological Cybernetics_ 66.3 (1992), pp. 241-251. doi: 10.1007 / BF00198477.
* [4] Tai Sing Lee and David Mumford. "Hierarchical Bayesian inference in the visual cortex". In: _JOSA A_ 20.7 (2003), pp. 1434-1448. doi: 10.1364/JOSAA.20.001434.
* [5] Rajesh PN Rao and Dana H Ballard. "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects". In: _Nature Neuroscience_ 2.1 (1999), pp. 79-87. doi: 10.1038/4580.
* [6] David C Knill and Alexandre Pouget. "The Bayesian brain: the role of uncertainty in neural coding and computation". In: _Trends in Neurosciences_ 27.12 (2004), pp. 712-719. doi: 10.1016/j.tins.2004.10.007.
* [7] Alan Yuille and Daniel Kersten. "Vision as Bayesian inference: analysis by synthesis?" In: _Trends in Cognitive Sciences_ 10.7 (2006), pp. 301-308. doi: 10.1016/j.tics.2006.05.002.
* [8] Karl Friston. "A theory of cortical responses". In: _Philosophical transactions of the Royal Society B: Biological Sciences_ 360.1456 (2005), pp. 815-836. doi: 10.1098/rstb.2005.1622.
* [9] Andy Clark. "Whatever next? Predictive brains, situated agents, and the future of cognitive science". In: _Behavioral and brain sciences_ 36.3 (2013), pp. 181-204. doi: 10.1017 / S0140525X12000477.
* [10] Peter Dayan et al. "The Helmholtz machine". In: _Neural Computation_ 7.5 (1995), pp. 889-904. doi: 10.1162/neco.1995.7.5.889.
* [11] Diederik P Kingma and Max Welling. "Auto-encoding variational bayes". In: (2014). arXiv: 1312.6114v11 [stat.ML].
* [12] Danilo Jimenez Rezende et al. "Stochastic backpropagation and approximate inference in deep generative models". In: _International Conference on Machine Learning_. PMLR. 2014, pp. 1278-1286. url: [https://proceedings.mlr.press/v32/rezende14.html](https://proceedings.mlr.press/v32/rezende14.html).
* [13] Lukas Schott et al. "Towards the first adversarially robust neural network model on MNIST". In: _International Conference on Learning Representations_. 2019. url: [https://openreview.net/forum?id=S1EHOsC9tX](https://openreview.net/forum?id=S1EHOsC9tX).
* [14] Ilker Yildirim et al. "Efficient inverse graphics in biological face processing". In: _Science Advances_ 6.10 (2020), eaax5979. doi: 10.1126/sciadv.aax5979.
* [15] Katherine R Storrs et al. "Unsupervised learning predicts human perception and misperception of gloss". In: _Nature Human Behaviour_ 5.10 (2021), pp. 1402-1417. doi: 10.1038/s41562-021-01097-6.
* [16] Irina Higgins et al. "Unsupervised deep learning identifies semantic disentanglement in single inferotemporal face patch neurons". In: _Nature Communications_ 12.1 (2021), p. 6456. doi: 10.1038/s41467-021-26751-5.
* [17] Joseph Marino. "Predictive coding, variational autoencoders, and biological connections". In: _Neural Computation_ 34.1 (2022), pp. 1-44. doi: 10.1162/neco_a_01458.
* [18] Irina Higgins et al. "Towards a definition of disentangled representations". In: (2018). arXiv: 1812.02230 [cs.LG].
* [19] Francesco Locatello et al. "Challenging common assumptions in the unsupervised learning of disentangled representations". In: _international conference on machine learning_. PMLR. 2019, pp. 4114-4124. url: [https://proceedings.mlr.press/v97/locatello19a.html](https://proceedings.mlr.press/v97/locatello19a.html).
* [20] James J DiCarlo and David D Cox. "Untangling invariant object recognition". In: _Trends in Cognitive Sciences_ 11.8 (2007), pp. 333-341. doi: 10.1016/j.tics.2007.06.010.
* [21] James J DiCarlo et al. "How does the brain solve visual object recognition?" In: _Neuron_ 73.3 (2012), pp. 415-434. doi: 10.1016/j.neuron.2012.01.010.
* [22] Cian Eastwood and Christopher K. I. Williams. "A framework for the quantitative evaluation of disentangled representations". In: _International Conference on Learning Representations_. 2018. url: [https://openreview.net/forum?id=By-7dz-AZ](https://openreview.net/forum?id=By-7dz-AZ).
* [23] Karl Ridgeway and Michael C Mozer. "Learning Deep Disentangled Embeddings With the F-Statistic Loss". In: _Advances in Neural Information Processing Systems_. Vol. 31. Curran Associates, Inc., 2018. url: [https://papers.nips.cc/paper_files/paper/2018/hash/2b24d495052a8ce66358eb576b8912c8-Abstract.html](https://papers.nips.cc/paper_files/paper/2018/hash/2b24d495052a8ce66358eb576b8912c8-Abstract.html).
* [24] Yoshua Bengio et al. "Representation learning: A review and new perspectives". In: _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 35.8 (2013), pp. 1798-1828. doi: 10.1109/TPAMI.2013.50.
* [25] Brenden M Lake et al. "Building machines that learn and think like people". In: _Behavioral and Brain Sciences_ 40 (2017), e253. doi: 10.1017/S0140525X16001837.
* [26] Jonas Peters et al. _Elements of causal inference: foundations and learning algorithms_. The MIT Press, 2017. url: [https://mitpress.mit.edu/9780262037310/elements-of-causal-inference](https://mitpress.mit.edu/9780262037310/elements-of-causal-inference).
* [27] Yann LeCun et al. "Deep learning". In: _Nature_ 521.7553 (2015), pp. 436-444. doi: 10.1038/nature14539.
* [28] Jurgen Schmidhuber. "Learning factorial codes by predictability minimization". In: _Neural Computation_ 4.6 (1992), pp. 863-879. doi: 10.1162/neco.1992.4.6.863.
* [29] Michael Tschannen et al. "Recent advances in autoencoder-based representation learning". In: (2018). arXiv: 1812.05069 [cs.LG].
* [30] Martin Schrimpf et al. "Brain-score: Which artificial neural network for object recognition is most brain-like?" In: _BioRxiv_ (2018), p. 407007. doi: 10.1101/407007.
* [31] Daniel LK Yamins et al. "Performance-optimized hierarchical models predict neural responses in higher visual cortex". In: _Proceedings of the National Academy of Sciences_ 111.23 (2014), pp. 8619-8624. doi: 10.1073/pnas.1403112111.
* [32] Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. "Deep supervised, but not unsupervised, models may explain IT cortical representation". In: _PLoS Computational Biology_ 10.11 (2014), e1003915. doi: 10.1371/journal.pcbi.1003915.
* [33] Daniel LK Yamins and James J DiCarlo. "Using goal-driven deep learning models to understand sensory cortex". In: _Nature Neuroscience_ 19.3 (2016), pp. 356-365. doi: 10.1038/nn.4244.
* [34] Patrick Mineault et al. "Your head is there to move you around: Goal-driven models of the primate dorsal pathway". In: _Advances in Neural Information Processing Systems_. Ed. by M. Ranzato et al. Vol. 34. Curran Associates, Inc., 2021, pp. 28757-28771. url: [https://papers.nips.cc/paper/2021/hash/f1676935f9304b97d59b0738289d2e22](https://papers.nips.cc/paper/2021/hash/f1676935f9304b97d59b0738289d2e22) -Abstract.html.
* [35] Eric Elmoznino and Michael F Bonner. "High-performing neural network models of visual cortex benefit from high latent dimensionality". In: _bioRxiv_ (2022), pp. 2022-07. doi: 10.1101/2022.07.13.499969.
* [36] Colin Conwell et al. "What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines?" In: _bioRxiv_ (2023). doi: 10.1101/2022.03.28.485868.
* [37] Nicholas J Sexton and Bradley C Love. "Reassessing hierarchical correspondences between brain and deep networks through direct interface". In: _Science Advances_ 8.28 (2022), eabm2219. doi: 10.1126/sciadv.abm2219.
* [38] Greta Tuckute et al. "Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions". In: _bioRxiv_ (2023). doi: 10.1101/2022.09.06.506680.
* [39] Blake Richards et al. "The application of artificial intelligence to biology and neuroscience". In: _Cell_ 185.15 (2022), pp. 2640-2643. doi: 10.1016/j.cell.2022.06.047.
* [40] Anthony Zador et al. "Catalyzing next-generation Artificial Intelligence through NeuroAI". In: _Nature Communications_ 14.1 (2023), p. 1597. doi: 10.1038/s41467-023-37180-x.
* [41] Adrien Doering et al. "The neuroconnectionist research programme". In: _Nature Reviews Neuroscience_ (2023), pp. 1-20. doi: 10.1038/s41583-023-00705-w.
* [42] Nancy Kanwisher et al. "Using artificial neural networks to ask 'why' questions of minds and brains". In: _Trends in Neurosciences_ (2023). doi: 10.1016/j.tins.2022.12.008.
* [43] Rosa Cao and Daniel Yamins. "Explanatory models in neuroscience: Part 1-taking mechanistic abstraction seriously". In: (2021). arXiv: 2104.01490v2 [q-bio.NC].
* [44] Blake A Richards et al. "A deep learning framework for neuroscience". In: _Nature Neuroscience_ 22.11 (2019), pp. 1761-1770. doi: 10.1038/s41593-019-0520-2.
* [45] David GT Barrett et al. "Analyzing biological and artificial neural networks: challenges with opportunities for synergy?" In: _Current Opinion in Neurobiology_ 55 (2019), pp. 55-64. doi: 10.1016/j.conb.2019.01.007.
* [46] Thomas Serre. "Deep learning: the good, the bad, and the ugly". In: _Annual Review of Vision Science_ 5 (2019), pp. 399-426. doi: 10.1146/annurev-vision-091718-014951.
* [47] Nikolaus Kriegeskorte. "Deep neural networks: a new framework for modeling biological vision and brain information processing". In: _Annual Review of Vision Science_ 1 (2015), pp. 417-446. doi: 10.1101/029876.
* [48] James J Gibson. "The visual perception of objective motion and subjective movement". In: _Psychological Review_ 61.5 (1954), p. 304. doi: 10.1037/h0061885.
* [49] Leslie Ungerleider and Mortimer Mishkin. "Two cortical visual systems". In: _Analysis of Visual Behavior_ (1982), pp. 549-586. url: [https://www.cns.nyu.edu/~tony/vns/readings/ungerleider-mishkin-1982.pdf](https://www.cns.nyu.edu/~tony/vns/readings/ungerleider-mishkin-1982.pdf).
* [50] Melvyn A Goodale and A David Milner. "Separate visual pathways for perception and action". In: _Trends in Neurosciences_ 15.1 (1992), pp. 20-25. doi: 10.1016/0166-2236(92)90344-8.
* [51] L. G. Ungerleider and L. Pessoa. "What and where pathways". In: _Scholarpedia_ 3.11 (2008). revision #91940, p. 5342. doi: 10.4249/scholarpedia.5342.
* [52] Arash Vahdat and Jan Kautz. "NVAE: A Deep Hierarchical Variational Autoencoder". In: _Advances in Neural Information Processing Systems_. Vol. 33. Curran Associates, Inc., 2020, pp. 1967-19679. url: [https://papers.nips.cc/paper_files/paper/2020/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html](https://papers.nips.cc/paper_files/paper/2020/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html).
* [53] Mandyam Veerambul Srinivasan et al. "Predictive coding: a fresh view of inhibition in the retina". In: _Proceedings of the Royal Society of London. Series B. Biological Sciences_ 216.1205 (1982), pp. 427-459. doi: 10.1098/rspb.1982.0085.
* [54] Andre M Bastos et al. "Canonical microcircuits for predictive coding". In: _Neuron_ 76.4 (2012), pp. 695-711. doi: 10.1016/j.neuron.2012.10.038.
* [55] Dawei W Dong and Joseph J Atick. "Temporal decorrelation: a theory of lagged and non-lagged responses in the lateral geniculate nucleus". In: _Network: Computation in Neural Systems_ 6.2 (1995), p. 159. doi: 10.1088/0954-898X_6\(2\)003.
* [56] Wolf Singer. "Recurrent dynamics in the cerebral cortex: Integration of sensory evidence with stored knowledge". In: _Proceedings of the National Academy of Sciences_ 118 (2021). doi: 10.1073/pnas.210104311.
* [57] Fabian A Mikulasch et al. "Where is the error? Hierarchical predictive coding through dendritic error computation". In: _Trends in Neurosciences_ 46.1 (2023), pp. 45-59. doi: 10.1016/j.tins.2022.09.007.
* [58] Beren Millidge et al. "Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?" In: _International Joint Conference on Artificial Intelligence_. 2022. doi: 10.24963/ijcai.2022/774.
* [59] David C Knill and Whitman Richards. _Perception as Bayesian inference_. Cambridge University Press, 1996. doi: 10.1017/CBO9780511984037.
* [60] Yair Weiss et al. "Motion illusions as optimal percepts". In: _Nature Neuroscience_ 5.6 (2002), pp. 598-604.
* [61] Wilson S Geisler and Daniel Kersten. "Illusions, perception and Bayes". In: _Nature Neuroscience_ 5.6 (2002), pp. 508-510. doi: 10.1038/nn0602-508.
* [62] Iris Vilares and Konrad Kording. "Bayesian models: the structure of the world, uncertainty, behavior, and the brain". In: _Annals of the New York Academy of Sciences_ 1224.1 (2011), pp. 22-39. doi: 10.1111/j.1749-6632.2011.05965.x.
* [63] Richard Langton Gregory. "Perceptions as hypotheses". In: _Philosophical Transactions of the Royal Society of London. B, Biological Sciences_ 290.1038 (1980), pp. 181-197. doi: 10.1098/RSTB.1980.0090.
* [64] Timm Lochman and Sophie Deneve. "Neural processing as causal inference". In: _Current Opinion in Neurobiology_ 21.5 (2011), pp. 774-781. doi: 10.1016/j.conb.2011.05.018.
* [65] Sabyasachi Shivkumar et al. "A probabilistic population code based on neural samples". In: _Advances in Neural Information Processing Systems_. Ed. by S. Bengio et al. Vol. 31. Curran Associates, Inc., 2018. URL: [https://papers.nips.cc/paper_files/paper/2018/hash/5401acfe633e6817b508b84d23686743-Abstract.html](https://papers.nips.cc/paper_files/paper/2018/hash/5401acfe633e6817b508b84d23686743-Abstract.html).
* [66] Jozsef Fiser et al. "Statistically optimal perception and learning: from behavior to neural representations". In: _Trends in cognitive sciences_ 14.3 (2010), pp. 119-130. doi: 10.1016/j.tics.2010.01.003.
* [67] Bruno A. Olshausen. "Perception as an Inference Problem". In: _The Cognitive Neurosciences (5th edition)_ (2014). Ed. by Michael Gazzaniga and George R. Mangun. doi: 10.7551/mitpress/9504.003.0037. url: [http://rctn.org/bruno/papers/perception-as-inference.pdf](http://rctn.org/bruno/papers/perception-as-inference.pdf).
* [68] Ferenc Csikor et al. "Top-down effects in an early visual cortex inspired hierarchical Variational Autoencoder". In: _SVRHM 2022 Workshop @ NeurIPS_. 2022. url: [https://openreview.net/forum?id=8dfbo0QfYt3](https://openreview.net/forum?id=8dfbo0QfYt3).
* [69] Eleni Miliotou et al. "Generative Decoding of Visual Stimuli". In: _Proceedings of the 40th International Conference on Machine Learning_. Ed. by Andreas Krause et al. Vol. 202. Proceedings of Machine Learning Research. PMLR, July 2023, pp. 24775-24784. url: [https://proceedings.mlr.press/v202/miliotou23a.html](https://proceedings.mlr.press/v202/miliotou23a.html).
* [70] Yujia Huang et al. "Neural Networks with Recurrent Generative Feedback". In: _Advances in Neural Information Processing Systems_. Vol. 33. Curran Associates, Inc., 2020. url: [https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9eb0778f-Abstract.html](https://papers.nips.cc/paper_files/paper/2020/hash/0660895c22f8a14eb039bfb9eb0778f-Abstract.html).
* [71] Rewon Child. "Very Deep [VAE]s Generalize Autoregressive Models and Can Outperform Them on Images". In: _International Conference on Learning Representations_. 2021. url: [https://openreview.net/forum?id=RLRXCV6DbeJ](https://openreview.net/forum?id=RLRXCV6DbeJ).
* [72] Casper Kaae Sonderby et al. "Ladder Variational Autoencoders". In: _Advances in Neural Information Processing Systems_. Vol. 29. Curran Associates, Inc., 2016. url: [https://papers.nips.cc/paper_files/paper/2016/hash/6ae07dcb33ec3b7c814df797cbda0f87-Abstract.html](https://papers.nips.cc/paper_files/paper/2016/hash/6ae07dcb33ec3b7c814df797cbda0f87-Abstract.html).
* [73] Lars Maaloe et al. "BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling". In: _Advances in Neural Information Processing Systems_. Vol. 32. Curran Associates, Inc., 2019. url: [https://papers.nips.cc/paper_files/paper/2019/hash/9bd8b1faffa4b3d41779bb495d79fb9-Abstract.html](https://papers.nips.cc/paper_files/paper/2019/hash/9bd8b1faffa4b3d41779bb495d79fb9-Abstract.html).
* [74] Louay Hazami et al. "Efficientvdvae: Less is more". In: (2022). arXiv: 2203.13751v2 [cs.LG].
* [75] Shital Shah et al. "Airsim: High-fidelity visual and physical simulation for autonomous vehicles". In: _Field and Service Robotics: Results of the 11th International Conference_. Springer. 2018, pp. 621-635. doi: 10.1007/978-3-319-67361-5_40.
* [76] Nicole C Rust and J Anthony Movshon. "In praise of artifice". In: _Nature Neuroscience_ 8.12 (2005), pp. 1647-1650. doi: 10.1038/nn1606.
* [77] Bela Julesz. "Foundations of cyclopean perception". In: (1971). url: [https://books.google.com/books/about/Foundations_of_Cyclopean_Perception.html?id=K_NFQgAACAAJ](https://books.google.com/books/about/Foundations_of_Cyclopean_Perception.html?id=K_NFQgAACAAJ).
* [78] Tal Golan et al. "Controversial stimuli: Pitting neural networks against each other as models of human cognition". In: _Proceedings of the National Academy of Sciences_ 117.47 (2020), pp. 29330-29337. doi: 10.1073/pnas.1912334117.
* [79] Michael Beyeler et al. "3D visual response properties of MSTd emerge from an efficient, sparse population code". In: _Journal of Neuroscience_ 36.32 (2016), pp. 8399-8415. doi: 10.1523/JNEUROSCI.0396-16.2016.
* [80] James J Gibson. "The perception of the visual world". In: (1950). url: [https://psycnet.apa.org/record/1951-04286-000](https://psycnet.apa.org/record/1951-04286-000).
* [81] William H Warren Jr and Daniel J Hannon. "Direction of self-motion is perceived from optical flow". In: _Nature_ 336.6195 (1988), pp. 162-163. doi: 10.1038/336162A0.
* [82] J. Inigo Thomas et al. _Spherical retinal flow for a fixating observer_. Tech. rep. 1994. url: [https://repository.upenn.edu/entities/publication/f9b44866-54cd-483d-8a17-a51fb732958a](https://repository.upenn.edu/entities/publication/f9b44866-54cd-483d-8a17-a51fb732958a).
* [83] Jonathan Samir Matthis et al. "Retinal optic flow during natural locomotion". In: _PLOS Computational Biology_ 18.2 (2022), e1009575. doi: 10.1371/journal.pcbi.1009575.
* [84] Eddy Ilg et al. "Flownet 2.0: Evolution of optical flow estimation with deep networks". In: _Proceedings of the IEEE conference on computer vision and pattern recognition_. 2017, pp. 2462-2470. doi: 10.1109/CVPR.2017.179.
* [85] Irina Higgins et al. "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=Sy2fzUggl](https://openreview.net/forum?id=Sy2fzUggl).
* [86] Christopher P Burgess et al. "Understanding disentangling in \(\beta\)-VAE". In: (2018). arXiv: 1804.03599 [stat.ML].
* [87] Nikolaus Kriegeskorte and Jorn Diedrichsen. "Peeling the onion of brain representations". In: _Annual Review of Neuroscience_ 42 (2019), pp. 407-432. doi: 10.1146/annurev-neuro-080317-061906.
* [88] Simon K Rushton and Paul A Warren. "Moving observers, relative retinal motion and the detection of object movement". In: _Current Biology_ 15.14 (2005), R542-R543. doi: 10.1016/j.cub.2005.07.020.
* [89] Paul A Warren and Simon K Rushton. "Optic flow processing for the assessment of object movement during ego movement". In: _Current Biology_ 19.18 (2009), pp. 1555-1560. doi: 10.1016/j.cub.2009.07.057.
* [90] Paul A Warren and Simon K Rushton. "Perception of object trajectory: Parsing retinal motion into self and object movement components". In: _Journal of Vision_ 7.11 (2007), pp. 2-2. doi: 10.1167/7.11.2.
* [91] Nicole E Peltier et al. "Optic flow parsing in the macaque monkey". In: _Journal of Vision_ 20.10 (2020), pp. 8-8. doi: 10.1167/jov.20.10.8.
* [92] James C. R. Whittington et al. "Disentanglement with Biological Constraints: A Theory of Functional Cell Types". In: _The Eleventh International Conference on Learning Representations_. 2023. url: [https://openreview.net/forum?id=9Z_Gfh2nGH](https://openreview.net/forum?id=9Z_Gfh2nGH).
* [93] Sebastien Lachapelle et al. "Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning". In: _Proceedings of the 40th International Conference on Machine Learning_. Ed. by Andreas Krause et al. Vol. 202. Proceedings of Machine Learning Research. PMLR, July 2023, pp. 18171-18206. url: [https://proceedings.mlr.press/v202/lachapelle23a.html](https://proceedings.mlr.press/v202/lachapelle23a.html).
* [94] Abhishek Kumar et al. "Variational Inference of Disentangled Latent Concepts from Unlabeled Observations". In: _International Conference on Learning Representations_. 2018. url: [https://openreview.net/forum?id=H1kG7GZAW](https://openreview.net/forum?id=H1kG7GZAW).
* [95] Hyunjik Kim and Andriy Mnih. "Disentangling by factorising". In: _International Conference on Machine Learning_. PMLR. 2018, pp. 2649-2658. url: [http://proceedings.mlr.press/v80/kim18b.html](http://proceedings.mlr.press/v80/kim18b.html).
* [96] Ricky T. Q. Chen et al. "Isolating Sources of Disentanglement in Variational Autoencoders". In: _Advances in Neural Information Processing Systems_. Ed. by S. Bengio et al. Vol. 31. Curran Associates, Inc., 2018. url: [https://papers.nips.cc/paper_files/paper/2018/hash/1ee3dfcd8a0645a25a35977997223d22-Abstract.html](https://papers.nips.cc/paper_files/paper/2018/hash/1ee3dfcd8a0645a25a35977997223d22-Abstract.html).
* [97] Michal Rolinek et al. "Variational autoencoders pursue PCA directions (by accident)". In: _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2019, pp. 12406-12415. url: [https://openaccess.thecvf.com/content_CVPR_2019/html/Rolinek_Variational_Autoencoders_Pursue_PCA_Directions_by_Accident_CVPR_2019_paper.html](https://openaccess.thecvf.com/content_CVPR_2019/html/Rolinek_Variational_Autoencoders_Pursue_PCA_Directions_by_Accident_CVPR_2019_paper.html).
* [98] Sjoerd van Steenkiste et al. "Are Disentangled Representations Helpful for Abstract Visual Reasoning?" In: _Advances in Neural Information Processing Systems_. Vol. 32. Curran Associates, Inc., 2019. url: [https://papers.nips.cc/paper_files/paper/2019/hash/bc3c4a6331a8a9950945a1aa8c95ab8a-Abstract.html](https://papers.nips.cc/paper_files/paper/2019/hash/bc3c4a6331a8a9950945a1aa8c95ab8a-Abstract.html).
* [99] Andrea Dittadi et al. "On the Transfer of Disentangled Representations in Realistic Settings". In: _International Conference on Learning Representations_. 2021. url: [https://openreview.net/forum?id=8VXvj1QNR11](https://openreview.net/forum?id=8VXvj1QNR11).
* [100] W Jeffrey Johnston and Stefano Fusi. "Abstract representations emerge naturally in neural networks trained to perform multiple tasks". In: _Nature Communications_ 14.1 (2023), p. 1040. doi: 10.1038/s41467-023-36583-0.
* [101] Marc-Andre Carbonneau et al. "Measuring disentanglement: A review of metrics". In: _IEEE Transactions on Neural Networks and Learning Systems_ (2022). doi: 10.1109/TNNLS.2022.3218982.
* [102] Cian Eastwood et al. "DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability". In: _The Eleventh International Conference on Learning Representations_. 2023. url: [https://openreview.net/forum?id=462z-glgSht](https://openreview.net/forum?id=462z-glgSht).
* [103] Yuwei Cui et al. "Diverse suppressive influences in area MT and selectivity to complex motion features". In: _Journal of Neuroscience_ 33.42 (2013), pp. 16715-16728. doi: 10.1523/JNEUROSCI.0203-13.2013.
* [104] Yuwei Cui et al. "Inferring cortical variability from local field potentials". In: _Journal of Neuroscience_ 36.14 (2016), pp. 4121-4135. doi: 10.1523/JNEUROSCI.2502-15.2016.
* [105] Maneesh Sahani and Jennifer Linden. "How Linear are Auditory Cortical Responses?" In: _Advances in Neural Information Processing Systems_. Vol. 15. MIT Press, 2002. URL: https: // /papers.nips. cc / paper _files / paper / 2002 / hash / Tb4773c039d539af17c883eb9283dd14-Abstract.html.
* [106] Patrick J Mineault et al. "Hierarchical processing of complex motion along the primate dorsal visual pathway". In: _Proceedings of the National Academy of Sciences_ 109.16 (2012), E972-E980. doi: 10.1073/pnas.1115685109.
* [107] Eero P. Simoncelli and Bruno A. Olshausen. "Natural image statistics and neural representation." In: _Annual Review of Neuroscience_ 24 (2001), pp. 1193-216. doi: 10.1146/annurev.neuro.24.1.1193.
* [108] Aaron van den Oord et al. "Representation Learning with Contrastive Predictive Coding". In: (2019). arXiv: 1807.03748 [cs.LG].
* [109] Karl S Muller et al. "Retinal motion statistics during natural locomotion". In: _eLife_ 12 (2023), e82410. doi: 10.7554/eLife.82410.
* [110] Benedict Wild and Stefan Treue. "Primate extrastriate cortical area MST: a gateway between sensation and cognition". In: _Journal of Neurophysiology_ 125.5 (2021), pp. 1851-1882. doi: 10.1152/jn.00384.2020.
* [111] Benedict Wild et al. "Electrophysiological dataset from macaque visual cortical area MST in response to a novel motion stimulus". In: _Scientific Data_ 9 (2022). doi: 10.1038/s41597-022-01239-z.
* [112] Alex H Williams et al. "Generalized Shape Metrics on Neural Representations". In: _Advances in Neural Information Processing Systems_. Vol. 34. Curran Associates, Inc., 2021. url: https: //papers.nips. cc/paper_files/paper/2021/hash/252a3dbaeb32e7690242ad3b556e626b-Abstract.html.
* [113] Lyndon Duong et al. "Representational Dissimilarity Metric Spaces for Stochastic Neural Networks". In: _The Eleventh International Conference on Learning Representations_. 2023. url: [https://openreview.net/forum?id=xjb563TH-GH](https://openreview.net/forum?id=xjb563TH-GH).
* [114] Max Klabunde et al. "Similarity of Neural Network Models: A Survey of Functional and Representational Measures". In: (2023). arXiv: 2305.06329 [cs.LG].
* [115] Abdulkadir Canatar et al. _A Spectral Theory of Neural Prediction and Alignment_. 2023. arXiv: 2309.12821 [q-bio.NC].
* [116] Adam Paszke et al. "PyTorch: An Imperative Style, High-Performance Deep Learning Library". In: _Advances in Neural Information Processing Systems_. Vol. 32. Curran Associates, Inc., 2019. url: [https://papers.nips.cc/paper_files/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html](https://papers.nips.cc/paper_files/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html).
* [117] Charles R. Harris et al. "Array programming with NumPy". In: _Nature_ 585.7825 (Sept. 2020), pp. 357-362. doi: 10.1038/s41586-020-2649-2.
* [118] Pauli Virtanen et al. "Scipy 1.0: Fundamental Algorithms for Scientific Computing in Python". In: _Nature Methods_ 17 (2020), pp. 261-272. doi: 10.1038/s41592-019-0686-2.
* [119] Fabian Pedregosa et al. "Scikit-learn: Machine learning in Python". In: _the Journal of Machine Learning Research_ 12 (2011), pp. 2825-2830. doi: 10.5555/1953048.2078195.
* [120] The pandas development team. _pandas-dev/pandas: Pandas_. Version latest. Feb. 2020. doi: 10.5281/zenodo.3509134.
* [121] John D Hunter. "Matplotlib: A 2D graphics environment". In: _Computing in Science & Engineering_ 9.03 (2007), pp. 90-95. doi: 10.1109/MCSE.2007.55.
* [122] Michael L Waskom. "Seaborn: statistical data visualization". In: _Journal of Open Source Software_ 6.60 (2021), p. 3021. doi: 10.21105/joss.03021.
* [123] Edward H Adelson and James R Bergen. "Spatiotemporal energy models for the perception of motion". In: _Josa a_ 2.2 (1985), pp. 284-299. doi: 10.1364/JOSAA.2.000284.
* [124] Shinji Nishimoto and Jack L Gallant. "A three-dimensional spatiotemporal receptive field model explains responses of area MT neurons to naturalistic movies". In: _Journal of Neuroscience_ 31.41 (2011), pp. 14551-14564. doi: 10.1523/JNEUROSCI.6801-10.2011.
* [125] Yena Han et al. "System identification of neural systems: If we got it right, would we know?" In: _International Conference on Machine Learning_. PMLR. 2023, pp. 12430-12444. url: [https://proceedings.mlr.press/v202/han23d.html](https://proceedings.mlr.press/v202/han23d.html).
* [126] Jie Hu et al. "Squeeze-and-excitation networks". In: _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 2018, pp. 7132-7141. doi: 10.1109/CVPR.2018.00745.
* [127] Sergey Ioffe and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift". In: _International Conference on Machine Learning_. pmlr. 2015, pp. 448-456. url: [https://proceedings.mlr.press/v37/ioffe15.html](https://proceedings.mlr.press/v37/ioffe15.html).
* [128] Tim Salimans and Durk P Kingma. "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks". In: _Advances in Neural Information Processing Systems_. Ed. by D. Lee et al. Vol. 29. Curran Associates, Inc., 2016. url: [https://papers.nips.cc/paper_files/paper/2016/hash/ed265bc903a5a097f61d3ec064d96d2e-Abstract.html](https://papers.nips.cc/paper_files/paper/2016/hash/ed265bc903a5a097f61d3ec064d96d2e-Abstract.html).
* [129] Prajit Ramachandran et al. "Searching for Activation Functions". In: _International Conference on Learning Representations_. 2018. url: [https://openreview.net/forum?id=SkBYTY2RZ](https://openreview.net/forum?id=SkBYTY2RZ).
* [130] Stefan Elfwing et al. "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning". In: _Neural Networks_ 107 (2018), pp. 3-11. doi: 10.1016/j.neunet.2017.12.012.
* [131] Yuichi Yoshida and Takeru Miyato. "Spectral Norm Regularization for Improving the Generalizability of Deep Learning". In: (2017). arXiv: 1705.10941 [stat.ML].
* [132] Samuel R. Bowman et al. "Generating Sentences from a Continuous Space". In: _Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning_. Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 10-21. doi: 10.18653/v1/K16-1002.
* [133] Hao Fu et al. "Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing". In: _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Minneapolis, Minnesota: Association for Computational Linguistics, June 2019, pp. 240-250. doi: 10.18653/v1/N19-1021.
* [134] Arash Vahdat et al. "DVAE++: Discrete Variational Autoencoders with Overlapping Transformations". In: _Proceedings of the 35th International Conference on Machine Learning_. Ed. by Jennifer Dy and Andreas Krause. Vol. 80. Proceedings of Machine Learning Research. PMLR, July 2018, pp. 5035-5044. url: [https://proceedings.mlr.press/v80/vahdat18a.html](https://proceedings.mlr.press/v80/vahdat18a.html).
* [135] Xi Chen et al. "Variational Lossy Autoencoder". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=BysvGP5ee](https://openreview.net/forum?id=BysvGP5ee).
* [136] Diederik P Kingma and Jimmy Ba. "Adam: A method for stochastic optimization". In: (2014). arXiv: 1412.6980 [cs.LG].
* [137] Ilya Loshchilov and Frank Hutter. "SGDR: Stochastic Gradient Descent with Warm Restarts". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=Sku89Sccxx](https://openreview.net/forum?id=Sku89Sccxx).
* [138] Yoav Benjamini and Yosef Hochberg. "Controlling the false discovery rate: a practical and powerful approach to multiple testing". In: _Journal of the Royal Statistical Society: series B (Methodological)_ 57.1 (1995), pp. 289-300. doi: 10.1111/J.2517-6161.1995.TB02031.X.
* [139] Jacob Cohen. _Statistical power analysis for the behavioral sciences_. Academic press, 1988. doi: 10.2307/2529115.
* [140] Gregory C. DeAngelis and Dora E. Angelaki. "Visual-Vestibular Integration for Self-Motion Perception". In: _The Neural Bases of Multisensory Processes_ (2012), pp. 629-644. doi: 10.1201/9781439812174.
* [141] Eduard Von Holst. "Relations between the central nervous system and the peripheral organs". In: _British Journal of Animal Behaviour_ (1954). doi: 10.1016/S0950-5601(54)80044-X.
* [142] Paul R MacNeilage et al. "Vestibular facilitation of optic flow parsing". In: _PLoS One 7.7_ (2012), e40264. doi: 10.1371/journal.pone.0040264.
* [143] Kathleen E Cullen and Omid A Zobeiri. "Proprioception and the predictive sensing of active self-motion". In: _Current Opinion in Physiology_ 20 (2021), pp. 29-38. doi: 10.1016/j.cophys.2020.12.001.
* [144] Constance S Royden and Ellen C Hildreth. "Human heading judgments in the presence of moving objects". In: _Perception & Psychophysics_ 58 (1996), pp. 836-856. doi: 10.3758/BF03205487.
* [145] William H Warren Jr and Jeffrey A Saunders. "Perceiving heading in the presence of moving objects". In: _Perception 24.3_ (1995), pp. 315-331. doi: 10.1068/p240315.
* [146] Edward AB Horrocks et al. "Walking humans and running mice: perception and neural encoding of optic flow during self-motion". In: _Philosophical Transactions of the Royal Society B_ 378.1869 (2023), p. 20210450. doi: 10.1098/rstb.2021.0450.
* [147] Jean-Paul Noel et al. "Causal inference during closed-loop navigation: parsing of self-and object-motion". In: _Philosophical Transactions of the Royal Society B_ 378.1886 (2023), p. 20220344. doi: 10.1098/rstb.2022.0344.
* [148] Denis N Lee. "The optic flow field: The foundation of vision". In: _Philosophical Transactions of the Royal Society of London. B, Biological Sciences_ 290.1038 (1980), pp. 169-179. doi: 10.1098/rstb.1980.0089.
* [149] Markus Lappe et al. "Perception of self-motion from visual flow". In: _Trends in Cognitive Sciences_ 3.9 (1999), pp. 329-336. doi: 10.1016/S1364-6613(99)01364-9.
* [150] Irina Higgins et al. "Symmetry-based representations for artificial and biological general intelligence". In: _Frontiers in Computational Neuroscience_ 16 (2022), p. 836498. doi: 10.3389/fncom.2022.836498.
* [151] Fabio Anselmi et al. "On invariance and selectivity in representation learning". In: _Information and Inference: A Journal of the IMA_ 5.2 (2016), pp. 134-158. doi: 10.1093/imaiai/iaw009.
* [152] Michael M Bronstein et al. "Geometric deep learning: Grids, groups, graphs, geodesics, and gauges". In: (2021). arXiv: 2104.13478 [cs.LG].
* [153] Ishaan Gulrajani et al. "PixelVAE: A Latent Variable Model for Natural Images". In: _International Conference on Learning Representations_. 2017. url: [https://openreview.net/forum?id=BJKYvt5lg](https://openreview.net/forum?id=BJKYvt5lg).
Supplementary material for:
Hierarchical VAEs provide a normative account of motion processing in the primate brain
## 8 Study Limitations & Considerations
We established a synthetic data framework that allows hypothesis generation and testing for neural processing of motion. While our paper opens up many interesting venues for future exploration, it is necessarily simplified, as it focuses on establishing our framework and demonstrating its potential. As such, it currently has several limitations:
First, our simulation generates velocity fields, rather than full spatiotemporal movies, which allows our model to avoid mimicking the complexities of motion extracted by the early visual pathway [123, 124]. This strategy also allows for direct comparison with recorded neural data in MT and MST using random dot kinematograms [103, 104, 106]. However, a more complex model would be necessary to explain responses of neurons earlier in the visual pathway, such as V1, which would require a pixel-computable model as in previous work (e.g., DorsalNet [34]). Likewise, our \(>2\times\) improvement in performance in explaining MT neural data over DorsalNet is likely due in part to their network trained to perform this extraction from spatiotemporal movies, which was not necessarily equivalent to the random dot velocity fields used in the experiments. Thus, it is an open question whether a hierarchical VAE trained and tested on video stimuli would better align to neural data than the model from Mineault et al. [34]. Future work in this space will involve rendering images in simulations and using image-computable models for a fair comparison.
Second, we chose relatively simple environments and simple fixation rules to generate our optic flow fields and avoided the true complexity of 3-D natural environments and their interaction with eye movements and self-motion as has recently been measured [83, 109]. The simplification of our simulation still demonstrates the importance of including such elements in understanding neural representations and provides a framework for incorporating real eye-tracking and scene data [109, 83] into future work with ROFL.
Finally, we only tested neural alignment on one experimental paradigm using neurons in area MT, which leaves the question of whether this is a general principle of brain computation. Addressing this requires testing our approach on more data from other brain areas, such as MST. Based on previous work [106, 110], we expect that hierarchical computation is even more necessary for MST, which is an open question to address in future work.
Interpreting brain-alignment.We measured the alignment between ANN models and MT neurons using both linear predictive power (Fig. 7) and an alternative measure of alignment that is sensitive to the sparsity of latent-to-neuron relationships ("alignment-score", Fig. 8a). Linear regression has been most commonly used to measure similarity, or alignment, between pairs of representations [30, 31, 34, 36], but often results in degenerate [30, 36, 38] and unreliable [125] measures of representational alignment. Consistent with this, our application of linear regression found that it was not effective in differentiating between models: although the cNVAE produced the single best model in terms of neuronal prediction, we found that both hierarchical and non-hierarchical VAEs performed similarly in predicting MT neuron responses (Fig. 7).
In contrast, the alignment score (Fig. 8a) was much more consistent in distinguishing between models (see Fig. 16), and revealed that hierarchical models (both cNVAE and cNAE) had significantly sparser latent-to-neuron relationships. The alignment score measures whether a model has learned a similar representational "form" to the brain, which would enable the sparsity of latent-to-neuron relationships. This concept is closely related to the "_completeness_" score from the DCI framework [22]. The alignment score shown in Fig. 8a was also used in Higgins et al. [16], although they used the magnitude of nonzero coefficients as their feature importance (under lasso regression). However, we note that this alignment score also has limitations, and for example, it is not a proper metric in the mathematical sense [112]. Future work will consider more sophisticated metrics for brain alignment [112, 113, 114, 115].
Additional Methods
### VAE loss derivation
Suppose some observed data \(\mathbf{x}\) are sampled from a generative process as follows:
\[p(\mathbf{x})=\int p(\mathbf{x},\mathbf{z})\,d\mathbf{z}=\int p(\mathbf{x}|\mathbf{z})p(\mathbf{z})\,d\mathbf{z}, \tag{1}\]
where \(\mathbf{z}\) are latent (or unobserved) variables. In this setting, it is interesting to ask which set of latents \(\mathbf{z}\) are likely, given an observation \(\mathbf{x}\). In other words, we are interested in posterior inference
\[p(\mathbf{z}|\mathbf{x})\propto p(\mathbf{x}|\mathbf{z})p(\mathbf{z}). \tag{2}\]
The goal of VAEs is to approximate the (unknown) true posterior \(p(\mathbf{z}|\mathbf{x})\) with a distribution \(q(\mathbf{z}|\mathbf{x};\theta)\), where \(\theta\) are some free parameters to be learned. This goal is achieved by minimizing the Kullback-Leibler divergence between the approximate posterior \(q\) and the true posterior \(p\):
\[\text{Goal:}\quad\text{minimize}\quad\mathcal{D}_{\text{KL}}\Big{[}q(\mathbf{z}| \mathbf{x};\theta)\,\big{\|}\,p(\mathbf{z}|\mathbf{x})\Big{]}. \tag{3}\]
This objective is intractable, but rearranging the terms leads to the following loss that is also the (negative) variational lower bound on \(\log p(\mathbf{x})\):
\[\mathcal{L}_{\text{VAE}}=-\mathbb{E}_{q}\Big{[}\log p(\mathbf{x}|\mathbf{z};\theta_{ dec})\Big{]}+\mathcal{D}_{\text{KL}}\Big{[}q(\mathbf{z}|\mathbf{x};\mathbf{\theta}_{enc}) \,\big{\|}\,p(\mathbf{z}|\theta_{dec})\Big{]}, \tag{4}\]
where \(\mathbf{\theta}_{enc}\) and \(\theta_{dec}\) are the parameters for the encoder and decoder neural networks (see Fig. 2). The first term in Equation 4 is the reconstruction loss, which we chose to be the Euclidean norm of the differences between predicted and reconstructed velocity vectors (i.e., the endpoint error [84]). We will now focus on the second term in Equation 4, the KL term.
#### 9.1.1 Prior, approximate posterior, and the KL term in vanilla VAE
In standard non-hierarchical VAEs, the prior is not parameterized. Instead, it is chosen to be a simple distribution such as a Gaussian with zero mean and unit covariance:
\[p(\mathbf{z}|\theta_{dec})\,\rightarrow\,p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{5}\]
The approximate posterior is also a Gaussian with mean \(\mathbf{\mu}(\mathbf{x};\mathbf{\theta}_{enc})\) and variance \(\mathbf{\sigma}^{2}(\mathbf{x};\mathbf{\theta}_{enc})\):
\[q(\mathbf{z}|\mathbf{x};\mathbf{\theta}_{enc})=\mathcal{N}(\mathbf{\mu}(\mathbf{x};\mathbf{\theta}_{ enc}),\mathbf{\sigma}^{2}(\mathbf{x};\mathbf{\theta}_{enc})) \tag{6}\]
As a result, we see that the KL term for a vanilla VAE only depends on encoder parameters \(\mathbf{\theta}_{enc}\).
#### 9.1.2 Prior, approximate posterior, and the KL term in the cNVAE
Similar to the NVAE [52], the cNVAE latent space is organized hierarchically such that latent groups are sampled sequentially, starting from "top" latents all the way down to the "bottom" ones (i.e., \(\mathbf{z}_{1}\) to \(\mathbf{z}_{3}\) in Fig. 2). In addition to its hierarchical structure, another important difference between cNVAE and vanilla VAE is that the cNVAE prior is learned from data. Note the "mid" and "bottom" latent groups in Fig. 2, indicated as \(\mathbf{z}_{2}\) and \(\mathbf{z}_{3}\) respectively: the cNVAE is designed in a way that changing the parameters along the decoder pathway will impact the prior distributions on \(\mathbf{z}_{2}\) and \(\mathbf{z}_{3}\) (but not \(\mathbf{z}_{1}\)). Note the "\(h\)" in Fig. 2, which is also a learnable parameter. In summary, the "top" cNVAE latents (e.g., \(\mathbf{z}_{1}\) in Fig. 2) have a fixed prior distribution similar to vanilla VAEs; whereas, the prior distribution for every other cNVAE latent group is parametrized and learned from data.
More formally, the cNVAE latents are partitioned into disjoint groups, \(\mathbf{z}=\{\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{L}\}\), where \(L\) is the number of groups. Then, the prior is represented by:\[p(\mathbf{z}|\theta_{dec})=p(\mathbf{z}_{1})\cdot\prod_{\ell=2}^{L}p(\mathbf{z}_{\ell}|\mathbf{z}_{ <\ell};\theta_{dec}), \tag{7}\]
where the conditionals are represented by factorial Normal distributions. For the first latent group we have \(p(\mathbf{z}_{1})=\mathcal{N}(\mathbf{0},\mathbf{I})\), similar to vanilla VAEs. For every other latent group, we have:
\[p(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell};\theta_{dec})=\mathcal{N}(\mathbf{\mu}(\mathbf{z}_{< \ell};\theta_{dec}),\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell};\theta_{dec})), \tag{8}\]
where \(\mathbf{\mu}(\mathbf{z}_{<\ell};\theta_{dec})\) and \(\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell};\theta_{dec})\) are outputted from the decoder _sampler_ layers. Similarly, the approximate posterior in the cNVAE is represented by:
\[q(\mathbf{z}|\mathbf{x};\mathbf{\theta}_{enc})=\prod_{\ell=1}^{L}q(\mathbf{z}_{\ell}|\mathbf{z}_{ <\ell},\mathbf{x};\theta_{enc}). \tag{9}\]
We adopt a Gaussian parameterization for each conditional in the approximate posterior:
\[q(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc})=\mathcal{N}(\mathbf{\mu} (\mathbf{z}_{<\ell},x;\mathbf{\theta}_{enc}),\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell},\mathbf{x}; \mathbf{\theta}_{enc})), \tag{10}\]
where \(\mathbf{\mu}(\mathbf{z}_{<\ell},x;\mathbf{\theta}_{enc})\) and \(\mathbf{\sigma}^{2}(\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc})\) are outputted from the encoder _sampler_ layers (Fig. 2; grey trapezoids). We are now in a position to explicitly write down the KL term from Equation 4 for the cNVAE:
\[\text{KL term }=\mathcal{D}_{\text{KL}}\Big{[}q(\mathbf{z}_{1}|\mathbf{x},\mathbf{ \theta}_{enc})\,\big{\|}\,p(\mathbf{z}_{1})\Big{]}+\sum_{\ell=2}^{L}\mathbb{E}_{q (\mathbf{z}_{<\ell}|\mathbf{x},\mathbf{\theta}_{enc})}\Big{[}\text{KL}_{\ell}(\theta_{enc},\theta_{dec})\Big{]}, \tag{11}\]
where \(\text{KL}_{\ell}\) refers to the local KL term for group \(\ell\) and is given by:
\[\text{KL}_{\ell}(\mathbf{\theta}_{enc},\theta_{dec})\coloneqq\mathcal{D}_{\text{ KL}}\Big{[}q\left(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\mathbf{x};\mathbf{\theta}_{enc}\right) \,\big{\|}\,p\left(\mathbf{z}_{\ell}|\mathbf{z}_{<\ell},\theta_{dec}\right)\Big{]}, \tag{12}\]
and the approximate posterior up to the \((\ell-1)^{th}\) group is defined as:
\[q(\mathbf{z}_{<\ell}|\mathbf{x};\mathbf{\theta}_{enc})\coloneqq\prod_{i=1}^{\ell-1}q(\mathbf{ z}_{i}|\mathbf{z}_{<i},\mathbf{x};\mathbf{\theta}_{enc}). \tag{13}\]
This concludes our derivation of the KL terms shown in Table 2.
### Key architectural differences between NVAE and cNVAE
Our main contribution to architecture design lies in the modification of the NVAE "_sampler_" layer and the introduction of the "_expand_" layer, which are simple deconvolution (Fig. 2). In the NVAE, the purpose of a sampler layer is to map encoder features to mean and variance vectors \(\mathbf{\mu}(\mathbf{x})\) and \(\mathbf{\sigma}^{2}(\mathbf{x})\), which are then used to construct approximate posterior distributions \(q\left(\mathbf{z}|\mathbf{x}\right)=\mathcal{N}(\mathbf{\mu}(\mathbf{x}),\mathbf{\sigma}^{2}(\mathbf{ x}))\). During the inference forward pass, downsampling operations will progressively reduce the spatial scale of representations along the encoder pathway, such that processed stimuli will have different spatial dimensions at different stages of inference. In the NVAE, all sampler layers are convolutional with a fixed kernel size of \(3\times 3\) and padding of 1. Thus, the application of a sampler layer does not alter the spatial dimension of hidden encoder features, resulting in a convolutional latent space.
For example, at level \(\ell\) of the hierarchy, \(\mathbf{\mu}_{\ell}\left(\mathbf{x}\right)\) and \(\mathbf{\sigma}^{2}_{\ell}\left(\mathbf{x}\right)\) will have shapes like \(d\times s_{\ell}\times s_{\ell}\), where \(d\) is number of latent variables per latent group, and \(s_{\ell}\) is the spatial scale of the processed encoder features at level \(\ell\). As a result, NVAE latents themselves end up being convolutional in nature, with shapes like \(d\times s_{\ell}\times s_{\ell}\). This design choice leads to the massive expansion of NVAE latent dimensionality, which is often orders of magnitude larger than the input dimensionality.
To address this, we modified NVAE sampler layers in a manner that would integrate over spatial information before the sampling step, achieving a compressed, non-convolutional, and more abstract latent space. Specifically, we designed the cNVAE sampler layers such that their kernel sizes are selected adaptively to match the spatial scale of their input feature. For example, if latent group \(\ell\) operates at the scale of \(s_{\ell}\), then the sampler layer will have a kernel size of \(s_{\ell}\times s_{\ell}\), effectively integrating over space before the sampling step. This results in latents with shape \(d\times 1\times 1\).
Finally, combining the latents with decoder features requires projecting them back into convolutional space. We achieve this by introducing "_expand_" modules (Fig. 2; yellow trapezoids) which are simple deconvolution layers with a kernel size of \(s_{\ell}\times s_{\ell}\). Thus, processing a \(d\times 1\times 1\) latent by an expand layer results in a \(d\times s_{\ell}\times s_{\ell}\) output, allowing them to be combined with the decoder features (concatenation or addition; blue "+" in Fig. 2).
cNVAE pros and cons.Our approach resulted in a much smaller latent space compared to NVAE, which allowed us to examine the function of individual latents, decoupled from their spatial influence. However, one drawback of our approach is that it makes the architecture input-size-dependent. Scaling up the input increases the parameter count of cNVAE. This did not pose serious problems in the present work, as we generated ROFL datasets in a relatively small spatial scale of \(17\times 17\). However, this dependence on input dimensionality could cause problems when larger datasets are considered. To mitigate this, one possibility is to use depthwise separable convolutions for sampler and expand layers.
Other details.We used the same regular residual cells for both the encoder and the decoder ("\(r\)" in Fig. 2). We did not experience memory issues because of the relatively small scale of our experiments. However, rendering the input data at a larger spatial dimension would lead to larger memory usage, which might require using depthwise separable convolutions as in Figure 2(b) in Vahdat and Kautz [52]. Similar to NVAE, we found that the inclusion of squeeze & excitation [126] helped. Contrary to the suggestion of NVAE [52] and others [72], batch norm [127] destabilized training in our case. Instead, we found that weight normalization [128] and swish activation [129, 130] were both instrumental in training. We used weight normalization without data-dependent initialization. Our observations also deviate from those of Vahdat and Kautz [52] with regard to spectral regularization [131]. We achieved stable training and good results without the need for spectral regularization. This is perhaps because our dataset is synthetic and has a much smaller dimensionality compared to real-world datasets such as faces considered in the NVAE. Finally, we also found that residual Normal parameterization of the approximate posterior was helpful.
Model hyperparameters.The latent space of cNVAE contains a hierarchy of latent groups that are sampled sequentially: "top" latents are sampled first, all the way down to "bottom" latents that are closest to the stimulus space (Fig. 2). Child [71] demonstrated that model depth, in this statistical sense, was responsible for the success of hierarchical VAEs. Here, we increased the depth of cNVAE architecture while ensuring that training remained stable. Our final model architecture had a total of 21 hierarchical latent groups: 3 groups operating at the spatial scale of \(2\times 2\), 6 groups at the scale of \(4\times 4\), and 12 groups at the scale of \(8\times 8\) (see Figs. 2 and 3). Each latent group has 20 latents, which yields a \(21\times 20=420\) dimensional latent space. See below for more details. We observed that various aspects of model performance such as reconstruction loss and DCI scores increased with depth, corroborating previous work [71]. We set the initial number of channels to \(32\) in the first layer of the encoder and doubled the channel size every time we downsample the features spatially. This resulted in a width of \(256\) in the final layer. Please refer to our code for more information.
### cNVAE vs. NVAE latent dimensionality
As an illustrative example, here we count the number of latent variables in cNVAE and compare them to an otherwise identical but non-compressed NVAE. The total number of latent variables for a hierarchical VAE is given by:
\[D=\sum_{\ell=1}^{L}d_{\ell}, \tag{14}\]where \(d_{\ell}\) is the number of latents in level \(\ell\), and \(L\) is the total number of hierarchical levels. As described above, \(d_{\ell}=d\cdot s_{\ell}^{2}\) for NVAE, where \(d\) is the number of latents per level and \(s_{\ell}\) is the spatial scale of level \(\ell\). In contrast, this number is a constant for the cNVAE: \(d_{\ell}=d\). In this paper, we set \(d=20\) and had the following configuration for the cNVAE:
* "top" latents: 3 groups operating at the spatial scale of \(2\times 2\)
* "mid" latents: 6 groups operating at the spatial scale of \(4\times 4\)
* "bottom" latents: 12 groups operating at the spatial scale of \(8\times 8\)
This resulted in an overall latent dimensionality of \(D=(3+6+12)\times d=21\times 20=420\) for the cNVAE. In contrast, an NVAE with a similar number of hierarchical levels would have a dimensionality of \(D=(3\cdot 2^{2}+6\cdot 4^{2}+12\cdot 8^{2})\times d=876\times 20=17,520\).
Note that in Fig. 4, we necessarily had to reduce the NVAE depth to roughly match its latent dimensionality to other models. Specifically, the NVAE in Fig. 4 had \(2/3/6\) latent groups at top/mid/bottom with \(d=1\). This configuration resulted in latent dimensionality of \(D=(2\cdot 2^{2}+3\cdot 4^{2}+6\cdot 8^{2})\times 1=440\), which is roughly comparable to other models at \(D=420\). See Table 4 for a comparison between cNVAE and the alternative models used in this study.
### Model training details
We generated \(750,000\) samples for each ROFL category, with a \(600,000/75,000/75,000\) split for train/validation/test datasets. Model performance (reconstruction loss, NELBO) was evaluated using the validation set (Fig. 14 and 15). Similarly, we used the validation set to estimate mutual information between latents, \(\mathbf{z}\), and ground truth factors, \(\mathbf{g}\), as shown in Fig. 3. To evaluate the latent codes, we used the validation set to train linear regressors to predict \(\mathbf{g}\) from \(\mathbf{z}\). We then test the performance using the test set. This includes results presented in Fig. 4 and Fig. 5.
KL annealing and balancing.We annealed the KL term during the first half of the training, which is known to be an effective trick in training VAEs [52, 72, 132, 133]. Additionally, during the annealing period, we employed a KL balancing mechanism which ensures an equal amount of information is encoded in each hierarchical latent group [52, 134, 135].
Gradient norm clipping.During training, we occasionally encountered updates with large gradient norms. Similar to vdvae [71], we used gradient clipping, with an empirically determined clip value of \(250\). However, we did not skip the updates with large gradients, as we found that the model usually gets stuck and does not recover after skipping an update.
Training hyperparameters.We used batch size of \(600\), learning rate of \(0.002\), and trained models for \(160\) epochs (equivalent to \(160k\) steps). Each training session took roughly 24 hours to conclude on a single Quadro RTX 5000 GPU. Similar to NVAE, we used the AdaMax optimizer [136] with a cosine learning rate schedule [137] but without warm restarts.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Number of latent groups} & \multicolumn{2}{c}{Number of latent} & Latent space \\ \cline{2-7} & \(2\times 2\) & \(4\times 4\) & \(8\times 8\) & total & variables per group & dimensionality \\ \hline cNVAE & \(3\) & \(6\) & \(12\) & \(21\) & \(d=20\) & \(D=21\times 20=420\) \\ VAE & \(1\) & \(-\) & \(-\) & \(1\) & \(d=420\) & \(D=1\times 420=420\) \\ cNAE & \(3\) & \(6\) & \(12\) & \(21\) & \(d=20\) & \(D=21\times 20=420\) \\ AE & \(1\) & \(-\) & \(-\) & \(1\) & \(d=420\) & \(D=1\times 420=420\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The hierarchical models (cNVAE, cNAE) process information at various spatial scales. These models have a total of \(21\) latent groups, each containing \(d=20\) latent variables. For hierarchical models, we concatenate the latent representations across all levels of the hierarchy, resulting in a single latent vector with dimensionality \(D=420\). In contrast, non-hierarchical models (VAE, AE) do not possess such a multi-scale organization. Instead, their non-hierarchical latent spaces contain a single latent group, operating at a single spatial scale. This single latent group contains the entire set of \(d=D=420\) latent variables. See Table 2 and Fig. 2.
### Choosing \(\beta\) values for different figures
It is known that different choices of the \(\beta\) value in the VAE loss (Table 2) will result in different model properties [85]. Therefore, while we scanned across many \(\beta\) values spanning two orders of magnitude, we displayed results for certain betas to best demonstrate our points.
To ensure a fair model comparison in the results shown in Figs. 3, 4, 11 and 12, we selected \(\beta\) values that maximized the overall untangling score for each architecture (cNVAE, \(\beta=0.15\); VAE, \(\beta=1.5\)). This can be seen in Fig. 5, which presents DCI scores over the entire \(\beta\) spectrum. The top row of Fig. 5 displays informativeness (= untangling) scores, revealing an inverted U shape, consistent with previous work [85]. Different architectures peak at different \(\beta\) values.
For the neuron shown in Fig. 7(b), we found that different \(\beta\) values lead to qualitatively similar outcomes, that is, extreme sparsity in cNVAE feature importances, in sharp contrast to that of VAE. We deliberately chose to show a larger \(\beta=5\) for VAE because previous work by Higgins et al. [16] suggested that increasing \(\beta\) values increases neuronal alignment, which we did not observe here. In contrast, even a very small \(\beta=0.01\) for the cNVAE results in a large alignment. This result (paired with other observations) suggests that brain alignment, as we measured it here, emerges due to the architecture rather than from large \(\beta\) values alone--although we do observe some dependence on \(\beta\), so ultimately a combination of both architecture and loss is important (but mostly architecture).
### Details on latent code evaluation
Each ROFL sample \(\mathbf{x}\in\mathbb{R}^{2\times N\times N}\) is rendered from a low dimensional ground truth vector \(\mathbf{g}\in\mathbb{R}^{K}\), where \(N\) is the number of pixels (stimulus size) and \(K\) is the true underlying dimensionality of the data (i.e., number of independent ground truth factors). In other words, the data lies on a curved \(K\)-dimensional manifold embedded in a \(2N^{2}\)-dimensional space.
In the present work, we worked with relatively small stimuli with \(N=17\). This choice allowed us to train more models and explore a larger combination of hyperparameters. We chose an odd value for \(N\) so that there was a pixel at the exact center, which served as the center of gaze or the fixation point (Fig. 0(a)). The true dimensionality (number of generative factors) of different ROFL categories is given in Table 1. For example, it is \(K=11\) for fixate-1.
We trained unsupervised models by feeding them raw samples \(\mathbf{x}\) and asking them to produce a reconstruction \(\hat{\mathbf{x}}\). The models accomplish this by mapping their inputs to latent codes \(\mathbf{z}\) during inference (red pathways in Fig. 2), and mapping \(\mathbf{z}\) to \(\hat{\mathbf{x}}\) during generation (blue pathways in Fig. 2). Note that for all models we have \(\mathbf{z}\in\mathbb{R}^{420}\) (see Table 4).
In sum, our approach consists of four steps:
1. Data generation: \(\mathbf{g}\rightarrow\framebox{\text{ROFL}}\rightarrow\mathbf{x}\)
2. Unsupervised training: \(\mathbf{x}\rightarrow\framebox{\text{Model}}\rightarrow\hat{\mathbf{x}}\)
3. Inference: \(\mathbf{x}\rightarrow\framebox{\text{Model}}\rightarrow\mathbf{z}\)
4. Evaluation: \(\mathbf{z}\leftrightarrow\mathbf{g}\)?
Within this framework, we investigated the relationships between data generative factors \(\mathbf{g}\) and model-learned latent codes \(\mathbf{z}\) by using the DCI metrics, which we discuss in detail below.
### Metrics & Baselines
We trained a variety of unsupervised models (Table 2) and evaluated them based on (1) their ability to represent the generative variables of the ROFL stimuli; and (2) the neural alignment of their representation. This resulted in a total of five different metrics used to evaluate model performance. Below, we describe the different metrics used for each.
#1: untangling & disentangling of data generative factors.To evaluate the latent codes, we employed the DCI framework [22] and made use of the fact that ROFL provides ground truth factors for each data sample. DCI is comprised of three different metrics, _Disentanglement_, _Completeness_, and _Informativeness_, which we will discuss in detail below.
#2: explaining variability in biological neurons & brain alignment.We used ridge regression to predict macaque MT neuron responses from model latents. To measure the alignment between different latent codes and biological neurons, we used two standard metrics: linear predictive power and brain-alignment score defined in Fig. 8a.
#### 9.7.1 The DCI framework
In this section, we provide more details about the DCI framework [22] and discuss how it was used to evaluate the disentangling performance of our models. The core motivation behind DCI is that a good latent code ideally disentangles as many factors of variation as possible while discarding as little information as possible. In other words, we consider a latent code to be "_good_" as long as:
1. It can predict the variance of data generative factors, and
2. It exhibits a one-to-one relationship with the generative factors (Fig. 9).
To make this idea concrete, the starting point is to train a regressor \(f_{j}\) (e.g., lasso) that predicts a given data generative factor \(g_{j}\) from a latent code \(\mathbf{z}\). The performance of \(\mathbf{f}\) in predicting \(\mathbf{g}\) (measured using e.g., \(R^{2}\)) is referred to as the _Informativeness_ score. The notion of _Informativeness_, if measured using a linear regressor, is very similar to _Untangling_[20].
As the next step, the weights of \(\mathbf{f}\) are used to construct a matrix of relative importances \(R\), where \(R_{ij}\) denotes the relative importance of \(z_{i}\) in predicting \(g_{j}\). For example, \(R\) can be the magnitude of lasso coefficients. The matrix of relative importance is then used to define two pseudo-probability distributions:
\[P_{ij} =\frac{R_{ij}}{\sum_{k}R_{ik}}=\text{probability of $z_{i}$ being important for predicting $g_{j}$} \tag{15}\] \[\tilde{P}_{ij} =\frac{R_{ij}}{\sum_{k}R_{kj}}=\text{probability of $g_{j}$ being predicted by $z_{i}$} \tag{16}\]
These distributions are then used to define the following metrics, which complete the DCI trinity:
\[Disentanglement:\quad D_{i}=1-H_{K}(P_{i.});\quad\text{where} \quad H_{K}(P_{i.})=-\sum_{j}P_{ij}\log_{K}P_{ij} \tag{17}\] \[Completeness:\quad C_{j}=1-H_{D}(\tilde{P}_{.j});\quad\text{where} \quad H_{D}(\tilde{P}_{.j})=-\sum_{i}\tilde{P}_{ij}\log_{D}\tilde{P}_{ij} \tag{18}\]
Here, \(K\) is the number of data generative factors (e.g., \(K=11\) for fixate-1, Table 1), and \(D\) is the number of latent variables (\(D=420\) for the models considered in our paper, Table 4).
Intuitively, if latent variable \(z_{i}\) contributes to predicting a single ground truth factor, then \(D_{i}=1\). Conversely, \(D_{i}=0\) when \(z_{i}\) equally contributes to the prediction of all ground truth factors. If the ground truth factor \(g_{j}\) is being predicted from a single latent variable, then \(C_{j}=1\) (a complete code).
Figure 9: Suppose we train a regression model to predict ground truth factors \(g_{j}\) (blue) from latents \(z_{i}\) (grey). Entanglement (left) refers to a situation in which one latent variable contains information about many generative factors. An overcomplete code (middle) is one in which a single generative factor is predicted from many latent variables. A one-to-one relationship (right) is the ideal outcome.
On the other hand, if all latents contribute to predicting \(g_{j}\), then \(C_{j}=0\) (maximally overcomplete). Finally, the overall disentanglement or completeness scores are obtained by averaging across all latents or generative factors.
In the DCI paper, Eastwood and Williams [22] used lasso or random forest as their choice of regressors \(\mathbf{f}\). Initially, we experimented with lasso but found that the lasso coefficient had a strong impact on the resulting DCI scores. To mitigate this, we used linear regression instead and estimated the matrix of relative importance using scikit-learns's permutation importance score (sklearn.inspection.permutation_importance), which measures how much performance drops based on the shuffling of a given feature.
#### 9.7.2 Brain alignment score
As depicted in Fig. 7(a), the alignment score emphasizes the degree of correspondence between latent variables and neurons. A sparse relationship between latents and neurons suggests a higher alignment, indicating that the model's representation closely mirrors the neuron's representational "form" [16]. This notion of alignment is intimately tied to the "completeness" score discussed above. See Fig. 10.
#### 9.7.3 Baselines
In Fig. 4 we compared the cNVAE with alternative models introduced in Table 2. We also included an NVAE with matched number of latents, but necessarily fewer hierarchical levels (see section 9.3 for more details). To demonstrate the difficulty of our untangling task, we included two simple baselines: Raw data and a linear code based on the first \(420\) dimensions of data principal components (PCA). Both Raw and PCA baselines performed poorly, only weakly untangling self-motion velocity (\(\sim 55\%\)) and object velocity in \(x\) and \(y\) directions (\(47\%\)). This result demonstrates the importance of nonlinear computation in extracting good representations.
## 10 Additional Results
In this section, we present additional results demonstrating the advantages of hierarchical VAEs versus non-hierarchical ones, which support the results in the main text. All models explored in this section were trained on fixate-1. For both cNVAE and VAE, we chose \(\beta\) values that maximized their informativeness (we discuss this choice in section 9.5). Specifically, we set \(\beta=0.15\) for cNVAE and \(\beta=1.5\) for VAE. These same models trained using the respective \(\beta\) values were also used for results presented in Fig. 3 and Fig. 4 in the main text.
Figure 10: Consider training a regressor to predict MT neuron responses (green) from latent variables \(z_{i}\) (grey). Beyond accurate predictions, we assess the alignment between \(\mathbf{z}\) and MT based on the sparsity of the resulting latent-to-neuron relationships. One possibility is that many latents contribute to predicting the neuron in a dense fashion. This would suggest that the neuron’s function is diffusely represented within the latents, hinting that the representational form of the latents is ultimately different from that of MT, perhaps through rotation or affine transformations. Conversely, when few latents effectively predict MT, it suggests a similar representational form, laying the foundation for our rationale in defining our alignment score (Fig. 7(a)).
### Individual cNVAE latents exhibit greater linear correlation with ground truth factors
In the main text (Fig. 4) we demonstrated that the cNVAE latent representations \(\mathbf{z}\) contained explicit information about ground truth factors \(\mathbf{g}\). We achieved this by training linear regression models that map the \(420\)-dimensional latent space to each ground truth factor. Here, we investigate the degree that _individual_ latents \(z_{i}\) are (Pearson) correlated with individual factors \(g_{j}\) (Fig. 11). For some ground truth factors, we find that cNVAE learns latents that are almost perfectly correlated with them. For instance, \(r=-0.91\) for \(F_{x}\) and \(r=0.87\) for \(F_{y}\), compared to modest correlation values of \(r=0.34\) and \(r=0.38\) for VAE. The higher linear correlation is consistently observed for cNVAE over VAE. Especially, we note that the VAE does not capture object-related variables at all, consistent with other results in the main paper (Figs. 3 and 4).
For the cNVAE, the indices for the selected highly correlated latents are as follows: \([296,248,393,284,368,92,206,105,338,60,63]\), ordered left-to-right in the same manner as in Fig. 11. Here, an index of \(0\) corresponds to the very top latent operating at the spatial scale of \(2\times 2\), and an index of \(419\) corresponds to the very bottom one at scale \(8\times 8\) (see Fig. 2). By considering the hierarchical grouping of these latents (see dashed lines in Fig. 3) we see that the majority of the highly correlated latents operate at the spatial scale of \(8\), with the exception of object-related ones corresponding to \(X_{obj}\), \(Z_{obj}\), \(V_{obj,y}\), and \(V_{obj,z}\) which operate at \(4\times 4\).
In conclusion, Fig. 11 revealed that cNVAE learns individual latents that are highly correlated with individual ground truth factors, which provides complementary insights as to why cNVAE is so successful in untangling (Fig. 4). However, we note that a high correlation with individual latents is not the best way to capture the overall functional organization, which is better reflected for example in the mutual information heatmap of Fig. 3.
### Generated samples: non-hierarchical VAE completely neglects objects
If a generative model has truly learned the underlying structure of a dataset, then it should be able to generate fake samples that look like its training data. Here, we examine generated samples from both the cNVAE and VAE models, as illustrated in Fig. 12. For comparison, we also include five samples randomly drawn from the fixate-1 dataset that these models were trained on (Fig. 12, first row). The cNVAE samples appear indistinguishable from true samples; however, VAE fails to capture and generate objects, corroborating our previous results (see Fig. 3, Fig. 4, and Fig. 11). This striking divergence between cNVAE and VAE indicates that a hierarchically organized latent space is necessary for effectively modeling datasets that contain multiple interacting spatial scales.
These illustrations collectively showcase that flow vectors in the visual scene are shaped both by local variables, such as the object's physical location, and their interactions with global-influencing variables like self-motion. By comprehending data across scales, cNVAE is able to capture such hierarchical and multi-scale properties inherent in naturalistic optic flow patterns. In contrast, non-hierarchical VAEs exhibit a complete failure in this regard.
### All models achieve good reconstruction performance
All models considered in this paper are trained based (in part or fully) on reconstruction loss, corresponding to how well they could reproduce a presented stimulus. Here we report the models' reconstruction performance, to ensure that they are all able to achieve a reasonable performance. Figure 14 illustrates how the reconstruction loss (EPEPD, endpoint error per dim) and negative evidence lower bound (NELBO) depend on different values of \(\beta\) and model types. Both hierarchical and non-hierarchical VAE models perform well, with cNVAE exhibiting slightly superior performance for \(\beta<0.8\).
### Untangling and reconstruction loss are strongly anti-correlated for cNVAE but not VAE
In this work, we demonstrated that a virtue of hierarchical VAEs is that they make information about ground truth factors easily (linearly) accessible in their latent representations. We were able to quantify this relationship because we used a simulation (ROFL) where ground truth factors were available. However, such a condition is not necessarily the case in real-world datasets that typically lack such direct knowledge [22].
To address this, we examined the relationship between informativeness and reconstruction loss, as well as NELBO, depicted in Fig. 15, to test whether there is a relationship between them. Strikingly, we observe a significant and strong negative linear relationship between reconstruction loss and informativeness in cNVAE (\(r=-0.91,p=8.9\times 10^{-7}\), \(t-\)test). Conversely, no such correlation is observed for VAE (\(r=0.10,p=0.72\), \(t-\)test). Notably, this relationship is absent for both models when comparing informativeness with NELBO (Fig. 15). These findings suggest that reconstruction
Figure 12: The first row shows random samples drawn from ROFL fixate-1 category. The second and third rows are samples generated from cNVAE and VAE models that were trained on fixate-1. Generated samples from cNVAE closely resemble real data; whereas, VAE ignores objects. Samples were generated at a temperature of \(t=1\) without cherry-picking.
loss--exclusively in the case of cNVAE--can serve as a viable proxy for informativeness, even in the absence of ground truth factors in real-world datasets.
### Brain alignment requires more than just linear regression performance
In the field of neuroscience, linear regression \(R^{2}\) scores have commonly been employed to measure alignment between biological and artificial neural networks [30, 31, 34, 36]. However, we found
Figure 13: Latent traversal performed using a latent variable that effectively captures \(Y_{obj}\). This variable belongs to a latent group operating at the spatial scale of \(2\times 2\) (see Figs. 2 and 3; table 4).
that relying solely on regression performance provided a weak signal for model selection, which is consistent with recent work [36, 37, 38, 112, 125]. In contrast, our alignment score defined in Fig. 7(a) provided a much stronger signal for differentiating between models (compare Fig. 7 and Fig. 7(c)).
Figure 14: Both models demonstrate robust reconstruction performance, with cNVAE exhibiting a slight improvement. EPEPD, endpoint error per dim (lower is better). NELBO, negative evidence lower bound (lower is better).
Figure 15: Strong anti-correlation between reconstruction loss and informativeness (i.e., untangling) observed only for cNVAE. Each scatter point corresponds to a model trained with a different \(\beta\) value. EPEPD, endpoint error per dim (lower is better). NELBO, negative evidence lower bound (lower is better).
To investigate this further, we conducted statistical tests, comparing pairs of models based on either their performance in explaining the variance of MT neurons or their brain-alignment scores. The results are presented in Table 5. We observed that ridge regression performance was not particularly effective in distinguishing between models. We measured the effect sizes between distributions of \(N=141\) neurons, measuring how well ridge regression performance (Fig. 7) or alignment scores (Fig. 8c) discriminated between models. We found that the effect sizes were substantially larger for the alignment scores (Fig. 16). These findings suggest that while model performance is necessary, it is not sufficient for accurately measuring alignment between artificial and biological representations. In conclusion, refined and more comprehensive approaches are required to properly capture brain alignment [112, 113, 114, 115].
### Dissecting cross-validated model performance on neural data
One measure of neural alignment that we consider here used unsupervised latents to predict experimentally recorded neural responses (via linear regression). The neurons in this dataset had long presentations of optic flow stimuli, but only a fraction of the neurons (105 out of 141 neurons) had multiple repeats of a short stimulus sequence. Because these repeats facilitated a much clearer measure of model performance, we used these where available, following previous work [34]. However, due to the short stimulus sequence (\(\sim 5\) sec; see Fig. 6b) used to estimate performance in the neurons with stimulus repeats, here we perform additional tests to verify that our conclusions hold using a more rigorous cross-validation procedure.
Specifically, here we used the long stimulus sequence to both train model parameters (80% of data) and determine hyperparameters (including regularization and optimal latencies) using a separate validation set (20%). We then predicted model performance on the repeats. This is in contrast to our approach in the main text, which used the repeat data itself to optimize hyperparameters, but as a result, risked overfitting on the short stimulus sequence. Indeed, we found that independent determination of hyperparameters (from the validation set) resulted in decreased model performance on the test set (Table 6) for both cNVAE and VAE models, although with comparable effect sizes. As a result, our conclusions of overlapping performance between cNVAE and VAE with a slightly better performance of the cNVAE were robust.
Figure 16: Effect sizes are shown for the two different approaches in model comparisons: using ridge regression _Performance_ (related to Fig. 7), versus, _Alignment_ scores (related to Fig. 8c). Alignment demonstrates a substantially larger effect size, leading to a more pronounced differentiation among the models. Positive values indicate a larger value for hierarchical models (i.e., cNVAE > VAE or cNAE > AE). For different \(\beta\) values, model _Performance_ difference fluctuates between positive and negative values. In contrast, the difference in model _Alignment_ scores is consistently positive, indicating a greater brain alignment for hierarchical models.
## 11 Additional Discussion
### Visual-only cues and unsupervised learning are sufficient for untangling optic flow causes
It is well known that an observer's self-motion through an environment can be estimated using optic flow patterns alone [80], although it has been proposed that the brain also uses a combination of visual and non-visual cues [140], such as efference copies of motor commands [141], vestibular inputs
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Performance, \(R\) (\(\mu\pm se;\ N=105\))} \\ \cline{2-6} & \(\beta=0.5\) & \(\beta=0.8\) & \(\beta=1\) & \(\beta=5\) \\ \hline cNVAE & \(.452\pm.025\) & \(.479\pm.025\) & \(.486\pm.024\) & \(.462\pm.025\) \\ VAE & \(.461\pm.027\) & \(.401\pm.029\) & \(.466\pm.028\) & \(.461\pm.027\) \\ cVAE & & & \(.460\pm.026\) & & \\ AE & & & \(.412\pm.031\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Model performance on the subset of neurons for which we had repeat data (\(N=105\)). Here, we used the repeat data as a held-out test set. We found that model performances remain statistically indistinguishable, providing additional support for our main results.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{\(\beta\)} & \multicolumn{2}{c|}{Performance (Figure 7)} & \multicolumn{2}{c|}{Alignment scores (Figure 8c)} \\ \cline{2-7} & \multicolumn{1}{c|}{effect size} & \(p-\)value & significant? & effect size & \(p-\)value & significant? \\ \hline ae & -0.23 & 0.01 & & 1.2 & 8e-29 & \\ \hline
0.01 & -0.28 & 0.002 & & 0.32 & 0.0004 & \\ \hline
0.1 & -0.47 & 2e-07 & & 0.12 & 0.2 & \\ \hline
0.15 & -0.16 & 0.07 & & 0.46 & 4e-07 & \\ \hline
0.2 & -0.14 & 0.1 & & 0.33 & 0.0003 & \\ \hline
0.3 & -0.012 & 0.9 & & 0.69 & 4e-13 & \\ \hline
0.4 & -0.15 & 0.09 & & 0.65 & 7e-12 & \\ \hline
0.5 & -0.013 & 0.9 & & 0.65 & 7e-12 & \\ \hline
0.6 & 0.29 & 0.001 & & 0.69 & 4e-13 & \\ \hline
0.7 & 0.16 & 0.08 & & 0.71 & 1e-13 & \\ \hline
0.8 & 0.44 & 1e-06 & & 0.73 & 4e-14 & \\ \hline
0.9 & 0.24 & 0.008 & & 0.96 & 1e-20 & \\ \hline
1.0 & 0.0016 & 1 & & 0.89 & 1e-18 & \\ \hline
1.5 & 0.17 & 0.07 & & 1.2 & 3e-28 & \\ \hline
2.0 & 0.037 & 0.7 & & 0.91 & 3e-19 & \\ \hline
5.0 & -0.24 & 0.008 & & 1.3 & 7e-31 & \\ \hline
10.0 & -0.056 & 0.6 & & 0.55 & 3e-09 & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Hierarchical architectures are significantly more aligned with MT neurons compared to non-hierarchical ones. Paired \(t-\)tests were performed, and \(p-\)values were adjusted for multiple comparisons using the Benjamini-Hochberg correction method [138]. Effect sizes were estimated using Cohen’s \(d\)[139], where positive values indicate cNVAE > VAE (or cNAE > AE, first row).
[142], and proprioception [143]. Indeed, using optic flow alone can be complicated by rotational eye movements and independently moving objects [48, 144, 145]. How can the brain estimate the motion of an object when self-motion is non-trivially affecting the entire flow field [146, 147], including that of the object itself? This is a critical motion processing problem [148], manifested in diverse ecological scenarios such as prey-predator dynamics, crossing the street, or playing soccer. Are retinal visual-only cues sufficient to extract information about different self-motion components, as well as object motion [149]?
Unlike the approach of Mineault et al. [34], our work is based on the premise that retinal-only signals contain sufficient information about their ground truth causes and that these factors can be extracted using unsupervised learning alone. In contrast, Mineault et al. [34] trained their DorsalNet by providing dense supervision on self-motion components, arguing that such supervised training is biologically plausible since these signals are approximately available to the observer from vestibular inputs and other such extra-retinal sources.
Our results (Fig. 4) provide proof-of-concept that it is possible to untangle ground truth causes of retinal optic flow in a completely unsupervised manner, using the raw optic flow patterns alone. Comparing the performance of alternative models revealed two essential ingredients for the success of our approach: (1) an appropriate loss function--inference [1]; and, (2) a properly designed architecture--hierarchical, like the sensory cortex [3].
### Comparing cNVAE to NVAE, its non-compressed counterpart
In Fig. 4, we investigated the performance of the original non-compressed NVAE with an approximately matched latent dimensionality (\(440\), slightly larger than \(420\) for cNVAE: more details are provided in section 9.3). We found that although NVAE did better than non-hierarchical VAE, it still underperformed cNVAE. This is because trying to match the latent dimensionality of NVAE with cNVAE necessitates reducing the number of hierarchical latent groups in the NVAE since most of its convolutional latents capture spatial dimensions rather than more abstract features as cNVAE does. From previous work [71], we know that the "stochastic depth" of hierarchical VAEs is the key reason behind their effectiveness; therefore, it was expected that reduced depth would hurt an NVAE with a matched number of latents.
It is worth noting that cNVAE and NVAE had a balanced performance across all ground truth factors (Fig. 4), in that they captured both global (self-motion-related) and local (object-related) aspects well. In contrast, non-hierarchical VAE completely ignored object-related variables and focused solely on the global scale determined by fixation point and self-motion. This suggests that hierarchical architecture is crucial for a more comprehensive understanding of multi-scale datasets such as ROFL.
### Disentanglement is in the eyes of the beholder
The conventional definition of disentanglement in machine learning relies on relating the latent representation to predefined generative factors, a choice that is often not uniquely determined, and is often guided by heuristics [18]. For example, in the present work, we chose to represent velocities in Cartesian coordinates, rather than polar coordinates and judged the degree of disentanglement based on the relationship of the latents to these factors. Notably, these coordinate systems are related by a nonlinear transform, and latents linearly related to one coordinate system will then be less linearly related to the other.
Throughout our experiments, we observed one or more cNVAE latent variables that exhibited equally high mutual information with all components of self-motion velocity in Cartesian coordinates (e.g., latent dimensions 154 and 122 in Fig. 3). When we translated the generative factors into polar coordinates and looked at their relationship with these latents, we found that these latents were solely encoding the magnitude of self-motion, irrespective of its direction (Fig. 17). Thus, these seemingly highly entangled latent variables would have achieved almost perfect disentanglement scores had we employed polar coordinates instead of Cartesian coordinates to represent the ground truth velocities (\(d_{i}\approx 0.38\to d_{i}\approx 0.81\)).
Our findings shed light on the fact that disentanglement cannot be considered an inherent property of objective reality; rather, it is significantly influenced by our a priori determination of independent factors of variation within the data. Given the inherent complexity and richness of natural datasets, it becomes challenging to unequivocally determine the true factors of variation a priori, pointing to the need for a more systematic approach.
When it comes to interpreting brain functions, perhaps considering the behavioral relevance of generative factors could guide our choices. For example, an animal directing its own movements does not pick grid coordinates but rather chooses direction and speed, possibly making a polar representation more "behaviorally relevant" to representations within its brain.
Another promising approach lies in focusing on the symmetry transformations inherent in the world [18, 150]. By seeking representations that respect the underlying geometrical and group-theoretical structures of the world [151, 152], and comparing them in principled ways [112, 113, 114], we may uncover new perspectives that provide valuable insights beyond the conventional and subjective notions of disentanglement.
## 12 Additional Background & Related Work
The deep connections between VAEs and neuroscience is explored in a review by Marino [17], but work looking at the synergy between neuroscience and VAEs has thus far been limited to the following recent papers.
Storrs et al. [15] trained PixelVAE [153] models on synthesized images of glossy surfaces and found that the model spontaneously learned to cluster images according to underlying physical factors like material reflectance and illumination, despite receiving no explicit supervision. Most notably, PixelVAE mimicked human perceptual judgment of gloss in that it made errors that were similar to those made by humans.
Higgins et al. [16] trained \(\beta\)-VAE [85] models on face images and evaluated the models on predicting responses of individual inferotemporal (IT) neurons from Macaque face patch. They found that \(\beta\)-VAE discovers individual latents that are aligned with IT neurons and that this alignment linearly increases with increased beta values.
Csikor et al. [68] investigated the properties of representations and inference in a biologically inspired hierarchical VAE called Topdown VAE that is capable of reproducing some key properties of representations emerging in V1 and V2 of the visual cortex. The authors showed that architectures that feature top-down influences in their recognition model give rise to a richer representation, such that specific information that is not present in mean activations becomes linearly decodable from posterior correlations.
Finally, Miliotou et al. [69] learned mapping from the latent space of a hierarchical VAE to fMRI voxel space. This allowed the use of the trained decoder to reconstruct images directly from brain responses. Importantly, they perform ablation experiments and find hierarchy is an essential component of their models.
Figure 17: This latent variable cares about the magnitude of self-motion only. Its disentanglement score depends strongly on what we choose a priori as our independent factors of variation in data.
Appendix: ROFL derivations and detailed description
ROFL aims to generate synthetic optical flow patterns that closely resemble those arising from ecologically relevant types of motion. This entails the consideration of both self-motion and object-motion components. Specifically, the self-motion component encompasses various subtypes such as translational and rotational motions induced by eye or head movements. Furthermore, it is crucial to have control over the relative prominence of these two types of motion. Depending on the specific configuration, the resulting flow pattern can exhibit an _object-dominated_ characteristic when an abundance of objects is incorporated, or conversely, a _self-motion-dominated_ nature if fewer or no objects are present. By carefully manipulating these factors, ROFL enables the generation of optical flow patterns that faithfully capture the dynamics of real-world scenarios involving an interplay between self-motion and object motion.
### Setup
Self-motion is meaningful if the observer is moving relative to a static background (i.e., the environment). We consider a solid background wall at a distance \(Z=1\) from the observer which extends to infinity in the \(X-Y\) plane. Define the following coordinate frames
1. **Fixed coordinates**, represented with capital letters: \((X,Y,Z)\).
2. **Observer's coordinates**, represented with lowercase letters: \((x,y,z)\).
Both coordinate systems are centered around the observer. The difference is that \(\hat{Z}\) is always perpendicular to the background plane, while \(\hat{z}\) is defined such that it is parallel to the fixation point. All points in the space are initially expressed in the fixed coordinates. For example, any given point like \(\vec{P}=(X,Y,Z)\) has its coordinates defined in the fixed system. The same is true for fixation point, \(\vec{F}=(X_{0},Y_{0},Z)\), and the observer's position \(\vec{O}=(0,0,0)\), and so on. See Fig. 18.
#### 13.1.1 Relating the two coordinate systems
How do we go from the fixed coordinates to the observer's coordinates? We need to find a rotation matrix that maps the \(\hat{Z}\) direction to the direction defined by the fixation point. By definition, this will determine \(\hat{z}\) and all the other directions in the observer's coordinates. First, let us express \(\vec{F}\) using its polar coordinates:
\[\vec{F}=(X_{0},Y_{0},Z)\equiv\left(\|\vec{F}\|,\Theta_{0},\Phi_{0}\right), \tag{19}\]
where \(\|\vec{F}\|=\sqrt{X_{0}^{2}+Y_{0}^{2}+Z^{2}}\), \(\cos\Theta_{0}=Z/\|\vec{F}\|\), and \(\tan\Phi_{0}=Y_{0}/X_{0}\). It turns out what we are looking for is a rotation by an amount \(\Theta_{0}\) about a unit vector \(\hat{u}\) that depends only on \(\Phi_{0}\). That is,
\[\hat{u}(\Phi_{0})=\begin{pmatrix}-\sin\Phi_{0}\\ \cos\Phi_{0}\\ 0\end{pmatrix}, \tag{20}\]
and the rotation matrix is given by \(R_{\hat{u}}(\Theta_{0})=I+(\sin\Theta_{0})\,L(\hat{u})+(1-\cos\Theta_{0})\,L( \hat{u})^{2}\). Here, \(L(\vec{\omega})\) is the skew-symmetric matrix representation of a vector \(\vec{\omega}\), that is
\[L(\vec{\omega})\coloneqq\vec{\omega}\cdot\vec{L}=\omega_{X}L_{X}+\omega_{Y}L_ {Y}+\omega_{Z}L_{Z}, \tag{21}\]
where \(L_{i}\) are the familiar basis matrices for \(\mathfrak{so}(3)\), the Lie algebra of \(SO(3)\):
\[L_{X}=\begin{pmatrix}0&0&0\\ 0&0&-1\\ 0&1&0\end{pmatrix},\quad L_{Y}=\begin{pmatrix}0&0&1\\ 0&0&0\\ -1&0&0\end{pmatrix},\quad L_{Z}=\begin{pmatrix}0&-1&0\\ 1&0&0\\ 0&0&0\end{pmatrix}. \tag{22}\]
The rotation matrix in its full form is expressed as \[R\coloneqq R_{\hat{u}}(\Theta_{0})=\begin{pmatrix}s^{2}+c^{2}\cos\Theta_{0}&-cs \left[1-\cos\Theta_{0}\right]&c\sin\Theta_{0}\\ -cs\left[1-\cos\Theta_{0}\right]&c^{2}+s^{2}\cos\Theta_{0}&s\sin\Theta_{0}\\ -c\sin\Theta_{0}&-s\sin\Theta_{0}&\cos\Theta_{0}\end{pmatrix}, \tag{23}\]
where we have defined \(c\coloneqq\cos\Phi_{0}\) and \(s\coloneqq\sin\Phi_{0}\).
Any point \(\vec{P}=(X,Y,Z)\) defined in the fixed coordinate system can be mapped to a point \(\vec{p}=R^{T}\vec{P}\) with components \(\vec{p}=(x,y,z)\) given in the observer's system. For example, it can be shown that \(R^{T}\vec{F}\) is parallel to \(\hat{z}\), which is how we defined \(R\) in the first place. See Fig. 18.
In conclusion, we have a rotation matrix \(R\) that is fully determined as a function of the fixation point \(\vec{F}\). Applying \(R\) rotates the fixed coordinate system onto the observer's system
\[\hat{x} \coloneqq R\hat{X}, \tag{24}\] \[\hat{y} \coloneqq R\hat{Y},\] \[\hat{z} \coloneqq R\hat{Z}.\]
This way, the observer's coordinates \((x,y,z)\) are obtained ensuring that the fixation point is always parallel to \(\hat{z}\).
#### 13.1.2 Adding motion
Consider a point \(\vec{P}=(X,Y,Z)\). In the previous section, we showed that the rotation matrix in Equation 23 can be used to relate \(\vec{P}\) to its observer-centric coordinates
Figure 18: Setup. **(a)** Background and coordinate systems. **(b)** The rotation of the fixed coordinates \(\left(\hat{X},\hat{Y},\hat{Z}\right)\) onto the observer-centric coordinates \((\hat{x},\hat{y},\hat{z})\) can be accomplished by applying a rotation of angle \(\Theta_{0}\) around the unit vector \(\hat{u}\) as defined in Equation 20. Note that by definition, \(\hat{z}\) is always parallel to the fixation point. See the text for more details. **(c)** Perspective from the observer’s point of view. The retinotopic coordinates of point \(\vec{P}\) are denoted by \(\alpha_{x}\) and \(\alpha_{y}\), measured in radians.
\[\vec{P}=R\vec{p}. \tag{25}\]
Differentiate both sides to get
\[\vec{V}\coloneqq\frac{d\vec{P}}{dt}=\frac{dR}{dt}\vec{p}+R\frac{d\vec{p}}{dt}, \tag{26}\]
where \(\vec{V}\) is the velocity of the point relative to the observer, with components measured in the fixed coordinate system.
#### 13.1.3 Derivative of the rotation matrix
Here we will derive a closed-form expression for \(dR/dt\) in terms of the observer's angular velocity. This is important because it essentially captures the effects of eye movement.
Let us use \(\vec{\omega}=(\omega_{X},\omega_{Y},\omega_{Z})\) to denote the angular velocity of observer's coordinates, with components expressed in fixed coordinates. The time derivative of \(R\) is given as follows
\[\frac{dR}{dt}=L(\vec{\omega})R, \tag{27}\]
where \(L(\vec{\omega})\) is given in Equation 21. Taken together, equations 3, 4, 8, and 9 yield
\[\boxed{\begin{aligned} \frac{d\vec{p}}{dt}&=R^{T} \vec{V}-\left[R^{T}L(\vec{\omega})R\right]\vec{p}\\ &=R^{T}\vec{V}-\left[R^{T}\begin{pmatrix}0&-\omega_{Z}&\omega_{Y} \\ \omega_{Z}&0&-\omega_{X}\\ \omega_{Y}&\omega_{X}&0\end{pmatrix}R\right]\vec{p}\end{aligned}} \tag{28}\]
This equation allows relating the velocity of a moving point expressed in fixed coordinates, \(\vec{V}\), to its observer-centric equivalent, \(\vec{v}\coloneqq d\vec{p}/dt\). Importantly, Equation 28 is a general relationship valid for any combination of velocities and rotations.
#### 13.1.4 Motion perceived by the observer
Before discussing specific applications, we need to solve one remaining piece of the puzzle: what is \(\vec{v}\), and how is it related to the motion perceived by the observer? To address this, we need to map the retinotopic coordinates of the observer (measured in radians), to points in the environment (measured in the observer-centric coordinates).
Denote the retinotopic coordinates using \((\alpha_{x},\alpha_{y})\). The goal is to relate each \((\alpha_{x},\alpha_{y})\) pair to a point in the world \(\vec{p}\) with coordinates \((x,y,z)\). It can be shown that
\[\tan\alpha_{x} =x/z=\tan\theta\cos\phi, \tag{29}\] \[\tan\alpha_{y} =y/z=\tan\theta\sin\phi,\]
where \(\theta\) and \(\phi\) are the polar coordinates of \(\vec{p}\). See Fig. 18c for an illustration.
Differentiate both sides of Equation 29 and rearrange terms to obtain the time derivative of \((\alpha_{x},\alpha_{y})\) as a function of \(\vec{v}=(\dot{x},\dot{y},\dot{z})\) and \(\vec{p}\). That is,
\[\boxed{\begin{aligned} \dot{\alpha_{x}}&= \frac{\dot{x}z-\dot{z}x}{z^{2}+x^{2}}\\ \dot{\alpha_{y}}&=\frac{\dot{y}z-\dot{z}y}{z^{2}+y^{2 }}\end{aligned}} \tag{30}\]
Thus, we obtained a relationship between retinotopic velocity \((\dot{\alpha_{x}},\dot{\alpha_{y}})\) (measured in \(radians/s\)) and velocity of objects in the world (measured in \(m/s\)). In conclusion, equations 28 and 30 enable us to compute the observer's perceived retinotopic velocity for any type of motion.
### Example: maintaining fixation
We are now ready to apply the formalism developed in the preceding section to create optical flow patterns in realistic settings. In this section, we will consider a specific example: self-motion while maintaining fixation on a background point. This is an important and ecologically relevant type of motion, as we often tend to maintain fixation while navigating the environment--think about this next time you are moving around. In fact, one can argue that fixation occurs more naturally during behaviors such as locomotion, as opposed to typical vision experiments where the animals are stationary and passively viewing a screen.
What does it mean to maintain fixation while moving? It means the eyes rotate to cancel out the velocity of the fixation point. Assume the observer is moving with velocity \(\vec{V}_{self}\). Equivalently, we can imagine the observer is stationary while the world is moving with velocity \(\vec{V}=-\vec{V}_{self}\) (this is the same \(\vec{V}\) that appears in Equation 28).
Any eye movement can be represented mathematically as a rotation of the observer's coordinates with respect to the fixed coordinates. Use \(\vec{\omega}\) to denote the angular velocity of the rotation. We can determine \(\vec{\omega}\) by requiring that the rotation cancels out the velocity of the fixation point. To this aim, we first compute the normal component of \(\vec{V}\) with respect to the fixation point \(\vec{F}\) as follows
\[\vec{V}_{\perp}\coloneqq\vec{V}-\frac{\vec{V}\cdot\vec{F}}{\|\vec{F}\|_{2}} \vec{F}. \tag{31}\]
Use the above equation to compute the angular velocity
\[\vec{\omega}=\frac{\vec{F}\times\vec{V}_{\perp}}{\|\vec{F}\|^{2}}. \tag{32}\]
Plug Equation 32 in Equation 28 to find \(\vec{v}\), and use Equation 30 to find \((\dot{\alpha_{x}},\dot{\alpha_{y}})\). This way, we just calculated the motion perceived by the observer during self-motion while maintaining fixation.
### The algorithm
Start by choosing a fixation point \(\vec{F}=(X_{0},Y_{0},Z)\), field-of-view extent (\(fov\)), and retinal resolution (\(res\)). For example, \(fov=45^{\circ}\) and \(res=5.625^{\circ}\). Use this to create a mesh grid for \((\alpha_{x},\alpha_{y})\). The grid covers \((-fov,+fov)\) degrees in each dimension and has shape \(2*fov/res+1\), or \(17\times 17\) in our example. Use Equation 29 to compute \((\theta,\phi)\) pair for each point on the \((\alpha_{x},\alpha_{y})\) grid.
Because the simulation is scale-invariant, we can always fix the distance between the observer and the background wall. Assume \(Z=cte.=1\), and use equations 23 and 25 to find \(\vec{p}=(x,y,z)\) per grid point. Note that we assume infinite resolution for the points in the environment.
So far, we partitioned the retina into a grid and found the observer-centric coordinates of a point in the real world that falls on each grid point. Now we can sample a random self-motion velocity \(\vec{V}_{self}\) and use the results described in the previous sections to find \((\dot{\alpha_{x}},\dot{\alpha_{y}})\). This provides us with the \(2D\) retinal velocity vector on the grid and concludes the algorithm for the simple case of fixate-0 (i.e., no objects).
Please refer to our code for additional details, including instructions on how to incorporate independently moving objects into the scene.
#### 13.3.1 Valid fixation points
The observer-wall distance is constrained to be a positive value. That is, the observer is always situated on the left side of the wall (\(Z>0\) for all points; Fig. 18). This constraint, combined with a given \(fov\) value and Equation 23 results in a theoretical bound on which fixation points are valid. To derive this bound, we start by considering the \(\hat{Z}\) component of Equation 25
\[(R\vec{p})_{3}=R_{31}x+R_{32}y+R_{33}z=Z>0. \tag{33}\]Divide both sides by \(z\) and use Equation 29 to get
\[R_{31}\tan\alpha+R_{32}\tan\beta+R_{33}>0. \tag{34}\]
Plug the matrix entries from Equation 23 and rearrange to get
\[\sin\Theta_{0}(\cos\Phi_{0}\tan\alpha+\sin\Phi_{0}\tan\beta)<\cos \Theta_{0}, \tag{35}\] \[\Rightarrow\quad X_{0}\tan\alpha+Y_{0}\tan\beta<Z.\]
For a given \(fov\) value, we have \(-\tan\left(fov\right)\leq\tan\alpha,\tan\beta\leq\tan\left(fov\right)\). Assuming \(Z=1\), we find that the \((X_{0},Y_{0})\) coordinates of a valid fixation point should satisfy the following inequality
\[|X_{0}|+|Y_{0}|<\frac{1}{\tan\left(fov\right)}. \tag{36}\]
In other words, the \(L_{1}\) norm of \((X_{0},Y_{0})\) should be less than \(1/\tan\left(fov\right)\). For example, when \(fov=45^{\circ}\), the upper bound is \(1\). This result can be used to choose an appropriate \(fov\): smaller values allow a larger phase space to explore for fixation points.
### Conclusions
We introduced Retinal Optic Flow Learning (ROFL), a novel synthetic data framework to simulate realistic optical flow patterns. The main result is given in Equation 28, which shows the simulation has the potential to generalize to various other types of motion. For instance, the observer might engage in smooth pursuit of a moving object instead of maintaining fixation, a scenario that can be modeled simply by adjusting the choice of \(\vec{\omega}\) in Equation 28. This flexibility enables us to generate a diverse repertoire of realistic optical flow patterns, which can be used in training and evaluating unsupervised models, among other applications.
We believe that our framework will prove valuable not only to neuroscientists studying motion perception but also to machine learning researchers interested in disentangled representation learning. By bridging these domains, we aim to foster interdisciplinary collaboration and advance our understanding of both biological and artificial information processing systems. | ## Review
### Summary
The paper presents a thorough investigation into the alignment of representations in deep generative models with neural activity in the primate brain, focusing on motion processing. It introduces a novel synthetic dataset, Retinal Optic Flow Learning (ROFL), to train a compressed hierarchical Variational Autoencoder (cNVAE) and evaluate its performance against other architectures. The authors demonstrate that the cNVAE outperforms standard VAEs in disentanglement metrics and aligns closely with neural responses of MT neurons to optical flow stimuli. Despite some concerns regarding the importance of the hierarchical structure and the clarity of the experimental design, the paper contributes valuable insights into the modeling of neural representations in the context of vision research.
### Strengths
- The introduction of the ROFL dataset is original and relevant for testing generative models.
- The experimental evaluation includes comprehensive comparisons with various architectures, showcasing the cNVAE's advantages.
- The paper openly discusses its limitations, which adds credibility to the authors' claims.
- Clear description of the stimulus generation process.
- The use of hierarchical VAEs is intriguing and has potential applications in other domains.
### Weaknesses
- The importance of the hierarchical structure in achieving model alignment with neural data is not convincingly established.
- The evaluation methods may be perceived as circular, as better disentanglement scores are linked to improved alignment, raising questions about the validity of the conclusions.
- Lack of a dedicated related work section makes it difficult to contextualize the study within existing literature.
- The manuscript could benefit from clearer descriptions of the model architecture and methods used.
- The experimental design lacks depth in certain areas, such as the exploration of latent space dimensions and the handling of noise in neural response modeling.
### Questions
- Can the authors clarify how the cNVAE compares to its non-compressed counterpart?
- What specific metrics are used to assess performance, and how do they compare against established baselines?
- Could the authors expand on their findings regarding the influence of training data on model performance?
- How do the authors plan to mitigate the concerns regarding the hierarchical inference in their future work?
- Is there potential for further exploration of the learned representations and their relationship to physiological data?
### Soundness
**Score:** 3
**Description:** 3 = good; the study presents solid methodology but has areas of uncertainty, particularly in the implications of its findings.
### Presentation
**Score:** 3
**Description:** 3 = good; the writing is mostly clear, though some sections could benefit from restructuring and clearer explanations of complex concepts.
### Contribution
**Score:** 3
**Description:** 3 = good; while the theoretical advancements are somewhat marginal, the empirical results contribute valuable insights into the field.
### Rating
**Score:** 6
**Description:** 6 = Accept: The paper is technically solid with significant contributions, though it requires improvements in clarity and evaluation depth.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents interesting findings regarding the application of hierarchical VAEs to neural data, supported by a novel synthetic dataset. Despite some concerns about the methodological rigor and clarity, the overall contributions to the understanding of motion processing in the primate brain are significant enough to warrant acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Computing a human-like reaction time metric from stable recurrent vision models
Lore Goetschalckx
These authors contributed equally to this work. Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI 02912
Lakshmi N. Govindarajan
These authors contributed equally to this work. Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI 02912
Alekh K. Ashok
Aarit Ahuja
Neuroscience Department, Brown University, Providence, RI 02912
David L. Sheinberg
Neuroscience Department, Brown University, Providence, RI 02912
Thomas Serre
###### Abstract
The meteoric rise in the adoption of deep neural networks as computational models of vision has inspired efforts to "align" these models with humans. One dimension of interest for alignment includes behavioral choices, but moving beyond characterizing choice patterns to capturing temporal aspects of visual decision-making has been challenging. Here, we sketch a general-purpose methodology to construct computational accounts of reaction times from a stimulus-computable, task-optimized model. Specifically, we introduce a novel metric leveraging insights from subjective logic theory summarizing evidence accumulation in recurrent vision models. We demonstrate that our metric aligns with patterns of human reaction times for stimulus manipulations across four disparate visual decision-making tasks spanning perceptual grouping, mental simulation, and scene categorization. This work paves the way for exploring the temporal alignment of model and human visual strategies in the context of various other cognitive tasks toward generating testable hypotheses for neuroscience. Links to the code and data can be found on the project page: [https://serre-lab.github.io/rnn_rts_site/](https://serre-lab.github.io/rnn_rts_site/).
## 1 Introduction
The symbiosis between visual neuroscience and machine learning has created a distinctive and unprecedented phase in the field of vision. On the one hand, formidable advances in machine learning have supported the development of computational models of visual cognition that rival, and in some cases surpass, human-level performance on naturalistic tasks [1, 2, 3, 4, 5]. On the other hand, incorporating computational and neurophysiological constraints from biological visual systems has yielded significant benefits for AI models in terms of their performance [6, 7], efficiency [8], and robustness [9, 10, 11]. Additionally, performant computational vision models that operate directly on high-dimensional sensory inputs have shown great promise for scientific discovery through the generation of testable hypotheses for neuroscience [12, 13]. At the core of this synergistic loop lies the need for reliably quantifying the degree of "alignment" between models and the primate visual system, which can subsequently serve as a control knob for closed-loop hypothesis testing.
The notion of "alignment" is subjective and can be operationalized at varying levels of analysis [14]. One of the most prominent alignment techniques is to quantify the degree to which model featurerepresentations predict time-averaged firing rates of single neurons across various stimuli [11; 15; 16; 17]. In contrast, computational-level alignment analyses typically use axiomatic attribution methods [18] to extract minimal sub-circuits from larger models and probe them to gain mechanistic insights [12]. A third popular approach to alignment is at a higher behavioral level, where techniques focus on understanding the extent to which models and primates utilize similar "visual strategies". Top-down attention masks, diagnostic of human observers' visual strategies, are a commonly employed axis of comparison in this type of model alignment [7; 19]. At the same time, other behavioral measures such as object confusion patterns [20; 21] or human-derived pairwise image similarity ratings [22] are also widely considered effective quantities that can form the basis of alignment.
Each of these techniques has a common tenet: their purview is limited to measures of alignment that are global, time-independent summaries of neural or behavioral states. While this may be sufficient for task paradigms primarily involving feedforward processing, a large part of visual cognition is highly dynamic [23; 24; 25]. Understanding visual computations, thus, necessarily involves characterizing and modeling the time-varying properties of neural responses to complex and varied inputs. _Why, then, is there a dearth of alignment methods focusing directly on the level of visual processing dynamics?_ We posit a two-fold bottleneck. First, there is a need for more performant models that exhibit and mimic visual cortical dynamics at scale. Second, we lack the mathematical tools to extract meaningful dynamical signatures from such models and compare them with biological systems. While recent progress in biologically-inspired machine learning has started to open up solutions for the former [8; 19; 26; 27; 28], the latter remains a wide-open research area.
**Contributions.** In this study, we consider this temporal alignment perspective. Recent advances in training biologically motivated large-scale convolutional recurrent neural network models (cRNNs henceforth) on visual cognition tasks [8] have provided candidate models to study the temporal alignment problem. A cRNN's inherent design affords access to its time-varying dynamical properties that can be explicitly analyzed. Throughout this paper, we specifically focus on extracting a metric that captures a cRNN's dynamical signature and on quantifying the ability of this metric to explain primate decision time courses in the form of reaction times (RTs) in concomitant visual psychophysics tasks.
* We introduce a novel computational framework to train, analyze, and interpret the behavior of cRNNs on visual cognitive tasks of choice. Our framework triangulates insights from an attractor dynamics-based training routine [8] and evidential learning theory [29] to support stable and expressive evidence accumulation strategies.
* We derive a stimulus-computable, task- and model-agnostic metric to characterize evidence accumulation in cRNNs. Our metric does not require extra supervision and purely leverages internal cRNN activity dynamics.
* We comprehensively demonstrate the efficacy of our metric in capturing stimulus-dependent primate decision time courses in the form of reaction times (RTs) in four disparate visual cognitive challenges that include incremental grouping, mental simulation, and scene categorization. To the best of our knowledge, this is the first demonstration of qualitative temporal alignments between models and primates across task paradigms.
## 2 Related Work
**Cognitive process models.** The speed-accuracy tradeoff in human behavior is ubiquitous in cognitive science [31] and has garnered much attention from the modeling perspective. The most prominent of them is the Drift-Diffusion Model (DDM) family, which specifies RTs by determining when the amount of accumulated evidence for a behavioral choice from a noisy process reaches a threshold value [32; 33]. Though such models have had phenomenal successes in aligning neural and behavioral data [34], they are typically not stimulus-computable or rely on handcrafted heuristics [35]. Moreover, such models are usually directly fitted to RT data [35; 36], unlike our approach, where RT is a derived quantity without any additional supervision.
**Computing reaction-time proxies from neural network models.** Several notable efforts have attempted to derive RT measures from neural network models to serve as a proxy for "computational cost" [26; 37; 38; 39; 40; 41; 42]. Spoerer et al. [26] introduced the notion of energy demand in their objective function that motivated parsimony regarding the number of computations the cRNN executes. Bansalet al. [40] discussed "algorithmic extrapolation" where their networks solved problems of greater difficulty than those in their training set by performing additional computations. Graves [37] proposed the use of a "halting unit" whose activation level determined the probability of continuing model computations. Finally, Figurnov et al. [43], motivated by [37], adaptively determined the spatial extent and number of processing layers in a deep convolutional network. In a broader sense, "conditional computing" approaches aim to learn dropout-like policies [44; 45] to train neural networks that can balance parsimony and performance. While the approaches presented in [37; 43; 44; 45] are impressive advances, they neither operate explicitly with cRNNs nor yield RT values for downstream comparisons on visual cognitive tasks. The elegant approach of [26] suffers from the primary drawback that it relies on the backpropagation through time (BPTT) algorithm. BPTT imposes high memory demands, limiting the number of timesteps a cRNN can perform and therefore condemning any derived measure to be coarse. Furthermore, vanilla BPTT forces the dynamics to be constant (i.e., the cRNN arrives at a solution at \(t=T\) for any input) making it non-stimulus-adaptive [8].
**Probabilistic interpretation of model uncertainty.** Decision certainty and computational parsimony are duals of each other. In other words, easier tasks require fewer computational resources, with models typically being most certain about their predictions, while the opposite holds for harder tasks. Modeling decision uncertainties provide another effective angle to capture the "computational demand" of a stimulus condition. While deterministic neural networks suffer from ill-calibrated prediction confidence in Softmax outputs, Bayesian neural networks derive prediction confidence from their weight uncertainties but are generally not performant. Inspired by the Dempster-Shafer theory of evidence [29], Sensoy et al. [30] propose a formulation that treats model outputs as parameters of a Dirichlet distribution that signifies the total amount of evidence accumulated for each class. Although influential, this body of work, called Evidential Deep Learning (EDL), has never been linked to RT measures or applied to visual cognitive tasks. We refer the interested reader to Ulmer et al. [46] for a comprehensive survey of this literature. In this paper, we specifically leverage EDL in conjunction with attractor dynamics to support the derivation of an RT proxy.
## 3 General Methods
### Models
cRNN.The model of interest chosen in this work is the horizontal gated recurrent unit (hGRU; [19]), a convolutional recurrent neural network model that we will canonically refer to as cRNN throughout the rest of the paper. In principle, our framework applies to other cRNN architectures as well. The hGRU is inspired by the anatomical and neurophysiological properties of the primate
Figure 1: **Computing a reaction time metric from a recurrent vision model.****a.** A schematic representation of training a cRNN with evidential deep learning (EDL; [30]). Model outputs are interpreted as parameters (\(\boldsymbol{\alpha}\)) of a Dirichlet distribution over class probability estimates, with higher values (symbolized here by a darker gray) reflecting more generated evidence in favor of the corresponding class. In this framework, the width of the distribution signals the model’s uncertainty (\(\epsilon\)) about its predictions. **b.** Visualization of our metric, \(\xi_{cRNN}\), computed for an example stimulus (see Panel a) used in the task studied in Section 4. The metric, denoted \(\xi_{cRNN}\), is defined as the area under the uncertainty curve, i.e., the evolution of uncertainty (\(\epsilon\)) over time.
visual cortex. Architectural facets of the hGRU include an hypercolumnar organization, distinct excitatory and inhibitory subpopulations, and local- and long-range feedback mechanisms, rendering it overcomplete and an ideal candidate cRNN for our purposes. The hGRU and its extensions have previously demonstrated proficiency in learning long-range spatial dependency challenges, including naturalistic ones [8, 19, 48]. For implementational details, refer to SI SSA.
Control models.To further motivate our cRNN choice, we refer to its ability to systematically generalize to more challenging task settings after being trained on less challenging ones [8]. Inference-time flexibility and performance are critical desiderata, given that our goal is to probe the computational demands of a task via stimulus manipulations. We first demonstrate that the cRNN's performance remained well above chance on exceedingly challenging versions of the first task we consider (see Section 4.1) after training on a simplified version (see SI SSB for full experimental details). We trained pure feedforward convolutional neural networks (CNNs) as control models under identical training
Figure 2: **Human versus cRNN temporal alignment on an incremental grouping task. a.** Description of the task (inspired by cognitive neuroscience studies [47]). **b.** Visualization of the cRNN dynamics. The two lines represent the average latent trajectories across \(1K\) validation stimuli labeled “yes” and “no” respectively. Marker size indicates average uncertainty \(\epsilon\) across the same stimuli. Evident from the graph is that the two trajectories start to diverge after some initial time passes, clearly separating the two classes. Owing to the C-RBP training algorithm and attesting to the dynamics approaching an equilibrium, the step sizes become increasingly small over time. We also include snapshots of the latent activity in the cRNN for two example stimuli (one “yes”, one “no”; see Panel a) along the respective trajectory. Each snapshot represents \(\mathbf{h_{t}}\) at time step \(t\), averaged across the channel dimension and smoothed by averaging \(\mathbf{h_{t}}\) and \(\mathbf{h_{t+1}}\). Notice the spread of activity over time. The cRNN gradually fills the segment containing the dots. The strategy can be appreciated even better in video format (see SI SSD). **c.** Comparison against data from the experiment with human participants in [47]. The position of one dot (white, labeled “Fix”) was kept constant, while the position of the other was manipulated to create experimental conditions of varying difficulty. These manipulations have qualitatively similar effects on the cRNN as they do on human behavior. Error bars represent the standard error of the mean across stimuli. The significance levels for shown contrasts were adjusted downward from.1 (*),.05*,.01**, and.001*** using Bonferroni. The spatial uncertainty map shown on the right visualizes the spatial anisotropy observed in the model for the same example stimulus shown on the left.
data distributions. The CNNs struggled to generalize effectively, as did a naive control RNN model (Fig. S1). Because CNNs lack explicit temporal dynamics and our recurrent control models failed our generalization test, we limit the experimental results presented in this paper to our cRNN. For completeness, we remark that a Vision Transformer [49] passed the generalization test. However, we exclude these models from our analyses for two reasons. First, as such, Vision Transformers lack biological analogs. Second, our focus in this work is to derive RT measures from model dynamics, which transformers do not exhibit.
### Training Framework
General remarks.All model runs reported herein were done from _scratch_ using only the ground truth class labels for supervision. Any alignment between the cRNN model and humans would have had to emerge as a byproduct of task constraints. We emphasize that no human RT data were involved in our training routines. For the cRNNs, we computed a task loss only at the network's end state to preserve the ability to remain flexible during inference. The cRNNs were trained in a data-parallel manner on 4 Nvidia RTX GPUs (Titan/3090/A5000) with 24GB of memory each. Each training run took approximately 48 hours, except those in Section 7 (2 hours), which were faster due to the smaller training dataset size. They were furthermore trained with Adam [50], a learning rate of \(1e-3\), and \(\gamma=100\) for the C-RBP penalty ([8]; also see SI SSA). We use \(T=40\) recurrent timesteps during the training phase, with an exception for the task in Section 6, where \(T=80\) and \(\gamma=1000\).
Stabilizing cRNNs.We train our cRNN model with the contractor recurrent back-propagation algorithm (C-RBP; [8]). C-RBP imposes an attractor-dynamic constraint on the cRNN's computational trajectory, imparting several benefits over the standard BPTT algorithm. C-RBP is significantly more memory efficient and affords arbitrary precision in the cRNN temporal dynamics (i.e., \(T\) can be high). C-RBP also allows the cRNN to learn stable and expressive solutions. We can observe this from the cRNN's latent trajectories, as visualized in Fig. 1(b), where extrapolation in time (beyond \(T\)) is non-catastrophic. We argue that model instability is why BPTT-trained controls failed the generalization test (Fig. S1). Furthermore, stable solutions imparted by C-RBP allow our cRNNs to dynamically adapt their number of processing steps in a stimulus-dependent manner. That is, we pose that a C-RBP-trained cRNN can utilize its computational budget differently depending on the input (see Fig. S2 for an illustration). Finally, we note that our strategy to achieve stable recurrent solutions is in contrast to that employed by Bansal et al. [40], wherein cRNNs were stabilized through architectural choices along with an incremental training process.
Evidential loss.We augment our cRNN training objective with an evidential deep learning loss (EDL; [30]). In contrast to the classic cross-entropy loss, EDL considers cRNN outputs to be the parameters of a belief distribution rather than point estimates of class probabilities (Fig. 0(a)). We formalize cRNN beliefs as a Dirichlet distribution. Strong evidence in favor of one class causes the Dirichlet distribution to be peaked, reflecting low uncertainty (\(\epsilon\)). In contrast, when there is a lack of overall evidence, the Dirichlet distribution becomes more uniform, thereby increasing \(\epsilon\). For a complete mathematical specification of this loss formulation, refer to SI SSA.
### RT Metric (\(\xi_{cRNN}\))
We derive a novel metric to quantify the differential computational demands imposed on the cRNN by stimulus manipulations and capture the essence of model decision time courses for alignment. Modeling uncertainty (\(\epsilon\)) explicitly allows us to track its evolution over time from the cRNN (Fig. 0(b)). We formulate our RT metric (\(\xi_{cRNN}\)) as the area under this uncertainty curve. Precisely, \(\xi_{cRNN}=\int_{0}^{T}\epsilon(t)\ dt\) (Fig. 0(b)). Intuitively, high (low) \(\xi_{cRNN}\) can be understood as the cRNN spending a large (small) amount of time in an uncertain regime. Our metric affords direct comparisons to human RTs. In the following sections, we apply this metric to cRNNs trained on a host of visual cognitive tasks and study their alignment to human reaction time data.
## 4 Incremental Grouping
### Task, Stimuli, and Visualizations
Inspired by an incremental perceptual grouping task in humans [47], we propose an equivalent challenge for the cRNN where the model is required to determine whether two dots are on the same object (Fig. 2a). The stimuli consist of white outlines of natural shapes on a black background (\(150\times 150\) px). We created \(340K\) (\(15K\)) training (validation) outline stimuli from MS COCO images [51]. The dots were placed semi-randomly such that there was no difference in the distribution of dot distances between the two stimulus classes.
We additionally introduce two novel types of visualizations to probe cRNN visual strategies. First, we track cRNN dynamics by looking at its latent activity \(\mathbf{h_{t}}\) at every time step \(t\), averaged across the channel dimension and smoothed by averaging \(\mathbf{h_{t}}\) and \(\mathbf{h_{t+1}}\). To facilitate interpretation, we provide these videos for the reader (see SI SSD). Second, we compute a "spatial uncertainty" (SU) map for each outline stimulus by keeping one dot fixed and varying the position of the other. Each spatial coordinate has a value equal to \(\xi_{cRNN}\) for that corresponding dot configuration. Examples are shown in Fig. 2c (right) and Fig. 3.
### Results
Despite its lightweight architecture, our cRNN solved the incremental grouping task, achieving \(98\%\) validation accuracy. Visualizing the cRNN latent activities revealed that the model learned a cognitively-viable filling-in strategy. Network activity originated at each of the dot locations and gradually spread, eventually filling up the whole object(s) (Fig. 2b). We note that this behavior is an emergent property from purely the classification task constraint instead of direct supervision for any form of segmentation. We then sought to ask if \(\xi_{cRNN}\) effectively captures these visual decision-making dynamics. In the following sections, we report results detailing the relationship between \(\xi_{cRNN}\) and a host of complex, topological stimulus properties.
\(\boldsymbol{\xi_{cRNN}}\) captures the serial spread of attention in our models.Human visual strategies in solving the incremental grouping task are thought to involve a serial spread of attentional resources, resulting in longer RTs for higher distances [47]. To test this effect in our model, we took a subset of our validation outlines and repositioned the dots to be in the center of an object, \(14\) px apart. We then systematically kept moving the dots \(14\) px more apart until reaching the object boundary. A linear mixed-effects regression analysis revealed a significant positive effect of this manipulation on \(\xi_{cRNN}\), \(b=1.46,SE=0.11,z=13.58,p<.001\) (Fig. S3, Fig. S4). The SU maps (Fig. 2c right, Fig. 3) exhibit this distance effect too. Placing the second dot further away from the fixed dot (indicated in white) yielded higher \(\xi_{cRNN}\) values.
\(\boldsymbol{\xi_{cRNN}}\) recapitulates the spatial anisotropy in human RT data.Euclidean distance only paints a partial picture. Human RTs are affected by several other factors, such as topological distance, the proximity of the dots to boundaries, and the need to spread through narrow object regions, all amounting to a well-documented spatial anisotropy in the spread of visual attention [47, 52]. While anisotropy is evident in the SU maps from our cRNN (Fig. 2c right, Fig. 3), we also formally assess the alignment between humans and the cRNN in this regard, using human RT data from [47].
Figure 3: \(\boldsymbol{\xi_{cRNN}}\) **is spatially anisotropic.** We present spatial uncertainty (SU) maps to visualize the impact a given fixed dot position will have on model RTs when the position of the second dot is varied. Higher (lower) values of \(\xi_{cRNN}\) represent slower (faster) model responses.
First, Jeurissen et al. [47] quantified their stimuli on an alternative dot distance measure that went beyond Euclidean distance to reflect the other aforementioned factors and was more predictive of human RT (see SI SSF). We found this measure to be a better predictor of \(\xi_{cRNN}\) (\(r=.80\)) as well (versus \(r=.45\) for Euclidean distance), \(t(63)=4.68,p<.001\). Second, the stimulus set featured distinct experimental conditions, varying in expected difficulty based on the position of the dots (Fig. 2c left). As shown in Fig. 2c (middle), the effects of this manipulation on \(\xi_{cRNN}\) were qualitatively aligned with those on human RT. A one-way repeated measures ANOVA across 22 outline stimuli revealed a significant effect of "Condition" on \(\xi_{cRNN}\), \(F(3,63)=67.65,p<.001\), as well as on the average RTs from [47], \(F(3,63)=16.95,p<.001\). Furthermore, Fig. 2c (middle) presents the results of planned contrasts (paired t-tests). \(\xi_{cRNN}\) was significantly higher for B compared to A, \(Cohen^{\prime}s\ d=2.00,t(21)=9.39,p<.001\), but also for the more difficult Condition C compared to B, \(d=0.73,t(21)=3.42,p=.003\). Third, we observed a significant overall correlation between human RT and \(\xi_{cRNN}\) on Yes-stimuli, \(r=.31,p=.01\). Fourth, on a novel test set of our own, paired t-tests showed that \(\xi_{cRNN}\) was significantly higher when two dots on the same object were placed in narrow versus wide object regions, \(d=0.70,t(22)=3.34,p=.003\), and when the path was curved (i.e., longer topological distance) versus when it was not, \(d=0.98,t(22)=4.71,p<.001\) (see SI S&G, Fig. S5). Finally, SI S&G also describes an experiment in which we varied the distance between one dot and the object boundary while keeping the distance between the dots themselves constant. We found that our model exhibits higher values near the boundaries, which is strikingly similar to human behavioral results presented in [47].
Ambiguous stimuli.As a final way of gaining insight into the cRNN's task behavior, we looked at those stimuli in the validation set that elicited the highest \(\xi_{cRNN}\) values. We observed that some were characterized by uncertainty curves that remain high, even at \(t=T\), suggesting the model deems them ambiguous. In a test set we designed to be ambiguous (it features occlusions), we frequently observed such non-decreasing uncertainty curves too (see SI S&H). We note that while our training procedure promotes stable attractor states in the model's _latent_ dynamics, this is disparate from readout dynamics allowing the framework to truly explore the spectrum of computation times and uncertainties.
Taken together, these results suggest that our metric is able to capture the dynamical signatures of object-based attention in our cRNNs and provides a means to quantify alignment with human visual strategies.
## 5 Visual Simulation - _Planko_
### Task and Stimuli
The Planko task (and corresponding dataset) was introduced in [53] and features stimuli (\(64\times 64\) px; \(80K\) training, \(5K\) validation) displaying a ball about to fall down and eventually land in one of two baskets ("left"/"right"). The outcome depends on the position and orientation of a set of planks placed in between the ball and the baskets (Fig. 4a). Humans are thought to solve this task through mental
Figure 4: \(\boldsymbol{\xi_{cRNN}}\) **from our model recapitulates human RTs on the Planko task.****a.** The stimuli consist of a ball on top, followed by ten randomly placed planks and finally two baskets in the bottom. Participants are shown these static stimuli and are asked to guess if the ball will end up in the left or the right basket when released and allowed to bounce around on the planks. **b.** Human RTs (collected in [53]) are predicted by \(\xi_{cRNN}\). The five colored symbols reflect the data points obtained from the stimuli shown in the next panel. **c.** Five example stimuli of increasing \(\xi_{cRNN}\).
visual simulation [53]. Notably, feedforward CNNs struggle to solve the task [54] and when they do, they appear to adopt non-aligned strategies to humans [53]. In contrast, cRNNs show evidence of simulation-like behavior on a variant of the task [54]. With our metric, we are now able to directly compare human (collected by [53]) and cRNN RTs. To this end, we trained a cRNN from scratch on Planko (see Section 3.2 and SI SSA for details on training).
### Results
The stimuli displayed in Fig. 4 illustrate the challenge of the Planko task. Our cRNN achieves a performance of \(84\%\) on the validation set, well within the confidence interval of human accuracy [53]. Stimuli presented to human participants in the original study (\(N=200\)) were used here as a held-out set. We observed a significant correlation between model \(\xi_{cRNN}\) values and human RTs, \(r=.31,p<.001\)) (Fig. 4b). This result indicates that stimuli that were harder for humans tended to be computationally challenging for the cRNN (see examples in Fig. 4c).
_What makes a stimulus challenging?_ Variance in human RTs is known to be partially explained by the statistics of the planks. For instance, stimuli for which the outcome is more likely to change as a consequence of perturbing the plank configurations by a small amount were deemed "uncertain". Indeed, participants responded slower to these stimuli [53]. We were able to replicate this result in our cRNN, \(r=.31,p<.001\). These results showcase how our methodology can be used to make stimulus-dependent RT predictions. This could help shed light on, for example, the remarkable yet under-examined ability of humans to flexibly adopt different strategies in a Planko task depending on the stimulus demand.
## 6 Mazes
### Task and Stimuli
We devised a maze task where the goal is to tell whether two cued locations are connected by a path (Fig. 5a). To build our training (\(N=34.5K\)) and validation (\(N=14K\)) set, we started from mazes provided by [40] but modified them to make them suitable for classification (the originals did not feature any disconnected paths). The cues were placed such that the two classes ("yes"/"no") did not differ in Euclidean cue distance. Each maze was \(144\times 144\) px.
### Results
Here too, we observed that the uncertainty curves and derived \(\xi_{cRNN}\) values differed considerably depending on the input, even if nearly all mazes were classified correctly at \(t=T\) (\(99\%\) correct
Figure 5: **cRNN learns filling-in strategy to solve mazes.****a.** Task description. **b.** The cRNN model takes more time to solve mazes with longer path lengths, an effect previously found in human participants too. **c.** Latent activity visualizations for two yes-mazes. The cRNN gradually fills the segment containing the cues. The strategy can be appreciated even better in the videos supplied in the SI. **d.** Uncertainty curves for the two inputs shown in Panel c. The cRNN remains uncertain for much longer for the maze featuring the longest path. The uncertainty evolution is visualized dynamically in conjunction with the change in latent activity in the supplementary videos.
on validation mazes). For the Yes-mazes contained in a random subset of the validation set, we found that the model spent more time being uncertain (higher \(\xi_{cRNN}\)) when the length of the path between the two cued locations was longer, \(r=.84,p<.001\) (Fig. 5b,d). Moreover, path length was a significantly better predictor than simple Euclidean distance (\(r=.39\)), \(t(234)=5.81,p<.001\). Similarly, human RTs on maze tasks have been shown to increase with path length [55]. When the cued locations were on two distinct maze segments (No-mazes), \(\xi_{cRNN}\) was significantly predicted by the mean of both segment lengths, \(r=.56,p<.001\). The kind of visual strategy that produced such distance effects can be understood by inspecting the visualizations of the latent activities (Fig. 5c). The model gradually spread activity along the maze segment(s) containing the cues. We strongly encourage the reader to watch our videos (see SI SI) to better appreciate this spread in conjunction with the evolution of uncertainty over time. Finally, we did not find evidence for an effect on \(\xi_{cRNN}\) of the number of T-junctions on the path, which are known to predict human RT [56]. We speculate that this potential misalignment, worth investigating in future work, could be due to the model not being restricted in how many possible paths it can explore at once.
## 7 Scene Categorization
### Task and Stimuli
To showcase how our general approach can also be applied to study higher-level visual tasks, we consider a natural versus manmade scene categorization task as our final challenge (Fig. 6). Human RT data from a rapid-scene categorization version of this task were already available from [57]. We selected images from the SUN database [58] querying the same classes as in [57], but excluding the images shown to human participants, and performed a stratified train (\(N=8.8K\)) and validation (\(N=979\)) split. The image preprocessing pipeline entailed cropping the image to be square, resizing to \(150\times 150\) px, converting to grayscale, and random horizontal flipping. In addition, the images were matched for overall brightness.
### Results
The model was able to correctly classify \(83\%\) of the validation images and \(73\%\) of the held-out test images from [57]. The errors can be understood, at least partly, in light of the less-than-clear-cut labels (e.g., an islet with a house on top, a building behind many trees). Each of the test images had previously been assigned a discriminability score [57], based on the average error rate of logistic classifiers trained on "gist" features [59]. Human participants were reported to be slower with decreasing discriminability [57]. Here, we show a similar trend in the model (Fig. 6b), with a linear regression analysis revealing a significant, negative discriminability slope, \(b=-0.47,SE=0.03,t=-16.45,p<.001\). Directly comparing \(\xi_{cRNN}\) to human RT, we found them to be significantly correlated, \(r=.19,p<.001\). While our cRNN model of choice here does not perfectly account for empirical patterns in human reaction times, specifically in their non-monotonic reaction time readouts for the low-discriminability stimuli, they provide a promising
Figure 6: \(\boldsymbol{\xi_{cRNN}}\) **is predictive of the discriminability of a visual scene.****a.** Task description. **b.** Human RTs (collected in [57]) decrease as a function of discriminability, as do \(\xi_{cRNN}\). Highly discriminable scenes are easier for both human participants and the cRNN model. Error bars represent the standard error of the mean.
start. Our framework is model-agnostic (e.g., see SI SSJ for results on a convLSTM) and thus can be used to test hypotheses on architectural improvements.
## 8 Discussion
We are in the midst of a paradigm shift in visual neuroscience. The continued development of computational models that rival human behavioral performance has called for tools to quantify how well the internal mechanisms of such models compare to biological truths. These efforts have demonstrably been beneficial for both the AI and neuroscience ecosystems.
In this work, we push this frontier by introducing a framework to characterize the evidence accumulation strategies of stable recurrent vision models. Our experiments thus far show the potential to generate novel hypotheses of interest to visual neuroscientists. The most direct application of our framework for experimental design is in taxonomizing stimuli in terms of the putative computational demand based on our metric toward understanding neural processes at varying timescales. Furthermore, the value proposition of this framework includes the ability to adjudicate between models based on the consistency of their latent dynamics with human reaction time data. For example, our metric from a cRNN trained on the maze task is uncorrelated with the number of T-junctions in the stimulus, indicating a potential divergence between serial and parallel exploration strategies (SI SSI). It is worth noting that distinguishing between serial and parallel attention models has been of long-standing interest [60]. Similarly, testing a zoo of hierarchical models on the scene categorization task while keeping an eye on the degree of alignment with our metric can potentially lend novel mechanistic insights and help delineate bottom-up, lateral, and top-down processes [61]. We are also enthusiastic about drawing theoretical parallels between our metric summarizing the cRNN's evidence accumulation strategy and the complexity of natural data statistics toward understanding the sample efficiency of learning in models and primates.
Limitations and an outlook for the future.Within the scope of this work, we have considered four disparate visual cognition tasks to showcase how our approach offers the means to study the temporal alignment between recurrent vision models and the human visual system. Of course, in the face of the myriad of tasks that cognitive scientists have used over centuries, our selection still needs to be improved. Our results do not directly speak to the diversity in response types, stimulus types, or naturalism, to name a few. However, our approach is general-purpose enough to adapt to newer tasks as long as they are cast as a classification problem. We believe this should cover a significant share of cognitive science and AI tasks. Extending the approach beyond choice tasks would require a considerable revision of how to make the model's uncertainty explicit. Another limitation is that there could easily be other cRNNs that align with human RTs better. While we motivate the choice for the cRNN in Section 3.1, our main contribution here is to introduce the overall framework integrating stable recurrent vision models with evidential learning theory to support the derivation of a novel RT metric. A comprehensive comparison of different models, apart from those in SI SSB, was hence beyond this work's scope. However, we are confident our framework paves the way to build a taxonomy of models in the future as well as generate testable hypotheses for neuroscience.
Broader impacts.With AI vision models becoming ubiquitous in our daily lives, the call to make them fairer, transparent, and robust is growing louder. Our framework facilitates this by allowing researchers to peek into the inner working of these AI models for transparency and thinking about their match to biological systems, which are arguably more robust. From a neuroscience perspective, computational models of RT are gaining prominence in computational psychiatry and hold the key to understanding many neurobiological disorders [62, 63]. We generally do not anticipate any negative impact on society in either scenario.
## Acknowledgments
The authors would like to explicitly thank Jeurissen et al. [47] and Sofer et al. [57] for generously sharing their stimuli and human reaction time data, and Bansal et al. [40] for their maze stimuli. We are also grateful to Drew Linsley for his helpful comments and feedback. This work was supported by ONR (N00014-19-1-2029), and NSF (IIS-1912280). Additional support provided by the CarneyInstitute for Brain Science and the Center for Computation and Visualization (CCV). We acknowledge the computing hardware resources supported by NIH Office of the Director grant S10OD025181.
## References
* [1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. _Communications of the ACM_, 60(6):84-90, 2017.
* [2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1-9, 2015.
* [3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In _International Conference on Learning Representations_, 2015.
* [4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In _Proceedings of the IEEE international conference on computer vision_, pages 1026-1034, 2015.
* [6] Grace W Lindsay and Kenneth D Miller. How biological attention mechanisms improve task performance in a large-scale visual system model. _ELife_, 7:e38105, 2018.
* [7] Thomas Fel, Ivan F Rodriguez, Drew Linsley, and Thomas Serre. Harmonizing the object recognition strategies of deep neural networks with humans. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, _Advances in Neural Information Processing Systems_, volume 35, pages 9432-9446. Curran Associates, Inc., 2022. URL [https://proceedings.neurips.cc/paper_files/paper/2022/file/3d681cc4487b97c08e5aa67224dd74f2-Paper-Conference.pdf](https://proceedings.neurips.cc/paper_files/paper/2022/file/3d681cc4487b97c08e5aa67224dd74f2-Paper-Conference.pdf).
* [8] Drew Linsley, Alekh Karkada Ashok, Lakshmi Narasimhan Govindarajan, Rex Liu, and Thomas Serre. Stable and expressive recurrent vision models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 10456-10467. Curran Associates, Inc., 2020. URL [https://proceedings.neurips.cc/paper_files/paper/2020/file/766d856ef1a6b02f93d894415e6bfa0e-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2020/file/766d856ef1a6b02f93d894415e6bfa0e-Paper.pdf).
* [9] Aran Nayebi and Surya Ganguli. Biologically inspired protection of deep networks from adversarial attacks. _arXiv preprint arXiv:1703.09202_, 2017.
* [10] Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David Cox, and James J DiCarlo. Simulating a primary visual cortex at the front of cnns improves robustness to image perturbations. _Advances in Neural Information Processing Systems_, 33:13073-13087, 2020.
* [11] Joel Dapello, Kohitij Kar, Martin Schrimpf, Robert Baldwin Geary, Michael Ferguson, David Daniel Cox, and James J. DiCarlo. Aligning model and macaque inferior temporal cortex representations improves model-to-human behavioral alignment and adversarial robustness. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=SM?dcXjhIq](https://openreview.net/forum?id=SM?dcXjhIq).
* [12] Hidenori Tanaka, Aran Nayebi, Niru Maheswaranathan, Lane McIntosh, Stephen Baccus, and Surya Ganguli. From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction. _Advances in neural information processing systems_, 32, 2019.
* [13] Ben Sorscher, Surya Ganguli, and Haim Sompolinsky. Neural representational geometry underlies few-shot concept learning. _Proceedings of the National Academy of Sciences_, 119(43):e2200800119, 2022.
* [14] David Marr. _Vision: A computational investigation into the human representation and processing of visual information_. MIT press, 2010.
* [15] Daniel L Yamins, Ha Hong, Charles Cadieu, and James J DiCarlo. Hierarchical modular optimization of convolutional networks achieves representations similar to macaque it and human ventral stream. _Advances in neural information processing systems_, 26, 2013.
* [16] Martin Schrimpf, Jonas Kubilius, Michael J Lee, N Apurva Ratan Murty, Robert Ajemian, and James J DiCarlo. Integrative benchmarking to advance neurally mechanistic models of human intelligence. _Neuron_, 108(3):413-423, 2020.
* Khaligh-Razavi and Kriegeskorte [2014] Seyed-Mahdi Khaligh-Razavi and Nikolaus Kriegeskorte. Deep supervised, but not unsupervised, models may explain it cortical representation. _PLoS computational biology_, 10(11):e1003915, 2014.
* Sundararajan et al. [2017] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In _International conference on machine learning_, pages 3319-3328. PMLR, 2017.
* Linsley et al. [2018] Drew Linsley, Junkyung Kim, Vijay Veerabadran, Charles Windolf, and Thomas Serre. Learning long-range spatial dependencies with horizontal gated recurrent units. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31. Curran Associates, Inc., 2018. URL [https://proceedings.neurips.cc/paper_files/paper/2018/file/ec8956637a99787bd197eacd77acc5e5e-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2018/file/ec8956637a99787bd197eacd77acc5e5e-Paper.pdf).
* Rajalingham et al. [2018] Rishi Rajalingham, Elias B Issa, Pouya Bashivan, Kohitij Kar, Kailyn Schmidt, and James J DiCarlo. Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks. _Journal of Neuroscience_, 38(33):7255-7269, 2018.
* Geirhos et al. [2021] Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Partial success in closing the gap between human and machine vision. _Advances in Neural Information Processing Systems_, 34:23885-23899, 2021.
* Peterson et al. [2018] Joshua C Peterson, Joshua T Abbott, and Thomas L Griffiths. Evaluating (and improving) the correspondence between deep neural networks and human representations. _Cognitive science_, 42(8):2648-2669, 2018.
* Bastos et al. [2012] Andre M Bastos, W Martin Usrey, Rick A Adams, George R Mangun, Pascal Fries, and Karl J Friston. Canonical microcircuits for predictive coding. _Neuron_, 76(4):695-711, 2012.
* Cauchoix et al. [2014] Maxime Cauchoix, Gladys Barragan-Jason, Thomas Serre, and Emmanuel J Barbeau. The neural dynamics of face detection in the wild revealed by mvpa. _Journal of Neuroscience_, 34(3):846-854, 2014.
* Bastos et al. [2015] Andre Moraes Bastos, Julien Vezoli, Conrado Arturo Bosman, Jan-Mathijs Schoffelen, Robert Oostenveld, Jarrod Robert Dowdall, Peter De Weerd, Henry Kennedy, and Pascal Fries. Visual areas exert feedforward and feedback influences through distinct frequency channels. _Neuron_, 85(2):390-401, 2015.
* Spoerer et al. [2020] Courtney J Spoerer, Tim C Kietzmann, Johannes Mehrer, Ian Charest, and Nikolaus Kriegeskorte. Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision. _PLoS computational biology_, 16(10):e1008215, 2020.
* Kubilius et al. [2018] Jonas Kubilius, Martin Schrimpf, Aran Nayebi, Daniel Bear, Daniel LK Yamins, and James J DiCarlo. Cornet: Modeling the neural mechanisms of core object recognition. _BioRxiv_, page 408385, 2018.
* Kar et al. [2019] Kohitij Kar, Jonas Kubilius, Kailyn Schmidt, Elias B. Issa, and James J. DiCarlo. Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior. _Nature Neuroscience_, 22(6):974-983, April 2019. doi: 10.1038/s41593-019-0392-5. URL [https://doi.org/10.1038/s41593-019-0392-5](https://doi.org/10.1038/s41593-019-0392-5).
* Yager and Liu [2008] Ronald R Yager and Liping Liu. _Classic works of the Dempster-Shafer theory of belief functions_, volume 219. Springer, 2008.
* Sensoy et al. [2018] Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31. Curran Associates, Inc., 2018.
* Heitz [2014] Richard P Heitz. The speed-accuracy tradeoff: history, physiology, methodology, and behavior. _Frontiers in neuroscience_, 8:150, 2014.
* Ratcliff and McKoon [2008] Roger Ratcliff and Gail McKoon. The diffusion decision model: theory and data for two-choice decision tasks. _Neural computation_, 20(4):873-922, 2008.
* Fengler et al. [2021] Alexander Fengler, Lakshmi N Govindarajan, Tony Chen, and Michael J Frank. Likelihood approximation networks (lans) for fast inference of simulation models in cognitive neuroscience. _Elife_, 10:e65074, 2021.
* Cavanagh and Frank [2014] James F Cavanagh and Michael J Frank. Frontal theta as a mechanism for cognitive control. _Trends in cognitive sciences_, 18(8):414-421, 2014.
* Duinkharjav et al. [2022] Budmonde Duinkharjav, Praneeth Chakravarthula, Rachel Brown, Anjul Patney, and Qi Sun. Image features influence reaction time. _ACM Transactions on Graphics_, 41(4):1-15, July 2022. doi: 10.1145/3528223.3530055. URL [https://doi.org/10.1145/3528223.3530055](https://doi.org/10.1145/3528223.3530055).
* Mirzaei et al. [2013] Amin Mirzaei, Seyed-Mahdi Khaligh-Razavi, Masoud Ghodrati, Sajjad Zabbah, and Reza Ebrahimpour. Predicting the human reaction time based on natural image statistics in a rapid categorization task. _Vision Research_, 81:36-44, 2013. ISSN 0042-689. doi: [https://doi.org/10.1016/j.visres.2013.02.003](https://doi.org/10.1016/j.visres.2013.02.003). URL [https://www.sciencedirect.com/science/article/pii/S0042698913000230](https://www.sciencedirect.com/science/article/pii/S0042698913000230).
* Graves [2016] Alex Graves. Adaptive computation time for recurrent neural networks. _arXiv preprint arXiv:1603.08983_, 2016.
* Kumbhar et al. [2020] Omkar Kumbhar, Elena Sizikova, Najib J. Majaj, and Denis G. Pelli. Anytime prediction as a model of human reaction time. _CoRR_, abs/2011.12859, 2020. URL [https://arxiv.org/abs/2011.12859](https://arxiv.org/abs/2011.12859).
* Pascanu et al. [2017] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racaniere, David P. Reichert, Theophane Weber, Daan Wierstra, and Peter W. Battaglia. Learning model-based planning from scratch. _CoRR_, abs/1707.06170, 2017. URL [http://arxiv.org/abs/1707.06170](http://arxiv.org/abs/1707.06170).
* Bansal et al. [2022] Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, and Tom Goldstein. End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022. URL [https://openreview.net/forum?id=PPjSKy40XUB](https://openreview.net/forum?id=PPjSKy40XUB).
* Jensen et al. [2023] Kristopher T. Jensen, Guillaume Hennequin, and Marcelo G. Mattar. A recurrent network model of planning explains hippocampal replay and human behavior. _bioRxiv_, 2023. doi: 10.1101/2023.01.16.523429. URL [https://www.biorxiv.org/content/early/2023/01/19/2023.01.16.523429](https://www.biorxiv.org/content/early/2023/01/19/2023.01.16.523429).
* Gold and Shadlen [2007] Joshua I Gold and Michael N Shadlen. The neural basis of decision making. _Annu. Rev. Neurosci._, 30:535-574, 2007.
* Figurnov et al. [2017] Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. Spatially adaptive computation time for residual networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1039-1048, 2017.
* Bengio et al. [2015] Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. _arXiv preprint arXiv:1511.06297_, 2015.
* Denoyer and Gallinari [2014] Ludovic Denoyer and Patrick Gallinari. Deep sequential neural network. _arXiv preprint arXiv:1410.0510_, 2014.
* Ulmer et al. [2023] Dennis Thomas Ulmer, Christian Hardmeier, and Jes Frellsen. Prior and posterior networks: A survey on evidential deep learning methods for uncertainty estimation. _Transactions on Machine Learning Research_, 2023.
* Jeurissen et al. [2016] Danique Jeurissen, Matthew W Self, and Pieter R Roelfsema. Serial grouping of 2d-image regions with object-based attention in humans. _eLife_, 5:e14320, 2016. ISSN 2050-084X. doi: 10.7554/eLife.14320.
* Linsley et al. [2021] Drew Linsley, Girik Malik, Junkyung Kim, Lakshmi Narasimhan Govindarajan, Ennio Mingolla, and Thomas Serre. Tracking without re-recognition in humans and machines. In _Proceedings of the 35th International Conference on Neural Information Processing Systems_, 2021.
* Dosovitskiy et al. [2021] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _International Conference on Learning Representations_, 2021. URL [https://openreview.net/forum?id=YicbFdMTTy](https://openreview.net/forum?id=YicbFdMTTy).
* Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, 2015. URL [http://arxiv.org/abs/1412.6980](http://arxiv.org/abs/1412.6980).
* Lin et al. [2014] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, _The European Conference on Computer Vision (ECCV) 2014_, pages 740-755. Springer International Publishing, 2014. ISBN 978-3-319-10602-1.
* Pooresmaeili and Roelfsema [2014] Arezoo Pooresmaeili and Pieter R. Roelfsema. A growth-cone model for the spread of object-based attention during contour grouping. _Current Biology_, 24(24):2869-2877, 2014. doi: 10.1016/j.cub.2014.10.007. URL [https://doi.org/10.1016/j.cub.2014.10.007](https://doi.org/10.1016/j.cub.2014.10.007).
* Ahuja and Sheinberg [2019] Aarit Ahuja and David L. Sheinberg. Behavioral and oculomotor evidence for visual simulation of object movement. _Journal of Vision_, 19(6):13, June 2019. doi: 10.1167/19.6.13. URL [https://doi.org/10.1167/19.6.13](https://doi.org/10.1167/19.6.13).
* Ashok et al. [2022] Alekh Karkada Ashok, Lakshmi Narasimhan Govindarajan, Drew Linsley, David Sheinberg, and Thomas Serre. The emergence of visual simulation in task-optimized recurrent neural networks. In _SVRHM 2022 Workshop @ NeurIPS_, 2022. URL [https://openreview.net/forum?id=qYLP6nN0Y7m-](https://openreview.net/forum?id=qYLP6nN0Y7m-).
* Crowe et al. [2000] David A. Crowe, Bruno B. Averbeck, Matthew V. Chafee, John H. Anderson, and Apostolos P. Georgopoulos. Mental maze solving. _Journal of Cognitive Neuroscience_, 12(5):813-827, September 2000. doi: 10.1162/089892900562426. URL [https://doi.org/10.1162/089892900562426](https://doi.org/10.1162/089892900562426).
* Kadner et al. [2023] Florian Kadner, Hannah Willkomm, Inga Ibs, and Constantin A. Rothkopf. Finding your way out: Planning strategies in human maze-solving behavior. February 2023. doi: 10.31234/osf.io/nme9j. URL [https://doi.org/10.31234/osf.io/nme9j](https://doi.org/10.31234/osf.io/nme9j).
* Sofer et al. [2015] Imri Sofer, Sebastien M. Crouzet, and Thomas Serre. Explaining the timing of natural scene understanding with a computational model of perceptual categorization. _PLOS Computational Biology_, 11(9):e1004456, September 2015. doi: 10.1371/journal.pcbi.1004456. URL [https://doi.org/10.1371/journal.pcbi.1004456](https://doi.org/10.1371/journal.pcbi.1004456).
* Xiao et al. [2010] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, pages 3485-3492, 2010. doi: 10.1109/CVPR.2010.5539970.
* Oliva and Torralba [2001] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. _International Journal of Computer Vision_, 42(3):145-175, 2001. doi: 10.1023/a:1011139631724. URL [https://doi.org/10.1023/a:1011139631724](https://doi.org/10.1023/a:1011139631724).
* Moran et al. [2016] Rani Moran, Michael Zehetleitner, Heinrich Rene Liesefeld, Hermann J Muller, and Marius Usher. Serial vs. parallel models of attention in visual search: accounting for benchmark rt-distributions. _Psychonomic bulletin & review_, 23(5):1300-1315, 2016.
* Serre et al. [2007] Thomas Serre, Aude Oliva, and Tomaso Poggio. A feedforward architecture accounts for rapid categorization. _Proceedings of the national academy of sciences_, 104(15):6424-6429, 2007.
* Huys et al. [2016] Quentin JM Huys, Tiago V Maia, and Michael J Frank. Computational psychiatry as a bridge from neuroscience to clinical applications. _Nature neuroscience_, 19(3):404-413, 2016.
* Hitchcock et al. [2022] Peter F Hitchcock, Eiko I Fried, and Michael J Frank. Computational psychiatry needs time and context. _Annual review of psychology_, 73:243-270, 2022.
* Cooijmans et al. [2017] Tim Cooijmans, Nicolas Ballas, Cesar Laurent, Caglar Gulcehre, and Aaron Courville. Recurrent batch normalization. In _International Conference on Learning Representations_, 2017.
* Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger, editors, _Advances in Neural Information Processing Systems_, volume 25. Curran Associates, Inc., 2012. URL [https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf).
* Yang [2020] Siqi Yang. Pytorch-alexnet. [https://github.com/YOUSIKI/PyTorch-AlexNet](https://github.com/YOUSIKI/PyTorch-AlexNet), 2020.
* Robbins and Monro [1951] H. Robbins and S. Monro. A stochastic approximation method. _Annals of Mathematical Statistics_, 22:400-407, 1951.
* Simonyan and Zisserman [2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, 2015. URL [http://arxiv.org/abs/1409.1556](http://arxiv.org/abs/1409.1556).
* Falbel [2023] Daniel Falbel. _torchvision: Models, Datasets and Transformations for Images_, 2023. [https://torchvision.mlverse.org](https://torchvision.mlverse.org), [https://github.com/mlverse/torchvision](https://github.com/mlverse/torchvision).
* Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. _Neural computation_, 9(8):1735-1780, 1997.
* SHI et al. [2015] Xingjian SHI, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun WOO. Convolutional lstm network: A machine learning approach for precipitation nowcasting. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 28. Curran Associates, Inc., 2015. URL [https://proceedings.neurips.cc/paper_files/paper/2015/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2015/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf).
**Supplementary Information**
## Appendix A Implementation Details
### Model Implementation
We instantiate our cRNN as a canonical recurrent convolutional neural network layer inspired by the primate visual cortex's anatomical and neurophysiological properties. This model, termed hGRU and first introduced in Linsley et al. [19], implements an hypercolumnar organization with distinct excitatory and inhibitory subpopulations and local- and long-range feedback mechanisms. For the sake of completeness, we outline the governing equations of the model below.
First compute suppressive interactions:
\[\mathbf{G}^{S} = sigmoid(U^{S}*\mathbf{H}[t-1]) \text{\# Compute suppression gate as function of current state}\] \[\mathbf{C}^{S} = BN(W^{S}*(\mathbf{H}[t-1]\odot\mathbf{G}^{S})) \text{\# Applying the gate}\] \[\mathbf{S} = \left[\mathbf{Z}-\left[\left(\alpha\mathbf{H}[t-1]+\mu\right) \mathbf{C}^{S}\right]_{+}\right]_{+}, \text{\# Additive and multiplicative suppression of }\mathbf{Z}\]
Then compute the facilitation terms followed by the recurrent update:
\[\mathbf{G}^{F} = sigmoid(U^{F}*\mathbf{S}) \text{\# Facilitation gate}\] \[\mathbf{C}^{F} = BN(W^{F}*\mathbf{S}) \text{\# Facilitation term}\] \[\tilde{\mathbf{H}} = \left[\nu(\mathbf{C}^{F}+\mathbf{S})+\omega(\mathbf{C}^{F}* \mathbf{S})\right]_{+} \text{\# Modulation of }\mathbf{S}\text{ through additive and multiplicative facilitation}\] \[\mathbf{H}[t] = (1-\mathbf{G}^{F})\odot\mathbf{H}[t-1]+\mathbf{G}^{F}\odot\tilde{ \mathbf{H}} \text{\# State update}\]
In these equations, \(\mathbf{H},\mathbf{Z}\in\mathcal{R}^{X\times Y\times C}\) represent the cRNN's latent state and bottom-up feedforward drive, respectively, with height/width/channels \(X,Y,C\). \(W^{S},W^{F}\in\mathbb{R}^{E\times E\times C\times C}\) are learnable kernels controlling the suppressive and facilitatory interactions and \(E\) is the spatial width of the lateral interactions. For all our experiments, we set \(E=5\), except for Planko where \(E=3\). Similarly, \(U^{S},U^{F}\in\mathbb{R}^{1\times 1\times C\times C}\) are learnable kernels controlling the gates. Model dynamics are discretized, with \(t\in[1...T]\) representing the timesteps of processing. We use _softplus_ as our rectifying nonlinearity (denoted by \([\cdot]_{+}\)). We also use Batch Normalization (_BN_) in the model to control exploding/vanishing gradients [64]. _BN_ kernels are shared across timesteps of processing.
### Training Implementation Details
Leveraging equilibrium dynamics for training.Training cRNNs has traditionally posed a problem due to the severe memory bottlenecks imposed by the backpropagation through time (BPTT) algorithm. To remedy this, Linsley et al. [8] introduced a variant of the classic recurrent backpropagation algorithm (RBP) that leveraged equilibrium dynamics to compute model gradients with \(O(1)\) memory complexity. We outline the high-level idea here and refer the reader to [8] for more details.
Our discretized cRNN dynamics can be written as follows,
\[\mathbf{H}[t]=f(\mathbf{H}[t-1],\mathbf{X};\mathbf{W}),\]
where \(f(.)\) is the model architecture described, \(\mathbf{H}[t]\) is the cRNN's state at timestep \(t\), and \(\mathbf{X}\) is the bottom-up sensory input. For convenience, we use \(\tilde{\mathbf{W}}\) to denote all the trainable parameters of \(f(.)\). If \(f(.)\) is a contraction mapping, then \(\mathbf{H}[t]\rightarrow\mathbf{H}^{*}\) as \(t\rightarrow\infty\). One can define an implicit function as follows to aid in the gradient computation: \(\psi(\mathbf{H};\mathbf{X},\mathbf{W})=\mathbf{H}-f(\mathbf{H},\mathbf{X}; \mathbf{W})\). By construction, we can observe that \(\psi(\mathbf{H}^{*};\mathbf{X},\mathbf{W})=0\) and by extension,
\[\frac{\partial\psi(\mathbf{H}^{*};\mathbf{X},\mathbf{W})}{\partial\mathbf{W}}=0\]Following a sequence of algebraic manipulations we can compute the gradient of the equilibrium state \(\mathbf{H}^{*}\) w.r.t. parameters \(\mathbf{W}\) as follows.
\[\frac{\partial\mathbf{H}^{*}}{\partial\mathbf{W}}=\left(I-J_{f,\mathbf{H}^{*}} \right)^{-1}\frac{\partial f(\mathbf{H}^{*},\mathbf{X};\mathbf{W})}{\partial \mathbf{W}},\]
where \(J_{f,\mathbf{H}^{*}}\) is the Jacobian of the cRNN \(f(.)\) evaluated at \(\mathbf{H}^{*}\). The ability to compute this gradient, however, relies on \((I-J_{f,\mathbf{H}^{*}})\) being invertible. To guarantee this, Linsley et al. [8] introduced the Lipschitz Coefficient Penalty (LCP) as \(\mathcal{L}_{LCP}(\mathbf{W})=\|(\mathbf{1}\cdot J_{f,\mathbf{H}^{*}}-\lambda )^{+}\|_{2}\) which controls the degree of contraction in \(f(.)\) and by extension the invertibility of the term discussed above. Moreover, this regularization can be added to any task loss which we exploit.
Evidential learning formulation.To support our goal of characterizing evidence accumulation strategies in our cRNN, we parameterize our model readout to explicitly express uncertainty. Specifically, we follow the evidential learning strategy introduced in Sensoy et al. [30] to interpret cRNN readouts as the parameters of the Dirichlet density on the class predictors, \(\boldsymbol{\alpha}=<\alpha_{1},...,\alpha_{K}>\), where \(K\) is the total number of classes in the classification problem of interest. As in [30], our evidential loss comprises of two components, one that supports belief accumulation for the "correct" class (\(\mathcal{L}_{risk}\)) and a second that ensures that the beliefs for all the other "wrong" classes have uniform mass (\(\mathcal{L}_{balance}\)).
\[\mathcal{L}_{EDL}(\mathbf{W}) =\sum_{i=1}^{i=N}\mathcal{L}_{risk}^{(i)}+\rho\mathcal{L}_{balance }^{(i)}(\mathbf{W})\] \[\mathcal{L}_{risk}(\mathbf{W}) =\sum_{j=1}^{j=K}(y_{j}-\hat{p}_{j})^{2}+\frac{\hat{p}_{j}(1-\hat {p}_{j})}{S+1}\] \[\mathcal{L}_{balance}(\mathbf{W}) =D_{KL}(D(\mathbf{p}|\hat{\boldsymbol{\alpha}})\mid\mid D( \mathbf{p}|<1,...,1>))\]
In the above, \(\mathbf{y}\) denotes a one-hot ground truth label for a given sample with \(y_{j}=1\) if the sample belongs to class \(j\), and \(0\) otherwise. \(\hat{p}_{j}\) is from the model that denotes the expected probability of the \(j\)-th singleton under the corresponding Dirichlet such that \(\hat{p}_{j}=\frac{\alpha_{j}}{S}\). \(S=\sum_{j=1}^{K}\alpha_{j}\) resembles the "total evidence" accumulated by the model (or the Dirichlet strength). \(D_{KL}\) is the Kullback-Leibler divergence measure. \(\hat{\boldsymbol{\alpha}}\) are the Dirichlet parameters of just the "misleading" evidence given by \(\hat{\boldsymbol{\alpha}}=\mathbf{y}+(1-\mathbf{y})\boldsymbol{\alpha}\). We define the instantaneous uncertainty as \(\epsilon=\frac{K}{S}\).
Finally, \(\rho\) is an annealing term that is used to balance the relative importance of \(\mathcal{L}_{risk}\) and \(\mathcal{L}_{balance}\). In practice, we use the following increment schedule: \(\rho_{epoch}=min\left[1,\frac{epoch}{\tau}\right]\), where \(\tau\) is a hyperparameter. For a complete derivation of the loss formula, we refer the reader to [30]. We choose \(\tau=16\) for incremental grouping, \(\tau=10\) for Planko, \(\tau=20\) for the maze task, and \(\tau=16\) for scene categorization.
Training objective.Our final training objective is thus \(\mathcal{L}(\mathbf{W})=\mathcal{L}_{EDL}(\mathbf{W})+\gamma\mathcal{L}_{LCP} (\mathbf{W})\).
Generalization Experiments
### Task and Stimuli
In the main text, we identified the flexibility to generalize to more complex task settings at inference as a critical desideratum for obtaining a human-like RT metric from any model. We tested generalizability in the cRNN as well as in a set of control models (see further) using the incremental grouping task described in the main text (dots on the same object "yes"/"no"). We trained on an easy version of the task by only selecting training stimuli for which the Euclidean dot distance was at most \(44\) px (50th percentile). We refer to this reduced training set as "short training stimuli" (\(N=170K\)). For evaluation, we created two subsets of the validation set, one easy and one hard. The easy set ("short validation stimuli", \(N=7.7K\)) consisted of those validation stimuli for which the dots were at most \(44\) px apart. The hard set ("long validation stimuli", \(N=1.5K\)) consisted of those stimuli for which the dots were more than \(82\) px apart (90th percentile). All thresholds were computed across the training stimuli in the original set.
### Models and Training
All models listed below were trained from scratch with a batch size of \(128\) and a standard cross entropy loss. Training was done in a data-parallel manner on four Nvidia RTX GPUs (Titan/3090/A5000) with 24GB of memory each.
* **AlexNet-BN**. This model was included as a pure feedforward control. We used the PyTorch implementation for AlexNet [65] with _BN_ provided in [66]. The optimizer was stochastic gradient descent (SGD) [67] and the learning rate was \(1e-2\). We report the results for the best epoch out of \(59\) (based on performance on the short validation stimuli) for three individual runs.
* **VGG-19-BN**. This model was included as an additional pure feedforward control. We trained a variant of the original VGG architecture [68] referred to as "vgg19_bn" in the torchvision model zoo [69]. We used an SGD [67] optimizer and a learning rate of \(1e-2\). We report the results for the best epoch out of \(32\) (based on performance on the short validation stimuli) for three individual runs. Interestingly, in two additional runs with the same hyperparameters, VGG-19-BN was unsuccessful in solving even the easy version of the task and performed at chance level.
* **hGRU-BPTT**. This is a recurrent control model with the same architecture (see SI SSA) as the model studied in the main paper (referred to there as cRNN), but trained with the classic BPTT. Furthermore, we found that in order to learn the easy version of the task with BPTT, we had to replace _BN_ with group normalization. The optimizer was Adam [50], \(lr=1e-3\), and \(T=15\). Our hypothesis was that this control model would fail the generalization test and perform near chance level on the harder long validation stimuli. We report the results for the best epoch out of \(32\) (based on performance on the short validation stimuli) for three individual runs.
* **hGRU-CRBP**. This model shares the same recurrent architecture (see SI SSA) and training algorithm algorithm (C-RBP) as the cRNN used in the main paper. We trained the hGRU-CRBP models with an Adam [50] optimizer, \(lr=1e-3\), \(T=40\), and \(\gamma=100\). Our hypothesis was that this model would still perform well above chance level on the harder long validation stimuli, in contrast to both the feedforward controls and the RNN control without C-RBP. This hypothesis was motivated by findings in [8]. We report the results for the best epoch out of \(32\) (based on performance on the short validation stimuli) for three individual runs.
* **ConvLSTM**. This is a recurrent control model with the LSTM architecture [70] that shares the C-RBP training algorithm used to train the cRNNs in the main text, but lacks the anatomical and neurophysiological constraints. We trained the ConvLSTM models [71] with Adam [50], \(lr=1e-3\), \(T=40\), and a \(\gamma\) of either \(100\) or \(1000\). We analyzed the generalization performance of the runs that reached at least \(90\%\) accuracy on the short validation stimuli.
* **ViT-B-16**. ViT-B-16 was included as a control model from the Vision Transformer family [49]. We used the implementation included in the torchvision zoo [69] as "vit_b_16". Totrain the model, we used an Adam [50] optimizer, a patch size of \(10\), a learning rate of \(1e-2\), a dropout rate of \(.10\), and an attention dropout rate of \(.10\). We report the results for the best epoch out of \(299\) (based on performance on the short validation stimuli).
### Results
Fig. S1 summarizes the results of the generalization experiments. The models that share the same architecture and training algorithm (hGRU-CRBP) as the cRNNs studied in the main text performed well above chance on the long validation stimuli. Despite having high accuracy on the short validation stimuli (i.e., those with dot distances within the range they were trained for), the control models performed near or at chance on the harder long validation stimuli. The only exception to this finding was ViT-B-16.
Dynamically Expendable Computational Budget
Figure S2: **Dynamic computational time.****a.** Low-dimensional projections of our model dynamics for two input conditions in the incremental grouping task (Section 4 of the main text). For the easy input (Panel b), the model effectively "stops" computing at an earlier time point (\(t^{*}\)), as evidenced by dramatically reduced step sizes in the latent space. For the harder input (Panel c), step sizes remain large (above the dotted line) for a longer period. Overall this demonstrates that our model dynamically utilizes its compute budget in a stimulus-dependent manner. **b,c.** Alternative visualization of the step size as a function of time for the easy and hard stimuli, respectively. The dotted line here is arbitrarily chosen for visual comparison.
Videos
For the incremental grouping task (Section 4 in the main text) and the maze task (Section 6 in the main text), the cRNN learned a filling-in strategy that was evident from inspecting the latent activity \(\mathbf{h_{t}}\) over time steps (\(t\)). The gradual spread of activity can be appreciated best in video-format. Examples of such videos can be found on our project page: [https://serre-lab.github.io/rnn_rts_site](https://serre-lab.github.io/rnn_rts_site).
Each video consists of three panels. The left panel shows the input to the model. The middle panel shows the uncertainty curve, with a marker added at every time step to indicate the progression of time. The right panel shows a visualization of the latent activity over time. Each frame shows \(\mathbf{h_{t}}\) averaged over the channel dimension and smoothed by additionally averaging between \(\mathbf{h_{t}}\) and \(\mathbf{h_{t+1}}\). With each frame in the video, one can observe the spread of activity and the corresponding change in uncertainty.
Distance Experiment
When humans perform the incremental grouping task (Section 4 in the main text), their RT tends to increase with increasing distances between the dots. This effect has been attributed to a serial spread of attentional resources [47, 52]. To test whether the effect was also present in the cRNN we trained, we created a dedicated test set, derived from a random subset of the validation outlines (\(N=150\)). As explained in the main text, we repositioned the dots to be in the center of an object and \(14\) px apart in Euclidean distance. We then systemically moved them \(14\) px further apart until they hit an object boundary, thus creating different distance conditions for a given outline (see Fig. S3 for an example). This procedure resulted in a total of \(1852\) stimuli. Fig. S3 demonstrates how the distance manipulation affected the uncertainty curves for one example outline. The cRNN spent more time being uncertain if the dots were placed further apart, which is captured in our \(\xi_{cRNN}\) metric. Analyzing the relation between distance and \(\xi_{cRNN}\) across the whole test set using a linear mixed-effects model, we found a significant positive effect, \(b=1.46,SE=0.11,z=13.58,p<.001\) (Fig. S4).
Non-Euclidean Distance Metric
Jeurissen et al. [2017] describe a growth cone distance metric that is sensitive to the topology of the object, proximity of boundaries, and narrowness. The authors construct a series of non-overlapping, varying-sized discs whose centers trace a curve (originating at the fixation location and ending at the cue location). The radii are such that a disc is the maximum inscribed disc at its location. Disks never cross object boundaries. The final distance metric is defined as the total number of discs that constitute this curve Spatial Anisotropy in Incremental Grouping - Additional Experiments
As a further test of how factors beyond Euclidean distance might affect \(\xi_{cRNN}\) in the incremental grouping task, we created two sets of stimuli (Fig. S7, Fig. S8), each derived from the same 23 novel outlines by placing dots in different configurations. Following the example of [47], each stimulus featured two adjacent objects of the same object class. In one set (Fig. S7), we created two conditions by placing one of the dots (or both) in a narrow region of an object in one condition, and moving the pair of dots to a wider region in the other condition. The Euclidean dot distance, as well as the topological distance were matched between both conditions. In the other stimulus set (Fig. S8), we used the same outlines but with different dot conditions. Here, one condition had the dots placed in a such a way that they could be connected by a straight path. In the other, the path was curved making that the higher topological distance condition. The Euclidean dot distance, on the other hand, was still matched. Fig. S5 visualizes the effect of both these manipulations on \(\xi_{cRNN}\). As reported in the main text, paired t-tests showed that \(\xi_{cRNN}\) was significantly higher in the narrow versus wide condition, \(d=0.70,t(22)=3.34,p=.003\), and in the curved versus straight condition, \(d=0.98,t(22)=4.71,p<.001\).
Finally, in a last follow-up experiment, we varied the distance between one dot and an object boundary while keeping the Euclidean distance between the dots themselves constant (Fig. S6). We found that our model exhibits higher values near the boundaries (Fig. S6b,c). This is strikingly similar to human behavior presented in prior literature ([47], Fig. 3d).
Figure S7: **Novel outline stimuli for the incremental grouping task featuring a narrow versus wide manipulation.** Dots are colored orange for visualization purposes.
Figure S8: **Novel outline stimuli for the incremental grouping task featuring a topological distance manipulation. The dots were placed such that it was either a straight or a curved path (higher topological distance) connecting them, whereas the Euclidean distance was the same. Dots are colored orange for visualization purposes.**
Ambiguous Grouping Stimuli and Non-zero Uncertainties
Here, we take a closer look at which stimuli elicit very high \(\xi_{cRNN}\) values. Looking at our validation set first, we observed that some high-\(\xi_{cRNN}\) stimuli are characterized by uncertainty curves that remain high, even at \(t=T\), suggesting the model deems them ambiguous (see Fig. S9 for an example).
Furthermore, in a test set we designed to be ambiguous (it features occlusions), we frequently observed such non-decreasing uncertainty curves too (see Fig. S10). We note that while our training procedure promotes stable attractor states in the model's _latent_ dynamics, this is disparate from readout dynamics allowing the framework to truly explore the spectrum of computation times and uncertainties.
Figure S10: **Testing model uncertainty on occluded stimuli.****a.** We created a dataset (N=76) of occluded images, half of which have both dots on the same side of the occluding object (control). The Euclidean dot distance is matched between the control and occluded condition. We performed zero-shot transfer with our trained cRNN on this set and observed that for the occluded samples, model uncertainty often remained at high (non-zero) values in the time limit. **b.** A paired t-test reveals that our metric is significantly higher in the occluded condition compared to the control condition: Cohen’s d = 2.85, t(37) = 17.54, p <.001.
Additional Maze Analyses
Human participants have been found to take more time to solve a maze when the solution path features more junctions [56]. To study whether the cRNN we trained exhibited a similar slow-down, we computed two measures for the Yes-mazes (Fig. S11). One was the number of T-junctions encountered on the shortest path between the two cues. The other was the total number of T-junctions on the maze segment containing the cues. Linear regression analyses that statistically controlled for path length by including it as a predictor, did not yield evidence for an effect of the first measure, \(b=-0.23,SE=0.29,t(234)=-0.79,p=.43\) or the latter, \(b=0.28,SE=0.18,t(234)=1.56,p=.12\). We treat this finding as an important example for how our metric can help identify potential mismatches between human- and model-behavior and ultimately guide the development of better aligned models.
Figure S11: **Counting T-junctions in Yes-mazes.****a.** An example maze. **b.** Visualization of the T-junctions. The solution path connecting the cued locations for the same maze shown in the previous panel is indicated in blue here. We found no evidence for an effect of the number of T-junctions included in the solution path (orange squares), nor of the number of T-junctions included in entire maze segment containing the cues (orange + green squares) on \(\xi_{cRNN}\).
## Appendix J ConvLSTM Results on Scene Categorization
Figure S12: **A convLSTM equipped with our metric captures human RT patterns in a scene categorization task.** Within our framework, we are able to swap out various model (or architecture) choices without affecting the downstream formulation. Whereas our choice of architecture in the main text was motivated by the demands that a task like incremental grouping imposes, we show here that \(\xi\) can also be computed for other architectures, such as a convLSTM. The fact that our metric is model-agnostic makes it ideally suited for comparing and contrasting models in terms of their temporal alignment. | ## Review
### Summary
This paper proposes a novel metric for evaluating the alignment between recurrent neural networks (RNNs) and human behavior in visual decision-making, focusing on reaction times. The authors introduce the measure of visual network computation time,
κRNN, which captures output uncertainty in RNNs trained with an evidential deep learning approach. The study demonstrates positive correlations between the model's reaction times and human performance across diverse tasks, enhancing the relevance of the research in understanding the interaction between machine learning models and human cognitive processes. While the work is comprehensive, it raises questions about the generalizability of the metric and its mechanistic relationship to human reaction times, which warrants further exploration.
### Strengths
- Well-written and easy to understand.
- Attempts to establish correspondence between model outputs and human behavior, which is novel and important.
- Combines evidential deep learning with convolutional RNNs to address temporal dynamics in decision-making.
- Thorough evaluation across multiple datasets shows positive correlations with human reaction times.
- Clear presentation of contributions with intuitive figures and supplementary materials.
### Weaknesses
- The proposed metric may not accurately capture the complexity of human reaction times, as it is fundamentally a measure of uncertainty.
- Lacks clarity on how well the metric extends to other architectures beyond the specific model used.
- The tasks evaluated may not fully represent real-world scenarios; further exploration of natural stimuli could enhance relevance.
- Concerns about whether the metric can be generalized to more complex tasks involving adaptive computation.
### Questions
- How does the metric behave under more complex task conditions, such as varying object topology and occlusion?
- Can the proposed metric be applied to different architectures like convolutional LSTMs?
- What does 'without loss of generality' mean in the context of the paper?
- How are the activity maps in figures computed, and what implications do the differences between model and human data suggest?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is sound, though there are concerns about the mechanistic interpretation of the results.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is generally well-structured and clear, but some concepts need further elaboration for better comprehension.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper provides valuable insights into the relationship between RNNs and human behavior, but further validation and exploration are needed.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid with important contributions, though it requires clarifications and additional context to enhance its robustness.
### Paper Decision
**Decision:** Accept (spotlight)
**Reasons:** The paper presents an innovative approach to understanding the temporal dynamics of machine learning models in relation to human behavior, contributing significantly to the field. While it has some weaknesses, particularly regarding the generalizability and mechanistic interpretation of its findings, the overall soundness and clarity of presentation, along with its substantial contributions, warrant an acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
## Comparing Causal Frameworks: Potential Outcomes, Structural Models, Graphs, and Abstractions
### Duligur Ibeling
Department of Computer Science
Stanford University
[email protected]
&Thomas Icard
Department of Philosophy
Stanford University
[email protected]
Toward the end of the twentieth century several frameworks arose for formalizing and analyzing problems of causal inference. One of these, associated with Rubin [34] and others (see [16]), takes the _potential outcome_--formalizing the effect of one variable on another--as a fundamental target of analysis. Causal assumptions in the Rubin causal model (RCM) are naturally encoded as algebraic constraints on potential outcomes, and research in this area has spawned a remarkable body of theoretical and applied work especially in social and biomedical sciences (see [14] for a review).
A second approach, associated with Pearl [26] and collaborators (see [27] for a textbook treatment; see also [37]), focuses instead on assumptions that can be encoded qualitatively, or more specifically, graphically, arising from a fundamental object known as a structural causal model (SCM). The _docalculus_ is one of the crowning achievements of this approach, and it has been shown derivationally complete with respect to a wide range of canonical estimation and inference problems [35, 2, 19].
Both frameworks have enjoyed considerable influence within recent causal machine learning in particular. As just one example, concern in reinforcement learning about the possibility of unobserved confounders--variables impacting both decisions and their outcomes--has generated a number of important advances, some employing tools and concepts from the RCM approach (e.g., [18, 25, 17]), others grounded in the SCM approach and typically involving graphs (e.g., [3, 38, 42]).
Despite the remarkable successes that both of these frameworks have engendered in machine learning and beyond, there remains substantial controversy over how to understand their relationship. In the literature it has been claimed, on the one hand, that the two are equivalent, that "a theorem in one is a theorem in the other" [27, p. 98]. On the other hand, some authors suggest that the two are only equivalent in a weak sense, one that "builds the limitations of SCM into the resulting comparison and likewise filters out aspects of the rival theory that do not readily translate to SCM" [21, p. 443].
At issue are two separable questions. The first is one of practical significance. Some argue that graphs give greater "conceptual clarity" [27, p. 103] and that SCMs more generally offer a "a flexible formalism for data-generating models" that helps ground causal inquiry [4, p. 514]; others argue that work in the RCM framework provides "transparent definitions of causal effects and encourages the analyst to consider individual-level heterogeneity as a first principle" [23, p. 91] as well as "guidanceto researchers and policy makers for practical implementation" [14, p. 1131]. While obviously very important, our goal is not to address these disputes about what theoretical primitives are most "natural" or "useful" for practitioners or applied researchers. Rather, the aim of the present contribution is to offer a number of new technical results that together shed light on a more basic question, namely, how precisely the RCM and SCM frameworks relate at a theoretical level. For example, are the two merely notational variants, or does one tacitctly enforce assumptions that the other does not?
In this paper we first endeavor, building on previous work, to elucidate the precise sense in which SCMs can be construed as _representations_ of RCMs, provided the latter satisfy two key principles known as composition and reversibility [9, 10]. Interestingly, such principles (or their logical consequences) have been questioned in the literature (e.g., [7]). Our second goal is to help clarify the sense in which they may fail. Drawing from recent literature on causal abstraction (e.g., [33, 5])--broadened to cover both SCMs and RCMs--we suggest that failure of these principles plausibly results when causally relevant low-level details are elided in favor of more abstract variables. Our Thm. 1 buttresses this intuition, showing that every RCM is a _constructive abstraction_ of a representable RCM (hence satisfying composition and reversibility). We furthermore remark on how the well known SUTVA assumptions [16] can be understood as conditions on good variable abstractions.
Our starting point in this work is theoretically neutral, taking for granted only the primitive, "probability of a counterfactual." In the second half of the paper we introduce a framework-neutral formal language for reasoning about such probabilities, which in turn facilities further comparison. With respect to this common language, we offer a completeness result for the class of all RCMs (Thm. 2) and, drawing on [10], the class of representable RCMs (Cor. 2). These results are illustrated with an example derivation of LATE (see [15, 1]), which also helps illuminate which assumptions are logically required in the derivation. Meanwhile, we offer a partial answer to the well-known open question [39, 31] of how to characterize the algebraic constraints implied by a particular graph (Thm. 3), a result that helps bring graphical assumptions into this neutral common language. Finally, we show how an existing result on single-world intervention graphs (SWIGs), a framework drawing from both perspectives, can be construed as a completeness result for the same language (Thm. 4).
Taken together, our results are largely conciliatory--in the same spirit as other important conciliatory work in this context; see, e.g., [30, 36, 20]--showing how the two frameworks are productively compatible, while also suggesting distinctive perspectives on problems of causal inference.
Proofs are deferred to supplementary appendices A, B, which contain additional technical material.
## 1 Modeling
We first introduce a formalization of the Rubin causal model [34, 11, 12] before then turning to structural causal models [26, 27, 4]. The relationship between these two is elucidated in SS1.1.3.
### Preliminaries
Common to both frameworks will be a set \(\mathbf{V}\) of _endogenous variables_. Concerning notation:
**Notation.** The signature (or range) of a variable \(V\) is denoted \(\mathrm{Val}(V)\). Where \(\mathbf{S}\) is a set of variables, let \(\mathrm{Val}(\mathbf{S})=\bigtimes_{S\in\mathbf{S}}\mathrm{Val}(S)\), the product of the family of sets \(\mathrm{Val}(S)\) indexed by \(S\in\mathbf{S}\). Elements of \(\mathrm{Val}(\mathbf{S})\) represent joint valuations of the variables \(\mathbf{S}\). Given an indexed family of sets \(\{S_{\beta}\}_{\beta\in B}\) and elements \(s_{\beta}\in S_{\beta}\), let \(\{s_{\beta}\}_{\beta}\) denote the indexed family of elements whose object associated with the index \(\beta\) is \(s_{\beta}\), for all \(\beta\). The symbol \(\subset\) indicates any subset (or set inclusion) and does not imply a strict subset (or proper inclusion). For \(B^{\prime}\subset B\) write \(\pi_{B^{\prime}}:\bigtimes_{\beta\in B}S_{\beta}\to\bigtimes_{\beta\in B^{ \prime}}S_{\beta}\) for the _projection map_ sending each \(\{s_{\beta}\}_{\beta\in B}\mapsto\{s_{\beta^{\prime}}\}_{\beta^{\prime}\in B^{ \prime}}\); abbreviate \(\pi_{\beta^{\prime}}=\pi_{\{\beta^{\prime}\}}\), for \(\beta^{\prime}\in B\). Thus if \(\mathbf{s}\in\mathrm{Val}(\mathbf{S})\) is a joint valuation of variables \(\mathbf{S}\) and \(S\in\mathbf{S}\) is a single variable, then \(s=\pi_{S}(\mathbf{s})\in\mathrm{Val}(S)\) is a value for \(S\). If \(\mathbf{S}^{\prime}\subset\mathbf{S}\) then \(\pi_{\mathbf{S}^{\prime}}(\mathbf{s})\in\mathrm{Val}(\mathbf{S}^{\prime})\) is a joint valuation of \(\mathbf{S}^{\prime}\), namely the projection of \(\mathbf{s}\) to \(\mathbf{S}^{\prime}\). Upper-case letters like \(\mathbf{S}\) conventionally represent those sets of variables that the corresponding lower-case letters \(\mathbf{s}\) are valuations of, \(\mathbf{s}\in\mathrm{Val}(\mathbf{S})\).
#### 1.1.1 Rubin Causal Models, Potential Outcomes, Counterfactuals
The present formalization of the Rubin causal model [34, 16] loosely follows previous presentations; see especially [12]. It codifies experimental outcomes across individuals drawn from a distribution.
These are _potential outcomes_ over a variable set \(\mathbf{V}\), defined as expressions of the form \(Y_{\mathbf{x}}\) for an _outcome_\(Y\in\mathbf{V}\) and an _intervention_ or _treatment_\(\mathbf{x}\in\operatorname{Val}(\mathbf{X})\) for some \(\mathbf{X}\subset\mathbf{V}\), and interpreted as the value observed for \(Y\) in a controlled experiment where each \(X\in\mathbf{X}\) is held fixed to \(\pi_{X}(\mathbf{x})\).
**Definition 1**.: A _Rubin causal model_ (RCM) is a tuple \(\mathcal{R}=\langle\mathcal{U},\mathbf{V},\mathfrak{V},\mathfrak{F},P\rangle\) where \(\mathcal{U}\) is a finite set of _units_ or _individuals_, \(\mathbf{V}\) is a finite set of _endogenous variables_, \(\mathfrak{D}\) is a set of potential outcomes over \(\mathbf{V}\), \(\mathfrak{F}\) is a set of _potential response_ functions, to be defined shortly, and \(P:\mathcal{U}\to[0,1]\) is a probability distribution on \(\mathcal{U}\). A potential response for \(Y_{\mathbf{x}}\in\mathfrak{D}\) is a function \(f_{Y_{\mathbf{x}}}:\mathcal{U}\to\operatorname{Val}(Y)\). For each \(Y_{\mathbf{x}}\in\mathfrak{D}\) we require that \(\mathfrak{F}=\{f_{Y_{\mathbf{x}}}\}_{Y_{\mathbf{x}}\in\mathfrak{D}}\) contain exactly one such function.1
Footnote 1: By generalizing to allow multiple such functions one arrives at a class of models closely related to the _generalized structural equation models_ (GSEMs) of Peters and Halpern [29], or the equivalent class of _causal constraint models_ (CCMs) introduced by Blom et al. [6]. Rather than mapping each potential outcome \(Y_{\mathbf{x}}\) to a value in \(\operatorname{Val}(Y)\), GSEMs map each (allowable) intervention \(\mathbf{x}\) to a (possibly empty) _set_ of values for all the variables, that is, to elements of the powerset \(\wp(\operatorname{Val}(\mathbf{V}))\). RCMs, by contrast, allow that, e.g., \(Y_{\mathbf{x}}\) may be defined for all \(\mathbf{u}\), while \(Z_{\mathbf{x}}\) is undefined because \(Z_{\mathbf{x}}\notin\mathfrak{D}\). The two are thus incomparable in expressive power.
RCMs are often specified in a tabular form as in, e.g., Fig. 1 below. We adopt the notation \(y_{\mathbf{x}}(i)\) or \(Y_{\mathbf{x}}(i)=y\) as a shorthand for \(f_{Y_{\mathbf{x}}}(i)=y\): for \(\mathcal{R}\) as in Def. 1, write \(\mathcal{R}\vDash y_{\mathbf{x}}(i)\) iff \(Y_{\mathbf{x}}\in\mathfrak{D}\) and \(f_{Y_{\mathbf{x}}}(i)=y\). This means that in the above controlled experiment, outcome \(y\in\operatorname{Val}(Y)\) is observed for individual \(i\). Each \(Y_{\mathbf{x}}\) can be thought of as a variable with range \(\operatorname{Val}(Y_{\mathbf{x}})=\operatorname{Val}(Y)\). We call the set \(\operatorname{Val}(\mathfrak{D})\) of joint valuations of these variables _counterfactuals_. A set of potential responses \(\mathfrak{F}\) then maps units to counterfactuals, \(\mathfrak{F}:\mathcal{U}\to\operatorname{Val}(\mathfrak{D})\), by defining \(\mathfrak{F}(i)=\{f_{Y_{\mathbf{x}}}(i)\}_{Y_{\mathbf{x}}\in\mathfrak{D}}\), and:
**Definition 2**.: Where \(\mathcal{R}\) is as in Def. 1, the _counterfactual distribution_\(P_{\mathrm{cf}}^{\mathcal{R}}:\operatorname{Val}(\mathfrak{D})\to[0,1]\) induced by \(\mathcal{R}\) is the pushforward2\(\mathfrak{F}_{\mathbf{x}}(P)\) of \(P:\mathcal{U}\to[0,1]\) under \(\mathfrak{F}:\mathcal{U}\to\operatorname{Val}(\mathfrak{D})\).
Footnote 2: That is, \(P_{\mathrm{cf}}^{\mathcal{R}}(\mathbf{o})=P(\mathfrak{F}^{-1}(\{\mathbf{o}\}))\) for any \(\mathbf{o}\in\operatorname{Val}(\mathfrak{D})\).
The reason we call \(P_{\mathrm{cf}}^{\mathcal{R}}\) a counterfactual distribution (and \(\operatorname{Val}(\mathfrak{D})\) counterfactuals) is because such joint probabilities over multiple potential outcomes appear in the usual ratio definition of counterfactual probabilities. For instance, \(P(y_{x}|y^{\prime}_{x^{\prime}})=P(y_{x},y^{\prime}_{x^{\prime}})/P(y^{\prime} _{x^{\prime}})\) gives the probability that a unit who was withheld treatment and did not recover would have recovered if assigned treatment. But \(P_{\mathrm{cf}}^{\mathcal{R}}\) also answers via marginalization all questions (whenever defined by \(\mathcal{R}\)) about "interventional" probabilities like \(P(y_{\mathbf{x}})\), as well as purely "observational" probabilities such as \(P(\mathbf{x})\); see, e.g., [4].
Some authors submit that "probability will mean nothing more nor less than a proportion of units" [11, p. 945], thereby assuming a uniform distribution on a finite population \(\mathcal{U}\) (cf. also [1]). Of course, in the infinite population size limit we recover all RCMs as in Def. 1 (see Prop. A.1).
While practitioners do not typically consider potential outcomes \(Y_{\mathbf{x}}\) when \(Y\in\mathbf{X}\), instead maintaining a strict dichotomy between cause and effect variables (e.g., [11, 12]), it is natural to impose the following requirement (known as _effectiveness_) whenever such potential outcomes are defined. An intervention is always assumed to be a _successful_ intervention: whenever \(Y\in\mathbf{X}\),
\[\text{Effectiveness}.\qquad Y_{\mathbf{x}}(u)=\pi_{Y}(\mathbf{x}) \tag{1}\]
for every \(u\in\mathcal{U}\). In fact, practice in the RCM framework reflects this same assumption, in the sense that violations of it are taken to signify a poor choice of variables. As a classic example, the possibility of _non-compliance_ in an experimental trial motivates the introduction of instrumental variables, and specifically a separation between, e.g., _treatment_ and _assignment to treatment_ (cf. Ex. 3). Crucially, we recover effectiveness with respect to the latter. We will assume any RCM to meet (1) unless otherwise specified; let \(\mathfrak{R}_{\mathrm{eff}}\) be the class of such RCMs.3
Footnote 3: Not only does this assumption reflect practice, but it is also without loss with regard to comparing RCMs and SCMs, as the latter also satisfy effectiveness: see footnote 4.
#### 1.1.2 Structural Causal Models
An important feature of RCMs is that potential outcomes are cleanly separated from assignment mechanisms [16]. A different starting point is to assume that potential outcomes and their algebraic behavior are rather _derived_ from an underlying formal representation of causal structure. These putatively "deeper mathematical objects" [27, p. 102] involve concrete functional dependencies, and an operation of function replacement known as _intervention_:
**Definition 3**.: A _structural causal model_ (SCM) is a tuple \(\mathcal{M}=\langle\mathbf{U},\mathbf{V},\mathcal{F},P(\mathbf{U})\rangle\) where \(\mathbf{U}\) is a finite set of _exogenous variables_, \(\mathbf{V}\) is a finite set of _endogenous variables_, \(\mathcal{F}\) is a family of _structuralfunctions_, to be defined shortly, and \(P:\operatorname{Val}(\mathbf{U})\to[0,1]\) is a probability distribution on (joint valuations of) \(\mathbf{U}\).
A structural function for \(V\in\mathbf{V}\) is a function \(f_{V}:\operatorname{Val}(\mathbf{U}_{V}\cup\mathbf{Pa}_{V})\to\operatorname{ Val}(V)\), where \(\mathbf{U}_{V}\subset\mathbf{U}\), \(\mathbf{Pa}_{V}\subset\mathbf{V}\setminus\{V\}\). For every \(V\in\mathbf{V}\) we require that \(\mathcal{F}=\{f_{V}\}_{V}\) have exactly one such function; the entire collection \(\mathcal{F}\) thus forms an exogenous-to-endogenous mapping.
Interventions come as a derived notion, replacing structural functions with constant functions [22, 27]:
**Definition 4** (Intervention on SCMs).: Let \(\mathbf{x}\in\operatorname{Val}(\mathbf{X})\) for some \(\mathbf{X}\subset\mathbf{V}\) be an intervention and \(\mathcal{M}\) be the SCM of Def. 3. Then define a _manipulated model_\(\mathcal{M}_{\mathbf{x}}=(\mathbf{U},\mathbf{V},\{f^{\prime}_{V}\}_{V},P( \mathbf{U}))\) where each \(f^{\prime}_{V}:\operatorname{Val}(\mathbf{U}^{\prime}_{V}\cup\mathbf{Pa}^{ \prime}_{V})\to\operatorname{Val}(V)\). If \(V\notin\mathbf{X}\) define \(\mathbf{U}^{\prime}_{V}=\mathbf{U}_{V}\), \(\mathbf{Pa}^{\prime}_{V}=\mathbf{Pa}_{V}\), and \(f^{\prime}_{V}=f_{V}\). If \(V\in\mathbf{X}\) define \(\mathbf{U}^{\prime}_{V}=\mathbf{Pa}^{\prime}_{V}=\varnothing\) and \(f^{\prime}_{V}\) as a constant function mapping to \(\pi_{V}(\mathbf{x})\).
Letting \(\mathcal{M}\) be the SCM of Def. 3, for \(\mathbf{v}\in\operatorname{Val}(\mathbf{V})\) and \(\mathbf{u}\in\operatorname{Val}(\mathbf{U})\) write \(\mathcal{M},\mathbf{u}\vDash\mathbf{v}\) if we have \(f_{V}\big{(}\pi_{\mathbf{U}_{V}\cup\mathbf{Pa}_{V}}(\mathbf{v})\big{)}=\pi_{ V}(\mathbf{v})\) for every \(V\). Let \(\mathfrak{M}_{\operatorname{uniq}}\) be the class of all SCMs \(\mathcal{M}\) such that, for any \(\mathbf{u}\) and intervention \(\mathbf{x}\) there is a unique _solution_\(\mathbf{v}\) such that \(\mathcal{M}_{\mathbf{x}},\mathbf{u}\vDash\mathbf{v}\). In this case we define the potential outcome \(Y_{\mathbf{x}}(\mathbf{u})\) as \(\pi_{Y}(\mathbf{v})\). Thus any \(\mathcal{M}\in\mathfrak{M}_{\operatorname{uniq}}\) defines a potential outcome for _every_\(Y_{\mathbf{x}}\), giving a natural function \(p^{\mathcal{M}}:\operatorname{Val}(\mathbf{U})\to\operatorname{Val}(\{Y_{ \mathbf{x}}\}_{\mathsf{all}\,Y,\mathbf{x}})\) via these outcomes, and:
**Definition 5**.: The counterfactual distribution \(P^{\mathcal{M}}_{\mathrm{cf}}:\operatorname{Val}(\{Y_{\mathbf{x}}\}_{\mathsf{ all}\,Y,\mathbf{x}})\to[0,1]\) induced by \(\mathcal{M}\in\mathfrak{M}_{\operatorname{uniq}}\) is the pushforward \(p^{\mathcal{M}}_{*}(P)\) of \(P:\operatorname{Val}(\mathbf{U})\to[0,1]\) under \(p^{\mathcal{M}}:\operatorname{Val}(\mathbf{U})\to\operatorname{Val}(\{Y_{ \mathbf{x}}\}_{\mathsf{all}\,Y,\mathbf{x}})\).
Thus, SCMs in \(\mathfrak{M}_{\operatorname{uniq}}\) canonically define counterfactual distributions for all possible potential outcomes via manipulation of functional dependencies. Importantly, Def. 5 provides a bridge to RCMs, as both produce counterfactual distributions (recall Def. 2). As long as the counterfactual probabilities are assumed to mean the same thing--i.e., as long as they highlight the same targets for empirical and theoretical investigation--we can then compare the ranges of assumptions and inference patterns that each framework can encode about them. We thus assume that all our SCMs belong to this class \(\mathfrak{M}_{\operatorname{uniq}}\).
#### 1.1.3 Representation of RCMs by SCMs
A contentious methodological question is whether all (endogenous) variables should be potential candidates for intervention. Following the literature we have supposed that SCMs allow all possible interventions (though this assumption is not universal; see, e.g., [33, 5]). For RCMs it is generally assumed that there can be "no causation without manipulation" [11, 16], and thus that only some interventions should be allowed. While methodologically important, this is theoretically inessential. We can construe SCMs as possible representations of RCMs in the following sense:
**Definition 6**.: Let \(\mathcal{R}\in\mathfrak{R}_{\mathrm{eff}}\) and \(\mathcal{M}\in\mathfrak{M}_{\operatorname{uniq}}\). We say that \(\mathcal{M}\)_represents_\(\mathcal{R}\) if its counterfactual distribution \(P^{\mathcal{M}}_{\mathrm{cf}}\) marginalizes down to \(P^{\mathcal{R}}_{\mathrm{cf}}\) on the potential outcomes defined by \(\mathcal{R}\) (the set \(\mathfrak{O}\) in Def. 1). We say \(\mathcal{R}\) is _representable_ if it is represented by at least some \(\mathcal{M}\in\mathfrak{M}_{\operatorname{uniq}}\).4
Footnote 4: With regard to representability, our assumption that \(\mathcal{R}\in\mathfrak{R}_{\mathrm{eff}}\) is without loss since Def. 4 easily implies that the potential outcomes induced by any SCM must satisfy effectiveness (1).
Thus \(\mathcal{M}\) represents \(\mathcal{R}\) if they are counterfactually equivalent with respect to the outcomes defined by \(\mathcal{R}\). Toward a characterization of representability, consider two properties of an RCM [9]:
**Definition 7**.: The following Boolean formulas encode assumptions about potential outcomes:
\[\text{Composition.}\qquad Y_{\mathbf{w}}(u)=y\wedge Z_{\mathbf{w}}(u)=z\to Z _{\mathbf{w}\mathbf{y}}(u)=z \tag{2}\] \[\text{Reversibility.}\qquad Y_{\mathbf{w}z}(u)=y\wedge Z_{\mathbf{w} y}(u)=z\to Y_{\mathbf{w}}(u)=y. \tag{3}\]
Say \(\mathcal{R}\in\mathfrak{R}_{\mathrm{eff}}\) satisfies composition and reversibility, respectively, when the respective statements hold for every unit \(u\) of \(\mathcal{R}\), whenever all the appropriate potential outcomes are defined.
We understand lower-case values like \(y\), \(z\), \(\mathbf{w}\), when not bound as dummy indices or otherwise, to be schematic variables carrying tacit universal quantifiers. Thus (2), (3) must hold for all possible \(y\in\operatorname{Val}(Y)\), \(\mathbf{w}\in\operatorname{Val}(\mathbf{W})\), \(z\in\operatorname{Val}(Z)\). This same usage is repeated, e.g., in (9).
While reversibility seems not to have arisen in the potential outcomes literature, instances of composition have appeared explicitly (e.g., Holland 12, p. 968) and have been used implicitly in concrete derivations (see Ex. 3 below). Note that the well-known principle of _consistency_[7, 27] is merely the instance of composition for \(\mathbf{W}=\varnothing\). For \(\mathcal{R},\mathcal{R}^{\prime}\in\mathfrak{R}_{\mathrm{eff}}\) that share the same units \(\mathcal{U}\) and endogenous variables \(\mathbf{V}\) but have respective potential outcome sets \(\mathfrak{D},\mathfrak{D}^{\prime}\) and potential response sets \(\mathfrak{F},\mathfrak{F}^{\prime}\), if \(\mathfrak{D}\subset\mathfrak{O}^{\prime}\) and \(\mathfrak{F}\subset\mathfrak{F}^{\prime}\) we say that \(\mathcal{R}^{\prime}\)_extends_ or is an extension of \(\mathcal{R}\) and \(\mathcal{R}\) is a _submodel_ of \(\mathcal{R}^{\prime}\). Call \(\mathcal{R}\)_full_ if it has no proper extension. Then:
**Proposition 1** (SCM Representation).: RCM \(\mathcal{R}\) is representable iff \(\mathcal{R}\) extends to some full \(\mathcal{R}^{\prime}\) that satisfies composition and reversibility.
Note that for an RCM \(\mathcal{R}\) to be representable it is necessary (though not sufficient, in light of the models presented in Fig. 1 below) that \(\mathcal{R}\) itself witness no composition or reversibility failures. Prop. 1 thus clarifies a sense in which RCMs are more general than SCMs, not just by allowing only a subset of allowable interventions, but also by imposing fewer requirements on how potential outcomes relate to one another. However, assuming composition, reversibility, and fullness, the two define the same classes of counterfactual distributions, despite the superficial differences in their definitions. In that case the two are, e.g., equivalent for interpreting the probabilistic logical language of SS2. We submit that some version of this result makes sense of statements in the literature, e.g., from Pearl [27], that the twain are essentially equivalent frameworks from a theoretical perspective.
### Causal Abstraction
The goal of this section is to clarify the source of putative failures of principles like composition. We suggest that it is helpful to view these issues through the lens of _causal abstraction_ (the definitions in this section are adapted from [33, 5]). Abstraction has mostly been studied in the context of SCMs; our definitions apply equally to SCMs and RCMs via counterfactual distributions.
In causal abstraction, one has a set \(\mathbf{V}_{\mathrm{L}}\) of low-level (or concrete, or micro-) variables representing a fine-grained description and a set \(\mathbf{V}_{\mathrm{H}}\) of high-level (or abstract, or macro-) variables representing a coarser-grained description of the same scenario. The correspondence between the two descriptions is given by a partial _translation_ map \(\tau:\mathrm{Val}(\mathbf{V}_{\mathrm{L}})\to\mathrm{Val}(\mathbf{V}_{\mathrm{ H}})\). Translations extend canonically to maps of partial valuations (e.g., interventions) \(\tau:\mathsf{U}_{\mathbf{X}\subset\mathbf{V}_{\mathrm{L}}}\,\mathrm{Val}( \mathbf{X})\to\mathsf{U}_{\mathbf{X}\subset\mathbf{V}_{\mathrm{H}}}\,\mathrm{ Val}(\mathbf{X})\) by setting \(\tau(\mathbf{x}_{\mathrm{L}})=\mathbf{x}_{\mathrm{H}}\) iff \(\tau\big{(}\pi^{-1}_{\mathbf{X}_{\mathrm{L}}}(\mathbf{x}_{\mathrm{L}})\big{)}= \pi^{-1}_{\mathbf{X}_{\mathrm{H}}}(\mathbf{x}_{\mathrm{H}})\).
We overload \(\tau\) once more so as to cover counterfactuals, defining as follows yet another partial \(\tau:\mathrm{Val}(\mathfrak{D}_{\mathrm{L}})\to\mathrm{Val}(\mathfrak{D}_{ \mathrm{H}})\) for any sets \(\mathfrak{D}_{\mathrm{L}}\), \(\mathfrak{D}_{\mathrm{H}}\) of potential outcomes over \(\mathbf{V}_{\mathrm{L}}\) and \(\mathbf{V}_{\mathrm{H}}\) respectively. Index an element of \(\mathrm{Val}(\mathfrak{D}_{\mathrm{L}})\) as \(\{(\mathbf{y}^{i}_{\mathrm{L}})_{\mathbf{x}^{i}_{\mathbf{L}}}\}_{1\leq i\leq m}\), where \(\mathbf{x}^{i}_{\mathrm{L}}\neq\mathbf{x}^{j}_{\mathrm{L}}\) for any \(i\neq j\) and \(\mathbf{y}^{i}_{\mathrm{L}}\in\mathrm{Val}(\{Y\in\mathbf{V}_{\mathrm{L}}:Y_{ \mathbf{x}^{i}_{\mathbf{L}}}\in\mathfrak{D}_{\mathrm{L}}\})\) for each \(i\), and an element of \(\mathrm{Val}(\mathfrak{D}_{\mathrm{H}})\) likewise as \(\{(\mathbf{y}^{j}_{\mathrm{H}})_{\mathbf{x}^{i}_{\mathbf{H}}}\}_{1\leq j\leq n}\). Define \(\tau\big{(}\{(\mathbf{y}^{j}_{\mathrm{L}})_{\mathbf{x}^{i}_{\mathbf{L}}}\}_{1 \leq i\leq m}\big{)}=\{(\mathbf{y}^{j}_{\mathrm{H}})_{\mathbf{x}^{j}_{\mathbf{ H}}}\}_{1\leq j\leq n}^{1}\) if \(\tau(\{\mathbf{x}^{i}_{\mathrm{L}}:1\leq i\leq m\})=\{\mathbf{x}^{j}_{\mathrm{H} }:1\leq j\leq n\}\) and \(\tau(\mathbf{y}^{i}_{\mathrm{L}})=\mathbf{y}^{j}_{\mathrm{H}}\) for any pair \(\mathbf{x}^{i}_{\mathrm{L}},\mathbf{x}^{j}_{\mathrm{H}}\) where \(\tau(\mathbf{x}^{i}_{\mathrm{L}})=\mathbf{x}^{j}_{\mathrm{H}}\).
**Definition 8**.: With counterfactual translation in hand, we define an abstraction relation between probabilistic causal models. The model \(\mathcal{H}\) abstracts \(\mathcal{L}\) over the aligned variables (written \(\mathcal{H}\prec_{\tau}\mathcal{L}\)) if the translation \(\tau\) pushes the latter's counterfactual distribution to the former's, that is, \(P^{\mathcal{H}}_{\mathrm{cl}}=\tau_{*}(P^{\mathcal{L}}_{\mathrm{cl}})\).
A stricter and typically more useful notion is that of _constructive_ abstraction (e.g., [5]). These arise from translations that can be generated variable-wise, and thus correspond to a coherent "clustering" of variables:
**Definition 9**.: Translation \(\tau:\mathrm{Val}(\mathbf{V}_{\mathrm{L}})\to\mathrm{Val}(\mathbf{V}_{\mathrm{ H}})\) is constructive if there is a partition \(\Pi\) of \(\mathbf{V}_{\mathrm{L}}\) with non-overlapping cells \(\{\Pi_{V}\}_{V\in\mathbf{V}\mathbf{n}\cup\{\bot\}}\), each \(\Pi_{V}\subset\mathbf{V}_{\mathrm{L}}\), where \(\Pi_{V}\) is non-empty for all \(V\neq\bot\), and a collection \(\{\tau_{V}\}_{V\in\mathbf{V}_{\mathrm{H}}}\) each of which is a partial surjective map \(\tau_{V}:\mathrm{Val}(\Pi_{V})\to\mathrm{Val}(V)\), such that \(\tau(\mathbf{v}_{\mathrm{L}})=\big{\{}\tau_{V}\big{(}\pi\Pi_{V}(\mathbf{v}_{ \mathrm{L}})\big{)}\big{\}}_{V\in\mathbf{V}_{\mathrm{H}}}\) for any \(\mathbf{v}_{\mathrm{L}}\in\mathrm{Val}(\mathbf{V}_{\mathrm{L}})\).
A simple abstraction, ubiquitous in the literature (see, e.g., [16, SS1.6.2] and [7]), is that of variable treatment levels. Here a higher-level value corresponds to some collection of lower-level specifications that might represent the potency or dosage of the drug administered, the time of administration, etc.: for example, a distinction of whether one took 300, 400, 500, or 600 mg of aspirin is made at the low level, but at the high level, there is only the binary distinction between having taken aspirin and not. Formally, a treatment variable \(T\) is only binary with values \(\mathrm{c},\mathrm{tr}\) (control, treatment resp.) at the high level but takes on many values \(\mathrm{c},\mathrm{tr}^{1},\ldots,\mathrm{tr}^{n}\) at the low level. The abstraction is made by omitting the fine-grained details; symbolically, one forms a new model by eliding the superscripts, collapsingall \(\operatorname{tr}^{i}\) into \(\operatorname{tr}\). So long as for any outcomes we have \(Y_{\operatorname{tr}^{i}}(u)=Y_{\operatorname{tr}^{j}}(u)\), the model thus formed will be a constructive probabilistic abstraction of the low-level model.
The next result provides some useful properties of constructive abstraction.
**Proposition 2**.: Suppose \(\mathcal{H}\prec_{\tau}\mathcal{L}\) with \(\tau\) constructive. Then \(\mathcal{H}\) is effective if \(\mathcal{L}\) is effective. Also, for any submodel \(\mathcal{H}^{\prime}\) of \(\mathcal{H}\) there is a submodel \(\mathcal{L}^{\prime}\) of \(\mathcal{L}\) such that \(\mathcal{H}^{\prime}\prec_{\tau}\mathcal{L}^{\prime}\).
Thus our general class of effective RCMs closes under constructive translation. The next example shows that this is not the case for the narrower class of representable models.
**Example 1**.: Let \(X,Y,X^{\prime},Y^{\prime}\) be variables with \(\operatorname{Val}(X)=\{0,1,2\}\) and \(\operatorname{Val}(X^{\prime})=\operatorname{Val}(Y^{\prime})=\operatorname{ Val}(Y)=\{0,1\}\). Consider the RCM \(\mathcal{R}_{\text{L}}\) defined over \(\mathbf{V}_{\text{L}}=\{X,Y\}\) as a conjunction of POs:
\[X=1\wedge Y=1\wedge Y_{X=2}=0\wedge X_{Y=0}=1 \tag{4}\]
for a single unit (suppressed above for clarity). Consider a second RCM \(\mathcal{R}_{\text{H}}\) over \(\mathbf{V}_{\text{H}}=\{X^{\prime},Y^{\prime}\}\):
\[X^{\prime}=1\wedge Y^{\prime}=1\wedge Y^{\prime}_{X^{\prime}=1}=0\wedge X^{ \prime}_{Y^{\prime}=0}=1. \tag{5}\]
Note that \(\mathcal{R}_{\text{H}}\prec_{\tau}\mathcal{R}_{\text{L}}\) where \(\tau\) is a constructive abstraction with \(\Pi_{X^{\prime}}=\{X\}\), \(\Pi_{Y^{\prime}}=\{Y\}\) and \(\tau(X=0)=0\), \(\tau(X=1)=\tau(X=2)=1\), \(\tau(Y=y)=y\). Now \(\mathcal{R}_{\text{H}}\) violates both composition and reversibility, while \(\mathcal{R}_{\text{L}}\) is representable.
A second observation is that the analogue of the claim about submodels in Prop. 2 does not hold for extensions:
**Example 2**.: Consider enlarging (4) with the potential outcome \(Y_{X=1}=1\). Then there is no high-level abstraction under \(\tau\) that defines the outcome \(Y^{\prime}_{X^{\prime}=1}\), since \(Y_{X=2}=0\wedge Y_{X=1}=1\) translates to \(Y^{\prime}_{X^{\prime}=1}=0\wedge Y^{\prime}_{X^{\prime}=1}=1\).
The main result of this section is that the phenomenon exhibited by Ex. 1 accounts for all representability failures:
**Theorem 1** (Abstract Representation).: Let \(\mathcal{R}\) be an RCM. Then there is a representable \(\mathcal{R}_{\text{L}}\) and constructive translation \(\tau\) such that \(\mathcal{R}\prec_{\tau}\mathcal{R}_{\text{L}}\).
It is worth remarking on the connection between a well-known twofold condition called the Stable Unit Treatment Value Assumption (SUTVA [16, SS1.6]) and causal abstraction. The first part of SUTVA is the assumption that "the potential outcomes for any unit do not vary with the treatments assigned to other units"; this is already presumed by our definition of causal model, which does not admit interventions on multiple units (however, see Rmk. A.1 for a way to model this condition within our framework as an application of abstraction). The second part is that "for each unit, there are no different forms or versions of each treatment level, which lead to different potential outcomes." Note that this assumption can be seen as guaranteeing the viability of the variable treatment levels abstraction, as it is simply a restatement of the condition we already identified--that the outcomes \(Y_{\operatorname{tr}^{i}}(u)\) for any unit \(u\) and treatment level \(\operatorname{tr}^{i}\) must all agree.5
Footnote 5: Imbens and Rubin [16] also mention ways of fulfilling this condition requiring changes to the causal model definition. In the supplement (Rmk. A.2) we show these alternative models can be represented within our framework.
## 2 Inference
The raison d'etre of a causal inference framework is to provide a language for encoding causal assumptions and showing when and why conclusions follow from available data and appropriate assumptions. In this section, to provide further granularity on the comparison between RCMs and SCMs, we introduce a neutral formal language that is naturally interpreted relative to both of these models. The language systematizes reasoning about the probabilities of counterfactuals. Fixing a set \(\mathfrak{O}\) of potential outcome pairs, we define a formal language \(\mathcal{L}\) in two stages:
**Definition 10**.: The _base language_\(\mathcal{L}_{\text{base}}\) is given by all Boolean combinations of statements \(Y_{\mathbf{x}}=y\), alternatively written \(y_{\mathbf{x}}\), for all \(Y_{\mathbf{x}}\in\mathfrak{O}\), \(y\in\operatorname{Val}(Y)\). Meanwhile, \(\mathcal{L}\) is defined as the set of Boolean combinations of inequalities \(\mathbf{t}_{1}\geqslant\mathbf{t}_{2}\), where \(\mathbf{t}_{1},\mathbf{t}_{2}\) are generated as sums, products, and additive inverses of probability terms \(\mathbf{P}(\varepsilon)\), where \(\varepsilon\in\mathcal{L}_{\text{base}}\).
The language \(\mathcal{L}\) is the most expressive in a sequence of three languages introduced in [13; 4] to formalize the "causal hierarchy" [27]. By restricting probability terms to purely "observational" or "interventional" quantities, it is possible to study the inferential limitations of data and assumptions at lower levels of this hierarchy. For present purposes, \(\mathcal{L}\) naturally encodes prominent reasoning patterns in RCMs and in SCMs. Its semantics are straightforward in any \(\mathcal{M}\) or \(\mathcal{R}\) that includes all outcomes \(\mathfrak{O}\): we generate a mapping of each polynomial term \(\mathbf{t}\mapsto\llbracket\mathbf{t}\rrbracket\in\mathbb{R}\) recursively with the crucial case being to map \(\mathbf{P}(\varepsilon)\) to the probability calculable by marginalization of \(p_{\mathcal{R}}^{\mathcal{R}}\) or \(p_{\mathcal{M}}^{\mathcal{M}}\), and then evaluate the atom \(\mathbf{t}_{1}\geqslant\mathbf{t}_{2}\) true iff \(\llbracket\mathbf{t}_{1}\rrbracket\geq\llbracket\mathbf{t}_{2}\rrbracket\), recursing to define a semantics for all of \(\mathcal{L}\). Over the class of all (recursive, possibly infinite) SCMs, \(\mathcal{L}\) has been axiomatized [13] by a set of principles called \(\mathsf{AX}_{3}\), and the complexity of its satisfiability problem has been shown complete for the class \(\exists\mathbb{R}\)[24]. The class of simple probability distributions over the atoms of \(\mathcal{L}_{\text{base}}\) is axiomatized by principles known as \(\mathsf{AX}_{1}\)[13], which we will abbreviate \(\mathsf{AX}\).
### Potential Outcomes Assumptions
Reasoning about potential outcomes is often encoded in what we call the base language \(\mathcal{L}_{\text{base}}\), augmented with (typically implicit universal) quantifiers over units. For instance, the well known _monotonicity_ (or "no defiers" who do the opposite of their prescription) assumption [15; 16] says
\[\forall u.X_{z^{-}}(u)=1\to X_{z^{+}}(u)=1, \tag{6}\]
where \(X\) and \(Z\) are binary variables respectively meaning the treatment (actually taken) and the treatment prescribed, with \(z^{+},z^{-}\) abbreviating \(Z=1,Z=0\) respectively. We will use the same abbreviation for other binary variables, so that the above condition can be written succinctly as \(x_{z^{+}}^{+}\to x_{z^{-}}^{+}\). We also adopt this interpretation of \(X,Z\) for the rest of SS2.1. We now explain how this and other causal assumptions in the potential outcomes framework can be encoded in \(\mathcal{L}\):
**Definition 11**.: Let \(\xi\) be a well-formed, closed predicate formula in prenex normal form with a single quantifier over a variable \(\{u\}\) and matrix in \(\mathcal{L}_{\text{base}}\); the \(u\) can alternately be included in the atoms, e.g., by writing \(Y_{\mathbf{x}}(u)=y\). Define its encoding \(\mathrm{T}(\xi)\in\mathcal{L}\) as follows:
\[\mathrm{T}(\xi)=\begin{cases}\mathbf{P}\bigl{(}\neg\mathrm{T}(\zeta)\bigr{)}=0,&\xi=\forall u.\zeta,\text{ where }\zeta\in\mathcal{L}_{\text{base}}\\ \mathbf{P}\bigl{(}\mathrm{T}(\zeta)\bigr{)}>0,&\xi=\exists u.\zeta,\text{ where } \zeta\in\mathcal{L}_{\text{base}}\end{cases}.\]
Note that \(\zeta\) is quantifier-free in both cases. Thus, e.g., the encoding of (6) is \(\mathbf{P}\bigl{[}\neg(x_{z^{-}}^{+}\to x_{z^{+}}^{+})\bigr{]}=0\).
Where \(S\) is a set of \(\mathcal{L}_{\text{base}}\) assumptions let \(\mathfrak{R}(S)\) be the class of RCMs whose potential outcomes obey every assumption in \(S\), thus obeying \(\forall u.\sigma\) where \(u\) ranges over units for any \(\sigma\in S\). Also let \(\mathrm{T}(S)=\{\mathrm{T}(\forall u.\sigma)\}_{\sigma\in S}\) be the encoding of \(S\) via Def. 11. Then we have the following:
**Theorem 2**.: \(\mathsf{AX}+\mathrm{T}(S)\) _is sound and complete for \(\mathfrak{R}(S)\)._
**Corollary 1**.: \(\mathsf{RCM}=\mathsf{AX}+\mathrm{T}(\mathrm{1})\) _is sound and complete for \(\mathfrak{R}_{\text{eff}}\)._
One consequence of this completeness result is that purely propositional and predicate logic reasoning about potential outcomes can be interweaved with probabilistic reasoning, as in Ex. 3 below. Another consequence is a complete axiomatization of SCMs (which can be seen as a probabilistic lift of [10]):
**Corollary 2**.: Let \(\mathsf{C}\), \(\mathsf{Rev}\) be universal statements of (2) and (3) respectively. Then \(\mathsf{SCM}=\mathsf{RCM}+\mathrm{T}(\mathsf{C})+\mathrm{T}(\mathsf{Rev})\) is sound and complete for \(\mathfrak{M}_{\text{uniq}}\) (where every outcome is included in \(\mathcal{L}_{\text{base}}\)).
**Example 3**.: A seminal result from [15; 1] is that it is possible to estimate the Average Treatment Effect among the population of units who _comply_ with their treatment assignment, a quantity known as the _Local_ Average Treatment Effect (LATE): \(\mathbf{E}(Y_{x^{+}}-Y_{x^{-}}|x_{z^{+}}^{+}\wedge x_{z^{-}}^{-})\), with \(Y\) the outcome, which we assume binary purely for simplicity, and without loss of generality. Thm. 2 implies that this can be verified in our calculus, by appeal to two key assumptions [15; 1]: monotonicity (6) and
\[\text{Exclusion restriction (ER)}.\qquad\forall u.y_{z^{-},x}(u)\leftrightarrow y_{z^{+}, x}(u). \tag{7}\]
The original discovery was that these principles guarantee that \(\text{LATE}=\mathsf{IT}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{ T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T}\mathsf{T} \mathsf{where in \(\mathrm{ITT}_{1}\), interventions like \(X_{z}\) set \(X\) at the unit level to the value that it would take under the intervention setting \(Z\) to \(z\); thus, e.g., we have \(\mathbf{P}(y_{z,X_{z}})=\mathbf{P}(y_{z,x^{+}}\wedge x_{z}^{+})+\mathbf{P}(y_{z, x^{-}}\wedge x_{z}^{-})\). Crucially, these two quantities can be estimated, e.g., through randomized experiments [16, Ch. 23].
However, over our most general class \(\mathfrak{R}_{\mathrm{eff}}\) of RCMs, these two assumptions are not in fact sufficient to identify LATE. Fig. 1 illustrates a family of RCMs that satisfy (6) and (7), but disagree on LATE. An additional principle, which Angrist et al. [1] offer as a matter of notation, we dub:
\[\text{Outcome decomposition.}\qquad\forall u.y_{x}(u)\leftrightarrow y_{z^{+},x} (u). \tag{8}\]
It can then be shown that, taken together, (6), (7), and (8) do indeed logically entail \(\mathrm{LATE}=\mathrm{ITT}_{1}/\mathrm{ITT}_{2}\); see Prop. B.1 in the technical appendix for the derivation.
There has been much discussion of monotonicity and exclusion restrictions (which are closely related to graphical assumptions; see SS2.2 below), but what might justify outcome decomposition (8)? One intuition might be that it somehow follows from the exclusion restriction (7): if the effect of \(X\) on \(Y\) is the same no matter the value of \(Z\), then it would seem that omitting \(z^{+}\) in the intervention should have no impact on the effect of \(X\) on \(Y\). Of course, the example in Fig. 1 shows that this is too quick.
It turns out that (8) does follow from (7) if we restrict attention to _representable_ RCMs. In fact, (8) is derivable from (7) and the principle of _composition_ (2) in the calculus \(\mathsf{AX}\), so long as we can reason along the way about the potential response \(Z_{x}\). By composition, for any \(x\) and \(y\) we have \(y_{x}\wedge z_{x}^{+}\to y_{z^{+},x}\) and \(y_{x}\wedge z_{x}^{-}\to y_{z^{-},x}\), and by ER (7) the latter gives \(y_{x}\wedge z_{x}^{-}\to y_{z^{+},x}\). As \(Z_{x}\) is binary, we have \(z_{x}^{+}\lor z_{x}^{-}\), and thus by propositional reasoning, \(y_{x}\to y_{z^{+},x}\). The other direction \(y_{z^{+},x}\to y_{x}\) follows from the same argument by contraposition, as \(Y\) too is binary.
Thus, while the full power of composition is not invoked, it is natural to read this example and much of the literature as implicitly assuming something like representability (thus implying composition). Another source of support for this is that under representability one can show (see Prop. B.2) that \(\mathrm{ITT}_{1}=\mathbf{E}(Y_{z^{+}}-Y_{z^{-}})\), an even simpler and manifestly identifiable expression for this average effect.
### Graphical Assumptions
As we saw above (Prop. 1), SCMs can be understood as _representations_ of suitable RCMs. As such, they also suggest further sources of assumptions for deriving causal inferences. In particular, qualitative patterns of functional dependence introduce the possibility of _graphical_ methods:
**Definition 12**.: Let \(\mathcal{M}=\left\langle\mathbf{U},\mathbf{V},\{f_{V}\}_{V\in\mathbf{V}},P \right\rangle\) be an SCM. Then define the _causal diagram_\(\mathcal{G}(\mathcal{M})\) of \(\mathcal{M}\) as a graph over nodes \(\mathbf{V}\), with mixed directed edges \(\rightarrow\) and _bidirected arcs_\(\leftarrow\)\(\rightarrow\)\(\rightarrow\)\(\rightarrow\). For any \(V,V^{\prime}\in\mathbf{V}\), there is a directed edge \(V\to V^{\prime}\) if \(V\in\mathbf{P}_{\mathbf{a}V^{\prime}}\), and there is a bidirected edge \(V\)\(\leftarrow\)\(\rightarrow\)\(\rightarrow\)\(V^{\prime}\) if \(\mathbf{U}_{V},\mathbf{U}_{V^{\prime}}\subset\mathbf{U}\) are correlated under \(P\) (including if \(\mathbf{U}_{V}\cap\mathbf{U}_{V^{\prime}}\neq\varnothing\)).
Letting \(\mathfrak{M}(\mathcal{G})\) be the set of SCMs with diagram \(\mathcal{G}\), we have \(\mathfrak{M}(\mathcal{G})\subset\mathfrak{M}_{\min}\) provided the directed edges in \(\mathcal{G}\) form a dag (see Lem. A.3). We thus assume this acyclicity of any \(\mathcal{G}\). When interpreting over an SCM, we include every possible potential outcome in \(\mathcal{L}\). Just as we earlier encoded assumptions about the potential outcomes of an RCM into \(\mathcal{L}\), we may do the same for SCMs regarding their graphs. A first observation is that Def. 11 translates axiom C6 of [10] to \(\mathsf{ProbRec}\) of [13], thus
Figure 1: A family of RCMs \(\left\{\mathcal{R}(\varepsilon)\right\}_{0\leq\varepsilon\leq\nicefrac{{1}}{{4}}}\) such that \(\mathcal{R}(\varepsilon)\) is representable iff \(\varepsilon=0\) (though (2) and (3) are met for any unit). Note that all members of this family meet (6) and (7) (the latter guaranteed since columns \(Y_{z^{+},z}(u)\), \(Y_{x^{-},z}(u)\) give the potential outcome for any \(z\)). Also, any experimentally testable quantities—including, in particular, \(\mathrm{ITT}_{1}\) and \(\mathrm{ITT}_{2}\)—agree across the family, with \(\frac{\mathrm{ITT}_{1}}{\mathrm{ITT}_{2}}=\nicefrac{{1}}{{2}}\). However the assumption in question (8) holds in \(\mathcal{R}(\varepsilon)\) iff \(\varepsilon=0\), and \(\mathrm{LATE}=\nicefrac{{1}}{{2}}+\varepsilon\), so that \(\mathrm{LATE}=\frac{\mathrm{ITT}_{1}}{\mathrm{ITT}_{2}}\) only when this holds, and it is not estimable in general.
rederiving the system \(\mathsf{AX}_{3}\) for the class of all acyclic SCMs, i.e. \(\bigcup_{\mathcal{G}}\mathfrak{M}(\mathcal{G})\), from the latter. We now encode the content of (the assumption of having) a _particular_ diagram \(\mathcal{G}\) into \(\mathcal{L}\). Let \(\mathbf{Pa}_{V}^{\mathcal{G}}=\{V^{\prime}\in\mathbf{V}:V^{\prime}\to V\in \mathcal{G}\}\) be the directed parents in a graph \(\mathcal{G}\) of a vertex \(V\). We encode by way of two schemas, encapsulating what some [28] have called "the two fundamental laws of causal inference":
**Definition 13**.: Let the exclusion restriction schema \(\mathsf{ER}^{\mathcal{G}}\) be the \(\mathcal{L}_{\mathsf{base}}\) principle \(y_{\mathbf{a}}\leftrightarrow y_{\mathbf{p}}\), for all variables \(Y\in\mathbf{V}\) and sets of variables \(\mathbf{A}\supset\mathbf{Pa}_{V}^{\mathcal{G}}\), where \(y\in\operatorname{Val}(Y)\), \(\mathbf{a}\in\operatorname{Val}(\mathbf{A})\), \(\mathbf{p}=\pi_{\mathbf{Pa}_{V}^{\mathcal{G}}}(\mathbf{a})\). Let the counterfactual independence schema \(\mathsf{cf}\mbox{-}\mathsf{sep}^{\mathcal{G}}\) be, for all pairs of variable sets \(\{Y_{i}\}_{1\leq i\leq n},\{Y^{\prime}_{j}\}_{1\leq j\leq n^{\prime}}\subset \mathbf{V}\) such that there are no \(Y_{i},Y^{\prime}_{j}\) for which \(Y_{i}=Y^{\prime}_{j}\) or \(Y_{i}\mbox{+}\mbox{-}\mbox{-}\mbox{-}\mbox{-}Y^{\prime}_{j}\) in \(\mathcal{G}\),
\[\mathsf{cf}\mbox{-}\mathsf{sep}^{\mathcal{G}}.\qquad\mathbf{P}\big{[}\bigwedge_ {1\leq i\leq n}(y_{i})_{\mathbf{p}_{i}}\wedge\bigwedge_{1\leq j\leq n^{\prime} }(y^{\prime}_{j})_{\mathbf{p}^{\prime}_{j}}\big{]}=\mathbf{P}\big{[}\bigwedge_ {1\leq i\leq n}(y_{i})_{\mathbf{p}_{i}}\big{]}\cdot\mathbf{P}\big{[}\bigwedge_ {1\leq j\leq n^{\prime}}(y^{\prime}_{j})_{\mathbf{p}_{j}}\big{]} \tag{9}\]
where \(y_{i}\in\operatorname{Val}(Y_{i})\), \(y^{\prime}_{j}\in\operatorname{Val}(Y^{\prime}_{j})\), \(\mathbf{p}_{i}\in\operatorname{Val}(\mathbf{Pa}_{V_{i}}^{\mathcal{G}}),\mathbf{ p}^{\prime}_{j}\in\operatorname{Val}(\mathbf{Pa}_{V_{j}^{\prime}}^{\mathcal{G}})\) for each \(Y_{i}\), \(Y^{\prime}_{j}\). Then the translation of \(\mathcal{G}\) is the combination of axioms \(\operatorname{T}(\mathcal{G})=\operatorname{T}(\mathsf{ER}^{\mathcal{G}})+ \mathsf{cf}\mbox{-}\mathsf{sep}^{\mathcal{G}}\).
Note that while Ex. 3 in no way relies on graphs, if we accept a \(\mathcal{G}\) where \(Z\not\to Y\), then \(\mathsf{ER}^{\mathcal{G}}\) yields \(y_{x}\leftrightarrow y_{zx}\leftrightarrow y_{z^{\prime}x}\) without further ado. Importantly, however, \(\operatorname{T}(\mathcal{G})\) is not valid over \(\mathfrak{M}(\mathcal{G})\) for any \(\mathcal{G}\) containing the edge \(Z\to X\), revealing an extra-graphical provenance. On the other hand, \(\mathsf{cf}\mbox{-}\mathsf{sep}\) is inexpressible in \(\mathcal{L}_{\mathsf{base}}\)--inferentially, the two approaches are incomparable.
A long-standing question has been whether exclusion restriction and independence axioms together could be _complete_, in that they capture all the inferential content of a given causal diagram \(\mathcal{G}\) (see, e.g., [39; 8; 31]). Answering such questions can help with the development of tractable inference methods. Partial completeness results for limited queries are known [35], and the method from Tian [39] supplies an algorithm that is complete with respect to all equality constraints [8]. Placing no limitations on queries beyond their expressibility in \(\mathcal{L}\)--and thus including inequality constraints as well--but making certain restrictions on \(\mathcal{G}\), we answer this question in the affirmative:
**Theorem 3**.: For any acyclic diagram \(\mathcal{G}\), axioms \(\operatorname{T}(\mathcal{G})+\mathsf{SCM}\) are sound for \(\mathcal{L}\) over \(\mathfrak{M}(\mathcal{G})\), and also complete if the bidirected arcs in \(\mathcal{G}\) form a disjoint union of complete graphs.
Often the famous _d-separation_ conditional independence criterion (Def. B.2) is used in place of our \(\mathsf{cf}\mbox{-}\mathsf{sep}\). Since all instances of the latter are instances of the former, our Thm. 3 is stronger (see Cor. B.1). This completeness result implies that for such a \(\mathcal{G}\), any known graphical conclusions--including _do_-calculus, identifiability results, and bounds--can be rederived in our calculus, e.g.:
**Example 4** (Verma constraints).: We derive the _Verma constraint_[40; 39] over the graph \(\mathcal{G}\) of Fig. 1(a) that \(\sum_{w}\mathbf{P}(y\mid z,w,x)\mathbf{P}(w\mid x)\) does not depend functionally on \(x\):
\[\sum_{w}\frac{\mathbf{P}(y,z,w,x)\mathbf{P}(w,x)}{\mathbf{P}(z,w, x)\mathbf{P}(x)}\stackrel{{\operatorname{T}(\mathcal{C})}}{{=}}\sum_{w} \frac{\mathbf{P}(y_{zwx},z_{ywx},w_{yzx},x_{yzw})\mathbf{P}(w_{x},x_{w})}{ \mathbf{P}(z_{wx},w_{zx},x_{zw})\mathbf{P}(x)}\] \[\stackrel{{\operatorname{T}(\mathsf{ER}^{\mathcal{G} })}}{{=}}\sum_{w}\frac{\mathbf{P}(y_{z},z_{w},w_{x},x)\mathbf{P}(w_{x},x)}{ \mathbf{P}(z_{w},w_{x},x)\mathbf{P}(x)}\stackrel{{\eqref{eq: \eqref{eq:\eqref{eq:\eqref{eq:\eqref{eq:\eqref{eq:\eqref{eq:\eqref{eq:\eqref{eq: \eqref{eq:\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq\[\{V_{\pi_{\mathbf{A}}(\mathbf{x})}:V\in\mathbf{V}\}\cup\{\pi_{V}( \mathbf{x}):V\in\mathbf{X}\},\text{ where }\mathbf{A}=(\mathbf{A}\mathbf{n}_{V}^{ \mathcal{D}}\cap\mathbf{X})\setminus\{V\},\text{ with nodes in the first set in this union termed _random_, and those in the second termed _fixed_. This SWIG has edges
\[\{\pi_{X}(\mathbf{x})\to V_{\mathbf{p}}:X\in\mathbf{X},\mathcal{G}\text{ has edge }X\to V\}\cup\{V_{\mathbf{p}}\to V_{\mathbf{p}^{\prime}}^{\prime}:V\notin \mathbf{X},\mathcal{G}\text{ has edge }V\to V^{\prime}\}.\]
See Fig. 1(c) for an example of this construction. Note that edges in the first set in the union above have fixed heads and random tails, while those in the second set have random heads and tails.
**Definition 15**.: Define the following conditional independence schema \(\mathsf{sw\text{-}sep^{\mathcal{D}_{\mathbf{x}}}}\), for any sets of random nodes \(\{(X_{i})_{\mathbf{p}_{i}}:1\leq i\leq l\}\) and \(\{(Y_{j})_{\mathbf{p}^{\prime}_{j}}:1\leq j\leq m\}\) that are d-separated (Def. B.1) given \(\{(Z_{k})_{\mathbf{p}^{\prime\prime}_{k}}:1\leq k\leq n\}\) in the SWIG \(\mathcal{D}_{\mathbf{x}}\):
\[\mathsf{sw\text{-}sep^{\mathcal{D}_{\mathbf{x}}}}.\quad\mathbf{P} \big{[}\bigwedge_{\begin{subarray}{c}1\leq i\leq l\\ 1\leq j\leq m\end{subarray}}(x_{i})_{\mathbf{p}_{i}}\wedge(y_{j})_{\mathbf{p} ^{\prime}_{j}}\big{]}\cdot\mathbf{P}\big{[}\bigwedge_{1\leq k\leq n}(z_{k})_{ \mathbf{p}^{\prime\prime}_{k}}\big{]}\\ =\mathbf{P}\big{[}\bigwedge_{\begin{subarray}{c}1\leq i\leq l\\ 1\leq k\leq n\end{subarray}}(x_{i})_{\mathbf{p}_{i}}\wedge(z_{k})_{\mathbf{p} ^{\prime\prime}_{k}}\big{]}\cdot\mathbf{P}\big{[}\bigwedge_{\begin{subarray}{c }1\leq j\leq m\\ 1\leq k\leq n\end{subarray}}(y_{j})_{\mathbf{p}^{\prime}_{j}}\wedge(z_{k})_{ \mathbf{p}^{\prime\prime}_{k}}\big{]}. \tag{10}\]
One notable model associated with SWIGs is the _FFRCISTG_[32]; given the same graph, FFRCISTGs are compatible with SCMs [30] but issue fewer (potentially controversial) implications:
**Definition 16**.: Let \(\mathcal{R}\) be a full RCM. Then \(\mathcal{R}\) is a _FFRCISTG over \(\mathcal{D}\)_ if every instance of \(\mathrm{T}(\mathsf{ER}^{\mathcal{D}})\) and \(\mathsf{sw\text{-}sep^{\mathcal{D}}}\) holds in its counterfactual distribution. Let \(\mathfrak{F}(\mathcal{D})\) be the class of FFRCISTGs over \(\mathcal{D}\).
**Proposition 3**.: Suppose the SCM \(\mathcal{M}\in\mathfrak{M}(\mathcal{D})\) represents the full RCM \(\mathcal{R}\). Then \(\mathcal{R}\in\mathfrak{F}(\mathcal{D})\).
Given that Def. 16 already defines \(\mathfrak{F}(\mathcal{D})\) in terms of \(\mathcal{L}\)-principles, while [30] have shown the soundness direction, the following is straightforward:
**Theorem 4**.: \(\mathsf{RCM}+\mathrm{T}(\mathsf{ER}^{\mathcal{D}})+\bigcup_{\mathbf{x}} \mathsf{sw\text{-}sep^{\mathcal{D}_{\mathbf{x}}}}\) is sound and complete over \(\mathfrak{F}(\mathcal{D})\).
## 3 Conclusion
The task of this paper has been to clarify the senses in which the Rubin causal model and structural causal models are very closely related formalisms for encoding causal assumptions and deriving causal conclusions. We concur with [23], [14], [41] and others that "there are insights that arise when using each that are less transparent when using the other" [41, p. 8]. Our interest in this paper has been to elucidate the comparison from a theoretical ("in principle") perspective.
We do not suppose that the present work will be the final word on theoretical connections between RCMs and SCMs. On the contrary, there remain numerous open questions. Perhaps chief among these is the generalization of Thm. 3 to encompass all possible causal diagrams (not just those in which the bidirected arcs form a disjoint union of complete graphs). Does the theorem hold with no further principles, or do additional algebraic constraints arise? This important open question [39, 8, 31] is a crucial step toward a complete theoretical synthesis of the two frameworks.
#### Acknowledgments
This work was partially supported by a seed grant from the Stanford Institute for Human-Centered Artificial Intelligence. We also thank Elias Bareinboim and Guido Imbens for helpful conversations.
## References
* [1] J. D. Angrist, G. W. Imbens, and D. B. Rubin. Identification of causal effects using instrumental variables. _Journal of the American Statistical Association_, 91(434):444-455, 1996.
* [2] E. Bareinboim and J. Pearl. Causal inference and the data-fusion problem. _Proceedings of the National Academy of Sciences_, 113(27):7345-7352, 2016.
* [3] E. Bareinboim, A. Forney, and J. Pearl. Bandits with unobserved confounders: A causal approach. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 28. Curran Associates, Inc., 2015.
* [4] E. Bareinboim, J. Correa, D. Ibeling, and T. Icard. On Pearl's hierarchy and the foundations of causal inference. In H. Geffner, R. Dechter, and J. Y. Halpern, editors, _Probabilistic and Causal Inference: The Works of Judea Pearl_, pages 509-556. ACM Books, 2022.
* [5] S. Beckers, F. Eberhardt, and J. Y. Halpern. Approximate causal abstractions. In _Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI)_, 2019.
* [6] T. Blom, S. Bongers, and J. M. Mooij. Beyond structural causal models: Causal constraints models. In _Proceedings of 35th Conference on Uncertainty in Artificial Intelligence (UAI)_, 2019.
* [7] S. Cole and C. Frangakis. The consistency statement in causal inference: A definition or an assumption? _Epidemiology_, 20(1):3-5, Jan. 2009. ISSN 1044-3983. doi: 10.1097/EDE.0b013e31818ef366.
* 2656, 2018.
* [9] D. Galles and J. Pearl. An axiomatic characterization of causal counterfactuals. _Foundations of Science_, 3(1):151-182, Jan 1998.
* [10] J. Y. Halpern. Axiomatizing causal reasoning. _Journal of AI Research_, 12:317-337, 2000.
* [11] P. W. Holland. Statistics and causal inference. _Journal of the American Statistical Association_, 81(396):945-960, 1986.
* [12] P. W. Holland. Rejoinder. _Journal of the American Statistical Association_, 81(396):968-970, 1986.
* [13] D. Ibeling and T. Icard. Probabilistic reasoning across the causal hierarchy. In _Proceedings of AAAI_, 2020.
* [14] G. W. Imbens. Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics. _Journal of Economic Literature_, 58(4):1129-1179, 2020.
* [15] G. W. Imbens and J. D. Angrist. Identification and estimation of local average treatment effects. _Econometrica_, 62(2):467-475, 1994.
* [16] G. W. Imbens and D. B. Rubin. _Causal Inference in Statistics, Social, and Biomedical Sciences_. Cambridge University Press, 2015.
* [17] J. Jung, R. Shroff, A. Feller, and S. Goel. Bayesian sensitivity analysis for offline policy evaluation. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_, page 64-70, 2020.
* [18] N. Kallus and A. Zhou. Confounding-robust policy improvement. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31, 2018.
* [19] S. Lee, J. Correa, and E. Bareinboim. General identifiability with arbitrary surrogate experiments. In _Proceedings of UAI_, 2019.
* [20] D. Malinsky, I. Shpitser, and T. Richardson. A potential outcomes calculus for identifying conditional path-specific effects. In K. Chaudhuri and M. Sugiyama, editors, _Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics_, volume 89, pages 3080-3088, 2019.
* [21] K. A. Markus. Causal effects and counterfactual conditionals: contrasting Rubin, Lewis and Pearl. _Economics and Philosophy_, 37:441-461, 2021.
* [22] C. Meek and C. Glymour. Conditioning and intervening. _The British Journal for the Philosophy of Science_, 45:1001-1021, 1994.
* [23] S. L. Morgan and C. Winship. _Counterfactuals and Causal Inference_. Cambridge University Press, 2nd edition, 2014.
* [24] M. Mosse, D. Ibeling, and T. Icard. Is causal reasoning harder than probabilistic reasoning? _Review of Symbolic Logic_, 2021. Forthcoming.
* [25] H. Namkoong, R. Keramati, S. Yadlowsky, and E. Brunskill. Off-policy policy evaluation for sequential decisions under unobserved confounding. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 18819-18831, 2020.
* [26] J. Pearl. Causal diagrams for empirical research. _Biometrika_, 82(4):669-710, 1995.
* [27] J. Pearl. _Causality_. Cambridge University Press, 2009.
* [28] J. Pearl. The mathematics of causal inference. In _Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, KDD '11, page 5, New York, NY, USA, 2011. Association for Computing Machinery. ISBN 9781450308137. doi: 10.1145/2020408.2020416. URL [https://doi.org/10.1145/2020408.2020416](https://doi.org/10.1145/2020408.2020416).
* [29] S. Peters and J. Y. Halpern. Causal modeling with infinitely many variables, 2021.
* [30] T. S. Richardson and J. M. Robins. Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. Working Paper Number 128, Center for Statistics and the Social Sciences, University of Washington, 2013.
* [31] T. S. Richardson, R. J. Evans, J. M. Robins, and I. Shpitser. Nested Markov properties for acyclic directed mixed graphs. _The Annals of Statistics_, 51(1):334-361, 2023.
* [32] J. Robins. A new approach to causal inference in mortality studies with a sustained exposure period--application to control of the healthy worker survivor effect. _Mathematical Modelling_, 7(9):1393-1512, 1986. ISSN 0270-0255. doi: [https://doi.org/10.1016/0270-0255](https://doi.org/10.1016/0270-0255)(86)90088-6. URL [https://www.sciencedirect.com/science/article/pii/0270025586900886](https://www.sciencedirect.com/science/article/pii/0270025586900886).
* [33] P. K. Rubenstein, S. Weichwald, S. Bongers, J. M. Mooij, D. Janzing, M. Grosse-Wentrup, and B. Scholkopf. Causal consistency of structural equation models. In _Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI)_, 2017.
* [34] D. B. Rubin. Estimating causal effects of treatments in randomized and non-randomized studies. _Journal of Educational Psychology_, 66:688-701, 1974.
* [35] I. Shpitser and J. Pearl. Complete identification methods for the causal hierarchy. _Journal of Machine Learning Research_, 9:1941-1979, 2008.
* [36] I. Shpitser and E. Tchetgen Tchetgen. Causal inference with a graphical hierarchy of interventions. _Annals of Statistics_, 44(6):2433-2466, 2016.
* [37] P. Spirtes, C. Glymour, and R. Scheines. _Causation, Prediction, and Search_. MIT Press, 2001.
* [38] G. Tennenholtz, U. Shalit, and S. Mannor. Off-policy evaluation in partially observable environments. _Proceedings of the AAAI Conference on Artificial Intelligence_, 34(6):10276-10283, 2020.
* [39] J. Tian and J. Pearl. On the testable implications of causal models with hidden variables. In _Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence_, UAI'02, page 519-527, 2002.
* [40] T. Verma and J. Pearl. Equivalence and synthesis of causal models. In _Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence_, UAI '90, page 255-270, USA, 1990. Elsevier Science Inc. ISBN 0444892648.
* [41] N. Weinberger. Comparing Rubin and Pearl's causal modelling frameworks: a commentary on Markus (2021). _Economics and Philosophy_, pages 1-9, 2022.
* [42] J. Zhang, D. Kumor, and E. Bareinboim. Causal imitation learning with unobserved confounders. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 12263-12274, 2020. | ## Review
### Summary
This paper presents a logical framework that connects the Rubin causal model (RCM) and structural causal model (SCM) approaches to causality. It demonstrates that all RCMs can be represented by SCMs under certain assumptions and elucidates the necessary assumptions for instrumental variable inference within this unified framework. The authors highlight the mathematical connections between the two frameworks and explore the implications of these relationships, providing insights into the axiomatic properties and causal assumptions inherent in both models. Overall, the work contributes significantly to the foundations of causality theory, offering a new perspective on the connections between these two important schools of thought.
### Strengths
- Offers a formal connection between potential outcomes and structural causal models, which is often only stated informally.
- Presents a mathematically rigorous comparison, providing novel insights and important connections between the RCM and SCM frameworks.
- Utilizes causal abstractions effectively, allowing for a flexible interpretation of causal properties across different variable sets.
- Contains clear writing and well-defined examples that facilitate understanding of complex concepts.
### Weaknesses
- The paper is dense and introduces sophisticated ideas that could benefit from more extensive explanations and examples.
- Transitions between sections are sometimes unclear, making it difficult to follow the logical flow of the paper.
- The ultimate implications of the findings are not thoroughly discussed, which may leave readers unclear about practical takeaways.
- Some notation and concepts are not explained well, potentially hindering understanding for readers less familiar with the literature.
### Questions
- What is the connection of this work to single world intervention graphs (SWIGs)?
- Can the authors clarify the implications of the equivalence between RCMs and SCMs for practical causal inference?
- What role do soft interventions play in the SCM framework, and how would their inclusion affect the results presented in this paper?
- Could the authors provide more context or examples to clarify the use of certain notations and assumptions throughout the paper?
### Soundness
**Score:** 3
**Description:** 3 = good; the paper presents solid mathematical foundations and logical reasoning, although some sections could benefit from clearer explanations.
### Presentation
**Score:** 3
**Description:** 3 = good; while the writing is generally clear, the density of the material and some unclear transitions detract from overall readability.
### Contribution
**Score:** 4
**Description:** 4 = excellent; the paper makes a significant contribution to the understanding of causal models and their connections, potentially impacting future research in the area.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and offers moderate-to-high impact insights, with some concerns regarding clarity and presentation that could be addressed in a longer format.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper is original and presents a valuable theoretical advancement in the understanding of causal frameworks. It effectively bridges two important schools of thought in causality, despite some weaknesses in clarity and presentation. The significant contribution to the foundations of causality theory justifies acceptance, especially for a poster presentation where the authors can elaborate further on their findings.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# DiffuseBot: Breeding Soft Robots With
Physics-Augmented Generative Diffusion Models
Tsun-Hsuan Wang\({}^{1,}\), Juntian Zheng\({}^{2,3}\), Pingchuan Ma\({}^{1}\), Yilun Du\({}^{1}\), Byungchul Kim\({}^{1}\),
**Andrew Spielberg\({}^{1,4}\), Joshua B. Tenenbaum\({}^{1}\), Chuang Gan\({}^{1,3,5,}\), Daniela Rus\({}^{1,\dagger}\)**
\({}^{1}\)MIT, \({}^{2}\)Tsinghua University, \({}^{3}\)MIT-IBM Watson AI Lab, \({}^{4}\)Harvard, \({}^{5}\)UMass Amherst
[https://diffusebot.github.io/](https://diffusebot.github.io/)
This work was supported by the NSF EFRI Program (Grant No. 1830901), DSO grant DSOCO2107, DARPA Fellowship Grant HR00112110007, and MIT-IBM Watson AI Lab. \({}^{\dagger}\) indicates equal advising.
###### Abstract
Nature evolves creatures with a high complexity of morphological and behavioral intelligence, meanwhile computational methods lag in approaching that diversity and efficacy. Co-optimization of artificial creatures' morphology and control _in silico_ shows promise for applications in physical soft robotics and virtual character creation; such approaches, however, require developing new learning algorithms that can reason about function atop pure structure. In this paper, we present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks. DiffuseBot bridges the gap between virtually generated content and physical utility by _(i)_ augmenting the diffusion process with a physical dynamical simulation which provides a certificate of performance, and _ii)_ introducing a co-design procedure that jointly optimizes physical design and control by leveraging information about physical sensitivities from differentiable simulation. We showcase a range of simulated and fabricated robots along with their capabilities.
## 1 Introduction
Designing dynamical virtual creatures or real-world cyberphysical systems requires reasoning about complex trade-offs in system geometry, components, and behavior. But, what if designing such systems could be made simpler, or even automated wholesale from high-level functional specifications? Freed to focus on higher-level tasks, engineers could explore, prototype, and iterate more quickly, focusing more on understanding the problem, and find novel, more performant designs. We present DiffuseBot, a first step toward efficient automatic robotic and virtual creature content creation, as an attempt at closing the stubborn gap between the wide diversity and capability of Nature _vis-a-vis_ evolution, and the reiterative quality of modern soft robotics.
Specifically, we leverage diffusion-based algorithms as a means of efficiently and generatively co-designing soft robot morphology and control for target tasks. Compared with with previous approaches, DiffuseBot's learning-based approach maintains evolutionary algorithms' ability to search over diverse forms while exploiting the efficient nature of gradient-based optimization. DiffuseBot is made possible by the revolutionary progress of AI-driven content generation, which is now able to synthesize convincing media such as images, audio, and animations, conditioned on human input. However, other than raw statistical modeling, these methods are typically task- and physics-oblivious, and tend to provide no fundamental reasoning about the performance of generated outputs. We provide the first method for bridging the gap between diffusion processes and the morphological design of cyberphysical systems, guided by physical simulation, enabling the computational creative design of virtual and physical creatures.
While diffusion methods can robustly sample objects with coherent spatial structure from raw noise in a step-by-step fashion, several roadblocks preventing existing generative algorithms from being directly applied to physical soft robot co-design. First, while existing diffusion methods can generate 2D or 3D shapes, useful for, say, sampling robot geometry, they do not consider physics, nor are they directly aligned with the robotic task performance. As an alternative approach, one might consider learning a diffusion model supervised directly on a dataset of highly-performant robot designs mapped to their task performance. This leads us to the second roadblock, that is, that no such dataset exists, and, more crucially, that curating such a dataset would require a prohibitive amount of human effort and would fail to transfer to novel tasks outside that dataset.
To tackle these challenges, we propose using physical simulation to guide the generative process of pretrained large-scale 3D diffusion models. Diffusion models pretrained for 3D shapes provide an expressive base distribution that can effectively propose reasonable candidate geometries for soft robots. Next, we develop an automatic procedure to convert raw 3D geometry to a representation compatible with soft body simulation, _i.e._ one that parameterizes actuator placement and specifies material stiffness. Finally, in order to sample robots in a physics-aware and performance-driven manner, we apply two methods that leverage physically based simulation. First, we optimize the embeddings that condition the diffusion model, skewing the sampling distribution toward better-performing robots as evaluated by our simulator. Second, we reformulate the sampling process that incorporates co-optimization over structure and control. We showcase the proposed approach of DiffuseBot by demonstrating automatically synthesized, novel robot designs for a wide spectrum of tasks, including balancing, landing, crawling, hurdling, gripping, and moving objects, and demonstrate its superiority to comparable approaches. We further demonstrate DiffuseBot's amenability to incorporating human semantic input as part of the robot generation process. Finally, we demonstrate the physical realizability of the robots generated by DiffuseBot with a proof-of-concept 3D printed real-world robot, introducing the possibility of AI-powered end-to-end CAD-CAM pipelines.
In summary, we contribute:
\(\bullet\) A new framework that augments the diffusion-based synthesis with physical dynamical simulation in order to generatively co-design task-driven soft robots in morphology and control.
\(\bullet\) Methods for driving robot generation in a task-driven way toward improved physical utility by optimizing input embeddings and incorporating differentiable physics into the diffusion process.
\(\bullet\) Extensive experiments in simulation to verify the effectiveness of DiffuseBot, extensions to text-conditioned functional robot design, and a proof-of-concept physical robot as a real-world result.
## 2 Method
In this section, we first formulate the problem (Section 2.1) and then describe the proposed DiffuseBot framework, which consists of diffusion-based 3D shape generation (Section 2.2), a differentiable procedure that converts samples from the diffusion models into soft robots (Section 2.3), a technique to optimize embeddings conditioned by the diffusion model for improved physical utility (Section 2.4), and a reformulation of diffusion process into co-design optimization (Section 2.4).
### Problem Formulation: Soft Robot Co-design
Soft robot co-design refers to a joint optimization of the morphology and control of soft robots. The morphology commonly involves robot geometry, body stiffness, and actuator placement. Control is the signal to the actuators prescribed by a given robot morphology. It can be formally defined as,
Figure 1: DiffuseBot aims to augment diffusion models with physical utility and designs for high-level functional specifications including robot geometry, material stiffness, and actuator placement.
\[\min_{\Psi,\phi}\mathcal{L}(\Psi,\phi)=\min_{\Psi,\phi}\mathcal{L}(\{\mathbf{u}_{h }(\mathbf{s}_{h};\phi,\Psi),\mathbf{s}_{h}\}_{h\in[1,H]}),\;\;\;\text{where}\; \mathbf{s}_{h+1}=f(\mathbf{s}_{h},\mathbf{u}_{h}) \tag{1}\]
where \(\Psi\) is robot morphology that includes geometry \(\Psi_{\text{geo}}\), stiffness \(\Psi_{\text{st}}\), and actuator placement \(\Psi_{\text{act}}\); \(\mathbf{u}_{h}\) is actuation with the controller's parameters \(\phi\) and dependency on the robot morphology \(\Psi\), \(\mathbf{s}_{h}\) is the simulator state, \(f\) is the environmental dynamics (namely the continuum mechanics of soft robots), and \(H\) is robot time horizon (not to be confused with the later-on mentioned diffusion time). Co-design poses challenges in optimization including complex interdependencies between body and control variables, ambiguity between competing morphology modifications, trade-offs between flexibility and efficacy in design representations, etc. [61]. In this work, we aim at leveraging the generative power of diffusion models in searching for optimal robot designs with (1) the potential to synthesize highly-diverse robots and (2) inherent structural biases in the pre-trained 3D generative models learned from large-scale 3D datasets to achieve efficient optimization.
### 3D Shape Generation with Diffusion-based Models
Diffusion-based generative models [24; 52] aim to model a data distribution by augmenting it with auxiliary variables \(\{\mathbf{x}_{t}\}_{t=1}^{T}\) defining a Gaussian diffusion process \(p(\mathbf{x}_{0})=\int p(\mathbf{x}_{T})\prod_{t=1}^{T}p(\mathbf{x}_{t-1}| \mathbf{x}_{t})d\mathbf{x}_{1:T}\) with the transition kernel in the forward process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t }}\mathbf{x}_{t-1},\beta_{t}\mathbf{I})\) for some \(0<\beta_{t}<1\). For sufficiently large \(T\), we have \(p(\mathbf{x}_{T})\approx\mathcal{N}(\mathbf{0},\mathbf{I})\). This formulation enables an analytical marginal at any diffusion time \(\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\) based on clean data \(\mathbf{x}_{0}\), where \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\bar{\alpha}_{t}=\prod_{i=1}^{t}1-\beta_{i}\). The goal of the diffusion model (or more precisely the denoiser \(\epsilon_{\theta}\)) is to learn the reverse diffusion process \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) with the loss,
\[\min_{\theta}\mathbb{E}_{t\sim[1,T],p(\mathbf{x}_{0}),\mathcal{N}(\epsilon; \mathbf{0},\mathbf{I})}[||\epsilon-\epsilon_{\theta}(\mathbf{x}_{t}(\mathbf{x }_{0},\epsilon,t),t)||^{2}] \tag{2}\]
Intuitively, \(\epsilon_{\theta}\) learns a one-step denoising process that can be used iteratively during sampling to convert random noise \(p(\mathbf{x}_{T})\) gradually into realistic data \(p(\mathbf{x}_{0})\). To achieve controllable generation with conditioning \(\mathbf{c}\), the denoising process can be slightly altered via classifier-free guidance [25; 12],
\[\hat{\epsilon}_{\theta,\text{classifier-free}}:=\epsilon_{\theta}(\mathbf{x}_{ t},t,\varnothing)+s\cdot(\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{c})- \epsilon_{\theta}(\mathbf{x}_{t},t,\varnothing)) \tag{3}\]
where \(s\) is the guidance scale, \(\varnothing\) is a null vector that represents non-conditioning.
### Robotizing 3D Shapes from Diffusion Samples
We adopt Point-E [39] as a pre-trained diffusion model that is capable of generating diverse and complex 3D shapes, providing a good prior of soft robot geometries. However, the samples from the diffusion model \(\mathbf{x}_{t}\) are in the form of surface point cloud and are not readily usable as robots to be evaluated in the physics-based simulation. Here, we describe how to roquire the diffusion samples \(\mathbf{x}_{t}\mapsto\Psi\) and its gradient computation of the objective \(\frac{d\mathcal{L}}{d\mathbf{x}_{t}}=\frac{\partial\mathcal{L}}{\partial\Psi_ {\text{gas}}}\frac{\partial\Psi_{\text{gas}}}{\partial\mathbf{x}_{t}}+\frac{ \partial\mathcal{L}}{\partial\Psi_{\text{st}}}\frac{\partial\Psi_{\text{st}}} {\partial\mathbf{x}_{t}}+\frac{\partial\mathcal{L}}{\partial\Psi_{\text{st}} }\frac{\partial\Psi_{\text{st}}}{\partial\mathbf{x}_{t}}\).
Figure 2: The DiffuseBot framework consists of three modules: (i) _robotizing_, which converts diffusion samples into physically simulatable soft robot designs (ii) _embedding optimization_, which iteratively generate new robots to be evaluated for training the conditional embedding (iii) _diffusion as co-design_, which guides the sampling process with co-design gradients from differentiable simulation. Arrow (A): evaluation of robots to guide data distribution. (B): differentiable physics as feedback.
**Solid Geometry.** We use a Material Point Method (MPM)-based simulation [61], which takes solid geometries as inputs. This poses two obstacles: (1) conversion from surface point clouds into solid geometries, and (2) the unstructuredness of data in the intermediate samples \(\mathbf{x}_{t},t\neq 0\). The second issue arises from the fact that the diffusion process at intermediate steps may produce 3D points that do not form a tight surface. First, we leverage the predicted clean sample at each diffusion time \(t\),
\[\hat{\mathbf{x}}_{0}=\frac{\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\cdot \epsilon_{\theta}(\mathbf{x}_{t},t)}{\sqrt{\bar{\alpha}_{t}}} \tag{4}\]
This approach is used in denoising diffusion implicit model (DDIM) [53] to approximate the unknown clean sample \(\mathbf{x}_{0}\) in the reverse process \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t},\hat{\mathbf{x}}_{0})\). Here, we use it to construct a better-structured data for simulation. Hence, we break down the robotizing process into \(\mathbf{x}_{t}\mapsto\hat{\mathbf{x}}_{0}\mapsto\Psi\) with gradient components as \(\frac{\partial\Psi}{\partial\mathbf{x}_{0}}\frac{\partial\mathbf{x}_{0}}{ \partial\mathbf{x}_{t}}\), where \(\frac{\partial\hat{\mathbf{x}}_{0}}{\partial\mathbf{x}_{t}}\) can be trivially derived from (4). To convert the predicted surface points \(\hat{\mathbf{x}}_{0}\) into solid geometry, we first reconstruct a surface mesh from \(\hat{\mathbf{x}}_{0}\), and then evenly sample a solid point cloud \(\Psi_{\text{geo}}\) within its interior. For mesh reconstruction, we modify the optimization approach from Shape As Points [40], which provides a differentiable Poisson surface reconstruction that maps a control point set \(\mathbf{V}_{\text{ctrl}}\) to a reconstructed surface mesh with vertices \(\mathbf{V}_{\text{mesh}}\). We calculate a modified Chamfer Distance loss indicating similarity between \(\mathbf{V}_{\text{mesh}}\) and \(\hat{\mathbf{x}}_{0}\):
\[\mathcal{L}_{\text{recon}}=\frac{\lambda_{\text{mesh}}}{|\mathbf{V}_{\text{ mesh}}|}\sum_{\mathbf{v}\in\mathbf{V}_{\text{mesh}}}d(\mathbf{v},\hat{\mathbf{x }}_{0})+\frac{\lambda_{\text{target}}}{|\hat{\mathbf{x}}_{0}|}\sum_{\mathbf{v} \in\hat{\mathbf{x}}_{0}}w(\mathbf{v})d(\mathbf{v},\mathbf{V}_{\text{mesh}}),\]
in which \(d(\cdot,\cdot)\) denotes minimal Euclidean distance between a point and a point set, and \(w(\mathbf{v})\) denotes a soft interior mask, with \(w(\mathbf{v})=1\) for \(\mathbf{v}\) outside the mesh, and \(w(\mathbf{v})=0.1\) for \(\mathbf{v}\) inside. The introduced mask term \(w(v)\) aims to lower the influence of noisy points inside the mesh, which is caused by imperfect prediction of \(\hat{\mathbf{x}}_{0}\) from noisy intermediate diffusion samples. The weight parameters are set to \(\lambda_{\text{mesh}}=1\) and \(\lambda_{\text{target}}=10\). We back-propagate \(\frac{\partial\mathcal{L}_{\text{recon}}}{\partial\mathbf{V}_{\text{mesh}}}\) to \(\frac{\partial\mathcal{L}_{\text{recon}}}{\partial\mathbf{V}_{\text{mesh}}}\) through the differentiable Poisson solver, and then apply an Adam optimizer on \(V_{\text{ctrl}}\) to optimize the loss \(\mathcal{L}_{\text{recon}}\). After mesh reconstruction, the solid geometry \(\Psi_{\text{geo}}\) represented by a solid interior point cloud is then sampled evenly within the mesh, with sufficient density to support the MPM-based simulation. Finally, to integrate the solidification process into diffusion samplers, we still need its gradient \(\frac{\partial\Psi_{\text{geo}}}{\partial\mathbf{x}_{0}}\). We adopt Gaussian kernels on point-wise Euclidean distances as gradients between two point clouds:
\[\frac{\partial\mathbf{u}}{\partial\mathbf{v}}=\frac{\exp(-\alpha\|\mathbf{u} -\mathbf{v}\|^{2})}{\sum_{v^{\prime}\in\hat{\mathbf{x}}_{0}}\exp(-\alpha\| \mathbf{u}-\mathbf{v}^{\prime}\|^{2})},\mathbf{u}\in\Psi_{\text{geo}},\mathbf{ v}\in\hat{\mathbf{x}}_{0}.\]
Intuitively, under such Gaussian kernels gradients, each solid point is linearly controlled by predicted surface points near it. In practice, this backward scheme works well for kernel parameter \(\alpha=20\).
**Actuators and Stiffness.** A solid geometry does not make a robot; in order for the robot to behave, its dynamics must be defined. After sampling a solid geometry, we thus need to define material properties and actuator placement. Specifically, we embed actuators in the robot body in the form of muscle fibers that can contract or expand to create deformation; further, we define a stiffness parameterization in order to determine the relationship between deformation and restorative elastic force. We adopt constant stiffness for simplicity since it has been shown to trade off with actuation strength [61]; thus we have \(\Psi_{\text{st}}(\hat{\mathbf{x}}_{0})=\text{const}\) and \(\frac{\partial\Psi_{\text{st}}}{\partial\mathbf{x}_{0}}=0\). Then, we propose to construct actuators based on the robot solid geometry \(\Psi_{\text{act}}(\Psi_{\text{geo}}(\hat{\mathbf{x}}_{0}))\) via clustering; namely, we perform k-means with pre-defined number of clusters on the coordinates of 3D points from the solid geometry \(\Psi_{\text{geo}}\). The gradient then becomes \(\frac{\partial\Psi_{\text{obs}}}{\partial\Psi_{\text{geo}}}\frac{\partial\Psi_{ \text{geo}}}{\partial\mathbf{x}_{0}}\frac{\partial\hat{\mathbf{x}}_{0}}{ \partial\mathbf{x}_{t}}\), where \(\frac{\partial\Psi_{\text{obs}}}{\partial\Psi_{\text{geo}}}\approx 0\) as empirically we found the clustering is quite stable in terms of label assignment, i.e., with \(\Delta\Psi_{\text{geo}}\) being small, \(\Delta\Psi_{\text{act}}\to 0\). Overall, we keep only the gradient for the robot geometry as empirically it suffices.
### Physics Augmented Diffusion Model
**Embedding Optimization.** To best leverage the diversity of the generation from a large-scale pre-trained diffusion models, we propose to (1) actively generate new data from model and maintain them in a buffer, (2) use physics-based simulation as a certificate of performance, (3) optimize the embeddings conditioned by the diffusion model under a skewed data distribution to improve robotic performance in simulation. Curating a training dataset on its own alleviates the burden of manual effort to propose performant robot designs. Optimizing the conditional embeddings instead of finetuning the diffusion model eliminates the risk of deteriorating the overall generation and saves the cost of storing model weights for each new task (especially with large models). We follow,
\[\min_{\mathbf{c}}\mathbb{E}_{t\sim[1,T],p_{\theta}(\mathbf{x}_{0}|\mathbf{c}),\mathcal{N}(\epsilon;\mathbf{0},\mathbf{I})}[||\epsilon-\epsilon_{\theta}( \mathbf{x}_{t}(\mathbf{x}_{0},\epsilon,t),t,\mathbf{c})||^{2}] \tag{5}\]
Note the three major distinctions from (2): (i) the optimization variable is the embedding \(\mathbf{c}\) not \(\theta\) (ii) the denoiser is conditioned on the embeddings \(\epsilon_{\theta}(\ldots,\mathbf{c})\) and (iii) the data distribution is based on the diffusion model \(p_{\theta}\) not the inaccessible real data distribution \(p\) and is conditioned on the embeddings \(\mathbf{c}\). This adopts an online learning scheme as the sampling distribution is dependent on the changing \(\mathbf{c}\). The procedure is briefly summarized in Algorithm 1, where _Filter_ is an operation to drop the oldest data when exceeding the buffer limit. In addition, during this stage, we use fixed prescribed controllers since we found empirically that a randomly initialized controller may not be sufficiently informative to drive the convergence toward reasonably good solutions; also, enabling the controller to be trainable makes the optimization prohibitively slow and extremely unstable, potentially due to the difficulty of the controller required to be universal to a diverse set of robot designs. After the embedding optimization, we perform conditional generation that synthesizes samples corresponding to robot designs with improved physical utility via classifier-free guidance as in (3).
```
Initialize: initial sample \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) while within maximal number of diffusion steps \(t\geq 0\)do Perform regular per-step diffusion update: \(\mathbf{x}_{t}\Leftarrow\mathbf{x}_{t+1}\). if perform co-design then while within \(K\) steps do Run update in (6) and (7) to overwrite \(\mathbf{x}_{t}\). endwhile endif endwhile
```
**Algorithm 2** Sampling: Diffusion As Co-design
**Diffusion as Co-design.** While the optimized embedding already allows us to generate performant robots for some target tasks, we further improve the performance of individual samples by reformulating the diffusion sampling process into a co-design optimization. As described in Section 2.3, we can convert the intermediate sample at any diffusion time \(\mathbf{x}_{t}\) to a robot design \(\Psi\), _rendering an evolving robot design throughout the diffusion process_. However, regular diffusion update [24] much less resembles any gradient-based optimization techniques, which are shown to be effective in soft robot design and control with differentiable simulation [26, 2]. Fortunately, there is a synergy between diffusion models and energy-based models [54, 15, 14], which allows a more gradient-descent-like update with Markov Chain Monte Carlo (MCMC) sampling [14]. Incorporating the soft robot co-design optimization with differentiable physics [61] into the diffusion sampling process, we have
\[\text{Design Optim.:}\ \mathbf{x}_{t}^{(k)} =\mathbf{x}_{t}^{(k-1)}+\frac{\sigma^{2}}{2}\left(\epsilon_{\theta }(\mathbf{x}_{t}^{(k-1)},t)-\kappa\nabla_{\mathbf{x}_{t}^{(k-1)}}\mathcal{L}( \Psi(\mathbf{x}_{t}^{(k-1)}),\phi_{t}^{k-1})\right)+\sigma^{2}\epsilon\] (6) Control Optim.: \[\phi_{t}^{(k)} =\phi_{t}^{(k-1)}+\gamma\nabla_{\phi_{t}^{(k-1)}}\mathcal{L}(\Psi (\mathbf{x}_{t}^{(k-1)}),\phi_{t}^{k-1}) \tag{7}\]
where \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), \(\mathbf{x}_{t}^{(0)}=\mathbf{x}_{t-1}^{(K)}\), \(\kappa\) is the ratio between two types of design gradients, \(K\) is the number of MCMC sampling steps at the current diffusion time, \(\gamma\) is the weight for trading off design and control optimization, and \(\phi_{t}^{(0)}\) can be either inherited from the previous diffusion time \(\phi_{t-1}^{(K)}\) or reset to the initialization \(\phi_{T}^{(0)}\). We highlight the high resemblance to gradient-based co-optimization with \(\mathbf{x}_{t}\) as the design variable and \(\phi_{t}\) as the control variable. This procedure is performed once every \(M\) diffusion steps (Algorithm 2), where \(M\) is a hyperparameter that trade-offs "guidance" strength from physical utility and sampling efficiency. Intuitively, the entire diffusion-as-co-design process isguided by three types of gradients: (i) \(\epsilon_{\theta}(\mathbf{x}_{t}^{(k-1)},\cdot)\) provides a direction for the design toward feasible 3D shapes based on the knowledge of pre-training with large-scale datasets (and toward enhanced physical utility with the optimized embeddings via classifier-free guidance using \(\hat{\epsilon}_{\theta}(\mathbf{x}_{t}^{(k-1)},\cdot,\mathbf{c})\)), (ii) \(\nabla_{\mathbf{x}_{t}}\mathcal{L}(\Psi(\mathbf{x}_{t}),\cdot)\) provides a direction for the design toward improving co-design objective \(\mathcal{L}\) via differentiable simulation, and (iii) \(\nabla_{\phi_{t}}\mathcal{L}(\cdot,\phi_{t})\) provides a direction for the controller toward a better adaption to the current design \(\mathbf{x}_{t}\) that allows more accurate evaluation of the robot performance.
## 3 Experiments
### Task Setup
We cover three types of robotics tasks: passive dynamics, locomotion, and manipulation (Figure 3).
\(\bullet\)**Passive Dynamics** tasks include balancing and landing. _Balancing_ initializes the robot on a stick-like platform with small contact area with an upward velocity that introduces instability; the robot's goal is to passively balance itself after dropping on the platform. _Landing_ applies an initial force to the robot toward a target; the robot's goal is to passively land as close to the target as possible. \(\bullet\)**Locomotion** tasks include crawling and hurdling. _Crawling_ sets the robot at a rest state on the ground; the robot must actuate its body to move as far away as possible from the starting position. _Hurdling_ places an obstacle in front of the robot; the robot must jump over the obstacle. \(\bullet\)**Manipulation** tasks include gripping and moving objects. _Gripping_ places an object underneath the robot; the goal of the robot is to vertically lift the object. _Box Moving_ places a box on the right end of the robot; the robot must move the box to the left.
Please refer to the appendix Section D for more detailed task descriptions and performance metrics.
### Toward Physical Utility In Diffusion Models
**Physics-augmented diffusion.** In Table 1, we examine the effectiveness of embedding optimization and diffusion as co-design for improving physical utility. For each entry, we draw 100 samples with preset random seeds to provide valid sample-level comparison (i.e., setting the step size of co-design optimization to zero in the third row will produce almost identical samples as the second row). We report the average performance with standard deviation in the superscript. First, we observe increasing performance across all tasks while incorporating the two proposed techniques, demonstrating the efficacy of DiffuseBot. Besides, the sample-level performance does not always monotonically improve, possibly due to the stochasticity within the diffusion process and the low quality of gradient from differentiable simulation in some scenarios. For example, in gripping, when the robot fails to pick up the object in the first place, the gradient may be informative and fails to bring proper guidance toward better task performance; similarly in moving a box. In addition, we found it necessary to include control optimization during the diffusion sampling process, since, at diffusion steps further from zero, the predicted clean sample \(\hat{\mathbf{x}}_{0}\) (derived from the intermediate sample \(\mathbf{x}_{t}\)) may differ significantly from the clean sample \(\mathbf{x}_{0}\), leaving the prescribed controller largely unaligned.
**Comparison with baselines.** In Table 2, we compare with extensive baselines of soft robot design representation: particle-based method has each particle possessing its own distinct parameterization of design (geometry, stiffness, actuator); similarly, voxel-based method specifies design in voxel level; implicit function [37] uses use a shared multi-layer perceptron to map coordinates to design;
\begin{table}
\begin{tabular}{c|c||c|c|c|c|c|c} \hline \hline Embed. & Diffusion as & \multicolumn{2}{c|}{Passive Dynamics} & \multicolumn{2}{c|}{Locomotion} & \multicolumn{2}{c}{Manipulation} \\ Optim. & Co-design & Balancing & Landing & Crawling & Hurdling & Gripping & Moving a Box \\ \hline & & 0.081\({}^{\cdot 164}\) & 0.832\({}^{\cdot 217}\) & 0.011\({}^{\cdot 012}\) & 0.014\({}^{\cdot 020}\) & 0.014\({}^{\cdot 008}\) & 0.019\({}^{\cdot 020}\) \\ ✓ & & 0.556\({}^{\cdot 127}\) & 0.955\({}^{\cdot 032}\) & 0.048\({}^{\cdot 007}\) & 0.019\({}^{\cdot 014}\) & 0.025\({}^{\cdot 006}\) & 0.040\({}^{\cdot 018}\) \\ ✓ & ✓ & **0.653\({}^{\cdot 107}\)** & **0.964\({}^{\cdot 029}\)** & **0.081\({}^{\cdot 018}\)** & **0.035\({}^{\cdot 030}\)** & **0.027\({}^{\cdot 004}\)** & **0.044\({}^{\cdot 021}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Improved physical utility by augmenting physical simulation with diffusion models.
Figure 3: We consider passive dynamics tasks (balancing, landing), locomotion tasks (crawling, hurdling), and manipulation tasks (gripping, moving a box).
DiffKPPN [18] uses a graphical model composed of a set of activation function that takes in coordinates and outputs design specification; DiffAqua [35] computes the Wasserstein barycenter of a set of aquatic creatures' meshes and we adapt to use more reasonable primitives that include bunny, car, cat, cow, avocado, dog, horse, and sofa. These baselines are commonly used in gradient-based soft robot co-design [26; 56; 61]. For each baseline method, we run the co-optimization routine for the same number of steps as in the diffusion-as-co-design stage in DiffuseBot. To avoid being trapped in the local optimum, we run each baseline with 20 different random initializations and choose the best one. Since DiffuseBot is a generative method, we draw 20 samples and report the best; this is sensible an applications-driven perspective since we only need to retrieve one performant robot, within a reasonable sample budget. We observe that our method outperforms all baselines. DiffuseBot leverages the knowledge of large-scale pre-trained models that capture the "common sense" of geometry, providing a more well-structured yet flexible prior for soft robot design.
**Soft robots bred by DiffuseBot.** In Figure 5, we demonstrate the generated soft robots that excel in locomotion and manipulation tasks. We highlight the flexibility of DiffuseBot to generate highly diverse soft robot designs that accommodate various purposes in different robotics tasks. Furthermore, in Figure 4, we show how robots evolve from a feasible yet non-necessarily functional design to an improved one that intuitively matches the task objective. By manually inspecting the evolving designs, we found that the role of the embedding optimization is to drive the diverse generations toward a converged, smaller set with elements having higher chance to succeed the task; on the other hand, the role of diffusion as co-design brings relatively minor tweaks along with alignment between the control and design. Due to space limit, we refer the reader to our project page for more results.
\begin{table}
\begin{tabular}{c||c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Passive Dynamics} & \multicolumn{2}{c|}{Locomotion} & \multicolumn{2}{c}{Manipulation} \\ & Balancing & Landing & Crawling & Hurdling & Gripping & Moving a Box \\ \hline \hline Particle-based & 0.040\({}^{.000}\) & 0.863\({}^{.005}\) & 0.019\({}^{.001}\) & 0.006\({}^{.001}\) & -0.010\({}^{.001}\) & 0.043\({}^{.027}\) \\ Voxel-based & 0.040\({}^{.000}\) & 0.853\({}^{.002}\) & 0.024\({}^{.000}\) & 0.027\({}^{.000}\) & -0.009\({}^{.000}\) & 0.025\({}^{.022}\) \\ Implicit Function [37] & 0.106\({}^{.147}\) & 0.893\({}^{.033}\) & 0.043\({}^{.024}\) & **0.044\({}^{.063}\)** & 0.006\({}^{.012}\) & 0.033\({}^{.030}\) \\ Diff-CPPN [18] & 0.091\({}^{.088}\) & 0.577\({}^{.425}\) & 0.055\({}^{.023}\) & 0.019\({}^{.029}\) & 0.007\({}^{.008}\) & 0.022\({}^{.017}\) \\ DiffAqua [35] & 0.014\({}^{.023}\) & 0.293\({}^{.459}\) & 0.027\({}^{.015}\) & 0.022\({}^{.011}\) & 0.010\({}^{.001}\) & 0.007\({}^{.008}\) \\ DiffuseBot & **0.706\({}^{.078}\)** & **0.965\({}^{.026}\)** & **0.092\({}^{.016}\)** & 0.031\({}^{.011}\) & **0.026\({}^{.002}\)** & **0.047\({}^{.019}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison with baselines.
Figure 4: Examples of DiffuseBot evolving robots to solve different tasks.
Figure 5: Examples of robots bred by DiffuseBot to achieve the desired tasks.
### Ablation Analysis
In this section, we conduct a series of ablation studies to provide a deeper understanding of the proposed method. For simplicity, all experiments in this section are done with the crawling task.
**Embedding optimization.** In Table 3, we compare the optimization of the embedding conditioned by the diffusion models with other alternatives. The pre-trained diffusion model [39] that DiffuseBot is built upon uses CLIP embeddings [43], which allows for textual inputs. Hence, a naive approach is to manually design text for the conditional embedding of the diffusion model. The result reported in Table 3 uses _"a legged animal or object that can crawl or run fast"_. We investigated the use of text prompts; in our experience, text was difficult to optimize for _functional_ robot design purposes. This is expected since most existing diffusion models perform content generation only in terms of appearance instead of physical utility, which further strengthens the purpose of this work. In addition, with exactly the same training objective as in (5), we can instead finetune the diffusion model itself. However, this does not yield better performance, as shown in the second entry in Table 3. Empirically, we found there is a higher chance of the generated samples being non-well-structured with fractured parts. This suggests that finetuning for physical utility may deteriorate the modeling of sensible 3D shapes and lead to more unstable generations.
**Diffusion as co-design.** Recall that the co-design optimization can be seamlessly incorporated into any diffusion step. In Figure 6, we examine how the strength of the injected co-design optimization affects the task performance in terms of where to apply throughout the diffusion sampling process and how many times to apply. In the left figure of Figure 6, we sweep through the maximal diffusion time of applying diffusion as co-design, i.e., for the data point at \(t=400\), we only perform co-design from \(t=400\) to \(t=0\). We found that there is a sweet spot of when to start applying co-design (at \(t\approx 200\)). This is because the intermediate samples at larger diffusion time \(\mathbf{x}_{t},t\gg 0\) are extremely under-developed, lacking sufficient connection to the final clean sample \(\mathbf{x}_{0}\), hence failing to provide informative guidance by examining its physical utility. Furthermore, we compare against post-diffusion co-design optimization, i.e., run co-design based on the final output of the diffusion (\(0.064\) vs ours \(0.081\)). We allow the same computational budget by running the same number of times of differentiable simulation as in DiffuseBot. Our method performs slightly better, potentially due to the flexibility to alter the still-developing diffusion samples. Also note that while our method is interleaved into diffusion process, it is still compatible with any post-hoc computation for finetuning.
### Flexibility To Incorporate Human Feedback
Beyond the generative power, diffusion models also provide the flexibility to composite different data distributions. This is especially useful for computational design since it empowers to easily incorporate external knowledge, e.g. from human. We follow the compositionality techniques introduced in [34, 14], which can be directly integrated into our diffusion as co-design framework. In Figure 7, we demonstrate incorporating human feedback in textual form as _"a unicorn"_ into a crawling robot generated by DiffuseBot. We can see the emergence of the horn-like body part.
### From Virtual Generation To Physical Robot
We further fabricate a physical robot for the gripping task as a proof-of-concept to demonstrate the possibility of real-world extension. We use a 3D Carbon printer to reconstruct the exact geometry of a generated design and fill the robot body with a Voronoi lattice structure to achieve softness.
\begin{table}
\begin{tabular}{c||c} \hline \hline & Performance \\ \hline MT & 0.016\({}^{.014}\) \\ FT & 0.031\({}^{.024}\) \\ Ours & 0.048\({}^{.007}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation on embedding optimization. MT means manually-designed text. FT means finetuning models.
Figure 6: Varying starting point and strength of diffusion as co-design.
Figure 7: Incorporating human textual feedback.
For actuators, we employ tendon transmission to realize the contraction force utilized in the soft robot gripper. In our project page, we demonstrate the robots generated by DiffuseBot are capable of picking up an object. Note that physical robot fabrication and real-world transfer have countless non-trivial challenges including stiffness and actuator design, sim-to-real gap, etc. Hence, this experiment is only meant to demonstrate the potential instead of a general, robust pipeline toward physical robots, which is left to future work. We refer the reader to the appendix for more details.
## 4 Related Work
**Heuristic Search For Soft Robot Co-Design.** Heuristic searches are simple but useful tools for co-designing soft robots. A long line of work has focused on evolutionary algorithms [7; 8; 10], with some including physical demonstrations [22; 31; 32] and recent benchmarks incorporating neural control [5]. These methods are often powered by parameterized by compositional pattern-producing networks [58], which parameterize highly expressive search spaces akin to neural networks [50; 51]. Similar to [5], [47] combines a heuristic approach with reinforcement learning, and demonstrates resulting designs on physical hardware. Other notable methods include particle-filter-based approaches [11] and simulated annealing [60]. Heuristic search methods tend to be less efficient than gradient-based or learning-based algorithms, but can reasoning about large search spaces; our approach employs the highly expressive diffusion processes, while leveraging the differentiable nature of neural networks and physical simulation for more efficient and gradient-directed search.
**Gradient-Based Soft Robot Co-Optimization.** A differentiable simulator is one in which useful analytical derivatives of any system variable with respect to any other system variable is efficiently queryable; the recent advent of soft differentiable simulation environments [26; 56; 13; 33; 41; 42; 61] has accelerated the exploration of gradient-based co-optimization methods. [26; 56] demonstrated how differentiable simulators can be used to co-optimize very high-dimensional spatially varying material and open-loop/neural controller parameters. [38] presented gradient-based search of shape parameters for soft manipulators. Meanwhile, [61] showed how actuation and geometry could be co-optimized, while analyzing the trade-offs of design space complexity and exploration in the search procedure. DiffuseBot borrows ideas from gradient-based optimization in guiding the design search in a physics-aware way, especially in the context of control.
**Learning-Based Soft Robot Co-Design Methods.** Though relatively nascent, learning-based approaches (including DiffuseBot ) can re-use design samples to build knowledge about a problem. Further, dataset-based minibatch optimization algorithms are more robust to local minima than single-iterate pure optimization approaches. [57] demonstrated how gradient-based search could be combined with learned-models; a soft robot proprioceptive model was continually updated by simulation data from interleaved control/material co-optimization. Other work employed learning-based methods in the context of leveraging available datasets. [35] learned a parameterized representation of geometry and actuators from basis shape geometries tractable interpolation over high-dimensional search spaces. [55] leveraged motion data and sparsifying neurons to simultaneously learn sensor placement and neural soft robotic tasks such as proprioception and grasp classification.
**Diffusion Models for Content Generation.** Diffusion models [24; 52] have emerged as the de-facto standard for generating content in continuous domains such as images [44; 46], 3D content [66; 65], controls [28; 9; 1], videos [23; 49; 16], and materials [63; 62; 48]. In this paper, we explore how diffusion models in combination with differentiable physics may be used to design new robots. Most similar to our work, [64] uses differentiable physics to help guide human motion synthesis. However, while [64] uses differentiable physics to refine motions in the last few timesteps of diffusion sampling, we tightly integrate differentiable physics with sampling throughout the diffusion sampling procedure through MCMC. We further uses differentiable simulation to define a reward objective through which we may optimize generative embeddings that represent our desirable robot structure.
## 5 Conclusion
We presented DiffuseBot, a framework that augments physics-based simulation with a diffusion process capable of generating performant soft robots for a diverse set of tasks including passive dynamics, locomotion, and manipulation. We demonstrated the efficacy of diffusion-based generation with extensive experiments, presented a method for incorporating human feedback, and prototyped a physical robot counterpart. DiffuseBot is a first step toward generative invention of soft machines, with the potential to accelerate design cycles, discover novel devices, and provide building blocks for downstream applications in automated computational creativity and computer-assisted design.
## References
* Ajay et al. [2022] Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, and Pulkit Agrawal. Is conditional generative modeling all you need for decision-making? _arXiv preprint arXiv:2211.15657_, 2022.
* Bacher et al. [2021] Moritz Bacher, Espen Knoop, and Christian Schumacher. Design and control of soft robots using differentiable simulation. _Current Robotics Reports_, 2(2):211-221, 2021.
* Bartlett et al. [2015] Nicholas W Bartlett, Michael T Tolley, Johannes TB Overvelde, James C Weaver, Bobak Mosadegh, Katia Bertoldi, George M Whitesides, and Robert J Wood. A 3d-printed, functionally graded soft robot powered by combustion. _Science_, 349(6244):161-165, 2015.
* Bern et al. [2017] James M Bern, Kai-Hung Chang, and Stelian Coros. Interactive design of animated plushies. _ACM Transactions on Graphics (TOG)_, 36(4):1-11, 2017.
* Bhatia et al. [2021] Jagdeep Bhatia, Holly Jackson, Yunsheng Tian, Jie Xu, and Wojciech Matusik. Evolution gym: A large-scale benchmark for evolving soft robots. _Advances in Neural Information Processing Systems_, 34:2201-2214, 2021.
* Chen et al. [2015] Xiang'Anthony' Chen, Stelian Coros, Jennifer Mankoff, and Scott E Hudson. Encore: 3d printed augmentation of everyday objects with printed-over, affixed and interlocked attachments. In _Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology_, pages 73-82, 2015.
* Cheney et al. [2014] Nicholas Cheney, Jeff Clune, and Hod Lipson. Evolved electrophysiological soft robots. In _Artificial life conference proceedings_, pages 222-229. MIT Press One Rogers Street, Cambridge, MA 02142-1209, USA journals-info..., 2014.
* Cheney et al. [2014] Nick Cheney, Robert MacCurdy, Jeff Clune, and Hod Lipson. Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding. _ACM SIGEVOlution_, 7(1):11-23, 2014.
* Chi et al. [2023] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. _arXiv preprint arXiv:2303.04137_, 2023.
* Corucci et al. [2018] Francesco Corucci, Nick Cheney, Francesco Giorgio-Serchi, Josh Bongard, and Cecilia Laschi. Evolving soft locomotion in aquatic and terrestrial environments: effects of material properties and environmental transitions. _Soft robotics_, 5(4):475-495, 2018.
* Deimel et al. [2017] Raphael Deimel, Patrick Irmisch, Vincent Wall, and Oliver Brock. Automated co-design of soft hand morphology and control strategy for grasping. In _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 1213-1218. IEEE, 2017.
* Dhariwal and Nichol [2021] Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. In _Advances in Neural Information Processing Systems_, 2021.
* Du et al. [2021] Tao Du, Kui Wu, Pingchuan Ma, Sebastien Wah, Andrew Spielberg, Daniela Rus, and Wojciech Matusik. Diffpd: Differentiable projective dynamics. _ACM Transactions on Graphics (TOG)_, 41(2):1-21, 2021.
* Du et al. [2023] Yilun Du, Conor Durkan, Robin Strudel, Joshua B Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Grathwohl. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc. _arXiv preprint arXiv:2302.11552_, 2023.
* Du and Mordatch [2019] Yilun Du and Igor Mordatch. Implicit generation and generalization in energy-based models. _arXiv preprint arXiv:1903.08689_, 2019.
* Du et al. [2023] Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Joshua B Tenenbaum, Dale Schuurmans, and Pieter Abbeel. Learning universal policies via text-guided video generation. _arXiv e-prints_, pages arXiv-2302, 2023.
* [17] Bin Fang, Fuchun Sun, Linyuan Wu, Fukang Liu, Xiangxiang Wang, Haiming Huang, Wenbing Huang, Huaping Liu, and Li Wen. Multimode grasping soft gripper achieved by layer jamming structure and tendon-driven mechanism. _Soft Robotics_, 9(2):233-249, 2022.
* [18] Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse, David Pfau, Max Jaderberg, Marc Lanctot, and Daan Wierstra. Convolution by evolution: Differentiable pattern producing networks. In _Proceedings of the Genetic and Evolutionary Computation Conference 2016_, pages 109-116, 2016.
* [19] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. _arXiv preprint arXiv:2208.01618_, 2022.
* [20] Debkalpa Goswami, Shuai Liu, Aniket Pal, Lucas G Silva, and Ramses V Martinez. 3d-architected soft machines with topologically encoded motion. _Advanced Functional Materials_, 29(24):1808713, 2019.
* [21] Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, and Jie Tan. Learning to walk in the real world with minimal human effort. _arXiv preprint arXiv:2002.08550_, 2020.
* [22] Jonathan Hiller and Hod Lipson. Automatic design and manufacture of soft robots. _IEEE Transactions on Robotics_, 28(2):457-466, 2011.
* [23] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen video: High definition video generation with diffusion models. _arXiv preprint arXiv:2210.02303_, 2022.
* [24] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
* [25] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. _arXiv preprint arXiv:2207.12598_, 2022.
* [26] Yuanming Hu, Jiancheng Liu, Andrew Spielberg, Joshua B Tenenbaum, William T Freeman, Jiajun Wu, Daniela Rus, and Wojciech Matusik. Chainqueen: A real-time differentiable physical simulator for soft robotics. In _2019 International conference on robotics and automation (ICRA)_, pages 6265-6271. IEEE, 2019.
* [27] Hyunki In, Useok Jeong, Haemin Lee, and Kyu-Jin Cho. A novel slack-enabling tendon drive that improves efficiency, size, and safety in soft wearable robots. _IEEE/ASME Transactions on Mechatronics_, 22(1):59-70, 2016.
* [28] Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In _International Conference on Machine Learning_, 2022.
* [29] Adrien Kaiser, Jose Alonso Ybanez Zepeda, and Tamy Boubekeur. A survey of simple geometric primitives detection methods for captured 3d data. In _Computer Graphics Forum_, volume 38, pages 167-196. Wiley Online Library, 2019.
* [30] Byungchul Kim, Useok Jeong, Brian Byunghyun Kang, and Kyu-Jin Cho. Slider-tendon linear actuator with under-actuation and fast-connection for soft wearable robots. _IEEE/ASME Transactions on Mechatronics_, 26(6):2932-2943, 2021.
* [31] Sam Kriegman, Douglas Blackiston, Michael Levin, and Josh Bongard. A scalable pipeline for designing reconfigurable organisms. _Proceedings of the National Academy of Sciences_, 117(4):1853-1859, 2020.
* [32] Sam Kriegman, Douglas Blackiston, Michael Levin, and Josh Bongard. Kinematic self-replication in reconfigurable organisms. _Proceedings of the National Academy of Sciences_, 118(49):e2112672118, 2021.
* [33] Yifei Li, Tao Du, Kui Wu, Jie Xu, and Wojciech Matusik. Diffcloth: Differentiable cloth simulation with dry frictional contact. _ACM Transactions on Graphics (TOG)_, 42(1):1-20, 2022.
* [34] Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. Compositional visual generation with composable diffusion models. _arXiv preprint arXiv:2206.01714_, 2022.
* [35] Pingchuan Ma, Tao Du, John Z Zhang, Kui Wu, Andrew Spielberg, Robert K Katzschmann, and Wojciech Matusik. Diffaqua: A differentiable computational design pipeline for soft underwater swimmers with shape interpolation. _ACM Transactions on Graphics (TOG)_, 40(4):1-14, 2021.
* [36] Jonas Martinez, Jeremie Dumas, and Sylvain Lefebvre. Procedural voronoi foams for additive manufacturing. _ACM Transactions on Graphics (TOG)_, 35(4):1-12, 2016.
* [37] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 4460-4470, 2019.
* [38] Thomas Morzadec, Damien Marcha, and Christian Duriez. Toward shape optimization of soft robots. In _2019 2nd IEEE International Conference on Soft Robotics (RoboSoft)_, pages 521-526. IEEE, 2019.
* [39] Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen. Point-e: A system for generating 3d point clouds from complex prompts. _arXiv preprint arXiv:2212.08751_, 2022.
* [40] Songyou Peng, Chiyu "Max" Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys, and Andreas Geiger. Shape as points: A differentiable poisson solver. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [41] Yi-Ling Qiao, Junbang Liang, Vladlen Koltun, and Ming C Lin. Scalable differentiable physics for learning and control. _arXiv preprint arXiv:2007.02168_, 2020.
* [42] Yiling Qiao, Junbang Liang, Vladlen Koltun, and Ming Lin. Differentiable simulation of soft multi-body systems. _Advances in Neural Information Processing Systems_, 34:17123-17135, 2021.
* [43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_, pages 8748-8763. PMLR, 2021.
* [44] Aditya Ramesh, Mikhail Pavlov, Scott Gray Gabriel Goh, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. _arXiv preprint arXiv:2102.12092_, 2021.
* [45] Stephane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In _International Conference on Artificial Intelligence and Statistics_, 2011.
* [46] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. _arXiv preprint arXiv:2205.11487_, 2022.
* [47] Charles Schaff, Audrey Sedal, and Matthew R Walter. Soft robots learn to crawl: Jointly optimizing design and control with sim-to-real transfer. _arXiv preprint arXiv:2202.04575_, 2022.
* [48] Arne Schneuing, Yuanqi Du, Arian Jamasb Charles Harris, Ilia Igashov, Weitao Du, Tom Blundell, Pietro Lio, Carla Gomes, Michael Bronstein Max Welling, and Bruno Correia. Structure-based drug design with equivariant diffusion models. _arXiv preprint arXiv:2210.02303_, 2022.
* [49] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. _arXiv preprint arXiv:2209.14792_, 2022.
* [50] Lawrence Smith, Travis Hainsworth, Jacob Haimes, and Robert MacCurdy. Automated synthesis of bending pneumatic soft actuators. In _2022 IEEE 5th International Conference on Soft Robotics (RoboSoft)_, pages 358-363. IEEE, 2022.
* [51] Lawrence Smith and Robert MacCurdy. Soroforge: End-to-end soft actuator design. _IEEE Transactions on Automation Science and Engineering_, 2023.
* [52] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _International Conference on Machine Learning_, pages 2256-2265. PMLR, 2015.
* [53] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. _arXiv preprint arXiv:2010.02502_, 2020.
* [54] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. _Advances in neural information processing systems_, 32, 2019.
* [55] Andrew Spielberg, Alexander Amini, Lillian Chin, Wojciech Matusik, and Daniela Rus. Co-learning of task and sensor placement for soft robotics. _IEEE Robotics and Automation Letters_, 6(2):1208-1215, 2021.
* [56] Andrew Spielberg, Tao Du, Yuanming Hu, Daniela Rus, and Wojciech Matusik. Advanced soft robot modeling in chainqueen. _Robotica_, 41(1):74-104, 2023.
* [57] Andrew Spielberg, Allan Zhao, Yuanming Hu, Tao Du, Wojciech Matusik, and Daniela Rus. Learning-in-the-loop optimization: End-to-end control and co-design of soft robots through learned deep latent representations. _Advances in Neural Information Processing Systems_, 32, 2019.
* [58] Kenneth O Stanley. Compositional pattern producing networks: A novel abstraction of development. _Genetic programming and evolvable machines_, 8:131-162, 2007.
* [59] Michael T Tolley, Robert F Shepherd, Michael Karpelson, Nicholas W Bartlett, Kevin C Galloway, Michael Wehner, Rui Nunes, George M Whitesides, and Robert J Wood. An untethered jumping soft robot. In _2014 IEEE/RSJ International Conference on Intelligent Robots and Systems_, pages 561-566. IEEE, 2014.
* [60] Merel Van Diepen and Kristina Shea. A spatial grammar method for the computational design synthesis of virtual soft locomotion robots. _Journal of Mechanical Design_, 141(10), 2019.
* [61] Tsun-Hsuan Wang, Pingchuan Ma, Andrew Everett Spielberg, Zhou Xian, Hao Zhang, Joshua B. Tenenbaum, Daniela Rus, and Chuang Gan. Softzoo: A soft robot co-design benchmark for locomotion in diverse environments. In _The Eleventh International Conference on Learning Representations_, 2023.
* [62] Tian Xie, Xiang Fu, Octavian-Eugen Ganea, Regina Barzilay, and Tommi S Jaakkola. Crystal diffusion variational autoencoder for periodic material generation. In _International Conference on Learning Representations_, 2021.
* [63] Minkai Xu, Lantao Yu, Yang Song, Chence Shi, Stefano Ermon,, and Jian Tang. GeoDiff: A geometric diffusion model for molecular conformation generation. In _International Conference on Learning Representations_, 2021.
* [64] Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. Physdiff: Physics-guided human motion diffusion model. _arXiv preprint arXiv:2212.02500_, 2022.
* [65] Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, and Karsten Kreis. Lion: Latent point diffusion models for 3d shape generation. _arXiv preprint arXiv:2210.06978_, 2022.
* [66] Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5826-5835, 2021.
Background On Point-E
Point-E [39] is a diffusion-based generative model that produces 3D point clouds from text or images. The Point-E pipeline consists of three stages: first, it generates a single synthetic view using a text-to-image diffusion model; second, it produces a coarse, low-resolution 3D point cloud (1024 points) using a second diffusion model which is conditioned on the generated image; third, it upsamples/"densifiers" the coarse point cloud to a high-resolution one (4096 points) with a third diffusion model. The two diffusion models operating on point clouds use a permutation invariant transformer architecture with different model sizes. The entire model is trained on Point-E's curated dataset of several million 3D models and associated metadata which captures a generic distribution of common 3D shapes, providing a suitable and sufficiently diverse prior for robot geometry. The diffused data is a set of points, each point possessing 6 feature dimensions: 3 for spatial coordinates and 3 for colors. We ignore the color channels in this work. The conditioning for the synthesized image in the first stage relies on embeddings computed from a pre-trained ViT-L/14 CLIP model; in the embedding optimization of DiffuseBot, the variables to be optimized is exactly the same embedding. Diffusion as co-design is only performed in the second stage (coarse point cloud generation) since the third stage is merely an upsampling which produces only minor modifications to robot designs. We refer the reader to the original paper [39] for more details.
## Appendix B Theoretical Motivation
**Online learning in embedding optimization.** In Section 2.4, we discuss how to online collect a dataset to optimize the embedding toward improved physical utility. Given a simplified version of (5)
\[\min_{\mathbf{c}}\mathbb{E}_{p_{\theta}(\mathbf{x}_{0}|\mathbf{c})}[g(\mathbf{ x}_{0},\mathbf{c})] \tag{8}\]
where, for notation simplicity, we drop \(t\sim[1,T]\), \(\mathcal{N}(\epsilon;\mathbf{0},\mathbf{I})\) in the sampling distribution, and summarize \([||\epsilon-\epsilon_{\theta}(\mathbf{x}_{t}(\mathbf{x}_{0},\epsilon,t),t, \mathbf{c})||^{2}]\) as \(g(\mathbf{x},\mathbf{c})\). We can rewrite the expectation term as,
\[\int p_{\theta}(\mathbf{x}_{0})\frac{p_{\theta}(\mathbf{c}|\mathbf{x}_{0})}{p_ {\theta}(\mathbf{c})}g(\mathbf{x}_{0},\mathbf{c})dx \tag{9}\]
which allows to sample from \(p_{\theta}(\mathbf{x}_{0})\) (i.e., generating samples from the diffusion model) and reweight the loss with \(\frac{p_{\theta}(\mathbf{c}|\mathbf{x}_{0})}{p_{\theta}(\mathbf{c})}\); the latter scaling term is essentially proportional to a normalized task performance. Empirically, we can maintain a buffer for the online dataset and train the embedding with the sampling distribution biased toward higher task performance; we use a list to store samples with top-k performance in our implementation (we also tried reshaping the sampling distribution like prioritized experience replay in reinforcement learning but we found less stability and more hyperparameters required in training compared to our simpler top-k approach).
**Connection to MCMC.** In diffusion sampling, the simplest way to perform reverse denoising process as in Section 2.2 follows [24],
\[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}\left(\frac{1}{\sqrt{ \alpha_{t}}}(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}} \epsilon_{\theta}(\mathbf{x}_{t},t)),\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha }_{t}}\beta_{t}\mathbf{I}\right) \tag{10}\]
Here, the denoising term can either be unconditional \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) or conditional via classifier-free guidance \(\hat{\epsilon}_{\theta,\text{classifier-free}}(\mathbf{x}_{t},t,\mathbf{c})\) as in (3). We use the latter to incorporate the optimized embedding. To further leverage physics-based simulation, we aim to introduce physical utility during diffusion sampling process. One possibility is to utilize classifier-based guidance [12],
\[\hat{\epsilon}_{\theta,\text{classifier-based}}:=\epsilon_{\theta}(\mathbf{x}_ {t},t)-s\cdot\sqrt{1-\bar{\alpha}_{t}}\nabla_{\mathbf{x}_{t}}\log p(\mathbf{c }|\mathbf{x}_{t}) \tag{11}\]
where \(p(\mathbf{c}|\mathbf{x}_{t})\) can be conceptually viewed as improving physical utility and \(\nabla_{\mathbf{x}_{t}}\log p(\mathbf{c}|\mathbf{x}_{t})\) can be obtained using differentiable physics and the unconditional score. Note that we slightly abuse the notation here by overloading \(\mathbf{c}\) with conditioning from differentiable simulation during sampling other than the classifier-free guidance using the optimized embedding. However, combining (10) and (11) much less resembles any gradient-based optimization techniques, which are shown to be effective in soft robot co-design with differentiable simulation [26, 2]. Fortunately, drawing a connection to energy-based models [54, 15, 14], yet another alternative to incorporate conditioning in diffusion models is Markov Chain Monte Carlo (MCMC) sampling [14], where we use Unadjusted Langevin Dynamics,
\[\mathbf{x}_{t}=\mathbf{x}_{t}^{(K)},\;\;\text{where}\;\mathbf{x}_{t}^{(k)}\sim \mathcal{N}\left(\mathbf{x}_{t}^{(k)};\mathbf{x}_{t}^{(k-1)}+\frac{\sigma^{2}}{ 2}\nabla_{\mathbf{x}}\log p(\mathbf{c}|\mathbf{x}_{t}^{(k-1)}),\sigma^{2} \mathbf{I}\right) \tag{12}\]
where \(K\) is the number of samples in the current MCMC with \(k\) as indexing, \(\sigma^{2}\) is a pre-defined variance, and \(\mathbf{x}_{t}^{(0)}=\mathbf{x}_{t-1}\). In the context of diffusion models, this procedure is commonly performed within a single diffusion step to drive the sample toward higher-density regime under the intermediate distribution \(p(\mathbf{x}_{t})=\int q(\mathbf{x}_{t}|\mathbf{x}_{0})p(\mathbf{x}_{0})d \mathbf{x}_{0}\) at diffusion time \(t\). Inspired by its resemblance to gradient ascent with stochasticity from the added Gaussian noise of variance \(\sigma^{2}\), we establish a connection to design optimization, reformulating diffusion process as co-design optimization as in (6) and (7). Specifically, we can apply Bayes rule to decompose the score of \(p(\mathbf{c}|\mathbf{x}_{t}^{(k-1)})\),
\[\nabla_{\mathbf{x}}\log p(\mathbf{c}|\mathbf{x}_{t})=\nabla_{\mathbf{x}}\log p (\mathbf{x}_{t}|\mathbf{c})-\nabla_{\mathbf{x}}\log p(\mathbf{x}_{t}) \tag{13}\]
where \(\nabla_{\mathbf{x}}\log p(\mathbf{x}_{t})\) is simply the denoiser output \(\epsilon_{\theta}(\mathbf{x}_{t},t)\) and \(\nabla_{\mathbf{x}}\log p(\mathbf{x}_{t}|\mathbf{c})\) is the gradient of task performance with respect to the intermediate sample of the diffusion model from differentiable physical simulation and robotizing process. Overall, this leads to (6).
## Appendix C Implementation Details In Algorithm
In this section, we provide more implementation details and experimental configurations of Diffuse-Bot and other baselines. In Table 4, we list the configurations of the embedding optimization. With respect to Algorithm 1, "buffer" refers to the online dataset \(\mathcal{D}\).
* _Buffer Size_ is the capacity of the dataset.
* _Min. Buffer Size_ is the minimum of data filled in the buffer before training starts.
* _Num. Samples / Epoch_ is number of new samples collected in each epoch, where epoch here refers to a new round of data collection in online learning.
* _Train Iter. / Epoch_ is number of training iterations per epoch.
* _Buffer Top-K_ is the number of datapoints with top-k performance being retained in the _Filter_ step atop the most up-to-date data.
* _Batch Size_ is the batch size in the embedding optimization.
\begin{table}
\begin{tabular}{c||c|c|c|c|c|c} \hline & Balancing & Landing & Crawling & Hurdling & Gripping & Moving a Box \\ \hline Buffer Size & 600 & 600 & 60 & 600 & 60 & 60 \\ Min. Buffer Size & 60 & 60 & 60 & 60 & 60 & 60 \\ Num. Samples / Epoch & 60 & 60 & 60 & 60 & 60 & 60 \\ Train Iter. / Epoch & 1 & 1 & 1 & 1 & 1 & 1 \\ Buffer Top-K & 12 & 12 & 6 & 6 & 6 & 6 \\ Batch Size & 6 & 6 & 6 & 6 & 6 & 6 \\ \hline \end{tabular}
\end{table}
Table 4: Configuration of embedding optimization.
\begin{table}
\begin{tabular}{c||c|c|c|c|c|c} \hline & Balancing & Landing & Crawling & Hurdling & Gripping & Moving a Box \\ \hline \(t_{\max}\) & 400 & 150 & 400 & 400 & 400 & 400 \\ \(t_{\min}\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \(\Delta t\) & 50 & 25 & 50 & 50 & 50 & 50 \\ \(K\) & 3 & 3 & 5 & 5 & 5 & 5 \\ \(\sigma\) & \(10^{-4}\cdot\beta\) & \(10^{-4}\cdot\beta\) & \(10^{-4}\cdot\beta\) & \(10^{-4}\cdot\beta\) & \(10^{-4}\cdot\beta\) & \(10^{-4}\cdot\beta\) \\ \(\kappa\) & \(10^{4}\) & \(10^{4}\) & \(10^{4}\) & \(10^{4}\) & \(10^{4}\) & \(10^{4}\) \\ \(\gamma\) & - & - & 0.01 & 0.001 & 0.001 & 0.001 \\ Renorm Scale & 10 & 10 & 10 & 10 & 10 & 10 \\ \hline \end{tabular}
\end{table}
Table 5: Configuration of diffusion as co-design.
In Table 5, we list the configurations of diffusion as co-design. We follow (6)(7) for:
* \(K\) is number of MCMC sampling steps at the current diffusion time.
* \(\sigma\) is the standard deviation related to the MCMC step size.
* \(\kappa\) is the ratio between two types of design gradients.
* \(\gamma\) is the weight for trading off design and control optimization.
* \(t_{\max}\) and \(t_{\min}\) are the maximal and minimal diffusion time to perform diffusion as co-design, respectively.
* \(\Delta t\) is the diffusion time interval to perform diffusion as co-design.
For baselines, we use learning rates for control optimization following \(\gamma\) in Table 5; for particle-based and voxel-based approaches, we use learning rate \(0.01\) for design optimization; for implicit function and diff-CPPN, we use learning rate \(0.001\) for design optimization. For the inputs of implicit function and diff-CPPN, we use x, y, z coordinates in the local workspace, the distance to the workspace center on the xy, xz, yz planes, and the radius from the center. For the network architecture of implicit function, we use a 2-layer multilayer perceptron with hidden size \(32\) and Tanh activation. For the network architecture of diff-CPPN, we use Sin, Sigmoid, Tanh, Gaussian, SELU, Softplus, Clamped activations with 5 hidden layers and 28 graph nodes in each layer.
Hyperparameters are chosen mostly based on intuition and balancing numerical scale with very little tuning. In the following, we briefly discuss the design choices of all hyperparameters listed in Table 5 and Table 4. For min buffer size, samples per epoch, training iteration per epoch, and batch size, we roughly make sufficiently diverse the data used in the optimization and use the same setting for all tasks. For buffer size, we start with 60 and if we observe instability in optimization, we increase to 10 times, 600 (similar to online on-policy reinforcement learning); note that buffer size refers to the maximal size and increasing this won't affect runtime. For buffer Top-K, we start with 6 and if we observe limited diversity of generation throughout the optimization (or lack of exploration), we double it. For \(t_{max}\), \(t_{min}\), and \(\Delta t\), we roughly inspect how structured the generation in terms of achieving the desired robotic task to determine \(t_{max}\) and modify \(\Delta t\) accordingly to match the similar number of performing MCMC sampling (e.g., \(t_{max}/\Delta t\): \(400/50\approx 150/25\)). For the number of MCMC steps \(K\), we simply set 3 for passive tasks and 5 for active tasks by intuition. For \(\sigma\), we simply follow one of the settings in [14]. For the guidance scale \(\kappa\) and renorm scale, we check the numerical values between \(\epsilon\) and gradient from differentiable simulation and try to make them roughly in the similar magnitude, and set the same scale for all tasks for simplicity. For \(\gamma\), we set 0.001 for trajectory optimization and 0.01 for parameterized controllers based on our experience of working with differentiable physics. Overall, from our empirical findings, the only hyperparameters that may be sensitive include buffer size and buffer Top-K for optimization stability and generation diversity, and guidance scales, which need to be tuned to match the numerical magnitude of other terms so as to take proper effect.
here since there is no active control of robot bodies, making optimization on robot design a direct factor toward physical utility.
2. only involve lower-level control/motion without the complications of long-term or higher-level task planning: we select tasks that mostly involve few motor skills, e.g., in manipulation, instead of pick and place, we simply aim at picking up/gripping an object.
3. are commonly considered in other soft robot co-design literature: all proposed active tasks are widely used in the soft robot community, including crawling [8, 10, 47, 61], hurdling/jumping [26, 59, 3], and manipulating objects [5, 11, 38].
4. may induce more visible difference in robot designs between the performing and the non-performing ones to facilitate evaluation and algorithmic development: we select tasks more based on heuristics and intuition, e.g., in crawling, we expect leg-like structures may outperform other random designs.
We build our environments on top of SoftZoo [61] and employ the Material Point Method for simulation. Each environment is composed of boundary conditions that include impenetrable ground, and, in the case of fixed objects or body parts, glued particles. All tasks use units without direct real-world physical correspondence and last for 100 steps with each step consisting of 17 simulation substeps. In the following, when describing the size of a 3D shape without explicitly mentioning the axis, we follow the order: length (x, in the direction when we talk about left and right), height (y, parallel with the gravity direction), and width (z). All tasks are demonstrated in Figure 3 in the main paper.
**Balancing.** The robot is initialized atop a stick-like platform of shape 0.02-unit \(\times\) 0.05-unit \(\times\) 0.02-unit. The robot is givenan initial upward velocity of a 0.5-unit/second and allowed to free fall under gravity. The goal is for the robot to passively balance itself after dropping again on the platform; the performance is measured as the intersection over union (IoU) between the space occupied by the robot during the first simulation step and the space occupied by the robot during the last simulation step. The robot geometry is confined to a 0.08-unit \(\times\) 0.08-unit \(\times\) 0.08-unit workspace. There is no prescribed actuator placement or controller (passive dynamics).
**Landing.** The robot is initialized to be 0.08-unit to the right and 0.045-unit above the landing target with size of 0.02-unit \(\times\) 0.05-unit \(\times\) 0.02-unit. The robot is given an initial velocity of 0.5-unit/second to the right. The goal of the robot is to land at the target; the performance is measured as the exponential to the power of the negative distance between the target and the robot in the last frame \(e^{-||p^{\text{subject}}_{H}-p^{\text{orbit}}_{0}||}\), where \(p_{H}\) is the position of the robot or object at the last frame with horizon \(H\). The robot geometry is confined to a 0.08-unit \(\times\) 0.08-unit \(\times\) 0.08-unit workspace. There is no prescribed actuator placement or controller (passive dynamics).
**Crawling.** The robot is initialized at rest on the ground. The goal of the robot is to actuate its body to move as far away as possible from the starting position; the performance is measured as the distance traveled \(||p^{x,\text{robot}}_{H}-p^{x,\text{robot}}_{0}||\), where \(p^{x,\text{robot}}_{r}\) is the position of the robot in the x axis at a certain frame with horizon \(H\). The robot geometry is confined to a 0.08-unit \(\times\) 0.08-unit \(\times\) 0.08-unit workspace. The actuator placement is computed by clustering the local coordinates of the robot centered at its average position in the xz (non-vertical) plane into 4 groups. Each group contains an actuator with its direction parallel to gravity. The prescribed controller is a composition of four sine waves with frequency as 30hz, amplitude as 0.3, and phases as 0.5\(\pi\), 1.5\(\pi\), 0, and \(\pi\) for the four actuators.
**Hurdling.** An obstacle of shape 0.01-unit \(\times\) 0.03-unit \(\times\) 0.12-unit is placed in 0.07-unit front of the robot. The goal of the robot is to jump as far as possible, with high distances achieved only by bounding over the obstacle. The performance is measured as the distance traveled. The robot geometry is confined to a 0.08-unit \(\times\) 0.08-unit \(\times\) 0.08-unit workspace. The actuator placement is computed by clustering the local coordinates of the robot centered at its average position in the length direction into 2 groups. Each group contains an actuator aligned parallel to gravity. The prescribed controller takes the form of open-loop, per-step actuation sequences, set to linearly-increasing values from (0.0, 0.0) to (1.0, 0.3) between the first and the thirtieth frames and zeros afterward for the two actuators (the first value of the aforementioned actuation corresponds to the actuator closer to the obstacle) respectively.
**Gripping.** An object of shape 0.03-unit \(\times\) 0.03-unit \(\times\) 0.03-unit is placed 0.08-unit underneath the robot. The goal of the robot is to vertically lift the object; the performance is measured as the vertical distance of the object being lifted \(||p^{y,\text{object}}_{H}-p^{y,\text{object}}_{0}||\), where \(p^{y,\text{object}}_{\cdot}\) is the position of the object in the y axis at a certain frame with horizon \(H\). Within a 0.06-unit \(\times\) 0.08-unit \(\times\) 0.12-unit workspace, we decompose the robot into a base and two attached submodules, and we set the output of DiffuseBot or other robot design algorithms as one of the submodules and make constrain the other submodule to be a mirror copy; conceptually we design "the finger of a parallel gripper." The base is clamped/glued to the upper boundary in z and given a large polyhedral volume to serve as the attachment point of the gripper finger submodules. The actuator placement is computed by clustering the local coordinates of the robot centered at its average position in the length direction into 2 groups; each submodule has one pair of actuators. Each actuator is aligned parallel to gravity. Overall, one pair of actuators comprises the "inner" part of the gripper and the other comprises the "outer" part. Suppose the actuation of the two pairs of actuators is denoted in the format of (actuation of the outer part, actuation of the inner part), the prescribed controller is per-frame actuation, set to (i) linearly-increasing values from (0.0, 0.0) to (0.0, 1.0) between the first and the fiftieth frames, and (ii) then linearly-decreasing values from (1.0, 0.0) to (0.0, 0.0) between the fiftieth and the last frames.
**Moving a box.** A 0.03-unit \(\times\) 0.03-unit \(\times\) 0.03-unit cube is placed on the right end of the robot (half a body length of the robot to the right from the robot center). The goal of the robot is to move the box to the left; the performance is measured as the distance that the box is moved to the right with respect to its initial position \(||p_{H}^{x,\text{object}}-p_{0}^{x,\text{object}}||\), where \(p_{t}^{x,\text{object}}\) is the position of the object in the x axis at a certain frame with horizon \(H\). The robot geometry is confined to a 0.16-unit \(\times\) 0.06-unit \(\times\) 0.06-unit workspace. The actuator placement is computed by clustering the local coordinates of the robot centered at its average position in the height direction into 2 groups. Each group contains an actuator aligned parallel to the ground. The prescribed controller is per-frame actuation, initialized to linearly-increasing values from (0.0, 0.0) to (1.0, 0.0) between the first and last frame for the lower and the upper actuator respectively.
## Appendix E Analysis On Robotizing
As mentioned in Section 2.3, Material Point Method simulation requires solid geometry for simulation; thus, we need to convert the surface point cloud from Point-E [39] to a volume. The most direct means of converting the point cloud to a solid geometry is to compute the signed distance function (SDF), and populate the interior of the SDF using rejection sampling. We refer to this baseline as _Direct SDF_. Here, we use a pretrained transformer-based model provided by Point-E as the SDF. In Figure 8, we compare Direct SDF with our approach described in Section 2.3 Solid Geometry. We perform robotizing on intermediate samples at t=300. We observe that Direct SDF fails to produce well-structured solid geometry since it is trained with watertight on geometry, and thus the conversion cannot gracefully handle generated point clouds that do not exhibit such watertight structure. This is specifically common in the intermediate diffusion sample \(\mathbf{x}_{t}\) as \(\mathbf{x}_{t}\) is essentially a Gaussian-noise-corrupted version of the clean surface point cloud. In contrast, the robotizing of DiffuseBot produces a much well-structured solid geometry since it explicitly handles the noisy interior 3D points by introducing a tailored loss as in Shape As Points optimization [40] (see Section 2.3). In addition, a better-structured robot geometry is critical to obtain not only a more accurate evaluation of a robot design at the forward pass of the simulation but also the gradients from differentiable physics at the backward pass. In Figure 8, we further perform co-optimization on the two robots obtained by Direct SDF and DiffuseBot; we observe a more stably increasing trend of task performance in our approach, demonstrating that the gradient is usually more informative with a better-structured robot. Empirically, while Direct SDF sometimes still provides improving trend with unstructured robots in co-design optimization, the performance gain is not as stable as DiffuseBot.
## Appendix F Additional Visualizations Of Experiments
In this section, we show more visualization for a diverse set of the experimental analysis. Please visit our project page ([https://diffusebot.github.io/](https://diffusebot.github.io/)) for animated videos.
**Generated robots performing various tasks.** In Figure 9, we display a montage of a generated robot's motion for each task; this is the unabridged demonstration of Figure 5 in the main paper.
**Physics-guided robot generation** In Figure 10, we show how DiffuseBot evolves robots throughout the embedding optimization process; this is the full demonstration of Figure 4 in the main paper. Notethat DiffuseBot is a generative algorithm and thus results presented are for demonstration purposes and reflect only a single fixed random seed.
**Diffusion process of robot generation.** In Figure 11, we show how robots evolve throughout the diffusion process at different diffusion time via \(\mathbf{x}_{t}\) with \(t\) from \(T\) to \(0\); not to be confused by Figure 10 where all generations are obtained _via_ a full diffusion sampling \(\mathbf{x}_{0}\) and different conditioning embeddings \(\mathbf{c}\). This figure demonstrates the robotized samples across diffusion times, gradually converging to the final functioning robots as \(t\) approaches \(0\).
**Comparison with baselines.** In Figure 12, we present robots generated from different baselines. Note that Point-E is essentially DiffuseBot without embedding optimization and diffusion as co-design.
**Incorporating human feedback.** In Figure 13, we showcase more examples of incorporating human textual feedback to the generation process of DiffuseBot. As briefly mentioned in Section 3.4 in the main paper, we leverage the compositionality of diffusion-based generative models [34, 14]. Specifically, this is done by combining two types of classifier-free guidance as in (3); one source
Figure 9: Examples of robots bred by DiffuseBot to achieve the all presented tasks.
of guidance is derived from the task-driven embedding optimization while the other is derived from embeddings from human-provided textual descriptions. These two conditional scores, namely \(\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{c}_{\text{physics}})\) and \(\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{c}_{\text{human}})\), can then be integrated into the diffusion as co-design framework as in (6). We refer the reader to the original papers for more details and theoretical justification. We demonstrate examples of: the balancing robot with additional text prompt _"a ball"_, the crawling robot with additional text prompt _"a star"_, the hurdling robot with additional text prompt _"a pair of wings"_, and the moving-a-box robot with additional text prompt _"thick"_. Interestingly, human feedback is introduced as an augmented "trait" to the original robot. For example, while the hurdling robot keeps the tall, lengthy geometry that is beneficial for storing energy for jumping, a wing-like structure appears at the top of the robot body instead of overwriting the entire geometry. This allows users to design specific parts for composition while also embedding fabrication and aesthetic priorities and constraints through text. We leave future explorations of these ideas for future work.
Figure 10: Examples of DiffuseBot evolving robots to solve all presented tasks.
## Appendix G Hardware Design and Fabrication
To create a real-world counterpart of the DiffuseBot, we fabricated a proof-of-concept physical robot gripper, as illustrated in Figure 14(a). The gripper was designed to match the shape of the digital gripper and successfully demonstrated its ability to grasp objects, as shown in Figure 15. A video demonstrating the grasping process can be found on our project page ([https://diffusebot.github.io/](https://diffusebot.github.io/)).
During the translation from the simulated design to a physical robot, we aimed to minimize differences between the simulated and physical designs. However, due to hardware and fabrication limitations, certain modifications were necessary.
One such challenge involved replicating the arbitrary contraction force at the particles from the simulation in the physical world. To address this, we employed tendon transmission as the actuation method for the real-world gripper, which was found to be suitable for emulating interaction forces
Figure 11: Demonstrations of generations changing from noises to robots with physical utility throughout the diffusion sampling process. We present snapshots at diffusion times \(t=600,400,200,0\) for all presented tasks.
from the digital world without introducing significant discrepancies. To enhance the stability of the tendon-driven actuation, we incorporated a rigid base ("Base" in Figure 14(a)) above the gripper itself ("Gripper" in Figure 14(a)). This base was designed to withstand the reaction force generated by the tendon-driven actuation. Furthermore, we added four tendon routing points (represented by small dotted circles in Figure 14(b)) on each finger to securely fix the tendon path. By utilizing four Teflon tubes (shown in Figure 14(b)), we were able to position complex components such as motors, batteries, and controllers away from the gripper, reducing its complexity.
The actuation strategy of the simulated gripper applies equal contraction force to both fingers simultaneously. To replicate this strategy, we employed underactuated tendon routing in the development of the gripper. This approach eliminates the need for four separate actuators, thereby reducing the complexity of the robot. We used tendon-driven actuators specifically designed for underactuated tendon-driven robots as they solve practical issues such as size and friction issues that commonly arise in such systems [30].
Figure 12: Comparison of robots generated by different baselines.
The soft gripper was 3D-printed using a digital light projection (DLP) type 3D printer (Carbon M1 printer, Carbon Inc.) and commercially available elastomeric polyurethane (EPU 40, Carbon Inc.). To enhance the softness of the robot body beyond the inherent softness of the material itself, we infilled the finger body with a Voronoi lattice structure, a metamaterial useful for creating a soft structure with tunable effective isotropic stiffness [20; 36]. We generated a Voronoi lattice foam with point spacing of 2.5mm, and a beam thickness of 0.4 mm as shown in the red boundary of Fig. 14(c). Finally, we tested the soft gripper, designed and fabricated as described above, to verify its ability to grasp an object, as shown in Figure 15.
**More discussion on fabrication.** Although, at present, the compilation of the virtual robot to a physical, digitally fabricated counterpart involves manual post-processing of algorithm's output, most, if not all of these steps could be automated. Our method outputs a point cloud (defining geometry), actuator placements, and an open-loop controller, along with a prescribed stiffness. Since we can easily convert the point cloud into a 3D triangle mesh, the geometry can be created by almost any 3D printing method. In order to realize an effective stiffness and material behavior, stochastic lattices, specifically Voronoi foams, have been used [36; 20] in the past and employed here in order to match target material properties. Given the actuator placement, tendons [27; 30] can be aligned with the prescribed (contiguous) regions. Since a lattice is used, threading tendons through the robot body is simple, and we note that even more complex routings have been studied in detail in the literature [4]. Creating attachment points for the tendons is a relatively simple geometry processing problem [6]. Thus, converting a virtual robot to a specification that incorporates geometry, material, and actuation can be automated in a straightforward way.
Figure 14: **(a)** shows a fabricated proof-of-concept physical robot gripper; **(b)** describes detailed robot configuration; **(c)** represents interior structure design used to soften the robot body.
Figure 13: More examples of incorporating human textual feedback into robot generation by DiffuseBot.
We note that when controlled, the physical robot may not always match the virtual robot's motion. This is the sim-to-real gap, and is significantly harder to overcome in our case than translating the virtual robot to physical hardware. Significant literature has been invested in specifically tackling the sim-to-real gap, and in our case would require its own dedicated project; however, we note that often hardware can be adapted to work by modifying only the control policies using feedback from real-world experiments, often even with little human intervention [21].
**Quantitative analysis**. In Figure 16, we provide a detailed analysis on the gap between simulation and physical robot. In order to explore the quantitative gap between the behavior of the physical robot and the simulated robot, we conducted an experiment with the following conditions, where similar setups are commonly adopted in soft robot literature [17]. The objective was to measure the change in distance between two tips when we pull/release two tendons - one for closing the gripper (flexion) and the other for opening it (extension). The tendons were pulled or released in increments and decrements of 2mm.
When contracting the tendon to flex or extend the fingers, both simulation and real robot results show log-shaped graphs. The pattern in the physical robot plot is a commonly observed phenomenon called hysteresis. However, the main difference between the simulation and real-world cases can be seen when releasing the tendon from a fully contracted state. In the real robot experiment, the tip distance changes rapidly, while in the simulation, the opposite effect is observed.
One plausible explanation for this disparity could be attributed to the friction direction and elongation of the tendons. During the transition from tendon contraction to tendon release, the tension of the tendon at the end-effector may change suddenly due to the change of the friction direction. Also, since we only control the motor position (not the tendon position) to pull/release the tendon with 2mm step, the exact tendon length may not be exactly the same when we consider the tendon elongation.
**Additional real robots**. In Figure 17, we show two other fabricated robots for the moving a box task and the crawling task. For moving a box, we can achieve similar motion as in the simulation; the only slight difference occurs when the stiffness in the physical robot is not as soft as that in the simulation, falling a bit short to perform the extremely curly motion in the simulation. For crawling, we found a more significant sim-to-real gap where the physical robot cannot achieve at all the galloping-like motion in the simulation; instead, the robot crawls more alike a inchworm movement, pushing the body in a quasi-static way. The major reason is the damping of the material (elastomeric polyurethane, EPU 40, Carbon Inc.) prohibit from releasing sufficient energy for dynamical motion.
**Potential solutions**. Given that the gap between simulation and real robot performance seems to originate from the actuation/transmission method, our future work will focus on developing a tendon-driven actuation simulation framework. This framework aims to address the differences and improve the accuracy of our simulations. We are exploring other possible explanations for the sim-to-real gap and will investigate any additional factors that may contribute to the observed discrepancies. Overall, as for a high-level general solution, we believe (1) adjusting parameters based on observed sim to real gap and repeat the design process or (2) building a more accurate physics-based simulation (which can be straightforwardly plug-and-played in DiffuseBot) can largely bridge the sim-to-real gap of fabricating physical robots; or more interestingly, connecting generative models to commercial-level design and simulation softwares.
Figure 15: Grasping motion of the real-world soft gripper from DiffuseBot. Time order from **(a)** to **(c)**. The demo video can be seen at our project page.
## Appendix H More Discussions
### Limitation
The major limitation of DiffuseBot is that we make a simplification in the parameterization of actuators and stiffness; we make dependencies of the two design specifications on robot geometry (check more technical details in Section 2.3 paragraph _Actuators and Stiffness_. This works well with properly-crafted mapping from geometry to the others yet limits the potential by human prior with little use of the generative power. While this may be reasonable as properly-set actuators and stiffness based on geometry (hand-tuned empirically in this work) roughly reflects task performance, a more flexible parameterization can definitely lead to improved performance. Potential remedy can be using part-based 3D generative models for actuators and direct optimization for stiffness. Another limitation is the gap between simulated results and real robots. While the hardware experiment has shown as a proof-of-concept that minimally demonstrates the potential, physical robot fabrication and real-world transfer have countless non-trivial challenges including stiffness and actuator design, sim-to-real gap, etc. This may require studies on more high-fidelity physics-based simulation, which can be straightforwardly plugged into DiffuseBot.
### Conditioning Beyond Text
The use of textual inputs additional to the embeddings optimized toward physical utility is achieved by both being able to be consumed by the diffusion model to produce guidance for the diffusion process. More concretely speaking, in DiffuseBot, we use the CLIP feature extractor as in Point-E and it allows to extract embedding for both text and image modalities, which can then be used as a condition in the diffusion model. Thus, we can also incorporate images as inputs and perform the exact same procedure as that of the textual inputs. Theoretically, the textual inputs are incorporated via following the intuition in end of the paragraph _Diffusion as Co-design_ in Section 2.4, where the textual inputs additionally provide gradients toward following the textual specification. Similarly, the image inputs can also be processed to provide gradients since CLIP embeddings live in a joint space of images and languages. More interestingly, if we build DiffuseBot on models other than Point-E, which can consume embeddings for other modalities like audio as conditioning, we can then straightforwardly perform robot design generation guided by the other corresponding input formats (and meanwhile, toward physical utility). Note that this critical feature of compositionality across different sources of guidance throughout the reverse diffusion process is one of the biggest advantages of using diffusion-based models as opposed to other types of generative models.
Figure 16: Quantitative analysis of behavior between simulation and physical robots.
Figure 17: Other proof-of-concept real robot fabrication for moving a box and crawling.
### Connection To Text Inversion
There is some synergy between text inversion in [19] and embedding optimization in DiffuseBot. Both of them aim at tuning the embedding toward reflecting certain properties of the output generation, i.e., describing the output generated images in [19] and toward improved physical utility in DiffuseBot. The major difference lies in the nuance of the data/samples used to carry out the optimization. Text inversion performs a direct optimization using latent diffusion model loss (Eq. (2) in [19]), which computes losses on noisy samples/latents corrupted from the real dataset. On the other hand, it is tricky to think about real dataset in robot design (as discussed in the second paragraph of Section 1 and the paragraph _Embedding Optimization_ in Section 2.4), embedding optimization in DiffuseBot computes losses on noisy samples corrupted from self-generated data filtered by robot performance (as in Algorithm 1 and Section 2.4). Conceptually, it is more like a mixture of diffusion model training and online imitation learning like DAGGER [45].
### Connection To Diffusion Models With Physical Plausibility
A potential and interesting way to adapt DiffuseBot to other applications like motion planning or control [28, 1] is to view a generated robot as one snapshot/frame of a motion/state sequence and the physics prior can be the dynamics constraint across timesteps (e.g., robot dynamics or contact dynamics that enforce non-penetration). The physics prior can be injected similarly to diffusion as co-design that propagates the enforcement of physical plausibility of generated states from differentiable physics-based simulation to diffusion samples. For example, considering states in two consecutive timesteps, we can compute loss in the differentiable simulation to measure the violation of physical constraints regarding robot dynamics or interaction with the environment. Then, we can compute gradients with respect to either control or design variables; for gradients in control, this will essentially augment works like [28, 1] with classifier-based guidance to achieve physical plausibility; for gradients in design, this will much resemble optimizing toward the motion sequence of a shape-shifting robot.
### Parameterization Of Actuator And Stiffness
The goal of DiffuseBot is to demonstrate the potential of using diffusion models to generate soft robot design and to leverage the knowledge of the pre-trained generative models learned from a large-scale 3D dataset. Under this setting, the generated output of the diffusion model can only provide the geometry information of robot designs, leading to our design choice of having full dependency of actuator and stiffness on the geometry. This may be a reasonable simplification as prior works [48] have shown geometry along with properly-set actuator and stiffness (we take manual efforts to design proper mapping from geometry to actuator and stiffness in this work) roughly reflect the performance of a soft robot design. For better generality, one potential remedy is to optimize actuator and stiffness independently from the geometry generated by the diffusion model, i.e., apply DiffuseBot and do direct optimization on actuator and stiffness afterward or at the same time. Another interesting direction may be, for actuators, to leverage part-based models [29] to decompose a holistic geometry into parts (or different actuator regions in soft robots). | ## Review
### Summary
This paper introduces DiffuseBot, a physics-augmented diffusion model designed for generating and optimizing the morphologies and control mechanisms of soft robots. It aims to bridge the gap between simulated designs and physical utility by augmenting the diffusion process with physical simulation and co-design optimization. The paper presents extensive experiments validating DiffuseBot's effectiveness across various tasks, demonstrating its application in both simulation and real-world scenarios. The innovative approach combines evolutionary algorithms with gradient-based optimization, leading to successful design outcomes and a proof-of-concept physical robot.
### Strengths
- The paper introduces a novel framework that integrates diffusion-based synthesis with physical dynamical simulation for co-designing soft robots.
- Extensive experimental validation on a diverse range of tasks demonstrates the versatility and effectiveness of DiffuseBot.
- The method leverages differentiable physics to optimize designs, enhancing physical utility and performance.
- The writing is clear and accessible, making complex concepts understandable even for readers unfamiliar with soft robotics.
- The authors provide a proof-of-concept by fabricating a real-world robot, showcasing practical applicability.
### Weaknesses
- Some sections of the paper could benefit from clearer exposition, particularly regarding the co-design process and its intuitive understanding.
- The robot's actuator and stiffness models appear oversimplified, raising concerns about their realism in practical applications.
- The paper lacks detailed discussions on the limitations and failure scenarios of the proposed method.
- The evaluation could be strengthened by including more recent and robust baseline comparisons.
- Ablation studies to assess the impact of the physics-augmented component are insufficiently addressed.
### Questions
- Can the authors provide more detailed quantitative results highlighting discrepancies between simulated and real-world robot performance?
- What are the structural biases referred to in line 86, and how is k-means clustering for actuator generation performed?
- Can limitations of the current pipeline and discussions about manufacturing feasibility be included in the paper?
- How sensitive is the current pipeline to hyper-parameters, and how are design choices made?
- Is the performance metric success rate as reported in Tables 1 and 2?
### Soundness
**Score:** 3
**Description:** 3 = good. The methodologies and results presented are generally sound, with solid experimental validation, though there are areas that require clearer justification and discussion.
### Presentation
**Score:** 3
**Description:** 3 = good. The paper is well-written and comprehensible, but certain sections could benefit from improved clarity and organization.
### Contribution
**Score:** 3
**Description:** 3 = good. The paper presents a significant contribution to the field of soft robotics through its innovative use of diffusion models and physical simulation, although some claims of novelty could be better substantiated.
### Rating
**Score:** 7
**Description:** 7 = Accept, but needs minor improvements. The paper is technically solid with high impact potential but requires some clarifications and enhancements in presentation and experimental rigor.
### Paper Decision
**Decision:** Accept (oral)
**Reasons:** The paper's originality and innovative approach in combining diffusion models with physical simulation present a significant advancement in soft robotics design. The soundness of the methodology, despite some clarity issues, supports a positive evaluation. The extensive experimental results, including practical applications, underscore its significance. Minor improvements in clarity and addressing specific weaknesses will enhance the paper's impact, but overall, it is well-suited for acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# CAPro: Webly Supervised Learning with Cross-Modality Aligned Prototypes
Yulei Qin\({}^{1}\) Xingyu Chen\({}^{2}\) Yunhang Shen\({}^{1}\) Chaoyou Fu\({}^{1}\)
Yun Gu\({}^{3}\) Ke Li\({}^{1}\) Xing Sun\({}^{1}\) Rongrong Ji\({}^{4}\)
\({}^{1}\)Tencent YouTu Lab \({}^{2}\)ByteDance
\({}^{3}\)Shanghai Jiao Tong University \({}^{4}\)Xiamen University
[email protected]
###### Abstract
Webly supervised learning has attracted increasing attention for its effectiveness in exploring publicly accessible data at scale without manual annotation. However, most existing methods of learning with web datasets are faced with challenges from label noise, and they have limited assumptions on clean samples under various noise. For instance, web images retrieved with queries of "_tiger cat_" (a cat species) and "_drumstick_" (a musical instrument) are almost dominated by images of tigers and chickens, which exacerbates the challenge of fine-grained visual concept learning. In this case, exploiting both web images and their associated texts is a requisite solution to combat real-world noise. In this paper, we propose Cross-modality Aligned Prototypes (CAPro), a unified prototypical contrastive learning framework to learn visual representations with correct semantics. For one thing, we leverage textual prototypes, which stem from the distinct concept definition of classes, to select clean images by text matching and thus disambiguate the formation of visual prototypes. For another, to handle missing and mismatched noisy texts, we resort to the visual feature space to complete and enhance individual texts and thereafter improve text matching. Such semantically aligned visual prototypes are further polished up with high-quality samples, and engaged in both cluster regularization and noise removal. Besides, we propose collective bootstrapping to encourage smoother and wiser label reference from appearance-similar instances in a manner of dictionary look-up. Extensive experiments on WebVision1k and NUS-WIDE (Web) demonstrate that CAPro well handles realistic noise under both single-label and multi-label scenarios. CAPro achieves new state-of-the-art performance and exhibits robustness to open-set recognition. Codes are available at [https://github.com/yuleiqin/capro](https://github.com/yuleiqin/capro).
## 1 Introduction
Large-scale annotated datasets (_e.g._, ImageNet [1]) were the driving force behind the revolution of computer vision in the past decade, but the expensive and time-consuming collection process now becomes the bottleneck of model scaling. Consequently, researchers seek to crawl web images, where search queries and user tags are directly used as labels. However, the large proportion of noise in web datasets (_e.g._, 20% in JMT-300M [2], 34% in WebVision1k [3], and 32% in WebFG496 [4]) impedes learning visual concepts. Many studies on Webly Supervised Learning (WSL) are conducted to reduce the negative impact of noise and effectively explore web data [5, 6, 7, 8, 9, 10].
Early WSL methods claim that simply scaling up datasets with standard supervised learning suffices to overcome web noise [3; 11; 2; 12]. Such a solution comes at the cost of huge computation resources, and the supervision source from noisy labels is proved suboptimal [13; 10; 14]. Therefore, various techniques are designed to reduce noise [15], such as neighbor density [16], guidance of clean samples [17; 18; 19], confidence bootstrapping [20; 21; 22], and side information [23; 24].
Despite the promising improvement, the above-mentioned methods still face challenges. First, most of them address certain types of noise such as label-flipping noise and out-of-distribution (OOD), neglecting the critical-yet-under-explored **semantic noise**. To clarify, semantic noise is caused by the misalignment between image contents and the associated texts when the search query (_e.g._, class) has multiple and ambiguous interpretations and the retrieved images do not correspond to the correct semantics. Without proper contextual information, it is rather difficult to pinpoint clean examples in polysemy classes. Second, the dominant idea of bootstrapping labels and discarding incorrect data is prone to noise overfitting [13]. The model predictions on individual images vary sharply over training epochs and such inconsistency also makes WSL inefficient. Some methods [4; 25; 26] also maintain peer models and require alternative steps to improve the reliability of bootstrapping. However, their complicated training procedures restrict scalability in practice.
To this end, we propose CAPro: **C**ross-modality **A**ligned **P**rototypes for robust representation learning from web data. Compared with previous prototypical methods [19; 27; 28], CAPro is able to handle label-flipping noise, OOD, and especially the semantic noise that remains unexplored (see Fig. 1).
First, CAPro exploits web data across modalities to formulate _semantically-correct textual and visual prototypes_. Since visual prototypes simply formed with images suffer from semantic ambiguity, we propose **text matching** to leverage textual prototypes to establish their noise-robust estimation. Motivated by successful language models [29; 30; 31], we extract textual knowledge to imply the extent to which one instance is aligned with its textual prototype. Specifically, we prepare descriptive texts (_e.g._, definitions) of each category and project them into the embedding space as textual prototypes via the pre-trained language model. For each sample, its affiliated texts of the website title, caption, and tags are also encoded into the same embedding space. The similarity between a sample and its prototype indicates "cleanness". Since incomplete and mismatched image-text pairs introduce additional noise [32], we bring in **text enhancement** with text guidance from visually-similar neighbors to mitigate the effect of noisy texts on clean sample selection. We consecutively construct image and text graphs and rerank neighbor candidates for text matching and enhancement. Samples that exactly match the target semantics are chosen as anchors for the initialization of visual prototypes. These visual prototypes are continuously polished up by high-quality web images to improve generalizability and discriminability. During representation learning, the intra-class distance between prototypes and instances is minimized by contrastive learning for regularization. By means of _class-representative visual prototypes_, various kinds of noise can be filtered out.
Second, we propose **collective bootstrapping** (CB) to provide smoother label reference by _extending bootstrapping with collective knowledge_. For each sample, instead of bootstrapping its target independently [33; 34], CAPro keeps bootstrapping the entire dynamic dictionary and provides label reference in the mode of dictionary look-up. The dictionary keys are the learned representations from the sampled data while the query is the current encoded sample. We aggregate model predictions on all keys and use their weighted combination as the pseudo target, where weights are determined by the matching scores of query-key pairs. By penalizing deviation from such targets, The proposed
Figure 1: (a) We explore cross-modality alignment to select clean examples and generate visual prototypes with correct semantics. (b) Collective bootstrapping provides consistent label references and regularization from visual dictionary. (c) Compared to \(765\) unambiguous classes, our advantage is much more highlighted on \(235\) classes where semantic noise prevails due to polysemy concepts.
CB achieves two advantages: 1) It encourages consistent performance among the query and visually similar keys. 2) Unstructured noise is suppressed by referring to the dictionary for label regularization. Our CB can also be viewed as encoding the neighborhood structure of data in the low-dimensional space, where the closely matched keys are neighbors of the query. Inductive propagation of self-labels is implicitly realized through such a structure, which draws on the assumption of manifold regularization [35] that close samples probably belong to the same class.
In summary, our contributions are summarized as:
* We propose CAPro, a prototypical learning framework to efficiently handle various noises including label-flipping noise, OOD, and semantic noise for webly supervised learning.
* We integrate class prototypes across text and image modalities to align with unambiguous semantics. We verify that text enhancement by visual guidance is the key to handling noisy texts for clean sample selection, which in turn improves visual prototypes.
* We investigate collective bootstrapping as label regularization by matching one query to all keys in a dynamic dictionary for reference. We show that scaling up the nearest neighbors from the mini-batch to the entire dictionary better leverages the visual data structure.
Experiments on WebVision1k and NUS-WIDE (Web) confirm the competitiveness of CAPro with prior state-of-the-art methods. CAPro performs robustly against real-world noise under single-label and multi-label scenarios and demonstrates superiority in open-set recognition.
## 2 Related Work
Webly Supervised Learning (WSL)WSL aims at utilizing abundant but weakly labeled web data. It serves a range of tasks, such as recognition [36; 37; 38; 39; 40; 41], detection [42; 43], and segmentation [44; 45]. In this study, we focus on visual representation learning [46; 11; 2; 3; 47; 12]. To combat noise, previous studies combine web labels with the pseudo labels generated by the model. With respect to the pseudo labels, Hinton _et al._[34] adopts soft targets in a fashion of distillation.
Figure 2: **Overview of CAPro. Images \(\mathbf{x}_{i}\) and texts \(\mathbf{t}_{i}\) are respectively fed into the image and text encoders for features \(\mathbf{v}_{i}\) and \(\mathbf{s}_{i}\). Then, \(\mathbf{v}_{i}\) is projected into the embedding space as \(\mathbf{z}_{i}\), followed by the reconstruction from \(\mathbf{z}_{i}\) to \(\mathbf{\tilde{v}}_{i}\). Visual prototypes \(\mathbf{z}^{c}\) are initialized with anchor instances that are selected by matching enhanced texts \(\mathbf{\tilde{s}}_{i}\) to textual prototypes \(\mathbf{s}^{c}\) for semantic alignment. They are constantly polished up by clean images and engage in contrastive learning to constrain cluster distribution. Collective bootstrapping exploits visual dictionary for regularization on the auxiliary classifier output \(\mathbf{q}_{i}\), where each key embedding is matched to the query for the reference \(\mathbf{b}_{i}\). Web labels \(y_{i}\) are simultaneously refined as \(\tilde{y}_{i}\) for “denoised” supervision on the classifier output \(\mathbf{p}_{i}\).**
Tanaka _et al_. [21] considers both soft and hard targets and proposes an alternative optimization framework. Yang _et al_. [22] estimates the correctness of hard targets on a case-by-case basis and dynamically balances the two supervision sources with label confidence. Recently, the idea of prototypical learning [48] has been applied for WSL. Han _et al_. [49] predicts self-labels by prototype voting and uses a constant ratio to combine web and pseudo labels. Momentum Prototype (MoPro) [28] improves prototypes with the momentum update policy [50] for smooth label adjustment.
The differences between CAPro and the closely related MoPro exactly highlight our contributions. First, we take advantage of both textual and visual prototypes to handle semantic misalignment noise. MoPro neglects such noise and its prototypes could be overwhelmed by irrelevant samples, which impairs the subsequent sample correction. Second, MoPro does not refer to appearance-similar samples for self-labels, which is prone to real-world noise that causes unreasonable label updates. In contrast, CAPro adopts bootstrapping wisely by performing dictionary look-up: the model prediction of the current query sample refers to its matching keys in the dictionary, where the matching score by visual similarity determines the contribution of each key. Third, MoPro only tackles single-label representation learning while CAPro extends prototypical contrastive learning for the multi-label scenario, which is non-trivial due to the intricate nature of noisy multi-labeled data. In that case, CAPro maintains prototypes in subspaces of the shared embedding space, which not only resolves the inter-class contrastive conflicts but also fosters implicit exploitation of label dependency.
Noise-Robust Learning from NeighborsSeveral approaches attempt to correct labels with neighborhood consensus. Both Wang _et al_. [51] and Guo _et al_. [16] measure data complexity using local neighbor density for sample selection. Huang _et al_. [52] progressively discovers neighbors to form decision boundaries in an unsupervised manner. Bahri _et al_. [53] filters out training data whose label collides with the predictions of its \(k\)-NN. Wu _et al_. [54] employs topology to only keep the largest connected component of a \(k\)-NN graph. Neighborhood collective estimation [55] evaluates model confidence on the "cleanness" of each candidate by its neighbors. Ortego _et al_. [56] identifies correct examples by comparing original labels with the soft labels from their neighbors. Neighbors are also involved in consistency regularization [57, 58] to encourage similar outputs on samples within the \(k\)-neighborhood. Such inductive label propagation allows correct supervision to transfer directly to mislabeled data via the neighborhood structure.
Unfortunately, the aforementioned methods are not scalable due to the huge complexity of updating a global \(k\)-NN graph frequently. Both Li _et al_. [57] and Iscen _et al_. [58] only consider neighbors within a mini-batch for on-the-fly graph construction. However, web data tends to be sparsely distributed and the graph built within mini-batch samples hardly provides reliable neighbors. To deal with such a trade-off, CAPro pinpoints all potential neighbors in the dictionary by matching representations without explicit graph building. We maintain the dictionary as a queue whose length is much larger than the batch size, enabling collective bootstrapping from appropriate neighbors.
Nearest neighbors play a vital role throughout our CAPro, from text enhancement to matching and collective bootstrapping. Compared with previous methods, our mechanism differs in that: 1) We acquire guidance from cross-modality neighbors, where noisy texts are enhanced by image neighbors to alleviate the mismatch problem. In contrast, existing studies investigate neighbors of one modality. 2) We exploit reciprocal structures to filter nearest neighbors for pertinent text matching, while most works neglect those top-ranked false positive neighbors. 3) We resort to neighbors for on-line collective bootstrapping in a manner of dictionary look-up instead of explicit graph construction.
Learning with Visual-Semantic AlignmentVarious tasks seek to learn visual representations for semantic concepts, including retrieval [59, 60, 61, 62, 63], caption [64, 65, 66], matching [67, 68], visual question answer [69], and zero-shot learning [70, 71]. Recently, learning unified embeddings has been studied for foundation models by language-image pre-training [72, 73, 74, 75, 76, 77].
In WSL, textual metadata such as titles and hashtags are too scarce to carry out language-image pre-training. Few studies harness both images and texts to learn semantically-correct representations. Zhou _et al_. [23] designs a co-training scheme to extract semantic embeddings to transfer knowledge from head to tail classes. Cheng _et al_. [24] builds visual and textual relation graphs to choose prototypes by graph-matching. Yang _et al_. [78] builds a visual-semantic graph and uses a graph neural network for label refinement. Nevertheless, these methods do not take noisy texts into serious consideration and underestimate their negative effect on seed selection. We also find that image outliers are wrongly kept even with textual concept matching. For example, images of a rugby team are ranked top for the class "tiger-cat" just because the team name "tiger-cat" is frequently mentioned. On the contrary, CAPro introduces text enhancement by smoothing and reranking to improve its robustness to noise. Furthermore, the prototypes in [24; 78] are fixed during training, while CAPro keeps polishing up prototypes with clean examples for better domain generalizability.
Noisy Correspondence RectificationOne paradigm similar to WSL is noisy correspondence rectification or calibration [60; 65; 62; 66; 61; 63; 68; 79]. It tackles the mismatched image and text pairs and aims to simultaneously learn aligned visual and textual embeddings for improved cross-modal retrieval. Huang [65] utilizes the memorization effect of neural networks to partition clean and noisy data and then learns to rectify correspondence. Hu [61] derives a unified framework with contrastive learning to reform cross-modal retrieval as an N-way retrieval. Han [66] proposes a meta-similarity correction network to view the binary classification of correct/noisy correspondence as the meta-process, which facilitates data purification. Our CAPro differs in two aspects: 1) We focus on the label noise where images are wrongly-labeled by keywords or hashtags. Noisy correspondence emphasizes the instance-level mismatch between an image and its associated text. 2) We aim to learn visual representations with categorical labels while most methods on noisy correspondence align image and text embeddings to improve cross-modal retrieval.
## 3 Method
### Problem Definition and Framework Architecture
Given an interested class \(y_{i}\in\{1,...,C\}\), web data are collected as \(D=\{(\mathbf{x}_{i},\mathbf{t}_{i},y_{i})\}_{i=1}^{N}\), where \(\mathbf{x}_{i}\) and \(\mathbf{t}_{i}\) respectively denote the image and textual metadata. Due to the noise issues, \(y_{i}\) might not equal to the ground-truth \(y_{i}^{*}\). We aim to optimize a deep model \(\mathcal{F}(\theta_{e};\theta_{e})\) with parameters of an encoder \(\theta_{e}\) and a classifier \(\theta_{c}\). Existing WSL studies often neglect \(\mathbf{t}_{i}\) and seldom consider the intervention between images and texts. Comparatively, our CAPro unearths \(\mathbf{t}_{i}\) for aligning visual representations with semantic concepts, which facilitates correction of various kinds of noise.
CAPro consists of the following components (see Fig. 2). **Siamese image encoders** extract features \(\mathbf{v}_{i},\mathbf{v}_{i}^{\prime}\in\mathds{R}^{d_{v}}\) from inputs \(\mathbf{x}_{i}\) and their augmented counterparts \(\mathbf{x}_{i}^{\prime}\). Following MoCo [50], parameters of the first query encoder \(\theta_{e}^{1}\) are updated by back-propagation and those of the second key encoder \(\theta_{e}^{2}\) are updated by the momentum method. **A text encoder** generates embeddings \(\mathbf{s}_{i},\mathbf{s}^{c}\in\mathds{R}^{d_{t}}\) respectively from the instance \(\mathbf{t}_{i}\) and the category \(\mathbf{t}^{c}\). Any off-the-shelf language model can be used with its pre-trained encoder frozen. **A classifier**, via a fully-connected (FC) layer, maps \(\mathbf{v}_{i}\) to predictions \(\mathbf{p}_{i}\in\mathds{R}^{C}\) over \(C\) classes. **A projector** distills discriminative low-dimensional embeddings \(\mathbf{z}_{i}\in\mathds{R}^{d_{p}}\) from \(\mathbf{v}_{i}\). It has two FC layers, followed by \(\ell_{2}\)-normalization for unit-sphere constraint on \(\mathbf{z}_{i}\). **A reconstructor**, symmetric to the projector, recovers \(\tilde{\mathbf{v}}_{i}\) from \(\mathbf{z}_{i}\) to be close to \(\mathbf{v}_{i}\). **An auxiliary classifier**, of the same structure as the classifier, outputs predictions \(\mathbf{q}_{i}\in\mathds{R}^{C}\) on \(\mathbf{z}_{i}\). **A dictionary**, implemented as a queue of size \(Q\times d_{p}\), records keys for both contrastive learning and collective bootstrapping. The latest embeddings \(\mathbf{z}_{i}^{\prime}\) are enqueued while the oldest are dequeued.
Image encoders and classifiers are trained with a cross-entropy loss. Since features \(\mathbf{v}_{i}\) contain redundant description that is vulnerable to image corruption and domain gap, we emphasize class-indicative contents by learning a low-dimensional embedding space. Inspired by denoising autoencoders [80; 27], a projector and a reconstructor are designed to optimize the projection from \(\mathbf{v}_{i}\) to \(\mathbf{z}_{i}\). An auxiliary classifier helps retain the representation capacity of \(\mathbf{z}_{i}\).
\[\mathcal{L}_{i}^{\mathrm{cls}}=-\log(\mathbf{p}_{i(y_{i})}),\;\mathcal{L}_{i} ^{\mathrm{prj}}=\|\tilde{\mathbf{v}}_{i}-\mathbf{v}_{i}\|_{2}^{2}-\log( \mathbf{q}_{i(y_{i})}). \tag{1}\]
### Cross-Modality Alignment
Text EncodingFor raw texts in metadata, we remove all html tags, file format extensions, punctuations, digits, and stop words. Then, tokenization is performed in accordance with the language model in use. After that, we obtain the pre-processed metadata \(\mathbf{t}_{i}\) and use the text encoder to extract \(\mathbf{s}_{i}\).
Text EnhancementTo handle missing and mismatched texts, we assume that similar images should share similar texts, and consider text enhancement with guidance from visual data structure. One simple way of encoding visual structure is to build a global \(k\)-NN graph on \(\mathbf{v}_{i}\)[78]. However, our preliminary experiments show that the top-ranked neighbors may not be pertinent due to the noise and domain gap. To detect _truly matched neighbors_, we construct a \(k\)-reciprocal-NN graph [81]\(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) and use the re-ranking technique to evaluate neighbor relationship. Each node in \(\mathcal{V}\) denotes an image and the edge connectivity from \(\mathcal{E}\) is represented as the adjacency matrix \(A\).
\[A_{ij}=\begin{cases}1-d(\mathbf{v}_{i},\mathbf{v}_{j})&,\text{if }\mathbf{x}_{i} \in\mathcal{R}(\mathbf{x}_{j},k)\text{ or }\mathbf{x}_{j}\in\mathcal{R}(\mathbf{x}_{i},k),\\ 0&,\text{otherwise},\end{cases} \tag{2}\]
where \(\mathcal{N}(\mathbf{x}_{i},k)\) and \(\mathcal{R}(\mathbf{x}_{i},k)=\{\mathbf{x}_{j}|\mathbf{x}_{j}\in\mathcal{N}( \mathbf{x}_{i},k)\wedge\mathbf{x}_{i}\in\mathcal{N}(\mathbf{x}_{j},k)\}\) respectively denote \(k\)-NN and \(k\)-reciprocal-NN of \(\mathbf{x}_{i}.\) The cosine distance is used here: \(d(\mathbf{v}_{i},\mathbf{v}_{j})=1-\frac{\mathbf{v}_{i}\cdot\mathbf{v}_{j}}{ \|\mathbf{v}_{i}\|\|\mathbf{v}_{j}\|}.\) Neighbor re-ranking is achieved by re-calculating the pairwise distance. The vanilla cosine distance only weighs relative priority by measuring features, overlooking the context information of overlapped reciprocal neighbors. Hence, Jaccard Distance [81, 82, 83] is introduced to measure the intersection between reciprocal neighbors. The refined distance \(d^{*}(\mathbf{v}_{i},\mathbf{v}_{j})=\frac{1}{2}(d(\mathbf{v}_{i},\mathbf{v}_{ j})+d_{J}(\mathbf{v}_{i},\mathbf{v}_{j}))\):
\[d_{J}(\mathbf{v}_{i},\mathbf{v}_{j})\!=\!1\!-\!\frac{\sum_{k=1}^{N}\!\text{min }(V_{\mathbf{v}_{i}\mathbf{v}_{k}},V_{\mathbf{v}_{j}\mathbf{v}_{k}})}{\sum_{k= 1}^{N}\!\text{max}(V_{\mathbf{v}_{i},\mathbf{v}_{k}},V_{\mathbf{v}_{j}\mathbf{v }_{k}})},\,V_{\mathbf{v}_{i}\mathbf{v}_{j}}\!=\!\left\{\!\begin{array}{ll}\! \exp(\!-d(\mathbf{v}_{i},\mathbf{v}_{j}))&,\text{if}\mathbf{x}_{j}\in\mathcal{ R}(\mathbf{x}_{i},k)\\ 0&,\text{otherwise}.\end{array}\right. \tag{3}\]
Given \(\mathcal{G}\), smoothing is performed on \(\mathbf{S}=(\mathbf{s}_{1},\mathbf{s}_{2},...,\mathbf{s}_{N})\in\text{IR}^{N \times d_{t}}\) via graph convolution [84]: \(\hat{\mathbf{S}}=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}} \mathbf{S},\tilde{A}=A+I_{N},\tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij},\) where \(\tilde{A}\) refers to the adjacency matrix with self-connection and \(\tilde{D}\) is the diagonal degree matrix.
Textual PrototypesTo establish textual prototypes, we do not estimate \(\mathbf{s}^{c}\) from instances in one class (_e.g._, average) considering the insufficient, noisy nature of metadata. Instead, we refer to WordNet [85] for the vocabulary hierarchy [86, 78, 24]. For the \(c\)-th class, we extract its definition in WordNet and expand context with its siblings (synonyms), children (hyponyms), and parents (hypernyms). Then, we get \(\mathbf{t}^{c}\) and encode it for \(\mathbf{s}^{c}\). Such prototypes \(\mathbf{s}^{c}\) have two advantages: 1) It enriches semantic representations of classes. For instance, the comprehensive text of the class "tiger cat" is _a cat having a striped coat; domestic_cat, house_cat, felis_domesticus, felis_catus: any domesticated member of the genus Felis. It provides disambiguation to web instances of "tiger-cat" (_medium-sized wildcat in Central South America_) and "tiger, cat" (_large feline of forests in most of Asia having a tanwy coat with black stripes; endangered_). 2) It reveals the underlying inter-class relationship by language encoding. The structural information of class hierarchy is hard to infer from metadata instances but can be directly indexed in WordNet.
Text MatchingWith textual prototypes as queries, web instances with correct semantics can be retrieved by matching queries to their embeddings as keys. To improve precision, the same distance measurement in Eq. (3) for \(k\)-reciprocal-NN encoding is adopted to rerank the matched candidates. We sort samples by distance in an ascending order, and select the top-\(K\) as clean set \(D_{K}\).
\[D_{K}=D_{K}^{1}\cup D_{K}^{2}\cup...\cup D_{K}^{C},D_{K}^{c}=\{(\mathbf{x}_{i},\mathbf{t}_{i},y_{i})|(y_{i}=c)\wedge(d^{*}(\hat{\mathbf{s}}_{i},\mathbf{s}^ {c})\leq\sigma_{K}^{c})\}, \tag{4}\]
where \(\sigma_{K}^{c}\) denotes the \(K\)-th smallest distance in the \(c\)-th class.
Visual Prototypes\(D_{K}\) plays an anchoring role in shaping visual prototypes. We initialize the \(c\)-th prototype \(\mathbf{z}^{c}\) by averaging instances in \(D_{K}^{c}\): \(\hat{\mathbf{z}}^{c}=\frac{1}{K}\sum_{\mathbf{z}_{i}\in D_{K}^{c}}\mathbf{z}_ {i},\mathbf{z}^{c}=\frac{\mathbf{z}^{c}}{\|\hat{\mathbf{z}}^{c}\|_{2}}.\) Given such a good starting point, visual prototypes are consistently polished up by trustworthy web examples with a momentum coefficient \(m_{p}\): \(\mathbf{z}^{c}=m_{p}\mathbf{z}^{c}+(1-m_{p})\mathbf{z}_{i},\ \mathbf{z}^{c}= \frac{\mathbf{z}^{c}}{\|\mathbf{z}^{c}\|_{2}}.\) We perform instance-prototype contrastive learning to pull instances around their prototypes and push apart different class clusters. Instance-level discrimination is also encouraged to improve separation across classes.
\[\mathcal{L}_{i}^{\text{pro}}=-\log\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}^{y_{i }}/\tau)}{\sum_{c=1}^{C}\exp(\mathbf{z}_{i}\cdot\mathbf{z}^{c}/\tau)},\ \mathcal{L}_{i}^{\text{ins}}=-\log\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}^{c}_{i}/\tau)}{\sum_{j=1}^{Q}\exp(\mathbf{z}_{i}\cdot\mathbf{z}^{c}_{j}/\tau)}, \tag{5}\]
where \(\tau\) is a temperature coefficient.
Noise RemovalNoisy instances can be filtered out by self-prediction and instance-prototype similarity. We refer to MoPro [28] for rules of label adjustment by \(\mathbf{o}_{i}\in\mathrm{I\!R}^{C}\):
\[\mathbf{o}_{i}=\alpha\mathbf{p}_{i}+(1-\alpha)\mathbf{r}_{i},\ \mathbf{r}_{i(k)}=\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}^{k}/\tau)}{\sum_{c=1}^{C}\exp( \mathbf{z}_{i}\cdot\mathbf{z}^{c}/\tau)}, \tag{6}\]
where \(\alpha\) balances two terms. Given a threshold \(0\leq\gamma\leq 1\), the pseudo-label \(\hat{y}_{i}\) is estimated by:
\[\hat{y}_{i}=\left\{\begin{array}{ll}y_{i}&\text{if }\mathbf{x}_{i}\in D_{K},\\ \arg\max_{c}\mathbf{o}_{i(c)}&\text{else if }\max_{c}\mathbf{o}_{i(c)}>\gamma,\\ y_{i}&\text{else if }\mathbf{o}_{i(y_{i})}>1/C,\\ \text{Null }(OOD)&\text{otherwise}.\end{array}\right. \tag{7}\]
The above control flow guarantees continuous guidance from \(D_{K}\) on cluster separation. If the highest score of \(\mathbf{o}_{i}\) is above \(\gamma\), the label will be changed accordingly. To prevent aggressive elimination of hard examples, we keep an instance till the next epoch so long as its confidence is above average. Otherwise, it is removed as OOD. The refined label \(\hat{y}_{i}\) successively affects Eqs. (1) and (5).
### Collective Bootstrapping
Due to memorization [87], as the training epoch increases, deep models will be prone to overfit noise even with the carefully designed logic of noise removal. We assume that overfitting occurs less dramatically when a majority can be consulted for the model to avoid over-confident decision on one single instance. With regard to the consultancy basis, the low-dimensional embedding is a good choice because its distilled description about visual contents is robust enough. Therefore, we propose to exploit the large dictionary, which is originally set up for instance-wise contrastive learning, to realize collective bootstrapping by dictionary look-up. The matching scores of the current query \(\mathbf{z}_{i}\) to all keys \(\mathbf{z}^{\prime}_{j}\) in the dictionary act as the weights for the bootstrapped representations \(\mathbf{b}_{i}\in\mathrm{I\!R}^{C}\).
\[\mathbf{b}_{i}\!=\!\sum_{j=1}^{Q}w_{ij}(\alpha\mathbf{q}^{\prime}_{j}+(1-\alpha )\mathbf{r}^{\prime}_{j})\ w_{ij}\!=\!\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z }^{\prime}_{j}/\tau)}{\sum_{j=1}^{Q}\exp(\mathbf{z}_{i}\cdot\mathbf{z}^{\prime }_{j}/\tau)},\ \mathbf{r}^{\prime}_{j(k)}\!=\!\frac{\exp(\mathbf{z}^{\prime}_{j} \cdot\mathbf{z}^{k}/\tau)}{\sum_{c=1}^{C}\exp(\mathbf{z}^{\prime}_{j}\cdot \mathbf{z}^{c}/\tau)}. \tag{8}\]
We minimize the difference between predictions and bootstrapping targets via a KL-divergence loss.
\[\mathcal{L}^{\mathrm{bts}}_{i}=D_{KL}(\mathbf{q}_{i}\parallel\mathbf{b}_{i})= \sum_{c=1}^{C}\mathbf{q}_{i(c)}\log\frac{\mathbf{q}_{i(c)}}{\mathbf{b}_{i(c)}}. \tag{9}\]
It not only allows collaborative contribution to individual soft label estimation, but also encourages consistent performance on visually similar examples. Note that such regularization is imposed on the auxiliary classifier \(\mathbf{q}_{i}\). Compared with \(\mathbf{p}_{i}\), constraints on \(\mathbf{q}_{i}\) coincide with our contrastive learning setting without potential conflicts with the hard label assignment in Eq. (7). The total objective is: \(\mathcal{L}=\sum_{i=1}^{N}(1-\lambda^{\mathrm{bts}})\mathcal{L}^{\mathrm{cls}} _{i}+\lambda^{\mathrm{bts}}\mathcal{L}^{\mathrm{bts}}_{i}+\lambda^{\mathrm{ pri}}\mathcal{L}^{\mathrm{prj}}_{i}+\lambda^{\mathrm{pro}}\mathcal{L}^{\mathrm{pro}}_{i}+ \lambda^{\mathrm{ins}}\mathcal{L}^{\mathrm{ins}}_{i}.\)
## 4 Experiments
### Experimental Setup
We evaluate CAPro on WebVision1k [3] (Google500 [78]) and NUS-WIDE (Web) [88] for single-label and multi-label representation learning, respectively. They contain image-text pairs which are in line with our WSL setting. All datasets under investigation are described in Sec. A. We perform ablation studies on Google500 and NUS-WIDE for low cost without losing generalization [78; 22]. The R50 [89]/MiniLM (L6) [30] are used as image/text encoders by default. Exhaustive details about hyper-parameters, implementation, and training are elaborated in Secs. B C and Algo. 1.
### Comparison with the SOTA
Table 1 reports the top1/top5 accuracy of WebVision1k and Google500. Results of the SOTA methods trained and evaluated on the same datasets are quoted here. Due to different choices of image encoders and training strategies, prior methods may not be directly comparable. For example, VSGraph adopts
the same R50, but is trained with a batch-size of 1024. The benefits of a larger batch size have been validated in NCR, where the same method achieves 75.7% and 73.9% in top1 accuracy respectively for the batch size of 1024 and 256. We believe batch size is the reason that a vanilla baseline [78] surpasses most SOTAs. Due to the limited budget, training with a batch size of 1024 is currently not affordable, but we will experiment in the future. In this case, methods within each row group of Table 1 are fairly comparable with each other, including the vanilla trained by a cross-entropy loss.
CAPro achieves quite competitive performance on WebVision1k, with an improvement of 1.6% (top1 accuracy) over our vanilla. Although SCC and VSGraph respectively opt for stronger backbones (_e.g._, R50D) and longer training epochs (_e.g._, 150) with a larger batch size (_e.g._, 1024), CAPro still excels in terms of the top5 accuracy. Furthermore, our gain of 1% (top1 accuracy) on ImageNet1k demonstrates that CAPro is robust to the domain gap between web and real-world datasets. Web data include advertisements, artworks, and renderings that differ from realistic photographs. On Google500 and ImageNet500, CAPro outperforms existing methods despite our disadvantages.
Table 2 reports per-Class F1 (C-F1), Overall F1 (O-F1), and mean Average Precision (mAP) on NUS-WIDE. Most prior multi-label methods are developed for ground-truth labels, while we are concerned with noisy WSL settings. Under this circumstance, CAPro is compared with methods that are trained on NUS-WIDE (Web) and evaluated on clean testing set. Following [96, 78], the top three categories of an image by prediction confidence are chosen for metric calculation. CAPro reaches the SOTA with a significant increase of 1.5% (C-F1), 3.0% (O-F1), and 9.7% (mAP) over our vanilla.
### Discussion on Open-Set Recognition
To verify if CAPro can identify outliers of unknown categories, we conduct experiments on open-set recognition. Specifically, we train CAPro on Google500 and validate on the testing sets of WebVision1k and ImageNet1k. Images from the remaining 500 classes all belong to one open-set
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline Method & Back- & \multicolumn{3}{c}{WebVision1k} & \multicolumn{3}{c}{ImageNet1k} & \multicolumn{3}{c}{Google500} & \multicolumn{3}{c}{ImageNet500} \\ & bone & Top1 & Top5 & Top1 & Top5 & Top1 & Top5 & Top1 & Top5 \\ \hline MentorNet [17] & IRV2 [90] & 72.6 & 88.9 & 64.2 & 84.8 & \(-\) & \(-\) & \(-\) & \(-\) \\ Curriculum [16] & IV2 [91] & 72.1 & 89.1 & 64.8 & 84.9 & \(-\) & \(-\) & \(-\) & \(-\) \\ Multimodal [92] & IV3 [93] & 73.2 & 89.7 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline Vanilla [22] & R50D [94] & 75.0 & 89.2 & 67.2 & 84.0 & 75.4 & 88.6 & 68.8 & 84.6 \\ SCC [22] & R50D & 75.3 & 89.3 & 67.9 & 84.7 & **76.4** & 89.6 & 69.7 & 85.3 \\ \hline Vanilla\({}^{\dagger}\)[78] & R50 & 74.2 & 89.8 & 68.2 & 66.2 & 66.9 & 82.6 & 61.5 & 78.8 \\ CoTeach [20, 78] & R50 & \(-\) & \(-\) & \(-\) & \(-\) & \(67.6\) & 84.0 & 62.1 & 80.9 \\ VSGraph\({}^{\dagger}\)[78] & R50 & **75.4** & 90.1 & **69.4** & 87.2 & 68.1 & 84.4 & 63.1 & 81.4 \\ \hline Vanilla [28] & R50 & 72.4 & 89.0 & 65.7 & 85.1 & \(-\) & \(-\) & \(-\) & \(-\) \\ SOMNet [8] & R50 & 72.2 & 89.5 & 65.0 & 85.1 & \(-\) & \(-\) & \(-\) & \(-\) \\ Curriculum [16] & R50 & 70.7 & 88.6 & 62.7 & 83.4 & \(-\) & \(-\) & \(-\) & \(-\) \\ CleanNet [18] & R50 & 70.3 & 87.7 & 63.4 & 84.5 & \(-\) & \(-\) & \(-\) & \(-\) \\ SINet [24] & R50 & 73.8 & **90.6** & 66.8 & 85.9 & \(-\) & \(-\) & \(-\) & \(-\) \\ NCR [58] & R50 & 73.9 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ NCR\({}^{\dagger}\)[58] & R50 & 75.7 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ MILe [26] & R50 & 75.2 & 90.3 & 67.1 & 85.6 & \(-\) & \(-\) & \(-\) & \(-\) \\ MoPro [28] & R50 & 73.9 & 90.0 & 67.8 & 87.0 & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline Vanilla (ours) & R50 & 72.6 & 89.7 & 67.0 & 86.8 & 69.9 & 86.5 & 64.5 & 83.1 \\ CAPro (ours) & R50 & 74.2 & 90.5 & 68.0 & **87.2** & 76.0 & **91.3** & **72.0** & **89.2** \\ \hline \hline \end{tabular} \({}^{\dagger}\) Results on WebVision1k are under optimized training settings with batch size of 1024.
\end{table}
Table 1: Results on WebVision1k and Google500. Best/2nd best are marked bold/underlined.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Method & \multicolumn{2}{c}{WebVision} & \multicolumn{2}{c}{ImageNet} \\ & bone & C-F1 & O-F1 & \multicolumn{1}{c}{mAP} \\ \hline Vanilla [78] & R50 & 37.5 & 39.6 & 43.9 \\ VSGraph [78] & R50 & 38.6 & 40.2 & 44.8 \\ MCPL [95] & R101 & 22.5 & 17.2 & 47.4 \\ \hline Vanilla (ours) & R50 & 37.8 & 42.4 & 38.3 \\ CAPro (ours) & R50 & **39.3** & **45.4** & **48.0** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on NUS-WIDE (Web).
category. We follow [97, 98, 78] to classify an image as open-set if its highest prediction confidence is below a threshold. The average C-F1 is adopted to reflect whether a model can discriminate between base and novel classes (501 in total). Table 3 confirms our superiority over existing methods, showing that CAPro is capable of detecting semantic novelty. More analysis can be found in Sec. E.
### Ablation Study
Text Encoding and EnhancementTable 4 reveals the benefit from cross-modality alignment. For the method without text encoding and enhancement, we sample \(K\) examples randomly from each category. These instances barely provide reliable prototypes with semantic correctness. With respect to the text encoder, we additionally validate XLNet (base) [29] and GPT-Neo (1.3B) [31]. MiniLM surpasses XLNet by a minor margin, but both exhibit similar performance with our enhancement. GPT-Neo displays its power even with the plain \(k\)-NN-based smoothing in VSGraph [78], implying that advanced a Large Language Model (LLM) would boost CAPro. Note that encoders in CLIP [76] are not applicable here to avoid visual data leakage. All methods with text encoding and enhancement outperforms the vanilla one, validating the guidance from textual knowledge. Besides, we notice an increase up to 3.8%/4.7% on Google500/ImageNet500 by our text enhancement, showing that proper noise suppression is indispensable. Fig. 3 presents a comparison on the top-\(K\) matched instances. Enhancement in VSGraph can filter out OOD to a certain degree, but can do nothing with lexical confusion (_e.g._, food in _Drumstick_ and players in _Tiger cat_). More qualitative results are in Sec. D.1.
Reference ProviderOur collective bootstrapping is compared against commonly used regularization methods which provide reference on targets. Surprisingly, most existing methods bring about no or even negative impact. In one respect, bootstrapping and label smoothing are already implicitly inherited in our framework as Eqs. (6) and (7). Therefore, no further gains can be seized. As for SCC, its estimated confidence may not comply with our Eq. (6), which leads to incompatible optimization. NCR has two drawbacks: 1) The chance of similar instances appear in one mini-batch is fairly small for large web datasets. 2) It only counts on self-prediction as reference source which is fragile to noise. In contrast, CAPro is enlightened by MoCo to maintain the dictionary as a queue, which enlarges the number of reference objects beyond instances in a mini-batch. Our enriched sources from both self-prediction and instance-prototype similarity expedite a steady learning progress. Moreover, mix-up improves CAPro in top1 but lowers top5. It adopts convex combinations for both inputs and targets, enforcing a stronger regularization than our CB where we only recombine targets. For WebVision1k, examples with noisy labels still resemble prototypes and therefore neighbor knowledge brings useful reference. Mix-up does not consider appearance similarity and causes over-regularization.
\(\lambda^{\text{bts}}\) and top-\(K\)Fig. 4 shows that an increasing \(\lambda^{\text{bts}}\) triggers off worse results on Google500 and ImageNet500. This suggests that collective knowledge should not overwhelm individual instance decision. With regard to top-\(K\) on prototype initialization, there exists a trade-off between domain-variety and class-purity. The rise of \(K\) increases diversity but also the risk of noise.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline Text & Text Enhan- & Reference & \multicolumn{3}{c}{Google500} & \multicolumn{3}{c}{ImageNet500} & \multicolumn{3}{c}{NUS-WIDE} \\ Encoding & cement & Provider & Top1 & Top5 & Top1 & Top5 & C-F1 & O-F1 & mAP \\ \hline \(\times\) & \(\times\) & \(\times\) & 71.5 & 87.7 & 66.5 & 84.6 & 37.2 & 42.4 & 46.2 \\ MiniLM & VSGraph [78] & \(\times\) & 72.0 & 88.0 & 66.9 & 85.4 & 39.2 & 44.4 & 46.8 \\ MiniLM & ✓ (ours) & \(\times\) & 75.5 & 91.0 & 71.5 & 88.8 & 39.3 & 44.9 & 47.4 \\ XLNet & VSGraph [78] & \(\times\) & 71.6 & 87.8 & 66.8 & 84.8 & 38.6 & 43.4 & 47.6 \\ XLNet & ✓ (ours) & \(\times\) & 75.4 & 91.0 & 71.5 & 88.8 & 39.3 & 45.1 & 47.5 \\ GPT-Neo & VSGraph [78] & \(\times\) & 72.0 & 88.0 & 67.2 & 85.3 & 39.2 & 45.0 & 47.4 \\ GPT-Neo & ✓ (ours) & \(\times\) & 75.7 & 91.1 & 71.6 & 88.8 & 39.2 & 45.1 & 47.6 \\ \hline MiniLM & ✓ (ours) & Mix-up (MU) [99] & 75.7 & 90.9 & 71.4 & 88.6 & 38.7 & 45.3 & 47.2 \\ MiniLM & ✓ (ours) & Bootstrap [33] & 75.5 & 90.8 & 71.3 & 88.4 & 38.1 & 43.2 & 46.0 \\ MiniLM & ✓ (ours) & Label smooth [100] & 75.4 & 90.8 & 71.2 & 88.4 & 36.9 & 42.1 & 46.8 \\ MiniLM & ✓ (ours) & SCC [22] & 73.8 & 89.9 & 70.2 & 88.0 & 35.6 & 41.3 & 45.0 \\ MiniLM & ✓ (ours) & NCR [58] & 75.5 & 91.1 & 71.5 & 88.8 & 37.6 & 43.4 & 46.8 \\ MiniLM & ✓ (ours) & ✓ CB (ours) & 76.0 & 91.3 & 72.0 & 89.2 & 39.3 & 45.4 & 48.0 \\ MiniLM & ✓ (ours) & ✓ CB (ours) + MU & 76.5 & 91.1 & 71.9 & 88.8 & 40.4 & **46.7** & 49.9 \\ GPT-Neo & ✓ (ours) & ✓ CB (ours) & 76.1 & **91.4** & **72.1** & **89.4** & 39.3 & 44.9 & 47.7 \\ GPT-Neo & ✓ (ours) & ✓ CB (ours) + MU & **76.5** & 91.2 & 72.0 & 88.8 & **40.7** & 45.2 & **50.0** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on text encoding, enhancement, and reference provider.
More ablation studies on the **threshold \(\gamma\)**, the **update frequency** of visual prototypes, and the **noise removal policy** can be found in Secs. D.2, D.3, and D.4, respectively. Empirical guidelines on **hyper-parameter tuning** are concluded in Sec. F. We also provide analysis of **failure cases** in Sec. G. Our computation cost with respect to performance gains can be found in Sec. H.
## 5 Conclusion
CAPro utilizes web datasets to learn visual representations that are aligned with correct semantics. Cross-modality alignment and collective bootstrapping are corroborated as two keys to improve WSL. The benefits of building prototypes are two-fold: 1) Noisy web data whose visual and textual descriptions can be efficiently removed by simply measuring the distance between an instance and its prototype in the embedding space. 2) The inter-class relationship can be statistically studied by comparing each instance to all class prototypes, which may shed light on visual similarity for species. Three potential drawbacks should be considered: 1) the limited intra-class diversity with less tolerance to the minority in one category. Images crawled from websites follow the long-tailed distribution, which means that the more common or typical one instance is, the greater the likelihood that it gets exposed online. Over-emphasis on the purity of class prototypes leads to false negatives on the recognition of atypical samples. One possible solution is to introduce randomness into the initialization and update of prototypes to improve generalization. 2) the noteworthy domain gap and bias of web data. Even image contents are correct, their styles (_e.g._, advertising photos, artworks, and rendering) are different from the realistic datasets. When it comes to modalities such as infrared or medical tomography images, there exist very few images online. Therefore, it is encouraged to prepare realistic images for guided-training and evaluation, where early-stopping and normalization techniques can be used to avoid overfitting. 3) the accuracy of prior knowledge about class hierarchy. Since definitions of class prototypes rely on the systematic understanding of concepts, improper, coarse-grained, or even wrong descriptions would devalue the semantic alignment. A thorough analysis on class concepts is a requisite to developing prototypes. Future work includes extension to other modalities (_e.g._, audio and video clips) and to other tasks (_e.g._, semi-supervised learning).
Broader ImpactCAPro manoeuvres language models for visual concept learning, where LLMs (_e.g._, GPT-Neo) can deliver promising results for future development. Regardless of data sources, we showcase a cost-efficient and practical way to utilize cross-modality data. It also promotes rethinking the key-value matching mechanism for creative usages. For example, the visual dictionary originally built for instance-wise contrastive learning is re-discovered for our collective bootstrapping.
Figure 4: Impact of hyper-parameters \(\lambda^{\mathrm{bits}}\) and top-\(K\) on CAPro.
Figure 3: Top-matched WebVision1k instances are chosen: (a) without text enhancement, (b) with text enhancement in VSGraph [78], and (c) with our text enhancement.
## Acknowledgements
The authors would like to express gratitude to the anonymous reviewers who greatly help improve the manuscript. In addition, the authors thank Hao Cheng from the University of California, Santa Cruz for the valuable comments.
## References
* [1] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 248-255. IEEE, 2009.
* [2] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 843-852, 2017.
* [3] Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. Webvision database: Visual learning and understanding from web data. _arXiv preprint arXiv:1708.02862_, 2017.
* [4] Zeren Sun, Yazhou Yao, Xiu-Shen Wei, Yongshun Zhang, Fumin Shen, Jianxin Wu, Jian Zhang, and Heng Tao Shen. Webly supervised fine-grained recognition: Benchmark datasets and an approach. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 10602-10611, 2021.
* [5] Jonathan Krause, Benjamin Sapp, Andrew Howard, Howard Zhou, Alexander Toshev, Tom Duerig, James Philbin, and Li Fei-Fei. The unreasonable effectiveness of noisy data for fine-grained recognition. In _European Conference on Computer Vision_, pages 301-320. Springer, 2016.
* [6] Parneet Kaur, Karan Sikka, and Ajay Divakaran. Combining weakly and webly supervised learning for classifying food images. _arXiv preprint arXiv:1712.08730_, 2017.
* [7] Chuanyi Zhang, Yazhou Yao, Huafeng Liu, Guo-Sen Xie, Xiangbo Shu, Tianfei Zhou, Zheng Zhang, Fumin Shen, and Zhenmin Tang. Web-supervised network with softly update-drop training for fine-grained visual classification. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 12781-12788, 2020.
* [8] Yi Tu, Li Niu, Junjie Chen, Dawei Cheng, and Liqing Zhang. Learning from web data with self-organizing memory module. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12846-12855, 2020.
* [9] Huafeng Liu, Haofeng Zhang, Jianfeng Lu, and Zhenmin Tang. Exploiting web images for fine-grained visual recognition via dynamic loss correction and global sample selection. _IEEE Transactions on Multimedia_, 24:1105-1115, 2021.
* [10] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. _Communications of the ACM_, 64(3):107-115, 2021.
* [11] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised pretraining. In _European Conference on Computer Vision_, pages 181-196, 2018.
* [12] Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Large scale learning of general visual representations for transfer. _arXiv preprint arXiv:1912.11370_, 2(8), 2019.
* [13] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In _International Conference on Learning Representations_, 2017. URL [https://openreview.net/forum?id=Sy8gdB9xx](https://openreview.net/forum?id=Sy8gdB9xx).
* [14] Xingjun Ma, Yisen Wang, Michael E Houle, Shuo Zhou, Sarah Erfani, Shutao Xia, Sudanthi Wijewickrema, and James Bailey. Dimensionality-driven learning with noisy labels. In _International Conference on Machine Learning_, pages 3355-3364. PMLR, 2018.
* [15] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. _IEEE Transactions on Neural Networks and Learning Systems_, 2022.
* [16] Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R Scott, and Dinglong Huang. Curriculumnet: Weakly supervised learning from large-scale web images. In _European Conference on Computer Vision_, pages 135-150, 2018.
* [17] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In _International Conference on Machine Learning_, pages 2304-2313. PMLR, 2018.
* [18] Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. Cleannet: Transfer learning for scalable image classifier training with label noise. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5447-5456, 2018.
* [19] Yulei Qin, Xingyu Chen, Chao Chen, Yunhang Shen, Bo Ren, Yun Gu, Jie Yang, and Chunhua Shen. Fopro: Few-shot guided robust webly-supervised prototypical learning. _arXiv preprint arXiv:2212.00465_, 2022.
* [20] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. _Advances in Neural Information Processing Systems_, 31, 2018.
* [21] Daiki Tanaka, Daiki Ikami, Toshihiko Yamasaki, and Kiyoharu Aizawa. Joint optimization framework for learning with noisy labels. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5552-5560, 2018.
* [22] Jingkang Yang, Litong Feng, Weirong Chen, Xiaopeng Yan, Huabin Zheng, Ping Luo, and Wayne Zhang. Webly supervised image classification with self-contained confidence. In _European Conference on Computer Vision_, pages 779-795. Springer, 2020.
* [23] Xiangzeng Zhou, Pan Pan, Yun Zheng, Yinghui Xu, and Rong Jin. Large scale long-tailed product recognition system at alibaba. In _Proceedings of ACM International Conference on Information & Knowledge Management_, pages 3353-3356, 2020.
* [24] Lele Cheng, Xiangzeng Zhou, Liming Zhao, Dangwei Li, Hong Shang, Yun Zheng, Pan Pan, and Yinghui Xu. Weakly supervised learning with side information for noisy labeled images. In _European Conference on Computer Vision_, pages 306-321. Springer, 2020.
* [25] Junnan Li, Richard Socher, and Steven CH Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. In _International Conference on Learning Representations_, 2019.
* [26] Sai Rajeswar, Pau Rodriguez, Soumye Singhal, David Vazquez, and Aaron Courville. Multi-label iterated learning for image classification with label ambiguity. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4783-4793, 2022.
* [27] Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi. Prototypical contrastive learning of unsupervised representations. In _International Conference on Learning Representations_, 2020.
* [28] Junnan Li, Caiming Xiong, and Steven Hoi. Mopro: Webly supervised learning with momentum prototypes. In _International Conference on Learning Representations_, 2020.
* [29] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. _Advances in Neural Information Processing Systems_, 32, 2019.
* [30] Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. _Advances in Neural Information Processing Systems_, 33:5776-5788, 2020.
* [31] Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021. URL [https://doi.org/10.5281/zenodo.5297715](https://doi.org/10.5281/zenodo.5297715). If you use this software, please cite it using these metadata.
* [32] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _International Conference on Machine Learning_, pages 12888-12900. PMLR, 2022.
* [33] Scott E Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. In _International Conference on Learning Representations (Workshop)_, 2015.
* [34] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_, 2(7), 2015.
* [35] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. _Journal of Machine Learning Research_, 7(11), 2006.
* [36] Hilde Kuehne, Ahsan Iqbal, Alexander Richard, and Juergen Gall. Mining youtube-a dataset for learning fine-grained action concepts from webly supervised video data. _arXiv preprint arXiv:1906.01012_, 2019.
* [37] Alessandro Bergamo and Lorenzo Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. _Advances in Neural Information Processing Systems_, 23, 2010.
* [38] Haodong Duan, Yue Zhao, Yuanjun Xiong, Wentao Liu, and Dahua Lin. Omni-sourced webly-supervised learning for video recognition. In _European Conference on Computer Vision_, pages 670-688. Springer, 2020.
* [39] Zhi-Fan Wu, Tong Wei, Jianwen Jiang, Chaojie Mao, Mingqian Tang, and Yu-Feng Li. Ngc: a unified framework for learning with open-world noisy data. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 62-71, 2021.
* [40] Yazhou Yao, Jian Zhang, Fumin Shen, Xiansheng Hua, Jingsong Xu, and Zhenmin Tang. Exploiting web images for dataset construction: A domain robust approach. _IEEE Transactions on Multimedia_, 19(8):1771-1784, 2017.
* [41] Yazhou Yao, Xiansheng Hua, Guanyu Gao, Zeren Sun, Zhibin Li, and Jian Zhang. Bridging the web data and fine-grained visual recognition via alleviating label noise and domain mismatch. In _Proceedings of the ACM International Conference on Multimedia_, pages 1735-1744, 2020.
* [42] Santosh K Divvala, Ali Farhadi, and Carlos Guestrin. Learning everything about anything: Webly-supervised visual concept learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3270-3277, 2014.
* [43] Yunhang Shen, Rongrong Ji, Zhiwei Chen, Xiaopeng Hong, Feng Zheng, Jianzhuang Liu, Mingliang Xu, and Qi Tian. Noise-aware fully webly supervised object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11326-11335, 2020.
* [44] Tong Shen, Guosheng Lin, Chunhua Shen, and Ian Reid. Bootstrapping the performance of webly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1363-1371, 2018.
* [45] Bin Jin, Maria V Ortiz Segovia, and Sabine Susstrunk. Webly supervised semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3626-3635, 2017.
* [46] Armand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. Learning visual features from large weakly supervised data. In _European Conference on Computer Vision_, pages 67-84. Springer, 2016.
* [47] Junnan Li, Yongkang Wong, Qi Zhao, and Mohan S Kankanhalli. Learning to learn from noisy labeled data. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5051-5059, 2019.
* [48] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. _Advances in Neural Information Processing Systems_, 30, 2017.
* [49] Jiangfan Han, Ping Luo, and Xiaogang Wang. Deep self-learning from noisy labels. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5138-5147, 2019.
* [50] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 9729-9738, 2020.
* [51] Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, and Shu-Tao Xia. Iterative learning with open-set noisy labels. In _Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition_, pages 8688-8696, 2018.
* [52] Jiabo Huang, Qi Dong, Shaogang Gong, and Xiatian Zhu. Unsupervised deep learning by neighbourhood discovery. In _International Conference on Machine Learning_, pages 2849-2858. PMLR, 2019.
* [53] Dara Bahri, Heinrich Jiang, and Maya Gupta. Deep k-nn for noisy labels. In _International Conference on Machine Learning_, pages 540-550. PMLR, 2020.
* [54] Pengxiang Wu, Songzhu Zheng, Mayank Goswami, Dimitris Metaxas, and Chao Chen. A topological filter for learning with label noise. _Advances in Neural Information Processing Systems_, 33:21382-21393, 2020.
* [55] Jichang Li, Guanbin Li, Feng Liu, and Yizhou Yu. Neighborhood collective estimation for noisy label identification and correction. In _European Conference on Computer Vision_, pages 128-145. Springer, 2022.
* [56] Diego Ortego, Eric Arazo, Paul Albert, Noel E O'Connor, and Kevin McGuinness. Multi-objective interpolation training for robustness to label noise. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 6606-6615, 2021.
* [57] Junnan Li, Caiming Xiong, and Steven CH Hoi. Learning from noisy data with robust representation learning. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 9485-9494, 2021.
* [58] Ahmet Iscen, Jack Valmadre, Anurag Arnab, and Cordelia Schmid. Learning with neighbor consistency for noisy labels. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4672-4681, 2022.
* [59] Raul Gomez, Lluis Gomez, Jaume Gibert, and Dimosthenis Karatzas. Learning to learn from web data through deep semantic embeddings. In _European Conference on Computer Vision Workshops_, pages 514-529, 2018.
* [60] Peng Hu, Xi Peng, Hongyuan Zhu, Liangli Zhen, and Jie Lin. Learning cross-modal retrieval with noisy labels. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5403-5413, 2021.
* [61] Peng Hu, Zhenyu Huang, Dezhong Peng, Xu Wang, and Xi Peng. Cross-modal retrieval with partially mismatched pairs. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2023.
* [62] Yang Qin, Dezhong Peng, Xi Peng, Xu Wang, and Peng Hu. Deep evidential learning with noisy correspondence for cross-modal retrieval. In _Proceedings of the 30th ACM International Conference on Multimedia_, pages 4948-4956, 2022.
* [63] Huaiwen Zhang, Yang Yang, Fan Qi, Shengsheng Qian, and Changsheng Xu. Robust video-text retrieval via noisy pair calibration. _IEEE Transactions on Multimedia_, 2023.
* [64] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3128-3137, 2015.
* [65] Zhenyu Huang, Guocheng Niu, Xiao Liu, Wenbiao Ding, Xinyan Xiao, Hua Wu, and Xi Peng. Learning with noisy correspondence for cross-modal matching. _Advances in Neural Information Processing Systems_, 34:29406-29419, 2021.
* [66] Haochen Han, Kaiyao Miao, Qinghua Zheng, and Minnan Luo. Noisy correspondence learning with meta similarity correction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 7517-7526, 2023.
* [67] Yijie Lin, Mouxing Yang, Jun Yu, Peng Hu, Changqing Zhang, and Xi Peng. Graph matching with bi-level noisy correspondence. _arXiv preprint arXiv:2212.04085_, 2022.
* [68] Zheng Li, Caili Guo, Zerun Feng, Jenq-Neng Hwang, and Zhongtian Du. Integrating language guidance into image-text matching for correcting false negatives. _IEEE Transactions on Multimedia_, 2023.
* [69] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. _International Journal of Computer Vision_, 123:32-73, 2017.
* [70] Shafin Rahman, Salman Khan, and Nick Barnes. Improved visual-semantic alignment for zero-shot object detection. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 11932-11939, 2020.
* [71] Rui Gao, Xingsong Hou, Jie Qin, Yuming Shen, Yang Long, Li Liu, Zhao Zhang, and Ling Shao. Visual-semantic aligned bidirectional network for zero-shot learning. _IEEE Transactions on Multimedia_, 2022.
* [72] Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, and Wei-Ying Ma. Unified visual-semantic embeddings: Bridging vision and language with structured meaning representations. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 6609-6618, 2019.
* [73] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In _European Conference on Computer Vision_, pages 121-137. Springer, 2020.
* [74] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In _European Conference on Computer Vision_, pages 104-120. Springer, 2020.
* [75] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In _International Conference on Machine Learning_, pages 4904-4916. PMLR, 2021.
* [76] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_, pages 8748-8763. PMLR, 2021.
* [77] Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. _arXiv preprint arXiv:2205.01917_, 2022.
* [78] Jingkang Yang, Weirong Chen, Litong Feng, Xiaopeng Yan, Huabin Zheng, and Wayne Zhang. Webly supervised image classification with metadata: Automatic noisy label correction via visual-semantic graph. In _Proceedings of ACM International Conference on Multimedia_, pages 83-91, 2020.
* [79] Shuo Yang, Zhaopan Xu, Kai Wang, Yang You, Hongxun Yao, Tongliang Liu, and Min Xu. Bicro: Noisy correspondence rectification for multi-modality data via bi-directional cross-modal similarity consistency. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 19883-19892, 2023.
* [80] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In _International Conference on Machine Learning_, pages 1096-1103, 2008.
* [81] Zhun Zhong, Liang Zheng, Donglin Cao, and Shaozi Li. Re-ranking person re-identification with k-reciprocal encoding. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1318-1327, 2017.
* [82] Song Bai and Xiang Bai. Sparse contextual activation for efficient visual re-ranking. _IEEE Transactions on Image Processing_, 25(3):1056-1069, 2016.
* [83] Mang Ye, Chao Liang, Yi Yu, Zheng Wang, Qingming Leng, Chunxia Xiao, Jun Chen, and Ruimin Hu. Person reidentification via ranking aggregation of similarity pulling and dissimilarity pushing. _IEEE Transactions on Multimedia_, 18(12):2553-2566, 2016.
* [84] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In _International Conference on Learning Representations_, 2016.
* [85] George A Miller. _WordNet: An electronic lexical database_. MIT press, 1998.
* [86] Tingting Wei, Yonghe Lu, Huiyou Chang, Qiang Zhou, and Xianyu Bao. A semantic approach for text clustering using wordnet and lexical chains. _Expert Systems with Applications_, 42(4):2264-2275, 2015.
* [87] Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In _International Conference on Machine Learning_, pages 233-242. PMLR, 2017.
* [88] Tat-Seng Chua, Jinhui Tang, Richang Hong, Haojie Li, Zhiping Luo, and Yantao Zheng. NUS-WIDE: a real-world web image database from national university of singapore. In _Proceedings of the ACM International Conference on Image and Video Retrieval_, pages 1-9, 2009.
* [89] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 770-778, 2016.
* [90] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 31, 2017.
* [91] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1-9, 2015.
* [92] Manan Shah, Krishnamurthy Viswanathan, Chun-Ta Lu, Ariel Fuxman, Zhen Li, Aleksei Timofeev, Chao Jia, and Chen Sun. Inferring context from pixels for multimodal image classification. In _Proceedings of the ACM International Conference on Information and Knowledge Management_, pages 189-198, 2019.
* [93] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2818-2826, 2016.
* [94] Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. Bag of tricks for image classification with convolutional neural networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 558-567, 2019.
* [95] Thibaut Durand, Nazanin Mehrasa, and Greg Mori. Learning a deep convnet for multi-label classification with partial labels. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 647-657, 2019.
* [96] Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. Cnn-rnn: A unified framework for multi-label image classification. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2285-2294, 2016.
* [97] Pramuditha Perera, Ramesh Nallapati, and Bing Xiang. Ocgan: One-class novelty detection using gans with constrained latent representations. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2898-2906, 2019.
* [98] Ryota Yoshihashi, Wen Shao, Rei Kawakami, Shaodi You, Makoto Iida, and Takeshi Naemura. Classification-reconstruction learning for open-set recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4016-4025, 2019.
* [99] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empirical risk minimization. In _International Conference on Learning Representations_, 2018.
* [100] Rafael Muller, Simon Kornblith, and Geoffrey E Hinton. When does label smoothing help? _Advances in Neural Information Processing Systems_, 32, 2019.
* [101] Jorge Ramon Fonseca Cacho, Ben Cisneros, and Kazem Taghva. Building a wikipedia n-gram corpus. In _Proceedings of SAI Intelligent Systems Conference_, pages 277-294. Springer, 2021.
Datasets Details
### WebVision1k
It contains 2.4M web images collected from Google and Flickr, which share the same 1k category names with ImageNet1k [1]. For each example, we use all available description, title, and tag in its metadata for raw text preparation. Besides, we follow [78, 22] to use the subset of WebVision-**Google500** for ablation studies in consideration of lower GPU resource and time consumption without losing generalization. It contains 0.48M images from Google with randomly chosen 500 categories. The testing set of ImageNet1k and its subset ImageNet500 are involved as well for evaluation.
### NUS-WIDE (Web)
It contains 0.26M web images from Flickr with 5k unique user tags. Each example is manually annotated with multiple labels within 81 concepts that are filtered out of the 5k tags. It also provides weak labels (an official web version) by checking if each of the 81 category name appears in user tags of every example. Almost 50% of the web labels are wrong and 50% of the true labels are missing in tags [88]. Since tags contain phrases without delimiters, we split phrases based on unigram frequencies [101] for raw text preparation. We follow [78] to train models with weakly-labeled training set (_a.k.a._, NUS-WIDE Web) and validate them on the clean testing set.
## Appendix B Mathmatical Notations
In this section, we present the description of all math notations in the manuscript (see Table 5).
## Appendix C Implementation and Training Details
### General Settings
In consideration of performance and efficiency, we adopt ResNet-50 [89] and MiniLM-L6 [30] as image and text encoders by default. The training settings are listed in Table 6. In general, we refer to [28] for: batch size is 256; optimizer is SGD; learning rate linearly rises with 10 warm-up epochs and decays by cosine schedule. Specifically, although the benefit of an increased batch size (_e.g._, 1024) has been validated [78, 58], we follow the standard batch size setting of 256 on WebVision1k for two reasons: 1) comparability with most of the previous methods; 2) limited computing resources with 8 GPUs (a mini-batch size of 32 on each GPU for a batch size of 256 in total).
We empirically set \(\lambda^{prj}=1\), \(\lambda^{pro}=1\), \(\lambda^{ins}=1\), \(\lambda^{bls}=0.1\), \(m_{p}=0.999\), \(d_{p}=128\), \(\tau=0.1\), \(\alpha=0.5\), and \(K=50\) by default. Their optimal values require meticulous fine-tuning of each hyper-parameter, which is beyond consideration of the present study.
Data augmentation (random cropping, rescaling, and horizontal flipping) is applied on the inputs to the query encoder while stronger augmentation, including color jittering and blurring [50]), is added on those to the key encoder.
Experiments are conducted on a CentOS 7 workstation with an Intel 8255C CPU, 377 GB Mem, and 8 NVIDIA V100 GPUs. The training of CAPro on WebVision1k, Google500, and NUS-WIDE respectively costs about ten days, three days, and one day under the environment mentioned above.
### WebVision1k-only Implementation
We refer to [28] for \(Q=8192\) and \(\gamma=0.6\). The learning rate is 0.1 and the number of epochs is 120. Given the complexity of graph construction, we use \(k=5\) for both text denoising and matching.
### NUS-WIDE (Web)-only Implementation
In view of the dataset scale, we set \(Q=2048\) with a learning rate of \(2\times 10^{-3}\) and a total of 100 epochs. \(k=10\) is chosen since user tags are noisier and sparser in NUS-WIDE.
\begin{table}
\begin{tabular}{l l} \hline \hline Symbol & Description \\ \hline \(\mathbf{x}_{i}\) & a web image indexed with \(i\) \\ \(\mathbf{t}_{i}\) & textual metadata associated with \(\mathbf{x}_{i}\) \\ \(y_{i}\) & web label associated with \(\mathbf{x}_{i}\) \\ \(y_{i}^{*}\) & ground-truth label associated with \(\mathbf{x}_{i}\) \\ \(N\) & the total number of images in a web dataset \\ \(C\) & the total number of categories \\ \(D\) & a web dataset \\ \(\theta_{e}\) & parameters of image encoder \\ \(\theta_{c}\) & parameters of classifier \\ \(\mathcal{F}(\theta_{e};\theta_{c})\) & a deep model with image encoder and classifier \\ \(d_{v}\) & dimension of the visual features \\ \(\mathbf{v}_{i}\) & visual features of \(\mathbf{x}_{i}\) extracted by image encoder, \(\mathbf{v}_{i}\in\mathrm{I\!R}^{d_{v}}\) \\ \(d_{t}\) & dimension of the textual embeddings \\ \(\mathbf{s}_{i}\) & textual embeddings of \(\mathbf{t}_{i}\) extracted by text encoder, \(\mathbf{s}_{i}\in\mathrm{I\!R}^{d_{t}}\) \\ \(\mathbf{t}^{c}\) & category definition of the \(c\)-th class \\ \(\mathbf{s}^{c}\) & textual embeddings of \(\mathbf{t}^{c}\) extracted by text encoder, \(\mathbf{s}^{c}\in\mathrm{I\!R}^{d_{t}}\) \\ \(\mathbf{p}_{i}\) & predictions on \(\mathbf{v}_{i}\) from classifier, \(\mathbf{p}_{i}\in\mathrm{I\!R}^{C}\), \(\mathbf{p}_{i(k)}\) denotes its \(k\)-th element \\ \(d_{p}\) & dimension of the visual embeddings \\ \(\mathbf{z}_{i}\) & low-dimensional embeddings of \(\mathbf{v}_{i}\) after projection, \(\mathbf{z}_{i}\in\mathrm{I\!R}^{d_{p}}\) \\ \(\tilde{\mathbf{v}}_{i}\) & reconstructed visual features from \(\mathbf{z}_{i},\tilde{\mathbf{v}}_{i}\in\mathrm{I\!R}^{d_{v}}\) \\ \(\mathbf{q}_{i}\) & predictions on \(\mathbf{z}_{i}\) from auxiliary classifier, \(\mathbf{q}_{i}\in\mathrm{I\!R}^{C}\) \\ \(Q\) & the size of dictionary, namely the length of queue \\ \(k\) & number of nearest neighbors (NN) in \(k\)-NN and \(k\)-reciprocal-NN \\ \(\mathcal{V}\) & vertices, nodes \\ \(\mathcal{E}\) & edges \\ \(\mathcal{G}\) & graph, \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) \\ \(A\) & adjacency matrix \\ \(\mathcal{N}(\mathbf{x}_{i},k)\) & \(k\)-NN sets of \(\mathbf{x}_{i}\) \\ \(\mathcal{R}(\mathbf{x}_{i},k)\) & \(k\)-reciprocal-NN sets of \(\mathbf{x}_{i}\) \\ \(d(\mathbf{v}_{i},\mathbf{v}_{j})\) & distance between \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\) \\ \(d^{*}(\mathbf{v}_{i},\mathbf{v}_{j})\) & refined distance between \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\) \\ \(V_{\mathbf{v}_{i},\mathbf{v}_{j}}\) & \(k\)-reciprocal feature encoding the distance between \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\) \\ \(\mathbf{S}\) & concatenated textual embeddings, \(\mathbf{S}=(\mathbf{s}_{1},\mathbf{s}_{2},...,\mathbf{s}_{N}),\mathbf{S}\in \mathrm{I\!R}^{N\times d_{t}}\) \\ \(I_{N}\) & identity matrix \\ \(\tilde{A}\) & adjacency matrix \(A\) with self-connection \(I_{N}\) \\ \(\tilde{D}_{ii}\) & degree matrix of \(\tilde{A}\) \\ \(\hat{\mathbf{S}}\) & denoised \(\mathbf{S}\) after smoothing \\ \(\hat{\mathbf{s}}_{i}\) & denoised \(\mathbf{s}_{i}\) after smoothing \\ \(K\) & top-\(K\) selected examples with visual-semantic alignment \\ \(\sigma_{K}^{c}\) & the \(K\)-th smallest distance \(d^{*}(\hat{\mathbf{s}}_{i},\mathbf{s}^{c})\) in the \(c\)-th class \\ \(D_{K}^{c}\) & top-\(K\) set of the \(c\)-th class \\ \(D_{K}\) & sets of top-\(K\) examples from all classes \\ \(\hat{\mathbf{z}}^{c}\) & unnormalized visual prototype of the \(c\)-th class \\ \(\mathbf{z}^{c}\) & normalized visual prototype of the \(c\)-th class \\ \(\tau\) & temperature coefficient \\ \(\alpha\) & weight between self-prediction and prototype-instance similarity \\ \(\mathbf{o}_{i}\) & fused, comprehensive output for label correction, \(\mathbf{o}_{i}\in\mathrm{I\!R}^{C}\) \\ \(\mathbf{r}_{i}\) & prototype-instance similarity, \(\mathbf{r}_{i}\in\mathrm{I\!R}^{C}\), \(\mathbf{r}_{i(k)}\) denotes its \(k\)-th element \\ \(\gamma\) & threshold for label correction \\ \(m_{p}\) & momentum coefficient \\ \(\mathbf{b}_{i}\) & collective bootstrapping target on \(\mathbf{z}_{i}\) \\ \(w_{ij}\) & weight of contribution from \(\mathbf{z}^{\prime}_{j}\) in the dictionary for \(\mathbf{b}_{i}\) \\ \(\lambda^{bts}\) & weight for collective bootstrapping loss \\ \(\lambda^{prj}\) & weight for projection and reconstruction losses \\ \(\lambda^{pro}\) & weight for prototypical contrastive loss \\ \(\lambda^{ins}\) & weight for instance contrastive loss \\ \hline \hline \end{tabular}
\end{table}
Table 5: List of symbols.
To support multi-label learning, it is necessary, but not yet enough, to simply replace the softmax-based cross-entropy losses with sigmoid-based binary cross-entropy losses. The premise of prototypical contrastive learning does not hold true anymore because one instance can be simultaneously engaged in formation of multiple clusters, which violates the exclusivity inherited in softmax activation. Our experiments demonstrate that, only by projecting \(\mathbf{v}_{i}\) into compact subspaces specific to each class, can we properly learn the decision boundary to continue prototypical learning. Technically, we set \(C\) additional fully-connected (FC) layers after the projector to respectively map \(\mathbf{z}_{i}\) into \(\tilde{\mathbf{z}}_{i,c}\in\mathds{R}^{d_{p}},c=1,2,...,C\). For the \(c\)-th class, both positive and negative prototypes \(\tilde{\mathbf{z}}^{c+},\tilde{\mathbf{z}}^{c-}\) are shaped accordingly for the contrast against \(\tilde{\mathbf{z}}_{i,c}\). Such operation can be viewed as magnifying class-indicative contents via recombination of the shared embedding \(\mathbf{z}_{i}\). Another minor modification is required on noise removal, where the output \(\mathbf{o}_{i,c}\) is fused independently for the \(c\)-th class for binary separation. Considering the overwhelming negative instances, we set \(\gamma=0.9\) to avoid deviation by majorities in label rectification. Discussion on the hyper-parameter \(\gamma\) can be found in Sec. D.2.
### Training Steps
CAPro adopts a three-step training pipeline. It employs a pre-training step to learn common visual patterns from the original web dataset \(D\). Then, the training starts with instance-prototype and instance-wise contrastive learning, collective bootstrapping, and on-line noise removal. Finally, given the trained model, off-line noise removal is performed on \(D\) for data cleaning and we fine-tune the classifier alone with the cleaned dataset.
\begin{table}
\begin{tabular}{l r r} \hline Settings & WebVision1k/Google500 & NUS-WIDE \\ \hline Optimizer & SGD & \\ Optimizer momentum & 0.9 & \\ Optimizer weight decay & \(1\times 10^{-4}\) & \\ Batch size & 256 & \\ \hline Step1 pre-training scheduler & Cosine decay with linear warm-up & 10 \\ Step1 pre-training warm-up epochs & 5/10 & \(2\times 10^{-3}\) \\ Step1 pre-training learning rate & \(1\times 10^{-1}\) & \(2\times 10^{-3}\) \\ Step1 pre-training epochs with encoders frozen & 0 & \\ Step1 pre-training epochs in total & 120 & 100 \\ \hline Step2 training scheduler & Cosine decay with linear warm-up & \\ Step2 training warm-up epochs & 5/10 & 10 \\ Step2 training learning rate & \(1\times 10^{-1}\) & \(2\times 10^{-3}\) \\ Step2 training epochs with encoders frozen & 5/10 & 10 \\ Step2 training epochs in total & 60/120 & 100 \\ \hline Step3 fine-tuning scheduler & Cosine decay & \\ Step3 fine-tuning warm-up epochs & 0 & \\ Step3 fine-tuning learning rate & \(1\times 10^{-4}\) & \(2\times 10^{-5}\) \\ Step3 fine-tuning epochs with encoders frozen & 15/20 & 20 \\ Step3 fine-tuning epochs in total & 15/20 & 20 \\ \hline Image encoder (by default) & ResNet-50 [89] & \\ Text encoder (by default) & MiniLM-L6 [30] & \\ \hline \(\lambda^{prj}\) & 1 & \\ \(\lambda^{pro}\) & 1 & \\ \(\lambda^{ins}\) & 1 & \\ \(\lambda^{bts}\) & 0.1 & \\ \(m_{p}\) & 0.999 & \\ \(d_{p}\) & 128 & \\ \(\tau\) & 0.1 & \\ \(\alpha\) & 0.5 & \\ top-\(K\) & 50 & \\ Q & 8192 & 2048 \\ \(\gamma\) & 0.6 & 0.9 \\ \(k\)-NN/\(k\)-reciprocal-NN & 5 & 10 \\ \hline \end{tabular}
\end{table}
Table 6: List of hyper-parameters for training settings.
Step1 pre-trainingWe perform visual pre-training on ResNet-50 with cross-entropy and projection-reconstruction losses. At this time, we only use original web labels to train the model to learn common visual description, which lays foundation for the subsequent prototype initialization in step2. Note that the **pre-trained** model is also the **vanilla** method in our experiments.
Step2 trainingAll components are initialized with the pre-trained parameters in step1. As shown in Table 6, we keep the encoders frozen with warm-up epochs. This helps stabilize training to avoid the prototypical embeddings, which are initialized at the beginning by averaging top-\(K\) semantically-correct examples, being perturbed drastically. In this step, we re-train the model with all losses. Apart from classification, we perform instance-wise and instance-prototype contrastive learning, collective bootstrapping, noise removal, and prototype update.
Step3 fine-tuningWe follow MoPro [28] to perform noise removal on the training dataset \(D\). The trained model in step2 is used to correct labels or discard OOD examples with our control flow. Then, we keep encoders frozen and fine-tune the classifier alone with the cleaned set for better performance. Note that such **fine-tuned** model is exactly the **CAPro** in experiments.
```
Data: Web images and their associated texts and labels \(D=\{(\mathbf{x}_{i},\mathbf{t}_{i},y_{i})\}_{i=1}^{N}\).
1Step1 pre-training
2for\((\mathbf{x}_{i},y_{i})\in D\)do
3\(\mathcal{L}_{i}=\mathcal{L}_{i}^{\mathrm{cls}}+\mathcal{L}_{i}^{\mathrm{pri}}\);
4 Update image encoder, classifier, projector, and reconstructor to minimize \(\mathcal{L}_{i}\);
5
6 end for
7Step2 training
8for\((\mathbf{x}_{i},\mathbf{t}_{i},y_{i})\in D\)do
9 Extract \(\mathbf{v}_{i}\) from \(\mathbf{x}_{i}\) via the image encoder;
10 Extract \(\mathbf{s}_{i}\) from \(\mathbf{t}_{i}\) via the text encoder;
11
12 end for
13 Build \(k\)-reciprocal-NN graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\) with \(\{\mathbf{v}_{i}\}_{i=1}^{N}\);
14 Enhance text embeddings from \(\mathbf{s}_{i}\) to \(\hat{\mathbf{s}_{i}}\) via graph convolution on \(\mathcal{G}\);
15for\(c\in\{1,2,...,C\}\)do
16 Extract \(\mathbf{s}^{c}\) from \(\mathbf{t}^{c}\) via the text encoder;
17for\(i\in\{1,2,...,N|y_{i}=c\}\)do
18 Match textual instances \(\mathbf{s}_{i}\) to prototypes \(\mathbf{s}^{c}\) to obtain visual anchors \(D_{K}^{c}\);
19
20 end for
21Initialize visual prototypes with \(D_{K}\);
22for\((\mathbf{x}_{i},y_{i})\in D\)do
23\(\mathcal{L}_{i}=(1-\lambda^{\mathrm{bts}})\mathcal{L}_{i}^{\mathrm{cls}}+\lambda ^{\mathrm{bts}}\mathcal{L}_{i}^{\mathrm{bts}}+\lambda^{\mathrm{pj}}\mathcal{L} _{i}^{\mathrm{prj}}+\lambda^{\mathrm{pro}}\mathcal{L}_{i}^{\mathrm{pro}}+ \lambda^{\mathrm{ins}}\mathcal{L}_{i}^{\mathrm{ins}}\);
24 Update image encoder, classifier, and projector to minimize \(\mathcal{L}_{i}\);
25
26 end for
27Step3 fine-tuning for\((\mathbf{x}_{i},\hat{y}_{i})\in D\)do
28\(\mathcal{L}_{i}=\mathcal{L}_{i}^{\mathrm{cls}}\);
29 Update classifier to minimize \(\mathcal{L}_{i}\);
30
31 end for
```
**Algorithm 1**CAPro's training procedure.
We also prepare Algo. 1 to explicitly explain the entire training process.
Figure 5: Top-matched WebVision1k instances are chosen: (a) without text enhancement, (b) with text enhancement in VSGraph [78], and (c) with our text enhancement.
Figure 6: Top-matched NUS-WIDE (Web) instances are chosen: (a) without text enhancement, (b) with text enhancement in VSGraph [78], and (c) with our text enhancement.
Ablation Study
### Effect of Text Enhancement
Figs. 5 and 6 present additional qualitative comparison for selecting instances with potentially-correct semantics. For WebVision, noiser categories are chosen to validate the effectiveness of text enhancement by smoothing and reranking. We can observe that due to the problem of polysemy, a majority of the retrieved images are irrelevant to the correct semantics and simple \(k\)-NN-based smoothing in [78] can hardly handle such situation. In contrast, our text enhancement helps pinpoint, not perfect but comparatively reliable, web instances that share similar semantics (_e.g._, metalwork in _Nail_). Besides, we also sample three categories from the noiser NUS-WIDE to double-check the effectiveness of text enhancement. For example, in the category of airport, direct matching of user tag embeddings to the textual prototype returns a few close-up images of warcrafts, which has nothing to do with _Airport_. On the contrary, our text enhancement helps to select the truly matched instances.
### Effect of \(\gamma\) on Noise Removal
Table 7 reports the influence of \(\gamma\) when collective bootstrapping is not applied. We follow MoPro [28] to validate two \(\gamma\) candidates: \(\gamma=0.6\) and \(\gamma=0.8\). We find that \(\gamma=0.6\) works the best on Google500. As \(\gamma\) increases, it becomes more difficult for the model to correct labels and thereafter label-flipping errors may still exist. As suggested by MoPro, the optimum \(\gamma\) is related to the percentage of noise in web datasets. For noiser datasets, \(\gamma\) should be decreased for the model to correct wrong labels at an early stage. Otherwise, overfitting might occur and weaken prototypical representation learning. The fine-tuning of \(\gamma\) requires elaborate experiments, which is beyond the scope of the present study.
For multi-label learning on NUS-WIDE, the optimum \(\gamma\) is not only related to the noise level but also to the ratio of the number of positive examples to that of negative examples. Since the negative instances in each class exceeds positive ones by more than one order of magnitude, decreasing \(\gamma\) will easily make the model to classify any instance as negative. Once the model overfits the overwhelming negative examples, valid positive supervision sources would only come from the top-\(K\) matched examples with cross-modality alignment, which degrades generalizability greatly. In this case, we should keep a stricter threshold \(\gamma=0.9\) to only allow confident label rectification.
### Effect of Prototype Update Frequency
By default, we update prototypes by examples in a mini-batch for every epoch. We additionally perform ablation study on the update frequency where comparison is conducted between: 1) 0-epoch, 2) 1-epoch (by default), 3) 5-epoch, and 4) 10-epoch. For 0-epoch update, we **do not** update prototypes with embeddings from other high-quality web examples. Instead, prototypes are **renewed**
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline \multirow{2}{*}{\(\gamma\)} & Reference & \multicolumn{2}{c}{Google500} & \multicolumn{2}{c}{ImageNet500} & \multicolumn{3}{c}{NUS-WIDE} \\ & Provider & Top1 & Top5 & Top1 & Top5 & C-F1 & O-F1 & mAP \\ \hline
0.6 & \(\times\) & 72.0 & 88.0 & 66.9 & 85.4 & 8.3 & 9.1 & 6.9 \\
0.8 & \(\times\) & 71.2 & 87.7 & 65.9 & 84.8 & – & – & – \\
0.9 & \(\times\) & – & – & – & – & 39.2 & 44.4 & 46.8 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Effect of \(\gamma\) on CAPro without collective bootstrapping.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \# Epochs & \multicolumn{2}{c}{Google500} & \multicolumn{2}{c}{ImageNet500} & \multicolumn{3}{c}{NUS-WIDE} \\ per update & Top1 & Top5 & Top1 & Top5 & C-F1 & O-F1 & mAP \\ \hline
0 & 75.5 & 91.1 & 71.6 & 88.8 & 39.2 & 44.4 & 47.2 \\
1 (by default) & 76.0 & 91.3 & 72.0 & 89.2 & 39.3 & 45.4 & 48.0 \\
5 & 75.9 & 91.2 & 71.8 & 89.2 & 39.6 & 45.0 & 47.6 \\
10 & 76.0 & 91.2 & 71.7 & 89.1 & 39.3 & 45.8 & 48.2 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Effect of prototype update frequency on CAPro. By default, we update visual prototypes every epoch using high-quality examples in each mini-batch. For 0-epoch per update, we do not introduce additional high-quality web examples to polish prototypes, but only update them with the top-\(K\) matched semantically-correct examples with their latest visual embeddings.
with the latest embeddings only from the top-\(K\) matched examples to avoid being out-of-date. For 5-epoch and 10-epoch update, we polish prototypes every 5 and 10 epochs, respectively.
Table 8 reports the effect of prototype update frequency on CAPro. On the Google500 and NUS-WIDE datasets, if prototypes are only formed by limited clean examples, their generalization across domain becomes poor and thus causes performance drop of 0.45% (top1) and 0.6% on average, respectively. For Google500, reduced update frequency generally causes lower performance, meaning that the prototypes should keep refreshed for large-scale datasets. For NUS-WIDE, if the update frequency decreases a bit (5-epoch), the model improves on C-F1 but underperforms a little on O-F1 and mAP. With the 10-epoch frequency, we surprisingly find that results are improved on all evaluation metrics. One possible explanation is that the delayed prototype update can help stabilize training at an early stage, but the optimal frequency might be subject to dataset scale and noise level.
### Effect of Noise Removal Policy
Table 9 reports the effect of noise removal policy on CAPro. Our label adjustment policy is inspired from the Eq. (5) of MoPro [28] but differs in that we keep labels of the top-\(K\) matched examples selected in cross-modality alignment unchanged. Therefore, if we replace our noise removal policy with the MoPro one, we actually allow the deep model to get rid of the guidance from top-\(K\) examples. These selected top-\(K\) examples can be altered or even discarded by the MoPro-style policy.
It turns out that without such enforcement, the performance dropped under both single-label and multi-label scenarios. The possible reason behind is that, due to the overwhelming noise (_e.g._, the semantic-misalignment noise) in certain categories, the model itself cannot keep representation learning robust to noise even with a good start. The labels of top-\(K\) samples will be prone to the noisy majority, which invalidates prototypical learning. Besides, the superiority of CAPro over MoPro-style update also substantiates the purity and correctness of the selected examples.
## Appendix E Analysis on Open-Set Recognition
We provide detailed analysis on CAPro for open-set recognition. Fig. 7 presents the per-class F1 (C-F1), per-class Precision (C-P), and per-class Recall (C-R). We compare five methods for ablation study including: 1) the vanilla method, 2) CAPro without text enhancement (TE) & collective bootstrapping (CB), 3) CAPro without CB, 4) CAPro, and 5) CAPro with mix-up (MU).
We vary the confidence threshold from 0 to 1 with an interval of 0.01 to comprehensively measure the performance of CAPro. For each example, if the highest model prediction confidence is below the threshold, the example will be classified as the open-set category. Otherwise, the example will be classified into one of the known categories. We train methods on the Google500 training set and validate them on WebVision1k and ImageNet1k testing sets. Examples from the remaining 500 novel categories should all fall into the open-set category.
It can be observed that compared with vanilla method, CAPro enjoys a much higher recall but a relatively lower precision. It means that our CAPro is more confident about its prediction and a threshold around 0.6 would end up with an optimal C-F1 over 66%. The precision-recall curve reflects that each key component does improve open-set recognition. Note that due to the limited sampling of confidence threshold, the precision-recall curve is not spanning across the entire axes. However, the tendency of curves confirms the effectiveness of our components.
\begin{table}
\begin{tabular}{l c c c c c c} \hline Noise Removal & Google500 & ImageNet500 & \multicolumn{3}{c}{NUS-WIDE} \\ policy & Top1 & Top5 & Top1 & Top5 & C-F1 & O-F1 & mAP \\ \hline MoPro [28] & 75.8 & 91.1 & 71.7 & 89.0 & 38.8 & 42.2 & 47.2 \\ CAPro (ours) & 76.0 & 91.3 & 72.0 & 89.2 & 39.3 & 45.4 & 48.0 \\ \hline \end{tabular}
\end{table}
Table 9: Effect of noise removal policy on CAPro. We compare with MoPro to show the effectiveness of keeping labels of top-\(K\) matched semantically-correct examples unchanged.
Figure 7: Effect of threshold for open-set recognition on (a) WebVision1k and (b) ImageNet1k. TE, CB, and MU respectively refer to our text enhancement, collective bootstrapping, and mix-up. Examples with prediction confidence lower than the threshold will be classified as open-set category.
## Appendix F Guidelines on Tuning of Hyper-Parameters
Loss weights of \(\lambda^{\text{bts}}\), \(\lambda^{\text{pri}}\), \(\lambda^{\text{pro}}\), and \(\lambda^{\text{ins}}\)First, for the total objective, we follow MoPro [28] to use \(\lambda^{\text{pro}}=1\) and \(\lambda^{\text{ins}}=1\). Out of simplicity, we also use \(\lambda^{\text{pri}}=1\) as default.
Second, we would like to explain the effect of \(\lambda^{\text{pro}}\), \(\lambda^{\text{ins}}\), and \(\lambda^{\text{pri}}\) on regularzation. A larger \(\lambda^{\text{pro}}\) may pull instances too close totheir prototypes, which "shrinks" class clusters in the embedding space. A larger \(\lambda^{\text{ins}}\) will enforce stronger visual discriminability between two instances. It may cause two examples from the same category to differ greatly and thereafter downgrades the visual prototype update and class cluster regularization. A larger \(\lambda^{\text{pri}}\) improves the reconstruction quality of \(\hat{\mathbf{v}}_{i}\), which encourages \(\mathbf{z}_{i}\) to retain more information of \(\mathbf{v}_{i}\) in the embedding space. The projection-reconstruction loss is only involved in the pre-training stage (see Algo. 1), and therefore \(\lambda^{\text{pri}}\) will not affect the prototypical and instance-wise contrastive learning in the following stage.
Third, for one's custom web datasets, we suggest that \(\lambda^{\text{pro}}\), \(\lambda^{\text{ins}}\), and \(\lambda^{\text{pri}}\) should be tuned according to the performance results under three settings: 1)\(\lambda^{\text{pro}}=0\) vs. \(\lambda^{\text{pro}}=1\); 2)\(\lambda^{\text{ins}}=0\) vs. \(\lambda^{\text{ins}}=1\); 3)\(\lambda^{\text{pri}}=0\) vs. \(\lambda^{\text{pri}}=1\).
According to our experiments on both single-label and multi-label datasets, the default settings of \(\lambda^{\text{pro}}=1\), \(\lambda^{\text{ins}}=1\), and \(\lambda^{\text{pri}}=1\) should work well on most cases.
For \(\lambda^{\text{bts}}\), we suggest 0.1 would achieve a balance between the individual and collective label references. A much larger value may cause over smoothing and over-regularization on visual learning.
Threshold\(\gamma\)For \(\gamma\) on single-label datasets, its value is related to the percentage of noise in datasets. For WebVision1k and Google500 (34% noise [3]), \(\gamma=0.6\) works better than \(\gamma=0.8\). For one's own web dataset, if the noise ratio is larger, \(\gamma\) should be tuned lower so that wrong labels could be corrected at an earlier stage before overfitting. For \(\gamma\) on multi-label datasets, its value is related to both the percentage of noise and the number ratio of positive-negative samples. For NUS-WIDE (50% noise [78] and 0.02 avg. ratio of positive-negative examples), \(\gamma=0.9\) works better than \(\gamma=0.6\). For one's own web dataset, if the noise ratio is smaller and the positive over negative ratio is smaller, \(\gamma\) should be tuned higher so that hard positive samples will not be easily discarded to avoid underfitting.
Prototype Update FrequencyFor the update frequency, its value is related to the dataset scale and noise level. For WebVision1k and Google500, visual prototypes should be updated per epoch to improve their diversity, which better handles the domain gap between web and realistic datasets. For NUS-WIDE, the update frequency could be reduced to stabilize training, where the prototypes can be prone to the overwhelming negative examples in each category.
Top-\(K\)For top-\(K\), its value is related to the percentage of noise. If the noise ratio is less than 30%, \(K\) should be set higher than 50 to include more diverse examples.
OthersThe current settings of other hyper-parameters (see Table 6) work well. For one's own dataset, we believe these values can be set as starting points and finetuned accordingly. Among all hyper-parameters, our ablation results show that for \(\lambda^{\text{bts}}\), \(\gamma\), and top-\(K\), their values do affect performance and should be set following the rules mentioned above (such as the dataset scale, the noise ratio, and the positive-negative ratio). For the remaining hyper-parameters such as the prototype update frequency, we do not observe significant fluctuation. In other words, the model is robust to these hyper-parameters.
## Appendix G Failure Cases
We provide failure cases of our CaPro on the WebVision1k dataset (see Fig. 8). Our observations are summarized as the following.
First, CAPro can handle fine-grained categories on WebVision1k. The introduction of atypicals increases the risk of noise. For generalization on anomalies or rarities, one solution is to choose both top-K and randomly sampled instances.
Second, for WebVision1k, both MoPro and CAPro underperform the vanilla baseline (optimized only by the cross-entropy loss) on a total of 387 and 312 classes, respectively. Top5 failures of classesinclude screen, sunGlasses, bellCote, ballPlayer, and popBottle. For ImageNet1k, MoPro and CAPro underperform the vanilla on a total of 450 and 358 classes, respectively. Top-5 failures of classes include silkyTerrier, walkerHound, academicGown, standardSchnauzer, and bellCote.
We also provide interesting findings below:
First, the domain gap exists between web and realistic datasets as the top 5 failure cases on the WebVision1k and ImageNet1k testing sets are quite different.
Second, the vanilla method tends to overfit the training set so that it outperforms on highly similar concepts such as screen vs. monitor and sunGlasses vs. sunGlass.
Third, mistakes on silky vs. yorkshire Terrier and walker vs. englishFox hound are ascribed to over-regularization. The inter-class relationship might be used for class-wise adjustment.
## Appendix H Computational Complexity
Second, we present the cost of the text enhancement of our CAPro with respect to its performance gains. Here, \(N\) denotes the number of all nodes in the visual graph. \(k\) is the number of neighbors per node and \(d_{v}\) is the dimension of the feature \(\mathbf{v}\). With \(k=5\) and \(k=10\) respectively for WebVision1k and NUS-WIDE, our improvement over VSGraph is worthy at the expense of such a low cost.
Third, we present the cost of the reference provider of our CAPro with respect to its performance gains. Here, \(m\) is the batch size, \(d_{v}\) is the dimension of \(\mathbf{v}\), \(Q\) is the size of dictionary, \(d_{p}\) is the
\begin{table}
\begin{tabular}{l l l} \hline Encoders & Number of Parameters & GFLOPs \\ \hline R50 [89] & 25M & 3.8 \\ MiniLM [30] & 22M & 4.7 \\ XLNet [29] & 110M & 29 \\ GPT-Neo [31] & 1.3B & 3400 \\ \hline \end{tabular}
\end{table}
Table 10: The parameters and GFLOPs of different encoders.
\begin{table}
\begin{tabular}{l l l l l} \hline Text Encoding & Text Enhancement & Cost & Google500 & ImageNet500 \\ & Enhancement & Top1 & Top1 \\ \hline \multirow{2}{*}{MiniLM [30]} & VSGraph [78] & \(O(N^{2}d_{v})\)+\(O({Nd_{v}}^{2}+Nkd_{v})\) & 72.0 & 66.9 \\ & Ours & \(+O(3Nkd_{v})\)+\(O(4k)\)+\(O(4klog(4k))\) & +3.5 & +4.6 \\ \hline \multirow{2}{*}{XLNet [29]} & VSGraph [78] & \(O(N^{2}d_{v})\)+\(O({Nd_{v}}^{2}+Nkd_{v})\) & 71.6 & 66.8 \\ & Ours & \(+O(3Nkd_{v})\)+\(O(4k)\)+\(O(4klog(4k))\) & +3.8 & +4.7 \\ \hline \multirow{2}{*}{GPT-Neo [31]} & VSGraph [78] & \(O(N^{2}d_{v})\)+\(O({Nd_{v}}^{2}+Nkd_{v})\) & 72.0 & 67.2 \\ & Ours & \(+O(3Nkd_{v})\)+\(O(4k)\)+\(O(4klog(4k))\) & +3.7 & +4.4 \\ \hline \end{tabular}
\end{table}
Table 11: The cost of the text enhancement of our CAPro with respect to its performance gains.
Figure 8: Failure cases of our CAPro on certain classes where the simple vanilla baseline achieves better performance.
dimension of \(z\), and \(C\) is the number of classes. The common reference provider techniques such as the Mix-up, Bootstrapping, label smoothing, and SCC do not incur significant overhead. Operations of our CB are fast to compute for moderate \(m=256\), \(Q=8192\), \(d_{p}=128\), and \(C=1000\) since PyTorch supports efficient matrix multiplication on GPUs. Besides, compared with NCR, our \(d_{p}=128\) is 16x smaller than \(d_{v}\)=2048 in NCR, and our \(m=256\) is 4x smaller than \(m=1024\) in NCR. For WebVision1k, our cost is 1.35x smaller than NCR. For NUS-WIDE, our cost is 20.37x smaller than NCR. It is reasonable to conclude that our CB is more efficient and effective than NCR.
Finally, it is noted that the text encoding and text enhancement methods are performed off-line and executed only once. They do not participate in network optimization. Besides, the pretrained text encoders are only used for inference under 1 V100 GPU. Therefore, the additional cost is acceptable in return for semantically-correct web images.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Text Encoding & Reference & Cost & Google500 & ImageNet500 \\ & Provider & Cost & Top1 & Top1 \\ \hline \multirow{2}{*}{MiniLM [30]} & Mix-up [99] & \(-\) & 75.7 & 71.4 \\ & NCR [58] & \(O(m^{2}(d_{v}+C))\) & -0.2 & +0.1 \\ & Our CB & \(O(mQ(d_{p}+C))\) & +0.3 & +0.6 \\ \hline \multirow{2}{*}{MiniLM [30]} & Bootstrap [33] & \(-\) & 75.5 & 71.3 \\ & NCR [58] & \(O(m^{2}(d_{v}+C))\) & +0 & +0.2 \\ & Our CB & \(O(mQ(d_{p}+C))\) & +0.5 & +0.7 \\ \hline \multirow{2}{*}{MiniLM [30]} & LabelSmooth [100] & \(-\) & 75.4 & 71.2 \\ & NCR [58] & \(O(m^{2}(d_{v}+C))\) & +0.1 & +0.3 \\ & Our CB & \(O(mQ(d_{p}+C))\) & +0.6 & +0.8 \\ \hline \multirow{2}{*}{MiniLM [30]} & SCC [22] & \(-\) & 73.8 & 70.2 \\ & NCR [58] & \(O(m^{2}(d_{v}+C))\) & +1.7 & +1.3 \\ & Our CB & \(O(mQ(d_{p}+C))\) & +2.2 & +1.8 \\ \hline \hline \end{tabular}
\end{table}
Table 12: The cost of the reference provider of our CAPro with respect to its performance gains. | ## Review
### Summary
The paper introduces a unified prototypical contrastive learning framework called Cross-modality Aligned Prototypes (CAPro) aimed at addressing label noise in webly-supervised learning, particularly focusing on semantic noise. CAPro leverages web data across modalities to create semantically-correct textual and visual prototypes. It employs innovative strategies like text matching and collective bootstrapping to enhance label references and improve the learning process. The experimental results on benchmarks such as WebVision1K and NUS-WIDE (Web) demonstrate the effectiveness of CAPro, achieving state-of-the-art performance in open-set recognition. Overall, the approach offers a novel perspective on utilizing alt-text and visual data to enhance visual representation learning.
### Strengths
- The proposed CAPro framework effectively addresses semantic noise in webly-supervised learning, which has been largely overlooked in existing literature.
- Extensive experiments validate the effectiveness of CAPro, showcasing its performance on established datasets.
- The code is made available for reproducibility, allowing others to build upon this work.
- The qualitative examples included are compelling and support the findings.
- The motivation and idea of bridging visual and textual prototypes is novel and interesting.
### Weaknesses
- The framework's complexity makes it difficult to interpret, particularly the framework figure which lacks clear module divisions and coherent directional arrows.
- Comparative performance metrics among different methods can be misleading due to varying results from the same backbone.
- The paper lacks thorough ablation studies to illustrate the impact of its components versus the increased computational complexity.
- There are multiple hyper-parameters involved in the training process that are not adequately discussed.
- Some performance results presented may be inaccurate or misleading based on comparisons with prior works.
### Questions
- How does CAPro handle visual dimorphism in fine-grained classes, and does it recognize anomalies within concepts?
- What limitations arise when building prototypical definitions, especially regarding instances that deviate from these definitions?
- Could the authors clarify the distinctions between the proposed semantic noise and existing concepts of noisy correspondence?
- How are hyper-parameters chosen during experiments, and what considerations ensure their robustness?
### Soundness
**Score:** 3
**Description:** 3 = good. The proposed method is technically sound, with a solid foundation in the literature, although some performance claims require further verification.
### Presentation
**Score:** 2
**Description:** 2 = fair. The paper's presentation suffers from complexity and clarity issues, especially in figures and some unclear statements.
### Contribution
**Score:** 3
**Description:** 3 = good. The contribution is significant as it addresses a key problem in webly-supervised learning with practical implications.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and demonstrates moderate-to-high impact potential, but it lacks clarity in some areas.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an innovative and effective approach to addressing semantic noise in webly-supervised learning, backed by extensive experiments and solid results. Despite some weaknesses in presentation and clarity, the overall contributions are significant, making it suitable for acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Beyond Normal: On the Evaluation of Mutual Information Estimators
Pawel Czyz\({}^{*}\)\({}^{1,2}\) Frederic Grabowski\({}^{*}\)\({}^{3}\)
Julia E. Vogt\({}^{4,5}\) Niko Beerenwinkel\({}^{\dagger}\)\({}^{1,5}\) Alexander Marx\({}^{\dagger}\)\({}^{2,4}\)
\({}^{1}\)Department of Biosystems Science and Engineering, ETH Zurich \({}^{2}\)ETH AI Center, ETH Zurich
\({}^{3}\)Institute of Fundamental Technological Research, Polish Academy of Sciences
\({}^{4}\)Department of Computer Science, ETH Zurich \({}^{5}\)SIB Swiss Institute of Bioinformatics
Equal contribution \({}^{\dagger}\)Joint supervision
###### Abstract
Mutual information is a general statistical dependency measure which has found applications in representation learning, causality, domain generalization and computational biology. However, mutual information estimators are typically evaluated on simple families of probability distributions, namely multivariate normal distribution and selected distributions with one-dimensional random variables. In this paper, we show how to construct a diverse family of distributions with known ground-truth mutual information and propose a language-independent benchmarking platform for mutual information estimators. We discuss the general applicability and limitations of classical and neural estimators in settings involving high dimensions, sparse interactions, long-tailed distributions, and high mutual information. Finally, we provide guidelines for practitioners on how to select appropriate estimator adapted to the difficulty of problem considered and issues one needs to consider when applying an estimator to a new data set.
## 1 Introduction
Estimating the strength of a non-linear dependence between two continuous random variables (r.v.) lies at the core of machine learning. Mutual information (MI) lends itself naturally to this task, due to its desirable properties, such as invariance to homeomorphisms and the data processing inequality. It finds applications in domain generalization (Li et al., 2022; Ragonesi et al., 2021), representation learning (Belghazi et al., 2018; Oord et al., 2018), causality (Solo, 2008; Kurutach et al., 2018), physics (Keys et al., 2015), systems biology (Selimkhanov et al., 2014; Grabowski et al., 2019; Uda, 2020), and epidemiology (Young et al., 2023).
Over the last decades, the estimation of MI has been extensively studied, and estimators have been developed, ranging from classical approaches based on histogram density (Pizer et al., 1987), kernel density estimation (Moon et al., 1995) to \(k\)-nearest neighbor (Kozachenko and Leonenko, 1987; Kraskov et al., 2004) and neural estimators (Belghazi et al., 2018; Oord et al., 2018; Song and Ermon, 2020). However, despite the progress that has been achieved in this area, not much attention has been focused on systematically benchmarking these approaches.
Typically, new estimators are evaluated assuming multivariate normal distributions for which MI is analytically tractable (Darbellay and Vajda, 1999; Kraskov et al., 2004; Suzuki, 2016). Sometimes, simple transformations are applied to the data (Khan et al., 2007; Gao et al., 2015), moderately high-dimensional settings are considered (Lord et al., 2018; Lu and Peltonen, 2020), or strong dependencies are evaluated (Gao et al., 2015). Beyond that, Song and Ermon (2020) study theself-consistency (additivity under independence and the data processing inequality) of neural MI estimators in the context of representation learning from image data.
**Our Contributions** In this work we show a method of developing expressive distributions with known ground-truth mutual information (Sec. 2), propose forty benchmark tasks and systematically study the properties of commonly used estimators, including representatives based on kernel or histogram density estimation, \(k\)NN estimation, and neural network-based estimators (Sec. 3). We address selected difficulties one can encounter when estimating mutual information (Sec. 4), such as sparsity of interactions, long-tailed distributions, invariance, and high mutual information. Finally, we provide recommendations for practitioners on how to choose a suitable estimator for particular problems (Sec. 6). Our benchmark is designed so that it is simple to add new tasks and estimators. It also allows for cross-language comparisons -- in our work we compare estimators implemented in Python, R and Julia. All of our experiments are fully reproducible by running Snakemake workflows [16]. Accompanying code is available at [http://github.com/cbg-ethz/bmi](http://github.com/cbg-ethz/bmi).
Overall, our _key findings_ from the benchmark can be summarized as follows:
* Testing on multivariate normal distributions gives a biased and overly optimistic view of estimator performance. In this setting, canonical correlation analysis (CCA), a model-based approach, emerges as the most effective method -- even if the model is slightly misspecified.
* Compared to classical estimators, neural estimators excel in high-dimensional settings, capturing sparse interactions across multiple dimensions.
* The popular KSG estimator [14] is accurate in low- and moderate-dimensional settings, but its performance suffers on problems involving high-dimensions or sparse interactions.
* Although MI is invariant to a wide range of transformations (Theorem 2.1), numerical estimates, even with large sample sizes, are not. Thus, MI may not be suitable when metric invariance to specific transformation families is required.
* Multivariate Student distributions pose an important and hard challenge to many mutual information estimators. This effect is partially, but not fully, attributed to their long tails.
## 2 Mutual Information: Estimation and Invariance
We start by recalling the definition of MI, then discuss the problem of estimating it, and introduce the estimators used in the benchmark.
Consider two r.v. \(X\) and \(Y\) with domains \(\mathcal{X}\) respectively \(\mathcal{Y}\), joint probability measure \(P_{XY}\) and marginal probability measures \(P_{X}\) and \(P_{Y}\), respectively. If \(P_{XY}\) is absolutely continuous with respect to \(P_{X}\otimes P_{Y}\) (e.g., \(X\) and \(Y\) are finite r.v.), MI is equal to the following Kullback-Leibler divergence2:
Footnote 2: We use the natural logarithm, meaning that the mutual information is measured in _nats_.
\[\mathbf{I}(X;Y)=\mathbf{D}_{\mathrm{KL}}\left(P_{XY}\parallel P_{X}\otimes P_{ Y}\right)=\int\log f\,\mathrm{d}P_{XY},\]
where \(f=\mathrm{d}P_{XY}/\mathrm{d}(P_{X}\otimes P_{Y})\) is the Radon-Nikodym derivative. If the absolute continuity does not hold, then \(\mathbf{I}(X;Y)=+\infty\)[17, Theorem 2.1.2]. If \(\mathcal{X}\) and \(\mathcal{Y}\) are Euclidean spaces and measures \(P_{XY}\), \(P_{X}\) and \(P_{Y}\) have probability density functions (PDFs) with respect to the Lebesgue measure, then the Kullback-Leibler divergence can be written in terms of the PDFs.
Figure 1: Visualisations of selected proposed distributions. Two correlated Gaussian variables \(X\) and \(Y\) (1) can be transformed via the Gaussian CDF into correlated uniform variables (2), into a long-tailed distribution via the “half-cube” mapping \(t\mapsto t\sqrt{|t|}\) (3), into a multi-modal distribution (4) or embedded in the three-dimensional space via the composition of the Swiss roll mapping and Gaussian CDF (5). Color on the plot corresponds to the original \(Y\) variable.
However, we will later consider distributions which do not have PDFs with respect to the Lebesgue measure and the general Radon-Nikodym derivative must be used (see "Swiss roll" in Fig. 1).
In almost all applications, \(P_{XY}\) and \(P_{X}\otimes P_{Y}\) are not known and one needs to estimate MI from a finite sample from \(P_{XY}\), i.e., a realization \(\big{(}(x_{1},y_{1}),\ldots,(x_{N},y_{N})\big{)}\in(\mathcal{X}\times\mathcal{Y })^{N}\) of \(N\) i.i.d. r.v. \((X_{1},Y_{1}),\ldots,(X_{N},Y_{N})\) distributed according to the joint distribution \(P_{XY}\). As a MI estimator we will denote a family of measurable mappings, indexed by the sample size \(N\) and spaces \(\mathcal{X}\) and \(\mathcal{Y}\) (in most applications, Euclidean spaces of various dimensionalities): \(e_{N}^{\mathcal{X}\mathcal{Y}}\colon(\mathcal{X}\times\mathcal{Y})^{N}\to \mathbb{R}\). Composing this mapping with the r.v. representing the whole data set we obtain a real-valued r.v. \(E_{N}^{XY}\). For a given \(P_{XY}\) we can consider the distribution of \(E_{N}^{XY}\). It is often summarized by its first two moments, resulting in the bias and the variance of the estimator. Understanding the bias of an estimator however requires the knowledge of ground-truth MI, so the estimators are typically tested on the families of jointly multivariate normal or uniform distributions, with varying \(N\) and ground-truth \(\mathbf{I}(X;Y)\)(Khan et al., 2007; Lord et al., 2018; Holmes and Nemenman, 2019; Song and Ermon, 2020).
Constructing Expressive Distributions and BenchmarkingBesides considering different types of distributions such as (multivariate) normal, uniform and Student, our benchmark is based on the invariance of MI to the family of chosen mappings. Generally MI is not invariant to arbitrary transformations, as due to the data processing inequality \(\mathbf{I}(f(X);g(Y))\leq\mathbf{I}(X;Y)\). However, under some circumstances MI is preserved, which enables us to create more expressive distributions by transforming the variables.
**Theorem 2.1**.: _Let \(\mathcal{X}\), \(\mathcal{X}^{\prime}\), \(\mathcal{Y}\) and \(\mathcal{Y}^{\prime}\) be standard Borel spaces (e.g., smooth manifolds with their Borel \(\sigma\)-algebras) and \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) be continuous injective mappings. Then, for every \(\mathcal{X}\)-valued r.v. \(X\) and \(\mathcal{Y}\)-valued r.v. \(Y\) it holds that \(\mathbf{I}(X;Y)=\mathbf{I}(f(X);g(Y))\)._
This theorem has been studied before by Kraskov et al. (2004, Appendix], who consider diffeomorphisms and by Polyanskiy and Wu (2022, Th. 3.7), who assume measurable injective functions with measurable left inverses. For completeness we provide a proof which covers continuous injective mappings in Appendix A. In particular, each of \(f\) and \(g\) can be a homeomorphism, a diffeomorphism, or a topological embedding. Embeddings are allowed to increase the dimensionality of the space, so even if \(X\) and \(Y\) have probability density functions, \(f(X)\) and \(g(Y)\) do not need to have them. Neither of these deformations change MI. Thus, using Theorem 2.1, we can create more expressive distributions \(P_{f(X)g(Y)}\) by sampling from \(P_{XY}\) and transforming the samples to obtain a data set \(\big{(}(f(x_{1}),g(y_{1})),\ldots,(f(x_{N}),g(y_{N})\big{)}\in(\mathcal{X}^{ \prime}\times\mathcal{Y}^{\prime})^{N}\).
While offering a tool to construct expressive distributions, another valuable perspective to consider with this problem is estimation invariance: although MI is invariant to proposed changes, the estimate may not be. More precisely, applying an estimator to the transformed data set results in the induced r.v. \(E_{N}^{f(X)g(Y)}\) and its distribution. If the estimator were truly invariant, one would expect that \(\mathbb{E}\left[E_{N}^{f(X)g(Y)}\right]\) should equal \(\mathbb{E}\left[E_{N}^{XY}\right]\). This invariance is rarely questioned and often implicitly assumed as given (see e.g. Tschannen et al. (2020) or Murphy (2023, Sec. 32.2.2.3)), but as we will see in Sec. 4.3, finite-sample estimates are not generally invariant.
## 3 Proposed Benchmark
In this section, we outline forty tasks included in the benchmark that cover a wide family of distributions, including varying tail behaviour, sparsity of interactions, multiple modes in PDF, and transformations that break colinearity. As our base distributions, we selected multivariate normal and Student distributions, which were transformed with continuous injective mappings.
We decided to focus on the following phenomena:
1. **Dimensionality.** High-dimensional data are collected in machine learning and natural sciences (Buhlmann and van de Geer, 2011, Ch. 1). We therefore change the dimensions of the \(\mathcal{X}\) and \(\mathcal{Y}\) spaces between \(1\) and \(25\).
2. **Sparsity.** While the data might be high-dimensional, the effective dimension may be much smaller, due to correlations between different dimensions or sparsity (Lucas et al., 2006). We therefore include distributions in which some of the dimensions represent random noise,which does not contribute to the mutual information, and perform an additional study in Section 4.1.
3. **Varying MI.** Estimating high MI is known to be difficult [14]. However, we can often bound it in advance -- if there are 4000 image classes, MI between image class and representation is at most \(\log 4000\approx 8.3\) nats. In this section, we focus on problems with MI varying up to 2 nats. We additionally consider distributions with higher MI in Section 4.4.
4. **Long tails.** As Taleb [2020] and Zhang et al. [2021] argue, many real-world distributions have long tails. To model different tails we consider multivariate Student distributions as well as transformations lengthening the tails. We conduct an additional study in Section 4.2.
5. **Robustness to diffeomorphisms.** As stated in Theorem 2.1, mutual information is invariant to reparametrizations of random variables by diffeomorphisms. We however argue that when only finite samples are available, invariance in mutual information estimates may not be achieved. To test this hypothesis we include distributions obtained by using a diverse set of diffeomorphisms and continuous injective mappings. We further study the robustness to reparametrizations in Section 4.3.
While we provide a concise description here, precise experimental details can be found in Appendix D and we visualise selected distributions in Appendix F. We first describe tasks employing one-dimensional variables.
**Bivariate Normal** For multivariate normal variables MI depends only on the correlation matrix. We will therefore consider a centered bivariate normal distribution \(P_{XY}\) with \(\operatorname{Cor}(X,Y)=\rho\) and \(\mathbf{I}(X;Y)=-0.5\log\left(1-\rho^{2}\right)\). We chose \(\rho=0.75\).
**Uniform Margins** As a first transformation, we apply the Gaussian CDF \(F\) to Gaussian r.v. \(X\) and \(Y\) to obtain r.v. \(X^{\prime}=F(X)\) and \(Y^{\prime}=F(Y)\). It is a standard result in copula theory (see Lemma D.2 or e.g., Nelsen [2007] for an introduction to copulas) that \(F(X)\sim\operatorname{Uniform}(0,1)\) resp. \(F(Y)\sim\operatorname{Uniform}(0,1)\). The joint distribution \(P_{X^{\prime}Y^{\prime}}\), however, is not uniform. Mutual information is preserved, \(\mathbf{I}(X^{\prime};Y^{\prime})\)=\(\mathbf{I}(X;Y)\). For an illustration, see Fig. 1. A distribution P transformed with Gaussian CDF \(F\) will be denoted as Normal CDF @ P.
**Half-Cube Map** To lengthen the tails we applied the half-cube homeomorphism \(h(x)\)=\(|x|^{3/2}\operatorname{sign}x\) to Gaussian variables \(X\) and \(Y\). We visualize an example in Fig. 1 and denote a transformed distribution as Half-cube @ P.
**Asinh Mapping** To shorten the tails we applied inverse hyperbolic sine function \(\operatorname{asinh}x=\log\left(x+\sqrt{1+x^{2}}\right)\). A distribution transformed with this mapping will be denoted as Asinh @ P.
**Wiggly Mapping** To model non-uniform lengthscales, we applied a mapping
\[w(x)=x+\sum_{i}a_{i}\sin(\omega_{i}x+\varphi_{i}),\qquad\sum_{i}|a_{i}\omega_ {i}|<1.\]
Due to the inequality constraint this mapping is injective and preserves MI. The parameter values can be found in Appendix D. We denote the transformed distribution as Wiggly @ P.
**Bimodal Variables** Applying the inverse CDF of a two-component Gaussian mixture model to correlated variables \(X\) and \(Y\) with uniform margins we obtained a joint probability distribution \(P_{X^{\prime}Y^{\prime}}\) with four modes (presented in Fig. 1). We call this distribution Bimodal and provide technical details in Appendix D.
**Additive Noise** Consider independent r.v. \(X\sim\operatorname{Uniform}(0,1)\) and \(N\sim\operatorname{Uniform}(-\varepsilon,\varepsilon)\), where \(\varepsilon\) is the noise level. For \(Y=X+N\), it is possible to derive \(\mathbf{I}(X;Y)\) analytically (see Appendix D for the formula and the parameter values) and we will call this distribution Uniform (additive noise=c).
**Swiss Roll Embedding** As the last example in this section, we consider a mixed case in which a one-dimensional r.v. \(X\sim\operatorname{Uniform}(0,1)\) is smoothly embedded into two dimensions via the Swiss roll mapping, a popular embedding used to test dimensionality reduction algorithms (see Fig. 1 for visualisation and Appendix D for the formula). Note that the obtained distribution does not have a PDF with respect to the Lebesgue measure on \(\mathbb{R}^{3}\).
Next, we describe the tasks based on multivariate distributions.
Multivariate NormalWe sampled \((X,Y)=(X_{1},\ldots,X_{m},Y_{1},\ldots,Y_{n})\) from the multivariate normal distribution. We model two correlation structures: "dense" interactions with all off-diagonal correlations set to 0.5 and "2-pair" interactions where \(\operatorname{Cor}(X_{1},Y_{1})=\operatorname{Cor}(X_{2},Y_{2})=0.8\) and there is no correlation between any other (distinct) variables (Multimormal (2-pair)). For a latent-variable interpretation of these covariance matrices we refer to Appendix D.2.
Multivariate StudentTo see how well the estimators can capture MI contained in the tails of the distribution, we decided to use multivariate Student distributions (see Appendix D) with dispersion matrix3 set to the identity \(I_{m+n}\) and \(\nu\) degrees of freedom. Contrary to the multivariate normal case, in which the identity matrix used as the covariance would lead to zero MI, the variables \(X\) and \(Y\) can still interact through the tails.
Footnote 3: The dispersion matrix of multivariate Student distribution is different from its covariance, which does not exist for \(\nu\leq 2\).
Transformed Multivariate DistributionsAs the last set of benchmark tasks we decided to transform the multivariate normal and multivariate Student distributions. We apply mappings described above to each axis separately. For example, applying normal CDF to the multivariate normal distribution will map it into a distribution over the cube \((0,1)^{m+n}\) with uniform marginals. To mix different axes we used a spiral diffeomorphism (denoted as Spiral @ P); we defer the exact construction to Appendix D but we visualise this transformation in Fig. 5.
EstimatorsIn our benchmark, we include a diverse set of estimators. Following recent interest, we include four neural estimators, i.e., Donsker-Varadhan (D-V) and MINE (Belghazi et al., 2018), InfoNCE (Oord et al., 2018), and the NWJ estimator (Nguyen et al., 2007; Nowozin et al., 2016; Poole et al., 2019). As a representative for model-based estimators, we implement an estimator based on canonical correlation analysis (CCA) (Murphy, 2023, Ch. 28), which assumes that the joint distribution is multivariate normal. For more classical estimators, we include the Kraskov, Stogbauer and Grassberger (KSG) estimator (Kraskov et al., 2004) as the most well-known \(k\)-nearest neighbour-based estimator, a recently proposed kernel-based estimator (LNN) (Gao et al., 2017), as well as an estimator based on histograms and transfer entropy (both implemented in Julia's TransferEntropy library). A detailed overview of the different classes of estimators is provided in Appendix C. Further, we describe the the hyperparameter selection in Appendix E.
PreprocessingAs a data preprocessing step, we standardize each dimension using the mean and standard deviation calculated from the sample. Other preprocessing strategies are also possible, but we did not observe a consistent improvement of one preprocessing strategy over the others (see Appendix E).
### Benchmark Results
We show the results for \(N=10\,000\) data points in Fig. 2. Neural estimators and KSG perform better than alternative estimators on most problems, with KSG having often low sample requirements (see Appendix E). The simple model-based estimator CCA obtained excellent performance at low sample sizes (see Appendix E), even for slightly transformed multivariate normal distributions. Finally, LNN and the two Julia estimators (histogram and transfer entropy), work well in low dimension but are not viable in medium- and high-dimensional problems (\(\dim X+\dim Y\geq 4\)).
Overall, we observed that for most \(1{\times}1\)-dimensional and multivariate normal problems MI can be reliably estimated. Difficulties arise for sparse interactions, atypical (Student) distributions, and for significantly transformed distributions. This suggests that the typical evaluations of MI estimators are overly optimistic, focusing on relatively simple problems.
KSG is able to accurately estimate MI in multivariate normal problems, however performance drops severely on tasks with sparse interactions (2-pair tasks). For \(5{\times}5\) dimensions the estimate is about 70% of the true MI, and for \(25{\times}25\) dimensions it drops to 10%, which is consistent with previous observations (Marx and Fischer, 2022). Meanwhile, performance of neural estimators is stable. We study the effect of sparsity in more detail in Sec. 4.
Student distributions have proven challenging to all estimators. This is partially due to the fact that long tails can limit estimation quality (see Sec. 4), and indeed, applying a tail-shortening \(\operatorname{asinh}\) transformation improves performance. However, performance still remains far below multivariate normal distributions with similar dimensionality and information content, showing that long tails are only part of the difficulty. For neural estimators, the issue could be explained by the fact that the critic is not able to fully learn the pointwise mutual information (see Appendix E).
Interestingly, the performance of CCA, which assumes a linear Gaussian model, is often favorable compared to other approaches and has very low sample complexity (see Appendix E). This is surprising, since in the case of transformed distributions the model assumptions are not met. The approach fails for Student distributions (for which the correlation matrix either does not exist or is the identity matrix), the Swiss roll embedding and multivariate normal distributions transformed using the spiral diffeomorphism. Since the spiral diffeomorphism proved to be the most challenging transformation, we study it more carefully in Sec. 4.
## 4 Selected Challenges
In this section we investigate distributions which proved to be particularly challenging for the tested estimators in our benchmark: sparse interactions, long tailed distributions and invariance to diffeomorphisms. Additionally we investigate how the estimators perform for high ground truth MI.
### Sparsity of Interactions
In the main benchmark (Fig. 2) we observed that estimation of MI with "2-pair" type of interactions is considerably harder for the KSG estimator than the "dense" interactions. On the other hand, neural estimators were able to pick the relevant interacting dimensions and provide better estimates of MI in these sparse cases.
We decided to interpolate between the "dense" and "2-pair" interaction structures using a two-stage procedure with real parameters \(\alpha\) and \(\lambda\) and the number of strongly interacting components \(K\). While the precise description is in Appendix D.2, we can interpret \(\alpha\) as controlling the baseline strength of interaction between every pair of variables in \(\{X_{1},\ldots,X_{10},Y_{1},\ldots Y_{10}\}\) and \(\lambda\) as the additional strength of interaction between pairs of variables \((X_{1},Y_{1}),\ldots,(X_{K},Y_{K})\). Dense interaction structure has \(\lambda=0\) and 2-pair interaction structure has \(\alpha=0\) and \(K=2\). First, we set \(K=10\) and \(\lambda\approx 0\) and decrease \(\alpha\) raising \(\lambda\) at the same time to maintain constant MI of 1 nat. When \(\alpha=0\), whole information is contained in the pairwise interactions \((X_{1},Y_{1}),\ldots,(X_{10},Y_{10})\). We then decrease \(K\) and increase \(\lambda\), still maintaining constant MI.
In Fig. 3 we observe that the performance of all estimators considered (apart from CCA, a model-based approach suitable for multivariate normal distributions) degrades when the interactions between
Figure 2: Mean MI estimates of nine estimators over \(n=10\) samples with \(N=10\,000\) points each against the ground-truth value on all benchmark tasks grouped by category. Color indicates relative negative bias (blue) and positive bias (red). Blank entries indicate that an estimator experienced numerical instabilities.
every pair of variables become sparser (lower \(\alpha\)). In particular, even neural estimators are not able to model the information in ten interacting pairs of variables. However, when we decrease the number of interacting pairs of variables, the performance of the neighborhood-based KSG estimator is qualitatively different from the neural estimators: performance of KSG steadily deteriorates, while neural estimators can find (some of the) relevant pairs of variables.
This motivates us to conclude that in the settings where considered variables are high-dimensional and the interaction structure is sparse, neural estimators can offer an advantage over neighborhood-based approaches.
### Long-Tailed Distributions
In Fig. 4 we investigate two different ways to lengthen the tails. First, we lengthen the tails of a multivariate normal distribution using the mapping \(x\mapsto|x|^{k}\operatorname{sign}x\) (with \(k\geq 1\)) applied to each dimension separately. In this case, we see that the performance of CCA, KSG, and neural estimators is near-perfect for \(k\) close to \(1\) (when the distribution \(P_{XY}\) is close to multivariate normal), but the performance degrades as the exponent increases. Second, we study multivariate Student distributions varying the degrees of freedom: the larger the degrees of freedom, the lower the information content in the tails of the distribution. Again, we see that that the performance of CCA, KSG, and neural estimators is near-perfect for high degrees of freedom for which the distribution is approximately normal. For low degrees of freedom, these estimators significantly underestimate MI, with the exception of CCA which gives estimates with high variance, likely due to the correlation matrix being hard to estimate.
Overall, we see that long tails complicate the process of estimating MI. The Student distribution is particularly interesting, since even after the tails are removed (with \(\operatorname{asinh}\) or preprocessing strategies described in Appendix E) the task remains challenging. We suspect that the MI might be contained in regions which are rarely sampled, making the MI hard to estimate. When neural estimators are used, pointwise mutual information learned by the critic does not match the ground truth (see Appendix E).
Hence, we believe that accurate estimation of MI in long-tailed distributions is an open problem. Practitioners expecting long-tailed distributions should approach this task with caution.
Figure 4: MI estimates as a function of the information contained in the tail of the distribution. Left: lenghtening the tails via \(x\mapsto|x|^{k}\operatorname{sign}x\) mapping with changing \(k\). Right: increasing the degrees of freedom in multivariate Student distribution shortens the tails. Shaded regions represent the sample standard deviation.
Figure 3: Two-stage interpolation between dense and sparse matrices. From left to right the sparsity of interactions increases. Shaded regions represent the sample standard deviation.
### Invariance to Diffeomorphisms
The benchmark results (Fig. 2) as well as the experiments with function \(x\mapsto|x|^{k}\operatorname{sign}x\) lengthening tails (Fig. 4) suggest that even if ground-truth MI is preserved by a transformation (see Theorem 2.1), the finite-sample estimates may not be invariant. This can be demonstrated using the spiral diffeomorphism.
We consider isotropic multivariate normal variables \(X=(X_{1},X_{2})\sim\mathcal{N}(0,I_{2})\) and \(Y\sim\mathcal{N}(0,1)\) with \(\operatorname{Cor}(X_{1},Y)=\rho\). Then, we apply a spiral diffeomorphism \(s_{v}\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) to \(X\), where
\[s_{v}(x)=\begin{pmatrix}\cos v||x||^{2}&-\sin v||x||^{2}\\ \sin v||x||^{2}&\cos v||x||^{2}\end{pmatrix}\]
is the rotation around the origin by the angle \(v||x||^{2}\) (see Fig. 5; for a theoretical study of spiral diffeomorphisms see Appendix B). In Fig. 5, we present the estimates for \(N=10\,000\), \(\rho=0.8\) and varying speed \(v\).
In general, the performance of the considered estimators deteriorates with stronger deformations. This is especially visible for the CCA estimator, as MI cannot be reliably estimated in terms of correlation.
### High Mutual Information
Gao et al. (2015) proves that estimation of MI between strongly interacting variables requires large sample sizes. Poole et al. (2019); Song and Ermon (2020) investigate estimation of high MI using multivariate normal distributions in the setting when the ground-truth MI is changing over the training time. We decided to use our family of expressive distributions to see the practical limitations of estimating MI. We chose three one-parameter distribution families: the \(3{\times}3\) multivariate normal distribution with correlation \(\rho=\operatorname{Cor}(X_{1},Y_{1})=\operatorname{Cor}(X_{2},Y_{2})\), and its transformations by the half-cube mapping and the spiral diffeomorphism with \(v=1/3\). We changed the parameter to match the desired MI.
In Fig. 6 we see that for low enough MI neural estimators and KSG reliably estimate MI for all considered distributions, but the moment at which the estimation becomes inaccurate depends on the distribution. For example, for Gaussian distributions, neural estimators (and the simple CCA baseline) are accurate even up to 5 nats, while after the application of the spiral diffeomorphism, the estimates become inaccurate already at 2 nats.
We see that in general neural estimators perform well on distributions with high MI. The performance of CCA on the multivariate normal distribution suggests that well-specified model-based estimators may require lower number of samples to estimate high mutual information than model-free alternatives (cf. Gao et al. (2015)).
## 5 Related Work
Existing BenchmarksKhan et al. (2007) used one-dimensional variables and an additive noise assumption to evaluate the estimators based on histograms, nearest neighbors, and kernels. However,
Figure 5: Left: Spiral diffeomorphism with different speed parameter \(v\). Right: Performance of considered estimators for increasing speed.
the original code and samples are not available. Song and Ermon (2020) proposed to evaluate neural estimators based on variational lower bounds (Oord et al., 2018; Belghazi et al., 2018) on image data. However, besides the multivariate Gaussian which they also evaluate, the ground-truth MI is not available. Therefore, they proposed self-consistency checks, which include \(\mathbf{I}(X;Y){=}0\) for independent \(X\) and \(Y\), and preservation of the data processing inequality. Poole et al. (2019) study neural estimators in settings employing Gaussian variables, affine transformation followed by axis-wise cubic transformation and investigate the high MI cases. In our benchmark, we focus on distributions with known MI and information-preserving continuous injective mappings such as homeomorphisms, evaluate long tail distributions, and compare representative estimators from different classes and platforms. We summarize the differences between the benchmarks in Table 1.4
Footnote 4: Khan et al. (2007) consider two one-dimensional r.v. related by additive noise, in which one of the r.v. is parametrized with a cubic function. This implicitly lengthens the tails, so we marked this with (\(\checkmark\)); similarly, Poole et al. (2019) consider cubic transformation of a multivariate normal variable. The experimental setup of estimation of MI considered by Poole et al. (2019) and Song and Ermon (2020) requires changing ground truth MI over training time of a neural estimator. This procedure differs from the usual approaches to estimating MI, so we marked it with (\(\checkmark\)).
**Causal Representation Learning and Nonlinear ICA** Recent work on disentanglement and nonlinear independent component analysis (ICA) aims at identifying causal sources up to an ambiguous transformation (Xi and Bloem-Reddy, 2022). Examples include orthogonal transformations (Zimmermann et al., 2021), permutations and diffeomorphisms applied along specific axes (Khemakhem et al., 2020), or the whole diffeomorphism group (Von Kugelgen et al., 2021; Daunhawer et al., 2023). To check whether the learned representations indeed capture underlying causal variables, it is important to use estimators which are invariant to the ambiguous transformations specific to the model. Our results suggest that such requirements are not met by MI estimators and practitioners should carefully choose the used metrics, so they are not only theoretically invariant, but also can be robustly estimated in practice.
## 6 Conclusions and Limitations
Venturing beyond the typical evaluation of mutual information estimators on multivariate normal distributions, we provided evidence that the evaluation on Gaussian distributions results in overly optimistic and biased results for existing estimators. In particular, a simple model-based approach as
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Khan et al. & Poole et al. & Song and & Proposed \\ & (2007) & (2019) & Ermon (2020) & Benchmark \\ \hline Gaussian Distrib. & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Long-Tailed Distrib. & (\(\checkmark\)) & (\(\checkmark\)) & \(\checkmark\) & \(\checkmark\) \\ Sparse/Dense Inter. & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ High MI & \(\checkmark\) & (\(\checkmark\)) & (\(\checkmark\)) & \(\checkmark\) \\ Invariance to Rameon. & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Self-Consistency & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Language Indep. & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Code Available & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between benchmarks.
Figure 6: We select three one-parameter families of distributions varying the MI value (\(x\)-axis and dashed line). We use \(N=10\,000\) data points and \(n=5\) replicates of each experiment.
CCA is already sufficient if the distribution is (close to) multivariate normal, highlighting the effect of using available prior information about the problem to design efficient estimators. To provide a basis for more thorough evaluation, we designed a benchmark, compatible with any programming language, with forty general tasks focusing on selected problems, as estimation with sparse interactions, the effects of long tails, estimation in problems with high MI, and (only theoretical) invariance to diffeomorphisms.
Our results suggest that in low- and medium-dimensional problems the KSG estimator remains a solid baseline, requiring a lower number of samples than the alternatives and no parameter tuning. In high-dimensional settings where the interactions are sparse, neural estimators offer an advantage over classical estimators being able to more consistently pick up the signal hidden in the high-dimensional space. We further provide evidence that although MI is invariant to continuous injective transformations of variables, in practice the estimates can differ substantially. Appropriate data preprocessing may mitigate some of these issues, but no strategy clearly outperforms the others (see Appendix E).
We hope that the proposed benchmark encourages the development of new estimators, which address the challenges that we highlight in this paper. It can be used to assess invariance to homeomorphisms, understand an estimator's behavior in the situations involving distributions of different complexity, low-data behaviour, high mutual information or to diagnose problems with implementations. We believe that some parts of the benchmark could also be used to assess whether other estimators of other statistical quantities, such as kernel dependence tests, transfer entropy or representation similarity measures, can be reliably estimated on synthetic problems and if they are robust to selected classes of transformations.
Limitations and Open ProblemsAlthough the proposed benchmark covers a wide family of distributions, its main limitation is the possibility to only transform the distribution \(P_{XY}\) via transformations \(f\times g\) to \(P_{f(X)g(Y)}\) starting from multivariate normal and Student distributions. In particular, not every possible joint distribution is of this form and extending the family of distributions which have known MI and allow efficient sampling is the natural direction of continuation. We are, however, confident that due to the code design it will be easy to extend the benchmark with the new distributions (and their transformations) when they appear.
Estimation of information in multivariate Student distributions remains a challenge for most estimators. Although we provide the evidence that distributions with longer tails pose a harder challenge for considered estimators, shortening the tails with \(\operatorname{asinh}\) transform did not alleviate all the issues. In neural estimators this can be partly explained by the fact that the critic may not learn to approximate pointwise mutual information (up to an additive constant) properly, as demonstrated in Appendix E, but the question of what makes the neighborhood-based estimator, KSG, fail, remains open.
As noted, unless strong inductive biases can be exploited (as illustrated for CCA), estimation of high MI is an important challenge. An interesting avenue towards developing new estimators can be by incorporating a stronger inductive bias, which could also take a form of development of better data normalization strategies and, in case of neural estimators, critic architectures. Better normalization strategies may make the estimators more invariant to considered transformations.
#### Acknowledgments and Disclosure of Funding
We would like to thank Craig Hamilton for the advice on scientific writing and help with the abstract. FG was supported by the Norwegian Financial Mechanism GRIEG-1 grant operated by Narodowe Centrum Nauki (National Science Centre, Poland) 2019/34/H/NZ6/00699. PC and AM were supported by a fellowship from the ETH AI Center.
## References
* Ao and Li (2022) Ziqiao Ao and Jinglai Li. Entropy estimation via normalizing flow. _Proceedings of the AAAI Conference on Artificial Intelligence_, 36(9):9990-9998, 2022.
* Arellano-Valle et al. (2013) R.B. Arellano-Valle, J.E. Contreras-Reyes, and M.G. Genton. Shannon entropy and mutual information for multivariate skew-elliptical distributions. _Scandinavian Journal of Statistics_, 49:42-62, 2013.
* Belghazi et al. (2018) Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In _International conference on machine learning_, pages 531-540. PMLR, 2018.
* Bradbury et al. (2018) James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL [http://github.com/google/jax](http://github.com/google/jax).
* Buhlmann and van de Geer (2011) Peter Buhlmann and Sara van de Geer. _Statistics for High-Dimensional Data: Methods, Theory and Applications_. Springer Publishing Company, Incorporated, 1st edition, 2011. ISBN 3642201911.
* Cabeli et al. (2020) Vincent Cabeli, Louis Verny, Nadir Sella, Guido Uguzzoni, Marc Verny, and Herve Isambert. Learning clinical networks from medical records based on information estimates in mixed-type data. _PLOS Computational Biology_, 16(5):e1007866, 2020.
* Carapetis (2022) Anthony Carapetis. Rotating spherical shells doesn't change volume. Mathematics Stack Exchange, 2022. URL [https://math.stackexchange.com/q/4500294](https://math.stackexchange.com/q/4500294). URL:[https://math.stackexchange.com/q/4500294](https://math.stackexchange.com/q/4500294) (version: 2022-07-26).
* Cellucci et al. (2005) Christopher J Cellucci, Alfonso M Albano, and Paul E Rapp. Statistical validation of mutual information calculations: Comparison of alternative numerical algorithms. _Physical review E_, 71(6):066208, 2005.
* Cover and Thomas (2006) Thomas M. Cover and Joy A. Thomas. _Elements of Information Theory_. Wiley-Interscience New York, 2006.
* Darbellay and Vajda (1999) Georges A Darbellay and Igor Vajda. Estimation of the information by an adaptive partitioning of the observation space. _IEEE Transactions on Information Theory_, 45(4):1315-1321, 1999.
* Daub et al. (2004) Carsten O Daub, Ralf Steuer, Joachim Selbig, and Sebastian Kloska. Estimating mutual information using b-spline functions-an improved similarity measure for analysing gene expression data. _BMC bioinformatics_, 5(1):1-12, 2004.
* Daunhawer et al. (2023) Imant Daunhawer, Alice Bizeul, Emanuele Palumbo, Alexander Marx, and Julia E Vogt. Identifiability results for multimodal contrastive learning. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=U_2kuq0TcB](https://openreview.net/forum?id=U_2kuq0TcB).
* Gao et al. (2015) Shuyang Gao, Greg Ver Steeg, and Aram Galstyan. Efficient estimation of mutual information for strongly dependent variables. In _Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)_, pages 277-286. PMLR, 2015.
* Gao et al. (2017a) Weihao Gao, Sreeram Kannan, Sewoong Oh, and Pramod Viswanath. Estimating mutual information for discrete-continuous mixtures. In _Advances in Neural Information Processing Systems_, pages 5986-5997, 2017a.
* Gao et al. (2017b) Weihao Gao, Sewoong Oh, and Pramod Viswanath. Density functional estimators with k-nearest neighbor bandwidths. In _IEEE International Symposium on Information Theory (ISIT)_, pages 1351-1355. IEEE, 2017b.
* Gao et al. (2018) Weihao Gao, Sewoong Oh, and Pramod Viswanath. Demystifying fixed k-nearest neighbor information estimators. _IEEE Transactions on Information Theory_, 64(8):5629-5661, 2018.
* Gao et al. (2018)* Grabowski et al. (2019) Frederic Grabowski, Pawel Czyz, Marek Kochanczyk, and Tomasz Lipniacki. Limits to the rate of information transmission through the MAPK pathway. _Journal of The Royal Society Interface_, 16(152):20180792, 2019. doi: 10.1098/rsif.2018.0792. URL [https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2018.0792](https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2018.0792).
* Hauberg (2019) Soren Hauberg. Only Bayes should learn a manifold (on the estimation of differential geometric structure from data), 2019.
* Holmes and Nemenman (2019) Caroline M Holmes and Ilya Nemenman. Estimation of mutual information for real-valued data with error bars and controlled bias. _Physical Review E_, 100(2):022404, 2019.
* Kay (1992) J. Kay. Feature discovery under contextual supervision using mutual information. In _IJCNN International Joint Conference on Neural Networks_, volume 4, pages 79-84, 1992. doi: 10.1109/IJCNN.1992.227286.
* Kechris (2012) A. Kechris. _Classical Descriptive Set Theory_. Graduate Texts in Mathematics. Springer New York, 2012. ISBN 9781461241904. URL [https://books.google.ch/books?id=WR3SBwAAQBAJ](https://books.google.ch/books?id=WR3SBwAAQBAJ).
* Keys et al. (2015) Dustin Keys, Shukur Kholikov, and Alexei A. Pevtsov. Application of mutual information methods in time-distance helioseismology. _Solar Physics_, 290(3):659-671, Mar 2015. ISSN 1573-093X. doi: 10.1007/s11207-015-0650-y. URL [https://doi.org/10.1007/s11207-015-0650-y](https://doi.org/10.1007/s11207-015-0650-y).
* Khan et al. (2007) Shiraj Khan, Sharba Bandyopadhyay, Auroop R Ganguly, Sunil Saigal, David J Erickson III, Vladimir Protopopescu, and George Ostrouchov. Relative performance of mutual information estimation methods for quantifying the dependence among short and noisy data. _Physical Review E_, 76(2):026209, 2007.
* Khemakhem et al. (2020) Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ICA: a unifying framework. In _International Conference on Artificial Intelligence and Statistics_, pages 2207-2217. PMLR, 2020.
* Klemela (2009) Jussi Klemela. Multivariate histograms with data-dependent partitions. _Statistica sinica_, pages 159-176, 2009.
* Koeman and Heskes (2014) Mike Koeman and Tom Heskes. Mutual information estimation with random forests. In _International Conference on Neural Information Processing_, pages 524-531. Springer, 2014.
* Kozachenko and Leonenko (1987) L. F. Kozachenko and N. N. Leonenko. Sample estimate of the entropy of a random vector. _Problemy Peredachi Informatsii_, 23:9-16, 1987.
* Kraskov et al. (2004) Alexander Kraskov, Harald Stogbauer, and Peter Grassberger. Estimating mutual information. _Physical Review E_, 69(6):066138, 2004.
* Kurutach et al. (2018) Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart J Russell, and Pieter Abbeel. Learning plannable representations with causal infogan. _Advances in Neural Information Processing Systems_, 31, 2018.
* Li et al. (2022) Bo Li, Yifei Shen, Yezhen Wang, Wenzhen Zhu, Dongsheng Li, Kurt Keutzer, and Han Zhao. Invariant information bottleneck for domain generalization. In _Proceedings of the AAAI Conference on Artificial Intelligence_, pages 7399-7407, 2022.
* Lord et al. (2018) Warren M Lord, Jie Sun, and Erik M Bollt. Geometric k-nearest neighbor estimation of entropy and mutual information. _Chaos: An Interdisciplinary Journal of Nonlinear Science_, 28(3):033114, 2018.
* Lu and Peltonen (2020) Chien Lu and Jaakko Peltonen. Enhancing nearest neighbor based entropy estimator for high dimensional distributions via bootstrapping local ellipsoid. In _Proceedings of the AAAI Conference on Artificial Intelligence_, pages 5013-5020, 2020.
* Lucas et al. (2006) Joseph Lucas, Carlos Carvalho, Quanli Wang, Andrea Bild, Joseph R. Nevins, and Mike West. _Sparse Statistical Modelling in Gene Expression Genomics_, page 155-176. Cambridge University Press, 2006. doi: 10.1017/CBO9780511584589.009.
* Marx and Fischer (2022) Alexander Marx and Jonas Fischer. Estimating mutual information via geodesic \(k\)nn. In _Proceedings of the SIAM International Conference on Data Mining (SDM)_, pages 415-423. SIAM, 2022.
* Marx and Fischler (2019)Alexander Marx and Jilles Vreeken. Testing conditional independence on discrete data using stochastic complexity. In _Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)_, pages 496-505, 2019.
* Marx et al. [2021] Alexander Marx, Lincen Yang, and Matthijs van Leeuwen. Estimating conditional mutual information for discrete-continuous mixtures using multi-dimensional adaptive histograms. In _Proceedings of the SIAM International Conference on Data Mining (SDM)_, pages 387-395, 2021.
* McAllester and Stratos [2020] David McAllester and Karl Stratos. Formal limitations on the measurement of mutual information. In Silvia Chiappa and Roberto Calandra, editors, _Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics_, volume 108 of _Proceedings of Machine Learning Research_, pages 875-884. PMLR, 26-28 Aug 2020. URL [https://proceedings.mlr.press/v108/mcallester20a.html](https://proceedings.mlr.press/v108/mcallester20a.html).
* Mesner and Shalizi [2020] Octavio Cesar Mesner and Cosma Rohilla Shalizi. Conditional Mutual Information Estimation for Mixed, Discrete and Continuous, Data. _IEEE Transactions on Information Theory_, 2020.
* Moon et al. [1995] Young-Il Moon, Balaji Rajagopalan, and Upmanu Lall. Estimation of mutual information using kernel density estimators. _Phys. Rev. E_, 52:2318-2321, Sep 1995. doi: 10.1103/PhysRevE.52.2318. URL [https://link.aps.org/doi/10.1103/PhysRevE.52.2318](https://link.aps.org/doi/10.1103/PhysRevE.52.2318).
* Murphy [2023] Kevin P. Murphy. _Probabilistic Machine Learning: Advanced Topics_. MIT Press, 2023. URL [http://probml.github.io/book2](http://probml.github.io/book2).
* Molder et al. [2021] F Molder, KP Jablonski, B Letcher, MB Hall, CH Tomkins-Tinch, V Sochat, J Forster, S Lee, SO Wardziok, A Kanitz, A Wilm, M Holtgrewe, S Rahmann, S Nahnsen, and J Koster. Sustainable data analysis with snakemake [version 1; peer review: 1 approved, 1 approved with reservations]. _F1000Research_, 10(33), 2021. doi: 10.12688/f1000research.29032.1.
* Nelsen [2007] R.B. Nelsen. _An Introduction to Copulas_. Springer Series in Statistics. Springer New York, 2007. ISBN 9780387286785. URL [https://books.google.ch/books?id=B3ONT5rBv0wC](https://books.google.ch/books?id=B3ONT5rBv0wC).
* Nguyen et al. [2007] XuanLong Nguyen, Martin J Wainwright, and Michael Jordan. Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, _Advances in Neural Information Processing Systems_, volume 20. Curran Associates, Inc., 2007. URL [https://proceedings.neurips.cc/paper_files/paper/2007/file/72da7fd6d1302c0a159f6436d01e9eb0-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2007/file/72da7fd6d1302c0a159f6436d01e9eb0-Paper.pdf).
* Nowozin et al. [2016] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. F-gan: Training generative neural samplers using variational divergence minimization. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_, NIPS'16, page 271-279, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.
* van den Oord et al. [2018] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_, 2018.
* Paninski [2003] Liam Paninski. Estimation of entropy and mutual information. _Neural Computation_, 15(6):1191-1253, 2003.
* Paninski and Yajima [2008] Liam Paninski and Masanao Yajima. Undersmoothed kernel entropy estimators. _IEEE Transactions on Information Theory_, 54(9):4384-4388, 2008.
* Pedregosa et al. [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_, 12:2825-2830, 2011.
* Pinsker and Feinstein [1964] M.S. Pinsker and A. Feinstein. _Information and Information Stability of Random Variables and Processes_. Holden-Day series in time series analysis. Holden-Day, 1964. ISBN 9780816268047.
* Pizer et al. [1987] Stephen M Pizer, E Philip Amburn, John D Austin, Robert Cromartie, Ari Geselowitz, Trey Greer, Bart ter Haar Romeny, John B Zimmerman, and Karel Zuiderveld. Adaptive histogram equalization and its variations. _Computer Vision, Graphics, and Image Processing_, 39(3):355-368, 1987.
* Pinsky et al. [2016]Y. Polyanskiy and Y. Wu. _Information Theory: From Coding to Learning_. Cambridge University Press, 2022. Book draft.
* Poole et al. (2019) Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In _International Conference on Machine Learning_, pages 5171-5180. PMLR, 2019.
* Ragonesi et al. (2021) Ruggero Ragonesi, Riccardo Volpi, Jacopo Cavazza, and Vittorio Murino. Learning unbiased representations via mutual information backpropagation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2729-2738, 2021.
* Selimkhanov et al. (2014) Jangir Selimkhanov, Brooks Taylor, Jason Yao, Anna Pilko, John Albeck, Alexander Hoffmann, Lev Tsimring, and Roy Wollman. Accurate information transmission through dynamic biochemical signaling networks. _Science_, 346(6215):1370-1373, 2014.
* Silverman (2018) Bernard W Silverman. _Density estimation for statistics and data analysis_. Routledge, 2018.
* Solo (2008) Victor Solo. On causality and mutual information. In _2008 47th IEEE Conference on Decision and Control_, pages 4939-4944. IEEE, 2008.
* Song and Ermon (2020) Jiaming Song and Stefano Ermon. Understanding the limitations of variational mutual information estimators. In _International Conference on Learning Representations_, 2020.
* Suzuki (2016) Joe Suzuki. An estimator of mutual information and its application to independence testing. _Entropy_, 18(4):109, 2016.
* Taleb (2020) Nassim Nicholas Taleb. Statistical consequences of fat tails: Real world preasymptotics, epistemology, and applications, 2020. URL [https://arxiv.org/abs/2001.10488](https://arxiv.org/abs/2001.10488).
* Tschannen et al. (2020) Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. In _International Conference on Learning Representations_, 2020.
* Uda (2020) Shinsuke Uda. Application of information theory in systems biology. _Biophysical Reviews_, 12(2):377-384, Apr 2020. ISSN 1867-2469. doi: 10.1007/s12551-020-00665-w. URL [https://doi.org/10.1007/s12551-020-00665-w](https://doi.org/10.1007/s12551-020-00665-w).
* Valiant and Valiant (2011) Gregory Valiant and Paul Valiant. Estimating the unseen: an \(n/\log(n)\)-sample estimator for entropy and support size, shown optimal via new clts. In _Proceedings of the annual ACM symposium on Theory of computing (STOC)_, pages 685-694, 2011.
* Vejmelka and Hlavackova-Schindler (2007) Martin Vejmelka and Katerina Hlavackova-Schindler. Mutual information estimation in higher dimensions: A speed-up of a k-nearest neighbor based estimator. In _International Conference on Adaptive and Natural Computing Algorithms_, pages 790-797. Springer, 2007.
* Vinh et al. (2014) Nguyen Xuan Vinh, Jeffrey Chan, and James Bailey. Reconsidering mutual information based feature selection: A statistical significance view. In _Proceedings of the AAAI Conference on Artificial Intelligence_, 2014.
* Von Kugelgen et al. (2021) Julius Von Kugelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Scholkopf, Michel Besserve, and Francesco Locatello. Self-supervised learning with data augmentations provably isolates content from style. _Advances in neural information processing systems_, 34:16451-16467, 2021.
* Xi and Bloem-Reddy (2022) Quanhan Xi and Benjamin Bloem-Reddy. Indeterminacy in latent variable models: Characterization and strong identifiability. _arXiv preprint arXiv:2206.00801_, 2022.
* Young et al. (2023) Alexander L. Young, Willem van den Boom, Rebecca A. Schroeder, Vijay Krishnamoorthy, Karthik Raghunathan, Hau-Tieng Wu, and David B. Dunson. Mutual information: Measuring nonlinear dependence in longitudinal epidemiological data. _PLOS ONE_, 18(4):1-16, 04 2023. doi: 10.1371/journal.pone.0284904. URL [https://doi.org/10.1371/journal.pone.0284904](https://doi.org/10.1371/journal.pone.0284904).
* Zhang et al. (2021) Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, and Jiashi Feng. Deep long-tailed learning: A survey, 2021. URL [https://arxiv.org/abs/2110.04596](https://arxiv.org/abs/2110.04596).
Roland S Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, and Wieland Brendel. Contrastive learning inverts the data generating process. In _International Conference on Machine Learning_, pages 12979-12990. PMLR, 2021.
## Appendix A Appendix
### Table of Contents
* A Invariance to Continuous Injective Mappings
* B Spiral Diffeomorphisms
* C Overview of Estimators
* D Additional Information on Benchmark Tasks
* D.1 Task Descriptions
* D.2 Covariance Matrix Family
* D.3 Technical Results
* E Additional Experiments
* E.1 Pointwise Mutual Information
* E.2 Sample Size Requirements
* E.3 Preprocessing
* E.4 Hyperparameter Selection
* F Visualisation of Selected Tasks
* G Author ContributionsInvariance to Continuous Injective Mappings
In this section, we provide a proof for Theorem 2.1, formally proving the invariance of mutual information with respect to continuous injective mappings. First, we recall the standard definitions from Chapters 1 and 2 of Pinsker and Feinstein [1964].
**Definition A.1**.: Let \((\Omega,\mathcal{A})\) be a measurable space. A family of sets \(\mathcal{E}\subseteq\mathcal{A}\) is a partition of \(\Omega\) if for every two distinct \(E_{1}\) and \(E_{2}\) in \(\mathcal{E}\) it holds that \(E_{1}\cap E_{2}=\varnothing\) and that \(\bigcup\mathcal{E}=\Omega\). If \(\mathcal{E}\) is finite, we call it a finite partition.
**Definition A.2** (Mutual Information).: Mutual information between r.v. \(X\) and \(Y\) is defined as
\[\mathbf{I}(X;Y)=\sup_{\mathcal{E},\mathcal{F}}\sum_{i,j}P_{XY}(E_{i}\times F_{j })\log\frac{P_{XY}(E_{i}\times F_{j})}{P_{X}(E_{i})P_{Y}(F_{j})},\]
where the supremum is taken over all finite partitions \(\mathcal{E}=\{E_{1},E_{2},\dots\}\) of \(\mathcal{X}\) and \(\mathcal{F}=\{F_{1},F_{2},\dots\}\) of \(\mathcal{Y}\) with the usual conventions \(0\log 0=0\) and \(0\log 0/0=0\).
_Remark A.3_.: If \(P_{XY}\) is absolutely continuous with respect to \(P_{X}\otimes P_{Y}\) (e.g., \(X\) and \(Y\) are finite r.v.), mutual information is equal to the following Kullback-Leibler divergence:
\[\mathbf{I}(X;Y)=\mathbf{D}_{\mathrm{KL}}\left(P_{XY}\parallel P_{X}\otimes P_{ Y}\right)=\int\log f\,\mathrm{d}P_{XY},\]
where \(f\) is the Radon-Nikodym derivative \(f=\mathrm{d}P_{XY}/\mathrm{d}(P_{X}\otimes P_{Y})\). If the absolute continuity does not hold, then \(\mathbf{I}(X;Y)=+\infty\)[Pinsker and Feinstein, 1964, Theorem 2.1.2].
**Theorem 2.1**.: _Let \(\mathcal{X}\), \(\mathcal{X}^{\prime}\), \(\mathcal{Y}\) and \(\mathcal{Y}^{\prime}\) be standard Borel spaces (e.g., smooth manifolds with their Borel \(\sigma\)-algebras) and \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) be continuous injective mappings. Then, for every \(\mathcal{X}\)-valued r.v. \(X\) and \(\mathcal{Y}\)-valued r.v. \(Y\) it holds that \(\mathbf{I}(X;Y)=\mathbf{I}(f(X);g(Y))\)._
Proof.: First, note that a continuous mapping between Borel spaces is measurable, so random variables \(f(X)\) and \(g(Y)\) are well-defined.
Then, observe that it suffices to prove that
\[\mathbf{I}(f(X);Y)=\mathbf{I}(X;Y).\]
It is well-known that we have \(\mathbf{I}(f(X);Y)\leq\mathbf{I}(X;Y)\) for every measurable mapping \(f\), so we will need to prove the reverse inequality.
Recall form Definition A.2 that
\[\mathbf{I}(X;Y)=\sup_{\mathcal{E},\mathcal{F}}\sum_{i,j}P_{XY}(E_{i}\times F_{ j})\log\frac{P_{XY}(E_{i}\times F_{j})}{P_{X}(E_{i})P_{Y}(F_{j})},\]
where the supremum is taken over all finite partitions \(\mathcal{E}\) of \(\mathcal{X}\) and \(\mathcal{F}\) of \(\mathcal{Y}\).
The proof will go as follows: for every finite partition \(\mathcal{E}\) of \(\mathcal{X}\) we will construct a finite partition of \(\mathcal{X}^{\prime}\), which (together with the same arbitrary partition \(\mathcal{F}\) of \(\mathcal{Y}\)) will give the same value of the sum. This will prove that \(\mathbf{I}(f(X);Y)\geq\mathbf{I}(X;Y)\).
Take any finite partitions \(\{E_{1},\dots,E_{n}\}\) of \(\mathcal{X}\) and \(\{F_{1},\dots,F_{m}\}\) of \(\mathcal{Y}\). First, we will show that sets
\[G_{0} =\mathcal{X}^{\prime}\setminus f(\mathcal{X})\] \[G_{1} =f(E_{1})\] \[G_{2} =f(E_{2})\] \[\vdots\]
form a partition of \(\mathcal{X}^{\prime}\).
It is easy to see that they are pairwise disjoint as \(f\) is injective and it is obvious that their union is the whole \(\mathcal{X}^{\prime}\). Hence, we only need to show that they are all Borel.
For \(i\geq 1\) the Lusin-Suslin theorem [Kechris, 2012, Theorem 15.1] guarantees that \(G_{i}\) is Borel (continuous injective mappings between Polish spaces map Borel sets to Borel sets). Similarly, \(f(\mathcal{X})\) is Borel and hence \(G_{0}=\mathcal{X}^{\prime}\setminus f(\mathcal{X})\) is Borel as well.
Next, we will prove that
\[\sum_{i=1}^{n}\sum_{j=1}^{m}P_{XY}(E_{i}\times F_{j})\log\frac{P_{XY}(E_{i}\times F _{j})}{P_{X}(E_{i})P_{Y}(F_{j})}=\sum_{i=0}^{n}\sum_{j=1}^{m}P_{f(X)Y}(G_{i} \times F_{j})\log\frac{P_{f(X)Y}(G_{i}\times F_{j})}{P_{f(X)}(G_{i})P_{Y}(F_{j} )}.\]
Note that \(P_{f(X)}(G_{0})=P_{X}(f^{-1}(G_{0}))=P_{X}(\varnothing)=0\), so that we can ignore the terms with \(i=0\).
For \(i\geq 1\) we have
\[P_{f(X)}(G_{i})=P_{X}(f^{-1}(G_{i}))=P_{X}(f^{-1}(f(E_{i})))=P_{X}(E_{i})\]
as \(f\) is injective.
Similarly, \(f\times 1_{\mathcal{Y}}\) is injective and maps \(E_{i}\times F_{j}\) onto \(G_{i}\times F_{j}\), so
\[P_{f(X)Y}(G_{i}\times F_{j})=P_{XY}\big{(}(f\times 1_{\mathcal{Y}})^{-1}(G_{i }\times F_{j})\big{)}=P_{XY}(E_{i}\times F_{j}).\]
This finishes the proof.
## Appendix B Spiral Diffeomorphisms
In this section we will formally define the spiral diffeomorphisms and prove that they preserve the centred Gaussian distribution \(\mathcal{N}(0,\sigma^{2}I_{n})\). They were inspired by the "swirling construction", discussed by Hauberg [2019].
**Definition B.1**.: Let \(O(n)\) be the orthogonal group and \(\gamma\colon\mathbb{R}\to O(n)\) be any smooth function. The mapping
\[f\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\]
given by \(f(x)=\gamma(||x||^{2})x\) will be called the spiral.
**Example B.2**.: _Let \(\mathfrak{so}(n)\) be the vector space of all skew-symmetric matrices. For every two matrices \(R\in O(n)\) and \(A\in\mathfrak{so}(n)\) and any smooth mapping \(g\colon\mathbb{R}\to\mathbb{R}\) we can form a path_
\[\gamma(t)=R\exp(g(t)A).\]
_In particular, \(R=I_{n}\) and \(g(t)=vt\) yields the spirals defined earlier._
From the above definition it is easy to prove that spirals are diffeomorphisms. We formalize it by the following lemma.
**Lemma B.3**.: _Every spiral is a diffeomorphism._
Proof.: Let \(\gamma\) be the path leading to the spiral \(f\). As matrix inverse is a smooth function, we can define a smooth function \(h(x)=\gamma\left(||x||^{2}\right)^{-1}x\).
Note that \(||f(x)||^{2}=||x||^{2}\), as orthogonal transformation \(\gamma\left(||x||^{2}\right)\) preserves distances.
Then
\[h(f(x)) =\gamma\left(||f(x)||^{2}\right)^{-1}f(x)\] \[=\gamma\left(||x||^{2}\right)^{-1}f(x)\] \[=\gamma\left(||x||^{2}\right)^{-1}\gamma\left(||x||^{2}\right)x\] \[=x.\]
Analogously, we prove that \(f\circ h\) also is the identity mapping.
Finally, we will give the proof that spirals leave the centred normal distribution \(\mathcal{N}(0,\sigma^{2}I_{n})\) invariant. This will follow from a slightly stronger theorem:
**Theorem B.4**.: _Let \(P\) be a probability measure on \(\mathbb{R}^{n}\) with a smooth probability density function \(p\) which factorizes as:_
\[p(x)=u\left(||x||^{2}\right)\]_for some smooth function \(u\colon\mathbb{R}\to\mathbb{R}\)._
_Then, every spiral \(f\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) preserves the measure \(P\):_
\[f_{\#}P=P,\]
_where \(f_{\#}P=P\circ f^{-1}\) is the pushforward measure._
_In particular, for \(P=\mathcal{N}(0,\sigma^{2}I_{n})\) (a centered isotropic Gaussian distribution), if \(X\sim\mathcal{N}(0,\sigma^{2}I_{n})\), then \(f(X)\sim\mathcal{N}(0,\sigma^{2}I_{n})\)._
Proof.: As we have already observed, for every \(x\in\mathbb{R}^{n}\) it holds that \(||f(x)||^{2}=||x||^{2}\), so the PDF evaluated at both points is equal
\[p(x)=p(f(x))\]
Hence, we need to prove that for every point \(x\in\mathbb{R}^{n}\), \(|\det f^{\prime}(x)|=1\), where \(f^{\prime}(x)\) is the Jacobian matrix of \(f\) evaluated at the point \(x\).
The proof of this identity was given by Carapetis (2022), but for the sake of completeness we supply a variant of it below.
First, we calculate the Jacobian matrix:
\[\frac{\partial f_{i}(x)}{\partial x_{j}} =\frac{\partial}{\partial x_{j}}\sum_{k}\gamma(||x||^{2})_{ik}x_{k}\] \[=\sum_{k}\gamma(||x||^{2})_{ik}\frac{\partial x_{k}}{\partial x_{ j}}+\sum_{k}x_{k}\frac{\partial\gamma(||x||^{2})_{ik}}{\partial x_{j}}\] \[=\gamma(||x||^{2})_{ij}+\sum_{k}x_{k}\cdot 2x_{j}\cdot\gamma^{ \prime}(||x||^{2})_{ik}\] \[=\gamma(||x||^{2})_{ij}+2\left(\sum_{k}\gamma^{\prime}(||x||^{2}) _{ik}x_{k}\right)x_{j}\]
Hence,
\[f^{\prime}(x)=\gamma(||x||^{2})+2\gamma^{\prime}(||x||^{2})xx^{T}.\]
Now fix point \(x\) and consider the matrix \(\gamma(||x||^{2})\in O(n)\). Define a mapping \(y\mapsto\gamma(||x||^{2})^{-1}f(y)\). As we have already observed, multiplication by \(\gamma(||x||^{2})^{-1}\) does not change the value of the PDF. It also cannot change the \(|\det f^{\prime}(x)|\) value as because \(\gamma(||x||^{2})^{-1}\) is a linear mapping, its derivative is just the mapping itself and because \(\gamma(||x||^{2})^{-1}\) is an orthogonal matrix, we have \(|\det\gamma(||x||^{2})^{-1}|=1\). Hence, without loss of generality we can assume that \(\gamma(||x||^{2})=I_{n}\) and
\[f^{\prime}(x)=I_{n}+2Axx^{T},\]
where \(A:=\gamma^{\prime}(||x||^{2})\in\mathfrak{so}(n)\) is a skew-symmetric matrix.
If \(x=0\) we have \(|\det f^{\prime}(x)|=|\det I_{n}|=1\). In the other case, we will work in coordinates induced by an orthonormal basis \(e_{1},\ldots,e_{n}\) in which the first basis vector is \(e_{1}=x/||x||\). Hence,
\[f^{\prime}(x)=I_{n}+2||x||^{2}Ae_{1}e_{1}^{T}.\]
For \(i\geq 2\) we have \(e_{1}^{T}e_{i}=0\) and consequently
\[f^{\prime}(x)e_{i}=e_{i}+2||x||^{2}Ae_{1}e_{1}^{T}e_{i}=e_{i}.\]
This shows that in this basis we have
\[f^{\prime}(x)=\begin{pmatrix}a_{1}&0&0&\ldots&0\\ a_{2}&1&0&\ldots&0\\ a_{3}&0&1&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&0\\ a_{n}&0&0&\ldots&1\end{pmatrix},\]
where the block on the bottom right is the identity matrix \(I_{n-1}\) and \(\det f^{\prime}(x)=a_{1}\).
The first column is formed by the coordinates of
\[f^{\prime}(x)e_{1}=e_{1}+2||x||^{2}Ae_{1}\]
and we have
\[a_{1}=e_{1}^{T}f^{\prime}(x)e_{1}=1+2||x||^{2}e_{1}^{T}Ae_{1}=1+0=1\]
as \(A\in\mathfrak{so}(n)\) is skew-symmetric.
## Appendix C Overview of Estimators
There exists a broad literature on (conditional) mutual information estimators ranging from estimators on discrete data (Paninski, 2003; Valiant and Valiant, 2011; Vinh et al., 2014; Marx and Vreeken, 2019), over estimators for continuous data (Kraskov et al., 2004; Darbellay and Vajda, 1999; Belghazi et al., 2018) to estimators that can consider discrete-continuous mixture data (Gao et al., 2017; Marx et al., 2021; Mesner and Shalizi, 2020). Here, we focus on the most commonly used class, which also finds applications in representation learning: estimators for continuous data. For continuous data, the most widely propagated classes of estimators are histogram or partition-based estimators, nearest neighbor-based density estimators, kernel methods, and neural estimators. In the following, we briefly discuss a representative selection of estimators from these classes, as well as model-based estimation.
Histogram-Based EstimatorsA natural way to approximate mutual information, as stated in Definition A.2, is by partitioning the domain space of \(X\) and \(Y\) into segments and estimate the densities locally. This approach is also known as histogram density estimation, where the probability mass function is approximated locally for each hyper-rectangle \(E_{i}\) by \(\hat{P}_{X}(E_{i})=\frac{n_{i}/N}{\mathbb{Vol}_{i}}\), where \(\mathbb{Vol}_{i}\) is the volume of \(E_{i}\), and \(n_{i}\) refers to the number of data points residing in \(E_{i}\).
To estimate mutual information, we explore the space of allowed partitionings and select the partitioning that maximizes mutual information (Darbellay and Vajda, 1999; Cellucci et al., 2005)
\[E_{\text{Disc},N}^{XY}=\sup_{\mathcal{E},\mathcal{F}}\sum_{i,j}\hat{P}_{XY}(E_ {i}\times F_{j})\log\frac{\hat{P}_{XY}(E_{i}\times F_{j})}{\hat{P}_{X}(E_{i}) \hat{P}_{Y}(F_{j})}\.\]
Many estimators follow this idea. Darbellay and Vajda (1999) use adaptive partitioning trees and propose to split bins until the density within a bin is uniform, which they check with the \(\chi^{2}\) test. Daub et al. (2004) use B-Splines to assign data points to bins, and other authors use regularization based on the degrees of freedom of partitioning to avoid overfitting (Suzuki, 2016; Cabeli et al., 2020; Vinh et al., 2014; Marx et al., 2021).
A well-known property of histogram-based estimators is that they are consistent for \(N\rightarrow\infty\) if the volume of the largest hypercube goes to zero, while the number of hypercubes grows sub-linearly with the number of samples, \(N\) (see e.g. Cover and Thomas (2006)). For low-dimensional settings, histogram density estimation can provide accurate results, but the strategy suffers from the curse of dimensionality (Pizer et al., 1987; Klemela, 2009).
In our evaluation, we resort to partitioning estimators based on a fixed number of grids (\(10\) equal-width bins per dimension), since more recent partitioning-based estimators focus on estimating conditional MI and only allow \(X\) and \(Y\) to be one-dimensional (Suzuki, 2016; Marx et al., 2021; Cabeli et al., 2020), or do not have a publicly available implementation (Darbellay and Vajda, 1999).
\(k\)NN EstimatorsAn alternative to histogram-based methods is to use \(k\)-nearest neighbor (\(k\)NN) distances to approximate densities locally. Most generally, \(k\)NN estimators express mutual information as the sum \(H(X)+H(Y)-H(X,Y)\), which can be done under the assumptions discussed in Remark A.3, and estimate the individual entropy terms as
\[\hat{H}((x_{1},\ldots,x_{N}))=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{k(x_{i})/N} {\mathbb{Vol}_{i}}\,\]
where \(k(x_{i})\) is the number of data points other than \(x_{i}\) in a neighborhood of \(x_{i}\), and \(\mathbb{Vol}_{i}\) is the volume of the neighborhood (Kozachenko and Leonenko, 1987). The quality of the estimator depends on two factors: i) the distance metric used to estimate the nearest neighbor distances, and ii) the estimation of the volume terms.
The most well-known estimator is the Kraskov, Stogbauer and Grassberger (KSG) estimator (Kraskov et al., 2004). In contrast to the first \(k\)NN-based estimators (Kozachenko and Leonenko, 1987), Kraskov et al. (2004) propose to estimate the distance to the \(k\)-th neighbor \(\nicefrac{{\rho_{i,k}}}{{2}}\) for each data point \(i\) only on the joint space of \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\), with
\[D_{Z}(z_{i},z_{j})=\max\{D_{X}(x_{i},x_{j}),D_{Y}(y_{i},y_{j})\}\,,\]
where \(D_{X}\) and \(D_{Y}\) can refer to any norm-induced metric (e.g., Euclidean \(\ell_{2}\), Manhattan \(\ell_{1}\) or Chebyshev \(\ell_{\infty}\))5 To compute the entropy terms of the sub-spaces \(\mathcal{X}\) and \(\mathcal{Y}\), we count the number of data points with distance smaller than \(\nicefrac{{\rho_{i,k}}}{{2}}\), i.e. we define \(n_{x,i}\) as
Footnote 5: In the original approach, the \(L_{\infty}\) is used, and Gao et al. (2018) derives theoretical properties of the estimator for the \(L_{2}\) norm.
\[n_{x,i}=\left|\left\{x_{j}\,:\,D_{X}(x_{i},x_{j})<\frac{\rho_{i,k}}{2},\,i\neq j \right\}\right|\,, \tag{1}\]
and specify \(n_{y,i}\) accordingly. As a consequence, all volume-related terms cancel and the KSG estimator reduces to
\[E^{XY}_{\text{KSG},N}=\psi(k)+\psi(n)-\frac{1}{n}\sum_{i=1}^{n}\psi(n_{x,i}+1) +\psi(n_{y,i}+1)\,. \tag{2}\]
The diagamma function6\(\psi\) was already introduced by Kozachenko and Leonenko (1987) to correct for biases induced by the volume estimation. For a detailed exposition of the issue, we refer to Kraskov et al. (2004, Sec. II.1).
Footnote 6: The digamma function is defined as \(\psi(x)=\mathrm{d}\log\Gamma(x)/\mathrm{d}x\). It satisfies the recursion \(\psi(x+1)=\psi(x)+\frac{1}{x}\) with \(\psi(1)=-\gamma\), where \(\gamma\approx 0.577\) is the Euler–Mascheroni constant.
There exist various extensions of the classical KSG estimator. When selecting the \(L_{\infty}\) norm as a distance measure for \(D_{X}\) and \(D_{Y}\), \(E^{XY}_{\text{KSG},N}\) can be efficiently computed with the K-d trie method (Vejmelka and Hlavackova-Schindler, 2007). Other approaches aim to improve the estimation quality by explicitly computing the volume terms via SVD (Lord et al., 2018) or PCA (Lu and Peltonen, 2020), or use manifold learning (Marx and Fischer, 2022) or normalizing flows (Ao and Li, 2022) to estimate the nearest neighbors more efficiently in high-dimensional settings.
Kernel EstimatorsA smaller class of estimators builds upon kernel density estimation (KDE) (Moon et al., 1995; Paninski and Yajima, 2008). Classical approaches compute the density functions for the individual entropy terms via kernel density estimation, which is well-justified under the assumption of \(P_{XY}\) being absolutely continuous with respect to \(P_{X}\otimes P_{Y}\) (cf. Remark A.3), i.e., the density for a \(d\)-dimensional random vector \(X\) is approximated as
\[\hat{p}(x)=\frac{1}{Nh^{d}}\sum_{i=1}^{N}K\left(\frac{x-x_{i}}{h}\right)\,,\]
where \(h\) is the smoothing parameter or bandwidth of a Gaussian kernel (Silverman, 2018).
Moon et al. (1995) propose to estimate mutual information using a Gaussian kernel, for which the bandwidth can be estimated optimally (Silverman, 2018). Similar to histogram-based approaches, KDE-based estimators with fixed bandwidth \(h\) are consistent if \(Nh^{d}\to\infty\) as \(h\to 0\).
A more recent proposal, LNN (Gao et al., 2017), uses a data-dependent bandwidth based on nearest-neighbor distances, to avoid fixing \(h\) globally.
Neural EstimatorsNeural network approaches such as MINE (Belghazi et al., 2018) use a variational lower bound on mutual information using the Donsker-Varadhan theorem: let \(f\colon\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) be a bounded measurable function and
\[V_{f}(X,Y)=\sup_{f}\mathbb{E}_{P_{XY}}[f(X,Y)]-\log\mathbb{E}_{P_{X}\otimes P_ {Y}}[\exp f(X,Y)].\]
Under sufficient regularity conditions on \(P_{XY}\), \(\mathbf{I}(X;Y)\) is equal to \(V_{f}(X,Y)\), which is evaluated over all bounded measurable mappings \(f\). In practice, the right hand side is optimized with respect to the parameters of a given parametric predictive model \(f\), as a neural network. Approaches following this idea also count as discriminative approaches since the neural network is trained to distinguish points from the joint and marginal distribution. The literature on variational lower bounds of mutual information is growing (Oord et al., 2018; Song and Ermon, 2020) and the idea of using a discriminative approach to estimate mutual information has also been explored by Koeman and Heskes (2014), who used random forests. For an excellent theoretical overview of different neural estimators we refer to Poole et al. (2019).
Model-based EstimatorsFinally, if one assumes that \(P_{XY}\) is a member of a family of distributions \(Q_{XY}(\theta)\) with known mutual information, one can estimate \(\theta\) basing on the data \(((x_{1},y_{1}),\ldots,(x_{N},y_{N}))\) and calculate the corresponding mutual information. For example, if one assumes that there exist jointly multivariate normal variables \(\tilde{X}=(\tilde{X}_{1},\ldots,\tilde{X}_{r},E_{1},\ldots,E_{m-r})\) and \(\tilde{Y}=(\tilde{Y}_{1},\ldots,\tilde{Y}_{r},\tilde{E}_{1},\ldots,\tilde{E}_ {m-r})\) such that the cross-covariance terms are zero apart from \(\mathrm{Cov}(\tilde{X}_{i},\tilde{Y}_{i})\), and that \(X=A\tilde{X}+a\) and \(Y=B\tilde{Y}+b\) for invertible matrices \(A\) and \(B\) and vectors \(a\in\mathbb{R}^{m}\) and \(b\in\mathbb{R}^{n}\), then the mutual information can be found as
\[\mathbf{I}(X;Y)=-\sum_{i=1}^{r}\log\sqrt{1-\mathrm{Cor}(\tilde{X}_{r},\tilde{Y }_{r})^{2}}.\]
The model parameters \(A\), \(B\), \(a\), and \(b\), as well as the pairs of corresponding r.v. \(\tilde{X}_{r}\) and \(\tilde{Y}_{r}\), can be found by maximising likelihood using canonical correlation analysis (CCA) (Murphy, 2023, Ch. 28), which has been previously explored by Kay (1992).
## Appendix D Additional Information on Benchmark Tasks
### Task Descriptions
Bimodal VariablesConsider a r.v. \(X\) distributed according to a Gaussian mixture model with CDF \(F=w_{1}F_{1}+w_{2}F_{2}\), where \(F_{1}\) and \(F_{2}\) are CDFs of two Gaussian distributions and \(w_{1}\) and \(w_{2}\) are positive weights constrained as \(w_{1}+w_{2}=1\). The CDF is everywhere positive, continuous, and strictly increasing. Hence, it has an inverse, the quantile function, \(F^{-1}\colon(0,1)\to\mathbb{R}\), which is continuous and injective as well (see Lemma D.1). Thus, we can define r.v. \(X^{\prime}=F^{-1}(X)\), where \(X\sim\mathrm{Uniform}(0,1)\) and \(\mathbf{I}(X;Y)=\mathbf{I}(X^{\prime};Y)\) for every r.v. \(Y\). We generated two CDFs of Gaussian mixtures and implemented a numerical inversion in JAX (Bradbury et al., 2018) defining both \(X^{\prime}\) and \(Y^{\prime}\) to follow a bimodal distribution.
For the experimental values, we used
\[F_{X}(x) =0.3F(x)+0.7F(x-5)\] \[F_{Y}(y) =0.5F(x+1)+0.5F(x-3),\]
where \(F\) is the CDF of the standard normal distribution \(\mathcal{N}(0,1)\).
Wiggly MappingAs the "wiggly mapping" we used
\[w_{X}(x) =x+0.4\sin x+0.2\sin(1.7x+1)+0.03\sin(3.3x-2.5)\] \[w_{Y}(y) =y-0.4\sin(0.4y)+0.17\sin(1.3y+3.5)+0.02\sin(4.3y-2.5)\]
and we used visualisations to make sure that they do not heavily distort the distribution.
Additive NoiseFor the additive noise model of \(Y=X+N\) where \(X\sim\mathrm{Uniform}(0,1)\) and \(N\sim\mathrm{Uniform}(-\varepsilon,\varepsilon)\) it holds that
\[\mathbf{I}(X;Y)=\begin{cases}\varepsilon-\log(2\varepsilon)&\text{if } \varepsilon\leq 1/2\\ (4\varepsilon)^{-1}&\text{otherwise}\end{cases}\]
implying that \(\mathbf{I}(X;Y)=1.7\) for \(\varepsilon=0.1\) and \(\mathbf{I}(X;Y)=0.3\) for \(\varepsilon=0.75\).
Swiss Roll EmbeddingConsider a bivariate distribution \(P_{XY}\) with uniform margins \(X\sim\text{Uniform}(0,1)\) and \(Y\sim\text{Uniform}(0,1)\) and \(\mathbf{I}(X;Y)=0.8\). The Swiss roll mapping is an embedding \(i\colon(0,1)^{2}\to\mathbb{R}^{3}\) given by \(i(x,y)=(e(x),y)\), where \(e\colon(0,1)\to\mathbb{R}^{2}\) is given by
\[e(x)=\frac{1}{21}(t(x)\cos t(x),t(x)\sin t(x),\qquad t(x)=\frac{3\pi}{2}(1+2x).\]
Note that \(P_{X^{\prime}Y^{\prime}}\) does not have a PDF with respect to the Lebesgue measure on \(\mathbb{R}^{3}\). We visualise the Swiss roll distribution in Fig. 1.
Spiral DiffeomorphismFor the precise definition and properties of the general spiral diffeomorphism consult Appendix B.
In our benchmark we applied the spiral \(x\mapsto\exp(vA||x||^{2})x\) to both \(X\) and \(Y\) variables. We used an \(m\times m\) skew-symmetric matrix \(A_{X}\) with \((A_{X})_{12}=-(A_{X})_{21}=1\) and an \(n\times n\) skew-symmetric matrix \(A_{Y}\) with \((A_{Y})_{23}=-(A_{Y})_{32}=1\). Each of these effectively "mixes" only two dimensions.
The speed parameters \(v_{X}\) and \(v_{Y}\) were chosen as \(v_{X}=1/m\) and \(v_{Y}=1/n\) to partially compensate for the fact that if \(X\sim\mathcal{N}(0,I_{m})\), then \(||X||^{2}\sim\chi_{m}^{2}\) and has mean \(m\).
Multivariate StudentLet \(\Omega\) be an \((m+n)\times(m+n)\) positive definite dispersion matrix and \(\nu\) be a positive integer (the degrees of freedom). By sampling an \((m+n)\)-dimensional random vector \((\tilde{X},\tilde{Y})\sim\mathcal{N}(0,\Omega)\) and a random scalar \(U\sim\chi_{\nu}^{2}\) one can define rescaled variables \(X=\tilde{X}\sqrt{\nu/U}\) and \(Y=\tilde{Y}\sqrt{\nu/U}\), which are distributed according to the multivariate Student distribution. For \(\nu=1\) this reduces to the multivariate Cauchy distribution (and the first two moments are not defined), for \(\nu=2\) the mean exists, but covariance does not, and for \(\nu>2\) both the mean and covariance exist, with \(\text{Cov}(X,Y)=\frac{\nu}{\nu-2}\Omega\), so that the tail behaviour can be controlled by changing \(\nu\). In particular, for \(\nu\gg 1\) because of the concentration of measure phenomenon \(U\) has most of its probability mass around \(\nu\) and the variables \((X,Y)\) can be well approximated by \((\tilde{X},\tilde{Y})\).
Arellano-Valle et al. (2013) proved that \(\mathbf{I}(X;Y)\) can be computed via the sum of the mutual information of the Gaussian distributed basis variables and a correction term, i.e., \(\mathbf{I}(X;Y)=\mathbf{I}(\tilde{X};\tilde{Y})+c(\nu,m,n)\), where
\[c(\nu,m,n)=f\left(\nu\right)+f\left(\nu+m+n\right)-f\left(\nu+m\right)-f\left( \nu+n\right),\quad f(x)=\log\Gamma\left(\frac{x}{2}\right)-\frac{x}{2}\psi \left(\frac{x}{2}\right),\]
and \(\psi\) is the digamma function.
Contrary to the Gaussian case, even for \(\Omega=I_{m+n}\), the mutual information \(\mathbf{I}(X;Y)=c(\nu,m,n)\) is non-zero, as \(U\) provides some information about the magnitude. In the benchmark we use this dispersion matrix to quantify how well the estimators can use the information contained in the tails, rather than focusing on estimating the Gaussian term.
### Covariance Matrix Family
We generated jointly multivariate normal variables \(X=(X_{1},\ldots,X_{m})\) and \(Y=(Y_{1},\ldots,Y_{n})\) with the following procedure.
Consider i.i.d. Gaussian variables
\[U_{\text{all}},U_{X},U_{Y},Z_{1},\ldots,Z_{K},E_{1}\ldots,E_{m},F_{1},\ldots,F_{n},E^{\prime}_{m-K},\ldots,E^{\prime}_{m},F^{\prime}_{n-K},\ldots,F^{\prime }_{n}\sim\mathcal{N}(0,1).\]
Now let \(\varepsilon_{X},\varepsilon_{Y},\eta_{X},\eta_{Y},\alpha,\beta_{X},\beta_{Y}, \lambda\in\mathbb{R}\) be hyperparameters and for \(l=1,2,\ldots,K\) define
\[X_{l}=\varepsilon_{X}E_{l}+\alpha U_{\text{all}}+\beta_{X}U_{X}+\lambda Z_{l}, \quad Y_{l}=\varepsilon_{Y}F_{l}+\alpha U_{\text{all}}+\beta_{Y}U_{Y}+\lambda Z _{l},\]
for \(l=K+1,\ldots,m\) define
\[X_{l}=\varepsilon_{X}E_{l}+\alpha U_{\text{all}}+\beta_{X}U_{X}+\eta_{X}E^{ \prime}_{l}\]
and for \(l=K+1,\ldots,n\) define
\[Y_{l}=\varepsilon_{Y}F_{l}+\alpha U_{\text{all}}+\beta_{Y}U_{Y}+\eta_{Y}F^{ \prime}_{l}.\]
The interpretation of the hyperparameters is the following:1. \(\alpha\) contributes to the correlations between all the possible pairs of variables.
2. \(\beta_{X}\) contributes to the correlations between the \(X\) variables.
3. \(\beta_{Y}\) contributes to the correlations between the \(Y\) variables.
4. \(\lambda\) gives additional "interaction strength" between pairs of variables \(X_{l}\) and \(Y_{l}\) for \(l=1,\ldots,K\).
5. \(\varepsilon_{X}\) and \(\varepsilon_{Y}\) control part of the variance non-shared with any other variable.
6. \(\eta_{X}\) and \(\eta_{Y}\) can be used to increase the variance of \(X_{l}\) and \(Y_{l}\) for \(l>K\) to match the variance of variables \(l\leq K\) due to the \(\lambda\).
Every term of the covariance matrix is easy to calculate analytically:
\[\mathrm{Cov}(X_{i},X_{j})=\alpha^{2}+\beta_{X}^{2}+\mathbf{1}[i=j]\big{(} \varepsilon_{X}^{2}+\lambda^{2}\mathbf{1}[i\leq K]+\eta_{X}^{2}\mathbf{1}[i>K ]\big{)}\]
\[\mathrm{Cov}(Y_{i},Y_{j})=\alpha^{2}+\beta_{Y}^{2}+\mathbf{1}[i=j]\big{(} \varepsilon_{Y}^{2}+\lambda^{2}\mathbf{1}[i\leq K]+\eta_{Y}^{2}\mathbf{1}[i> K]\big{)}\]
\[\mathrm{Cov}(X_{i},Y_{j})=\alpha^{2}+\lambda^{2}\mathbf{1}[i=j]\mathbf{1}[i \leq K]\]
In the following, we analyse some special cases.
Dense InteractionsConsider \(\lambda=\eta_{X}=\eta_{Y}=\beta_{X}=\beta_{Y}=0\) and write \(\varepsilon:=\varepsilon_{X}=\varepsilon_{Y}\). We have
\[\mathrm{Cov}(X_{i},X_{j})=\mathrm{Cov}(Y_{i},Y_{j})=\alpha^{2}+\varepsilon^{2} \mathbf{1}[i=j]\]
\[\mathrm{Cov}(Y_{i},Y_{j})=\alpha^{2}\]
Hence, between every pair of distinct variables we have the same correlation \(\frac{\alpha^{2}}{\alpha^{2}+\varepsilon^{2}}\).
Sparse InteractionsConsider \(\alpha=0\), \(\varepsilon:=\varepsilon_{X}=\varepsilon_{Y}\), \(\eta_{X}=\eta_{Y}=\lambda\) and \(\beta_{X}=\beta_{Y}=0\)
\[\mathrm{Cov}(X_{i},X_{j})=\mathrm{Cov}(Y_{i},Y_{j})=\mathbf{1}[i=j]( \varepsilon^{2}+\lambda^{2})\]
\[\mathrm{Cov}(X_{i},Y_{j})=\lambda^{2}\mathbf{1}[i=j]\mathbf{1}[i\leq K]\]
All the variables have the same variance, but the covariance structure is now different. Between (distinct) \(X_{i}\) and \(X_{j}\) the correlation is zero and similarly for (distinct) \(Y_{i}\) and \(Y_{j}\).
Correlations between \(X_{i}\) and \(Y_{j}\) are zero unless \(i=j\leq K\), when \(\mathrm{Cor}(X_{i},Y_{i})=\lambda^{2}/(\varepsilon^{2}+\lambda^{2})\).
InterpolationIn Section 4 we discussed a two-stage interpolation from dense matrices to sparse matrices: in the first stage \(\alpha\) is decreased increasing \(\lambda\) at the same time to keep the mutual information constant. In the second stage we decrease the number of interacting variables \(K\) increasing \(\lambda\). The selected covariance matrices from this two-stage process are visualised in Fig. 7.
### Technical Results
In this subsection we add the technical results, related to the copula theory (Nelsen, 2007) and uniformization of random variables.
**Lemma D.1**.: _Let \(X\) be a real-valued r.v. with a strictly increasing, positive everywhere, and continuous CDF \(F\). Then_
\[F\colon\mathbb{R}\to(0,1)\]
_is a homeomorphism._positive everywhere and \(1\) is excluded as \(F\) is strictly increasing. Hence, \(F(\mathbb{R})\subseteq(0,1)\). Using continuity, strict monotonicity, and
\[\lim_{x\to-\infty}F(x)=0,\quad\lim_{x\to+\infty}F(x)=1\]
we note that \(F\) is a bijection onto \((0,1)\). Analogously, we prove that
\[F((a,b))=(F(a),F(b))\]
for all \(a<b\), so \(F\) is an open map. As a continuous bijective open map \(F\) is a homeomorphism.
**Lemma D.2**.: _Let \(X\) be a real-valued r.v. with a CDF \(F\) which is continuous, strictly increasing and positive everywhere. Then, \(F(X)\) is uniformly distributed on \((0,1)\)._
Proof.: Our assumptions on \(F\) imply that it is a continuous homeomorphism \(\mathbb{R}\to(0,1)\).
Let now \(y\in(0,1)\). We have
\[P(F(X)\leq y)=P(X\leq F^{-1}(y))=F\left(F^{-1}(y)\right)=y.\]
## Appendix E Additional Experiments
### Pointwise Mutual Information
To understand why neural estimators are not able to estimate the mutual information contained in the Student distribution, we decided to qualitatively study the learned critic.
In the NWJ approach (Nguyen et al., 2007), for \(N\to\infty\) data points the optimal critic \(f(x,y)\) is given by
\[f(x,y)=1+\mathrm{PMI}(x,y),\]
where
\[\mathrm{PMI}(x,y)=\log\frac{p_{XY}(x,y)}{p_{X}(x)p_{Y}(y)}\]
is the pointwise mutual information (Pinsker and Feinstein, 1964, Ch. 2) and \(p_{XY}\), \(p_{X}\) and \(p_{Y}\) are the PDFs of measures \(P_{XY}\), \(P_{X}\) and \(P_{Y}\), respectively.
Figure 7: Interpolation between dense and sparse covariance matrices.
We trained NWJ estimator with the default hyperparameters on \(N=5\,000\) data points and visualised the critic predictions (with \(1\) subtracted), true PMI, and \(500\) samples from the distribution in Fig. 8. The data is drawn from a bivariate Student distribution with dispersion matrix
\[\Omega=\begin{pmatrix}1&0.5\\ 0.5&1\end{pmatrix}\]
and varying degrees of freedom \(\nu\in\{1,2,3,5,8\}\). Additionally, we repeated the experiment for the bivariate normal distribution with covariance matrix given by the dispersion, which corresponds to the limit \(\nu\to\infty\).
Qualitatively we see that the critic does not learn the correct PMI for \(\nu\in\{1,2\}\), with better similarities for \(\nu\in\{3,5,8\}\). For the normal distribution, the critic qualitatively approximates well the PMI function in the regions where the data points are available.
We hypothesise that these problems can be partly attributed to the longer tails of Student distributions (as the critic may be unable to extrapolate to the regions where few points are available) and partly to the analytical form of the PMI function -- for the considered bivariate normal distribution the PMI is given by a simple quadratic function, which is not the case for the bivariate Student distribution.
### Sample Size Requirements
We investigated the minimal number of samples needed to estimate the mutual information changing the number of samples in \(N\in\{100,500,1000,3000,5000,10\,000\}\). We say that sample \(\ell\) suffices
Figure 8: Qualitative comparison between ground-truth pointwise MI and learned critic for bivariate Student and normal distributions.
Figure 9: Minimal number of samples needed for the mutual information estimate to fall between \(2/3\) and \(3/2\) of the true MI. The minimal number of samples tested is \(100\), the maximal \(10\,000\). The entry \(>10k\) means that the desired accuracy was never achieved. For neural estimators \(1\,000\) is the minimal possible number of samples, since we use a 50% train-test split and a batch size of \(256\).
to solve the task if for every \(\ell^{\prime}\geq\ell\) the estimator provides the estimate between 2/3 and 3/2 of the ground-truth mutual information.
We present the minimal number of samples \(\ell\) as a function of benchmark task and considered estimator in Fig. 9. Note that CCA and KSG typically need very small amount of samples to solve the task. As neural estimators use batch size of 256 and split the sample into training and test data sets, it is not possible for them to have \(\ell<1000\), which is the smallest number of samples considered greater than \(2\cdot 256=512\).
### Preprocessing
In Fig. 2 and Fig. 9 we transformed all samples by removing the mean and scaling by the standard deviation in each dimension calculated empirically from the sample. This strategy ensures that the scales on different axes are similar. Note that this strategy is not always suitable (e.g., if the distance metric used in the \(k\)NN approach should weight select dimensions differently) and makes the observed data points not i.i.d. (as the "shared" information, sample mean and variances, has been used to rescale the variables), although the effect of it seems to be negligible on most tasks in the benchmark.
Additionally, we considered two other preprocessing strategies, implemented using the scikit-learn package (Pedregosa et al., 2011): making each marginal distribution uniform \(\operatorname{Uniform}(0,1)\) by calculating empirical CDF along each axis (see Section D for the technical lemmata discussing this operation) and making each marginal distribution a Gaussian variable \(\mathcal{N}(0,1)\). These approaches may affect the effective sample size more strongly than the previous rescaling strategy, as estimating empirical CDF requires more data points than estimating mean and standard deviation.
We used \(N=10\,000\) data points and rerun the benchmark (see Fig. 10). Overall, the impact of the "uniformization" strategy appeared to be insubstantial for most tasks, and in certain cases negative. The "Gaussianisation" strategy improved estimation for Student distributions and helped histogram based approaches in low-dimensional tasks. Interestingly, both approaches worsen the performance of CCA on Student distributions.
While the improvements gained from "Gaussianisation" were minor, the approach is simplistic, treating each axis separately. Ideally, appropriate preprocessing would be able to transform \(P_{XY}\) into a better-behaved distribution \(P_{f(X)g(Y)}\), and thus simplify the estimation. In certain cases such preprocessing could (partially) undo transformations considered in our benchmark, and in effect make the estimators invariant (or more robust) to these transformations (cf. Section 4). We consider the design of informed preprocessing strategies an important open problem.
### Hyperparameter Selection
CCAWe used the implementation provided in the scikit-learn(Pedregosa et al., 2011) Python package (version 1.2.2). We set the latent space dimension to the smaller of the dimensions of the considered random vectors \(X\) and \(Y\).
HistogramsAdaptive estimators such as JIC (Suzuki, 2016) or MIIC (Cabeli et al., 2020) were not applicable to the high-dimensional data we considered, hence we resort to using an equal-width estimator implemented in TransferEntropy library (version 1.10.0). We fixed the number of bins to \(10\), although the optimal number of bins is likely to depend on the dimensionality of the problem.
KSGAs a default, we used the KSG-1 variant from the rmi package with 10 neighbors. However, as shown in our ablation in Fig. 11, changing the number of neighbors or using the KSG-2 variant did not considerably change the estimates. We compared the used implementation with a custom Python implementation and one from TransferEntropy library. The version implemented in the TransferEntropy suffers from strong positive bias.
LnnWe used the implementation from the rmi package (version 0.1.1) with \(5\) neighbors and truncation parameter set to \(30\). Ablations can be seen in Fig. 11.
We chose the architecture using a mini-benchmark with four ReLU neural networks considered on two neural estimators (see Fig. 11) and chose a neural network with two hidden layers (network M in Table 2).
As neural estimators rely on stochastic gradient descent training, we implemented heuristic diagnostics used to diagnose potential problems with overfitting and non-convergence. We present the number of runs susceptible to these issues in Fig. 12. We did not include these runs when computing statistics in the main benchmark figures (Fig. 2 and Fig. 9).
Figure 10: Estimates for preprocessed data with \(N=10\,000\) samples. Top: margins transformed into uniform distributions. Bottom: margins transformed into normal distributions.
## Appendix F Visualisation of Selected Tasks
In the following, we visualise a selection of benchmark tasks. Each figure visualises a sample of \(N=10\,000\) points from \(P_{XY}\). The upper diagonal represents the scatter plots \(P_{X_{i}X_{j}}\), \(P_{X_{i}Y_{j}}\) and \(P_{Y_{i}Y_{j}}\) for all possible combinations of \(i\neq j\). The lower diagonal represents the empirical density fitted using a kernel density estimator with 5 levels. The diagonal represents the histograms of the marginal distributions \(P_{X_{i}}\) and \(P_{Y_{i}}\) for all possible \(i\).
We present selected \(1\times 1\)-dimensional tasks in Fig. 13 and Fig. 14; the \(2\times 1\)-dimensional Swiss-roll distribution in Fig. 15 and selected \(2\times 2\)-dimensional tasks in Fig. 16. Finally, in Fig. 17 we present two \(3\times 3\)-dimensional problems, visualising the effect of the normal CDF and the spiral diffeomorphism.
## Appendix G Author Contributions
Contributions according to the Contributor Roles Taxonomy:
1. Conceptualization: PC, FG, AM, NB.
2. Methodology: FG, PC.
3. Software: PC, FG.
4. Validation: FG, PC.
5. Formal analysis: PC, FG.
6. Investigation: FG, PC, AM.
Figure 12: Number of runs (out of 10) when our convergence and overfitting diagnostics diagnosed potential problems in neural estimator training.
Figure 13: Top row: additive noise model. Bottom row: bivariate normal distribution and transformation making each variable a bimodal mixture of Gaussians.
7. Resources: FG, NB.
8. Data Curation: FG, PC.
9. Visualization: FG, PC.
10. Supervision: AM, NB.
11. Writing - original draft: PC, AM, FG.
12. Writing - review and editing: PC, AM, FG, NB, JEV.
Figure 14: Top row: bivariate normal distribution transformed so that the margins are uniform and \(\mathrm{asinh}\)-transformed bivariate Student distribution with 1 degree of freedom. Bottom row: bivariate normal distribution transformed with the half-cube and the wiggly mappings.
Figure 15: Visualisation of the Swiss-roll distribution.
Figure 16: Visualisations of selected \(2\times 2\)-dimensional problems. Top row: multivariate normal distributions with dense and sparse interactions. Bottom row: multivariate Student distribution with 2 degrees of freedom.
Figure 17: Visualisations of selected \(3\times 3\)-dimensional problems. Top: multivariate normal distribution transformed axis-wise with normal CDF, to uniformise margins. Bottom: multivariate normal distribution transformed with the spiral diffeomorphism. | ## Review
### Summary
This paper addresses the significant problem of estimating mutual information (MI) between two variables, introducing a benchmark for MI estimators that incorporates various phenomena such as dimensionality, sparsity, and heavy tails. The authors propose a diverse family of distributions with known ground-truth mutual information and present a benchmarking platform that evaluates classical and neural estimators across high-dimensional and challenging settings. The paper emphasizes the importance of selecting appropriate estimators based on specific problem characteristics and provides practical guidelines for practitioners. Overall, it presents a comprehensive framework for advancing the understanding and application of mutual information estimation in various domains.
### Strengths
- The paper introduces a method to construct a diverse family of distributions with known ground-truth mutual information, which is a significant contribution.
- It provides a comprehensive evaluation framework for mutual information estimation, covering diverse distributions and practical guidelines.
- The code is reproducible, enabling future use for evaluating other estimators.
- The analysis is thorough, presenting forty distinct MI estimation tasks and identifying four distinct challenges for MI estimators.
- The paper is well-written, clear, and easy to understand, with a logical structure and effective use of figures.
### Weaknesses
- Not all joint distributions can be represented in the proposed form, limiting the benchmark's applicability.
- The choice of distributions is not sufficiently justified, and it seems to rely on educated guesses rather than rigorous theoretical backing.
- The paper lacks external validation or comparisons with existing methods, making it difficult to assess the generalizability of the contributions.
- The main contributions may be seen as relatively minor, primarily extending the data settings considered in previous work.
- The theoretical result presented may not be novel as it has been demonstrated in prior literature.
### Questions
- What was the reason for choosing this specific set of distributions?
- Could modern invariant neural network architectures be used to obtain MI estimates invariant to diffeomorphisms?
- Why not utilize the result of Theorem 2.1 and apply it to Normalizing Flows?
- How is Theorem 2.1 different from that in the appendix of the referenced paper?
### Soundness
**Score:** 3
**Description:** 3 = good: The theoretical and experimental foundations are solid, but some aspects lack thorough justification or innovation.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-structured and written clearly, though some figures and explanations could be improved for clarity.
### Contribution
**Score:** 3
**Description:** 3 = good: The paper addresses important issues in MI estimation and introduces new benchmarks, though the originality of contributions is somewhat limited.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact potential, but it has some areas needing improvement.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a meaningful contribution to the field of mutual information estimation. It identifies key challenges and provides a novel benchmarking framework. Despite some weaknesses in justification and originality, the overall quality and clarity of the presentation warrant acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# General Munchausen Reinforcement Learning with Tsallis Kullback-Leibler Divergence
Lingwei Zhu
University of Alberta
[email protected] &Zheng Chen
Osaka University
[email protected] &Matthew Schlegel
University of Alberta
[email protected] &Martha White
University of Alberta
CIFAR Canada AI Chair, Amii
[email protected]
###### Abstract
Many policy optimization approaches in reinforcement learning incorporate a Kullback-Leibler (KL) divergence to the previous policy, to prevent the policy from changing too quickly. This idea was initially proposed in a seminal paper on Conservative Policy Iteration, with approximations given by algorithms like TRPO and Munchausen Value Iteration (MVI). We continue this line of work by investigating a generalized KL divergence--called the Tsallis KL divergence. Tsallis KL defined by the \(q\)-logarithm is a strict generalization, as \(q=1\) corresponds to the standard KL divergence; \(q>1\) provides a range of new options. We characterize the types of policies learned under the Tsallis KL, and motivate when \(q>1\) could be beneficial. To obtain a practical algorithm that incorporates Tsallis KL regularization, we extend MVI, which is one of the simplest approaches to incorporate KL regularization. We show that this generalized MVI(\(q\)) obtains significant improvements over the standard MVI(\(q=1\)) across 35 Atari games.
## 1 Introduction
There is ample theoretical evidence that it is useful to incorporate KL regularization into policy optimization in reinforcement learning. The most basic approach is to regularize towards a uniform policy, resulting in entropy regularization. More effective, however, is to regularize towards the previous policy. By choosing KL regularization between consecutively updated policies, the optimal policy becomes a softmax over a uniform average of the full history of action value estimates (Vieillard et al., 2020). This averaging smooths out noise, allowing for better theoretical results (Azar et al., 2012; Kozuno et al., 2019; Vieillard et al., 2020; Kozuno et al., 2022).
Despite these theoretical benefits, there are some issues with using KL regularization in practice. It is well-known that the uniform average is susceptible to outliers; this issue is inherent to KL divergence (Futami et al., 2018). In practice, heuristics such as assigning vanishing regularization coefficients to some estimates have been implemented widely to increase robustness and accelerate learning (Grau-Moya et al., 2019; Haarnoja et al., 2018; Kitamura et al., 2021). However, theoretical guarantees no longer hold for those heuristics (Vieillard et al., 2020; Kozuno et al., 2022). A natural question is what alternatives we can consider to this KL divergence regularization, that allows us to overcome some of these disadvantages while maintaining the benefits associate with restricting aggressive policy changes and smoothing errors.
In this work, we explore one possible direction by generalizing to Tsallis KL divergences. Tsallis KL divergences were introduced for physics (Tsallis, 1988, 2009) using a simple idea: replacing the use of the logarithm with the deformed \(q\)-logarithm. The implications for policy optimization, however, are that we get quite a different form for the resulting policy. Tsallis _entropy_ with \(q=2\) has actually already been considered for policy optimization (Chow et al., 2018; Lee et al., 2018), by replacing Shannon entropy with Tsallis entropy to maintain stochasticity in the policy. The resulting policies are called _sparsemax_ policies, because they concentrate the probability on higher-valued actions and truncate the probability to zero for lower-valued actions. Intuitively, this should have the benefit of maintaining stochasticity, but only amongst the most promising actions, unlike the Boltzmann policy which maintains nonzero probability on all actions. Unfortunately, using only Tsallis entropy did not provide significant benefits, and in fact often performed worse than existing methods. We find, however, that using a Tsallis KL divergence to the previous policy does provide notable gains.
We first show how to incorporate Tsallis KL regularization into the standard value iteration updates, and prove that we maintain convergence under this generalization from KL regularization to Tsallis KL regularization. We then characterize the types of policies learned under Tsallis KL, highlighting that there is now a more complex relationship to past action-values than a simple uniform average. We then show how to extend Munchausen Value Iteration (MVI) (Vieillard et al., 2020), to use Tsallis KL regularization, which we call MVI(\(q\)). We use this naming convention to highlight that this is a strict generalization of MVI: by setting \(q=1\), we exactly recover MVI. We then compare MVI(\(q=2\)) with MVI (namely the standard choice where \(q=1\)), and find that we obtain significant performance improvements in Atari.
**Remark:** There is a growing body of literature studying generalizations of KL divergence in RL (Nachum et al., 2019; Zhang et al., 2020). Futami et al. (2018) discussed the inherent drawback of KL divergence in generative modeling and proposed to use \(\beta\)- and \(\gamma\)-divergence to allow for weighted average of sample contribution. These divergences fall under the category known as the \(f\)-divergence (Sason and Verdu, 2016), commonly used in other machine learning domains including generative modeling (Nowozin et al., 2016; Wan et al., 2020; Yu et al., 2020) and imitation learning (Ghasemipour et al., 2019; Ke et al., 2019). In RL, Wang et al. (2018) discussed using tail adaptive \(f\)-divergence to enforce the mass-covering property. Belousov and Peters (2019) discussed the use of \(\alpha\)-divergence. Tsallis KL divergence, however, has not yet been studied in RL.
## 2 Problem Setting
We focus on discrete-time discounted Markov Decision Processes (MDPs) expressed by the tuple \((\mathcal{S},\mathcal{A},d,P,r,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) denote state space and finite action space, respectively. Let \(\Delta(\mathcal{X})\) denote the set of probability distributions over \(\mathcal{X}\). \(d\in\Delta(\mathcal{S})\) denotes the initial state distribution. \(P:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) denotes the transition probability function, and \(r(s,a)\) defines the reward associated with that transition. \(\gamma\in(0,1)\) is the discount factor. A policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) is a mapping from the state space to distributions over actions. We define the action value function following policy \(\pi\) and starting from \(s_{0}\sim d(\cdot)\) with action \(a_{0}\) taken as \(Q_{\pi}(s,a)=\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}|s_{0}=s,a_{0}=a\right]\). A standard approach to find the optimal value function \(Q_{*}\) is value iteration. To define the formulas for value iteration, it will be convenient to write the action value function as a matrix \(Q_{\pi}\!\in\!\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\). For notational convenience, we define the inner product for any two functions \(F_{1},F_{2}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\) over actions as \(\langle F_{1},F_{2}\rangle\in\mathbb{R}^{|\mathcal{S}|}\).
We are interested in the entropy-regularized MDPs where the recursion is augmented with \(\Omega(\pi)\):
\[\begin{cases}\pi_{k+1}=\arg\max_{\pi}\!\left(\langle\pi,Q_{k}\rangle-\tau \Omega(\pi)\right),\\ Q_{k+1}=r+\gamma P(\langle\pi_{k+1},Q_{k}\rangle-\tau\Omega(\pi_{k+1}))\end{cases} \tag{1}\]
This modified recursion is guaranteed to converge if \(\Omega\) is concave in \(\pi\). For standard (Shannon) entropy regularization, we use \(\Omega(\pi)=-\mathcal{H}\left(\pi\right)=\langle\pi,\ln\pi\rangle\). The resulting optimal policy has \(\pi_{k+1}\propto\exp\left(\tau^{-1}Q_{k}\right)\), where \(\propto\) indicates _proportional to_ up to a constant not depending on actions.
More generally, we can consider a broad class of regularizers known as \(f\)-divergences (Sason and Verdu, 2016): \(\Omega(\pi)=D_{f}(\pi||\mu):=\langle\mu,f\left(\pi/\mu\right)\rangle\), where \(f\) is a convex function. For example, the KL divergence \(D_{\text{KL}}(\pi\,|\,\mu)=\langle\pi,\ln\pi-\ln\mu\rangle\) can be recovered by \(f(t)=-\ln t\). In this work, when we say KL regularization, we mean the standard choice of setting \(\mu=\pi_{k}\), the estimate from the previous update. Therefore, \(D_{\text{KL}}\) serves as a penalty to penalize aggressive policy changes. The optimal policy in this case takes the form \(\pi_{k+1}\propto\pi_{k}\exp\left(\tau^{-1}Q_{k}\right)\). By induction, we can show this KL-regularized optimal policy \(\pi_{k+1}\) is a softmax over a uniform average over the history of action value estimates (Vieillard et al., 2020): \(\pi_{k+1}\propto\pi_{k}\exp\left(\tau^{-1}Q_{k}\right)\propto\cdots\propto\exp \left(\tau^{-1}\sum_{j=1}^{k}Q_{j}\right)\). Using KL regularization has been shown to be theoretically superior to entropy regularization in terms of error tolerance (Azar et al., 2012; Vieillard et al., 2020; Kozuno et al., 2022; Chan et al., 2022).
The definitions of \(\mathcal{H}(\cdot)\) and \(D_{\text{KL}}(\cdot||\cdot)\) rely on the standard logarithm and both induce softmax policies as an exponential (inverse function) over (weighted) action-values (Hiriart-Urruty and Lemarechal, 2004; Nachum and Dai, 2020). Convergence properties of the resulting regularized algorithms have been well studied (Kozuno et al., 2019; Geist et al., 2019; Vieillard et al., 2020). In this paper, we investigate Tsallis entropy and Tsallis KL divergence as the regularizer, which generalize Shannon entropy and KL divergence respectively.
## 3 Generalizing to Tsallis Regularization
We can easily incorporate other regularizers in to the value iteration recursion, and maintain convergence as long as those regularizers are strongly convex in \(\pi\). We characterize the types of policies that arise from using this regularizer, and prove the convergence of resulting regularized recursion.
### Tsallis Entropy Regularization
Tsallis entropy was first proposed by Tsallis (1988) and is defined by the \(q\)-logarithm. The \(q\)-logarithm and its unique inverse function, the \(q\)-exponential, are defined as:
\[\ln_{q}x:=\frac{x^{1-q}-1}{1-q},\quad\exp_{q}x:=\left[1+(1-q)x\right]_{+}^{ \frac{1}{1-q}},\quad\text{for }q\in\mathbb{R}\backslash\{1\} \tag{2}\]
where \([\cdot]_{+}:=\max\{\cdot,0\}\). We define \(\ln_{1}=\ln\,,\exp_{1}=\exp\), as in the limit \(q\to 1\), the formulas in Eq. (2) approach these functions. Tsallis entropy can be defined by \(S_{q}(\pi):=p\left(-\pi^{q},\ln_{q}\pi\right),p\in\mathbb{R}\)(Suyari and Tsukada, 2005). We visualize the \(q\)-logarithm, \(q\)-exponential and Tsallis entropy for different \(q\) in Figure 1. As \(q\) gets larger, \(q\)-logarithm (and hence Tsallis entropy) becomes more flat and \(q\)-exponential more steep1. Note that \(\exp_{q}\) is only invertible for \(x>\frac{-1}{1-q}\).
Footnote 1: The \(q\)-logarithm defined here is consistent with the physics literature and different from prior RL works (Lee et al., 2020), where a change of variable \(q^{*}=2-q\) is made. We analyze both cases in Appendix A.
Tsallis policies have a similar form to softmax, but using the \(q\)-exponential instead. Let us provide some intuition for these policies. When \(p=\frac{1}{2},q=2\), \(S_{2}(\pi)=\frac{1}{2}\left\langle\pi,1-\pi\right\rangle\), the optimization problem \(\arg\max_{\pi\in\Delta(\mathcal{A})}\left\langle\pi,Q\right\rangle+S_{2}(\pi) =\arg\min_{\pi\in\Delta(\mathcal{A})}\left\|\pi-Q\right\|_{2}^{2}\) is known to be the Euclidean projection onto the probability simplex. Its solution \(\left[Q-\psi\right]_{+}\) is called the sparsemax (Martins and Astudillo, 2016; Lee et al., 2018) and has sparse support (Duchi et al., 2008; Condat, 2016; Blondel et al., 2020). \(\psi:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is the unique function satisfying \(\left\langle\mathbf{1},\left[Q-\psi\right]_{+}\right\rangle=1\).
As our first result, we unify the Tsallis entropy regularized policies for all \(q\in\mathbb{R}_{+}\) with the \(q\)-exponential, and show that \(q\) and \(\tau\) are interchangeable for controlling the truncation.
Figure 1: \(\ln_{q}x\), \(\exp_{q}x\) and Tsallis entropy component \(-\pi^{q}\ln_{q}\pi\) for \(q=1\) to \(5\). When \(q=1\) they respectively recover their standard counterpart. \(\pi\) is chosen to be Gaussian \(\mathcal{N}(2,1)\). As \(q\) gets larger \(\ln_{q}x\) (and hence Tsallis entropy) becomes more flat and \(\exp_{q}x\) more steep.
**Theorem 1**.: _Let \(\Omega(\pi)=-S_{q}(\pi)\) in Eq. (1). Then the regularized optimal policies can be expressed:_
\[\pi(a|s)=\sqrt[1-q]{\left[\frac{Q(s,a)}{\tau}-\tilde{\psi}_{q}\left(\frac{Q(s, \cdot)}{\tau}\right)\right]_{+}(1-q)}=\exp_{q}\left(\frac{Q(s,a)}{\tau}-\psi_{ q}\left(\frac{Q(s,\cdot)}{\tau}\right)\right) \tag{3}\]
_where \(\psi_{q}=\tilde{\psi}_{q}+\frac{1}{1-q}\). Additionally, for an arbitrary \((q,\tau)\) pair with \(q>1\), the same truncation effect (support) can be achieved using \((q=2,\frac{\tau}{q-1})\)._
Proof.: See Appendix B for the full proof.
Theorem 1 characterizes the role played by \(q\): controlling the degree of truncation. We show the truncation effect when \(q=2\) and \(q=50\) in Figure 2, confirming that Tsallis policies tend to truncate more as \(q\) gets larger. The theorem also highlights that we can set \(q=2\) and still get more or less truncation using different \(\tau\), helping to explain why in our experiments \(q=2\) is a generally effective choice.
Unfortunately, the threshold \(\tilde{\psi}_{q}\) (and \(\psi_{q}\)) does not have a closed-form solution for \(q\neq 1,2,\infty\). Note that \(q=1\) corresponds to Shannon entropy and \(q=\infty\) to no regularization. However, we can resort to Taylor's expansion to obtain _approximate sparsemax policies_.
**Theorem 2**.: _For \(q\neq 1,\infty\), we can obtain approximate threshold \(\hat{\psi}_{q}\approx\psi_{q}\) using Taylor's expansion, and therefore an approximate policy:_
\[\hat{\pi}(a|s)\propto\exp_{q}\left(\frac{Q(s,a)}{\tau}-\hat{\psi}_{q}\left( \frac{Q(s,\cdot)}{\tau}\right)\right),\ \hat{\psi}_{q}\left(\frac{Q(s,\cdot)}{ \tau}\right)\doteq\frac{\sum_{a\in K(s)}\frac{Q(s,\cdot)}{\tau}-1}{|K(s)|}+1. \tag{4}\]
\(K(s)\) _is the set of highest-valued actions, satisfying the relation \(1+i\frac{Q(s,a_{(i)})}{\tau}>\sum_{j=1}^{i}\frac{Q(s,a_{(j)})}{\tau}\), where \(a_{(j)}\) indicates the action with \(j\)th largest action value. The sparsemax policy sets the probabilities of lowest-valued actions to zero: \(\pi(a_{(i)}|s)=0,i=z+1,\ldots,|\mathcal{A}|\) where \(Q(s,a_{(z)})>\tau^{-1}\hat{\psi}_{q}\left(\frac{Q(s,\cdot)}{\tau}\right)>Q(s,a _{(z+1)})\). When \(q=2\), \(\hat{\psi}_{q}\) recovers \(\psi_{q}\)._
Proof.: See Appendix B for the full proof.
Lee et al. (2020) also used \(\exp_{q}\) to represent policies but they consider the continuous action setting and do not give any computable threshold. By contrast, Theorem 2 presents an easily computable \(\hat{\psi}_{q}\) for all \(q\notin\{1,\infty\}\).
### Tsallis KL Regularization and Convergence Results
The Tsallis KL divergence is defined as \(D^{q}_{KL}(\pi\,\|\,\mu):=\left\langle\pi,-\ln_{q}\frac{\mu}{\pi}\right\rangle\)(Furuichi et al., 2004). It is a member of \(f\)-divergence and can be recovered by choosing \(f(t)=-\ln_{q}t\). As a divergence penalty,
Figure 2: (Left) Tsallis KL component \(-\pi_{1}\ln_{q}\frac{\pi_{2}}{\pi_{1}}\) between two Gaussian policies \(\pi_{1}=\mathcal{N}(2.75,1),\pi_{2}=\mathcal{N}(3.25,1)\) for \(q=1\) to \(5\). When \(q=1\) TKL recovers KL. For \(q>1\), TKL is more mode-covering than KL. (Mid) The sparsemax operator acting on a Boltzmann policy when \(q=2\). (Right) The sparsemax when \(q=50\). Truncation gets stronger as \(q\) gets larger. The same effect can be also controlled by \(\tau\).
it is required that \(q>0\) since \(f(t)\) should be convex. We further assume that \(q>1\) to align with standard divergences; i.e. penalize large value of \(\frac{\pi}{\mu}\), since for \(0<q<1\) the regularization would penalize \(\frac{\mu}{\mu}\) instead. In practice, we find that \(0<q<1\) tend to perform poorly. In contrast to KL, Tsallis KL is more _mass-covering_; i.e. its value is proportional to the \(q\)-th power of the ratio \(\frac{\pi}{\mu}\). When \(q\) is big, large values of \(\frac{\pi}{\mu}\) are strongly penalized (Wang et al., 2018). This behavior of Tsallis KL divergence can also be found in other well-known divergences: the \(\alpha\)-divergence (Wang et al., 2018; Belousov and Peters, 2019) coincides with Tsallis KL when \(\alpha=2\); Renyi's divergence also penalizes large policy ratio by raising it to the power \(q\), but inside the logarithm, which is therefore an additive extension of KL (Li and Turner, 2016). In the limit of \(q\to 1\), Tsallis entropy recovers Shannon entropy and the Tsallis KL divergence recovers the KL divergence. We plot the Tsallis KL divergence behavior in Figure 2.
Now let us turn to formalizing when value iteration under Tsallis regularization converges. The \(q\)-logarithm has the following properties: _Convexity_: \(\ln_{q}\pi\) is convex for \(q\leq 0\), concave for \(q>0\). When \(q=0\), both \(\ln_{q},\exp_{q}\) become linear. _Monotonicity_: \(\ln_{q}\pi\) is monotonically increasing with respect to \(\pi\). These two properties can be simply verified by checking the first and second order derivative. We prove in Appendix A the following similarity between Shannon entropy (reps. KL) and Tsallis entropy (resp. Tsallis KL). _Bounded entropy_: we have \(0\leq\mathcal{H}\left(\pi\right)\leq\ln\left|\mathcal{A}\right|\); and \(\forall q,\,0\leq S_{q}(\pi)\leq\ln_{q}\left|\mathcal{A}\right|\). _Generalized KL property_: \(\forall q\), \(D^{q}_{KL}(\pi\,|\,\mu)\geq 0\). \(D^{q}_{KL}(\pi\,|\,\mu)=0\) if and only if \(\pi=\mu\) almost everywhere, and \(D^{q}_{KL}(\pi\,|\,\mu)\rightarrow\infty\) whenever \(\pi(a|s)>0\) and \(\mu(a|s)=0\).
However, despite their similarity, a crucial difference is that \(\ln_{q}\) is non-extensive, which means it is not additive (Tsallis, 1988). In fact, \(\ln_{q}\) is only _pseudo-additive_:
\[\ln_{q}\pi\mu=\ln_{q}\pi+\ln_{q}\mu+(1-q)\ln_{q}\pi\ln_{q}\mu. \tag{5}\]
Pseudo-additivity complicates obtaining convergence results for Eq. (1) with \(q\)-logarithm regularizers, since the techniques used for Shannon entropy and KL divergence are generally not applicable to their \(\ln_{q}\) counterparts. Moreover, deriving the optimal policy may be nontrivial. Convergence results have only been established for Tsallis entropy (Lee et al., 2018; Chow et al., 2018).
We know that Eq. (1) with \(\Omega(\pi)=D^{q}_{KL}(\pi\,|\,\mu)\), for any \(\mu\), converges for \(q\) that make \(D^{q}_{KL}(\pi\,|\,\mu)\) strictly convex (Geist et al., 2019). When \(q=2\), it is strongly convex, and so also strictly convex, guaranteeing convergence.
**Theorem 3**.: _The regularized recursion Eq. (1) with \(\Omega(\pi)=D^{q}_{KL}(\pi\,|\,\cdot)\) when \(q=2\) converges to the unique regularized optimal policy._
Proof.: See Appendix C. It simply involves proving that this regularizer is strongly convex.
### TKL Regularized Policies Do More Than Averaging
We next show that the optimal regularized policy under Tsallis KL regularization does more than uniform averaging. It can be seen as performing a weighted average where the degree of weighting is controlled by \(q\). Consider the recursion
\[\begin{cases}\pi_{k+1}=\arg\max_{\pi}\langle\pi,Q_{k}-D^{q}_{KL}(\pi\,|\,\pi_ {k})\rangle\,,\\ Q_{k+1}=r+\gamma P(\pi_{k+1},Q_{k}-D^{q}_{KL}(\pi_{k+1}||\pi_{k})\rangle\,, \end{cases} \tag{6}\]
where we dropped the regularization coefficient \(\tau\) for convenience.
**Theorem 4**.: _The greedy policy \(\pi_{k+1}\) in Equation (6) satisfies_
\[\pi_{k+1}\propto\left(\exp_{q}Q_{1}\cdots\exp_{q}Q_{k}\right)=\left[\exp_{q} \!\left(\sum_{j=1}^{k}Q_{j}\right)\!\!+\!\sum_{j=2}^{k}\!\!\!(q-1)^{j}\!\!\! \sum_{i_{1}=1<\cdots<i_{j}}^{k}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!i.e., Tsallis KL regularized policies average over the history of value estimates as well as computing the interaction between them \(\sum_{j=2}^{k}\sum_{i_{1}<\dots<i_{j}}^{k}Q_{i_{1}}\dots Q_{i_{j}}\).
Proof.: See Appendix D for the full proof. The proof comprises two parts: the first part shows \(\pi_{k+1}\propto\exp_{q}Q_{1}\dots\exp_{q}Q_{k}\), and the second part establishes the _more-than-averaging_ property by two-point equation (Yamano, 2002) and the \(2-q\) duality (Naudts, 2002; Suyari and Tsukada, 2005) to conclude \(\left(\exp_{q}x\cdot\exp_{q}y\right)^{q-1}=\exp_{q}(x+y)^{q-1}+(q-1)^{2}xy\).
The form of this policy is harder to intuit, but we can try to understand each component. The first component actually corresponds to a weighted averaging by the property of the \(\exp_{q}\):
\[\exp_{q}\!\left(\sum_{i=1}^{k}Q_{i}\right)\!\!=\exp_{q}Q_{1}\exp_{q}\left( \frac{Q_{2}}{1+(1-q)Q_{1}}\right)\dots\exp_{q}\!\left(\!\frac{Q_{k}}{1+(1-q) \sum_{i=1}^{k-1}Q_{i}}\!\right). \tag{8}\]
Eq. (8) is a possible way to expand the summation: the left-hand side of the equation is what one might expect from conventional KL regularization; while the right-hand side shows a weighted scheme such that any estimate \(Q_{j}\) is weighted by the summation of estimates before \(Q_{j}\) times \(1-q\) (Note that we can exchange 1 and \(q\), see Appendix A). Weighting down numerator by the sum of components in the demoninator has been analyzed before in the literature of weighted average by robust divergences, e.g., the \(\gamma\)-divergence (Futami et al., 2018, Table 1). Therefore, we conjecture this functional form helps weighting down the magnitude of excessively large \(Q_{k}\), which can also be controlled by choosing \(q\). In fact, obtaining a weighted average has been an important topic in RL, where many proposed heuristics coincide with weighted averaging (Grau-Moya et al., 2019; Haarnoja et al., 2018; Kitamura et al., 2021).
Now let us consider the second term with \(q=2\), therefore the leading \((q-1)^{j}\) vanishes. The action-value cross-product term can be intuitively understood as further increasing the probability for any actions that have had consistently larger values across iterations. This observation agrees with the mode-covering property of Tsallis KL. However, there is no concrete evidence yet how the average inside \(q\)-exponential and the cross-product action values may work jointly to benefit the policy, and their benefits may depend on the task and environments, requiring further categorization and discussion. Empirically, we find that the nonlinearity of Tsallis KL policies bring superior performance to the uniform averaging KL policies on the testbed considered.
## 4 A Practical Algorithm for Tsallis KL Regularization
In this section we provide a practical algorithm for implementing Tsallis regularization. We first explain why this is not straightforward to simply implement KL-regularized value iteration, and how Munchausen Value Iteration (MVI) overcomes this issue with a clever implicit regularization trick. We then extend this algorithm to \(q>1\) using a similar approach, though now with some approximation due once again to the difficulties of pseudo-additivity.
### Implicit Regularization With MVI
Even for the standard KL, it is difficult to implement KL-regularized value iteration with function approximation. The difficulty arises from the fact that we cannot exactly obtain \(\pi_{k+1}\propto\pi_{k}\exp\left(Q_{k}\right)\). This policy might not be representable by our function approximator. For \(q=1\), one needs to store all past \(Q_{k}\) which is computationally infeasible.
An alternative direction has been to construct a different value function iteration scheme, which is equivalent to the original KL regularized value iteration (Azar et al., 2012; Kozuno et al., 2019). A recent method of this family is Munchausen VI (MVI) (Vieillard et al., 2020). MVI implicitly enforces KL regularization using the recursion
\[\begin{cases}\pi_{k+1}=\arg\max_{\pi}\left\langle\pi,Q_{k}-\tau\ln\pi\right\rangle \\ Q_{k+1}=r+\alpha\tau\ln\pi_{k+1}+\gamma P\left\langle\pi_{k+1},Q_{k}-\tau\ln \pi_{k+1}\right\rangle\end{cases} \tag{9}\]
We see that Eq. (9) is Eq. (1) with \(\Omega(\pi)=-\mathcal{H}\left(\pi\right)\) (blue) plus an additional red _Munchausen term_, with coefficient \(\alpha\). Vieillard et al. (2020) showed that implicit KL regularization was performedunder the hood, even though we still have tractable \(\pi_{k+1}\propto\exp\left(\tau^{-1}Q_{k}\right)\):
\[Q_{k+1}=r+\alpha\tau\ln\pi_{k+1}+\gamma P\left\langle\pi_{k+1},Q_{k }-\tau\ln\pi_{k+1}\right\rangle\Leftrightarrow Q_{k+1}-\alpha\tau\ln\pi_{k+1}=\] \[r+\gamma P\big{(}\left\langle\pi_{k+1},Q_{k}-\alpha\tau\ln\pi_{k} \right\rangle-\left\langle\pi_{k+1},\alpha\tau(\ln\pi_{k+1}-\ln\pi_{k})-\left( 1-\alpha\right)\tau\ln\pi_{k+1}\right\rangle\big{)}\] \[\Leftrightarrow Q^{\prime}_{k+1}=r+\gamma P\big{(}\left\langle\pi_{k+1},Q^{ \prime}_{k}\right\rangle-\alpha\tau D_{\text{{KL}}}(\pi_{k+1}||\pi_{k})+\left( 1-\alpha\right)\tau\mathcal{H}\left(\pi_{k+1}\right)\big{)} \tag{10}\]
where \(Q^{\prime}_{k+1}\!:=\!Q_{k+1}-\alpha\tau\ln\pi_{k+1}\) is the generalized action value function.
The implementation of this idea uses the fact that \(\alpha\tau\ln\pi_{k+1}=\alpha(Q_{k}-\mathcal{M}_{\tau}Q_{k})\), where \(\mathcal{M}_{\tau}Q_{k}:=\frac{1}{Z_{k}}\left\langle\exp\left(\tau^{-1}Q_{k} \right),Q_{k}\right\rangle,Z_{k}=\left\langle\mathbf{1},\exp\left(\tau^{-1}Q_{ k}\right)\right\rangle\) is the Boltzmann softmax operator.2 In the original work, computing this advantage term was found to be more stable than directly using the log of the policy. In our extension, we use the same form.
Footnote 2: Using \(\mathcal{M}_{\tau}Q\) is equivalent to the log-sum-exp operator up to a constant shift (Azar et al., 2012).
### MVI(\(q\)) For General \(q\)
The MVI(q) algorithm is a simple extension of MVI: it replaces the standard exponential in the definition of the advantage with the \(q\)-exponential. We can express this action gap as \(Q_{k}-\mathcal{M}_{q,\tau}Q_{k}\), where \(\mathcal{M}_{q,\tau}Q_{k}=\left\langle\exp_{q}\left(\frac{Q_{k}}{\tau}-\psi_{q }\left(\frac{Q_{k}}{\tau}\right)\right),Q_{k}\right\rangle\). When \(q=1\), it recovers \(Q_{k}-\mathcal{M}_{\tau}Q_{k}\). We summarize this MVI(\(q\)) algorithm in Algorithm B in the Appendix. When \(q=1\), we recover MVI. For \(q=\infty\), we get that \(\mathcal{M}_{\infty,\tau}Q_{k}\) is \(\max_{a}Q_{k}(s,a)\)--no regularization--and we recover advantage learning (Baird and Moore, 1999). Similar to the original MVI algorithm, MVI(\(q\)) enjoys tractable policy expression with \(\pi_{k+1}\propto\exp_{q}\left(\tau^{-1}Q_{k}\right)\).
Unlike MVI, however, MVI(\(q\)) no longer exactly implements the implicit regularization shown in Eq. (10). Below, we go through a similar derivation as MVI, show why there is an approximation and motivate why the above advantage term is a reasonable approximation. In addition to this reasoning, our primary motivation for this extension of MVI to use \(q>1\) was to inherit the same simple form as MVI as well as because empirically we found it to be effective.
Let us similarly define a generalized action value function \(Q^{\prime}_{k+1}=Q_{k+1}-\alpha\tau\ln_{q}\pi_{k+1}\). Using the relationship \(\ln_{q}\pi_{k}=\ln_{q}\frac{\pi_{k}}{\pi_{k+1}}-\ln_{q}\frac{1}{\pi_{k+1}}-(1 -q)\ln_{q}\pi_{k}\ln_{q}\frac{1}{\pi_{k+1}}\), we get
\[Q_{k+1}-\alpha\tau\ln_{q}\pi_{k+1}=r+\gamma P\left\langle\pi_{k+1 },Q_{k}+\alpha\tau\ln_{q}\pi_{k}-\alpha\tau\ln_{q}\pi_{k}+\tau S_{q}\left(\pi_ {k+1}\right)\right\rangle\] \[\Leftrightarrow Q^{\prime}_{k+1}=r+\gamma P\left\langle\pi_{k+1},Q^{ \prime}_{k}+\tau S_{q}\left(\pi_{k+1}\right)\right\rangle+\] \[\gamma P\left\langle\pi_{k+1},\alpha\tau\left(\ln_{q}\frac{\pi_{ k}}{\pi_{k+1}}-\ln_{q}\frac{1}{\pi_{k+1}}-(1-q)\ln_{q}\frac{1}{\pi_{k+1}}\ln_{q} \pi_{k}\right)\right\rangle \tag{11}\] \[=r+\gamma P\left\langle\pi_{k+1},Q^{\prime}_{k}+(1-\alpha)\tau S _{q}(\pi_{k+1})\right\rangle-\gamma P\left\langle\pi_{k+1},\alpha\tau D^{q}_{ \text{{KL}}}(\pi_{k+1}||\pi_{k})-\alpha\tau R_{q}(\pi_{k+1},\pi_{k})\right\rangle\]
Figure 3: MVI(\(q\)) on CartPole-v1 for \(q=2,3,4,5\), averaged over 50 seeds, with \(\tau=0.03,\alpha=0.9\). (Left) The difference between the proposed action gap \(Q_{k}-\mathcal{M}_{q,\tau}Q_{k}\) and the general Munchausen term \(\ln_{q}\pi_{k+1}\) converges to a constant. (Right) The residual \(R_{q}(\pi_{k+1},\pi_{k})\) becomes larger as \(q\) increases. For \(q=2\), it remains negligible throughout the learning.
where we leveraged the fact that \(-\alpha\tau\left\langle\pi_{k+1},\ln_{q}\frac{1}{\pi_{k+1}}\right\rangle=-\alpha \tau S_{q}(\pi_{k+1})\) and defined the residual term \(R_{q}(\pi_{k+1},\pi_{k}):=(1-q)\ln_{q}\frac{1}{\pi_{k+1}}\ln_{q}\pi_{k}\). When \(q=2\), it is expected that the residual term remains negligible, but can become larger as \(q\) increases. We visualize the trend of the residual \(R_{q}(\pi_{k+1},\pi_{k})\) for \(q=2,3,4,5\) on the CartPole-v1 environment (Brockman et al., 2016) in Figure 3. Learning consists of \(2.5\times 10^{5}\) steps, evaluated every \(2500\) steps (one iteration), averaged over 50 independent runs. It is visible that the magnitude of residual jumps from \(q=4\) to \(5\), while \(q=2\) remains negligible throughout.
A reasonable approximation, therefore, is to use \(\ln_{q}\pi_{k+1}\) and omit this residual term. Even this approximation, however, has an issue. When the actions are in the support, \(\ln_{q}\) is the unique inverse function of \(\exp_{q}\) and \(\ln_{q}\pi_{k+1}\) yields \(\frac{Q_{k}}{\tau}-\psi_{q}\left(\frac{Q_{k}}{\tau}\right)\). However, for actions outside the support, we cannot get the inverse, because many inputs to \(\exp_{q}\) can result in zero. We could still use \(\frac{Q_{k}}{\tau}-\psi_{q}\left(\frac{Q_{k}}{\tau}\right)\) as a sensible choice, and it appropriately does use negative values for the Munchausen term for these zero-probability actions. Empirically, however, we found this to be less effective than using the action gap.
Though the action gap is yet another approximation, there are clear similarities between using \(\frac{Q_{k}}{\tau}-\psi_{q}\left(\frac{Q_{k}}{\tau}\right)\) and the action gap \(Q_{k}-\mathcal{M}_{q,\tau}Q_{k}\). The primary difference is in how the values are centered. We can see \(\psi_{q}\) as using a uniform average value of the actions in the support, as characterized in Theorem 2. \(\mathcal{M}_{q,\tau}Q_{k}\), on the other hand, is a weighted average of action-values.
We plot the difference between \(Q_{k}-\mathcal{M}_{q,\tau}Q_{k}\) and \(\ln_{q}\pi_{k+1}\) in Figure 3, again in Cartpole. The difference stabilizes around -0.5 for most of learning--in other words primarily just shifting by a constant--but in early learning \(\ln_{q}\pi_{k+1}\) is larger, across all \(q\). This difference in magnitude might explain why using the action gap results in more stable learning, though more investigation is needed to truly understand the difference. For the purposes of this initial work, we pursue the use of the action gap, both as itself a natural extension of the current implementation of MVI and from our own experiments suggesting improved stability with this form.
## 5 Experiments
In this section we investigate the utility of MVI(\(q\)) in the Atari 2600 benchmark (Bellemare et al., 2013). We test whether this result holds in more challenging environments. Specifically, we compare to standard MVI (\(q=1\)), which was already shown to have competitive performance on Atari (Vieillard et al., 2020). We restrict our attention to \(q=2\), which was generally effective in other settings and also allows us to contrast to previous work (Lee et al., 2020) that only used entropy regularization with KL regularization. For MVI(\(q=2\)), we take the exact same learning setup--hyperparameters and architecture--as MVI(\(q=1\)) and simply modify the term added to the VI update, as in Algorithm 1.
Figure 4: Learning curves of MVI(\(q\)) and M-VI on the selected Atari games, averaged over 3 independent runs, with ribbon denoting the standard error. On some environments MVI(\(q\)) significantly improve upon M-VI. Quantitative improvements over M-VI and Tsallis-VI are shown in Figures 5.
For the Atari games we implemented MVI(\(q\)), Tsallis-VI and M-VI based on the Quantile Regression DQN (Dabney et al., 2018). We leverage the optimized Stable-Baselines3 architecture (Raffin et al., 2021) for best performance and average over 3 independent runs following (Vieillard et al., 2020), though we run \(50\) million frames instead of 200 million. From Figure 4 it is visible that MVI(\(q\)) is stable with no wild variance shown, suggesting 3 seeds might be sufficient. We perform grid searches for the algorithmic hyperparameters on two environments Asterix and Seaquest: the latter environment is regarded as a hard exploration environment. MVI(\(q\)) \(\alpha:\{0.01,0.1,0.5,0.9,0.99\}\); \(\tau:\{0.01,0.1,1.0,10,100\}\). Tsallis-VI \(\tau:\{0.01,0.1,1.0,10,100\}\). For MVI we use the reported hyperparameters in (Vieillard et al., 2020). Hyperparameters can be seen from Table 2 and full results are provided in Appendix E.
### Comparing MVI(\(q\)) with \(q=1\) to \(q=2\)
We provide the overall performance of MVI versus MVI(\(q=2\)) in Figure 5. Using \(q=2\) provides a large improvement in about 5 games, about double the performance in the next 5 games, comparable performance in the next 7 games and then slightly worse performance in 3 games (PrivateEye, Chopper and Seaquest). Both PrivateEye and Seaquest are considered harder exploration games, which might explain this discrepancy. The Tsallis policy with \(q=2\) reduces the support on actions, truncating some probabilities to zero. In general, with a higher \(q\), the resulting policy is greedier, with \(q=\infty\) corresponding to exactly the greedy policy. It is possible that for these harder exploration games, the higher stochasticity in the softmax policy from MVI whre \(q=1\) promoted more exploration. A natural next step is to consider incorporating more directed exploration approaches, into MVI(\(q=2\)), to benefit from the fact that lower-value actions are removed (avoiding taking poor actions) while exploring in a more directed way when needed.
We examine the learning curves for the games where MVI(\(q\)) had the most significant improvement, in Figure 4. Particularly notable is how much more quickly MVI(\(q\)) learned with \(q=2\), in addition to plateauing at a higher point. In Hero, MVI(\(q\)) learned a stably across the runs, whereas standard MVI with \(q=1\) clearly has some failures.
These results are quite surprising. The algorithms are otherwise very similar, with the seemingly small change of using Munchausen term \(Q_{k}(s,a)-\mathcal{M}_{q=2,\tau}Q_{k}\) instead of \(Q_{k}(s,a)-\mathcal{M}_{q=1,\tau}Q_{k}\) and using the \(q\)-logarithm and \(q\)-exponential for the entropy regularization and policy parameterization. Previous work using \(q=2\) to get the sparsemax with entropy regularization generally harmed performance (Lee et al., 2018, 2020). It seems that to get the benefits of the generalization to \(q>1\), the addition of the KL regularization might be key. We validate this in the next section.
### The Importance of Including KL Regularization
In the policy evaluation step of Eq. (11), if we set \(\alpha=0\) then we recover Tsallis-VI which uses regularization \(\Omega(\pi)=-S_{q}(\pi)\) in Eq. (1). In other words, we recover the algorithm that incorporates entropy regularization using the \(q\)-logarithm and the resulting sparsemax policy. Unlike MVI, Tsallis
Figure 5: (Left) The percent improvement of MVI(\(q\)) with \(q=2\) over standard MVI (where \(q=1\)) on select Atari games. The improvement is computed by subtracting the scores from MVI(\(q\)) and MVI and normalizing by the MVI scores. (Right) Improvement over Tsallis-VI on Atari environments, normalized with Tsallis-VI scores.
VI has not been comprehensively evaluated on Atari games, so we include results for the larger benchmark set comprising 35 Atari games. We plot the percentage improvement of MVI(\(q\)) over Tsallis-VI in Figure 5.
The improvement from including the Munchausen term (\(\alpha>0\)) is stark. For more than half of the games, MVI(\(q\)) resulted in more than 100% improvement. For the remaining games it was comparable. For 10 games, it provided more than 400% improvement. Looking more specifically at which games there was notable improvement, it seems that exploration may again have played a role. MVI(\(q\)) performs much better on Seaquest and PrivateEye. Both MVI(\(q\)) and Tsallis-VI have policy parameterizations that truncate action support, setting probabilities to zero for some actions. The KL regularization term, however, likely slows this down. It is possible the Tsallis-VI is concentrating too quickly, resulting in insufficient exploration.
## 6 Conclusion and Discussion
We investigated the use of the more general \(q\)-logarithm for entropy regularization and KL regularization, instead of the standard logarithm (\(q=1\)), which gave rise to Tsallis entropy and Tsallis KL regularization. We extended several results previously shown for \(q=1\), namely we proved (a) the form of the Tsallis policy can be expressed by \(q\)-exponential function; (b) Tsallis KL-regularized policies are weighted average of past action-values; (c) the convergence of value iteration for \(q=2\) and (d) a relationship between adding a \(q\)-logarithm of policy to the action-value update, to provide implicit Tsallis KL regularization and entropy regularization, generalizing the original Munchausen Value Iteration (MVI). We used these results to propose a generalization to MVI, which we call MVI(\(q\)), because for \(q=1\) we exactly recover MVI. We showed empirically that the generalization to \(q>1\) can be beneficial, providing notable improvements in the Atari 2600 benchmark.
## References
* Azar et al. (2012) M. G. Azar, V. Gomez, and H. J. Kappen. Dynamic policy programming. _Journal of Machine Learning Research_, 13(1):3207-3245, 2012.
* Baird and Moore (1999) L. Baird and A. Moore. Gradient descent for general reinforcement learning. In _Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II_, page 968-974, 1999.
* Bellemare et al. (2013) M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. _Journal of Artificial Intelligence Research_, 47(1):253-279, 2013. ISSN 1076-9757.
* Belousov and Peters (2019) B. Belousov and J. Peters. Entropic regularization of markov decision processes. _Entropy_, 21(7), 2019.
* Blondel et al. (2020) M. Blondel, A. F. Martins, and V. Nicolae. Learning with fenchel-young losses. _Journal of Machine Learning Research_, 21(35):1-69, 2020.
* Brockman et al. (2016) G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. _arXiv preprint arXiv:1606.01540_, 2016.
* Chan et al. (2022) A. Chan, H. Silva, S. Lim, T. Kozuno, A. R. Mahmood, and M. White. Greedification operators for policy optimization: Investigating forward and reverse kl divergences. _Journal of Machine Learning Research_, 23(253):1-79, 2022.
* Chen et al. (2018) G. Chen, Y. Peng, and M. Zhang. Effective exploration for deep reinforcement learning via bootstrapped q-ensembles under tsallis entropy regularization. _arXiv:abs/1809.00403_, 2018. URL [http://arxiv.org/abs/1809.00403](http://arxiv.org/abs/1809.00403).
* Chow et al. (2018) Y. Chow, O. Nachum, and M. Ghavamzadeh. Path consistency learning in Tsallis entropy regularized MDPs. In _International Conference on Machine Learning_, pages 979-988, 2018.
* Condat (2016) L. Condat. Fast projection onto the simplex and the l1 ball. _Mathematical Programming_, 158:575-585, 2016.
* D'Auria et al. (2017)T. M. Cover and J. A. Thomas. _Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing)_. Wiley-Interscience, USA, 2006.
* Dabney et al. (2018) W. Dabney, M. Rowland, M. Bellemare, and R. Munos. Distributional reinforcement learning with quantile regression. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 32, pages 2892-2899, 2018.
* Duchi et al. (2008) J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In _Proceedings of the 25th International Conference on Machine Learning_, page 272-279, 2008.
* Furuichi et al. (2004) S. Furuichi, K. Yanagi, and K. Kuriyama. Fundamental properties of tsallis relative entropy. _Journal of Mathematical Physics_, 45(12):4868-4877, 2004.
* Futami et al. (2018) F. Futami, I. Sato, and M. Sugiyama. Variational inference based on robust divergences. In _Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics_, volume 84, pages 813-822, 2018.
* Geist et al. (2019) M. Geist, B. Scherrer, and O. Pietquin. A theory of regularized Markov decission processes. In _36th International Conference on Machine Learning_, volume 97, pages 2160-2169, 2019.
* Ghasemipour et al. (2019) S. K. S. Ghasemipour, R. S. Zemel, and S. S. Gu. A divergence minimization perspective on imitation learning methods. In _Conference on Robot Learning_, pages 1-19, 2019.
* Grau-Moya et al. (2019) J. Grau-Moya, F. Leibfried, and P. Vrancx. Soft q-learning with mutual-information regularization. In _International Conference on Learning Representations_, pages 1-13, 2019.
* Haarnoja et al. (2018) T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In _Proceedings of the 35th International Conference on Machine Learning_, pages 1861-1870, 2018.
* Hiriart-Urruty and Lemarechal (2004) J. Hiriart-Urruty and C. Lemarechal. _Fundamentals of Convex Analysis_. Grundlehren Text Editions. Springer Berlin Heidelberg, 2004.
* Ke et al. (2019) L. Ke, S. Choudhury, M. Barnes, W. Sun, G. Lee, and S. Srinivasa. Imitation learning as \(f\)-divergence minimization, 2019. URL [https://arxiv.org/abs/1905.12888](https://arxiv.org/abs/1905.12888).
* Kitamura et al. (2021) T. Kitamura, L. Zhu, and T. Matsubara. Geometric value iteration: Dynamic error-aware kl regularization for reinforcement learning. In _Proceedings of The 13th Asian Conference on Machine Learning_, volume 157, pages 918-931, 2021.
* Kozuno et al. (2019) T. Kozuno, E. Uchibe, and K. Doya. Theoretical analysis of efficiency and robustness of softmax and gap-increasing operators in reinforcement learning. In _Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics_, volume 89, pages 2995-3003, 2019.
* Kozuno et al. (2022) T. Kozuno, W. Yang, N. Vieillard, T. Kitamura, Y. Tang, J. Mei, P. Menard, M. G. Azar, M. Valko, R. Munos, O. Pietquin, M. Geist, and C. Szepesvari. Kl-entropy-regularized rl with a generative model is minimax optimal, 2022. URL [https://arxiv.org/abs/2205.14211](https://arxiv.org/abs/2205.14211).
* Lee et al. (2018) K. Lee, S. Choi, and S. Oh. Sparse markov decision processes with causal sparse tsallis entropy regularization for reinforcement learning. _IEEE Robotics and Automation Letters_, 3:1466-1473, 2018.
* Lee et al. (2020) K. Lee, S. Kim, S. Lim, S. Choi, M. Hong, J. I. Kim, Y. Park, and S. Oh. Generalized tsallis entropy reinforcement learning and its application to soft mobile robots. In _Robotics: Science and Systems XVI_, pages 1-10, 2020.
* Li and Turner (2016) Y. Li and R. E. Turner. Renyi divergence variational inference. In _Advances in Neural Information Processing Systems_, volume 29, 2016.
* Martins and Astudillo (2016) A. F. T. Martins and R. F. Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label classification. In _Proceedings of the 33rd International Conference on Machine Learning_, page 1614-1623, 2016.
* Liu et al. (2018)O. Nachum and B. Dai. Reinforcement learning via fenchel-rockafellar duality. 2020. URL [http://arxiv.org/abs/2001.01866](http://arxiv.org/abs/2001.01866).
* Nachum et al. (2019) O. Nachum, B. Dai, I. Kostrikov, Y. Chow, L. Li, and D. Schuurmans. Algaedice: Policy gradient from arbitrary experience. _arXiv preprint arXiv:1912.02074_, 2019.
* Naudts (2002) J. Naudts. Deformed exponentials and logarithms in generalized thermostatistics. _Physica A-statistical Mechanics and Its Applications_, 316:323-334, 2002.
* Nowozin et al. (2016) S. Nowozin, B. Cseke, and R. Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In _Advances in Neural Information Processing Systems_, volume 29, pages 1-9, 2016.
* Prehl et al. (2012) J. Prehl, C. Essex, and K. H. Hoffmann. Tsallis relative entropy and anomalous diffusion. _Entropy_, 14(4):701-716, 2012.
* Raffin et al. (2021) A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann. Stable-baselines3: Reliable reinforcement learning implementations. _Journal of Machine Learning Research_, 22(268):1-8, 2021.
* Sason and Verdu (2016) I. Sason and S. Verdu. f-divergence inequalities. _IEEE Transactions on Information Theory_, 62:5973-6006, 2016.
* Suyari and Tsukada (2005) H. Suyari and M. Tsukada. Law of error in tsallis statistics. _IEEE Transactions on Information Theory_, 51(2):753-757, 2005.
* Suyari et al. (2020) H. Suyari, H. Matsuzoe, and A. M. Scarfone. Advantages of q-logarithm representation over q-exponential representation from the sense of scale and shift on nonlinear systems. _The European Physical Journal Special Topics_, 229(5):773-785, 2020.
* Tsallis (1988) C. Tsallis. Possible generalization of boltzmann-gibbs statistics. _Journal of Statistical Physics_, 52:479-487, 1988.
* Tsallis (2009) C. Tsallis. _Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World_. Springer New York, 2009. ISBN 9780387853581.
* Vieillard et al. (2020a) N. Vieillard, T. Kozuno, B. Scherrer, O. Pietquin, R. Munos, and M. Geist. Leverage the average: an analysis of regularization in rl. In _Advances in Neural Information Processing Systems 33_, pages 1-12, 2020a.
* Vieillard et al. (2020b) N. Vieillard, O. Pietquin, and M. Geist. Munchausen reinforcement learning. In _Advances in Neural Information Processing Systems 33_, pages 1-11. 2020b.
* Wan et al. (2020) N. Wan, D. Li, and N. Hovakimyan. f-divergence variational inference. In _Advances in Neural Information Processing Systems_, volume 33, pages 17370-17379, 2020.
* Wang et al. (2018) D. Wang, H. Liu, and Q. Liu. Variational inference with tail-adaptive f-divergence. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_, NIPS'18, page 5742-5752, 2018.
* Yamano (2002) T. Yamano. Some properties of q-logarithm and q-exponential functions in tsallis statistics. _Physica A: Statistical Mechanics and its Applications_, 305(3):486-496, 2002.
* Yu et al. (2020) L. Yu, Y. Song, J. Song, and S. Ermon. Training deep energy-based models with f-divergence minimization. In _Proceedings of the 37th International Conference on Machine Learning_, ICML'20, pages 1-11, 2020.
* Zhang et al. (2020) R. Zhang, B. Dai, L. Li, and D. Schuurmans. Gendice: Generalized offline estimation of stationary values. In _International Conference on Learning Representations_, 2020.
Basic facts of Tsallis KL divergence
We present some basic facts about \(q\)-logarithm and Tsallis KL divergence.
We begin by introducing the \(2-q\) duality for Tsallis statistics. Recall that the \(q\)-logarithm and Tsallis entropy defined in the main paper are:
\[\ln_{q}x=\frac{x^{1-q}-1}{1-q},\quad S_{q}(x)=-\left\langle x^{q},\ln_{q}x \right\rangle.\]
In the RL literature, another definition \(q^{*}=2-q\) is more often used (Lee et al., 2020). This is called the \(2-q\) duality (Naudts, 2002; Suyari and Tsukada, 2005), which refers to that the Tsallis entropy can be equivalently defined as:
\[\ln_{q^{*}}x=\frac{x^{q^{*}-1}-1}{q^{*}-1},\quad S_{q^{*}}(x)=-\left\langle x, \ln_{q^{*}}x\right\rangle,\]
By the duality we can show (Suyari and Tsukada, 2005, Eq.(12)):
\[S_{q}(x):=-\left\langle x^{q},\frac{x^{1-q}-1}{1-q}\right\rangle=\frac{\left\langle \mathbf{1},x^{q}\right\rangle-1}{1-q}=\frac{\left\langle\mathbf{1},x^{q^{*}} \right\rangle-1}{1-q^{*}}=-\left\langle x,\frac{x^{q^{*}-1}-1}{q^{*}-1} \right\rangle=:S_{q^{*}}(x),\]
i.e. the duality between logarithms \(\ln_{q^{*}}x\) and \(\ln_{q}x\) allows us to define Tsallis entropy by an alternative notation \(q^{*}\) that eventually reaches to the same functional form.
We now come to examine Tsallis KL divergence (or Tsallis relative entropy) defined in another form: \(D^{q}_{\textit{KL}}(\pi\,|\,\mu)=\left\langle\pi,\ln_{q^{*}}\frac{\pi}{\mu}\right\rangle\)(Prehl et al., 2012). In the main paper we used the definition \(D^{q}_{\textit{KL}}(\pi\,|\,\mu)=\left\langle\pi,-\ln_{q}\frac{\mu}{\pi}\right\rangle\)(Furuichi et al., 2004). We show they are equivalent by the same logic:
\[\left\langle\pi,-\ln_{q}\frac{\mu}{\pi}\right\rangle=\left\langle\pi,-\frac{ \left(\frac{\mu}{\pi}\right)^{1-q}-1}{1-q}\right\rangle=\left\langle\pi,\frac{ \left(\frac{\pi}{\mu}\right)^{q-1}-1}{q-1}\right\rangle=\left\langle\pi,\ln_{q ^{*}}\frac{\pi}{\mu}\right\rangle. \tag{12}\]
The equivalence allows us to work with whichever of \(\ln_{q}\) and \(\ln_{q^{*}}\) that makes the proof easier to work out the following useful properties of Tsallis KL divergence:
\(-\) Nonnegativity \(D^{q}_{\textit{KL}}(\pi\,|\,\mu)\geq 0\): since the function \(-\ln_{q}\pi\) is convex, by Jensen's inequality
\(-\) Conditions of \(D^{q}_{\textit{KL}}(\pi\,|\,\mu)=0\): directly from the above, in Jensen's inequality the equality holds only when \(\frac{\mu}{\pi}=1\) almost everywhere, i.e. \(D^{q}_{\textit{KL}}(\pi\,|\,\mu)=0\) implies \(\mu=\pi\) almost everywhere.
\(-\) Conditions of \(D^{q}_{\textit{KL}}(\pi\,|\,\mu)=\infty\): To better align with the standard KL divergence, let us work with \(\ln_{q^{*}}\), following (Cover and Thomas, 2006), let us define
\[0\ln_{q^{*}}\frac{0}{0}=0,\quad 0\ln_{q^{*}}\frac{0}{\mu}=0,\quad\pi\ln_{q^{* }}\frac{\pi}{0}=\infty.\]
We conclude that \(D^{q}_{\textit{KL}}(\pi\,|\,\mu)=\infty\) whenever \(\pi>0\) and \(\mu=0\).
\(-\) Bounded entropy \(\forall q,\,0\leq S_{q}(\pi)\leq\ln_{q}|\mathcal{A}|\): let \(\mu=\frac{1}{|\mathcal{A}|}\), by the nonnegativity of Tsallis KL divergence:
\[D^{q}_{\textit{KL}}(\pi\,|\,\mu) =\left\langle\pi,-\ln_{q}\frac{1}{(|\mathcal{A}|\cdot\pi)}\right\rangle =\left\langle\pi,\frac{(|\mathcal{A}|\cdot\pi)^{q-1}-1}{q-1}\right\rangle\] \[=|\mathcal{A}|^{q-1}\left(\frac{\left\langle\mathbf{1},\pi^{q} \right\rangle-1}{q-1}-\frac{\frac{1}{|\mathcal{A}|^{q-1}}-1}{q-1}\right)\geq 0.\]
Notice that \(\frac{\left\langle\mathbf{1},\pi^{q}\right\rangle-1}{q-1}=\left\langle\pi^{q},\,\frac{1-\pi^{1-q}}{1-q}\right\rangle=\left\langle\pi,\ln_{q}\pi\right\rangle =-S_{q}(\pi)\) and \(\frac{\frac{1}{|\mathcal{A}|^{q-1}}-1}{q-1}=\ln_{q}|\mathcal{A}|\), we conclude
\[S_{q}(\pi)\leq\ln_{q}|\mathcal{A}|.\]Proof of Theorem 1 and 2
We structure this section as the following three parts:
1. Tsallis entropy regularized policy has general expression for all \(q\). Moreover, \(q\) and \(\tau\) are interchangeable for controlling the truncation (Theorem 1).
2. The policies can be expressed by \(q\)-exponential (Theorem 1).
3. We present a computable approximate threshold \(\hat{\psi}_{q}\) (Theorem 2).
General expression for Tsallis entropy regularized policy.The original definition of Tsallis entropy is \(S_{q^{*}}(\pi(\cdot|s))=\frac{p}{q^{*}-1}\left(1-\sum_{a}\pi^{q^{*}}\left(a|s \right)\right),q^{*}\in\mathbb{R},\ p\in\mathbb{R}_{+}\). Note that similar to Appendix A, we can choose whichever convenient of \(q\) and \(q^{*}\), since the domain of the entropic index is \(\mathbb{R}\). To obtain the Tsallis entropy-regularized policies we follow [Chen et al., 2018]. The derivation begins with assuming an actor-critic framework where the policy network is parametrized by \(w\). It is well-known that the parameters should be updated towards the direction specified by the policy gradient theorem:
\[\Delta w\propto\mathbb{E}_{\pi}\left[Q_{\pi}\frac{\partial\ln\pi}{\partial w} +\tau\frac{\partial\mathcal{H}\left(\pi\right)}{\partial w}\right]-\sum_{s} \lambda(s)\frac{\partial\left\langle\mathbf{1},\pi\right\rangle}{\partial w}=: f(w), \tag{13}\]
Recall that \(\mathcal{H}\left(\pi\right)\) denotes the Shannon entropy and \(\tau\) is the coefficient. \(\lambda(s)\) are the Lagrange multipliers for the constraint \(\left\langle\mathbf{1},\pi\right\rangle=1\). In the Tsallis entropy framework, we replace \(\mathcal{H}\left(\pi\right)\) with \(S_{q^{*}}(\pi)\). We can assume \(p=\frac{1}{q^{*}}\) to ease derivation, which is the case for sparsemax.
We can now explicitly write the optimal condition for the policy network parameters:
\[\begin{split}& f(w)=0=\mathbb{E}_{\pi}\left[Q_{\pi}\frac{ \partial\ln\pi}{\partial w}+\tau\frac{\partial S_{q^{*}}(\pi)}{\partial w} \right]-\sum_{s}\lambda(s)\frac{\partial\left\langle\mathbf{1},\pi\right\rangle }{\partial w}\\ &=\mathbb{E}_{\pi}\left[Q_{\pi}\frac{\partial\ln\pi}{\partial w}- \tau\frac{1}{q^{*}-1}\left\langle\mathbf{1},\pi^{q^{*}}\frac{\partial\ln\pi}{ \partial w}\right\rangle-\tilde{\psi}_{q}(s)\frac{\partial\ln\pi}{\partial w} \right]\\ &=\mathbb{E}_{\pi}\left[\left(Q_{\pi}-\tau\frac{1}{q^{*}-1}\pi^{q^ {*}-1}-\tilde{\psi}_{q}(s)\right)\frac{\partial\ln\pi}{\partial w}\right], \end{split} \tag{14}\]
where we leveraged \(\frac{\partial S_{q^{*}}(\pi)}{\partial w}=\frac{1}{q^{*}-1}\left\langle \mathbf{1},\pi^{q^{*}}\frac{\partial\ln\pi}{\partial w}\right\rangle\) in the second step and absorbed terms into the expectation in the last step. \(\tilde{\psi}_{q}(s)\) denotes the adjusted Lagrange multipliers by taking \(\lambda(s)\) inside the expectation and modifying it according to the discounted stationary distribution.
Now it suffices to verify either \(\frac{\partial\ln\pi}{\partial w}=0\) or
\[\begin{split}& Q_{\pi}(s,a)-\tau\frac{1}{q^{*}-1}\pi^{q^{*}-1}(a |s)-\tilde{\psi}_{q}(s)=0\\ \Leftrightarrow&\pi^{*}(a|s)=\sqrt[{q^{*}-1}]{\left[ \frac{Q_{\pi}(s,a)}{\tau}-\frac{\tilde{\psi}_{q}(s)}{\tau}\right]_{+}(q^{*}- 1)},\\ \text{or}&\pi^{*}(a|s)=\sqrt[{q^{*}(s,a)}-\frac{ \tilde{\psi}_{q}(s)}{\tau}]_{+}(1-q),\end{split} \tag{15}\]
where we changed the entropic index from \(q^{*}\) to \(q\). Clearly, the root does not affect truncation. Consider the pair \((q^{*}=50,\tau)\), then the same truncation effect can be achieved by choosing \((q^{*}=2,\frac{\tau}{50-1})\). The same goes for \(q\). Therefore, we conclude that \(q\) and \(\tau\) are interchangeable for the truncation, and we should stick to the analytic choice \(q^{*}=2(q=0)\).
Tsallis policies can be expressed by \(q\)-exponential.Given Eq. (15), by adding and subtracting \(1\), we have:
\[\pi^{*}(a|s)=\sqrt[{1+(1-q)\left(\frac{Q_{\pi}(s,a)}{\tau}-\tilde{\psi}_{q} \left(\frac{Q_{\pi}(s,\cdot)}{\tau}\right)-\frac{1}{1-q}\right)}]_{+}=\exp_{q} \left(\frac{Q_{\pi}(s,a)}{\tau}-\hat{\psi}_{q}\left(\frac{Q_{\pi}(s,\cdot)}{ \tau}\right)\right),\]where we defined \(\hat{\psi}_{q}=\hat{\psi}_{q}+\frac{1}{1-q}\). Note that this expression is general for all \(q\), but whether \(\pi^{*}\) has closed-form expression depends on the solvability of \(\tilde{\psi}_{q}\).
Let us consider the extreme case \(q=\infty\). It is clear that \(\lim_{q\to\infty}\frac{1}{1-q}\to 0\). Therefore, for any \(x>0\) we must have \(x^{\frac{1}{1-q}}\to 1\); i.e., there is only one action with probability 1, with all others being 0. This conclusion agrees with the fact that \(S_{q}(\pi)\to 0\) as \(q\to\infty\): hence the regularized policy degenerates to \(\arg\max\).
A computable Normalization Function.The constraint \(\sum_{a\in K(s)}\pi^{*}(a|s)=1\) is exploited to obtain the threshold \(\psi\) for the sparsemax [Lee et al., 2018, Chow et al., 2018]. Unfortunately, this is only possible when the root vanishes, since otherwise the constraint yields a summation of radicals. Nonetheless, we can resort to first-order Taylor's expansion for deriving an approximate policy. Following [Chen et al., 2018], let us expand Eq. (15) by the first order Taylor's expansion \(f(z)+f^{\prime}(z)(x-z)\), where we let \(z=1\), \(x=\left[\frac{Q_{\pi}(s,a)}{\tau}-\tilde{\psi}_{q}\left(\frac{Q_{\pi}(s, \cdot)}{\tau}\right)\right]_{+}(1-q)\), \(f(x)=x^{\frac{1}{1-q}}\), \(f^{\prime}(x)=\frac{1}{1-q}x^{\frac{q}{1-q}}\). So that the unnormalized approximate policy has
\[\tilde{\pi}^{*}(a|s) \approx f(z)+f^{\prime}(z)(x-z) \tag{16}\] \[=1+\frac{1}{1-q}\left(\left(\frac{Q_{\pi}(s,a)}{\tau}-\tilde{ \psi}_{q}\left(\frac{Q_{\pi}(s,\cdot)}{\tau}\right)\right)(1-q)-1\right).\]
Therefore it is clear as \(q\to\infty,\tilde{\pi}^{*}(a|s)\to 1\). This concords well with the limit case where \(\pi^{*}(a|s)\) degenerates to \(\arg\max\). With Eq. (16), we can solve for the approximate normalization by the constraint \(\sum_{a\in K(s)}\pi^{*}(a|s)=1\):
\[1 =\sum_{a\in K(s)}\left[1+\frac{1}{1-q}\left(\left(\frac{Q_{\pi}(s, a)}{\tau}-\tilde{\psi}_{q}\left(\frac{Q_{\pi}(s,\cdot)}{\tau}\right)\right)(1 -q)-1\right)\right]\] \[=|K(s)|-\frac{1}{1-q}|K(s)|+\sum_{a\in K(s)}\left[\frac{Q_{\pi}(s,a)}{\tau}-\tilde{\psi}_{q}\left(\frac{Q_{\pi}(s,\cdot)}{\tau}\right)\right]\] \[\Leftrightarrow\tilde{\psi}_{q}\left(\frac{Q_{\pi}(s,\cdot)}{\tau }\right)=\frac{\sum_{a\in K(s)}\frac{Q_{\pi}(s,\cdot)}{\tau}-1}{|K(s)|}+1- \frac{1}{1-q}.\]
In order for an action to be in \(K(s)\), it has to satisfy \(\frac{Q_{\pi}(s,\cdot)}{\tau}>\frac{\sum_{a\in K(s)}\frac{Q_{\pi}(s,\cdot)}{ \tau}-1}{|K(s)|}+1-\frac{1}{1-q}\). Therefore, the condition of \(K(s)\) satisfies:
\[1+i\frac{Q_{\pi}(s,a_{(i)})}{\tau}>\sum_{j=1}^{i}\frac{Q_{\pi}(s,a_{(j)})}{ \tau}+i\left(1-\frac{1}{1-q}\right).\]
Therefore, we see the approximate threshold \(\hat{\psi}_{q}=\tilde{\psi}_{q}+1\). When \(q=0\) or \(q^{*}=2\), \(\hat{\psi}_{q}\) recovers \(\psi\) and hence \(\tilde{\pi}^{*}\) recovers the exact sparsemax policy.
Appendix C Proof of convergence of \(\Omega(\pi)=D^{q}_{\text{KL}}(\pi\parallel\cdot)\) when \(q=2\)
Let us work with \(\ln_{q^{*}}\) from Appendix A and define \(\left|\!\!\!\cdot\right|\!\!\right|_{p}\) as the \(l_{p}\)-norm. The convergence proof for \(\Omega(\pi)=D^{q}_{\text{KL}}(\pi\parallel\cdot)\) when \(q=2\) comes from that \(\Omega(\pi)\) is strongly convex in \(\pi\):
\[\Omega(\pi)=D^{q^{*}=2}_{\text{KL}}\left(\pi\parallel\cdot\right)=\left\langle \pi,\ln_{2}\frac{\pi}{\tau}\right\rangle=\left\langle\pi,\frac{\left(\frac{ \pi}{2}\right)^{2-1}-1}{2-1}\right\rangle\propto\left\|\frac{\pi}{\cdot} \right\|_{2}^{2}-1. \tag{17}\]
Similarly, the negative Tsallis sparse entropy \(-S_{2}(\pi)\) is also strongly convex. Then the propositions of [Geist et al., 2019] can be applied, which we restate in the following:
**Lemma 1** ([Geist et al., 2019]).: _Define regularized value functions as:_
\[Q_{\pi,\Omega}=r+\gamma PV_{\pi,\Omega},\qquad V_{\pi,\Omega}=\left\langle\pi, Q_{\pi,\Omega}\right\rangle-\Omega(\pi).\]
_If \(\Omega(\pi)\) is strongly convex, let \(\Omega^{*}(Q)=\max_{\pi}\left\langle\pi,Q\right\rangle-\Omega(\pi)\) denote the Legendre-Fenchel transform of \(\Omega(\pi)\), then_* \(\nabla\Omega^{*}\) _is Lipschitz and is the unique maximizer of_ \(\arg\max_{\pi}\left\langle\pi,Q\right\rangle-\Omega(\pi)\)_._
* \(T_{\pi,\Omega}\) _is a_ \(\gamma\)_-contraction in the supremum norm, i.e._ \(\left\|T_{\pi,\Omega}V_{1}-T_{\pi,\Omega}V_{2}\right\|_{\infty}\leq\gamma \left\|V_{1}-V_{2}\right\|_{\infty}\)_. Further, it has a unique fixed point_ \(V_{\pi,\Omega}\)_._
* _The policy_ \(\pi_{*,\Omega}=\arg\max_{\pi}\left\langle\pi,Q_{*,\Omega}\right\rangle-\Omega(\pi)\) _is the unique optimal regularized policy._
Note that in the main paper we dropped the subscript \(\Omega\) for both the regularized optimal policy and action value function to lighten notations. It is now clear that Eq. (6) indeed converges for entropic indices that make \(D^{q}_{KL}(\pi\left\|\cdot\right)\) strongly convex. But we mostly consider the case \(q=2\).
## Appendix D Derivation of the Tsallis KL Policy
This section contains the proof for the Tsallis KL-regularized policy (7). Section D.1 shows that a Tsallis KL policy can also be expressed by a series of multiplications of \(\exp_{q}\left(Q\right)\); while Section D.2 shows its more-than-averaging property.
### Tsallis KL Policies are Similar to KL
We extend the proof and use the same notations from (Lee et al., 2020, Appendix D) to derive the Tsallis KL regularized policy. Again let us work with \(\ln_{q^{*}}\) from Appendix A. Define state visitation as \(\rho_{\pi}(s)=\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\mathbbm{1}(s_{t}=s)\right]\) and state-action visitaion \(\rho_{\pi}(s,a)=\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\mathbbm{1}(s_{t}=s, a_{t}=a)\right]\). The core of the proof resides in establishing the one-to-one correspondence between the policy and the induced state-action visitation \(\rho_{\pi}\). For example, Tsallis entropy is written as
\[S_{q^{*}}(\pi)=S_{q^{*}}(\rho_{\pi})=-\sum_{s,a}\rho_{\pi}(s,a)\ln_{q^{*}}\frac {\rho_{\pi}(s,a)}{\sum_{a}\rho_{\pi}(s,a)}.\]
This unique correspondence allows us to replace the optimization variable from \(\pi\) to \(\rho_{\pi}\). Indeed, one can always restore the policy by \(\pi(a|s):=\frac{\rho_{\pi}(s,a)}{\sum_{a^{\prime}}\rho_{\pi}(s,a^{\prime})}\).
Let us write Tsallis KL divergence as \(D^{q^{*}}_{KL}(\pi\left\|\,\mu\right)=D^{q^{*}}_{KL}(\rho\left\|\,\nu)=\sum_{s,a}\rho(s,a)\ln_{q^{*}}\frac{\rho(s,a)\sum_{s^{\prime}}\nu(s,a^{\prime})}{\nu( s,a)\sum_{a^{\prime}}\rho(s,a^{\prime})}\) by replacing the policies \(\pi,\mu\) with their state-action visita tion \(\rho,\nu\). One can then convert the Tsallis MDP problem into the following problem:
\[\begin{split}&\max_{\rho}\sum_{s,a}\rho(s,a)\sum_{s^{\prime}}r(s,a)P( s^{\prime}|s,a)-D^{q^{*}}_{KL}(\rho\,|\,\nu)\\ &\text{subject to }\forall s,a,\quad\rho(s,a)>0,\\ &\sum_{a}\rho(s,a)=d(s)+\sum_{s^{\prime},a^{\prime}}P(s|s^{\prime },a^{\prime})\rho(s^{\prime},a^{\prime}),\end{split} \tag{18}\]
where \(d(s)\) is the initial state distribution. Eq. (18) is known as the Bellman Flow Constraints (Lee et al., 2020, Prop. 5) and is concave in \(\rho\) since the first term is linear and the second term is concave in \(\rho\). Then the primal and dual solutions satisfy KKT conditions sufficiently and necessarily. Following (Lee et al., 2020, Appendix D.2), we define the Lagrangian objective as
\[\begin{split}\mathcal{L}:=&\sum_{s,a}\rho(s,a)\sum _{s^{\prime}}r(s,a)P(s^{\prime}|s,a)-D^{q^{*}}_{KL}(\rho\,|\,\nu)+\sum_{s,a} \lambda(s,a)\rho(s,a)\\ &+\sum_{s}\zeta(s)\left(d(s)+\sum_{s^{\prime},a^{\prime}}P(s|s^{ \prime},a^{\prime})\rho(s^{\prime},a^{\prime})-\sum_{a}\rho(s,a)\right)\end{split}\]
where \(\lambda(s,a)\) and \(\zeta(s)\) are dual variables for nonnegativity and Bellman flow constraints. The KKT conditions are:
\[\begin{split}\forall s,a,\quad\rho^{*}(s,a)&\geq 0,\\ d(s)+\sum_{s^{\prime},a^{\prime}}P(s|s^{\prime},a^{\prime})\rho^{*} (s^{\prime},a^{\prime})-\sum_{a}\rho^{*}(s,a)&=0,\\ \lambda^{*}(s,a)&\leq 0,\quad\lambda^{*}(s,a)\rho^{*} (s,a)&=0,\end{split}\]
\[\begin{split} 0=\sum_{s^{\prime}}r(s,a)P(s^{\prime}|s,a)+\gamma\sum_{s^{ \prime}}\zeta^{*}(s^{\prime})P(s^{\prime}|s,a)-\zeta^{*}(s)+\lambda^{*}(s,a)- \frac{\partial D^{q^{*}}_{KL}(\rho^{*}\,|\,\nu)}{\partial\rho(s,a)},\\ \text{where }-\frac{\partial D^{q^{*}}_{KL}(\rho^{*}\,|\,\nu)}{ \partial\rho(s,a)}=-\ln_{q^{*}}\frac{\rho^{*}(s,a)\sum_{a^{\prime}}\nu(s,a^{ \prime})}{\nu(s,a)\sum_{a^{\prime}}\rho^{*}(s,a^{\prime})}-\left(\frac{\rho^{ *}(s,a)\sum_{a^{\prime}}\nu(s,a^{\prime})}{\nu(s,a)\sum_{a^{\prime}}\rho^{*}( s,a^{\prime})}\right)^{q^{*}-1}\\ +\sum_{a}\left(\frac{\rho^{*}(s,a)}{\sum_{a^{\prime}}\rho^{*}(s,a ^{\prime})}\right)^{q^{*}}\left(\frac{\sum_{a^{\prime}}\nu(s,a)}{\nu(s,a)} \right)^{q^{*}-1}.\end{split}\]
The dual variable \(\zeta^{*}(s)\) can be shown to equal to the optimal state value function \(V^{*}(s)\) following (Lee et al., 2020), and \(\lambda^{*}(s,a)=0\) whenever \(\rho^{*}(s,a)>0\).
By noticing that \(x^{q^{*}-1}=(q^{*}-1)\ln_{q^{*}}x+1\), we can show that \(-\frac{\partial D^{q^{*}}_{KL}(\rho^{*}\,|\,\nu)}{\partial\rho(s,a)}=-q^{*}\ln _{q^{*}}\frac{\rho^{*}(s,a)\sum_{a^{\prime}}\rho^{*}(s,a^{\prime})}{\nu(s,a) \sum_{a^{\prime}}\rho^{*}(s,a^{\prime})}-1+\sum_{a}\left(\frac{\rho^{*}(s,a)}{ \sum_{a^{\prime}}\rho^{*}(s,a^{\prime})}\right)^{q^{*}}\left(\frac{\sum_{a^{ \prime}}\nu(s,a)}{\nu(s,a)}\right)^{q^{*}-1}\). Substituting \(\zeta^{*}(s)=V^{*}(s)\), \(\pi^{*}(a|s)=\frac{\rho^{*}(s,a)}{\sum_{a^{\prime}}\rho^{*}(s,a)}\), \(\mu^{*}(a|s)=\frac{\nu^{*}(s,a)}{\sum_{a^{\prime}}\rho^{*}(s,a)}\) into the above KKT condition and leverage the equality \(Q^{*}(s,a)=r(s,a)+\mathbb{E}_{s^{\prime}\sim P}[\gamma\zeta^{*}(s^{\prime})]\) we have:
\[\begin{split}& Q^{*}(s,a)-V^{*}(s)-q^{*}\ln_{q^{*}}\frac{\pi(a|s )}{\mu(a|s)}-1+\sum_{a^{\prime}}\pi(a|s)\left(\frac{\pi(a|s)}{\mu(a|s)} \right)^{q^{*}-1}=0\\ &\Leftrightarrow\pi^{*}(a|s)=\mu(a|s)\exp_{q^{*}}\left(\frac{Q^{* }(s,a)}{q^{*}}-\frac{V^{*}(s)+1-\sum_{a^{\prime}}\pi(a|s)\left(\frac{\pi(a|s )}{\mu(a|s)}\right)^{q^{*}-1}}{q^{*}}\right).\end{split}\]
By comparing it to the maximum Tsallis entropy policy (Lee et al., 2020, Eq.(49)) we see the only difference lies in the baseline term \(\mu(a|s)^{-(q^{*}-1)}\), which is expected since we are exploiting Tsallis KL regularization. Let us define the normalization function as
\[\psi\left(\frac{Q^{*}(s,\cdot)}{q^{*}}\right)=\frac{V^{*}(s)+1-\sum_{a}\pi(a|s )\left(\frac{\pi(a|s)}{\mu(a|s)}\right)^{q^{*}-1}}{q^{*}},\]then we can write the policy as
\[\pi^{*}(a|s)=\mu(a|s)\exp_{q^{*}}\left(\frac{Q^{*}(s,a)}{q^{*}}-\psi\left(\frac{Q^{ *}(s,\cdot)}{q^{*}}\right)\right).\]
In a way similar to KL regularized policies, at \(k+1\)-th update, take \(\pi^{*}=\pi_{k+1},\mu=\pi_{k}\) and \(Q^{*}=Q_{k}\), we write \(\pi_{k+1}\propto\pi_{k}\exp_{q}Q_{k}\) since the normalization function does not depend on actions. We ignored the scaling constant \(q^{*}\) and regularization coefficient. Hence one can now expand Tsallis KL policies as:
\[\pi_{k+1}\propto\pi_{k}\exp_{q^{*}}\left(Q_{k}\right)\propto\pi_{k-1}\exp_{q^{* }}\left(Q_{k-1}\right)\exp_{q^{*}}\left(Q_{k}\right)\propto\cdots\propto\exp_{ q^{*}}Q_{1}\cdots\exp_{q^{*}}Q_{k},\]
which proved the first part of Eq. (7).
### Tsallis KL Policies Do More than Average
We now show the second part of Eq. (7), which stated that the Tsallis KL policies do more than average. This follows from the following lemma:
**Lemma 2** (Eq. (25) of [Yamano, 2002]).: \[\left(\exp_{q}x_{1}\ldots\exp_{q}x_{n}\right)^{1-q}=\exp_{q}\left(\sum_{j=1} ^{k}x_{j}\right)^{1-q}+\sum_{j=2}^{k}(1-q)^{j}\sum_{i_{1}=1<\cdots<i_{j}}^{k} x_{i_{1}}\cdots x_{i_{j}}.\] (19)
However, the mismatch between the base \(q\) and the exponent \(1-q\) is inconvenient. We exploit the \(q=2-q^{*}\) duality to show this property holds for \(q^{*}\) as well:
\[\left(\exp_{q^{*}}x\cdot\exp_{q^{*}}y\right)^{q^{*}-1}=\left[1+(q^ {*}-1)x\right]_{+}\cdot\left[1+(q^{*}-1)y\right]_{+}\] \[=\left[1+(q^{*}-1)x+(q^{*}-1)y+(q^{*}-1)^{2}xy\right]_{+}\] \[=\exp_{q}(x+y)^{q^{*}-1}+(q^{*}-1)^{2}xy.\]
Now since we proved the two-point property for \(q^{*}\), by the same induction steps in [Yamano, 2002, Eq. (25)] we conclude the proof. The weighted average part Eq. (8) comes immediately from [Suyari et al., 2020, Eq.(18)].
## Appendix E Implementation Details
We list the hyperparameters for Gym environments in Table 1. The epsilon threshold is fixed at \(0.01\) from the beginning of learning. FC \(n\) refers to the fully connected layer with \(n\) activation units.
The Q-network uses 3 convolutional layers. The epsilon greedy threshold is initialized at 1.0 and gradually decays to 0.01 at the end of first 10% of learning. We run the algorithms with the swept hyperparameters for full \(5\times 10^{7}\) steps on the selected two Atari environments to pick the best hyperparameters.
We show in Figure 6 the performance of \(\text{MVI}(q)\) on Cartpole-v1 and Acrobot-v1, and the full learning curves of MVI(\(q\)) on the Atari games in Figure 7. Figures 8 and 9 show the full learning curves of Tsallis-VI.
Figure 6: (Left) \(\text{MVI}(q)\) and \(\text{MVI}\) with logarithm \(\ln\pi\) simply replaced to \(\ln_{q}\pi\) on Cartpole-v1, when \(q=2\). The results are averaged over 50 independent runs. The flat learning curve is due to the pseudo-additivity. (Right) \(\text{MVI}(q)\) on Acrobot-v1 with different \(q\) choices. Each \(q\) is independently fine-tuned. The black bar stands for 95% confidence interval, averaged over 50 independent runs.
Figure 7: Learning curves of \(\text{MVI}(q)\) and \(\text{M-VI}\) on the selected Atari games.
Figure 8: Learning curves of MVI(\(q\)) and Tsallis-VI on the selected Atari games.
Figure 9: (cont’d) MVI(\(q\)) and Tsallis-VI on the selected Atari games.
\begin{table}
\begin{tabular}{l l l l} \hline Network Parameter & Value & Algorithm Parameter & Value \\ \hline \(T\) (total steps) & \(5\times 10^{5}\) & \(\gamma\) (discount rate) & 0.99 \\ \(C\) (interaction period) & 4 & \(\epsilon\) (epsilon greedy threshold) & 0.01 \\ \(|B|\) (buffer size) & \(5\times 10^{4}\) & \(\tau\) (Tsallis entropy coefficient) & 0.03 \\ \(B_{t}\) (batch size) & 128 & \(\alpha\) (advantage coefficient) & 0.9 \\ \(I\) (update period) & \(100\) (Car.) / \(2500\) (Acro.) & & \\ Q-network architecture & FC512 - FC512 & & \\ activation units & ReLU & & \\ optimizer & Adam & & \\ optimizer learning rate & \(10^{-3}\) & & \\ \hline \end{tabular}
\end{table}
Table 1: Parameters used for Gym.
\begin{table}
\begin{tabular}{l l l} \hline Network Parameter & Value & Algorithmic Parameter & Value \\ \hline \(T\) (total steps) & \(5\times 10^{4}\) & \(\gamma\) (discount rate) & 0.99 \\ \(C\) (interaction period) & 4 & \(\tau_{\texttt{Pall}(q)}\) ( MVI(q) entropy coefficient) & 10 \\ \(|B|\) (buffer size) & \(1\times 10^{6}\) & \(\alpha_{\texttt{PMI}(q)}\) ( MVI(q) advantage coefficient) & 0.9 \\ \(B_{t}\) (batch size) & 32 & \(\tau_{\texttt{Tasallis}}\) (Tsallis-VI entropy coef.) & 10 \\ \(I\) (update period) & \(8000\) & \(\alpha_{\texttt{P-VI}}\) (M-VI advantage coefficient) & 0.9 \\ activation units & ReLU & \(\tau_{\texttt{P-VI}}\) (M-VI entropy coefficient) & 0.03 \\ optimizer & Adam & \(\epsilon\) (epsilon greedy threshold) & \(1.0\to 0.01|_{10\%}\) \\ optimizer learning rate & \(10^{-4}\) & & \\ Q-network architecture & \(\text{Conv}_{8,8}^{4}32\) - \(\text{Conv}_{4,4}^{2}64\) - \(\text{Conv}_{3,3}^{1}64\) - FC512 - FC & & \\ \hline \end{tabular}
\end{table}
Table 2: Parameters used for Atari games. | ## Review
### Summary
This paper explores the application of Tsallis KL divergence as a regularization method for reinforcement learning algorithms. It extends the existing Munchausen Value Iteration (MVI) to incorporate this generalized form of KL divergence, referred to as MVI(q). The authors provide both theoretical foundations and empirical evaluations, demonstrating substantial performance gains across various Atari environments. Despite the innovative approach, the paper has been noted for lacking extensive empirical comparisons with other established algorithms, leading to questions about the robustness of its claims. Overall, the work is recognized for opening new avenues in reinforcement learning research, albeit with some limitations in its empirical evaluation.
### Strengths
- Well-motivated and clearly written introduction to Tsallis KL divergence and its applications.
- Strong theoretical foundations establishing the feasibility of using Tsallis KL instead of standard KL.
- Significant empirical results showing improvements over MVI across various Atari environments.
- Illustrations and figures effectively clarify the theoretical concepts presented.
### Weaknesses
- The paper does not adequately explain the significance of using Tsallis KL divergence in policy reweighting.
- Limited empirical evaluation with few baselines; lacks comparisons with standard reinforcement learning algorithms such as SAC and TRPO.
- Insufficient depth in explaining how MVI(q) improves performance and the mechanisms behind its exploration-exploitation characteristics.
- No clear theoretical justification for the advantages of Tsallis regularization in the context of value iteration.
### Questions
- How does the choice of the parameter q impact performance across different environments?
- What would happen if the regularization coefficient was adjusted? How does this affect the exploration-exploitation balance?
- Can the authors provide more empirical evidence or case studies to support the claims of performance improvement?
- What are the implications of only considering q > 1 in their regularization approach?
### Soundness
**Score:** 3
**Description:** 3 = good: The theoretical foundations are solid, but the lack of comprehensive empirical validation raises concerns about the overall soundness of the claims.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-structured and clear, but some areas could benefit from enhanced explanations and deeper discussions of key concepts.
### Contribution
**Score:** 2
**Description:** 2 = fair: While the extension of Tsallis KL divergence to reinforcement learning is interesting, the contribution to the field is viewed as limited due to the lack of comprehensive evaluation against established methods.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and demonstrates moderate-to-high impact, but its empirical evaluation requires strengthening.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper is deemed to be original and opens new research avenues in reinforcement learning with its exploration of Tsallis KL divergence. While it presents solid theoretical foundations and promising empirical results, the limited comparison with established algorithms and some unanswered questions about its contributions warrant attention. Nevertheless, its potential significance in the field supports an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Noether Embedding:
Efficient Learning of Temporal Regularities
Chi Gao\({}^{\dagger}\) Zidong Zhou\({}^{\dagger}\) Luping Shi\({}^{*}\)
Center for Brain-Inspired Computing Research,
Optical Memory National Engineering Research Center,
Tsinghua University - China Electronics Technology HIK Group
Co. Joint Research Center for Brain-Inspired Computing,
IDG / McGovern Institute for Brain Research at Tsinghua University,
Department of Precision Instrument,
Tsinghua University, Beijing 100084, China.
{gaoc20, zzd21}@mails.tsinghua.edu.cn
[email protected]
Equal contribution.Corresponding author.The code is publicly available at: [https://github.com/KevinGao7/Noether-Embedding](https://github.com/KevinGao7/Noether-Embedding).
###### Abstract
Learning to detect and encode temporal regularities (TRs) in events is a prerequisite for human-like intelligence. These regularities should be formed from limited event samples and stored as easily retrievable representations. Existing event embeddings, however, cannot effectively decode TR validity with well-trained vectors, let alone satisfy the efficiency requirements. We develop Noether Embedding (NE) as the first efficient TR learner with event embeddings. Specifically, NE possesses the intrinsic time-translation symmetries of TRs indicated as conserved local energies in the embedding space. This structural bias reduces the calculation of each TR validity to embedding each event sample, enabling NE to achieve data-efficient TR formation insensitive to sample size and time-efficient TR retrieval in constant time complexity. To comprehensively evaluate the TR learning capability of embedding models, we define complementary tasks of TR detection and TR query, formulate their evaluation metrics, and assess embeddings on classic ICEWS14, ICEWS18, and GDELT datasets. Our experiments demonstrate that NE consistently achieves about double the F1 scores for detecting valid TRs compared to classic embeddings, and it provides over ten times higher confidence scores for querying TR intervals. Additionally, we showcase NE's potential applications in social event prediction, personal decision-making, and memory-constrained scenarios.
## 1 Introduction
Recall the last time you went to a restaurant but waited for half an hour after ordering dishes. You probably knew something was wrong and may have called the waitperson for help. This behavior is guided by the temporal regularity (TR) of 'order dishes -(about 10 minutes)-> have meals' stored in your brain as schemas (Ghosh & Gilboa, 2014). Such TRs play a significant role in enabling humans to exhibit flexible out-of-distribution and systematic generalization abilities (Goyal & Bengio, 2022), and are directly learned from experience through a statistical accumulation of common event structures (Pudhiyidath et al., 2020), as shown in Figure 1. Since there exist enormous potential TRs due to a large number of event types and time intervals, detecting valid TRs from allpotential ones is therefore necessary, serving as a prerequisite capability for humans to form more complex event schemas in the brain to support downstream cognitive functions (Schapiro et al., 2017; McClelland et al., 1995). To attain human-level abilities in event scenarios, it is crucial to possess two fundamental properties when learning TRs. Firstly, TRs should be formed from even a limited number of experiences, which humans achieve since childhood (Pudhiyjdath et al., 2020). Secondly, TRs should be stored as representations that can be instantly retrieved given appropriate cues, which is a central feature of human memory (Chaudhuri & Fiete, 2016).
It remains an open problem to jointly achieve the data-efficient formation and time-efficient retrieval of 1-1 TRs with event embeddings, although embedding models have achieved outstanding performance in various tasks such as event completion and event prediction (Cai et al., 2022; Zhao, 2021). Classic event embeddings can only encode patterns such as inversion and composition and decode the fitted event occurrences for better performance in completion tasks (Xu et al., 2020; Wang et al., 2020; Messner et al., 2022). However, we aim to encode TRs by directly training the embeddings of each event sample and decode TRs by calculating well-trained embeddings without first decoding the fitted event occurrences. Our primary challenge in achieving such a counterintuitive function is to design the inductive bias that automatically integrates the event statistics of each potential TR over time.
Symmetry governs regularities in nature (Tanaka & Kunin, 2021), long before Noether proved the equivalence between symmetries and conservation laws in a physical system (Noether, 1918). Inspired by Noether's theorem, we develop Noether Embedding (NE) with the intrinsic time-translation symmetries of TRs indicated as conserved local energies in the embedding space. Calculating the event statistics of each potential TR is therefore converted to reducing the training loss of all event embeddings. This allows the direct revelation of TR validity by decoding the corresponding local energies through calculating well-trained embeddings after training convergence.
Contributions are twofold. Firstly, we develop NE which for the first time jointly achieves the data-efficient formation and time-efficient retrieval of TRs solely by embedding event samples. Secondly, we define complementary tasks of TR detection and TR query, formulate their evaluation metrics, and adopt classic datasets for evaluations, aiming at complete evaluations of embeddings' TR learning capabilities. Both our tasks and method generalize to arbitrary forms of structured events.
## 2 Problem Formalization
### Definitions
Temporal Regularity (TR)An event item \(q\) can generally be represented by the basic symbolic form of \((ev,t)\), where \(ev\) is the event type, and \(t\) is the discrete occurrence time. Building on the interpretations from cognitive science literature (Ghosh & Gilboa, 2014; Pudhiyjdath et al., 2020), we formally define TRs as temporal associations that remain invariant to time shifts:
\[(ev_{b},t)\rightarrow(ev_{h},t+\tau)\quad\forall t\in\mathbb{T}_{a} \tag{1}\]
\(ev_{b}\neq ev_{h},t,t+\tau\) respectively refer to the body and head event type and their occurrence time. \(\mathbb{T}_{a}\) refers to the complete collection of the absolute time points in the whole event set, and \(\tau\) denotes the relative time. Note that \(\tau=0\) indicates the synchrony of body and head event occurrences.
Figure 1: Illustration of TR learning. TRs indicate temporal associations invariant to time shifts, and are learned from event items through statistical accumulation.
For example, a TR could be: Whenever someone orders dishes in a restaurant, he or she will have meals in around ten minutes, where \(\tau=10\).
Metrics for TR ValiditySince real-world data often contain noise, we introduce an adaptive \(\triangle=[\tau(1-\eta),\tau(1+\eta)]\) to replace \(\tau\) when evaluating statistically learned temporal regularities from events. For an evaluated TR abbreviated as \(tr:(ev_{b},ev_{h},\tau,\eta)\), if an event \(q:(ev,t)\) satisfies that \(ev=ev_{b}\), we denote it as \(b(q;tr)\); if \(ev=ev_{h}\), we denote it as \(h(q;tr)\). We define the support of a TR as the number of event pairs respectively satisfying the body and head:
\[sp(tr)=n(b(q;tr)\wedge h(q^{\prime};tr)\wedge(t^{\prime}-t)\in\triangle(\tau, \eta)) \tag{2}\]
\(\wedge\) denotes 'and', \(\in\) denotes 'in', and \(q:(ev,t),q^{\prime}:(ev^{\prime},t^{\prime})\) refer to arbitrary two different events in the event set. Note that when calculating \(sp(tr)\), we can only count one event once and in one pair to avoid overcounting when events occur in consecutive periods.
We respectively define the standard confidence, head coverage, and general confidence of a TR as:
\[sc(tr)=\frac{sp(tr)}{n(b(tr))},\quad hc(tr)=\frac{sp(tr)}{n(h(tr))},\quad gc( tr)=\frac{2}{sc(tr)}+\frac{1}{hc(tr)} \tag{3}\]
\(n(b(tr)),n(h(tr))\) respectively represent the number of events \(q:(ev,t)\) satisfying \(ev=ev_{b},ev=ev_{h}\) in the event set. Here we borrow the metrics \(sc,hc,gc\) generally used in the rule mining field (Galarraga et al., 2015) to ensure fair and reasonable evaluations. We modify them by introducing an adaptive \(\tau\) with \(\eta\) to evaluate TR validity. Intuitively, standard confidence \(sc\) can be viewed as the probability that the head event will occur within time \(t+\triangle\) once a body event occurs at time \(t\), whose statistical sufficiency is supported by \(sp\). \(hc\) and \(gc\) can be interpreted similarly.
Above some \(sp\), the higher the \(gc\), the more valid a TR is. For a potential TR \(tr:(ev_{b},ev_{h},\tau,\eta)\) with fixed event types \(ev_{b},ev_{h}\) and ratio \(\eta\), its general confidence can be written as a function of \(\tau\): \(gc(\tau)\).
### Tasks
For a fixed \(\eta\) in \(\triangle\), a potential TR can be an arbitrary \((ev_{i},ev_{j},\tau)\), where \(i,j\in\mathbb{P},\tau\in\mathbb{T}_{r}\) (\(\mathbb{P}\) is the set of event types, \(\mathbb{T}_{r}\) is the set of relative time points). Therefore, we define the two complementary tasks below to comprehensively evaluate the TR learning capabilities of event embeddings.
TR DetectionFor a query \((ev_{b},ev_{h})\), its ground truth confidence \(gc_{g}=\max\limits_{\tau\in\mathbb{T}_{r}}gc(\tau)\). Queries whose \(gc_{g}\geq\theta\) are considered to imply valid TRs that reveal good regularities, while queries whose \(gc_{g}<\theta\) are considered to imply invalid TRs, where \(\theta\) is a fixed threshold.
The task of TR detection is to identify valid TRs from all tested ones. The model is expected to determine whether a query \((ev_{b},ev_{h})\) implies a valid TR. The F1 score of all judgments is reported.
TR QueryOnly queries that imply valid TRs are considered for testing. The ground truth relative time \(\tau_{g}\) is set as what maximizes \(gc(\tau)\) in computing \(gc_{g}\). The model outputs \(\tau^{\prime}\).
The task of TR query is to output the correct \(\tau^{\prime}=\tau_{g}\) for valid TRs. For each tested query \((ev_{b},ev_{h})\), a ratio \(r^{\prime}=\frac{gc(\tau^{\prime})}{gc_{g}}\) is computed. The averaged ratio \(r\) of all queries is reported.
## 3 Noether Embedding
Noether's theoremIn 1915, mathematician Emmy Noether proved one of the most fundamental theorems in theoretical physics: every differentiable symmetry of the action of a physical system with conservative forces has a corresponding conservation law. Specifically, time-translation symmetry corresponds to energy conservation.
TRs indicate time-translation symmetriesAn event pair \((ev_{b},t)\rightarrow(ev_{h},t+\tau)\) can be regarded as a mapping of the body and the head event type over \(t\) with a parameter \(\tau\). Therefore, ideal TRs indicate the invariance of such mappings under the transformation of time translation since the mapping holds \(\forall t\in\mathbb{T}_{a}\) for TRs.
Construct embeddings with conserved local energiesDenote \(\mathbf{q}(t;ev)\) as the embedding of each event sample, and \(g(\mathbf{q}(t;ev_{b}),\mathbf{q}(t+\tau;ev_{h}))\) as the local energy of a corresponding body-and-head event pair of a potential TR. If \(g\) is innately conserved, meaning that \(g=g(\tau;ev_{b},ev_{h})\) is invariant to \(t\), it indicates time-translation symmetry. We can then use the value of \(g\) to approximate TR validity after training each event embedding \(\mathbf{q}(t;ev)\). A more strict correspondence between NE variables and those in a physical system is shown in Appendix A.1.1.
Noether-Inspired structural biasesAccordingly, the enabling factors of NE can be summarized as follows: (i) the event embedding \(\mathbf{q}\) should be constructed to make each local energy \(g\) remain invariant to \(t\); (ii) the training loss should be designed to make the value of \(g\) approximate TR validity; (iii) the local energy \(g\) should be used as the decoding function. We thus construct NE as below.
### NE's Framework and Formulas
FrameworkAs illustrated in Figure 2, NE uses a distributed storage of complex vectors to learn TRs. At the encoding stage, the event items are converted to NE representations through embedding each event sample, where TR validity is automatically calculated and stored in the embedding space. At the decoding stage, given each query \((ev_{b},ev_{h})\), potential TRs are detected and queried by directly calculating their relevant embeddings to derive the corresponding decoding results \(g(\tau)\).
#### The Encoding Stage
\[\mathbf{q}(t;ev)=\mathbf{u}(ev)\circ\mathbf{r}(t) \tag{4}\]
In the event embedding above, \(\circ\) denotes the Hadmard (or element-wise) product. \(\mathbf{u}(ev),\mathbf{r}(t)\in\mathbb{C}^{d}\) are complex vectors where \(\mathbf{u}(ev)=\frac{\mathbf{v}(ev)}{||\mathbf{v}(ev)||}\), \(\mathbf{v}(ev)\in\mathbb{C}^{d}\), \(\mathbf{r}(t)=e^{i\mathbf{\omega}t}\), \(\mathbf{\omega}\in\mathbb{R}^{d}\), and \(d\) is the dimension of vectors. Each event type \(ev\) corresponds to an independently trainable vector of \(\mathbf{u}(ev)\), while \(\mathbf{\omega}\) is a global time vector for \(\mathbf{r}(t)\) of all event embeddings. Note that the \(d\)\(\omega\)s in \(\mathbf{\omega}\) are fixed to a certain distribution. The event embedding \(\mathbf{q}(t;ev)\) can thus be depicted as a rotation of event-type vectors \(\mathbf{u}(ev)\) by time \(\mathbf{r}(t)\) in the d-dimensional complex space.
The score function and loss function of each event sample are defined as follows:
\[f(t;ev)=\sum_{i=1}^{d}Real(\mathbf{q}(t;ev))_{i} \tag{5}\]
\[L(\xi;C_{p},C_{n})=(\frac{1}{\sqrt{d}}f(\xi)-C_{p})^{2}+\frac{1}{N}\sum(\frac{ 1}{\sqrt{d}}f(\xi^{\prime})-C_{n})^{2} \tag{6}\]
We denote \(\xi\) as a positive event sample and \(\xi^{\prime}\)s as its generated negative samples whose number is \(N\). For a positive sample \(\xi:(ev,t)\), its negative samples \(\xi^{\prime}\)s are the whole set of \(\{(ev,t^{\prime}\neq t)\}\)
Figure 2: Illustration of NE. Solid lines and purple graphs in the middle jointly represent the data flow of NE. The red graphs below and the blue ones above demonstrate cases of a TR with significant temporal regularities and a TR with no. It is shown that each decoding result reveals an integrated temporal association of its relevant event pairs separated in time.
where \(t^{\prime}\in\mathbb{T}_{a}\). \(C_{p},C_{n}\) are two different constants for positive and negative samples, respectively. \(C_{p},C_{n}\in[-1,1]\) because \(||\mathbf{u}(ev)||=1\), and we generally set \(C_{p}=1,C_{n}=0\).
Until the training converges, events and TRs form a distributed storage, which includes the global time vector of \(\mathbf{\omega}\) and trainable event type vectors \(\mathbf{u}(ev)\). At the decoding stage, no training but only the inference of vectors is conducted, as described below.
The Decoding StageThe decoding function for a query \((ev_{b},ev_{h})\) is:
\[g(\tau)=||\mathbf{u}_{b}-\mathbf{u}_{h}\circ\mathbf{r}(\tau)||^{2} \tag{7}\]
\(\mathbf{u}_{b},\mathbf{u}_{h}\) are the event type vectors respectively of \(ev_{b},ev_{h}\). \(\tau\) is traversed through set \(\mathbb{T}_{r}\) of the relative time points such as \(\mathbb{T}_{r}:\{-\tau_{max},...,0,...,\tau_{max}\}\) to plot the decoding results. \(\min\limits_{\tau\in\mathbb{T}_{r}}g(\tau)\) is computed, which is compared with a global threshold \(g_{th}\) to decide whether a potential TR is valid or not (for TR detection). For a valid TR, the \(\tau^{\prime}\) which minimizes \(g(\tau),\tau\in\mathbb{T}_{r}\) is selected as the model output of the relative time (for TR query).
Since \(\mathbf{r}(t+\tau)=\mathbf{r}(t)\circ\mathbf{r}(\tau)\), \(\mathbf{r}(t)\circ\overline{\mathbf{r}(t)}=\mathbf{1},\forall t\in\mathbb{T}_{a}\), the decoding function exactly indicates the conserved local energy: \(g(\tau)=||\mathbf{u}_{b}-\mathbf{u}_{h}\circ\mathbf{r}(\tau)||^{2}=||\mathbf{u}_{b}\circ\mathbf{r} (t)-\mathbf{u}_{h}\circ\mathbf{r}(t+\tau)||^{2}=||\mathbf{q}_{b}(t)-\mathbf{q}_{h}(t+\tau)||^{ 2},\forall t\in\mathbb{T}_{a}\). This indicates that \(g=g(\tau;ev_{b},ev_{h})\) is invariant to \(t\), and the conserved energy \(g\) of arbitrary two event samples is of a quadratic form in the embedding space.
### Why NE is Efficient
Briefly speaking, the Fourier-like representations enable NE's large-capacity storage for TR validity and event occurrences, serving as a prerequisite for NE to learn TR effectively. The Noether-inspired structural biases further leads to NE's efficient TR learning capabilities. Here we only illustrate the effect of the structural biases. The reasons why NE learns TR effectively is explained in Appendix A.2.
Data-efficiency by Encoding Translation SymmetriesThe invariance of \(g(\tau)\) to \(t\) means that the value of \(g(\tau)\) is determined by a competitive effect between sample pairs across time. Considering event sample pairs \((ev_{b},t),(ev_{h},t+\tau)\) with varying \(t\) in the embedding space, if a sample pair are both positive or both negative samples, they will decrease \(g(\tau)\) since their score functions are mapped to the same constant. Otherwise, they will increase \(g(\tau)\). Since \(g(\tau)\) is invariant to \(t\), \(g(\tau)\) is trained to balance these two forces. Therefore, the value of \(g(\tau)\) after training convergence is generally determined by the ratio of sample pairs with increasing or decreasing forces. \(g(\tau)\) is thus insensitive to the number of sample pairs that are both positive. This results in a data-efficient TR formation in NE, even with limited event occurrences from which to learn a TR.
Time-efficiency by Decoding Conserved EnergiesBy calculating \(g(\tau)=||\mathbf{u}_{b}-\mathbf{u}_{h}\circ\mathbf{r}(\tau)||^{2},\tau\in\mathbb{T}_{r}\), we enable efficient TR querying for each query \((ev_{b},ev_{h})\). This process has a constant time complexity since \(\mathbb{T}_{r}\) is an arbitrary user-selected set of relative time points, and the vector dimension \(d\) can be effectively handled using GPUs. Importantly, the querying time is independent of the number of events in the entire event set and the relevant event occurrences supporting the queried TR.
## 4 Experiment
It is important to highlight that in our evaluation, we initially compare NE and classic embeddings in terms of learning effectiveness (Section 4.2), without considering the efficiency requirements. As classic embeddings are shown to be ineffective in learning TRs, we then focus on demonstrating the learning efficiency of NE in Section 4.3.
### Experimental Setting
DatasetA temporal knowledge graph (TKG) comprises \((s,p,o,t)\) quadruples (Leblay and Chekol, 2018), where \(s,p,o,t\) represent the subject, predicate, object, and time. TKG is widely used in a variety of fields to represent global political events (Trivedi et al., 2017), financial incidents (Yang et al., 2019), user-item behaviors (Xiao et al., 2020), etc. Notably, ICEWS (Boschee et al., 2015) and GDELT (Leetaru and Schrodt, 2013) are two popular data sources for TKG research (Cai et al., 2022).
In our experiments, we use ICEWS14 and ICEWS18, the same as in (Han et al., 2020). They contain global political events in 2014 and 2018, and we denote them as D14 and D18, respectively. We also use the GDELT released by (Jin et al., 2019), which contains global social events from 2018/1/1 to 2018/1/31. In the experiments, we denote each \((s,p,o)\) triple as a specific event type \(ev\). It is worth mentioning that alternative settings, such as representing each predicate \(p\) as an event type \(ev\), are also applicable to our model.
Model ImplementationFor NE, \(d=400,C_{p}=1,C_{n}=0\) and the global time vector \(\mathbf{\omega}\) is set as \(\omega_{k}=\frac{(2\pi\times\omega_{max})^{\frac{k}{2}}-1}{T_{n}},k=0,1,...,d-1\), where \(T_{a}\) is the number of absolute time points, and \(\omega_{max}\) is a tunable hyperparameter set as 600. The training details are in Appendix B.1. We compare NE with six classic and vastly different TKG embeddings, DE-SimplE (Goel et al., 2020), TeRo (Xu et al., 2020b), ATiSE (Xu et al., 2020a), TNTComplex (Lacroix et al., 2020), BoxTE (Messner et al., 2022), and TASTER (Wang et al., 2023) with their original parameter settings and \(d=400\) the same as NE. We set the queried time set \(\mathbb{T}_{r}=\{1-T_{a},...,0,...,T_{a}-1\}\) at the decoding stage. In this way, only one query needs to be decoded between each \((ev_{i},ev_{j})\) and \((ev_{j},ev_{i})\), \(i\neq j\).
Adaptations. All baselines can only output their score function \(f^{\prime}(t)\) to decode event occurrences but cannot directly decode TR validity by \(g(\tau)\) as NE does. For comparison, we add an interface \(g^{\prime}(\tau)=\frac{\sum_{t,t+v\in\mathbb{T}_{a}^{\prime}}f^{\prime}_{b}(t)f ^{\prime}_{b}(t+\tau)}{\sum_{t\in\mathbb{T}_{a}}f^{\prime}_{b}(t)\sum_{t\in \mathbb{T}_{a}}f^{\prime}_{b}(t)}\) to these models that indirectly compute TR validity from the decoded event occurrences. We also evaluate NE with \(g^{\prime}(\tau)\) to show the validity of \(g^{\prime}(\tau)\) itself.
Evaluation DetailsWe select \((ev_{b},ev_{h})\)s for tests whose event occurrences of \(ev_{b}\) and \(ev_{h}\) are both \(\geq 2\). Otherwise, their supports (\(sp\) in Definition 2) will be too small for evaluating TR validity. We set \(\eta=0.1\) in \(\triangle\)s for strict evaluations and take the upper integer \(\triangle=[\tau-\lceil\tau\eta\rceil,\tau+\lceil\tau\eta\rceil]\). Note that in the extreme situation where body and head event occurrences both \(=2\), stochastic noises are still quite unlikely to interfere with the evaluation of TR validity since \(\eta=0.1\) is strict. Only forward or reverse queries (\((ev_{b},ev_{h})\)s whose \(s_{b}=s_{h},o_{b}=o_{h}\) or \(s_{b}=o_{h},o_{b}=s_{h}\) for \((s,p,o,t)\) quadruples) are tested for better interpretability without sacrificing generality. We set \(\theta=0.8\) to distinguish between valid and invalid TRs. The fact that TRs whose \(gc_{g}\sim 0.8\) are of a tiny percentage of all tested TRs adds to the rationality of such metrics. Ablations where \(\theta=0.7,0.9\) are in Appendix B.3.3.
In comparative studies with baselines 4.2, we report the highest F1 in TR detection by tuning the global threshold \(g_{th}\) (defined in Section 3.2) after embedding the whole event set to achieve full evaluations. We also remove TRs whose \(\tau_{g}=0\) for TR query because they account for most valid TRs but can hardly reflect the query difficulty. In NE's demonstration studies 4.3 4.4, we first use D14 to derive the global threshold \(g_{th}\) with the highest F1 in TR detection and then apply the same \(g_{th}\) for evaluating NE's performance in D18. This setting better demonstrates NE's practicality.
excellent performance of NE with \(g^{\prime}(\tau)\) indicates that \(g^{\prime}(\tau)\) itself is valid and thus guarantees fair comparisons. It is worth noting that NE is intrinsically different from all existing baselines because only NE can directly decode TR validity by \(g(\tau)\). In contrast, baselines can only decode event occurrences \(f^{\prime}(t)\) from which to indirectly calculate TR validity (such as by \(g^{\prime}(\tau)\)). Detailed results of precision and recall rates with error bars are reported in Appendix B.2.
DiscussionFigure 3(a) shows NE's decoding distribution \(g_{m}=\min\limits_{\tau\in\mathbb{T}_{r}}g(\tau)\) by each query's ground truth \(gc_{g}=\max\limits_{\tau\in\mathbb{T}_{r}}gc(\tau)\). It can be observed that the decoded conserved local energy accurately reveals the TR validity, which enables NE to successfully distinguish between valid and invalid TRs, as demonstrated in the case shown in Figure 3(b). Table 10 shows that baselines with \(g^{\prime}(\tau)\) still perform poorly. This is mainly because their \(f(t)\)s do not fill well. Specifically, Figure 3(c) illustrates that TNTComplEx has much noise in its \(f^{\prime}(t)\) compared to NE. The reason is that baseline models are generally designed to achieve good performance in the completion task and, therefore, over-apply the generalization capabilities of distributed representations, which hinders the fit of event occurrences.
### NE's Superior Learning Capabilities
Data EfficiencyIn Figure 4(a) and 4(b), we group TRs by their number \(n\) of relevant events. It is shown that NE accurately detects valid TRs and reports correct \(\tau\)s with only two event pairs as positive samples. This performance is comparable to humans, able to form temporally associative memories with minimal experience (Hudson et al., 1992; Bauer and Mandler, 1989; Schapiro et al., 2017). Note that the maximum group in the test has \(n>400\), while we only show groups with \(n\leq 40\) for display considerations.
Time EfficiencyAs explained in 3.3, NE's specific decoding function \(g(\tau)\) enables NE to retrieve TRs in a constant time complexity by vector computations. Calculating \(g^{\prime}(\tau)\) of classic embeddings, however, requires an additional time complexity relevant to \(T_{a}\) (the number of absolute time points).
Storage EfficiencyIn addition to the data-efficient and time-efficient properties, NE is, in fact, also a storage-efficient memory for TRs and event occurrences. Here is a detailed analysis:
Figure 4: Grouped performances of NE
Figure 3: Illustrations of NE or baselines
The storage of NE vectors, denoted as \(S(NE)\), can be calculated as follows: \(S(NE)=S(ev-vector)+S(time-vector)=2*N*d*64bit+2*d*64bit\). In our experiments, we used torch.LongTensor and N represents the number of event-type vectors. On the other hand, the storage of exact counting, denoted as \(S(CT)\), can be calculated as follows: \(S(CT)=S(TR)+S(event)=N^{2}*T_{a}*log_{2}(n/N)bit+N*(n/N)*log_{2}(T_{a})bit\). Here, \(n\) represents the number of all event occurrences. We reserved the storage accuracy of TR validity to effectively distinguish different values, resulting in approximately \(log_{2}(n/N)bit\) for each TR validity \((ev_{b},ev_{h},\tau)\).
For the ICEWS14 and ICEWS18 datasets, where \(d=400,T_{a}=365\), and \(n=90730,468558,N=50295,266631\), we calculated the compression ratio \(\frac{S(CT)}{S(NE)}\) of NE as 421 and 2336, respectively. This remarkable capability of NE can be attributed to the fact that it separately stores the information of TR validities \((ev_{b},ev_{h},\tau)\) using event-type vectors and a global time vector. By representing the common information of related TRs efficiently in memory, NE achieves a compression ratio that is approximately linear to the number of event types.
FlexibilityIn Figure 4(c), we group valid TRs by their golden \(\tau_{g}\)s. NE is shown to be flexible for learning TRs with \(\tau\)s varying broadly, comparable to humans with stable memory codes for various time intervals in the hippocampus (Mankin et al., 2012).
### NE's Wide Potential Use
Potential Use in Social Event PredictionIn D18, NE successfully reports 21010 valid TRs with an F1 score of 0.83. The encoding stage takes around 1 hour, while decoding only takes less than 1 minute. Cases are presented below and in Figure 6, with additional cases available in Appendix B.4.
(1) Case 1. Citizen (India) will _Reject_ to Narendra Modi (events in day 23, 122, and 168) in around 87 days whenever Narendra Modi _Appeal for diplomatic cooperation (such as policy support)_ to Citizen (India) (events in day 102, 200, and 264).
(2) Case 2. Russia will _Meet at a 'third' location_ with Ukraine (events in day 31 and 138) in around 136 days whenever Ukraine _Use conventional military force_ to Russia (events in day 161 and 288).
Since the TRs mined can generalize across time, the results above imply NE's potential use in both reliable and interpretable event predictions urgently needed in the big data era (Zhao, 2021).
Figure 5: Social event prediction Figure 6: Personal decision making
Potential Use in Personal Decision MakingConsider an intelligent machine that has visited a restaurant four times, with the occurrence time of each event episode used as input for NE, as shown in Figure 6(a). After training all events, the decoded TR validity \(\min\limits_{\tau\in\mathbb{T}_{r}}g(\tau)\) is transformed linearly and demonstrated in Figure 6(b). Despite the recurrent TRs on the slash that can be set aside, valid TRs such as 'order dishes -(about 10 minutes)-> have meals' are well distinguished from invalid ones such as 'order dishes -(about 5 minutes) -> look at the floor'.
Combining NE with front-end methods that take unstructured videos as input and output event items in the form of \((ev,t)\), and with back-end methods that use the decoded valid TRs to guide decision-making, NE has the potential to aid intelligent machines in surviving in changing environments by generalizing from little experience, just as human beings (Goyal & Bengio, 2022; Xue et al., 2018).
Potential Use in Memory-constrained ScenariosAs discussed in 4.3, NE approximately reduces the required storage space from \(M^{2}\) to \(M\) in our experimental settings, where \(M\) is the number of event types. Therefore, NE holds significant potential for applications in memory-constrained scenarios like the edge. This is important when \(M\) is large, which is usual in the big-data era.
### Ablation Studies
Here we demonstrate ablation studies of loss constants \(C_{p},C_{n}\) and time vector \(\mathbf{\omega}\), while those for dimension \(d\) and event type vector \(\mathbf{u}\) are shown in Appendix B.3.
Loss ConstantsTable 2 shows that NE performs optimally when \(C_{p}=1\) and \(C_{n}=0\). In fact, as \(C_{p}\) approaches 1, the \(g(\tau)\) of perfect TRs (\(gc(\tau)=1,\eta=1\)) is enforced to converge to its minimum \(0\). This global constant for all potential TRs in the embedding space allows \(g(\tau)\) to reveal TR validity better. In terms of \(C_{n}\), setting it to \(0\) results in negative samples occupying the largest embedding space. Since negative samples comprise most of all trained event samples, this setting improves the fit of negative samples and optimizes NE's performance.
Maximal Frequency CoefficientTable 3 shows that NE performs optimally with different values of \(\omega_{max}\), respectively, in the TR detection and query task.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{D14 (90730 events)} & \multicolumn{3}{c}{D18 (468558 events)} & \multicolumn{3}{c}{GDELT (2278405 events)} \\ \cline{2-11} \(\{\omega_{k}\}\) & linear & exponential & linear & exponential & linear & exponential \\ \hline TR Query (r) & 0.81 & 0.82 & 0.75 & 0.82 & 0.24 & 0.85 \\ \hline \hline \end{tabular}
\end{table}
Table 4: NE on the three datasets with increasing events, and with different distributions of \(\{\omega_{k}\}\)
\begin{table}
\begin{tabular}{c|c c c c c|c c c c} \hline \hline \multirow{2}{*}{\(C_{p}\)} & \multicolumn{3}{c|}{**TR Detection (F1)**} & \multicolumn{3}{c}{**TR Query (r)**} \\ & 0.4 & 0.2 & 0 & -0.2 & -0.4 & 0.4 & 0.2 & 0 & -0.2 & -0.4 \\ \hline
1 & 0.78 & 0.80 & 0.82 & 0.81 & 0.80 & 0.86 & 0.87 & 0.87 & 0.87 & 0.87 \\
0.8 & 0.64 & 0.67 & 0.79 & 0.80 & 0.80 & 0.85 & 0.86 & 0.86 & 0.87 & 0.86 \\
0.6 & 0.29 & 0.40 & 0.68 & 0.79 & 0.79 & 0.44 & 0.82 & 0.86 & 0.86 & 0.85 \\
0.4 & 0.18 & 0.31 & 0.49 & 0.76 & 0.79 & 0.12 & 0.35 & 0.84 & 0.84 & 0.81 \\
0.2 & 0.53 & 0.19 & 0.27 & 0.71 & 0.78 & 0.27 & 0.05 & 0.28 & 0.79 & 0.71 \\ \hline \hline \end{tabular}
\end{table}
Table 2: NE on ICEWS14 in different \(C_{p}\) and \(C_{n}\) settings
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \(\omega_{max}\) & 1 & 5 & 10 & 50 & 100 & 200 & 400 & 600 & 800 \\ \hline TR Query (r) & 0.15 & 0.93 & 0.92 & 0.85 & 0.85 & 0.85 & 0.85 & 0.85 & 0.85 & 0.85 \\ TR Detection (F1) & 0.22 & 0.45 & 0.55 & 0.56 & 0.54 & 0.53 & 0.53 & 0.51 & 0.53 \\ \hline \hline \end{tabular}
\end{table}
Table 3: NE on GDELT with different \(\omega_{max}\)sFrequency DistributionTable 4 shows that the larger the dataset, the more exponential distribution (\(\omega_{k}=\frac{(2\pi\times\omega_{max})^{\frac{k}{2}}-1}{I_{a}},k=0,1,...,d-1\)) surpasses linear distribution (\(\omega_{k}=\frac{2\pi\times k\times\omega_{max}}{d\times I_{a}},k=0,1,...,d-1\)) with the same parameters of \(d=400,\omega_{max}=600\). This suggests that real-world event occurrences depend more on low-frequency terms than high-frequency ones.
## 5 Related Work
Event Schema InductionIn the natural language processing (NLP) field, a significant research focus is on inducing event schemas from text (Huang et al., 2016; Li et al., 2021), including from language models (Dror et al., 2022), to support downstream applications such as search, question-answering, and recommendation (Guan et al., 2022). These NLP methods aim to organize known event regularities already given as priors for the extracting algorithm (such as extracting 'earthquake -> tsunami' from the sentence 'An earthquake causes a tsunami.') and focus on the schemas for use. In contrast, our tasks are designed to learn event regularities directly from experience without supervision. Specifically, the only prior models know is whether an event occurs, and models are required to detect valid TRs from all potential ones and report the correct relative time of valid TRs.
Temporal Rule MiningVarious temporal rules are mined from event sets to reveal regularities in industry, security, healthcare, etc (Segura-Delgado et al., 2020; Chen et al., 2007; Yoo and Shekhar, 2008; Namaki et al., 2017). Although the search methods used discover event regularities directly from events without supervision, both the mined rules and source events are generally stored as symbolic representations in list form. In contrast, by applying event embeddings, NE is a distributed and approximate memory for both TRs and event items. NE strikes a balance between storage efficiency and storage accuracy compared to exact counting, as detailedly discussed in 4.3.
Embedding Models of Structured DataWithin all embedding models of static and temporal knowledge graphs (Chen et al., 2020; Cai et al., 2022; Wang et al., 2020; Messner et al., 2022), three are most related to NE. RotatE (Sun et al., 2019) represents each entity and relationship as a complex vector to model relation patterns on knowledge graphs, and TeRo (Xu et al., 2020) represents time as a rotation of entities to model time relation patterns on temporal knowledge graphs. While both RotatE and TeRo introduce complex vectors for better completion performance, NE first explores using complex vectors for TR detection in events. In particular, the specific use of complex vectors in RotatE encodes inverse relations and in TeRo encodes asymmetric and reflexive relations. NE, instead, apply rotating complex unit vectors to encode time-translation symmetries of all potential TRs. IterE (Zhang et al., 2019) construct a decodable embedding model to discover rules for better knowledge graph completion performance. While we take functional inspiration from IterE that embedding models can jointly encode data and decode regularities, we focus on event data and define the new problems of TR detection and TR query. Specifically, while IterE focuses on discrete variables, NE focuses on the continuous variable of time that involves Fourier-like transformations.
To summarize, TR detection and TR query focus on achieving human-like schema learning capabilities rather than pursuing better support for NLP applications like the event schema induction task. Meanwhile, NE leverages the advantages of distributed representations over symbolic ones of search methods in temporal rule mining and is distinct from existing embedding models of structured data.
## 6 Conclusion
We have developed NE which for the first time enables data-efficient TR formation and time-efficient TR retrieval simply through embedding event samples. We have formally defined the tasks of TR detection and TR query to comprehensively evaluate the TR learning capabilities of embedding models. We have demonstrated NE's potential use in social event prediction, personal decision-making, and memory-constrained scenarios. We hope that we have facilitated the development of human-like event intelligence.
One limitation of NE is that when the vector dimension \(d\) is set much lower than the number of absolute time points \(T_{a}\), significant performance degradation of NE will occur as observed in the GDELT experiment. Future research is needed to improve this weakness. The privacy issues potentially brought about by TR detection and the causality of TRs should also be handled properly.
## Acknowledgements
We sincerely thank Rong Zhao, Hao Zheng, Dahu Feng, Lukai Li, Hui Zeng, Wenhao Zhou, Yu Du, Songchen Ma, and Faqiang Liu for their valuable comments and encouragements. This work was supported by the National Nature Science Foundation of China (nos. 62088102, 61836004) and the National Key Research and Development Program of China (no. 2021ZD0200300).
## References
* Bauer and Mandler (1989) Bauer, P. J. and Mandler, J. M. One thing follows another: Effects of temporal structure on 1-to 2-year-olds' recall of events. _Developmental psychology_, 25(2):197, 1989.
* Boschee et al. (2015) Boschee, E., Lautenschlager, J., O'Brien, S., Shellman, S., Starz, J., and Ward, M. Icews coded event data. _Harvard Dataverse_, 12, 2015.
* Bright et al. (2020) Bright, I. M., Meister, M. L., Cruzado, N. A., Tiganj, Z., Buffalo, E. A., and Howard, M. W. A temporal record of the past with a spectrum of time constants in the monkey entorhinal cortex. _Proceedings of the National Academy of Sciences_, 117(33):20274-20283, 2020.
* Cai et al. (2022) Cai, B., Xiang, Y., Gao, L., Zhang, H., Li, Y., and Li, J. Temporal knowledge graph completion: A survey. _arXiv preprint arXiv:2201.08236_, 2022.
* Chaudhuri and Fiete (2016) Chaudhuri, R. and Fiete, I. Computational principles of memory. _Nature neuroscience_, 19(3):394-403, 2016.
* Chen et al. (2007) Chen, M., Chen, S.-C., and Shyu, M.-L. Hierarchical temporal association mining for video event detection in video databases. In _2007 IEEE 23rd International Conference on Data Engineering Workshop_, pp. 137-145. IEEE, 2007.
* Chen et al. (2020) Chen, X., Jia, S., and Xiang, Y. A review: Knowledge reasoning over knowledge graph. _Expert Systems with Applications_, 141:112948, 2020.
* Dror et al. (2022) Dror, R., Wang, H., and Roth, D. Zero-shot on-the-fly event schema induction. _arXiv preprint arXiv:2210.06254_, 2022.
* Galarraga et al. (2015) Galarraga, L., Teflioudi, C., Hose, K., and Suchanek, F. M. Fast rule mining in ontological knowledge bases with amie \(+\). _The VLDB Journal_, 24(6):707-730, 2015.
* Ghosh and Gilboa (2014) Ghosh, V. E. and Gilboa, A. What is a memory schema? a historical perspective on current neuroscience literature. _Neuropsychologia_, 53:104-114, 2014.
* Goel et al. (2020) Goel, R., Kazemi, S. M., Brubaker, M., and Poupart, P. Diachronic embedding for temporal knowledge graph completion. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pp. 3988-3995, 2020.
* Goyal and Bengio (2022) Goyal, A. and Bengio, Y. Inductive biases for deep learning of higher-level cognition. _Proceedings of the Royal Society A_, 478(2266):20210068, 2022.
* Guan et al. (2022) Guan, S., Cheng, X., Bai, L., Zhang, F., Li, Z., Zeng, Y., Jin, X., and Guo, J. What is event knowledge graph: A survey. _IEEE Transactions on Knowledge and Data Engineering_, 2022.
* Han et al. (2020) Han, Z., Chen, P., Ma, Y., and Tresp, V. serte: Explainable reasoning on temporal knowledge graphs for forecasting future links. _arXiv preprint arXiv:2012.15537_, 2020.
* Howard et al. (2014) Howard, M. W., MacDonald, C. J., Tiganj, Z., Shankar, K. H., Du, Q., Hasselmo, M. E., and Eichenbaum, H. A unified mathematical framework for coding time, space, and sequences in the hippocampal region. _Journal of Neuroscience_, 34(13):4692-4707, 2014.
* Huang et al. (2016) Huang, L., Cassidy, T., Feng, X., Ji, H., Voss, C., Han, J., and Sil, A. Liberal event extraction and event schema induction. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pp. 258-268, 2016.
* Hudson et al. (1992) Hudson, J. A., Fivush, R., and Kuebli, J. Scripts and episodes: The development of event memory. _Applied Cognitive Psychology_, 6(6):483-505, 1992.
* Huang et al. (2016)Jin, W., Jiang, H., Qu, M., Chen, T., Zhang, C., Szekely, P., and Ren, X. Recurrent event network: Global structure inference over temporal knowledge graph. 2019.
* Lacroix et al. (2020) Lacroix, T., Obozinski, G., and Usunier, N. Tensor decompositions for temporal knowledge base completion. _arXiv preprint arXiv:2004.04926_, 2020.
* Leblay & Chekol (2018) Leblay, J. and Chekol, M. W. Deriving validity time in knowledge graph. In _Companion Proceedings of the The Web Conference 2018_, pp. 1771-1776, 2018.
* Leetaru & Schrodt (2013) Leetaru, K. and Schrodt, P. A. Gdelt: Global data on events, location, and tone, 1979-2012. In _ISA annual convention_, volume 2, pp. 1-49. Citeseer, 2013.
* Li et al. (2021) Li, M., Li, S., Wang, Z., Huang, L., Cho, K., Ji, H., Han, J., and Voss, C. The future is not one-dimensional: Complex event schema induction by graph modeling for event prediction. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pp. 5203-5215, 2021.
* Mankin et al. (2012) Mankin, E. A., Sparks, F. T., Slayyeh, B., Sutherland, R. J., Leutgeb, S., and Leutgeb, J. K. Neuronal code for extended time in the hippocampus. _Proceedings of the National Academy of Sciences_, 109(47):19462-19467, 2012.
* McClelland et al. (1995) McClelland, J. L., McNaughton, B. L., and O'Reilly, R. C. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. _Psychological review_, 102(3):419, 1995.
* Messner et al. (2022) Messner, J., Abboud, R., and Ceylan, I. I. Temporal knowledge graph completion using box embeddings. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pp. 7779-7787, 2022.
* Namaki et al. (2017) Namaki, M. H., Wu, Y., Song, Q., Lin, P., and Ge, T. Discovering graph temporal association rules. In _Proceedings of the 2017 ACM on Conference on Information and Knowledge Management_, pp. 1697-1706, 2017.
* Noether (1918) Noether, E. Invariante variationsprobleme. _Nachrichten von der Gesellschaft der Wissenschaften zu Gottingen, Mathematisch-Physikalische Klasse_, 1918:235-257, 1918. URL [http://eudml.org/doc/59024](http://eudml.org/doc/59024).
* Pudhiyidath et al. (2020) Pudhiyidath, A., Roome, H. E., Coughlin, C., Nguyen, K. V., and Preston, A. R. Developmental differences in temporal schema acquisition impact reasoning decisions. _Cognitive neuropsychology_, 37(1-2):25-45, 2020.
* Schapiro et al. (2017) Schapiro, A. C., Turk-Browne, N. B., Botvinick, M. M., and Norman, K. A. Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning. _Philosophical Transactions of the Royal Society B: Biological Sciences_, 372(1711):20160049, 2017.
* Segura-Delgado et al. (2020) Segura-Delgado, A., Gacto, M. J., Alcala, R., and Alcala-Fdez, J. Temporal association rule mining: An overview considering the time variable as an integral or implied component. _Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery_, 10(4):e1367, 2020.
* Sun et al. (2019) Sun, Z., Deng, Z.-H., Nie, J.-Y., and Tang, J. Rotate: Knowledge graph embedding by relational rotation in complex space. _arXiv preprint arXiv:1902.10197_, 2019.
* Tanaka & Kunin (2021) Tanaka, H. and Kunin, D. Noether's learning dynamics: Role of symmetry breaking in neural networks. _Advances in Neural Information Processing Systems_, 34:25646-25660, 2021.
* Trivedi et al. (2017) Trivedi, R., Dai, H., Wang, Y., and Song, L. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In _international conference on machine learning_, pp. 3462-3471. PMLR, 2017.
* Tsao et al. (2018) Tsao, A., Sugar, J., Lu, L., Wang, C., Knierim, J. J., Moser, M.-B., and Moser, E. I. Integrating time from experience in the lateral entorhinal cortex. _Nature_, 561(7721):57-62, 2018.
* Wang et al. (2020) Wang, J., Zhang, W., Chen, X., Lei, J., and Lai, X. 3drte: 3d rotation embedding in temporal knowledge graph. _IEEE Access_, 8:207515-207523, 2020.
* Wang et al. (2018)Wang, X., Lyu, S., Wang, X., Wu, X., and Chen, H. Temporal knowledge graph embedding via sparse transfer matrix. _Information Sciences_, 623:56-69, 2023.
* Xiao et al. (2020) Xiao, C., Sun, L., and Ji, W. Temporal knowledge graph incremental construction model for recommendation. In _Asia-pacific web (apweb) and web-age information management (waim) joint international conference on web and big data_, pp. 352-359. Springer, 2020.
* Xu et al. (2020a) Xu, C., Nayyeri, M., Alkhoury, F., Yazdi, H., and Lehmann, J. Temporal knowledge graph completion based on time series gaussian embedding. In _International Semantic Web Conference_, pp. 654-671. Springer, 2020a.
* Xu et al. (2020b) Xu, C., Nayyeri, M., Alkhoury, F., Yazdi, H. S., and Lehmann, J. Tero: A time-aware knowledge graph embedding via temporal rotation. _arXiv preprint arXiv:2010.01029_, 2020b.
* Xue et al. (2018) Xue, J.-R., Fang, J.-W., and Zhang, P. A survey of scene understanding by event reasoning in autonomous driving. _Machine Intelligence Research_, 15(3):249-266, 2018.
* Yang et al. (2019) Yang, Y., Wei, Z., Chen, Q., and Wu, L. Using external knowledge for financial event prediction based on graph neural networks. In _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_, pp. 2161-2164, 2019.
* Yoo & Shekhar (2008) Yoo, J. S. and Shekhar, S. Similarity-profiled temporal association mining. _IEEE Transactions on Knowledge and Data Engineering_, 21(8):1147-1161, 2008.
* Zhang et al. (2019) Zhang, W., Paudel, B., Wang, L., Chen, J., Zhu, H., Zhang, W., Bernstein, A., and Chen, H. Iteratively learning embeddings and rules for knowledge graph reasoning. In _The World Wide Web Conference_, pp. 2366-2377, 2019.
* Zhao (2021) Zhao, L. Event prediction in the big data era: A systematic survey. _ACM Computing Surveys (CSUR)_, 54(5):1-37, 2021.
## Appendix A Theoretical Illustrations
### Interdisciplinary Correspondences
#### a.1.1 Physical Correspondence
Considering each event type as a particle, the event embedding \(\mathbf{q}(t;ev)\) can be viewed as its generalized coordinate at time \(t\) in the d-dimensional complex embedding space. Suppose that each two particles \(ev_{b},ev_{h}\) are connected by a spring with the force linear to their distance, their potential energy can then be expressed as \(||\mathbf{q}_{b}(t)-\mathbf{q}_{h}(t+\tau)||^{2}\), where \(\tau\) is a parameter. As \(\mathbf{q}(t;ev)\) changes with time, its kinetic energy also changes and can be viewed as being driven by external nonconservative forces. Using the extended Noether's theorem, we can see that the work of external forces and the kinetic energy offset, leading to a conserved quantity of time-translation symmetry that only includes the potential energy instead of the total energy. The inductive bias of Noether Embedding exactly enforces that such local potential energies are innately conserved, which means that \(||\mathbf{q}_{m}(t_{k})-\mathbf{q}_{h}(t_{k}+\tau_{r})||^{2}=||\mathbf{q}_{m}(t_{s})-\mathbf{q} _{n}(t_{s}+\tau_{r})||^{2},\forall ev_{m},ev_{n}\in\mathbb{P},t_{k},t_{s}\in \mathbb{T}_{a},\tau_{r}\in\mathbb{T}_{r}\), where \(\mathbb{P},\mathbb{T}_{a},\mathbb{T}_{r}\) refer to the set of event types, absolute time points, and relative time points, respectively.
#### a.1.2 Biological Correspondence
The formation of TRs relies on two central functions, namely measurement and integration. The measurement function involves comparing the temporal distance between each two events. It is claimed that Laplace transformation exists in the hippocampus for representing time (Howard et al., 2014). Specifically, the population of temporal context cells (or referred to as ramping cells (Tsao et al., 2018)) in the lateral entorhinal cortex is discovered to code time with a variety of rate constants (Bright et al., 2020). \(\mathbf{r}(t)\) in NE functionally corresponds to such a cell population, which stores the rate distribution in vector \(\mathbf{\omega}\) of \(\mathbf{r}(t)\). The integration function aggregates event pairs separated in time to form TRs. Statistical learning for schema formation is reported to occur in the pathway connecting EC to CA1 in the hippocampus (Schapiro et al., 2017). In NE, we achieve the same function through the time-translation symmetry introduced by \(\mathbf{r}(t)\): \(\mathbf{r}(t+\tau)=\mathbf{r}(t)\circ\mathbf{r}(\tau)\), \(\mathbf{r}(t)\circ\overline{\mathbf{r}(t)}=\mathbf{1},\forall t\in\mathbb{T}_{a}\).
### Revelation of TR Validity
#### a.2.1 Effectiveness in TR Query
NE is an effective method for TR query due to its distributed storage of cross-correlations.
The score function \(f(t)\) describes the occurrences of each event type, where its value is mapped to \(C_{p}\) for time points implying event occurrences and \(C_{n}\) for those that do not. The support \(sp(\tau)\) of two event types \(ev_{b},ev_{h}\) can be then approximated by calculating the time-lagged cross-correlation \(R_{f_{b},f_{h}}(\tau)=\sum_{t,t+\tau\in\mathbb{T}_{a}}f_{b}(t)f_{h}(t+\tau)\) after training convergence.
Denote \(F(\omega)\) as the Fourier expansion of \(f(t)\). Awaraing that in NE, \(f(t)=\sum_{j=1}^{d}Real(\mathbf{u}\circ e^{i\omega t})_{j}\), we can see that the encoding stage enables a \(d\)-dimensional Fourier-like expansion for each \(f(t)\). The time vector \(\mathbf{\omega}\) provides the expansion basis and \(\mathbf{u}\) stores the coefficients as \(F(\omega)\)s.
By Fourier expansions, the corresponding correlation \(R^{\prime}_{f_{b},f_{h}}(\tau)=\int_{-\infty}^{+\infty}f_{b}(t)f_{h}(t+\tau) dt=\int_{-\infty}^{+\infty}\overline{F_{b}(\omega)}F_{h}(\omega)e^{i\omega \tau}d\omega\). In NE, \(g(\tau)=||\mathbf{u}_{b}-\mathbf{u}_{h}\circ\mathbf{r}(\tau)||^{2}=2-2\sum_{j=1}^{d}Real( \overline{\mathbf{u}_{b}}\circ\mathbf{u}_{h}\circ e^{i\omega\tau})_{j}\). Therefore, each \(g(\tau)\) reveals exactly the fitted time-lagged cross-correlation and is thus proportional to the support \(sp(\tau)\) of a TR. This guarantees NE's effectiveness in TR query.
#### a.2.2 Effectiveness in TR Detection
NotationsWithin all event samples trained for a potential TR, suppose that \(m\) event pairs share the same relative time of \(\tau\), where \(m_{1}\) pairs are both positive or both negative samples. In each remaining \(m_{2}=m-m_{1}\) pair, one is a positive sample while the other is a negative one. Denote that \(\mathbb{M}=\{1,2,...,m\},\mathbb{M}_{1}=\{i_{1},i_{2},...,i_{m_{1}}\},\mathbb{ M}_{2}=\{j_{1},j_{2},...,j_{m_{2}}\}\). There exists such least upper bound \(\eta\) that \((\frac{1}{\sqrt{d}}f(t;ev)-C)^{2}<\eta\) for the score functions of all \(2m\) samples, where \(C=C_{p},C_{n}\)Denote that \(\mathbf{a}_{i}=(cos(\mathbf{\omega}_{1}t_{i}),cos(\mathbf{\omega}_{2}t_{i}),...,cos(\mathbf{ \omega}_{d}t_{i}),sin(\mathbf{\omega}_{1}t_{i}),sin(\mathbf{\omega}_{2}t_{i}),...,sin( \mathbf{\omega}_{d}t_{i}))^{T}\), \(i\in\mathbb{M}\), where \(d\) is the dimension of NE vectors, \(t_{i}\) is time of the \(i\)th body event in \(m\) sample pairs. Denote that in the decoding function, \(\mathbf{u}_{b}-\mathbf{u}_{h}\circ\mathbf{r}(\tau)=\mathbf{\alpha}-i\mathbf{\beta}\) where \(\mathbf{\alpha},\mathbf{\beta}\) are both real vectors, and \(\mathbf{x}=(\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},...,\mathbf{\alpha}_{d},\mathbf{\beta}_{1}, \mathbf{\beta}_{2},...,\mathbf{\beta}_{d})^{T}\). Denote that \(c_{1}=\underset{i\in\mathbb{M}_{1}}{max}(cos(\mathbf{a}_{i},\mathbf{x}))^{2},c_{2}= \underset{j\in\mathbb{M}_{2}}{min}(cos(\mathbf{a}_{j},\mathbf{x}))^{2}\).
Using the trigonometric inequality, we can derive the following conclusions:
Theorem 1Within the \(m_{1}\) sample pairs, if \(c_{1}>0\), then \(g(\tau)<\frac{4\eta}{c_{1}}\).
Theorem 2Within the \(m_{2}\) sample pairs, if \(c_{2}>0\) and \(|C_{p}-C_{n}|>2\sqrt{\eta}\), then \(g(\tau)>\frac{(|C_{p}-C_{n}|-2\sqrt{\eta})^{2}}{c_{2}}\).
Implications\(d,T_{a},\mathbf{\omega},C_{p},C_{n}\) jointly result in the probability distributions of \(c_{1},c_{2}\), where \(c_{1},c_{2}>0\) is generally guaranteed by experimental settings. Two conclusions can then be drawn from the two theorems. (1) Convergence. We can see from theorem 1 that \(g(\tau)\to 0\) as \(\eta\to 0\). This implies that the \(g(\tau)\) of perfect TRs (\(gc(\tau)=1,\eta=1\)) will converge to its minimum of 0. (2) Competition. Since \(g(\tau)\) is invariant to \(t\), comparing these two theorems also tells us about the competing effect of well-trained sample pairs for the value of \(g(\tau)\), generally affected by the ratio \(\frac{m_{1}}{m_{2}}\). These two conclusions, along with the fact that \(g(\tau)\) is proportional to the support \(sp(\tau)\) (as illustrated in A.2.1), jointly make \(g(\tau)\) reveal the TR validity \(gc(\tau)\) and thus guarantees NE's effectiveness in TR detection.
It is worth noting that \(d\) is generally set as \(d>T_{a}\) to control the values of \(c_{1},c_{2}\), where \(T_{a}\) is the number of absolute time points of the whole event set. Otherwise, NE's TR detection performance will be interfered. For example, if \(d\ll T_{a}\), then \(d\ll m\) in most cases. It will thus be very likely that \(c_{1}=0\) so that theorem 1 can not be applied in NE.
## Appendix B Experimental Supplements
### Training Details
All models are trained for 100 epochs on each dataset using the Adagrad optimizer (with a learning rate of 0.01) and the StepLR learning rate scheduler (with a step size of 10 and gamma of 0.9). The experiments are conducted on a single GPU (GeForce RTX 3090). The hyper-parameters of NE are fine-tuned using a grid search to achieve relatively optimal results.
### Main Results with Error Bars
To ensure the reliability of the results, the experiments are repeated three times, and the error bars are derived accordingly. Table 5 and 6 show that the main results are quite stable with small error bars. Note that the precision and recall rates in Table 6 correspond exactly to the F1 scores in Table 5. The reason why the recall rate of NE is lower than that of TASTER is that we report the highest F1 score of each model in comparative studies by tuning their respective global threshold, denoted as \(g_{th}\). As the F1 score is calculated as the harmonic mean of precision and recall rates, TASTER achieves its highest F1 score by reporting many false positives, resulting in a relatively high recall rate but an extremely low precision rate.
### Additional Ablations
Unless otherwise specified, the experiments below adopt the original parameter settings as described in the main text.
#### b.3.1 Normalization of Event Type Vectors
Table 7 demonstrates that normalized event type vectors slightly outperform unnormalized ones in the TR detection task.
#### b.3.2 Dimension of Embedding Vectors
Table 8 and 9 demonstrate that \(d\) scarcely affects the performance of NE as long as it is more than some certain value. It is worth noting that NE's detection performance in GDELT may be further improved with larger values of \(d\)s, as illustrated in A.2.2, because \(T_{a}=2976\) in GDELT, which is much larger than the tested dimensions.
#### b.3.3 Threshold for Valid TRs
In the main text, the threshold for distinguishing valid and invalid TRs is chosen as \(\theta=0.8\). Here we report NE's results on D14 with varying \(\theta\)s in Table 10. It is shown that NE still has an overwhelming advantage over all baselines.
### Mined TRs
Additional cases of mined TRs are shown in Table 11 as below.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Embedding**} & \multicolumn{3}{c}{**Precision**} & \multicolumn{3}{c}{**Recall**} \\ & D14 & D18 & GDELT & D14 & D18 & GDELT \\ \hline TNTComplEx & 0.22\(\pm\)0.01 & 0.11\(\pm\)0.00 & 0.04\(\pm\)0.00 & 0.33\(\pm\)0.02 & 0.50\(\pm\)0.01 & 1.00\(\pm\)0.01 \\ DE-SimplE & 0.16\(\pm\)0.00 & 0.14\(\pm\)0.00 & - & 0.35\(\pm\)0.03 & 0.33\(\pm\)0.00 & - \\ TASTER & 0.10\(\pm\)0.00 & 0.08\(\pm\)0.00 & 0.04\(\pm\)0.00 & 0.99\(\pm\)0.01 & 0.99\(\pm\)0.01 & 0.97\(\pm\) 0.03 \\ TeRo & 0.51\(\pm\)0.04 & 0.91\(\pm\)0.01 & 0.20\(\pm\)0.04 & 0.37\(\pm\)0.02 & 0.49\(\pm\)0.00 & 0.14\(\pm\)0.02 \\ BoxTE & 0.40\(\pm\)0.02 & 0.39\(\pm\)0.05 & 0.15\(\pm\)0.01 & 0.41\(\pm\)0.02 & 0.41\(\pm\)0.02 & 0.22\(\pm\)0.03 \\ ATISE & 0.35\(\pm\)0.01 & 0.49\(\pm\)0.03 & 0.15\(\pm\)0.01 & 0.47\(\pm\)0.01 & 0.40\(\pm\)0.00 & 0.21\(\pm\)0.01 \\ \hline NE with \(g^{\prime}(\tau)\) & 0.99\(\pm\)0.00 & 0.98\(\pm\)0.00 & 0.90\(\pm\)0.00 & 0.64\(\pm\)0.00 & 0.66\(\pm\)0.00 & 0.32\(\pm\)0.00 \\ NE with \(g(\tau)\) & 0.99\(\pm\)0.00 & 0.99\(\pm\)0.00 & 0.83\(\pm\)0.00 & 0.70\(\pm\)0.00 & 0.72\(\pm\)0.00 & 0.37\(\pm\)0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 6: TR Detection by precision and recall rates
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Embedding**} & \multicolumn{3}{c}{**TR Detection (F1)**} & \multicolumn{3}{c}{**TR Query (r)**} \\ & D14 & D18 & GDELT & D14 & D18 & GDELT \\ \hline TNTComplEx & 0.26\(\pm\)0.01 & 0.18\(\pm\)0.00 & 0.08\(\pm\)0.01 & 0.08\(\pm\)0.00 & 0.08\(\pm\)0.00 & 0.01\(\pm\)0.00 \\ DE-SimplE & 0.22\(\pm\)0.00 & 0.20\(\pm\)0.00 & - & 0.09\(\pm\)0.00 & 0.09\(\pm\)0.00 & - \\ TASTER & 0.18\(\pm\)0.00 & 0.15\(\pm\)0.00 & 0.08\(\pm\)0.00 & 0.09\(\pm\)0.00 & 0.09\(\pm\)0.00 & 0.00\(\pm\)0.00 \\ TeRo & 0.43\(\pm\)0.02 & 0.64\(\pm\)0.01 & 0.16\(\pm\)0.00 & 0.08\(\pm\)0.00 & 0.08\(\pm\)0.00 & 0.01\(\pm\)0.00 \\ BoxTE & 0.40\(\pm\)0.01 & 0.40\(\pm\)0.02 & 0.18\(\pm\)0.01 & 0.08\(\pm\)0.00 & 0.08\(\pm\)0.00 & 0.01\(\pm\)0.00 \\ ATISE & 0.40\(\pm\)0.01 & 0.44\(\pm\)0.01 & 0.18\(\pm\)0.01 & 0.08\(\pm\)0.00 & 0.08\(\pm\)0.00 & 0.01\(\pm\)0.00 \\ \hline NE with \(g^{\prime}(\tau)\) & 0.78\(\pm\)0.00 & 0.79\(\pm\)0.00 & 0.48\(\pm\)0.00 & 0.85\(\pm\)0.00 & 0.83\(\pm\)0.00 & 0.83\(\pm\)0.00 \\ NE with \(g(\tau)\) & **0.82\(\pm\)**0.00 & **0.83\(\pm\)**0.00 & **0.51\(\pm\)**0.00 & **0.87\(\pm\)**0.00 & **0.86\(\pm\)**0.00 & **0.85\(\pm\)**0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 5: TR detection by F1 scores and TR query by confidence ratios
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Embedding**} & \multicolumn{3}{c}{**Detection(F1)**} & \multicolumn{3}{c}{**Query(r)**} \\ & \(\theta\)=0.7 & \(\theta\)=0.8 & \(\theta\)=0.9 & \(\theta\)=0.7 & \(\theta\)=0.8 & \(\theta\)=0.9 \\ \hline TNTCompIEx & 0.29 & 0.26 & 0.25 & 0.07 & 0.08 & 0.06 \\ DE-SimplE & 0.28 & 0.22 & 0.21 & 0.09 & 0.09 & 0.21 \\ TASTER & 0.28 & 0.18 & 0.17 & 0.09 & 0.09 & 0.08 \\ TeRo & 0.35 & 0.43 & 0.40 & 0.08 & 0.08 & 0.06 \\ BoxTE & 0.38 & 0.40 & 0.41 & 0.08 & 0.18 & 0.06 \\ ATISE & 0.28 & 0.35 & 0.33 & 0.08 & 0.08 & 0.06 \\ \hline NE with \(g^{\prime}(\tau)\) & 0.63 & 0.78 & 0.80 & 0.81 & 0.85 & 0.86 \\ NE with \(g(\tau)\) & **0.71** & **0.82** & **0.84** & **0.87** & **0.87** & **0.87** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Statistical results with different thresholds \(\theta\) on ICEWS14
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(d\) & 100 & 200 & 300 & 400 & 500 \\ \hline TR Query (r) & 0.83 & 0.84 & 0.85 & 0.85 & 0.84 \\ TR Detection (F1) & 0.22 & 0.45 & 0.50 & 0.51 & 0.51 \\ \hline \hline \end{tabular}
\end{table}
Table 8: NE on GDELT with different dimensions
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(d\) & 50 & 100 & 200 & 400 & 600 & 800 \\ \hline TR Query (r) & 0.81 & 0.86 & 0.87 & 0.87 & 0.87 & 0.87 \\ TR Detection (F1) & 0.29 & 0.55 & 0.76 & 0.82 & 0.83 & 0.83 \\ \hline \hline \end{tabular}
\end{table}
Table 9: NE on ICEWS14 with different dimensions | ## Review
### Summary
This paper introduces Noether Embedding (NE) for efficiently detecting and retrieving temporal regularities (TRs) from event data. It defines two tasks, TR detection and TR query, alongside their evaluation metrics. The authors argue that existing models lack the capability to learn temporal regularities. NE is evaluated on benchmark datasets and demonstrates improved performance compared to traditional methods. The paper also discusses the significance of time-translation symmetries in its approach. However, concerns are raised regarding the novelty of the tasks, clarity of implementation details, and the overall originality of the contribution when compared to existing event embedding methods.
### Strengths
- The proposed temporal regularity problem is important, and many existing models lack TR learning capabilities.
- The introduction of NE provides a novel approach for encoding temporal regularities and facilitates efficient retrieval.
- The tasks of TR detection and TR query are well-defined with corresponding evaluation metrics.
- Experiments on benchmark datasets demonstrate the effectiveness of the proposed framework and its superiority over classic embeddings.
- The paper is well-written and presents a clear overview of the problem and solution.
### Weaknesses
- The novelty of the TR mining task is questionable, as it appears to extend existing methodologies rather than introduce a fundamentally new paradigm.
- The implementation details, particularly regarding event embeddings, are insufficiently clear for reproducibility.
- Existing event embedding methods are not adequately discussed in terms of their applicability to the proposed tasks.
- The connection between Noether's theorem and the proposed method lacks clarity and is poorly presented.
- Certain evaluation metrics and approximations used in the experiments may not be justified as the best comparisons.
### Questions
- What is the rationale behind the choice of different embedding strategies for event representation?
- How does the framework handle relative time conditioned on multiple events?
- Can the authors clarify the reasons for the differences in recall and confidence scores observed in the experiments?
- What is the significance of the predefined thresholds and parameters used in the detection process?
- Could the authors elaborate on the societal impacts and limitations of the proposed method?
### Soundness
**Score:** 2
**Description:** 2 = fair; the methodology has merit but is not thoroughly validated or lacks clarity in some aspects.
### Presentation
**Score:** 2
**Description:** 2 = fair; while the writing is generally clear, several sections require better explanation and organization for improved understanding.
### Contribution
**Score:** 2
**Description:** 2 = fair; the contribution is valuable but presents concerns regarding originality and the significance of the proposed tasks.
### Rating
**Score:** 5
**Description:** 5 = Borderline accept; the paper has technical strengths but also substantial weaknesses that require addressing before final acceptance.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a significant contribution to the understanding and mining of temporal regularities, with a novel approach using Noether embeddings. Despite some weaknesses in originality and clarity, the detailed responses from the authors and improvements made during the review process have persuaded reviewers to recommend acceptance. The results demonstrate a solid performance on benchmark datasets and hold potential for practical applications, warranting further exploration and refinement in future work.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Going beyond persistent homology
using persistent homology
Johanna Immonen
University of Helsinki
[email protected]
Equal contribution.
Amauri H. Souza
Aalto University
Federal Institute of Ceara
[email protected]
Vikas Garg
Aalto University
YaiYai Ltd
[email protected]
###### Abstract
Representational limits of message-passing graph neural networks (MP-GNNs), e.g., in terms of the Weisfeiler-Leman (WL) test for isomorphism, are well understood. Augmenting these graph models with topological features via persistent homology (PH) has gained prominence, but identifying the class of attributed graphs that PH can recognize remains open. We introduce a novel concept of color-separating sets to provide a complete resolution to this important problem. Specifically, we establish the necessary and sufficient conditions for distinguishing graphs based on the persistence of their connected components, obtained from filter functions on vertex and edge colors. Our constructions expose the limits of vertex- and edge-level PH, proving that neither category subsumes the other. Leveraging these theoretical insights, we propose RePHINE for learning topological features on graphs. RePHINE efficiently combines vertex- and edge-level PH, achieving a scheme that is provably more powerful than both. Integrating RePHINE into MP-GNNs boosts their expressive power, resulting in gains over standard PH on several benchmarks for graph classification.
## 1 Introduction
Topological data analysis (TDA) is a rapidly growing field that provides tools from algebraic topology for uncovering the _shape_ (or structure) of data, allowing for efficient feature extraction. Its flagship tool is persistent homology (PH) [8], which seeks to characterize topological invariants (e.g., connected components, loops) of an underlying manifold based on data samples. Notably, PH has been successfully applied in many scientific domains, including computer vision [17, 27], drug design [23], fluid dynamics [24], and material science [25].
For graphs, PH has been used to provide global topological signatures for graph-level prediction tasks [2, 12, 14, 33, 39] and act as local message modulators in graph neural networks (GNNs) for node-level tasks [4, 40]. By leveraging learnable filtration/vectorization maps, PH has also been integrated into neural networks as a building block in the end-to-end learning process [2, 3, 13, 15, 26]. These strategies allow us to exploit topological features to boost the predictive capabilities of graph models. However, in stark contrast with the developments on the representational power of GNNs [1, 11, 28, 29, 30, 34, 35, 37], the theoretical properties of PH on graphs remain much less explored. For instance, open questions include: Which graph properties can PH capture? What is the characterization of pairs of graphs that PH cannot separate? Can we improve the expressivity of PH on graphs?
In a recent work, Rieck [32] discusses the expressivity of PH on graphs in terms of the Weisfeiler-Leman (WL) hierarchy [36]. The paper shows that, given different k-WL colorings, there exists a filtration such that the corresponding persistence diagrams differ. This result provides a lower bound for the expressivity in terms of WL hierarchy, but it does not describe the class of graphs which can be distinguished via PH. In this paper, we aim to fully characterize this class of graphs.
We study the expressive power of PH on attributed (or colored) graphs, viewed as 1-dim simplicial complexes. We focus on the class of graph filtrations induced by functions on these colors. Importantly, such a class is rather general and reflects choices of popular methods (e.g., topological GNNs [15]). We first analyze the persistence of connected components obtained from vertex colors. Then, we extend our analysis to graphs with edge colors. To obtain upper bounds on the expressive power of color-based PH, we leverage the notion of separating/disconnecting sets. This allows us to establish the necessary and sufficient conditions for the distinguishability of two graphs from 0-dim persistence diagrams (topological descriptors). We also provide constructions that highlight the limits of vertex- and edge-color PH, proving that neither category subsumes the other.
Based on our insights, we present RePHINE (short for "**Re**fining **PH** by Incorporating **N**ode-color into **E**dge-based filtration"), a simple method that exploits a subtle interplay between vertex- and edge-level persistence information to improve the expressivity of color-based PH. Importantly, RePHINE can be easily integrated into GNN layers and incurs no computational burden to the standard approach. Experiments support our theoretical analysis and show the effectiveness of RePHINE on three synthetic and six real datasets. We also show that RePHINE can be flexibly adopted in different architectures and outperforms PersLay [2] -- a popular topological neural net.
In sum, **our contributions** are three-fold:
**(Theory)** We establish a series of theoretical results that characterize PH on graphs, including bounds on the expressivity of vertex- and edge-level approaches, the relationship between these approaches, and impossibility results for color-based PH -- as summarized in Figure 1.
**(Methodology)** We introduce a new topological descriptor (RePHINE) that is provably more expressive than standard 0-dim and 1-dim persistence diagrams.
**(Experiments)** We show that the improved expressivity of our approach also translates into gains in real-world graph classification problems.
## 2 Preliminaries
We consider arbitrary graphs \(G=(V,E,c,X)\) with vertices \(V=\{1,2,\ldots,n\}\), edges \(E\subseteq V\times V\) and a vertex-coloring function \(c:V\to X\), where \(X\) denotes a set of \(m\) colors or features \(\{x_{1},x_{2},\ldots,x_{m}\}\) such that each color \(x_{i}\in\mathbb{R}^{d}\). We say two graphs \(G=(V,E,c,X)\) and
Figure 1: Overview of our theoretical results.
\((V^{\prime},E^{\prime},c^{\prime},X^{\prime})\) are isomorphic (denoted by \(G\cong G^{\prime}\)) if there is a bijection \(g:V\to V^{\prime}\) such that \((v,w)\in E\) iff \((g(v),g(w))\in E^{\prime}\) and \(c=c^{\prime}\circ g\). Here, we also analyze settings where graphs have an edge-coloring function \(l\).
A _filtration_ of a graph \(G\) is a finite nested sequence of subgraphs of \(G\), that is, \(G_{1}\subseteq G_{2}\subseteq...\subseteq G\). Although the design of filtrations can be flexible [12], a typical choice consists of leveraging a vertex filter (or filtration) function \(f:V\to\mathbb{R}\) for which we can obtain a permutation \(\pi\) of \(n\) vertices such that \(f(\pi(1))\leq f(\pi(2))\cdots\leq f(\pi(n))\). Then, a filtration induced by \(f\) is an indexed set \(\{G_{f(\pi(i))}\}_{i=1}^{n}\) such that \(G_{f(\pi(i))}\subseteq G\) is the subgraph induced by the set of vertices \(V_{f(\pi(i))}=\{v\in V\mid f(v)\leq f(\pi(i))\}.\) Note that filtration functions which give the same permutation of vertices induce the same filtration. Persistent homology keeps track of the topological features of each subgraph in a filtration. For graphs \(G\), these features are either the number of connected components or independent cycles (i.e., 0- and 1-dim topological features, denoted respectively by the Betti numbers \(\beta_{G}^{0}\) and \(\beta_{G}^{1}\)) and can be efficiently computed using computational homology. In particular, if a topological feature first appears in \(G_{f(\pi(i))}\) and disappears in \(G_{f(\pi(j))}\), then we encode its persistence as a pair \((f(\pi(i)),f(\pi(j)))\); if a feature does not disappear in \(G_{f(\pi(n))}=G\), then its persistence is \((f(\pi(i)),\infty)\). The collection of all pairs forms a multiset that we call _persistence diagram_[5]. We use \(\mathcal{D}^{0}\) and \(\mathcal{D}^{1}\) to refer to the persistence diagrams for 0- and 1-dim topological features respectively. Appendix A provides a more detailed treatment for persistent homology.
Recent works have highlighted the importance of adopting injective vertex filter functions. Hofer et al. [13] show that injectivity of parameterized functions \(f_{\theta}:V\to\mathbb{R}\) is a condition for obtaining well-defined gradients with respect to the parameters \(\theta\), enabling end-to-end filtration learning. Also, Horn et al. [15] show that for any non-injective function, we can find an arbitrarily close injective one that is at least as powerful at distinguishing non-isomorphic graphs as the original (non-injective) function. However, Lemma 1 shows that filtrations induced by injective functions on vertices may result in inconsistent persistence diagrams; namely, different diagrams for isomorphic graphs.
**Lemma 1** (Injective vertex-based filtrations can generate inconsistent persistence diagrams).: _Consider persistence diagrams obtained from injective vertex filter functions. There are isomorphic graphs \(G\cong G^{\prime}\) such that their persistence diagrams are different, i.e., \(\mathcal{D}_{G}\neq\mathcal{D}_{G^{\prime}}\)._
To avoid inconsistent diagrams, we need to employ permutation equivariant filter functions -- see [32, Lemma 2]. Common choices include vertex degree [12], eigenvalues of the graph Laplacian [2], and GNN layers [13], which are permutation equivariant by construction. Another option is to define graph filtrations based on vertex/edge colors [15], which are also equivariant by design, i.e., if \(G\cong G^{\prime}\) with associated bijection \(g\), then \(c(v)=c^{\prime}(g(v))\ \forall v\in V\). Notably, color-based filtrations generalize the GNN-layers case since we could redefine vertex/edge-coloring functions to take the graph structure as an additional input. Thus, we now turn our attention to color-based filtrations.
Color-based filtrations.Let \(f:X\to\mathbb{R}\) be an injective function. Therefore, \(f\) must assign a strict total order for colors, i.e., there is a permutation \(\pi:\{1,\dots,m\}\to\{1,\dots,m\}\) such that \(f(x_{\pi(1)})<\cdots<f(x_{\pi(m)})\). We define the _vertex-color filtration_ induced by \(f\) as the indexed set \(\{G_{i}\}_{i=1}^{m}\) where \(G_{i}=(V_{i},E_{i},c_{i},X_{i})\), with \(X_{i}=\{x_{\pi(1)},x_{\pi(2)},\dots,x_{\pi(i)}\}\), \(V_{i}=\{v\in V\mid c(v)\in X_{i}\}\), \(E_{i}=\{(v,w)\in E\mid c(v)\in X_{i},c(w)\in X_{i}\}\), and \(c_{i}=\{(v,c(v))\mid v\in V_{i}\}\). Similarly, we can define the _edge-color filtration_ induced by \(f\) as \(\{G_{i}\}_{i=1}^{m}\) where \(G_{i}=(V,E_{i},l_{i},X_{i})\) with \(X_{i}=\{x_{\pi(1)},\dots,x_{\pi(i)}\}\), \(E_{i}=\{(v,w)\in E\mid l(v,w)\in X_{i}\}\), and \(l_{i}=\{((v,w),l(v,w))\mid(v,w)\in E_{i}\}\).
We denote the elements of a persistence diagram \(\mathcal{D}\) as pairs \((f(x^{(b)}),f(x^{(d)}))\), where \(x^{(b)},x^{(d)}\in X\) are the colors associated with the birth and death of a hole (topological feature) in a filtration induced by \(f(\cdot)\). In the following, we use the notation \(\{\!\{\cdot\}\!\}\) to denote multisets.
## 3 The power of 0-dim persistent homology under color-based filtrations
In this section, we analyze the representational power of persistent homology when adopting color-based filtrations. We focus on the persistence of connected components (0-dimensional holes). We separately discuss vertex-color (Section 3.1) and edge-color (Section 3.2) filtrations, and then compare these approaches in Section 3.3. Proofs for all Lemmas and Theorems are in Appendix B.
### Vertex-color filtrations
To help characterize the expressivity of persistent homology, we propose classifying persistence pairs \((f(x^{(b)}),f(x^{(d)}))\) as either _real holes, almost holes, or trivial holes_. In particular, if \(f(x^{(d)})\neq\infty\)and \(f(x^{(b)})\neq f(x^{(d)})\), we say the pair \((f(x^{(b)}),f(x^{(d)}))\) is an almost hole. If \(f(x^{(b)})=f(x^{(d)})\), the pair is called a trivial hole. Finally, we call \((f(x^{(b)}),f(x^{(d)}))\) a real hole if \(f(x^{(d)})=\infty\).
#### 3.1.1 Real holes as connected components
Real holes denote topological features that persist until a filtration reaches the entire graph. Thus, real holes from 0-dim persistence diagrams are associated with connected components, regardless of the filtration. Regarding the relationship between filtrations and real holes, Lemma 2 establishes the necessary and sufficient condition for the existence of a filtration that produces persistence diagrams with distinct real holes. Such a condition is associated with the notion of component-wise colors. Formally, let \(G\) and \(G^{\prime}\) be two graphs with connected components \(C_{1},\ldots,C_{k}\) and \(C^{\prime}_{1},\ldots,C^{\prime}_{k^{\prime}}\), respectively. Also, let \(X_{i}=\{c(v)\mid v\in V_{C_{i}}\}\) be the set of colors in \(C_{i}\) and, similarly, \(X^{\prime}_{i}\) be the colors in \(C^{\prime}_{i}\). We say that \(G\) and \(G^{\prime}\) have _distinct component-wise colors_ if \(\{X_{i}\}_{i=1}^{k}\neq\{X^{\prime}_{i}\}_{i=1}^{k^{\prime}}\).
**Lemma 2** (Equivalence between component-wise colors and real holes).: _Let \(G\) and \(G^{\prime}\) be two graphs. There exists some vertex-color filtration such that their persistence diagrams \(\mathcal{D}^{0}_{G}\) and \(\mathcal{D}^{0}_{G^{\prime}}\) have different multisets of real holes iff \(G\) and \(G^{\prime}\) have distinct component-wise colors._
#### 3.1.2 Almost holes as separating sets
Now we turn our attention to the characterization of almost holes. Our next result (Lemma 3) reveals the connection between almost holes and separating sets. Here, a _separating set_\(S\) of a graph \(G\) is a subset of its vertices whose removal disconnects _some_ connected component of \(G\).
**Lemma 3** (Almost holes and separating sets).: _Regarding the relationship between almost holes and separating sets, the following holds:_
1. _Let_ \((f(x^{(b)}),f(x^{(d)}))\) _be an almost hole from a vertex-color filtration. Then the set_ \(S=\{w\in V\mid f(c(w))\geq f(x^{(d)})\}\) _is a separating set of_ \(G\)_._
2. _Let_ \(S\) _be a separating set of_ \(G\) _that splits a connected component_ \(C\subseteq G\) _into_ \(k\) _components_ \(C_{1},C_{2},\ldots,C_{k}\)_. Then, there exists a filtration that produces_ \(k-1\) _almost holes if the set of colors of vertices in_ \(\cup_{i=1}^{k}V_{C_{i}}\) _is disjoint from those of the remaining vertices, i.e.,_ \(\{c(v)\mid v\in V\setminus\cup_{i=1}^{k}V_{C_{i}}\}\cap\{c(v)\mid v\in\cup_{i= 1}^{k}V_{C_{i}}\}=\emptyset\)_._
The relationship between almost holes and separating sets elucidated in Lemma 3 raises the question if we can use separating sets (obtained from colors) to compare almost holes across different graphs. The answer is no: even if the diagrams of two graphs only differ in their multisets of almost holes, the graphs might not have separating sets that split them into different numbers of components. For instance, consider the graphs in Figure 2, where numbers denote filter values. The persistence diagrams \(\mathcal{D}^{0}_{G}=\{(1,\infty),(2,3),(2,3),(3,3),(3,4),(4,4)\}\) and \(\mathcal{D}^{0}_{G^{\prime}}=\{(1,\infty),(2,3),(2,4),(3,3),(3,3),(4,4)\}\) only differ in almost holes. Still, we cannot pick colors whose removal of associated vertices would result in different numbers of components.
Next, we introduce the notion of color-separating sets (Definition 1). Importantly, Lemma 4 leverages this definition to characterize the graphs that can be distinguished based on almost holes. Specifically, it establishes that whenever the diagrams of two graphs differ in their multiset of almost holes, we can build a color-separating set.
**Definition 1** (Color-separating sets).: _A color-separating set for a pair of graphs \((G,G^{\prime})\) is a set of colors \(Q\) such that the subgraphs induced by \(V\backslash\{w\in V\mid c(w)\in Q\}\) and \(V^{\prime}\backslash\{w\in V^{\prime}\mid c^{\prime}(w)\in Q\}\) have distinct component-wise colors._
We note that when \(G\) and \(G^{\prime}\) have identical component-wise colors, the sets \(\{w\in V\mid c(w)\in Q\}\) and \(\{w\in V^{\prime}\mid c^{\prime}(w)\in Q\}\) induced by the color-separating set \(Q\) are separating sets for \(G\) and \(G^{\prime}\).
Figure 2: We cannot use color-based separating sets to compare almost holes across graphs. Although these filtrations produce different almost holes, there is no way to remove colors s.t. the resulting graphs have different numbers of components.
**Lemma 4** (**Distinct almost holes imply distinct color-separating sets)**.: _Let \(\mathcal{D}^{0}_{G}\), \(\mathcal{D}^{0}_{G^{\prime}}\) be persistence diagrams for \(G\) and \(G^{\prime}\). If the diagrams \(\mathcal{D}^{0}_{G}\), \(\mathcal{D}^{0}_{G^{\prime}}\) differ in their multisets of almost holes, then there is a color-separating set for \(G\) and \(G^{\prime}\)._
#### 3.1.3 Bounds on the expressivity of vertex-color persistent homology
Regardless of the filtration, vertex-color PH always allows counting the numbers of connected components and vertices of a graph. If all vertices have the same color, then we cannot have any expressive power beyond \(\beta^{0}\) and \(|V|\) -- when all vertices are added simultaneously, there cannot be almost holes as the finite living times of the holes are 0. Also, all real holes are identical, and we have \(\mathcal{D}^{0}=\{\!\!\{(1,\infty),\ldots,(1,\infty),(1,1),...,(1,1)\}\!\}\), with \(|\mathcal{D}^{0}|=|V|\).
For graphs with \(m\geq 1\) colors, Lemma 5 shows that sets of birth times correspond to vertex colors. As a consequence, if the multisets of vertex colors differ for graphs \(G\) and \(G^{\prime}\), then the corresponding persistence diagrams are also different in all filtrations.
**Lemma 5** (**Equivalence between birth times and vertex colors)**.: _There is a bijection between the multiset of birth times and the multiset of vertex colors in any vertex-color filtration._
From Lemma 5, we can recover the multiset of colors from the persistence diagram and, consequently, distinguish graphs with different multisets. However, persistent homology uses vertex colors as input, and we do not need persistence diagrams to construct or compare such multisets. This highlights the importance of death times to achieve expressivity beyond identifying vertex colors. In fact, for non-trivial cases, the expressivity highly depends on the choice of filtration.
We have discussed the importance of color-separating sets (Lemma 4) and component-wise vertex colors (Lemma 2). With these notions, Theorem 1 formalizes the limits of expressivity that may be achieved with suitable filtration and characterize which pairs of graphs can, at best, be distinguished by comparing their persistence diagrams. Here, we only consider pairs of graphs that cannot be distinguished by their multisets of colors, as this corresponds to a trivial case.
**Theorem 1** (**The expressive power of vertex-color filtrations)**.: _For any two graphs \(G\) and \(G^{\prime}\) with identical multisets of colors \(\{\!\!\{c(v):v\in V\}\!\}=\{\!\!\{c^{\prime}(v):v\in V^{\prime}\}\!\}\), there exists a filtration such that \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G}\), if and only if there is a color-separating set for \(G\) and \(G^{\prime}\)._
### Edge-color filtrations
We now consider the expressivity of 0-dim persistent homology obtained from edge-color filtrations. The persistence diagrams are constructed exactly the same way. However, in this case, all holes are born at the same time (all vertices appear in \(G_{0}\)). This implies that all real holes are identical. Also, the diagrams do not contain trivial holes since \(G_{0}\) does not have edges. All holes are either real holes or almost holes (of the form \((0,d)\)). We also note that persistence diagrams will always have almost holes unless the graph is edgeless.
Analogously to separating sets in vertex-color filtrations, Lemma 6 characterizes edge-based almost holes as _disconnecting sets_ -- a set of edges whose removal would increase the number of components.
**Lemma 6** (**Edge-based almost holes as disconnecting sets)**.: _Let \((0,f(x^{(d)}))\) be an almost hole from an edge-color filtration. Then \(S=\{e\in E\mid f(l(e))\geq f(x^{(d)})\}\) is a disconnecting set of \(G\)._
Lemma 6 tells us how to construct a disconnecting set from an almost hole. Now, suppose we are given a disconnecting set \(S\). Can we build an edge-color filtration for which \(S\) can be obtained from
Figure 3: (a) \(G\) and \(G^{\prime}\) differ in their multisets of colors, but no edge-color filtration can distinguish them. For instance, assume that \(f(\text{‘blue’})=1<2=f(\text{‘orange’})\). Then, \(\mathcal{D}^{0}_{G}=\mathcal{D}^{0}_{G^{\prime}}=\{\!\!\{(0,\infty),(0,1),(0,1),(0,1)\}\!\}\). The same holds for \(f(\text{‘blue’})>f(\text{‘orange’})\). (b) Graphs that have different disconnecting sets, but for which we can find filtrations that lead to identical diagrams.
an almost hole? In other words, can we obtain a diagram with an almost hole \((0,f(x^{(d)}))\) such that \(\{e\in E\mid f(l(e))\geq f(x^{(d)})\}\) is equal to \(S\)? \(\mathtt{Lemma\ 7}\) shows that if the colors of edges in \(S\) are distinct from those in \(E\setminus S\), then there is a filtration that induces a persistence diagram with an almost hole from which we can reconstruct \(S\).
**Lemma 7** (Reconstructing a disconnecting set).: _Let \(G=(V,E,l,X)\) be a graph and \(S\subseteq E\) be a disconnecting set for \(G\). If the set of colors of \(S\) is disjoint from that of \(E\setminus S\), then there exists a filtration such that \(S=\{e\in E\mid f(l(e))\geq f(x^{(d)})\}\) for an almost hole \((0,f(x^{(d)}))\in\mathcal{D}^{0}\)._
#### 3.2.1 Bounds on the expressivity of edge-color persistent homology
Similar to the vertex-color case, in any edge-color filtration, we have that \(|\mathcal{D}^{0}|=|V|\) and the number of real holes is \(\beta^{0}\). Also, the lowest expressivity is achieved when all edges have the same color. In this case, two graphs with different numbers of vertices or connected components have different persistence diagrams (and can be distinguished); otherwise, they cannot be distinguished.
We have seen in \(\mathtt{Lemma\ 5}\) that vertex-color filtrations encode colors as birth times. In contrast, birth times from edge-color filtrations are always trivially equal to zero. Thus, we cannot generalize \(\mathtt{Lemma\ 5}\) to edge-color filtrations. Instead, we can show there are graphs with different multisets of edge colors such that the graphs have the same persistence diagrams for any filtration (see Figure 3(a)).
Let us now consider lower limits for graphs with \(m>1\) edge colors. We can show that even if two graphs have different disconnecting sets (obtained from colors), there are filtrations that induce the same persistence diagrams. To see this behavior, consider the two graphs in Figure 3(b), where the deletion of blue edges disconnects one of the graphs but not the other. Although we can build an edge-color filtration that separates these graphs (i.e., \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\)), if we choose \(f(\text{'green'})=3,f(\text{'orange'})=2\), and \(f(\text{'blue'})=1\), we obtain \(\mathcal{D}^{0}_{G}=\mathcal{D}^{0}_{G^{\prime}}=\{(0,\infty),(0,1),(0,2),(0,2 ),(0,2),(0,2)\}\). Interestingly, even if two graphs have different sets of edge colors, we might still find filtrations that induce identical diagrams. The reason is that unlike vertex-color filtrations where trivial holes make sure that all vertices are represented in the diagrams, in edge-color filtrations there are no trivial holes. As a result, persistence diagrams from edge-color filtrations do not account for edges that do not lead to the disappearance of connected components.
\(\mathtt{Lemma\ 6}\) and \(\mathtt{Lemma\ 7}\) showed that edge-based almost holes can be characterized as disconnecting sets, somewhat analogously to vertex-based almost holes as separating sets. We complete the analogy by introducing the notion of color-disconnecting sets in \(\mathtt{Definition\ 2}\). We then use this notion to fully characterize the the expressive power of edge-color persistent homology in \(\mathtt{Theorem\ 2}\). More specifically, the existence of a color-disconnecting set between a given pair of graphs is a necessary and sufficient condition for distinguishing them based on \(0\)-dimensional persistence diagrams.
**Definition 2** (Color-disconnecting sets).: _A color-disconnecting set for a pair of graphs \((G,G^{\prime})\) is a set of colors \(Q\) such that if we remove the edges of colors in \(Q\) from \(G\) and \(G^{\prime}\), we obtain subgraphs with different numbers of connected components._
**Theorem 2** (The expressive power of edge-color filtrations).: _Consider two graphs \(G\) and \(G^{\prime}\). There exists an edge-color filtration such that \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\), if and only if there is a color-disconnecting set for \(G\) and \(G^{\prime}\)._
### Vertex-color versus edge-color filtrations
To compare vertex- and edge-color persistence diagrams, we consider graphs with vertex-coloring functions \(c(\cdot)\) from which we derive edge-coloring ones \(l(\cdot)\). In particular, for a graph \(G=(V,E,c,X)\), its edge-coloring function \(l:E\to X^{2}\) is defined as \(l(v,w)=\{c(v),c(w)\}\).
Figure 4: Illustration of graphs that cannot be distinguished based on (a) edge-color filtrations, (b) vertex-color filtrations, and (c) both vertex- and edge-color filtrations.
Recall that only vertex-color filtrations can \(1\)) encode multisets of colors and \(2\)) have real holes with different birth times. Naturally, we can find pairs of graphs \((G,G^{\prime})\) for which we can obtain \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\) from vertex-color filtrations, but not from edge-color ones. Consider the graphs in Figure 4(a). The vertex-color filtration \(f(\text{'blue'})=1,f(\text{'orange'})=2\) produces \(\mathcal{D}=\{\!\!\{(1,\infty),(1,\infty),(1,2),(2,2),(2,2)\}\!\!\}\) and \(\mathcal{D}^{\prime}=\{\!\!\{(1,\infty),(1,1),(2,\infty),(2,2),(2,2)\}\!\!\}\). However, there is no edge-color filtration that would tell them apart -- there are only two possible edge-color filtrations, leading to either \(\mathcal{D}=\{\!\!\{(0,\infty),(0,\infty),(0,1),(0,2),(0,2)\}\!\!\}=\mathcal{D} ^{\prime}\), or \(\mathcal{D}=\{\!\!\{(0,\infty),(0,\infty),(0,1),(0,1),(0,2)\}\!\!\}=\mathcal{D} ^{\prime}\).
We can also show that there are graphs that can be distinguished by edge-color filtrations but not by vertex-color ones. Intuitively, one can think of this as a result of edge colors being more fine-tuned. For instance, consider the graphs in Figure 4(b). We can separate these graphs using the function \(f(\text{'orange'})=1,f(\text{'blue-orange'})=2\), and \(f(\text{'blue'})=3\), which yields \(\mathcal{D}^{0}_{G}=\{\!\!\{(0,\infty),(0,1),(0,1),(0,2),(0,2),(0,3)\}\!\!\} \neq\{\!\!\{(0,\infty),(0,1),(0,1),(0,2),(0,2),(0,2)\}\!\!\}=\mathcal{D}^{0}_{G^ {\prime}}\). However, since there is no color-separating set for \(G\) and \(G^{\prime}\), by Theorem 1, we have that \(\mathcal{D}_{G}=\mathcal{D}_{G^{\prime}}\) for all vertex-color filtrations. Theorem 3 formalizes the idea that none of the classes of color-based filtrations subsumes the other. In addition, Figure 4(c) illustrates that there are very simple non-isomorphic graphs that PH under both vertex- and edge-color filtrations cannot distinguish.
**Theorem 3** (Edge-color vs. vertex-color filtrations).: _There exist non-isomorphic graphs that vertex-color filtrations can distinguish but edge-color filtrations cannot, and vice-versa._
## 4 Going beyond persistent homology
We now leverage the theoretical results in Section 3 to further boost the representational capability of persistent homology. In particular, we propose modifying edge-color persistence diagrams to account for structural information that is not captured via the original diagrams. We call the resulting approach RePHINE (Refining PH by incorporating node-color into edge-based filtration). Notably, RePHINE diagrams are not only provably more expressive than standard color-based ones but also make 1-dimensional topological features redundant. Additionally, we show how to integrate RePHINE into arbitrary GNN layers for graph-level prediction tasks.
Edge-color diagrams with missing holes.A major drawback of edge-color filtrations is that information about the multisets of (edge) colors is lost, i.e., it cannot be recovered from persistence diagrams. To reconstruct disconnecting sets, we need the edge-color permutation given by the filtration function and the number of edges -- both of which cannot be deduced from diagrams alone.
To fill this gap, we introduce the notion of _missing holes_. Conceptually, missing holes correspond to edges that are not associated with the disappearance of any connected component in a given filtration. By design, we set the birth time of missing holes to 1 -- this distinguishes them from real and almost holes, which have birth times equal to 0. The death time of a missing hole corresponds to the first filtration step \(f(x)\) that its corresponding edge color \(x\) appears. We note that missing holes correspond to cycles obtained from 1-dim persistence diagrams.
As an example, consider the edge-color filtration in Figure 5, which produces \(\mathcal{D}^{0}=\{\!\!\{(0,\infty),(0,1),(0,2),(0,2),(0,4)\}\!\!\}\). We note that the orange edge and one of the orange-green ones do not 'kill' any 0-dim hole. This results in the missing holes \((1,3)\) and \((1,4)\). Clearly, missing holes bring in additional expressivity as, e.g., it would be possible to distinguish graphs that only differ in the orange edge in Figure 5. Still, edge-color diagrams with missing holes are not more expressive than vertex-color ones. For instance, they cannot separate the two graphs in Figure 3(a).
Augmenting edge-color diagrams with vertex-color information.To improve the expressivity of persistent homology, a simple approach is to merge tuples obtained from independent vertex- and edge-color filtrations. However, this would double the computational cost while only allowing distinguishing pairs of graphs that could already be separated by one of the filtrations. Ideally, we would like to go beyond the union of vertex- and edge-color persistence diagrams.
As in Section 3.3, we consider graphs with edge colors obtained from vertex-coloring functions. Also, we assume that \(f_{v}\) and \(f_{e}\) are independent vertex- and edge-color filtration functions, respectively. We propose adding two new elements to the tuples of edge-color diagrams with missing holes. Our augmented tuple is \((b,d,\alpha,\gamma)\) where \(\alpha\) and \(\gamma\) are the additional terms. Recall that, in any edge-color filtration, \(G_{0}\) has \(|V|\) connected components. Then, we can associate real or almost holes of edge-color diagrams with vertices in \(G\). With this in mind, we define RePHINE diagrams as follows.
**Definition 3** (RePHINE diagram).: _The RePHINE diagram of a filtration on a graph \(G\) is a multiset of cardinality \(|V|+\beta_{G}^{1}\), with elements of form \((b,d,\alpha,\gamma)\). There are two cases:_
* _Case_ \(b=0\) _(real or almost holes). Now,_ \(b\) _and_ \(d\) _correspond to birth and death times of a component as in edge-color filtration. We set_ \(\alpha(w)=f_{v}(c(w))\) _and_ \(\gamma(w)=\min_{v\in\mathcal{N}(w)}f_{e}(\{\!\!\{c(w),c(v)\}\!\!\})\)_, where_ \(w\) _is the vertex that is associated with the almost or real hole. Vertices are matched with the diagram elements as follows: An almost hole (b,d) corresponds to an edge merging two connected components,_ \(T_{1},T_{2}\)_. Each of these connected components has exactly one vertex,_ \(w_{T_{1}}\) _or_ \(w_{T_{2}}\)_, which has not yet been associated with any element of the RePHINE diagram. Let_ \(w=\arg\max_{w^{\prime}\in\{w_{T_{1}},w_{T_{2}}\}}f_{v}(c(w^{\prime}))\) _, or if_ \(f_{v}(c(w_{T_{1}}))=f_{v}(c(w_{T_{2}}))\)_, then_ \(w=\arg\min_{w^{\prime}\in\{w_{T_{1}},w_{T_{2}}\}}\gamma(w^{\prime})\)_. The vertices that are associated with real holes are vertices that have not died after the last filtration step._
* _Case_ \(b=1\) _(missing holes). Here, the entry_ \(d\) _is the filtration value of an edge_ \(e\) _that did not kill a hole but produces a cycle that appears at the filtration step associated with adding the edge_ \(e\)_. The entries_ \(\alpha\) _and_ \(\gamma\) _take uninformative values (e.g., 0)._
Figure 5 provides an example of RePHINE diagrams. Further details of the procedure can be found in Appendix C. Notably, our scheme can be computed efficiently at the same cost as standard persistence diagrams and is consistent -- we obtain identical diagrams for any two isomorphic colored graphs.
**Theorem 4** (RePHINE is isomorphism invariant).: _Let \(G\), \(G^{\prime}\) be isomorphic graphs. Then, any edge-color and vertex-color filtrations produce identical RePHINE diagrams for \(G\) and \(G^{\prime}\)._
In addition, Theorem 5 shows that RePHINE diagrams are strictly more expressive than those from both vertex- and edge-color filtrations, including \(0\)- and \(1\)-dim topological features. Figure 4(c) provides an example of graphs that cannot be recognized by any color-based filtration, but for which we can obtain distinct RePHINE diagrams.
**Theorem 5** (RePHINE is strictly more expressive than color-based PH).: _Let \(\mathcal{D},\mathcal{D}^{\prime}\) be the persistence diagrams associated with any edge or vertex-color filtration of two graphs. If \(\mathcal{D}\neq\mathcal{D}^{\prime}\), then there is a filtration that produces different RePHINE diagrams. The converse does not hold._
Despite its power, there are simple non-isomorphic graphs RePHINE cannot distinguish. In particular, if two graphs have one color, RePHINE cannot separate graphs of equal size with the same number of components and cycles. For example, star and path graphs with 4 vertices of color \(c_{1}\) produce identical RePHINE diagrams of the form \(\{\!\!\{(0,d,a,d),(0,d,a,d),(0,d,a,d),(0,\infty,a,d)\}\!\!\}\), where \(d=f_{e}(\{\!\!\{c_{1},c_{1}\}\!\!\})\) and \(a=f_{v}(c_{1})\) for arbitrary edge- and vertex-color filtration functions.
Combining RePHINE and GNNs.RePHINE diagrams can be easily incorporated into general GNN layers. For instance, one can follow the scheme in [15] to combine non-missing hole information with node features and leverage missing holes as graph-level attributes. However, here we adopt a simple scheme that processes RePHINE tuples using DeepSets [38]. These topological embeddings are then grouped using a pooling layer and concatenated with the graph-level GNN embedding. The resulting representation is fed to a feedforward network to obtain class predictions. Formally, let \(\mathcal{N}_{G}(u)\) denote the set of neighbors of vertex \(u\) in \(G\), and \(h_{u}^{(0)}=c(u)\) for all \(u\in V\). We compute
Figure 5: RePHINE diagrams. At \(G_{1}\), one component dies and creates the almost hole \((0,1,2,1)\). We also save that two nodes were discovered at \(1\) (fourth component), with colors equal to \(2\) (third component). At step \(2\), two other holes are killed, resulting in two tuples \((0,2,1,2)\). At \(G_{3}\), we obtain the missing hole \((1,3,0,0)\). Finally, \(G_{4}\) creates one almost hole and one missing hole.
GNN and RePHINE embeddings (denoted by \(r^{(\ell)}\)) at layer \(\ell\) recursively as:
\[\tilde{h}_{u}^{(\ell)} =\textsc{Agg}^{(\ell)}(\{\!\{h_{w}^{(\ell-1)}\mid w\in\mathcal{N}_{ G}(u)\}\!\})\quad\forall u\in V \mathcal{R}^{(\ell)} =\textsc{RePHINE}(f_{v}^{(\ell)},f_{\epsilon}^{(\ell)},\{\!\{h_{u}^ {(\ell)}\}\!\}_{u\in V})\] \[h_{u}^{(\ell)} =\textsc{Update}^{(\ell)}\left(h_{u}^{(\ell-1)},\tilde{h}_{u}^{( \ell)}\right)\quad\forall u\in V r^{(\ell)} =\phi^{(\ell)}(\sum_{d\in\mathcal{R}^{(\ell)}}\psi^{(\ell)}(d))\]
where \(f_{v}^{(\ell)},f_{\epsilon}^{(\ell)},\psi^{(\ell)},\phi^{(\ell)},\textsc{Agg}^ {(\ell)}\), and Update\({}^{(\ell)}\) are arbitrary non-linear mappings, usually implemented as feedforward neural nets. After \(L\) layers, we obtain the combined RePHINE-GNN graph-level representation as \([\textsc{Pool}_{1}(\{r^{(\ell)}\}_{\ell})\parallel\textsc{Pool}_{2}(\{h_{u} ^{(L)}\}_{u})]\), where Pool\({}_{1}\) is either mean or concatenation, and Pool\({}_{2}\) is an order invariant operation.
## 5 Experiments
In this section, we compare RePHINE to standard persistence diagrams from an empirical perspective. Our main goal is to evaluate whether our method enables powerful graph-level representation, confirming our theoretical analysis. Therefore, we conduct two main experiments. The first one leverages an artificially created dataset, expected to impose challenges to persistent homology and MP-GNNs. The second experiment aims to assess the predictive performance of RePHINE in combination with GNNs on popular benchmarks for graph classification. All methods were implemented in PyTorch [31], and our code is available at [https://github.com/Aalto-QuML/RePHINE](https://github.com/Aalto-QuML/RePHINE).
Synthetic data.We consider three datasets of cubic graphs (or 3-regular graphs): cub08, cub10, and cub12 [6]. These graphs cannot be distinguished by 1-WL and color-based PH as all vertices share the same color. Thus, we modify the datasets by changing the colors of 1, 2, or 3 vertices in each graph sample, resulting in the modified datasets cub08-1, cub10-2, and cub12-3. Also, we randomly partition each dataset and create a balanced binary classification task. We expect this to keep the hardness of the task while allowing some distinguishability.
We compare standard 0-dim persistence diagrams from vertex-color filtrations (referred to as PH) to 0-dim RePHINE (i.e., no missing holes). Both approaches are processed using DeepSets with exactly the same structure and optimization procedure. Also, they operate on the original colors, not on GNN embeddings. For completeness, we report results for a 2-layer graph convolutional network (GCN) [22] followed by an MLP. We are interested in assessing if the persistence modules can overfit the observed graphs. We also monitor if the methods obtain different representations for each graph, measured in terms of the proportion of unique graph embeddings over training (which we call _expressivity_). We provide further details and additional results with 1-dim persistence diagrams in the supplementary material.
Figure 6: Average learning curves for RePHINE, PH, and GCN on connected cubic graphs. RePHINE can learn representations in cases where PH and GNNs struggle to capture structural information. RePHINE shows better expressivity and fitting capability on Cub10-2 and Cub12-3.
Figure 6 shows the learning curves for 2000 epochs, averaged over five runs. Notably, for all datasets, the expressivity of RePHINE is significantly higher than those from PH and similar to GNN's. On cub10-2, while PH and GNN obtain accuracies of around 0.5, RePHINE allows a better fit to the observed data, illustrated by higher accuracy and lower loss values.
Real-world data.To assess the performance of RePHINE on real data, we use six popular datasets for graph classification (details in the Supplementary): PROTEINS, IMDB-BINARY, NCI1, NCI109, MOLHIV and ZINC [7, 16, 20]. We compare RePHINE against standard vertex-color persistence diagrams (simply called PH here). Again, we do not aim to benchmark the performance of topological GNNs, but isolate the effect of the persistence modules. Thus, we adopt _default_ (shallow) GNN architectures and process the persistence diagrams exactly the same way using DeepSets. We report the mean and standard deviation of predictive metrics (AUC for MOLHIV, MAE for ZINC, and Accuracy for the remaining) over five runs. We provide further implementation details in Appendix C.
Table 1 shows the results of PH and RePHINE combined with GCN [22] and GIN [37] models. Notably, RePHINE consistently outperforms PH, being the best-performing method in 10 out of 12 experiments. Overall, we note that GIN-based approaches achieve slightly better results. Our results suggest that RePHINE should be the default choice for persistence descriptors on graphs.
Comparison to PersLay [2].We also compare our method against another topological neural network, namely, PersLay. Since PersLay does not leverage GNNs, we adapted our initial design for a fair comparison. Specifically, we compute RePHINE diagrams with learned filtration functions and apply a linear classifier to provide class predictions. Also, we concatenate the vectorial representations of the RePHINE diagrams with the same graph-level features obtained using PersLay. We refer to our variant as RePHINE+Linear. Table 2 reports accuracy results over 5 runs on 4 datasets. For all datasets, RePHINE+Linear achieves higher accuracy, with a significant margin overall.
## 6 Conclusion, Broader Impact, and Limitations
We resolve the expressivity of persistent homology methods for graph representation learning, establishing a complete characterization of attributed graphs that can be distinguished with general node- and edge-color filtrations. Central to our analyses is a novel notion of color-separating sets.
Much like how WL test has fostered more expressive graph neural networks (GNNs), our framework of color-separating sets enables the design of provably more powerful topological descriptors such as RePHINE (introduced here). RePHINE is computationally efficient and can be readily integrated into GNNs, yielding empirical gains on several real benchmarks.
We have not analyzed here other types of filtrations, e.g., those based on the spectral decomposition of graph Laplacians. Future work should also analyze the stability, generalization capabilities, and local versions of RePHINE. Overall, we expect this work to spur principled methods that can leverage both topological and geometric information, e.g., to obtain richer representations for molecules in applications such as drug discovery and material design.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**GNN** & **Diagram** & **NCI109**\(\uparrow\) & **PROTEINS**\(\uparrow\) & **IMDB-B**\(\uparrow\) & **NCI1**\(\uparrow\) & **MOLHIV**\(\uparrow\) & **ZINC**\(\downarrow\) \\ \hline \multirow{3}{*}{GCN} & - & \(76.46\pm 1.03\) & \(70.18\pm 1.35\) & \(64.20\pm 1.30\) & \(74.45\pm 1.05\) & \(74.99\pm 1.09\) & \(0.875\pm 0.009\) \\ & PH & \(77.92\pm 1.89\) & \(69.46\pm 1.83\) & \(64.80\pm 1.30\) & \(79.08\pm 1.06\) & \(73.64\pm 1.29\) & \(0.513\pm 0.014\) \\ & RePHINE & \(79.18\pm 1.97\) & \(71.25\pm 1.60\) & \(69.40\pm 3.78\) & \(80.44\pm 0.94\) & \(75.98\pm 1.80\) & \(0.468\pm 0.011\) \\ \hline \multirow{3}{*}{GIN} & - & \(76.90\pm 0.80\) & \(72.50\pm 2.31\) & \(74.20\pm 1.30\) & \(76.89\pm 1.75\) & \(70.76\pm 2.46\) & \(0.621\pm 0.015\) \\ & PH & \(78.35\pm 0.68\) & \(69.46\pm 2.48\) & \(69.80\pm 0.84\) & \(79.12\pm 1.23\) & \(73.37\pm 4.36\) & \(0.440\pm 0.019\) \\ \cline{1-1} & RePHINE & \(79.23\pm 1.67\) & \(72.32\pm 1.89\) & \(72.80\pm 2.95\) & \(80.92\pm 1.92\) & \(73.71\pm 0.91\) & \(0.411\pm 0.015\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Predictive performance on graph classification. We denote in bold the best results. For ZINC, lower is better. For most datasets, RePHINE is the best-performing method.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **NCI109** & **PROTEINS** & **IMDB-B** & **NCI1** \\ \hline PersLay & \(70.12\pm 0.83\) & \(67.68\pm 1.94\) & \(68.60\pm 5.13\) & \(68.86\pm 0.86\) \\ RePHINE+Linear & \(73.27\pm 1.69\) & \(71.96\pm 1.85\) & \(70.40\pm 2.97\) & \(74.94\pm 1.35\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: PersLay vs. RePHINE: Accuracy results on graph classification.
## Acknowledgments
This work was supported by Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence FCAI) and a tenure-track starting grant by Aalto University. We also acknowledge the computational resources provided by the Aalto Science-IT Project from Computer Science IT. We are grateful to the anonymous area chair and reviewers for their constructive feedback. Johanna Immonen thanks Tuan Anh Pham, Jannis Halbey, Negar Soltan Mohammadi, Yunseon (Sunnie) Lee, Bruce Nguyen and Nahal Mirzaie for their support and for all the fun during her research internship at Aalto University in the summer of 2022.
## References
* [1] P. Barcelo, E. V. Kostylev, M. Monet, J. Perez, J. L. Reutter, and J. Silva. The logical expressiveness of graph neural networks. In _International Conference on Learning Representations (ICLR)_, 2020.
* [2] M. Carriere, F. Chazal, Y. Ike, T. Lacombe, M. Royer, and Y. Umeda. PersLay: A neural network layer for persistence diagrams and new graph topological signatures. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2020.
* [3] M. Carriere, F. Chazal, M. Glisse, Y. Ike, H. Kannan, and Y. Umeda. Optimizing persistent homology based functions. In _International Conference on Machine Learning (ICML)_, 2021.
* [4] Y. Chen, B. Coskunuzer, and Y. Gel. Topological relational learning on graphs. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [5] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. In _Proceedings of the Twenty-First Annual Symposium on Computational Geometry_, 2005.
* [6] K. Coolsaet, S. D'hondt, and J. Goedgebeur. House of graphs 2.0: A database of interesting graphs and more. _Discrete Applied Mathematics_, 325:97-107, 2023.
* [7] V. P. Dwivedi, C. K. Joshi, A. T. Luu, T. Laurent, Y. Bengio, and X. Bresson. Benchmarking graph neural networks. _Journal of Machine Learning Research_, 24(43):1-48, 2023.
* [8] Edelsbrunner, Letscher, and Zomorodian. Topological persistence and simplification. _Discrete & Computational Geometry_, 28(4), 2002.
* an Introduction_. American Mathematical Society, 2010.
* [10] M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In _Workshop track of the International Conference on Representation Learning (ICLR)_, 2019.
* [11] V. Garg, S. Jegelka, and T. Jaakkola. Generalization and representational limits of graph neural networks. In _International Conference on Machine Learning (ICML)_, 2020.
* [12] C. Hofer, R. Kwitt, M. Niethammer, and A. Uhl. Deep learning with topological signatures. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2017.
* [13] C. Hofer, F. Graf, B. Rieck, M. Niethammer, and R. Kwitt. Graph filtration learning. In _International Conference on Machine Learning (ICML)_, 2020.
* [14] C. D. Hofer, R. Kwitt, and M. Niethammer. Learning representations of persistence barcodes. _Journal of Machine Learning Research_, 20(126):1-45, 2019.
* [15] M. Horn, E. De Brouwer, M. Moor, Y. Moreau, B. Rieck, and K. Borgwardt. Topological graph neural networks. In _International Conference on Learning Representations (ICLR)_, 2022.
* [16] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open graph benchmark: Datasets for machine learning on graphs. _arXiv preprint arXiv:2005.00687_, 2020.
* [17] Xiaoling Hu, Fuxin Li, Dimitris Samaras, and Chao Chen. Topology-preserving deep image segmentation. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* [18] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International Conference on Machine Learning (ICML)_, 2015.
* Irwin et al. [2012] John J. Irwin, Teague Sterling, Michael M. Mysinger, Erin S. Bolstad, and Ryan G. Coleman. Zinc: A free tool to discover chemistry for biology. _Journal of Chemical Information and Modeling_, 52(7):1757-1768, 2012.
* Kersting et al. [2016] Kristian Kersting, Nils M. Kriege, Christopher Morris, Petra Mutzel, and Marion Neumann. Benchmark data sets for graph kernels, 2016. URL [http://graphkernels.cs.tu-dortmund.de](http://graphkernels.cs.tu-dortmund.de).
* Kingma and Ba [2015] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In _International Conference on Learning Representations (ICLR)_, 2015.
* Kipf and Welling [2017] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In _International Conference on Learning Representations (ICLR)_, 2017.
* Kovacev-Nikolic et al. [2016] V. Kovacev-Nikolic, P. Bubenik, D. Nikolic, and G. Heo. Using persistent homology and dynamical distances to analyze protein binding. _Statistical Applications in Genetics and Molecular Biology_, 15(1):19-38, 2016.
* Kramar et al. [2016] M. Kramar, R. Levanger, J. Tithof, B. Suri, M. Xu, M. Paul, M. F. Schatz, and K. Mischaikow. Analysis of Kolmogorov flow and Rayleigh-Benard convection using persistent homology. _Physica D: Nonlinear Phenomena_, 334:82-98, 2016.
* Lee et al. [2017] Y. Lee, S. D. Barthel, P. Dlotko, S. M. Moosavi, K. Hess, and B. Smit. Quantifying similarity of pore-geometry in nanoporous materials. _Nature Communications_, 8(1), 2017.
* Leygonie et al. [2022] J. Leygonie, S. Oudot, and U. Tillmann. A framework for differential calculus on persistence barcodes. _Foundations of Computational Mathematics_, 22(4):1069-1131, 2022.
* Li et al. [2014] C. Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2014.
* Loukas [2020] A. Loukas. What graph neural networks cannot learn: depth vs width. In _International Conference on Learning Representations (ICLR)_, 2020.
* Loukas [2020] A. Loukas. How hard is to distinguish graphs with graph neural networks? In _Advances in Neural Information Processing Systems (NeurIPS)_, 2020.
* Morris et al. [2019] C. Morris, M. Ritzert, M. Fey, W. L. Hamilton, J. E. Lenssen, G. Rattan, and M. Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In _AAAI Conference on Artificial Intelligence (AAAI)_, 2019.
* Workshop)_, 2017.
* Rieck [2023] B. Rieck. On the expressivity of persistent homology in graph learning. _arXiv: 2302.09826_, 2023.
* Rieck et al. [2019] B. Rieck, C. Bock, and K. Borgwardt. A persistent weisfeiler-lehman procedure for graph classification. In _International Conference on Machine Learning (ICML)_, 2019.
* Sato [2020] R. Sato. A survey on the expressive power of graph neural networks. _arXiv:2003.04078_, 2020.
* Sato et al. [2019] R. Sato, M. Yamada, and H. Kashima. Approximation ratios of graph neural networks for combinatorial problems. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* Weisfeiler and Lehman [1968] B. Weisfeiler and A. A. Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. _Nauchno-Technicheskaya Informatsia_, 2(9):12-16, 1968.
* Xu et al. [2019] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In _International Conference on Learning Representations (ICLR)_, 2019.
* Zaheer et al. [2017] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. Salakhutdinov, and A. Smola. Deep sets. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2017.
* Zhao and Wang [2019] Q. Zhao and Y. Wang. Learning metrics for persistence-based summaries and applications for graph classification. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* Zhao et al. [2020] Q. Zhao, Z. Ye, C. Chen, and Y. Wang. Persistence enhanced graph neural network. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2020.
**Supplementary material: Going beyond persistent homology using persistent homology**
## Appendix A Persistent homology
Persistent homology (PH) is one of the workhorses for topological data analysis (TDA). A central idea underlying PH is to investigate the multiresolution structure in data through the lens of low-dimensional topological features such as connected components (0-dimensional), loops (1-dimensional), and voids (2-dimensional). Here, we provide a brief description of PH, and how it extends to graphs. In particular, we do not present proofs and do not show that the constructions are well-defined. For a detailed treatment, we refer the reader to [13], [9].
We will first define homological groups. They allow to characterise p-dimensional holes in a topological space such as a simplicial complex. We present the theory for simplicial complexes, as our focus is on 1-dimensional simplicial complexes (i.e. graphs).
Let \(K\) be a simplicial complex. The \(p\)-chains are formal sums \(c=\sum a_{i}\sigma_{i}\), where \(a_{i}\in\mathbb{Z}/2\mathbb{Z}\) and \(\sigma_{i}\) are \(p\)-simplices in \(K\). One can think of \(p\)-chain as a set of \(p\)-simplices such that \(a_{i}=1\). Together with componentwise addition, \(p\)-chains form the group \(C_{p}(K)\).
Now, consider a simplex \(\sigma=(v_{0},...,v_{p})\in K\). We can define a boundary for \(\sigma\) by
\[\partial_{p}\sigma=\sum_{j=0}^{p}(v_{0},...,v_{j-1},v_{j+1},...,v_{p}),\]
i.e., \(\partial_{p}\sigma\) is a sum of the \((p-1)\)-dimensional faces of \(\sigma\). We can extend this to define a boundary homomorphism \(\partial_{p}:C_{p}(K)\to C_{p-1}(K)\) where \(\partial_{p}\sum a_{i}\sigma_{i}=\sum a_{i}\partial_{p}\sigma_{i}\). Thus, we can define a sequence of groups
\[...C_{p+1}(K)\xrightarrow{\partial_{p+1}}C_{p}(K)\xrightarrow{\partial_{p}}C_{ p-1}(K)...,\]
each connected with a boundary homomorphism. This sequence is chain complex, and it is the last definition we need in order to consider homology groups.
The \(p\)th homology group is a group of \(p\)-chains with empty boundary (\(i.e.\)\(\partial_{p}\sigma=0\)) such that each of these particular \(p\)-chains (cycles) are a boundary of a different simplex in \(C_{p+1}(K)\). So, we can define the homology group as the quotient space
\[H_{p}=\ker\partial_{p}/\mathrm{Im}\partial_{(p+1)}.\]
The rank of \(H_{p}\) is equal to the \(p\)th _Betti_ number (\(\beta_{p}\)). Then, let us see how the homology groups can be refined to gain persistent homology groups.
Persistent homology tracks the evolution of _Betti_ numbers in a sequence of chain complexes. For this, we need a filtration, which is an increasing sequence of simplicial complexes \((\mathcal{F}_{i})_{i=1}^{r}\) such that \(\mathcal{F}_{1}=\emptyset\subseteq\mathcal{F}_{2}\subseteq\ldots\subseteq \mathcal{F}_{r}=K\). By constructing all homology groups for each of these simplicial complexes, we can capture changes. New holes (or, homology classes) may emerge, or they may be annihilated such that only the older remains. As such, we can associate a pair of timestamps, or persistence points, \((i,j)\) for every hole to indicate the filtration steps it appeared and disappeared. The persistence of a point \((i,j)\) is the duration for which the corresponding feature was in existence, i.e., the difference \(|i-j|\). We set \(j=\infty\) if the hole does not disappear, i.e. is present at the last filtration step. The extension to persistent homology groups and persistent _Betti_ numbers is natural:
\[H_{p}^{i,j}=\ker\partial_{p}/(\mathrm{Im}\partial_{(p+1)}\cap\ker\partial_{p}),\]and the \(p\)th persistent _Betti_ number \(\beta_{p}^{i,j}\) are given by the rank of \(H_{p}^{i,j}\) as earlier. Lastly, a persistent diagram that consists of the persistent points \((i,j)\) with multiplicities
\[\mu_{p}^{i,j}=(\beta_{p}^{i,j-1}-\beta_{p}^{i,j})-(\beta_{p}^{i-1,j-1}-\beta_{p} ^{i-1,j})\]
where \(i<j\), encodes the persistent homology groups entirely by the Fundamental Lemma of Persistent Homology.
For graphs, the filtration may be viewed as creating an increasing sequence of subgraphs. This entails selecting a subset of vertices and edges of the graph at each step of the filtration. One can learn a parameterized function \(f\) (e.g., a neural network) to assign some value to each \(\sigma\in K\), and thereby select the subsets \(K_{i}\) based on a threshold \(\alpha_{i}\in\mathbb{R}\). That is, \(f\) induces a filtration \((\mathcal{F}_{i})_{i=1}^{r}\) using a sequence \((\alpha_{i})_{i=1}^{r}\) such that \(\alpha_{1}\geq\alpha_{2}\geq\ldots\geq\alpha_{r}\):
\[\mathcal{F}_{i}\triangleq\mathcal{F}(f;\alpha_{i})=\{\sigma\in K:f(\sigma) \geq\alpha_{i}\}.\]
We provide a detailed pseudocode in Algorithm 1 to compute the persistence diagram for an input graph. The algorithm uses the Union-Find data structure, also known as a disjoint-set forest. The code assumes we are given vertex-color filter values, stored in the variable vValues. The algorithm returns a multiset containing 0- and 1-dimensional persistence tuples (i.e., persistence diagrams).
```
\(V,E,\)vValues \(\triangleright\) Vertices, edges, and vertex-color filter values \(\text{uf}\leftarrow\textsc{UnionFind}(|V|)\) \(\text{pers0}\leftarrow\text{zeros}(|V|,2)\)\(\triangleright\) Initialize the persistence tuples \(\text{pers1}\leftarrow\text{zeros}(|E|,2)\) for\(e\in\text{E}\)do \((v,w)\gets e\) \(\text{eValues}[e]\leftarrow\max(\text{vValues}[v],\text{vValues}[w])\) endfor \(\text{pers0}[:,1]\leftarrow\text{vValues}\)\(\triangleright\) Pre-set the 'birth' times \(\text{sIndices},\text{sValues}\leftarrow\text{Sort}(\text{eValues})\) for\(e,\text{weight}\in\text{Pair}(\text{sIndices},\text{sValues})\)do\(\triangleright\) Pair is equivalent to the zip function in Python \((v,w)\gets e\) \(\text{younger}\leftarrow\text{uf}.\text{find}(v)\)\(\triangleright\) younger denotes the component that will die \(\text{older}\leftarrow\text{uf}.\text{find}(w)\) ifyounger = olderthen\(\triangleright\) A cycle was detected \(\text{pers1}[e,\,1]\leftarrow\text{weight}\) \(\text{pers1}[e,\,2]\leftarrow\infty\) continue else ifvValues[younger] < vValues[older]then \(\text{younger},\,\text{older},\,v,w\leftarrow\text{older},\,\text{younger},\,w,v\) endif endif \(\text{pers0}[\text{younger},\,2]\leftarrow\text{weight}\) \(\text{uf}.\text{merge}(v,w)\)\(\triangleright\) Merge two connected components endfor for\(r\in\text{uf}.\text{roots}()\)do \(\text{pers0}[r,\,2]\leftarrow\infty\) endfor \(\mathcal{D}_{v}\leftarrow\textsc{Join}(\text{pers0},\,\text{pers1})\) return\(\mathcal{D}_{v}\)
```
**Algorithm 1** Computing persistence diagrams
## Appendix B Proofs
### Proof of Lemma 1: Vertex-based filtrations can generate inconsistent diagrams
Proof.: Consider a simple cyclic graph with 6 vertices that share the same color. Since the vertices are structurally identical and have the same color, one would expect to get a single persistence diagram irrespective of the labeling of the vertices. However, this is not the case. Consider two different labelings for the vertices on the graph: \(\ell_{1}=(v_{1},v_{2},v_{3},v_{4},v_{5},v_{6})\) and \(\ell_{2}=(v_{1},v_{4},v_{2},v_{6},v_{3},v_{5})\) (see Figure S1). Now, consider an injective vertex-based filtration where \(f(v_{i})>f(v_{j})\) if \(i>j\). Then, we obtain two different persistence diagrams, \(\mathcal{D}_{1}=\{\!\!\{(1,\infty),(2,2),(3,3),(4,4),(5,5),(6,6)\}\!\}\) and \(\mathcal{D}_{2}=\{\!\!\{(1,\infty),(2,4),(3,5),(4,4),(5,5),(6,6)\}\!\}\). We note that for any choice of vertex-based injective filter function on this cycle graph, we can follow a similar procedure to build two different labelings such that the persistence diagrams are different.
### Proof of Lemma 2: Equivalence between component-wise colors and real holes
Proof.: We consider two arbitrary graphs \(G=(V,E,c,X)\) and \(G^{\prime}=(V^{\prime},E^{\prime},c^{\prime},X^{\prime})\) and an injective filter function \(f:X\cup X^{\prime}\rightarrow\mathbb{R}\). We note that if \(G\) and \(G^{\prime}\) do not have the same number of connected components (i.e., \(\beta^{0}_{G}\neq\beta^{0}_{G^{\prime}}\)), then \(G\) and \(G^{\prime}\) differ on the number of real holes, i.e., their multisets of real holes are different trivially. Thus, we now assume \(\beta^{0}_{G}=\beta^{0}_{G^{\prime}}=k\). We also assume both graphs have same colors -- if there is a color in \(G\) that is not in \(G^{\prime}\), the claim is trivial.
[\(\Rightarrow\)] Recall that \(X_{i}=\{c(v)\mid v\in V_{C_{i}}\}\) denotes the set of colors in the component \(C_{i}\subseteq G\). Similarly, \(X^{\prime}_{i}\) is the set of colors in \(C^{\prime}_{i}\subseteq G^{\prime}\). We want to show that if \(\{\!\!\{X_{i}\}\!\}_{i=1}^{k}\neq\{\!\!\{X^{\prime}_{i}\}\!\}_{i=1}^{k}\), then there exists a filtration such that the multisets of real holes are different. We proceed with a proof by induction on the number of colors.
If there is only 1 color, component-wise colors cannot differ for graphs with \(\beta^{0}_{G}=\beta^{0}_{G^{\prime}}\). Let us thus consider 2 colors (say, \(b\) and \(w\)). For 2 colors, there are only three possibilities for what \(X_{h}\in\{\!\!\{X_{i}\}\!\}_{i=1}^{k}\) may be: \(\{b\},\{w\}\) or \(\{b,w\}\). Now, let us denote the multiplicities of \(\{b\},\{w\}\) and \(\{b,w\}\) in \(\{\!\!\{X_{i}\}\!\}_{i=1}^{k}\) by \(n_{1},n_{2}\) and \(n_{3}\), respectively. Note that for \(G\) and \(G^{\prime}\) with \(\beta^{0}_{G}=\beta^{0}_{G^{\prime}}\), we have \(n_{1}+n_{2}+n_{3}=n^{\prime}_{1}+n^{\prime}_{2}+n^{\prime}_{3}\). Thus, when \(\{\!\!\{X_{i}\}\!\}_{i=1}^{k}\neq\{\!\!\{X^{\prime}_{i}\}\!\}_{i=1}^{k}\), there are four cases to consider:
1. \(n_{1}\neq n^{\prime}_{1},n_{2}\neq n^{\prime}_{2},n_{3}=n^{\prime}_{3}\): Here, \(n_{2}+n_{3}\neq n^{\prime}_{2}+n^{\prime}_{3}\) correspond to multiplicities of real holes \((w,\infty)\) for \(G\) and \(G^{\prime}\) respectively, in a filtration that introduces the color \(w\) first.
2. \(n_{1}\neq n^{\prime}_{1},n_{2}=n^{\prime}_{2},n_{3}\neq n^{\prime}_{3}\): Again, \(n_{2}+n_{3}\neq n^{\prime}_{2}+n^{\prime}_{3}\) correspond to multiplicities of real holes \((w,\infty)\) for \(G\) and \(G^{\prime}\) respectively in a filtration that introduces the color \(w\) first.
3. \(n_{1}=n^{\prime}_{1},n_{2}\neq n^{\prime}_{2},n_{3}\neq n^{\prime}_{3}\): Now, \(n_{1}+n_{3}\neq n^{\prime}_{1}+n^{\prime}_{3}\) correspond to multiplicities of real holes \((b,\infty)\) for \(G\) and \(G^{\prime}\) respectively in a filtration that introduces the color \(b\) first.
4. \(n_{1}\neq n^{\prime}_{1},n_{2}\neq n^{\prime}_{2},n_{3}\neq n^{\prime}_{3}\): Similarly, \(n_{1}+n_{3}\neq n^{\prime}_{1}+n^{\prime}_{3}\) correspond to multiplicities of real holes \((b,\infty)\) for \(G\) and \(G^{\prime}\) respectively in a filtration that introduces the color \(b\) first.
Note that cases as \(n_{1}\neq n^{\prime}_{1},n_{2}=n^{\prime}_{2},n_{3}=n^{\prime}_{3}\) are not possible as \(n_{1}+n_{2}+n_{3}=n^{\prime}_{1}+n^{\prime}_{2}+n^{\prime}_{3}\).
Let us then assume that there are \(l\) colors, and there exists a permutation of the colors \(\{c_{1},c_{2},...,c_{l}\}\) that induces a filtration giving different colored representatives.
Let us consider graphs \(G\) and \(G^{\prime}\) with \(l+1\) colors. Now, if \(\{\!\!\{X_{i}\}\!\}_{i=1}^{k}\neq\{\!\!\{X^{\prime}_{i}\}\!\}_{i=1}^{k}\) for subgraphs of \(G\) and \(G^{\prime}\) with only \(l\) colors, the permutation \(\{c_{l+1},c_{1},c_{2},...,c_{l}\}\) induces a filtration where the representatives of first \(l\) colors differ (and there may or may not be a difference also in the representatives of the \(l+1\)-th color). However, if there are no such subgraphs, this means that each of the pairs of unmatched component-colors contain the \(l+1\) th color. Now \(\{c_{1},c_{2},...,c_{l},c_{l+1}\}\) must induce the wanted kind filtration, since now the representatives of each component are as in \(l\) colors. The claim follows by the induction principle.
[\(\Leftarrow\)] Now, we want to prove that if there is a filtration such that the multisets of real holes differ, then \(\{\!\!\{X_{i}\}\!\}\neq\{\!\!\{X^{\prime}_{i}\}\!\}\). We proceed with a proof by contrapositive.
Assume that \(\{\!\!\{X_{i}\}\!\}=\{\!\!\{X^{\prime}_{i}\}\!\}\). Recall that, for a filter \(f\), the color of the representatives of a real hole associated with \(C_{i}\) is given by \(\arg\min_{x\in X_{i}}f(x)\). If \(\{\!\!\{X_{i}\}\!\}=\{\!\!\{X^{\prime}_{i}\}\!\}\), it implies that the multisets of colors of the representatives are identical. Finally, note that the birth times of real holes are functions of these colors and, therefore, are identical as well.
### Proof of Lemma 3: Almost holes and separating sets
Statement 1:We want to show that if \((f(x^{(b)}),f(x^{(d)}))\) is an almost hole, then \(S=\{v\in V|f(c(v))\geq f(x^{(d)})\}\) is a separating set of \(G=(V,E,c,X)\).
Proof.: Let \(d=(f(x^{(b)}),f(x^{(d)}))\) be an almost hole. Then, we know there is at least one vertex \(w\) of color \(c(w)=x^{(b)}\) that gives birth to a new connected component at the filtration step \(G_{f(x^{(b)})}\). Also, there is a distinct vertex \(w^{\prime}\) such that \(w\) and \(w^{\prime}\) are not in the same component at \(G_{f(x^{(b)})}\) but are connected at \(G_{f(x^{(d)})}\). The existence of \(w^{\prime}\) is guaranteed since if there was no such \(w^{\prime}\) that gets connected to \(w\) at \(G_{f(x^{(d)})}\), \(d\) would be a real hole, or if \(w\) was connected to all other nodes at \(G_{f(x^{(b)})}\), \(d\) would be a trivial hole. Figure S2 illustrates a filtration on a 5-vertex graph with 5 colors. The filtration produces the persistence diagram \(\{\!\!\{(1,\infty),(2,2),(3,4),(4,4),(5,5)\}\!\}\), with a single almost hole \((3,4)\). According to our description, \(w\) corresponds to \(v_{3}\) (with \(x^{(b)}=\) 'grey' and \(f(x^{(b)})=3\)), and \(v_{1}\) could be a candidate to \(w^{\prime}\), for instance.
The discovery of the vertices in \(T=\{v\in V\mid f(c(v))=f(x^{(d)})\}\) connects \(w\) to \(w^{\prime}\) since this set is added at the step when the component associated with \(w\) dies at \(f(x^{(d)})\). Equivalently, \(T\) is a separating set of \(G_{f(x^{(d)})}\). However, we want a separating set of \(G\) (not of \(G_{f(x^{(d)})}\)). Finally, we note that expanding \(T\) to \(S=\{v\in V\mid f(v)\geq f(x^{(d)})\}\) suffices to obtain a separating set of \(G\).
Statement 2:Let \(S\) be a separating set of \(G\) that splits a connected component \(C\subseteq G\) into \(k\) components \(C_{1},C_{2},\ldots,C_{k}\). Then, there exists a filtration that produces \(k-1\) almost holes if the set of colors of vertices in \(\cup_{i=1}^{k}V_{C_{i}}\) is disjoint from those of the remaining vertices, i.e., \(\{c(v)\mid v\in V\setminus\cup_{i=1}^{k}V_{C_{i}}\}\cap\{c(v)\mid v\in\cup_{i= 1}^{k}V_{C_{i}}\}=\emptyset\).
Proof.: Let us denote by \(C_{1}\), \(C_{2}\),..., \(C_{k}\) the connected components that \(S\) separates \(C\) into. We can first set a restriction \(f|_{\cup_{i=1}^{k}V_{C_{i}}}\) to be any function mapping vertex colors to \(\{1,2,...,|\cup_{i=1}^{k}V_{C_{i}}|\}\) -- i.e., vertices in \(\cup_{i=1}^{k}V_{C_{i}}\) must take filtration values in \(\{1,2,...,|\cup_{i=1}^{k}V_{C_{i}}|\}\). Similarly, we can set \(f|_{V\setminus\cup_{i=1}^{k}V_{C_{i}}}\) to be any function to \(\{|\cup_{i=1}^{k}V_{C_{i}}|+1,...,|V|\}\).
The function \(f\) obtained by combining the domains of \(f|_{\cup_{i=1}^{k}V_{C_{i}}}\) and \(f|_{V\setminus\cup_{i=1}^{k}V_{C_{i}}}\) is well defined due to the assumption \(\{c(v)\mid v\in V\setminus\cup_{i=1}^{k}V_{C_{i}}\}\cap\{c(v)\mid v\in\cup_{i =1}^{k}V_{C_{i}}\}=\emptyset\). Since \(C_{1}\), \(C_{2}\)..., \(C_{k}\) are not path-connected, the persistence diagram induced by \(f\) must have \(k\) holes that are born at filtration steps in \(\{1,2,...,|\cup_{i=1}^{k}V_{C_{i}}|\}\). Also, since the vertices of \(S\) are added at filtration steps in \(\{|\cup_{i=1}^{k}V_{C_{i}}|+1,...,|V|\}\), all holes die, forcing the birth and death times to be different. Thus, there must be one real hole corresponding to the connected component \(C\) and \(k-1\) almost holes.
### Proof of Lemma 4: Distinct almost holes imply distinct color-separating sets
Proof.: We will consider two cases. The first one assumes that the multisets of real holes of \(\mathcal{D}_{G}\) and \(\mathcal{D}_{G^{\prime}}\) are different. In the second case, we consider identical multisets of real holes and different multisets of almost holes.
**Case 1: multisets of real holes differ**. By Lemma 2, we have that the graphs have distinct component-wise colors - that is, an empty set is a color-separating set.
**Case 2: \(\mathcal{D}_{G}^{0}\) and \(\mathcal{D}_{G^{\prime}}^{0}\) have identical real holes, but different multisets of almost holes**. We want to show that there is a color-separating set for \(G\) and \(G^{\prime}\). We note that we can split the condition of distinct multisets of almost holes into two sub-cases: (i) There is some color \(x_{0}\) such that there are more almost holes with birth time \(f(x_{0})\) in \(\mathcal{D}_{G}^{0}\) than in \(\mathcal{D}_{G^{\prime}}^{0}\); (ii) There is some color \(x_{0}\) such that there are more almost holes with death \(f(x_{0})\) in \(\mathcal{D}_{G}^{0}\) than in \(\mathcal{D}_{G^{\prime}}^{0}\).
Let us first consider case (i). By the definition of birth time, we have that \(G_{f(x_{0})}\) has more connected components of color set \(\{x_{0}\}\) than \(G^{\prime}_{f(x_{0})}\). As such, \(\{x\in X\cup X^{\prime}|f(x)>f(x_{0})\}\) is a color-separating set for \(G\) and \(G^{\prime}\).
For case (ii), we assume that there are equally many births of almost holes associated to the the color \(x_{0}\) -- otherwise we return to case (i), for which we showed how to build a color-separating set. We note that if there is a different number of connected components at any earlier filtration step than when \(x_{0}\) is introduced (i.e. \(f(y)<f(x_{0})\)), then \(\{x\in X\cup X^{\prime}|f(x)>f(y)\}\) is a color separating set -- since \(G_{f(y)}\) and \(G^{\prime}_{f(y)}\) do not have as many connected components, they cannot have identical component-wise colors. However, if there is no such filtration step \(f(y)\), it follows that \(G_{f(x_{0})}\) and \(G^{\prime}_{f(x_{0})}\) cannot have the same number of components. This follows since vertices of color \(x_{0}\) kill more connected components in \(G_{f(x_{0})}\) than in \(G^{\prime}_{f(x_{0})}\), while prior to this, the numbers of components were equal. Therefore, \(\{x\in X\cup X^{\prime}|f(x)>f(x_{0})\}\) is a color-separating set.
### Proof of Lemma 5: Equivalence between birth times and vertex colors
Proof.: We consider a graph \(G=(V,E,c,X)\) and any injective vertex-color filter \(f:X\rightarrow\mathbb{R}\) from which we obtain a persistence diagram \(\mathcal{D}^{0}\). We want to show that there exists a bijection between the multiset of birth times \(\mathcal{B}=\{\!\!\{b\mid(b,d)\in\mathcal{D}^{0}\}\!\!\}\) and the multiset of vertex colors \(\mathcal{X}=\{\!\!\{c(v)\mid v\in V\}\!\!\}\). Note that we can also represent a multiset as a pair \(\mathcal{B}=(S_{\mathcal{B}},m_{\mathcal{B}})\) where \(S_{\mathcal{B}}\) is a set comprising the distinct elements of \(\mathcal{B}\), and \(m_{\mathcal{B}}:S_{\mathcal{B}}\rightarrow\mathbb{N}\) is a multiplicity function that gives the number of occurrences of each element of \(S_{\mathcal{B}}\) in the multiset. If there is a bijection \(g:S_{\mathcal{B}}\rightarrowS_{\mathcal{X}}\) such that \(m_{\mathcal{B}}=m_{\mathcal{X}}\circ g\), then we say that \(g\) is also a bijection between the multisets \(\mathcal{B}\) and \(\mathcal{X}\).
We note that \(S_{\mathcal{X}}=\operatorname{Im}[c]\) denotes the set of distinct colors in \(G\). Without loss of generality, since we are interested in filtrations induced by \(f\) on \(G\), we can constrain ourselves to filter values on \(S_{\mathcal{X}}\). Thus, filtrations induced by \(f:S_{\mathcal{X}}\rightarrow\mathbb{R}\) are increasing (i.e., for any consecutive filtration steps \(j>i\), we have that \(V_{j}\setminus V_{i}\neq\emptyset\)) and produce filtration steps \(\mathcal{T}=\{f(x)\mid x\in S_{\mathcal{X}}\}\). Because such filtrations are increasing, we have at least one vertex discovered at each step, resulting in the set of distinct birth times \(S_{\mathcal{B}}=\mathcal{T}\). The mapping \(g:S_{\mathcal{X}}\rightarrowS_{\mathcal{B}}\) where \(g(x)=f(x)\) for all \(x\in S_{\mathcal{X}}\) is a bijection. By definition, the number of vertices discovered at step \(f(x)\) equals the number of persistence pairs with birth time \(f(x)\), which is also equal to the number of vertices of color \(x\). This implies that the multiplicity of an element \(x\) in \(\mathcal{X}\) is the same as its corresponding element \(g(x)\) in \(\mathcal{B}\).
### Proof of Theorem 1: The expressive power of vertex-color filtrations
Proof.: We consider graphs \(G=(V,E,c,X)\) and \(G^{\prime}=(V^{\prime},E^{\prime},c^{\prime},X^{\prime})\). and adopt the following notation. We use \(\mathcal{X}=\{\!\!\{c(v)\mid v\in V\}\!\!\}\) and \(\mathcal{X}^{\prime}=\{\!\!\{c^{\prime}(v)\mid v\in V^{\prime}\}\!\!\}\) to denote the multisets of vertex colors of \(G\) and \(G^{\prime}\). Also, we denote by \(C_{1},\ldots,C_{k}\) the components of \(G\), and by \(C^{\prime}_{1},\ldots,C^{\prime}_{k^{\prime}}\) the components of \(G^{\prime}\). The set \(X_{i}=\{c(w)\mid w\in V_{C_{i}}\}\) denotes the distinct colors appearing in \(C_{i}\). Similarly, \(X^{\prime}_{i}=\{c^{\prime}(w)\mid w\in V_{C^{\prime}_{i}}\}\) refers to the distinct colors in \(C^{\prime}_{i}\).
[Forward direction \(\Rightarrow\)]\(\mathcal{D}_{G}^{0}\neq\mathcal{D}_{G^{\prime}}^{0}\rightarrow\) there is a color-separating set
The persistence diagrams \(\mathcal{D}_{G}^{0}\) and \(\mathcal{D}_{G^{\prime}}^{0}\) for graphs with \(\mathcal{X}=\mathcal{X}^{\prime}\) have the same birth times. It implies that if both the real holes and almost holes are identical, then the diagrams are also identical. As such,the assumption that \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\) gives that either (1) their multisets of real holes or (2) their multiset of almost holes are different. In the following, we consider these two cases.
Regarding case (1), Lemma 2 gives that if \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\) with different multisets of real holes, then we have that \(\{\!\!\{X_{i}\}\!\}_{i=1}^{k}\neq\{\!\!\{X^{\prime}_{i}\}\!\}_{i=1}^{k}\). Whenever this happens, the forward direction holds as even an empty set would work as a color-separating set here. Thus, it suffices to consider the case when \(\mathcal{D}^{0}_{G}\) and \(\mathcal{D}^{0}_{G}\), only differ in their multisets of almost holes. In this case, we can directly leverage Lemma 4 to obtain that there is a color-separating set for \(G\) and \(G^{\prime}\).
[Backward direction \(\Leftarrow\)] Now we want to show that if there is a color-separating set \(Q\neq\emptyset\) for \(G\) and \(G^{\prime}\), there exists a filtration such that \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\).
If \(Q=\emptyset\), i.e. \(G\) and \(G^{\prime}\) have distinct component-wise colors, the claim follows by Lemma 2, with \(\mathcal{D}^{0}_{G}\) and \(\mathcal{D}^{0}_{G}\) having different multisets of real holes. If \(Q\neq\emptyset\), we can however use Lemma 2 to subgraphs \(G_{\bar{V}}\) and \(G^{\prime}_{\bar{V}^{\prime}}\), induced by \(\bar{V}=V\setminus\{w\in V\mid c(w)\in Q\}\) and \(\bar{V}^{\prime}=V^{\prime}\setminus\{w\in V^{\prime}\mid c^{\prime}(w)\in Q\}\) to gain a filter function \(g\) such that the diagrams for these subgraphs differ. Now, let's choose any a filter function such that \(f(x)=g(x)\ \forall x\in X\setminus Q\) AND the filtration values for vertices with colors in \(X\setminus Q\) are smaller than those with colors in \(Q\). It follows there is a filtration step \(j\) such that \(G_{j}=G_{\bar{V}}\) and \(G^{\prime}_{j}=G^{\prime}_{\bar{V}^{\prime}}\), and that the birth times for real holes (if the vertices of colors in \(Q\) do not merge the real holes of the subgraphs) or almost holes (if the vertices of colors in \(Q\) do merge components that would have been real holes in the subgraphs) differ. Thus, \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\).
### Proof of Lemma 6: Edge-based almost holes as disconnecting sets
Proof.: Initially, when none of the edges are added, there must be \(|V|\) connected components. By definition, each pair \((0,d)\in\mathcal{D}^{0}\) corresponds to the death of one component. It follows that \(G_{f(x^{(d)})}\) has \(c=|V|-|\ \{(0,d)\in\mathcal{D}^{0}\mid d\leq f(x^{(d)})\}\mid\) connected components. If \(G\) has \(\beta^{0}\) connected components, the subgraph with vertices \(V\) and edges \(E\setminus\{e\in E\mid f(l(e))\geq f(x^{(d)})\}\) has more than \(\beta^{0}\) connected components. Thus, \(\{e\in E\mid f(l(e))\geq f(x^{(d)})\}\) is a disconnecting set of \(G\).
### Lemma 7: The reconstruction of a disconnecting set
Proof.: Let \(\pi\) be the permutation of colors associated with a vertex-color filter function \(f\), i.e., for a set of colors \(X=(x_{1},\ldots,x_{m})\), we have that \(f(x_{\pi(i)})<f(x_{\pi(i+1)})\ \forall\ i=1,\ldots,m-1\). Also, assume the colors associated with a disconnecting set \(S\) of a graph \(G=(V,E,l,X)\) is \(X_{S}=\{x_{\pi(k)},x_{\pi(k+1)},\ldots,x_{\pi(m)}\}\).
If \(S\) is a minimal disconnecting set, \((0,f(x_{\pi(k)}))\) must be an almost hole in \(\mathcal{D}\) since if we could add edges \(W\subseteq S\) with color \(x_{\pi(k)}\) without killing some connected component of \(G_{f(\pi(k-1))}\), the set of edges \(S^{\prime}=S\setminus W\) would form a proper disconnecting subset of a minimal disconnecting set If \(S\) is not a minimal disconnecting, then there must be a proper subset \(S^{\prime}\subset S\) that is a minimal disconnecting set of \(G\). Now, we choose \(S^{\prime}\) to be included first in the filtration, followed by elements of \(S\setminus S^{\prime}\). Thus, in both cases, there is a filtration s.t. an almost-hole \((0,f(x_{k}))\) appears, which allows us to reconstruct \(S\).
### Proof of Theorem 2: The expressive power of edge-color filtrations
[\(\Leftarrow\)] We split the proof of the backward direction into three cases.
Proof.: **Case 1: The color-disconnecting set \(Q\) equals to \(X\cup X^{\prime}\)**. This is a trivial case. If \(Q=X\cup X^{\prime}\), \(G\) and \(G^{\prime}\) have distinct number of connected components when _all_ the edges are removed from both graphs. This means \(|V|\neq|V^{\prime}|\). Now, if \(|V|\neq|V^{\prime}|\), then \(|\mathcal{D}^{0}_{G}|\neq|\mathcal{D}^{0}_{G^{\prime}}|\) for any filtration.
**Case 2: The color-disconnecting set \(Q\) is an empty set**. Now, the graphs have distinct number of connected component (even if none of the edges are removed), i.e. \(\beta^{0}_{G}\neq\beta^{0}_{G^{\prime}}\). The diagrams differ for any filtration since they have different numbers of real holes.
**Case 3: The color-disconnecting set \(Q\neq\emptyset\) is a proper subset of \(X\cup X^{\prime}\)**: The existence of a color-disconnecting set implies there is a set \(S\subset X\cup X^{\prime}\) such that by removing the edges of colors \(S\), the two graphs will have different number of connected components. Without loss of generality, we can assume that after removing the edges of colors \(S\), \(G\) has more components than \(G^{\prime}\). Now,we note that in a filtration where the colors of \(S\) are added the latest, there must either be more almost holes \((0,f(x^{(d)}))\) in \(\mathcal{D}^{0}_{G}\) than in \(\mathcal{D}^{0}_{G^{\prime}}\) with \(x^{(d)}\in S\), or alternatively \(\beta^{0}_{G}\neq\beta^{0}_{G^{\prime}}\). In both cases, \(\mathcal{D}^{0}_{G}\neq\mathcal{D}^{0}_{G^{\prime}}\) for some filter function \(f\).
[\(\Rightarrow\)] To prove the forward direction of the Theorem, we consider the cases where the edge-color diagrams differ in 1) their size, 2) the number of real holes, and 3) their almost holes.
Proof.: **Case 1:**\(|\mathcal{D}_{G}|\neq|\mathcal{D}_{G^{\prime}}|\). Again, this corresponds to a trivial case, since if \(|\mathcal{D}_{G}|\neq|\mathcal{D}_{G^{\prime}}|\), then \(|V|\neq|V^{\prime}|\). Now, \(Q=X\cup X^{\prime}\) is a color-disconnecting set.
**Case 2:**\(\mathcal{D}_{G}\) **and**\(\mathcal{D}_{G^{\prime}}\) **differ in their real holes**. If there is a different count of real holes, then \(\beta^{0}_{G}\neq\beta^{0}_{G^{\prime}}\), and \(Q=\emptyset\) is a color-disconnecting set.
**Case 3:**\(\mathcal{D}_{G}\) **and**\(\mathcal{D}_{G^{\prime}}\) **only differ in their almost holes**. We now assume that \(|\mathcal{D}_{G}|=|\mathcal{D}_{G^{\prime}}|\) and \(\beta^{0}_{G}=\beta^{0}_{G^{\prime}}\), but \(\mathcal{D}_{G}\neq\mathcal{D}_{G^{\prime}}\). This means that there is some \((0,d)\in\mathcal{D}\) such that there are more almost holes with this death time in \(\mathcal{D}_{G}\) than in \(\mathcal{D}_{G^{\prime}}\), without loss of generality. There may be several such almost holes (for which the diagrams differ) with distinct death times. Let's denote the set of the death times for these almost holes by \(D\). Then, let \(d_{min}\) be the minimum of the death times in \(D\), i.e. \(d_{min}=\min_{d\in D}d\). Let us show that the set \(Q=\{x\in X\cup X^{\prime}\mid f(x)>d_{min}\}\) disconnects \(G^{\prime}\) into more connected components than \(G\). For any lower filtration step, the induced subgraphs must have as many connected components because the almost holes corresponding to those steps match, and \(|\mathcal{D}_{G}|=|\mathcal{D}_{G^{\prime}}|\), i.e. \(|V|=|V^{\prime}|\), which means that at filtration step 0, we begin with equally many connected components. At filtration step \(d_{min}\), we connect more components in \(G\) than in \(G^{\prime}\) because there are more almost holes corresponding to this step in \(\mathcal{D}_{G}\) than in \(\mathcal{D}_{G^{\prime}}\). Now, \(Q\) must be a color-disconnecting set.
### Proof of Theorem 3: Edge-color vs. vertex-color filtrations
Proof.: This Theorem is proved in Section 3.3. In particular, Figure 3(a) provides an example of pairs of graphs that can be distinguished by vertex-color filtrations but not from edge-color ones. On the other hand, the graphs in Figure 3(b) can be distinguished by edge-color filtrations but not from vertex-color ones. This concludes the proof.
### Proof of Theorem 4: RePHINE is isomorphism invariant
Proof.: RePHINE diagram's isomorphism invariance stems from the fact that it is a function of a filtration on graph, and this filtration is gained from isomorphism invariant colorings. If this assumption is violated and the colorings are not gained in an invariant way, RePHINE diagrams can also be inconsistent.
It is easy to check that the tuples (b,d) are isomorphism invariant - when \(b=0\), these tuples correspond to diagrams gained from edge-color filtration. In this case, we can check the conditions given by Theorem 2 and note none of the conditions may be met with isomorphic graphs. With regard to \(b=1\), the set of missing holes is multiset of edge colors that did not appear in edge-color filtration diagram. This set can thus be gained by considering the multiset of edge colours and the edge-color diagram, which are both isomorphism invariant.
Further, it is also easy to see that the tuples \((\alpha,\gamma)\) are invariant. When \(b=0\), the set of \(\alpha\)'s corresponds to the multiset of vertex colours, and for each vertex, \(\gamma=\min_{v\in\mathcal{N}(v)}f_{e}(\{c(w),c(v)\})\).
However, the crucial part is how these two tuples are concatenated, i.e., how each of the vertices are associated with real and almost-holes. In particular, we need to check when two connected components are merged at a filtration step \(i\) and RePHINE compares the representatives (i.e. vertices of a connected component which have not yet 'died') of the two components, we will end up with same diagram elements of form \((b,i,\alpha,\gamma)\) regardless of the order we add the edges of color with filtration value \(i\). In other words, while the RePHINE algorithm considers one edge at a time and does only pairwise comparisons between merged connected components, the order or these comparisons must not affect the decision on which vertices are associated with death time \(i\). Let's consider what happens when adding all the edges of a color results in merging more than two components. Assume there is a new connected components constituting of old connected components \(T_{1},T_{2},...,T_{n}\). Now, there are two different cases. Assume first that there is a strict minimum among the vertex filtration values of the old representatives. Then, any pairwise comparison will lead to choosing this minimum as the representative of the new connected component and all the other vertices will die at this filtration step. Then, assume there is no strict minimum but a tie between two or more representatives. Then, there will be comparisons based on \(\gamma\), but choosing maximum of these is also permutation invariant function. In case there are two (or more) representatives such that there is a tie based on the vertex filtration values and \(\gamma\) values, choosing at random any of these leads to the same diagram. Lastly, note that for each real hole, \((b,d)=(0,\infty)\), and so it does not thus matter how each of the vertices are matched to the real holes, when rest of vertices are associated with almost-holes in an invariant way.
### Proof of Theorem 5: RePHINE is strictly more expressive than color-based PH
Let \(\mathcal{R}_{G}\) denote the RePHINE diagram for a graph \(G\). Similarly, let \(\mathcal{D}_{v,G}\) and \(\mathcal{D}_{e,G}\) denote persistence diagrams associated with vertex- and edge-color filtrations of \(G\). We assume that \(\mathcal{D}_{v,G}=(\mathcal{D}_{v,G}^{0},\mathcal{D}_{v,G}^{1})\) and \(\mathcal{D}_{e,G}=(\mathcal{D}_{e,G}^{0},\mathcal{D}_{e,G}^{1})\) include 0- and 1-dim persistence diagrams. We want to show that for two graphs \(G\) and \(G^{\prime}\)
1. if there is a vertex-color filtration such that \(\mathcal{D}_{v,G}\neq\mathcal{D}_{v,G^{\prime}}\) then there is a filtration that lead to \(\mathcal{R}_{G}\neq\mathcal{R}_{G^{\prime}}\).
2. if there is a edge-color filtration such that \(\mathcal{D}_{e,G}\neq\mathcal{D}_{e,G^{\prime}}\) then there is a filtration that lead to \(\mathcal{R}_{G}\neq\mathcal{R}_{G^{\prime}}\).
These results would show that RePHINE is at least as expressive as color-based persistence diagrams. We further show that
1. there is a pair of non-isomorphic graphs for which we can obtain \(\mathcal{R}_{G}\neq\mathcal{R}_{G^{\prime}}\) but \(\mathcal{D}_{v,G}=\mathcal{D}_{v,G^{\prime}}\) and \(\mathcal{D}_{e,G}=\mathcal{D}_{e,G^{\prime}}\) for all vertex- and edge-color filtrations.
Proof.: **Part (i):**\(\mathcal{D}_{v,G}\neq\mathcal{D}_{v,G^{\prime}}\rightarrow\mathcal{R}_{G}\neq \mathcal{R}_{G^{\prime}}\). Let \(f\) be the vertex-color function associated with the standard diagrams \(\mathcal{D}\). We can choose the RePHINE's vertex-level function \(f_{v}\) such that \(f_{v}=f\). We note that the original diagrams can be obtained from an auxiliary edge-level filter function \(f_{a}\) where \(f_{a}(u,w)=\max(f(c(u)),f(c(w)))\). The procedure is described in Algorithm 1.
Let \(f_{e}\) be the edge-color filter function of RePHINE. If we choose \(f_{e}=f_{a}\), then RePHINE contains in the second and third elements of its tuples exactly the same persistence information of the vertex-color diagrams. Note that in this case, we do not even need to require injectivity of the edge-color filter \(f_{e}\) since the \(\max\) function is not injective. Regarding the 1-dim features, for any tuple \((d,\infty)\) in the 1-dim persistence diagram, we have a missing hole \((1,d,\cdot,\cdot)\) that comprises the same information. Thus, we have constructed vertex- and edge-color functions such that \(\mathcal{D}_{v,G}\neq\mathcal{D}_{v,G^{\prime}}\rightarrow\mathcal{R}_{G}\neq \mathcal{R}_{G^{\prime}}\).
**Part (ii):**\(\mathcal{D}_{e,G}\neq\mathcal{D}_{e,G^{\prime}}\rightarrow\mathcal{R}_{G}\neq \mathcal{R}_{G^{\prime}}\). This is a trivial case as RePHINE consists of an augmented version of standard edge-color diagrams. Let \(f\) be the edge-color (injective) filter function associated with the standard diagrams. In this case, we can simply set \(f_{e}=f\), where \(f_{e}\) is RePHINE's edge-color filter. Then, the first and second elements of RePHINE's tuples correspond to \(\mathcal{D}_{e}\). Regarding the 1-dim features, the only difference is the way the information is encoded. While we adopted the convention \((1,d)\) for missing holes, the standard diagrams often use \((d,\infty)\). The relevant information is the same. Therefore, RePHINE is at least as expressive as edge-color persistence diagrams.
**Part (iii)**. To show that RePHINE is strictly more expressive than color-based PH, it suffices to provide an example of two graphs for which there is a filtration such that \(\mathcal{R}_{G}\neq\mathcal{R}_{G^{\prime}}\) but these graphs cannot be separated from any vertex- or edge-color filtration. We use the pair of graphs in Figure 3(c). We note that these graphs have no cycles, making 1-dim persistence information trivial.
We first note that their multisets of colors are identical and there is no color-separating sets for these two graphs -- i.e., there is no subset of colors whose removal would separate the graphs into distinct component-wise colors. Thus, by Theorem 1, there is no vertex-color filtration s.t. \(\mathcal{D}_{v,G}\neq\mathcal{D}_{v,G^{\prime}}\).
Also, we have that \(|V|=|V^{\prime}|\) and \(X=X^{\prime}\) and \(\beta_{G}^{0}=\beta_{G^{\prime}}^{0}\), and there is no color-disconnecting set for \(G\) and \(G^{\prime}\) (i.e., there is no edge colors whose removal would generate subgraphs with different number of components). By Theorem 2, these graphs cannot be separated by any edge-color filtration.
However, if we choose the filter functions \(f_{v}(\text{'blue'})=1\), \(f_{v}(\text{'orange'})=2\), \(f_{e}(\text{'blue-blue'})=4\), and \(f_{e}(\text{'blue-orange'})=3\), we obtain distinct RePHINE di agrams given by \(\mathcal{R}_{G}=\{\!\!\{(0,4,1,4),(0,\infty,1,3),(0,3,2,3),(0,3,2,3)\}\!\}\) and \(\mathcal{R}_{G^{\prime}}=\{\!\!\{(0,\infty,1,3),(0,4,1,3),(0,3,2,3),(0,3,2,3)\}\!\}\).
## Appendix C Implementation details
### Datasets
Table S1 reports summary statistics of the real-world datasets used in this paper. For the IMDB-B dataset, we use uninformative features (vector of ones) for all nodes. NCI1, NCI109, Proteins, and IMDB-B are part of the TU Datasets2, a vast collection of datasets commonly used for evaluating graph kernel methods and GNNs. MOLHIV is the largest dataset (over 41K graphs) and is part of the Open Graph Benchmark3. We also consider a regression task using the ZINC dataset -- a subset of the popular ZINC-250K chemical compounds [19], which is particularly suitable for molecular property prediction [7].
Footnote 2: [https://chrsmrrs.github.io/datasets/](https://chrsmrrs.github.io/datasets/)
Footnote 3: [https://ogb.stanford.edu](https://ogb.stanford.edu)
The cubic datasets (Cubic08, Cubic10, and Cubic12) comprise non-isomorphic 3-regular graphs with 8, 10, and 12 vertices, respectively. These datasets contain 5 (Cubic08), 19 (Cubic10), and 85 (Cubic12) graphs and can be downloaded at [https://houseofgraphs.org/meta-directory/cubic](https://houseofgraphs.org/meta-directory/cubic). For each dataset, we create a balanced graph classification problem by randomly assigning each graph a binary class. Also, since the graphs do not have node features, we add a scalar feature to each vertex, i.e., \(c(v)=1\) for all \(v\). However, this would make 1WL-GNNs and PH fail to distinguish any pair of graphs. Thus, we change the features of some arbitrary vertices of each graph, making \(c(v)=-1\) for 1 vertex in graphs from Cubic08, 2 vertices in Cubic10, and 3 vertices in Cubic12 -- we denote the resulting datasets as Cubic08-1, Cubic10-2, and Cubic12-3. Given the modified datasets, we aim to assess if the existing methods can overfit (correctly classify all) the samples.
### Models
We implement all models using the PyTorch Geometric Library [10].
Synthetic data.The GNN architecture consists of a GCN with 2 convolutional layers followed by a sum readout layer and an MLP (one hidden layer) with ReLU activation. The resulting architecture is: Conv(1, 36) \(\rightarrow\) Conv(36, 16) \(\rightarrow\) sum-readout \(\rightarrow\) BN(16) \(\rightarrow\) MLP(16, 24, 1), where BN denotes a batch norm layer [18]. For the PH model, we consider standard vertex-color filtration functions. In particular, we apply the same procedure as Hofer et al. [13], Horn et al. [15] to compute the persistence tuples. We only consider 0-dim persistence diagrams. The filtration function consists of an MLP with 8 hidden units and ReLU activation followed by a component-wise sigmoid function: Sigmoid(MLP(1, 8, 4)) -- i.e., we use \(4\) filtration functions with shared parameters. Since we can associate persistence tuples with vertices, we concatenate the resulting diagrams to obtain a \(|V|\times(4*2)\) matrix \([\mathcal{D}_{1}^{0},\mathcal{D}_{2}^{0},\mathcal{D}_{3}^{0},\mathcal{D}_{4}^{0}]\), where \(\mathcal{D}_{i}^{0}\) denotes the 0-dim diagram obtained using the \(i\)-th filtration function. This procedure was also employed by Horn et al. [15]. The obtained diagrams are processed using a DeepSet layer with mean aggregator and internal MLP function (\(\Psi\)) with 16 hidden and output units, MLP(4 * 2, 16, 16). We then apply a linear layer on top of the aggregated features. The overall DeepSet architecture is: MLP(4 * 2, 16, 16) \(\rightarrow\) Mean Aggregator \(\rightarrow\) Linear(16, 16). Finally, we obtain class predictions using BatchNorm followed by a single-hidden-layer MLP with 16 hidden units: BN(16) \(\rightarrow\) MLP(16, 16, 1).
RePHINE uses the same overall architecture as the PH model. The only differences are that i) RePHINE tuples are 3-dimensional (as opposed to 2-dimensional in PH), and ii) RePHINE additionally leverages an edge-level filtration function. Such a function follows the architecture of the vertex-level one, i.e., Sigmoid(MLP(1, 8, 4)). We note that RePHINE tuples are 3-dimensional instead of 4-dimensional because we removed their uninformative first component (equal to 0) since we only use 0-dim diagrams. In other words, we do not leverage missing holes.
Regarding the training, all models follow the same setting: we apply the Adam optimizer [21] for 2000 epochs with an initial learning rate of \(10^{-4}\) that is decreased by half every 400 epochs. We use batches of sizes 5, 8, 32 for the cubic08, cubic10, and cubic12 datasets, respectively. All results are averaged over 5 independent runs (different seeds). For all models, we obtain the expressivity metric by computing the uniqueness of graph-level representations extracted before the final MLP, with a precision of 5 decimals. Importantly, these choices of hyperparameters ensure that all models have a similar number of learned parameters: 1177 (RePHINE), 1061 (PH), and 1129 (GCN).
Real-world data.For computing the standard vertex-color persistence diagrams, we use the code available by Horn et al. [15], which consists of a parallel implementation in PyTorch of the pseudocode in Algorithm 1. Moreover, we apply a multiple filtration scheme and concatenate the 0-dim persistence diagrams to form matrix representations -- again similarly to the design in [15]. Then, we apply a DeepSet architecture of the form: MLP(TupleSize * nFiltrations, OutDim, OutDim) \(\rightarrow\)Mean Aggregator\(\rightarrow\)Linear(OutDim, OutDim). We use MLPs to define vertex- and edge-level filtration functions. For the 1-dimensional persistence tuples (or missing holes), we first process the tuples from each filtration function using a shared DeepSet layer and then apply mean pooling to obtain graph-level representations -- this avoids possibly breaking isomorphism invariance by concatenating 1-dimensional diagrams. We sum the 0- and 1-dim embeddings and send the resulting vector to an MLP head. The resulting topological embeddings are concatenated with last-layer GNN embeddings and fed into a final MLP classifier.
We carry out grid-search for model selection. More specifically, we consider a grid comprised of a combination of \(\{2,3\}\) GNN layers and \(\{2,4,8\}\) filtration functions. We set the number of hidden units in the DeepSet and GNN layers to 64, and of the filtration functions to 16 -- i.e., the vertex/edge-color filtration functions consist of a 2-layer MLP with 16 hidden units. For the largest datasets (ZINC and MOLHIV), we only use two GNN layers. The GNN node embeddings are combined using a global mean pooling layer. Importantly, for all datasets, we use the same architecture for RePHINE and color-based persistence diagrams.
For the TUDatasets, we obtain a random 80%/10%/10% (train/val/test) split, which is kept identical across five runs. The ZINC and MOLHIV datasets have public splits. All models are initialized with a learning rate of \(10^{-3}\) that is halved if the validation loss does not improve over 10 epochs. We apply early stopping with patience equal to 40.
Comparison to PersLay.We followed the guidelines in the official code repository regarding the choice of hyper-parameters. In particular, PersLay applies fixed filtration functions obtained from heat kernel signatures of the graphs with different parameters, resulting in extended and ordinary diagrams for 0 and 1-dimensional topological features. For RePHINE+Linear, we carry out a simple model selection procedure using grid-search for the number of filtration functions (\(\{4,8\}\)) and the number of hidden units (\(\{16,64\}\)) in the DeepSet models.
Hardware.For all experiments, we use Tesla V100 GPU cards and consider a memory budget of 32GB of RAM.
### Computing RePHINE diagrams
Algorithm 2 describes the computation of RePHINE diagrams. The pseudocode has been written for clarity, not efficiency. The replacement for \(\infty\) in real holes depends on the choice of edge and vertex filter functions. In all experiments, we employed the logistic function to the output of the feedforward networks (i.e., filtered values lie in \([0,1]\)) and used 1 to denote the death time of real holes.
## Appendix D Additional experiments
Here, we complement the experiments on synthetic data, providing illustrations of the learned persistence diagrams and reporting results obtained when we combine 0- and 1-dimensional diagrams.
In Figure S3, we show the concatenation of the learned persistence diagrams at the end of the training procedure for RePHINE and PH (i.e., standard vertex-color filtrations). In these examples, the RePHINE diagrams are different while the PH ones are identical. We can observe this behavior by carefully inspecting the multisets of vectors at each row of the concatenated tuples (each row of the plots in Figure S3). For instance, consider the diagrams in Figure S3(b): in the RePHINE diagram for the top graph, there is a row with 3 yellow entries which do not appear at the diagram for the bottom graph. However, the representations obtained from Standard PH are identical for these graphs.
In Figure 6 we reported results using only 0-dimensional topological features. For completeness, Figure S4 shows learning curves when exploiting both 0 and 1-dimensional diagrams. Overall, we can again observe that RePHINE produces higher expressivity and better fitting capability in comparison to vertex-color persistence diagrams.
Figure S3: 0-dimensional diagrams obtained from RePHINE and PH (standard vertex-color filtrations). These represent pairs of graphs for which the learning procedure in RePHINE could yield different representations, whereas PH produced identical graph-level embeddings. | ## Review
### Summary
This paper investigates the theoretical limits of persistent homology (PH) in graph learning, particularly focusing on the discriminative power of vertex- and edge-filtered graphs. It introduces RePHINE, a novel topological descriptor that combines both node and edge persistence diagrams, proving that it is more expressive than standard filtering methods. The authors provide necessary and sufficient conditions for distinguishing graphs and evaluate the performance of RePHINE on synthetic and real-world datasets. Overall, the paper presents a significant advancement in understanding and leveraging topological information in graph neural networks (GNNs).
### Strengths
- The theoretical results on the expressiveness of persistent homology (PH) are interesting and motivate the design of a new topological descriptor.
- RePHINE is shown to enhance the expressive power of GNNs on multiple benchmarks.
- The method integrates both vertex-level and edge-level persistent homology, proving to be more expressive than either category alone.
- The presentation is clear, effectively communicating complex ideas and theoretical results.
### Weaknesses
- The experimental evaluation lacks comprehensive comparisons with recent state-of-the-art (SOTA) methods for graph classification.
- The paper does not adequately discuss the limitations of the proposed approach, especially in practical applications.
- The exposition can be difficult to follow at times, particularly regarding the interplay between vertex- and edge-based PH.
- The authors should provide a clearer discussion of the specific datasets used and the rationale behind their experimental design.
### Questions
- How does RePHINE compare to extended persistence methods in both theory and practice?
- Is it established that RePHINE is isomorphism invariant, and how does this impact the theoretical discussions?
- What specific limitations or challenges exist when applying RePHINE to real-world datasets?
- Could the authors clarify the relevance of certain lemmas and the implications of their theoretical results?
### Soundness
**Score:** 3
**Description:** 3 = good: The theoretical foundations of the paper are sound, but some aspects require further validation through more comprehensive empirical evaluations.
### Presentation
**Score:** 3
**Description:** 3 = good: While the paper is generally well-presented, some sections could benefit from improved clarity and structure to enhance reader understanding.
### Contribution
**Score:** 3
**Description:** 3 = good: The contributions are significant, introducing a novel topological descriptor that enhances graph learning, but they would benefit from further exploration and validation.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements: The paper is technically solid and offers high impact in its area, but it requires some improvements in experimental validation and clarity.
### Paper Decision
**Decision:** Accept (oral)
**Reasons:** The paper presents an original investigation into the expressiveness of persistent homology in graph learning and introduces a novel method, RePHINE, which enhances the capabilities of graph neural networks. Despite some weaknesses in experimental validation and exposition, the theoretical contributions are robust, and the paper is well-written. The potential impact of the research justifies acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Improving Graph Matching with Positional Reconstruction Encoder-Decoder Network
Yixiao Zhou
Wangxuan Institute of Computer Technology
Peking University
&Ruiqi Jia
Wangxuan Institute of Computer Technology
Peking University
&Hongxiang Lin
Wangxuan Institute of Computer Technology
Peking University
&Hefeng Quan
School of computer science and engineering
Nanjing University of Technology
&Yumeng Zhao
School of Artificial Intelligence
Beijing University of Posts and Telecommunications
&Xiaoqing Lyu
Wangxuan Institute of Computer Technology, Beijing Institute of Big Data Research
Peking University
Correspondence: [email protected]. Author emails: {chnzyx,jiaruiqi}@pku.edu.cn, [email protected], [email protected], [email protected], [email protected]
###### Abstract
Deriving from image matching and understanding, semantic keypoint matching aims at establishing correspondence between keypoint sets in images. As graphs are powerful tools to represent points and their complex relationships, graph matching provides an effective way to find desired semantic keypoint correspondences. Recent deep graph matching methods have shown excellent performance, but there is still a lack of exploration and utilization of spatial information of keypoints as nodes in graphs. More specifically, existing methods are insufficient to capture the relative spatial relations through current graph construction approaches from the locations of semantic keypoints. To address these issues, we introduce a positional reconstruction encoder-decoder (PR-EnDec) to model intrinsic graph spatial structure, and present an end-to-end graph matching network PREGM based on PR-EnDec. Our PR-EnDec consists of a positional encoder that learns effective node spatial embedding with the affine transformation invariance, and a spatial relation decoder that further utilizes the high-order spatial information by reconstructing the locational structure of graphs contained in the node coordinates. Extensive experimental results on four public keypoint matching datasets demonstrate the effectiveness of our proposed PREGM.
## 1 Introduction
Image matching is a common and basic matching problem in multimedia and visual applications. The goal of image matching is to find a dense semantic content correspondence between two images, while feature-based matching is a typical strategy to solve the above problems, and in feature matching,the semantic keypoints in images are the most commonly used matching targets. Semantic keypoint matching aims at establishing correspondence between keypoint sets in images, and it has been applied to a wide range of applications, such as object tracking [35, 24], image retrieval [40, 11], and pose estimation [39, 3], etc.
The semantic keypoint data has the properties of non-Euclidean and unstructured for its disorder and discreteness, and graphs are powerful tools to represent some complex objects and their interactions. Specifically, we take each semantic keypoint as a node, and build edges by heuristic methods according to the positions of nodes to construct a corresponding graph. Graph matching is an effective method to solve the correspondence between the semantic keypoints in two images.
As deep learning models are scalable and time-efficient, Graph Neural Networks (GNNs) are increasingly utilized in graph matching tasks, such as GMN [44], PCA [32], and BBGM [29], etc.
However, most graph matching approaches rely on the matching of basic items of graphs, i.e. computing affinities of nodes and edges, lack the mining and utilization of spatial context information hidden in the locations of keypoints as nodes of graphs. Recently, in task scenarios such as point cloud and knowledge graph, which the nodes in graphs have locational information, there are methods such as [21, 18] utilizing positional encoder to add the spatial information as additional features. However, current methods in graph matching domain lack such spatial positional information supplementation. As the positional information is closer to the intrinsic characteristic of the graph, it is important to integrate spatial descriptions into graph feature embeddings in graph matching.
But utilizing underlying spatial information faces the following challenges: 1) Through taking key points as graph nodes and edges are associated with pairwise distance, such as delaunay triangulation, it is not sufficient to mine hidden relative positional relationship. 2) The description of spatial information from coordinates of keypoints is relatively simple compared with conventional visual features, so it needs more refined network structure to learn location information, in order to facilitate subsequent feature fusion.
To address the two challenges mentioned above, we propose a positional reconstruction encoder-decoder network (PR-EnDec) in our PREGM by integrating more spatial information into semantic features and learning more distinguishable node features. The positional encoder in PR-EnDec aims to learn the encoding of node positional information with spatial invariance, specifically, the affine transformation invariance of node coordinates. The spatial relation decoder in PR-EnDec is designed to recover the locational structure from the learned features of the encoder, and it reconstructs the relative distances and areas among the sampled nodes. To implement the joint learning of PR-EnDec about the intrinsic positional features in graphs, we, consequently, propose a corresponding contrastive loss function and a reconstruction loss function in the encoder and the decoder respectively.
Our main contributions are summarized as follows:
* We propose an end-to-end graph matching network, PREGM, which learns more distinguishable node features, not only captures visual information in images but also models intrinsic spatial structure of keypoints by our designed PR-EnDec.
* Our PR-EnDec consists of a positional encoder that learns effective graph positional encoding with affine transformation invariance, and a spatial relation decoder used to enhance such learning by reconstructing the graph locational information such as relative distances and areas from learned positional encoding. We also design a corresponding contrastive loss function and a reconstruction loss function in the encoder and decoder respectively to help learn effective node positional characteristics.
* We evaluate our PREGM on four public keypoint matching datasets. Our experimental results demonstrate that PREGM outperforms state-of-the-art methods. We also present an ablation study, which shows the effectiveness of each component of PREGM.
Related Work
### Graph Matching
Graph matching is always an effective strategy to find correspondence between semantic keypoints as graphs have sufficient representation ability for points and their complex relationships, and it can then be applied to downstream tasks such as image matching and understanding. As it is famous for being one of the practically most difficult NP-complete problems, researchers often use optimization algorithms to find an acceptable suboptimal solution for the graph matching problem. Traditional graph matching methods [17; 5; 7; 20; 36] generally use heuristic search algorithms to find suboptimal solutions.
Recent graph matching methods begin to combine combinatorial optimization with deep learning models. GMN [44] proposes an end-to-end graph matching framework, calculates the unary and pairwise affinities of nodes through CNN features, and also uses spectral matching as a differentiable solver to combine deep learning with graph matching. PCA [32] uses graph neural network (GNN) to learn and aggregate graph structure information into node feature representation, and introduces Sinkhorn network [1] as the combinatorial solver for finding node correspondence. BBGM [29] presents an end-to-end trainable architecture that incorporates a state-of-the-art differentiable combinatorial graph matching solver, and introduces a global attention mechanism to improve the matching performance.
Although the research on graph matching has made the above progress, the existing methods lack the distinguishing and informative graph spatial feature descriptions, and adaptive adjustment of node correspondence on graph matching. In this paper, we overcome these limitations by proposing a positional reconstruction encoder-decoder (PR-EnDec) and designing a contrastive learning and positional feature reconstruction procedure to guide the learning of graph matching.
### Positional Encoding on Graphs
Positional Encoding was first proposed in the Transformer [31] model. In general, positional encoding refers to the process of encoding spatial relationships or topological information of elements as supplementary information in order to augment the representational capacity of a model. The categorization of positional encoding can be further elaborated as follows: spatial positional encoding utilizes point location as input, while topological positional encoding employs the graph structure as input.
For spatial positional encoding, the point location is used as input to obtain a high-dimensional embedding that contains effective locational information. Space2Vec [21] introduces _theory_ and _grid_ as a 2D multi-scale spatial positional encoder by using sinusoidal functions with different frequencies. PointCNN [18] designs a point set spatial positional encoder with Point Conv layers consisting of sampling layer, grouping layer, and aggregation layer. DGCNN [38] considers both global shape structure and local neighborhood information, and proposes spatial positional encoder layers containing dynamic graph neighborhood and edge convolution module. This paper proposes a spatial position encoding on graphs by leveraging the characteristic of node representation containing coordinate information in image keypoint matching.
Topological positional encoding is specific to graph matching, where the graph structure is used as input to obtain the topological embedding of the nodes. Maskey et al. [22] generalizes Laplacian-based positional encoding by defining the Laplace embedding to more general dissimilarity functions such as p-norm rather than the 2-norm used in the original formulation. INFMCS [16] designs a permutation-invariant node ordering based on closeness centrality, and effectively enhances the representation of graph structural features.
In recent years, there has also been research on spatial positional encoding in the field of graph matching. GCAN [13] utilizes SplineCNN [10] to encode positional information in 2D spatial space, and successfully capture structural information of the graph.
However, there remains the open question of how to further mine intrinsic spatial node information in graphs. In this paper, we propose a positional reconstruction encoder-decoder architecture to obtain effective node spatial positional encoding using 2D point location.
## 3 Problem Formulation of Graph Matching
Given the keypoints in the images, each node is generally associated with a keypoint, and edges are built according to the spatial position relationship between nodes when building an undirected graph. The graph is represented by \(\mathbb{G}=(\mathbb{V},\mathbb{E},\mathcal{V},\mathcal{E})\):
* \(\mathbb{V}=\{v_{1},...,v_{n}\}\) denotes the node set.
* \(\mathbb{E}\subseteq\mathbb{V}\times\mathbb{V}\) denotes the edge.
* \(\mathcal{V}=\{\mathbf{v}_{1}|\mathbf{v}_{1}\in\mathbb{R}^{d_{v}},i=1,2,...,| \mathbb{V}|\}\) denotes the node feature set.
* \(\mathcal{E}=\{\mathbf{e}_{1}|\mathbf{e}_{1}\in\mathbb{R}^{d_{e}},i=1,2,...,| \mathbb{E}|\}\) denotes the edge feature set.
We use the adjacency matrix \(\mathbb{A}\) to represent the connections of nodes in an undirected graph \(\mathbb{G}\), that is, \(\mathbb{A}_{ij}=1\) iff there is an edge \(e_{ij}=(v_{i},v_{j})\in\mathbb{E}\).
Given two graphs \(\mathbb{G}_{1}=(\mathbb{V}_{1},\mathbb{E}_{1},\mathcal{V}_{1},\mathcal{E}_{1})\) and \(\mathbb{G}_{2}=(\mathbb{V}_{2},\mathbb{E}_{2},\mathcal{V}_{2},\mathcal{E}_{2})\), where \(|\mathbb{V}_{1}|=|\mathbb{V}_{2}|=n\), the goal of graph matching is to find the correspondence matrix \(\mathbf{X}\in\{0,1\}^{n\times n}\) between nodes \(\mathbb{V}_{1}\) and \(\mathbb{V}_{2}\) of the two graphs \(\mathbb{G}_{1}\) and \(\mathbb{G}_{2}\), wherein \(\mathbf{X}_{ij}=1\) iff \(v_{i}\in\mathbb{V}_{1}\) corresponds to \(v_{j}\in\mathbb{V}_{2}\).
The ultimate goal of the graph matching problem is to find the optimal node correspondence between two graphs \(\mathbb{G}_{1}\) and \(\mathbb{G}_{1}\), so the graph matching problem is often considered as a quadratic assignment programming problem (QAP problem):
\[\mathbf{x}^{*}=\operatorname{argmax}_{\mathbf{x}}\mathbf{x}^{T}\mathbf{K} \mathbf{x}, \tag{1}\]
where \(\mathbf{x}=\mathit{vec}(\mathbf{X})\in\{0,1\}^{n^{2}}\), \(\mathbf{X}\mathbf{1}_{n}=\mathbf{1}_{n}\), and \(\mathbf{X}^{T}\mathbf{1}_{n}=\mathbf{1}_{n}\). \(\mathbf{x}^{*}\) denotes the desired node correspondence, and \(\mathbf{1}_{n}\) denotes a vector of \(n\) ones. \(\mathbf{K}\in\mathbb{R}^{n^{2}\times n^{2}}\)is the corresponding affinity matrix:
\[\mathbf{K}_{ia,jb}=\begin{cases}s_{ia}^{v},&\text{if }i=j\,\&\,a=b,\\ s_{ia,jb}^{e},&\text{else if }\mathbb{A}_{ij}^{e}\mathbb{A}_{ab}^{2}>0,\\ 0,&\text{otherwise}.\end{cases} \tag{2}\]
where \(s_{ia}^{v}\) represents the affinity between node features \(\mathbf{v}_{i}\in\mathcal{V}_{1}\) and \(\mathbf{v}_{a}\in\mathcal{V}_{2}\), \(s_{ia,jb}^{e}\) represents the affinity between edge features \(\mathbf{e}_{ij}\in\mathcal{E}_{1}\) and \(\mathbf{e}_{ab}\in\mathcal{E}_{2}\).
## 4 Methodology
The framework of our PREGM Network is illustrated in Figure 1. The positional encoder module of pretrained PR-EnDec takes the coordinates of keypoints as input. It extracts a positional encoding vector for each keypoint in the images. Next, the positional encoding and the visual features extracted by vgg16_bn are further fused. Then, the fused features are fed into the message-passing module to obtain the refined node feature. Finally, the node affinity is computed and processed by the LPMP solver, generating the node correspondence matrix \(\mathrm{X}\) introduced in the problem formulation.
Figure 1: **The framework of our PREGM. The visual features are extracted by vgg16_bn from images, and the positional encoding is obtained by our pretrained positional encoder from graphs. The visual features and positional encoding of nodes are further fused to be fed into the following modules. Next, the fused features are updated and refined by the message passing module SplineCNN. Finally, we compute node affinity from updated features and adopt a differentiable graph matching solver LPMP to find the correspondence matrix \(\mathrm{X}\).**As an independent stage, we pretrain the abovementioned positional encoder along with the spatial relation decoder module of PR-EnDec (as shown in Figure 2). The pretraining stage enables the positional encoder to learn the high-order spatial information. The PR-EnDec consisting two modules,
**Positional Encoder.** This module takes the coordinates of keypoints as input and learns positional encoding. We first obtain the high-dimensional coordinate embedding vectors by an MLP. The vectors are then fed into self-attention blocks and a normalization layer to learn positional encoding. The attention mechanism provides sequence independence and relative information of nodes. Besides, we propose a contrastive loss for the encoder module. The loss function ensures the affine transformation invariance and learns the relative position information.
**Spatial Relation Decoder.** To reconstruct the spatial structure of graphs, this module generates triangle assignments by distance-based random sampling. In each triangle assignment, the module takes positional encoding corresponding to the three keypoints as the input of self-attention blocks. Next, the processed features are fed into MLPs, and the relative spatial relations of the three points are reconstructed. We adopt a Mean Squared Error (MSE) loss to compare the ground truth relative spatial relations and the decoder's predictions.
### Positional Encoder
The Positional Encoder only takes geometric features of keypoints as inputs,
\(\mathcal{V}^{g}=\{v_{i}^{g}|v_{i}^{g}\in\mathbb{R}^{d^{g}},i=1,...,n\}\) denotes the node geometric feature set.
Specifically, we utilize the 2D Cartesian coordinates of each node \(v_{i}\) as its geometric feature \(v_{i}^{g}=[x_{i},y_{i}]\). The coordinate embedding sub-module embeds the geometric feature \(\mathcal{V}^{g}\) into latent representations by a two-layer MLP, denoted as \(\rho^{v}\). The updated geometric graph \(\mathbb{G}^{g}=(\mathbb{V},\rho^{v}(\mathcal{V}^{g}))\) is then fed into the following attention sub-module.
The attention sub-module consists of \(l_{m}\) attention blocks (denoted as \(Attn_{i}^{e}\)) and a normalization layer (denoted as \(Norm\)), which updates latent representations of \(\mathbb{G}^{g}\). An attention block is composed of a multi-head self-attention layer and a feed-forward layer, which captures the spatial feature of \(\mathbb{G}^{g}\) by aggregating the latent representations of each node.
\[\mathcal{V}_{0}^{g}=\rho^{v}(\mathcal{V}^{g}), \tag{3}\]
\[\mathcal{V}_{i}^{g}=Attn_{i}^{e}(\mathcal{V}_{i-1}^{g}),i=1,2,...,l_{m}. \tag{4}\]
The self-attention mechanism is sequence-independent, which is well suited for extracting high-order relative position information between nodes. There is a normalization layer after attention blocks to allow the positional encoding of nodes to be better fused with visual features nodes on the graph matching task. The final positional encoding obtained by the encoder module is denoted as \(F_{pos}\).
\[F_{pos}=Norm(\mathcal{V}_{l_{m}}^{g}). \tag{5}\]
**Encoder Loss.** For two given graphs \(\mathbb{G}_{1}=(\mathbb{V}_{1},\mathbb{E}_{1},\mathcal{V}_{1},\mathcal{E}_{1})\) and \(\mathbb{G}_{1}=(\mathbb{V}_{1},\mathbb{E}_{1},\mathcal{V}_{1},\mathcal{E}_{1})\), the positional encoder generates the positional encoding denoted as \(F_{pos}^{1}=\{f_{i}^{1}|i=1,2,...,|\mathbb{V}|\}\) and \(F_{pos}^{2}=\{f_{i}^{2}|i=1,2,...,|\mathbb{V}|\}\) respectively. We expect that for all matched node \(i\) in graph \(\mathbb{G}_{1}\) and node \(j\) in graph \(\mathbb{G}_{2}\), the corresponding node positional encoding \(f_{i}^{1}\) and \(f_{j}^{2}\) should be similar. To let the encoder learn the relative position information of the nodes, we perform contrastive learning that
Figure 2: The framework of our PR-EnDec network and the detailed structure of its positional encoder and spatial relation decoder.
perseves the affine invariance of the positional encoding. Specifically, we adopt negative samples to conduct contrastive learning to avoid the encoder module outputs trivial node features that are all similar.
Therefore, we classify \((\mathbb{G}_{1},\mathbb{G}_{2})\) as a positive graph pair if \(\mathcal{V}_{2}^{g}\) is affine transformed by \(\mathcal{V}_{1}^{g}\) or the keypoints in \(\mathcal{V}_{1}^{g}\) and \(\mathcal{V}_{2}^{g}\) are one-by-one matched. On the contrary, \((\mathbb{G}_{1},\mathbb{G}_{2})\) is a negative graph pair if the keypoints are not matched.
For each training batch, the positive graph pair set \(\mathbb{GS}_{p}\) and negative graph pair set \(\mathbb{GS}_{n}\) are generated by node permuting or affine transformation. Let \(\mathbb{FS}_{p}\) and \(\mathbb{FS}_{n}\) denote the corresponding positional encoding pair set of \(\mathbb{GS}_{p}\) and \(\mathbb{GS}_{n}\) respectively, we propose a contrastive loss \(\mathcal{L}_{con}\),
\[S_{p}=\sum_{(F^{1}_{pos},F^{2}_{pos})\in\mathbb{FS}_{p}}exp(\tau* sim(F^{1}_{pos},F^{2}_{pos})), \tag{6}\]
\[S_{n}=\sum_{(F^{1}_{pos},F^{2}_{pos})\in\mathbb{FS}_{n}}exp(\tau* sim(F^{1}_{pos},F^{2}_{pos})), \tag{7}\]
\[\mathcal{L}_{con}=-Log(\frac{S_{p}}{S_{p}+S_{n}}), \tag{8}\]
where \(S_{p}\) and \(S_{n}\) denote the sum of the exponential of positive/negative pairs' similarity, respectively. Additionally, \(\tau\) denotes the temperature constant, and \(sim()\) denotes the cosine similarity.
### Spatial Relation Decoder
To learn the high-order spatial information, we propose \(k\)-th-order geometric reconstruction assignments. Sampling \(k\) points from the node set as an assignment, the corresponding geometric feature set is denoted as \(\mathcal{V}_{s}^{g}\) and positional encoding set as \(F_{sub}\subseteq F_{pos}\). If a decoder \(Dec\) takes \(F_{sub}\) as input, we claim the decoder provides \(k\)-th-order relative position information if there exists a non-degenerate geometric function \(\mathcal{F}\), \(\mathcal{F}(\mathcal{V}_{s}^{g})\approx Dec(F_{sub})\). Specifically, we choose \(k=3\) and function \(\mathcal{F}\) with Euclidean distance and triangle area to learn the third-order relative position information.
The decoder module first samples three keypoints and obtains the corresponding geometric feature set \(\mathcal{V}_{s}^{g}=\{v_{a}^{g},v_{b}^{g},v_{c}^{g}\}\) and positional encoding set \(F_{sub}=\{f_{a},f_{b},f_{c}\}\). The random sampling method is related to the Euclidean distance from the previous sampled node to learn the relative information better, for the sampling method guarantees that the distances between the sampled nodes are not too far. Next, the sampled feature set \(F_{sub}\) is fed into \(l_{n}\) attention blocks. The intermediate feature is denoted as \(\hat{F}_{sub}=\{\hat{f}_{a},\hat{f}_{b},\hat{f}_{c}\}\). We utilize 2 MLPs, denoted as \(\rho_{d}\) and \(\rho_{a}\), to approximate Euclidean distance and triangle area function, respectively. The reconstructed relative position information of the decoder is represented as:
\[RC=[\rho_{d}(\hat{f}_{a},\hat{f}_{b}),\rho_{d}(\hat{f}_{b},\hat{f}_{c}),\rho_{ d}(\hat{f}_{a},\hat{f}_{c}),\rho_{a}(\hat{f}_{a},\hat{f}_{b},\hat{f}_{c})]. \tag{9}\]
Let \(Dis\) denotes Euclidean distance and \(Area\) denotes triangle area function,\(RC\)'s corresponding geometric function \(\mathcal{F}\) is obviously
\[\mathcal{F}(\mathcal{V}_{s}^{g})=[dis1,dis2,dis3,area], \tag{10}\]
where \(dis1=Dis(v_{a}^{g},v_{b}^{g}),dis2=Dis(v_{b}^{g},v_{c}^{g}),dis3=Dis(v_{a}^{g },v_{c}^{g}),area=Area(v_{a}^{g},v_{b}^{g},v_{c}^{g})\) denote ground-truth third-order relative position information for the decoder to reconstruct.
**Decoder Loss.** Since we calculate the approximated relative position information \(RC\) and the ground-truth \(RC^{gt}=\mathcal{F}(\mathcal{V}_{s}^{g})\), we propose the reconstruction loss \(\mathcal{L}_{rec}\):
\[\mathcal{L}_{rec}=\mathrm{MSE}(RC,RC^{gt}). \tag{11}\]
**PR-EnDec Loss.** Finally, we combine the two losses to guide the training of our PR-EnDec jointly,
\[\mathcal{L}_{PR-EnDec}=\mathcal{L}_{con}+\lambda\cdot\mathcal{L}_{rec}, \tag{12}\]
where \(\lambda\) controls the relative importance of \(\mathcal{L}_{rec}\).
Through pretraining of PR-EnDec, we obtain an effective encoder for extracting spatial positional encoding for utilization in subsequent PREGM.
### Graph Matching with PR-EnDec
Our training procedure is divided into two stages: In the first stage, we pretrain our PR-EnDec, and in the second stage of graph matching, our positional encoder module of PR-EnDec serves as a sub-module of the base graph matching model, providing learned positional encoding in the first stage.
After the training of PR-EnDec, we freeze all parameters of the encoder module and put it in the training of graph matching tasks. The positional encoding \(F_{pos}\) generated by the positional encoder module is fused with visual features \(F_{vis}\) extracted by a standard CNN:
\[F_{fused}=Linear(F_{pos})+F_{vis}. \tag{13}\]
where \(Linear\) denotes a linear layer.
Next, we feed the fused features into a message-passing module to produce the refined node features, and the message-passing module is implemented by a two-layer SplineCNN [10]. After the message-passing module, node and edge affinities are computed and passed to the differentiable graph matching solver LPMP in [25]. The resulting correspondence matrix X is compared to the ground truth X\(gt\) and the loss function is their Hamming distance:
\[\mathcal{L}_{GM}=\text{X}\cdot(1-\text{X}^{gt})+\text{X}^{gt}\cdot(1-\text{X}). \tag{14}\]
So far, the graph matching problem formulated previously is solved by our PREGM with the two sequential training phases: the PR-EnDec pretrain and the graph matching task.
## 5 Experiments
We conduct experiments on four public keypoint matching datasets and verify the effectiveness of each component of our PREGM by ablation study. We introduce the datasets, baselines, implementation details, and then report the results. The results demonstrate that our PREGM consistently outperforms all other approaches.
### Datasets
We evaluate our PREGM on four public keypoint matching datasets: PascalVOC [8], Willow ObjectClass [4], Spar-71k [23], and IMC-PT-SparseGM [14].
The PascalVOC dataset includes 20 classes of keypoints with Berkeley annotations [2] and images with bounding boxes. The PascalVOC dataset is relatively challenging, since the scale, pose and illumination of instances in images are rich and diverse, and the number of annotated keypoints in each image also varies from 6 to 23. When conducting experiments on the PascalVOC dataset, we follow the standard protocol [32, 37]: First, each object is cropped according to its corresponding bounding box and scaled to 256 x 256 px. Second, we use 7,020 images for training and 1,682 for testing.
The Willow ObjectClass dataset contains images of five categories: face, duck, winebottle, car, and motorbike, the first three categories are from the Caltech-256 dataset [12], and the last two categories are from the PascalVOC 2007 dataset [8]. Each category contains at least 40 different images, and each image is labeled with 10 distinctive keypoints on the target object to be matched. Following the default setting in [4, 32], we crop the images to the bounding boxes of the objects and rescale to 256 px, 20 images of each class are selected during training, and the rest are for testing.
The SPair-71k dataset is a relatively novel dataset, which was recently published in the paper [23] about dense image matching. Spar-71k contains 70958 image pairs of 18 categories from PascalVOC 2012 dataset [8] and Pascal 3D+ dataset [41]. Compared with the other two datasets, SPair71-k has the advantages of higher image quality and richer annotations which includes detailed semantic keypoints, object bounding boxes, viewpoints, scales, truncation, and occlusion differences of image pairs. In addition, compared with PascalVOC dataset, SPair-71k removes two classes with ambiguous and poor annotations: sofa and dining table. Following [29], we use 53,340 image pairs for training, 5,384 for validation, and 12,234 for testing, and we also scale each image to 256x 256 px.
The IMC-PT-SparseGM dataset contains 16 object categories and 25061 images [14], which gather from 16 tourist attractions around the world. The IMC-PT-SparseGM benchmark involves matchinglarger scenes and take a step closer to the real-world downstream tasks. We take 13 classes as the training set and the other 3 classes as the test set. Experiments are conducted on the benchmark with 50 anchors.
### Baselines and Metrics
We compare our PREGM with the state-of-the-art methods, including two types of baselines: (1) 2 non-learning methods GNCCP [20], ABPF [36]; (2) 11 learning-based methods GMN [44], PCA [32], CIE [43], NGM [33], DGMC [9], EAGM [27], BBGM [29], DIP-GM [42], IA-NGM-v2 [26], ASAR [28], COMMON [19]. For a fair comparison, we follow previous work [32] and extract node visual initial features from relu4_2 and relu5_1 of vgg16_bn [30] via feature alignment. For the input image pairs, we construct the edges of graphs by Delaunay triangulation [6]. We adopt a common metric matching accuracy to evaluate the experimental results, which is computed as the number of correctly matched keypoint pairs averaged by the total number of all true matched keypoint pairs.
### Experimental Settings
In our implementation, we build up our PREGM based on the state-of-the-art method [29], which presents an end-to-end trainable architecture that incorporates a state-of-the-art differentiable combinatorial graph matching solver. Our codes will be available in the future. In all experiments, we use the same set of hyperparameters. We employ Adam [15] optimizer with an initial learning rate of \(1\times 10^{-4}\) for PR-EnDec, and \(9\times 10^{-4}\) for other models, and the learning rate is halved every three epochs. We empirically set \(l_{m}\) = 3 and \(l_{n}\) = 2 in the encoder and decoder, and choose temperature constant \(\tau=2\) in \(L_{con}\), and balance factor \(\lambda=1/32\) in \(L_{PR-EnDec}\). We also set batch size = 8 for PascalVOC, Willow ObjectClass, and Spain-71k datasets. Our learning procedure is divided into two stages: The pre-training of the PR-EnDec is uniformly conducted on the PascalVOC dataset. Then the graph matching framework is trained on each dataset, while the parameters of the positional encoder are fixed. All experiments are run on a single GTX-1080Ti GPU.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c c c} \hline Method & aero & bike & bird & boat & bottle & car & cat & chair & cow & table & dog & horse & motor & person & plant & sheep & sofa & train & tv & Avg \\ \hline GNCCP & 25.9 & 37.1 & 46.2 & 53.1 & 48.0 & 56.3 & 45.5 & 34.7 & 36.3 & 34.2 & 25.3 & 39.8 & 39.6 & 40.7 & 61.9 & 57.4 & 50.5 & 67.0 & 34.8 & 41.6 \\ ABPF & 30.9 & 40.4 & 43.7 & 54.5 & 50.3 & 56.1 & 46.7 & 34.0 & 38.9 & 16.3 & 34.8 & 39.8 & 39.6 & 53.2 & 57.9 & 50.2 & 41.3 & 42.7 \\ \hline GMN & 31.9 & 42.7 & 31.9 & 40.8 & 67.7 & 22.2 & 33.6 & 44.6 & 37.4 & 34.6 & 30.8 & 37.6 & 35.1 & 49.5 & 30.8 & 35.3 & 41.2 & 37.7 \\ PCA & 40.9 & 55.0 & 68.8 & 47.9 & 76.9 & 77.9 & 63.5 & 64.7 & 33.7 & 65.6 & 61.3 & 69.9 & 62.8 & 44.9 & 77.5 & 64.7 & 58.7 & 56.7 & 50.9 & 63.8 \\ CIE & 51.2 & 69.2 & 68.1 & 55.0 & 52.8 & 67.0 & 74.9 & 36.6 & 68.3 & 71.0 & 70.0 & 71.8 & 66.8 & 64.8 & 45.2 & 69.6 & 65.4 & 58.2 & 62.9 & 64.9 \\ NGM & 50.8 & 64.5 & 50.9 & 55.5 & 79.4 & 76.9 & 74.4 & 69.9 & 41.3 & 62.3 & 68.5 & 65.2 & 62.4 & 64.7 & 47.8 & 78.7 & 60.3 & 81.4 & 89.6 & 66.6 \\ DGM & 50.4 & 67.0 & 70.5 & 70.5 & 78.5 & 85.2 & 52.5 & 74.3 & 46.2 & 69.9 & 67.9 & 73.5 & 68.4 & 61.6 & 69.0 & 73.2 & 69.8 & 59.6 & 73.2 \\ EAGM & 49.4 & 42.1 & 64.5 & 75.3 & **90.8** & 80.9 & 71.1 & 61.3 & 48.7 & 65.9 & 67.5 & 55.4 & 66.3 & 60.1 & 56.3 & 97.1 & 60.4 & 60.6 & 96.0 & 90.0 & 90.5 \\ BEMM & 61.5 & 79.8 & 78.1 & 80.0 & 57.9 & 89.0 & 71.1 & 68.7 & 76.5 & 79.5 & 78.6 & 78.6 & 85.1 & 67.4 & 74.7 & 77.5 & 79.7 & 94.4 & 90.1 \\ DPM-v2 & 58.5 & 74.9 & 72.0 & 86.0 & 87.1 & **94.7** & 98.5 & 79.8 & 60.4 & 77.5 & 79.8 & 76.0 & 76.2 & 78.2 & 74.5 & 67.1 & 77.4 & 76.4 & 96.5 & 92.3 & 79.6 \\ Li-NCN-v2 & 41.5 & 73.9 & 74.0 & 79.4 & 97.6 & 87.7 & 77.5 & 77.1 & 77.4 & 78.0 & 77.4 & 77.1 & 64.3 & 98.5 & 77.5 & 77.5 & 79.5 & 96.2 & 93.9 \\ ASAR & 62.9 & 74.3 & 75.9 & **80.1** & 89.2 & 94.8 & 88.9 & 77.9 & 75.8 & 78.2 & 78.9 & 79.5 & 77.9 & 61.9 & 96.2 & 77.5 & 77.1 & 76.8 & 95.7 & 81.1 \\ COMMON & **65.6** & 72.5 & **80.3** & 75.9 & 93.2 & 93.1 & 81.8 & 61.6 & 80.9 & **85.2** & 81.6 & 79.5 & 66.6 & **98.9** & 78.8 & **90.3** & **93.3** & 82.7 \\ \hline Ours & 02.6 & **76.4** & 79.3 & 78.7 & 90.3 & 90.1 & **93.2** & **83.4** & **71.9** & **82.3** & 92.4 & **82.4** & **82.0** & **85.0** & **89.2** & 96.7 & 78.4 & **90.7** & **96.7** & **95.1** & **84.1** \\ \hline \end{tabular}
\end{table}
Table 1: Matching accuracy (%) on Pascal VOC dataset.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline Method & PT & WT & car & duck & face & mbike & wbottle & Avg \\ \hline GNCCP & \(\times\) & \(\times\) & 86.4 & 77.4 & **100.0** & 95.6 & 95.7 & 91.0 \\ ABPF & \(\times\) & \(\times\) & 88.4 & 80.1 & **100.0** & 96.2 & 96.7 & 92.3 \\ \hline GMN & \(\times\) & \(\times\) & 74.3 & 82.8 & 99.3 & 71.4 & 76.7 & 80.9 \\ PCA & \(\times\) & \(\times\) & 84.0 & 93.5 & **100.0** & 76.7 & 96.9 & 90.2 \\ CIE & \(\times\) & \(\times\) & 82.2 & 81.2 & **100.0** & 90.0 & 97.6 & 90.2 \\ NGM & \(\times\) & ✓ & 84.1 & 77.4 & 99.2 & 82.1 & 93.5 & 87.3 \\ DGMC & \(\times\) & ✓ & **98.3** & 90.2 & **100.0** & 98.5 & 98.1 & 97.0 \\ \hline EAGM & \(\times\) & ✓ & 94.5 & 93.2 & **100.0** & 98.8 & 99.9 & 97.7 \\ \hline EAGM & \(\times\) & ✓ & 94.4 & 89.7 & **100.0** & 99.3 & 99.2 & 96.5 \\ \hline BBGM & \(\times\) & ✓ & 96.9 & 89.0 & **100.0** & 99.2 & 98.8 & 96.8 \\ & ✓ & ✓ & 95.7 & 93.1 & **100.0** & 98.9 & 99.1 & 97.4 \\ \hline Ours & \(\times\) & ✓ & 97.4 & 96.8 & **100.0** & **99.8** & 96.8 & 98.2 \\ \hline \end{tabular}
\end{table}
Table 2: Matching accuracy (%) on Willow dataset.
### Performance Evaluation
We conduct experiments on four public datasets: PascalVOC, WillowObject Class, Spain-71k and IMC-PT-SparseGM for the keypoint matching problem, and follow the most common experimental setting, where intersection filtering is applied to generate graphs with the equal size.
Firstly, we report the matching accuracy of the 20 classes and average accuracy on PascalVOC dataset in Table 1, where the best results are shown in bold. The results demonstrate that our model PREGM performs much better than all traditional graph matching methods, and also achieves better performance against state-of-the-art deep graph matching models with matching accuracy 84.1%. And in the 20 classes of PascalVOC dataset, our PREGM has achieved the best results in the other 17 classes except for the boat, bottle and train class. As there are large differences in the scale, pose and illumination of matching objects, PascalVOC is a complex matching dataset, thus the matching accuracy of traditional graph matching methods is not more than 50%, and the performance of deep graph matching methods is relatively better. Our PREGM learns effective node spatial characteristics by PR-EnDec, so as to further improve the graph matching performance.
For the relatively simple WillowObject Class dataset, as shown in Table 2, there are two different training strategies: PT and WT, which PT means matching frameworks are pre-trained on PascalVOC, and WT means learning models are then fine-tuned on WillowObject Class Dataset. In this paper, we adopt two strategies: PT only, both PT and WT, and in both situations, our model achieves the best performance among other state-of-the-art methods with matching accuracy 98.2% and 99.3%. The results again demonstrate the effectiveness of learning positional node features hidden in graphs in our PREGM.
We also conduct experiments on Spair-71k dataset, as shown in Table 3, we compare our PREGM with deep learning methods GMN, PCA, DGMC, BBGM, and DIP-GM. Our PREGM performs best with matching accuracy 81.9%, and shows superiority in total 18 classes, which demonstrates the generalization ability of our model on different objects. SPair-71k dataset, as a relatively new dataset, has the advantages of high image quality, rich annotation, and fixed image pairs for matching, which is a more convincing dataset. Thus our PREGM achieves the best results on the SPair-71k dataset, which further proves the effectiveness of our PREGM.
Additionally, we evaluate our model on the IMC-PT-SparseGM dataset, as shown in Table 4, our model demonstrates outstanding performance on this demanding benchmark. The results outshine the performance of the other methods, GANN-GM [34] and BBGM, by a significant margin. In terms of the average accuracy across these landmarks, our model excels with an impressive mean accuracy of 90.7%. The IMC-PT-SparseGM dataset is characterized by its substantial number of images, nodes, and high partial rate, making it one of the most comprehensive benchmarks for visual graph matching. Our approach, showcased in these results, demonstrates its ability to handle larger scenes and move closer to real-world applications, such as structure from motion.
\begin{table}
\begin{tabular}{l c c c c} \hline Method & Rechtag & Sacrc\_coeur & St\_peters\_square & Avg \\ \hline GANN-GM & 76.0 & 44.2 & 50.5 & 56.9 \\ BBGM & 99.1 & 79.5 & 86.8 & 88.4 \\ \hline Ours & **99.8** & **84.8** & **87.6** & **90.7** \\ \hline \end{tabular}
\end{table}
Table 4: Matching accuracy (%) on the IMC-PT-SparseGM.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c c c} \hline Method & aero & bike & boat & bottle & bus & car & chair & cow & dog & horse & mbike & person & plant & sheep & train & tv & Avg \\ \hline GNN & 30.1 & 40.4 & 62.7 & 46.8 & 60.8 & 56.7 & 57.2 & 62.0 & 40.5 & 51.8 & 49.5 & 36.5 & 35.3 & 76.1 & 44.7 & 61.2 & 57.7 & 35.7 \\ PCA & 58.9 & 42.3 & 72.1 & 54.1 & 61.2 & 77.3 & 66.1 & 65.2 & 50.4 & 64.9 & 56.8 & 55.5 & 64.3 & 53.4 & 86.2 & 49.1 & 75.5 & 61.4 & 63.6 \\ DGMC & 54.8 & 44.8 & 80.3 & 70.9 & 65.5 & 90.1 & 78.5 & 66.7 & 66.4 & 73.2 & 66.2 & 66.5 & 65.7 & 91.9 & 89.7 & 68.5 & 84.9 & 98.0 & 72.2 \\ BBGM & 66.9 & 57.7 & 88.5 & 78.5 & 66.9 & 58.4 & 86.1 & 74.6 & 68.3 & 78.9 & 73.0 & 67.5 & 79.3 & 73.0 & 98.1 & 74.8 & 95.0 & 96.6 & 78.9 \\ DIP-GM & 63.7 & 54.5 & 98.0 & 89.0 & 68.2 & 95.0 & 87.3 & 73.5 & 71.0 & 73.7 & 73.4 & 68.1 & 75.1 & 71.2 & 98.8 & 76.9 & 96.0 & 99.2 & 78.7 \\ \hline Ours & **71.3** & **61.9** & **89.6** & **82.9** & **68.4** & **98.4** & **98.1** & **75.1** & **77.6** & **86.1** & **77.3** & **74.9** & **83.4** & **74.3** & **79.5** & **77.6** & **97.5** & **99.3** & **82.4** \\ \hline \end{tabular}
\end{table}
Table 3: Matching accuracy (%) on Spair-71k dataset.
\begin{table}
\begin{tabular}{l|c c c c} \hline Method & baseline & w/o positional encoder & w/o spatial relation decoder & w/o recon\_area & w/o recon\_dis & w/o visual features \\ \hline Accuracy & **84.1\%** & \(83.4\%\) & \(81.7\%\) & \(83.8\%\) & \(82.7\%\) & \(76.8\%\) \\ \hline \end{tabular}
\end{table}
Table 5: Ablation Study on PascalVOC dataset.
### Ablation Study and Parameter Analysis
To evaluate the effect of each component in our framework, we conduct comprehensive ablation studies with/without positional encoder, spatial relation decoder, and we also consider the two separate cases of no reconstruction of areas and relative distances in the loss function of spatial relation decoder. Furthermore, we conducted additional ablation experiments by removing visual features to evaluate the effectiveness of the spatial features extracted by our model. The experiments are performed on PascalVOC dataset. As shown in Table 5, compared with the baseline, the results demonstrate that all modules bring substantial performance gains, the reconstruction of relative distance is more important in the decoder, and the spatial relation decoder contributes most to the overall performance. Moreover, with the fact that only taking spatial features as node features can achieve relatively good performance, it further proves the effectiveness of the spatial features learned from our PR-EnDec.
We also conduct parameter analysis to select hyperparameters. As shown in Table 6, PREGM achieves the best performance when learning rate = \(9\times 10^{-4}\), which is our default setting, and it also shows that adjusting the learning rate causes an accuracy fluctuation of about 1%. For the balance factor \(\lambda\) in the loss function, when \(\lambda=1/128,1/32\), and \(1/8\), the matching accuracy is 83.7%, 84.1%, and 83.3% respectively. Thus we select \(\lambda=1/32\) as our default setting, and the results show that our method is rather sensitive to the choice of \(\lambda\), and our designed loss function in positional encoder and spatial relation decoder indeed improves the performance.
## 6 Conclusion
In this paper, we present an end-to-end novel deep graph matching network PREGM that learns more distinguishable node features by modeling spatial positional information in graphs. Our PREGM designs a common encoder-decoder architecture consisting of a positional encoder that learns effective node positional encoding and a spatial relation decoder that reconstructs the positional structure of graphs. Our experiments and ablation studies on four public keypoint matching datasets demonstrate the state-of-the-art performance of our method. The exploration direction of future work includes optimizing the loss function of the positional encoder to extract purer spatial structural information and improve the feature fusion method to obtain better fused feature representation.
## Acknowledgement
We acknowledge support from National Key R&D Program of China (2021ZD01133001).
## References
* (1) Ryan Prescott Adams and Richard S Zemel. Ranking via sinkhorn propagation. _arXiv preprint arXiv:1106.1925_, 2011.
* (2) Lubomir Bourdev and Jitendra Malik. Poselets: Body part detectors trained using 3d human pose annotations. In _2009 IEEE 12th International Conference on Computer Vision_, pages 1365-1372. IEEE, 2009.
* (3) Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 7291-7299, 2017.
* (4) Minsu Cho, Karteek Alahari, and Jean Ponce. Learning graphs to match. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 25-32, 2013.
* (5) Minsu Cho, Jungmin Lee, and Kyoung Mu Lee. Reweighted random walks for graph matching. In _European conference on Computer vision_, pages 492-505. Springer, 2010.
* (6)
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline lr (\(\times 10^{-4}\)) & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline Accuracy & 83.2\% & 83.7\% & 83.8\% & **84.1\%** & 83.9\% & 83.7\% & 83.7\% \\ \hline \end{tabular}
\end{table}
Table 6: Parameter analysis of learning rate on PascalVOC.
* [6] Paolo Cignoni, Claudio Montani, and Roberto Scopigno. Dewall: A fast divide and conquer delaunay triangulation algorithm in ed. _Computer-Aided Design_, 30(5):333-341, 1998.
* [7] Amir Egozi, Yosi Keller, and Hugo Guterman. A probabilistic approach to spectral graph matching. _IEEE transactions on pattern analysis and machine intelligence_, 35(1):18-27, 2012.
* [8] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. _International journal of computer vision_, 88(2):303-338, 2010.
* [9] M. Fey, J. E. Lenssen, C. Morris, J. Masci, and N. M. Kriege. Deep graph matching consensus. In _International Conference on Learning Representations (ICLR)_, 2020.
* [10] Matthias Fey, Jan Eric Lenssen, Frank Weichert, and Heinrich Muller. Splinecnn: Fast geometric deep learning with continuous b-spline kernels. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 869-877, 2018.
* [11] Ivan Gonzalez-Diaz, Murat Birinci, Fernando Diaz-de Maria, and Edward J Delp. Neighborhood matching for image retrieval. _IEEE Transactions on Multimedia_, 19(3):544-558, 2016.
* [12] Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.
* [13] Zheheng Jiang, Hossein Rahmani, Plamen Angelov, Sue Black, and Bryan M Williams. Graph-context attention networks for size-varied deep graph matching. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2343-2352, 2022.
* [14] Yuhe Jin, Dmytro Mishkin, Anastasiia Mishchuk, Jiri Matas, Pascal Fua, Kwang Moo Yi, and Eduard Trulls. Image matching across wide baselines: From paper to practice. _International Journal of Computer Vision_, pages 517-547, 2021.
* [15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _International Conference on Learning Representations (ICLR)_, 2015.
* [16] Zixun Lan, Binjie Hong, Ye Ma, and Fei Ma. More interpretable graph similarity computation via maximum common subgraph inference. _arXiv preprint arXiv:2208.04580_, 2022.
* [17] Marius Leordeanu, Martial Hebert, and Rahul Sukthankar. An integer projected fixed point method for graph matching and map inference. _Advances in neural information processing systems_, 22, 2009.
* [18] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. _Advances in neural information processing systems_, 31, 2018.
* [19] Yijie Lin, Mouxing Yang, Jun Yu, Peng Hu, Changqing Zhang, and Xi Peng. Graph matching with bi-level noisy correspondence, 2022.
* [20] Zhi-Yong Liu and Hong Qiao. Gnccp--graduated nonconvexityand concavity procedure. _IEEE transactions on pattern analysis and machine intelligence_, 36(6):1258-1267, 2013.
* [21] Gengchen Mai, Krzysztof Janowicz, Bo Yan, Rui Zhu, Ling Cai, and Ni Lao. Multiscale representation learning for spatial feature distributions using grid cells. _arXiv preprint arXiv:2003.00824_, 2020.
* [22] Sohir Maskey, Ali Parviz, Maximilian Thiessen, Hannes Stark, Ylli Sadikaj, and Haggai Maron. Generalized laplacian positional encoding for graph representation learning. In _NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations_.
* [23] Juhong Min, Jongmin Lee, Jean Ponce, and Minsu Cho. Hyperpixel flow: Semantic correspondence with multi-layer neural features. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 3395-3404, 2019.
* [24] Hyeonseob Nam and Bohyung Han. Learning multi-domain convolutional neural networks for visual tracking. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 4293-4302, 2016.
* [25] Marin Vlastelica Pogancic, Anselm Paulus, Vit Musil, Georg Martius, and Michal Rolinek. Differentiation of blackbox combinatorial solvers. In _International Conference on Learning Representations_, 2019.
* [26] Tianxiang Qin, Shikui Tu, and Lei Xu. Ia-ngm: A bidirectional learning method for neural graph matching with feature fusion. _Machine Learning_, pages 1-27, 2022.
* [27] Jingwei Qu, Haibin Ling, Chenrui Zhang, Xiaoqing Lyu, and Zhi Tang. Adaptive edge attention for graph matching with outliers. In _IJCAI_, pages 966-972, 2021.
* [28] Qibing Ren, Qingquan Bao, Runzhong Wang, and Junchi Yan. Appearance and structure aware robust deep visual graph matching: Attack, defense and beyond. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15263-15272, 2022.
* [29] Michal Rolinek, Paul Swoboda, Dominik Zietlow, Anselm Paulus, Vit Musil, and Georg Martius. Deep graph matching via blackbox differentiation of combinatorial solvers. In _European Conference on Computer Vision_, pages 407-424. Springer, 2020.
* [30] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_, 2014.
* [31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [32] Runzhong Wang, Junchi Yan, and Xiaokang Yang. Learning combinatorial embedding networks for deep graph matching. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 3056-3065, 2019.
* [33] Runzhong Wang, Junchi Yan, and Xiaokang Yang. Neural graph matching network: Learning lawler's quadratic assignment problem with extension to hypergraph and multiple-graph matching. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2021.
* [34] Runzhong Wang, Junchi Yan, and Xiaokang Yang. Unsupervised learning of graph matching with mixture of modes via discrepancy minimization. _IEEE Transactions of Pattern Analysis and Machine Intelligence_, 2023.
* [35] Tao Wang and Haibin Ling. Gracker: A graph-based planar object tracker. _IEEE transactions on pattern analysis and machine intelligence_, 40(6):1494-1501, 2017.
* [36] Tao Wang, Haibin Ling, Congyan Lang, and Songhe Feng. Graph matching with adaptive and branching path following. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 40(12):2853-2867, 2018.
* [37] Tao Wang, He Liu, Yidong Li, Yi Jin, Xiaohui Hou, and Haibin Ling. Learning combinatorial solver for graph matching. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 7568-7577, 2020.
* [38] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. _Acm Transactions On Graphics (tog)_, 38(5):1-12, 2019.
* [39] Paul Wohlhart and Vincent Lepetit. Learning descriptors for object recognition and 3d pose estimation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3109-3118, 2015.
* [40] Jun Wu, Hong Shen, Yi-Dong Li, Zhi-Bo Xiao, Ming-Yu Lu, and Chun-Li Wang. Learning a hybrid similarity measure for image retrieval. _Pattern Recognition_, 46(11):2927-2939, 2013.
* [41] Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond pascal: A benchmark for 3d object detection in the wild. In _IEEE winter conference on applications of computer vision_, pages 75-82. IEEE, 2014.
* [42] Zhoubo Xu, Puqing Chen, Romain Raveaux, Xin Yang, and Huadong Liu. Deep graph matching meets mixed-integer linear programming: Relax at your own risk? _arXiv preprint arXiv:2108.00394_, 2021.
* [43] Tianshu Yu, Runzhong Wang, Junchi Yan, and Baoxin Li. Learning deep graph matching with channel-independent embedding and hungarian attention. In _International conference on learning representations_, 2019.
* [44] Andrei Zanfir and Cristian Sminchisescu. Deep learning of graph matching. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2684-2693, 2018. | ## Review
### Summary
The paper presents a novel approach to graph matching through the introduction of a Positional Reconstruction Encoder-Decoder (PR-EnDec) model, which enhances the intrinsic spatial structure of graphs for image keypoints matching. By utilizing node coordinates, the model captures high-order spatial information and combines visual features with positional encodings to compute node and edge affinities. The proposed method is validated on several datasets, achieving superior performance compared to existing methods, with an ablation study affirming the contributions of its individual components. Overall, the study suggests that integrating positional encodings can significantly benefit future research in graph matching.
### Strengths
- The proposed PR-EnDec model effectively captures and utilizes spatial context information inherent in keypoints.
- The integration of positional encodings with visual features demonstrates significant improvements in graph matching across multiple datasets.
- The ablation study thoroughly analyzes the contributions of different model components, providing insights into hyper-parameter choices.
- The paper is well-structured, clearly written, and easy to follow, enhancing its accessibility.
### Weaknesses
- Several technical details about the method are insufficiently explained, such as configurations of multi-head self-attention layers and the generation of positive/negative pairs.
- The paper lacks sufficient visualizations that could provide empirical evidence of model performance and aid in understanding.
- Some experimental comparisons rely on outdated datasets and methods, raising concerns about the relevance of the results.
- The formatting of experimental results in tables could be clearer, with a recommendation to dedicate each row to a single result for better readability.
- There is a lack of discussion regarding the limitations of the proposed method and its potential societal impacts.
### Questions
- What specific configurations were used for the multi-head self-attention layers?
- How are the visual features sampled for each node, and how are the graph edges defined?
- Why was VGG-16 chosen over deeper models like ResNet-50 for visual feature extraction?
- Could the authors elaborate on how their method differentiates itself from existing approaches that also consider spatial information, such as BBGM?
- Why are some baseline results missing in the experiments, and are there specific reasons for not reporting them?
### Soundness
**Score:** 3
**Description:** Good - The methodology is solid, but some technical details require clarification to enhance understanding and reproducibility.
### Presentation
**Score:** 3
**Description:** Good - The paper is generally well-presented, though it could benefit from improved visualizations and clearer formatting in tables.
### Contribution
**Score:** 3
**Description:** Good - The work provides valuable insights into graph matching through positional encodings, but its novelty relative to existing methods could be better articulated.
### Rating
**Score:** 6
**Description:** Weak Accept - The paper is technically solid with moderate-to-high impact potential, though it requires some enhancements in clarity and detail.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper introduces a novel approach to graph matching that leverages positional encodings, which is a significant contribution to the field. Despite some concerns regarding clarity and the need for additional details and visualizations, the overall soundness of the methodology and the strong experimental results support the decision to accept.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Sharpness-Aware Minimization
Leads to Low-Rank Features
Maksym Andriushchenko
EPFL
[email protected] &Dara Bahri
Google Research
[email protected] &Hossein Mobahi
Google Research
[email protected] &Nicolas Flammarion
EPFL
[email protected]
###### Abstract
Sharpness-aware minimization (SAM) is a recently proposed method that minimizes the sharpness of the training loss of a neural network. While its generalization improvement is well-known and is the primary motivation, we uncover an additional intriguing effect of SAM: _reduction of the feature rank_ which happens at different layers of a neural network. We show that this low-rank effect occurs very broadly: for different architectures such as fully-connected networks, convolutional networks, vision transformers and for different objectives such as regression, classification, language-image contrastive training. To better understand this phenomenon, we provide a mechanistic understanding of how low-rank features arise in a simple two-layer network. We observe that a significant number of activations gets entirely _pruned_ by SAM which directly contributes to the rank reduction. We confirm this effect theoretically and check that it can also occur in deep networks, although the overall rank reduction mechanism can be more complex, especially for deep networks with pre-activation skip connections and self-attention layers. We make our code available at [https://github.com/tml-epfl/sam-low-rank-features](https://github.com/tml-epfl/sam-low-rank-features).
## 1 Introduction
Understanding generalization and features learned by overparametrized deep networks is the key topic of modern machine learning. The training objective of deep networks typically has many global optima where the training points are perfectly fitted (Zhang et al., 2017), but very different features and generalization performance are obtained (Chizat et al., 2019; Liu et al., 2019). It has been observed recently that the _sharpness_ of a minimum--i.e. how quickly the loss can change in some neighborhood around a minimum in the parameter space--correlates with the generalization error (Keskar et al., 2017; Jiang et al., 2020). The idea of minimizing the sharpness during training to improve generalization has motivated recent works that propose to use worst-case perturbations of the weights on every iteration of training (Foret et al., 2021; Zheng et al., 2021; Wu et al., 2020). We refer to this family of methods collectively as _sharpness-aware minimization_ (SAM) and focus on the version proposed in Foret et al. (2021) that uses a single step of normalized gradient ascent to approximate the worst-case weight perturbation.
**Contributions.** In our paper, we are interested in shedding more light on the effect of SAM on the structure of the _features_ learned by the network. In particular, our main observation is that:
_Sharpness-aware minimization on overparametrized neural networks leads to low-rank features._While standard training techniques can already produce features of reduced rank (Huh et al., 2021), SAM significantly amplifies this effect. This effect is of interest when the goal is to identify a low-dimensional structure in the data, as well as for more efficient feature quantization and nearest neighbor retrieval based on the learned features.
We make the following contributions in our paper:
* In Section 3, we present extensive empirical evidence of low-rank features for various models (ResNets, ViTs, MLP-Mixers) trained with SAM on four classification tasks (CIFAR-10/100, Tiny ImageNet, ImageNet-1k) as well as for contrastive text-image training (MS-COCO).
* In Section 4, we provide a mechanistic understanding of how low-rank features arise in a simple two-layer ReLU network. We observe that a significant number of activations gets entirely _pruned_ by SAM, which directly contributes to the rank reduction. Furthermore, we provide a theoretical argument supporting this effect of SAM.
* In Section 5, we confirm that this effect can also occur in deep networks, although the overall rank reduction mechanism can be more complex, especially for deep networks with pre-activation skip connections and self-attention layers.
* Finally, in Section 6, we show that directly inducing low-rank features does not improve generalization on natural data. Thus, we conclude that SAM is unique in its ability to both improve generalization _and_ decrease the feature rank in a data-adaptive fashion.
## 2 Related work and background knowledge on SAM
In this section, we first present the most relevant previous works and then cover background on SAM.
### Related work
**Sharpness-aware minimization.** The idea behind SAM introduced by Foret et al. (2021) is to minimize the sharpness during training to improve generalization. SAM modifies stochastic gradient descent (SGD) such that on every iteration of training, the gradient is taken not at the current iterate but rather at an approximate worst-case point in its vicinity. Zheng et al. (2021) concurrently propose a similar weight perturbation method which also successfully improves standard generalization on multiple deep learning benchmarks. Wu et al. (2020) also propose an almost identical algorithm with the same motivation, but with the focus on improving robust generalization of adversarial training. Chen et al. (2022) discover that SAM is particularly helpful to improve generalization for new architectures like vision transformers (Dosovitskiy et al., 2021) and MLP-Mixers (Tolstikhin et al., 2021). Andriushchenko and Flammarion (2022) study the reasons behind the success of SAM and characterize its effect on simple diagonal linear networks. Bartlett et al. (2022) and Wen et al. (2023) theoretically analyze the regularization effect of SAM close to a minimum.
**Implicit sharpness minimization.** There are multiple works that suggest that sharpness can also be minimized _implicitly_ by standard training algorithms. For example, Cohen et al. (2021) formulate the dependency between the learning rate used for training and the sharpness given by the maximum eigenvalue of the Hessian. Barrett and Dherin (2021) and Smith et al. (2021) derive a gradient regularization term that is related to SAM by analyzing SGD and GD with finite step sizes. Damian et al. (2021) suggest a connection between label-noise SGD which implicitly minimizes the trace of the Hessian and SAM whose variations can also minimize the same quantity (Wen et al., 2023). Mulayoff et al. (2021) and Nason et al. (2023) suggest a relation between sharpness and smoothness for two-layer networks depending on the learning rate of gradient descent.
**Low-rank and sparse features.** The most related work to ours is the one by Ramesh et al. (2022). They briefly note that contrastive language-image pretraining (CLIP) (Radford et al., 2021) with SAM leads to a reduction of the feature rank which they leverage for more efficient feature quantization. Huh et al. (2021) observe that existing training techniques such as SGD on neural networks can already produce features of reduced rank. On a similar note, Ethayarajh (2019) and Cai et al. (2021) observe an extreme anisotropy of the feature space for language models trained with standard methods, including an interesting low-rank cluster structure. Galanti et al. (2022) and Timor et al. (2023) discuss a low-rank effect in the weight matrices due to a small weight norm which is achieved either with weight decay or with a small initialization scale. Na et al. (2022) suggest that models trained with SAM are more compressible via weight pruning and quantization. Gulcehre et al. (2022) observe that using SAM with dropout can lead to a lower rank in reinforcement learning tasks. Andriushchenko et al. (2023) study the effect of large step size SGD on the Jacobian of the network and activation sparsity. Li et al. (2023) observe that _standard_ training leads to extreme activation sparsity in the MLP blocks of transformers.
### Background on sharpness-aware minimization
Let \(\mathcal{I}_{\text{train}}=\{1,\dots,n\}\) be the indices of the training set \(\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\) and \(\ell_{i}(\mathbf{\theta})\) be the loss of a model parametrized by weights \(\mathbf{\theta}\in\mathbb{R}^{|\mathbf{\theta}|}\) and evaluated at point \((\mathbf{x}_{i},y_{i})\). Foret et al. (2021) propose to minimize the following worst-case objective instead of standard average loss minimization:
\[\text{SAM objective:}\quad\min_{\mathbf{\theta}\in\mathbb{R}^{|\mathbf{\theta}|}} \operatorname*{\mathbb{E}}_{\mathcal{I}\subset\mathcal{I}_{\text{train}}} \max_{\|\mathbf{\varepsilon}\|_{2}\leq\rho}\frac{1}{|\mathcal{I}|}\sum_{i\in \mathcal{I}}\ell_{i}(\mathbf{\theta}+\mathbf{\varepsilon}), \tag{1}\]
where \(\mathcal{I}\) is a random subset of \(m\) training points. We note this objective is based on the maximization of the sum of losses over batches of \(m\) points each. To make SAM practical, Foret et al. (2021) propose to minimize the objective (1) with stochastic gradients. Denoting the batch indices at time \(t\) by \(\mathcal{I}_{t}\), this leads to the following update rule on each iteration of training:
\[\text{SAM update:}\quad\mathbf{\theta}_{t+1}:=\mathbf{\theta}_{t}-\frac{ \gamma_{t}}{|\mathcal{I}_{t}|}\sum_{i\in\mathcal{I}_{t}}\nabla\ell_{i}\Big{(} \mathbf{\theta}_{t}+\frac{\rho_{t}}{|\mathcal{I}_{t}|}\sum_{j\in\mathcal{I}_{t}} \nabla\ell_{j}(\mathbf{\theta}_{t})\Big{)}. \tag{2}\]
We note that the _same_ batch \(\mathcal{I}_{t}\) is used for the inner and outer gradient steps, and \(\rho_{t}\) typically includes the gradient normalization, i.e., \(\rho_{t}:=\rho/\|\frac{1}{|\mathcal{I}_{t}|}\sum_{j\in\mathcal{I}_{t}}\nabla \ell_{j}(\mathbf{\theta}_{t})\|_{2}\). The worst-case perturbations and the use of a small \(m\) in SAM are essential for the generalization improvement which depends continuously on \(m\) as noted in Foret et al. (2021).
## 3 Experimental evidence of low-rank features due to SAM
We first discuss how we measure the feature rank and then present strong empirical evidence for low-rank features for models trained with SAM. We consider first classification tasks on CIFAR-10, CIFAR-100 (Krizhevsky and Hinton, 2009), Tiny ImageNet (Le and Yang, 2015), and ImageNet-1k (Deng et al., 2009), and then contrastive learning on MS-COCO (Lin et al., 2014).
**How we measure the feature rank.** Consider a neural network \(f:\mathcal{X}\rightarrow\mathbb{R}^{D}\) which maps inputs from a set \(\mathcal{X}\) (e.g., images) to a \(D\)-dimensional vector (e.g., logits for classification tasks or embeddings for contrastive learning). We assume that \(f\) can be decomposed on \(B\)_blocks_, i.e., \(f(\mathbf{x})=f_{B}\circ f_{B-1}\circ...\circ f_{1}(\mathbf{x})\) where a single block \(f_{b}:\mathbb{R}^{d_{b-1}}\rightarrow\mathbb{R}^{d_{b}}\) can consist of multiple layers. By _features_ we refer to the intermediate values \(f_{b}(\mathbf{x})\circ...\circ f_{1}(\mathbf{x})\in\mathbb{R}^{d_{b}}\) at some block \(b\). To assess their rank, we first do a PCA on the feature matrix \([f_{b}(\mathbf{x}_{i})^{\top}]_{i=1}^{d_{b}}\in\mathbb{R}^{n\times d_{b}}\) and then select _the minimal number of principal components that span \(99\%\) of the variance in the data_. We refer to this measure as the feature rank, as it captures the notion of the most informative features in an interpretable manner. We note that this is not equivalent to the matrix rank in a strict mathematical sense. However, it provides a practical alternative to selecting a fixed threshold on singular values, which can often be challenging to determine.
### Low-rank features for ResNets on standard classification tasks
**Setup.** We train a PreAct ResNet-18 (He et al., 2016) with standard augmentations on standard deep learning datasets: CIFAR-10, CIFAR-100, and Tiny ImageNet. We consider both (1) a _minimal setting_ with a small learning rate, no momentum, and no weight decay, and (2) a _state-of-the-art setting_ which includes large learning rates, momentum, and weight decay. Since we observe that _neural collapse_--convergence of the feature rank of the penultimate layer to the number of classes (Papyan et al., 2020)--occurs in a cascading fashion (Hui et al., 2022) and interferes with the low-rank trend of SAM, we exclude the last residual superblock and instead report the rank at the _third_ superblock.1
Footnote 1: Further details can be found in our code [https://github.com/tml-epfl/sam-low-rank-features](https://github.com/tml-epfl/sam-low-rank-features).
**Observations.** We plot the main metrics for these three datasets in Figure 1, 2, and 3. We observe that the models trained with SAM achieve a _substantially_ smaller feature rank which occurs not only at the end but also _throughout_ the training. We also note that the generalization improvement is _U-shaped_ in \(\rho\) (i.e., too large \(\rho\) is harmful) unlike the rank reduction which is monotonic. We note that for the state-of-the-art setting, the rank exhibits a sudden drop at the beginning and then a gradual increase, both for standard training and SAM. We verified that such behavior originates from the usage of initial large learning rates and is not specific to SAM. Overall, the rank reduction effect is very consistent over the three datasets and over both minimal and state-of-the-art settings. We also observe that augmentations play a crucial role in revealing the low-rank trend with respect to the \(\rho\) of SAM, whereas the addition of only weight decay is insufficient for revealing a similar trend. Additionally, to confirm the generalizability of the low-rank features taken _at an intermediate layer_, we also include transfer learning performance using k-nearest neighbors classification with \(k=10\). For this purpose, we compute (1) the k-NN classification error on CIFAR-10 for models trained on
Figure 1: ResNet-18 on CIFAR-10. SAM improves test error (_left_), leads to more generalizable features (_middle_), and noticeably reduces the feature rank at the intermediate ResNet block (_right_). Note that the test error improvement is U-shaped in \(\rho\) unlike the rank reduction which is monotonic.
Figure 2: ResNet-18 on CIFAR-100. SAM improves test error (_left_), leads to more generalizable features (_middle_), and noticeably reduces the feature rank at the intermediate ResNet block (_right_). Note that the test error improvement is U-shaped in \(\rho\) unlike the rank reduction which is monotonic.
CIFAR-100 and (2) the k-NN classification error on CIFAR-100 for the models trained on CIFAR-10 and Tiny ImageNet. Without this experiment, one could assume that since these features are not from the penultimate layer, they can be of limited use for downstream tasks. However, this experiment highlights that it is not the case: the block-3 features are still suitable for transfer learning and show completely non-trivial performance.
**Additional experiments.** We report a few more related experiments in Appendix B. First, we note that the feature rank shows the same trend on both training and test sets and is not sensitive to the chosen variance threshold (see Figure 9 and 10 in Appendix). Moreover, the same picture holds also for different batch sizes (see Figure 11 in Appendix). Additionally, we also report results for more recent variants of SAM such as ASAM from Kwon et al. (2021) and GAM from Zhang et al. (2023) (see Table 2 in Appendix). Finally, we discuss the behavior of the feature rank at the _penultimate_ layer instead of the intermediate ResNet block (see Figure 12 in Appendix).
### Low-rank features for ViTs and MLP-Mixers on ImageNet-1k
**Setup.** We take publicly available models trained on ImageNet-1k from the vision_transformer library2 which contains models from Dosovitskiy et al. (2021), Tolstikhin et al. (2021), and Chen et al. (2022). We evaluate the feature rank on \(12\,800\) examples using the zeroth token for ViTs and average features over all tokens for MLP-Mixers.
Footnote 2: [https://github.com/google-research/vision_transformer](https://github.com/google-research/vision_transformer)
**Observations.** We report the results in Table 1. We note that neural collapse is not an issue here since the feature dimension is smaller than the number of classes on ImageNet-1k. We see a very consistent rank reduction trend from SAM for each setting, with the only exception of MLP-Mixers after \(\nicefrac{{3}}{{4}}\) of the total number of blocks. Finally, we note that these models are trained with rather small \(\rho\) of SAM (e.g., \(0.2\) for ViT-B/16), and we can expect a stronger low-rank effect for higher \(\rho\).
### Low-rank features in contrastive language-image training on MS-COCO
**Setup.** Here we use a training setting similar to CLIP (Radford et al., 2021) but we start from pretrained image and language models. We fine-tune the R+Ti/16 vision transformer from Steiner et al. (2021) and BERT from Devlin et al. (2018) on MS-COCO using the InfoNCE contrastive loss (Oord et al., 2018). We add a linear head to each of the encoders to match the dimension of both pre-trained encoders. We note that for contrastive training, unlike for classification, neural collapse is
Figure 3: ResNet-18 on Tiny ImageNet. SAM improves test error (_left_), leads to more generalizable features (_middle_), and noticeably reduces the feature rank at the intermediate ResNet block (_right_). Note that the test error improvement is U-shaped in \(\rho\) unlike the rank reduction which is monotonic.
not an issue, so we report the feature rank directly at the last layer. We measure the retrieval error within each batch of size \(128\).
**Observations.** In Figure 4, we observe that SAM both substantially improves the retrieval error and leads to features of much smaller rank. We note that having features of reduced rank may contradict the common intuition that low rank is typically harmful in contrastive learning (Chen and He, 2021; Hua et al., 2021), however, we are still in the regime that the remaining dimensions are expressive enough to preserve (and even improve) the retrieval performance. In addition, we note that the low-rank features are observed both in the image _and_ text encoders suggesting that this effect of SAM is not specific to image data. Moreover, we observe the low-rank effect also when the text encoder is frozen during fine-tuning (Figure 13 in Appendix). Thus, even a single linear layer on the side of text features can be sufficient to get the low-rank effect.
## 4 How SAM can induce low-rank features: insights from simple models
In this section, we dissect the source of the low-rank effect of SAM on two-layer neural networks. We first provide clear empirical evidence that this effect can be attributed to zero activations and then confirm it theoretically.
### Mechanistic understanding of low-rank features for a two-layer ReLU network
**Setup.** We consider a ReLU network \(f(\mathbf{\theta})=\langle\mathbf{a},\sigma(\mathbf{W}\mathbf{x})\rangle\) trained with the squared loss and parametrized by \(\mathbf{\theta}=[\mathrm{vec}(\mathbf{W}),\mathbf{a}]\) where \(\mathbf{W}\in\mathbb{R}^{m\times d}\) and \(\mathbf{a}\in\mathbb{R}^{m}\). We use the teacher-student
\begin{table}
\begin{tabular}{l l r r r r r} Training & **Block** & **VIT-B/16** & **VIT-B/32** & **VIT-L/16** & **VIT-L/16** & **Mixer-B/16** \\ \hline Standard & Last & 680 & 672 & 903 & 898 & 695 \\ SAM & Last & **617** & **654** & **820** & **844** & **620** \\ \hline Standard & After \(\nicefrac{{3}}{{4}}\) blocks & 467 & 484 & 595 & 595 & **301** \\ SAM & After \(\nicefrac{{3}}{{4}}\) blocks & **426** & **440** & **314** & **442** & 477 \\ \hline Standard & After \(\nicefrac{{1}}{{2}}\) blocks & 412 & 390 & 387 & 469 & 425 \\ SAM & After \(\nicefrac{{1}}{{2}}\) blocks & **346** & **362** & **250** & **318** & **369** \\ \end{tabular}
\end{table}
Table 1: ViTs and MLP-Mixers on ImageNet-1k. We compute the feature ranks for publicly available models from the vision_transformer library.
Figure 4: **Contrastive image-text training on MS-COCO.** SAM improves the retrieval error (_top_) and reduces the feature rank at the last layer of both image and text encoders (_bottom_).
setup with \(3\) teacher neurons, \(m=100\) student neurons, and input dimension \(d=3\) inspired by the setup of Chizat et al. (2019). The goal of the student network is to recover the teacher network from a finite set of training points which are sampled from the Gaussian distribution and labeled by the teacher network. We use SAM for the first \(50\%\) of iterations and then disable it to achieve full convergence as we observed that running SAM, especially with high \(\rho\), hinders convergence. To obtain a smooth trend of rank reduction and clear separation between different \(\rho\), we exceptionally use the PCA variance threshold of \(99.99\%\), which is higher than the \(99\%\) used in all other experiments.
**Low-rank features are due to zero activations.** We provide a mechanistic understanding of how low-rank features emerge in Figure 5. First, we observe that even in this simple setting, SAM improves generalization and learns features of substantially smaller rank (\(4\) instead of \(19\)). We investigate the origin of low-rank features by examining the number of active ReLU units, which noticeably decreases with increasing \(\rho\). Although sparse activations do not necessarily imply low-rank features (e.g., the identity matrix is sparse but full-rank), we observe that SAM sets entire activations to zero during training. For the largest \(\rho=0.6\), only \(10\) out of \(100\) neurons are activated on at least one training example, and even fewer of them contribute to the \(99.99\%\) PCA variance in the data. Similar results are shown for other activations, such as tanh and absolute value (see Figure 14 in Appendix), where the low-rank effect can be similarly attributed to zero activations.
**SAM \(\neq\) weight decay.** Interestingly, SAM reduces the number of active ReLUs not by shrinking to zero the weight vectors associated with ReLU units, but by leveraging the negative part of ReLU to effectively prune neurons. As suggested in Figure 5 (right), the weight norms only increase with \(\rho\), and the effect of SAM is _opposite_ to that of weight decay. Thus, theoretical results demonstrating that weight decay leads to low-rank features (Galanti et al., 2022; Timor et al., 2023) are not directly applicable to studying the low-rank effect of SAM. This observation highlights the difference between SAM and classical regularization methods and explains why we cannot achieve the same regularization effect with weight decay. To better understand this phenomenon for this simple model, we need to find a theoretical argument _specific to SAM_.
### Theoretical argument for low-rank features in a two-layer ReLU network
Let \(\ell(\mathbf{\theta})\) be the loss on example \((\mathbf{x},y)\) sampled by SGD and assume the squared loss for concreteness. We formulate our theoretical argument in the following proposition.
**Proposition 1**.: _Every update of SAM contains a component that decreases all pre-activation values \(\{\langle w_{j},x\rangle\}_{j=1}^{m}\) of a two-layer ReLU network trained with the squared loss by a non-negative amount equal to \(\eta\rho^{\sqrt{\ell(\mathbf{\theta})}/\|\nabla f(\mathbf{\theta})\|_{2}}\sigma(\langle \mathbf{w}_{j},\mathbf{x}\rangle)\|\mathbf{x}\|_{2}^{2}\)._
Proof.: We have for the update of SAM:
\[\nabla\ell\left(\mathbf{\theta}+\rho\frac{\nabla\ell(\mathbf{\theta})}{\|\nabla\ell( \mathbf{\theta})\|_{2}}\right)=\nabla\ell(\mathbf{\theta})+\rho\nabla^{2}\ell(\mathbf{ \theta})\frac{\nabla\ell(\mathbf{\theta})}{\|\nabla\ell(\mathbf{\theta})\|_{2}}+ \mathcal{O}(\rho^{2})=\nabla\left[\ell(\mathbf{\theta})+\rho\|\nabla\ell(\mathbf{ \theta})\|_{2}+\mathcal{O}(\rho^{2})\right].\]
Thus, under the first-order approximation, a step of SAM corresponds to a gradient update on the regularized objective \(\ell(\mathbf{\theta})+\rho\|\nabla\ell(\mathbf{\theta})\|_{2}\). Now recall that the layerwise gradients of a two-layer
Figure 5: Two-layer ReLU networks in the teacher-student setup. The models trained with a higher \(\rho\) of SAM generalize better, have a significantly smaller feature rank, smaller number of active ReLUs, and higher \(\ell_{2}\) weight norm. Note that the generalization improvement is U-shaped in \(\rho\) unlike the rank reduction which is monotonic.
ReLU network can be written as:
\[\nabla_{\mathbf{a}}\ell(\mathbf{\theta})=\ell^{\prime}(\mathbf{\theta})\cdot\sigma(\mathbf{W}\mathbf{x }),\quad\nabla_{\mathbf{w}_{j}}\ell(\mathbf{\theta})=\ell^{\prime}(\mathbf{\theta})\cdot a_{ j}\sigma^{\prime}(\langle\mathbf{w}_{j},\mathbf{x}\rangle)\mathbf{x}.\]
Then a direct computation gives us the following expression for the full gradient norm:
\[\|\nabla\ell(\mathbf{\theta})\|_{2}=|r|\cdot\|\nabla f(\mathbf{\theta})\|_{2}=|\langle \mathbf{a},\sigma(\mathbf{W}\mathbf{x})\rangle-y|\sqrt{\|\sigma(\mathbf{W}\mathbf{x})\|_{2}^{2}+\| \mathbf{x}\|_{2}^{2}\cdot\|\mathbf{a}\odot\sigma^{\prime}(\mathbf{W}\mathbf{x})\|_{2}^{2}},\]
where \(\odot\) denotes element-wise multiplication and \(r\) denotes the residual \(f(\mathbf{\theta})-y\). Then the update of \(\mathbf{w}_{j}\) for neuron \(j\) on each step of SAM with step size \(\eta\) can be written as:
\[\mathbf{w}_{j} :=\mathbf{w}_{j}-\eta(\nabla\ell(\mathbf{\theta})+\rho\nabla\|\nabla\ell( \mathbf{\theta})\|_{2})+\mathcal{O}(\rho^{2})\] \[\mathbf{w}_{j} :=\mathbf{w}_{j}-\underbrace{\eta r\left(1+\rho\|\nabla f(\mathbf{\theta })\|_{2}/\sqrt{\ell(\mathbf{\theta})}\right)a_{j}\sigma^{\prime}(\langle\mathbf{w}_{j },\mathbf{x}\rangle)\mathbf{x}}_{\text{data fitting term}}-\underbrace{\eta\rho\sqrt{ \ell(\mathbf{\theta})}/\|\nabla f(\mathbf{\theta})\|_{2}\sigma(\langle\mathbf{w}_{j},\mathbf{ x}\rangle)\mathbf{x}}_{\text{regularization component}}+\mathcal{O}(\rho^{2})\]
where we used the fact that \(\sigma^{\prime}(\langle\mathbf{w}_{j},\mathbf{x}\rangle)\sigma(\langle\mathbf{w}_{j},\mathbf{ x}\rangle)=\sigma(\langle\mathbf{w}_{j},\mathbf{x}\rangle)\) and second-order terms are zero almost everywhere for ReLUs. The data fitting term is the same as normal gradient but with a larger effective learning rate \(\eta(1+\rho\|\nabla f(\mathbf{\theta})\|_{2}/\sqrt{\ell(\mathbf{\theta})})\) instead of \(\eta\). The most interesting term is the one coming from \(\nabla\|\nabla f(\mathbf{\theta})\|_{2}\) which drives pre-activations \(\langle\mathbf{w}_{j},\mathbf{x}\rangle\) to negative values _on every step of SAM_ which can be seen directly from the update rule:
\[\langle\mathbf{w}_{j},\mathbf{x}\rangle:=\langle\mathbf{w}_{j},\mathbf{x}\rangle -\eta r\left(1+\rho\|\nabla f(\mathbf{\theta})\|_{2}/\sqrt{\ell(\mathbf{ \theta})}\right)a_{j}\sigma^{\prime}(\langle\mathbf{w}_{j},\mathbf{x}\rangle)\|\mathbf{x} \|_{2}^{2}\] \[-\eta\rho\sqrt{\ell(\mathbf{\theta})}/\|\nabla f(\mathbf{\theta})\|_{2} \sigma(\langle\mathbf{w}_{j},\mathbf{x}\rangle)\|\mathbf{x}\|_{2}^{2}+\mathcal{O}(\rho^{2 }),\]
where we note that \(\eta\rho\sqrt{\ell(\mathbf{\theta})}/\|\nabla f(\mathbf{\theta})\|_{2}\sigma(\langle \mathbf{w}_{j},\mathbf{x}\rangle)\|\mathbf{x}\|_{2}^{2}\) is always non-negative for ReLU.
We confirm empirically in Figure 15 in Appendix that using only the first-order terms in the SAM update leads to the same effect in all key metrics including the feature rank and the number of active ReLUs. Overall, this mechanism suggests how SAM can _suppress redundant activations_ if they are not needed to fit the training data. Moreover, this effect takes place at every iteration of SGD but is stronger at the beginning of training since it is proportional to the loss \(\sqrt{\ell(\mathbf{\theta})}\). Moreover, a similar argument can be made for a multi-layer network which can explain why the low-rank effect occurs at _multiple_ layers since the \(\|\nabla f(\mathbf{\theta})\|_{2}\) term has activations of all layers in it. Also it suggests an intuitive picture of what a flat minimum of SAM means in terms of the learned function: a minimum that corresponds to a network sparsely activated on the training data.
## 5 Investigation of low-rank mechanisms on deep networks
In this section, we confirm that the low-rank mechanism described above can also occur in post-activation ResNets, although the overall rank reduction mechanism can be more complex, particularly for networks with pre-activation skip connections and self-attention layers.
**Post-activation ResNet on CIFAR-10.** We consider a standard, _post-activation_ ResNet-18 (He et al., 2016) with \(4\) residual superblocks trained on CIFAR-10 in the minimal setting (i.e., small step sizes, no momentum, no weight decay). Since values in the residual stream of this network (unlike for PreAct ResNets) are taken _after_ ReLU, we can expect to see the ReLU pruning effect described in the previous section. We count a ReLU as _active_ if at least on one training example, it achieves at least \(5\%\) of the maximum value in the feature matrix. We plot the results in Figure 6 which confirms the ReLU pruning effect: the number of active ReLUs decreases with \(\rho\) at later residual blocks. For example, for \(\rho=0.6\), the number of active ReLUs at block 4 reaches \(77\%\), which directly contributes to a lower feature rank, compared to \(100\%\) of standard training. Finally, we note that this mechanism is not likely to be the only one contributing to the lower rank since the trends for the feature rank and number of active ReLUs do not perfectly agree.
**Pre-activation ViT on MS-COCO.** Next, we consider the ViT with 12 _pre-activation_ residual blocks used in our experiments on MS-COCO. First, we show how the feature ranks change in the _residual stream_ after attention and MLP subblocks in Figure 7 (_left_, _middle_). We observe that almost each attention block gradually increases the feature rank, _but the increase is smaller for models with higher \(\rho\)_. At the same time, MLP blocks can both increase and decrease the rank, but for higher \(\rho\), the difference between post-MLP rank and pre-MLP rank tends to be larger. Thus, we conclude that the rank reduction effect is coming primarily from the attention blocks. Moreover, we do not observe any coordinate-aligned sparsity in the residual stream. Additionally, we plot the number of active GeLUs inside of MLP subblocks in Figure 7 (_right_). Since GeLU can be seen as a differentiable approximation of ReLU, the analogous notion of active GeLU units also makes sense. However, we do not observe any decrease in the number of GeLUs over the \(\rho\) of SAM for these models. We show more detailed plots about the behavior of feature ranks in Figure 16 in Appendix. Moreover, in Figure 17 we show that even when the text encoder remains frozen and only a single linear layer is trained on top of it, the low-rank effect is still observed. This indicates that SAM can effectively utilize even a _single matrix multiplication_ to achieve features of reduced rank. Overall, we conclude that the mechanism behind the low-rank effect of SAM can vary significantly, and SAM can leverage different components of the network, including activations, attention layers, and even individual linear layers.
## 6 Do low-rank features lead to better generalization on natural data?
We conclude the discussion on the low-rank features of SAM by demonstrating that directly inducing low-rank features _does not_ result in improved generalization for natural data such as MS-COCO.
**Setup.** We induce the low-rank directly at the last layer by using a _linear bottleneck layer_ which is also known as the _Burer-Monteiro factorization_(Burer and Monteiro, 2003): we parametrize the weight matrix \(W^{(L)}\in\mathbb{R}^{d_{L-1}\times d_{L}}\) at the last layer \(L\) as \(W^{(L)}=U^{(L)}V^{(L)}\) where \(U^{(L)}\in\mathbb{R}^{d_{L-1}\times h}\) and \(V^{(L)}\in\mathbb{R}^{h\times d_{L}}\). We perform training on MS-COCO by varying the inner dimension \(h\) for the last layers of both image and text encoders, while keeping the remaining training process the same as described in Section 3.
**Observations.** We plot the results in Figure 8 where we observe that enforcing low-rank features alone _does not_ consistently enhance generalization. Introducing bottleneck layers only marginally improves the imagewise retrieval error (from \(23.3\%\) to \(22.7\%\)) but negatively affects textwise retrieval (from \(22.2\%\) to \(22.9\%\)). In contrast, SAM demonstrates a significant improvement of \(-4.7\%\) and \(-3.0\%\) in the respective retrieval tasks. Thus, we conclude that SAM stands out in its ability to simultaneously enhance generalization and select low-rank features in a data-adaptive manner.
Figure 6: **Post-activation ResNet-18 on CIFAR-10. Both the feature rank and the number of active ReLUs tend to decrease with the \(\rho\) of SAM, particularly at later ResNet blocks.**
Figure 7: **ViT image encoder on MS-COCO. We show the difference in the feature ranks before and after attention subblocks (_left_), before and after MLP subblocks (_middle_), and the number of active GeLUs inside of residual branches (_right_).**
## 7 Discussion and future directions
Finally, we discuss the implications of low-rank features learned by SAM as well as future directions.
**More efficient retrieval.** As illustrated in Section 3, despite the lower rank of the features learned by SAM, their effectiveness for _nearest neighbor retrieval_ is preserved or even improved. This suggests that one can achieve faster retrieval by reducing the dimensionality of the feature space by using, for example, only the top-\(k\) components of PCA. Furthermore, the low-rank bias of SAM appears to be _data-adaptive_, meaning that it adjusts to the data structure rather than relying on a fixed predetermined dimension.
**More efficient feature quantization.** On a related note, having features of reduced rank can also be helpful to speed up _feature quantization_. This is confirmed, in particular, in the DALL-E 2 paper (Ramesh et al., 2022) which reports that this effect combined with PCA can lead to a \(3\times\) reduction in the number of tokens predicted during inference, and also can improve the training stability.
**Understanding the features learned by SAM.** Developing a better understanding of the features learned by popular methods such as SAM is an important goal on its own. The low-rank feature bias can be advantageous in scenarios where the data is generated by a low-rank function. However, for realistic datasets like MS-COCO, the impact of this effect on generalization is not as significant, yet it still remains a valuable side effect.
**Future directions.** Our findings suggest that low-rank features alone do not enhance generalization for natural data beyond simple teacher-student setups. Consequently, the low-rank effect of SAM appears to be a useful _side effect_. Therefore, understanding the impact of SAM on learned features that leads to generalization improvements on natural data remains an open question. Additionally, further theoretical analysis of the low-rank effect of SAM for more complex architectures, such as those involving skip-connections and self-attention layers, would be of great interest.
## Acknowledgements
We thank the anonymous reviewers for their suggestions that helped to improve the paper. The work is supported by the Google/EPFL Research Collab program. M.A. was additionally supported by the Google Fellowship and Open Phil AI Fellowship.
## References
* Andriushchenko and Flammarion (2022) M. Andriushchenko and N. Flammarion. Towards understanding sharpness-aware minimization. In _ICML_, 2022.
* Andriushchenko et al. (2023) M. Andriushchenko, A. Varre, L. Pillaud-Vivien, and N. Flammarion. SGD with large step sizes learns sparse features. In _ICML_, 2023.
* Barrett and Dherin (2021) D. G. Barrett and B. Dherin. Implicit gradient regularization. In _ICLR_, 2021.
* Bartlett et al. (2022) P. L. Bartlett, P. M. Long, and O. Bousquet. The dynamics of sharpness-aware minimization: Bouncing across ravines and drifting towards wide minima. _arXiv preprint arXiv:2210.01513_, 2022.
* Bartlett et al. (2021)
Figure 8: **Contrastive image-text training on MS-COCO.** We compare the effect of SAM and linear bottleneck layers on the retrieval error and feature ranks of the last layer. We observe that linear bottleneck layers do not improve the retrieval error at all, unlike SAM.
S. Burer and R. D. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. _Mathematical Programming_, 95(2):329-357, 2003.
* Cai et al. (2021) X. Cai, J. Huang, Y. Bian, and K. Church. Isotropy in the contextual embedding space: Clusters and manifolds. In _ICLR_, 2021.
* Chen and He (2021) X. Chen and K. He. Exploring simple siamese representation learning. In _CVPR_, 2021.
* Chen et al. (2022) X. Chen, C.-J. Hsieh, and B. Gong. When vision transformers outperform resnets without pre-training or strong data augmentations. In _ICLR_, 2022.
* Chizat et al. (2019) L. Chizat, E. Oyallon, and F. Bach. On lazy training in differentiable programming. In _NeurIPS_, 2019.
* Cohen et al. (2021) J. M. Cohen, S. Kaur, Y. Li, J. Z. Kolter, and A. Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. In _ICLR_, 2021.
* Damian et al. (2021) A. Damian, T. Ma, and J. D. Lee. Label noise SGD provably prefers flat global minimizers. In _NeurIPS_, 2021.
* Deng et al. (2009) J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In _CVPR_, 2009.
* Devlin et al. (2018) J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In _ACL_, 2018.
* Dosovitskiy et al. (2021) A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _ICLR_, 2021.
* Ethayarajh (2019) K. Ethayarajh. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In _EMNLP-IJCNLP_, 2019.
* Foret et al. (2021) P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In _ICLR_, 2021.
* Galanti et al. (2022) T. Galanti, Z. S. Siegel, A. Gupte, and T. Poggio. SGD and weight decay provably induce a low-rank bias in neural networks. _arXiv preprint arXiv:2206.05794_, 2022.
* Gulcehre et al. (2022) C. Gulcehre, S. Srinivasan, J. Sygnowski, G. Ostrovski, M. Farajtabar, M. Hoffman, R. Pascanu, and A. Doucet. An empirical study of implicit regularization in deep offline rl. _arXiv preprint arXiv:2207.02099_, 2022.
* He et al. (2016a) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _CVPR_, 2016a.
* He et al. (2016b) K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In _ECCV_, 2016b.
* Hua et al. (2021) T. Hua, W. Wang, Z. Xue, S. Ren, Y. Wang, and H. Zhao. On feature decorrelation in self-supervised learning. In _ICCV_, 2021.
* Huh et al. (2021) M. Huh, H. Mobahi, R. Zhang, B. Cheung, P. Agrawal, and P. Isola. The low-rank simplicity bias in deep networks. _TMLR_, 2021.
* Hui et al. (2022) L. Hui, M. Belkin, and P. Nakkiran. Limitations of neural collapse for understanding generalization in deep learning. _arXiv preprint arXiv:2202.08384_, 2022.
* Jiang et al. (2020) Y. Jiang, B. Neyshabur, H. Mobahi, D. Krishnan, and S. Bengio. Fantastic generalization measures and where to find them. In _ICLR_, 2020.
* Keskar et al. (2017) N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In _ICLR_, 2017.
* Kingma and Ba (2014) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. _ICLR_, 2014.
* Krizhevsky and Hinton (2009) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. _Technical Report_, 2009.
* Kwon et al. (2021) J. Kwon, J. Kim, H. Park, and I. K. Choi. Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. _ICML_, 2021.
* Le and Yang (2015) Y. Le and X. Yang. Tiny ImageNet visual recognition challenge. _CS-231N course at Stanford University_, 2015.
* Li et al. (2023) Z. Li, C. You, S. Bhojanapalli, D. Li, A. S. Rawat, S. J. Reddi, K. Ye, F. Chern, F. Yu, R. Guo, and S. Kumar. The lazy neuron phenomenon: On emergence of activation sparsity in transformers. In _ICLR_, 2023.
* Liu et al. (2021)T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft COCO: Common objects in context. In _ECCV_, 2014.
* Liu et al. (2019) S. Liu, D. Papailiopoulos, and D. Achlioptas. Bad global minima exist and SGD can reach them. In _NeurIPS_, 2019.
* Mulayoff et al. (2021) R. Mulayoff, T. Michaeli, and D. Soudry. The implicit bias of minima stability: A view from function space. In _NeurIPS_, 2021.
* Na et al. (2022) C. Na, S. V. Mehta, and E. Strubell. Train flat, then compress: Sharpness-aware minimization learns more compressible models. In _Findings of EMNLP_, 2022.
* Nacson et al. (2023) M. S. Nacson, R. Mulayoff, G. Ongie, T. Michaeli, and D. Soudry. The implicit bias of minima stability in multivariate shallow ReLU networks. In _ICLR_, 2023.
* Oord et al. (2018) A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_, 2018.
* Papyan et al. (2020) V. Papyan, X. Han, and D. L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. _PNAS_, 2020.
* Radford et al. (2021) A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In _ICML_, 2021.
* Ramesh et al. (2022) A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen. Hierarchical text-conditional image generation with CLIP latents. _arXiv preprint arXiv:2204.06125_, 2022.
* Smith et al. (2021) S. L. Smith, B. Dherin, D. G. Barrett, and S. De. On the origin of implicit regularization in stochastic gradient descent. In _ICLR_, 2021.
* Steiner et al. (2021) A. Steiner, A. Kolesnikov, X. Zhai, R. Wightman, J. Uszkoreit, and L. Beyer. How to train your ViT? data, augmentation, and regularization in vision transformers. _TMLR_, 2021.
* Timor et al. (2023) N. Timor, G. Vardi, and O. Shamir. Implicit regularization towards rank minimization in relu networks. In _ALT_, 2023.
* Tolstikhin et al. (2021) I. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, A. P. Steiner, D. Keysers, J. Uszkoreit, M. Lucic, and A. Dosovitskiy. MLP-mixer: An all-MLP architecture for vision. In _NeurIPS_, 2021.
* Wen et al. (2023) K. Wen, T. Ma, and Z. Li. How sharpness-aware minimization minimizes sharpness? In _ICLR_, 2023.
* Wu et al. (2020) D. Wu, S.-T. Xia, and Y. Wang. Adversarial weight perturbation helps robust generalization. In _NeurIPS_, 2020.
* Zhang et al. (2017) C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. In _ICLR_, 2017.
* Zhang et al. (2023) X. Zhang, R. Xu, H. Yu, H. Zou, and P. Cui. Gradient norm aware minimization seeks first-order flatness and improves generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 20247-20257, 2023.
* Zheng et al. (2021) Y. Zheng, R. Zhang, and Y. Mao. Regularizing neural networks via adversarial model perturbation. In _CVPR_, 2021.
**Appendix**
The appendix is organized as follows:
* Section A: implementation details and hyperparameters of all experiments.
* Section B: additional results for classification experiments on CIFAR-10, CIFAR-100, and Tiny ImageNet.
* Section C: additional results for contrastive image-text training on MS-COCO.
* Section D: additional results for two-layer networks.
* Section E: additional results for investigations of the low-rank mechanism on deep networks.
## Appendix A Experimental details
In this section, we specify the implementation details and hyperparameters of all our experiments.
**Experiments on CIFAR-10, CIFAR-100, Tiny ImageNet.** We train pre-activation ResNets-18 (He et al., 2016) on CIFAR-10, CIFAR-100, Tiny ImageNet for experiments in Section 3 and post-activation ResNets-18 for experiments in Section 5. We train these models with batch size \(256\) for \(200\) epochs using standard augmentations (random crops and random mirroring). For the minimal setting, we use plain SGD with the learning rate \(0.05\). For the state-of-the-art setting, we use SGD with the learning rate \(0.1\) (decayed by a factor of \(10\times\) after \(50\%\) and \(90\%\) epochs), momentum parameter \(0.9\), weight decay \(0.0005\). We train models with a grid of different parameters \(\rho\) of SAM where we adjust the grid such that it includes both values of \(\rho\) that improve generalization (small \(\rho\), usually in the range \([0.05,0.2]\)) but also those which slightly degrade it (large \(\rho\), usually larger than \(0.5\)). We evaluate the feature rank on \(10\,240\) training examples by counting the minimal number of PCA components needed to span \(99\%\) variance.
**Experiments on ImageNet-1k.** We refer to the original papers Dosovitskiy et al. (2021), Tolstikhin et al. (2021), and Chen et al. (2022) for the details on how these models were trained. We evaluate the feature rank on \(12\,800\) training examples by counting the minimal number of PCA components needed to span \(99\%\) variance.
**Experiments on MS-COCO.** We fine-tune the R+Ti/16 vision transformer from Steiner et al. (2021) and BERT from Devlin et al. (2018) on MS-COCO using the InfoNCE contrastive loss. We add a linear head to each of the encoders with the final feature dimension of \(768\). We use Adam (Kingma and Ba, 2014) with learning rate \(0.0001\) which is decayed down to \(0\) using a cosine decay schedule. We train these models with batch size \(128\) for \(25\) epochs without data augmentations. We evaluate the feature rank on \(1\,280\) training examples by counting the minimal number of PCA components needed to span \(99\%\) variance. To aggregate the features over different tokens, we always take the embedding of the first token since this is what is used for classification of the pretrained model.
**Experiments on two-layer networks in the teacher-student setup.** We use the teacher-student setup with \(3\) teacher neurons, \(m=100\) student neurons, and input dimension \(d=3\). We consider training with stochastic gradients based on mini batch size of \(1\) using the learning rate of \(0.1\). We initialize the weights of the teacher network with the standard deviation of \(1.0\) and the student network with the standard deviation of \(0.3\). We use SAM for the first \(50\%\) of iterations and then disable it to achieve full convergence as we observed that running SAM, especially with high \(\rho\), hinders convergence. We evaluate the feature rank on all \(20\) training examples by counting the minimal number of PCA components needed to span \(99.99\%\) variance. We exceptionally choose this PCA threshold (instead of \(99\%\) as in all other experiments) to obtain a smooth trend of rank reduction and clear separation between different \(\rho\).
**Computational resources.** We performed all experiments on a single Nvidia A100 GPU where we used an internal cluster for all experiments except experiments on MS-COCO, for which we used a cloud provider. A typical training run on CIFAR-10 and CIFAR-100 takes around \(2\) hours, on Tiny ImageNet around \(5\) hours, and on MS-COCO around \(7\) hours.
Additional results for classification
In Section 3, we focused on the _original_ SAM as introduced in Foret et al. (2021) since it is still the most popular SAM variant in the community and it is implemented without any further approximations. However, to provide a comprehensive comparison of SAM variants, we also include results on more recent variants of SAM such as ASAM (Adaptive Sharpness-Aware Minimization) (Kwon et al., 2021) and GAM (Gradient norm Aware Minimization) (Zhang et al., 2023). Similarly to our main experiments in Section 3, we also use ResNets on CIFAR-10. We select the default settings given in their code repositories (which includes a smaller ResNet for ASAM compared to GAM) and vary only the perturbation radius \(\rho\). We report the results in Table 2 that confirm that the low-rank observation also extends to other recent SAM variants.
In Figure 9, we plot feature ranks computed on the training and test sets of CIFAR-10 and CIFAR-100 for post-activation and pre-activation ResNets-18, respectively, taken at the last epoch. We select two different architectures to check the generality of our findings. We observe that the feature ranks stay very close, no matter if we evaluate them on the training or test sets. Thus, we conclude that the low-rank effect of SAM generalizes also to unseen samples.
In Figure 10, we plot feature ranks for \(95\%\), \(99\%\), \(99.9\%\) PCA variance. Feature ranks computed with different PCA variance thresholds exhibit very similar trends. We observe that the overall trend and relative ranking between models trained with different \(\rho\) is unaffected by the choice of the PCA variance threshold.
In Figure 11, we plot metrics like in Figure 1 but for larger batch sizes (\(512\) and \(1024\) instead of \(256\)). We see that the low-rank trend is clearly visible there as well.
Finally, in Figure 12, we show feature ranks computed at the _last_ block of ResNet (just before the linear head). We see that the neural collapse phenomenon occurs: the feature rank converges to \(10\) (i.e., the number of classes) even for \(\rho=0\). The feature rank with SAM is lower at the beginning but then increases to a slightly higher value than \(10\) towards the end of training.
\begin{table}
\begin{tabular}{l l l l l l} \(\rho\) **of ASAM** & 0.0 & 0.5 & 1.0 & 2.0 & 4.0 \\ \hline
**Test error** & 7.29\% & 6.53\% & 6.38\% & 7.12\% & 10.64\% \\ \hline
**Feature rank** & 5048 & 4801 & 4699 & 4578 & 4383 \\ \(\rho\) **of GAM** & 0.0 & 0.2 & 0.4 & 0.8 & 1.6 \\ \hline
**Test error** & 4.04\% & 3.65\% & 3.64\% & 3.81\% & 4.81\% \\ \hline
**Feature rank** & 7633 & 7381 & 7303 & 6897 & 6927 \\ \end{tabular}
\end{table}
Table 2: Results for more recent variants of SAM: ASAM (Kwon et al., 2021) and GAM (Zhang et al., 2023).
Figure 9: **ResNets-18 on CIFAR-10 and CIFAR-100. Feature ranks computed on the training and test sets are very similar to each other. We conclude that the low-rank effect of SAM generalizes also to unseen samples.**
Figure 10: **ResNet18 on CIFAR-10, CIFAR-100, Tiny ImageNet. Feature ranks computed with different PCA variance thresholds exhibit very similar trends.**
Figure 11: ResNet-18 on CIFAR-10 with different batch sizes. Similarly to the experiments in Figure 1 that were done with batch size \(256\), SAM with larger batch sizes (\(512\) and \(1024\)) also improves test error (_left_), leads to more generalizable features (_middle_), and noticeably reduces the feature rank at the intermediate ResNet block (_right_).
Figure 12: **ResNets-18 on CIFAR-10. Feature ranks computed at the _last_ block of ResNet (just before the linear head). We see that the neural collapse phenomenon occurs: the feature rank converges to \(10\) (i.e., the number of classes) even for \(\rho=0\). The feature rank with SAM is lower at the beginning but then increases to a slightly higher value than \(10\) towards the end.**
Additional results for contrastive image-text training
In Figure 13, we plot the retrieval error and feature rank for models with a _frozen_ (i.e., not updated during training) BERT text encoder trained with different \(\rho\) of SAM. We observe that the low-rank effect in the final text features (as well as image features) occurs even when a single layer is attached to a fixed BERT text encoder. Moreover, as a side note, SAM also improves the retrieval error in this setting as well.
## Appendix D Additional results for two-layer networks
In Figure 14, we plot the same metrics as in Figure 5 but, in addition, for tanh and absolute value activations. We can see that these models, when trained with higher \(\rho\) of SAM, have a smaller feature rank, smaller number of active tanh/abs units, and higher weight norm. This illustrates the fact that the same low-rank mechanism can be at play for many activations, not just for ReLU.
In Figure 15, we confirm empirically that using only the first-order terms in the SAM update--which is equivalent to the gradient norm regularization \(\sum_{i=1}^{n}\|\nabla\ell_{i}(\mathbf{\theta})\|_{2}\)--leads to the same effect in all key metrics including the feature rank and the number of active ReLUs. This empirical validation supports the argument outlined in Proposition 1 which is based only on the first-order terms that empirically appear to be dominant for the low-rank effect of SAM.
Figure 14: Two-layer ReLU/tanh/abs networks in the teacher-student setup. We can see that models trained with higher \(\rho\) of SAM have a smaller feature rank, smaller number of active tanh/abs units, and higher weight norm.
Figure 15: Two-layer ReLU networks in the teacher-student setup. We can see that models trained with higher \(\rho\) of gradient regularization (which can be seen as the first-order approximation of SAM) have a smaller feature rank, smaller number of active ReLUs, and higher weight norm.
Additional results for investigations of the low-rank mechanism on deep networks
In Figure 16, we show more detailed plots compared to Figure 7 regarding the behavior of feature ranks at different layers of the vision transformer on MS-COCO. First, we note that the feature rank grows over the residual blocks starting from very small values in the first blocks due to the identical positional encoding applied to all inputs at the first layer. Second, we do not observe a clear trend in the reduction of rank for larger \(\rho\) after applying attention or GeLU activations _in the residual branches_. This is in contrast to the ranks in the _residual stream_ (i.e., the values right after adding the identity) after attention subblocks which, as highlighted in Figure 7 in the main part, increase less for models with higher \(\rho\). For the sake of completeness, the absolute values of ranks in the residual stream are shown in Figure 16.
Finally, in Figure 17, we show the feature ranks for the last layer of the text encoder with _unfrozen_ and _frozen_ text encoders. Note that a single linear layer is used on top of the frozen text features to match the dimensions of the image and text encoders within the same feature space. We conclude that even a single linear layer trained on top of the text encoder can be sufficient for the low-rank effect of SAM.
Figure 16: **Contrastive image-text training on MS-COCO (image encoder). A more detailed look on the feature rank trend at different blocks depending on the \(\rho\) of SAM.**
## 6 Conclusion
Figure 17: **Contrastive image-text training on MS-COCO.** We show the feature ranks for the last layer of the text encoder with _unfrozen_ and _frozen_ text encoders. Matrix multiplication denotes the last linear layer used to match the dimensions of the image and text encoders within the same feature space. | ## Review
### Summary
This paper investigates the effects of Sharpness-Aware Minimization (SAM) on low-rank features in neural networks. It provides empirical evidence that SAM reduces feature rank during training, with a theoretical analysis supporting this finding in a simple two-layer ReLU network. The authors demonstrate that while SAM leads to lower rank features, these features do not necessarily correlate with better generalization performance. The paper discusses implications for more efficient retrieval and suggests future research directions, particularly in understanding the impact of SAM on various architectures and its relationship to generalization improvements.
### Strengths
- The paper presents thorough empirical evidence of low-rank features for various models trained with SAM across multiple tasks.
- The authors provide a clear mechanistic understanding of how SAM induces low-rank features in neural networks.
- The theoretical analysis of SAM offers insights into the optimization landscape of deep learning models.
- The submission is well organized, clearly written, and discusses future research directions.
- Understanding the properties of SAM-trained models is highly relevant to the deep learning community.
### Weaknesses
- The methodological contributions based on observations are limited; only observational results are presented.
- The paper lacks a comprehensive comparison of recent SAM variants.
- Empirical evidence is confined to a few datasets and models, limiting generalizability.
- The impact of low-rank features on tasks beyond retrieval and quantification is not explored.
- The results regarding the correlation between feature rank reduction and generalization are not convincingly illustrated.
- The paper does not adequately address the relationship between batch size and performance, especially in large-scale training setups.
### Questions
- What practical applications can be derived from the relatively small rank reduction achieved with SAM?
- How does the low-rank behavior of SAM interact with the phenomenon of neural collapse?
- Could the authors clarify the specific implications of Proposition 1 regarding pre-activation values?
- What are the implications of the findings for transfer learning and fine-tuning scenarios?
- Can the observations related to rank reduction and activation sparsity be utilized for more efficient training?
### Soundness
**Score:** 3
**Description:** 3 = good; the paper presents a solid theoretical foundation and empirical evidence, but some methodological aspects need further exploration.
### Presentation
**Score:** 3
**Description:** 3 = good; the writing is clear, but certain figures and discussions could benefit from improved clarity and detail.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper contributes valuable insights into SAM and its implications but lacks practical applications and comprehensive comparisons.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid and has moderate-to-high impact, though it requires some improvements in evaluation and practical applicability.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper offers significant insights into the effects of SAM on low-rank features in neural networks, supported by empirical evidence and a theoretical framework. While it presents some limitations regarding practical applications and comparisons with other methods, the strengths outweigh the weaknesses, justifying acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Learning Functional Transduction
Mathieu Chalvidal
Capital Fund Management
Paris, France
[email protected] &Thomas Serre
Carney Institute for Brain Science
Brown University, U.S.
[email protected] &Rufin VanRullen
Centre de Recherche Cerveau & Cognition
CNRS, Universite de Toulouse, France
[email protected]
###### Abstract
Research in statistical learning has polarized into two general approaches to perform regression analysis: Transductive methods construct estimates directly based on exemplar data using generic relational principles which might suffer from the curse of dimensionality. Conversely, inductive methods can potentially fit highly complex functions at the cost of compute-intensive solution searches. In this work, we leverage the theory of vector-valued Reproducing Kernel Banach Spaces (RKBS) to propose a hybrid approach: We show that transductive regression systems can be meta-learned with gradient descent to form efficient _in-context_ neural approximators of function defined over both finite and infinite-dimensional spaces (operator regression). Once trained, our _Transducer_ can almost instantaneously capture new functional relationships and produce original image estimates, given a few pairs of input and output examples. We demonstrate the benefit of our meta-learned transductive approach to model physical systems influenced by varying external factors with little data at a fraction of the usual deep learning training costs for partial differential equations and climate modeling applications.
## 1 Introduction
**Transduction vs. induction \(\diamond\)** In statistical learning, transductive inference (Vapnik, 2006) refers to the process of reasoning directly from observed (training) cases to new (testing) cases and contrasts with inductive inference, which amounts to extracting general rules from observed training cases to produce estimates. The former principle powers some of the most successful regression algorithms benefiting from straightforward construction properties, from \(k\)-Nearest Neighbors (Cover and Hart, 1967) to Support Vector Machines (Boser et al., 1992) or Gaussian Processes (Williams and Rasmussen, 1995). In contrast, deep learning research has mostly endeavored to find inductive solutions, relying on stochastic gradient descent to faithfully encode functional relationships described by large datasets into the weights of a neural network. Although ubiquitous, inductive neural learning with gradient descent is compute-intensive, necessitates large amounts of data, and poorly generalizes outside of the training distribution (Jin et al., 2020) such that a slight modification of the problem might require retraining and cause "catastrophic" forgetting of the previous solution (McCloskey and Cohen, 1989). This may be particularly problematic for real-world applications where data has heterogeneous sources, or only a few examples of the target function are available.
**Meta-learning functional regression \(\diamond\)** In this work, we meta-learn a regression program in the form of a neural network able to approximate an infinity of functions defined on finite or infinite-dimensional spaces through a transductive formulation of the solution based on the representer theorem. Namely, our model is meta-trained to take as input any dataset \(\mathcal{D}_{\mathcal{O}}\) of pairs \((\mathbf{v}_{i},\mathcal{O}(\mathbf{v}_{i}))_{i\leqslant I}\) of some target function \(\mathcal{O}\) together with a query element \(\mathbf{v}^{\prime}\) and produces directly an estimate of the image \(\mathcal{O}(\mathbf{v}^{\prime})\). After meta-training, our network is able to perform regression of unseen operators \(\mathcal{O}^{\prime}\) from varying dataset sizes in a single feedforward pass, such that our model can be interpreted as performing _in-context functional learning_. In order to build such a model, we leverage the theory of Reproducing Kernel Banach Spaces (RKBS) (Micchelli and Pontil, 2004; Zhang, 2013; Lin et al., 2022) and interpret the Transformer's (Vaswani et al., 2017) attention mechanism as a parametric vector-valued reproducing kernel. While kernel regression might be plagued by the "curse of dimensionality" (Bellman, 1966; Aggarwal et al., 2001), we show that our meta-learning approach can escape this pitfall, allowing, for instance, to perform instantaneous regressions over spaces of operators from a few example points, by building solutions to regression problem instances directly from the general reproducing kernel associated with such spaces.
**Contributions \(\diamond\)** We introduce the _Transducer_, a novel meta-learning approach leveraging reproducing kernel theory and deep learning methods to perform instantaneous regression of an infinity of functions in reproducing kernel spaces.
* Our model learns an implicit regression program able to identify, in a single feedforward pass, elements of specific functional spaces from any corresponding collection of input-output pairs describing the target function. Such ultra-fast regression program, which bypasses the need for gradient-based training, is also general and can be applied to functions either defined on finite dimensional spaces (scalar-valued function spaces) or infinite dimensional spaces (function-valued operator spaces).
* In particular, we demonstrate the flexibility and efficiency of our framework for fitting function-valued operators in two PDEs and one climate modeling problem. We show that our transductive approach allows for better generalization properties of neural operator regression, better precision when relevant data is available, and can be combined with iterative regression schemes that are too expensive for previous inductive approaches, thus holding potential to improve neural operators applicability.
* To the best of our knowledge, our proposal is the first to marry vector-valued RKBS theory with deep meta-learning and might also shed new light on the in-context learning abilities observed in deep attentional architectures.
Figure 1: Batches of functional images \(\mathcal{T}_{\mathbf{\theta}}(\mathcal{D}_{\mathcal{O}_{1}})(\mathbf{v}_{j})\times \mathcal{O}_{1}(\mathbf{v}_{j})=\mathbf{u}_{j}(x,t)\in C([0,1]^{2},\mathbb{R})\) obtained with the same _Transducer_ model \(\mathcal{T}_{\mathbf{\theta}}\) but conditioned, at each row, by a different dataset \((\mathcal{D}_{\mathcal{O}_{1}})_{i\leqslant 3}\) during feedforward computation. Each underlying operator \(\mathcal{O}_{i}\) corresponds to a different advection-diffusion-reaction equation (defined in Sec. 5.1) with spatially varying advection, diffusion, and reaction parameters unseen during training, and functions \((\mathbf{v}_{j})_{j\leqslant 9}\) correspond to initial conditions. **While usual neural regression approaches learn a _single_ target function (one row), our model learns to approximate instantaneously an _infinity_ of them.**Problem formulation
Let \(\mathcal{V}\) and \(\mathcal{U}\) be two (finite or infinite-dimensional) Banach spaces, respectively referred to as the input and output space, and let \(\mathcal{B}\) a Banach space of functions from \(\mathcal{V}\) to \(\mathcal{U}\). We also note \(L(\mathcal{U},\mathcal{B})\) (resp. \(L(\mathcal{U})\)) the set of bounded linear operators from \(\mathcal{U}\) to \(\mathcal{B}\) (resp. to itself). We consider the _meta-learning_ problem of creating a function \(\mathcal{T}\) able to approximate any functional element \(\mathcal{O}\) in the space \(\mathcal{B}\) from any finite collection of example pairs \(\mathcal{D}_{\mathcal{O}}=\{(\mathbf{v}_{i},\mathbf{u}_{i})\mid\mathbf{v}_{i}\in\mathcal{V},\mathbf{u}_{i}=\mathcal{O}(\mathbf{v}_{i})\}_{i\leqslant n}\). A prominent approach in statistical learning is _empirical risk minimization_ which consists in predefining a class \(\tilde{\mathcal{B}}\subset\mathcal{B}\) of computable functions from \(\mathcal{V}\) to \(\mathcal{U}\) and subsequently selecting a model \(\tilde{\mathcal{O}}\) as a minimizer (provided its existence) of a risk function \(\mathcal{L}:\mathcal{B}\times\mathbf{\mathcal{D}}\mapsto\mathbb{R}\):
\[\mathcal{T}(\mathcal{D}_{\mathcal{O}})\in\operatorname*{argmin}_{\tilde{ \mathcal{O}}\in\tilde{\mathcal{B}}}\mathcal{L}(\tilde{\mathcal{O}},\mathcal{D }_{\mathcal{O}}) \tag{1}\]
For instance, the procedure consisting in performing gradient-based optimization of objective (1) over a parametric class \(\tilde{\mathcal{B}}\) of neural networks defines implicitly such a function \(\mathcal{T}\). Fundamentally, this technique works by induction: It captures the statistical regularities of a single map \(\mathcal{O}\) into the parameters of the neural network \(\tilde{\mathcal{O}}\) such that \(\mathcal{D}_{\mathcal{O}}\) is discarded for inference. Recent examples of gradient-based optimization of neural networks for operator regression (i.e when \(\mathcal{V}\) and \(\mathcal{U}\) are infinite-dimensional) are DeepOnet (Lu et al., 2019) or Fourier Neural Operator (FNO) (Li et al., 2020). As previously discussed, for every regression problem instance, evaluating \(\mathcal{T}\) with these approaches requires a heavy training procedure. Instead, we show in this work that for specific spaces \(\tilde{\mathcal{B}}\), we can meta-learn a parametric map \(\mathcal{T}_{\theta}\) that transductively approximates (in a certain functional sense) any target function \(\mathcal{O}\in\mathcal{B}\) given a corresponding dataset \(\mathcal{D}_{\mathcal{O}}\) such that:
\[\forall\mathbf{v}\in\mathcal{V},\ \ \mathcal{T}(\mathcal{D}_{\mathcal{O}})(\mathbf{v})= \mathcal{T}_{\mathbf{\theta}}(\mathbf{v}_{1},\mathcal{O}(\mathbf{v}_{1}),\ldots,\mathbf{v}_{n },\mathcal{O}(\mathbf{v}_{n}),\mathbf{v})\approx\mathcal{O}(\mathbf{v}) \tag{2}\]
## 3 Vector-valued Reproducing Kernel Banach Space regression
In order to build \(\mathcal{T}_{\mathbf{\theta}}\), we leverage the structure of Reproducing Kernel Banach Spaces (RKBS) of functions \(\mathcal{B}\) and combine it with the universal approximation abilities of deep networks. As we will see in the experimental section, RKBS are very general spaces occurring in a wide range of machine learning applications. We start by recalling some elements of the theory of vector-valued RKBS developed in Zhang (2013). Namely, we will consider throughout _uniform_ Banach spaces \(\mathcal{S}\) (such condition guarantees the unicity of a compatible semi-inner product \(\langle.,.\rangle_{\mathcal{S}}:\mathcal{S}\times\mathcal{S}\mapsto\mathbb{R}\), i.e. \(\forall\mathbf{s}\in\mathcal{S},\langle\mathbf{s},\mathbf{s}\rangle_{\mathcal{S}}=||\mathbf{s} ||_{\mathcal{S}}^{2}\) and allows to build a bijective and isometric dual space \(\mathcal{S}^{*}\)).
**Theorem 1** (Vector-valued RKBS (Zhang, 2013)).: _A \(\mathcal{U}\)-valued reproducing kernel Banach space \(\mathcal{B}\) of functions from \(\mathcal{V}\) to \(\mathcal{U}\) is a Banach space such that for all \(\mathbf{v}\in\mathcal{V}\), the point evalutation \(\delta_{\mathbf{v}}:\mathcal{B}\mapsto\mathcal{U}\) defined as \(\delta_{\mathbf{v}}(\mathcal{O})=\mathcal{O}(\mathbf{v})\) is continuous. In this case, there exists a unique function \(\mathcal{K}:\mathcal{V}\times\mathcal{V}\mapsto L(\mathcal{U})\) such that for all \((\mathbf{v},\mathbf{u})\in\mathcal{V}\times\mathcal{U}\):_
\[\begin{cases}\mathbf{v}^{\prime}\mapsto\mathcal{K}(\mathbf{v},\mathbf{v}^{\prime})(\mathbf{u} )\in\mathcal{B}\\ \forall\ \mathcal{O}\in\mathcal{B},\ \langle\mathcal{O}(\mathbf{v}),\mathbf{u}\rangle_{ \mathcal{U}}=\langle\mathcal{O},\mathcal{K}(\mathbf{v},.)(\mathbf{u})\rangle_{ \mathcal{B}}\\ \forall\ \mathbf{v}^{\prime}\in\mathcal{V},\ ||\mathcal{K}(\mathbf{v},\mathbf{v}^{\prime})||_{L( \mathcal{U})}\leqslant\|\delta_{\mathbf{v}}\|_{L(\mathcal{B},\mathcal{U})}\|\delta _{\mathbf{v}^{\prime}}\|_{L(\mathcal{B},\mathcal{U})}\end{cases} \tag{3}\]
Informally, theorem (1) states that RKBS are spaces sufficiently regular such that the image of _any_ element \(\mathcal{O}\) at a given point \(\mathbf{v}\) can be expressed in terms of a unique function \(\mathcal{K}\). The latter is hence called the _reproducing kernel_ of \(\mathcal{B}\) and our goal is to leverage such unicity to build the map \(\mathcal{T}_{\mathbf{\theta}}\). Let \(\mathcal{D}\) be the set of all datasets \(\mathcal{D}_{\mathcal{O}}\) previously defined. The following original theorem gives the existence of a solution to our meta-learning problem and relates it to the reproducing kernel.
**Theorem 2** (RKBS representer map).: _Let \(\mathcal{B}\) be a \(\mathcal{U}\)-valued RKBS from \(\mathcal{V}\) to \(\mathcal{U}\), if for any dataset \(\mathcal{D}_{\mathcal{O}}\in\mathcal{D}\), \(\mathcal{L}(.,\mathcal{D}_{\mathcal{O}})\) is lower semi-continuous, coercive and bounded below, then there exists a function \(\mathcal{T}:\mathcal{D}\mapsto\mathcal{B}\) such that \(\mathcal{T}(\mathcal{D}_{\mathcal{O}})\) is a minimizer of equation (1). If \(\mathcal{L}\) is of the form \(\mathcal{L}(.,\mathcal{D}_{\mathcal{O}})=\tilde{\mathcal{L}}\circ\{\delta_{ \mathbf{v}_{i}}\}_{i\leqslant n}\) with \(\tilde{\mathcal{L}}:\mathcal{U}^{n}\mapsto\mathbb{R}\), then the dual \(\mathcal{T}(\mathcal{D}_{\mathcal{O}})\)* is in \(\overline{span}\{\mathcal{U}(\mathbf{v}_{i},.)(\mathbf{u})^{*},i\leqslant n,\mathbf{u}\in \mathcal{U}\}\). Furthermore, if for any \(\mathcal{D}_{\mathcal{O}}\), \(\mathcal{L}(.,\mathcal{D}_{\mathcal{O}})\) is strictly-convex, then \(\mathcal{T}\) is unique._
While theorem (2) provides conditions for the existence of solutions to each regression problem defined by (1), the usual method consisting in solving instance-specific minimization problederived from representer theorems characterizations is generally intractable in RKBS for several reasons (non-convexity and infinite-dimensionality of the problem w.r.t to variable \(\mathbf{u}\), non-additivity of the underlying semi-inner product). Instead, we propose to define image solutions \(\mathcal{T}(\mathcal{D}_{\mathcal{O}})=\sum_{i\leq n}\mathcal{K}_{\mathbf{\theta}}( \mathbf{v}_{i},.)(\tilde{\mathbf{u}}_{i})\) where \(\mathcal{K}_{\mathbf{\theta}}\) and \((\tilde{\mathbf{u}}_{i})\) are respectively the learned approximation of the \(\mathcal{U}\)-valued reproducing kernel \(\mathcal{K}\) and a set of functions in \(\mathcal{U}\) resulting from a sequence of deep transformations of image examples \((\mathbf{u}_{i})\) that we define below.
**Transformers attention as a reproducing kernel \(\diamond\)** We first need to build \(\mathcal{K}\). Several pieces of work have proposed constructions of \(\mathcal{K}\) in the context of a non-symmetric and nonpositive semi-definite real-valued kernel (Zhang et al., 2009; Georgiev et al., 2014; Lin et al., 2019; Xu and Ye, 2019). In particular, the exponential key-query function in the popular Transformer model (Vaswani et al., 2017) has been interpreted as a real-valued reproducing kernel \(\mathbf{\kappa_{\theta}}:\mathcal{V}\times\mathcal{V}\mapsto\mathbb{R}\) in Wright and Gonzalez (2021). We extend below this interpretation to more general vector-valued RKBS:
**Proposition 1** (**Dot-product attention as \(\mathcal{U}\)-valued reproducing kernel**).: _Let \((p_{j})_{j\leq J}\) a finite sequence of strictly positive integers, let \((A^{j}_{\mathbf{\theta}})_{j\leq J}\) be applications from \(\mathcal{V}\times\mathcal{V}\) to \(\mathbb{R}\), let \(V^{j}_{\mathbf{\theta}}\) be linear applications from \(L(\mathcal{U},\mathbb{R}^{p_{j}})\) and \(W_{\mathbf{\theta}}\) a linear application from \(L(\prod\limits_{j\leq J}\mathbb{R}^{p_{j}},\mathcal{U})\), the (multi-head) application \(\mathbf{\kappa_{\theta}}:\mathcal{V}\times\mathcal{V}\mapsto L(\mathcal{U})\) defined by_
\[\mathbf{\kappa_{\theta}}(\mathbf{v},\mathbf{v}^{\prime})(\mathbf{u})\triangleq W_{\mathbf{\theta}} \bigg{(}[...,A^{j}_{\mathbf{\theta}}(\mathbf{v},\mathbf{v}^{\prime})\cdot V^{j}_{\mathbf{ \theta}}(\mathbf{u}),...]_{j\leq J}\bigg{)} \tag{4}\]
_is the reproducing kernel of an \(\mathcal{U}\)-valued RKBS. In particular, if \(\mathcal{U}=\mathcal{V}=\mathbb{R}^{p}\), for \(p\in\mathbb{N}^{+}\) and \(A^{j}_{\mathbf{\theta}}=\exp\big{(}\frac{1}{\tau}(Q^{j}_{\mathbf{\theta}}\mathbf{v})^{T}( K^{j}_{\mathbf{\theta}}\mathbf{v}^{\prime})\big{)}/\sigma(\mathbf{v},\mathbf{v}^{\prime})\) with \((Q^{j}_{\mathbf{\theta}},K^{j}_{\mathbf{\theta}})_{j\leq J}\) applications from \(L(\mathcal{V},\mathbb{R}^{d})\), \(\mathbf{\kappa_{\theta}}\) corresponds to the dot-product attention mechanism of Vaswani et al. (2017)._
Note that in (4), the usual softmax normalization of the dot-product attention is included in the linear operations \(A^{j}_{\mathbf{\theta}}\) through \(\sigma\). We show in the next section how such kernel construction can be leveraged to build the map \(\mathcal{T}_{\theta}\) and that several variations of the kernel construction are possible, depending on the target space \(\mathcal{B}\) and applications. Contrary to usual kernel methods, our model jointly builds the full reproducing kernel approximation \(\mathcal{K}_{\mathbf{\theta}}\) and the instance-specific parametrization \((\tilde{\mathbf{u}}_{i})_{i\leq I}\) by integrating the solutions iteratively over several residual kernel transformations. We refer to our system as a _Transducer_, both as a tribute to the Transformer computation mechanism from which it is inspired and by analogy with signal conversion devices.
## 4 The Transducer
**Model definition \(\diamond\)** We define \(\mathcal{T}_{\mathbf{\theta}}\) as the sum of \(L\) residual kernel transformations \(\{\mathbf{\kappa_{\theta}}^{\ell}\}_{\ell\in L}\) whose expression can be written:
\[\forall\ \mathbf{v}\in\mathcal{V},\ \ \mathcal{T}_{\mathbf{\theta}}(\mathcal{D}_{ \mathcal{O}})(\mathbf{v})=\sum_{i\in I}\mathcal{K}_{\mathbf{\theta}}(\mathbf{v}_{i},\mathbf{v })(\tilde{\mathbf{u}}_{i})=\sum_{i\in I}\sum_{\ell\in L}\mathbf{\kappa_{\theta}}^{\ell }(\mathbf{v}_{i}^{\ell},\mathbf{v}^{\ell})(\mathbf{u}_{i}^{\ell}) \tag{5}\]
where \((\mathbf{v}_{i}^{\ell},\mathbf{u}_{i}^{\ell})_{i\leq n,l\leq L}\) and \((\mathbf{v}^{\ell})_{l\leq L}\) refer to sequences of representations starting respectively with \((\mathbf{v}_{i}^{1},\mathbf{u}_{i}^{1})_{i\leq n}=\mathcal{D}_{\mathcal{O}}\), \(\mathbf{v}^{1}=\mathbf{v}\) and defined by the following recursive relation:
\[\begin{cases}\mathbf{v}_{i}^{\ell+1}=F^{\ell}_{\mathbf{\theta}}(\mathbf{v}_{i}^{\ell})\,\ \mathbf{v}^{\ell+1}=F^{\ell}_{\mathbf{\theta}}(\mathbf{v}^{\ell})\\ \mathbf{u}_{i}^{\ell+1}=\tilde{\mathbf{u}}_{i}^{\ell}+\sum_{j}\mathbf{\kappa_{\theta}}^{ \ell}(\mathbf{v}_{j}^{\ell+1},\mathbf{v}_{i}^{\ell+1})(\tilde{\mathbf{u}}_{j}^{\ell})\ \text{where}\ \tilde{\mathbf{u}}_{i}^{\ell}=G^{\ell}_{\mathbf{\theta}}(\mathbf{u}_{i}^{\ell})\end{cases} \tag{6}\]
where \((F^{\ell}_{\mathbf{\theta}},G^{\ell}_{\mathbf{\theta}})_{\ell\leq L}\) correspond to (optional) parametric non-linear residual transformations applied in parallel to representations \((\mathbf{v}_{i}^{\ell},\mathbf{v}_{i}^{\ell})_{i\leq n}\) while \((\mathbf{\kappa_{\theta}}^{\ell})_{\ell\leq L}\) are intermediate kernel transformations of the form \(\kappa:\mathcal{V}\times\mathcal{V}\mapsto\mathcal{L}(\mathcal{U})\) such as the one defined in equation (4). Breaking down kernel estimation through this sequential construction allows for iteratively refining the reproducing kernel estimate and approximating on-the-fly the set of solutions \((\tilde{\mathbf{u}}_{i})_{i\leq I}\). We particularly investigate the importance of depth \(L\) in the experimental section. Note that equations (5) and (6) allow to handle both varying dataset sizes and efficient parallel inference by building the sequences \((\mathbf{v}^{\ell})_{\ell\leq L}\) with \((\mathbf{v}_{i}^{\ell})_{i\leq n,\ell\leq L}\) in batches and simply masking the unwanted cross-relational features during the kernel operations. All the operations are parallelizable and implemented on GPU-accelerated tensor manipulation libraries such that each regression with \(\mathcal{T}_{\mathbf{\theta}}\) is orders of magnitude faster than gradient-based regression methods.
**Discretization \(\diamond\)** In the case of infinite-dimensional functional input and output spaces \(\mathcal{V}\) and \(\mathcal{U}\), we can accommodate, for numerical computation purposes, different types of function representations previously proposed for neural operator regression and allowing for evaluation at an arbitrary point of their domain. For instance, output functions \(\mathbf{u}\) can be defined as a linear combination of learned or hardcoded finite set of functions, as in Lu et al. (2019) and Bhattacharya et al. (2020). We focus instead on a different approach inspired by Fourier Neural Operators (Li et al., 2020), by applying our model on the \(M\) first modes of a fast Fourier transform of functions \((\mathbf{v}_{i},\mathbf{u}_{i})_{i\leqslant n}\), and transform back its output, allowing us to work with discrete finite function representations.
**Meta-training \(\diamond\)** In order to train \(\mathcal{T}_{\mathbf{\theta}}\) to approximate a solution for all problems of the form (1), we jointly learn the kernel operations \(\langle\mathbf{\kappa}_{\theta}^{\ell}\rangle_{\ell\leqslant L}\) as well as transformations \(\langle F_{\theta}^{\ell}\rangle_{\ell\leqslant L}\). Let us assume that \(\mathcal{L}\) is of the form \(\mathcal{L}(\mathcal{O}^{\prime},\mathcal{D}_{\mathcal{O}})=\sum_{j}\mathcal{ L}(\mathcal{O}^{\prime}(\mathbf{v}_{j}),\mathcal{O}(\mathbf{v}_{j}))\), that datasets \(\mathcal{D}_{\mathcal{O}}\) are sampled according to a probability distribution \(\mathfrak{D}\) over the set of possible example sets with finite cardinality and that a random variable \(\mathfrak{T}\) select the indices of each test set \(\mathcal{D}_{\mathcal{O}}^{\mathit{test}}=\{(\mathbf{v}_{i},\mathbf{u}_{i})\mid(\mathbf{ v}_{j},\mathbf{u}_{j})\in\mathcal{D}_{\mathcal{O}},j\in\mathfrak{T}\}\) such that the train set is \(\mathcal{D}_{\mathcal{O}}^{\mathit{train}}=\mathcal{D}_{\mathcal{O}}\backslash \mathcal{D}_{\mathcal{O}}^{\mathit{test}}\). Our meta-learning objective is defined as:
\[\mathcal{J}(\mathbf{\theta})=\mathbb{E}_{\mathfrak{D},\mathfrak{T}}\Big{[}\sum_{j \in\mathfrak{T}}\tilde{\mathcal{L}}(\mathcal{T}_{\mathbf{\theta}}(\mathcal{D}_{ \mathcal{O}}^{\mathit{train}})(\mathbf{v}_{j}),\mathcal{O}(\mathbf{v}_{j}))\Big{]} \tag{7}\]
which can be tackled with gradient-based optimization w.r.t parameters \(\mathbf{\theta}\) provided \(\mathcal{L}\) is differentiable (see S.I for details). In order to estimate gradients of (7), we gather a meta-dataset of \(M\) operators example sets \((\mathcal{D}_{\mathcal{O}_{m}})_{m\leqslant M}\) and form, at each training step, a Monte-Carlo estimator over a batch of \(k\) datasets from this meta-dataset with random train/test splits \((\mathfrak{T}_{k})\). For each dataset in the batch, in order to form outputs \(\mathcal{T}_{\mathbf{\theta}}(\mathcal{D}_{\mathcal{O}}^{\mathit{train}})(\mathbf{v}_ {j})\) defined by equation (5), we initialize the model sequence in (6) by concatenating \(\mathcal{D}_{\mathcal{O}}^{\mathit{train}}\) with \(\mathcal{D}_{\mathcal{O}}^{\mathit{history}}=\{(\mathbf{v}_{i},0_{\mathcal{U}_{i}} )\mid\mathbf{v}_{i}\in\mathcal{D}_{\mathcal{O}}^{\mathit{test}}\}\) and obtain each infered output \(\mathcal{T}_{\mathbf{\theta}}(\mathcal{D}_{\mathcal{O}}^{\mathit{train}})(\mathbf{v}_ {j})\) as \(\sum_{\mathbf{v}_{i}\in\mathcal{D}_{\mathcal{O}}^{\mathit{train}}}\mathcal{K}_{\mathbf{ \theta}}(\mathbf{v}_{i},\mathbf{v}_{j})(\tilde{\mathbf{u}}_{i})\). Since each regression consists in a single feedforward pass, estimating gradients of the meta-parameters \(\mathbf{\theta}\) with respect to \(\mathcal{L}\) for each batch consists in a single backward pass achieved through automatic differentiation.
## 5 Numerical experiments
In this section, we show empirically that our meta-optimized model is able to approximate any element \(\mathcal{O}\) of diverse function spaces \(\mathcal{B}\) such as operators defined on scalar and vector-valued function spaces derived from parametric physical systems or regression problems in Euclidean spaces. In all experiments, we use the Adam optimizer (Kingma and Ba, 2014) to train for a fixed number of steps with an initial learning rate gradually halved along training. All the computation is carried on a single Nvidia Titan Xp GPU with 12GB memory. Further details can be found in S.I.
### Regression of Advection-Diffusion Reaction PDEs
Figure 2: **Left: RMSEs (and 95% C.I) on unseen operators as a function of the dataset size. The grey area corresponds to dataset cardinalities seen during the _Transducer_ meta-training. For comparison, we train baselines from scratch with the corresponding number of examples. Middle: Training losses of _Transducers_ with different depths. Applying several times the kernel improves performance. Untied weights yield the best performance. Right: (_Up_) 3 examples of the evolution of \(s(x,t)\) for different ADR equations and (_bottom_) spatial MSEs of intermediate representations \((\mathbf{u}^{\ell})\) colored by iteration \(\ell\). The decreasing error, consistent with the MSE reduction of deeper models, suggests that network depth allows for progressively refining function estimates.**First, we examine the problem of regressing operators \(\mathcal{O}\) associating functions \(\mathbf{v}\) from \(\mathcal{V}\subset C([0,1],\mathbb{R})\) to their solutions \(\mathbf{u}=\mathcal{O}(\mathbf{v})\subset C([0,1],\mathbb{R})\) with respect to advection-diffusion-reaction equations defined on the domain \(\Omega=[0,1]\times[0,t]\) with Dirichlet boundary conditions \(\mathbf{s}(0,t)=\mathbf{s}(1,t)=0\). We consider the space \(\mathcal{B}\) of operators \(\mathcal{O}_{(\mathbf{\delta},\mathbf{\nu},\mathbf{k},t)}\) specifically defined by \(\mathbf{v}(x)=\mathbf{s}(x,0)\), \(\mathbf{u}(x)=\mathbf{s}(x,t)\) and \(\mathbf{s}\) follows an equation depending on unknown random continuous spatially-varying diffusion \(\mathbf{\delta}(x)\), advection \(\mathbf{\nu}(x)\), and a scalar reaction term \(\mathbf{k}\sim\mathcal{U}[0,0.1]\):
\[\partial_{t}\mathbf{s}(x,t)=\underbrace{\nabla\cdot(\mathbf{\delta}(x)\nabla_{x}\mathbf{s} (x,t))}_{\text{diffusion}}+\underbrace{\mathbf{\nu}(x)\nabla_{x}\mathbf{s}(x,t)}_{\text {advection}}+\underbrace{\mathbf{k}\cdot(\mathbf{s}(x,t))^{2}}_{\text{reaction}} \tag{8}\]
Eq. (8) is generic with components arising in many physical systems of interest, leading to various forms of solutions \(\mathbf{s}(x,t)\). (We show examples for three different operators in figure 2.) Several methods exist for modeling such PDEs, but they require knowledge of the underlying parameters \((\mathbf{\delta},\mathbf{\nu},\mathbf{k})\) and often impose constraints on the evaluation point as well as expensive time-marching schemes to recover solutions. Here instead, we assume no _a priori_ knowledge of the solution and directly regress each operator \(\mathcal{O}\) behavior from the example set \(\mathcal{D}_{\mathcal{O}}\).
**Baselines and evaluation \(\diamond\)** We meta-trained our model to regress \(500\) different operators \(\mathcal{O}_{(\mathbf{\delta},\mathbf{\nu},\mathbf{k},1)}\) with \(t=1\) fixed and varying number of examples \(n\in[20,100]\) with images evaluated at 100 equally spaced points \((x_{k})_{k\in\llbracket 0,100\rrbracket}\) on the domain \([0,1]\) and meta-tested on a set of \(500\) operators with new parameters \(\mathbf{\delta},\mathbf{\nu},\mathbf{k}\) and initial states \(\mathbf{v}\). Although not directly equivalent to existing approaches, we compared our method with standard regression methods as well as inductive neural operator approximators. We applied standard finite-dimensional regression methods, \(K\)-Nearest-Neighbors (Fix and Hodges, 1989), Decision Trees (Quinlan, 1986) and Ridge regression with radial basis kernel (Hastie et al., 2009) to each discretized problems \(\big{(}\{\mathcal{O}(\mathbf{v}_{j})(x_{k})=\mathbf{u}_{j}(x_{k})\}_{j,k}\big{)}\) as well as two neural-based operators to each dataset instance: DeepONet (Lu et al., 2021) and FNO (Li et al., 2020). Finally, we also tried to meta-learn a parametrization of FNO that could adapt in 100 gradient steps following MAML (Finn et al., 2017) using the same meta-dataset. For all but this approach, an explicit optimization problem is solved before inference in order to fit the target operator. On the other hand, after meta-training of the _Transducer_, which takes only a few minutes to converge, each regression is solved in a single feedforward pass of the network, which is orders of magnitude faster and can be readily applied to new problems (Table 1).
**Results \(\diamond\)** We first verified that our model approximates well unseen operators from the test set (Table 1). We noted that our model learns a non-trivial kernel since the estimation produced with \(\ell_{2}\)-Nearest Neighbors remains poor even after \(1e^{3}\) examples. Moreover, since our model can perform inference for varying input dataset sizes, we examined the _Transducer_ accuracy when varying the number of examples and found that it learns a converging regression program (Figure 2) which consistently outperforms other instance-specific regression approaches with the exception of FNO when enough data is available (\(>60\)). We also found that deeper _Transducer_ models with more layers increase kernel approximation accuracy, with untied weights yielding the best performance (figure 2.)
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & RMSE & Time (s) & GFLOPs \\ \hline FNO & \(2.96e^{-4}\) & \(1.72e^{2}\) & \(1.68e^{2}\) \\ DEEPONET & \(2.02e^{-2}\) & \(7.85e^{1}\) & \(1.54e^{2}\) \\ FNO-MAML & \(1.4e^{-1}\) & \(2.10e^{0}\) & \(1.6e^{-1}\) \\ TRANSDUCER & \(\mathbf{2.39e^{-4}}\) & \(\mathbf{3.10e^{-3}}\) & \(\mathbf{1.06e^{-1}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: RMSE and compute costs of regression over 50 unseen datasets with \(n=50\) examples. Note that DeepONet and FNO are optimized from scratch while the _Transducer_ and _FNO-MAML_ have been pre-trained. GFLOPs represent the total number of floating point operations for regression.
Figure 3: Example of _Transducer_ regression extrapolation and RMSEs on OOD tasks with \(n=100\) examples. Color code corresponds to different correlation lengths used to generate the random functions \(\mathbf{\delta}(x)\) and \(\mathbf{\nu}(x)\). Much of the result remains below 1% error despite never being trained on such operators.
**Extrapolation to OOD tasks \(\diamond\)** We further tested the _Transducer_ ability to regress different operators than those seen during meta-training. Specifically, we varied the correlation length (CL) of the Gaussian processes used to generate functions \(\mathbf{\delta}(x)\) and \(\mathbf{\nu}(x)\) and specified a different target time \(t^{\prime}\neq 1\). We showed that the kernel meta-optimized for a solution at \(t=1\) transfers well to these new regression problems and that regression performance degrades gracefully as the target operators behave further away from the training set (figure 3), while inductive solutions do not generalize.
### Outliers detection on 2D Burgers' equation
We further show that our regression method can fit operators of vector-valued functions by examining the problem of predicting 2D vector fields defined as a solution of a two-dimensional Burgers' equation with periodic spatial boundary condition on the domain \(\Omega=[0,1]^{2}\times[0,10]\):
\[\partial_{t}\mathbf{s}(\mathbf{\vec{v}},t)=\underbrace{\mathbf{\nu}\Delta_{\mathbf{v}}\cdot \mathbf{s}(\vec{\mathbf{v}},t)}_{\text{diffusion}}-\underbrace{\mathbf{s}(\vec{\mathbf{v}},t) \nabla_{\mathbf{\sigma}}\mathbf{s}(\vec{\mathbf{v}},t)}_{\text{advection}} \tag{9}\]
Here, we condition our model with operators of the form, \(\mathbf{v}(\vec{\mathbf{x}})=\mathbf{s}(\vec{\mathbf{x}},t),\mathbf{u}(\vec{\mathbf{x}})=\mathbf{s}(\vec{ \mathbf{x}},t^{\prime})\) such that our model can regress the evolution of the vector field \(\vec{\mathbf{v}}\) starting at any time, with arbitrary temporal increment \(t^{\prime}-t\leq 10\) seconds and varying diffusion coefficient \(\mathbf{\nu}\in[0.1,0.5]\). We show in figure (4) and table (2) that our model is able to fit new instances of this problem with unseen parameters \(\mathbf{\nu}\).
**Fast and differentiable regression \(\diamond\)** Since the fitting operation is orders of magnitude faster than other operator regression approaches as well as fully differentiable, it allows for quickly executing expensive schemes requiring multiple regressions. This can have several applications, from bootstrapping or producing confidence intervals by varying the example set \(\mathcal{D}_{\mathcal{O}}^{\textit{train}}\), or performing inverse problems using Monte-Carlo Markov Chain in the dataset space. We showcase an example of this potential with an outlier detection experiment: We use the _Transducer_ to identify outliers of a dataset of Burgers' equation with coefficient \(\mathbf{\nu}_{1}\) artificially contaminated with elements from another dataset \(\mathbf{\nu}_{2}>\mathbf{\nu}_{1}\) at 5\(\%\) level. We identify outliers by estimating RMSEs over 5000 different regressions using random 50 \(\%\) splits with outliers potentially present in both training and testing sets. This technique takes only a few seconds to estimate while outliers are clearly identified as data points with significantly higher RMSE than the dataset average (figure 5). As a comparison, performing Spectral Clustering (Yu and Shi, 2003) on the FFT of elements \((\mathbf{u}_{i})\) yields very poor precision (table 2)
\begin{table}
\begin{tabular}{l c c} \hline \hline & \(t\) = 5s & \(t\) = 10s \\ \hline RMSE (test sets) & \(\mathbf{2.2e^{-3}}\) & \(\mathbf{5.9e^{-3}}\) \\ Outliers (Pre./Rec.) & \(\mathbf{100\%}/\mathbf{100\%}\) & \(\mathbf{100\%}/\mathbf{100\%}\) \\ S.C. (Pre./Rec.) & \(6\%/85\%\) & \(7\%/85\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **& Figure 5: Left: Meta-test regression and outlier detection results at two target times. RMSEs on Burgers’ equations averaged over 200 different parameter conditions \(\mathbf{\nu}\in[0.1,0.5]\) each with 100 train examples. Precision/Recall in outlier detection of the Transducer versus Spectral clustering. Right: RMSE distributions of each element in the contaminated dataset over the 5000 regressions. Outliers are clearly identified.**
Figure 4: Illustrative example of initial \((t=0)\), target \((t=10)\) and _Transducer_ estimation of the vector field \(s(\vec{\mathbf{x}},t)\) discretized at resolution \(64\times 64\) over the domain \([0,1]^{2}\) for the Burgers’ equation experiment. The last panel represents absolute error to ground truth.
### Climate modeling with seasonal adaptation
One advantage of our approach is the ability to select the data that is most relevant with respect to a certain prediction task and subsequently adapt the model response. For instance, robust and precise prediction of climate variables is difficult because models need to account for seasonal variability and adapt to drifting parameters. Even with a globally large amount of available data, the underlying operator of interest might change over time or be affected by unobserved phenomena. Hence, in order to fully exploit the potential of data-driven methods, being able to capture such variations might greatly help prediction performance on fluctuating and drifting data distributions. In order to illustrate the applicability and scalability of deep transductive learning, we considered the problem of predicting the Earth's surface air pressure solely from the Earth's surface air temperature at a high resolution. Data is taken from the ERA5 reanalysis (Hersbach et al., 2020) publicly made available by the ECMWF, which consists of hourly high-resolution estimates of multiple atmospheric variables from 1979 to the current day. We model pressure estimate on a \(720\times 720\) grid, resulting in a spatial resolution of \(0.25^{\circ}\times 0.5^{\circ}\), allowing us to capture small features such as local dynamics and geographic relief.
Similar to (Pathak et al., 2022), we modify a ViT backbone to incorporate a kernel transduction layer before every patch attention and compare our model to an unmodified ViT baseline with a matching number of parameters. We additionally compare with a fully transductive Nearest Neighbors approach. In Figure 6 and Table 3, we present results obtained on training a Transducer with data from 2010 to 2014 and testing it on data from 2016 to 2019. We trained our model by predicting 5 random days sampled from random 20-day windows and present two test configurations: We either condition the _Transducer_ with a window centered at the previous year's same date (P.Y) or with a 15 days window lagging by a week (P.W) (see SI for details). Both cases outperform transductive and inductive baselines with fast inference time, confirming that our solution can scale to large problems and be combined with other deep learning modules.
### Finite-dimensional case: MNIST-like datasets classification
We finally confirm the generality of our approach in the case of finite-dimensional spaces \(\mathcal{U}\) and \(\mathcal{V}\) by studying the meta-learning problem presented in Kirsch et al. (2022) which consists in regressing classification functions from the 784-dimensional space of MNIST-like images (LeCun
Figure 6: _Up_ - Illustrative examples of \(720\times 720\) temperature (_left_) and pressure (_right_) fields of the ERA5 dataset. _Bottom_ - Estimated pressure field from conditioning the _Transducer_ with 15 days data dating 1 week before the target date. Insets show recovered details of the estimation (_blue_) compared with ground truth (_red_).
\begin{table}
\begin{tabular}{l c c} \hline Method & LWMSE (hPa) & Time (s) \\ \hline Nearest-Neighbors & \(67.326\) & \(5.915\) \\ ViT & \(32.826\) & **0.053** \\ Transducer - (P.Y) & \(25.293\) & \(0.192\) \\ Transducer - (P.W) & \(\mathbf{22.718}\) & \(0.192\) \\ \hline \end{tabular}
\end{table}
Table 3: Latitude-weighted mean-square error (in hectopascals) and inference time for the earth surface pressure prediction task.
and Cortes, 2010) to a 10-dimensional space of one-hot class encoding (i.e functions considered are \(\mathcal{O}:[0,1]^{784}\mapsto[0,1]^{10}\)). We meta-train a 2-layer Transducer to classify consistently pixel-permuted and class-permuted versions of MNIST. We then meta-test the Transducer to classify the unpermuted MNIST dataset and how the regression map transfer to Fashion-MNIST (Xiao et al., 2017) and KMNIST (Clanuwat et al., 2018). We show in Table 7 that the Transducer outperforms previous meta-learning approaches on both the original MNIST classification task as well as transfer better on Fashion MNIST and K-MNIST classfication tasks.
## 6 Related work
**Transductive Machine learning**\(\circ\) Principles of transductive statistical estimation have been formally described in Gammerman et al. (1998); Vapnik (1999). Algorithms relying on relational structures between data points such as \(K\)-nearest neighbors (Cover and Hart, 1967) and kernel methods (Nadaraya, 1964; Watson, 1964) build estimates by weighing examples with respect to a certain metric space. Further, the "kernel trick" allows to embed possibly infinite-dimensional features (Ferraty and Vieu, 2006) into finite Gram matrix representations that are also well-suited for multi-task regression (Evgeniou et al., 2005; Caponnetto et al., 2008). Distinctively, Gaussian processes regression (Williams and Rasmussen, 1995) combines transduction with Bayesian modeling to estimate a posterior distribution over possible functions. These techniques might suffer from the so-called "curse of dimensionality": with growing dimensionality, the density of exemplar point diminishes, which increases estimators' variance. More recent work combining deep learning with transductive inference has shown promising results even in high-dimensional spaces for few-shot learning (Snell et al., 2017; Sung et al., 2018) or sequence modeling (Jaitly et al., 2015), but the vast majority of neural networks still remain purely inductive.
**Neural operator learning**\(\circ\) The universal approximation abilities of neural networks have been generalized to infinite-dimensional function spaces: Chen and Chen (1995) showed that finite neural parametrization can approximate well infinite-dimensional operators. More recent work using neural networks to perform operator regression has shown strong results (Lu et al., 2019), especially when mixed with tools from functional analysis and physics (Raissi et al., 2017; Li et al., 2020; Gupta et al., 2021; Li et al., 2020; Nelsen and Stuart, 2021; Wang et al., 2021; Roberts et al., 2021) and constitutes a booming research direction in particular for physical applications (Goswami et al., 2022; Pathak et al., 2022; Vinuesa and Brunton, 2022; Wen et al., 2022; Pickering et al., 2022). Recently, the Transformer's attentional computation has been interpreted as a Petrov-Galerkin projection (Cao, 2021) or through Reproducing Kernel Hilbert Space theory (Kissas et al., 2022) for building such neural operators, but these perspectives apply attention to fit a single target operator.
**Meta-learning and in-context learning**\(\circ\) Promising work towards more general and adaptable machines has consisted in automatically "learning to learn" or meta-learning programs (Schmidhuber et al., 1997; Vilalta and Drissi, 2002), by either explicitly treating gradient descent as an optimizable object (Finn et al., 2017), modeling an optimizer as a black-box autoregressive model (Ravi and Larochelle, 2017) or informing sequential strategies via memorization (Santoro et al., 2016; Ortega et al., 2019) More recently, converging findings in various domains from reinforcement learning (Mishra et al., 2018; Laskin et al., 2022), natural language processing (Brown et al., 2020; Xie et al., 2021; Olsson et al., 2022) and functional regression (Garg et al., 2022) have established the ability of set-based attentional computation in the Transformer (Vaswani et al., 2017) for _in-context_ learning by flexibly extracting functional relationships and performing dynamic association such as linguistic analogy or few-shot behavioral imitation. We show that the theory of RKBS can help interpret such property and extends it to function-valued operators regression.
Figure 7: Comparison of meta-test accuracies of MNIST-like datasets classification tasks. Meta-learning models are trained on transformations of MNIST and are meta-tested on original MNIST, FashionMNIST and KMNIST.
Discussion
We proposed a novel transductive model combining kernel methods and neural networks that is capable of performing regression over entire function spaces. We based our model on the theory of vector-valued Reproducing Kernel Banach Spaces and showcased several instances where it learns a regression program able, in a single feedforward pass, to reach performance levels that match or outperform previous instance-specific neural operators or meta-learning systems. Our approach holds potential to create programs flexibly specified by data and able to model entire families of complex physical systems, with particular applications in functional hypothesis testing, dataset curation or fast ensemble learning. However, one limitation is that our model relies on meta-training, which requires collecting a sufficiently diverse meta-dataset to explore the kernel space. In future work, we plan to investigate methods such as synthetic augmentation to reduce the costs of meta-training.
## 8 Acknowledgements
This work was funded by ANR-31A Artificial and Natural Intelligence Toulouse Institute (ANR-19-PI3A0004) ONR (N00014-19- 1-2029), NSF (IIS-1912280 and EAR-1925481), DARPA (D19AC00015) and NIH/NINDS (R21 NS 112743). Additional support provided by the Carney Institute for Brain Science and the Center for Computation and Visualization (CCV) and the NIH Office of the Director grant S10OD025181.
## References
* Aggarwal et al. (2001) Aggarwal, C. C., Hinneburg, A., and Keim, D. A. (2001). On the surprising behavior of distance metrics in high dimensional space. In _International conference on database theory_, pages 420-434. Springer.
* Bellman (1966) Bellman, R. (1966). Dynamic programming. _Science_, 153(3731):34-37.
* Bhattacharya et al. (2020) Bhattacharya, K., Hosseini, B., Kovachki, N. B., and Stuart, A. M. (2020). Model reduction and neural networks for parametric pdes.
* Boser et al. (1992) Boser, B. E., Guyon, I. M., and Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In _Proceedings of the fifth annual workshop on Computational learning theory_, pages 144-152.
* Brown et al. (2020) Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. _Advances in neural information processing systems_, pages 1877-1901.
* Cao (2021) Cao, S. (2021). Choose a transformer: Fourier or galerkin. _Advances in Neural Information Processing Systems_, 34:24924-24940.
* Caponnetto et al. (2008) Caponnetto, A., Micchelli, C. A., Pontil, M., and Ying, Y. (2008). Universal multi-task kernels. _The Journal of Machine Learning Research_, 9:1615-1646.
* Chen and Chen (1995) Chen, T. and Chen, H. (1995). Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. _IEEE Transactions on Neural Networks_, 6(4):911-917.
* Clanuwat et al. (2018) Clanuwat, T., Bober-Irizar, M., Kitamoto, A., Lamb, A., Yamamoto, K., and Ha, D. (2018). Deep learning for classical japanese literature.
* Cover and Hart (1967) Cover, T. and Hart, P. (1967). Nearest neighbor pattern classification. _IEEE transactions on information theory_, 13(1):21-27.
* Evgeniou et al. (2005) Evgeniou, T., Micchelli, C. A., Pontil, M., and Shawe-Taylor, J. (2005). Learning multiple tasks with kernel methods. _Journal of machine learning research_, 6(4).
* Ferraty and Vieu (2006) Ferraty, F. and Vieu, P. (2006). _Nonparametric functional data analysis: theory and practice_, volume 76. Springer.
* Finn et al. (2017) Finn, C., Abbeel, P., and Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In Precup, D. and Teh, Y. W., editors, _Proceedings of the 34th International Conference on Machine Learning_, volume 70 of _Proceedings of Machine Learning Research_, pages 1126-1135. PMLR.
* Fix and Hodges (1989) Fix, E. and Hodges, J. L. (1989). Discriminatory analysis. nonparametric discrimination: Consistency properties. _International Statistical Review / Revue Internationale de Statistique_, 57(3):238-247.
* Gammerman et al. (1998) Gammerman, A., Vovk, V., and Vapnik, V. (1998). Learning by transduction. In _Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence_, pages 148-155.
* Garg et al. (2022) Garg, S., Tsipras, D., Liang, P., and Valiant, G. (2022). What can transformers learn in-context? a case study of simple function classes.
* Georgiev et al. (2014) Georgiev, P., Sanchez-Gonzalez, L., and Pardalos, P. (2014). Construction of pairs of reproducing kernel banach spaces.
* Goswami et al. (2022) Goswami, S., Bora, A., Yu, Y., and Karniadakis, G. E. (2022). Physics-informed neural operators. _arXiv preprint arXiv:2207.05748_.
* Gupta et al. (2021) Gupta, G., Xiao, X., and Bogdan, P. (2021). Multiwavelet-based operator learning for differential equations. _Advances in Neural Information Processing Systems_, 34:24048-24062.
* Hastie et al. (2009) Hastie, T., Tibshirani, R., and Friedman, J. (2009). The elements of statistical learnin. _Cited on_, 33.
* Hastie et al. (2009)Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horanyi, A., Munoz-Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D., Simmons, A., Soci, C., Abdalla, S., Abellan, X., Balsamo, G., Bechtold, P., Biavati, G., Bidlot, J., Bonavita, M., De Chiara, G., Dahlgren, P., Dee, D., Diamantakis, M., Dragani, R., Flemming, J., Forbes, R., Fuentes, M., Geer, A., Haimberger, L., Healy, S., Hogan, R. J., Holm, E., Janiskova, M., Keeley, S., Laloyaux, P., Lopez, P., Lupu, C., Radnotti, G., de Rosnay, P., Rozum, I., Vamborg, F., Villaume, S., and Thepaut, J.-N. (2020). The era5 global reanalysis. _Quarterly Journal of the Royal Meteorological Society_, 146(730):1999-2049.
* Jaitly et al. (2015) Jaitly, N., Sussillo, D., Le, Q. V., Vinyals, O., Sutskever, I., and Bengio, S. (2015). A neural transducer. _arXiv preprint arXiv:1511.04868_.
* Jin et al. (2020) Jin, P., Lu, L., Tang, Y., and Karniadakis, G. E. (2020). Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness. _Neural Networks_, 130:85-99.
* Kingma and Ba (2014) Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_.
* Kirsch et al. (2022) Kirsch, L., Harrison, J., Sohl-Dickstein, J., and Metz, L. (2022). General-purpose in-context learning by meta-learning transformers. In _Sixth Workshop on Meta-Learning at the Conference on Neural Information Processing Systems_.
* Kirsch et al. (2021) Kirsch, L., Schmidhuber, J., and Al (2021). Meta learning backpropagation and improving it. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W., editors, _Advances in Neural Information Processing Systems_, volume 34, pages 14122-14134. Curran Associates, Inc.
* Kissas et al. (2022) Kissas, G., Seidman, J. H., Guilhoto, L. F., Preciado, V. M., Pappas, G. J., and Perdikaris, P. (2022). Learning operators with coupled attention. _Journal of Machine Learning Research_, 23(215):1-63.
* Laskin et al. (2022) Laskin, M., Wang, L., Oh, J., Parisotto, E., Spencer, S., Steigerwald, R., Strouse, D., Hansen, S., Filos, A., Brooks, E., Gazeau, M., Sahni, H., Singh, S., and Mnih, V. (2022). In-context reinforcement learning with algorithm distillation.
* LeCun and Cortes (2010) LeCun, Y. and Cortes, C. (2010). MNIST handwritten digit database.
* Li et al. (2020a) Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., and Anandkumar, A. (2020a). Fourier neural operator for parametric partial differential equations. _arXiv preprint arXiv:2010.08895_.
* Li et al. (2020b) Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Stuart, A., Bhattacharya, K., and Anandkumar, A. (2020b). Multipole graph neural operator for parametric partial differential equations. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H., editors, _Advances in Neural Information Processing Systems_, volume 33, pages 6755-6766. Curran Associates, Inc.
* Lin et al. (2019) Lin, R., Zhang, H., and Zhang, J. (2019). On reproducing kernel banach spaces: Generic definitions and unified framework of constructions.
* Lin et al. (2022) Lin, R. R., Zhang, H. Z., and Zhang, J. (2022). On reproducing kernel banach spaces: Generic definitions and unified framework of constructions. _Acta Mathematica Sinica, English Series_, 38(8):1459-1483.
* Lu et al. (2019) Lu, L., Jin, P., and Karniadakis, G. E. (2019). Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. _arXiv preprint arXiv:1910.03193_.
* Lu et al. (2021) Lu, L., Jin, P., Pang, G., Zhang, Z., and Karniadakis, G. E. (2021). Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. _Nature Machine Intelligence_, 3(3):218-229.
* McCloskey and Cohen (1989) McCloskey, M. and Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. volume 24 of _Psychology of Learning and Motivation_, pages 109-165. Academic Press.
* McCloskey et al. (2019)* Micchelli and Pontil (2004) Micchelli, C. A. and Pontil, M. (2004). A function representation for learning in banach spaces. In _Learning Theory: 17th Annual Conference on Learning Theory, COLT 2004, Banff, Canada, July 1-4, 2004. Proceedings 17_, pages 255-269. Springer.
* Mishra et al. (2018) Mishra, N., Rohaninejad, M., Chen, X., and Abbeel, P. (2018). A simple neural attentive meta-learner. In _International Conference on Learning Representations_.
* Nadaraya (1964) Nadaraya, E. A. (1964). On estimating regression. _Theory of Probability & Its Applications_, 9(1):141-142.
* Nelsen and Stuart (2021) Nelsen, N. H. and Stuart, A. M. (2021). The random feature model for input-output maps between banach spaces. _SIAM Journal on Scientific Computing_, 43(5):A3212-A3243.
* Olsson et al. (2022) Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T., Mann, B., Askell, A., Bai, Y., Chen, A., et al. (2022). In-context learning and induction heads. _arXiv preprint arXiv:2209.11895_.
* Ortega et al. (2019) Ortega, P. A., Wang, J. X., Rowland, M., Genewein, T., Kurth-Nelson, Z., Pascanu, R., Heess, N., Veness, J., Pritzel, A., Sprechmann, P., et al. (2019). Meta-learning of sequential strategies. _arXiv preprint arXiv:1905.03030_.
* Pathak et al. (2022) Pathak, J., Subramanian, S., Harrington, P., Raja, S., Chattopadhyay, A., Mardani, M., Kurth, T., Hall, D., Li, Z., Azizzadenesheli, K., Hassanzadeh, P., Kashinath, K., and Anandkumar, A. (2022). Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators.
* Pickering et al. (2022) Pickering, E., Guth, S., Karniadakis, G. E., and Sapsis, T. P. (2022). Discovering and forecasting extreme events via active learning in neural operators. _Nature Computational Science_, 2(12):823-833.
* Quinlan (1986) Quinlan, J. R. (1986). Induction of decision trees. _Machine learning_, 1(1):81-106.
* Raissi et al. (2017) Raissi, M., Perdikaris, P., and Karniadakis, G. E. (2017). Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. _arXiv preprint arXiv:1711.10561_.
* Ravi and Larochelle (2017) Ravi, S. and Larochelle, H. (2017). Optimization as a model for few-shot learning. In _ICLR_.
* Roberts et al. (2021) Roberts, N. C., Khodak, M., Dao, T., Li, L., Re, C., and Talwalkar, A. (2021). Rethinking neural operations for diverse tasks. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W., editors, _Advances in Neural Information Processing Systems_.
* Santoro et al. (2016) Santoro, A., Bartunov, S., Botvinick, M. M., Wierstra, D., and Lillicrap, T. P. (2016). One-shot learning with memory-augmented neural networks. _CoRR_, abs/1605.06065.
* Schmidhuber et al. (1997) Schmidhuber, J., Zhao, J., and Wiering, M. (1997). Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. _Machine Learning_, 28.
* Snell et al. (2017) Snell, J., Swersky, K., and Zemel, R. (2017). Prototypical networks for few-shot learning. _Advances in neural information processing systems_, 30.
* Sung et al. (2018) Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P. H., and Hospedales, T. M. (2018). Learning to compare: Relation network for few-shot learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Vapnik (1999) Vapnik, V. (1999). _The nature of statistical learning theory_. Springer science & business media.
* Vapnik (2006) Vapnik, V. (2006). _Estimation of dependences based on empirical data_. Springer Science & Business Media.
* Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. (2017). Attention is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, _Advances in Neural Information Processing Systems_, volume 30. Curran Associates, Inc.
* Vapnik et al. (2017)Vilalta, R. and Drissi, Y. (2002). A perspective view and survey of meta-learning. _Artif. Intell. Rev._, 18(2):77-95.
* Vinuesa and Brunton (2022) Vinuesa, R. and Brunton, S. L. (2022). Enhancing computational fluid dynamics with machine learning. _Nature Computational Science_, 2(6):358-366.
* Wang et al. (2021) Wang, S., Wang, H., and Perdikaris, P. (2021). Learning the solution operator of parametric partial differential equations with physics-informed deeponets. _Science Advances_, 7(40):eabi8605.
* Watson (1964) Watson, G. S. (1964). Smooth regression analysis. _Sankhya: The Indian Journal of Statistics, Series A_, pages 359-372.
* Wen et al. (2022) Wen, G., Li, Z., Azizzadenesheli, K., Anandkumar, A., and Benson, S. M. (2022). U-fno--an enhanced fourier neural operator-based deep-learning model for multiphase flow. _Advances in Water Resources_, 163:104180.
* Williams and Rasmussen (1995) Williams, C. and Rasmussen, C. (1995). Gaussian processes for regression. _Advances in neural information processing systems_, 8.
* Wright and Gonzalez (2021) Wright, M. A. and Gonzalez, J. (2021). Transformers are deep infinite-dimensional non-merer binary kernel machines. _ArXiv_, abs/2106.01506.
* Xiao et al. (2017) Xiao, H., Rasul, K., and Vollgraf, R. (2017). Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms.
* Xie et al. (2021) Xie, S. M., Raghunathan, A., Liang, P., and Ma, T. (2021). An explanation of in-context learning as implicit bayesian inference.
* Xu and Ye (2019) Xu, Y. and Ye, Q. (2019). volume 258. American Mathematical Society.
* Yu and Shi (2003) Yu and Shi (2003). Multiclass spectral clustering. In _Proceedings Ninth IEEE International Conference on Computer Vision_, pages 313-319 vol.1.
* Zhang (2013) Zhang, H. (2013). Vector-valued reproducing kernel banach spaces with applications to multi-task learning. _Journal of Complexity_, 29(2):195-215.
* Zhang et al. (2009) Zhang, H., Xu, Y., and Zhang, J. (2009). Reproducing kernel banach spaces for machine learning. _Journal of Machine Learning Research_, 10(12). | ## Review
### Summary
The paper proposes a novel method for meta-learning through neural operators in reproducing kernel Banach spaces (RKBS). It introduces an algorithm for few-shot learning, allowing the model to generate functional relationships based on new datasets at inference time. By leveraging transformer attention layers as reproducing kernels, the authors demonstrate the effectiveness of their approach across various applications, including modeling partial differential equations (PDEs) and other benchmarks. The reviewers noted that while the authors successfully addressed many concerns, further clarity in certain sections and additional comparisons with existing methods would enhance the paper.
### Strengths
- The idea of performing few-shot learning of neural operators is novel and relevant, particularly in physical modeling.
- The paper is mostly clear, considering the complexity of the underlying ideas.
- The connection to transformer architectures is intriguing and adds depth to the work.
- Experiments are varied and demonstrate the approach's effectiveness in multiple contexts, especially PDE modeling.
- The motivation for the research is strong, linking meta-learning with RKBS theory, which could inspire further exploration.
### Weaknesses
- The initial discussion on transductive versus inductive learning is misleading and requires clarification.
- Some standard few-shot learning methods are not adequately discussed, leading to confusion regarding the need for the proposed complex formulation.
- The computational complexity of the method, particularly regarding the execution of multiple FFTs, is not sufficiently addressed.
- Some experimental comparisons appear unfair, particularly when juxtaposing the method with fully trained competitors.
- Lack of broader applications, like in-context learning of language patterns, limits the scope of experiments.
### Questions
- Could the authors rephrase the transductive discussion to enhance clarity?
- What is the training and test complexity of the proposed method?
- Can a practical example of the method's application in finite-dimensional spaces be provided for improved readability?
- How does the proposed method compare with other meta-learning approaches in terms of performance and advantages?
- Will the code for the method be made publicly available?
### Soundness
**Score:** 11
**Description:** 3 = good. The foundational concepts are generally sound, though there are areas of confusion that need addressing, particularly regarding comparisons with established methods.
### Presentation
**Score:** 11
**Description:** 3 = good. The paper is mostly well-presented, but some sections lack clarity, requiring additional explanations to aid reader understanding.
### Contribution
**Score:** 9
**Description:** 3 = good. The method presents novel ideas and applications, but further comparisons and discussions on how it stands against existing methods would bolster its contribution.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements. The paper presents solid technical work with potential high impact, yet it requires some clarifications and additional comparisons to strengthen its position.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper presents original ideas within the important field of meta-learning and neural operators, demonstrating sound methodology and promising results. While there are some weaknesses related to clarity and comparisons with existing work, the overall contribution and impact justify acceptance. Addressing the raised questions and weaknesses will enhance the paper's quality.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Solving Inverse Physics Problems with Score Matching
Benjamin Holzschuh
1Technical University of Munich, 85748 Garching, Germany
Simona Vegetti
Max Planck Institute for Astrophysics, 85748 Garching, Germany
Nils Thuerey
Technical University of Munich, 85748 Garching, Germany
###### Abstract
We propose to solve inverse problems involving the temporal evolution of physics systems by leveraging recent advances from diffusion models. Our method moves the system's current state backward in time step by step by combining an approximate inverse physics simulator and a learned correction function. A central insight of our work is that training the learned correction with a single-step loss is equivalent to a score matching objective, while recursively predicting longer parts of the trajectory during training relates to maximum likelihood training of a corresponding probability flow. We highlight the advantages of our algorithm compared to standard denoising score matching and implicit score matching, as well as fully learned baselines for a wide range of inverse physics problems. The resulting inverse solver has excellent accuracy and temporal stability and, in contrast to other learned inverse solvers, allows for sampling the posterior of the solutions. Code and experiments are available at [https://github.com/tum-pbs/SMDP](https://github.com/tum-pbs/SMDP).
## 1 Introduction
Many physical systems are time-reversible on a microscopic scale. For example, a continuous material can be represented by a collection of interacting particles [1, 2] based on which we can predict future states. We can also compute earlier states, meaning we can evolve the simulation backward in time [14]. When taking a macroscopic perspective, we only know average quantities within specific regions [13], which constitutes a loss of information, and as a consequence, time is no longer reversible. In the following, we target inverse problems to reconstruct the distribution of initial macroscopic states for a given end state. This problem is genuinely tough [1, 2, 3, 10], and existing methods lack tractable approaches to represent and sample the distribution of states.
Our method builds on recent advances from the field of diffusion-based approaches [12, 11, 13]: Data samples \(\mathbf{x}\in\mathbb{R}^{D}\) are gradually corrupted into Gaussian white noise via a stochastic differential equation (SDE) \(d\mathbf{x}=f(\mathbf{x},t)dt+g(t)dW\), where the deterministic component of the SDE \(f:\mathbb{R}^{D}\times\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{D}\) is called _drift_ and the coefficient of the \(D\)-dimensional Brownian motion \(W\) denoted by \(g:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) is called _diffusion_. If the **score**\(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) of the data distribution \(p_{t}(\mathbf{x})\) of corrupted samples at time \(t\) is known, then the dynamics of the SDE can be reversed in time, allowing for the sampling of data from noise. Diffusion models are trained to approximate the score with a neural network \(s_{\theta}\), which can then be used as a plug-in estimate for the reverse-time SDE.
However, in our physics-based approach, we consider an SDE that describes the physics system as \(d\mathbf{x}=\mathcal{P}(\mathbf{x})dt+g(t)dW\), where \(\mathcal{P}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) is a physics simulator that replaces the drift term of diffusion models. Instead of transforming the data distribution to noise, we transform a simulation state at \(t=0\) to a simulation state at \(t=T\) with Gaussian noise as a perturbation. Based on a givenend state of the system at \(t=T\), we predict a previous state by taking a small time step backward in time and repeating this multiple times. Similar to the reverse-time SDE of diffusion models, the prediction of the previous state depends on an approximate inverse of the physics simulator, a learned update \(s_{\theta}\), and a small Gaussian perturbation.
The training of \(s_{\theta}\) is similar to learned correction approaches for numerical simulations [13, 14, 15]: The network \(s_{\theta}\) learns corrections to simulation states that evolve over time according to a physics simulator so that the corrected trajectory matches a target trajectory. In our method, we target learning corrections for the "reverse" simulation. Training can either be based on single simulation steps, which only predict a single previous state, or be extended to rollouts for multiple steps. The latter requires the differentiability of the inverse physics step [16].
Importantly, we show that under mild conditions, learning \(s_{\theta}\) is equivalent to matching the score \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) of the training data set distribution at time \(t\). Therefore, sampling from the reverse-time SDE of our physical system SDE constitutes a theoretically justified method to sample from the correct posterior.
While the training with single steps directly minimizes a score matching objective, we show that the extension to multiple steps corresponds to maximum likelihood training of a related neural ordinary differential equation (ODE). Considering multiple steps is important for the stability of the produced trajectories. Feedback from physics and neural network interactions at training time leads to more robust results.
In contrast to previous diffusion methods, we include domain knowledge about the physical process in the form of an approximate inverse simulator that replaces the drift term of diffusion models [13, 14]. In practice, the learned component \(s_{\theta}\) corrects any errors that occur due to numerical issues, e.g., the time discretization, and breaks ambiguities due to a loss of information in the simulation over time.
Figure 1 gives an overview of our method. Our central aim is to show that the combination of diffusion-based techniques and differentiable simulations has merit for solving inverse problems and to provide a theoretical foundation for combining PDEs and diffusion modeling. In the following, we refer to methods using this combination as _score matching via differentiable physics_ (SMDP). The main contributions of our work are: (1) We introduce a reverse physics simulation step into diffusion models to develop a probabilistic framework for solving inverse problems. (2) We provide the theoretical foundation that this combination yields learned corrections representing the score of the underlying data distribution. (3) We highlight the effectiveness of SMDP with a set of challenging inverse problems and show the superior performance of SMDP compared to a range of stochastic and deterministic baselines.
## 2 Related Work
Diffusion models and generative modeling with SDEsDiffusion models [12, 2] have been considered for a wide range of generative applications, most notably for image [10], video [11, 12, 13], audio synthesis [14], uncertainty quantification [15, 16, 17, 18], and as autoregressive PDE-solvers [16]. However, most approaches either focus on the denoising objective common for tasks involving natural images or the synthesis process of solutions does not directly consider the underlying physics. Models based on Langevin dynamics [13, 14] or discrete Markov chains [12, 2] can be unified in a time-continuous framework using SDEs [12]. Synthesizing data by sampling from neural SDEs has been considered by, e.g., [15, 16]. Contrary to existing approaches, the drift in our method is an actual physics step, and the underlying SDE does not transform a data distribution to noise but models the temporal evolution of a physics system with stochastic perturbations.
Methods for solving inverse problems for (stochastic) PDEsDifferentiable solvers for physics dynamics can be used to optimize solutions of inverse problems with gradient-based methods by backpropagating gradients through the solver steps [13]. Learning-based approaches directly learn solution operators for PDEs and stochastic PDEs, i.e., mappings between spaces of functions, such as Fourier neural operators [15], DeepONets [17], or generalizations thereof that include stochastic forcing for stochastic PDEs, e.g., neural stochastic PDEs [14]. Recently, there have been several approaches that leverage the learned scores from diffusion models as data-drivenregularizations for linear inverse problems [14; 15; 16; 17] and general noisy inverse problems [13]. Our method can be applied to general non-linear inverse physics problems with temporal evolution, and we do not require to backpropagate gradients through all solver steps during inference. This makes inference significantly faster and more stable.
Learned corrections for numerical errorsNumerical simulations benefit greatly from machine learning models [14; 15; 16; 17; 18]. By integrating a neural network into differential equation solvers, it is possible to learn to reduce numerical errors [14; 15; 16] or guide the simulation towards a desired target state [17; 18]. The optimization of \(s_{\theta}\) with the 1-step and multi-step loss we propose in section 3.1 is conceptually similar to learned correction approaches. However, this method has, to our knowledge, not been applied to correcting the "reverse" simulation and solving inverse problems.
Maximum likelihood training and continuous normalizing flowsContinuous normalizing flows (CNFs) are invertible generative models based on neural ODEs [13; 14; 15], which are similar to our proposed physics-based neural ODE. The evolution of the marginal probability density of the SDE underlying the physics system is described by Kolmogorov's forward equation [15], and there is a corresponding probability flow ODE [15; 16]. When the score is represented by \(s_{\theta}\), this constitutes a CNF and can typically be trained with standard methods [13] and maximum likelihood training [15]. Huang et al. [17] show that minimizing the score-matching loss is equivalent to maximizing a lower bound of the likelihood obtained by sampling from the reverse-time SDE. A recent variant combines score matching with CNFs [16] and employs joint learning of drift and corresponding score for generative modeling. To the best of our knowledge, training with rollouts of multiple steps and its relation to maximum likelihood training have not been considered so far.
## 3 Method Overview
Problem formulationLet \((\Omega,\mathcal{F},P)\) be a probability space and \(W(t)=(W_{1}(t),...,W_{D}(t))^{T}\) be a \(D\)-dimensional Brownian motion. Moreover, let \(\mathbf{x}_{0}\) be a \(\mathcal{F}_{0}\)-measurable \(\mathbb{R}^{D}\)-valued random variable that is distributed as \(p_{0}\) and represents the initial simulation state. We consider the time evolution of
Figure 1: Overview of our method. For training, we fit a neural ODE, the probability flow, to the set of perturbed training trajectories (a, top). The probability flow is comprised of a reverse physics simulator \(\tilde{\mathcal{P}}^{-1}\) that is an approximate inverse of the forward solver \(\mathcal{P}\) as well as a correction function \(s_{\theta}\). In many cases, we can obtain \(\tilde{\mathcal{P}}^{-1}\) from \(\mathcal{P}\) by using a negative step size \(\Delta t\) or by learning a surrogate model from data. For inference, we simulate the system backward in time from \(\mathbf{x}_{T}\) to \(\mathbf{x}_{0}\) by combining \(\tilde{\mathcal{P}}^{-1}\), the trained \(s_{\theta}\) and Gaussian noise in each step (a, bottom). For optimizing \(s_{\theta}\), our approach moves a sliding window of size \(S\) along the training trajectories and reconstructs the current window (b). Gradients for \(\theta\) are accumulated and backpropagated through all prediction steps.
the physical system for \(0\leq t\leq T\) modeled by the stochastic differential equation (SDE)
\[d\mathbf{x}=\mathcal{P}(\mathbf{x})dt+g(t)dW \tag{1}\]
with initial value \(\mathbf{x}_{0}\) and Borel measurable drift \(\mathcal{P}:\,\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) and diffusion \(g:\,[0,T]\rightarrow\mathbb{R}_{\geq 0}\). This SDE transforms the marginal distribution \(p_{0}\) of initial states at time \(0\) to the marginal distribution \(p_{T}\) of end states at time \(T\). We include additional assumptions in appendix A.
Moreover, we assume that we have sampled \(N\) trajectories of length \(M\) from the above SDE with a fixed time discretization \(0\leq t_{0}<t_{1}<...<t_{M}\leq T\) for the interval \([0,T]\) and collected them in a training data set \(\{(\mathbf{x}_{t_{m}}^{(n)})_{i=0}^{N}\}_{n=0}^{N}\). For simplicity, we assume that all time steps are equally spaced, i.e., \(t_{m+1}-t_{m}:=\Delta t\). Moreover, in the following we use the notation \(\mathbf{x}_{\ell:m}\) for \(0\leq\ell<m\leq M\) to refer to the trajectory \((\mathbf{x}_{t_{\ell}},\mathbf{x}_{t_{\ell+1}},...,\mathbf{x}_{t_{m}})\). We include additional assumptions in appendix A.
Our goal is to infer an initial state \(\mathbf{x}_{0}\) given a simulation end state \(\mathbf{x}_{M}\), i.e., we want to sample from the distribution \(p_{0}(\,\cdot\,|\,\mathbf{x}_{M})\), or obtain a maximum likelihood solution.
### Learned Corrections for Reverse Simulation
In the following, we furthermore assume that we have access to a reverse physics simulator \(\tilde{\mathcal{P}}^{-1}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\), which moves the simulation state backward in time and is an approximate inverse of the forward simulator \(\mathcal{P}\)[17]. In our experiments, we either obtain the reverse physics simulator from the forward simulator by using a negative step size \(\Delta t\) or by learning a surrogate model from the training data. We train a neural network \(s_{\theta}(\mathbf{x},t)\) parameterized by \(\theta\) such that
\[\mathbf{x}_{m}\approx\mathbf{x}_{m+1}+\Delta t\left[\tilde{\mathcal{P}}^{-1}( \mathbf{x}_{m+1})+s_{\theta}(\mathbf{x}_{m+1},t_{m+1})\right]. \tag{2}\]
In this equation, the term \(s_{\theta}(\mathbf{x}_{m+1},t_{m+1})\) corrects approximation errors and resolves uncertainties from the Gaussian perturbation \(g(t)dW\). Below, we explain our proposed 1-step training loss and its multi-step extension before connecting this formulation to diffusion models in the next section.
1-step lossFor a pair of adjacent samples \((\mathbf{x}_{m},\mathbf{x}_{m+1})\) on a data trajectory, the 1-step loss for optimizing \(s_{\theta}\) is the L\({}_{2}\) distance between \(\mathbf{x}_{m}\) and the prediction via (2). For the entire training data set, the loss becomes
\[\mathcal{L}_{\mathrm{single}}(\theta):=\frac{1}{M}\,\mathbb{E}_{\mathbf{x}_{0: M}}\left[\sum_{m=0}^{M-1}\left[\Big{|}\Big{|}\mathbf{x}_{m}-\mathbf{x}_{m+1}- \Delta t\left[\tilde{\mathcal{P}}^{-1}(\mathbf{x}_{m+1})+s_{\theta}(\mathbf{x }_{m+1},t_{m+1})\right]\Big{|}\Big{|}_{2}^{2}\right]\right]. \tag{3}\]
Computing the expectation can be thought of as moving a window of size two from the beginning of each trajectory until the end and averaging the losses for individual pairs of adjacent points.
Multi-step lossAs each simulation state depends only on its previous state, the 1-step loss should be sufficient for training \(s_{\theta}\). However, in practice, approaches that consider a loss based on predicting longer parts of the trajectories are more successful for training learned corrections [1, 2, 3, 4]. For that purpose, we define a hyperparameter \(S\), called sliding window size, and write \(\mathbf{x}_{i:i+S}\in\mathbb{R}^{S\times D}\) to denote the trajectory starting at \(\mathbf{x}_{i}\) that is comprised of \(\mathbf{x}_{i}\) and the following \(S-1\) states. Then, we define the multi-step loss as
\[\mathcal{L}_{\text{multi}}(\theta):=\frac{1}{M}\,\mathbb{E}_{\mathbf{x}_{0:M} }\left[\sum_{m=0}^{M-S+1}\left[\big{|}|\mathbf{x}_{m:m+S-1}-\hat{\mathbf{x}}_{ m:m+S-1}|\big{|}_{2}^{2}\right]\right], \tag{4}\]
where \(\hat{\mathbf{x}}_{i:i+S-1}\) is the predicted trajectory that is defined recursively by
\[\hat{\mathbf{x}}_{i+S}=\mathbf{x}_{i+S}\quad\text{and}\quad\hat{\mathbf{x}}_{ i+S-1-j}=\hat{\mathbf{x}}_{i+S-j}+\Delta t\left[\tilde{\mathcal{P}}^{-1}( \hat{\mathbf{x}}_{i+S-j})+s_{\theta}(\hat{\mathbf{x}}_{i+S-j},t_{i+S-j}) \right]. \tag{5}\]
### Learning the Score
Denoising score matchingGiven a distribution of states \(p_{t}\) for \(0<t<T\), we follow [14] and consider the score matching objective
\[\mathcal{J}_{\mathrm{SM}}(\theta):=\frac{1}{2}\int_{0}^{T}\mathbb{E}_{ \mathbf{x}\sim p_{t}}\left[\big{|}|s_{\theta}(\mathbf{x},t)-\nabla_{\mathbf{x} }\log p_{t}(\mathbf{x})|\big{|}_{2}^{2}\right]dt, \tag{6}\]i.e., the network \(s_{\theta}\) is trained to approximate the score \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\). In denoising score matching [17, 18, 19], the distributions \(p_{t}\) are implicitly defined by a noising process that is given by the forward SDE \(d\mathbf{x}=f(\mathbf{x},t)dt+g(t)dW\), where \(W\) is the standard Brownian motion. The function \(f:\mathbb{R}^{D}\times\mathbb{R}_{\geq 0}\to\mathbb{R}^{D}\) is called _drift_, and \(g:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\) is called _diffusion_. The process transforms the training data distribution \(p_{0}\) to a noise distribution that is approximately Gaussian \(p_{T}\). For affine functions \(f\) and \(g\), the transition probabilities are available analytically, which allows for efficient training of \(s_{\theta}\). It can be shown that under mild conditions, for the forward SDE, there is a corresponding **reverse-time SDE**\(d\mathbf{x}=[f(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x} )]dt+g(t)d\tilde{W}\)[1]. In particular, this means that given a marginal distribution of states \(p_{T}\), which is approximately Gaussian, we can sample from \(p_{T}\) and simulate paths of the reverse-time SDE to obtain samples from the data distribution \(p_{0}\).
Score matching, probability flow ODE and 1-step trainingThere is a deterministic ODE [18], called probability flow ODE, which yields the same transformation of marginal probabilities from \(p_{T}\) to \(p_{0}\) as the reverse-time SDE. For the physics-based SDE (1), it is given by
\[d\mathbf{x}=\left[\mathcal{P}(\mathbf{x})-\frac{1}{2}g(t)^{2}\nabla_{\mathbf{x }}\log p_{t}(\mathbf{x})\right]dt. \tag{7}\]
For \(\Delta t\to 0\), we can rewrite the update rule (2) of the training as
\[d\mathbf{x}=\left[-\tilde{\mathcal{P}}^{-1}(\mathbf{x})-s_{\theta}(\mathbf{x}, t)\right]dt. \tag{8}\]
Therefore, we can identify \(\tilde{\mathcal{P}}^{-1}(\mathbf{x})\) with \(-\mathcal{P}(\mathbf{x})\) and \(s_{\theta}(\mathbf{x},t)\) with \(\frac{1}{2}g(t)\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\). We show that for the 1-step training and sufficiently small \(\Delta t\), we minimize the score matching objective (6).
**Theorem 3.1**.: _Consider a data set with trajectories sampled from SDE (1) and let \(\tilde{\mathcal{P}}^{-1}(\mathbf{x})=-\mathcal{P}(\mathbf{x})\). Then the 1-step loss (3) is equivalent to minimizing the score matching objective (6) as \(\Delta t\to 0\)._
Proof.: See appendix A.1
Maximum likelihood and multi-step trainingExtending the single step training to multiple steps does not directly minimize the score matching objective, but we can still interpret the learned correction in a probabilistic sense. For denoising score matching, it is possible to train \(s_{\theta}\) via maximum likelihood training [18], which minimizes the KL-divergence between \(p_{0}\) and the distribution obtained by sampling \(\mathbf{x}_{T}\) from \(p_{T}\) and simulating the probability flow ODE (8) from \(t=T\) to \(t=0\). We derive a similar result for the multi-step loss.
**Theorem 3.2**.: _Consider a data set with trajectories sampled from SDE (1) and let \(\tilde{\mathcal{P}}^{-1}(\mathbf{x})=-\mathcal{P}(\mathbf{x})\). Then the multi-step loss (4) maximizes a variational lower bound for maximum likelihood training of the probability flow ODE (7) as \(\Delta t\to 0\)._
Proof.: See appendix A.2
To conclude, we have formulated a probabilistic multi-step training to solve inverse physics problems and provided a theoretical basis to solve these problems with score matching. Next, we outline additional details for implementing SMDP.
### Training and Inference
We start training \(s_{\theta}\) with the multi-step loss and window size \(S=2\), which is equivalent to the 1-step loss. Then, we gradually increase the window size \(S\) until a maximum \(S_{\max}\). For \(S>2\), the unrolling of the predicted trajectory includes interactions between \(s_{\theta}\) and the reverse physics simulator \(\tilde{\mathcal{P}}^{-1}\). For inference, we consider the neural SDE
\[d\mathbf{x}=\left[-\tilde{\mathcal{P}}^{-1}(\mathbf{x})-C\,s_{\theta}(\mathbf{ x},t)\right]dt+g(t)dW, \tag{9}\]
which we solve via the Euler-Maruyama method. For \(C=2\), we obtain the system's reverse-time SDE and sampling from this SDE yields the posterior distribution. Setting \(C=1\) and excluding the noise gives the probability flow ODE (8). We denote the ODE variant by _SMDP ODE_ and the SDE variant by _SMDP SDE_. While the SDE can be used to obtain many samples and to explore the posterior distribution, the ODE variant constitutes a unique and deterministic solution based on maximum likelihood training.
## 4 Experiments
We show the capabilities of the proposed algorithm with a range of experiments. The first experiment in section 4.1 uses a simple 1D process to compare our method to existing score matching baselines. The underlying model has a known posterior distribution which allows for an accurate evaluation of the performance, and we use it to analyze the role of the multi-step loss formulation. Secondly, in section 4.2, we experiment with the stochastic heat equation. This is a particularly interesting test case as the diffusive nature of the equation effectively destroys information over time. In section 4.3, we apply our method to a scenario without stochastic perturbations in the form of a buoyancy-driven Navier-Stokes flow with obstacles. This case highlights the usefulness of the ODE variant. Finally, in section 4.4, we consider the situation where the reverse physics simulator \(\tilde{\mathcal{P}}^{-1}\) is not known. Here, we train a surrogate model \(\mathcal{P}^{-1}\) for isotropic turbulence flows and evaluate how well SMDP works with a learned reverse physics simulator.
### 1D Toy SDE
As a first experiment with a known posterior distribution we consider a simple quadratic SDE of the form: \(dx=-\left[\lambda_{1}\cdot\mathrm{sign}(x)x^{2}\right]dt+\lambda_{2}dW\), with \(\lambda_{1}=7\) and \(\lambda_{2}=0.03\). Throughout this experiment, \(p_{0}\) is a categorical distribution, where we draw either \(1\) or \(-1\) with the same probability. The reverse-time SDE that transforms the distribution \(p_{T}\) of values at \(T=10\) to \(p_{0}\) is given by
\[dx=-\left[\lambda_{1}\cdot\mathrm{sign}(x)x^{2}-\lambda_{2}^{2}\cdot\nabla_{x }\log p_{t}(x)\right]dt+\lambda_{2}dw. \tag{10}\]
In figure 1(a), we show paths from this SDE simulated with the Euler-Maruyama method. The trajectories approach \(0\) as \(t\) increases. Given the trajectory value at \(t=10\), it is no longer possible to infer the origin of the trajectory at \(t=0\).
This experiment allows us to use an analytic reverse simulator: \(\tilde{\mathcal{P}}^{-1}(x)=\lambda_{1}\cdot\mathrm{sign}(x)x^{2}\). This is a challenging problem because the reverse physics step increases quadratically with \(x\), and \(s_{\theta}\) has to control the reverse process accurately to stay within the training domain, or paths will explode to infinity. We evaluate each model based on how well the predicted trajectories \(\hat{x}_{0:T}\) match the posterior distribution. When drawing \(\hat{x}_{T}\) randomly from \([-0.1,0.1]\), we should obtain trajectories with \(\hat{x}_{0}\) being either \(-1\) or \(1\) with the same likelihood. We assign the label \(-1\) or \(1\) if the relative distance of an endpoint is \(<10\)% and denote the percentage in each class by \(\rho_{-1}\) and \(\rho_{1}\). As some trajectories miss the target, typically \(\rho_{-1}+\rho_{1}<1\). Hence, we define the posterior metric \(Q\) as twice the minimum of \(\rho_{-1}\) and \(\rho_{1}\), i.e., \(Q:=2\cdot\min(\rho_{-1},\rho_{1})\) so that values closer to one indicate a better match with the correct posterior distribution.
Figure 2: Overview of our 1D toy SDE. (a) Training with a data set of trajectories and known temporal dynamics given by \(\mathcal{P}(\mathbf{x}):=-\mathrm{sign}(\mathbf{x})\mathbf{x}^{2}\) and \(g\equiv 0.1\). We estimate the score \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) with our proposed method using an MLP network for \(s_{\theta}(\mathbf{x},t)\). Negative values (blue) push down the trajectories, and positive ones (red) push them up. Together with the dynamics, this can be used to reverse the system as shown in (b) either with the reverse-time SDE or the probability flow ODE. A successful inversion of the dynamics requires the network \(s_{\theta}\) to be robust and extrapolate well (c). Inference using GRID trained with the 1-step loss causes trajectories to explode, as the network does not extrapolate well. Training GRID with the multi-step loss solves this issue.
TrainingThe training data set consists of \(2500\) simulated trajectories from \(0\) to \(T\) and \(\Delta t=0.02\). Therefore each training trajectory has a length of \(M=500\). For the network \(s_{\theta}(x,t)\), we consider a multilayer perceptron (MLP) and, as a special case, a grid-based discretization (GRID). The latter is not feasible for realistic use cases and high-dimensional data but provides means for an in-depth analysis of different training variants. For GRID, we discretize the domain \([0,T]\times[-1.25,1.25]\) to obtain a rectangular grid with \(500\times 250\) cells and linearly interpolate the solution. The cell centers are initialized with \(0\). We evaluate \(s_{\theta}\) trained via the 1-step and multi-step losses with \(S_{\max}=10\). Details of hyperparameters and model architectures are given in appendix C.
Better extrapolation and robustness from multi-step lossSee figure 1(c) for an overview of the differences between the learned score from MLP and GRID and the effects of the multi-step loss. For the 1-step training with MLP, we observe a clear and smooth score field with two tubes that merge to one at \(x=0\) as \(t\) increases. As a result, the trajectories of the probability flow ODE and reverse-time SDE converge to the correct value. Training via GRID shows that most cells do not get any gradient updates and remain \(0\). This is caused by a need for more training data in these regions. In addition, the boundary of the trained region is jagged and diffuse. Trajectories traversing these regions can quickly explode. In contrast, the multi-step loss leads to a consistent signal around the center line at \(x=0\), effectively preventing exploding trajectories.
Evaluation and comparison with baselinesAs a baseline for learning the scores, we consider implicit score matching [13, 14]. Additionally, we consider sliced score matching with variance reduction [15, 14] as a variant of ISM. We train all methods with the same network architecture using three different data set sizes. As can be seen in table 1, the 1-step loss, which is conceptually similar to denoising score matching, compares favorably against ISM and SSM-VR. All methods perform well for the reverse-time SDE, even for very little training data. Using the multi-step loss consistently gives significant improvements at the cost of a slightly increased training time. Our proposed multi-step training performs best or is on par with the baselines for all data set sizes and inference types. Because the posterior metric Q is very sensitive to the score where the paths from both starting points intersect, evaluations are slightly noisy.
Comparison with analytic scoresWe perform further experiments to empirically verify Theorem 3.1 by comparing the learned scores of our method with analytic scores in appendix C.
\begin{table}
\begin{tabular}{l|c c c|c c c} \multicolumn{1}{l}{**Method**} & \multicolumn{3}{c|}{Probability flow ODE} & \multicolumn{3}{c}{Reverse-time SDE} \\ \hline \hline & \multicolumn{3}{c|}{Data set size} & \multicolumn{3}{c}{Data set size} \\ & 100\% & 10\% & 1\% & 100\% & 10\% & 1\% \\ \hline multi-step & **0.97** & **0.91** & **0.81** & **0.99** & 0.94 & **0.85** \\
1-step & 0.78 & 0.44 & 0.41 & 0.93 & 0.71 & 0.75 \\ ISM & 0.19 & 0.15 & 0.01 & 0.92 & 0.94 & 0.52 \\ SSM-VR & 0.17 & 0.49 & 0.27 & 0.88 & 0.94 & 0.67 \\ \hline \end{tabular}
\end{table}
Table 1: Posterior metric \(Q\) for \(1000\) predicted trajectories averaged over three runs. For standard deviations, see table 3 in the appendix.
Figure 3: Stochastic heat equation overview. While the ODE trajectories provide smooth solutions with the lowest reconstruction MSE, the SDE solutions synthesize high-frequency content, significantly improving spectral error. The "\(s_{\theta}\) only" version without the reverse physics step exhibits a significantly larger spectral error. Metrics in (b) are averaged over three runs.
### Stochastic Heat Equation
The heat equation \(\frac{\partial u}{\partial t}=\alpha\Delta u\) plays a fundamental role in many physical systems. For this experiment, we consider the stochastic heat equation, which slightly perturbs the heat diffusion process and includes an additional term \(g(t)\)\(\xi\), where \(\xi\) is space-time white noise, see Pardoux [21, Chapter 3.2]. For our experiments, we fix the diffusivity constant to \(\alpha=1\) and sample initial conditions at \(t=0\) from Gaussian random fields with \(n=4\) at resolution \(32\times 32\). We simulate the heat diffusion with noise from \(t=0\) until \(t=0.2\) using the Euler-Maruyama method and a spectral solver \(\mathcal{P}_{h}\) with a fixed step size \(\Delta t=6.25\times 10^{-3}\) and \(g\equiv 0.1\). Given a simulation end state \(\mathbf{x}_{T}\), we want to recover a possible initial state \(\mathbf{x}_{0}\). In this experiment, the forward solver cannot be used to infer \(\mathbf{x}_{0}\) directly in a single step or without corrections since high frequencies due to noise are amplified, leading to physically implausible solutions. We implement the reverse physics simulator \(\tilde{P}^{-1}\) by using the forward step of the solver \(\mathcal{P}_{h}(\mathbf{x})\), i.e. \(\tilde{\mathcal{P}}^{-1}(\mathbf{x})\approx-\mathcal{P}_{h}(\mathbf{x})\).
Training and BaselinesOur training data set consists of \(2500\) initial conditions with their corresponding trajectories and end states at \(t=0.2\). We consider a small _ResNet_-like architecture based on an encoder and decoder part as representation for the score function \(s_{\theta}(\mathbf{x},t)\). The spectral solver is implemented via differentiable programming in _JAX_[20], see appendix D. As baseline methods, we consider a supervised training of the same _ResNet_-like architecture as \(s_{\theta}(\mathbf{x},t)\), a _Bayesian neural network_ (BNN) as well as a _Fourier neural operator_ (FNO) network [11]. We adopt an \(L_{2}\) loss for all these methods, i.e., the training data consists of pairs of initial state \(\mathbf{x}_{0}\) and end state \(\mathbf{x}_{T}\).
Additionally, we consider a variant of our proposed method for which we remove the reverse physics step \(\tilde{\mathcal{P}}^{-1}\) such that the inversion of the dynamics has to be learned entirely by \(s_{\theta}\), denoted by "\(s_{\theta}\) only". We do not compare to ISM and SSM-VR in the following as the data dimensions are too high for both methods to train properly, and we did not obtain stable trajectories during inference.
Reconstruction accuracy vs. fitting the data manifoldWe evaluate our method and the baselines by considering the _reconstruction MSE_ on a test set of \(500\) initial conditions and end states. For the reconstruction MSE, we simulate the prediction of the network forward in time with the solver \(\mathcal{P}_{h}\) to obtain a corresponding end state, which we compare to the ground truth via the \(L_{2}\) distance. This metric has the disadvantage that it does not measure how well the prediction matches the training data manifold. I.e., for this case, whether the prediction resembles the properties of the Gaussian random field. For that reason, we additionally compare the power spectral density of the states as the _spectral loss_. An evaluation and visualization of the reconstructions are given in figure 3, which shows that our ODE inference performs best regarding the reconstruction MSE. However, its solutions are smooth and do not contain the necessary small-scale structures. This is reflected in a high spectral error. The SDE variant, on the other hand, performs very well in terms of spectral error and yields visually convincing solutions with only a slight increase in the reconstruction MSE. This highlights the role of noise as a source of entropy in the inference process for SMDP SDE, which is essential for synthesizing small-scale structures. Note that there is a natural tradeoff between both metrics, and the ODE and SDE inference perform best for each of the cases while using an identical set of weights.
Multi-step loss is crucial for good performanceWe performed an ablation study on the maximum window size \(S_{\max}\) in figure 4 for the reconstruction MSE. For both ODE and SDE inference, increasing \(S_{\max}\) yields significant improvements at the cost of slightly increased training resources. This also highlights the importance of using a multi-step loss instead of the 1-step loss (\(S_{\max}=2\)) for inverse problems with poor conditioning.
We perform further experiments regarding test-time distribution shifts when modifying the noise scale and diffusivity, see appendix D, which showcase the robustness of our methods.
Figure 4: Multi-step \(S_{\max}\) vs. reconstruction MSE averaged over 5 runs.
### Buoyancy-driven Flow with Obstacles
Next, we test our methodology on a more challenging problem. For this purpose, we consider deterministic simulations of buoyancy-driven flow within a fixed domain \(\Omega\subset[0,1]\times[0,1]\) and randomly placed obstacles. Each simulation runs from time \(t=0.0\) to \(t=0.65\) with a step size of \(\Delta t=0.01\). SMDP is trained with the objective of reconstructing a plausible initial state given an end state of the marker density and velocity fields at time \(t=0.65\), as shown in figure 4(a) and figure 4(b). We place spheres and boxes with varying sizes at different positions within the simulation domain that do not overlap with the marker inflow. For each simulation, we place one to two objects of each category.
Score matching for deterministic systemsDuring training, we add Gaussian noise to each simulation state \(\mathbf{x}_{t}\) with \(\sigma_{t}=\sqrt{\Delta t}\). In this experiment, no stochastic forcing is used to create the data set, i.e., \(g\equiv 0\). By adding noise to the simulation states, the 1-step loss still minimizes a score matching objective in this situation, similar to denoising score matching; see appendix A.3 for a derivation. In the situation without stochastic forcing, during inference, our method effectively alternates between the reverse physics step, a small perturbation, and the correction by \(s_{\theta}(\mathbf{x},t)\), which projects the perturbed simulation state back to the distribution \(p_{t}\). We find that for the SDE trajectories, \(C=2\) slightly overshoots, and \(C=1\) gives an improved performance. In this setting, the "\(s_{\theta}\) only" version of our method closely resembles a denoiser that learns additional physics dynamics.
Training and comparisonOur training data set consists of 250 simulations with corresponding trajectories generated with _phiflow_[10]. Our neural network architecture for \(s_{\theta}(\mathbf{x},t)\) uses dilated convolutions [14], see appendix E for details. The reverse physics step \(\tilde{\mathcal{P}}^{-1}\) is implemented directly in the solver by using a negative step size \(-\Delta t\) for time integration. For training, we consider the multi-step formulation with \(S_{\max}=20\). We additionally compare with solutions from directly optimizing the initial smoke and velocity states at \(t=0.35\) using the differentiable forward simulation and limited-memory BFGS [13, 1]. Moreover, we compare with solutions obtained from diffusion posterior sampling for general noisy inverse problems [10, 15] with a pretrained diffusion model on simulation states at \(t=0.35\). For the evaluation, we consider a reconstruction MSE analogous to section 4.2 and the perceptual similarity metric LPIPS. The test set contains five simulations. The SDE version yields good results for this experiment but is most likely constrained in performance by the approximate reverse physics step and large step sizes. However, the ODE version outperforms directly inverting the simulation numerically (\(\tilde{\mathcal{P}}^{-1}\) only), and when training without the reverse physics step (\(s_{\theta}\) only), as shown in 4(c).
### Navier-Stokes with Unknown Dynamics
As a fourth experiment, we aim to learn the time evolution of isotropic, forced turbulence with a similar setup as Li et al. [11]. The training data set consists of vorticity fields from 1000 simulation trajectories from \(t=0\) until \(T=10\) with \(\Delta t=1\), a spatial resolution of \(64\times 64\) and
Figure 5: Buoyancy flow case. Ground truth shows the marker density and velocity field in the \(x\)-direction at different points of the simulation trajectory from the test set (a, b). We show reconstructions given the simulation end state at \(t=0.65\) and provide an evaluation of the reconstructed trajectories based on perceptual similarity (LPIPS) and the reconstruction MSE for three runs (c).
viscosity fixed at \(\nu=10^{-5}\). As before, our objective is to predict a trajectory \(\hat{\mathbf{x}}_{0\cdot M}\) that reconstructs the true trajectory given an end state \(\mathbf{x}_{M}\). In this experiment, we pretrain a surrogate for the reverse physics step \(\tilde{\mathcal{P}}^{-1}\) by employing the FNO architecture from [Li+21] trained on the reverse simulation. For pretraining \(\tilde{\mathcal{P}}^{-1}\) we use our proposed training setup with the multi-step loss and \(S_{\max}=10\) but freeze the score to \(s_{\theta}(\mathbf{x},t)\equiv 0\). Then, we train the time-dependent score \(s_{\theta}(\mathbf{x},t)\) while freezing the reverse physics step. This approach guarantees that any time-independent physics are captured by \(\tilde{\mathcal{P}}^{-1}\) and \(s_{\theta}(\mathbf{x},t)\) can focus on learning small improvements to \(\tilde{\mathcal{P}}^{-1}\) as well as respond to possibly time-dependent data biases. We give additional training details in appendix F.
Evaluation and training variantsFor evaluation, we consider the MSE and spectral error of the reconstructed initial state \(\hat{\mathbf{x}}_{0}\) compared to the reference \(\mathbf{x}_{0}\). As baselines, during inference, we employ only the learned surrogate model \(\tilde{\mathcal{P}}^{-1}\) without \(s_{\theta}\). In addition to that, we evaluate a variant for which we train both the surrogate model and \(s_{\theta}(\mathbf{x},t)\) at the same time. As the two components resemble the drift and score of the reverse-time SDE, this approach is similar to _DiffFlow_[ZC21], which learns both components in the context of generative modeling. We label this approach _simultaneous training_. Results are shown in figure 6. Similar to the stochastic heat equation results in section 4.2, the SDE achieves the best spectral error, while the ODE obtains the best MSE. Our proposed method outperforms both the surrogate model and the simultaneous training of the two components.
## 5 Discussion and Conclusions
We presented a combination of learned corrections training and diffusion models in the context of physical simulations and differentiable physics for solving inverse physics problems. We showed its competitiveness, accuracy, and long-term stability in challenging and versatile experiments and motivated our design choices. We considered two variants with complementary benefits for inference: while the ODE variants achieve the best MSE, the SDE variants allow for sampling the posterior and yield an improved coverage of the target data manifold. Additionally, we provided theoretical insights that the 1-step is mathematically equivalent to optimizing the score matching objective. We showed that its multi-step extension maximizes a variational lower bound for maximum likelihood training.
Despite the promising initial results, our work has limitations that open up exciting directions for future work: Among others, it requires simulating the system backward in time step by step, which can be costly and alleviated by reduced order methods. Additionally, we assume that \(\Delta t\) is sufficiently small and the reverse physics simulator is accurate enough. Determining a good balance between accurate solutions with few time steps and diverse solutions with many time steps represents an important area for future research.
AcknowledgementsBH, SV, and NT acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (LEDA: grant agreement No 758853, and SpaTe: grant agreement No 863850). SV thanks the Max Planck Society for support through a Max Planck Lise Meitner Group. This research was partly carried out on the High Performance Computing resources of the FREYA cluster at the Max Planck Computing and Data Facility (MPCDF) in Garching operated by the Max Planck Society (MPG).
Figure 6: Turbulence case. Comparison of reconstructed trajectories (a) and evaluation of MSE and spectral error for different training variants (b). Our proposed ODE and SDE inference outperforms the learned surrogate model \(\tilde{\mathcal{P}}^{-1}\). Metrics are averaged over three runs.
## References
* [And82] B. Anderson. "Reverse-time diffusion equation models". In: _Stochastic Processes and their Applications_ 12.3 (1982), pp. 313-326.
* [Bar+19] Y. Bar-Sinai, S. Hoyer, J. Hickey, and M. P. Brenner. "Learning data-driven discretizations for partial differential equations". In: _Proceedings of the National Academy of Sciences_ 116.31 (2019), pp. 15344-15349.
* [BLL02] X. Blanc, C. Le Bris, and P.-L. Lions. "From molecular models to continuum mechanics". In: _Archive for Rational Mechanics and Analysis_ 164.4 (2002), pp. 341-381.
* [BWW22] J. Brandstetter, D. Worrall, and M. Welling. "Message passing neural PDE solvers". In: _International Conference on Learning Representations_ (2022).
* [Che+18a] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. "Neural ordinary differential equations". In: _Advances in neural information processing systems_ 31 (2018).
* [Che+18b] T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud. "Neural Ordinary Differential Equations". In: _Advances in Neural Information Processing Systems_. 2018, pp. 6572-6583. url: [https://proceedings.neurips.cc/paper/2018/hash/69386fbbildfed68692a2c8686939b9-Abstract.html](https://proceedings.neurips.cc/paper/2018/hash/69386fbbildfed68692a2c8686939b9-Abstract.html).
* [Che+21] N. Chen, Y. Zhang, H. Zen, R. J. Weiss, et al. "WaveGrad: Estimating Gradients for Waveform Generation". In: _9th International Conference on Learning Representations_. OpenReview.net, 2021. url: [https://openreview.net/forum?id=NsMLjcFa08O](https://openreview.net/forum?id=NsMLjcFa08O).
* [Chu+22] H. Chung, B. Sim, D. Ryu, and J. C. Ye. "Improving Diffusion Models for Inverse Problems using Manifold Constraints". In: _NeurIPS_. 2022. url: [http://papers.nips.cc/paper/2022/hash/a48e5877c7bf86a513950ab23b360498-Abstract-Conference.html](http://papers.nips.cc/paper/2022/hash/a48e5877c7bf86a513950ab23b360498-Abstract-Conference.html).
* [Chu+23] H. Chung, J. Kim, M. T. McCann, M. L. Klasky, and J. C. Ye. "Diffusion Posterior Sampling for General Noisy Inverse Problems". In: _The Eleventh International Conference on Learning Representations_. OpenReview.net, 2023. url: [https://openreview.net/pdf?id=UnB9zGAGTOk](https://openreview.net/pdf?id=UnB9zGAGTOk).
* [CSY22] H. Chung, B. Sim, and J. C. Ye. "Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction". In: _IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022_. IEEE, 2022, pp. 12403-12412. doi: 10.1109/CVPR52688.2022.01209. url: [https://doi.org/10.1109/CVPR52688.2022.01209](https://doi.org/10.1109/CVPR52688.2022.01209).
* [Del+18] S. Delaquis, M. Jewell, I. Ostrovskiy, M. Weber, et al. "Deep neural networks for energy and position reconstruction in EXO-200". In: _Journal of Instrumentation_ 13.08 (2018), P08023.
* [DN21] P. Dhariwal and A. Q. Nichol. "Diffusion Models Beat GANs on Image Synthesis". In: _Advances in Neural Information Processing Systems_. 2021, pp. 8780-8794. url: [https://proceedings.neurips.cc/paper/2021/hash/49ad23d1ec9fa4bd8d77d02681df5cfa-Abstract.html](https://proceedings.neurips.cc/paper/2021/hash/49ad23d1ec9fa4bd8d77d02681df5cfa-Abstract.html).
* [Far93] S. J. Farlow. _Partial differential equations for scientists and engineers_. Courier Corporation, 1993.
* [Gom+18] R. Gomez-Bombarelli, J. N. Wei, D. Duvenaud, J. M. Hernandez-Lobato, et al. "Automatic chemical design using a data-driven continuous representation of molecules". In: _ACS central science_ 4.2 (2018), pp. 268-276.
* [Gur82] M. E. Gurtin. _An introduction to continuum mechanics_. Academic press, 1982.
* [HIJA20] J. Ho, A. Jain, and P. Abbeel. "Denoising Diffusion Probabilistic Models". In: _Advances in Neural Information Processing Systems_. 2020. url: [https://proceedings.neurips.cc/paper/2020/hash/4c5bcfec8584af0d967f1iab10179ca4b-Abstract.html](https://proceedings.neurips.cc/paper/2020/hash/4c5bcfec8584af0d967f1iab10179ca4b-Abstract.html).
* [HKT22] P. Holl, V. Koltun, and N. Thuerey. "Scale-invariant Learning by Physics Inversion". In: _Advances in Neural Information Processing Systems_ 35 (2022), pp. 5390-5403.
* [HLC21] C. Huang, J. H. Lim, and A. C. Courville. "A Variational Perspective on Diffusion-Based Generative Models and Score Matching". In: _Advances in Neural Information Processing Systems_. 2021, pp. 22863-22876. url: [https://proceedings.neurips.cc/paper/2021/hash/c11abfd29e4d9b4d4db566b01114d8486-Abstract.html](https://proceedings.neurips.cc/paper/2021/hash/c11abfd29e4d9b4d4db566b01114d8486-Abstract.html).
* [Ho+22] J. Ho, T. Salimans, A. A. Gritsenko, W. Chan, et al. "Video Diffusion Models". In: _NeurIPS_. 2022. url: [http://papers.nips.cc/paper%5C_files/paper/2022/hash/39235c56aef13fb05a6adc95eb9d8d66-Abstract-Conference.html](http://papers.nips.cc/paper%5C_files/paper/2022/hash/39235c56aef13fb05a6adc95eb9d8d66-Abstract-Conference.html).
* [Hol+20] P. Holl, V. Koltun, K. Um, and N. Thuerey. "phillow: A differentiable pde solving framework for deep learning via physical simulations". In: _NeurIPS Workshop_. Vol. 2. 2020.
* [Hop+22] T. Hoppe, A. Mehrjou, S. Bauer, D. Nielsen, and A. Dittadi. "Diffusion Models for Video Prediction and Infilling". In: _CoRR_ abs/2206.07696 (2022). doi: 10.48550/arXiv.2206.07696. url: [https://doi.org/10.48550/arXiv.2206.07696](https://doi.org/10.48550/arXiv.2206.07696).
* [HTK20] P. Holl, N. Thuerey, and V. Koltun. "Learning to Control PDEs with Differentiable Physics". In: _8th International Conference on Learning Representations_. OpenReview.net, 2020. url: [https://openreview.net/forum?id=HyeSin4FPB](https://openreview.net/forum?id=HyeSin4FPB).
* [Hyv05] A. Hyvarinen. "Estimation of Non-Normalized Statistical Models by Score Matching". In: _J. Mach. Learn. Res._ 6 (2005), pp. 695-709. url: [http://jmlr.org/papers/v6/hyvarinen05a.html](http://jmlr.org/papers/v6/hyvarinen05a.html).
* [KCT23] G. Kohl, L.-W. Chen, and N. Thuerey. "Turbulent Flow Simulation using Autoregressive Conditional Diffusion Models". In: _arXiv:2309.01745_ (2023).
* [Kid+21] P. Kidger, J. Foster, X. Li, and T. J. Lyons. "Neural SDEs as Infinite-Dimensional GANs". In: _Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18:24 July 2021, Virtual Event_. Vol. 139. Proceedings of Machine Learning Research. PMLR, 2021, pp. 5453-5463. url: [http://proceedings.mlr.press/v139/kidger21b.html](http://proceedings.mlr.press/v139/kidger21b.html).
* [Koc+21] D. Kochkov, J. A. Smith, A. Alieva, Q. Wang, et al. "Machine learning accelerated computational fluid dynamics". In: _CoRR_ abs/2102.01010 (2021). url: [https://arxiv.org/abs/2102.01010](https://arxiv.org/abs/2102.01010).
* [KPB20] I. Kobyzev, S. J. Prince, and M. A. Brubaker. "Normalizing flows: An introduction and review of current methods". In: _IEEE transactions on pattern analysis and machine intelligence_ 43.11 (2020), pp. 3964-3979.
* [KVE21] B. Kawar, G. Vaksman, and M. Elad. "SNIPS: Solving Noisy Inverse Problems Stochastically". In: _Advances in Neural Information Processing Systems_. 2021, pp. 21757-21769. url: [https://proceedings.neurips.cc/paper/2021/hash/b5c01503041b70d41d80e3dbe31bbd8c-Abstract.html](https://proceedings.neurips.cc/paper/2021/hash/b5c01503041b70d41d80e3dbe31bbd8c-Abstract.html).
* [LCT22] B. List, L.-W. Chen, and N. Thuerey. "Learned turbulence modelling with differentiable fluid solvers: physics-based loss functions and optimisation horizons". In: _Journal of Fluid Mechanics_ 949 (2022), A25.
* [Li+21] Z. Li, N. B. Kovachki, K. Azizzadenesheli, B. Liu, et al. "Fourier Neural Operator for Parametric Partial Differential Equations". In: _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021. url: [https://openreview.net/forum?id=c8P9NQYtnmO](https://openreview.net/forum?id=c8P9NQYtnmO).
* [Li+22] S. Li, Z. Huang, T. Du, H. Su, et al. "Contact Points Discovery for Soft-Body Manipulations with Differentiable Physics". In: _International Conference on Learning Representations_ (2022).
* [LN89] D. C. Liu and J. Nocedal. "On the limited memory BFGS method for large scale optimization". In: _Math. Program._ 45.1-3 (1989), pp. 503-528. doi: 10.1007/BF01589116. url: [https://doi.org/10.1007/BF01589116](https://doi.org/10.1007/BF01589116).
* [LP22] J. Lim and D. Psaltis. "MaxwellNet: Physics-driven deep neural network training based on Maxwell's equations". In: _Apl Photonics 7.1_ (2022), p. 011301.
* [Lu+21] L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis. "Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators". In: _Nat. Mach. Intell._ 3.3 (2021), pp. 218-229. doi: 10.1038/S42256-021-00302-5. url: [https://doi.org/10.1038/s42256-021-00302-5](https://doi.org/10.1038/s42256-021-00302-5).
* [Mar+96] G. J. Martyna, M. E. Tuckerman, D. J. Tobias, and M. L. Klein. "Explicit reversible integrators for extended systems dynamics". In: _Molecular Physics_ 87.5 (1996), pp. 1117-1157.
* [Mor+18] J. Morton, A. Jameson, M. J. Kochenderfer, and F. Witherden. "Deep Dynamical Modeling and Control of Unsteady Fluid Flows". In: _Advances in Neural Information Processing Systems_. 2018.
* [MRO20] D. Maoutsa, S. Reich, and M. Opper. "Interacting Particle Solutions of Fokker-Planck Equations Through Gradient-Log-Density Estimation". In: _Entropy_ 22.8 (2020), p. 802. doi: 10.3390/e22080802. url: [https://doi.org/10.3390/e22080802](https://doi.org/10.3390/e22080802).
* [Oks03] B. Oksendal. "Stochastic differential equations". In: _Stochastic differential equations_. Springer, 2003, pp. 65-84.
* [Pap+21] G. Papamakarios, E. Nalisnick, D. J. Rezende, S. Mohamed, and B. Lakshminarayanan. "Normalizing flows for probabilistic modeling and inference". In: _The Journal of Machine Learning Research_ 22.1 (2021), pp. 2617-2680.
* [Par21] E. Pardoux. _Stochastic Partial Differential Equations: An Introduction_. Springer, 2021.
* [Pfa+20] T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez, and P. Battaglia. "Learning mesh-based simulation with graph networks". In: _International Conference on Learning Representations_ (2020).
* [Ram+20] Z. Ramzi, B. Remy, F. Lanusse, J. Starck, and P. Ciuciu. "Denoising Score-Matching for Uncertainty Quantification in Inverse Problems". In: _CoRR_ abs/2011.08698 (2020). url: [https://arxiv.org/abs/2011.08698](https://arxiv.org/abs/2011.08698).
* [SC20] S. Schoenholz and E. D. Cubuk. "Jax md: a framework for differentiable physics". In: _Advances in Neural Information Processing Systems_ 33 (2020), pp. 11428-11441.
* [SE19] Y. Song and S. Ermon. "Generative Modeling by Estimating Gradients of the Data Distribution". In: _Advances in Neural Information Processing Systems_. 2019, pp. 11895-11907. url: [https://proceedings.neurips.cc/paper/2019/hash/3001ef257407d5a371a96dcd947c7d93-Abstract.html](https://proceedings.neurips.cc/paper/2019/hash/3001ef257407d5a371a96dcd947c7d93-Abstract.html).
* [SLG22] C. Salvi, M. Lemercier, and A. Gerasimovics. "Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics". In: _NeurIPS_. 2022. url: [http://papers.nips.cc/paper%5C_files/paper/2022/hash/091166620a04a289c55f411d8899049-Abstract-Conference.html](http://papers.nips.cc/paper%5C_files/paper/2022/hash/091166620a04a289c55f411d8899049-Abstract-Conference.html).
* [Soh+15] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. "Deep Unsupervised Learning using Nonequilibrium Thermodynamics". In: _Proceedings of the 32nd International Conference on Machine Learning_. Vol. 37. JMLR Workshop and Conference Proceedings. JMLR.org, 2015, pp. 2256-2265. url: [http://proceedings.mlr.press/v37/sohl-dickstein15.html](http://proceedings.mlr.press/v37/sohl-dickstein15.html).
* [Son+19] Y. Song, S. Garg, J. Shi, and S. Ermon. "Sliced Score Matching: A Scalable Approach to Density and Score Estimation". In: _Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence_. Vol. 115. Proceedings of Machine Learning Research. AUAI Press, 2019, pp. 574-584. url: [http://proceedings.mlr.press/v115/song20a.html](http://proceedings.mlr.press/v115/song20a.html).
* [Son+21a] Y. Song, C. Durkan, I. Murray, and S. Ermon. "Maximum Likelihood Training of Score-Based Diffusion Models". In: _Advances in Neural Information Processing Systems_. 2021, pp. 1415-1428. url: [https://proceedings.neurips.cc/paper/2021/hash/0a9f4bb17feb6ccb7ec405cfb8522cc4-Abstract.html](https://proceedings.neurips.cc/paper/2021/hash/0a9f4bb17feb6ccb7ec405cfb8522cc4-Abstract.html).
* [Son+21b] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, et al. "Score-Based Generative Modeling through Stochastic Differential Equations". In: _9th International Conference on Learning Representations_. OpenReview.net, 2021. url: [https://openreview.net/forum?id=PxTIG12RRHS](https://openreview.net/forum?id=PxTIG12RRHS).
* [Son+22] Y. Song, L. Shen, L. Xing, and S. Ermon. "Solving Inverse Problems in Medical Imaging with Score-Based Generative Models". In: _The Tenth International Conference on Learning Representations_. OpenReview.net, 2022. url: [https://openreview.net/forum?id=vaRCHVjouGI](https://openreview.net/forum?id=vaRCHVjouGI).
* [Sta+21] K. Stachenfeld, D. B. Fielding, D. Kochkov, M. Cranmer, et al. "Learned Simulators for Turbulence". In: _International Conference on Learning Representations_. 2021.
* [Thu+21] N. Thuerey, P. Holl, M. Mueller, P. Schnell, et al. _Physics-based Deep Learning_. WWW, 2021. url: [https://physicsbaseddeeplearning.org](https://physicsbaseddeeplearning.org).
* [Tom+17] J. Tompson, K. Schlachter, P. Sprechmann, and K. Perlin. "Accelerating Eulerian Fluid Simulation With Convolutional Networks". In: _5th International Conference on Learning Representations, Workshop Track Proceedings_. OpenReview.net, 2017. url: [https://openreview.net/forum?id=ByH2gxrKl](https://openreview.net/forum?id=ByH2gxrKl).
* [Um+20] K. Um, R. Brand, Y. ( Fei, P. Holl, and N. Thuerey. "Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers". In: _Advances in Neural Information Processing Systems_. 2020. url: [https://proceedings.neurips.cc/paper/2020/hash/43e4e6a6f341e00671e123714de019a8-Abstract.html](https://proceedings.neurips.cc/paper/2020/hash/43e4e6a6f341e00671e123714de019a8-Abstract.html).
* [Vin11] P. Vincent. "A Connection Between Score Matching and Denoising Autoencoders". In: _Neural Comput._ 23.7 (2011), pp. 1661-1674.
* [YSM22] R. Yang, P. Srivastava, and S. Mandt. "Diffusion Probabilistic Modeling for Video Generation". In: _CoRR_ abs/2203.09481 (2022). doi: 10.48550/arXiv.2203.09481. url: [https://doi.org/10.48550/arXiv.2203.09481](https://doi.org/10.48550/arXiv.2203.09481).
* [ZC21] Q. Zhang and Y. Chen. "Diffusion normalizing flow". In: _Advances in Neural Information Processing Systems_ 34 (2021), pp. 16280-16291.
* [ZDG96] K. Zhou, J. Doyle, and K. Glover. "Robust and optimal control". In: _Control Engineering Practice_ 4.8 (1996), pp. 1189-1190.
## Appendix A Proofs and Training Methodology
Below we summarize the problem formulation from the main paper and provide details about the training procedure and the derivation of our methodology.
Problem settingLet \((\Omega,\mathcal{F},P)\) be a probability space and \(W(t)=(W_{1}(t),...,W_{D}(t))^{T}\) be a \(D\)-dimensional Brownian motion. Moreover, let \(\mathbf{x}_{0}\) be a \(\mathcal{F}_{0}\)-measurable \(\mathbb{R}^{D}\)-valued random variable that is distributed as \(p_{0}\) and represents the initial simulation state. We consider the time evolution of the physical system for \(0\leq t\leq T\) modeled by the stochastic differential equation (SDE)
\[d\mathbf{x}=\mathcal{P}(\mathbf{x})dt+g(t)dW \tag{11}\]
with initial value \(\mathbf{x}_{0}\) and Borel measurable drift \(\mathcal{P}:\,\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) and diffusion \(g:\,[0,T]\rightarrow\mathbb{R}_{\geq 0}\). This SDE transforms the marginal distribution \(p_{0}\) of initial states at time \(0\) to the marginal distribution \(p_{T}\) of end states at time \(T\).
Moreover, we assume that we have sampled \(N\) trajectories of length \(M\) from the above SDE with a fixed time discretization \(0\leq t_{0}<t_{1}<...<t_{M}\leq T\) for the interval \([0,T]\) and collected them in a training data set \(\{(\mathbf{x}_{t_{m}}^{(n)})_{i=0}^{M}\}_{n=0}^{N}\). For simplicity, we assume that all time steps are equally spaced, i.e., \(t_{m+1}-t_{m}:=\Delta t\). Moreover, in the following we use the notation \(\mathbf{x}_{\ell:m}\) for \(0\leq\ell<m\leq M\) to refer to the trajectory \((\mathbf{x}_{t_{\ell}},\mathbf{x}_{t_{\ell+1}},...,\mathbf{x}_{t_{m}})\).
AssumptionsThroughout this paper, we make some additional assumptions to ensure the existence of a unique solution to the SDE (11) and the strong convergence of the Euler-Maruyama method. In particular:
* Finite variance of samples from \(p_{0}\): \(\mathbb{E}_{\mathbf{x}_{0}\sim p_{0}}[||\mathbf{x}_{0}||_{2}^{2}]<\infty\)
* Lipschitz continuity of \(\mathcal{P}\): \(\exists K_{1}>0\;\forall\mathbf{x},\mathbf{y}\in\mathbb{R}^{D}:||\mathcal{P}( \mathbf{x})-\mathcal{P}(\mathbf{y})||_{2}\leq K_{1}||\mathbf{x}-\mathbf{y}||_{2}\)
* Lipschitz continuity of \(g\): \(\exists K_{2}>0\;\forall t,s\in[0,T]:|g(t)-g(s)|\leq K_{3}|t-s|\)
* Linear growth condition: \(\exists K_{3}>0\;\forall\mathbf{x}\in\mathbb{R}^{D}:||\mathcal{P}(\mathbf{x}) ||_{2}\leq K_{3}(1+||\mathbf{x}||_{2})\)
* \(g\) is bounded: \(\exists K_{4}>0\;\forall t\in[0,T]:|g(t)|\leq K_{4}\)
Euler-Maruyama MethodUsing Euler-Maruyama steps, we can simulate paths from SDE (11) similar to ordinary differential equations (ODE). Given an initial state \(\mathbf{X}_{t_{0}}\), we let \(\hat{\mathbf{X}}_{t_{0}}^{\Delta t}=\mathbf{X}_{t_{0}}\) and define recursively
\[\hat{\mathbf{X}}_{t_{m+1}}^{\Delta t}\leftarrow\hat{\mathbf{X}}_{t_{m}}^{ \Delta t}+\Delta t\,\mathcal{P}(\hat{\mathbf{X}}_{t_{m}}^{\Delta t})+\sqrt{ \Delta t}\,g(t_{m})\,z_{t_{m}}, \tag{12}\]
where \(z_{t_{m}}\) are i.i.d. with \(z_{t_{m}}\sim\mathcal{N}(0,I)\). For \(t_{i}\leq t<t_{i+1}\), we define the piecewise constant solution of the Euler-Maruyama Method as \(\hat{\mathbf{X}}_{t}^{\Delta t}:=\hat{\mathbf{X}}_{t_{i}}^{\Delta t}\). Let \(\mathbf{X}_{t}\) denote the solution of the SDE (11). Then the Euler-Maruyama solution \(\hat{\mathbf{X}}_{t}^{\Delta t}\) converges strongly to \(\mathbf{X}_{t}\).
**Lemma A.1**.: _[Strong convergence of Euler-Maruyama method] Consider the piecewise constant solution \(\hat{X}_{t}^{\Delta t}\) of the Euler-Maruyama method. There is a constant \(C\) such that_
\[\sup_{0\leq t\leq T}\mathbb{E}[||X_{t}-\hat{X}_{t}^{\Delta t}||_{2}]\leq C\sqrt {\Delta t}. \tag{13}\]
Proof.: See Kloeden et al. [Klo+92, p. 10.2.2]
### 1-step Loss and Score Matching Objective
**Theorem A.2**.: _Consider a data set \(\{\mathbf{x}_{0:m}^{(n)}\}_{n=1}^{N}\) with trajectories sampled from SDE (11). Then the 1-step loss_
\[\mathcal{L}_{\mathrm{single}}(\theta):=\frac{1}{M}\mathbb{E}_{\mathbf{x}_{0:M} }\left[\sum_{m=0}^{M-1}\left[||\mathbf{x}_{m}-\{\mathbf{x}_{m+1}+\Delta t\,[ -\mathcal{P}(\mathbf{x}_{m+1})+s_{\theta}(\mathbf{x}_{m+1},t_{m+1})\}||^{2} \right]\right] \tag{14}\]_is equivalent to minimizing the score matching objective_
\[\mathcal{J}(\theta):=\int_{0}^{T}\mathbb{E}_{\mathbf{x}_{t}}[||\nabla_{\mathbf{x}} \log p_{t}(\mathbf{x})-\tilde{s}_{\theta}(\mathbf{x},t)||_{2}^{2}]dt, \tag{15}\]
_where \(\tilde{s}_{\theta}(\mathbf{x},t)=s_{\theta}(\mathbf{x},t)/g^{2}(t)\) as \(\Delta t\to 0\)._
Proof.:
"\(\Leftarrow\)": Consider \(\theta^{*}\) such that \(\tilde{s}_{\theta^{*}}(\mathbf{x},t)\equiv\nabla_{\mathbf{x}}\log p_{t}( \mathbf{x})\), which minimizes the score matching objective \(\mathcal{J}(\theta)\). Then fix a time step \(t\) and sample \(\mathbf{x}_{t}\) and \(\mathbf{x}_{t+\Delta t}\) from the data set. The probability flow solution \(\mathbf{x}_{t}^{p}\) based on equation (14) is
\[\mathbf{x}_{t}^{p}:=\mathbf{x}_{t+\Delta t}+\Delta t\left[-\mathcal{P}(\mathbf{ x}_{t+\Delta t})+s_{\theta}(\mathbf{x}_{t+\Delta t},t+\Delta t)\right]. \tag{16}\]
At the same time, we know that the transformation of marginal likelihoods from \(p_{t+\Delta t}\) to \(p_{t}\) follows the reverse-time SDE [1]
\[d\mathbf{x}=\left[\mathcal{P}(\mathbf{x})+g^{2}(t)\nabla_{\mathbf{x}}\log p_{ t}(\mathbf{x})\right]dt+g(t)dW, \tag{17}\]
which runs backward in time from \(T\) to \(0\). Denote by \(\hat{\mathbf{x}}_{t}^{\Delta t}\) the solution of the Euler-Maruyama method at time \(t\) initialized with \(\mathbf{x}_{t+\Delta t}\) at time \(t+\Delta t\).
Using the triangle inequality for squared norms, we can write
\[\lim_{\Delta t\to 0}\mathbb{E}\left[||\mathbf{x}_{t}-\mathbf{x}_{t}^{p}||_{2}^ {2}\right]\leq 2\lim_{\Delta t\to 0}\mathbb{E}\left[||\mathbf{x}_{t}-\hat{ \mathbf{x}}_{t}^{\Delta t}||_{2}^{2}\right]+2\lim_{\Delta t\to 0}\mathbb{E} \left[||\hat{\mathbf{x}}_{t}^{\Delta t}-\mathbf{x}_{t}^{p}||_{2}^{2}\right]. \tag{18}\]
Because of the strong convergence of the Euler-Maruyama method, we have that for the first term of the bound in equation (18)
\[\lim_{\Delta t\to 0}\mathbb{E}\left[||\mathbf{x}_{t}-\hat{\mathbf{x}}_{t}^{ \Delta t}||_{2}^{2}\right]=0 \tag{19}\]
independent of \(\theta\). At the same time, for the Euler-Maruyama discretization, we can write
\[\hat{\mathbf{x}}_{t}^{\Delta t}=\mathbf{x}_{t+\Delta t}+\Delta t \left[-\mathcal{P}(\mathbf{x}_{t+\Delta t})+g^{2}(t+\Delta t)\nabla_{\mathbf{x }}\log p_{t+\Delta t}(\mathbf{x}_{t+\Delta t})\right] \tag{20}\] \[+g(t+\Delta t)\sqrt{\Delta t}z_{t+\Delta t}, \tag{21}\]
where \(z_{t+\Delta t}\) is a standard Gaussian distribution, i.e., \(z_{t+\Delta t}\sim\mathcal{N}(0,I)\). Therefore, we can simplify the second term of the bound in equation (18)
\[\lim_{\Delta t\to 0}\mathbb{E}\left[\left|\left|\hat{\mathbf{x}}_{t}^{ \Delta t}-\mathbf{x}_{t}^{p}\right|\right|_{2}^{2}\right] \tag{22}\] \[= \lim_{\Delta t\to 0}\mathbb{E}_{\mathbf{x}_{t+\Delta t}\sim p_{t+ \Delta t},z\sim\mathcal{N}(0,I)}\bigg{[}\bigg{|}\bigg{|}\Delta t\,g(t+\Delta t )^{2}\left[\nabla_{\mathbf{x}}\log p_{t+\Delta t}(\mathbf{x}_{t+\Delta t})\right.\] (23) \[\left.-\tilde{s}_{\theta}(\mathbf{x}_{t+\Delta t},t+\Delta t)\right] +g(t+\Delta t)\sqrt{\Delta t}\,z\bigg{|}\bigg{|}_{2}^{2}\bigg{]}\,. \tag{24}\]
If \(\theta^{*}\) minimizes the score matching objective, then \(\tilde{s}_{\theta^{*}}(\mathbf{x},t)\equiv\nabla_{\mathbf{x}}\log p_{t}( \mathbf{x})\), and therefore the above is the same as
\[\lim_{\Delta t\to 0}\mathbb{E}_{z}[||\sqrt{\Delta t}\,g(t+\Delta t)\,z||_{2}^{2} ]=0. \tag{25}\]
Combining equations (18), (19) and (25) yields
\[\lim_{\Delta t\to 0}\mathbb{E}\left[||\mathbf{x}_{t}-\mathbf{x}_{t}^{p}||_{2}^{2} \right]=0. \tag{26}\]
Additionally, since \(g\) is bounded, we even have
\[\mathbb{E}[||\sqrt{\Delta t}\,g(t+\Delta t)\,z||_{2}^{2}]\leq\mathbb{E}[|| \sqrt{\Delta t}\,K_{4}\,z||_{2}^{2}]=\mathbb{E}[||K_{4}\,z||_{2}^{2}]\Delta t. \tag{27}\]
For \(\mathcal{L}_{\mathrm{single}}(\theta)\), using the above bound (27) and strong convergence of the Euler-Maruyama method, we can therefore derive the following bound
\[\mathcal{L}_{\mathrm{single}}(\theta) =\frac{1}{M}\mathbb{E}\left[\sum_{m=0}^{M-1}\left[||\mathbf{x}_{ m}+\Delta t\left[-\mathcal{P}(\mathbf{x}_{m+1})+s_{\theta}(\mathbf{x}_{m+1},t_{m+1}) \right]||^{2}\right]\right] \tag{28}\] \[\leq\frac{1}{M}\,M\,2\,\left[C\sqrt{\Delta t}+\mathbb{E}_{z}[||K _{4}\,z||_{2}^{2}]\Delta t\right], \tag{29}\]
which implies that \(\mathcal{L}_{\mathrm{single}}(\theta)\to 0\) as \(\Delta t\to 0\).
* With the definitions from "\(\Leftarrow\)", let \(\theta^{*}\) denote a minimizer such that \(\mathcal{L}_{\rm single}(\theta)\to 0\) as \(\Delta t\to 0\), i.e., we assume there is a sequence \(\Delta t_{1},\Delta t_{2},...\) with \(\lim_{n\to\infty}\Delta t_{n}=0\) and a sequence \(\theta_{1},\theta_{2},...,\) where \(\theta_{n}\) is a global minimizer to the objective \(\mathcal{L}_{\rm single}^{\Delta t_{n}}(\theta)\) that depends on the step size \(\Delta t_{n}\). If there is \(\theta^{*}\) such that \(s_{\theta^{*}}(\mathbf{x},t)\equiv\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\), then \(\mathcal{L}_{\rm single}^{\Delta t_{n}}(\theta_{n})\leq\mathcal{L}_{\rm single} ^{\Delta t_{n}}(\theta^{*})\). From "\(\Leftarrow\)" we know that \(\lim_{n\to\infty}\mathcal{L}_{\rm single}^{\Delta t_{n}}(\theta^{*})=0\) and therefore \(\lim_{n\to\infty}\mathcal{L}_{\rm single}^{\Delta t_{n}}(\theta_{n})=0\). This implies that each summand of \(\mathcal{L}_{\rm single}(\theta)\) also goes to zero as \(\Delta t\to 0\), i.e., \(\lim_{\Delta t\to 0}\mathbb{E}\left[\left\|\mathbf{x}_{t}-\mathbf{x}_{t}^{p} \right\|_{2}^{2}\right]=0\). Again, with the triangle inequality for squared norms, we have that \[\lim_{\Delta t\to 0}\mathbb{E}\left[\left|\hat{\mathbf{x}}_{t}^{\Delta t}- \mathbf{x}_{t}^{p}\right|_{2}^{2}\right]\leq 2\lim_{\Delta t\to 0} \mathbb{E}\left[\left|\mathbf{x}_{t}-\hat{\mathbf{x}}_{t}^{\Delta t}\right|_{2} ^{2}\right]+2\lim_{\Delta t\to 0}\mathbb{E}\left[\left|\left|\mathbf{x}_{t}- \mathbf{x}_{t}^{p}\right|_{2}^{2}\right].\] (30) By the strong convergence of the Euler-Maruyama method and \(\theta=\theta^{*}\), we obtain \[\lim_{\Delta t\to 0}\mathbb{E}\left[\left|\hat{\mathbf{x}}_{t}^{\Delta t}- \mathbf{x}_{t}^{p}\right|_{2}^{2}\right]=0.\] (31) At the same time, for fixed \(\Delta t>0\), we can compute \[\mathbb{E}\left[\left|\hat{\mathbf{x}}_{t}^{\Delta t}-\mathbf{x}_ {t}^{p}\right|_{2}^{2}\right]\] (32) \[= \mathbb{E}_{\mathbf{x}_{t+\Delta t},z\sim\mathcal{N}(0,I)}[\left| \Delta t\,g(t+\Delta t)^{2}\left[\nabla_{\mathbf{x}}\log p_{t+\Delta t}( \mathbf{x}_{t+\Delta t})\right.\right.\] (33) \[\left.\left.-s_{\theta}(\mathbf{x}_{t+\Delta t},t+\Delta t)\right]+ \sqrt{\Delta t}\,g(t+\Delta t)\,z||_{2}^{2}\right]\] (34) \[= \Delta t\,g(t+\Delta t)\,\mathbb{E}_{\mathbf{x}_{t+\Delta t},z\sim \mathcal{N}(0,I)}[\left|\Delta t^{3/2}\,g(t+\Delta t)^{5/2}\left[\nabla_{ \mathbf{x}}\log p_{t+\Delta t}(\mathbf{x}_{t+\Delta t})\right.\right.\] (35) \[\left.\left.-s_{\theta}(\mathbf{x}_{t+\Delta t},t+\Delta t)\right] +z||_{2}^{2}\right].\] (36) For fixed \(\mathbf{x}_{t+\Delta t}\), the distribution over \(z\sim\mathcal{N}(0,I)\) in equation (36) correspond to a non-central chi-squared distribution [1, Chapter 13.4], whose mean can be calculated as \[\mathbb{E}_{z\sim\mathcal{N}(0,I)}\left[\left|\left|\Delta t^{3/2}\,g(t+ \Delta t)^{5/2}\left[\nabla_{\mathbf{x}}\log p_{t+\Delta t}(\mathbf{x}_{t+ \Delta t})-s_{\theta}(\mathbf{x}_{t+\Delta t},t+\Delta t)\right]+z\right|\right| _{2}^{2}\right]\] (37) \[=\left|\left|\Delta t^{3/2}\,g(t+\Delta t)^{5/2}\left[\nabla_{ \mathbf{x}}\log p_{t+\Delta t}(\mathbf{x}_{t+\Delta t})-s_{\theta}(\mathbf{x}_{ t+\Delta t},t+\Delta t)\right]\right|\right|_{2}^{2}+D.\] (38) For each \(\Delta t>0\), the above is minimized if and only if \(\nabla_{\mathbf{x}}\log p_{t+\Delta t}(\mathbf{x}_{t+\Delta t})=s_{\theta}( \mathbf{x}_{t+\Delta t},t+\Delta t)\).
### Multi-step Loss and Maximum Likelihood Training
We now extend the 1-step formulation from above to multiple steps and discuss its relation to maximum likelihood training. For this, we consider our proposed probability flow ODE defined by
\[d\mathbf{x}=\left[\mathcal{P}(\mathbf{x})+s_{\theta}(\mathbf{x},t)\right]dt \tag{39}\]
and for \(t_{i}<t_{j}\) define \(p_{t_{i}}^{t_{j},\rm ODE}\) as the distribution obtained by sampling \(\mathbf{x}\sim p_{t_{j}}\) and integrating the probability flow with network \(s_{\theta}(\mathbf{x},t)\) equation (7) backward in time until \(t_{i}\). We can choose two arbitrary time points \(t_{i}\) and \(t_{j}\) with \(t_{i}<t_{j}\) because we do not require fixed start and end times of the simulation.
The maximum likelihood training objective of the probability flow ODE (7) can be written as maximizing
\[\mathbb{E}_{\mathbf{x}_{t_{i}}\sim p_{t_{i}}}[\log p_{t_{i}}^{p_{t_{j}},\rm ODE }(\mathbf{x}_{t_{i}})]. \tag{40}\]
Our proposed multi-step loss is based on the sliding window size \(S\), which is the length of the sub-trajectory that we aim to reconstruct with the probability flow ODE (7). The multi-step loss is defined as
\[\mathcal{L}_{\rm multi}(\theta):=\frac{1}{M}\,\mathbb{E}_{\mathbf{x}_{0:M}} \left[\sum_{m=0}^{M-S+1}\left[\left|\mathbf{x}_{m:m+S-1}-\hat{\mathbf{x}}_{m:m +S-1}\right|\right|_{2}^{2}\right]\right], \tag{41}\]where we compute the expectation by sampling \(\mathbf{x}_{0:m}\) from the training data set and \(\hat{\mathbf{x}}_{i:i+S-1}\) is the predicted sub-trajectory that is defined recursively by
\[\hat{\mathbf{x}}_{i+S}=\mathbf{x}_{i+S}\quad\text{ and }\quad\hat{\mathbf{x}}_{i+S-1-j}=\hat{\mathbf{x}}_{i+S-j}+\Delta t\left[- \mathcal{P}(\hat{\mathbf{x}}_{i+S-j})+s_{\theta}(\hat{\mathbf{x}}_{i+S-j},t_{i+ S-j})\right]. \tag{42}\]
In the following, we show that the multi-step loss (4) maximizes a variational lower bound for the maximum likelihood training objective (40).
**Theorem A.3**.: _Consider a data set \(\{\mathbf{x}_{0:m}^{(n)}\}_{n=1}^{N}\) with trajectories sampled from SDE (11). Then the multi-step loss (4) maximizes a variational lower bound for the maximum likelihood training objective of the probability flow ODE (40) as \(\Delta t\to 0\)._
Let \(\mu_{t_{i}}^{\mathrm{ODE}}(\mathbf{x}_{t_{j}})\) denote the solution of the probability flow ODE (7) integrated backward from time \(t_{j}\) to \(t_{i}\) with initial value \(\mathbf{x}_{t_{j}}\).
For the maximum likelihood objective, we can derive a variational lower bound
\[\mathbb{E}_{\mathbf{x}_{t_{i}}}\left[\log p_{t_{i}}^{t_{j},\, \mathrm{ODE}}(\mathbf{x}_{t_{i}})\right] =\mathbb{E}_{\mathbf{x}_{t_{i}}}\left[\log\left(\mathbb{E}_{ \mathbf{x}_{t_{j}}}\left[p_{t_{i}}^{t_{j},\,\mathrm{ODE}}(\mathbf{x}_{t_{i}}| \mathbf{x}_{t_{j}})\right]\right)\right] \tag{43}\] \[=\mathbb{E}_{\mathbf{x}_{t_{i}}}\left[\log\left(\mathbb{E}_{ \mathbf{x}_{t_{j}}|\mathbf{x}_{t_{i}}}\left[\frac{p_{t_{i}}(\mathbf{x}_{t_{i}} )}{p_{t_{j}}(\mathbf{x}_{t_{j}}|\mathbf{x}_{\mathbf{t_{i}}})}p_{t_{i}}^{t_{j}, \,\mathrm{ODE}}(\mathbf{x}_{t_{i}}|\mathbf{x}_{t_{j}})\right]\right)\right]\] (44) \[\geq\mathbb{E}_{\mathbf{x}_{t_{i}}}\mathbb{E}_{\mathbf{x}_{t_{j}} |\mathbf{x}_{t_{i}}}\left[\log\left(\frac{p_{t_{j}}(\mathbf{x}_{t_{j}})}{p_{t_ {j}}(\mathbf{x}_{t_{j}}|\mathbf{x}_{\mathbf{t_{i}}})}p_{t_{i}}^{t_{j},\, \mathrm{ODE}}(\mathbf{x}_{t_{i}}|\mathbf{x}_{t_{j}})\right)\right]\] (45) \[=\mathbb{E}_{\mathbf{x}_{t_{i}}}\mathbb{E}_{\mathbf{x}_{t_{j}}| \mathbf{x}_{t_{i}}}\left[\log\left(\frac{p_{t_{j}}(\mathbf{x}_{t_{j}})}{p_{t_ {j}}(\mathbf{x}_{t_{j}}|\mathbf{x}_{\mathbf{t_{i}}})}\right)+\log\left(p_{t_{i }}^{t_{j},\,\mathrm{ODE}}(\mathbf{x}_{t_{i}}|\mathbf{x}_{t_{j}})\right)\right], \tag{46}\]
where the inequality is due to Jensen's inequality. Since only \(p_{t_{i}}^{t_{j},\,\mathrm{ODE}}(\mathbf{x}_{t_{i}}|\mathbf{x}_{t_{j}})\) depends on \(\theta\), this is the same as maximizing
\[\mathbb{E}_{\mathbf{x}_{t_{i}}}\mathbb{E}_{\mathbf{x}_{t_{j}}| \mathbf{x}_{t_{i}}}\left[\log\left(p_{t_{i}}^{t_{j},\,\mathrm{ODE}}(\mathbf{ x}_{t_{i}}|\mathbf{x}_{t_{j}})\right)\right]. \tag{47}\]
The probability flow ODE is likelihood-free, which makes it challenging to optimize. Therefore, we relax the objective by perturbing the ODE distributions by convolving them with a Gaussian kernel \(G_{\epsilon}\) with small \(\epsilon>0\), see, e.g., Kersting et al. [Ker+20, Gaussian ODE filtering]. This allows us to model the conditional distribution \(p_{t_{i}}^{t_{j},\,\mathrm{ODE}}|\mathbf{x}_{t_{j}}\) as a Gaussian distribution with mean \(\mu_{t_{i}}^{t_{j},\,\mathrm{ODE}}(\mathbf{x}_{t_{j}})\) and variance \(\sigma^{2}=\epsilon\). Then maximizing (47) reduces to matching the mean of the distribution, i.e., minimizing
\[\mathbb{E}_{\mathbf{x}_{t_{i}}}\mathbb{E}_{\mathbf{x}_{t_{j}}| \mathbf{x}_{t_{i}}}\left[||\mathbf{x}_{t_{i}}-\mu_{t_{i}}^{\mathrm{ODE}}( \mathbf{x}_{t_{j}})||_{2}^{2}\right] \tag{48}\]
independent of \(\epsilon>0\). Since this is true for any time step \(t_{j}>t_{i}\) and corresponding simulation state \(\mathbf{x}_{t_{j}}\) given \(\mathbf{x}_{t_{i}}\), we can pick the pairs \((\mathbf{x}_{t_{i}},\mathbf{x}_{t_{i+1}})\), \((\mathbf{x}_{t_{i}},\mathbf{x}_{t_{i+2}})\), \((\mathbf{x}_{t_{i}},\mathbf{x}_{t_{i+3}})\) and so on. Then, we can optimize them jointly by considering the sum of the individual objectives up to a maximum sliding window size
\[\mathbb{E}_{\mathbf{x}_{i:j}}\left[\sum_{k=i}^{j-1}||\mathbf{x}_{ t_{k}}-\mu_{t_{k}}^{\mathrm{ODE}}(\mathbf{x}_{t_{j}})||_{2}^{2}\right]. \tag{49}\]
As \(\Delta t\to 0\), we compute the terms \(\mu_{t_{k}}^{\mathrm{ODE}}(\mathbf{x}_{t_{j}})\) on a single trajectory starting at \(\mathbf{x}_{t_{i}}\) with sliding window \(S\) covering the trajectory until \(\mathbf{x}_{t_{j}}\) via the Euler method, i.e., we can define recursively
\[\hat{\mathbf{x}}_{i+S}=\mathbf{x}_{i+S}\quad\text{ and }\quad\hat{\mathbf{x}}_{i+S-1-j} =\hat{\mathbf{x}}_{i+S-j}+\Delta t\left[-\mathcal{P}(\mathbf{x}_{i+S-j})+s_{ \theta}(\mathbf{x}_{i+S-j},t_{i+S-j})\right]. \tag{50}\]
By varying the starting points of the sliding window \(\mathbf{x}_{t_{i}}\), this yields our proposed multi-step loss \(\mathcal{L}_{\mathrm{multi}}(\theta)\).
### Denoising Score Matching for Deterministic Simulations
So far, we have considered physical systems that can be modeled by an SDE, i.e., equation (11). While this problem setup is suitable for many scenarios, we would also like to apply a similar methodology when the system is deterministic, i.e., when we can write the problem as an ordinary stochastic equation
\[d\mathbf{x}=\mathcal{P}(\mathbf{x})dt. \tag{51}\]
In the case of chaotic dynamical systems, this still represents a hard inverse problem, especially when information is lost due to noise added to the trajectories after their generation.
The training setup based on modeling the physics system with an SDE is shown in figure 7a. Figure 7b and 7c illustrate two additional data setup and inference variants for deterministic physical systems modeled by the ODE (51). While for the experiments in sections 3.1 and 3.2 in the main paper, our setup resembles (a), for the buoyancy-driven flow in section 3.3 and the forced isotropic turbulence in section 3.4 in the main paper, we consider (c) as the system is deterministic.
For this variant, the update by \(-\mathcal{P}(\mathbf{x})\) and \(s_{\theta}(\mathbf{x},t)\) is separated into two steps. The temporal evolution from \(t_{i+1}\) to \(t_{i}\) is then defined entirely by physics. We apply an additive noise to the system and the update step by \(s_{\theta}(\mathbf{x},t)\), which can be interpreted as denoising for a now slightly perturbed state \(\hat{\mathbf{x}}_{t_{i}}\). In this case, we show that the network \(s_{\theta}(\mathbf{x},t)\) still learns the correct score \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) during training using denoising score matching. We compare the performance of variants (b) and (c) for the buoyancy-drive flow in appendix E.
When separating physics and score updates, we calculate the updates as
\[\hat{\mathbf{x}}_{t_{i}} =\mathbf{x}_{t_{i+1}}-\Delta t\,\mathcal{P}(\mathbf{x}_{t_{i+1}}) \tag{52}\] \[\hat{\mathbf{x}}_{t_{i}}^{\mathrm{noise}} =\hat{\mathbf{x}}_{t_{i}}+\sqrt{\Delta t}\,g(t_{i})\,z_{t_{i}}\] (53) \[\mathbf{x}_{t_{i}} =\hat{\mathbf{x}}_{t_{i}}^{\mathrm{noise}}+\Delta t\,g^{2}(t_{i}) \,s_{\theta}(\hat{\mathbf{x}}_{t_{i}}^{\mathrm{noise}},t_{i}), \tag{54}\]
where \(z_{t_{i}}\sim\mathcal{N}(0,I)\). If the physics system is deterministic and \(\Delta t\) is small enough, then we can approximate \(\mathbf{x}_{t_{i}}\approx\hat{\mathbf{x}}_{t_{i}}\) and for the moment, we assume that we can write
\[\hat{\mathbf{x}}_{t_{i}}^{\mathrm{noise}}=\mathbf{x}_{t_{i}}+\sqrt{\Delta t}\, g(t_{i})\,z_{t_{i}}. \tag{55}\]
In this case, we can rewrite the 1-step loss \(\mathcal{L}_{\mathrm{single}}(\theta)\) from (14) to obtain the denoising score matching loss
\[\mathcal{L}_{\mathrm{DSM}}(\theta):=\mathbb{E}_{(\mathbf{x}_{t_{i}},\mathbf{x }_{t_{i+1}})}\left[||\mathbf{x}_{t_{i}}-\hat{\mathbf{x}}_{t_{i}}^{\mathrm{ noise}}-\Delta t\,g^{2}(t_{i})\,s_{\theta}(\hat{\mathbf{x}}_{t_{i}}^{ \mathrm{noise}},t_{i})||_{2}^{2}\right], \tag{56}\]
which is the same as minimizing
\[\mathbb{E}_{(\mathbf{x}_{t_{i}},\mathbf{x}_{t_{i+1}})}\left[||s_{\theta}(\hat{ \mathbf{x}}_{t_{i}}^{\mathrm{noise}},t_{i})-\frac{1}{\Delta t\,g^{2}(t_{i})}( \mathbf{x}_{t_{i}}-\hat{\mathbf{x}}_{t_{i}}^{\mathrm{noise}})||_{2}^{2}\right]. \tag{57}\]
Figure 7: Variants of training and inference for different physical systems. (a) shows the SDE and reverse-time SDE setup with the Euler-Maruyama discretization when the system is modeled by an SDE. The diffusion term \(g(t)\) is absorbed in the Gaussian random variable \(z_{t}\sim\mathcal{N}(0,g(t)^{2}I)\) and network \(s_{\theta}(\mathbf{x},t)\). In (b), we assume that the temporal evolution of the training data is deterministic, i.e., we model the physical system without the diffusion term. However, for inference, we consider the reverse-time SDE of the same form as in (a), where the diffusion coefficient \(g(t)\) is chosen as a hyperparameter that depends on the noise scale added to the data. Then, in (c), we split the Euler step for the backward direction into a physics-only update, adding the Gaussian noise \(z\) and a denoising step by \(s_{\theta}(\mathbf{x},t)\).
Now, the idea presented in Vincent [111] is that for score matching, we can consider a joint distribution \(p_{t_{i}}(\mathbf{x}_{t_{i}},\tilde{\mathbf{x}}_{t_{i}})\) of sample \(\mathbf{x}_{t_{i}}\) and corrupted sample \(\tilde{\mathbf{x}}_{t_{i}}\). Using Bayes' rule, we can write \(p_{t_{i}}(\mathbf{x}_{t_{i}},\tilde{\mathbf{x}}_{t_{i}})=p_{\sigma}(\tilde{ \mathbf{x}}_{t_{i}}|\,\mathbf{x}_{t_{i}})p_{t_{i}}(\mathbf{x}_{t_{i}})\). The conditional distribution \(p_{\sigma}(\cdot|\,\mathbf{x}_{t_{i}})\) for the corrupted sample is then modeled by a Gaussian with standard deviation \(\sigma=\sqrt{\Delta t}\,g(t_{i})\), i.e., we can write \(\tilde{\mathbf{x}}=\mathbf{x}+\sqrt{\Delta t}\,g(t_{i})\,z\) for \(z\sim\mathcal{N}(0,I)\) similar to equation (55). Moreover, we can define the distribution of corrupted data \(q_{\sigma}\) as
\[q_{\sigma}(\tilde{\mathbf{x}})=\int p_{\sigma}(\tilde{\mathbf{x}}|\mathbf{x})p _{t_{i}}(\mathbf{x})d\mathbf{x}. \tag{58}\]
If \(\sigma\) is small, then \(q_{\sigma}\approx p_{t_{i}}\) and \(\mathrm{KL}(q_{\sigma}||\,p_{t_{i}})\to 0\) as \(\sigma\to 0\). Importantly, in this case, we can directly compute the score for \(p_{\sigma}(\cdot|\,\mathbf{x})\) as
\[\nabla_{\tilde{\mathbf{x}}}\log p_{\sigma}(\tilde{\mathbf{x}}|\,\mathbf{x})= \frac{1}{\sigma^{2}}(\mathbf{x}-\tilde{\mathbf{x}}). \tag{59}\]
Moreover, the theorem proven by Vincent [111] means that we can use the score of the conditional distribution \(p_{\sigma}(\cdot|\mathbf{x})\) to train \(s_{\theta}(\mathbf{x},t)\) to learn the score of \(q_{\sigma}(\mathbf{x})\), i.e.
\[\operatorname*{arg\,min}_{\theta}\mathbb{E}_{\tilde{\mathbf{x}} \sim q_{\theta}}\left[||s_{\theta}(\mathbf{x},t_{i})-\nabla_{\tilde{\mathbf{x} }}\log q_{\sigma}(\tilde{\mathbf{x}})||_{2}^{2}\right] \tag{60}\] \[= \operatorname*{arg\,min}_{\theta}\mathbb{E}_{\mathbf{x}\sim p_{t _{i}},\tilde{\mathbf{x}}\sim p_{\sigma}(\cdot|\,\mathbf{x})}\left[||s_{\theta }(\mathbf{x},t_{i})-\nabla_{\tilde{\mathbf{x}}}\log p_{\sigma}(\tilde{\mathbf{ x}}|\,\mathbf{x})||_{2}^{2}\right]. \tag{61}\]
By combining (61) and (59), this exactly equals the denoising loss \(\mathcal{L}_{\mathrm{DSM}}(\theta)\) in (57). As \(q_{\sigma}\approx p_{t_{i}}\), we also obtain that \(\nabla_{\mathbf{x}}\log q_{\sigma}(\mathbf{x})\approx\nabla_{\mathbf{x}}\log p _{t_{i}}(\mathbf{x})\), so the network \(s_{\theta}(\mathbf{x},t_{i})\) approximately learns the correct score for \(p_{t_{i}}\).
We have assumed (55) that the only corruption for \(\hat{\mathbf{x}}_{t_{i}}^{\mathrm{noise}}\) is the Gaussian noise. This is not true, as we have
\[\hat{\mathbf{x}}_{t_{i}}^{\mathrm{noise}}=\mathbf{x}_{t_{i}}+\sqrt{\Delta t}\, g(t_{i})\,z_{t_{i}}+(\mathbf{x}_{t_{i+1}}-\Delta t\,\mathcal{P}(\mathbf{x}_{t _{i+1}})-\mathbf{x}_{t_{i}}), \tag{62}\]
so there is an additional source of corruption, which comes from the numerical errors due to the term \(\mathbf{x}_{t_{i+1}}-\Delta t\,\mathcal{P}(\mathbf{x}_{t_{i+1}})-\mathbf{x}_{t _{i}}\). The conditional distribution \(p_{\sigma}(\cdot|\,\mathbf{x})\) is only approximately Gaussian. Ideally, the effects of numerical errors are dominated by the Gaussian random noise. However, even small errors may accumulate for longer sequences of inference steps. To account for this, we argue that the multi-step loss \(\mathcal{L}_{\mathrm{multi}}(\theta)\) should be used. During training, with the separation of physics update and denoising, the simulation state is first progressed from time \(t_{i+1}\) to time \(t_{i}\) using the reverse physics solver. This only yields a perturbed version of the simulation at time \(t_{i}\) due to numerical inaccuracies. Then a small Gaussian noise is added and, via the denoising network \(s_{\theta}(\mathbf{x},t)\), the simulation state is projected back to the distribution \(p_{t_{i}}\), which should also resolve the numerical errors. This is iterated, as discussed in section 2 in the main paper, depending on the sliding window size and location.
Architectures
ResNetWe employ a simple ResNet-like architecture, which is used for the score function \(s_{\theta}(\mathbf{x},t)\) and the convolutional neural network baseline (ResNet) for the stochastic heat equation in section 3.2 as well as in section 3.4 again for the score \(s_{\theta}(\mathbf{x},t)\).
For experiments with periodic boundary conditions, we apply periodic padding with length 16, i.e., if the underlying 2-dimensional data dimensions are \(N\times N\), the dimensions after the periodic padding are \((N+16)\times(N+16)\). We implement the periodic padding by tiling the input three times in \(x\)- and \(y\)-direction and then cropping to the correct sizes. The time \(t\) is concatenated as an additional constant channel to the 2-dimensional input data when this architecture represents the score \(s_{\theta}(\mathbf{x},t)\).
The encoder part of our network begins with a single 2D-convolution encoding layer with \(32\) filters, kernel size 4, and no activation function. This is followed by four consecutive residual blocks, each consisting of 2D-convolution, LeakyReLU, 2D-convolution, and Leaky ReLU. All 2D convolutions have 32 filters with kernel size four and stride 1. The encoder part ends with a single 2D convolution with one filter, kernel size 1, and no activation. Then, in the decoder part, we begin with a transposed 2D convolution, 32 filters, and kernel size 4. Afterward, there are four consecutive residual blocks, analogous to the residual encoder blocks, but with the 2D convolution replaced by a transposed 2D convolution. Finally, there is a final 2D convolution with one filter and kernel size of 5. Parameter counts of this model and other models are given in table 2.
UNetWe use the UNet architecture with spatial dropout as described in [14], Appendix A.1, for the Bayesian neural network baseline of the stochastic heat equation experiment in section 3.2. The dropout rate is set to \(0.25\). We do not include batch normalization and apply the same periodic padding as done for our ResNet architecture.
FnoThe FNO-2D architecture introduced in [11] with \(k_{\max,j}=12\) Fourier modes per channel is used as a baseline for the stochastic heat equation experiment in section 3.2 and the learned physics surrogate model in section 3.4.
Dil-ResNetThe Dil-ResNet architecture is described in [15], Appendix A. This architecture represents the score \(s_{\theta}(\mathbf{x},t)\) in the buoyancy-driven flow with obstacles experiment in section 3.3. We concatenate the constant time channel analogously to the ResNet architecture. Additionally, positional information is added to the network input by encoding the \(x\)-position and \(y\)-position inside the domain in two separate channels.
\begin{table}
\begin{tabular}{l l} Architecture & Parameters \\ \hline ResNet & 330 754 \\ UNet & 706 035 \\ Dil-ResNet & 336 915 \\ FNO & 465 377 \\ \end{tabular}
\end{table}
Table 2: Summary of architectures.
## Appendix C 1D Toy SDE
For the 1D toy SDE discussed in section 3.1, we consider the SDE given by
\[dx=-\left[\lambda_{1}\cdot\mathrm{sign}(x)x^{2}\right]dt+\lambda_{2}dw, \tag{63}\]
with \(\lambda_{1}=7\) and \(\lambda_{2}=0.03\). The corresponding reverse-time SDE is
\[dx=-\left[\lambda_{1}\cdot\mathrm{sign}(x)x^{2}-\lambda_{2}^{2}\cdot\nabla_{x }\log p_{t}(x)\right]dt+\lambda_{2}dw. \tag{64}\]
Throughout this experiment, \(p_{0}\) is a categorical distribution, where we draw either \(1\) or \(-1\) with the same probability. In figure 8, we show trajectories from this SDE simulated with the Euler-Maruyama method. Trajectories start at \(1\) or \(-1\) and approach \(0\) as \(t\) increases.
Neural network architectureWe employ a neural network \(s_{\theta}(x,t)\) parameterized by \(\theta\) to approximate the score via the 1-step loss, the multi-step loss, implicit score matching [13, 14] and sliced score matching with variance reduction [20, 14]. In all cases, the neural network is a simple multilayer perceptron with elu activations and five hidden layers with \(30\), \(30\), \(25\), \(20\), and then \(10\) neurons for the last hidden layer.
We use the Adam optimizer with standard hyperparameters as described in the original paper [14]. The learning rate, batch size, and the number of epochs depend on the data set size (100% with 2 500 trajectories, 10%, or 1%) and are chosen to ensure convergence of the training loss.
Training - 1-step lossFor the 1-step loss and all data set sizes, we train for 250 epochs with a learning rate of 10e-3 and batch size of \(256\). In the first phase, we only keep every 5th point of a trajectory and discard the rest. Then, we again train for 250 epochs with the same batch size and a learning rate of 10e-4 but keep all points. Finally, we finetune the network with 750 training epochs and a learning rate of 10e-5.
Training - multi-step lossFor the multi-step loss and 100% of the data set, we first train with the 1-step loss, which resembles a sliding window size of 2. We initially train for 1 000 epochs with a batch size of 512 and a learning rate of 10e-3, where we keep only every 5th point on a trajectory and discard the rest. Then, with a decreased learning rate of 10e-4, we begin training with a sliding window size of \(S=2\) and increment it every 1 000 epochs by one until \(S_{\max}=10\). In this phase, we train on all points without any removals.
Training - ISMFor ISM, we compute the partial derivative \(\partial s_{\theta}(\mathbf{x})_{i}/\partial\mathbf{x}_{i}\) using reverse-mode automatic differentiation in JAX (jax.jacrev). For 100% and 10% of the data set, we train for 2 000 epochs with a learning rate of 10e-3 and batch size of 10 000. Then we train for an additional 2 000 epochs with a learning rate 10e-4. For 1%, increase the number of epochs to 20 000.
Training - SSM-VRFor sliced score matching with variance reduction [20, 14], we use the same training setup as for ISM.
Figure 8: Trajectories from SDE (63) with \(\lambda_{2}=0\) (a) and \(\lambda_{2}=0.03\) (b).
ComparisonWe directly compare the learned score for the reverse-time SDE trajectories and the probability flow trajectories between ISM and the multi-step loss in figure 9 trained on the full data set. The learned score of ISM and the multi-step loss in figure 9a and figure 9b are visually very similar, showing that our method and loss learn the correct score. Overall, after finetuning both ISM and the multi-step loss, the trajectories of the multi-step loss are more accurate compared to ISM. For example, in figure 9e, a trajectory explodes to negative infinity. Also, trajectories from the multi-step loss end in either \(-1\) or \(1\), while ISM trajectories are attenuated and do not fully reach \(-1\) or \(1\) exactly, particularly for the probability flow ODE.
Results of Table 1 in Main PaperWe include the standard deviations of table 1 from the main paper in table 3 above. The posterior metric \(Q\) is very sensitive to the learned score \(s_{\theta}(\mathbf{x},t)\). Overall, our proposed multi-step loss gives the most consistent and reliable results.
\begin{table}
\begin{tabular}{l|c c c|c c c}
**Method** & \multicolumn{3}{c|}{Probability flow ODE} & \multicolumn{3}{c}{Reverse-time SDE} \\ \hline \hline & \multicolumn{3}{c|}{Data set size} & \multicolumn{3}{c}{Data set size} \\ & 100\% & 10\% & 1\% & 100\% & 10\% & 1\% \\ \hline multi-step & **0.97\(\pm\)**0.04 & **0.91\(\pm\)**0.05 & **0.81\(\pm\)**0.01 & **0.99\(\pm\)**0.01 & 0.94\(\pm\)**0.02 & **0.85\(\pm\)**0.06 \\
1-step & 0.78\(\pm\)0.16 & 0.44\(\pm\)0.13 & 0.41\(\pm\)0.13 & 0.93\(\pm\)0.05 & 0.71\(\pm\)0.10 & 0.75\(\pm\)0.10 \\ ISM & 0.19\(\pm\)0.05 & 0.15\(\pm\)0.15 & 0.01\(\pm\)0.01 & 0.92\(\pm\)0.05 & 0.94\(\pm\)0.01 & 0.52\(\pm\)0.22 \\ SSM-VR & 0.17\(\pm\)0.16 & 0.49\(\pm\)0.24 & 0.27\(\pm\)0.47 & 0.88\(\pm\)0.06 & 0.94\(\pm\)0.06 & 0.67\(\pm\)0.23 \\ \end{tabular}
\end{table}
Table 3: Posterior metric \(Q\) for 1 000 predicted trajectories averaged over three runs.
Figure 9: Comparison of Implicit Score Matching (ISM, left) and our proposed training with the multi-step loss (Multi-step, right). Colormap in (a) and (b) truncated to [-75, 75].
Empirical verification of Theorem 3.1For the quadratic SDE equation 63, the analytic score is non-trivial, therefore a direct comparison of the learned network \(s_{\theta}(\mathbf{x},t)\) and the true score \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) is difficult. However, we can consider the SDE with affine drift given by
\[dx=-\lambda xdx+gdW \tag{65}\]
with \(\lambda=0.5\) and \(g\equiv 0.04\). Because this SDE is affine and there are only two starting points \(-1\) and \(1\), we can write the distribution of states starting in \(x_{0}\) as a Gaussian with mean \(\mu(t;x_{0})=x_{0}e^{-\lambda t}\) and variance \(\sigma^{2}(t;x_{0})=\frac{\sigma^{2}}{2\lambda}(1-e^{-2\lambda t})\), see [1]. Then, the score at time \(t\) and position \(x\) conditioned on the starting point \(x_{0}\) is \((x-\mu(t;x_{0}))/\sigma^{2}(t;x_{0})\). See figure 10 for a visualization of the analytic score and a comparison with the learned score. The learned score from the 1-step training matches the analytic score very well in regions where the data density is sufficiently high.
Figure 10: 1-step training score matches analytic score. (a) shows some paths sampled from SDE equation (65) and a contour of the density. (b) is the analytic score field, and (c) and (d) are visualizations of the learned score with 1-step and multi-step training. Scores from the multi-step training correspond to more narrow high-density regions. This implies that during inference, trajectories are pulled more strongly to modes of the training data set distribution than for the 1-step training.
Stochastic Heat Equation
Spectral solverThe physics of the 2-dimensional heat equation for \(\mathbf{x}\in\mathbb{R}^{d\times d}\) can be computed analytically. The solver \(\mathcal{P}_{h}^{\Delta t}(\mathbf{x})\) simulates the systems \(\mathbf{x}\) forward in time by a fixed \(\Delta t\) using the (shifted) Fourier transformation \(\mathcal{F}\). In particular, we can implement the solver with
\[\mathcal{P}_{h}^{\Delta t}(\mathbf{x})=\mathcal{F}^{-1}\left(A(\Delta t)\circ \mathcal{F}(\mathbf{x})\right), \tag{66}\]
where \(\circ\) denotes element-wise multiplication and \(A(\Delta t)_{ij}\in\mathbb{R}^{d\times d}\) is a matrix with entries \(A(\Delta t)_{ij}:=\exp\left(-\Delta t\cdot\min(i,j,d-i,j-i)\right)\). The power spectrum of \(\mathbf{x}\) is scaled down by \(A(\Delta t)\), and higher frequencies are suppressed more than lower frequencies (for \(\Delta t>0\)). If noise is added to \(\mathbf{x}\), then this means that especially higher frequencies are affected. Therefore the inverse transformation (when \(\Delta t>0\)) exponentially scales contributions by the noise, causing significant distortions for the reconstruction of \(\mathbf{x}\).
Spectral lossWe consider a spectral error based on the two-dimensional power spectral density. The radially averaged power spectra \(s_{1}\) and \(s_{2}\) for two images are computed as the absolute values of the 2D Fourier transform raised to the second power, which are then averaged based on their distance (in pixels) to the center of the shifted power spectrum. We define the spectral error as the weighted difference between the log of the spectral densities
\[\mathcal{L}(s_{1},s_{2})=\sum_{k}w_{k}|\log(s_{1,k})-\log(s_{2,k})| \tag{67}\]
with a weighting vector \(w\in\mathbb{R}^{d}\) and \(w_{k}=1\) for \(k\leq 10\) and \(w_{k}=0\) otherwise.
TrainingFor inference, we consider the linear time discretization \(t_{n}=n\Delta t\) with \(\Delta t=0.2/32\) and \(t_{32}=0.2\). During training, we sample a random time discretization \(0\leq t_{0}<t_{2}^{\prime}<....<t_{31}^{\prime}<t_{32}\) for each batch based on \(t_{n}\) via \(t_{n}^{\prime}\sim\mathcal{U}(t_{n}-\Delta t/2,t_{n}+\Delta t/2)\) for \(n=1,...,31\) and adjust the reverse physics step based on the time difference \(t_{i}-t_{i-1}\). In the first training phase, we consider the multi-step loss with a sliding window size of \(S=6,8,...,32\) steps, where we increase \(S\) every two epochs. We use Adam to update the weights \(\theta\) with learning rate \(10^{-4}\). We finetune the network weights for \(80\) epochs with an initial learning rate of \(10^{-4}\), which we reduce by a factor of \(0.5\) every \(20\) epochs.
\(s_{\theta}\) only versionFor the 1-step loss, this method is similar to Rissanen et al. [10], which proposes a classical diffusion-like model that generates data from the dynamics of the heat equation. Nonetheless, the implementation details and methodology are analogous to the multi-step loss training, except that the reverse physics step \(\tilde{\mathcal{P}}^{-1}\) is not explicitly defined but instead learned by the network \(s_{\theta}(\mathbf{x},t)\) together with the score at the same time. We make use of the same ResNet architecture as the default variant. Except for the reverse physics step, the training setup is identical. Although the network \(s_{\theta}(\mathbf{x},t)\) is not trained with any noise for a larger sliding window with the multi-step loss, we add noise to the simulation states for the SDE inference, while there is no noise for the ODE inference.
Baseline methodsAll other baseline methods are trained for \(80\) epochs using the Adam optimizer algorithm with an initial learning rate of \(10^{-4}\), which is decreased by a factor of \(0.5\) every \(20\) epochs. For the training data, we consider solutions to the heat equation consisting of initial state \(\mathbf{x}_{0}\) and end state \(\mathbf{x}_{T}\).
Test-time distribution shiftsWe have tested the effects of test-time distribution shifts for the heat equation experiment. We train the score network \(s_{\theta}\) for a specific combination of diffusivity \(\alpha\) and noise \(g\) and vary both parameters for testing. We modify both the simulator and test ground truth for the updated diffusivity and noise. See figure 11. Overall, for small changes of the parameters, there is very little overfitting. Changes in the reconstruction MSE and spectral error can mainly be explained by making the task itself easier or harder to which our network generalizes nicely, e.g., less noise or higher diffusivity leads to smaller reconstruction error.
Additional resultsWe show visualizations for the predictions of different methods in figure 12 and figure 13.
## Appendix E Buoyancy-driven Flow with Obstacles
We use semi-Lagrangian advection for the velocity and MacCormack advection for the hot marker density within a fixed domain \(\Omega\subset[0,1]\times[0,1]\). The temperature dynamics of the marker field are modeled with a Boussinesq approximation.
TrainingWe train all networks with Adam and learning rate \(10^{-4}\) with batch size \(16\). We begin training with a sliding window size of \(S=2\), which we increase every \(30\) epochs by \(2\) until \(S_{\max}=20\).
Separate vs. joint updatesWe compare a joint update of the reverse physics step and corrector function \(s_{\theta}\), see figure 6(b), and a separate update of reverse physics step and corrector function, see figure 6(c). An evaluation regarding the reconstruction MSE and perceptual distance is shown in
Figure 11: Test-time distribution shifts for noise and diffusivity. The correction network \(s_{\theta}\) is trained on diffusivity \(\alpha=1.0\) and noise \(g\equiv 0.1\). During testing, we vary the diffusivity of the test data set and simulator as well as the noise for inference. Especially the low spectral errors indicate that the network generalizes well to the new behavior of the physics.
figure 14. Both training and inference variants achieve advantages over "\(\tilde{\mathcal{P}}^{-1}\) only" and "\(s_{\theta}\) only"
Figure 12: Predictions of different methods for the heat equation problem (example 1 of 2).
Figure 13: Predictions of different methods for the heat equation problem (example 2 of 2). Neither the BNN nor the "\(s_{\theta}\) only" model can produce small-scale structures.
approaches. Overall, there are no apparent differences for the ODE inference performance but slight benefits for the SDE inference when separating physics and corrector update.
Limited-memory BFGSWe use numerical optimization of the marker and velocity fields at \(t=0.35\) to match the target smoke and velocity fields at \(t=0.65\) using limited-memory BFGS [13, 1] and the differentiable forward simulation implemented in _phiflow_[14]. Our implementation directly uses the LBFGS implementation provided by _torch.optim.LBFGS_[15]. As arguments, we use \(\operatorname{history\_size}=10\), \(\max\_iter=4\) and \(\operatorname{line\_search\_fn}=\operatorname{strong\_wolfe}\). Otherwise, we leave all other arguments to the default values. We optimize for \(10\) steps which takes ca. 240 seconds per sample on a single NVIDIA RTX 2070 gpu.
Diffusion posterior sampling DPSAn additional baseline for this problem is diffusion posterior sampling for general noisy inverse problems [15, 16]. As a first step, we pretrain a diffusion model on the data set of marker and velocity fields at \(t=0.35\). We use the mask for obstacle positions as an additional conditioning input to the network to which no noise is applied. Our architecture and training closely resemble Denoising Diffusion Probabilistic Models [12, 14]. Our network consists of ca. 18.44 million parameters trained for 100k steps and learning rate \(1\times 10^{-4}\) using cosine annealing with warm restarts (\(T_{0}=50000\), \(\eta_{\min}=5\times 10^{-6}\)). The measurement operator \(\mathcal{A}\) is implemented using our differentiable forward simulation. We consider the Gaussian version of DPS, i.e., Algorithm 1 in [15] with \(N=1000\). We fix the step size \(\zeta_{i}\) at each iteration \(i\) to \(1/||y-\mathcal{A}(\hat{\mathbf{x}}_{0}(\mathbf{x}_{i}))||\). For each inference step, we are required to backpropagate gradients through the diffusion model and the forward simulation. Inference for a single sample requires ca. 5000 seconds on a single NVIDIA RTX 2070 gpu.
Additional resultsWe provide more detailed visualizations for the buoyancy-driven flow case in figure 16 and figure 17. These again highlight the difficulties of the reverse physics simulator to recover the initial states by itself. Including the learned corrections significantly improves this behavior.
In figure 15, we also show an example of the posterior sampling for the SDE. It becomes apparent that the inferred small-scale structures of the different samples change. However, in contrast to cases like the heat diffusion example, the physics simulation in this scenario leaves only little room for substantial changes in the states.
Figure 14: Comparison of separate and joint updates averaged over three runs.
Figure 15: Comparison of SMDP-SDE predictions and ground truth for buoyancy-driven flow at \(t=0.36\).
Figure 16: Predictions for buoyancy-driven flow with obstacles (example 1 of 2).
Figure 17: Predictions for buoyancy-driven flow with obstacles (example 2 of 2).
## Appendix F Isotropic turbulence
TrainingFor the physics surrogate model \(\tilde{\mathcal{P}}^{-1}\), we employ an FNO neural network, see appendix B, with batch size \(20\). We train the FNO for \(500\) epochs using Adam optimizer with learning rate \(10^{-3}\), which we decrease every \(100\) epochs by a factor of \(0.5\). We train \(s_{\theta}(\mathbf{x},t)\) with the ResNet architecture, see appendix B, for \(250\) epochs with learning rate \(10^{-4}\), decreased every \(50\) epochs by a factor of \(0.5\) and batch size \(6\).
Refinement with Langevin DynamicsSince the trained network \(s_{\theta}(\mathbf{x},t)\) approximates the score \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\), it can be used for post-processing strategies [19, 18]. We do a fixed point iteration at a single point in time based on Langevin Dynamics via:
\[\mathbf{x}_{t}^{i+1}=\mathbf{x}_{t}^{i}+\epsilon\cdot\nabla_{\mathbf{x}}\log p _{t}(\mathbf{x}_{t}^{i})+\sqrt{2\epsilon}z_{t} \tag{68}\]
for a number of steps \(T\) and \(\epsilon=2\times 10^{-5}\), cf. figure 18 and figure 19. For a prior distribution \(\pi_{t}\), \(\mathbf{x}_{t}^{0}\sim\pi_{t}\) and by iterating (68), the distribution of \(\mathbf{x}_{t}^{T}\) equals \(p_{t}\) for \(\epsilon\to 0\) and \(T\to\infty\). There are some theoretical caveats, i.e., a Metropolis-Hastings update needs to be added in (68), and there are additional regularity conditions [18].
Additional resultsWe show additional visualizations of the ground truth and reconstructed trajectories in figure 20 and figure 21.
Figure 19: Steps with Langevin dynamics for \(\epsilon=2\times 10^{-5}\) and an additional step with \(\Delta t\,s_{\theta}(\mathbf{x},t)\) which smoothes the images.
Figure 21: Predictions for isotropic turbulence (example 2 of 2).
Figure 20: Predictions for isotropic turbulence (example 1 of 2).
## References
* [And82] B. Anderson. "Reverse-time diffusion equation models". In: _Stochastic Processes and their Applications_ 12.3 (1982), pp. 313-326.
* [Chu+23] H. Chung, J. Kim, M. T. McCann, M. L. Klasky, and J. C. Ye. "Diffusion Posterior Sampling for General Noisy Inverse Problems". In: _The Eleventh International Conference on Learning Representations_. OpenReview.net, 2023. url: [https://openreview.net/pdf?id=OnD9zGAGTOk](https://openreview.net/pdf?id=OnD9zGAGTOk).
* [HJA20] J. Ho, A. Jain, and P. Abbeel. "Denoising Diffusion Probabilistic Models". In: _Advances in Neural Information Processing Systems_. 2020. url: [https://proceedings.neurips.cc/paper/2020/hash/4c5bcfec8584af0d967f1lab10179ca4b](https://proceedings.neurips.cc/paper/2020/hash/4c5bcfec8584af0d967f1lab10179ca4b) -Abstract.html.
* [Hol+20] P. Holl, V. Koltun, K. Um, and N. Thuerey. "phiflow: A differentiable pde solving framework for deep learning via physical simulations". In: _NeurIPS Workshop_. Vol. 2. 2020.
* [Hyv05] A. Hyvarinen. "Estimation of Non-Normalized Statistical Models by Score Matching". In: _J. Mach. Learn. Res._ 6 (2005), pp. 695-709. url: [http://jmlr.org/papers/v6/hyvarinen05a.html](http://jmlr.org/papers/v6/hyvarinen05a.html).
* [JKB95] N. L. Johnson, S. Kotz, and N. Balakrishnan. _Continuous univariate distributions, volume 2_. Vol. 289. John wiley & sons, 1995.
* [KB15] D. P. Kingma and J. Ba. "Adam: A Method for Stochastic Optimization". In: _3rd International Conference on Learning Representations_. 2015. url: [http://arxiv.org/abs/1412.6980](http://arxiv.org/abs/1412.6980).
* [Ker+20] H. Kersting, N. Kramer, M. Schiegg, C. Daniel, et al. "Differentiable Likelihoods for Fast Inversion of 'Likelihood-Free' Dynamical Systems". In: _Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event_. Vol. 119. Proceedings of Machine Learning Research. PMLR, 2020, pp. 5198-5208. url: [http://proceedings.mlr.press/v119/kersting20a.html](http://proceedings.mlr.press/v119/kersting20a.html).
* [Klo+92] P. E. Kloeden, E. Platen, P. E. Kloeden, and E. Platen. _Stochastic differential equations_. Springer, 1992.
* [Li+21] Z. Li, N. B. Kovachki, K. Azizzadenesheli, B. Liu, et al. "Fourier Neural Operator for Parametric Partial Differential Equations". In: _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021. url: [https://openreview.net/forum?id=c8P9NQVtmnO](https://openreview.net/forum?id=c8P9NQVtmnO).
* [LN89] D. C. Liu and J. Nocedal. "On the limited memory BFGS method for large scale optimization". In: _Math. Program._ 45.1-3 (1989), pp. 503-528. doi: 10.1007/BF01589116. url: [https://doi.org/10.1007/BF01589116](https://doi.org/10.1007/BF01589116).
* [Mue+22] M. Mueller, R. Greif, F. Jenko, and N. Thuerey. "Leveraging Stochastic Predictions of Bayesian Neural Networks for Fluid Simulations". In: _arXiv preprint arXiv:2205.01222_ (2022).
* [Oks03] B. Oksendal. "Stochastic differential equations". In: _Stochastic differential equations_. Springer, 2003, pp. 65-84.
* [Pas+19] A. Paszke, S. Gross, F. Massa, A. Lerer, et al. "PyTorch: An Imperative Style, High-Performance Deep Learning Library". In: _Advances in Neural Information Processing Systems_. 2019, pp. 8024-8035. url: [https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html](https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html).
* [RHS22] S. Rissanen, M. Heinonen, and A. Solin. "Generative Modelling With Inverse Heat Dissipation". In: _arXiv:2206.13397_ (2022).
* [SE19] Y. Song and S. Ermon. "Generative Modeling by Estimating Gradients of the Data Distribution". In: _Advances in Neural Information Processing Systems_. 2019, pp. 11895-11907. url: [https://proceedings.neurips.cc/paper/2019/hash/3001ef257407d5a371a96dcd947c7d93-Abstract.html](https://proceedings.neurips.cc/paper/2019/hash/3001ef257407d5a371a96dcd947c7d93-Abstract.html).
* [Son+19] Y. Song, S. Garg, J. Shi, and S. Ermon. "Sliced Score Matching: A Scalable Approach to Density and Score Estimation". In: _Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence_. Vol. 115. Proceedings of Machine Learning Research. AUAI Press, 2019, pp. 574-584. url: [http://proceedings.mlr.press/v115/song20a.html](http://proceedings.mlr.press/v115/song20a.html).
* [Sta+21] K. Stachenfeld, D. B. Fielding, D. Kochkov, M. Cranmer, et al. "Learned Simulators for Turbulence". In: _International Conference on Learning Representations_. 2021.
* [Vin11] P. Vincent. "A Connection Between Score Matching and Denoising Autoencoders". In: _Neural Comput._ 23.7 (2011), pp. 1661-1674.
* July 2, 2011_. Omnipress, 2011, pp. 681-688. | ## Review
### Summary
The paper presents an innovative approach for solving inverse problems in physics by leveraging score-based diffusion models. It integrates a reverse physics simulator with learned correction terms to sample from the posterior distribution of initial states given final conditions. This method, termed score matching via differentiable physics (SMDP), is evaluated across various physical systems, demonstrating improvements over baseline methods. The authors propose a multi-step training strategy and provide a theoretical framework linking these approaches to existing generative modeling techniques. The results indicate potential applications in diverse fields requiring robust solutions to inverse problems.
### Strengths
- The paper introduces a novel connection between score matching and physical processes, demonstrating originality.
- Extensive numerical experiments validate the proposed methods against various baselines, showcasing superior performance.
- The clarity of the problem setup, model training, and inference across different experiments is commendable.
- The multi-step training regime captures long-range dependencies effectively.
### Weaknesses
- The novelty of the method is somewhat limited, as it primarily modifies existing diffusion models by incorporating a reverse physics simulator.
- The paper lacks sufficient discussion on related works within the field of inverse problems.
- Some theoretical results, particularly regarding the equivalence of the correction term to the score function, require empirical validation.
- The organization of the paper could be improved for better clarity, especially in the presentation of the problem and methodology.
### Questions
- How is the ground truth determined in the context of Stochastic Differential Equations (SDEs)?
- Could the authors elaborate on the motivation for the choice of the 1-step loss in model training instead of a more conventional approach?
- What measures are taken to address potential overfitting to physical parameters during training?
- In the evaluation, why are comparisons not made with conventional methods like finite element methods?
### Soundness
**Score:** 3
**Description:** Good: The paper is generally sound, but some theoretical aspects require further clarification and empirical support.
### Presentation
**Score:** 3
**Description:** Good: The presentation is clear but can be improved by better organization and explanation of complex sections.
### Contribution
**Score:** 3
**Description:** Good: The contributions are significant but incremental, with potential for broader impact pending clarification of theoretical claims.
### Rating
**Score:** 6
**Description:** Weak Accept: The paper is technically solid with moderate-to-high impact potential, but it requires minor improvements in clarity and theoretical validation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The decision to accept the paper is based on its originality in linking score matching with physical processes, sound experimental validation, and clear contributions to the field of inverse problems. While there are areas for clarification, the overall strengths and potential applications of the work outweigh the weaknesses.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns
Ren Li Benoit Guillard Pascal Fua
Computer Vision Lab, EPFL
Lausanne, Switzerland
[email protected] [email protected] [email protected]
###### Abstract
Many approaches to draping individual garments on human body models are realistic, fast, and yield outputs that are differentiable with respect to the body shape on which they are draped. However, they are either unable to handle multi-layered clothing, which is prevalent in everyday dress, or restricted to bodies in T-pose. In this paper, we introduce a parametric garment representation model that addresses these limitations. As in models used by clothing designers, each garment consists of individual 2D panels. Their 2D shape is defined by a Signed Distance Function and 3D shape by a 2D to 3D mapping. The 2D parameterization enables easy detection of potential collisions and the 3D parameterization handles complex shapes effectively. We show that this combination is faster and yields higher quality reconstructions than purely implicit surface representations, and makes the recovery of layered garments from images possible thanks to its differentiability. Furthermore, it supports rapid editing of garment shapes and texture by modifying individual 2D panels.
## 1 Introduction
Draping virtual garments on body models has many applications in fashion design, movie-making, virtual try-on, virtual and augmented reality, among others. Traditionally, garments are represented by 3D meshes and the draping relies on physics-based simulations (PBS) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] to produce realistic interactions between clothes and body. Unfortunately PBS is often computationally expensive and rarely differentiable, which limits the scope of downstream applications. Hence, many recent techniques use neural networks to speed up the draping and to make it differentiable. The garments can be represented by 3D mesh templates [13, 14, 15, 16, 17, 18, 19, 20], point clouds [21, 22], UV maps [23, 24, 25, 26, 27], or implicit surfaces [28, 29, 30]. Draping can then be achieved by Linear Blend Skinning (LBS) from the shape and pose parameters of a body model, such as SMPL [31].
Even though all these methods can realistically drape individual garments over human bodies, none can handle multiple clothing layers, despite being prevalent in everyday dress. To address overlapping clothing layers such as those in Fig. 1 while preserving expressivity, we introduce an Implicit Sewing Pattern (ISP), a new representation inspired by the way fashion designers represent clothes. As shown in Fig. 2, a sewing pattern is made of several 2D panels implicitly represented by signed distance functions (SDFs) that model their 2D extent and are conditioned on a latent vector \(\mathbf{z}\), along with information about how to stitch them into a complete garment. To each panel is associated a 2D to 3D mapping representing its 3D shape, also conditioned on \(\mathbf{z}\). The 2D panels make it easy to detect collisions between surfaces and to prevent interpenetrations. In other words, the garment is made of panels whose 2D extent is learned instead of having to be carefully designed by a human and whose3D shape is expressed in a unified \(uv\) space in which a loss designed to prevent interpenetrations can easily be written.
This combination enables us to model layered garments such as those of Fig. 1 while preserving end-to-end differentiability. This lets us drape them realistically over bodies in arbitrary poses, to recover them from images, and to edit them easily. Doing all this jointly is something that has not yet been demonstrated in the Computer Vision or Computer Graphics literature. Furthermore, most data driven draping methods rely on synthetic data generated with PBS for supervision purposes. In contrast, ISPs rely on physics-based self-supervision of [18; 13]. As a result, at inference time, our approach can handle arbitrary body poses, while only requiring garments draped over bodies in a canonical pose at training time. Our code is available at [https://github.com/liren2515/ISP](https://github.com/liren2515/ISP).
## 2 Related Work
Garment Draping.Garment draping approaches can be classified as physics-based or data-driven. Physics-based ones [3; 32; 33; 34; 35; 12] produce high-quality results but are computationally demanding, while data-driven approaches are faster, sometimes at the cost of realism. Most data-driven methods are template-based [13; 14; 15; 17; 19; 18; 20; 16; 18; 13], with a triangulated mesh modeling a specific garment and a draping function trained specifically for it. As this is impractical for large garment collections, some recent works [22; 21; 36] use 3D point clouds to represent garments instead. Unfortunately, this either prevents differentiable changes in garment topology or loses the point connections with physical meanings. The approaches of [24; 25] replace the clouds by UV maps that encode the diverse geometry of garments and predict positional draping maps. These UV maps are registered to the body mesh, which restricts their interpretability and flexibility for garment representation and manipulation. The resulting garments follow the underlying body topology, and the ones that should not, such as skirts, must be post-processed to remove artifacts. Yet other algorithms [29; 30; 37] rely on learning 3D displacement fields or hierarchical graphs for garment deformation, which makes them applicable to generic garments.
Figure 1: **Multi-layered garment draping.****Top**: Draping multiple layers of garments over one body (left) and modifying the body’s shape (right). **Bottom**: Draping the same set of 5 garments over bodies with varying poses and shapes.
While many of these data-driven methods deliver good results, they are typically designed to handle a single garment or a top and a bottom garment with only limited overlap. An exception is the approach of [38] that augments SDFs with covariant fields to untangle multi-layered garments. However, the untangling is limited to garments in T-pose. In contrast, our proposed method can perform multi-layered garment draping for bodies in complex poses and of diverse shapes.
Garments as Sets of Panels.Sewing patterns are widely used in garment design and manufacturing [39; 40; 41; 42; 43]. Typically, flat sewing patterns are assembled and then draped using PBS. To automate pattern design, recent works introduce a parametric pattern space. For example, the methods of [44; 45] introduce a sparse set of parameters, such as sleeve length or chest circumference, while [27] relies on principal component analysis (PCA) to encode the shape of individual panels. It requires hierarchical graphs on groups of panels to handle multiple garment styles. By contrast, our approach relies on the expressivity of 2D Sign Distance Functions (SDFs) to represent the panels.
To promote the use of sewing patterns in conjunction with deep learning, a fully automatic dataset generation tool is proposed in [46]. It randomly samples parameters to produce sewing patterns and uses PBS to drape them on a T-posed human body model to produce training pairs of sewing patterns and 3D garment meshes.
Garments as Implicit Surfaces.SDFs have become very popular to represent 3D surfaces. However, they are primarily intended for watertight surfaces. A standard way to use them to represent open surfaces such as garments is to represent them as thin volumes surrounding the actual surface [28; 29], which can be represented by SDFs but with an inherent accuracy loss. To address this issue, in [30], the SDFs are replaced by unsigned distance functions (UDFs) while relying on the differentiable approach of [47] to meshing, in case an actual mesh is required for downstream tasks. However, this requires extra computations for meshing and makes training more difficult because surfaces lie at a singularity of the implicit field. This can result in unwanted artifacts that our continuous UV parameterization over 2D panels eliminates.
## 3 Method
In clothing software used industrially [39; 40; 41], garments are made from sets of 2D panels cut from pieces of cloth which are then stitched together. Inspired by this real-world practice, we introduce the Implicit Sewing Patterns (ISP) model depicted by Fig. 2. It consists of 2D panels whose shape is defined by the zero crossings of a function that takes as input a 2D location \(\mathbf{x}=(x_{u},x_{v})\) and a latent vector \(\mathbf{z}\), which is specific to each garment. A second function that also takes \(\mathbf{x}\) and \(\mathbf{z}\) as arguments maps the flat 2D panel to the potentially complex 3D garment surface within the panel, while enforcing continuity across panels. Finally, we train draping networks to properly drape multiple garments on human bodies of arbitrary shapes and poses, while avoiding interpenetrations of successive clothing layers.
### Modeling Individual Garments
We model garments as sets of 2D panels stitched together in a pre-specified way. Each panel is turned into a 3D surface using a \(uv\) parameterization and the networks that implement this parameterization are trained to produce properly stitched 3D surfaces. To create the required training databases, we use the PBS approach of [46]. Because it also relies on 2D panels, using its output to train our networks has proved straightforward, with only minor modifications required.
Flat 2D Panels.We take a pattern \(P\) to be a subset of \(\Omega=[-1,1]^{2}\) whose boundary are the zero crossings of an implicit function. More specifically, we define
\[\mathcal{I}_{\Theta}:\Omega\times\mathbb{R}^{|\mathbf{z}|}\rightarrow\mathbb{ R}\times\mathbb{N},\;(\mathbf{x},\mathbf{z})\mapsto(s,c)\;, \tag{1}\]
where \(\mathcal{I}_{\Theta}\) is implemented by a fully connected network. It takes as input a local 2D location \(\mathbf{x}\) and a global latent vector \(\mathbf{z}\) and returns two values. The first, \(s\), should approximate the signed distance to the panel boundary. The second, \(c\), is a label associated to boundary points and is used when assembling the garment from several panels. Then, boundaries with the same labels should be stitched together. Note that the \(c\) values are only relevant at the panel boundaries, that is, when \(s=0\). However, training the network to produce such values _only_ there would be difficult because this would involve a very sparse supervision. So, instead, we train our network to predict the label \(c\) of the closest boundary for all points, as shown in Fig. 2(c), which yields a much better behaved minimization problem at training time.
In this paper, we model every garment \(G\) using two panels \(P^{f}\) and \(P^{b}\), one for the front and one for the back. \(E^{f}\) and \(E^{b}\) are labels assigned to boundary points. Label \(0\) denotes unstitched boundary points like collar, waist, or sleeve ends. Labels other than zero denote points in the boundary of one panel that should be stitched to a point with the same label in the other.
In practice, given the database of garments and corresponding 2D panels we generated using publicly available software designed for this purpose [46], we use an _auto-decoding_ approach to jointly train \(\mathcal{I}_{\Theta}\) and to associate a latent vector \(\mathbf{z}\) to each training garment. To this end, we minimize the loss function obtained by summing over all training panels
\[\mathcal{L}_{\mathcal{I}}=\sum_{\mathbf{x}\in\Omega}\left|s(\mathbf{x}, \mathbf{z})-s^{gt}(\mathbf{x})\right|+\lambda_{CE}\text{CE}(c(\mathbf{x}, \mathbf{z}),c^{gt}(\mathbf{x}))+\lambda_{reg}\left\|\mathbf{z}\right\|_{2}^{2 }\, \tag{2}\]
with respect to a separate latent code \(\mathbf{z}\) per garment and the network weights \(\Theta\) that are used for all garments. CE is the cross-entropy loss, \(s^{gt}\) and \(c^{gt}\) are the ground-truth signed distance value and the label of the closest seamed edge of \(\mathbf{x}\), and \(\lambda_{CE}\) and \(\lambda_{reg}\) are scalars balancing the influence of the different terms. We handle each garment having a front and a back panel \(P^{f}\) and \(P^{b}\) by two separate network \(\mathcal{I}_{\Theta_{f}}\) and \(\mathcal{I}_{\Theta_{b}}\) with shared latent codes so that it can produce two separate sets of \((s,c)\) values at each \((x_{u},x_{v})\) location, one for the front and one for the back.
From 2D panels to 3D Surfaces.The sewing patterns described above are flat panels whose role is to provide stitching instructions. To turn them into 3D surfaces that can be draped on a human body, we learn a mapping from the 2D panels to the 3D garment draped on the neutral SMPL body. To this end, we introduce a second function, similar to the one used in AtlasNet [48]
\[\mathcal{A}_{\Phi}:\Omega\times\mathbb{R}^{|\mathbf{z}|}\rightarrow\mathbb{R} ^{3},\ (\mathbf{x},\mathbf{z})\mapsto\mathbf{X}. \tag{3}\]
Figure 2: **Implicit Sewing Pattern (ISP).****(a)** The 3D mesh surface for a shirt with the front surface in gray and the back one in blue. **(b)** The front and back 2D panels of the ISP. The numbers denote the labels of seamed edges \(E_{c}\), and indicate how to stitch them when \(c>0\). **(c)** We use an implicit neural representation for the signed distance and for the edge labels, denoted here by the different colors. **(d)** Interpolation in the latent space allows topology changes, here from a sleeveless shirt to a long-sleeve open jacket. The top rows show the front and back panels, the bottom row the reconstructed meshes. **(e)** To parameterize a two-panel garment, we train the implicit network \(\mathcal{I}_{\Theta}\) to predict the signed distance field (\(s_{fl}s_{b}\)) and the edge label field (\(c_{f}\)/\(c_{b}\)) that represent the two panels. They are mapped to 3D surfaces by the network \(\mathcal{A}_{\Phi}\).
It is also implemented by a fully connected network and takes as input a local 2D location \(\mathbf{x}\) and a global latent vector \(\mathbf{z}\). It returns a 3D position \(\mathbf{X}\) for every 2D location \(\mathbf{x}\) in the pattern. A key difference with AtlasNet is that we only evaluate \(\mathcal{A}_{\Phi}\) for points \(\mathbf{x}\) within the panels, that is points for which the signed distance returned by \(\mathcal{I}_{\Theta}\) of Eq. (1) is not positive. Hence, there is no need to deform or stretch \(uv\) patterns. This is in contrast to the square patches of AtlasNet that had to be, which simplifies the training.
Given the latent codes \(\mathbf{z}\) learned for each garment in the training database, as described above, we train a separate set of weights \(\Phi_{f}\) and \(\Phi_{b}\) for the front and back of the garments. To this end, we minimize a sum over all front and back training panels of
\[\mathcal{L}_{\mathcal{A}} =\mathcal{L}_{CHD}+\lambda_{n}\mathcal{L}_{normal}+\lambda_{c} \mathcal{L}_{consist}\, \tag{4}\] \[\mathcal{L}_{consist} =\sum_{c>0}\sum_{\mathbf{x}\in E_{c}}\left\|\mathcal{A}_{\Phi_{ f}}(\mathbf{x},\mathbf{z})-\mathcal{A}_{\Phi_{b}}(\mathbf{x},\mathbf{z})\right\|_{2}^ {2}\ . \tag{5}\]
where \(\mathcal{L}_{CHD}\), \(\mathcal{L}_{normal}\), and \(\mathcal{L}_{consist}\) are the Chamfer distance, a normal consistency loss, and a loss term whose minimization ensures that the points on the edges of the front and back panels sewn together--\(E_{c}\) for \(c>0\)--are aligned when folding the panels in 3D, \(\lambda_{n}\) and \(\lambda_{c}\) are scalar weights. This assumes that the front and back panels have their seamed edges aligned in 2D, ie. for \(c>0\) and \(\mathbf{x}\in\Omega\), if \(\mathbf{x}\) has label \(c\) on the front panel, then it should also have label \(c\) in the back panel, and vice-versa. Our experiments show that \(\mathcal{L}_{consist}\) reduces the gap between the front and back panels.
Meshing and Preserving Differentiability.\(\mathcal{A}_{\Phi}\) continuously deforms 2D patches that are implicitly defined by \(\mathcal{I}_{\Theta}\). To obtain triangulated meshes from these, we lift a regular triangular 2D mesh defined on \(\Omega\) by querying \(\mathcal{A}_{\Phi}\) on each of its vertex, as in [48]. More specifically, we first create a square 2D mesh \(\mathbf{T}=\{V_{\Omega},F_{\Omega}\}\) for \(\Omega\), where vertices \(V_{\Omega}\) are grid points evenly sampled along orthogonal axis of \(\Omega\) and faces \(F_{\Omega}\) are created with Delaunay triangulation. Given the latent code \(\mathbf{z}\) of a specific garment, for each vertex \(v\in V_{\Omega}\), we can obtain its signed distance value \(s\) and edge label \(c\) with \((s,c)=\mathcal{I}_{\Theta}(v,\mathbf{z})\). We construct the 2D panel mesh \(\mathbf{T}_{P}=\{V_{P},F_{P}\}\) by discarding vertices of \(\mathbf{T}\) whose SDF is positive. To get cleaner panel borders, we also keep vertices \(v\in V_{\Omega}\) with \(s(v,\mathbf{z})>0\) that belong to the mesh edges that cross the 0 iso-level, and adjust their positions to \(v-s(v,\mathbf{z})\nabla s(v,\mathbf{z})\) to project them to the zero level set [49]. The front and back panel 2D meshes are then lifted to 3D by querying \(\mathcal{A}_{\Phi}\) on each vertex. During post-processing, their edges are sewn together to produce the final 3D garment mesh \(\mathbf{T}_{G}(\mathbf{z})\). More details can be found in the Supplementary Material.
\(\mathcal{I}_{\Theta}\) performs as an indicator function to keep vertices with \(s\leq 0\), which breaks automatic differentiability. To restore it, we rely on gradients derived in [50; 51] for implicit surface extraction. More formally, assume \(v\) is a vertex of the extracted mesh \(\mathbf{T}_{G}\) and \(\mathbf{x}\) is the point on the UV space that satisfies \(v=\mathcal{A}_{\Phi}(\mathbf{x},\mathbf{z})\), then, as proved in the Supplementary Material,
\[\frac{\partial v}{\partial\mathbf{z}}=\frac{\partial\mathcal{A}_{\Phi}}{ \partial\mathbf{z}}(\mathbf{x},\mathbf{z})-\frac{\partial\mathcal{A}_{\Phi}}{ \partial\mathbf{x}}\nabla s(\mathbf{x},\mathbf{z})\frac{\partial s}{\partial \mathbf{z}}(\mathbf{x},\mathbf{z}). \tag{6}\]
Hence, ISP can be used to solve inverse problems using gradient descent, such as fitting garments to partial observations.
### Garment Draping
We have defined a mapping from a latent vector \(\mathbf{z}\) to a garment draped over the neutral SMPL model [31], which we will refer to as a _rest pose_. We now need to deform the garments to conform to bodies of different shapes and in different poses. To this end, we first train a network that can deform a single garment. We then train a second one that handles the interactions between multiple garment layers and prevents interpenetrations as depicted in Fig. 3.
Single Layer Draping.In SMPL, body templates are deformed using LBS. As in [19; 29; 30], we first invoke an extended skinning procedure with the diffused body model formulation to an initial rough estimate of the garment shape. More specifically, given the parameter vectors \(\mathbf{B}\) and \(\mathbf{\Theta}\) that control the body shape and pose respectively, each vertex \(v\) of a garment mesh \(\mathbf{T}_{G}\) is transformed by
\[v_{init} =W(v_{(\mathbf{B},\mathbf{\Theta})},\mathbf{B},\mathbf{\Theta},w( v)\mathcal{W})\, \tag{7}\] \[v_{(\mathbf{B},\mathbf{\Theta})} =v+w(v)B_{s}(\mathbf{B})+w(v)B_{p}(\mathbf{\Theta})\,\]
where \(W(\cdot)\) is the SMPL skinning function with skinning weights \(\mathcal{W}\in\mathbb{R}^{N_{B}\times 24}\), with \(N_{B}\) being the number of vertices of the SMPL body mesh, and \(B_{s}(\mathbf{B})\in\mathbb{R}^{N_{B}\times 3}\) and \(B_{p}(\mathbf{\Theta})\in\mathbb{R}^{N_{B}\times 3}\) are the shape and pose displacements returned by SMPL. \(w(v)\in\mathbb{R}^{N_{B}}\) are diffused weights returned by a neural network, which generalizes the SMPL skinning to any point in 3D space. The training of \(w(v)\) is achieved by smoothly diffusing the surface values, as in [29].
This yields an initial deformation, such that the garment roughly fits the underlying body. A displacement network \(\mathcal{D}_{s}\) is then used to refine it. \(\mathcal{D}_{s}\) is conditioned on the body shape and pose parameters \(\mathbf{B},\mathbf{\Theta}\) and on the garment specific latent code \(\mathbf{z}\). It returns a displacement map \(D_{s}=\mathcal{D}_{s}(\mathbf{B},\mathbf{\Theta},\mathbf{z})\) in UV space. Hence, the vertex \(v_{init}\) with UV coordinates \((x_{u},x_{v})\) in \(\Omega\) is displaced accordingly and becomes \(\tilde{v}=v_{init}+D_{s}[x_{u},x_{v}]\), where \([\cdot,\cdot]\) denotes standard array addressing. \(\mathcal{D}_{s}\) is implemented as a multi-layer perceptron (MLP) that outputs a vector of size \(6N^{2}\), where \(N\) is the resolution of UV mesh \(\mathbf{T}\). The output is reshaped to two \(N\times N\times 3\) arrays to produce the front and back displacement maps.
To learn the deformation for various garments without collecting any ground-truth data and train \(\mathcal{D}_{s}\) in a self-supervised fashion, we minimize the physics-based loss from [18]
\[\mathcal{L}_{phy}=\mathcal{L}_{strain}+\mathcal{L}_{bend}+\mathcal{L}_{gravity }+\mathcal{L}_{BGcol}\, \tag{8}\]
where \(\mathcal{L}_{strain}\) is the membrane strain energy caused by the deformation, \(\mathcal{L}_{bend}\) the bending energy raised from the folding of adjacent faces, \(\mathcal{L}_{gravity}\) the gravitational potential energy, and \(\mathcal{L}_{BGcol}\) the penalty for body-garment collisions.
Multi-Layer Draping.When draping independently multiple garments worn by the same person using the network \(\mathcal{D}_{s}\) introduced above, the draped garments can intersect, which is physically impossible and must be prevented. We now show that our ISP model makes that straightforward first in the case of two overlapping garments, and then in the case of arbitrarily many.
Consider an outer garment \(\mathbf{T}^{o}_{G}\) and an inner garment \(\mathbf{T}^{i}_{G}\) with rest state vertices \(V_{o}\) and \(V_{i}\). We first drape them independently on a target SMPL body with \(\mathcal{D}_{s}\), as described for single layer draping. This yields the deformed outer and underlying garments with vertices \(\tilde{V}_{o}\) and \(\tilde{V}_{i}\), respectively. We then rely on a second network \(\mathcal{D}_{m}\) to predict corrective displacements, conditioned on the outer garment geometry and repulsive virtual forces produced by the intersections with the inner garment.
In ISP, garments are represented by mapping 2D panels to 3D surfaces. Hence their geometry can be stored on regular 2D grids on which a convolutional network can act. In practice, we first encode the _rest state_ of the outer garment \(\mathbf{T}^{o}_{G}\) into a 2D position map \(M_{r}\). The grid \(M_{r}\) records the 3D location of the vertex \(v_{o}\in V_{o}\) at its \((x_{u},x_{v})\) coordinate as \(M_{r}[x_{u},x_{v}]=v_{o}\) within the panel boundaries, \(M_{r}[x_{u},x_{v}]=(0,0,0)\) elsewhere. Concatenating both front and back panels yields a \(N\times N\times 6\) array, that is, a 2D array of spatial dimension \(N\) with 6 channels. After draping, the same process is applied to encode the geometry of \(\mathbf{T}^{o}_{G}\) into position map \(M_{d}\), using vertices \(\tilde{V}_{o}\) instead of \(V_{o}\) this time. Finally, for each vertex \(\tilde{v}_{o}\in\tilde{V}_{o}\), we take the repulsive force acting on it to be
\[f(\tilde{v}_{o})=max(0,(\tilde{v}_{i}-\tilde{v}_{o})\cdot\mathbf{n}_{i}) \mathbf{n}_{i}\, \tag{9}\]
where \(\tilde{v}_{i}\) is the closest vertex in \(\tilde{V}_{i}\), \(\mathbf{n}_{i}\) is the normal of \(\tilde{v}_{i}\), and \(\cdot\) represents the dot product. The repulsive force is also recorded in the UV space to generate the 2D force map \(M_{f}\). Note that it is \(0\) for vertices that are already outside of the inner garment, for which no collision occurs. Given the forces \(M_{f}\), the garment geometry in the rest state \(M_{r}\) and after draping \(M_{d}\), the network \(\mathcal{D}_{m}\) predicts a vertex displacements map \(D_{m}=\mathcal{D}_{m}(M_{r},M_{d},M_{f})\) for the outer garment to resolve intersections, as shown in Fig. 3. We replace the vertex \(\tilde{v}_{o}\in\tilde{V}_{o}\) with coordinates \((x_{u},x_{v})\) by \(\tilde{v}_{o}^{*}=\tilde{v}_{o}+D_{m}[x_{u},x_{v}]\). \(\mathcal{D}_{m}\) is implemented as a CNN, and it can capture the local geometry and force information of vertices from the input 2D maps. The training of \(\mathcal{D}_{m}\) is self-supervised by
\[\mathcal{L}_{\mathcal{D}_{m}}=\mathcal{L}_{phy}+\lambda_{g}\mathcal{L}_{ GGcol}+\lambda_{r}\mathcal{L}_{reg}\, \tag{10}\] \[\mathcal{L}_{GGcol}=\sum_{\tilde{v}_{o}^{*}}\max(0,\epsilon-( \tilde{v}_{o}^{*}-\tilde{v}_{i})\cdot\mathbf{n}_{i})^{3}\,\ \text{and}\ \mathcal{L}_{reg}=\|\mathcal{D}_{m}(M_{r},M_{d},M_{f})\|_{2}^{2}\,\]
where \(\mathcal{L}_{phy}\) is the physics-based loss of Eq. (8), \(\lambda_{g}\) and \(\lambda_{r}\) are weighting constants, and \(\epsilon\) is a safety margin to avoid collisions. \(\mathcal{L}_{GGcol}\) penalizes the intersections between the outer and underlying garments, and \(\mathcal{L}_{reg}\) is an \(L_{2}\) regularization on the output of \(\mathcal{D}_{m}\).
Given a pair of garments, our layering network \(\mathcal{D}_{m}\) only deforms the outer garment to resolve collisions, leaving the inner one untouched. Given more than 2 overlapping garments, the process can be iterated to layer them in any desired order over a body, as detailed in the Supplementary Material.
## 4 Experiments
### Dataset, Evaluation Metrics, and Baseline
To create training and test sets, we used the software of [46] to generate sewing patterns and the corresponding 3D garment meshes in their rest state, that is draped over a T-Posed body. Our training set comprises 400 shirts, 300 skirts, and 200 pairs of trousers. For testing, we use 20 shirts, 20 skirts, and 20 pairs of trousers. As discussed in Section 3.2, we trained the draping models using SMPL body poses \(\mathbf{\Theta}\) randomly sampled from the AMASS [52] dataset and body shapes \(\mathbf{B}\) uniformly sampled from \([-2,2]^{10}\). We use the Chamfer Distance (CHD) and Normal Consistency (NC) to quantify garment reconstruction accuracy, along with the percentage of the garment mesh area that undergoes interpenetrations (Intersection), as in [30].
We compare our approach against recent and state-of-the-art methods DIG [29] and DrapeNet [30], which, like ours, can drape various garments over bodies of different shapes in arbitrary poses. Like our ISPs, DrapeNet is self-supervised using a physics-based loss and can prevent unwanted intersections between top (shirts) and bottom (trousers) garments. By contrast, DIG is a fully supervised method and cannot handle garment intersections.
### Garment Reconstruction
We first consider 3D garments in their rest pose and compare the accuracy of our ISPs against that of the UDF-based representation [30]. In Fig. 4, we provide qualitative results in a specific case and for resolutions of Marching Cubes [53] ranging from 512 to 128. Our result is visually superior and more faithful to the ground-truth garment, with UDF producing more artifacts and uneven borders, especially at resolution 512. The quantitative results reported in Tab. 1 for the training and test sets confirm this. For the test set garments, we reconstruct them by optimizing the latent code to minimize the Chamfer distance between the predicted and ground truth meshes, using the gradients of Eq. (6). Our approach consistently outperforms UDF at all resolutions while being faster, mostly because we evaluate our network on a 2D implicit field while UDF does it on a 3D one. For example, at resolution 256, our reconstruction time is 77 ms on an Nvidia V100 GPU, while UDF requires 2379 ms. Interestingly, increasing the resolution from 256 to 512 improves the reconstruction accuracy
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Train}} & \multicolumn{1}{c|}{\multirow{2}{*}{CHD (\(\times 10^{-4}\), \(\downarrow\))}} & \multicolumn{1}{c|}{NC (\%, \(\uparrow\))} & \multicolumn{1}{c}{Time (ms, \(\downarrow\))} \\ \hline UDF - 128 & 0.641 & 98.86 & 601 \\ UDF - 256 & 0.338 & 99.20 & 2379 \\ UDF - 512 & 0.262 & 98.77 & 13258 \\ \hline Ours - 128 & 0.454 & 99.20 & **25** \\ Ours - 256 & 0.290 & 99.37 & 77 \\ Ours - 512 & **0.250** & **99.41** & 261 \\ \hline \hline \end{tabular}
\begin{tabular}{l|c|c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Test}} & \multicolumn{1}{c|}{\multirow{2}{*}{CHD (\(\times 10^{-4}\), \(\downarrow\))}} & \multicolumn{1}{c}{NC (\%, \(\uparrow\))} \\ \hline UDF - 128 & 0.803 & 97.71 \\ UDF - 256 & 0.493 & 98.44 \\ UDF - 512 & 0.424 & 98.09 \\ \hline Ours - 128 & 0.579 & 98.70 \\ Ours - 256 & 0.392 & 98.83 \\ Ours - 512 & **0.349** & **98.89** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of our method to UDF on shirts under the resolutions of 128, 256 and 512.
Figure 4: **Comparison between UDF and our method in resolution 512, 256 and 128**. Our reconstructions (top row) are more accurate than those of UDF (bottom row) which have artifacts and uneven borders.
Figure 3: **Multi-layer Draping. The network \(\mathcal{D}_{m}\) uses the garment geometry and forces encoded in the input UV maps to resolve garment intersections for multi-layered draping.**of our method, whereas that of UDF's drops. This is because precisely learning the 0 iso-surface of a 3D UDF is challenging, resulting in potentially inaccurate normals near the surface [54; 47]. We present similar results for trousers and skirts in the supplementary material.
### Garment Draping
Figs. 1 and 5 showcase our method's ability to realistically drape multiple garments with diverse geometry and topology over the body. Our approach can handle multi-layered garments, which is not achievable with DrapeNet. Additionally, unlike the method of [38] that is limited to multi-layered garments on the T-posed body, our method can be applied to bodies in arbitrary poses.
As pointed out in [13; 30], there are no objective metrics for evaluating the realism of a draping. Therefore, we conducted a human evaluation. As in [30], we designed a website displaying side-by-side draping results generated by our method and DrapeNet for the same shirts and trousers, on the same bodies. Participants were asked to select the option that seemed visually better to them, with the third option being "I cannot decide". 64 participants took part in the study, providing a total of 884 responses. As shown in Fig. 6(a), the majority of participants preferred our method (62.69% vs. 25.00%), demonstrating the higher fidelity of our results. The lower intersection ratio reported in Fig. 6(b) further demonstrates the efficacy of our draping model. In Figs. 6(c) and (d), we compare qualitatively our method against DrapeNet and DIG.
### Recovering Multi-Layered Garments from Images
Thanks to their differentiability, our ISPs can be used to recover multi-layered garments from image data, such as 2D garment segmentation masks. Given an image of a clothed person, we can obtain the estimation of SMPL body parameters \((\mathbf{B},\mathbf{\Theta})\) and the segmentation mask \(\mathbf{S}\) using the algorithms of [56; 57]. To each detected garment, we associate a latent code \(\mathbf{z}\) and reconstruct garment meshes
Figure 5: **Realistic garment draping with our method, for different combinations of garments.**
Figure 6: **Draping evaluation. (a) Human evaluation results. None refers to no preference. (b) Percentage of intersecting areas. (c) For each method, left is the draping results for a shirt and a pair of trousers, and right with only the pants. (d) The draping results of our method, DIG and DrapeNet.**
by minimizing
\[\mathbf{z}_{1:N}^{*} =\operatorname*{argmin}_{\mathbf{z}_{1:N}}L_{\text{IoU}}(R(\mathcal{G }(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1:N})\oplus\mathcal{M}(\mathbf{B}, \mathbf{\Theta})),\mathbf{S})\, \tag{11}\] \[\mathcal{G}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1:N}) =\mathcal{D}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1},\mathbf{T}_ {G}(\mathbf{z}_{1}))\oplus\cdots\oplus\mathcal{D}(\mathbf{B},\mathbf{\Theta}, \mathbf{z}_{N},\mathbf{T}_{G}(\mathbf{z}_{N})),\]
where \(L_{\text{IoU}}\) is the IoU loss [58] over the rendered and the given mask, \(R(\cdot)\) is a differentiable renderer [59], \(N\) is the number of detected garments, and \(\oplus\) represents the operation of mesh concatenation. \(\mathcal{M}\) is the SMPL body mesh, while \(\mathcal{G}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1:N})\) is the concatenation of garment meshes reconstructed from the implicit sewing pattern as \(\mathbf{T}_{G}(\mathbf{z}_{i})\) and then draped by our draping model \(\mathcal{D}=\mathcal{D}_{m}\circ\mathcal{D}_{s}\). In practice, the minimization is performed from the outermost garment to the innermost, one by one. Further details can be found in the Supplementary Material.
Fig. 7 depicts the results of this minimization. Our method outperforms the state-of-the-art methods SMPLicit [28], ClothWild [55] and DrapeNet [30], given the same garment masks. The garments we recover exhibit higher fidelity and have no collisions between them or with the underlying body.
### Garment Editing
As the panel shape of the sewing pattern determines the shape of the garment, we can easily manipulate the garment mesh by editing the panel. Fig. 8 depicts this editing process. We begin by moving the sleeve edges of the panel inwards to shorten the sleeves of the mesh, then move the bottom edges up to shorten the length of the jacket, and finally remove the edges for the opening to close it. This is achieved by minimizing
\[L(\mathbf{z})=d(E(\mathbf{z}),\tilde{E})\,\ \text{where}\ E(\mathbf{z})=\{ \mathbf{x}|s(\mathbf{x},\mathbf{z})=0,\mathbf{x}\in\Omega\}\, \tag{12}\]
w.r.t. \(\mathbf{z}\). \(d(\cdot)\) computes the chamfer distance, \(\tilde{E}\) represents the edges of the modified panel, and \(E(\mathbf{z})\) represents the 0 iso-level extracted from \(\mathcal{I}_{\Theta}\), that is, the edges of the reconstructed panel. Our sewing pattern representation makes it easy to specify new edges by drawing and erasing lines in
Figure 8: **Shape editing**. Garment attributes can be edited by modifying the sewing pattern.
Figure 7: **Garment recovery from images**. We compare the meshes recovered by our method and the state of the art methods SMPLicit [28], ClothWild [55], DrapeNet [30] (unavailable for skirts).
the 2D panel images, whereas the fully implicit garment representation of [30] requires an auxiliary classifier to identify directions in the latent space for each garment modification. As shown in the supplementary material, the texture of the garment can also be edited by drawing on the UV panels.
## 5 Conclusion
We have introduced a novel representation for garments to be draped on human bodies. The garments are made of flat 2D panels whose boundary is defined by a 2D SDF. To each panel is associated a 3D surface parameterized by the 2D panel coordinates. Hence, different articles of clothing are represented in a standardized way. This allows the draping of multi-layer clothing on bodies and the recovery of such clothing from single images. Our current implementation assumes quasi-static garments and only deforms the outer garment to solve collisions in multi-layer draping. In future work, we will introduce garment dynamics and focus on more accurate physical interactions between the outer and inner garments.
**Acknowledgement.** This project was supported in part by the Swiss National Science Foundation.
## References
* [1] Xavier Provot et al. Deformation constraints in a mass-spring model to describe rigid cloth behaviour. In _Graphics interface_, 1995.
* [2] X. Provot. Collision and Self-Collision Handling in Cloth Model Dedicated to Design Garments. In _Computer Animation and Simulation_, 1997.
* [3] D. Baraff and A. Witkin. Large Steps in Cloth Simulation. In _ACM SIGGRAPH_, pages 43-54, 1998.
* [4] T. Vassilev, B. Spanlang, and Y. Chrysanthou. Fast cloth animation on walking avatars. In _Computer Graphics Forum_, 2001.
* [5] C. Zeller. Cloth simulation on the gpu. In _ACM SIGGRAPH_, 2005.
* [6] M. Tang, R. Tong, R. Narain, C. Meng, and D. Manocha. A GPU-based streaming algorithm for high-resolution cloth simulation. In _Computer Graphics Forum_, 2013.
* [7] T. Liu, S. Bouaziz, and L. Kavan. Quasi-newton methods for real-time simulation of hyperelastic materials. _ACM Transactions on Graphics_, 2017.
* [8] Nvidia. Nvcloth, 2018.
* [9] Optitext Fashion Design Software, 2018. [https://opticx.com/](https://opticx.com/).
* [10] Nvidia. NVIDIA Flex, 2018. [https://developer.nvidia.com/flex](https://developer.nvidia.com/flex).
* [11] M. Designer, 2018. [https://www.marvelousdesigner.com](https://www.marvelousdesigner.com).
* [12] Tongkui Su, Yan Zhang, Yu Zhou, Yao Yu, and Sidan Du. GPU-based Real-time Cloth Simulation for Virtual Try-on. In _Pacific Conference on Computer Graphics and Applications_, 2018.
* [13] H. Bertiche, M. Madadi, and S. Escalera. PBNS: Physically Based Neural Simulation for Unsupervised Garment Pose Space Deformation. _ACM Transactions on Graphics_, 2021.
* [14] B. L. Bhatnagar, G. Tiwari, C. Theobalt, and G. Pons-Moll. Multi-Garment Net: Learning to Dress 3D People from Images. In _International Conference on Computer Vision_, 2019.
* [15] B. Jiang, J. Zhang, Y. Hong, J. Luo, L. Liu, and H. Bao. Bcnet: Learning body and cloth shape from a single image. In _European Conference on Computer Vision_, 2020.
* [16] X. Pan, J. Mai, X. Jiang, D. Tang, J. Li, T. Shao, K. Zhou, X. Jin, and D. Manocha. Predicting loose-fitting garment deformations using bone-driven motion networks. In _ACM SIGGRAPH_, 2022.
* [17] C. Patel, Z. Liao, and G. Pons-Moll. Tailornet: Predicting clothing in 3d as a function of human pose, shape and garment style. In _Conference on Computer Vision and Pattern Recognition_, 2020.
* [18] I. Santesteban, M.A. Otaduy, and D. Casas. SNUG: Self-Supervised Neural Dynamic Garments. In _Conference on Computer Vision and Pattern Recognition_, 2022.
* [19] I. Santesteban, N. Thuerey, M. A. Otaduy, and D. Casas. Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On. In _Conference on Computer Vision and Pattern Recognition_, 2021.
* [20] G. Tiwari, B. L. Bhatnagar, T. Tung, and G. Pons-Moll. Sizer: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing. In _European Conference on Computer Vision_, 2020.
* [21] H. Bertiche, M. Madadi, E. Tylson, and S. Escalera. DeePSD: Automatic Deep Skinning and Pose Space Deformation for 3D Garment Animation. In _International Conference on Computer Vision_, 2021.
* [22] E. Gundogdu, V. Constantin, S. Parashar, A. Seifoddini, M. Dang, M. Salzmann, and P. Fua. Garnet++: Improving Fast and Accurate Static 3D Cloth Draping by Curvature Loss. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 22(1):181-195, 2022.
* [23] Z. Lahner, D. Cremers, and T. Tung. Deepwrinkles: Accurate and Realistic Clothing Modeling. In _European Conference on Computer Vision_, September 2018.
* [24] Y. Shen, J. Liang, and M.C. Lin. Gan-based garment generation using sewing pattern images. In _European Conference on Computer Vision_, 2020.
* [25] Z. Su, T. Yu, Y. Wang, and Y. Liu. DeepCloth: Neural Garment Representation for Shape and Style Editing. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 45(2):1581-1593, 2022.
* [26] M. Zhang, D. Ceylan, and N. Mitra. Motion Guided Deep Dynamic 3D Garments. _ACM Transactions on Graphics_, 41(6):1-12, 2022.
* [27] X. Chen, G. Wang, D. Zhu, X. Liang, P. Torr, and L. Lin. Structure-Preserving 3D Garment Modeling with Neural Sewing Machines. In _Advances in Neural Information Processing Systems_, 2022.
* [28] E. Corona, A. Pumarola, G. Alenya, G. Pons-Moll, and F. Moreno-Noguer. Smplicit: Topology-Aware Generative Model for Clothed People. In _Conference on Computer Vision and Pattern Recognition_, 2021.
* [29] R. Li, B. Guillard, E. Remelli, and P. Fua. DIG: Draping Implicit Garment over the Human Body. In _Asian Conference on Computer Vision_, 2022.
* [30] Luca DeLuigi, Ren Li, Benoit Guillard, Mathieu Salzmann, and Pascal Fua. DrapeNet: Generating Garments and Draping them with Self-Supervision. In _Conference on Computer Vision and Pattern Recognition_, 2023.
* [31] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep It SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image. In _European Conference on Computer Vision_, 2016.
* [32] Y. Li, M. Habermann, B. Thomaszewski, S. Coros, T. Beeler, and C. Theobalt. Deep physics-aware inference of cloth deformation for monocular human performance capture. In _International Conference on 3D Vision_, 2021.
* [33] J. Liang, M. Lin, and V. Koltun. Differentiable Cloth Simulation for Inverse Problems. In _Advances in Neural Information Processing Systems_, 2019.
* [34] R. Narain, A. Samii, and J.F. O'brien. Adaptive anisotropic remeshing for cloth simulation. _ACM Transactions on Graphics_, 2012.
* [35] R. Narain, T. Pfaff, and J.F. O'Brien. Folding and crumpling adaptive sheets. _ACM Transactions on Graphics_, 2013.
* [36] I. Zakharkin, K. Mazur, A. Grigorev, and V. Lempitsky. Point-based modeling of human clothing. In _International Conference on Computer Vision_, 2021.
* [37] A. Grigorev, B. Thomaszewski, M.J. Black, and O. Hilliges. HOOD: Hierarchical Graphs for Generalized Modeling of Clothing Dynamics. In _Conference on Computer Vision and Pattern Recognition_, 2023.
* [38] I. Santesteban, M.A. Otaduy, N. Thuerey, and D. Casas. Ulnef: Untangled layered neural fields for mix-and-match virtual try-on. In _Advances in Neural Information Processing Systems_, 2022.
* [39] Clo 3d. [https://www.clo3d.com/en/](https://www.clo3d.com/en/).
* [40] Browzwear. [https://browzwear.com/](https://browzwear.com/).
* [41] Marvellous designer. [https://www.marvelousdesigner.com/](https://www.marvelousdesigner.com/).
* [42] N. Umetani, D. M. Kaufman, T. Igarashi, and E. Grinspun. Sensitive Couture for Interactive Garment Editing and Modeling. _ACM SIGGRAPH_, 30(4), 2011.
* [43] A. Kaspar, K. Wu, Y. Luo, L. Makatura, and W. Matusik. Knit Sketching: from Cut & Sew Patterns to Machine-Knit Garmets. _ACM Transactions on Graphics_, 40(4):1-15, 2021.
* [44] T. Y. Wang, D. Ceylan, J. Popovic, and N. J. Mitra. Learning a Shared Shape Space for Multimodal Garment Design. In _ACM SIGGRAPH Asia_, 2018.
* [45] R. Vidaurre, I. Santesteban, E. Garces, and D. Casas. Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On. In _Computer Graphics Forum_, pages 145-156, 2020.
* [46] M. Korosteleva and S. Lee. Generating Datasets of 3D Garmets with Sewing Patterns. In _Advances in Neural Information Processing Systems_, 2021.
* [47] B. Guillard, F. Stella, and P. Fua. MeshUDF: Fast and Differentiable Meshing of Unsigned Distance Field Networks. In _European Conference on Computer Vision_, 2022.
* [48] T. Groueix, M. Fisher, V. Kim, B. Russell, and M. Aubry. Atlasnet: A Papier-Mache Approach to Learning 3D Surface Generation. In _Conference on Computer Vision and Pattern Recognition_, 2018.
* [49] J. Chibane, A. Mir, and G. Pons-Moll. Neural Unsigned Distance Fields for Implicit Function Learning. In _Advances in Neural Information Processing Systems_, 2020.
* [50] E. Remelli, A. Lukoianov, S. Richter, B. Guillard, T. Bagautdinov, P. Baque, and P. Fua. Meshsdf: Differentiable Iso-Surface Extraction. In _Advances in Neural Information Processing Systems_, 2020.
* [51] B. Guillard, E. Remelli, A. Lukoianov, S. Richter, T. Bagautdinov, P. Baque, and P. Fua. Deepmesh: Differentiable Iso-Surface Extraction. In _arXiv Preprint_, 2022.
* [52] N. Mahmood, N. Ghorbani, N. F. Troje, G. Pons-Moll, and M. J. Black. AMASS: Archive of Motion Capture as Surface Shapes. In _International Conference on Computer Vision_, pages 5442-5451, 2019.
* [53] W.E. Lorensen and H.E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. In _ACM SIGGRAPH_, pages 163-169, 1987.
* [54] R. Venkatesh, T. Karmali, S. Sharma, A. Ghosh, R. V. Babu, L. A. Jeni, and M. Singh. Deep Implicit Surface Point Prediction Networks. In _International Conference on Computer Vision_, 2021.
* [55] G. Moon, H. Nam, T. Shiratori, and K.M. Lee. 3d clothed human reconstruction in the wild. In _European Conference on Computer Vision_, 2022.
* [56] Yu Y. Rong, T. Shiratori, and H. Joo. Frankmocap: Fast monocular 3d hand and body motion capture by regression and integration. In _International Conference on Computer Vision Workshops_, 2021.
* [57] P. Li, Y. Xu, Y. Wei, and Y. Yang. Self-Correction for Human Parsing. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44(6):3260-3271, 2020.
* [58] R. Li, M. Zheng, S. Karanam, T. Chen, and Z. Wu. Everybody Is Unique: Towards Unbiased Human Mesh Recovery. In _British Machine Vision Conference_, 2021.
* [59] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. PyTorch3D. [https://github.com/facebookresearch/pytorch3d](https://github.com/facebookresearch/pytorch3d), 2020.
* [60] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M.J. Black. SMPL: A Skinned Multi-Person Linear Model. _ACM SIGGRAPH Asia_, 34(6), 2015.
## Appendix A Additional Results
### Texture Editing
As illustrated in Fig. 9, with our ISP, the texture of the garment can be easily edited by drawing on the UV panels. The figures drawn on the panel of one garment can be directly transferred to others as shown by Fig. 9(c), since panels are defined on the same UV space.
### Garment Reconstruction
#### a.2.1 Evaluation Results
In Tabs. 2 and 3 we report the reconstruction results for skirts and trousers on the training and the test set. Similar to the results on shirts shown in the main paper, our method achieves better reconstruction quality than UDF [30] with lower CHD and higher NC at all resolutions, and needs less time to
Figure 9: **Texture editing.****(a)** A smiling face drawn on the front panel. **(b)** A flower pattern painted on both the front and the back panels. **(c)** A pocket drawn for the left shirt can be transferred to different garments.
reconstruct a single mesh. Figs. 10 to 12 show the qualitative results reconstructed by our method for shirts, skirts and trousers respectively.
#### a.2.2 Latent Space Interpolation
In Figs. 13 to 15, we display the results of interpolation in the latent space of shirts, skirts and trousers respectively. We observe a smooth transformation in both the reconstructed sewing patterns and the garment meshes, despite the different topology and geometry of the given garments.
#### a.2.3 Comparison with AtlasNet
In Fig. 16, we compare our method with AtlasNet [48] which learns to deform a square patch. AtlasNet struggles to learn a mapping function capable of accurately deforming the square patch to produce a surface that matches the ground truth, especially in the collar region. In contrast, our method leverages the pattern parameterization network \(\mathcal{I}_{\Theta}\) to simplify the training of our mapping function \(\mathcal{A}_{\Phi}\). Specifically, our approach only requires learning the mapping for points within the panels, resulting in a reconstruction that is more faithful to the ground truth.
#### a.2.4 Ablation Study
To investigate the impact of the loss terms of Eq. (4) in the main paper on reconstruction quality, we performed an ablation study. We report the results of reconstructing 300 skirts using models trained with and without \(\mathcal{L}_{normal}\) in Tab. 4. We observe that training with \(\mathcal{L}_{normal}\) reduces the CHD and increases the NC, thus improving the reconstruction accuracy. In contrast, without \(\mathcal{L}_{normal}\), the
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline \multicolumn{1}{c|}{Train} & CHD (\(\times 10^{-4}\), \(\downarrow\)) & NC (\%, \(\uparrow\)) & Time (ms, \(\downarrow\)) \\ \hline UDF - 128 & 0.714 & 98.53 & 690 \\ UDF - 256 & 0.408 & 98.93 & 2784 \\ UDF - 512 & 0.331 & 98.43 & 17362 \\ \hline Ours - 128 & 0.538 & 98.80 & **25** \\ Ours - 256 & 0.321 & 99.08 & 75 \\ Ours - 512 & **0.267** & **99.14** & 262 \\ \hline \hline \end{tabular} \begin{tabular}{l|c|c|c} \hline \hline \multicolumn{1}{c|}{Test} & CHD (\(\times 10^{-4}\), \(\downarrow\)) & NC (\%, \(\uparrow\)) \\ \hline UDF - 128 & 0.734 & 97.79 \\ UDF - 256 & 0.403 & 98.64 \\ UDF - 512 & 0.324 & 98.27 \\ \hline Ours - 128 & 0.583 & 98.62 \\ Ours - 256 & 0.362 & 98.85 \\ Ours - 512 & **0.304** & **98.89** \\ \hline \hline \end{tabular} \begin{tabular}{l|c|c|c} \hline \hline \multicolumn{1}{c|}{Test} & CHD (\(\times 10^{-4}\), \(\downarrow\)) & NC (\%, \(\uparrow\)) \\ \hline UDF - 128 & 0.752 & 97.41 \\ UDF - 256 & 0.425 & 98.09 \\ UDF - 512 & 0.350 & 97.63 \\ \hline \hline Ours - 128 & 0.529 & 98.03 \\ Ours - 256 & 0.346 & 98.31 \\ Ours - 512 & **0.300** & **98.32** \\ \hline \hline \end{tabular}
\begin{tabular}{l|c|c|c} \hline \hline \multicolumn{1}{c|}{Test} & CHD (\(\times 10^{-4}\), \(\downarrow\)) & NC (\%, \(\uparrow\)) \\ \hline UDF - 128 & 0.752 & 97.41 \\ UDF - 256 & 0.425 & 98.09 \\ UDF - 512 & 0.350 & 97.63 \\ \hline \hline Ours - 128 & 0.529 & 98.03 \\ Ours - 256 & 0.346 & 98.31 \\ Ours - 512 & **0.300** & **98.32** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of our method to UDF on kirts under the resolutions of 128, 256 and 512.
Figure 10: Reconstruction samples of shirts.
Figure 11: Reconstruction samples of skirts.
Figure 12: Reconstruction samples of trousers.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & CHD (\(\times 10^{-4}\), \(\downarrow\)) & NC (\%, \(\uparrow\)) \\ \hline w/o \(\mathcal{L}_{normal}\) & 0.324 & 98.69 \\ w/ \(\mathcal{L}_{normal}\) & 0.321 & 99.08 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the results reconstructed by the models trained w/o and w/ \(\mathcal{L}_{normal}\).
Figure 13: **Interpolation. We interpolate the latent code from a sleeveless shirt to a long-sleeve jacket. (a) and (b) show the reconstructed front and back panels, where the colors on them denote the edge label fields. (c) shows the reconstructed mesh.**
Figure 14: **Interpolation**. We interpolate the latent code from a short tight skirt to a long loose skirt. (a) and (b) show the reconstructed front and back panels, where the colors on them denote the edge label fields. (c) shows the reconstructed mesh.
Figure 15: **Interpolation**. We interpolate the latent code from a pair of short trousers to a pair of long trousers. (a) and (b) show the reconstructed front and back panels, where the colors on them denote the edge label fields. (c) shows the reconstructed mesh.
model fails to learn the correct parameterization for the garment, resulting in a mesh with inverted faces as illustrated in Fig. 17. Fig. 18 shows a comparison of the results obtained by training models without and with \(\mathcal{L}_{consist}\) to help stitch the front and back panels. We can notice that without \(\mathcal{L}_{consist}\), a spatial gap exists between the front (in gray) and back (in cyan) surfaces.
#### a.2.5 Vizualization of the Chamfer Distance
A visualization of the spatial error distribution can be found in Fig. 19. In this figure, we compare the error distribution between our reconstructions and those produced by UDFs. Our reconstructions exhibit lower error across the entire surface compared to UDFs.
### Garment Draping
Fig. 20 and Fig. 21(a) present additional comparisons of draping results for our method, DIG [29] and DrapeNet [30]. Our results show higher fidelity and fewer artifacts compared to the other methods. We also compare our method with SNUG [18], a self-supervised method that relies on mesh templates for garment representation and trains one network for each clothing item. In Fig. 21(b), we show the qualitative results on the same shirt. Our results are either comparative or visually superior to those of SNUG, despite using a single draping network for a whole garment category.
The corresponding results are shown in Tabs. 5 and 6, where the diagonal values represent the intersection ratio between the body and the \(i\)-th (\(i=1,2,3,4,5\)) layer, and the other values represent the intersection between different layers of garments. It is important to note that we cannot fairly compare these numbers to any other method, as they do not support layering. The left and right sides of the tables correspond to the results obtained with and without the layering procedure, respectively. As shown in the tables, the significantly lower intersection ratios on the left side demonstrate the effectiveness of our model in handling intersections with the body and between different garments. Another noteworthy observation is that the intersections tend to increase as we layer more garments. This behavior is explained in Appendix B.4, where we clarify that our multi-layer draping model \(\mathcal{D}_{m}\) is trained solely on garments obtained through single-layer draping, which are relatively close to the body. However, it is worth mentioning that even after five layers, the intersections still remain at a low level (\(\sim\)2%).
## Appendix B Technical Details
### Sewing Patterns for Trousers and Skirts
In Fig. 22, we show the sewing patterns used in our experiments for trousers and skirts.
Figure 19: **Comparative analysis of mesh reconstruction and spatial error distribution for shirt, skirt and trousers**. (a) represents the ground truth mesh. (b) and (d) illustrate the mesh reconstruction and corresponding spatial error distribution for our proposed method. (c) and (e) display the same for UDFs. Each row provides both front (upper half) and back (lower half) views. In (d) and (e), error magnitude is indicated by color gradation, with white representing large errors and blue small errors.
Figure 21: The comparison of draping results for (a) our method vs. DIG, and (b) our method vs. SNUG.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline w/ \(\mathcal{D}_{m}\) & 1 & 2 & 3 & 4 & 5 \\ \hline
[MISSING_PAGE_POST]
& 0.052 \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation of intersections with the underlying body and between garments on different layers (the 1st layer is a **skirt**). Left and right are the results of draping with and without the multi-layering network \(\mathcal{D}_{m}\), respectively. The unit is %.
Figure 20: The comparison of draping results of our method and DrapeNet.
### Mesh Triangulation
In this section, we detail the meshing process of ISP. We first create a square 2D mesh \(\mathbf{T}\) for \(\Omega\) as shown on the left of Fig. 23. Given the latent code \(\mathbf{z}\) of a specific garment, for each vertex \(v\in V_{\Omega}\), we compute its signed distance value \(s\) and edge label \(c\) with \((s,c)=\mathcal{I}_{\Theta}(v,\mathbf{z})\). The 2D front and back panel meshes \(\mathbf{T}_{P_{f}}\) and \(\mathbf{T}_{P_{b}}\) are constructed by keeping vertices of \(\mathbf{T}\) with negative signed distance (the blue region of the colored gird in Fig. 23) and those that have positive signed distance but belong to the edges crossing the 0 iso-level (the gray region of the colored gird in Fig. 23). For the later ones, we adjust their positions from \(v\) to \(\dot{v}=v-s(v,\mathbf{z})\nabla s(v,\mathbf{z})\) to project them to the zero level set (the blue line). Finally, we query \(\mathcal{A}_{\Phi_{f}}\) and \(\mathcal{A}_{\Phi_{b}}\) on each vertex of \(\mathbf{T}_{P_{f}}\) and \(\mathbf{T}_{P_{b}}\) respectively to lift them to 3D, giving us the front and back surfaces \(\mathbf{T}_{S_{f}}\) and \(\mathbf{T}_{S_{b}}\) as shown on the right of Fig. 23.
To sew the lifted front and back surfaces \(\mathbf{T}_{S_{f}}\) and \(\mathbf{T}_{S_{b}}\), we perform triangulation with the help of panel meshes \(\mathbf{T}_{P_{f}}\) and \(\mathbf{T}_{P_{b}}\). As illustrated in Fig. 24, we group the boundary vertices of \(\mathbf{T}_{P_{f}}\) and
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline w/ \(\mathcal{D}_{m}\) & 1 & 2 & 3 & 4 & 5 \\ \hline
1 & 0.185 & 0.048 & 0.000 & 0.000 & 0.000 \\
2 & - & 0.003 & 0.631 & 0.588 & 0.637 \\
3 & - & - & 0.030 & 1.227 & 1.161 \\
4 & - & - & - & 0.020 & 2.426 \\
5 & - & - & - & - & 0.028 \\ \end{tabular}
\begin{tabular}{c|c|c|c|c|c} \hline w/o \(\mathcal{D}_{m}\) & 1 & 2 & 3 & 4 & 5 \\ \hline
1 & 0.185 & 0.225 & 0.259 & 0.223 & 0.134 \\
2 & - & 0.004 & 28.033 & 26.378 & 26.246 \\
3 & - & - & 0.046 & 24.729 & 24.679 \\
4 & - & - & - & 0.073 & 27.821 \\
5 & - & - & - & - & 0.052 \\ \end{tabular}
\end{table}
Table 6: Evaluation of intersections with the underlying body and between garments on different layers (the 1st layer is a pair of **trousers**). Left and right are the results of draping with and without the multi-layering network \(\mathcal{D}_{m}\), respectively. The unit is %.
Figure 23: **Meshing Process.** Starting with a square 2D mesh \(\mathbf{T}\), we first extract the front and back panel meshes \(\mathbf{T}_{P_{f}}\) and \(\mathbf{T}_{P_{b}}\) using the implicit function \(\mathcal{I}_{\Theta}\). Then we lift \(\mathbf{T}_{P_{f}}\) and \(\mathbf{T}_{P_{b}}\) to 3D to get the surface meshes \(\mathbf{T}_{S_{f}}\) and \(\mathbf{T}_{S_{b}}\) by querying \(\mathcal{A}_{\Phi_{f}}\) and \(\mathcal{A}_{\Phi_{b}}\) on their vertices respectively.
Figure 22: The sewing patterns for (a) trousers and (b) skirts. The front mesh surfaces are in gray and the back ones in blue.
\(\mathbf{T}_{P_{b}}\) whose predicted labels are the same (\(c\), with \(c>0\)) to form the sewing edges \(E_{c}^{f}\) and \(E_{c}^{b}\) for the front and back panels separately. Then we create faces \(F_{sew}\) between the vertices of \(E_{c}^{f}\) and \(E_{c}^{b}\), and use \(F_{sew}\) to merge the meshes of \(\mathbf{T}_{S_{f}}\) and \(\mathbf{T}_{S_{b}}\), which gives us the final assembled garment mesh.
### Proof of the Differentiability of ISP
According to the _Theorem 1_ of [50], for an SDF \(s\) and the point \(\mathbf{x}_{0}\) lying on the 0 iso-level \(l=\{\mathbf{q}|s(\mathbf{q},\mathbf{z})=0,\mathbf{q}\in\Omega\}\), we have
\[\frac{\partial\mathbf{x}_{0}}{\partial s}=-\nabla s(\mathbf{x}_{0},\mathbf{z}). \tag{13}\]
For point \(\mathbf{x}\) lying on the \(\alpha\) iso-level, we can have \(s(\mathbf{x},\mathbf{z})=\alpha\), where \(\alpha\) is a constant. Let \(s_{\alpha}=s-\alpha\), then \(\mathbf{x}\) lies on the 0 iso-level of \(s_{\alpha}\). Based on Eq. 13, we can have
\[\frac{\partial\mathbf{x}}{\partial s_{\alpha}}=-\nabla s_{\alpha}(\mathbf{x}, \mathbf{z})=-\nabla s(\mathbf{x},\mathbf{z}). \tag{14}\]
Assume \(v\) is a vertex of the mesh \(\mathbf{T}_{G}\) reconstructed by ISP and \(\mathbf{x}\) is the point on the UV space that satisfies \(v=\mathcal{A}_{\Phi}(\mathbf{x},\mathbf{z})\), then it holds that
\[\frac{\partial v}{\partial\mathbf{z}} =\frac{\partial\mathcal{A}_{\Phi}}{\partial\mathbf{z}}(\mathbf{x},\mathbf{z})+\frac{\partial\mathcal{A}_{\Phi}}{\partial\mathbf{x}}\frac{ \partial\mathbf{x}}{\partial\mathbf{z}}(\mathbf{x},\mathbf{z}), \tag{15}\] \[=\frac{\partial\mathcal{A}_{\Phi}}{\partial\mathbf{z}}(\mathbf{x},\mathbf{z})+\frac{\partial\mathcal{A}_{\Phi}}{\partial\mathbf{x}}\frac{ \partial\mathbf{x}}{\partial s_{\alpha}}\frac{\partial s_{\alpha}}{\partial s} \frac{\partial s}{\partial\mathbf{z}}(\mathbf{x},\mathbf{z}). \tag{16}\]
Since \(\frac{\partial s_{\alpha}}{\partial s}=1\), we can substitute Eq. 14 into Eq. 16 to derive that
\[\frac{\partial v}{\partial\mathbf{z}}=\frac{\partial\mathcal{A}_{\Phi}}{ \partial\mathbf{z}}(\mathbf{x},\mathbf{z})-\frac{\partial\mathcal{A}_{\Phi}} {\partial\mathbf{x}}\nabla s(\mathbf{x},\mathbf{z})\frac{\partial s}{\partial \mathbf{z}}(\mathbf{x},\mathbf{z}). \tag{17}\]
### Garment Draping
Single Layer Draping.In Fig. 25, we illustrate the pipeline for the single layer draping, which relies on the diffused LBS of SMPL [60] to get the initial rough estimate of the garment shape and the displacement map output by \(\mathcal{D}_{s}\) to refine it.
Multi-Layer Draping.Our layering network \(\mathcal{D}_{m}\) can be applied to multiple garments iteratively to resolve collisions between them. More specifically, consider \(K\) garments \([G_{1},G_{2},...,G_{K}]\) that are already draped individually by single layer draping network \(\mathcal{D}_{s}\). Their subscripts denote their draping order, with smaller ones being closer to the body. We can obtain their rest state maps \([M_{r}^{1},M_{r}^{2},...,M_{r}^{K}]\) as described in Sec. 3.2 of the main paper, and apply Algorithm 1 for layering them by iterating on garments following their draping order.
Note that we only train \(\mathcal{D}_{m}\) on _one_ pair of garments individually draped by \(\mathcal{D}_{s}\). Therefore, it can only resolve the intersections happening at the same layer, which leads to an extra inner loop in Algorithm
Figure 24: **Sewing process.** Left: the boundary vertices belonging to the sewing edges \(E_{c}^{f}\) and \(E_{c}^{b}\) are marked in red. Middle: The triangulation is performed between \(E_{c}^{f}\) and \(E_{c}^{b}\) to create new faces \(F_{sew}\). Right: The mesh generated by merging the front and back surfaces \(\mathbf{T}_{S_{f}}\) and \(\mathbf{T}_{S_{b}}\) with \(F_{sew}\).
1 that moves all the subsequent garments to the same \(j\)-th layer. Otherwise, intersections cannot be completely resolved when applying \(\mathcal{D}_{m}\) to two garments lying on the different layers as illustrated in Fig. 26(c).
### Recovering Multi-Layered Garments from Images
In Fig. 27, we show the segmentation masks used for the optimization in Eq. (11) in the main paper. The optimization is performed from the outer garment to the inner one, i.e., jacket (if detected) \(\rightarrow\) shirt \(\rightarrow\) trousers (or skirt). More specifically, for the first example shown in Fig. 27, we have the detected segmentation mask \(\mathbf{S}_{1}\), \(\mathbf{S}_{2}\) and \(\mathbf{S}_{3}\) for the jacket, the shirt and the trousers respectively. We first initialize a latent code \(\mathbf{z}_{1}\) for the jacket and perform the following optimization to recover its mesh
\[\mathbf{z}_{1}^{*}=\arg\min_{\mathbf{z}_{1}}L_{\text{IoU}}(R(\mathcal{G}( \mathbf{B},\mathbf{\Theta},\mathbf{z}_{1})\oplus\mathcal{M}(\mathbf{B}, \mathbf{\Theta})),\mathbf{S}_{1})\;, \tag{18}\]
Figure 27: Segmentation masks obtained from [57]. Jackets are marked in gray, shirts in purple, skirts in dark purple, trousers in red, and body parts in other colors.
\[\mathcal{G}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1})=\mathcal{D}(\mathbf{B}, \mathbf{\Theta},\mathbf{z}_{1},\mathbf{T}_{G}(\mathbf{z}_{1}))\, \tag{19}\]
Note that the rendered mask is obtained by setting the colors of the jacket and the body mesh vertices to white and black respectively. Then we fix \(\mathbf{z}_{1}\) and initialize a new latent code \(\mathbf{z}_{2}\) for the shirt and perform
\[\mathbf{z}_{2}^{*} =\arg\min_{\mathbf{z}_{2}}L_{\text{IoU}}(R(\mathcal{G}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1:2})\oplus\mathcal{M}(\mathbf{B},\mathbf{\Theta} )),\mathbf{S}_{2})\, \tag{20}\] \[\mathcal{G}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1:2}) =\mathcal{D}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1},\mathbf{T }_{G}(\mathbf{z}_{1}))\oplus\mathcal{D}(\mathbf{B},\mathbf{\Theta},\mathbf{z} _{2},\mathbf{T}_{G}(\mathbf{z}_{2})), \tag{21}\]
to recover the mesh of the shirt. The optimization for the trousers is similar: fixing \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\); initializing the latent code \(\mathbf{z}_{3}\) for the trousers; performing
\[\mathbf{z}_{3}^{*} =\arg\min_{\mathbf{z}_{3}}L_{\text{IoU}}(R(\mathcal{G}(\mathbf{B },\mathbf{\Theta},\mathbf{z}_{1:3})\oplus\mathcal{M}(\mathbf{B},\mathbf{\Theta })),\mathbf{S}_{3})\, \tag{22}\] \[\mathcal{G}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1:3}) =\mathcal{D}(\mathbf{B},\mathbf{\Theta},\mathbf{z}_{1},\mathbf{T }_{G}(\mathbf{z}_{1}))\oplus\mathcal{D}(\mathbf{B},\mathbf{\Theta},\mathbf{z} _{2},\mathbf{T}_{G}(\mathbf{z}_{2}))\oplus\mathcal{D}(\mathbf{B},\mathbf{ \Theta},\mathbf{z}_{3},\mathbf{T}_{G}(\mathbf{z}_{3})). \tag{23}\]
### Loss Terms, Network Architectures and Training Loss Terms.
The Chamfer distance loss \(\mathcal{L}_{CHD}\) of Eq. (4) in the main paper is formulated as
\[\mathcal{L}_{CHD}=\sum_{\mathbf{x}\in P}\min_{\mathbf{x}\in\mathcal{S}}|| \mathcal{A}_{\Phi}(\mathbf{x},\mathbf{z})-\mathbf{X}||_{2}^{2}+\sum_{\mathbf{ X}\in\mathcal{S}}\min_{\mathbf{x}\in P}||\mathcal{A}_{\Phi}(\mathbf{x}, \mathbf{z}),\mathbf{X}||_{2}^{2}, \tag{24}\]
where \(P\) is the panel and \(\mathcal{S}\) is the ground truth surface mesh.
The normal consistency loss \(\mathcal{L}_{normal}\) of Eq. (4) in the main paper is formulated as
\[\mathcal{L}_{normal}=\sum_{\mathbf{x}\in Fc_{P}}(1-n_{f}(\mathcal{A}_{\Phi}( \mathbf{x},\mathbf{z}))\cdot\mathbf{n}_{\mathbf{X}^{*}})+\sum_{\mathbf{X}\in Fc _{\mathcal{S}}}(1-n_{f}(\mathcal{A}_{\Phi}(\mathbf{x}^{*},\mathbf{z}))\cdot \mathbf{n}_{\mathbf{X}}), \tag{25}\]
\[\mathbf{X}^{*}=\operatorname*{argmin}_{\mathbf{X}\in Fc_{\mathcal{S}}}|| \mathbf{X}-\mathcal{A}_{\Phi}(\mathbf{x},\mathbf{z})||_{2},\ \ \mathbf{x}^{*}= \operatorname*{argmin}_{\mathbf{x}\in Fc_{\mathcal{P}}}||\mathbf{X}-\mathcal{ A}_{\Phi}(\mathbf{x},\mathbf{z})||_{2} \tag{26}\]
where \(Fc_{\mathcal{P}}\) and \(Fc_{\mathcal{S}}\) are the face centers of the panel mesh and the surface mesh, and \(\mathbf{n}_{\mathbf{X}}\) represents the normal of \(\mathbf{X}\). \(n_{f}(\cdot)\) is the function that computes the normal for the face that \(\mathcal{A}_{\Phi}(\mathbf{x},\mathbf{z})\) belongs to.
Network Architectures.For each garment category, i.e. shirts, skirts, and trousers, we train one separate set of networks \(\{\mathcal{I}_{\Theta},\mathcal{A}_{\Phi},\mathcal{D}_{s}\}\). \(\mathcal{D}_{m}\) is shared by all garment categories. Our models are implemented as the following.
* Pattern parameterization network \(\mathcal{I}_{\Theta}\): We use two separate networks \(\mathcal{I}_{\Theta_{f}}\) and \(\mathcal{I}_{\Theta_{b}}\) to learn the pattern parameterization for the front and back panels. Each of them is implemented as an MLP with Softplus activations.
* UV parameterization network \(\mathcal{A}_{\Phi}\): We use two separate networks \(\mathcal{A}_{\Phi_{f}}\) and \(\mathcal{A}_{\Phi_{b}}\) to learn the UV parameterization for the front and back surfaces. Both of them have the same architecture, which is a 7-layer MLP with a skip connection from the input layer to the middle and Softplus activations.
* Latent code \(\mathbf{z}\): The dimension is 32.
* Diffuse skinning weight model \(w\): A 9-layer MLP with leaky ReLU activations and an extra Softmax layer at the end to normalize the output.
* Displacement network \(\mathcal{D}_{s}\): A 10-layer MLP with a skip connection from the input layer to the middle and leaky ReLU activations.
* Layering network \(\mathcal{D}_{m}\): A U-Net with 4 convolution blocks and 4 deconvolution blocks.
Training.We use the Adam [14] optimizer for training our networks. The batch sizes, the learning rates and the numbers of iterations for training are summarized in Table. 7. The hyperparameters of the training losses are summarized in Table. 8. \(\mathcal{I}_{\Theta}\), \(\mathcal{A}_{\Phi}\), \(w\) and \(\mathcal{D}_{m}\) are trained with a TESLA V100 GPU, while \(\mathcal{D}_{s}\) is trained with 3 GPUs. During the training of \(\mathcal{D}_{m}\), we randomly select two garments as the outer and inner layers, and let the model learn to resolve intersections between them.
## Appendix C Extension to Sewing Patterns with More Panels
In our experiments, each garment's sewing pattern consists of two panels, the front and the back. However, our ISP can be extended to patterns with any number of panels. For example, we can train \(\mathcal{I}_{\Theta}\) and \(\mathcal{A}_{\Phi}\) on a database of sewing patterns with six panels as shown in Figure 28, using the same training protocol described in the main paper. After training, we can use them to reconstruct the panels and surfaces and produce the sewed mesh as illustrated in Fig. 29. Adding more panels does not result in better reconstructions. For this reason and for the sake of simplicity, we use a model with 2 panels as our default setting.
## Appendix D Failure Cases
Fig. 30 presents draping results as the number of shirts increases. We observe that the model produces unrealistic deformation when the number of shirts is greater than four. This behavior occurs because our multi-layer draping model \(\mathcal{D}_{m}\) is only trained on garments obtained by single layer draping as described in Section 3.2 of the main paper. In this scenario, the garments are relatively close to
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Network & \(\mathcal{I}_{\Theta}\) & \(\mathcal{A}_{\Phi}\) & \(w\) & \(\mathcal{D}_{s}\) & \(\mathcal{D}_{m}\) \\ \hline Learning Rate & \(10^{-4}\) & \(10^{-4}\) & \(10^{-4}\) & \(5\times 10^{-5}\) & \(10^{-4}\) \\ Batch Size & 50 & 50 & 6000 & 30 & 6 \\ Iterations & 70000 & 70000 & 2000 & 20000 & 30000 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Training hyperparameters.
\begin{table}
\begin{tabular}{c|c c|c|c|c c} \hline \hline Loss & \multicolumn{2}{c|}{\(\mathcal{L}_{\mathcal{I}}\)} & \multicolumn{2}{c|}{\(\mathcal{L}_{\mathcal{A}}\)} & \multicolumn{2}{c}{\(\mathcal{L}_{\mathcal{D}_{m}}\)} \\ & \(\lambda_{CE}\) & \(\lambda_{reg}\) & \(\lambda_{n}\) & \(\lambda_{c}\) & \(\lambda_{g}\) & \(\lambda_{r}\) & \(\epsilon\) \\ \hline Value & 0.01 & 0.001 & 0.01 & 1 & 0.5 & 1 & 0.005 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Training loss hyperparameters for \(\mathcal{L}_{\mathcal{I}}\), \(\mathcal{L}_{\mathcal{A}}\) and \(\mathcal{L}_{\mathcal{D}_{m}}\).
Figure 28: **The 6-panel sewing pattern. (a) The 3D mesh surface for a shirt. The corresponding surface for each panel is denoted in different colors. (b) The six 2D panels for the front, the back, the right front/back sleeves and the left front/back sleeves.**
Figure 29: **Reconstruction with 6 panels. For both the left and right examples, we show the reconstructed panels on the top and the reconstructed surfaces and the sewed meshes at the bottom. Colors on panels denote edge labels predicted by \(\mathcal{I}_{\Theta}\).**the body. When applied to cases with more shirts (typically over four), the model may generate unpredictable results with the shirts moving far away from the body. However, we note that this issue can be resolved by finetuning the model progressively on layered garments. We also consider that wearing more than four shirts is not a common scenario.
Figure 30: Draping increasingly many shirts: from 1 (left) to 7 (right). | ## Review
### Summary
The paper presents a novel approach to draping multi-layer garments on human body models using a method inspired by traditional clothing production. It introduces a unique garment representation as a set of 2D panels mapped to 3D surfaces, leveraging signed distance functions and an AtlasNet-style network for draping without physics-based simulation. Extensive experiments demonstrate the method's effectiveness in garment reconstruction, editing, and handling interpenetration issues, showing advantages over existing techniques. While it has made significant contributions to the field, there are areas requiring further clarification and validation, particularly regarding the generalization capabilities and robustness of the proposed method.
### Strengths
- Addresses a significant and challenging problem in automatic modeling and perception of multi-layer clothing.
- Strong technical contributions, including the clever use of neural fields and AtlasNet for draping.
- Comprehensive evaluation across multiple relevant tasks, showing the versatility of the method.
- Well-structured presentation with clear mathematical formulations and informative illustrations.
- Demonstrates good performance in reducing interpenetration and improving draping realism.
### Weaknesses
- Limited variety in the training and test datasets may affect generalization ability.
- The effectiveness of the method in handling complex garment types is not fully explored.
- Lack of quantitative results on collision rates and interpenetration between layers.
- Some aspects of the writing could be clearer, and additional detail on user study findings is needed.
- Concerns regarding the reproducibility of the complex system and sufficient details on data availability.
### Questions
- How is the potential code for recovering the human body from the image obtained?
- What are the results for garment categories other than shirts?
- How would the model perform on complex garment types like robes or double-breasted suits?
- Is there overlapping between the training and test sets, and what is the distribution of different human bodies in the dataset?
- Could the authors clarify the necessity of unique latent vectors for each garment?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is sound, but there are some concerns regarding the robustness and generalization of the results.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is well-structured and clear, but some areas could be improved for better clarity and detail.
### Contribution
**Score:** 3
**Description:** 3 = good; while the contributions are notable, further validation and exploration of the method's effectiveness in a wider context are needed.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact, but it requires further improvements and clarifications.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a significant advancement in the field of garment draping with notable technical contributions and a well-structured presentation. While there are some limitations regarding the generalization and robustness of the method, the strengths and potential impact justify acceptance, especially considering the improvements made during the rebuttal process.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# DOSE: Diffusion Dropout with Adaptive Prior for Speech Enhancement
Wenxin Tai\({}^{1}\), Yue Lei\({}^{1}\), Fan Zhou\({}^{1,2}\)1, Goce Trajcevski\({}^{3}\), Ting Zhong\({}^{1,2}\)
University of Electronic Science and Technology of China
Kashi Institute of Electronics and Information Industry
Iowa State University
Footnote 1: Corresponding author: [email protected]
###### Abstract
Speech enhancement (SE) aims to improve the intelligibility and quality of speech in the presence of non-stationary additive noise. Deterministic deep learning models have traditionally been used for SE, but recent studies have shown that generative approaches, such as denoising diffusion probabilistic models (DDPMs), can also be effective. However, incorporating condition information into DDPMs for SE remains a challenge. We propose a _model-agnostic_ method called DOSE that employs two efficient condition-augmentation techniques to address this challenge, based on two key insights: (1) We force the model to prioritize the condition factor when generating samples by training it with dropout operation; (2) We inject the condition information into the sampling process by providing an informative adaptive prior. Experiments demonstrate that our approach yields substantial improvements in high-quality and stable speech generation, consistency with the condition factor, and inference efficiency. Codes are publicly available at [https://github.com/ICDM-UESTC/DOSE](https://github.com/ICDM-UESTC/DOSE).
## 1 Introduction
Speech enhancement (SE) aims to improve the intelligibility and quality of speech, particularly in scenarios where degradation is caused by non-stationary additive noise. It has significant practical implications in various fields such as telecommunications [1], medicine [2], and entertainment [3]. Modern deep learning models are often used to learn a deterministic mapping from noisy to clean speech. While deterministic models have long been regarded as more powerful in the field of SE, recent advancements in generative models [4, 5] have significantly closed this gap.
One such generative approach is based on using denoising diffusion probabilistic models (DDPMs) [6, 7], which have been shown to effectively synthesize natural-sounding speech. Several diffusion enhancement models have been developed [4, 5, 8], which try to learn a probability distribution over the data and then generate clean speech conditioned on the noisy input. A key challenge in using diffusion enhancement models is how to effectively incorporate condition information into learning and generating faithful speech [9, 10, 8]. Previous works address this issue through designing specific condition-injecting strategies [4, 9, 11] or devising complex network architectures [10, 5].
We conduct a thorough examination to understand the limitation of diffusion-based SE methods and find that diffusion enhancement models are susceptible to _condition collapse_, where the primary cause of inconsistent generation is the _non-dominant position of the condition factor_. We thus introduce a new paradigm to effectively incorporate condition information into the diffusion enhancement models. Specifically, we propose a Diffusion-drOpout Speech Enhancement method (DOSE), which is a model-agnostic SE method (Figure 1) that employs two efficient condition-augmentation techniques: (1) During training, we randomly drop out intermediate-generated samples. This dropout mechanism guides the model's attention toward the condition factors; (2) Instead of letting the model generate samples from scratch (Gaussian distribution), we employ an adaptive prior derived from the conditional factor to generate samples. Experiments on benchmark datasets demonstrate that our method surpasses recent diffusion enhancement models in terms of both accuracy and efficiency. Additionally, DOSE produces more natural-sounding speech and exhibits stronger generalization capabilities compared to deterministic mapping-based methods using the same network architecture.
## 2 Related works
There are two main categories of diffusion-based SE methods: (1) designing specific conditioning-integrating strategies [4; 9; 11; 5], or (2) generating speech with an auxiliary condition optimizer[12; 10; 8]. The first category considerates noisy speech in the diffusion (or reverse) process, either by linearly interpolating between clean and noisy speech along the process [4; 9], or by defining such a transformation within the drift term of a stochastic differential equation (SDE) [11; 5].
Works from the second category rely on an auxiliary condition optimizer - a generator (diffusion model) synthesizes clean speech and a condition optimizer informs what to generate [12; 10; 8]. Both the generator and condition optimizer have the ability to denoise, with the latter undertaking the core part. Given the challenges in leveraging condition information [8], diffusion-based SE methods within this category often necessitate specific network architecture design to guarantee the participation of condition factors.
In a paradigm sense, our method is quite similar but different to the second branch - unlike previous approaches that require additional auxiliary networks, DOSE is an end-to-end diffusion-based SE method. In addition, DOSE is model-agnostic that does not need any specific network design to guarantee consistency between the generated sample and its corresponding condition factor.
## 3 Preliminaries
We now provide a brief introduction to the diffusion probabilistic model (diffusion models, for short), the definition of speech enhancement, and the condition collapse problem.
### Diffusion models
A diffusion model [13; 6] consists of a forward (or, diffusion) process and a reverse process. Given a data point \(\mathbf{x}_{0}\) with probability distribution \(p(\mathbf{x}_{0})\), the forward process gradually destroys its data structure by repeated application of the following Markov diffusion kernel:
\[p(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{i}}\mathbf{x}_{t- 1},\beta_{t}\mathbf{I}),\quad t\in\{1,2,\cdots,T\}, \tag{1}\]
where \(\beta_{1},\cdots,\beta_{T}\) is a pre-defined noise variance schedule. With enough diffusion step \(T\), \(p(\mathbf{x}_{T})\) converges to the unit spherical Gaussian distribution. Based on the Markov chain, the marginal distribution at arbitrary timestep \(t\) has the following analytical form:
\[p(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}}_{t}\mathbf{x}_ {0},(1-\bar{\alpha}_{t})\mathbf{I}),\quad t\in\{1,2,\cdots,T\}, \tag{2}\]
where \(\bar{\alpha}_{t}=\prod_{s=1}^{t}(1-\beta_{s})\).
Figure 1: An illustration of the proposed DOSE. DOSE consists two primary procedures: (1) training a condition diffusion model using dropout operation, and (2) generating speech using a conditional diffusion model equipped with the adaptive prior.
As for the reverse process, it aims to learn a transition kernel from \(\mathbf{x}_{t}\) to \(\mathbf{x}_{t-1}\), which is defined as the following Gaussian distribution [6]:
\[p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mathbf{\mu}_{\mathbf{ \theta}}(\mathbf{x}_{t},t),\mathbf{\Sigma}(\mathbf{x}_{t},t)), \tag{3}\]
where \(\mathbf{\theta}\) is the learnable parameter and \(\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{1-\beta_{t}}}(\mathbf{x}_{t}- \frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t))\) denotes the mean of \(\mathbf{x}_{t-1}\), which is obtained by subtracting the estimated Gaussian noise \(\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) in the \(\mathbf{x}_{t}\). With such a learned transition kernel, one can approximate the data distribution \(p(\mathbf{x}_{0})\) via:
\[p_{\mathbf{\theta}}(\mathbf{x}_{0})=\int p_{\mathbf{\theta}}(\mathbf{x}_{T})\prod_{t=1}^{T}p_ {\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})d\mathbf{x}_{1:T}, \tag{4}\]
where \(p_{\mathbf{\theta}}(\mathbf{x}_{T})=\mathcal{N}(\mathbf{x}_{T};\mathbf{0},\mathbf{I})\).
### Problem formulation
Speech enhancement refers to methods that try to reduce distortions, make speech sounds more pleasant, and improve intelligibility. In real environments, the monaural noisy speech \(\mathbf{y}\) in the time domain can be modeled as:
\[\mathbf{y}=\mathbf{x}+\mathbf{n} \tag{5}\]
where \(\mathbf{x}\) and \(\mathbf{n}\) denote clean and noise signals, respectively. For human perception, the primary goal of speech enhancement is to extract \(\mathbf{x}\) from \(\mathbf{y}\). Mapping-based speech enhancement methods directly optimize \(p_{\mathbf{\theta}}(\mathbf{x}|\mathbf{y})\), while diffusion enhancement methods generate clean samples through a Markov process \(p_{\mathbf{\theta}}(\mathbf{x}_{0:T-1}|\mathbf{x}_{1:T},\mathbf{y})\).
### Condition Collapse in diffusion enhancement models
The _condition collapse_ problem in speech enhancement was first proposed in [8] and it refers to the limited involvement of the condition factor during conditional diffusion training, resulting in inconsistencies between the generated speech and its condition factor.
In this work, we argue that the condition factor \(\mathbf{y}\) indeed participates and helps the intermediate-generated sample \(\mathbf{x}_{t}\) approximate \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\). Our assertion is supported by the experiment depicted in the left part of Figure 2 - the diffusion model equipped with the condition factor exhibits a lower loss curve compared to the unconditional one2. To better understand the condition collapse phenomenon, we devise two variants that explicitly modify the mutual information between the condition factor and the model's output (Figure 2 (middle)). We use skip connections to add the condition factor to multiple layers, forcing the likelihood of maintaining a strong connection between the condition factor and output features. Since the dependence of the output on any hidden state in the hierarchy becomes weaker as one moves further away from the output in that hierarchy (cf. [14]), using skip connections can explicitly enhance connections between the generated sample and condition factor.
Footnote 2: We use DiffWave [7] as basic architecture and use the same experimental settings as [4; 9] – the only difference being the change in the way of condition-injecting since most speech enhancement methods will directly use noisy speech as the condition factor, rather than Mel-spectrogram.
Figure 2: Investigation of the condition collapse problem. From left to right: (1) comparison of loss curves between unconditional and conditional diffusion models; (2) three variants; (3) PESQ performance of different variants, (-) represent the unprocessed speech.
As shown in Figure 2 (right), an increase in mutual information (connections) leads to a significant improvement in the consistency between the generated sample and the condition factor (\(a\to b\)). However, it requires a meticulously designed model to guarantee its effectiveness (\(b\to c\)). While previous studies [5; 10; 8] focus on explicitly enhancing the consistency between the output speech and condition factor through specific network architecture design, we explore the possibility of a solution independent of the model architecture. This would broaden the applicability of our method, as it enables slight modifications to existing deterministic mapping-based models to transform them into diffusion enhancement models.
## 4 Methodology
Considering the diffusion model provides a transition function from \(\mathbf{x}_{t}\) to \(\mathbf{x}_{t-1}\), typical condition generation process is represented as:
\[p_{\mathbf{\theta}}(\mathbf{x}_{0}|\mathbf{y})=\int\underbrace{p(\mathbf{x}_{T})}_{\text{Prior} }\prod_{t=1}^{T}\underbrace{p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{y})}_{ \text{Condition}}d\mathbf{x}_{1:T},\quad\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{x}_{T};\mathbf{0},\mathbf{I}). \tag{6}\]
Our experiments above indicate that \(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{y})\) will easily collapse to \(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\), resulting in the condition generation process degenerating into a vanilla unconditional process:
\[\int p_{\mathbf{\theta}}(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\mathbf{\theta}}(\mathbf{x}_{t-1}| \mathbf{x}_{t},y)d\mathbf{x}_{1:T}\Rightarrow\int p_{\mathbf{\theta}}(\mathbf{x}_{T})\prod_{t= 1}^{T}p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})d\mathbf{x}_{1:T}. \tag{7}\]
As a result, facilitating automatic learning of the joint distribution for both clean and noisy speech samples does not work well for the speech enhancement task.
### Condition augmentation I: Adaptive Prior
Let's revisit Eq. (6): since we cannot easily inject the condition factor into the condition term, how about the prior term? For example, we can modify the condition generation process as:
\[p_{\mathbf{\theta}}(\mathbf{x}_{0}|\mathbf{y})=\int\underbrace{p(\mathbf{x}_{\tau}|\mathbf{y})}_{ \text{Conditional}}\prod_{t=1}^{\tau}\underbrace{p_{\mathbf{\theta}}(\mathbf{x}_{t-1}| \mathbf{x}_{t})}_{\text{Unconditional}}d\mathbf{x}_{1:\tau}, \tag{8}\]
where \(p(\mathbf{x}_{\tau}|\mathbf{y})\) is formulated as \(p(\mathbf{x}_{\tau}|\mathbf{y})=\mathcal{N}(\mathbf{x}_{\tau};\sqrt{\bar{\alpha}_{\tau}}\bm {y},(1-\bar{\alpha}_{\tau})\mathbf{I})\). The following propositions verify the feasibility of our proposal.
**Proposition 1**.: _For any \(\xi>0\) such that \(0<\xi<M\) for some finite positive value \(M\), there exists a positive value \(\tau\in\{0,\cdots,T\}\) that satisfies:_
\[D_{KL}\left(p(\mathbf{x}_{t}|\mathbf{x})\|p(\mathbf{x}_{t}|\mathbf{y})\right)\leq\xi,\quad \forall\tau\leq t\leq T, \tag{9}\]
_where \(p(\mathbf{x}_{t}|\mathbf{c})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{c},(1- \bar{\alpha}_{t})\mathbf{I})\)._
**Remark 1**.: _This proposition indicates that, given a tolerable margin of error \(\xi\) and a well-trained diffusion model, we can always find a suitable \(\tau\) such that we are able to recover the clean speech \(\mathbf{x}\) from its noisy one \(\mathbf{y}\) using Eq. (8)._
While **Proposition 1** allows us to generate clean speech \(\mathbf{x}\) given the noisy speech \(\mathbf{y}\) using Eq. (8), it does not guarantee that our model will achieve successful recovery with a high probability.
**Proposition 2**.: _Let \(\mathbf{x}\) be the clean sample, \(\mathbf{y}\) be it's corresponding noisy one, and \(\mathbf{x}^{\prime}\) be any neighbor from the neighbor set \(\mathcal{S}(\mathbf{x})\). Then diffusion enhancement models can recover \(\mathbf{x}\) with a high probability if the following inequality is satisfied:_
\[\log\left(\frac{p(\mathbf{x})}{p(\mathbf{x}^{\prime})}\right)>\frac{1}{2\sigma_{t}^{2} }\left(\|\mathbf{x}-\mathbf{y}\|_{2}^{2}-\|\mathbf{x}^{\prime}-\mathbf{y}\|_{2}^{2}\right), \quad\forall\mathbf{x}^{\prime}\in\mathcal{S}(\mathbf{x}), \tag{10}\]
_where \(\sigma_{t}^{2}=\frac{1-\bar{\alpha}_{t}}{\bar{\alpha}_{t}}\)._
**Remark 2**.: _Assuming that the condition factor \(\mathbf{y}\) is closer to \(\mathbf{x}\) than to \(\mathbf{x}^{\prime}\), we obtain a non-positive right-hand side (RHS). For a given \(\mathbf{x}\), the left-hand side (LHS) value is fixed, and to ensure the inequality always holds, a smaller \(\sigma_{t}^{2}\) is preferred._As shown in Figure 3, \(\sigma_{t}^{2}\) will increase as the timestep \(t\) increases. Thus, according to **Proposition 2**, we should choose a small \(\tau\) for Eq. (8) to maximize the probability of successfully recovering the clean speech from the noisy one. However, constrained by **Proposition 1**, \(\tau\) cannot be too small. In other words, the clean speech distribution \(p(\mathbf{x}_{\tau})\) and the noisy speech distribution \(p(\mathbf{y}_{\tau})\) will get closer over the forward diffusion process, and the gap \(|\mathbf{n}|=|\mathbf{y}-\mathbf{x}|\) between the noisy speech and the clean one will indeed be "washed out" by the increasingly added noise. Since the original semantic information will also be removed if \(\tau\) is too large, there should be a trade-off when we set \(\tau\) for the diffusion enhancement model.
**Condition optimizer.** We find that both propositions are correlated with the condition factor \(\mathbf{y}\). If we can reduce the gap between the condition factor and clean speech, we can choose a smaller \(\tau\), effectively increasing the likelihood of recovering clean speech. One simple idea is to employ a neural network \(f_{\mathbf{\psi}}\) to optimize the condition factor, as demonstrated in [15]. Accordingly, we can rewrite Eq. (8) as:
\[p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x}_{0}|\mathbf{y})=\int p_{\mathbf{\psi}}(\mathbf{x}_{\tau}| \mathbf{y})\prod_{t=1}^{\tau}p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})d\mathbf{x}_{1: \tau}, \tag{11}\]
where \(p_{\mathbf{\psi}}(\mathbf{x}_{\tau}|\mathbf{y})=\mathcal{N}(\mathbf{x}_{\tau};\sqrt{\bar{\alpha }_{\tau}}f_{\mathbf{\psi}}(\mathbf{y}),(1-\bar{\alpha}_{\tau})\mathbf{I})\).
In practice, we should also consider failure cases of the condition optimizer in complex scenarios, especially the issue of excessive suppression that has been reported in recent literature [16; 17; 18]. To mitigate this issue, we use \(0.5\epsilon+0.5\mathbf{y}\) (like a simple residual layer) as a mild version of the condition factor:
\[p_{\mathbf{\psi}}(\mathbf{x}_{\tau}|\mathbf{y})=\mathcal{N}(\mathbf{x}_{\tau};0.5\sqrt{\bar{ \alpha}_{\tau}}\left(f_{\mathbf{\psi}}(\mathbf{y})+\mathbf{y}\right),(1-\bar{\alpha}_{\tau })\mathbf{I}). \tag{12}\]
We call \(p_{\mathbf{\psi}}(\mathbf{x}_{\tau}|\mathbf{y})\) the adaptive prior as it varies with different noisy samples \(\mathbf{y}\).
### Condition augmentation II: Diffusion Dropout
Aside from changing the prior \(p(\mathbf{x}_{T})\) to conditional prior \(p_{\mathbf{\psi}}(\mathbf{x}_{\tau}|\mathbf{y})\), we also optimize the condition term \(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{y})\). Instead of designing specific condition-injecting strategies [4; 9; 11] or devising complicated network architecture [8; 10; 5], we attempt to "do subtraction" by discarding some shared (intermediate-generated samples & condition factor) and important (target-related) information from intermediate-generated samples. Naturally, if we discard some information from \(\mathbf{x}_{t}\), then the diffusion enhancement model is forced to use the condition factor \(\mathbf{y}\) to recover the speech. Taking a further step, we can even discard the entire \(\mathbf{x}_{t}\), as the condition factor \(\mathbf{y}\) alone is sufficient for recovering the clean speech \(\mathbf{x}_{0}\) (this is what deterministic models do). To this end, we define a neural network \(f_{\mathbf{\theta}}(d(\mathbf{x}_{t},p),\mathbf{y},t)\) to approximate \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\):
\[p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{y})=\mathcal{N}(\mathbf{x}_{t-1};f_{ \mathbf{\theta}}(d(\mathbf{x}_{t},p),\mathbf{y},t),\mathbf{\Sigma}(\mathbf{x}_{t},t)), \tag{13}\]
where \(d(\mathbf{x}_{t},p)\) is the dropout operation:
\[d(\mathbf{x}_{t},p)=\begin{cases}\mathbf{x}_{t}&\text{ if }r=1\\ \mathbf{\epsilon}&\text{ if }r=0\end{cases},\quad r\sim\text{Bernoulli}(1-p). \tag{14}\]
### DOSE training
Ho et al. [6] and much of the following work choose to parameterize the denoising model through directly predicting \(\mathbf{\epsilon}\) with a neural network \(\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},t)\), which implicitly sets:
\[\hat{\mathbf{x}}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}\left(\mathbf{x}_{t}-\sqrt{1- \bar{\alpha}_{t}}\mathbf{\epsilon_{\theta}}\right). \tag{15}\]
In this case, the training loss is also usually defined as the mean squared error in the \(\mathbf{\epsilon}\)-space \(\|\mathbf{\epsilon}-\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},t)\|_{2}^{2}\). Although this standard specification works well for training an unconditional diffusion model, it is not suited for DOSE - for two reasons.
Figure 3: The change curves of \(\log_{10}\sigma_{t}^{2}\). Elements in legend are \(T\), \(\beta_{1}\), \(\beta_{T}\) respectively.
First, we cannot estimate \(\mathbf{\epsilon}\) without the help of \(\mathbf{x}_{t}\) because \(\mathbf{\epsilon}\) and \(\mathbf{y}\) are independent. Second, as discussed earlier, we want DOSE to start with a small timestep and we strive to make \(\tau\) small. However, as \(\tau\) approaches zero, small changes in \(\mathbf{x}\)-space have an increasingly amplified effect on the implied prediction in \(\mathbf{\epsilon}\)-space (Eq. (15)). In other words, the efforts made by diffusion enhancement models become so negligible that diffusion models lose their ability to calibrate the speech at small timesteps.
So, we need to ensure that the estimation of \(\mathbf{\hat{x}}_{0}\) remains flexible as the timestep \(t\) gets smaller. Considering the equivalently of the \(\mathbf{\epsilon}\)-space loss \(\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\|_{2}^{2}\) to a weighted reconstruction loss in \(\mathbf{x}\)-space \(\frac{1}{\sigma_{t}^{2}}\|\mathbf{x}_{0}-\mathbf{x}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\|_{2} ^{2}\), we can directly estimate the clean speech \(\mathbf{x}_{0}\) at each timestep \(t\):
\[\mathcal{L}=\mathbb{E}_{\mathbf{x}_{0}\sim p(\mathbf{x}),t\in\{1,\ldots,T\}}\left[\| \mathbf{x}_{0}-f_{\mathbf{\theta}}(d(\mathbf{x}_{t},p),\mathbf{y},t)\|_{2}^{2}\right] \tag{16}\]
### DOSE inference
After training, the ideal scenario is that \(p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{y})\) approximates \(p(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\) precisely, enabling us to generate clean speech using Eq. (6). However, when applied in practice, it is difficult to completely eliminate errors (both sample error and true error). If these errors are not effectively managed or corrected during the generation process, the quality of the generated samples may deteriorate, leading to artifacts, blurriness, etc [19, 20]. This issue is particularly pronounced when using diffusion models for fine-grained conditional generation tasks, as diffusion models require a large number of steps to generate samples, which will significantly reduce the consistency between the generated sample and its condition factor (see SS5.3, Figure 6).
The adaptive prior (Sec 4.1) provides an opportunity to address the error accumulation issue. Specifically, we can select a suitable \(\tau\) smaller than \(T\), conditioned on an adaptive prior, and generate speech in fewer steps. We can extend Eq. 11 by transforming the unconditional diffusion enhancement model into a conditional one:
\[p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x}_{0}|y)=\int p_{\mathbf{\psi}}(\mathbf{x}_{\tau}|\mathbf{y })\prod_{t=1}^{\tau}p_{\mathbf{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{y})d\mathbf{x}_{1: \tau}, \tag{17}\]
and the number of sampling steps is reduced from \(T\) to \(\tau+1\).
Readers familiar with diffusion models may recall that the standard process repeatedly applies a "single-step" denoising operation \(\mathbf{x}_{t-1}=denoise(\mathbf{x}_{t};t)\) that aims to convert a noisy sample at some timestep \(t\) to a (slightly less) noisy sample at the previous timestep \(t-1\). In fact, each application of the one-step denoiser consists of two steps: (1) an estimation of the fully denoised sample \(\mathbf{x}_{0}\) from the current timestep \(t\), and (2) computing a (properly weighted, according to the diffusion model) average between this estimated denoised sample and the noisy sample at the previous timestep \(t-1\). Thus, instead of performing the entire \(\tau\)-step diffusion process to denoise a sample, it is also possible to run \(denoise\) once and simply output the estimated sample in one shot [21]. Accordingly, Eq. (17) can be further rewritten as:
\[p_{\mathbf{\theta},\mathbf{\psi}}(\mathbf{x}_{0}|\mathbf{y})=\int p_{\mathbf{\psi}}(\mathbf{x}_{\tau}| \mathbf{y})p_{\mathbf{\theta}}(\mathbf{x}_{0}|\mathbf{x}_{\tau},\mathbf{y})d\mathbf{x}_{\tau} \tag{18}\]
We can even achieve DOSE without the condition optimizer \(f_{\mathbf{\psi}}(\cdot)\) - using conditional diffusion enhancement model instead. For example, we can generate clean speech via:
\[p_{\mathbf{\theta}}(\mathbf{x}_{0}|\mathbf{y})=\int\int p_{\mathbf{\theta}}(\mathbf{\hat{x}}_{\tau _{2}}|\mathbf{y}_{\tau_{1}},\mathbf{y})p_{\mathbf{\theta}}(\mathbf{x}_{0}|\mathbf{\hat{x}}_{\tau_{ 2}},\mathbf{y})d\mathbf{\hat{x}}_{\tau_{2}}d\mathbf{y}_{\tau_{1}}, \tag{19}\]where \(\tau_{1},\tau_{2}\) (\(\tau_{2}<\tau_{1}\leq T\)) are two pre-defined hyper-parameters. The motivation behind Eq. (19) is that, once we have trained a neural network \(f_{\mathbf{\theta}}(\mathbf{x}_{t},\mathbf{y},t)\) that can accurately estimate \(\mathbf{x}_{0}\) (Eq. (16)), according to the theoretical analysis in Sec 4.1, we can first choose a suitable value for \(\tau_{1}\) to ensure a relatively good approximation of \(\mathbf{x}_{0}\):
\[\hat{\mathbf{x}}_{0}=f_{\mathbf{\theta}}(\mathbf{y}_{\tau_{1}},\mathbf{y},\tau_{1})\approx f_{ \mathbf{\theta}}(\mathbf{x}_{\tau_{1}},\mathbf{y},\tau_{1}) \tag{20}\]
In the second step, once we have obtained a good condition factor, we can choose a smaller timestep \(\tau_{2}<\tau_{1}\) to get a better estimation of \(\mathbf{x}_{0}\) than \(\hat{\mathbf{x}}_{0}\) generated in the first step.
**Summary.** DOSE has three important benefits: (1) By dropping \(\mathbf{x}_{t}\) entirely, we make the condition factor \(\mathbf{y}\) the "protagonist", automatically enhancing the consistency between the generated sample and the condition factor. (2) By training the model with this modified training objective, DOSE can perform well not only on Gaussian noise (\(\mathbf{x}_{t}\rightarrow\mathbf{x}_{0}\)) but also on various types of non-Gaussian noise (\(\mathbf{y}\rightarrow\mathbf{x}_{0}\)). (3) DOSE is efficient (2 steps), faster than existing diffusion enhancement models.
## 5 Experiments
We compare DOSE with prevailing diffusion enhancement methods and deterministic mapping-based enhancement methods in SS5.1. We conduct a counterfactual verification to understand the intrinsic mechanism of DOSE in SS5.2. We show two visual cases of excessive suppression and error accumulation in SS5.3. While providing a self-contained version of our main results, we note that we also have additional quantitative observations reported in the Appendices. Specifically, we compare DOSE with other baselines via subjective evaluation (Appendix A.2); We investigate the significance of the proposed adaptive prior and explain why we need to use a mild version of the condition factor (Appendix A.3); We examine the effect of our new training objective and demonstrate the necessity of using it (Appendix A.4); We explain why we use two steps in speech generation (Appendix A.5); We provide parameter sensitivity experiments (Appendix A.6); We show plenty of visual cases of excessive suppression and error accumulation (Appendix A.7 and A.8). To help readers better understand our research, we include a discussion subsection in Appendix A.9. Specifically, we: (1) Analyze the reasons behind the superior generalizability of diffusion enhancement models compared to deterministic mapping-based models (from the robust training perspective); (2) Explain why we use 0.5 in the mild version of the condition factor; (3) Discuss the broader impacts of speech enhancement methods.
**Dataset and baselines.** Following previous works [4; 9; 8], we use the VoiceBank-DEMAND dataset [22; 23] for performance evaluations. To investigate the generalization ability of models, we use CHiME-4 [24] as another test dataset following [9], i.e., the models are trained on VoiceBank-DEMAND and evaluated on CHiME-4. We compare our model with recent open-sourced diffusion enhancement models such as DiffuSE [4], CDifuSE [9], SGMSE [11], SGMSE+ [5], and DR-DiffuSE [8]. Since the only difference between SGMSE+ and SGMSE is their network architecture, we compare our model with just one of them.
**Evaluation metrics.** We use the following metrics to evaluate SE performance: the perceptual evaluation of speech quality (PESQ) [25], short-time objective intelligibility (STOI) [26], segmental signal-to-noise ratio (SSNR), the mean opinion score (MOS) prediction of the speech signal distortion (CSIG) [27], the MOS prediction of the intrusiveness of background noise (CBAK) [27] and the MOS prediction of the overall effect (COVL) [27]. Besides these metrics, we also design two MOS metrics (MOS and Similarity MOS) for subjective evaluation.
**Configurations.** To ensure a fair comparison, we keep the model architecture exactly the same as that of the DiffWave model [7] for all methods3. DiffWave takes 50 steps with the linearly spaced training noise schedule \(\beta_{t}\in\left[1\times 10^{-4},0.035\right]\)[4]. We train all methods for 300,000 iterations using 1 NVIDIA RTX 3090 GPU with a batch size of 16 audios. We select the best values for \(\tau_{1}\) and \(\tau_{2}\) according to the performance on a validation dataset, a small subset (10%) extracted from the training data. More experiment settings can be found in Appendix A.10.
Footnote 3: Since the focus of our work is on studying the capabilities of diffusion dropout and adaptive prior for consistency enhancement, we use off-the-shelf architectures to avoid confounding our findings with model improvements. This decision (using DiffWave) rests on both its widely validated effectiveness and the minimal changes it required in the baseline experimental setup.
### Performance comparison
We compare our method with previous diffusion enhancement methods and summarize our experimental results in Table 1. We observe that: **(1)** Diffusion enhancement methods have better generalizability than deterministic methods. **(2)** Methods with specific condition-injecting strategies, such as DiffuSE, CDifuSE, and SGMSE, have strong generalization but perform slightly worse than deterministic mapping-based methods in matched scenarios. **(3)** Method (DR-DiffuSE) with auxiliary condition optimizer, performs better in matched scenarios and shows a slight improvement in mismatched scenarios. **(4)** Our method performs well in both matched and mismatched scenarios and is on par with state-of-the-art diffusion enhancement models while requiring fewer steps.
### Counterfactual verification
We perform a counterfactual verification to gain insights into the underlying mechanism of DOSE. To verify whether dropout can increase the "discourse power" of the conditional factor, we keep the condition factor \(\mathbf{y}\) fixed and reverse the intermediate-generated speech at a specific step (\(reverse(\mathbf{x}_{t})\)). This reversed intermediate-generated speech is called a counterfactual sample. Notably, if the final generated speech is more similar to the condition factor than the counterfactual speech, we can conclude that the condition factor plays a dominant role in the generation process. Otherwise, we can say that the condition factor is less influential.
As shown in Figure 4, we compare the performance of two models with different dropout probabilities (0.1 vs. 0.9). We have two findings here: **(1)** A higher dropout probability encourages the model to prioritize the condition factor even with a small timestep \(t\). **(2)** When timestep \(t\) is large, DOSE ef
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**Method** & **Year** & **Efficiency** & **Dataset** & **STOI(\%)\(\uparrow\)** & **PESQ\(\uparrow\)** & **CSIG\(\uparrow\)** & **CBAK\(\uparrow\)** & **COVL\(\uparrow\)** \\ \hline Unprocessed & – & – & & 92.1 & 1.97 & 3.35 & 2.44 & 2.63 \\ DiffWave & 2021 & 1 step (dis) & & **93.3** & 2.51 & 3.72 & 3.27 & 3.11 \\ DiffuSE & 2021 & 6 steps & & \(93.5^{+0.06}_{-0.05}\) & \(2.39^{+0.12}_{-0.01}\) & \(3.71^{+0.01}_{-0.01}\) & \(3.04^{+0.01}_{-0.01}\) & \(3.03^{+0.08}_{-0.01}\) \\ DiffuSE & 2021 & 6 steps & VB & \(93.1^{+0.00}_{-0.05}\) & \(2.43^{+0.03}_{-0.01}\) & \(3.77^{+0.05}_{-0.01}\) & \(3.09^{+0.18}_{-0.01}\) & \(3.09^{+0.02}_{-0.01}\) \\ SGMSE & 2022 & 50 steps & & \(93.3^{+0.00}_{-0.04}\) & \(2.34^{+0.11}_{-0.11}\) & \(3.69^{+0.01}_{-0.01}\) & \(2.90^{+0.11}_{-0.01}\) & \(3.00^{+0.11}_{-0.01}\) \\ DR-DiffuSE & 2023 & 6 steps & & \(92.9^{+0.06}_{-0.06}\) & \(2.50^{+0.02}_{-0.02}\) & \(3.68^{+0.02}_{-0.02}\) & \(3.27^{+0.02}_{-0.02}\) & \(3.08^{+0.02}_{-0.02}\) \\ DOSE & – & 2 steps & & \(93.6^{+0.06}_{-0.05}\) & \(2.56^{+0.00}_{-0.01}\) & \(3.83^{+0.17}_{-0.01}\) & \(3.27^{+0.00}_{-0.01}\) & \(3.19^{+0.08}_{-0.01}\) \\ \hline \hline Unprocessed & – & – & & 71.5 & 1.21 & 2.18 & 1.97 & 1.62 \\ DiffWave & 2021 & 1 step (dis) & & 72.3 & 1.22 & 2.21 & 1.95 & 1.63 \\ DiffuSE & 2021 & 6 steps & & \(83.7^{+1.14}_{-0.05}\) & \(1.59^{+0.36}_{-0.01}\) & \(2.91^{+0.70}_{-0.01}\) & \(2.19^{+0.44}_{-0.01}\) & \(2.19^{+0.65}_{-0.01}\) \\ CDifuSE & 2022 & 6 steps & & \(82.8^{+0.06}_{-0.05}\) & \(1.58^{+0.04}_{-0.04}\) & \(2.88^{+0.07}_{-0.01}\) & \(2.15^{+0.04}_{-0.02}\) & \(2.18^{+0.03}_{-0.01}\) \\ SGMSE & 2022 & 50 steps & & \(84.3^{+0.02}_{-0.05}\) & \(1.57^{+0.01}_{-0.02}\) & \(2.92^{+0.01}_{-0.01}\) & \(2.18^{+0.02}_{-0.02}\) & \(2.18^{+0.01}_{-0.01}\) \\ DR-DiffuSE & 2023 & 6 steps & & \(77.6^{+0.30}_{-0.06}\) & \(1.29^{+0.07}_{-0.04}\) & \(2.40^{+0.19}_{-0.02}\) & \(2.04^{+0.06}_{-0.01}\) & \(1.78^{+0.15}_{-0.01}\) \\ DOSE & – & 2 steps & & \(86.6^{+0.05}_{-0.05}\) & \(1.52^{+0.01}_{-0.01}\) & \(2.71^{+0.01}_{-0.01}\) & \(2.15^{+0.01}_{-0.01}\) & \(2.06^{+0.01}_{-0.01}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different diffusion enhancement methods.
Figure 4: Counterfactual visualization. The first two columns are associated with a dropout probability of 0.1, while the last two columns are associated with a dropout probability of 0.9. In each row, the blue waveforms in the first and third columns are the counterfactual samples, and the blue waveforms in the second and fourth columns are the normal samples. The orange waveforms are generated samples from the model.
fectively captures condition information, ensuring the model's robustness to noise and maintaining consistency in the early stages of inference.
### Excessive suppression & error accumulation
We provide a visual case of excessive suppression in Figure 5 and a visual case of error accumulation in Figure 6. From Figure 5, we can see that: **(1)** The deterministic model fails in mismatched scenarios and generates samples that lose speech details; **(2)** The diffusion enhancement model generate defective speech when directly using the estimated speech as the condition factor; **(3)** The diffusion enhancement model equipped with a mild version of the condition factor can recover clean speech effectively. From Figure 6, we notice that: **(1)** Full-step generation can remove noise and generate natural-sounding speech. However, it can't guarantee the consistency between the generated speech and condition factor; **(2)** Two-step speech generation with adaptive prior can promise consistency and high quality simultaneously.
## 6 Conclusions
In this work, we present a new approach DOSE that effectively incorporates condition information into diffusion models for speech enhancement. DOSE uses two efficient condition-augmentation techniques to address the condition collapse problem. Comprehensive experiments on benchmark datasets demonstrate the efficiency and effectiveness of our method.
In our method, there are two groups of hyper-parameters: the dropout probability \(p\) for the dropout operation and two timesteps \(\tau_{1},\tau_{2}\) for the adaptive prior. These parameters are critical to model performance. For example, if the dropout probability is set too high, the diffusion enhancement model will rely solely on the condition factor to estimate the speech. Then our diffusion enhancement model will degenerate into a deterministic model, losing its generalizability. We also need to make a trade-off when choosing the timestep \(\tau\) (especially \(\tau_{1}\)): On one hand, a large \(\tau\) is needed to reduce the gap between the clean speech and condition factor. On the other hand, the original semantic information will also be removed if \(\tau\) is set too large.
In practice, it is necessary to evaluate the model on a subset of data and then empirically set the hyperparameters. These manually defined hyper-parameters are selected based on the Empirical Risk Minimization (ERM) principle and may not be optimal for every individual sample. Thus, an important direction for future research is to develop methods that can adaptively choose hyper-parameters for different samples. It is also expected that the model can adaptively select appropriate coefficients when forming a mild version of the conditioning factor.
Figure 5: Excessive suppression visualization (unconditional diffusion enhancement model on CHIME-4). From left to right: (1) DiffWave (dis); (2) adaptive prior with the estimated condition; (3) adaptive prior with the mild condition; (4) clean speech.
Figure 6: Error accumulation visualization (VB, DOSE). From left to right: (1) noisy speech; (2) full (50) steps; (3) 2 steps; (4) clean.
## Acknowledgement
This work was supported in part by National Natural Science Foundation of China (Grant No.62176043 and No.62072077), Natural Science Foundation of Sichuan Province (Grant No.2022NSFSC0505), Kashgar Science and Technology Bureau (Grant No.KS2023025), and National Science Foundation SWIFT (Grant No.2030249).
## References
* [1] Steven L Gay and Jacob Benesty. _Acoustic signal processing for telecommunication_, volume 551. Springer Science & Business Media, 2012.
* [2] Tim Van den Bogaert, Simon Doclo, Jan Wouters, and Marc Moonen. Speech enhancement with multichannel wiener filter techniques in multimicrophone binaural hearing aids. _The Journal of the Acoustical Society of America_, 125(1):360-371, 2009.
* [3] Ivan Tashev. Recent advances in human-machine interfaces for gaming and entertainment. _International journal of information technologies and security_, 3(3):69-76, 2011.
* [4] Yen-Ju Lu, Yu Tsao, and Shinji Watanabe. A study on speech enhancement based on diffusion probabilistic model. In _2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)_, pages 659-666. IEEE, 2021.
* [5] Julius Richter, Simon Welker, Jean-Marie Lemercier, Bunlong Lay, and Timo Gerkmann. Speech enhancement and dereverberation with diffusion-based generative models. _arXiv preprint arXiv:2208.05830_, 2022.
* [6] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
* [7] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. In _International Conference on Learning Representations_, 2021.
* [8] Wenxin Tai, Fan Zhou, Goce Trajcevski, and Ting Zhong. Revisiting denoising diffusion probabilistic models for speech enhancement: Condition collapse, efficiency and refinement. In _AAAI_, 2023.
* [9] Yen-Ju Lu, Zhong-Qiu Wang, Shinji Watanabe, Alexander Richard, Cheng Yu, and Yu Tsao. Conditional diffusion probabilistic model for speech enhancement. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 7402-7406. IEEE, 2022.
* [10] Joan Serra, Santiago Pascual, Jordi Pons, R Oguz Araz, and Davide Scaini. Universal speech enhancement with score-based diffusion. _arXiv preprint arXiv:2206.03065_, 2022.
* [11] Simon Welker, Julius Richter, and Timo Gerkmann. Speech enhancement with score-based generative models in the complex STFT domain. In _Proc. Interspeech 2022_, pages 2928-2932, 2022. doi: 10.21437/Interspeech.2022-10653.
* [12] Jianwei Zhang, Suren Jayasuriya, and Visar Berisha. Restoring degraded speech via a modified diffusion model. In _22nd Annual Conference of the International Speech Communication Association, INTERSPEECH 2021_, pages 2753-2757. International Speech Communication Association, 2021.
* [13] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _International conference on machine learning_, pages 2256-2265. PMLR, 2015.
* [14] Thomas M Cover. _Elements of information theory_. John Wiley & Sons, 1999.
* [15] Zongsheng Yue and Chen Change Loy. Difface: Blind face restoration with diffused error contraction. _arXiv preprint arXiv:2212.06512_, 2022.
* Li et al. [2020] Andong Li, Chengshi Zheng, Cunhang Fan, Renhua Peng, and Xiaodong Li. A recursive network with dynamic attention for monaural speech enhancement. In _Proc. Interspeech 2020_, pages 2422-2426, 2020. doi: 10.21437/Interspeech.2020-1513. URL [http://dx.doi.org/10.21437/Interspeech.2020-1513](http://dx.doi.org/10.21437/Interspeech.2020-1513).
* Saharia et al. [2022] Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2022.
* Lopez-Tapia and Perez de la Blanca [2021] Santiago Lopez-Tapia and Nicolas Perez de la Blanca. Fast and robust cascade model for multiple degradation single image super-resolution. _IEEE Transactions on Image Processing_, 30:4747-4759, 2021.
* Nichol and Dhariwal [2021] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In _International Conference on Machine Learning_, pages 8162-8171. PMLR, 2021.
* Lemercier et al. [2022] Jean-Marie Lemercier, Julius Richter, Simon Welker, and Timo Gerkmann. Storm: A diffusion-based stochastic regeneration model for speech enhancement and dereverberation. _arXiv preprint arXiv:2212.11851_, 2022.
* Carlini et al. [2023] Nicholas Carlini, Florian Tramer, J Zico Kolter, et al. (certified!) adversarial robustness for free! In _International Conference on Learning Representations_, 2023.
* Veaux et al. [2013] Christophe Veaux, Junichi Yamagishi, and Simon King. The voice bank corpus: Design, collection and data analysis of a large regional accent speech database. In _International Conference Oriental COCOSDA_, pages 1-4. IEEE, 2013.
* Thiemann et al. [2013] Joachim Thiemann, Nobutaka Ito, and Emmanuel Vincent. The diverse environments multi-channel acoustic noise database (demand): A database of multichannel environmental noise recordings. In _Proceedings of Meetings on Acoustics_, volume 19, page 035081. Acoustical Society of America, 2013.
* Vincent et al. [2017] Emmanuel Vincent, Shinji Watanabe, Aditya Arie Nugraha, Jon Barker, and Ricard Marxer. An analysis of environment, microphone and data simulation mismatches in robust speech recognition. _Computer Speech Language_, 46:535-557, 2017.
* Rix et al. [2001] Antony W Rix, John G Beerends, Michael P Hollier, and Andries P Hekstra. Perceptual evaluation of speech quality (pesq)-a new method for speech quality assessment of telephone networks and codecs. In _IEEE international conference on acoustics, speech, and signal processing (ICASSP)_, pages 749-752. IEEE, 2001.
* Taal et al. [2010] Cees H Taal, Richard C Hendriks, Richard Heusdens, and Jesper Jensen. A short-time objective intelligibility measure for time-frequency weighted noisy speech. In _IEEE international conference on acoustics, speech, and signal processing (ICASSP)_, pages 4214-4217. IEEE, 2010.
* Hu and Loizou [2007] Yi Hu and Philipos C Loizou. Evaluation of objective quality measures for speech enhancement. _IEEE Transactions on audio, speech, and language processing_, 16(1):229-238, 2007.
* Huang et al. [2022] Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, and Yi Ren. Prodiff: Progressive fast diffusion model for high-quality text-to-speech. In _Proceedings of the 30th ACM International Conference on Multimedia_, pages 2595-2605, 2022.
Appendix
In the supplemental material,
* **A.1:** We provide the proofs for Propositions 1 and 2.
* **A.2:** We compare DOSE with other baselines via subjective evaluation.
* **A.3:** We investigate the significance of the proposed adaptive prior and explain why we need to use a mild version of the condition factor.
* **A.4:** We examine the effect of our new training objective and demonstrate the necessity of using it.
* **A.5:** We explain why we use two steps in speech generation.
* **A.6:** We provide parameter sensitivity experiments.
* **A.7:** We present several visual cases of excessive suppression.
* **A.8:** We present several visual cases of error accumulation.
* **A.9:** We analyze the reasons behind the superior generalizability of diffusion enhancement models compared to deterministic mapping-based models (from the robust training perspective), explain why we use 0.5 in the mild version of the condition factor, and discuss the broader impacts of speech enhancement methods.
* **A.10:** We include more information about speech processing and basic architecture.
### Mathematical proofs
We now present the proofs of the Propositions stated in the main text.
**Proposition 1**.: (cf. SS4.1): _For any \(\xi>0\) such that \(0<\xi<M\) for some finite positive value \(M\), there exists a positive value \(\tau\in\{0,\cdots,T\}\) that satisfies:_
\[D_{KL}\left(p(\mathbf{x}_{t}|\mathbf{x})\|p(\mathbf{x}_{t}|\mathbf{y})\right)\leq\xi,\quad \forall\tau\leq t\leq T, \tag{9}\]
_where \(p(\mathbf{x}_{t}|\mathbf{c})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{c},(1- \bar{\alpha}_{t})\mathbf{I})\)._
Proof.: Given two Gaussian distributions \(P\), \(Q\) defined over a vector space \(\mathbb{R}^{d}\), the KL divergence of multivariate Gaussian distributions is defined as follows:
\[D_{KL}\left(P\|Q\right)=\frac{1}{2}\left(\operatorname{Tr}\left(\mathbf{\Sigma}_{ 2}^{-1}\mathbf{\Sigma}_{1}\right)+(\mathbf{\mu}_{2}-\mathbf{\mu}_{1})^{T}\mathbf{\Sigma}_{2}^ {-1}(\mathbf{\mu}_{2}-\mathbf{\mu}_{1})-d+\ln\left(\frac{\det\mathbf{\Sigma}_{2}}{\det\mathbf{ \Sigma}_{1}}\right)\right). \tag{21}\]
Here, \(\mathbf{\mu}_{1}\in\mathbb{R}^{d}\) and \(\mathbf{\Sigma}_{1}\in\mathbb{R}^{d\times d}\) are the mean and covariance matrix of distribution \(P\), and \(\mathbf{\mu}_{2}\in\mathbb{R}^{d}\) and \(\mathbf{\Sigma}_{2}\in\mathbb{R}^{d\times d}\) are the mean and covariance matrix of distribution \(Q\). \(d\) is the dimensionality of the vectors (i.e., the number of dimensions in the vector space), and \(\operatorname{Tr}\) denotes the trace operator.
Note that when the two Gaussian distributions have diagonal covariance matrices (i.e., when the different dimensions are independent), the above formula simplifies to the sum of the KL divergences of each univariate Gaussian distribution. Thus, given two Gaussian distributions \(p(\mathbf{x}_{t}|\mathbf{x})\), \(p(\mathbf{x}_{t}|\mathbf{y})\) and Eq. (5), the KL divergence between these two distributions can be calculated as follows:
\[D_{KL}\left(p(\mathbf{x}_{t}|\mathbf{x})\|p(\mathbf{x}_{t}|\mathbf{y})\right)=\frac{\bar{ \alpha}_{t}}{1-\bar{\alpha}_{t}}\|\mathbf{y}-\mathbf{x}\|_{2}^{2}=\frac{1}{\sigma_{t}^ {2}}\|\mathbf{n}\|_{2}^{2}. \tag{22}\]
According to the definition of the diffusion model and Figure 3, \(D_{KL}\left(p(\mathbf{x}_{t}|\mathbf{x})\|p(\mathbf{x}_{t}|\mathbf{y})\right)\) is a monotonically decreasing function. Ideally, for a bounded error \(\mathbf{n}\) with (almost) infinite timestep \(T\), we have:
\[\lim_{t\to 0}D_{KL}\left(p(\mathbf{x}_{t}|\mathbf{x})\|p(\mathbf{x}_{t}|\mathbf{y})\right)=+ \infty,\quad\lim_{t\to T}D_{KL}\left(p(\mathbf{x}_{t}|\mathbf{x})\|p(\mathbf{x}_{t}|\mathbf{y })\right)=0. \tag{23}\]
According to Bolzano's theorem, there exists at least one point \(\tau\) in the interval \(\{0,\cdots,T\}\) such that \(D_{KL}\left(p(\mathbf{x}_{\tau}|\mathbf{x})\|p(\mathbf{x}_{\tau}|\mathbf{y})\right)=\xi\). Then, Eq. (9) holds for \(\tau\leq t\leq T\).
**Proposition 2**.: (cf. SS4.1): _Let \(\mathbf{x}\) be the clean sample, \(\mathbf{y}\) be it's corresponding noisy one, and \(\mathbf{x}^{\prime}\) be any neighbor from the neighbor set \(\mathcal{S}(\mathbf{x})\). Then diffusion enhancement models can recover \(\mathbf{x}\) with a high probability if the following inequality is satisfied:_
\[\log\left(\frac{p(\mathbf{x})}{p(\mathbf{x}^{\prime})}\right)>\frac{1}{2 \sigma_{t}^{2}}\left(\|\mathbf{x}-\mathbf{y}\|_{2}^{2}-\|\mathbf{x}^{\prime}-\mathbf{y}\|_{2}^ {2}\right),\quad\forall\mathbf{x}^{\prime}\in\mathcal{S}(\mathbf{x}), \tag{10}\]
_where \(\sigma_{t}^{2}=\frac{1-\bar{\alpha}_{t}}{\bar{\alpha}_{t}}\) is the variance of the Gaussian noise added at timestep \(t\) in the forward diffusion process._
Proof.: The main idea is to prove that any point \(\mathbf{x}^{\prime}\) quite similar but different to the ground-true speech \(\mathbf{x}\) should have a lower density than \(\mathbf{x}\) in the conditional distribution so that the diffusion enhancement models can recover \(\mathbf{x}\) with a high probability. In other words, we should have:
\[p(\mathbf{x}_{0}=\mathbf{x}|\mathbf{x}_{t}=\mathbf{y}_{t})>p(\mathbf{x}_{0}=\mathbf{x}^{ \prime}|\mathbf{x}_{t}=\mathbf{y}_{t}) \tag{24}\]
According to Bayes' theorem, we have:
\[p(\mathbf{x}_{0}=\mathbf{x}|\mathbf{x}_{t}=\mathbf{y}_{t}) =\frac{p(\mathbf{x}_{0}=\mathbf{x},\mathbf{x}_{t}=\mathbf{y}_{t})}{p(\mathbf{x}_{t}= \mathbf{y}_{t})} \tag{25}\] \[=p(\mathbf{x}_{0}=\mathbf{x})\cdot\frac{p(\mathbf{x}_{t}=\mathbf{y}_{t}|\mathbf{x}_{0 }=\mathbf{x})}{p(\mathbf{x}_{t}=\mathbf{y}_{t})}.\]
Applying Eq. (25) to Eq. (24), we obtain:
\[p(\mathbf{x})\cdot\frac{1}{\sqrt{(2\pi\sigma_{t}^{2})^{d}}}\exp\frac {-\|\mathbf{x}-\mathbf{y}\|_{2}^{2}}{2\sigma_{t}^{2}}>p(\mathbf{x}^{\prime})\cdot\frac{1}{ \sqrt{(2\pi\sigma_{t}^{2})^{d}}}\exp\frac{-\|\mathbf{x}^{\prime}-\mathbf{y}\|_{2}^{2}}{ 2\sigma_{t}^{2}} \tag{26}\] \[\Leftrightarrow \log\left(\frac{p(\mathbf{x})}{p(\mathbf{x}^{\prime})}\right)>\frac{1}{2 \sigma_{t}^{2}}\left(\|\mathbf{x}-\mathbf{y}\|_{2}^{2}-\|\mathbf{x}^{\prime}-\mathbf{y}\|_{2}^ {2}\right),\quad\forall\mathbf{x}^{\prime}\in\mathcal{S}(\mathbf{x}),\]
and the proof is now complete.
### Subjective evaluation
We conduct two types of Mean Opinion Score (MOS) tests to verify the quality of synthesized audio through human evaluation.
Figure 7: Screenshot of MOS test.
**Naturalness.** For audio quality evaluation, we conduct the MOS (mean opinion score) tests and explicitly instruct the raters to "focus on examining the audio quality and naturalness, and ignore the differences of style (timbre, emotion, and prosody)". The testers present and rate the samples, and each tester is asked to evaluate the subjective naturalness on a 1-5 Likert scale.
**Consistency.** For audio consistency evaluation, we explicitly instruct the raters to "focus on the similarity of the speech (content, timbre, emotion, and prosody) to the reference, and ignore the differences of audio quality". This is slightly different from the original definition of SMOS for speech synthesis. In the SMOS (similarity mean opinion score) tests, we pair each synthesized utterance with a ground truth utterance to evaluate how well the synthesized speech matches that of the target speaker. The testers present and rate the samples, and each tester is asked to evaluate the subjective consistency on a 1-5 Likert scale.
\begin{table}
\begin{tabular}{c||c|c c||c c c} \hline \hline
**Method** & **Scenarios** & **MOS\(\uparrow\)** & **Similarity MOS\(\uparrow\)** & **Scenarios** & **MOS\(\uparrow\)** & **Similarity MOS\(\uparrow\)** \\ \hline Unprocessed & & \(3.60_{\pm 0.31}\) & \(3.33_{\pm 0.34}\) & & \(3.30_{\pm 0.30}\) & \(3.10_{\pm 0.32}\) \\ \hline DiffWave (dis) & & \(3.80_{\pm 0.21}^{+0.21}\) & \(3.75_{\pm 0.24}^{+0.24}\) & & \(2.10_{\pm 0.29}^{+0.29}\) & \(1.00_{\pm 0.14}^{+0.14}\) \\ DiffsSE & & \(3.40_{\pm 0.20}^{+0.21}\) & \(4.17_{\pm 0.26}^{+0.24}\) & & \(3.00_{\pm 0.27}^{+0.27}\) & \(3.33_{\pm 0.21}^{+0.21}\) \\ CliffuSE & Matched & \(3.85_{\pm 0.25}^{+0.25}\) & \(4.12_{\pm 0.13}^{+1.79}\) & Mismatched & \(2.55_{\pm 0.25}^{+0.25}\) & \(3.42_{\pm 0.29}^{+0.32}\) \\ SGMSE & & \(3.65_{\pm 0.28}^{+0.05}\) & \(3.95_{\pm 0.23}^{+0.02}\) & & \(2.83_{\pm 0.33}^{+0.47}\) & \(3.41_{\pm 0.25}^{+0.31}\) \\ DR-DiffuSE & & \(3.80_{\pm 0.25}^{+0.25}\) & \(3.84_{\pm 0.27}^{+0.27}\) & & \(2.48_{\pm 0.25}^{+0.25}\) & \(1.45_{\pm 0.33}^{+0.33}\) \\ \hline DOSE & & \(4.05_{\pm 0.29}^{+0.45}\) & \(4.35_{\pm 0.21}^{+1.02}\) & & \(3.48_{\pm 0.26}^{+0.18}\) & \(3.17_{\pm 0.23}^{+0.07}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: MOS tests under different scenarios.
Figure 8: Screenshot of Similarity MOS test.
Our subjective evaluation tests are crowd-sourced and conducted by 15 volunteers. The screenshots of instructions for testers have been shown in Figure 7 and Figure 8. We paid $10 to participants hourly and totally spent about $300 on participant compensation.
The MOS results with the 95% confidence interval are shown in Table 2. we observe that: **(1)** Our method surpasses all baselines, demonstrating the strong ability of the proposed framework in synthesizing natural speech; **(2)** Our model can synthesize consistent speech to the golden speech, which is aligned with our motivation for algorithm design. It's also exciting to see that DOSE yields similar scores to methods with specific condition-injecting strategies (i.e., DiffuSE, CDiffuSE, and SGMSE) on Similarity MOS.
### Adaptive prior analysis
We now investigate the significance of the proposed adaptive prior (SS4.1) and show why we need to use a mild version (Eq. (12)) of the condition factor.
We design three variants to investigate the effect of different condition optimizer settings on denoising performance. These variants are: (a) applying adaptive prior with the noisy speech; (b) applying adaptive prior with the estimated one (from the deterministic model); (c) applying adaptive prior with the mild condition (Eq. (12)). To control variables, we use an unsupervised diffusion model with adaptive priors. We conduct experiments on the matched VB dataset and mismatched CHIME-4 dataset respectively, and our results are shown in Figure 9.
As shown in Figure 9 (left), we plot the one-step loss on matched VB and obtain the following observations: **(1)** The trend of loss curves is in line with our analysis in SS4.1 that we need to find a trade-off timestep for better performance. **(2)** Equipping the unconditional diffusion enhancement model with the adaptive prior technique has a certain denoising ability but is inferior to its counterpart discriminative model in matched scenarios. We attribute the second phenomenon to the limited denoising capacity of the unconditional diffusion enhancement models (cf. [9; 21]).
Although the performance of the unconditional diffusion enhancement model equipped with adaptive prior is mediocre in matched scenarios, as illustrated in Figure 9 (mid), it exhibits greater stability than discriminative models in mismatched scenarios. To verify the influence of different priors, we compare their PESQ on mismatched CHIME-4, shown in Figure 9 (right). We see that: **(1)** The deterministic model fails in mismatched scenarios and generates samples that are even worse than the unprocessed ones; **(2)** The diffusion enhancement model has strong generalizability and performs significantly better than the deterministic model; **(3)** The diffusion enhancement model loses its capability when using the estimated speech as the condition factor; and **(4)** Although the estimated speech is worse than the unprocessed one, the diffusion enhancement model equipped with a mild version of the condition factor achieves the best performance. This implies that the estimated speech can provide additional complementary guidance to the diffusion model, and the model can adaptively "separate the wheat from the chaff". Thus, using a mild version of the condition factor is important and necessary.
In summary, our research shows the strong generalizability of the diffusion enhancement model. However, we also find that the unconditional diffusion enhancement model has mediocre performance. This suggests that relying solely on the adaptive prior technique is not be sufficient, further emphasizing
Figure 9: Performance of the unconditional diffusion enhancement model with adaptive prior. From left to right: (1) each step loss on matched VB; (2) each step loss on mismatched CHIME-4; (3) PESQ comparison for different priors on mismatched CHIME-4.
ing the importance of the diffusion dropout operation - training conditional diffusion enhancement models in a supervised manner.
### Training objective investigation
We now examine the effect of our new training objective and demonstrate the necessity of using it.
Let's recall the sampling process of DOSE. In the first step, we generate a relatively good estimation of the clean speech. In the second step, we use DOSE with a small timestep to generate a better one. We plot the relationship between \(\Delta\mathbf{\epsilon}\) and \(\Delta\mathbf{x}_{0}\) in Figure 10. Specifically, we fix the \(\Delta\mathbf{x}_{0}\) as a constant and use Eq. (15) to calculate the corresponding \(\Delta\mathbf{\epsilon}\). This experiment aims to show how effort for calibration in \(\mathbf{x}\)-space is equivalent to that in \(\mathbf{\epsilon}\)-space. From Figure 10 we see that, as \(t\) approaches zero, small changes in \(\mathbf{x}\)-space amplify the implied prediction in \(\mathbf{\epsilon}\)-space. There is a nearly 100-fold difference between the values of \(\Delta\mathbf{\epsilon}\) and \(\Delta\mathbf{x}_{0}\). This implies that the efforts of the diffusion enhancement model at small timesteps become negligible, causing diffusion models to lose their ability to recover natural-sounding speech from the defective speech estimated in the small timestep.
We also plot the training loss curves in Figure 11. After substituting the training objective from \(\mathbf{\epsilon}\)-space to \(\mathbf{x}\)-space, we observe that the loss change becomes more significant. This demonstrates more effective participation of conditioning factors in diffusion training: training diffusion enhancement model with this new objective allows for easier and more effective exploitation of conditioning factors.
### Complexity analysis & why use two-steps generation?
In this subsection, we explain why we use two steps to generate speech.
**Why not one-step speech generation?** According to SS4.4, we can generate speech in one shot using a trained conditional diffusion enhancement model to reduce error accumulation [21]. This one-step speech generation provides an appealing performance (Table 3). However, if we consider the diffusion model training as a multi-task paradigm, the denoising task at a smaller timestep \(t\) is easier than that at a larger timestep [21]. Correspondingly, the primary estimation error occurs at the large timestep area and directly estimating clean speech at a large timestep will result in sub-optimal performance [28]. Meanwhile, we can't choose a small timestep \(t\) as it will lead to the mismatched problem discussed in SS4.1.
Since the conditional diffusion model can learn vital information from both the intermediate-generated and noisy speech - the estimated speech can provide complementary guidance to the diffusion model (Appendix A.3) - allowing us to further improve the result by generating speech with multiple steps.
Figure 11: Investigation of the training objective. we compare the \(\mathbf{\epsilon}\)-space and \(\mathbf{x}\)-space loss curves.
**Hyper-parameter selection.** Basically, we need to set optimal hyper-parameters by evaluating the performance with a small batch of data. Suppose we choose the number of sampling steps as \(K\) (\(K<T\)), and the amount of test data as \(N\), then the computational complexity of the grid search is \(\mathcal{O}\left(NT!/(T-K)!\right)\). Since \(T\) is always set as a large number in the diffusion model's setup, the complexity introduced by choosing a large \(K\) is often unmanageable.
**Trivial solution.** We have a simple alternative solution: defining the hyper-parameters empirically (e.g., equal intervals [7]). We present speech quality comparisons between empirically defined (fixed) and handpicked hyper-parameters in Table 3. As shown, the conditional diffusion enhancement model with handpicked hyper-parameters performs better than that with empirically defined hyper-parameters.
Although fixed hyper-parameters have inferior performance compared to handpicked ones, they still show appealing performance compared to prevailing diffusion enhancement baselines. Therefore, in situations where we can't evaluate the model in advance, we can use empirically defined hyper-parameters instead.
### Parameter sensitivity
There are two groups of hyper-parameters that are critical to model performance: the dropout probability \(p\) for model training and two timesteps \(\tau_{1},\tau_{2}\) for model inference. In this subsection, we conduct parameter sensitivity experiments to investigate how these hyper-parameters affect the model's performance.
**Dropout probability \(p\).** We vary the dropout probability \(p\), setting it to \(\{0.0,0.5,0.9,1.0\}\), and plot the one-step loss on matched VB (Figure 12) and mismatched CHIME-4 (Figure 13) respectively. Please note that the figure with \(p=1\) has been excluded since it degenerates to a deterministic mapping-based model and the performance remains unchanged across all timesteps.
From Figure 12, we see that: **(1)** The proposed conditional diffusion enhancement model works. Compared to the deterministic mapping-based model, our model performs slightly better in matched scenarios and significantly better in mismatched scenarios; **(2)** When intermediate-generated speech \(\mathbf{x}_{t}\) is unreliable (large \(t\)), the condition factor \(\mathbf{y}\) plays a dominant role. **(3)** As the dropout probability increases, the model focuses more on the condition factor, while when the dropout probability is small, the loss curve oscillates when \(t\) gets large.
From Figure 13, We find that: when using a higher dropout probability (such as \(p=0.5\) and \(p=0.9\)), the model can generally achieve better generalizability. Note that if the dropout probability is set too high, the diffusion enhancement model will rely solely on the condition factor to estimate the clean
Figure 12: Performance of the conditional diffusion enhancement model with different dropout probability in VB. From left to right: (1) without dropout; (2) dropout 50% ; (3) dropout 90%.
\begin{table}
\begin{tabular}{c||c|c c||c c} \hline \hline
**Method** & **Scenarios** & **PESQ\(\uparrow\)** & **STOI(\%)\(\uparrow\)** & **Scenarios** & **PESQ\(\uparrow\)** & **STOI(\%)\(\uparrow\)** \\ \hline Unprocessed & & 1.97 & 92.1 & 1.21 & 71.5 \\ DOSE (fixed 1 step) & & \(2.47^{+0.50}_{-0.01}\) & \(93.0^{+0.90}_{-0.05}\) & & \(1.38^{+0.17}_{-0.01}\) & \(82.8^{+1.13}_{-0.05}\) \\ DOSE (handpicked 1 step) & Matched & \(2.50^{+0.58}_{-0.01}\) & \(93.4^{+1.30}_{-1.00}\) & Mismatched & \(1.51^{+0.31}_{-0.24}\) & \(86.4^{+1.87}_{-0.07}\) \\ DOSE (fixed 2 steps) & & \(2.48^{+0.01}_{-0.01}\) & \(93.1^{+0.05}_{-0.05}\) & & \(1.44^{+0.01}_{-0.01}\) & \(83.6^{+0.07}_{-0.07}\) \\ DOSE (handpicked 2 steps) & & \(2.56^{+0.09}_{-0.01}\) & \(93.6^{+0.05}_{-0.05}\) & & \(1.52^{+0.32}_{-0.01}\) & \(86.6^{+0.01}_{-0.05}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study for two-step speech generation.
speech - and the diffusion enhancement model will degenerate into a deterministic model, losing its generalizability.
**Timesteps \(\tau_{1}\) and \(\tau_{2}\).** Considering the computational complexity of the "one-step" grid search, we search optimal hyperparameters with a slightly larger step. Specifically, we select both optimal \(\tau_{1}\) and \(\tau_{2}\) from the predefined set \(\{1,5,10,15,20,25,30,35,40,45,50\}\). We show the PESQ and STOI performance of different combinations in Figure 14. We can observe that \(\tau_{1}>\tau_{2}\) (excluding STOI on matched VB dataset, and the difference is negligible: \(0.930\sim 0.934\)) will lead to better performance, which is in line with our analysis (SS4.4).
Figure 14: Performance of two-step speech generation. From left to right: (1) PESQ on matched VB dataset; (2) STOI on matched VB dataset; (3) PESQ on mismatched CHIME-4 dataset; (4) STOI on mismatched CHIME-4 dataset.
Figure 13: Performance of the conditional diffusion enhancement model with different dropout probability in CHIME-4. From left to right: (1) without dropout; (2) dropout 50% ; (3) dropout 90%.
### Visual case of excessive suppression
We present several visual cases of excessive suppression.
Figure 15: Excessive suppression visualization (CHIME-4, DOSE). From left to right: (1) estimated condition; (2) 2 steps; (3) clean; (4) noisy speech.
### Visual case of error accumulation
We present several visual cases of error accumulation.
Figure 16: Error accumulation visualization (VB, DOSE). From left to right: (1) full (50) steps; (2) 2 steps; (3) clean; (4) noisy speech.
### Discussion
To help readers better understand our approach, we analyze the reasons behind the better generalizability of diffusion enhancement models compared to deterministic mapping-based models (from the robust training perspective), explain why we use 0.5 in the mild version of the condition factor, and discuss the broader impacts of speech enhancement methods.
#### a.9.1 Generalization analysis
Given that full-step diffusion enhancement models have better generalizability than deterministic models, we now delve into the following question: if we are just using diffusion models as one-step (or two-step) denoisers, what accounts for their enhanced performance compared to deterministic mapping-based models?
To answer this question, we need to point out that training a diffusion model can be considered equivalent to training a multi-task paradigm. This involves training a model with shared parameters on multiple levels of Gaussian noise concurrently. Recent research [21] has demonstrated that the full training process of diffusion models leads to significantly improved one-shot denoising capabilities, which are more generalizable compared to previous works that trained standalone denoisers on a single noise level. Please refer to [21] (SS5.2) for more details.
#### a.9.2 Why use 0.5 in the mild version of the condition factor? (reviewer boQC)
Employing an equal weight provides stability, yet the performance during instances with low SNR would be compromised (albeit still superior to direct utilization of \(\mathbf{y}\) as a condition factor, unless the condition optimizer falters). One prospective solution involves introducing an additional adaptive strategy, i.e., \(\mathbf{c}=\alpha f_{\mathbf{b}}(\mathbf{y})+(1-\alpha)\mathbf{y}\). We can design an adaptive _alpha_ predictor and hope it can output \(\alpha\) based on the quality of raw condition and samples from the condition optimizer. For example, when the condition optimizer produces lower-quality samples, giving more weight to the original condition factor would make sense. Conversely, if the raw condition factor has a low SNR, emphasizing the generated counterpart could be more effective. However, implementing this idea practically is intricate.
Given our strong reliance on diffusion-enhanced models to enhance generalization, any new adaptive strategy must be generalizable. For instance, training an adaptive alpha predictor on the (seen) VB-dataset (high SNR & consistent condition optimizer performance) could lead the model to consistently output higher \(\alpha\) values for fusion. Unfortunately, this auxiliary model might not effectively adapt to variations when evaluating the mismatched (unseen) CHIME-4 dataset (low SNR & potential condition optimizer challenges). To this end, we might need other techniques such as data augmentation and adversarial training to improve its generalizability and robustness. This creates a dilemma: harnessing the speech diffusion model for overarching speech noise reduction generalization while simultaneously necessitating a pre-established generalized model to facilitate its implementation. So far, despite our efforts to train a strong alpha predictor, progress has been limited (the alpha predictor is still not generalizable, and the new system has no significant performance improvement over DOSE).
#### a.9.3 Broader impacts
As speech enhancement technology continues to advance and become more prevalent, it's important to consider its broader impacts.
**Positive impacts.** The impact of speech enhancement technology on real-life situations, particularly for individuals with hearing impairments, cannot be overstated. Hearing aids have long been the primary solution for those with hearing loss, but they are not always effective in noisy environments or for certain types of hearing loss. Speech enhancement technology can greatly improve speech intelligibility and communication for hearing aid users. For example, some hearing aids have AI-powered speech enhancement that boosts speech quality.
In addition to the benefits for individuals with hearing impairments, speech enhancement technology also has significant implications for various applications. In transportation, clearer and more intelligible speech can improve communication between pilots and air traffic control, leading to safer and more efficient air travel. In industry, speech enhancement can improve communication onnoisy factory floors, leading to increased productivity and safety. In educational settings, speech enhancement can improve student comprehension and engagement during lectures and presentations.
**Negative impacts.** While speech enhancement technology has the potential to greatly improve communication and speech intelligibility, one potential concern is that speech enhancement could modify the semantic content of speech, potentially misleading listeners. Thus, it's important for developers of speech enhancement technology to consider this potential negative effect and work towards creating trustworthy systems.
### Experimental details
**Speech preprocessing.** We process the speech waveform at a 16 kHz sampling rate. To maintain dimensionality consistency within mini-batches, we pad each utterance to 2 seconds (32000 points) using a zero-padding technique.
**Basic architecture.** To make a fair comparison, we use DiffWave [7] as the basic architecture following [4, 9] - the only difference being the change in the way of condition-injecting since most speech enhancement methods will directly use noisy speech as the condition factor, rather than Mel-spectrogram. We concatenate the condition factor with the intermediate-generated sample along the channel dimension as the model's input. Specifically, the network is composed of 30 residual layers with residual channels 128. We use a bidirectional dilated convolution (Bi-DilConv) with kernel size 3 in each layer. We sum the skip connections from all residual layers. The total number of trainable parameters is 2.31M, slightly smaller than naive DiffWave (2.64M). Please refer to [7] and our code for more details. | ## Review
### Summary
The paper presents a novel model-agnostic approach called DOSE for speech enhancement, which addresses the challenge of conditional collapse in denoising diffusion probabilistic models (DDPMs). By employing two effective condition-augmentation techniques, the authors demonstrate substantial improvements in high-quality and stable speech generation through comprehensive experiments. The methodology is well-detailed, and the findings highlight the efficiency and effectiveness of DOSE when compared to existing models. However, while the writing is generally clear, it has been noted that some sections may benefit from improved readability and accessibility.
### Strengths
- In-depth presentation on diffusion SE, including formulation and methodology.
- Good experimental results, comparing different SE baselines and demonstrating state-of-the-art results.
- The proposed method shows good generalization ability in matched and mismatched scenarios.
- Comprehensive comparison with existing diffusion enhancement methods and deterministic mapping-based methods.
- Interesting theoretical insights into the functioning of adaptive priors and dropout techniques.
- Development of a fast sampling technique with broader implications for conditional generation tasks.
### Weaknesses
- Lack of clarity in the description and implications of the dropout operation and adaptive prior.
- Insufficient detail in the training configurations and experimental replication.
- Missing qualitative analysis in the experimental section.
- The paper's writing is overly complicated in places, affecting clarity.
- Limitations and ethical considerations are not explicitly addressed.
### Questions
- Does the adaptive prior only use in the inference process, and how does it differ from that in PriorGrad?
- What is the rationale behind using similarity MOS for evaluation, and how does it differ from traditional MOS?
- Can the authors provide clarity on the choice of p for diffusion dropout and the potential training-inference mismatch?
- Is there any theoretical basis for the choice of two steps in the sampling process, and how does it affect accuracy?
- Could the authors conduct an ablation study to analyze the effectiveness of DOSE?
- What are the implications of the dropout operation on conditioning during training?
### Soundness
**Score:** 3
**Description:** 3 = good: The approach is well-founded on theoretical principles, and the experimental results support the claims made. However, some aspects require further clarification.
### Presentation
**Score:** 2
**Description:** 2 = fair: While the paper is generally well-written, it suffers from overly complex sections that hinder readability and understanding. Improvements in structure and clarity are needed.
### Contribution
**Score:** 3
**Description:** 3 = good: The contributions are significant, particularly in addressing the issue of condition collapse in diffusion models, although some elements are not entirely novel.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and has moderate-to-high impact potential, but it requires some revisions for clarity and to address certain weaknesses.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel and promising approach to speech enhancement using diffusion models, backed by strong experimental evidence. While the writing could be improved for better clarity, the overall contributions, soundness, and results warrant acceptance. The existing strengths outweigh the weaknesses, making it a valuable addition to the field.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Characterizing the Optimal \(0-1\) Loss for Multi-class Classification with a Test-time Attacker
Sihui Dai\({}^{1*}\) Wenxin Ding\({}^{2*}\) Arjun Nitin Bhagoji\({}^{2}\) Daniel Cullina\({}^{3}\)
**Ben Y. Zhao\({}^{2}\) Haitao Zheng\({}^{2}\) Prateek Mittal\({}^{1}\) \({}^{1}\)**Princeton University \({}^{2}\)University of Chicago \({}^{3}\)Pennsylvania State University
{sihuid,pmittal}@princeton.edu
{wenxind, abhagoji}@uchicago.edu
{ravenben, htzheng}@cs.uchicago.edu
[email protected]
###### Abstract
Finding classifiers robust to adversarial examples is critical for their safe deployment. Determining the robustness of the best possible classifier under a given threat model for a fixed data distribution and comparing it to that achieved by state-of-the-art training methods is thus an important diagnostic tool. In this paper, we find achievable information-theoretic lower bounds on robust loss in the presence of a test-time attacker for _multi-class classifiers on any discrete dataset_. We provide a general framework for finding the optimal \(0-1\) loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints. The prohibitive cost of this formulation in practice leads us to formulate other variants of the attacker-classifier game that more efficiently determine the range of the optimal loss. Our valuation shows, for the first time, an analysis of the gap to optimal robustness for classifiers in the multi-class setting on benchmark datasets.
## 1 Introduction
Developing a theoretical understanding of the vulnerability of classifiers to adversarial examples [27; 12; 8; 4] generated by a test-time attacker is critical to their safe deployment. Past work has largely taken one of two approaches. The first has focused on generalization bounds on learning in the presence of adversarial examples, by trying to determine the sample complexity of robust learning [10; 25; 21]. The second has been to characterize the lowest possible loss achievable within a specific hypothesis class [22; 5; 6] for binary classification for a specified data distribution and attacker. The hypothesis class of choice is often the set of all possible classification functions. The optimal loss thus determined is a lower bound on robustness for classifiers used in practice, allowing practitioners to measure progress for defenses and step away from the attack-defense arms race [19; 31; 9].
In this paper, we take on the second approach and propose methods to find the lowest possible loss attainable by any measurable classifier in the presence of a test-time attacker in the multi-class setting. The loss thus obtained is referred to as the _optimal loss_ and the corresponding classifier, the _optimal classifier_. We extend previous work [5; 6] which was restricted to the binary setting, allowing us to compare the robustness of multi-class classifiers used in practice to the optimal loss.
Our **first contribution** is thus _extending the conflict-graph framework for computing lower bounds on robustness to the multi-class setting._ In this framework, given a dataset and attacker, we construct a _conflict hypergraph_ which contains vertices representing training examples in the dataset, and hyperedges representing overlaps between adversarial neighborhoods around each training example. Using this hypergraph, we construct a linear program whose optimal value is a lower bound on the \(0-1\) loss for all classifiers and whose solution is the optimal classifier. The lower bound on robustness is thus achievable.
In practice, however, we find that the full multi-class formulation of the lower bound, although exact, can lead to prohibitively large optimization problems. Thus, we vary the information available to either the attacker or classifier to find alternative lower bounds that are quicker to compute. Our **second contribution** is the _development and analysis of more efficient methods to determine the range of loss obtained by the optimal classifier_. (see Table 1). We also interpret these methods as classifier-attacker games. In particular, we find lower bounds on the optimal loss by aggregating the set of all binary classifier-only lower bounds as well as by using truncated hypergraphs (hypergraphs with a restriction on the maximum hyperedge degree). We also upper bound the optimal loss with a generalization of the Caro-Wei bound (1) on a graph's independent set size. The gap between the lower and upper bounds on the optimal loss allows us to determine the range within which the optimal loss lies.
To analyze the performance of classifiers obtained from adversarial training [19; 31], we compare the loss obtained through adversarial training to that of the optimal classifier. We find a loss differential that is greatly exacerbated compared to the binary case [5; 22]. In addition, we also determine the cases where, in practice, the bounds obtained from game variants are close to the optimal. We find that while the aggregation of binary classifier-only bounds leads to a very loose lower bound, the use of truncated hypergraphs can significantly speed up computation while achieving a loss value close to optimal. This is validated using the Caro-Wei upper bound, with the lower and upper bounds on the optimal loss closely matching for adversarial budgets used in practice. Thus, our **final contribution** is an _extensive empirical analysis of the behavior of the optimal loss for a given attacker, along with its lower and upper bounds_. This enables practitioners to utilize our methods even when the optimal loss is computationally challenging to determine.
The rest of the paper is organized as follows: SS2 provides the characterization of the optimal loss; SS3 proposes several upper and lower bounds on the optimal loss; SS4 computes and compares the optimal loss to these bounds, as well as the performance of robustly trained classifiers and SS5 concludes with a discussion of limitations and future work.
## 2 Characterizing Optimal 0-1 Loss
In this section, we characterize the optimal \(0-1\) loss for any discrete distribution (_e.g._ training set) in the presence of a test-time attacker. This loss can be computed as the solution of a linear program (LP), which is defined based on a hypergraph constructed from the classification problem and attacker constraint specification. The solution to the LP can be used to construct a classifier achieving the optimal loss, and _lower bounds_ the loss attainable by any particular learned classifier.
### Problem Formulation
**Notation.** We consider a classification problem where inputs are sampled from input space \(\mathcal{X}\), and labels belong to \(K\) classes: \(y\in\mathcal{Y}=[K]=\{1,...,K\}\). Let \(P\) be the joint probability over \(\mathcal{X}\times\mathcal{Y}\). Let \(\mathcal{H}_{\text{soft}}\) denote the space of all soft classifiers; i.e. specifically, for all \(h\in\mathcal{H}_{\text{soft}}\) we have that \(h:\mathcal{X}\rightarrow[0,1]^{K}\) and \(\sum_{i=1}^{K}h(x)_{i}=1\) for all \(x\in\mathcal{X}\). Here, \(h(x)_{i}\) represents the probability that the input \(x\) belongs to the \(i^{\text{th}}\) class. We use the natural extension of \(0-1\) loss to soft classifiers as our loss function: \(\ell(h,(x,y))=1-h(x)_{y}\). This reduces to \(0-1\) loss when \(h(x)\in\{0,1\}^{K}\). 1
Footnote 1: A soft classifier can be interpreted as a randomized hard-decision classifier \(f\) with \(\Pr[f(x)=y]=h(\tilde{x})_{y}\), in which case \(\ell(h,(x,y))=\Pr[f(x)\neq y]\), classification error probability of this randomized classifier.
**The adversarial classification game.** We are interested in the setting where there exists a test-time attacker that can modify any data point \(x\) to generate an adversarial example \(\tilde{x}\) from the neighborhood \(N(x)\) of \(x\). An instance of the game is specified by a discrete probability distribution \(P\),2 hypothesis class \(\mathcal{H}\subseteq\mathcal{H}_{\text{soft}}\), and neighborhood function \(N\). We require that for all \(x\in\mathcal{X}\), \(N(x)\) always contains \(x\). The goal of the classifier player is to minimize expected classification loss and the goal of the attacker is to maximize it. The optimal loss is
\[L^{*}(P,N,\mathcal{H})=\inf_{h\in\mathcal{H}}\mathbb{E}_{(x,y)\sim P}\Big{[}\sup_{ \tilde{x}\in N(x)}\ell(h,(\tilde{x},y))\Big{]}=1-\sup_{h\in\mathcal{H}}\mathbb{E }_{(x,y)\sim P}\Big{[}\inf_{\tilde{x}\in N(x)}h(\tilde{x})_{y}\Big{]}. \tag{1}\]
Alternative hypothesis classes.In general, for \(\mathcal{H}^{\prime}\subseteq\mathcal{H}\), we have \(L^{*}(P,N,\mathcal{H})\leq L^{*}(P,N,\mathcal{H}^{\prime})\). Two particular cases of this are relevant. First, the class of hard-decision classifiers is a subset of the class of soft classifiers (\(\mathcal{H}_{\text{soft}}\)). Second, for any fixed model parameterization (ie. fixed NN architecture), the class of functions represented by that parameterization is another subset. Thus, optimal loss over \(\mathcal{H}_{\text{soft}}\) provides a lower bound on loss for these settings.
### Optimal loss for distributions with finite support
Since we would like to compute the optimal loss for distributions \(P\)_with finite support_, we can rewrite the expectation in Equation 1 as an inner product. Let \(V\) be the support of \(P\), i.e. the set of points \((x,y)\in\mathcal{X}\times\mathcal{Y}\) that have positive probability in \(P\). Let \(p\in[0,1]^{V}\) be the probability mass vector for \(P\): \(p_{v}=P(\{v\})\). For a soft classifier \(h\), let \(q_{N}(h)\in\mathbb{R}^{V}\) be the vector of robustly correct classification probabilities for vertices \(v=(x,y)\in V\), _i.e._\(q_{N}(h)_{v}:=\inf_{\tilde{x}\in N(x)}h(\tilde{x})_{y}\). Rewriting (1) with our new notation, we have \(1-L^{*}(P,\mathcal{H}_{\text{soft}},N)=\sup_{h\in\mathcal{H}_{\text{soft}}}p^ {T}q_{N}(h)\). This is the maximization of a linear function over all possible vectors \(q_{N}(h)\). In fact, the convex hull of all correct classification probability vectors is a polytope and this optimization problem is a linear program, as described next.
**Definition 1**.: _For a soft classifier \(h\), the correct-classification probability achieved on an example \(v=(x,y)\) in the presence of an adversary with constraint \(N\) is \(q_{N}(h)_{v}=\inf_{\tilde{x}\in N(x)}h(\tilde{x})_{y}\)._
_The space of achievable correct classification probabilities is \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}}\subseteq[0,1]^{\mathcal{V}}\), defined as_
\[\mathcal{P}_{\mathcal{V},N,\mathcal{H}}=\bigcup_{h\in\mathcal{H}}\prod_{v\in \mathcal{V}}[0,q_{N}(h)_{v}]\]
In other words we say that \(q^{\prime}\in[0,1]^{\mathcal{V}}\) is achievable when there exists \(h\in\mathcal{H}\) such that \(q^{\prime}\leq q_{N}(h)\). The inequality appears because we will always take nonnegative linear combinations of correct classification probabilities.
Characterizing \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}}\) allows the minimum adversarial loss achievable to be expressed as an optimization problem with a linear objective:3
Footnote 3: For any loss function that is a decreasing function of \(h(\tilde{x})_{y}\), the optimal loss can be specified as an optimization over \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}}\). In fact (6) focused on cross-entropy loss, which has this property.
\[1-L^{*}(P,N,\mathcal{H}_{\text{soft}})=\sup_{h\in\mathcal{H}_{\text{soft}}} \mathbb{E}_{v\sim P}[q_{N}(h)_{v}]=\sup_{h\in\mathcal{H}_{\text{soft}}}p^{T}q _{N}(h)=\sup_{q\in\mathcal{P}_{\mathcal{V},N,\mathcal{H}}}p^{T}q. \tag{2}\]
(6) characterized \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{soft}}\) in the two-class case and demonstrated that this space can be captured by linear inequalities. We now demonstrate that this also holds for the multi-class setting.
### Linear Program to obtain Optimal Loss
In order to characterize \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{soft}}\), we represent the structure of the classification problem with a _conflict hypergraph_\(\mathcal{G}_{\mathcal{V},N}=(\mathcal{V},\mathcal{E})\), which records intersections between neighborhoods of points in \(\mathcal{X}\). The set of vertices \(V\) of \(\mathcal{G}_{\mathcal{V},N}\) is the support of \(P\). \(\mathcal{E}\) denotes the set of hyperedges of the graph. For a set \(S\subseteq\mathcal{V}\), \(S\in\mathcal{E}\) (i.e. \(S\) is a hyperedge in \(\mathcal{G}_{\mathcal{V},N}\)) if all vertices in \(S\) belong to different classes and the neighborhoods of all vertices in \(S\) overlap: \(\bigcap_{(x,y)\in S}N(x)\neq\emptyset\). Thus, the size of each hyperedge is at most \(K\), \(\mathcal{E}\) is downward-closed (meaning if \(e\in\mathcal{E}\) and \(e^{\prime}\subset e\), then \(e^{\prime}\subset\mathcal{E}\)), and every \(v\in\mathcal{V}\) is a degree 1 hyperedge.
Using the conflict hypergraph \(\mathcal{G}_{\mathcal{V},N}\), we can now describe \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{soft}}\).
**Theorem 1** (Feasible output probabilities (Adapted from (6))).: _The set of correct classification probability vectors for support points \(\mathcal{V}\), adversarial constraint \(N\), and hypothesis class \(\mathcal{H}_{soft}\) is_
\[\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{soft}}=\{q\in\mathbb{R}^{\mathcal{V}}: q\geq\mathbf{0},\;Bq\leq\mathbf{1}\} \tag{3}\]
_where \(B\in\mathbb{R}^{\mathcal{E}\times\mathcal{V}}\) is the hyperedge incidence matrix of the conflict hypergraph \(G_{\mathcal{V},N}\)._See Supplementary SSA for proof. This characterization of \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{\textit{soft}}}\) allows us to express optimal loss as a linear program for any dataset and attacker using Eq. 24.
Footnote 4: We note that concurrent work by Trillos et al. (2019) provides a general-purpose framework based on optimal transport to find lower bounds on robustness in the per-sample as well as distributional sense. They also independently construct the three-class minimal example from Figure 1.
**Corollary 1** (Optimal loss as an LP).: _For any distribution \(P\) with finite support,_
\[1-L^{*}(P,N,\mathcal{H}_{\textit{soft}})=\max_{q}p^{T}q\ \ \text{s.t }q\geq 0,\ Bq\leq 1. \tag{4}\]
Duality and adversarial strategies.The dual linear program is
\[\min_{z}\mathbf{1}^{T}z\quad\text{s.t.}\quad z\geq 0,\quad B^{T}z\geq p.\]
Feasible points in the dual linear program are fractional coverings of the vertices by the hyperedges (24). (See Section 3.4 and Supplementary SSB for more discussion of fractional packings and coverings.) An adversarial strategy can be constructed from a feasible point \(z\) as follows. For each hyperedge \(e\), chose an example \(\tilde{x}(e)\) such that \(\tilde{x}(e)\in N(x)\) for all \((x,y)\in e\). From the definition of the conflict hypergraph, a choice is always available. A randomized adversarial strategy consists of conditional distributions for the adversarial example \(\tilde{x}\) given the natural example \(x\). When the adversary samples the natural example \((x,y)\), the adversary can select \(\tilde{x}(e)\) with probability at least \(\frac{B_{e}vz_{e}}{p_{v}}\). Note that \(B_{e,v}z_{e}\) is the amount of coverage of \(v\) coming from \(e\). This is nonzero only for \(e\) that contain \((x,y)\). A conditional distribution satisfying these inequalities exists because \(\sum_{e}B_{e,v}z_{e}=(B^{T}z)_{v}\geq p_{v}\).
Thus for a vertex \(v\) such that \((B^{T}z)_{v}=p_{v}\), there is only one choice for this distribution. For a vertex \(v\) that is over-covered, i.e. \((B^{T}z)_{v}>p_{v}\), the adversary has some flexibility. If \(v\) is over-covered in some minimal cost covering, by complementary slackness \(q_{v}=0\) in every optimal \(q\), so the optimal classifiers do not attempt to classify \(v\) correctly.
Three-class minimal examples.Corollary 1 demonstrates that the optimal loss for the multi-class problem is influenced by hyperedges between vertices which reflect higher order interactions between examples. Figure 1 shows an important distinction between two types of three-way interactions of examples.
We have \(1-L^{*}(P,\mathcal{H}_{soft},N)=\max(p_{u},p_{v},p_{w},\frac{1}{2})\) while \(1-L^{*}(P^{\prime},\mathcal{H}_{soft},N)=\max(p^{\prime}_{u},p^{\prime}_{v},p ^{\prime}_{w})\). The presence or absence of the size-three hyperedge affects the optimal loss if and only if the example probabilities are close to balanced, i.e. all at most \(\frac{1}{2}\).
It is instructive to consider the optimal classifiers and adversarial strategies in the two cases. For \(\mathcal{V}\), when \(\frac{1}{2}\leq p_{u}\), the classifier \(h\) with \(q_{N}(h)=(1,0,0)\) is optimal. One such classifier is the constant classifier \(h(x)=(1,0,0)\). The optimal cover satisfies \(z_{\{u,v\}}+z_{\{u,w\}}=p_{u}\), \(z_{\{u,v\}}\geq p_{v}\), \(z_{\{u,w\}}\geq p_{w}\), \(z_{\{v,w\}}=0\). Thus when the adversary
\begin{table}
\begin{tabular}{|l|l|l|} \hline & **Summary of Method** & **Location** \\ \hline
**Optimal 0-1 loss** & LP on conflict hypergraph & §2.3 \\ \hline
**Lower bounds for optimal 0-1 loss** & LP on truncated conflict hypergraph & §3.1 \\ & Combining binary classification bounds & §3.2 \\ \hline
**Upper bound for optimal 0-1 loss** & Generalization of Caro-Wei bound & §3.5 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of methods for computing the optimal \(0-1\) loss and efficient bounds on this value.
Figure 1: Two possible conflict structures involving three examples, each from a different class. In the right case, all subsets of \(\mathcal{V}^{\prime}=\{u^{\prime},v^{\prime},w^{\prime}\}\) are hyperedges in the conflict hypergraph \(\mathcal{G}_{\mathcal{V}^{\prime},N}\). In the left case, \(\{u,v,w\}\) is not a hyperedge in \(\mathcal{G}_{\mathcal{V},N}\), but all other subsets of \(\mathcal{V}\) are.
samples \(v\) or \(w\), it always produces an adversarial example that could be confused for \(u\). When \(\max(p_{u},p_{v},p_{w})\leq\frac{1}{2}\), any classifier \(h\) with \(q_{N}(h)=(\frac{1}{2},\frac{1}{2},\frac{1}{2})\) is optimal. To achieve these correct classification probabilities, we need \(h(\tilde{x})=(\frac{1}{2},\frac{1}{2},0)\) for \(\tilde{x}\in N(u)\cap N(v)\), \(h(\tilde{x})=(\frac{1}{2},0,\frac{1}{2})\) for \(\tilde{x}\in N(u)\cap N(w)\), etc.. The cover \(z_{\{u,v\}}=p_{u}+p_{v}-\frac{1}{2}\), \(z_{\{u,w\}}=p_{u}+p_{w}-\frac{1}{2}\), and \(z_{\{v,w\}}=p_{v}+p_{w}-\frac{1}{2}\) is optimal and has cost \(\frac{1}{2}\). The adversary produces examples associated with all three edges.
For \(\mathcal{V}^{\prime}\), things are simpler. The cover \(z_{\{u,v,w\}}=\max(p_{u},p_{v},p_{w})\) is always optimal. When \(p_{u}\geq\max(p_{v},p_{w})\), the classifier that returns \((1,0,0)\) everywhere is optimal.
## 3 Bounding the Optimal 0-1 Loss
While Corollary 1 characterizes the optimal loss, it may be computationally expensive to construct conflict hypergraphs in practice for a given dataset and to solve the linear program. Thus, we discuss several methods of bounding the optimal loss from the LP in Corollary 1, which are computationally faster in practice (SS4.2).
### Lower bounds on multiclass optimal loss via truncated hypergraphs
The edge set of the hypergraph \(\mathcal{G}\) can be very large: there are \(\prod_{i\in[K]}(1+|V_{i}|)\) vertex sets that are potential hyperedges. Even when the size of the edge set is reasonable, it is not clear that higher order hyperedges can be computed from \(\mathcal{V}\) efficiently. To work around these issues, we consider hypergraphs with bounded size hyperedges: \(\mathcal{G}^{\leq m}=(\mathcal{V},\mathcal{E}^{\leq m})\) where \(\mathcal{E}^{\leq m}=\{e\in\mathcal{E}:|e|\leq m\}\). We refer to these hypergraphs as _truncated hypergraphs_. In the corresponding relaxation of (4), \(B\) is replaced by \(B^{\leq m}\), the incidence matrix for \(\mathcal{E}^{\leq m}\). Since \(\mathcal{E}^{\leq m}\subseteq\mathcal{E}\), this relaxation provides a lower bound on \(L^{*}(P,N,\mathcal{H}_{\text{soft}})\).
Classification with side-information.This relaxation has an interpretation as the optimal loss in a variation of the classification game with side information.
**Definition 2**.: _In the example-dependent side information game with list length \(m\), the adversary samples \((x,y)\sim P\), then selects \(\tilde{x}\) and \(C\subseteq\mathcal{Y}\) such that \(y\in C\) and \(|C|=m\). We call \(C\) the side information. The classifier receives both \(\tilde{x}\) and \(C\), so the classifier is a function \(h:\mathcal{X}\times\binom{\mathcal{Y}}{m}\rightarrow[0,1]^{K}\), where \(\binom{\mathcal{Y}}{m}\) is the set of \(m\)-element subsets of \(\mathcal{Y}\). Let_
\[L^{*}(m,P,\mathcal{H},N)=\inf_{h\in\mathcal{H}}\mathbb{E}_{(x,y)\sim P}\Big{[} \inf_{\tilde{x}\in N(x)}\min_{C\in\binom{\mathcal{Y}}{m}:y\in C}(1-h(\tilde{ x},C)_{y})\Big{]}\]
_be the minimum loss in this game._
To illustrate this, consider the distribution \(P^{\prime}\) from Figure 1 with \(m=2\). The adversary can select some \(\tilde{x}\in N(u^{\prime})\cap N(v^{\prime})\cap N(w^{\prime})\), but the classifier will use the side-information to eliminate one of the three classes. The classifier is in the same situation it would be if the distribution were \(P\) and the size-three hyperedge was absent.
**Definition 3**.: _For classifiers using class list side-information, the correct-classification probability is defined as follows: \(q_{m,N}(h)_{(x,y)}=\inf_{\tilde{x}\in N(x)}\min_{C\in\binom{|K|}{m}:y\in C}h( \tilde{x},C)_{y}\). The set of achievable correct-classification probabilities is \(\mathcal{P}_{m,\mathcal{V},N,\mathcal{H}}=\bigcup_{h\in\mathcal{H}}\prod_{v\in \mathcal{V}}[0,q_{m,N}(h)_{v}]\)._
When \(m=K\), the minimization over \(C\) is trivial and \(q_{m,N}(h)\) reduces to \(q_{N}(h)\).
**Theorem 2** (Feasible output probabilities in the side-information game).: _The set of correct classification probability vectors for side-information of size \(m\), support points \(\mathcal{V}\), adversarial constraint \(N\), and hypothesis class \(\mathcal{H}_{soft}\) is_
\[\mathcal{P}_{m,\mathcal{V},N,\mathcal{H}_{soft}}=\{q\in\mathbb{R}^{\mathcal{V }}:q\geq\mathbf{0},\;B^{\leq m}q\leq\mathbf{1}\} \tag{5}\]
_where \(B^{\leq m}\in\mathbb{R}^{\mathcal{E}\times\mathcal{V}}\) is the hyperedge incidence matrix of the conflict hypergraph \(G^{\leq m}_{\mathcal{V},N}\)._
The proof can be found in Supplementary SSA.
Using the feasible correct classification probabilities in Theorem 2, we can now write the LP for obtaining the optimal loss for classification with side-information:
**Corollary 2** (Optimal loss for classification with side information / truncation lower bound).: \[1-L^{*}(P,N,\mathcal{H}_{\text{soft}})=\max_{q}p^{T}q\ \ s.t\,q\geq 0,\ B^{\leq m}q\leq 1.\]
### Lower bounds on multiclass optimal loss via lower bounds for binary classification
For large training datasets and large perturbation sizes, it may still be computationally expensive to compute lower bounds via LP even when using truncated hypergraphs due to the large number of edge constraints. Prior works (5; 6) proposed methods of computing lower bounds for \(0-1\) loss for binary classification problems and demonstrate that their algorithm is more efficient than generic LP solvers. We now ask the question: _Can we use lower bounds for binary classification problems to efficiently compute a lower bound for multi-class classification?_
Consider the setting where we obtain the optimal \(0-1\) loss for all one-versus-one binary classification tasks. Specifically, for each \(C\in\binom{[K]}{2}\), the binary classification task for that class pair uses example distribution \(P|Y\in C\) and the corresponding optimal loss is \(L^{*}((P|Y\in C),N,\mathcal{H}_{soft})\). What can we say about \(L^{*}(P,N,\mathcal{H}_{soft})\) given these \(\binom{K}{2}\) numbers?
This question turns about to be related to another variation of classification with side information.
**Definition 4**.: _In the class-only side-information game, the adversary samples \(y\sim P_{y}\), then selects \(C\in\binom{[K]}{m}\), then samples \(x\sim P_{x|y}\) and selects \(\tilde{x}\in N(x)\). Let \(L^{*}_{\text{co}}(m,P,\mathcal{H},N)\) be the minimum loss in this game._
In the example-dependent side-information game from Section 3.1, the adversary's choice of \(C\) can depend on both \(x\) and \(y\). In the class-only variation it can only depend on \(y\). For the class only game, we will focus on the \(m=2\) case.
To make the connection to the binary games, we need to add one more restriction on the adversary's choice of side information: for all \(y,y^{\prime}\Pr[C=\{y,y^{\prime}\}|Y=y]=\Pr[C=\{y,y^{\prime}\}|Y=y^{\prime}]\). This ensures that the classifier's posterior for \(Y\) given \(C\) is \(\Pr[Y=y|C]=\Pr[Y=y]/\Pr[Y\in C]\).
**Theorem 3**.: _The optimal loss in the class-only side-information game is \(L^{*}_{\text{co}}(2,P,N,\mathcal{H})=\max_{s}\sum_{i,j}Pr[Y=i]a_{i,j}s_{i,j}\) where \(a_{i,j}=L^{*}(P|(y\in\{i,j\}),\mathcal{H},N)\) and \(s\in\mathbb{R}^{[K]\times[K]}\) is a symmetric doubly stochastic matrix: \(s\geq 0\), \(s=s^{T}\), \(s\mathbf{1}=\mathbf{1}\)._
The proof in is Supplementary SSA. The variable \(s\) represents the attacker's strategy for selecting the class side information. When the classes are equally likely, we have a maximum weight coupling problem: because the weights \(a\) are symmetric, the constraint that \(s\) be symmetric becomes irrelevant.
### Relationships between games and bounds
The side information games provide a collection of lower bounds on \(L^{*}(P,N,\mathcal{H}_{\text{soft}})\). When \(m=1\), the side information game becomes trivial: \(C=\{y\}\) and the side information contains the answer to the classification problem. Thus \(L^{*}(1,\mathcal{H}_{soft})=L^{*}_{\text{co}}(1,\mathcal{H}_{soft})=0\). When \(m=K\), \(C=\mathcal{Y}\) and both the example-dependent and class-only side information games are equivalent to the original game, so \(L^{*}(P,N,\mathcal{H})=L^{*}(K,P,N,\mathcal{H})=L^{*}_{\text{co}}(K,P,N, \mathcal{H})\). For each variation of the side-information game, the game becomes more favorable for the adversary as \(m\) increases: \(L^{*}(m,P,n,\mathcal{H})\leq L^{*}(m+1,P,N,\mathcal{H})\) and \(L^{*}_{\text{co}}(m,P,N,\mathcal{H})\leq L^{*}_{\text{co}}(m+1,P,N,\mathcal{H})\). For each \(m\), it is more favorable for the adversary to see \(x\) before selecting \(C\), _i.e._\(L^{*}_{\text{co}}(m,P,N,\mathcal{H})\leq L^{*}(m,P,N,\mathcal{H})\).
### Optimal Loss for Hard Classifiers
Since \(\mathcal{H}_{\text{hard}}\subset\mathcal{H}_{\text{soft}}\), \(L^{*}(m,P,N,\mathcal{H}_{\text{soft}})\leq L^{*}(P,N,\mathcal{H}_{\text{hard}})\). The optimal loss over hard classifiers is interesting both as a bound on the optimal loss over soft classifiers and as an independent quantity. Upper bounds on \(L^{*}(P,N,\mathcal{H}_{\text{hard}})\) can be combined with lower bounds from SS3.1 and SS3.2 using small values of \(m\) to pin down \(L^{*}(m,P,N,\mathcal{H}_{\text{soft}})\) and establish that larger choices of \(m\) would not provide much additional information.
A hard classifier \(h:\mathcal{X}\rightarrow\{0,1\}^{[K]}\) has \(0,1\)-valued correct classification probabilities. When we apply the classifier construction procedure from the proof of Theorem 1 using an integer-valued\(q\) vector, we obtain a hard classifier. Thus the possible correct classification probabilities for hard classifiers are \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{soft}}\cap\{0,1\}^{[K]}\). These are exactly the indicator vectors for the independent sets in \(\mathcal{G}^{\leq 2}\): the vertices included in the independent set are classified correctly and the remainder are not. Formally, we can express hard classifier loss as:
\[1-L^{*}(P,N,\mathcal{H}_{\text{hard}})=\max_{S\subseteq\mathcal{V}:S\text{ independent in }\mathcal{G}^{\leq 2}}P(S). \tag{6}\]
Finding the maximum weight independent set is an NP hard problem, which makes it computationally inefficient to compute optimal hard classifier loss.
Two-class versus Multi-class hard classificationThere are a number of related but distinct polytopes associated with the vertices of a hypergraph (24). The distinctions between these concepts explain some key differences between two-class and multi-class adversarial classification. See Supplementary SSB for full definitions of these polytopes.
When \(K=2\), the conflict hypergraph is a bipartite graph. For bipartite graphs, the fractional vertex packing polytope, which has a constraint \(\sum_{i\in e}q_{i}\leq 1\) for each edge \(e\), coincides with the independent set polytope, which is the convex hull of the independent set indicators. Due to this, in the two class setting, \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{soft}}\) is the convex hull of \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{hard}}\), hard classifiers achieve the optimal loss, and optimal hard classifiers can be found efficiently.
As seen in Theorem 1, for all \(K\) the fractional vertex packing polytope characterizes performance in soft classification problem. However, for \(K>2\), it becomes distinct from the independent set polytope. An independent set in a hypergraph is a subset of vertices that induces no hyperedges. In other words, in each hyperedge of size \(m\), at most \(m-1\) vertices can be included in any independent set. Because the edge set of the conflict hypergraph is downward-closed, only the size-two hyperedges provide binding constraints: the independent sets in \(\mathcal{G}\) are the same as the independent sets in \(\mathcal{G}^{\leq 2}\). Thus the concept of a hypergraph independent set is not truly relevant for our application.
There is a third related polytope: the fractional independent set polytope of \(\mathcal{G}^{\leq 2}\), which has a constraint \(\sum_{i\in S}q_{i}\leq 1\) for each clique \(S\) in \(\mathcal{G}^{\leq 2}\). The fractional independent set polytope of \(\mathcal{G}^{\leq 2}\) is contained in the fractional vertex packing polytope of \(\mathcal{G}\): every hyperedge in \(\mathcal{G}\) produces a clique in \(\mathcal{G}^{\leq 2}\) but not the reverse. This inclusion could be used to find an upper bound on optimal soft classification loss.
Furthermore, when \(K>2\) the fractional vertex packing polytope of the conflict hypergraph, i.e. \(\mathcal{P}_{\mathcal{V},N,\mathcal{H}_{soft}}\), can have non-integral extreme points and thus can be strictly larger than the independent set polytope. The first configuration in Figure 1 illustrates this. Thus the soft and hard classification problems involve optimization over different spaces of correct classification probabilities. Furthermore, maximum weight or even approximately maximum weight independent sets cannot be efficiently found in general graphs: the independent set polytope is not easy to optimize over.
In Section 3.5, we will use an efficiently computable lower bound on graph independence number.
### Upper bounds on hard classifier loss via Caro-Wei bound on independent set probability
In SS3.1 and 3.2, we discussed 2 ways of obtaining lower bounds for the loss of soft classifiers for the multi-class classification problem. In this section, we provide an upper bound on the loss of the optimal hard classifier (we note that this is also an upper bound for optimal loss for soft classifiers). In SS3.4, we discussed the relationship between optimal loss achievable by hard classifiers and independent set size. We upper bound the optimal loss of hard classifiers by providing a lower bound on the probability of independent set in the conflict graph.
The following theorem is a generalization of the Caro-Wei theorem (1) and gives a lower bound on the weight of the maximum weight independent set.
**Theorem 4**.: _Let \(\mathcal{G}\) be a graph on \(\mathcal{V}\) with adjacency matrix \(A\in\{0,1\}^{\mathcal{V}\times\mathcal{V}}\) and let \(P\) be a probability distribution on \(\mathcal{V}\). For any \(w\in\mathbb{R}^{\mathcal{V}}\), \(w\geq 0\), there is some independent set \(S\subseteq\mathcal{V}\) with_
\[P(S)\geq\sum_{v\in\mathcal{V}:w_{v}>0}\frac{p_{v}w_{v}}{((A+I)w)_{v}}.\]
The proof is in Supplementary SSA.
For comparison, the standard version of the Caro-Wei theorem is a simple lower bound on the independence number of a graph. It states that \(\mathcal{G}\) contains an independent set \(S\) with \(|S|\geq\sum_{v\in\mathcal{V}}1/(d_{v}+1)\) where \(d_{v}\) is the degree of vertex \(v\).
Note that if \(w\) is the indicator vector for an independent set \(S^{\prime}\), the bound becomes \(p^{T}w=P(S^{\prime})\). In general, the proof of Theorem 4 can be thought of as a randomized procedure for rounding an arbitrary vector into an independent set indicator vector. Vectors \(w\) that are nearly independent set indicators yield better bounds.
Theorem 4 provides a lower bound on the size of the maximum independent set in \(\mathcal{G}^{\leq 2}\) and thus an upper bound on \(L^{*}(P,n,\mathcal{H}_{hard})\), which we call \(L_{CW}=1-P(S)\).
## 4 Empirical Results
In this section, we compute optimal losses in the presence of an \(\ell_{2}\)-constrained attacker (12) at various strengths \(\epsilon\) for benchmark computer vision datasets like MNIST and CIFAR-10 in a \(3\)-class setting. We compare these optimal losses to those obtained by state-of-the-art adversarially trained classifiers, showing a large gap. We then compare the optimal loss in the \(10\)-class setting to its upper and lower bounds (SS3.3), showing matching bounds at lower \(\epsilon\). In the Supplementary, SSC describes hyperedge finding in practice, SSD details the experimental setup and SSE contains additional results.
### Optimal loss for 3-class problems
Corollary 1 allows us to compute the optimal loss given any dataset (and its corresponding conflict hypergraph). We compute the optimal loss for 3-way classification, due to the computational complexity of finding higher order hyperedges (see SSD.4 of the Supp.). In Figure 2, we plot the optimal loss \(L^{*}(3)\) computed via the LP in Corollary 1 against the loss obtained through TRADES (31) for 3-class MNIST (classes '1', '4', '7') and 3-class CIFAR-10 ('plane', 'bird' and'ship' classes) with 1000 samples per class 5. For MNIST, we train a 3 layer CNN for 20 epochs with TRADES regularization strength \(\beta=1\), and for CIFAR-10, we train a WRN-28-10 model for 100 epochs with \(\beta=6\). We evaluate models using APGD-CE from AutoAttack (9).
Footnote 5: We also used PGD adversarial training (19) but found its performance to be worse (See Appendix D.6)
From Figure 2, we observe that this gap is quite large even at lower values of \(\epsilon\). This indicates that there is considerable progress to be made for current robust training methods, and this gap may be due to either the expressiveness of the hypothesis class or problems with optimization. We find that for CIFAR-10, TRADES is unable to achieve loss much better than 0.6 at an \(\epsilon\) for which the optimal loss is near 0. This gap is much larger than observed by prior work (5; 6) for binary classification, suggesting that _current robust training techniques struggle more to fit training data with more classes_. In SSD.8 of the Supp., we ablate over larger architectures, finding only small improvements at lower values of \(\epsilon\) and none at higher.
Figure 2: Optimal error for MNIST and CIFAR-10 3-class problems (\(L^{*}(3)\)). \(L^{*}(2)\) is a lower bound computed using only constraints from edges. \(AT\) is the loss for an adversarially trained classifier under the strong APGD attack (9).
### Bounds on optimal loss for 10-class problems
As the number of classes and dataset size increases, the difficulty of solving the LP in Corollary 1 increases to the point of computational infeasibility (see SSD.4 of the Supp.). We use the methods discussed in Section 3 to bound the optimal loss for 10-class problems on MNIST and CIFAR-10 datasets on the full training dataset. We present results for each approximation in Figure 3, with the limitation that for some methods, we are only able to obtain results for smaller values of \(\epsilon\) due to runtime blowup. We provide truncated hypergraph bounds for CIFAR-100 in SSD.2 of the Supp.
**Lower bounding the optimal loss using truncated hypergraphs (SS3.1):** In Figure 3, we plot the loss lower bound obtained via by truncating the hypergraph to consider only edges (\(L^{*}(2)\)), up to degree 3 hyperedges (\(L^{*}(3)\)), and up to degree 4 hyperedges (\(L^{*}(4)\)). Computing the optimal loss would require computing degree 10 hyperedges. However, we find at small values of \(\epsilon\), there is little difference in these bounds despite the presence of many higher degree hyperedges. This indicates that the use of use of higher degree hyperedges may not be critical to get a reasonable estimate of the optimal loss. For example, for CIFAR-10 at \(\epsilon=3\), we observe 3M degree 3 hyperedges and 10M degree 4 hyperedges, but these constraints have no impact on the computed lower bound. To understand the impact of hyperedges, we provide the count of hyperedges for each value of \(\epsilon\) and plots of the distribution of optimal classification probabilities per vertex in SSD.3 of the Supp. From Figure 2, we find that the difference \(L^{*}(2)\) and \(L^{*}(3)\) does not occur until the loss reaches above 0.4 for the 3-class problem.
_Takeaway:_ In practice, we _do not lose information from computing lower bounds with only edges in the conflict hypergraph_.
**Lower bounding the optimal loss using the \(1\)v\(1\) binary classification problems (SS3.2):** We can use the algorithm from Bhagoji et al.(6) to efficiently compute \(1\)v\(1\) pairwise optimal losses to find a lower bound on the 10-class optimal loss (\(L^{*}_{CO}(2)\)). From Theorem 3, we use maximum weight coupling over these optimal losses to find a lower bound. Optimal loss heatmaps for each pair of classes in MNIST and CIFAR-10 are in SSD.5 of the Supp. The efficiency of the \(1\)v\(1\) computation allows us to compute lower bounds at larger values of \(\epsilon\) in Figure 3.
_Takeaway:_ From Figure 3, we find that _while this lower bound is the most efficient to compute, the obtained bound is much looser compared to that from truncated hypergraphs._ This arises from the weak attacker assumed while computing this bound.
**Upper bounding optimal loss via Caro-Wei approximation (SS3.5):** In Figure 3, we also plot the upper bound on \(0-1\) loss for hard classifiers (denoted by \(L_{CW}\)) obtained via applying Theorem 4 with vertex weights obtained from the solution to \(L^{*}(2)\). When \(\epsilon\) becomes large (\(\epsilon\geq 3.0\) for MNIST and \(\epsilon\geq 4.5\) for CIFAR-10), the loss upper bound increases sharply. This indicates the lower bound on the independent set size becomes looser as the number of edges increases, due to the importance of higher-order interactions that are not captured by the approximation. At the small values of \(\epsilon\) used in practice however, the lower bounds obtained through truncated hypergraphs (\(L^{*}(2)\), \(L^{*}(3)\), and \(L^{*}(4)\)) are close to the value of this upper bound.
Figure 3: Lower bounds on the exact optimal \(10\)-class loss using hyperedges up to degree 2 (\(L^{*}(2)\)), 3 (\(L^{*}(3)\)) and 4 (\(L^{*}(4)\)), as well as maximum weight coupling of pairs of binary \(0-1\) loss lower bounds (\(L^{*}_{\text{co}}(2)\)). \(L_{CW}\) is an upper bound from the Caro-Wei approximation of the independent set number. The region in grey represents the range of values we would expect the true optimal loss \(L^{*}(10)\) to fall under.
_Takeaways:_ (i) We do not lose much information from not including all hyperedges at small \(\epsilon\) values as _the upper and lower bounds are almost tight_; (ii) At these small values of \(\epsilon\), _we do not expect much difference in performance between hard classifiers and soft classifiers._
**Comparing loss of trained classifiers to optimal:** In Figure 3, we also compare the loss obtained by a robustly trained classifier (AT) to our bounds on optimal loss. For both datasets, we see a large gap between the performance of adversarial training and our bounds (including the upper bound from the Caro-Wei approximation \(L_{CW}\)), even at small values of \(\epsilon\). This suggests that current robust training techniques are currently unable to optimally fit the training data in multiclass classification tasks of interest. In addition, we also checked the performance of state-of-the-art verifiably robust models on these two datasets from a leaderboard (18). For MNIST, the best certifiably robust model has a \(0-1\) loss of \(0.27\) at a budget of \(1.52\) and \(0.44\) at a budget of \(2.0\), while for CIFAR-10, the best certifiably robust model has a \(0-1\) loss of \(0.6\) at a budget of \(1.0\) and \(0.8\) at a budget of \(2.0\). These are much higher than the optimal lower bound that is achievable for these datasets which is \(0\) in all these cases.
_Takeaway:_ Performance of state-of-the-art robust models, both empirical and verifiable, _exhibit a large gap from the range of values predicted by our bounds on optimal \(0-1\) loss, even when \(\epsilon\) is small._ Future research focusing on developing algorithms to decrease this gap while maintaining generalization capabilities may lead to improvements in model robustness.
## 5 Discussion and Related Work
**Related Work:** When the data distribution satisfies certain properties, Dohmatob (11) and Mahloujifar _et al._ (20) use the 'blowup' property to determine bounds on the robust loss, given some level of loss on benign data. We note that these papers use a different loss function that depends on the original classification output on benign data, thus their bounds are not comparable. Bhagoji _et al._ (5; 6), and Pydi _et al._ (22) provide lower bounds on robust loss when the set of classifiers under consideration is all measurable functions. These works _only provide bounds in the binary classification setting_. Work on verifying robustness (7; 28; 13; 18) provides bounds on the robustness of specific classifiers. Yang _et al._ (30) independently introduced the concept of a 'conflict graph' to obtain robust non-parametric classifiers via an adversarial pruning defense. The closest related work to ours is Trillos _et al._ (29), which uses optimal transport theory to find lower bounds on multi-class classification that are applicable for both continuous and discrete distributions. While their theoretical bounds are exact and more general than ours, accounting for distributional adversaries, their numerically computed bounds via the Sinkhorn algorithm are approximate and converge to the true value only as the entropy regularization decreases. In contrast, we provide methods to directly compute the optimal loss for discrete distributions, along with efficient methods for determining its range to overcome the computational bottlenecks encountered.
**Discussion and Limitations:** Our work in this paper firmly establishes for the multi-class case what was known only in the binary setting before: _there exists a large gap in the performance of current robust classifiers and the optimal classifier_. It also provides methods to bound the loss efficiently in practice, giving practitioners quick means to determine the gap. The question then arises: _why does this gap arise and how can we improve training to decrease this gap?_ This paper, however, does not tackle the problem of actually closing this gap. Possible methods include increasing the architecture size (26), using additional data (15) and using soft-labels (6). A surprising finding from our experiments was that the addition of hyperedges to the multi-way conflict graph did not change the lower bounds much, indicating we are in a regime where multi-way intersections minimally impact optimal probabilities. One major limitation of our work is the computational expense at larger budgets, sample sizes and class sizes. We suspect this is due to the general-purpose nature of the solvers we use and future work should look into developing custom algorithms to speed up the determination of lower bounds.
## Acknowledgments and Disclosure of Funding
ANB, WD and BYZ were supported in part by NSF grants CNS-2241303, CNS-1949650, and the DARPA GARD program. SD was supported in part by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039656. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. SD and PM were supported in part by the National Science Foundation under grant CNS-2131938, the ARL's Army Artificial Intelligence Innovation Institute (A2I2), Schmidt DataX award, and Princeton E-filiates Award.
## References
* Alon and Spencer [2016] N. Alon and J. H. Spencer. _The probabilistic method_. John Wiley & Sons, 2016.
* Andersen et al. [2013] M. S. Andersen, J. Dahl, and L. Vandenberghe. Cvxopt: Python software for convex optimization, 2013.
* ApS [2019] M. ApS. _MOSEK Optimizer API for Python 10.0.22_, 2019. URL [https://docs.mosek.com/10.0/pythonapi/index.html](https://docs.mosek.com/10.0/pythonapi/index.html).
* Bhagoji et al. [2017] A. N. Bhagoji, W. He, B. Li, and D. Song. Exploring the space of black-box attacks on deep neural networks. _arXiv preprint arXiv:1712.09491_, 2017.
* Bhagoji et al. [2019] A. N. Bhagoji, D. Cullina, and P. Mittal. Lower bounds on adversarial robustness from optimal transport. In _Advances in Neural Information Processing Systems_, pages 7496-7508, 2019.
* Bhagoji et al. [2021] A. N. Bhagoji, D. Cullina, V. Sehwag, and P. Mittal. Lower bounds on cross-entropy loss in the presence of test-time adversaries. In _Proceedings of the 38th International Conference on Machine Learning_, 2021.
* Bunel et al. [2017] R. Bunel, I. Turkaslan, P. H. Torr, P. Kohli, and M. P. Kumar. A unified view of piecewise linear neural network verification. _arXiv preprint arXiv:1711.00455_, 2017.
* Carlini and Wagner [2016] N. Carlini and D. A. Wagner. Towards evaluating the robustness of neural networks. _CoRR_, abs/1608.04644, 2016. URL [http://arxiv.org/abs/1608.04644](http://arxiv.org/abs/1608.04644).
* Croce and Hein [2020] F. Croce and M. Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In _International Conference on Machine Learning_, pages 2206-2216. PMLR, 2020.
* Cullina et al. [2018] D. Cullina, A. N. Bhagoji, and P. Mittal. Pac-learning in the presence of adversaries. In _Advances in Neural Information Processing Systems_, pages 230-241, 2018.
* Dohmatob [2019] E. Dohmatob. Generalized no free lunch theorem for adversarial robustness. In _Proceedings of the 36th International Conference on Machine Learning_, pages 1646-1654, 2019.
* Goodfellow et al. [2015] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In _International Conference on Learning Representations_, 2015.
* Gowal et al. [2018] S. Gowal, K. Dvijotham, R. Stanforth, R. Bunel, C. Qin, J. Uesato, T. Mann, and P. Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. _arXiv preprint arXiv:1810.12715_, 2018.
* Gowal et al. [2020] S. Gowal, C. Qin, J. Uesato, T. Mann, and P. Kohli. Uncovering the limits of adversarial training against norm-bounded adversarial examples. _arXiv preprint arXiv:2010.03593_, 2020.
* Gowal et al. [2021] S. Gowal, S. Rebuffi, O. Wiles, F. Stimberg, D. A. Calian, and T. A. Mann. Improving robustness using generated data. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan, editors, _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_, pages 4218-4233, 2021. URL [https://proceedings.neurips.cc/paper/2021/hash/21ca6d0cf2f25c4dbb35d8dc0b679c3f-Abstract.html](https://proceedings.neurips.cc/paper/2021/hash/21ca6d0cf2f25c4dbb35d8dc0b679c3f-Abstract.html).
* Krizhevsky and Hinton [2009] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
* LeCun and Cortes [1998] Y. LeCun and C. Cortes. The MNIST database of handwritten digits. 1998.
* Li et al. [2023] L. Li, T. Xie, and B. Li. Sok: Certified robustness for deep neural networks. In _2023 IEEE Symposium on Security and Privacy (SP)_, pages 1289-1310. IEEE, 2023.
* Madry et al. [2018] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In _ICLR_, 2018.
* Mahloujifar et al. [2019] S. Mahloujifar, D. I. Diochnos, and M. Mahmoody. The curse of concentration in robust learning: Evasion and poisoning attacks from concentration of measure. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 33, pages 4536-4543, 2019.
* Montasser et al. [2019] O. Montasser, S. Hanneke, and N. Srebro. Vc classes are adversarially robustly learnable, but only improperly. _arXiv preprint arXiv:1902.04217_, 2019.
* Pydi and Jog [2020] M. S. Pydi and V. Jog. Adversarial risk via optimal transport and optimal couplings. In _Proceedings of the 37th International Conference on Machine Learning_, pages 7814-7823, 2020.
* Pydi and Jog [2022] M. S. Pydi and V. Jog. The Many Faces of Adversarial Risk, Jan. 2022. URL [http://arxiv.org/abs/2201.08956](http://arxiv.org/abs/2201.08956). arXiv:2201.08956 [cs, math, stat].
* Scheinerman and Ullman [1997] E. R. Scheinerman and D. H. Ullman. _Fractional Graph Theory: A Rational Approach to the Theory of Graphs_. Wiley, Sept. 1997. ISBN 978-0-471-17864-4. Google-Books-ID: KujuAAAAAAJ.
* Schmidt et al. [2018] L. Schmidt, S. Santurkar, D. Tsipras, K. Talwar, and A. Madry. Adversarially robust generalization requires more data. _arXiv preprint arXiv:1804.11285_, 2018.
* Sehwag et al. [2022] V. Sehwag, S. Mahloujifar, T. Handina, S. Dai, C. Xiang, M. Chiang, and P. Mittal. Robust learning meets generative models: Can proxy distributions improve adversarial robustness? In _The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022_. OpenReview.net, 2022. URL [https://openreview.net/forum?id=WXONVNBBKv](https://openreview.net/forum?id=WXONVNBBKv).
* Szegedy et al. [2013] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. _arXiv preprint arXiv:1312.6199_, 2013.
* Tjeng et al. [2019] V. Tjeng, K. Y. Xiao, and R. Tedrake. Evaluating robustness of neural networks with mixed integer programming. In _ICLR_, 2019.
* Trillos et al. [2023] N. G. Trillos, M. Jacobs, and J. Kim. The multimarginal optimal transport formulation of adversarial multiclass classification. _Journal of Machine Learning Research_, 24(45):1-56, 2023.
* Yang et al. [2020] Y.-Y. Yang, C. Rashtchian, Y. Wang, and K. Chaudhuri. Robustness for non-parametric classification: A generic attack and defense. In _International Conference on Artificial Intelligence and Statistics_, pages 941-951. PMLR, 2020.
* Zhang et al. [2019] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan. Theoretically principled trade-off between robustness and accuracy. _arXiv preprint arXiv:1901.08573_, 2019.
This Supplementary Material accompanies paper: _Characterizing the Optimal \(0-1\) Loss for Multi-class Classification with a Test-time Attacker_ to NeurIPS 2023.
We first present proofs for theorems from the main body in SSA. We then explain fractional packing and covering in SSB. We present our algorithm for hyperedge finding in SSC. SSD contains further details about the experimental setup with all additional experimental results in SSE.
## Appendix A Proofs
Proof of Theorem 1.: This follows immediately from Theorem 2 with \(m=K\).
Proof of Theorem 2.: Recall that the definition of the correct-classification probabilities is
\[q_{m,N}(h)_{(x,y)}=\inf_{\tilde{x}\in N(x)}\min_{C\in\left(\begin{subarray}{c }[K]\\ \infty\end{subarray}\right):y\in C}h(\tilde{x},C)_{y}\]
and the set of achievable correct-classification probabilities is
\[\mathcal{P}_{\mathcal{V},N,\mathcal{H}}=\bigcup_{h\in\mathcal{H}}\prod_{v\in \mathcal{V}}[0,q_{N}(h)_{v}].\]
First, we will show \(\mathcal{P}_{m,\mathcal{V},N,\mathcal{H}_{soft}}\subseteq\{q\in\mathbb{R}^{ \mathcal{V}}:q\geq\mathbf{0},\,B^{\leq m}q\leq\mathbf{1}\}\). The constraint \(h\in\mathcal{H}_{soft}\), \(\mathbf{0}\leq q_{m,N}(h)\leq\mathbf{1}\) holds because classification probabilities \(h(\tilde{x},C)_{y}\) must lie in the range \([0,1]\).
We will now demonstrate that the constraint \(B^{\leq m}q\leq\mathbf{1}\) must also hold. Let \(e=((x_{1},y_{1}),...,(x_{\ell},y_{\ell}))\) be a size-\(\ell\) hyperedge in \(\mathcal{E}^{(\leq m)}\). By construction of \(\mathcal{E}^{(\leq m)}\), there exists some \(\tilde{x}\in\bigcap_{i=1}^{\ell}N(x_{i})\). Let \(S\in\binom{[K]}{m}\) be some superset of \(\{y_{1},\ldots,y_{\ell}\}\), which exists because \(\ell\leq m\). From the definition of \(q_{m,N}(h)\), we have that \(q_{m,N}(h)_{(x_{i},y_{i})}\leq h(\tilde{x},S)_{y_{i}}\) for each \(1\leq i\leq\ell\). Thus,
\[\sum_{i=1}^{\ell}q_{m,N}(h)_{(x_{i},y_{i})}\leq\sum_{i=1}^{\ell}h(\tilde{x},S)_ {y_{i}}\leq\sum_{j\in\mathcal{Y}}h(\tilde{x},S)_{j}=1.\]
This gives \((B^{\leq m}q)_{e}\leq 1\).
Now we will show \(\mathcal{P}_{m,\mathcal{V},N,\mathcal{H}_{soft}}\supset\{q\in\mathbb{R}^{ \mathcal{V}}:q\geq\mathbf{0},\,B^{\leq m}q\leq\mathbf{1}\}\).
For any vector \(q\) in the polytope, we have a classifier \(h:\mathcal{X}\times\binom{[K]}{m}\rightarrow\mathbb{R}^{[K]}\) that achieves at least those correct classification probabilities. This mean that \(h\) has the following properties. First, \(h(\tilde{x},L)_{y}\geq 0\) and \(\sum_{y\in[K]}h(\tilde{x},L)_{y}=1\). Second, for all \((x,y)\in\mathcal{V}\), all \(\tilde{x}\in N(x)\), and all \(L\in\binom{[K]}{m}\) such that \(y\in L\), we have \(h(\tilde{x},L)_{y}\geq q_{(x,y)}\).
To get \(h\), first define the function \(g:\mathcal{X}\times\binom{[K]}{m}\rightarrow\mathbb{R}^{[K]}\) so \(g(\tilde{x},L)_{y}=0\) for \(i\not\in L\) and \(g(\tilde{x},L)_{y}=\max(0,\sup\{q_{(x_{y},y)}:x_{y}\in\mathcal{V}_{y},\tilde {x}\in N(x_{y})\})\). Let \(L^{\prime}\subseteq L\) be the set of indices where \(g(\tilde{x},L)_{y}>0\). Then any list of vertices \(e=(x_{y}:y\in L^{\prime},x_{y}\in\mathcal{V}_{y},\tilde{x}\in N(x_{y}))\) forms a hyperedge of size \(|L^{\prime}|\leq m\). Thus
\[\sum_{y\in[K]}g(\tilde{x},L)_{y}=\sum_{y\in L^{\prime}}g(\tilde{x},L)_{y}=\sup _{e}\sum_{y\in L^{\prime}}q_{(x_{y},y)}\leq\sup_{e}1=1.\]
To produce \(h\), allocate the remaining probability (\(1-\sum_{y}g(\tilde{x},L)_{y}\)) to an arbitrary class.
Proof of Theorem 3.: The first part of this proof applies for any side-information size \(m\). The adversarial strategy for selecting \(C\) is a specified by a conditional p.m.f. \(p_{C|y}(C|y)\). Thus \(p_{y|C}(y|C)=p_{C|y}(C|y)p_{\mathbf{y}}(y)/\sum_{y^{\prime}}p_{C|y}(C|y^{ \prime})p_{y}(y^{\prime})\).
The optimal loss of the classifier against a particular adversarial strategy is just a mixture of the optimal losses for each class list: \(\sum_{C}p_{C|y}(C|y)\Pr[p_{y}(y)L^{*}(P|(y\in\{i,j\}),N,\mathcal{H})\).
If \(p_{C|y}(C|y)=p_{C|y}(C|y^{\prime})\) for all \(y,y^{\prime}\in C\), then \(p_{y|C}(y|C)=p_{\mathbf{y}}(y)/\sum_{y^{\prime}\in C}p_{\mathbf{y}}(y^{\prime})\) and the adversary has not provided the classifier with extra information beyond the fact that \(y\in C\). Thus \(P_{x|y}P_{y|C}=P|(y\in C)\).
Now we can spcialize to the \(m=2\) case. Any stochastic matrix \(s\) with zeros on the diagonal specifies an adversarial strategy for selecting \(C\) with \(p_{C|y}(\{i,j\}|i)=s_{i,j}\). Furthermore, if \(s\) is also symmetric, \(p_{C|y}(\{i,j\}|i)=p_{C|y}(\{i,j\}|j)\) and \(p_{y|C}(i|\{i,j\})=p_{y|C}(j|\{i,j\})\). Then the optimal classifier for the side-information game uses the \(\binom{K}{2}\) optimal classifiers for the two-class games and incurs loss \(\sum_{i,j}Pr[Y=i]a_{i,j}s_{i,j}\) where \(a_{i,j}=L^{*}(P|(y\in\{i,j\}),\mathcal{H},N)\). Because the diagonal entries of \(a\) are all zero, there is always a maximizing choice of \(s\) with a zero diagonal. Thus it is not necessary to include that constraint on \(s\) when specifying the optimization.
Proof of Theorem 4.: If \(w=0\), then the lower bound is zero and holds trivially. Otherwise, \(\frac{1}{1^{T}w}w\) forms a probability distribution over the vertices. Let \(X\in\mathcal{V}^{\mathbb{N}}\) be a sequence of i.i.d. random vertices with this distribution. From this sequence, we define a random independent set as follows. Include \(v\) in the set if it appears in the sequence \(X\) before any of its neighbors in \(\mathcal{G}\). If \(v\) and \(v^{\prime}\) are adjacent, at most one of them can be included, so this procedure does in fact construct an independent set. The probability that \(X_{i}=v\) is \(\frac{w_{i}}{1^{T}w}\) and the probability that \(X_{i}\) is \(v\) or is adjacent to \(v\) is \(\frac{((A+I)w)_{v}}{1^{T}w}\). The first time that the latter event occurs, \(v\) is either included in the set or ruled out. If \(w_{i}^{T}>0\), the probability that \(v\) is included in the set is \(\frac{w_{v}}{((A+I)w)_{v}}\) and otherwise it is zero. Thus the quantity \(P(S\) in Theorem 4 is the expected size of the random independent set and \(\mathcal{G}\) must contain some independent set at least that large.
## Appendix B Fractional packing and covering
In this section, we record some standard definition in fractional graph and hypergraph theory (24).
Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be hypergraph and let \(B\in\mathbb{R}^{\mathcal{E}\times\mathcal{V}}\) be its edge-vertex incidence matrix. That is, \(B_{e,v}=1\) if \(v\in e\) and \(B_{e,v}=0\) otherwise. Let \(p\in\mathbb{R}^{\mathcal{V}}\) be a vector of nonnegative vertex weights. Let \(\mathbf{0}\) be the vector of all zeros and let \(\mathbf{1}\) be the vector of all ones (of whichever dimensions are required).
A fractional vertex packing in \(\mathcal{G}\) is a vector \(q\in\mathbb{R}^{\mathcal{V}}\) in the polytope
\[q\geq\mathbf{0}\qquad Bq\leq\mathbf{1}.\]
The minimum weight fractional vertex packing linear program is
\[\min p^{T}q\quad\text{s.t.}\quad q\geq\mathbf{0}\qquad Bq\leq\mathbf{1}.\]
A fractional hyperedge covering in \(\mathcal{G}\) for weights \(p\) is a vector \(z\in\mathbb{R}^{\mathcal{V}}\) in the polytope
\[q\geq\mathbf{0}\qquad B^{T}z\geq p.\]
The minimum weight fractional hyperedge covering linear program is
\[\min_{z}\mathbf{1}^{T}z\quad\text{s.t.}\quad z\geq 0,\quad B^{T}z\geq p.\]
These linear programs form a dual pair.
The independent set polytope for \(\mathcal{G}\) is the convex hull of the independent set indicator vectors.
Let \(\mathcal{G}^{\prime}\) be a graph on the vertex set \(\mathcal{V}\). A clique in \(\mathcal{G}^{\prime}\) is a subset of vertices in which every pair are adjacent. Let \(\mathcal{C}\) be the set of cliques of \(\mathcal{G}^{\prime}\) and let \(C\in\mathbb{R}^{\mathcal{C}\times\mathcal{V}}\) be the clique vertex incidence matrix. (In fact this construction can be interpreted as another hypergraph on \(\mathcal{V}\).) A fractional clique cover in \(\mathcal{G}^{\prime}\) for vertex weights \(p\) is a vector \(z\in\mathbb{R}^{\mathcal{C}}\) in the polytope
\[q\geq\mathbf{0}\qquad C^{T}z\geq p.\]
Somewhat confusingly, the dual concept is called a fractional independent set in \(\mathcal{G}^{\prime}\). This is a vector \(q\in\mathbb{R}^{\mathcal{V}}\) in the polytope
\[q\geq\mathbf{0}\qquad Cq\leq\mathbf{1}.\]
## Appendix C Hyperedge Finding
One challenge in computing lower bounds for \(0-1\) loss in the multi-class setting is that we need to find hyperedges in the conflict hypergraph. In this section, we will consider an \(\ell_{2}\) adversary: \(N(x)=\{x^{\prime}\in\mathcal{X}|\ ||x^{\prime}-x||_{2}\leq\epsilon\}\) and describe an algorithm for finding hyperedges within the conflict graph.
We first note that for an \(n\)-way hyperedge to exist between \(n\) inputs \(\{x_{i}\}_{i=1}^{n}\), \(\{x_{i}\}_{i=1}^{n}\) must all lie on the interior of an \(n-1\)-dimensional hypersphere of radius \(\epsilon\).
Given input \(x_{1}\),..., \(x_{n}\) where \(x_{i}\in\mathbb{R}^{d}\), we first show that distance between any two points in the affine subspace spanned by the inputs can be represented by a distance matrix whose entries are the squared distance between inputs. This allows us to compute the circumradius using the distance information only, not requiring a coordinate system in high dimension. Then we find the circumradius using the properties that the center of the circumsphere is in the affine subspace spanned by the inputs and has equal distance to all inputs.
We construct matrix \(X\in\mathbb{R}^{d\times n}\) whose \(i^{th}\) column is input \(x_{i}\). Let \(D\in\mathbb{R}^{n\times n}\) be the matrix of squared distances between the inputs, i.e., \(D_{i,j}=\|x_{i}-x_{j}\|^{2}\).
We first notice that \(D\) can be represented by \(X\) and a vector in \(\mathbb{R}^{n}\) whose \(i^{th}\) entry is the squared norm of \(x_{i}\). Let \(\Delta\in\mathbb{R}^{n}\) be such vector such that \(\Delta_{i}=\|x_{i}\|^{2}=(X^{T}X)_{i,i}\). Then given that \(D_{i,j}\) is the squared distance between \(x_{i}\) and \(x_{j}\), we have
\[D_{i,j}=\|x_{i}\|^{2}+\|x_{j}\|^{2}-2\langle x_{i},x_{j}\rangle,\]
which implies that
\[D=\Delta\mathbf{1}^{T}+\mathbf{1}\Delta^{T}-2X^{T}X.\]
Let \(\alpha,\beta\in\mathbb{R}^{n}\) be vectors of affine weights: \(\mathbf{1}^{T}\alpha=\mathbf{1}^{T}\beta=1\). Then \(X\alpha\) and \(X\beta\) are two points in the affine subspace spanned by the columns of \(X\). The distance between \(X\alpha\) and \(X\beta\) is \(\frac{-(\alpha-\beta)^{T}D(\alpha-\beta)}{2}\), shown as below:
\[\frac{-(\alpha-\beta)^{T}D(\alpha-\beta)}{2} =\frac{-(\alpha-\beta)^{T}(\Delta\mathbf{1}^{T}+\mathbf{1}\Delta ^{T}-2X^{T}X)(\alpha-\beta)}{2}\] \[=\frac{-(0+0-2(\alpha-\beta)^{T}X^{T}X(\alpha-\beta))}{2}\] \[=\|X\alpha-X\beta\|^{2}.\]
Now we compute the circumradius using the squared distance matrix \(D\). The circumcenter is in the affine subspace spanned by the inputs so we let \(X\alpha\) to be the circumcenter where \(\mathbf{1}^{T}\alpha=1\). Let \(e^{(i)}\in\mathbb{R}^{n}\) be the \(i^{th}\) standard basis vector. The distance between the circumcenter and \(x_{i}\) is \(\|X\alpha-Xe^{(i)}\|^{2}\). From previous computation, we know that \(\|X\alpha-Xe^{(i)}\|^{2}=\frac{-(\alpha-e^{(i)})^{T}D(\alpha-e^{(i)})}{2}\). Since the circumcenter has equal distance to all inputs, we have
\[(\alpha-e^{(1)})^{T}D(\alpha-e^{(1)})=\ldots=(\alpha-e^{(n)})^{T}D(\alpha-e^{ (n)}). \tag{7}\]
Note that the quadratic term in \(\alpha\) is identical in each of these expressions. In addition, \(e^{(i)T}De^{(i)}=0\) for all \(i\). So equation 7 simplifies to the linear system
\[e^{(i)T}D\alpha=c \implies D\alpha=c\mathbf{1}\] \[\implies\alpha=cD^{-1}\mathbf{1}\]
for some constant \(c\). Since \(\mathbf{1}^{T}\alpha=1\), we have
\[1=\mathbf{1}^{T}\alpha=c\mathbf{1}^{T}D^{-1}\mathbf{1}\] \[\implies \frac{1}{c}=\mathbf{1}^{T}D^{-1}\mathbf{1}\]assuming that \(D\) is invertible. The square of the circumradius, \(r^{2}\), which is the squared distance between the circumcenter and \(x_{1}\), is
\[\|X\alpha-Xe^{(1)}\|^{2}\] \[= \frac{-(\alpha-e^{(1)})^{T}D(\alpha-e^{(1)})}{2}\] \[= e^{(1)T}D\alpha-\frac{\alpha^{T}D\alpha}{2}\] \[= c-\frac{c^{2}\mathbf{1}^{T}D^{-1}\mathbf{1}}{2}\] \[= \frac{c}{2}\] \[= \frac{1}{2\mathbf{1}^{T}D^{-1}\mathbf{1}}.\]
Therefore, assuming matrix \(D\) is invertible, the circumradius is \(\frac{1}{\sqrt{2\mathbf{1}^{T}D^{-1}\mathbf{1}}}\).
The inverse of \(D\) can be computed as \(\frac{\det D}{\operatorname{adj}D}\). Since \(\alpha=cD^{-1}\mathbf{1}\), we have \(\alpha=c\frac{\det D}{\operatorname{adj}D}\mathbf{1}\). As \(r^{2}=\frac{c}{2}\), constant \(c\) is non-negative. Therefore, \(\alpha\propto\frac{\det D}{\operatorname{adj}D}\mathbf{1}\).
When all entries of \(\alpha\) are non-negative, the circumcenter is a convex combination of the all inputs and the circumsphere is the minimum sphere in \(\mathbb{R}^{n-1}\) that contains all inputs. Otherwise, the circumsphere of \(\{x_{i}|\alpha_{i}>0\}\) is the minimum sphere contains all inputs.
After finding the radius of the minimum sphere that contains all inputs, we compare the radius with the budget \(\epsilon\). If the radius is no larger than \(\epsilon\), then there is a hyperedge of degree \(n\) among the inputs.
## Appendix D Experimental Setup
In this section, we describe our experimental setup. Our code for computing bounds is also available at [https://github.com/inspire-group/multiclass_robust_lb](https://github.com/inspire-group/multiclass_robust_lb).
**Datasets:** We compute lower bounds for MNIST (17), CIFAR-10, and CIFAR-100 (16). Since we do not know the true distribution of these datasets, we compute lower bounds based on the empirical distribution of the training set for each dataset.
**Attacker:** We will consider an \(\ell_{2}\) adversary: \(N(x)=\{x^{\prime}\in\mathcal{X}|\;||x^{\prime}-x||_{2}\leq\epsilon\}\). This has been used in most prior work (5; 22; 29).
**LP solver:** For solving the LP in Equation (4), we primarily use the Mosek LP solver (3). When the Mosek solver did not converge, we default to using CVXOpt's LP solver (2).
**Computing infrastructure.** In order to compute lower bounds, we perform computations across 10 2.4 GHz Intel Broadwell CPUs. For adversarial training, we train on a single A100 GPU.
Training DetailsFor MNIST, we use 40-step optimization to find adversarial examples during training and use step size \(\frac{\epsilon}{30}\) and train all models for 20 epochs. For CIFAR-10 and CIFAR-100, we use 10 step optimization to find adversarial examples and step size \(\frac{\epsilon}{3}\) and train models for 100 epochs. For MNIST TRADES training, we use \(\beta=1\) and for CIFAR-10 and CIFAR-100, we use \(\beta=6\). Additionally, for CIFAR-10 and CIFAR-100, we optimize the model using SGD with learning rate and learning rate scheduling from (14). For MNIST, we use learning rate 0.01.
Architectures used:For CIFAR-10 and CIFAR-100, we report results from training a WRN-28-10 architecture. For MNIST, we train a small CNN architecture consisting of 2 convolutional layers, each followed by batch normalization, ReLU, and 2 by 2 max pooling. The first convolutional layer uses a 5 by 5 convolutional kernel and has 20 output channels. The second convolutional layer also uses a 5 by 5 kernel and has 50 output channels. After the set of 2 convolutional layers with batch normalization, ReLU, and pooling, the network has a fully connected layer with 500 output channels followed by a fully connected classifier (10 output channels). In SSE.8, we consider the impact of architecture on closing the gap to the optimal loss.
Additional Experimental Results
In this Section, we provide additional experimental results to complement those from the main body of the paper. We organize it as follows:
* SSE.1: We analyze a toy problem on 2D Gaussian data to get a better understanding of the impact of hyperedges on the optimal loss computation.
* SSE.2: We investigate why higher degree hyperedges dp not have a large impact on lower bounds at lower values of \(\epsilon\).
* SSE.3: We show the runtime explosion at higher values of \(\epsilon\) that makes it challenging for us to report optimal loss values.
* SSE.4: Classwise L2 distance statistics and heatmaps for pairwise losses used to compute class-only lower bounds in main paper.
* SSE.5: We provide results with standard PGD-based adversarial training, and show it is outperformed by Trades.
* SSE.6: We provide results on the CIFAR-100 dataset.
* SSE.7: We show lower bounds for a different set of 3 classes than the one considered in the main body. Main takeaways remain the same.
* SSE.8: We ablate across larger neural networks to check if increasing capacity reduces the gap to optimal.
* SSE.9: We attempt dropping examples from the training set that even the optimal clasifier cannot classify correctly in order to improve convergence.
* SSE.10: We compute lower bounds on the test set for MNIST 3-class classification
### Results for Gaussian data
We begin with a 3-way classification problem on 2D Gaussian data. To generate our Gaussian dataset, we sample 1000 points per class from 3 spherical Gaussians with means at distance 3 away from from the origin (a sample is shown in Figure 4). We compute multiclass lower bounds via the LP in Theorem 1 on robust accuracy at various \(\ell_{2}\) budget \(\epsilon\) and display these values in Figure 5 as \(L^{*}(3)\). Additionally, we compare to a deterministic 3 way classifier. This classifier is the best performing out of the 2 strategies: 1) constantly predict a single class (thus achieving \(\frac{2}{3}\) loss) or 2) is the classifier in black in Figure 4 which classifies incorrectly when a sample lies over the edge of the nearest \(\epsilon\) margin of the classifier.
We observe that at smaller values of \(\epsilon\), the loss achieved by the 3-way classifier matches optimal loss (\(L^{*}(3)\)); however, after \(\epsilon=2.5\) for \(\sigma^{2}=0.05\) and \(\epsilon=2.3\) for \(\sigma^{2}=0.5\), we find the classifier no longer achieves optimal loss. This suggests that there is a more optimal classification strategy at these larger values of \(\epsilon\). In Figures 6 and 7, we visualize the distribution of correct classification probabilities obtained through solving the LP with and without considering hyperedges. These figures are generated by taking a fresh sample of 1000 points from each class and coloring the point based on the correct classification probability \(q_{v}\) assigned to its nearest neighbor that was used in the conflict
Figure 4: A sample 3-class Gaussian problem (each color pertains to a class) and a corresponding classifier for this problem shown in black. The classifier classifies a sample incorrectly when it lies over the edge of the \(\epsilon\) margin (shown by the red lines) nearest the corresponding Gaussian center.
hypergraph when computing the lower bound. We observe from Figure 6, for all classes, the data are mostly assigned classification probabilities around 0.5. In Figure 7, we can see that when we consider hyperedges, we some of these 0.5 assignments are reassigned values close to \(\frac{2}{3}\) and \(\frac{1}{3}\). Interestingly, we notice that when we do not consider hyperedges, our solver finds an asymmetric solution to the problem (strategies for class 0, 1, and 2 differ) while when considering hyperedges this solution becomes symmetric.
### Impact of hyperedges
In Figure 8, we show the count of edges, degree 3 hyperedges, and degree 4 hyperedges found in the conflict hypergraphs of the MNIST, CIFAR-10, and CIFAR-100 train sets. We note that we did
Figure 5: Lower bounds on error for the Gaussian 3-class problem (\(\sigma^{2}=0.05\) and \(\sigma^{2}=0.5\)) computed using only constraints from edges (\(L^{*}(2)\)) and up to degree 3 hyperedges (\(L^{*}(3)\)) in comparison to the performance of the deterministic 3-way classifier depicted in Figure 4.
Figure 6: Distribution of optimal classification probabilities across samples from each class of the Gaussian obtained as a solution when computing \(L^{*}(2)\).
Figure 7: Distribution of optimal classification probabilities across samples from each class of the Gaussian obtained as a solution when computing \(L^{*}(3)\).
not observe any increase in loss when considering degree 4 hyperedges at the \(\epsilon\) with a data point for number of degree 4 hyperedges in Figure 8. We find that the relative number of edges and hyperedges is not reflective of whether we expect to see an increase in loss after considering hyperedges. For example in CIFAR-10, at \(\epsilon=4.0\), we there are about 100 times more hyperedges than edges, but we see no noticeable increase in the \(0-1\) loss lower bound when incorporating these hyperedge constraints.
To understand why including more information about hyperedges does not influence the computed lower bound much, we examine the distribution of \(q_{v}\) obtained from solutions to the LP with (\(L^{*}(3)\)) and without degree 3 hyperedges (\(L^{*}(2)\)). Fig. 9 contains a histogram of the distributions of \(q_{v}\) for MNIST. For small \(\epsilon\), there is no change in the distribution of \(q_{v}\), the distribution of \(q_{v}\) is almost identical between \(L^{*}(2)\) and \(L^{*}(3)\). At larger values of \(\epsilon\), in \(L^{*}(2)\), a significant fraction of vertices are assigned \(q_{v}\) near 0.5. While these shift with the addition of hyperedges, very few of them were in triangles of \(\mathcal{G}^{\leq 2}\) that were replaced by a hyperedge in \(\mathcal{G}^{\leq 3}\). This leads to the loss value changing minimally.
Similar to Figure 9, we plot the distribution of vertex weights \(q_{v}\) obtained through solving the LP for \(L^{*}(2)\) and \(L^{*}(3)\) for CIFAR-10 in Figure 10. Similar to trends for MNIST, we find that the gap between \(L^{*}(2)\) and \(L^{*}(3)\) only occurs when the frequency of 0.5 weights is higher.
### Computational complexity of computing lower bounds
Our experiments of \(L^{*}(3)\) and \(L^{*}(4)\) at higher \(\epsilon\) are limited due to computation constraints. Figure 11 we see that the time taken to compute the \(L^{*}(3)\) grows rapidly with \(\epsilon\). We also report timings for all bounds for CIFAR-10 at \(\epsilon=4\) in Table 2. In future work, we are seeking algorithmic optimization to achieve more results at high \(\epsilon\).
Figure 8: Number of edges, degree 3 hyperedges, and degree 4 hyperedges found in the conflict hypergraphs of MNIST, CIFAR-10, and CIFAR-100 train sets. The red vertical line indicates the \(\epsilon\) at which we noticed an increase in the \(0-1\) loss lower bound when considering degree 3 hyperedges.
Figure 9: Distribution of optimal classification probabilities \(q\) obtained by solving the LP with up to degree 2 hyperedges (\(m=2\)) and up to degree 3 hyperedges (\(m=3\)) on the MNIST training set.
### Classwise statistics and pairwise losses
In order to have a better understanding of the difficulty of the classification task under the presence of an \(\ell_{2}\) bounded adversary, we report the average \(\ell_{2}\) to the nearest neighbor of another class for each class in the MNIST and CIFAR-10 datasets in Table 3.
Another way of understanding the relative difficulty of classifying between classes is by computing the optimal loss for all 1v1 binary classification tasks. We note that these values are used in Section 4.2 to compute a lower bound on the optimal loss in the 10-class case from maximum weight coupling over optimal losses for \(1v1\) binary classification problems. In Figure 12, we show the heat maps for optimal losses for each pair of 1v1 classification problems for \(\epsilon=3\) on MNIST and \(\epsilon=4\) on CIFAR-10. We find that for both datasets only a few pairs of classes have high losses. Specifically, for MNIST, we find that the class pairs 4-9 and 7-9 have significantly higher loss than all other pairs of classes. For CIFAR-10, we find that 2-4 has the highest loss compared to other pairs, and 2-6 and 6-4 are also quite high.
### Additional adversarial training results
In Figure 13, we also add the loss achieved by PGD adversarial training. We find that this approach generally performs worse on MNIST compared to TRADES and is also unable to fit to CIFAR-10 data at the \(\epsilon\) values tested.
### Truncated hypergraph lower bounds for CIFAR-100
We provide results for truncated hypergraph lower bounds for the CIFAR-100 train set. We observe that similar to MNIST and CIFAR-10, including more hyperedge constraints does not influence the computed lower bound.
Figure 11: Time taken to compute \(L^{*}(2)\) and \(L^{*}(3)\) for MNIST and CIFAR-10.
Figure 10: Distribution of optimal classification probabilities \(q\) obtained by solving the LP with up to degree 2 hyperedges (\(L^{*}(2)\)) and up to degree 3 hyperedges (\(L^{*}(3)\)) on the CIFAR-10 training set.
### Computed bounds for a different set of 3-classes
In Figure 15, we plot 3-class lower bounds via truncated hypergraphs (\(L^{*}(2)\) and \(L^{*}(3)\)) for a different set of 3 classes as shown in the main body. These classes generally have less similarity than the classes shown in the main body of the paper causing the loss bound to increase more slowly as epsilon increases. However, we find that patterns observed for the 3 classes present in the main body of the paper are also present here: the gap between \(L^{*}(2)\) and \(L^{*}(3)\) is only visible at large values of 0-1 loss (ie. loss of 0.4).
### Impact of architecture size
Previously, we saw that adversarial training with larger values of \(\epsilon\) generally fails to converge (leading to losses matching random guessing across 3 classes). We now investigate whether increasing model capacity can resolve this convergence issue. In Figure 16, we plot the training losses of 3 WRN architectures commonly used in adversarial ML research across attack strength \(\epsilon\) for the 3-class CIFAR-10 (classes 0, 2, 8) problem. All models are trained with TRADES adversarial training. Interestingly, we find that the benefit of larger architecture size only appears for the smallest value of epsilon plotted (\(\epsilon=1\)) at which the optimal classifier can theoretically obtain 0 loss. At larger values of epsilon, the improvement in using larger architecture generally disappears.
### Impact of dropping "hard" examples
From our experiments involving adversarial training, we found that training with large values of \(\epsilon\) generally fails to converge. A potential way of trying to improve convergence using the results from our LP is to drop examples with optimal classification probability less than 1 and train with the remaining examples. Since even the optimal classifier cannot classify these examples correctly, these examples can be considered "hard" to learn. We find that in practice this does not lead to lower training loss; specifically, training without "hard" examples leads to a loss of 0.64 for CIFAR-10 with \(\epsilon=3\) while training with "hard" examples leads to a loss 0.57. We note that this loss is computed over the entire training dataset (including "hard" examples) for both settings.
### Lower bounds on test set
In the main text, we computer lower bounds on the train set as this would measure how well existing training algorithms are able to fit to the training data. In Table 4, we compute lower bounds on the test set (which contains 1000 samples per class) for MNIST 3-class classification between classes 1, 4, and 7. We find that the computed loss is similar to what is computed on a subset of the train set which contains the same number of samples per class.
\begin{table}
\begin{tabular}{l|l} \hline
**Loss bound** & **Runtime (s)** \\ \hline \(L^{*}(2)\) & 188.24 \\ \(L^{*}(3)\) & 10413.91 \\ \(L^{*}(4)\) & \textgreater{}86400 \\ \(L_{co}(2)\) & 327.92 \\ \(L_{CW}\) & 192.27 \\ \hline \end{tabular}
\end{table}
Table 2: Runtimes for computing different bounds for CIFAR-10 dataset at \(\epsilon=4\). We note that the \(L_{co}^{*}(2)\) reports the time for computing all pairwise losses sequentially and can be sped up by running these computations in parallel. We also note that \(L^{*}(4)\) computation did not terminate within a day.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline MNIST & 7.07 & 4.33 & 6.94 & 6.29 & 5.72 & 6.29 & 6.42 & 5.52 & 6.29 & 5.15 \\ CIFAR-10 & 8.96 & 10.84 & 8.48 & 9.93 & 8.22 & 10.04 & 8.72 & 10.05 & 9.23 & 10.91 \\ \hline \end{tabular}
\end{table}
Table 3: Average \(\ell_{2}\) distance to nearest neighbor in another class for each class in MNIST and CIFAR-10 datasets.
Figure 14: Lower bounds for optimal 0-1 loss the for CIFAR-100 train set
Figure 12: Heat maps for optimal loss for each pair of 1v1 classification problems.
Figure 13: Lower bounds on error for MNIST and CIFAR-10 3-class problems (1000 samples per class) computed using only constraints from edges (\(L^{*}(2)\)) and up to degree 3 hyperedges (\(L^{*}(3)\)) in comparison to TRADES adversarial training (TRADES-AT) and PGD adversarial training (PGD-AT) loss.
Figure 15: Lower bounds on error for MNIST and CIFAR-10 3-class problems (1000 samples per class) computed using only constraints from edges (\(L^{*}(2)\)) and up to degree 3 hyperedges (\(L^{*}(3)\))
Figure 16: Impact of architecture size on training loss for 3-class CIFAR (0, 2, 8) at different strengths \(\epsilon\) for models trained with TRADES adversarial training.
\begin{table}
\begin{tabular}{|c|c c c|} \hline \(\epsilon\) & train set & \multicolumn{2}{c|}{train set} & test set \\ & & (1000 samples per class) & & test set \\ \hline
2 & 0.0045 & 0.0020 & 0.0025 \\
2.5 & 0.0260 & 0.0193 & 0.0149 \\
3 & 0.1098 & 0.0877 & 0.0773 \\
3.5 & 0.2587 & 0.2283 & 0.2181 \\
4 & - & 0.4083 & 0.3987 \\ \hline \end{tabular}
\end{table}
Table 4: Optimal losses \(L^{*}(3)\) for MNIST 3 class problem (classes 1, 4, 7) computed across the train set, the first 1000 samples of each class in the train set, and computed across the test set. | ## Review
### Summary
This theoretical paper investigates the robustness of multi-class classifiers against adversarial perturbations by establishing upper and lower bounds on optimal 0/1 loss. The authors extend existing binary classification frameworks using graph-theoretic concepts to analyze multi-class settings. They address the computational complexities associated with calculating these bounds and propose techniques for more efficient computation. Empirical results demonstrate the practical application of these theoretical insights, particularly in the context of adversarial training. Overall, the paper presents significant contributions to the field of adversarial robustness, enhancing understanding and providing practical methodologies for future research.
### Strengths
- The paper is clearly presented and easy to follow, making complex theoretical concepts accessible.
- It extends the analysis of optimal classifiers under adversarial conditions from binary to multi-class classification, marking a notable contribution.
- The proposed methods for reducing computational complexity through graph truncation are innovative and effective.
- Empirical evaluations are well-executed, providing insights into adversarial training techniques.
- The formalization of assumptions is clear and well-structured, aiding in reader comprehension.
### Weaknesses
- The lack of a main theorem in the theoretical exposition may hinder reader focus on key contributions.
- There are concerns regarding the correctness of some lemmas, particularly Lemma 1, which could invalidate central results.
- The applicability of the proposed algorithm in practical settings is limited due to computational costs, as evidenced by experiments on only small-scale problems.
- The paper does not sufficiently address the implications of existing verified classifiers compared to the proposed bounds.
### Questions
- How accurate are the established upper and lower bounds, particularly for the upper bound derived from the Caro-Wei bound?
- Could the authors clarify the concerns raised about the correctness of Lemma 1?
- What is the significance of the observations regarding the full hypergraph construction in terms of computational feasibility?
- Are there plans to include additional empirical comparisons with verified classifiers in future iterations of the paper?
### Soundness
**Score:** 3
**Description:** Good. The theoretical results appear sound but have some concerns regarding specific lemmas that need further clarification.
### Presentation
**Score:** 3
**Description:** Good. The paper is generally well-structured and clear, but some central mathematical concepts could be introduced more effectively.
### Contribution
**Score:** 3
**Description:** Good. The contributions are significant and relevant, though the limitations regarding applicability and comparison with existing methods could be more thoroughly discussed.
### Rating
**Score:** 7
**Description:** Accept: The paper is technically solid with moderate-to-high impact, but it needs minor improvements in addressing specific concerns and enhancing the clarity of presentations.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper presents original and sound theoretical advancements in the field of adversarial robustness, with practical implications highlighted through empirical results. While there are some weaknesses related to clarity and specific correctness issues, the overall contributions are substantial enough to warrant acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Discriminative Calibration: Check Bayesian Computation from Simulations and Flexible Classifier
Yuling Yao
Flatiron Institute,
New York, NY 10010.
[email protected] &Justin Domke
University of Massachusetts,
Amherst, MA 01002.
[email protected]
###### Abstract
To check the accuracy of Bayesian computations, it is common to use rank-based simulation-based calibration (SBC). However, SBC has drawbacks: The test statistic is somewhat ad-hoc, interactions are difficult to examine, multiple testing is a challenge, and the resulting p-value is not a divergence metric. We propose to replace the marginal rank test with a flexible classification approach that learns test statistics from data. This measure typically has a higher statistical power than the SBC test and returns an interpretable divergence measure of miscalibration, computed from classification accuracy. This approach can be used with different data generating processes to address simulation-based inference or traditional inference methods like Markov chain Monte Carlo or variational inference. We illustrate an automated implementation using neural networks and statistically-inspired features, and validate the method with numerical and real data experiments.
## 1 Introduction
Simulation based calibration (SBC) is a default approach to diagnose Bayesian computation. SBC was originally designed to validate if computer software accurately draws samples from the exact posterior inference, such as Markov chain Monte Carlo (MCMC, [4; 38; 26]) and variational inference [41]. With recent advances in amortized and simulation-based inferences [6] and growing doubt on the sampling quality [23; 18], there has also been an increasing trend to apply SBC to likelihood-free inference such as approximate Bayesian computation [42] and normalizing-flow-based [31; 1] neural posterior sampling [22; 20], with a wide range of applications in science [15; 7; 34].
Bayesian computation tries to sample from the posterior distribution \(p(\theta|y)\) given data \(y\). We work with the general setting where it may or may not be possible to evaluate the likelihood \(p(y|\theta)\). Suppose we have an inference algorithm or software \(q(\theta|y)\) that attempts to approximate \(p(\theta|y)\), and we would like to assess if this \(q\) is calibrated, meaning if \(q(\theta|y)=p(\theta|y)\) for all possible \(\theta,y\). Simulation based calibration involves three steps: First, we draw a \(\theta\) from the prior distribution \(p(\theta)\). Second, we simulate a synthetic observation \(y\) from the data model \(p(y|\theta)\). Third, given \(y\) we draw a size-\(M\) posterior sample \(\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M}\) from the inference engine \(q(\theta|y)\) that we need to diagnose. SBC traditionally computes the rank statistic of the prior draw \(\theta\) among the \(q\) samples, i.e. \(r=\sum_{m=1}^{M}1(\theta\leq\tilde{\theta}_{m})\). If the inference \(q\) is calibrated, then given \(y\) both \(\theta\) and \(\tilde{\theta}_{m}\) are from the same distribution \(p(\theta|y)\), hence with repeated simulations of \((\theta,y)\), we should expect such rank statistics \(r\) to appear uniform, which can be checked by a histogram visualization or a formal uniformity test.
Despite its popularity, rank-based SBC has limitations: (i) We only compute the _rank_ of univariate parameters. In practice, \(\theta\) and \(y\) are high dimensional. We typically run SBC on each component of \(\theta\) separately, this creates many marginal histograms and does not diagnose the joint distribution or interactions. We may compare ranks of some one-dimensional test statistics, but there is no method tofind the best summary test statistic. (ii) As long as we test multiple components of \(\theta\) or test statistics, directly computing the uniformity \(p\)-values is invalid (subject to false discovery) unless we make a _multiple-testing_ adjustment, which drops the test power (subject to false negative) in high dimensions. (iii) Often we know inference is approximate, so the final goal of diagnostic is not to reject the null hypothesis of perfect calibration but to measure the _degree of miscalibration_. The \(p\)-value is not such a measure: neither can we conclude an inference with \(p=.02\) is better than \(p=.01\), nor connect the \(p\)-value with the posterior inference error. The evidence lower bound, a quantity common in variational inference, also does not directly measure the divergence due to the unknown entropy.
Heuristic calibration via classification.To address all these drawbacks, while maintaining versatility in various computing tasks, we propose _discriminative calibration_, a pragmatic and unified framework for Bayesian computing calibration and divergence estimation. The intuitive heuristic behind discriminative calibration is to use a classifier to perform similarity tests--when two or several distributions are similar, we cannot distinguish samples from them so the classification error is large. In Bayesian computation, we compare conditional distributions, the true \(p(\theta|y)\) and inferred \(q(\theta|y)\), to which we have access via simulations, and in some occasions explicit densities. It is natural to try to classify the samples drawn from \(p\) v.s. from \(q\), and the ability to distinguish them would suggest a miscalibration between \(p\) and \(q\).
To formulate such heuristics into an algorithm, we design a family of "label mapping" that prepares the simulations into classification examples that contain the label and feature. In Sec. 2, we first give four concrete label mapping examples, where the rank-based SBC becomes essentially a special case. Sec. 3 states the general theory: if we train a classifier to predict the labels, then the prediction ability yields a computable divergence from \(p(\theta|y)\) to \(q(\theta|y)\). In Sec. 4, we illustrate the practical implementation to get the divergence estimate, its confidence interval, and a valid hypothesis testing \(p\)-value. We explain why the learned classifier helps statistical power. In Sec. 4.1, we discuss the classifier design and incorporate extra known information such as ranks, likelihood, and log densities, whenever available, as features. We also show our method is applicable to MCMC without waste from thinning. We illustrate numerical and cosmology data examples in Sec. 5. We review other related posterior validation approaches and discuss the limitation and future direction in Sec. 6.
## 2 Generate labels, run a classifier, and obtain a divergence
As with traditional SBC, we generate a simulation table by repeatedly sampling parameters \(\theta\) and synthetic data \(y\) from the target model \(p\), i.e. draws \((\theta,y)\sim p(\theta,y)\). Then, for each \((\theta,y)\), we run the inference routine \(q\) to obtain a set of \(M\) IID approximate posterior samples \(\bar{\theta}_{1},\cdots,\bar{\theta}_{M}\sim q(\theta|y)\). We wish to assess how close, on average (over different \(y\)), the inference procedure \(q(\theta|y)\) is to the true posterior \(p(\theta|y)\). Here, we observe that classification example-sets can be created in several ways and these produce different divergences. We generalize the four examples and the claims on divergence in Sec. 3.
**Example 1: Binary classification with full features.** An intuitive way to estimate this closeness between \(q(\theta|y)\) and \(p(\theta|y)\) is to train a binary classifier. Imagine creating a binary classification dataset of \((t,\phi)\) pairs, where \(t\) is a binary label, and \(\phi\) are features. For each \((\theta,y)\) simulated from \(p\), \(M+1\) pairs of examples are created. In the first, \(t=0\) and \(\phi=(\theta,y)\). In the others, \(t=1\), and \(\phi=(\bar{\theta}_{m},y)\), \(1\leq m\leq M\). Collecting all data across
Figure 1: _Our discriminate calibration framework has three modules (a) generate simulation table \((\theta,y,\tilde{\theta})\), and map it into classification examples with some label \(t\) and feature \(\phi\), (b) train a classifier to predict labels, (c) from the learned classifier, perform hypothesis testing and estimate a divergence._
\(1\leq i\leq S\), we obtain \(S(M+1)\) pairs of \((t,\phi)\) classification examples. A binary classifier is then trained to maximize the conditional log probability of \(t\) given \(\phi\). If inference \(q\) were exact, no useful classification would be possible. In that case, the expected test log predictive density could be no higher than the negative binary entropy \(h(w)\coloneqq w\log w+(1-w)\log(1-w)\) of a Bernoulli distribution with parameter \(w\coloneqq 1/(M+1)\).
Now imagine drawing a validation set in the same way, and evaluating the log predictive density of the learned classifier. We will show below (Thm. 1) that the expected log predicted density [10]\(\mathrm{ELPD}=\mathbb{E}\log\Pr(t|\phi)\) of the classifier on validation data is a lower bound to a divergence \(D_{1}\) between \(p(\theta|y)\) and \(q(\theta|y)\) up to the known constant \(h(w)\),
\[\mathrm{ELPD}_{1}-h(w)\leq D_{1}(p,q)\coloneqq w\mathrm{KL}\left(p(\theta|y) \parallel r(\theta|y)\right)+(1-w)\mathrm{KL}\left(q(\theta|y)\parallel r( \theta|y)\right), \tag{1}\]
where \(w=1/(M+1)\) and \(r(\theta|y)=wp(\theta|y)+(1-w)q(\theta|y)\) is a mixture of posterior density. If the classifier \(c\) is optimal (\(c(t|\phi)=\Pr(t|\phi)\) in the distribution) then the bound in the above equation is tight \(\max_{\mathrm{classifiers}}\mathrm{ELPD}_{1}-h(w)=D_{1}(p,q)\). Here \(\mathrm{KL}(p(\theta|y)\|q(\theta|y))\) denotes a standard conditional Kullback-Leibler divergence1. By optimizing the classifier, \(\max\mathrm{ELPD}_{1}-h(w)\) becomes a commutable divergence, and its approaching zero is a necessary and sufficient condition for perfect calibration since \(D_{1}(p,q)=0\) if and only if \(p(\theta|y)=q(\theta|y)\) almost everywhere.
Footnote 1: Standard notation for conditional divergences [5] is that \(\mathrm{KL}(p(\theta|y)\|q(\theta|y))\coloneqq\mathbb{E}_{p(y,\theta)}\log \frac{p(\theta|y)}{q(\theta|y)}\). Conditional divergence is _not_ the divergence of conditional distributions. We interpret \(p(y)=q(y)\).
**Example 2: Binary classification without \(y\).** Similar to Example 1, from each simulation draw we generate \(M+1\) pairs of \((t,\phi)\) examples, except that the feature \(\phi\) only contains the parameters \(\theta\), not \(y\). A binary classifier is then trained to predict \(t\) given \(\phi\). The ELPD of this classifier on validation data is a lower bound to a generalized divergence \(D_{2}\) between the prior \(p(\theta)\) and \(q(\theta)\), up to a known constant \(h(w)\)
\[\mathrm{ELPD}_{2}-h(w)\leq D_{2}(p,q)\coloneqq w\mathrm{KL}\left(p(\theta) \parallel r(\theta)\right)+(1-w)\mathrm{KL}\left(q(\theta)\parallel r(\theta) \right), \tag{2}\]
where \(w=(M+1)^{-1}\), \(r(\theta)=wp(\theta)+(1-w)q(\theta)\) is the prior mixture, and the bound is tight when the classifier is optimal. A large ELPD reveals the difference between the inference \(q(\theta|y)\) and \(p(\theta|y)\). But \(D_{2}\) is only a generalized divergence: \(D_{2}=0\) is necessary not sufficient for \(q(\theta|y)=p(\theta|y)\).
**Example 3: Binary classification with ranks (where the classical SBC is a special case).** Instead of classification using full \((\theta,y)\), we construct a _feature_: the rank statistics. From each simulation draw we generate \(M+1\) pairs of \((t,\phi)\) examples. The first pair is \(t=0\) and \(\phi=\sum_{m=1}^{M}\mathbb{1}\left(\theta<\tilde{\theta}_{m}\right)\), the rank statistics of the prior draw. In the others, \(t=1\) and \(\phi=\sum_{m^{\prime}=1}^{M}\mathbb{1}(\tilde{\theta}_{m}<\tilde{\theta}_{m^{ \prime}})+\mathbb{1}(\tilde{\theta}_{m}<\theta)\), \(1\leq m^{\prime}\leq M\) are the rank statistics of the inferred samples. A binary classifier is then trained to predict \(t\) given \(\phi\). The ELPD of this classifier is a lower bound to a generalized divergence \(D_{3}\) between \(p(\theta|y)\) and \(q(\theta|y)\) up to a known constant \(h(w)\)
\[\mathrm{ELPD}_{3}-h(w)\leq D_{3}(p,q)\coloneqq D_{2}(Z(p,q)\parallel\mathrm{ Uniform}(0,1)),\;\;w=1/(M+1), \tag{3}\]
and again the bound is tight if the classifier is optimal. Here \(Z(p,q)\) is a random variable defined by \(Z=Q(\theta|y),(\theta,y)\sim p(\theta,y)\), where \(Q\) is the cumulative distribution function of \(q(\theta|y)\).
Training a "generative" classifier on this rank-based label-generating map is similar to testing for uniformity in rank statistics, as done in traditional SBC which estimates the distribution of \(r|t=0\) by histograms (See Appendix A.1 for precise correspondence between SBC and the naive Bayes classifier). The success of SBC suggests the usefulness of ranks, which motivates us to include ranks or more generally feature engineering in the classifier. However, \(D_{3}\) is only a generalized divergence: \(D_{3}=0\) is necessary but not sufficient for \(p(\theta|y)=q(\theta|y)\). If inference always returns the prior, \(q(\theta|y)=p(\theta)\), then \(D_{3}(p,q)=0\), a known counterexample of when rank-based SBC fails [32].
**Example 4: Multi-class classification.** We go beyond binary labeling. Given a simulation run \((y,\theta,\tilde{\theta}_{1},\dots,\tilde{\theta}_{M})\), we now create an (\(M\)+1)-class classification dataset with \(M+1\) pairs of \((t,\phi)\) examples. In each one, the features are \(\phi=(y,\theta^{*})\) where \(\theta^{*}\) is a permutation of \((\theta,\tilde{\theta}_{1},\dots,\tilde{\theta}_{M})\) that moves \(\theta\) into a given location and \(t\in 0,\cdots,M\) indicates the location of \(\theta\) in the permutation (See the table on the right). We train a (\(M\)+1)-class classifier to predict \(t\) from \(\phi\). The ELPD on a validation set is a lower bound to the following divergence \(D_{4}\) between \(p(\theta|y)\) and \(q(\theta|y)\) up to a known constant:
\[\mathrm{ELPD}_{4}+\log(M+1)\leq D_{4}(p,q)\coloneqq\mathrm{KL}\left(p(\theta_{0 })\prod_{k=1}^{M}q(\theta_{k}),\ \ \frac{1}{M+1}\sum_{m=0}^{M}p(\theta_{m})\prod_{k \neq m}q(\theta_{k})\right). \tag{4}\]
Again the bound is tight if the classifier is optimal. The divergence \(0\leq D_{4}\leq\log(M+1)\), and \(D_{4}=0\) if and only if \(p(\theta|y)\stackrel{{\text{a.e.}}}{{=}}q(\theta|y)\), necessary and sufficient for calibration. In Theorem 3 we shows that as \(M\to\infty\), \(D_{4}(p,q)\) converges to \(\mathrm{KL}(p(\theta|y),q(\theta|y))\) at an \(O(1/M)\) convergence rate.
## 3 Theory on calibration divergence
To generalize the previous examples, we define a "**label mapping**" \(\Phi:(y,\theta,\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M})\mapsto\{(t_{1}, \phi_{1}),\ldots,(t_{L},\phi_{L})\}\). that maps one simulation run \((y,\theta,\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M})\) into a \(K\)-class classification example-set containing \(L\) pairs of labels \(t\) and features \(\phi\). The label \(t_{l}\in\{0,1,\ldots,K-1\}\) is deterministic and only depends on \(l\). The features \(\phi_{l}\) can depend on \(l\) and \((y,\theta,\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M})\). We only consider \(\Phi\) satisfying that, when \(p(\theta|y)=q(\theta|y)\), \(\phi\) given \(y\) is conditionally independent of \(t\) (equivalently, \(\phi|(y,t)\) has the same distribution for any \(t\)). Let \(\mathbb{F}\) be the set of these mappings \(\Phi\).
We train a classifier on the classification examples created by \(\Phi\) collected across repeated sampling. The classifier performance is measured by its expected log predictive density (ELPD), or equivalently the negative cross-entropy loss. Given \(p\), \(q\), and the mapping \(\Phi\), let \(c(\phi)\) be any probabilistic classifier that uses \(\phi\) to predict the label \(t\), and \(\Pr(k|\phi,c)\) is the predicted \(k\)-th class probability. Taking expectations over features and labels with \((\theta,y)\sim p\) and \(\tilde{\theta}_{m}\sim q(\theta|y)\) reaches the ELPD of the classifier,
\[\mathrm{ELPD}(\Phi,c)\coloneqq\mathbb{E}_{t,\phi}\log\Pr(t=k|\phi,c),\quad(t, \phi)=\Phi(y,\theta,\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M}) \tag{5}\]
We then define the **prediction ability**\(D(p,q,\Phi,c)\) to be the ELPD plus the entropy of a categorical distribution, i.e.
\[D(p,q,\Phi,c)=\mathrm{ELPD}(\Phi,c)-\sum_{k=0}^{K-1}w_{k}\log w_{k},\quad \mathrm{where}\ w_{k}=\frac{1}{L}\sum_{l=1}^{L}\mathbb{1}(t_{l}=k). \tag{6}\]
The optimal classifier is the \(c\) that achieves the highest prediction ability in the population:
\[D^{\mathrm{opt}}(p,q,\Phi)\coloneqq\max_{c\in\mathcal{C}}D(p,q,\Phi,c),\ \ \mathrm{ where}\ \mathcal{C}\ \mathrm{is\ the\ set\ of\ all\ probabilistic\ classifiers}. \tag{7}\]
The next theorem is the basic theory of our method: as long as we pass the simulation draws to a label mapping \(\Phi\in\mathbb{F}\), and train a classifier on the classification examples, then \(D^{\mathrm{opt}}(p,q,\Phi)\) is a generalized divergence between \(p\) and \(q\).
**Theorem 1** (Prediction ability yields divergence).: _Given any \(p,q\), and feature mapping \(\Phi\in\mathbb{F}\), the optimal prediction ability \(D^{\mathrm{opt}}(p,q,\Phi)\) is a generalized divergence from \(p\) to \(q\) in the sense that \(D^{\mathrm{opt}}(p,q,\Phi)\geq 0\), and \(p(\theta|y)=q(\theta|y)\) almost everywhere implies \(D^{\mathrm{opt}}(p,q,\Phi)=0\). This generalized divergence is reparametrization invariant and uniformly bounded. For any classifier \(c\),_
\[0\leq D(p,q,\Phi,c)\leq D^{\mathrm{opt}}(p,q,\Phi)\leq-\sum_{k=0}^{K-1}w_{k} \log w_{k}; \tag{8}\]
\[p(\theta|y)\stackrel{{\text{a.e.}}}{{=}}q(\theta|y)\Rightarrow D^{\mathrm{opt}}(p,q,\Phi)=0.\]
That is, any label mapping \(\Phi\) produces a generalized divergence \(D^{\mathrm{opt}}(p,q,\Phi)\), the prediction ability of the optimal classifier. The prediction ability \(D(p,q,\Phi,c)\) of any classifier \(c\) estimated on validation data is a lower bound to \(D^{\mathrm{opt}}(p,q,\Phi)\). Further, this generalized divergence \(D^{\mathrm{opt}}(p,q,\Phi)\) is always a proper Jensen-Shannon divergence in the projected feature-label space (Theorem 1 in the Appendix).
When \(p(\theta|y)\neq q(\theta|y)\), to increase the statistical power, we wish that the generalized divergence can be as "strong" as possible such that we can detect the miscalibration. In the four examples in Section 2, we have used \(D_{1},D_{2},D_{3},D_{4}\) to denote the (generalized) divergence they yield. The next theorem shows there is a deterministic domination order among these four metrics. Moreover, \(D_{4}\) is the largest possible classification divergence from any given simulation table.
**Theorem 2** (Strongest divergence).: _For any given \(p,q\), and any \(\Phi\in\mathbb{F}\), (1) \(D_{4}\geq D_{1}\geq D_{3}\geq D_{2}\). (2) \(D_{4}\) and \(D_{1}\) are proper divergences. They attain 0 if and only if \(p(\theta|y)=q(\theta|y)\) almost everywhere. They attain the corresponding upper bound in (8) if and only \(p(\theta|y)\) are \(q(\theta|y)\) are disjoint, i.e., \(\int_{A}p(\theta|y)q(\theta|y)d\theta=0\) for any measurable set \(A\) and almost surely \(y\). (3) For any \(p,q\) and \(\Phi\in\mathbb{F}\) (\(\Phi\) can have an arbitrary example size \(L\) and class size \(K\)), \(D_{4}(p,q)\geq D^{\mathrm{opt}}(p,q,\Phi)\)._
The following result shows that the divergence \(D_{4}\) in Eq. (4) approaches the "mode-spanning" KL divergence in the limit that the number of posterior draws \(M\rightarrow\infty\). This is appealing because for many inference routes, increasing the number of posterior draws is cheap. Thus, \(D_{4}\) provides an accessible approximation to the KL divergence that is otherwise difficult to compute from samples.
**Theorem 3** (Big \(M\) limit and rate).: _For any \(p,q\), generate the simulation \(\{(y,q,\tilde{\theta}_{i},\ldots,\tilde{\theta}_{M})\}\) and train the multiclass-classier, then \(D_{4}(p,q)-\mathrm{KL}(p(\theta|y)\mid\mid q(\theta|y))\to 0,\) as \(M\rightarrow\infty\). If further \(p(\theta|y)\) and \(q(\theta|y)\) have the same support, and if \(\mathbb{E}_{p(\theta|y)}\left[\frac{p(\theta|y)}{q(\theta|y)}\right]^{2}<\infty\) for a.s. \(y\), then_
\[D_{4}(p,q)=\mathrm{KL}(p(\theta|y)\mid\mid q(\theta|y))-\frac{1}{2M}\chi^{2}(q (\theta|y)\mid\mid p(\theta|y))+o(M^{-1}).\]
_where \(\chi^{2}(\cdot||\cdot)\) is the conditional chi-squared divergence._
## 4 Practical implementation
```
input : The ability to sample from \(p(\theta,y)\) and \(q(\theta|y)\), and a label mapping \(\Phi\). output :(i) estimate of a divergence between \(p(\theta|y)\) and \(q(\theta|y)\); (ii) \(p\)-value for testing \(p(\theta|y)=q(\theta|y)\). for(\(\;i=1:S\;\)) Sample \((\theta_{i},y_{i})\sim p(\theta,y)\), and sample \(\tilde{\theta}_{i1},\tilde{\theta}_{i2}\ldots,\tilde{\theta}_{iM}\sim q(\theta| y_{i})\); \(\triangleright\) simulation table Generate a batch of \(L\) examples of \((t,\phi)=\Phi(y_{i},\theta_{i},\tilde{\theta}_{i1},\ldots,\tilde{\theta}_{iM})\), \(0\leq t\leq K-1\); \(\triangleright\) label mapping Randomly split the \(LS\) classification examples \((t,\phi)\) into training and validation sets (all \(L\) examples for a given \(i\) go to either training or validation); Train a \(K\)-class classifier to predict label \(t\) on the training examples, incorporating useful features; Compute the validation log predictive density \(\mathrm{LPD}_{\mathrm{val}}\) in (9), obtain an estimate of the divergence (7) and its bootstrap confidence intervals; for(\(\;b=1:B\;\)) for(\(\;b=1:B\;\)) Randomly permute the label \(t\) in the validation set within each batch; Compute \(\mathrm{LPD}_{\mathrm{val}}^{b}\) on the permutated validation set; Compute the calibration \(p\)-value \(p=1/B\sum_{b=1}^{B}\mathbb{1}\left(\mathrm{LPD}_{b}^{\mathrm{val}}\geq\mathrm{ LPD}^{\mathrm{val}}\right)\). \(\triangleright\) frequentist test
```
**Algorithm 1**Proposed method: Discriminative calibration
**Workflow for divergence estimate.** We repeat the simulation of \((y,\theta,\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M})\) for \(S\) times. Each time we sample \((y,\theta)\sim p(\theta,y)\) and \(\tilde{\theta}_{1:M}\sim q(\theta|y)\) and generate a _batch_ of \(L\) examples through a label mapping \(\Phi\in\mathbb{F}\). In total, we obtain \(SL\) pairs of \((t,\phi)\) classification examples. We recommend using the binary and multiclass labeling schemes (Examples 1 and 4, where \(L=M+1\) and \(K=2\) or \(M+1\)) such that we can obtain a proper divergence estimate. We split the classification example-set \(\{(t,\phi)\}\) into the training and validation set (do not split batches) and train a \(K\)-class classifier \(c\) on the training set to minimize cross-entropy. Denote \(\mathcal{I}_{\mathrm{val}}\) to be the validation index, \(\Pr(t=t_{j}|\phi_{j},c)\) to be the learned class probability for any validation example \((t_{j},\phi_{j})\), we compute the ELPD (5) by the validation set log predictive density:
\[\mathrm{ELPD}(\Phi,c)\approx\mathrm{LPD}^{\mathrm{val}}(\Phi,c)\coloneqq| \mathcal{I}_{\mathrm{val}}|^{-1}\sum_{j:\in\mathcal{I}_{\mathrm{val}}}\log \Pr(t=t_{j}|\phi_{j},c). \tag{9}\]
For any \(c\), \(\mathrm{LPD}^{\mathrm{val}}(\Phi,c)-\sum_{k=0}^{K-1}w_{k}\log w_{k}\) becomes a lower bound estimate of the (generalized) divergence \(D^{\mathrm{opt}}(p,q,\Phi)\) in Thm. 1, and an estimate of \(D^{\mathrm{opt}}(p,q,\Phi)\) itself when the classier \(c\) is good enough. In addition to the point estimate of the divergence, we can compute the confidence interval. It is straightforward to obtain the standard error of the sample mean (9). To take into account the potentially heavy tail of the log predictive densities, we can also use Bayesian bootstrap [37] that reweights the sum in (9) by a uniform Dirichlet weight.
**Hypothesis testing.** Our approach facilitates rigorous frequentist hypothesis testing. The null hypothesis is that the approximate inference matches the exact posterior, i.e., \(p(\theta|y)=q(\theta|y)\) almost everywhere. We adopt the permutation test: We train the classifier \(c\) once on the training set and keep it fixed, and evaluate the validation set log predictive density \(\operatorname{LPD}^{\operatorname{val}}(\Phi,c)\) in (9). Next, permutate the validation set \(B\) times: at time \(b\), keep the features unchanged and randomly permutate the validation labels \(t\) within each batch of examples (\(\Phi\) generates a batch of \(L\) examples each time), and reevaluate the validation set log predictive density (9) on permutated labels, call it \(\operatorname{LPD}^{\operatorname{val}}_{b}\). Then we compute the one-sided permutation \(p\)-value as \(p=\sum_{b=1}^{B}\mathbb{1}\left(\operatorname{LPD}^{\operatorname{val}}_{b} \geq\operatorname{LPD}^{\operatorname{val}}\right)/B\). Given a significance level, say 0.05, we will reject the null if \(p<0.05\) and conclude a miscalibration.
**Theorem 4** (Finite sample frequentist test).: _For any finite simulation size \(S\) and posterior draw size \(M\), and any classifier \(c\), under the null hypothesis \(p(\theta|y)=q(\theta|y)\) almost everywhere, the permutation test is valid as the \(p\)-value computed above is uniformly distributed on \([0,1]\)._
Our test is exact when the simulation size \(S\) and \(M\) is finite, while the original SBC [4] relied on asymptotic approximation. Further, we _learn_ the test statistic via the classifier, and our test is valid regardless of the dimension of \(\theta\) and there is no need to worry about post-learning testing [3], while the usual SBC rank test will suffer from low power due to multiple testing.
Our test is always valid even if the classifier \(c\) is not optimal. Why does our _learning_ step help? For notional brevity, here we only reason for the binary classification. For any \(p,q\), we apply the binary classification as described in Example 1, \(t=0\ \mathrm{or}\ 1\) and \(\phi=(\theta,y)\).
**Theorem 5** (Sufficiency).: _Let \(\hat{c}(\theta,y)=\Pr(t=1|\theta,y)\) be the probability of label 1 in the optimal classifier as per (7), and let \(\pi_{c}^{p}\) and \(\pi_{c}^{q}\) be the one-dimensional distributions of this \(\hat{c}(\theta,y)\) when \((\theta,y)\) is sampled from \(p(\theta,y)\) or from \(p(y)q(\theta|y)\) respectively, then (i) Conditional on the summary statistic \(\hat{c}\), the label \(t\) is independent of features \(\phi=(\theta,y)\). (ii) Under regularity conditions, there is no loss of information in divergence as the joint divergence is the same as the projected divergence in the one-dimensional \(\hat{c}\)-space \(D_{1}(p,q)=D_{1}(\pi_{c}^{p},\pi_{c}^{q})\)._
That is, the best prediction \(\hat{c}\) entails the best one-dimensional summary statistics of the high dimensional \(\theta\times y\) space. The enhancement of the test power from using the sufficient statistics is then assured by the Neyman-Pearson lemma [27].
### Feature engineering: use rank, log density and symmetry
Reassuringly, whatever the classifier \(c\), the prediction ability \(D(p,q,\Phi,c)\) is always a lower bound to the corresponding divergence \(D^{\operatorname{opt}}(p,q,\Phi)\), and the permutation test is always exact. But we still wish to train a "good" classifier in terms of its out-of-sample performance, for a tighter bound in divergence, and a higher power in testing. We will typically use a flexible parametric family such as a multilayer perceptron (MLP) network to train the classifier.
The oracle optimal probabilistic classifier is the true label probability, and in the binary and multiclass classification (Example 1 and 4), the oracle has closed-form expressions, although we cannot evaluate:
\[\operatorname*{Pr}_{\mathrm{binary}}(t=0|\theta,y)=\frac{p(\theta|y)}{p( \theta|y)+q(\theta|y)},\quad\operatorname*{Pr}_{\mathrm{multi}}(t|\theta_{0}, \ldots,\theta_{M},y)=\frac{p(\theta_{t},y)/q(\theta_{t}|y)}{\sum_{k=0}^{M}p( \theta_{k},y)/q(\theta_{k}|y)}. \tag{10}\]
Statistically-meaningful feature.Depending on the inference task, we have more information than just the sample points and should use them in the classifier. In light of the shape and component of the optimal classifiers (10), the following statistically-meaningful features are useful whenever available: (i) The log target density \(\log p(\theta|y)\). As proxies, the log joint density \(\log p(\theta,y)\) and the log likelihood \(\log p(y|\theta)\) are often known to traditional Bayesian models. (ii) The log approximate density \(\log q(\theta|y)\), known to volitional and normalizing flows. (iii) When the log approximate density is unknown, a transformation is to integrate the density and obtain the posterior CDF, \(Q(\theta|y)=\mathbb{E}_{x\sim q(x|y)}\,\mathbb{1}(x>\theta)\), and this CDF can be approximated by the rank statistics among the approximate draws up to a rescaling \(r(\theta,y)\coloneqq\sum_{m=1,i=1}^{M\!S}\mathbb{1}(y_{i}=y)\mathbb{1}(\theta <\tilde{\theta}_{im})\). See Appendix Table 1 for the usage and extension of these features in binary and multiclass classifiers.
Linear features.We call the log likelihood, the log approximate density, or log prior (whenever available) linear features, and denote them to be \(l(\theta,y)\). For example if both likelihood and \(q\) densityis known, then \(l(\theta,y)=(\log p(y|\theta),\log q(\theta|y))\). Because they appear in the oracle 10, we will keep linear features in the last layer of the network (followed by a softmax). We recommend parameterizing the binary classification in the following form:
\[\Pr(t=1|(\theta,y))=\text{inv\_logit}[\operatorname{MLP}(\theta,y)+w^{T}l( \theta,y)]. \tag{11}\]
Symmetry in multiclass classifier.The multi-class classifier is harder to train. Luckily, we can use the symmetry of the oracle classifier (10): the probability of class \(k\) is proportional to a function of \((\theta_{k},y)\) only, hence we recommend parameterizing the multiclass probability as
\[\Pr(t=k|(\theta_{0},\theta_{1},\dots,\theta_{M},y))=\frac{\exp(g(\theta_{k},y)) }{\sum_{k^{\prime}=0}^{M}\exp(g(\theta_{k^{\prime}},y))},\ g(\theta,y)= \operatorname{MLP}(\theta,y)+w^{T}l(\theta,y), \tag{12}\]
where \(l(\theta,y)\) is available linear features. We only need to learn a reduced function from \(\theta\times y\) to \(\mathbb{R}\), instead of from \(\theta^{M+1}\times y\) to \(\mathbb{R}\), reducing the complexity while still keeping the oracle (10) attainable.
Zero waste calibration for MCMC.If \(\tilde{\theta}_{1},\dots,\tilde{\theta}_{M}\sim q(\theta|y)\) are produced by MCMC sampler, typically \(\tilde{\theta}\) has autocorrelations. Although classical SBC originated from MCMC application, the rank test requires independent samples, so SBC can only handle autocorrelation by thinning: to subsample \(\tilde{\theta}_{m}\) and hope the thinned \(\hat{\theta}\) draws are independent. Thinning is a waste of draws, inefficient when the simulations are expensive, and the thinned samples are never truly independent. With MCMC draws \(\tilde{\theta}\), our method could adopt thinning as our Thm. 1 and 4 are valid even when \(M=1\).
Yet we can do better by using all draws. The "separable" network architecture (12) is ready to use for MCMC samples. For example, we sample \((\theta,y)\sim p(\theta,y)\), and sample \((\tilde{\theta}_{1},\dots,\tilde{\theta}_{M})\) from a MCMC sampler whose stationary distribution we believe is \(q(\theta|y)\), and generate examples from the multiclass permutation (Example 4). Then we run a separable classifier(12) to predict \(t\). Intuitively, the separable network design (12) avoids the interaction between \(\tilde{\theta}_{m}\) with \(\tilde{\theta}_{m^{\prime}}\), and disallows the network to predict \(t\) based on the autocorrelation or clustering of \(\tilde{\theta}\). The next theorem states the validity of our method in MCMC settings without thinning.
**Theorem 6** (Mcmc).: _Suppose we sample \((\theta,y)\sim p(\theta,y)\), and sample \((\tilde{\theta}_{1},\dots,\tilde{\theta}_{M})\) from a MCMC sampler whose stationary distribution we believe is \(q(\theta|y)\) (i.e., marginally \(\tilde{\theta}_{i}\) is from \(q(\theta|y)\)), and generate examples \(((t_{1},\phi_{1}),\dots,(t_{M+1},\phi_{M+1}))\) from the multiclass permutation, such that \(\phi=(\theta_{0},\theta_{1},\dots,\theta_{M})\). Then we run an exchangeable classifier (12) in which \(g\) is any \(\Theta\times\mathcal{Y}\to\mathbb{R}\) mapping. Denote \(D_{4}^{\operatorname{MCMC,sep}}(p,q)\) to be the predictive ability of the optimal classifier among all separable classifiers (12), then \(D_{4}^{\operatorname{MCMC,sep}}(p,q)=D_{4}(p,q)\)._
Dimension reduction and nuisance parameter.Sometimes we only care about the sampling quality of one or a few key dimensions of the parameter space, then we only need to restrict the classifier to use these targeted dimensions, as a result of Theorem 1. For example, in binary classification, if we reduce the feature \(\phi=(\theta,y)\) to \(\phi=(h(\theta),y)\) in the classifier, where \(h(\theta)\) can be a subset of \(\theta\) dimensions, then the resulting classification divergence becomes projected divergence between \(h(\theta)|y,\theta\sim p(\theta|y)\) and \(h(\theta)|y,\theta\sim q(\theta|y)\), and other nuisance parameters do not impact the diagnostics.
Weighing for imbalanced binary classification.When \(M\) is big, the binary classification (Example 1) can suffer from imbalanced labels, and the divergence in (1) degenerates: \(D_{1}(p,q)\to 0\) as \(M\to\infty\). One solution is to use the multiclass classification which has a meaningful limit (Thm. 3). Another solution is to reweigh the loss function or log predictive density by the label \(t\). If the weights of class 1 and class 0 examples are \(\frac{M+1}{2M}\) and \(\frac{M+1}{2}\), used in both training and validation log prediction density, then regardless of \(M\), the weighted classification is equivalent to balanced classification, and the resulting divergence is the symmetric Jensen-Shannon (JS) divergence \(\frac{1}{2}\operatorname{KL}[p(\theta|y)||r(\theta|y)]+\frac{1}{2} \operatorname{KL}[q(\theta|y)||r(\theta|y)]\), where \(r(\theta|y)=\frac{1}{2}[p(\theta|y)+q(\theta|y)]\). See Appendix A for proofs.
## 5 Experiments
Closed-form example.Consider a multivariate normal parameter prior \(\theta\in\mathbb{R}^{d}\sim\operatorname{MVN}(\mathbf{0},\operatorname{Id}_{d})\) and a normal data model \(y|\theta\sim\operatorname{MVN}(\theta,\Sigma)\), so the exact posterior \(p(\theta|y)\) is explicit. In the experiments,we use neural nets to train binary (11) or symmetric multiclass classifiers (12), which we generally recommend. We assume the inference posterior \(\log q(\theta|y)\) is known and added to the classifier.
_Legitimacy of testing_. First, to validate our hypothesis test under the null, we use true inference \(q(\theta|y)=p(\theta|y)\). With \(d=16\), we simulate \(S=500\) draws of \((\theta,y)\sim p(\theta,y)\), then generate true posterior \(\hat{\theta}_{m}\sim p(\theta|y)\), and run our binary-classifier and obtain a permutation \(p\)-value. We repeat this testing procedure 1000 times to obtain the distribution of the \(p\)-values under the null, which is uniform as shown in Fig. 2, in agreement with Thm. 4. Because \(\theta\) has \(d\) dimensions, a SBC rank test on each margin separately is invalid without adjustment, and would require Bonferroni corrections.
_Power_. We consider two sampling corruptions: (i) bias, where we add a scalar noise (0.01 to 0.2) to each dimension of the true posterior mean, (ii) variance, where we inflate the true posterior covariance matrix by multiplying a scalar factor (0.8 to 1.2). We compare our discriminative tests to a Bonferroni-corrected SBC chi-squared rank-test. In both settings, we fix a 5% type-I error and compute the power from 1000 repeated tests. Fig. 3 shows that our method has uniformly higher power than SBC, sometimes as good as SBC with a 10 times bigger simulation sample size.
_Divergence estimate_. The left panel of Fig. 4 validates Thm. 1: We run a weighted binary classifier on the Gaussian simulations with a scalar bias 1 or 2 added to the posterior mean in \(q\). As the number of simulations \(S\) grows, the learned classifier quickly approaches the optimal, and the prediction ability matches the theory truth (dashed line): the Jensen-Shannon divergence between \(p(\theta|y)\) and \(q(\theta|y)\). The right panel validates Thm. 3: With a fixed simulation size \(S\), we increase \(M\), the number of posterior draws from \(q\), and run a multiclass classification. When \(M\) is big, the estimated divergence converges from below to the dashed line, which is the theory limit \(\text{KL}(p(\theta|y),q(\theta|y))\) in Thm. 3.
Benchmark examples.Next, we apply our calibration to three models from the SBI benchmark [23]: the simple likelihood complex posterior (SLCP), the Gaussian linear, and the Gaussian mixture model. In each dataset, we run adaptive No-U-Turn sampler (NUTS) and check the quality of the sampled distribution after a fixed number of iterations, varying from 2 to 2000 (we use equal number of iterations for warm-up and for sampling, and the warm-up samples were thrown away). At each given MCMC iterations, we run our classifier calibration, and estimate the JS divergence, as reported
Figure 4: _In the Gaussian example with bias = 1 or 2, with a moderately big simulation sample size \(S\), the divergence estimated from a binary (left panel) or multiclass (right panel) classifier matches its theory quantity (dashed): the Jensen-Shannon or Kullback–Leibler divergence in the big-\(M\) limit._
Figure 5: _We apply binary classifier calibration to posterior inferences on three models from the SBI benchmark, and check the sampling quality after some iterations. The \(x\)-axis is the number of MCMC iterations. The \(y\)-axis is the estimated JS divergence (\(\pm\) standard error) between the true target and sampled distribution at a given number of iterations, indicating gradual convergence._
Figure 3: _Our test has a uniformly higher power than SBC rank test. We simulate wrong inference \(q\) by multiplying the posterior covariance or adding biases to the posterior mean._
in Fig. 5. In all three panels, we are able to detect the inference flaws at early iterations and observe a gradual convergence to zero.
**Visual check.** Our diagnostic outputs rigorous numerical estimates, but it also facilities visual checks. We make a scatter plot of the binary classifier prediction \(\Pr(t=1|\phi)\), a proxy of \(p(\theta|y)/q(\theta|y)\), against any one-dimensional parameter we need to check: If that parameter is under- or over-confident, this scatter plot will display a U- or inverted-U-shaped trend. Compared with SBC rank histograms, our visualization can further check the magnitude of mismatch (how far away \(\Pr(t=1|\phi)\) is from 0.5), tail behavior (small \(q(\theta|y)\)), and several dimensions jointly. Fig 6 is a visual check in the Gaussian example when we multiply the posterior covariance in inference \(q\) by 0.8, 1, or 1.2.
**Galaxy clustering.** We would like to model galaxy spatial clustering observations in a Lambda cold dark matter framework [34], where observations \(y\) correspond to statistical descriptors of galaxy clustering, and a 14-dimensional parameter \(\theta\) encodes key cosmological information about the Universe. The forward simulation \(y|\theta\) involves expensive \(N\)-body simulations (each single \(y\) simulation takes around 5000 cpu hours). We consider three different physical models \(p\): they contain the same cosmology parameter \(\theta\) but three types of observations whose dimension \(d_{y}\) varies from 38 to 1031, reflecting three power-spectrum designs. We apply simulation based inference to sample from each of the models, where we try various normalizing flow architectures and return either the best architecture or the ensemble of five architectures. We now apply our discriminative calibration to these six approximate inferences using a weighted binary classifier trained from \(S=2000,M=500\) simulations for each inference (that is 1 million examples). We have added the log densities \(\log q(\theta|y)\) as extra features in the classifier, as they are known in the normalizing-flows. The table above shows the estimated Jensen-Shannon distances and standard deviation. Compared this table with the estimates from blackbox MLP classifier (Appendix Fig. 8), using statistical features greatly improves the label prediction and tightens the divergence estimate. From the table above, all inferences \(q\) have a significantly non-zero but small divergence, among which the 5-ensemble that uses a mixture of 5 flows always has a lower divergence so is more trustworthy. Further visualization of the classification log odds (Fig. 7) reveals that \(q\) is under-confident in the parameter that encodes the galaxy concentration rate. To interpret the tail behavior, the predicted log odds are negatively correlated with log joint normalizing-flow density \(q(\theta|y)\), suggesting \(q(\theta|y)>p(\theta|y)\) in the tail of \(q(\theta|y)\), another evidence of \(q\) underconfidence.
We share Jax implementation of our binary and multiclass classifier calibration in Github2.
Footnote 2: [https://github.com/yao-yl/DiscCalibration](https://github.com/yao-yl/DiscCalibration)
## 6 Discussions
This paper develops a classifier-based diagnostic tool for Bayesian computation, applicable to MCMC, variational and simulation based inference, or even their ensembles. We learn test statistics from data using classification. Through a flexible label mapping, our method yields a (family of) computable divergence metric and an always valid testing \(p\)-value, and the statistical power is typically higher than the traditional rank based SBC. Various statically-meaningful features are available depending on the task and we include them in the classifier. Visual exploration is also supported.
Figure 6: _Scatter plots of the classifier prediction v.s._ Figure 7: _In the cosmology example we visually check a one-dimensional parameter we need to check can the classifier predicted log odds v.s. cosmology param-reveal marginal over- or under-confidence in inference \(\Omega_{b}\) and \(\mathtt{conc}\), and \(\log q(\theta|y)\). Underconfidence is \(q(\theta|y)\) from a U or inverted-U shape._ _detected in \(\mathtt{conc}\) marginally, and more so in the joint._
**Related diagnostics.** The early idea of simulation based calibration dates back to [13] who compared the prior and the data-averaged posterior, i.e., to check \(p(\theta)=\int\!q(\theta|y)p(y)dy\) using simulations. [42] further developed techniques for the first and second moment estimate of the data-averaged posterior using the law of total variance. Our method includes such moment comparison as special cases: using the no-\(y\) binary labeling (Example 2), then comparing the empirical moments of the \(q\) sample the \(p\) sample can be achieved through a naive Bayes classifier or a linear discriminant analysis.
The rank-based SBC [4, 38] can be recovered by ours using binary labels and taking ranks to be features (Example 3). The rank statistic is central to SBC, which follows the classical frequentist approach--using the tail probability under the posited model quantifies the extremeness of the realized value[11] is only a convenient way to locate the observations in the reference distribution, especially in the past when high dimensional data analysis was not developed--it is the use of modern learning tools that sharpens our diagnostic. Recently, SBC has developed various heuristics for designing (one-dimensional) test statistics \(\phi(\theta,y)\). Such rank tests are recovered by our method by including the rank of \(\phi\) in our features (Sec. 4.1). For example, [26] proposed to test the rank of the likelihood \(\phi=p(\theta|y)\) in MCMC, [20] looked at the rank of proposal density \(q(\theta|y)\), [22] used the \(q\)-probability integral transformation in normalizing flows. In light of our Theorem 5, the optimal test statistic is related to the density ratio and hence problem-specific, which is why our method includes all known useful features in the classifier and learns the test statistic from data.
**Classifier two-sample test.** Using classification to compare two-sample closeness is not a new idea. The classifier two-sample test (C2ST) has inspired the generative adversarial network (GAN) [16] and conditional GAN [25]. [33] has developed GAN-typed inference tools for SBI. In the same vein, [19] used classifiers to diagnose multiple-run MCMC, and [24] used classifiers to learn the likelihood ratio for frequentist calibration, which is further related to using regression to learn the propensity score [36] in observational studies. The theory correspondence between binary classification loss and distribution divergence has been studied in [21, 28, 35]. This present paper not only applies this classifier-for-two-sample-test idea to amortized Bayesian inference to obtain a rigorous test, but also advances the classifier framework by developing the theory of the "label mapping", while the traditional GAN-type approach falls in the one-class-per-group category and deploy binary classifiers as in Example 1. Our extension is particularly useful when samples are overlapped (multiple \(\theta\) per \(y\)), autocorrelated, and imbalanced.
**KL divergence estimate from two samples.** As a byproduct, our proposed multiclass-classifier provides a consistent KL divergence estimate (Thm. 3) from two samples. Compared with existing two-sample KL estimate tools from the \(f\)-divergence representation [29] or the Donsker-Varadhan representation [2], our multiclass-classifier estimate appears versatile for it applies to samples with between-sample dependence or within-sample auto-correlation. It is plausible to apply our multiclass-classifier approach to other two-sample divergence estimation tasks such as the \(f\)-GAN [30], which we leave for future investigation.
**Limitations and future directions.** Like traditional SBC, our method assesses the difference between \(p(\theta|y)\) and \(q(\theta|y)\), averaged over \(y\). This is a "global" measure relevant to developing algorithms and statistical software. But sometimes the concern is how close \(p\) and \(q\) are for some particular observation \(y=y^{\mathrm{obs}}\). "Local" diagnostics have been developed for MCMC [14, 12, 40, 17], variational inference [41, 8] and simulation-based inference [42, 22] that try to assess \(q(\theta|y^{\mathrm{obs}})\) only. It would be valuable to extend our approach to address these cases. Another future direction would be to extend our approach to posterior predictive checks [11, 9, 39] that diagnose how well the statistical model \(p\) fits the observed data.
## Acknowledgement
The authors thank Bruno Regaldo-Saint Blancard for his help on galaxy clustering data and simulations. The authors thank Andreas Buja, Andrew Gelman, and Bertrand Clarke for discussions.
## References
* [1] Agrawal, A., Sheldon, D. R., and Domke, J. (2020). Advances in black-box VI: Normalizing flows, importance weighting, and optimization. In _Advances in Neural Information Processing Systems_.
* Belghazi et al. [2018] Belghazi, M. I., Baratin, A., Rajeswar, S., Ozair, S., Bengio, Y., Courville, A., and Hjelm, R. D. (2018). MINE: mutual information neural estimation. In _International Conference on Machine Learning_.
* Berk et al. [2013] Berk, R., Brown, L., Buja, A., Zhang, K., and Zhao, L. (2013). Valid post-selection inference. _Annals of Statistics_, 41(2):802-837.
* Cook et al. [2006] Cook, S. R., Gelman, A., and Rubin, D. B. (2006). Validation of software for Bayesian models using posterior quantiles. _Journal of Computational and Graphical Statistics_, 15(3):675-692.
* Cover and Thomas [1991] Cover, T. M. and Thomas, J. A. (1991). _Elements of Information Theory_. John Wiley & Sons, 2rd edition.
* Cranmer et al. [2020] Cranmer, K., Brehmer, J., and Louppe, G. (2020). The frontier of simulation-based inference. _Proceedings of the National Academy of Sciences_, 117(48):30055-30062.
* Das et al. [2021] Das, M., Green, S. R., Gair, J., Macke, J. H., Buonanno, A., and Scholkopf, B. (2021). Real-time gravitational wave science with neural posterior estimation. _Physical Review Letters_, 127(24):241103.
* Domke [2021] Domke, J. (2021). An easy to interpret diagnostic for approximate inference: Symmetric divergence over simulations. _arXiv:2103.01030_.
* Gabry et al. [2019] Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., and Gelman, A. (2019). Visualization in bayesian workflow. _Journal of Royal Statistical Society: Series A_, 182(2):389-402.
* Gelman et al. [2014] Gelman, A., Hwang, J., and Vehtari, A. (2014). Understanding predictive information criteria for Bayesian models. _Statistics and Computing_, 24(6):997-1016.
* Gelman et al. [1996] Gelman, A., Meng, X.-L., and Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies. _Statistical Sinica_, 6:733-760.
* Gelman and Rubin [1992] Gelman, A. and Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. _Statistical Science_, 7(4):457-472.
* Geweke [2004] Geweke, J. (2004). Getting it right: Joint distribution tests of posterior simulators. _Journal of American Statistical Association_, 99(467):799-804.
* Geyer [1992] Geyer, C. J. (1992). Practical Markov chain Monte Carlo. _Statistical Science_, 7(4):473-483.
* Goncalves et al. [2020] Goncalves, P. J., Lueckmann, J.-M., Deistler, M., Nonnenmacher, M., Ocal, K., Bassetto, G., Chintaluri, C., Podlaski, W. F., Haddad, S. A., and Vogels, T. P. (2020). Training deep neural density estimators to identify mechanistic models of neural dynamics. _Elife_, 9:e56261.
* Goodfellow et al. [2014] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial networks. In _Advances in Neural Information Processing Systems_.
* Gorham and Mackey [2017] Gorham, J. and Mackey, L. (2017). Measuring sample quality with kernels. In _International Conference on Machine Learning_.
* Hermans et al. [2022] Hermans, J., Delaunoy, A., Rozet, F., Wehenkel, A., Begy, V., and Louppe, G. (2022). A trust crisis in simulation-based inference? Your posterior approximations can be unfaithful. _Transactions on Machine Learning Research_.
* Lambert and Vehtari [2022] Lambert, B. and Vehtari, A. (2022). \(R^{*}\): A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers. _Bayesian Analysis_, 17(2):353-379.
* Lemos et al. [2023] Lemos, P., Coogan, A., Hezaveh, Y., and Perreault-Levasseur, L. (2023). Sampling-based accuracy testing of posterior estimators for general inference. _arXiv:2302.03026_.
* Liese and Vajda [2006] Liese, F. and Vajda, I. (2006). On divergences and informations in statistics and information theory. _IEEE Transactions on Information Theory_, 52(10):4394-4412.
* Linhart et al. [2022] Linhart, J., Gramfort, A., and Rodrigues, P. L. (2022). Validation diagnostics for SBI algorithms based on normalizing flows. _arXiv:2211.09602_.
* Lueckmann et al. [2021] Lueckmann, J.-M., Boelts, J., Greenberg, D., Goncalves, P., and Macke, J. (2021). Benchmarking simulation-based inference. In _International Conference on Artificial Intelligence and Statistics_.
* Masserano et al. [2022] Masserano, L., Dorigo, T., Izbicki, R., Kuusela, M., and Lee, A. B. (2022). Simulation-based inference with WALDO: Perfectly calibrated confidence regions using any prediction or posterior estimation algorithm.
* Mirza and Osindero [2014] Mirza, M. and Osindero, S. (2014). Conditional generative adversarial nets. _arXiv:1411.1784_.
* Modrak et al. [2022] Modrak, M., Moon, A. H., Kim, S., Burkner, P., Huurre, N., Faltejskova, K., Gelman, A., and Vehtari, A. (2022). Simulation-based calibration checking for Bayesian computation: The choice of test quantities shapes sensitivity. _arXiv:2211.02383_.
* Neyman and Pearson [1936] Neyman, J. and Pearson, E. (1936). Sufficient statistics and uniformly most powerful tests of statistical hypotheses. In _Statistical Research Memoirs_, pages 1-37. Cambridge University Press.
* Nguyen et al. [2009] Nguyen, X., Wainwright, M. J., and Jordan, M. I. (2009). On surrogate loss functions and \(f\)-divergences. _Annals of Statistics_, 37(2):876-904.
* Nguyen et al. [2010] Nguyen, X., Wainwright, M. J., and Jordan, M. I. (2010). Estimating divergence functionals and the likelihood ratio by convex risk minimization. _IEEE Transactions on Information Theory_, 56(11):5847-5861.
* Nowozin et al. [2016] Nowozin, S., Cseke, B., and Tomioka, R. (2016). f-GAN: Training generative neural samplers using variational divergence minimization. _Advances in Neural Information Processing Systems_.
* Papamakarios et al. [2021] Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. (2021). Normalizing flows for probabilistic modeling and inference. _Journal of Machine Learning Research_, 22(1):2617-2680.
* Prangle et al. [2014] Prangle, D., Blum, M., Popovic, G., and Sisson, S. (2014). Diagnostic tools of approximate Bayesian computation using the coverage property. _Australian & New Zealand Journal of Statistics_, 56(4):309-329.
* Ramesh et al. [2022] Ramesh, P., Lueckmann, J.-M., Boelts, J., Tejero-Cantero, A., Greenberg, D. S., Goncalves, P. J., and Macke, J. H. (2022). GATSBI: Generative adversarial training for simulation-based inference. _International Conference on Learning Representations_.
* Regaldo-Saint Blancard et al. [2023] Regaldo-Saint Blancard, B., Hahn, C., Ho, S., Hou, J., Lemos, P., Massara, E., Modi, C., Moradinezhad Dizgah, A., Parker, L.,, Yao, Y., and Eickenberg, M. (2023). SimBIG: Galaxy clustering analysis with the wavelet scattering transform. _arXiv:2310.15250_.
* Reid and Williamson [2011] Reid, M. and Williamson, R. (2011). Information, divergence and risk for binary experiments. _Journal of Machine Learning Research_.
* Rosenbaum and Rubin [1983] Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. _Biometrika_, 70(1):41-55.
* Rubin [1981] Rubin, D. B. (1981). The Bayesian bootstrap. _Annals of Statistics_, 9(1):130-134.
* Talts et al. [2018] Talts, S., Betancourt, M., Simpson, D., Vehtari, A., and Gelman, A. (2018). Validating Bayesian inference algorithms with simulation-based calibration. _arXiv:1804.06788_.
* Vandegar et al. [2021] Vandegar, M., Kagan, M., Wehenkel, A., and Louppe, G. (2021). Neural empirical Bayes: Source distribution estimation and its applications to simulation-based inference. In _International Conference on Artificial Intelligence and Statistics_.
* Vehtari et al. [2021] Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., and Burkner, P.-C. (2021). Rank-normalization, folding, and localization: An improved \(\hat{R}\) for assessing convergence of MCMC. _Bayesian Analysis_, 16(2):667-718.
* Yao et al. [2018] Yao, Y., Vehtari, A., Simpson, D., and Gelman, A. (2018). Yes, but did it work?: Evaluating variational inference. In _International Conference on Machine Learning_.
* Yu et al. [2021] Yu, X., Nott, D. J., Tran, M.-N., and Klein, N. (2021). Assessment and adjustment of approximate inference algorithms using the law of total variance. _Journal of Computational and Graphical Statistics_, 30(4):977-990.
**Appendices to "Discriminative Calibration"**
The Appendices are organized as follows. In Section A, we provide several extra theory results for the main paper. In Section B, we discuss our experiment and implementations. In Section C, we provide proofs for the theory claims.
## Appendix A Additional theory results
**Theorem 7** (Closed form expression of the divergence).: _Consider the label generating process in Section 4. Let \(\pi(t,\phi)\) be the joint density of the label and features after mapping \(\Phi\), i.e., the distribution of the \((t,\phi)\) generated by the simulation process:_
1. _sample_ \((\theta,y)\sim p(\theta,y)\)_;_
2. _sample_ \((\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M})\) _from_ \(q(\theta|y)\)_;_
3. _generate_ \(((t_{1},\phi_{1}),\ldots,(t_{L},\phi_{L})))=\Phi((y,\theta,\tilde{\theta}_{1}, \ldots,\tilde{\theta}_{M}))\)_;_
4. _sample an index l from Uniform_\((1,\ldots,L)\)_;_
5. _return_ \((t_{l},\phi_{l})\)_._
_We define_
\[\pi_{k}(\phi)=\pi(\phi|t=k)=\int_{y}\pi(y)\pi(\phi|y,t=k)dy\]
_to be the \(y\)**-averaged law** of \(\phi|t=k\). Note that \(y\) has been averaged out._
_Then classification divergence defended through (7) in Theorem 1 has a closed form expression, a Jensen-Shannon divergence in this projected \(\phi\) space._
\[D^{\mathrm{opt}}(p,q,\Phi)=\sum_{k=0}^{K-1}w_{k}\mathrm{KL}\left( \pi_{k}(\cdot)\mid\mid\sum_{j=0}^{K-1}w_{j}\pi_{j}(\cdot)\right). \tag{13}\]
This Theorem 7 gives a connection between our divergence and the familiar Jensen-Shannon divergence, which is well known to be linked to IID sample classification error (note that the classification examples we generate are not IID for they share \(y\)).
As a special case, when \(K=2\), \(w_{0}=w_{1}=1/2\), we recover the symmetric two-group Jensen-Shannon divergence, or JSD. In general, between two densities \(\pi_{1}(x)\) and \(\pi_{2}(x)\), the JSD is
\[\mathrm{Jensen\ Shannon\ divergence}(\pi_{1},\pi_{2})=\frac{1}{2} \,\mathrm{KL}\left(\pi_{1}\mid\mid\frac{\pi_{1}+\pi_{2}}{2}\right)+\frac{1}{2 }\,\mathrm{KL}\left(\pi_{2}\mid\mid\frac{\pi_{1}+\pi_{2}}{2}\right). \tag{14}\]
As before, the standard notation [5] of conditional divergence is defined as taking expectations over conditional variables:
\[\mathrm{JSD}(\pi_{1}(x|z),\pi_{2}(x|z))= \frac{1}{2}\,\mathrm{KL}\left(\pi_{1}(x|z)\mid\mid\frac{\pi_{1}(x |z)+\pi_{2}(x|z)}{2}\right)\] \[+\frac{1}{2}\,\mathrm{KL}\left(\pi_{2}(x|z)\mid\mid\frac{\pi_{1}( x|z)+\pi_{2}(x|z)}{2}\right). \tag{15}\]
**Theorem 8** (weighting).: _The binary label mapping generates \(M\) paris label-\(1\) examples and one pair of label-\(0\) examples per simulation. We can reweight the binary classifier by letting the ELTD be \(\mathbb{E}[\frac{C}{M+1}\,\mathbb{I}(t=1)\log p(t=1|c(\phi))+\frac{CM}{M+1} \mathbb{I}(t=0)\log p(t=0|c(\phi))],\) where \(C=\frac{(M+1)^{2}}{2M}\) is a normalizing constant._
_That is if we modify the training utility function or the validation data LPD to be:_
\[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{C}{M+1}\mathbb{I}(t_{i}=1) \log\Pr(t=1|c(\phi_{i}))+\frac{CM}{M+1}\mathbb{I}(t_{i}=0)\log\Pr(t=0|c(\phi_ {i}))\right).\]_then the resulting binary classification divergence in Thm. 1, i.e.,_
\[\mathrm{weighted\ ELPD}+\log 2,\]
_is the conditional Jensen Shannon divergence (15)._
### SBC test and generative classifier
In Example 3, we stated the intuitive connection between the SBC rank test and using rank as the feature in a "_generative_" classifier (in contrast to our "discriminate" approach). Now we make such comparison more precise: Assuming the parameter \(\theta\) is one-dimensional. This is what SBC does: From \(p\) we obtain \(\theta,y\), and for each \(\theta\) we compute the rank of it in the paired \(q\) samples who shared the same \(y\): \(r=\sum_{m=1}^{M}\mathbbm{1}(\theta\leq\bar{\theta}_{m})\). Repeating this simulation many times, we obtain a sample of \(r\), and SBC will test if such \(r\) is discrete-uniformly distributed. This test could be done by simulating a reference distribution that matches the null. We do so by generating the ranks in the \(q\) samples: \(\tilde{r}_{n}=\sum_{m\neq n}\mathbbm{1}(\bar{\theta}_{n}\leq\bar{\theta}_{m}) +\mathbbm{1}(\bar{\theta}_{n}\leq\theta)\). To be clear, there are other ways to generate reference samples, but all it matters is that the reference distribution matches the distribution of \(r\) under the null. The uniformity test in SBC now becomes to test if \(r\) and \(\tilde{r}\) have the same distribution. For example, we can do a two-sample Kolmogorov-Smirnov test or a permutation test.
In comparison, this is what naive Bayes would do: the naive Bayes classifier first estimates the two conditional distributions: \(\Pr(r|t=0)\) from the empirical distribution of the prior rank \(r\), and \(\Pr(r|t=1)\) from the empirical distribution of \(\tilde{r}\). Then to test if \(\Pr(r|t=0)=\Pr(r|t=1)\) becomes the same to testing if the empirical distribution of \(r\) is the same as the empirical distribution of \(r\)--the same task as in SBC. The point is that given any two-sample based test,
\[\text{SBC + sample-based test}\]
is _operationally_ the same as
only use rank + naive Bayes classifier + density estimated by histogram + sample-based test.
## Appendix B Implementations
### A cheat sheet of useful pre-learned features
Table 1 summarizes how to use the statistical feature (whenever available) in both the binary and multiclass classifiers.
### Code and data
We share the python and Jax implementation of our binary and multiclass calibration code in [https://github.com/yao-yl/DiscCalibration](https://github.com/yao-yl/DiscCalibration).
In the MLP training we include a standard \(L^{2}\) weight decay (i.e., training loss function = cross entropy loss + tuning weight \(*L^{2}\) penalization). We tune the weight of the decay term by a 5 fold cross-validation in the training set on a fixed grid \(\{0.1,0.01,0.001,0.0001\}\).
In the cosmology experiment in Section 5, we use one-hidden-layer MLP with 64 nodes to parameterize the classifier with the form (11), with additional pre-learned features such as \(\log q(\theta|y)\) added as linear features. The approximate log density \(\log q(\theta|y)\) is known (i.e., we can evaluate the for any \(\theta\) and \(y\), or at least least up to a constant) in either the normalizing flow or the ensemble of the normalizing flows. One classification run with roughly one million examples took roughly two hour
\begin{table}
\begin{tabular}{c|c|c} & binary & multiclass \\ \hline full feature & \((\theta,y)\) & \((\theta_{1},\ldots,\theta_{K},y)\) \\ MCMC & \(p(\theta,y),\ r(\theta,y)\) & \(p(\theta_{t},y)\ r(\theta_{t},y),1\leq t\leq K\) \\ VI & \(p(\theta,y),\ q(\theta|y)\) & \(p(\theta_{t},y)\ q(\theta_{t}|y),1\leq t\leq K\) \\ likelihood-free & \(p(\theta),\ q(\theta|y)\) & \(p(\theta_{t}),\ q(\theta_{t}|y),1\leq t\leq K\) \\ \end{tabular}
\end{table}
Table 1: A cheat sheet of useful pre-learned featurescpu time on a local laptop. It would be more efficient to run the classification on GPU, thought the classification cost is overall negligible compared with the simulation step \(y|\theta\) which is pre-computed and stored.
In the closed-form Gaussian experiment in Section 5, we consider the easiest setting: \(\theta\in\mathbb{R}^{d}\sim\mathrm{MVN}(\mathbf{0},\mathrm{Id}_{d})\) and a normal data model \(y|\theta\sim\mathrm{MVN}(\theta,\mathrm{Id}_{d})\). Indeed, we have kept both the true distribution and the sampling corruption mean-field to make the calibration task easier for the traditional SBC rank-test. The power of the traditional SBC rank-test would be even worse if we add some interaction terms in the corrupted samples \(q(\theta|y)\), while our method leans the joint space sampling quality by default. The exact posterior \(p(\theta|y)\) is known but we pretend we cannot evaluate it. We set \(d=16\) to be comparable with the real data example. The right panel validates Thm. 3 fixed the simulation size \(S=5000\) and vary \(M\). We have tried other fixed \(M=1000\) but it seems the resulting graph is very similiar. The confidence band in Figure 3 and Figure 4 is computed using 1.96 times standard error from repeated experiments.
Table 8 shows the naive classifier: using the bandbox MLP without the linear features. The estimate is rather loose compared with the table we obtained in Section 5 which used statistically-meaningful features.
## Appendix C Proofs of the theorems
We prove the theorems in the following order: Thm. 7, Thm. 1, Thm. 8, Thm. 6, Thm. 3, Thm. 5, Thm. 4, and finally Thm. 2.
###### Contents
* C.1 The classification divergence
* C.2 Reweighting (Theorem 8) leads to symmetric divergence
* C.3 MCMC without thinning (Theorem 6)
* C.4 Large sample limit and rate as \(M\to\infty\) (Theorem 3)
* C.5 Valid hypothesis testing (Theorem 4)
* C.6 Why does classifier help test? (Theorem 5)
* C.7 The maximum discriminative generator (Theorem 2)
### The classification divergence
To prove our Theorem 1, we first state an equivalent but a more general theorem. Note that we are allowing non-IID classification examples for the shared \(z\) variables in this theory.
Figure 8: _The estimated divergence of two simulation based inferences (either using one flow or an ensemble of five flows) in three cosmology models. These three models have the same parameter space and involve expensive simulations. Here we apply our classifier approach and estimate the Jensen–Shannon divergence (and standard deviation) using weighted binary classifiers from \(S=2000,M=500\) simulations: that is \(2000*501\)=1 million classification examples. In this table, we run this classifier by the black-box multilayer perceptron (MLP) on the full \((\theta,y)\) vector, while the table we have in the main paper further adds \(\log q(\theta|y)\) as a pre-learned feature since it is known in the normalizing flows. In this black box estimate table, we would pass a \(t\)-test and not even detect miscalculation. Adding a statistically-meaningful feature greatly improves the classifier prediction, and tightens the bound of the divergence estimate._
**Theorem 9** (The general relation between divergence and classification error).: _Define a simulation process \(\pi(z,k,x)\) through the following process, among the three random variables, \(z\in\mathbb{R}^{n}\), \(x\in\mathbb{R}^{d}\) can have any dimension, and \(k\in\{1,2,\ldots,K\}\) is an integer._
1. _Sample_ \(z\sim\pi(z)\)__
2. _Sample_ \(k\sim\operatorname{Categorical}(w)\)_, where_ \(w\) _is a simplex._
3. _Sample_ \(x\sim\pi_{k}(x|z)\)__
4. _Return_ \((z,k,x)\)__
_Define \(\operatorname{U}(\pi)\) to be the expected log probability or cross-entropy of an optimal classifier trained to predict \(k\) from \((x,z)\), i.e._
\[\operatorname{U}(\pi)=\max_{c}\mathbb{E}_{\pi}\operatorname{U}(k,c(x,z)),\]
_where \(U(k,c)=\log c(k)\). Then3_
Footnote 3: We emphasize that we are using the notation of conditional KL divergence:
\[\operatorname{KL}\left(\pi_{n}(x|z)\parallel\pi(x|z)\right)\coloneqq\int\left[ p(z)\int\pi_{k}(x|z)\log\left(\frac{\pi_{k}(x|z)}{r(x|z)}\right)dx\right]dz.\]
\[\operatorname{U}(\pi)-\sum_{k=1}^{K}w_{k}\log w_{k}=\sum_{n=1}^{K}w_{k} \operatorname{KL}\left(\pi_{n}(x|z)\parallel\pi(x|z)\right)\]
_where \(\pi(x|z)=\sum_{k=1}^{K}w_{k}\pi_{k}(x|z)\) is the mixture over all groups._
_We define \(D(\pi)=\operatorname{U}(\pi)-\sum_{k=1}^{K}w_{k}\log w_{k}\). Then this \(D\) has the following properties:_
1. _(lower bound)._ \(D(\pi)\geq 0\)_. It achieves zero if and only if all_ \(p_{k}\) _are the same, i.e.,_ \(\operatorname{Pr}(X^{k}\in A)=\operatorname{Pr}(X^{j}\in A)\) _for any measurable set_ \(A\)_._
2. _(upper bound)._ \(D(\pi)\leq\sum_{n=1}^{K}w_{k}\log w_{k}\)_. This maximum is achieved is_ \(p_{k}\) _are disjoint, i.e.,_ \(\operatorname{Pr}(X^{k}\in A)\operatorname{Pr}(X^{j}\in A)=0\) _for any measurable set_ \(A\)_._
3. _(alternative upper bound when the difference is small)._ \(D(\pi)\leq\max_{j,k}\operatorname{KL}(\pi_{k}(x|z),\pi_{j}(x|z))\)_._
4. _(reparametrization invariance). For any fixed bijective transformation_ \(x\mapsto g(x)\)_, let_ \(p_{k}^{g}\) _be the law of_ \(g(x^{k})\)_, then_ \(D(p_{k}^{g},\ldots,p_{k}^{g})=D(p_{k},\ldots,p_{k})\)_._
5. _(transformation decreases divergence). For any fixed transformation_ \(x\mapsto g(x)\)_, let_ \(p_{k}^{g}\) _be the law of_ \(g(x^{k})\)_, then_ \(D(p_{k}^{g},\ldots,p_{k}^{g})\leq D(p_{k},\ldots,p_{k})\)_._
Proof.: The (unknown) optimal conditional classifier is the Bayes classifier:
\[\hat{\operatorname{Pr}}(t=k|x,z)=\frac{w_{k}\pi_{k}(x|z)}{\sum_{j}w_{k}\pi_{j} (x|z)}.\]
Let \(r(x|z)\sum_{j=1}^{K}w_{k}\pi_{j}(x|z)\) be mixture density marginalized over the group index \(k\).
The expected log predictive density of the optimal classification is then
\[\mathrm{ELPD}(\hat{\Pr}(t=k|x,z)) =\mathbb{E}_{z}\int r(x|z)\sum_{k=1}^{K}\left[\frac{w_{k}\pi_{k}(x|z )}{r(x|z)}\log\left(\frac{w_{k}\pi_{k}(x|z)}{r(x|z)}\right)\right]dx\] \[=\mathbb{E}_{z}\sum_{k=1}^{K}w_{k}\int\pi_{k}(x|z)\log\left(\frac{ \pi_{k}(x|z)}{r(x|z)}+\log w_{k}\right)dx\] \[=\mathbb{E}_{z}\sum_{k=1}^{K}w_{k}\log(w_{k})+\sum_{n=1}^{K}w_{k} \,\mathbb{E}_{z}\int\pi_{k}(x|z)\log\left(\frac{\pi_{k}(x|z)}{r(x|z)}\right)dx\] \[=\sum_{k=1}^{K}w_{k}\log(w_{k})+\sum_{n=1}^{K}w_{k}\mathrm{KL} \left(\pi_{k}(x|z)\;,\;r(x|z)\right).\]
When \(\mathrm{ELPD}-\sum_{k=1}^{K}w_{k}\log(w_{k})=0\), \(\sum_{k=1}^{K}w_{k}\pi_{k}(x|z)=0\), hence these \(K\) conditional densities \(\pi_{k}(x|z)\) are the same almost everywhere.
The reparametrization invariance is directly inherited from KL diverge.
The upper bound is the consequence of \(\mathrm{ELPD}\leq 0\) for category classification and the lower bound is that KL divergence \(\geq 0\).
Let's emphasize again that we are following the notation of conditional divergence in [5], such that for any two joint density \(p(theta,y)\) and \(q(\theta,y)\), the conditional KL divergence is \(\mathrm{KL}(p(\theta|y)\|q(\theta|y))\coloneqq\mathbb{E}_{p(y)}\,\mathbb{E}_{p (y|\theta)}\log\frac{p(y|\theta)}{q(y|\theta)}\), an expectation is taken over \(y\).
The procedure in Theorem 9 can reflect many different generating processes. Let's recap the examples we show in Section 2.
In **Example 1** of the main paper, the binary case, \(K=2\), and we generate \(M\) samples from \(q\). \(\pi_{1}=p(\theta|y)\), \(\pi_{2}=q(\theta|y)\), \(w_{1}=1/M\), \(w_{2}=(M-1)/M\). Our simulation process in example 1 is equivalent to
1. Sample \(y\sim\text{marginal}\ p(y)\),
2. Sample \(k\sim\text{Categorical}(w)\), where \(w_{1}=1/M\), \(w_{2}=(M-1)/M\).
3. Sample \(x\sim\pi_{k}(x|z)\), where \(\pi_{1}=p(\theta|y)\), \(\pi_{2}=q(\theta|y)\).
4. Return \((z,k,x)\)
As a direct consequence of Theorem 9, the resulting divergence in the binary case is
\[D_{1}(p,q)=w_{1}\mathrm{KL}\left(p(\theta|y)\parallel r(\theta|y)\right)+w_{2 }\mathrm{KL}\left(q(\theta|y)\parallel r(\theta|y)\right),\]
where \(r=w_{1}p+w_{2}q\).
In **Example 2**: Binary label with no \(y\), we generate \(M\) samples from \(q\). \(K=2\), \(\pi_{1}=p(\theta)\), \(\pi_{2}=q(\theta)\), \(w_{1}=1/(M+1)\), \(w_{2}=M/(M+1)\). Our simulation process in example 2 is equivalent to
1. Sample \(k\sim\text{Categorical}(w)\), where \(w_{1}=1/M\), \(w_{2}=(M-1)/M\).
2. Sample \(x\sim\pi_{k}(x)\), where \(\pi_{1}=p(\theta)\), \(\pi_{2}=q(\theta)\).
3. Return \((k,x)\)
From Theorem 9, the resulting divergence reads
\[D_{3}(p,q)=w_{1}\mathrm{KL}\left(p(\theta)\parallel r(\theta)\right)+w_{2} \mathrm{KL}\left(q(\theta)\parallel r(\theta)\right)\]
In the multivariate **Example 4**, \(K=M+1\), and the individual density is
\[\pi_{k}(\theta_{1:K})=p(\theta_{k}|y)\prod_{m\neq k}q(\theta_{m}|y)\]
From Theorem 9, the resulting divergence reads
\[D_{4}(p,q)=\mathrm{KL}\left(p(\theta_{1}|y)\prod_{m>1}q(\theta_{m}|y)\parallel\frac {1}{K}\sum_{k=1}^{K}p(\theta_{k}|y)\prod_{m\neq k}q(\theta_{m}|y)\right)\]
We give more results on this \(D_{4}\) in Section C.4.
The discriminative calibration is designed to detect and gauge the difference between \(p(\theta|y)\) and \(q(\theta|y)\). From now on we use a long vector \(\theta_{1:M+1}\) (the index starts by 1) to represent the simulation draw. Each _simulation draw_ first generates one \(\theta_{1}\) from the prior, one data point y from the sampling distribution \(p(y|\theta_{1})\), and finally \(M\) draws from the approximate inference \(\theta_{1,\ldots,M+1}\sim q(\theta|y)\).
The "label generating process" \(\Phi\) takes a simulated draw as its input and returns a vector of features \(\phi\) and labels \(t\).
\[\Phi:(y,\theta_{1,\ldots,M+1})\mapsto\{(\phi_{1},t_{1}),\ldots,(\phi_{K},t_{K} )\}.\]
The classifier will then see this feature and labels (the index \(k\) is disregarded) across all simulations. We only ask that the map \(\Phi\) be within in set \(\mathbb{F}:\) satisfying that:
\[\mathbb{F}:\ \ \phi|y\text{ is independent of }t\text{ if }p(\theta|y)=q(\theta|y)\text{, a.e.}\]
Proof of Theorem 1.: The classifier sees features \(\phi\) and labels \(t\). Given a \(t\), \((\phi,y)\) are IID draws from \(p(y)\pi(\phi|y,t)\). In case \(y\) is not observable by the classifier, \(\phi|t\) are IID draws from \(\int_{y}p(y)p(\phi|y,t)\).
To use Theorem 9, let \(\mathbf{x}^{k}=\{(\phi_{j},y)|t_{j}=k\}\) and \(w_{k}=\Pr(t=k)\). The divergence from Theorem 9: reads
\[D^{\mathrm{opt}}(p,q,\Phi)=\sum_{n=1}^{K}w_{k}\mathrm{KL}\left(\pi_{1}(\phi)\, \ \sum_{k=1}^{K}w_{k}\pi_{1}(\phi)\right),\]
in which \(\pi_{k}(\phi)=p(\phi|t=k)=\int_{y}p(y)\pi(\phi|y,t=k)dy\) is the \(y\)-averaged \(\phi\) margin. This is essentially Theorem 7.
If \(p=q\), from \(\mathbb{F}\), the law of \(\phi,y\) is independent of \(t\) (\(y\) is always independent of \(t\) marginally), so that \(\pi_{k}(\phi)=\int_{y}p(y)\pi(\phi|y,t=k)dy\) is also independent of \(t\), as it only depends on \(y\) and \(\phi\), hence the divergence remains 0.
### Reweighting (Theorem 8) leads to symmetric divergence
We now prove Theorem 8. The binary label mapping generates \(M\) pairs label-\(1\) examples and one pair of label-\(0\) examples per simulation. We can reweight the binary classifier by letting the ELPD be \(\mathbb{E}[\frac{C}{M+1}\mathbbm{1}(t=1)\log p(t=1|c(\phi))+\frac{CM}{M+1} \mathbbm{1}(t=0)\log p(t=0|c(\phi))]\), where \(C=\frac{(M+1)^{2}}{2M}\) is a normalizing constant. We want to prove that after weighting, the classification divergence is symmetric Jensen shannon distance, as if we have balanced classification data.
Proof.: After the label-reweighing, the optimal classifier is the balanced-class classifier
\[c(t=1|\phi)=\frac{\Pr(\phi|t=1)}{\Pr(\phi|t=1)+\Pr(\phi|t=0)}.\]
The re-weighted ELPD in the population is
\[\mathrm{ELPD} =\mathbb{E}_{\phi}[\Pr(t=0|\phi)\frac{CM}{M+1}\log c(t=0|\phi)+ \Pr(t=0|\phi)\frac{C}{M+1}\log c(t=1|\phi)]\] \[=\mathbb{E}_{y}\left[\frac{p(\theta|y)+Mq(\theta|y)}{M+1}\left( \frac{p(\theta|y)}{p(\theta|y)+Mq(\theta|y)}\frac{CM}{M+1}\log\frac{p(\theta|y )}{2r(\theta|y)}+\frac{Mq(\theta|y)}{p(\theta|y)+Mq(\theta|y)}\frac{C}{M+1} \log\frac{q(\theta|y)}{2r(\theta|y)}\right)\right]\] \[=\mathbb{E}_{y}\left[C\frac{2M}{(M+1)^{2}}\left(\frac{p(\theta|y )}{2}\log\frac{p(\theta|y)}{2r(\theta|y)}+\frac{q(\theta|y)}{2}\log\frac{q( \theta|y)}{2r(\theta|y)}\right).\right]\]With an appropriate normalizing constant \(C=\frac{(M+1)^{2}}{2M}\),
\[\mathrm{ELPD}=\mathbb{E}_{y}\left[\frac{1}{2}p(\theta|y)\log\frac{p(\theta|y)}{r( \theta|y)}+\frac{1}{2}q(\theta|y)\log\frac{p(\theta|y)}{r(\theta|y)}\right]-\log 2.\]
That is, the reweighted ELPD + \(\log 2\) is the conditional Jensen Shannon divergence (15) between \(p(\theta|y)\) and \(q(\theta|y)\).
### MCMC without thinning (Theorem 6)
To diagnose MCMC, we sample \((\theta,y)\sim p(\theta,y)\), and sample (potentially autocorrelated) draws \((\tilde{\theta}_{1},\ldots,\tilde{\theta}_{M})\) from a MCMC sampler whose stationary distribution we believe is \(q(\theta|y)\) (i.e., marginally \(\tilde{\theta}_{i}\) is from \(q(\theta|y)\)), and generate examples \(((t_{1},\phi_{1}),\ldots,(t_{M+1},\phi_{M+1}))\), from the multi-class permutation (see definition in Example 4), such that
\[\phi=(\theta_{0},\theta_{1},\ldots,\theta_{M}).\]
Then we run an exchangeable classifier parameterized by
\[\Pr(t=k|(\theta_{0},\theta_{1},\ldots,\theta_{M},y))=\frac{\exp(g(\theta_{k},y ))}{\sum_{k^{\prime}=0}^{M}\exp(g(\theta_{k^{\prime}},y))}, \tag{16}\]
where \(g\) is any \(\Theta\times\mathcal{Y}\rightarrow\mathbb{R}\) mapping to be learned.
Note that here \(\Pr(t=k|(\theta_{0},\theta_{1},\ldots,\theta_{M},y))\) is the classifier model we restrict it to be, not the true population \(\pi\). In general \(\pi(t=k|(\theta_{0},\theta_{1},\ldots,\theta_{M},y))\)\(=\neq\Pr(t=k|(\theta_{0},\theta_{1},\ldots,\theta_{M},y))\) Roughly speaking, the optimal classifier should be the Bayes classifier projected to the restricted space (16), and we need to proof that this _projected restricted_ solution turns out to be the same as the IID case 10, which is not trivial in general.
Proof.: Intuitively, this separable network design (16) avoids the interaction between \(\tilde{\theta}_{m}\) with \(\tilde{\theta}_{m}\), and disallows the network to predict \(t\) based on the autocorrelation or clustering of \(\tilde{\theta}\).
Because of the permutation design and we define \(q\) to be the marginal distribution of \(\tilde{\theta}\), first, we know that
\[\pi(\theta_{k}|y,t=k)=p(\theta_{k}|y),\;\;\pi(\theta_{k}|y,t\neq k)=q(\theta_{ k}|y),\]
in the MCMC population \(\pi\) (there is no need to address the joint for now).
From this we obtain the conditionals in the population
\[\pi(t=k|\theta_{k},y)=\frac{p(\theta_{k}|y)}{p(\theta_{k}|y)+Mq(\theta|y)}\]
and further the ratios for any \(m\), \(k\) within the index set \(\{0,2,\ldots,M\}\) and \(m\neq k\):
\[\pi(t=k|\theta_{k},\theta_{m},y)=\frac{p(\theta_{k}|y)q(\theta_{k}|y)}{p( \theta_{k}|y)q(\theta_{m}|y)+q(\theta_{k}|y)p(\theta_{k}|y)+(M-1)q_{12}(\theta _{k},\theta_{m})}.\]
Here \(q_{12}(\theta_{k},\theta_{m})\) is the joint distribution of two out of \(M\) draws from the MCMC sampler \(q(\theta|y)\), which would often not be the same as \(q(\theta_{k}|y)q(\theta_{m}|y)\) because of the autocorrelation of Markov chains. We do not need to specify the form of this joint. The key blessing is that when \(\theta_{k}\) is from \(p(\theta|y)\) and \(\theta_{m}\) is from \(q(\theta|y)\), then they are independent (the numerator).
Next, from the line above we obtain the ratio estimate in the true population \(\pi\):
\[\frac{\pi(t=k|\theta_{k},\theta_{m},y)}{\pi(t=m|\theta_{k},\theta_{m},y)}=\frac {p(\theta_{k}|y)q(\theta_{k}|y)}{q(\theta_{k}|y)p(\theta_{k}|y)}=\frac{p( \theta_{k}|y)/q(\theta_{k}|y)}{p(\theta_{m}|y)/q(\theta_{m}|y)}. \tag{17}\]
Now we project the restricted classifier (16). Intuitively, the separable classifier "almost" only depends on \(\theta_{t}\), except for the normalizing constant. We can remove the dependence on the normalizing constant by specifying the ratio
\[\frac{\Pr(t=k|(\theta_{0},\theta_{1},\ldots,\theta_{M},y))}{\Pr(t=m|(\theta_{0 },\theta_{1},\ldots,\theta_{M},y))}=\frac{\exp(g(\theta_{k},y))}{\exp(g( \theta_{m},y))}.\]Marginalizing out all other components, we obtain
\[\frac{\Pr(t=k|(\theta_{k},\theta_{m},y))}{\Pr(t=m|(\theta_{k},\theta_{m},y))}=\frac{ \exp(g(\theta_{k},y))}{\exp(g(\theta_{m},y))}. \tag{18}\]
Matching the restriction (18) with the true population (17), the restricted projection is attainable if and only if
\[\exp(g(\theta_{t},y)=p(\theta_{t},y)/q(\theta_{t}|y),\]
so that the optimal classifier needs to be
\[\Pr(t|\theta_{1},\ldots,\theta_{K},y)=\frac{p(\theta_{t},y)/q(\theta_{t}|y)}{ \sum_{k=1}^{K}p(\theta_{k},y)/q(\theta_{k}|y)}.\]
It happens that this MCMC restricted optimal classifier matches the IID optimal classifier (10). It follows from the proof in Theorem 9 that the classification divergence using the restricted classifier is still \(D_{4}(p,q)\), as if \(\{\tilde{\theta}_{m}\}_{m=1}^{M}\) are IID samples from \(q(\theta|y)\).
### Large sample limit and rate as \(M\to\infty\) (Theorem 3)
In the multivariate **Example 4**, from each draw \(\theta,y,\tilde{\theta}_{1:M}\), we generate \(M+1\) examples from \(K\coloneqq M+1\) classes; one label from each. We will use the index starting from \(1\) in this subsection. For the \(k\)-th example, \(t_{k}=k\), and the permutation for classifier reads \(\phi_{k}\) is a vector including \(y\) and \(K\) copies of \(\theta\), where the \(k\)-th copy is the prior draw, and the remaining \(\theta\) are from \(q(|y)\). Slightly abused the notation, in this subsection, we call this long feature vector as \(\theta_{1:K}\); it is a vector in the \(\Theta^{M+1}\) space. We now prove Theorem 3.
Proof.: First, we write the true conditional label probability in this process:
\[\pi(t|\theta_{1:K},y)=\frac{p(\theta_{t}|y)\prod_{j\neq t}p(\theta_{j}|y)}{\sum _{t^{\prime}}p^{\prime}(\theta_{t^{\prime}}|y)\prod_{j\neq t^{\prime}}p^{ \prime}(\theta_{j}|y)}=\frac{p(\theta_{t}|y)/q(\theta_{t}|y)}{\sum_{j}p(\theta_ {j}|y)/q(\theta_{j}|y)}.\]
Plug it as the classifier, we obtain the optimal ELPD or negative cross entropy: ELPD = \(\mathbb{E}\log\pi(t|\theta_{1:K},y)\),
\[\mathrm{ELPD} =\mathbb{E}\frac{p(\theta_{t}|y)/q(\theta_{t}|y)}{\sum_{k=1}^{K}p (\theta_{k}|y)/q(\theta_{k}|y)}\] \[=\mathbb{E}\log\frac{p(\theta_{t}|y)}{q(\theta_{t}|y)}-\mathbb{E} \log\sum_{k}p(\theta_{k}|y)/q(\theta_{k}).\]
The first term above is simply
\[\mathbb{E}\log\frac{p(\theta_{t}|y)}{q(\theta_{t}|y)}=\mathrm{KL}\left(p( \theta|y)||q(\theta|y)\right)\]
According to our definition (4), the divergence is ELPD offset by an entropy term,
\[D_{4}\coloneqq\mathrm{ELPD}+\log K.\]
We now derive the limit of \(D_{4}-\mathrm{KL}\left(p(\theta|y)||q(\theta|y)\right)\) when \(M\to\infty\) (or equivalently \(K=M+1\to\infty\))
\[\Delta \coloneqq D_{4}-\mathrm{KL}\left(p(\theta|y)||q(\theta|y)\right)\] \[=\mathrm{ELPD}+\log K-\mathrm{KL}\left(p(\theta|y)||q(\theta|y)\right)\] \[=\log K-\mathbb{E}\log\left(\sum_{k}\frac{p(\theta_{k}|y)}{q( \theta_{k}|y)}\right)\] \[=-\mathbb{E}\log\left(\frac{1}{K}\sum_{k=1}^{K}\frac{p(\theta_{k }|y)}{q(\theta_{k}|y)}\right)\]Given any label value \(1\leq\leq K\), \(\theta_{t}\sim p(\cdot|y)\), and all the remaining \(\theta_{j}\sim q(\cdot|y)\) for \(j\neq t\).
Let
\[X_{k}=\frac{1}{K}\sum_{k=1}^{K}\frac{p(\theta_{k}|y)}{q(\theta_{k}|y)}=\frac{1}{ K}\frac{p(\theta_{t}|y)}{q(\theta_{t}|y)}+\frac{1}{K}\sum_{k\neq t}\frac{p(\theta_{ t}|y)}{q(\theta_{t}|y)}.\]
The first term
\[\frac{1}{K}\sum_{k=1}^{K}\frac{p(\theta_{t}|y)}{q(\theta_{t}|y)}\to 0.\]
The second term, call it \(\Delta_{2}\) is the mean of IID means as \(\theta_{j}\sim q(\cdot|y)\) for \(j\neq t\), the law of large number yields
\[\Delta_{2}\coloneqq\frac{1}{K}\sum_{k\neq t}^{K}\frac{p(\theta_{t}|y)}{q( \theta_{t}|y)},\quad\Delta_{2}\to\mathbb{E}_{x\sim q(x|y)}\,\frac{p(x|y)}{q(x| y)}=1.\]
This proves that
\[\Delta\to 0,\text{ as }K\to\infty.\]
Hence,
\[D_{4}-\operatorname{KL}\left(p(\theta|y)||q(\theta|y)\right)\to 0,\]
which finished the first part of the proof.
Now let's derive the rate. Under regularity conditions, for example, the variance of density ratio is bounded, i.e., there exists a constant \(C<\infty\), such that for all \(y\),
\[\operatorname{Var}_{\theta_{t}\sim q(\theta_{t}|y)}\left(\frac{p(\theta_{t}|y) }{q(\theta_{t}|y)}\right)<C,\]
then CLT holds, such that the second term above has a normal limit,
\[\sqrt{K}(\Delta_{2}-1)\to\text{normal}(0,\sigma^{2}),\ \text{ in distribution},\]
where
\[\sigma^{2} =\operatorname{Var}_{q}(\frac{p(\theta_{t}|y)}{q(\theta_{t}|y)})\] \[=\mathbb{E}_{\theta,y\sim q(\theta,y)}\left(\frac{p(\theta|y)}{q( \theta|y)}-1\right)^{2}\] \[=\mathbb{E}_{y}\,\mathbb{E}_{\theta\sim q(\theta|y)}\left(\frac{ p(\theta|y)}{q(\theta|y)}-1\right)^{2}\] \[=\chi^{2}\left(p(\theta|y)\mid\mid q(\theta|y)\right),\]
which is the definition of the conditional chi-squared divergence.
Consider a Taylor series expansion, \(\log(1+x)=x-\frac{1}{2}x^{2}+o(x^{2})\). Using the Delta method to express the log function and the Slutsky theorem to ignore the zero term, we get
\[K\,\mathbb{E}\log(\Delta_{2})\to-\frac{\sigma^{2}}{2}.\]
Plug this in the definition of \(\Delta\) we obtain the desired convergence rate,
\[D_{4}=\operatorname{KL}\left(p(\theta|y)||q(\theta|y)\right)-\frac{1}{2M}\chi^ {2}\left(p(\theta|y)\mid\mid q(\theta|y)\right)+o(1/M).\]
This proves the claims in Theorem 3.
### Valid hypothesis testing (Theorem 4)
It is not trivial to have a valid permutation test since we do not have IID examples. One naive permutation is to permutate all the labels \(t\) (across batches \(i\) ). The resulting permutation testing is not uniform under the null (Figure 9).
In contrast, when we permute the label, we only ask for within-batch permeation (permute the index \(\{t_{1},\ldots,t_{L}\}\) in each batch). See Table 2 for an illustration. Let's prove this permutation is valid.
Proof.: Under the null, \(p(\theta|y)\)= \(q(\theta|y)\) almost everywhere. According to the label mapping \(\Phi\in\mathbb{F}\), label \(t\) is independent of \(\phi\) given \(y\). We first show that label \(t\) is independent of \(\phi\).
In general conditional independence does not lead to unconditional independence. But because here we design the generating process by \(\pi(y,t,\phi)=\pi_{Y}(y)\pi_{t}(t)\pi(\phi|y,\theta)\). Under the null we have \(\pi(y,t,\phi)=\pi_{t}(t)(\pi_{Y}(y)\pi(\phi|y))\). Hence \(t\) needs to be independent of \(\phi\).
For any classifier \(c\), because \(c(\phi)\) is a function of \(\phi\), \(c(\phi)\) and \(t\) are also independent. Let \(\pi_{\Phi}(\phi)\) be the marginal distribution of \(\phi\) in \(\pi\).
Now we are computing the cross entropy (with respect to the population \(\pi\)). \(\mathrm{LPD}=\sum_{j=1}^{N}U(c(\phi_{n}),t_{n})\) where \(U\) is the log score \(U(P,x)=\log P(x)\). It is a random variable because we have a finite validation-set. Now we are conducting a permutation of label \(t\), the permuted label \(\tilde{t}\) is an independent draw from the same marginal distribution of \(t\), \(\pi_{t}(t)\). Because of the independence, \(\mathbb{E}_{\phi,t}\,U(c(\phi),t)=\mathbb{E}_{t}\,\mathbb{E}_{\phi}\,U(c(\phi),t)\), hence the permuted \(\mathrm{LPD}_{b}\stackrel{{ d}}{{=}}\mathrm{LPD}\), where \(\mathrm{LPD}_{b}\) is the computed LPD in the \(b\)-th permutation. Therefore \(\Pr(\mathrm{LPD}\leq x)=\Pr(\mathrm{LPD}_{b}\leq x)\). In expectation (with an infinitely amount of repeated random permutation), this \(p\)-value will be uniform on [0,1].
In practice, the computed p-value from finite permutation can only take values on a finite set \(\{0,1/B,\ldots,(B-1)/B,1\}\). More precisely, under the null hypothesis, this permutation test p-value is _discretly-uniform_ on this set. This is because for any \(0\leq m\leq B\)
\[\Pr(p=m/B)=\int_{0}^{1}\binom{B}{m}p^{m}(1-p)^{B-m}dp=1/(B+1),\ \ \forall m.\]
Hence, the permutation \(p\)-value based on \(B\) random permutations is uniformly distributed on the set \(\{0,1/B,\ldots,(B-1)/B,1\}\).
Figure 9: _Our designed permutation vs the naive permutation._
### Why does classifier help test? (Theorem 5)
Let's recap the binary label-generating process:
1. Sample \(y\sim\text{marginal}\)\(p(y)\),
2. Sample \(k\sim\text{Bernoulli}(w)\), where \(w_{1}=1/M\), \(w_{2}=(M-1)/M\).
3. Sample \(\theta\sim\pi_{k}(\theta|z)\), where \(\pi_{0}=p(\theta|y)\), \(\pi_{1}=q(\theta|y)\).
4. Return \((z,k,x)\)
Theorem 5 state that the optimal classifier has a sufficiency propriety in that (a) let \(\hat{c}\) be the probability of label 1 in the optimal classifier as per (7), and let \(\pi_{c}^{p}\) and \(\pi_{c}^{q}\) be the one-dimensional distribution of this \(\hat{c}(\phi)\) when \((\theta,y)\) is sampled from \(p(\theta,y)\) or from \(p(y)q(\theta|y)\) respectively, then (i) Conditional on the summary statistic \(\hat{c}\), the label \(t\) is independent of all features \(\phi=(\theta,y)\). (ii) There is no loss of information in divergence as the joint divergence is the same as the projected divergence, \(D_{1}(p,q)=D_{1}(\pi_{c}^{p},\pi_{c}^{q})\).
Proof.: There are three random variables, \(\theta\), \(y\), and \(t\). The optimal classifier \(\hat{c}\) is the probability in this true joint:
\[\hat{c}(\theta,y)=\Pr(t=1|(\theta,y)).\]
To show sufficiency or conditional independence, all we need to show is that, conditional on any given value of \(\hat{c}\), \(\Pr(t=1|(\theta,y),c)\) does not depend on \((\theta,y)\) (in other words, \(\Pr(t=1|(\theta,y),c)\) is a function of \(c\) only). This becomes obvious as
\[\Pr(t=1|(\theta,y),c(\theta,y))=\Pr(t=1|(\theta,y))=\hat{c}(\theta,y).\]
Now we prove that there is "no loss" in divergence.
\[D_{1}(p,q)=w\operatorname{KL}(p(\theta,y)||r(\theta,y))+(1-w)\operatorname{ KL}(q(\theta,y)||r(\theta,y))\]
We express the first term in \(\hat{c}\)
\[\operatorname{KL}(p(\theta,y)\mid\mid wp(\theta,y)+(1-w)q(\theta, y))\] \[=\operatorname{KL}(\pi(\theta,y|t=0)\mid\mid\pi(\theta,y))\] \[=\operatorname{KL}(\pi(\hat{c}|t=0)\mid\mid\pi(\hat{c}))- \operatorname{KL}(\pi(\theta,y|\hat{c},t=0)\mid\mid\pi(\theta,y|\hat{c}))\]
This steps uses the chain rule of KL divergence: \(\operatorname{KL}[p(x,y)\mid q(x,y)]=\operatorname{KL}[p(x)\mid q(x)]+ \operatorname{KL}[p(y\mid x)\mid q(y\mid x)]\)
Using conditional independence:
\[\pi(\theta,y|\hat{c},t=0)=\pi(\theta,y|\hat{c},t=1)\]
Hence \(\operatorname{KL}(\pi(\theta,y|\hat{c},t=0)\mid\mid\pi(\theta,y|\hat{c}))=0\). Therefore,
\[\operatorname{KL}(p(\theta,y)||r(\theta,y))=\operatorname{KL}(\pi(\hat{c}|t=0 )\mid\mid\pi(\hat{c}))\]
where \(\pi(c)=\pi(\hat{c}|t=0)+(1-w)\pi(\hat{c}|t=0)\)
Similarly,
\[\operatorname{KL}(q(\theta,y)||r(\theta,y))=\operatorname{KL}(\pi(\hat{c}|t= 1)\mid\mid\pi(\hat{c}))\]
This proves \(D_{1}(p,q)=D_{1}(\pi_{c}^{p},\pi_{c}^{q})\).
### The maximum discriminative generator (Theorem 2)
We save the proof of Theorem 2 in the end for its length.
The generator \(\phi\) contains a few degrees of freedom: the number of classes \(K\), the number of examples \(L\), and how to design and the label-feature pairs. In the binary labeling: \(t_{1}=1,\phi_{1}=(\theta_{1},y)\) and the remaining \(t_{k}=0,\phi_{k}=(\theta_{k},y)\) for \(2\leq k\leq M+1\). The multi-class \(\Phi^{*}\) assigns labels 1:\(K\) as
\[\Phi^{*}:t_{k}=k,\phi_{k}=(\operatorname{Perm}_{1\to k}(\theta_{1}, \ldots,\theta_{M}),y),1\leq k\leq K=M+1. \tag{19}\]Before the main proof that the multiclass permutation creates the largest divergence, let's first convince that the multi-class classification produced higher divergence than the binary one.
In the binary classification with \(M=1\) (one draw from \(p\) and one draw from \(q\))
\[D_{1}(p,q)=\frac{1}{2}\operatorname{KL}\left(p(\theta|y),\frac{p(\theta|y)+q( \theta|y)}{2}\right)+\frac{1}{2}\operatorname{KL}\left(q(\theta|y),\frac{p( \theta|y)+q(\theta|y)}{2}\right).\]
In the multi-class classification with \(M=1\),
\[D_{4}(p,q)=\operatorname{KL}\left(p(\theta^{1}|y)q(\theta^{2}|y),\frac{p( \theta^{1}|y)q(\theta^{2}|y)+q(\theta^{1}|y)p(\theta^{2}|y)}{2}\right).\]
Use the joint \(\operatorname{KL}>\) marginal KL, we have
\[D_{4}(p,q)\geq\operatorname{KL}\left(p(\theta^{1}|y),\frac{p(\theta^{1}|y)+q( \theta^{1}|y)}{2}\right).\]
Likewise,
\[D_{4}(p,q)\geq\operatorname{KL}\left(q(\theta^{2}|y),\frac{p(\theta^{2}|y)+q( \theta^{2}|y)}{2}\right).\]
Add these two lines we obtain
\[D_{4}(p,q)\geq D_{1}(p,q).\]
To prove that this multi-class permutation produced the uniformly largest divergence (across \(M\), \(K\), \(p\), \(q\)) optimal, we organize the proof into lemmas 5 to 9. For notation brevity, we denote \(\hat{M}\coloneqq M+1\) in these lemmas to avoid using index \(M+1\).
**Lemma 10**.: _For an arbitrary integer \(L\), any given output space \(\mathcal{Y}\), and any input space \(\mathcal{X}\) that has at least \(L\) elements, if there are two functions mapping \(\mathcal{X}^{L}\) to \(\mathcal{Y}\)_
\[f_{1},f_{2}:(x_{1},\ldots,,x_{L})\mapsto y\in\mathcal{Y}.\]
_satisfying the following propriety:_
* _for any probability distribution_ \(\pi\) _on_ \(\mathcal{X}\)_, when_ \(x_{1},\ldots,x_{n}\) _are_ \(L\) _iid random variables with law_ \(\pi\)_,_ \(f_{1}(x_{1},\ldots,,x_{L})\) _has the same distribution as_ \(f_{2}(x_{1},\ldots,,x_{L})\)_,_
_then there must exist a permutation of \(1:L\), denoted by \(\sigma(1:L)\), such that_
\[f_{2}(x_{1},\ldots,x_{L})=f_{1}(x_{\sigma(1)},\ldots,x_{\sigma(L)}).\]
Proof.: For any \(L\) distinct values \(a_{1},\ldots,a_{L}\), \(a_{i}\in\mathcal{X}\), let \(\pi\) be a mixture distribution of \(L\) delta functions:
\[\pi=\sum_{m=1}^{L}\delta(a_{m})p_{m},\]
where the \(m\)-th mixture probability is
\[p_{m}=C\left(\frac{1}{2}\right)^{L^{m-1}},\;\;C^{-1}=\sum_{m=1}^{L}\left(\frac {1}{2}\right)^{L^{m-1}}.\]
\(C\) is chosen such that \(\sum_{m=1}^{L}p_{m}=1\).
Now that \(x_{1},\ldots,x_{L}\) are \(L\) IID random variables from this \(\pi\), \(f_{1}(x_{1},\ldots,x_{L})\) is also a mixture of delta functions, For any sequence of input indices \((u_{1},\ldots,u_{L})\in\{1:L\}^{L}\),
\[\Pr(f_{1}(x_{1},\ldots,,x_{L})=f_{1}(a_{u_{1}},\ldots,,a_{u_{L}})) =\prod_{m=1}^{L}((C/2^{L^{m-1}})^{\sum_{j=1}^{L}1(u_{j}=m)})\] \[=C^{L}(\frac{1}{2})^{\sum_{m=1}^{L}\left(\sum_{j=1}^{L}1(u_{j}=m )L^{m-1}\right)}, \tag{20}\]in which the power index can be written as
\[\left(\sum_{j=1}^{L}1(u_{j}=1),\ldots,\sum_{j=1}^{L}1(u_{j}=L)\right)_{L}\coloneqq \sum_{m=1}^{L}\left(\sum_{j=1}^{L}1(u_{j}=m)L^{m-1}\right)\]
as an \(L\)-decimal-integer.
Next, we study the law of \(f_{2}(x_{1},\ldots,,x_{L})\):
\[\Pr(f_{2}(x_{1},\ldots,,x_{L})=f_{2}(a_{1},\ldots,,a_{L}))=C^{L}(\frac{1}{2})^ {(1,1,\ldots,1)_{L}}.\]
Because \(f_{2}(x_{1},\ldots,,x_{L})\) and \(f_{1}(x_{1},\ldots,,x_{L})\) have the same distribution, \(f_{2}(a_{1},\ldots,,a_{L}))\) needs to match the value at which \(f_{1}(x_{1},\ldots,,x_{L})\) has probability \(C^{L}(\frac{1}{2})^{(1,1,\ldots,1)_{L}}\) to attain. Comparing with (20), this probability is only attained when \(\sum_{j=1}^{L}1(u_{j}=m)=1\), \(\forall m\). That is, there exists a \(\sigma\), a permutation of \(1:L\), such that \(u_{1},\ldots,u_{L}=\sigma(1,2,\ldots,L)\).
Matching the value of \(f_{2}(x_{1},\ldots,,x_{L})\) we obtain
\[f_{2}(a_{1},\ldots,a_{L})=f_{1}(a_{\sigma(1)},\ldots,a_{\sigma(L)}).\]
Because the choice of vector \((a_{1},\ldots,a_{L})\) is arbitrary, we have
\[f_{2}(x_{1},\ldots,x_{L})=f_{1}(x_{\sigma(1)},\ldots,x_{\sigma(L)}).\]
Because augmentation increases divergence, for the purpose of finding the largest divergence, to produce the largest divergence, we only need to consider an augmented generator that includes all \(y\)
\[\Phi^{aug}:(y,\theta_{1,\ldots,L})\mapsto\{((\phi_{1},y),t_{1}),\ldots,((\phi_ {K},y),t_{K})\}.\]
It is enough to consider the generator of which \(\phi_{k}=\phi_{k}(\theta_{1},\ldots,\theta_{L})\) are \(K\) functions of \((\theta_{1},\ldots,\theta_{L})\).
**Lemma 11**.: _For any augmented generator \(\Phi^{arg}\) satisfies \(\mathbb{F}\), the null, there must exists \(K\) permutations \(\sigma_{1}(1:L),\ldots,\sigma_{K}(1:(L))\), with the convention \(\sigma_{1}(1:L)=1:L\), such that_
\[\phi_{k}(\theta_{1},\ldots,\theta_{L})=\phi_{1}(\theta_{\sigma_{k}(1:(L))}).\]
Proof.: Use Lemma (10) for \((K-1)\) times.
**Lemma 12**.: _For any augmented generator \(\Phi^{arg}\) satisfies \(\mathbb{F}\), all feature-label generator can be replaced by a permutation, \(\phi_{k}(\theta_{1},\ldots\theta_{L})=(\theta_{\sigma_{k}(1)},\ldots\theta_{ \sigma_{k}(L)})\), while the divergence does not decrease._
Proof.: From the previous lemma,
\[\phi_{k}(\theta_{1},\ldots,\theta_{L})=\phi_{1}(\theta_{\sigma_{k}(1:L)}).\]
The augmented feature is now \((\phi_{1}(\theta_{\sigma_{k}(1:L)}),y)\), a transformation of \((\theta_{\sigma_{k}(1:L)}),y)\). Using the raw feature \((\theta_{\sigma_{k}(1:L)}),y)\) keeps the divergence non-decreasing.
Now that we only want to consider permutation-based generators: there exists a \(K\), and \(K\) permutations \(\sigma_{k}(1:L)\).
\[\Phi^{aug}:(y,\theta_{1,\ldots,L})\mapsto\{((\phi_{1},y),t_{1}),\ldots,((\phi _{k},y),t_{k})\}.\]
\[\phi_{k}(\theta_{1},\ldots,\theta_{L})=\theta_{\sigma_{k}(1:L)}.\]
Given a \(p\neq q\), any permutations \(\theta_{\sigma_{i}(1:L)}\) contains one copy from \(p\) and \((L-1)\) copies from \(q\). It suffices to only consider those permutation \(\theta_{\sigma_{i}(1:L)}\) whose distributions are unique.
**Lemma 13**.: _Given any \(p\neq q\) and \(L\) fixed, assuming all \(\theta_{\sigma_{i}(1:L)}\) has different distributions, then \(D(q,p,\Phi^{arg})\) is an increasing function on \(K\)._Proof.: Using that joint KL is always not smaller than the KL divergence of sub-coordinates.
**Lemma 14**.: _Among all permutations \(\sigma(\theta_{1},\ldots\theta_{L})\), the maximum number of distinct distributions are \(K=L\)._
Proof.: The total number of permutations is \(L!\). Because \(\theta_{2\cdot L}\) are IID given \(y\), the permutation of index \(2:L\) does not change the distribution. When \(p\neq q\), the total number of distributionally distinct permutations is \(L!/(L-1)!=L=M+1\).
It is clear that the proposed multi-class permutation \(\Phi^{*}\) attains this maximum number of distinct distributions, which proves its optimality among all generators from all previous lemmas, thereby the proof of our Theorem 2. | ## Review
### Summary
This paper contributes to the field of simulation-based inference (SBI) by presenting a novel classifier-based approach to measure miscalibration in Bayesian computation methods, including Approximate Bayesian Computation (ABC). The authors develop techniques to assess the quality of posterior approximations by using classifiers to compare samples from true and approximated posteriors. Through theoretical grounding and empirical validation, the paper showcases improvements over existing methods, particularly in terms of data efficiency and statistical interpretation. The balance of theoretical and practical discussions enhances its accessibility and relevance to the SBI literature, while also addressing an understudied problem in the literature. However, the scope of the paper's applicability to the NeurIPS conference remains a point of contention.
### Strengths
- The paper successfully generalizes the Simulation-Based Calibration (SBC) method within a discriminative framework, enhancing understanding and comparison with other approaches.
- It presents three theorems in Section 3 that offer practical utility, ensuring classifier performance serves as a lower bound for theoretical quantities.
- The algorithm is well-grounded theoretically and addresses an important, understudied area in the literature.
- Empirical studies validate the theoretical results, demonstrating the effectiveness of the proposed methods.
- The paper provides a clear description of the challenge and proposed methods, including useful discussions on implementation and expected behaviors in large sample limits.
### Weaknesses
- The mathematical notation can be confusing, making it difficult to differentiate between scalar inputs and random variable samples.
- There is insufficient discussion on the quality of classifiers trained on calibration data and their subsequent use in statistical tests.
- The paper does not explore the impact of challenging behaviors such as auto-correlations and class imbalances in empirical evaluations.
- The idea of using classifiers for two-sample testing is not entirely new, which may limit the perceived novelty of the contribution.
- Some experiments are too simple, lacking complexity that could provide deeper insights into the method's quality in various scenarios.
### Questions
- Why is Figure 2 placed after Figures 3-6, and what is the rationale behind this arrangement?
- How useful is the diagnostics for q(θ|y) if it is only valid in average over y?
- In the context of Example 1-4, how does the class imbalance affect the classifier's ability to differentiate between classes?
- Could the authors provide more details on the posterior model in the multi-variate Gaussian example?
- Can the authors explain the ramifications of the Neyman-Pearson lemma in relation to their proposed tests?
### Soundness
**Score:** 3
**Description:** Good: The paper presents a solid theoretical foundation and empirical validation, although some methodological aspects could be clearer.
### Presentation
**Score:** 3
**Description:** Good: The paper is generally well-written but contains some confusing mathematical notation and could benefit from clearer organization.
### Contribution
**Score:** 3
**Description:** Good: The paper addresses a significant problem in SBI and presents useful methods, but some aspects of the contribution may not be entirely novel.
### Rating
**Score:** 7
**Description:** Accept: The paper is technically solid with moderate-to-high impact potential, though it requires minor improvements in clarity and depth.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The decision to accept is based on the paper's originality, solid theoretical foundations, and significant contributions to SBI methods. While some minor weaknesses were noted, they do not detract from the overall quality and relevance of the work. The authors are encouraged to address the feedback in their final version to enhance clarity and depth.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Navigating Data Heterogeneity in Federated Learning:
A Semi-Supervised Federated Object Detection
Taehyeon Kim\({}^{1}\) Eric Lin\({}^{2}\) Junu Lee\({}^{3}\) Christian Lau\({}^{2}\) Vaikkunth Mugunthan\({}^{2}\)
\({}^{1}\)KaiST \({}^{2}\)DynamoFL \({}^{3}\)The Wharton School
[email protected]
###### Abstract
Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources while maintaining data privacy. Nevertheless, it faces challenges with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving. To address these hurdles, we navigate the uncharted waters of Semi-Supervised Federated Object Detection (SSFOD). We present a pioneering SSFOD framework, designed for scenarios where labeled data reside only at the server while clients possess unlabeled data. Notably, our method represents the inaugural implementation of SSFOD for clients with 0% labeled non-IID data, a stark contrast to previous studies that maintain some subset of labels at each client. We propose **FedSTO**, a two-stage strategy encompassing **S**elective **T**raining followed by **O**rthogonally enhanced full-parameter training, to effectively address data shift (e.g. weather conditions) between server and clients. Our contributions include selectively refining the backbone of the detector to avert overfitting, orthogonality regularization to boost representation divergence, and local EMA-driven pseudo label assignment to yield high-quality pseudo labels. Extensive validation on prominent autonomous driving datasets (BDD100K, Cityscapes, and SODA10M) attests to the efficacy of our approach, demonstrating state-of-the-art results. Remarkably, FedSTO, using just 20-30% of labels, performs nearly as well as fully-supervised centralized training methods.
## 1 Introduction
Federated Learning (FL) enables decentralized training across distributed data sources, preserving data privacy [27]. It has emerged in response to the need for privacy, security, and regulatory compliance such as GDPR [36] and CCPA [31]. FL trains models on local devices and shares only model updates, thereby improving privacy and efficiency. In a typical FL cycle, each client updates a shared model with local data, sends the updates to a server for parameter aggregation, and then updates its local model with the newly aggregated global model sent back by the server.
Despite the potential of FL, the assumption of fully labeled data restricts its practicality [12, 11]. In order to acquire high-quality labels, data is often transmitted from edge clients to a central server, thereby compromising the privacy assurances provided by FL. Limited labels at the edge necessitate the adoption of transfer learning, self-supervised learning, and semi-supervised learning (SSL) techniques. However, the separation of labeled and unlabeled data complicates the application of these techniques to FL, which can undermine the system's effectiveness. This issue is amplified in labels-at-server scenarios where only the server possesses labeled data, and clients hold only unlabeled data [5, 10, 43, 2, 18, 42]. In autonomous driving, a novel approach is required to bridge the knowledge gap between labeled and unlabeled data without the need for direct data exchange.
While Semi-Supervised Federated Learning (SSFL) has been explored for image classification tasks,[5, 10, 43, 2, 18, 42], these studies have faced the following challenges:1. Limited scale and complexity of tasks with datasets such as CIFAR and ImageNet, while semi-supervised federated object detection (SSFOD) presents sizably greater difficulties.
2. Non-IID data shift from labeled to unlabeled data. Our investigation stands apart in tackling the most challenging FL situations where clients hold exclusively unlabeled data from a different distribution from labeled server data. This acknowledges inherent heterogeneity of real-world FL settings, such as diverse weather conditions across clients. For instance, one client's dataset may predominantly consist of images captured under cloudy conditions, while others may include images from overcast, rainy, snowy, etc. conditions.
To surmount these inadequately addressed challenges of SSFOD, we introduce FedSTO (Federated Selective Training followed by Orthogonally enhanced training), a two-stage training strategy tailored specifically for our SSFOD framework (Figure 1). Our key contributions include:
* **Selective Training and Orthogonal Enhancement:** FedSTO begins with selective training of the model's backbone while other components remain frozen, fostering more consistent representations and establishing a robust backbone. This promotes generalization across non-IID clients, even in the absence of local labels. The subsequent stage involves fine-tuning all parameters with orthogonal regularizations applied to the non-backbone part of the model. This enhancement step is designed to imbue the predictors with resilience against skewed representations induced by local data heterogeneity, thereby promoting representation divergence and robustness.
* **SSFL with a Personalized EMA-Driven Semi-Efficient Teacher:** To prevent deterioration of teacher pseudo labeling models for non-IID unlabeled clients, we showcase for the first time an SSFOD framework that applies an alternate training methodology [5], integrated with a Semi-Efficient Teacher [38], driven by a local Exponential Moving Average (EMA). Our empirical observations suggest that this personalized EMA-driven model provides superior quality pseudo labels for detection, contrary to the commonly used global model
Figure 1: An overview of our FedSTO method within the SSFOD framework with key components: selective training, orthogonal enhancement, and local Exponential Moving Average (EMA)-driven pseudo label assignment, organized into two stages. Algorithm steps are numbered accordingly.
for pseudo labeling in related studies [5]. This approach further enhances the quality of the learning process, mitigating potential pitfalls of noisy pseudo labeling.
* **Performance Improvements:** FedSTO achieves 0.082 and 0.035 higher [email protected] when compared to partially supervised and SSFL baselines respectively, nearly matching the fully supervised model's performance (0.012 gap) on BDD100K [41] with a mere 25% of labeled data. We demonstrate similar considerable improvements in model generalization (Figure 2) on rigorous benchmark and ablation experiments with 20k-120k datapoints from Cityscapes [4] and SODA10M [9], utilizing the YOLOv5 object detector [13].
Our above contributions present a pioneering approach for utilizing unlabeled data in FL to enhance non-IID detection performance, especially for dynamic objects--an aspect not yet considered in previous research. Despite the paucity of research on SSFOD, we contend that our methods and experiments offer a valuable benchmark for future investigations across diverse domains.
## 2 Related Works
### Federated Learning (FL): Challenges and Advances
FL has gained significant attention in recent years as a privacy-preserving approach to harness the potential of distributed data [27; 20; 21; 15; 28; 33]. Despite the progress made in FL, most research has focused primarily on classification tasks, which may limit its applicability and generalizability to a broader range of real-world problems. Advanced FL techniques are essential to revolutionize various fields, including autonomous driving, healthcare, and finance, by enabling collaborative learning from distributed data sources [6; 30].
Addressing data heterogeneity is of paramount importance in FL, as clients frequently hold data with diverse distributions, which may impact the performance of the global model. To tackle this challenge, researchers have proposed various techniques to handle non-IID data, including adaptive aggregation algorithms and local fine-tuning of models [20; 33]. Personalization constitutes another vital aspect of FL, since clients may exhibit unique requirements or preferences not entirely captured by the global model [3; 19; 39]. Methods such as model distillation [23] and meta-learning [7] have been investigated to facilitate client-specific model adaptation and personalization. Finally, communication efficiency is crucial in FL, as exchanging model updates can be resource-intensive. To alleviate this issue, researchers have introduced strategies like quantization [34], sparsification [29], and the utilization of a supernet containing multiple subnetworks, with only the selected subnetwork transmitted to the server to reduce communication overhead while preserving model performance [17].
Figure 2: Performance comparison on BDD100K dataset [41]. “Partially Supervised Training” shows lower-bound performance using partial labels in a centralized setting. “Vanilla Semi-Supervised Federated Learning” and “Our FedSTO” demonstrate improved performance with non-IID federated data. FedSTO approaches the “Fully Supervised Training” upper-bound performance under full label use in a centralized setting. The x-axis shows the number of labeled examples, and the y-axis displays the mean average precision ([email protected]) on the test set.
### Semi-Supervised Object Detection (SSOD) with YOLO Object Detector
SSOD has been an active research area, focusing on improving the quality of pseudo labels to enhance overall detection performance [35, 40]. The evolution of SSL techniques in object detection primarily revolves around using pretrained architectures and applying strong data augmentation strategies to generate consistent and reliable pseudo labels.
Traditional single-stage detectors, such as the family of YOLO detectors, have faced notable challenges in leveraging SSL techniques, often underperforming compared to their two-stage counterparts (e.g., Faster RCNN). The limited efficacy of existing SSL methods for single-stage detectors has inspired researchers to develop innovative solutions to overcome these limitations [44]. Recently, a novel pipeline incorporating EMA of model weights has exhibited remarkable enhancements in the performance of single-stage detectors like YOLO detectors [38]. By utilizing the EMA model for pseudo labeling, researchers have adeptly addressed the inherent weaknesses in single-stage detectors, substantially elevating their performance in SSL contexts for object detection tasks.
### Semi-Supervised Federated Learning (SSFL)
SSFL has emerged as a promising approach to address the challenge of limited labeled data in FL scenarios [5, 10, 43, 2, 18, 42]. SSFL aims to jointly use both labeled and unlabeled data owned by participants to improve FL. Two primary settings have been explored: Labels-at-Client and Labels-at-Server [10, 11]. In the Labels-at-Client scenario, clients possess labeled data, while the server only has access to unlabeled data. Conversely, in the Labels-at-Server scenario, the server holds labeled data, and clients have only unlabeled data. Despite the progress in SSFL, there remain limitations in the current research landscape. The majority of existing SSFL research predominantly focuses on image classification tasks, leaving other applications relatively unaddressed. In this study, we address these limitations by tackling the more realistic and challenging scenarios with edge clients having (1) no labels and (2) non-IID data (domain shift from the server labeled data), specifically in the context of object detection tasks.
## 3 Problem Statement
SSFODWe tackle a semi-supervised object detection task involving a labeled dataset \(\mathcal{S}=\{\mathbf{x}_{i}^{s},\mathbf{y}_{i}^{s}\}_{i=1}^{N_{S}}\) and an unlabeled dataset \(\mathcal{U}=\{x_{i}^{u}\}_{i=1}^{N_{U}}\), focusing on scenarios where \(N_{S}\ll N_{U}\). In our SSFOD setup, as illustrated in Figure 1, we assume \(M\) clients each possessing an unsupervised dataset \(x^{u,m}\). The server retains the labeled dataset \(\{x^{s},\mathbf{y}^{s}\}\) and a model parameterized by \(W^{s}\). Each client model, parameterized by \(W^{u,m}\), shares the object detection architecture denoted by \(f:(\mathbf{x},W)\mapsto f(\mathbf{x},W)\). We assume that all models share the same object detection architecture, denoted by \(f:(\mathbf{x},W)\mapsto f(\mathbf{x},W)\), which maps an input \(\mathbf{x}\) and parameters \(W\) to a set of bounding boxes and their corresponding class probabilities on the \(K\)-dimensional simplex (e.g., using the sigmoid function applied to model outputs unit-wise).
Data HeterogeneityOur study addresses non-IID data resulting from varying weather conditions such as cloudy, overcast, rainy, and snow, inspired by feature distribution skew or covariate shift [14]. We utilize three datasets, BDD100K [41], CityScapes [4], and SODA10M [9], each displaying class distribution heterogeneity and label density heterogeneity. Our aim is an SSFOD framework that can manage this heterogeneity, maintaining performance across diverse conditions and distributions. Data is considered IID when each client exhibits a balanced weather condition distribution.
EvaluationIn our framework, we assess the performance of all detection tasks based on mean average precision ([email protected]), a standard metric in object detection literature that provides a comprehensive view of model performance across various object classes and sizes. Importantly, we evaluate the post-training performance of our method by assessing the personalized models of the server and client on their respective datasets. This approach ensures a fair and context-specific evaluation, reflecting the true performance of the personalized models in their intended environments.
Baseline TrainingOur work explores two principal baselines: "Centralized Training" and "Federated Learning". Depending on the degree of labeled data utilization, we categorize the training into "Partially Supervised" and "Fully Supervised". An ideal situation is one where a fully supervised model is trained in a centralized fashion, utilizing all labeled data. In contrast, a more challenging scenario involves a partially supervised model trained solely on the server's limited labeled data. Under our problem setup, we initially establish a baseline by performing partial supervision on theserver's limited labeled data, which serves as a pretraining step. Following this, each client conducts "Unsupervised learning with unlabeled data". Upon completion, clients transmit their model weights to the server. The server then aggregates these weights and fine-tunes the amalgamated model using its labeled data. The updated model is subsequently disseminated back to the clients, culminating one learning round. This cyclical process, known as alternate training in Diao et al. [5], continues effectively. It merges the strength of supervised and unsupervised learning to capitalize on unlabeled data while preventing model deterioration, thereby optimizing model performance.
Personalized Pseudo Labeling for Unlabeled ClientsA crucial obstacle in SSFOD lies in precise pseudo label assignment, as improper allotments can result in label inconsistencies, thus negatively impacting mutual learning performance. Building upon the foundation by Xu et al. [38] in centralized settings, we present the first extension of this approach to federated settings, leveraging a personalized Pseudo Label Assigner (PLA) equipped with local EMA models. This technique bifurcates pseudo labels into reliable and unreliable categories using high and low thresholds, thus ensuring a robust and precise learning mechanism in federated environments. In FL, the PLA can be applied to both global and local models. However, the global model may fall short in capturing unique features of local data, compromising pseudo label quality. As demonstrated in our evaluation (Table 1), locally updated EMA models outperform global models. While it is feasible to federate the local EMA model, it introduces certain trade-offs, such as increased communication costs and minor performance degradation compared to the local EMA model. Our SSFOD framework, therefore, incorporates a local PLA with a local EMA model, optimally balancing communication efficiency and model stability, ensuring an effective learning process for SSOD tasks in distributed environments.
SSFOD with YOLOWe utilize the YOLOv5 model, a single-stage object detector, in our evaluation. Existing literature shows a scarcity of research on SSFOD within FL like FedAvg [27], particularly for single-stage detectors like YOLO. Figure 3 compares various learning approaches in centralized and federated settings, denoted by green dotted and blue hatched boxes, respectively. We highlight non-IID scenarios with labeled (cloudy) and unlabeled data (overcast, rainy, snowy). In the CL scenario, fully supervised methods noticeably surpass partially supervised ones, and SSL approaches almost match the performance of fully supervised methods. However, baseline training for FL falls substantially short of these high standards, particularly with unlabeled data.
\begin{table}
\begin{tabular}{c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Non-IID} & \multicolumn{4}{c}{IID} \\ \cline{3-11} & & Cloudy & Overcast & Rainy & Snowy & Cloudy & Overcast & Rainy & Snowy \\ \hline \multirow{2}{*}{Centralized} & Fully Supervised & 0.600 & 0.604 & 0.617 & 0.597 & 0.600 & 0.604 & 0.617 & 0.597 \\ & Partially Supervised & 0.540 & 0.545 & 0.484 & 0.474 & 0.528 & 0.545 & 0.533 & 0.510 \\ \hline \multirow{2}{*}{Federated} & Global Model [5] & 0.555 & 0.560 & 0.497 & 0.488 & 0.540 & 0.551 & 0.576 & 0.542 \\ & Local EMA Model [38] & 0.560 & 0.566 & 0.553 & 0.553 & 0.572 & 0.588 & 0.593 & 0.610 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance under different weather conditions for non-IID and IID data splits with 1 server and 3 clients. The results are presented for centralized (Fully Supervised and Partially Supervised) and federated approaches with a pseudo label assigner (Global Model and Local EMA Model).
Figure 3: Performance of various methods on the BDD100K dataset [41], with the server containing labeled data for the “Cloudy” category and 3 clients having unlabeled data for “Rainy”, “Snowy”, and “Overcast” categories. Baseline SSFL (red hatched boxes) struggles in comparison to centralized learning (bars in green dotted boxes). “Fully Supervised” and “Partially Supervised” refer to training a centralized model with the complete labeled dataset and only the “Cloudy” labeled data, respectively.
## 4 Main Method: FedSTO
To mitigate these inherent hurdles presented by FL, we introduce FedSTO, a method that unfolds in two stages, preceded by a warmup stage. The process commences with an emphasis on robust representation learning for pretraining (Subsection 4.1), followed by full parameter training (Subsection 4.2). The initial stage of pretraining integrates a warm-up period utilizing labeled data at the server, transitioning into selective training. This groundwork is fortified by the orthogonal enhancement implemented in the subsequent full parameter training phase.
### Selective Training (ST)
Selective Training (ST) is designed to address the primary challenge of establishing a robust backbone for object detector in FL. The approach unfolds as follows:
1. **Labeled dataset training**: All model parameters are updated using a labeled dataset. This step ensures training commences on high quality labeled data, mitigating the potential destabilizing effect of noisy, unlabeled data, and heterogeneity from diverse weather conditions.
2. **Client-side training with unlabeled dataset**: The model, updated in the previous step, is dispatched to the clients. Each client trains the model on their local unlabeled dataset. However, during this phase, only the backbone part of the model is updated, leaving other components frozen. This selective updating procedure fosters more consistent representations by sharing the same non-backbone part (e.g., neck and head), and thus enhances its potential generalization capabilities by concentrating on feature extraction.
3. **Server-side aggregation**: The server aggregates the updated backbone parameters from clients, effectively synthesizing the learned information from diverse unlabeled datasets. The aggregated backbone is then utilized in the first step for further training, repeating the process until performance convergence.
By adhering to this procedure, ST effectively navigates the challenges inherent in the progression of FL while simultaneously accruing substantial benefits. Ensuring stability in semi-supervised object detection tasks is paramount. The exposure to heterogeneous unlabeled data, potentially characterized by noise or variable quality, can induce biases into the neck and head components of the model, thereby risking performance degradation by inadvertently generating low-quality or imprecise pseudo annotations. To mitigate this, ST employs a selective update strategy specifically targeting the backbone of the model, which is predominantly entrusted with the task of extracting salient features from the input data. By concentrating on the backbone during training, ST aids in the preservation of model stability and the enhancement of its generalization capabilities. Furthermore, in this stage, the communication cost between the server and clients is significantly reduced by uploading only the backbone part from clients to a server. Consequently, it significantly minimizes the deleterious impacts of heterogeneous unlabeled data on overall model performance (Table 2). While ST brings marginal improvements in IID conditions, it presents potent effects under Non-IID circumstances, emphasizing its efficacy in handling heterogeneous data distributions.
### Full Parameter Training (FPT) with Orthogonal Enhancement
Inspired by the critical need for personalized models to exhibit robustness against feature distribution skewness--predominantly due to diverse weather conditions--we integrate orthogonality regularization presented by Kim et al. [16] which penalizes the symmetric version of the spectral restricted isometry property regularization \(\sum_{\theta}\sigma(\theta^{T}\theta-I)+\sigma(\theta\theta^{T}-I)\) within the SSFOD framework where \(\sigma(\cdot)\) calculates the spectral norm of the input matrix and \(\theta\) is a weight matrix from non-backbone parts. This regularization is applied during both server and client training stages and targets non-backbone
\begin{table}
\begin{tabular}{c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Non-IID} & \multicolumn{4}{c}{IID} \\ \cline{2-11} & Cloudy & Overcast & Rainy & Snowy & Total & Cloudy & Overcast & Rainy & Snowy & Total \\ \hline Partially Supervised & 0.540 & 0.545 & 0.484 & 0.474 & 0.511 & 0.528 & 0.545 & 0.533 & 0.510 & 0.529 \\ \hline + SSFL [5] with Local EMA Model & 0.560 & 0.566 & 0.553 & 0.553 & 0.558 & 0.572 & 0.588 & 0.593 & **0.610** & 0.591 \\ + Selective Training & 0.571 & 0.583 & 0.557 & 0.556 & 0.567 & 0.576 & 0.578 & 0.594 & 0.599 & 0.587 \\ \#FPT with Orthogonal Enhancement [16] & **0.596** & **0.607** & **0.590** & **0.580** & **0.593** & **0.591** & **0.634** & **0.614** & 0.595 & **0.609** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance on the BDD dataset with 1 labeled server and 3 unlabeled clients as each element of our FedSTO approach within the SSFOD framework is added. It highlights how each added method contributes to the overall performance under both Non-IID and IID conditions.
components of the architecture. Our approach promotes generation of diverse, non-redundant, and domain-invariant feature representations, thereby enhancing the model's robustness, reducing noise influence, and significantly augmenting its ability to handle unlabeled data across varied domains.
Incorporating orthogonality regularization into our framework substantially amplifies the divergence in the embedding space, enhancing the model's overall detection quality and the reliability of pseudo labels. Importantly, our strategy of embedding orthogonality into the non-backbone parts of the model, such as the neck and head, fosters a more balanced and comprehensive training process. This reduces the bias towards specific weather conditions and the heterogeneity of object categories, leading to improved performance as demonstrated in Table 2. Our approach draws upon successful techniques from fine-tuning [8, 26], and transfer learning, and is particularly inspired by meta-learning concepts,[37, 32]. In particular, the tendency of the non-backbone components of the model to develop biases prompts us to introduce an orthogonal property to this section. This measure helps counteract these biases, thereby further enhancing the model's robustness and adaptability when confronted with diverse, unlabeled data across multiple domains.
### Main Algorithm: FedSTO
```
input : server model parameterized by \(W_{s}\), the number of rounds for each phase \(T_{1},T_{2}\), client models parameterized by \(\{W_{u,1},...,W_{u,M}\}\), client backbone part parameterized by \(\{B_{u,1},...,B_{u,M}\}\)
1:\(W_{s}\leftarrow\textsc{WarmUp}(x_{s},y_{s},W_{s})\)// Supervised training at server /* Phase 1: Selective Training for Pretraining */
2:for\(t\gets 0,\dots,T_{1}-1\)do
3:\(S^{t}\leftarrow\textsc{SampleClients}\)
4:for each client \(k\in S^{t}\) in parallel do
5:\(W_{u,k}\leftarrow\textsc{Client-BackboneUpdate}(x_{u,k},B_{u,k})\)// Client-Update
6:endfor
7:\(W_{s}\leftarrow\sum_{k\in S^{t}}p_{k}W_{u,k}\)// Aggregation
8:\(W_{s}\leftarrow\textsc{Server-Update}(x_{s},y_{s},W_{s})\)// Server-Update
9:endfor /* Phase 2: Full Parameter Training with Orthogonal Enhancement */
10:for\(t\gets 0,\dots,T_{2}-1\)do
11:\(S^{t}\leftarrow\textsc{SampleClients}\)
12:for each client \(k\in S^{t}\) in parallel do
13:\(W_{u,k}\leftarrow\textsc{Client-OrthogonalUpdate}(x_{u,k},W_{u,k})\)// Client-OrthogonalUpdate
14:endfor
15:\(W_{s}\leftarrow\sum_{k\in S^{t}}p_{k}W_{u,k}\)// Aggregation
16:\(W_{s}\leftarrow\textsc{Server-OrthogonalUpdate}(x_{s},y_{s},W_{s})\)// Server-OrthogonalUpdate
17:endfor
```
**Algorithm 1**FedSTO Algorithm within the SSFOD Framework
Algorithm 1 illustrates the overall procedure of FedSTO within the SSFOD framework. The server model, parameterized by \(W_{s}\), is first trained in a supervised fashion during the warm-up phase (Line 1). The algorithm then transitions to Phase 1: Selective Training for Pretraining. This phase involves multiple training iterations (Line 3), where in each iteration, a subset of clients is sampled (Line 4). The backbone part of each client's model, \(W_{u,k}\), is updated using their local unlabeled datasets (Line 6). The updated parameters are then aggregated at the server (Line 8), and the server model is updated using its labeled dataset (Line 9). In Phase 2: Full Parameter Training with Orthogonal Enhancement, the Client-OrthogonalUpdate and Server-OrthogonalUpdate methods are employed (Lines 14 and 18), introducing orthogonality regularization to the training process. This second phase debiases the non-backbone parts of the model, ensuring they have a robust predictor across various weather conditions that effectively counterbalance the inherent data heterogeneity.
## 5 Experiment
### Experimental Setup
#### 5.1.1 Datasets
Bdd100K [41]We utilize the BDD100K dataset, which consists of 100,000 driving videos recorded across diverse U.S. locations and under various weather conditions, to evaluate our method. Eachvideo, approximately 40 seconds in duration, is recorded at 720p and 30 fps, with GPS/IMU data available for driving trajectories. For our experiments, we specifically select 20,000 data points, distributed across four distinct weather conditions--cloudy, rainy, overcast, and snowy. In this study, we primarily focus on five object categories: person, car, bus, truck, and traffic sign. The dataset is partitioned into clients based on these weather conditions, simulating data-heterogeneous clients. This experimental setup enables us to investigate the influence of data heterogeneity on our framework and to evaluate its robustness under realistic conditions.
Cityscape [4]We conduct additional experiments using the Cityscapes dataset, which consists of urban street scenes from 50 different cities. Given that this dataset does not provide precise weather information for each annotation, we distribute the data to clients in a uniformly random manner. For our studies, we employ the package, encompassing fine annotations for 3,475 images in the training and validation sets, and dummy annotations for the test set with 1,525 images. We also include the other package, providing an additional 19,998 8-bit images for training.
SODA10M [9]To evaluate our approach under diverse conditions, we employ the SODA10M dataset, which features varied geographies, weather conditions, and object categories. In an IID setup, 20,000 labeled data points are uniformly distributed among one server and three clients. For a more realistic setup, the 20,000 labeled data points are kept on the server while 100,000 unlabeled data points are distributed across the clients. This arrangement enables performance evaluation under distinct weather conditions--clear, overcast, and rainy--showcasing resilience and robustness.
#### 5.1.2 Training Details
We conduct our experiments in an environment with one server and multiple clients, depending on the experiment. Both the server and the clients operate on a single local epoch per round. Our training regimen spans 300 rounds: 50 rounds of warm-up, 100 rounds of pretraining (\(T_{1}\)), and 150 rounds of orthogonal enhancement (\(T_{2}\)). We use the YOLOv5 Large model architecture with Mosaic, left-right flip, large scale jittering, graying, Gaussian blur, cutout, and color space conversion augmentations. A constant learning rate of 0.01 was maintained. Binary sigmoid functions determined objectiveness and class probability with a balance ratio of 0.3 for class, 0.7 for object, and an anchor threshold of 4.0. The ignore threshold ranged from 0.1 to 0.6, with an Non-Maximum Suppression (NMS) confidence threshold of 0.1 and an IoU threshold of 0.65. We incorporate an exponential moving average (EMA) rate of 0.999 for stable model parameter representation.
### Results
Table 3 illustrates the efficacy of our proposed SSFOD method against various baselines and state-of-the-art approaches on the BDD100K dataset. FedSTO significantly outperforms other techniques under different weather conditions and data distribution scenarios, i.e., IID and Non-IID. In the CL scenarios, the fully supervised approach yields the highest performance, with SSL methods, such as EMA Teacher [38], demonstrating competitive results. However, the real challenge lies in federated settings, where data privacy and distribution shift become critical considerations. In the SSFOD framework, our FedSTO method consistently surpasses other SSFL techniques. Notably, it achieves superior results even in challenging Non-IID settings, demonstrating its robustness to data distribution shifts. Similar trends hold when increasing the number of clients as shown in the appendix. In IID
\begin{table}
\begin{tabular}{c c c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Non-IID} & \multicolumn{4}{c}{IID} \\ \cline{4-14} & & & Cloudy & Overcast & Rainy & Snowy & Total & Cloudy & Overcast & Rainy & Snowy & Total \\ \hline \multirow{4}{*}{Centralized} & \multirow{2}{*}{SL} & Fully Supervised & 0.600 & 0.604 & 0.617 & 0.597 & 0.605 & 0.600 & 0.604 & 0.617 & 0.597 & 0.605 \\ & & Partially Supervised & 0.540 & 0.545 & 0.484 & 0.474 & 0.511 & 0.528 & 0.545 & 0.533 & 0.510 & 0.529 \\ \cline{2-14} & \multirow{2}{*}{SSL} & Unbiased Teacher [25] & 0.551 & 0.550 & 0.502 & 0.503 & 0.527 & 0.546 & 0.557 & 0.541 & 0.533 & 0.544 \\ & & EMA Teacher [38] & 0.598 & 0.598 & 0.568 & 0.588 & 0.581 & 0.586 & 0.570 & 0.571 & 0.573 & 0.575 \\ \hline \multirow{4}{*}{Federated} & \multirow{2}{*}{SFL} & Fully Supervised & 0.627 & 0.614 & 0.607 & 0.585 & 0.608 & 0.635 & 0.612 & 0.608 & 0.595 & 0.613 \\ \cline{2-14} & & FedAvg [27] & 0.560 & 0.566 & 0.553 & 0.553 & 0.558 & 0.572 & 0.588 & 0.593 & **0.610** & 0.591 \\ \cline{2-14} & & FedDrn[11] & 0.508 & 0.569 & 0.541 & 0.522 & 0.535 & 0.355 & 0.414 & 0.420 & 0.397 & 0.400 \\ \cline{2-14} & \multirow{2}{*}{SSFL\({}^{1}\)} & FedOpt[33] & 0.561 & 0.572 & 0.565 & 0.566 & 0.566 & 0.591 & 0.587 & 0.588 & 0.577 & 0.586 \\ \cline{1-1} & & FedPAC[39] & 0.514 & 0.532 & 0.496 & 0.489 & 0.508 & 0.510 & 0.549 & 0.547 & 0.554 & 0.540 \\ \cline{1-1} \cline{2-14} & & **FedSTO** & **0.596** & **0.607** & **0.590** & **0.580** & **0.593** & **0.591** & **0.634** & **0.614** & 0.595 & **0.609** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of FedSTO within the SSFOD framework against the Baselines, SSL, SSFL methods with 1 server and 3 clients on BDD100K dataset [41]. FedSTO exhibits improvements under various weather conditions on both IID and Non IID cases, and performs close to the centralized fully supervised case. \(\dagger\) denotes the SSFL with the local EMA model as a pseudo label generator.
conditions, our method continues to excel, achieving results close to fully supervised centralized approach. These findings highlight the strength of our FedSTO method in leveraging the benefits of FL while mitigating its challenges. The robust performance of our approach across various weather conditions and data distributions underscores its potential for real-world deployment.
When examining the performance on the Cityscapes dataset under uniformly random distributed conditions, the superiority of FedSTO within the SSFOD framework also remains apparent, as shown in Table 4. Compared to other methods, FedSTO consistently demonstrates improved generalization across most object categories, both for labeled and unlabeled data. Intriguingly, the performance of FedSTO surpasses even that of SSL in CL environments.
Evaluation with [email protected] [email protected] results on the BDD dataset highlight the efficacy of the FedSTO approach (Table 5). In Non-IID settings, while the Fully Supervised centralized method achieve an average mAP of 0.357, FedSTO recorded 0.338, exhibiting comparable performance. However, under IID conditions, FedSTO registers an [email protected] of 0.357, closely matching the SFL result of 0.359. These results indicate that FedSTO offers competitive object detection capabilities, even with stricter IoU thresholds.
Results on Real World Dataset, SODA10m [9]Figure 3(a) illustrates the performance of our method and other baselines on the SODA10m dataset, where labeled data is synthetically divided in an IID manner across one server and three clients. Our method demonstrates near-parity with the fully supervised approach, evidencing its efficacy. Figure 3(b) represents the averaged performance across varying weather conditions on the SODA10m dataset. Here, all 20k labeled data resides on the server, and 100k unlabeled data points from SODA10m are distributed across three clients. Despite these variations in conditions, our method consistently outperforms other baselines, confirming its robustness and applicability in diverse environments.
\begin{table}
\begin{tabular}{c c c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{Method} & \multicolumn{6}{c|}{Labeled} & \multicolumn{6}{c}{Unlabeled} \\ \cline{4-14} & & & & \multicolumn{6}{c|}{Categories} & & \\ \cline{4-14} & & Person & Car & Bus & Truck & Traffic Sign & Person & Car & Bus & Truck & Traffic Sign \\ \hline \multirow{4}{*}{Centralized} & SL & Fully Supervised & 0.569 & 0.778 & 0.530 & 0.307 & 0.500 & 0.560 & 0.788 & 0.571 & 0.283 & 0.510 \\ & & Partially Supervised & 0.380 & 0.683 & 0.193 & 0.302 & 0.246 & 0.358 & 0.648 & 0.343 & 0.138 & 0.255 \\ \cline{2-14} & SSL & Unbiased Teacher [25] & 0.391 & 0.695 & 0.225 & 0.320 & 0.297 & 0.410 & 0.689 & 0.373 & 0.129 & 0.354 \\ & & EAM Teacher [25] & 0.475 & 0.711 & 0.354 & 0.347 & 0.379 & 0.460 & 0.727 & 0.436 & 0.144 & 0.378 \\ \hline \multirow{2}{*}{Federated} & SPL & Fully Supervised & 0.498 & 0.715 & 0.357 & 0.289 & 0.410 & 0.492 & 0.714 & 0.451 & 0.251 & 0.425 \\ \cline{2-14} & & FedAvg [27] & 0.450 & 0.697 & 0.310 & **0.304** & 0.356 & 0.482 & 0.725 & 0.425 & **0.247** & 0.397 \\ \cline{2-14} & SSFL\({}^{\dagger}\) & FedBN [22] & 0.488 & 0.709 & 0.325 & 0.285 & 0.411 & 0.375 & 0.618 & 0.046 & 0.031 & 0.286 \\ \cline{2-14} & & **FedSTO** & **0.504** & **0.720** & **0.342** & 0.261 & **0.415** & **0.487** & **0.740** & **0.460** & 0.181 & **0.437** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance under random distributed cases of Cityscapes [4]. FedSTO exhibits improvements under various object categories, and significantly outperforms the performance for unlabeled clients. \(\dagger\) denotes the SSFL with the local EMA model as a local pseudo label generator.
Figure 4: (a) Performance of various methods on the SODA10m dataset in an IID setting, (b) Average performance across different weather conditions using unlabeled data from the SODA10m dataset.
Varying Number of ClientsIn a non-IID BDD dataset configuration with 1 server and 20 clients, our proposal advances beyond competing methods, scoring 0.455 and 0.458 on labeled and unlabeled data, respectively. This outcome showcases our method's aptitude for tackling intricate real-world circumstances.
Varying Sampling RatioTable 7 demonstrates the impact of different client sampling ratios on the FedSTO performance using the BDD100k dataset. Notably, even at a lower sampling ratio of 0.1, FedSTO yields commendable results, especially in the unlabeled set for categories like 'Car' (0.738) and 'Bus' (0.573). This underscores that a reduced client sampling can still lead to significant performance improvements, emphasizing the efficiency and adaptability of the FL approach.
Efficiency on Network BandwidthTable 8 highlights the communication costs over 350 rounds of training involving 100 clients with a 0.5 client sampling ratio per round. By removing the neck component of the Yolov5L model, its size is reduced from 181.7MB to 107.13MB. This reduction significantly benefits FedSTO in Phase 1, leading to overall bandwidth savings. When comparing with traditional SSFL methods such as FedAvg and FedProx,[20], FedSTO utilizes only **2,166.23 GB** - a substantial **20.52%** reduction in network bandwidth.
## 6 Conclusion
This paper introduces a novel Semi-Supervised Federated Object Detection (SSFOD) framework, featuring a distinctive two-stage training strategy known as FedSTO. Designed to address the challenges of heterogeneous unlabeled data in federated learning, FedSTO employs selective training and orthogonality regularization with personalized pseudo labeling. These mechanisms facilitate robust and diverse feature learning, thereby enhancing object detection performance across multiple weather conditions and data distributions. Empirical results provide compelling evidence of the superiority of FedSTO over established federated and semi-supervised learning methodologies. Notably, despite operating with the challenging constraint where non-IID clients have no labels, FedSTO successfully counteracts domain shift and achieves performance that is comparable to fully supervised centralized models. This accomplishment constitutes significant strides toward realizing more efficient and privacy-preserving learning in realistic FL settings. As we venture ahead, we aim to concentrate our research efforts on refining FedSTO and exploring additional strategies for leveraging unlabeled data with various domains and model architectures. We anticipate the work presented in this paper will stimulate continued progress in this rapidly evolving field.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Labeled} & \multicolumn{4}{c}{Unlabeled} \\ \cline{2-10} & \multicolumn{4}{c|}{} & \multicolumn{4}{c|}{Categories} \\ \cline{2-10} & Person & Car & Bus & Truck & Traffic Sign & Person & Car & Bus & Truck & Traffic Sign \\ \hline Server Only (i.e., client sampling ratio 0.0) & 0.378 & 0.710 & 0.141 & 0.425 & 0.490 & 0.337 & 0.707 & 0.160 & 0.338 & 0.491 \\ FedSTO with client sampling ratio 0.1 & 0.393 & 0.714 & 0.442 & 0.510 & 0.540 & 0.487 & **0.738** & **0.573** & **0.589** & **0.617** \\ FedSTO with client sampling ratio 0.2 & **0.458** & **0.747** & **0.476** & **0.521** & **0.571** & 0.440 & 0.731 & 0.378 & 0.525 & 0.573 \\ FedSTO with client sampling ratio 0.5 & 0.444 & 0.745 & 0.437 & 0.502 & 0.550 & **0.489** & 0.730 & 0.438 & 0.512 & 0.538 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance ([email protected]) under Non-IID scenarios of BDD100k dataset with 1 server and 100 clients according to the changes of client sampling ratio for implementing FedSTO. The term ‘Server Only’ aligns with the notion of ‘partially supervised’ in CL settings.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & Warm-up (50 rounds) & Warm-up (50 rounds) & Phase 1 (150 rounds) & Phase 2 (150 rounds) & Total & Reduction \\ \hline FedAvg & 0 & & 100 * 0.50 * 150 * 181.7 & 1,362.75 GB & 100 * 0.50 * 150 * 181.7 & 1,362.75 GB & 2,725.50 GB & - \\ \hline FedBN & 0 & 100 * 0.50 * 150 * 181.24 & 1359.30 GB & 100 * 0.50 * 150 * 181.24 & 1359.30 GB & 2,718.60 GB & 0.25 \% \\ FedSTO & 0 & 100 * 0.50 * 150 * 107.13 & 803.48 GB & 100 * 0.50 * 150 * 181.7 & 1,362.75 GB & **2,166.23 GB** & **20.52 \%** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Communication costs over 350 rounds of training with 100 clients when the client sampling ratio is 0.5 per each round. The total Yolov5L size is 181.7MB while the model without the neck part is 107.13MB. Additionally, the model size without BN layers (FedBN [22]) is 181.24 MB. Here, ‘Reduction’ expresses how much communication cost is reduced compared to using vanilla SSFL (FedAvg and FedProx [20]).
## References
* [1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. _arXiv preprint arXiv:2111.04263_, 2021.
* [2] Jieming Bian, Zhu Fu, and Jie Xu. Fedseal: Semi-supervised federated learning with self-ensemble learning and negative learning. _arXiv preprint arXiv:2110.07829_, 2021.
* [3] Hong-You Chen and Wei-Lun Chao. On bridging generic and personalized federated learning for image classification. In _International Conference on Learning Representations_, 2022.
* [4] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3213-3223, 2016.
* [5] Enmao Diao, Jie Ding, and Vahid Tarokh. Semifl: Semi-supervised federated learning for unlabeled clients with alternate training. _Advances in Neural Information Processing Systems_, 35:17871-17884, 2022.
* [6] Zhaoyang Du, Celimuge Wu, Tsutomu Yoshinaga, Kok-Lim Alvin Yau, Yusheng Ji, and Jie Li. Federated learning for vehicular internet of things: Recent advances and open issues. _IEEE Open Journal of the Computer Society_, 1:45-61, 2020.
* [7] Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning: A meta-learning approach. _arXiv preprint arXiv:2002.07948_, 2020.
* [8] Karim Guirguis, Ahmed Hendaway, George Eskandar, Mohamed Abdelsamad, Matthias Kayser, and Jurgen Beyerer. Cfa: Constraint-based finetuning approach for generalized few-shot object detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4039-4049, 2022.
* [9] Jianhua Han, Xiwen Liang, Hang Xu, Kai Chen, Lanqing Hong, Jiageng Mao, Chaoqiang Ye, Wei Zhang, Zhenguo Li, Xiaodan Liang, et al. Soda10m: a large-scale 2d self/semi-supervised object detection dataset for autonomous driving. _arXiv preprint arXiv:2106.11118_, 2021.
* [10] Wonyong Jeong, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang. Federated semi-supervised learning with inter-client consistency & disjoint learning. _arXiv preprint arXiv:2006.12097_, 2020.
* [11] Yilun Jin, Yang Liu, Kai Chen, and Qiang Yang. Federated learning without full labels: A survey. _arXiv preprint arXiv:2303.14453_, 2023.
* [12] Yilun Jin, Xiguang Wei, Yang Liu, and Qiang Yang. Towards utilizing unlabeled data in federated learning: A survey and prospective. _arXiv preprint arXiv:2002.11545_, 2020.
* YOLOv5 SOTA Realtime Instance Segmentation, November 2022.
* [14] Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurelien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. _Foundations and Trends(r) in Machine Learning_, 14(1-2):1-210, 2021.
* [15] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J Reddi, Sebastian U Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for on-device federated learning. _arXiv preprint arXiv:1910.06378_, 2019.
* [16] Taehyeon Kim and Se-Young Yun. Revisiting orthogonality regularization: a study for convolutional neural networks in image classification. _IEEE Access_, 10:69741-69749, 2022.
* [17] Taehyeon Kim and Se-Young Yun. Supernet training for federated image classification under system heterogeneity, 2022.
* [18] Woojung Kim, Keondo Park, Kihyuk Sohn, Raphael Shu, and Hyung-Sin Kim. Federated semi-supervised learning with prototypical networks. _arXiv preprint arXiv:2205.13921_, 2022.
* [19] Kavya Kopparapu, Eric Lin, and Jessica Zhao. Fedcd: Improving performance in non-iid federated learning. _arXiv preprint arXiv:2006.09637_, 2020.
* [20] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. _arXiv preprint arXiv:1812.06127_, 2018.
* [21] Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. _arXiv preprint arXiv:1907.02189_, 2019.
* [22] Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, and Qi Dou. Fedbn: Federated learning on non-iid features via local batch normalization. _arXiv preprint arXiv:2102.07623_, 2021.
* [23] Tao Lin, Lingjing Kong, Sebastian U Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. _arXiv preprint arXiv:2006.07242_, 2020.
* [24] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In _Proceedings of the IEEE international conference on computer vision_, pages 2980-2988, 2017.
* [25] Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, and Peter Vajda. Unbiased teacher for semi-supervised object detection. _arXiv preprint arXiv:2102.09480_, 2021.
* [26] Itzik Malkiel and Lior Wolf. Mml: Maximal multiverse learning for robust fine-tuning of language models. _arXiv preprint arXiv:1911.06182_, 2019.
* [27] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In _Artificial intelligence and statistics_, pages 1273-1282. PMLR, 2017.
* [28] Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh. Agnostic federated learning. _arXiv preprint arXiv:1902.00146_, 2019.
* [29] Vaikkunth Mugunthan, Eric Lin, Vignesh Gokul, Christian Lau, Lalana Kagal, and Steve Pieper. Fedltn: Federated learning for sparse and personalized lottery ticket networks. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XII_, pages 69-85. Springer, 2022.
* [30] Dinh C Nguyen, Quoc-Viet Pham, Pubudu N Pathirana, Ming Ding, Aruna Seneviratne, Zihuai Lin, Octavia Dobre, and Won-Joo Hwang. Federated learning for smart healthcare: A survey. _ACM Computing Surveys (CSUR)_, 55(3):1-37, 2022.
* [31] State of California Department of Justice. California consumer privacy act. [https://oag.ca.gov/privacy/ccpa](https://oag.ca.gov/privacy/ccpa), 2018.
* [32] Kanchana Ranasinghe, Muzammal Naseer, Munawar Hayat, Salman Khan, and Fahad Shahbaz Khan. Orthogonal projection loss. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 12333-12343, 2021.
* [33] Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konecny, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. _arXiv preprint arXiv:2003.00295_, 2020.
* [34] Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, and Ramtin Pedarsani. Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization. In _International Conference on Artificial Intelligence and Statistics_, pages 2021-2031. PMLR, 2020.
* [35] Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang, Chen-Yu Lee, and Tomas Pfister. A simple semi-supervised learning framework for object detection. _arXiv preprint arXiv:2005.04757_, 2020.
* [36] Paul Voigt and Axel Von dem Bussche. The eu general data protection regulation (gdpr). _A Practical Guide, 1st Ed., Cham: Springer International Publishing_, 10(3152676):10-5555, 2017.
* [37] Jingjing Wang, Jingyi Zhang, Ying Bian, Youyi Cai, Chunmao Wang, and Shiliang Pu. Self-domain adaptation for face anti-spoofing. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 2746-2754, 2021.
* [38] Bowen Xu, Mingtao Chen, Wenlong Guan, and Lulu Hu. Efficient teacher: Semi-supervised object detection for yolov5. _arXiv preprint arXiv:2302.07577_, 2023.
* [39] Jian Xu, Xinyi Tong, and Shao-Lun Huang. Personalized federated learning with feature alignment and classifier collaboration. In _The Eleventh International Conference on Learning Representations_, 2023.
* [40] Mengde Xu, Zheng Zhang, Han Hu, Jianfeng Wang, Lijuan Wang, Fangyun Wei, Xiang Bai, and Zicheng Liu. End-to-end semi-supervised object detection with soft teacher. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 3060-3069, 2021.
* [41] Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 2636-2645, 2020.
* [42] Zhengming Zhang, Yaoqing Yang, Zhewei Yao, Yujun Yan, Joseph E Gonzalez, Kannan Ramchandran, and Michael W Mahoney. Improving semi-supervised federated learning by reducing the gradient diversity of models. In _2021 IEEE International Conference on Big Data (Big Data)_, pages 1214-1225. IEEE, 2021.
* [43] Jessica Zhao, Sayan Ghosh, Akash Bharadwaj, and Chih-Yao Ma. When does the student surpass the teacher? federated semi-supervised learning with teacher-student ema. _arXiv preprint arXiv:2301.10114_, 2023.
* [44] Huayi Zhou, Fei Jiang, and Hongtao Lu. Ssda-yolo: Semi-supervised domain adaptive yolo for cross-domain object detection. _Computer Vision and Image Understanding_, 229:103649, 2023.
Overview of Appendix
In this supplementary material, we present additional details that are not included in the main paper due to the space limit.
## Appendix B Ethics Statement
In the pursuit of progress within the domain of Federated Learning (FL), we propose a novel method, FedSTO within the SSFOD. This innovation warrants an examination of its ethical ramifications, particularly in regards to privacy, fairness, environmental impact, and potential for misuse.
Privacy and Data SecurityAnalogous to established FL methodologies, SSFOD is architected to preserve the privacy of client data. By conducting computations locally and transmitting solely model updates, the risks associated with raw data transmission are mitigated. Nonetheless, the potential threat of adversarial actions, such as model inversion attacks, underscores the imperative for ongoing efforts to bolster the robustness of FL methods against such vulnerabilities.
FairnessThe deployment of FedSTO into SSFOD can either amplify or alleviate existing fairness concerns within FL, contingent on the particulars of its application. Should SSFOD predominantly utilize data stemming from specific demographic cohorts, the model predictions risk acquiring an inadvertent bias. In contrast, SSFOD's capacity to manage large-scale, real-world scenarios may engender a more diverse data inclusion, thereby fostering a more equitable model.
Environmental ImpactSimilar to its FL counterparts, our approach diminishes the requirement for centralized data storage and computation, thereby potentially reducing associated carbon footprints. However, the energy expenditures of localized computation and communication for model updates necessitate judicious management to uphold environmental sustainability.
Potential MisusesAlthough our approach is designed to enhance the robustness of FL methodologies in large-scale, real-world settings, the potential for misuse remains. For instance, malevolent entities could exploit FedSTO with SSFOD framework to deliberately introduce bias or disinformation into models. Consequently, the implementation of protective measures against such misuse is of paramount importance.
In summary, while our method represents a significant contribution to the FL field, its potential ethical implications mandate thoughtful application. We strongly endorse ongoing discourse and scrutiny to ensure that its deployment is in alignment with the principles of privacy, fairness, and social responsibility.
## Appendix C Limitations
While our SSFOD method offers significant advancements in Semi-Supervised Federated Learning for Object Detection, it is important to recognize its accompanying limitations:
1. **Performance with Highly Imbalanced Data** Our SSFOD method exhibits robustness across a range of data heterogeneity. However, its performance in scenarios involving severe data imbalance across clients necessitates additional exploration. Imbalance here refers to disparities in data class distribution across clients. Instances of extreme skewness could lead to biased learning outcomes, with the model excelling in classes with abundant samples and faltering in those with fewer. Although SSFOD incorporates mechanisms to offset effects of data heterogeneity, it is not explicitly designed to manage extreme data imbalance.
2. **Computational Overhead** Despite its effectiveness in bolstering model robustness in large-scale, real-world scenarios, SSFOD introduces additional computational overhead. This is primarily due to the algorithmic complexities and computational requirements intrinsic to our method, which might be a constraint for resource-limited devices often participating in federated learning. This could potentially limit the scalability and applicability of our method in real-world FL scenarios. Therefore, improving the computational efficiency of SSFOD without compromising its efficacy is an important direction for future research.
3. **Sensitivity to Varying Weather Conditions** SSFOD has been designed to tackle the challenge of varying weather conditions in autonomous driving. However, in real-world scenarios, there are other types of environmental changes that can equally affect the learning process. For instance, varying lighting conditions or changes in road surfaces might influence the input data. Since our SSFOD method primarily focuses on weather conditions, it might not exhibit the same level of efficiency when dealing with other environmental factors. Future iterations of SSFOD could explore these areas to provide a more comprehensive solution to environmental handling in FL.
## Appendix D Future Directions
Despite the aforementioned limitations, our SSFOD method lays the groundwork for several promising future research directions:
1. **Handling Other Environmental Factors:** Future work could extend the SSFOD method to handle other environmental factors efficiently. This would make it a more comprehensive solution for real-world federated learning scenarios where different environmental factors coexist.
2. **Adaptation for Imbalanced Data:** Investigating and enhancing the performance of SSFOD with highly imbalanced data distribution would be a valuable future direction. Techniques like adaptive resampling or cost-sensitive learning could be integrated with our method to tackle this challenge.
3. **Optimization of Computational Efficiency:** Future research could focus on optimizing the computational efficiency of the SSFOD method. Reducing the computational overhead without compromising the robustness in large-scale, real-world scenarios would make our method more practical for real-world FL scenarios.
4. **Robustness Against Adversarial Attacks:** As the FL domain evolves, adversarial attacks pose an increasing threat to model robustness. Future work could explore how to bolster the SSFOD method (and FL methods, in general) to ensure robustness against adversarial attacks.
By addressing these limitations and exploring these future directions, we can continuously refine and evolve the SSFOD method to better serve the ever-growing demands of federated learning.
## Appendix E Detailed Data Heterogeneity in Federated Object Detection
Heterogeneity, a prevalent attribute in object detection datasets, often arises from three crucial aspects:
* **Weather-induced feature distribution skew:** Within outdoor scenarios like autonomous driving, weather variations significantly impact the visual representation of objects. Differing conditions such as sunny, rainy, foggy, or snowy can alter an object's appearance, causing the weather-induced feature distribution skew. Moreover, sensor diversity, including RGB cameras and infrared sensors, contributes to this skew as they respond uniquely to various weather conditions. This complex scenario creates a challenging task for an object detection system that must generalize across diverse conditions.
* **Class distribution heterogeneity:** This heterogeneity refers to the uneven representation of various classes within a dataset. In many cases, certain classes are far more prevalent than others. For instance, in an autonomous driving scenario, 'cars' or 'pedestrians' may be much more frequent than 'bicycles' or'motorcycles.' This imbalance can cause learning algorithms to develop a bias towards more common classes. Moreover, in a federated learning scenario, the class distribution may vary among clients; a rural area client might capture more 'animal' instances compared to an urban area client.
* **Label density heterogeneity:** This form of heterogeneity pertains to the variation in the quantity of annotated objects per image. An image of a crowded scene may contain far more objects than a sparser image. This variability can influence the performance of detection models, particularly those that rely on a fixed number of anchors or proposals per image.
Furthermore, it can also impact the training process as images with more objects provide more "training signal" per image than those with fewer objects. In a federated learning context, certain clients might possess more densely labeled data than others, which could affect the learning process.
While our current work primarily addresses the issue of weather-induced feature distribution skew, the other forms of heterogeneity, i.e., class distribution heterogeneity and label density heterogeneity, also require careful consideration. They present unique challenges within the federated learning environment and significantly influence model performance. It is crucial to extend our methodologies to account for these factors, fostering more robust and versatile object detection systems capable of handling the intricate realities of real-world scenarios. Consequently, future work should aim at developing comprehensive solutions that cater to all these facets of data heterogeneity in the context of federated learning.
## Appendix F Exploring the Impact of Orthogonal Enhancement under Data Heterogeneity in SSFOD
### Theoretical Analysis
This section presents a straightforward bounding strategy for the loss in the context of data heterogeneity. Our primary focus is to ascertain the Mean Squared Error (MSE) of a two-layered neural network, encompassing both 'head' and 'backbone'. Analyzing this model's behavior under a slightly perturbed data distribution is intended to offer insights into worst-case loss scenarios when orthogonal weights are employed.
We consider a multivariate distribution, denoted by \(\mathbb{P}\), with \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\mathbf{y}\in\mathbb{R}^{m}\) being drawn from this distribution. We have \(n\) i.i.d. samples from \(\mathbb{P}\), constituting our dataset \(\mathcal{D}=(\mathbf{x}_{i},\mathbf{y}_{i})\colon(\mathbf{x}_{i},\mathbf{y}_{i})\sim\mathbb{P },1\leq i\leq n\). For the sake of this analysis, our model is expressed as \(f(\mathbf{x})=W^{\top}B\mathbf{x}\), where \(B\in\mathbb{R}^{k\times d}\) and \(W\in\mathbb{R}^{k\times m}\) are the backbone and head weight matrices, respectively. The MSE is given by \(\mathbb{E}_{\mathbf{x},\mathbf{y}}\|\mathbf{y}-f(\mathbf{x})\|_{2}^{2}\), representing the expectation of the \(\ell^{2}\)-norm of the difference vector between the ground truth and the prediction.
Under the presumption that \(f\) has been finely optimized on the training sample distribution (using the dataset \(\mathcal{D}\)) to yield an MSE of value \(L\), we next examine a perturbed data distribution, denoted by \(\mathbb{P}^{\prime}\), characterized as follows:
\[(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\sim\mathbb{P}^{\prime}\iff(\mathbf{x}^{\prime}, \mathbf{y}^{\prime})\sim(\mathbf{x}+\mathbf{\epsilon},\mathbf{y}) \tag{1}\]
Here, \((\mathbf{x},\mathbf{y})\sim\mathbb{P}\) and \(\mathbf{\epsilon}\sim\mathcal{N}_{d}(0,\Sigma)\) is Gaussian noise, characterized by covariance matrix \(\Sigma\), independent of \((\mathbf{x},\mathbf{y})\). The MSE loss under \(\mathbb{P}^{\prime}\), represented by \(L^{\prime}\), can be bound as:
\[\mathbb{E}_{\mathbf{x}^{\prime},\mathbf{y}^{\prime}}\|\mathbf{y}^{\prime}-W^ {\top}B\mathbf{x}^{\prime}\|_{2}^{2} =\mathbb{E}_{\mathbf{x},\mathbf{y},\mathbf{\epsilon}}\|\mathbf{y}-W^{\top}B(\mathbf{x }+\mathbf{\epsilon})\|_{2}^{2} \tag{2}\] \[=\mathbb{E}_{\mathbf{x},\mathbf{y},\mathbf{\epsilon}}\|\mathbf{y}-W^{\top}B\mathbf{x} +W^{\top}B\mathbf{\epsilon}\|_{2}^{2}\] \[=\mathbb{E}_{\mathbf{x},\mathbf{y}}\|\mathbf{y}-W^{\top}B\mathbf{x}\|_{2}^{2}+ \mathbb{E}_{\mathbf{\epsilon}}\|W^{\top}B\mathbf{\epsilon}\|_{2}^{2}\] \[\quad+\mathbb{E}_{\mathbf{x},\mathbf{y},\mathbf{\epsilon}}[(\mathbf{y}-W^{\top}B \mathbf{x})^{\top}W^{\top}B\mathbf{\epsilon}]\] \[\quad+\mathbb{E}_{\mathbf{x},\mathbf{y},\mathbf{\epsilon}}[\mathbf{\epsilon}^{\top }B^{\top}W(\mathbf{y}-W^{\top}B\mathbf{x})]\] \[=L+\mathbb{E}\|W^{\top}B\mathbf{\epsilon}\|_{2}^{2}.\]
This calculation introduces a nonnegative penalty of \(\mathbb{E}\|W^{\top}B\mathbf{\epsilon}\|_{2}^{2}\) to the out-of-sample MSE.
The derived penalty opens the possibility for further analysis, particularly when considering structural assumptions on the head, backbone, or noise vector. The Gaussian assumption for the noise vector is quite enlightening, given that the squared \(\ell^{2}\)-norm is tantamount to the quadratic form \(\mathbf{\epsilon}^{\top}B^{\top}WW^{\top}B\mathbf{\epsilon}\). By incorporating a recognized principle about the expectation of a Gaussian vector's quadratic form, we obtain:
\[\mathbb{E}\|W^{\top}B\mathbf{\epsilon}\|_{2}^{2}=\mathbb{E}[\mathbf{\epsilon}^{\top}B^ {\top}WW^{\top}B\mathbf{\epsilon}]=\operatorname{tr}(B^{\top}WW^{\top}B\Sigma). \tag{3}\]Assuming \(WW^{\top}=I\), an identity matrix, the trace simplifies to \(\operatorname{tr}(B^{\top}B\Sigma)\). In the event the model \(f\) has undergone orthogonality regularization, resulting in a semi-orthogonal head \(W\) (precisely, \(WW^{\top}=I\)), we obtain a concise formulation for the penalty that hinges solely on the backbone and the fundamental noise parameter.
### Insights from Theoretical Analysis
The insights gained from our theoretical analysis significantly enrich our understanding of the FedSTO approach and how it can navigate challenges specific to FL.
One major challenge in FL is data heterogeneity - the data across different clients (or devices) can vary significantly. By bounding the MSE for our model under slightly shifted distributions, we learn how our model's performance could change in the presence of such data heterogeneity. This is akin to having a'sensitivity' measure for our model, showing us how'sensitive' the model is to changes in data distributions. The better we understand this sensitivity, the more effectively we can tailor our model to handle the challenges of federated learning.
The analysis also highlights the significance of having orthogonal weights in our model. The investigation reveals that when the head of our model is semi-orthogonal, the penalty on the out-of-sample MSE simplifies to depend only on the backbone and the noise term, effectively isolating the effect of client-specific data perturbations. This insight is of particular importance because it provides a clear and quantifiable understanding of the role and benefit of orthogonal weights in controlling model sensitivity to data heterogeneity.
Furthermore, our analysis provides valuable directions for future research in FL. The relationship discovered between the orthogonality of weights and the bound on MSE opens up new avenues for investigation. It could, for instance, prompt more in-depth research into regularization techniques that aim to achieve orthogonality, improving model stability across diverse data distributions.
In simpler terms, our theoretical analysis is much like a 'roadmap', helping us understand how our model reacts to changes in data distributions, the role of orthogonal weights in this context, and where we could focus future research efforts to improve our approach. This roadmap makes the journey of applying FedSTO to federated learning more navigable, ultimately leading to more effective and robust models.
## Appendix G Detailed Experimental Settings
YOLOv5 ArchitectureThe YOLOv5 object detection model is based on the YOLO algorithm, which divides an image into a grid system and predicts objects within each grid. YOLOv5 uses a convolutional neural network (CNN) to extract features from the image and then uses these features to predict the location of the bounding boxes (x,y,height,width), the scores, and the objects classes. The YOLOv5 architecture consists of three main parts: the backbone, the neck, and the head. The backbone is responsible for extracting features from the image. The neck combines the features from the backbone and passes them to the head. The head then interprets the combined features to predict the class of an image. Here is a more detailed description of each part of the YOLOv5 architecture:
* Backbone: The backbone of YOLOv5 is based on the CSPDarknet53 architecture. The CSPDarknet53 architecture is a modified version of the Darknet53 architecture that uses a cross stage partial connection (CSP) module to improve the performance of the network. The CSP module allows the network to learn more complex features by sharing information across different layers.
* Neck: The neck of YOLOv5 is a simple convolutional layer that combines the features from the backbone.
* Head: The head of YOLOv5 is composed of three convolutional layers that predict the location of the bounding boxes, the scores, and the objects classes. The bounding boxes are predicted using a regression model, while the scores and classes are predicted using a classification model.
YOLOv5 has been shown to be effective for object detection, achieving good results on a variety of object detection benchmarks. It is also fast, making it a good choice for real-time object detection applications.
AnnotationsThe YOLO object detection algorithm divides an image into a grid system and predicts objects within each grid. The annotations for YOLO are stored in a text file, with each line representing an object in the image. The format of each line is as follows:
[object_id], [x_center], [y_center], [width], [height], [score]
where object_id is the ID of the object, x_center is the x-coordinate of the center of the object's bounding box, y_center is the y-coordinate of the center of the object's bounding box, width is the width of the object's bounding box, height is the height of the object's bounding box, and score is the confidence score of the object detection.
Loss FunctionsYOLOv5 is an object detection model that is trained using a loss function that combines three terms: a class loss, a box loss, and a confidence loss. The class loss is a cross-entropy loss that measures the difference between the predicted class probabilities and the ground truth class labels. The box loss is a smooth L1 loss that measures the distance between the predicted bounding boxes and the ground truth bounding boxes. The confidence loss is a binary cross-entropy loss that measures the difference between the predicted confidence scores and the ground truth binary labels indicating whether or not an object is present in the image. The overall loss function is minimized using stochastic gradient descent. Here is a more detailed explanation of each loss term:
* Class loss \(\mathcal{L}_{cls}\): The cross-entropy loss is a measure of the difference between two probability distributions. In the case of YOLOv5, the two probability distributions are the predicted class probabilities and the ground truth class labels. The cross-entropy loss is minimized when the predicted class probabilities are identical to the ground truth class labels.
* Box loss \(\mathcal{L}_{obj}\): The smooth L1 loss is a measure of the distance between two sets of numbers. In the case of YOLOv5, the two sets of numbers are the predicted bounding boxes and the ground truth bounding boxes. The smooth L1 loss is minimized when the predicted bounding boxes are identical to the ground truth bounding boxes.
* Confidence loss \(\mathcal{L}_{conf}\): The binary cross-entropy loss is a measure of the difference between two binary distributions. In the case of YOLOv5, the two binary distributions are the predicted confidence scores and the ground truth binary labels indicating whether or not an object is present in the image. The binary cross-entropy loss is minimized when the predicted confidence scores are identical to the ground truth binary labels.
## Appendix H Discussions on Semi-Supervised Object Detection
We implement a similar strategy to the pseudo label assigner of the semi-efficient teacher [38]. We aim to provide a comprehensive description of our training method and pseudo label assignment approach in this section.
### Adaptive Loss
The adaptive loss function in our SSFOD framework is comprised of two key parts: a supervised loss (\(L_{s}\)), calculated from labeled data, and an unsupervised loss (\(L_{u}\)), generated from unlabelled instances.
The supervised loss, \(L_{s}\), adheres to conventional practices used in object detection tasks. This incorporates the amalgamation of cross-entropy loss, responsible for classification, and CIoU (Complete Intersection over Union) loss, accounting for bounding box regression:
\[L_{s}=\sum_{h,w}\left(CE(X_{cls}^{(h,w)},Y_{cls}^{(h,w)})+CIoU(X_{reg}^{(h,w) },Y_{reg}^{(h,w)})+CE(X_{obj}^{(h,w)},Y_{obj}^{h,w})\right), \tag{4}\]where \(CE\) is the cross-entropy loss function, \(X^{(h,w)}\) signifies the model's output, and \(Y^{(h,w)}\) corresponds to the sampled results of pseudo label assigner (i.e., local EMA updated model).
The unsupervised loss component, \(L_{u}\), is an addition that leverages the pseudo labels generated by the local EMA updated model. The objective of \(L_{u}\) is to guide the model to exploit beneficial information embedded within unlabelled data. It is computed as:
\[L_{u}=L_{cls}^{u}+L_{reg}^{u}+L_{obj}^{u}, \tag{5}\]
where \(L_{cls}^{u}\), \(L_{reg}^{u}\), and \(L_{obj}^{u}\) denote the losses associated with classification, bounding box regression, and objectness score, respectively. Each of these losses uses pseudo labels and is precisely tailored to foster reliable and efficient learning from unlabeled data.
* Classification Loss, \(L_{cls}^{u}\), homes the model's class-specific accuracy. It is calculated only for pseudo labels whose scores are above a predefined high threshold \(\tau_{2}\), using a cross-entropy loss function. The divergence is gauged between the model's predicted classification scores and the class scores from the pseudo labels provided by the Pseudo Label Assigner.
* Regression Loss, \(L_{reg}^{u}\), scrutinizes the precision of the bounding box predictions. This loss applies to pseudo labels with scores above \(\tau_{2}\) or objectness scores surpassing a value typically set at 0.99. A CIoU loss function is employed to measure the discrepancy between the predicted bounding box regressions and those associated with the pseudo labels.
* Objectness Loss, \(L_{obj}^{u}\), evaluates the model's objectness prediction capability, essentially the model's confidence in a given bounding box containing an object of interest. All pseudo labels contribute to this loss, though the computation varies depending on the pseudo label score. For scores below \(\tau_{1}\), the loss is computed against zero. For those above \(\tau_{2}\), the loss compares against the objectness of the pseudo label. Scores between \(\tau_{1}\) and \(\tau_{2}\) result in the loss calculated against the soft objectness score of the pseudo labels.
This strategic amalgamation of losses forms the underpinning of the unsupervised loss function, fostering efficient and reliable learning from unlabeled data. By harnessing the potential information embedded within pseudo labels, our model aims to significantly enhance semi-supervised object detection in an FL setup.
In a centralized setting, the supervised and unsupervised losses would typically be trained jointly. However, in our federated scenario, where the server possesses only labeled data and the client holds exclusively unlabeled data, we adopt an alternate training approach. The server focuses on optimizing the supervised loss \(L_{s}\) with its labeled data, while the client concurrently refines the unsupervised loss \(L_{u}\) using its unlabeled data. This methodical division of labor allows us to leverage the distinctive characteristics of both types of data and facilitates efficient learning in a federated environment.
### Local EMA Model for the Pseudo Labeler
In SSFOD framework, we employ a Local Exponential Moving Average (EMA) model for generating pseudo labels, enabling the judicious utilization of unlabeled data at each client's disposal. The concept of the EMA model hinges on an infinite impulse response filter that assigns weightages decreasing exponentially, offering a balanced consideration of both historical and immediate data points for reliable predictions.
Each client maintains a local EMA model in our method. The weights of this local EMA model are a weighted average of its own weights and the weights of the global model. This relationship can be described mathematically as follows:
\[W_{\text{EMA}_{clientk}}^{(t)}=\alpha W_{\text{EMA}_{clientk}}^{(t-1)}+(1- \alpha)W_{\text{client k}}^{(t)} \tag{6}\]
In Eq. (6), \(W_{\text{EMA}_{clientk}}^{(t)}\) symbolizes the weights of client k's local EMA model at round \(t\), and \(W_{\text{client k}}^{(t)}\) represents the weights of the client k's model at round \(t\). \(\alpha\) is the EMA decay rate dictating the pace of depreciation of the client's weights' influence. Typically, the decay rate resides in the 0.9 to 0.999 range, subject to the specific application.
In pursuit of a consistent basis across all clients, the weights of the Local EMA model are reinitialized with the global model's weights post each server broadcast. This particularity of our framework ensures a harmonious coexistence of global model consistency and local data awareness.
The system enables the local EMA model of each client to gradually tune to the unique local data characteristics, by integrating updates from the client's unlabeled data. As a consequence, we achieve a potent blend of personalization and model performance in a FL environment, all the while alleviating communication overheads. The design acknowledges the distinct data distributions that each client may harbor, allowing the model to adapt accordingly, thereby augmenting the learning process's efficacy and efficiency.
Following the weight updates, the local EMA model functions as the pseudo labeler for each client. This approach ensures the generation of stable and reliable pseudo labels, despite the limited interactions with the global model. In the Federated Learning scenario, where communication costs carry substantial significance, this feature is crucial.
Our local EMA strategy presents several benefits. It guarantees independent pseudo label generation at each client's end without necessitating frequent server updates. In addition, it enhances the learning process's robustness, as the local EMA model remains largely unaffected by the client model's noisy gradient updates. As a result, the system produces more trustworthy pseudo labels, ultimately culminating in improved performance.
## Appendix I Discussions on Selective Training
In our SSFOD framework, Selective Training plays a pivotal role in achieving superior performance and enhanced computational efficiency. Traditionally, in selective training, all components beyond the backbone, typically referred to as Non-Backbone parts, are frozen to reduce computational complexity and improve training efficiency. However, It may not be necessary to freeze all of the non-backbone parts.
In our preliminary investigations, we discover that freezing only the head, a component of the Non-Backbone parts, can yield similar performance levels as freezing the entire Non-Backbone. This observation opens the door to potentially more efficient and flexible models in the context of selective training, as it suggests that selective freezing could be just as effective as complete freezing (Table 9).
Nonetheless, in consideration of the inherent nature of FL, we ultimately decide to continue freezing both the neck and head components. We reach this decision primarily due to the communication cost considerations, which carry considerable weight in a federated learning setting. Despite the promising results observed when only freezing the head, freezing both the neck and head does not result in any noticeable performance decrement, while it does markedly reduce the communication cost.
The implications of this choice are particularly relevant in the context of FL environments, where minimizing communication costs is paramount for practical deployment. We posit that this finding may inform future strategies for selective training, suggesting that selective freezing of specific Non-Backbone parts can effectively balance performance and efficiency in FL environments.
\begin{table}
\begin{tabular}{c c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Non-IID} & \multicolumn{4}{c}{IID} \\ \cline{2-11} & Cloudy & Overcast & Rainy & Snowy & Total & Cloudy & Overcast & Rainy & Snowy & Total \\ \hline Vanilla & 0.560 & 0.566 & 0.553 & 0.553 & 0.558 & 0.572 & 0.588 & 0.593 & **0.610** & 0.591 \\ Freezing Only Head & 0.531 & **0.603** & **0.567** & **0.565** & **0.567** & 0.558 & **0.613** & **0.614** & 0.593 & **0.595** \\ Freezing Neck \& Head & **0.571** & 0.583 & 0.557 & 0.556 & **0.567** & **0.576** & 0.578 & 0.594 & 0.599 & 0.587 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Performance on the BDD dataset with 1 labeled server and 3 unlabeled clients. It highlights how each added method contributes to the overall performance under both Non-IID and IID conditions.
## Appendix J Further Discussions on Full Parameter Training (FPT) with Orthogonal Enhancement
As part of our ongoing evaluation of our SSFOD framework, we conduct comprehensive experiments involving different training strategies, in particular, Full Parameter Training (FPT) with Orthogonal Regularization (ORN). Through these experiments, we compare three distinct approaches:
1. ORN employed right from scratch without any pre-training or frozen components.
2. A two-stage process where the model head is frozen during the pre-training phase, followed by fine-tuning across the entire model with ORN.
3. A more nuanced strategy where both the neck and head components of the model are initially frozen, subsequently transitioning to an ORN-driven FPT.
Our empirical results decisively point to the superiority of the third approach (Table 10). Despite the initial freezing of the neck and head components, it yields performance that surpassed the other two strategies. This counter-intuitive finding underscores the effectiveness of the careful balancing of selective parameter freezing and ORN application.
The mere marginal performance difference in mAP between the second and third approaches suggests room for further investigation. An exploration of the specific conditions under which one might outperform the other could shed light on new avenues for model improvement. This represents an exciting direction for future research.
However, for the scope of the present study, we opt for the third strategy of freezing both the neck and head components simultaneously. This decision is not only influenced by the slightly higher performance but also by the added advantage of reduced communication costs. This approach resonates with the primary objective of our research - to realize high-performing, efficient, and cost-effective FL frameworks for SSOD tasks.
## Appendix K Examination of Loss Functions within SSFOD Framework
An integral part of our study on the SSFOD framework involves an examination of the influence of various loss functions on model performance. One of the notorious challenges in object detection tasks is dealing with class imbalance. To counter this, we decided to evaluate the effectiveness of Focal Loss, which is widely used for its capability to handle such imbalances, in comparison with the conventional Cross-Entropy (CE) Loss, which we refer to as vanilla Loss in the paper.
Focal Loss [24] has been designed to tackle class imbalance by reducing the contribution of easy instances, thus allowing the model to concentrate on challenging, misclassified samples. Despite its recognized efficacy in a variety of settings, we observed unexpected performance under our specific framework.
Contrary to our initial assumptions, our empirical findings suggest that the Vanilla Loss proved to be a more potent safeguard against the class imbalance issue within our SSFOD framework (Table 11). We compare the performance by changing the loss functions in the baseline SSFOD training. The underlying reasons for this surprising result are potentially multifaceted and certainly merit additional exploration. However, these observations underscore the importance of empirical testing in choosing the appropriate loss functions tailored for specific frameworks and tasks.
\begin{table}
\begin{tabular}{c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c|}{Labeled} & \multicolumn{6}{c}{Unlabeled} \\ \cline{2-11} & \multicolumn{6}{c}{Categories} & \\ \cline{2-11} & Person & Car & Bus & Truck & Traffic Sign & Person & Car & Bus & Truck & Traffic Sign \\ \hline ORN from scratch & 0.470 & 0.694 & **0.449** & 0.074 & 0.393 & 0.437 & 0.720 & 0.507 & 0.100 & 0.378 \\ Freezing only Head + FPT with ORN & **0.504** & 0.697 & 0.353 & 0.239 & 0.411 & 0.486 & **0.740** & 0.420 & 0.078 & 0.415 \\ Freezing Neck \& Head + FPT with ORN & **0.504** & **0.720** & 0.342 & **0.261** & **0.415** & **0.487** & **0.740** & **0.460** & **0.181** & **0.437** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Performance under random distributed cases of Cityscapes [4]. FedSTO exhibits improvements under various object categories, and significantly outperforms the performance for unlabeled clients.
## Appendix L Implementation
To provide a comprehensive overview of our study, we provide the details of our implementation, including the reproduction of various existing FL methods such as FedDyn [1], FedBN [22], FedOpt [33], and FedPAC [39].
* FedDyn [1]: For the implementation of FedDyn, we adhere strictly to the training methodology as stipulated in the original paper.
* FedBN [22]: In implementing the FedBN method, we make a conscientious choice to leave out the statistics and bias of the batch normalization (BN) layer from the aggregation step.
* FedOpt [33]: FedOpt implementation aligns with Vanilla SSFL, albeit with a slight modification in the server model's optimization procedure. The key differentiation lies in the utilization of a pseudo learning rate of 0.9 during server model optimization, which was chosen through empirical evaluations to maximize performance. Thus, while preserving the core attributes of the FedOpt framework, we adapt it to our specific task requirements.
* FedPAC [39]: The execution of the FedPAC framework, primarily architected for classification tasks, necessitates thoughtful adaptation to suit our object detection scenario. This adaptation involves the appropriation of the YOLOv5 head, used as a stand-in classifier, which allows us to transpose our specific needs onto the FedPAC blueprint. This process not only honors the training techniques prescribed by FedPAC but also satisfies our model's unique requirements. As an integral part of our approach, the local EMA model, a cornerstone of our broader learning strategy, is employed as a pseudo labeler in this scheme. Furthermore, our model's backbone representation is equated with the output from YOLO's backbone. This bespoke modification offers a perspective to exploit the central principles of FedPAC while tailoring them to suit the idiosyncrasies of our model.
\begin{table}
\begin{tabular}{l c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Non-IID} & \multicolumn{4}{c}{IID} \\ \cline{2-11} & Cloudy & Overcast & Rainy & Snowy & Total & Cloudy & Overcast & Rainy & Snowy & Total \\ \hline CE loss & **0.560** & **0.566** & **0.553** & **0.553** & **0.558** & **0.572** & **0.588** & **0.593** & **0.610** & **0.591** \\ Focal loss & 0.364 & 0.462 & 0.438 & 0.446 & 0.428 & 0.371 & 0.464 & 0.469 & 0.482 & 0.447 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Performance of CE loss and focal loss on the BDD dataset with 1 labeled server and 3 unlabeled clients. It highlights how each added method contributes to the overall performance under both Non-IID and IID conditions.
Experimental Results with 100 Clients
In the setting of an extensive FL environment with a network of 100 clients and 0.1 client sampling ratio, we embark on an empirical investigation to ascertain the robustness and scalability of the proposed method. This scenario closely mirrors real-world cross-device situations, inherently characterized by widespread client distribution. Intriguingly, each client in our experiment is associated with data corresponding to a single weather condition. Notwithstanding this, our method exhibits remarkable resilience and efficacy. As an illustration, the model outperforms a baseline that is trained solely on the server's labeled data, as demonstrated in Table 12. This superior performance underscores the model's capability in effectively leveraging the heterogeneity intrinsic to distributed data, thereby enhancing the overall performance. These empirical findings provide compelling evidence of the adaptability and effectiveness of our approach within large-scale FL contexts. With its ability to maintain high performance coupled with computational efficiency, our method exhibits promising potential in managing data heterogeneity across extensive federated networks.
\begin{table}
\begin{tabular}{c c c c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c|}{Labeled} & \multicolumn{6}{c}{Unlabeled} \\ \cline{2-11} & \multicolumn{6}{c}{Categories} \\ \cline{2-11} & Person & Car & Bus & Truck & Traffic Sign & Person & Car & Bus & Truck & Traffic Sign \\ \hline Server Only Scenario & 0.378 & 0.710 & 0.141 & 0.425 & 0.490 & 0.337 & 0.707 & 0.160 & 0.338 & 0.491 \\ FedSTO within SSFOD framework & **0.393** & **0.714** & **0.442** & **0.510** & **0.540** & **0.487** & **0.738** & **0.573** & **0.589** & **0.617** \\ \hline \hline \end{tabular}
\end{table}
Table 12: Performance under Non-IID scenarios of BDD100k dataset with 1 server and 100 clients. FedSTO exhibits improvements under various object categories, and significantly outperforms the performance for unlabeled clients. The term ‘Server Only Scenario’ aligns with the notion of ‘partially supervised’ in CL settings. | ## Review
### Summary
The paper presents a Semi-Supervised Federated Object Detection (SSFOD) framework, specifically designed for scenarios where the server holds labeled data while clients have only unlabeled data. The proposed method employs a two-stage strategy called FedSTO, which includes selective training and orthogonal enhancement, aimed at addressing challenges related to heterogeneous unlabeled data in federated learning. The framework demonstrates notable performance improvements across diverse datasets, achieving results comparable to fully supervised models despite non-IID conditions. The work is well-motivated, with extensive empirical evaluations that validate the proposed approach against existing methodologies.
### Strengths
- The paper introduces a pioneering framework for SSFOD, addressing a significant challenge in federated learning with unlabeled data.
- The proposed FedSTO method effectively integrates selective training and orthogonal enhancement to improve object detection performance.
- Extensive experiments on multiple datasets demonstrate the efficacy and robustness of the proposed solution compared to existing techniques.
- The writing is generally clear and well-structured, making the paper easy to follow.
### Weaknesses
- The novelty of the contribution is somewhat limited as many components are adapted from existing literature without sufficient distinction.
- The evaluation lacks comparisons with fully labeled federated settings and broader client scales to establish the robustness of the proposed method.
- Several key components, such as the algorithm for personalized pseudo-labeling, need clearer explanations for better understanding.
- The paper does not adequately discuss the implications of communication efficiency and scalability, which are important for practical deployment.
- Some essential references are missing, impacting the comprehensiveness of the literature review.
### Questions
- What is the rationale behind using Yolov5, and how does the proposed solution generalize to other object detection architectures?
- Could the authors clarify the differences and advantages of their approach compared to similar domain-adapted object detection methods?
- What do T1 and T2 in Algorithm 1 represent?
- How do the augmentations affect the ground truth box positions for unlabeled images?
### Soundness
**Score:** 3
**Description:** 3 = good: The methodology is sound and the results are well-supported by the experiments, although some aspects require further clarification.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-structured and clear, but some figures and explanations need improvement for better understanding.
### Contribution
**Score:** 3
**Description:** 3 = good: The paper makes a valuable contribution to the field, although the novelty could be more clearly articulated.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact, but it requires some improvements in clarity and evaluation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel approach to federated learning in object detection, with significant empirical support demonstrating its effectiveness. While there are some weaknesses regarding novelty and clarity, the overall contribution to the field, coupled with the reviewers' positive feedback, warrants acceptance. The soundness of the methodology and the impact of the results further support this decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Learning to Compress Prompts with Gist Tokens
Jesse Mu, Xiang Lisa Li, Noah Goodman
Stanford University
[email protected], {xlisali,ngoodman}@stanford.edu
###### Abstract
Prompting is the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and repeatedly encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs without prompting, but require retraining the model for each task. To avoid this trade-off entirely, we present _gisting_, which trains an LM to compress prompts into smaller sets of "gist" tokens which can be cached and reused for compute efficiency. Gist models can be trained with no additional cost over standard instruction finetuning by simply modifying Transformer attention masks to encourage prompt compression. On decoder (LLaMA-7B) and encoder-decoder (FLAN-T5-XXL) LMs, gisting enables up to 26x compression of prompts, resulting in up to 40% FLOPs reductions, 4.2% wall time speedups, and storage savings, all with minimal loss in output quality.
## 1 Introduction
Consider the prompt of a Transformer [34] language model (LM) like ChatGPT:1
Footnote 1: reddit.com/r/ChatGPT/comments/10oliuo/please_print_the_instructions_you_were_given/
You are ChatGPT, a large language model trained by OpenAI. You answer as concisely as possible for each response (e.g. don't be verbose). It is very important that you answer as concisely as possible, so please remember this. If you are generating a list, do not have too many items. Keep the number of items short. Knowledge cutoff: 2021-09 Current date: <TODAY>
With millions of queries a day, an unoptimized ChatGPT would encode this prompt over and over with a self-attention mechanism whose time and memory complexity is quadratic in the length of the input. Caching the Transformer activations of the prompt can prevent some recomputation, yet this strategy still incurs memory and storage costs as the number of cached prompts grows. At large scales, even small reductions in prompt length could lead to substantial compute, memory, and storage savings over time, while also letting users fit more content into an LM's limited context window.
How might we reduce the cost of this prompt? One typical approach is to finetune or distill [1, 30] the model to behave similarly to the original model without the prompt, perhaps with parameter-efficient adaptation methods [15, 16, 19]. Yet a fundamental drawback of this approach is that it requires retraining the model for each new prompt (Figure 1, bottom left).
Instead, we propose **gisting** (Figure 1, top right), which compresses arbitrary prompts into a smaller set of Transformer activations on top of virtual "gist" tokens, _a la_ prefix-tuning [19]. But where prefix-tuning requires learning prefixes via gradient descent for each task, gisting adopts a meta-learning approach, where we simply predict the gist prefixes zero-shot given only the prompt, allowing for generalization to unseen instructions without any additional training. Since gist tokens are much shorter than the full prompt, gisting allows arbitrary prompts to be compressed, cached, and reused for compute efficiency.
In this paper, we further propose a very simple way to learn a gist model: doing instruction tuning [38] with gist tokens inserted after the prompt, and a modified attention mask preventing tokens _after_ the gist tokens from attending to tokens _before_ the gist tokens. This allows a model to learn prompt compression and instruction following at the same time, with no additional training cost.
On decoder-only (LLaMA-7B) and encoder-decoder (FLAN-TS-XXL) LMs, gisting achieves prompt compression rates of up to **26x**, while maintaining output quality similar to the original models in human evaluations. This results in up to 40% FLOPs reduction and 4.2% latency speedups during inference, with greatly decreased storage costs compared to traditional prompt caching approaches.
## 2 Gisting
We will first describe gisting in the context of instruction finetuning [38]. We have an instruction-following dataset \(\mathcal{D}=\{(t_{i},x_{i},y_{i})\}_{i=1}^{N}\), where \(t\) is a task encoded with a natural language prompt (e.g. Translate this to French, \(x\) is an (optional) input for the task (e.g. The cat), and \(y\) is the desired output (e.g. Le chat). Given a (usually pretrained) LM, the aim of instruction finetuning is to learn a distribution \(p_{\text{LM}}(y\mid t,x)\), typically by concatenating \(t\) and \(x\), then having the LM autoregressively predict \(y\). At inference time, we can _prompt_ the model with a novel task \(t\) and input \(x\), decoding from the model to obtain its prediction.
However, this pattern of concatenating \(t\) and \(x\) has drawbacks: Transformer LMs have limited context windows, bounded either by architecture or memory limits. Furthermore, given that attention scales quadratically in the length of the input, long prompts \(t\), especially those that are repeatedly reused, are computationally inefficient. What options do we have to reduce the cost of prompting?
One simple option is to finetune the LM for a _specific_ task \(t\). That is, given \(\mathcal{D}^{t}=\{(x_{i},y_{i})\}_{i=1}^{N^{t}}\), the dataset containing input/output examples only under task \(t\), we can learn a specialized LM \(p_{\text{LM}}^{t}(y\mid x)\) which is faster because it does not condition on \(t\). Parameter-efficient finetuning methods such as prefix-/prompt-tuning [18; 19] or adapters [15; 16] promise to do so at a fraction of the cost of full finetuning, and newer methods like HyperTuning [25] eliminate gradient descent entirely, instead predicting the parameters of the specialized model directly from \(\mathcal{D}^{t}\). Yet problems with these methods still remain: we must store at least a subset of model weights for each task, and more importantly, for each task \(t\), we must collect a corresponding dataset of input/output pairs \(\mathcal{D}^{t}\) to adapt the model.
Gisting is a different approach that amortizes both (1) the inference-time cost of prompting \(p_{\text{LM}}\) with \(t\) and (2) the train-time cost of learning a new \(p_{\text{LM}}^{t}\) for each \(t\). The idea is to learn a _compressed_ version of \(t\), \(G(t)\), such that inference from \(p_{\mathcal{G}}(y\mid G(t),x)\) is faster than \(p_{\text{LM}}(y\mid t,x)\). In LM terms, \(G(t)\) will be the key/value activations on top a set of _gist tokens_, smaller than the number of tokens in \(t\), yet still inducing similar behavior from the LM. Also known as a Transformer _prefix_[19], \(G(t)\) can then be cached and reused for compute efficiency. Crucially, we expect \(G\) to generalize to unseen tasks: given a new task \(t\), we can predict and use the gist activations \(G(t)\)_without any additional training_.
### A Context Distillation Perspective
An alternative way to view gisting is through the lens of distillation of an already instruction-tuned LM \(p_{\text{LM}}(y\mid t,x)\). Askell et al. [1] and Snell et al. [30] define _context distillation_ as the process of finetuning a new LM \(p_{\text{CD}}^{t}\) to mimic the original LM without the prompt ("context") \(t\), via the loss
\[\mathcal{L}_{\text{CD}}(p_{\text{CD}}^{t},\,t)=\mathbb{E}_{x}\left[D_{\text{ KL}}(p_{\text{LM}}(y\mid t,x)\parallel p_{\text{CD}}^{t}(y\mid x))\right]. \tag{1}\]
Figure 1: **Prompting** retains the multitask capabilities of LMs, but is inefficient. **Finetuning/distillation** is more efficient, but requires training a model for each task. **Gisting** compresses prompts into activations on top of “gist tokens”, saving compute and generalizing to novel tasks at test time. Each vertical rectangle represents a stack of Transformer activations.
The insight to be gained from this perspective is that we do not need any external data \(\mathcal{D}\): this KL objective can be approximated by finetuning \(p_{\text{CD}}^{t}\) on a synthetic sampled dataset \(\hat{\mathcal{D}}^{t}=\{(\hat{x}_{i},\hat{y}_{i})\}\) where \((\hat{x}_{i},\hat{y}_{i})\sim p_{\text{LM}}(\cdot\mid t)\). This is precisely the approach taken by recent work [1; 7; 30], including Wingate et al. [40], who notably learn to compress a single discrete prompt into a soft prompt via gradient descent, similar to this paper.
However, we differ from this prior work in that we are not interested in distilling just a single task, but in amortizing the cost of distillation across a _distribution_ of tasks \(T\). That is, given a task \(t\sim T\), instead of obtaining the distilled model via gradient descent, we use \(G\) to simply _predict_ the gist tokens (\(\approx\) parameters) of the distilled model, in the style of HyperNetworks [13] and HyperTuning [25]. Our "meta" distillation objective is thus (with changes highlighted in blue):
\[\mathcal{L}_{\text{\tiny{G}}}(p_{\text{\tiny{G}}},\mathcal{T})=\mathbb{E}_{t \sim T,x}\left[D_{\text{KL}}(p_{\text{LM}}(y\mid t,x)\parallel p_{\text{\tiny {G}}}(y\mid G(t),x))\right]. \tag{2}\]
In the experiments we describe below, we train on synthetic instruction-following data sampled from instruction-tuned variants of GPT-3 [23; 3]. Thus, these experiments can indeed be seen as a form of context distillation for the GPT-3 series models.
## 3 Learning Gisting by Masking
Having just described the general framework of gisting, here we will explore an extremely simple way of learning such a model: using the LM itself as the gist predictor \(G\). This not only leverages the pre-existing knowledge in the LM, but also allows us to learn gisting by simply doing standard instruction finetuning while modifying the Transformer attention masks to enforce prompt compression. This means that gisting incurs _no_ additional training cost on top of standard instruction finetuning!
Specifically, we add a _single_ gist token \(g\) to the model vocabulary and embedding matrix. Then, given a (task, input) pair \((t,x)\), we concatenate \(t\) and \(x\) with a set of \(k\) copies of \(g\) in between: \((t,g_{1},\dots,g_{k},x)\), e.g. Translate French: <G1> <G2> The cat.2 The model is then restricted such that input tokens _after_ the gist tokens cannot attend to any of the prompt tokens _before_ the gist tokens (but they _can_ attend to the gist tokens). This forces the model to compress the prompt information into the gist prefix, since the input \(x\) (and output \(y\)) cannot attend to the prompt \(t\).
Footnote 2: Again, the gist token is the same from \(g_{1}\) to \(g_{k}\); what changes is the activations on top of each token.
Figure 2 illustrates the required changes. For **decoder-only** LMs such as GPT-3 [3] or LLaMA [33] that normally admit an autoregressive, causal attention mask, we simply mask out the lower-left corner of the triangle (Figure 1(a)). For **encoder-decoder** LMs (e.g. T5; [28]) with a bidirectional encoder followed by an autoregressive decoder, two changes are needed (Figure 1(b)). First, in the encoder, which normally has no masking, we prevent the input \(x\) from attending to the prompt \(t\). But
Figure 2: **Gist Masking. Attention mask modifications for (a) decoder-only and (b) encoder-decoder Transformer LMs to encourage prompt compression into gist tokens <G1> <G2>. In these tables, cell \((r,c)\) shows whether token \(r\) can attend to token \(c\) during self- or cross-attention.**
we must also prevent the prompt \(t\) and gist tokens \(g_{i}\) from attending to the input \(x\), since otherwise the encoder learns different representations depending on the input. Finally, the decoder operates as normal, except during cross-attention, we prevent the decoder from attending to the prompt \(t\).
Overall, these masking changes are extremely simple and can be implemented in roughly 10 source lines of code. See Appendix A for a sample PyTorch implementation which can be used as a drop-in replacement for attention masking in deep learning libraries such as Hugging Face Transformers [41].
## 4 Experiments
### Data
A dataset with a large variety of tasks (prompts) is crucial to learn gist models that can generalize. To obtain the largest possible set of tasks for instruction finetuning, we create a dataset called Alpaca+, which combines the Self-Instruct [36] and Stanford Alpaca [31] instruction tuning datasets, each consisting of \((t,x,y)\) tuples sampled from OpenAI's text-davinci-001 and text-davinci-003 variants of GPT-3, respectively. In total, Alpaca+ has 130,321 examples, with 104,664 unique tasks \(t\), 48,530 unique inputs \(x\), and anywhere from 0-5 inputs per task (0.64 on average).
Note that ~59% of tasks in Alpaca+ have no inputs (e.g. Write me a poem about frogs), in which case we simply omit the input \(x\). While it is less interesting to cache such prompts since they are not input-dependent, they still serve as valuable training signal for learning prompt compression. Overall, while Alpaca+ is noisy and imperfect, Wang et al. [36] and Taori et al. [31] nevertheless show that models trained on such data achieve comparable performance to the original models from which the data is sampled, making this a promising testbed for studying gisting.
From Alpaca+ we hold out 3 validation splits: 1000 **Seen** prompts (with unseen, non-empty inputs); 1000 **Unseen** prompts (with non-empty inputs); and the 252 hand-written **Human** prompts and completions used in Wang et al. [36], of which 83% have non-empty inputs. The latter two splits test generalization to unseen instructions, with the **Human** split posing a stronger out-of-distribution (OOD) challenge: the average training prompt has ~20 tokens, compared to ~26 in the human split.
### Models
To demonstrate gisting across multiple Transformer LM architectures, we experiment with LLaMA-7B [33], a decoder-only GPT-style model with ~7B parameters, and FLAN-T5-XXL [8], an encoder-decoder T5 model [28] with 11B parameters. For each of these models, we train models with a varying number of gist tokens \(k\in\{1,2,5,10\}\), using the modified attention masks described in Section 3. To assess how the model is learning prompt compression, we calibrate performance against upper- and lower-bound baselines and a simple discrete compression strategy:
Positive Control.As an upper bound on performance, we train a model with a single gist token, but without any modifications to the attention mask. This is akin to doing standard instruction finetuning.
Negative Control.As a lower bound on performance, we train a model without access to the task \(t\). This is similar to a "random gist token" baseline, which allows us to measure how the model would do if it failed to compress _any_ information into the gist prefix.
Discrete Compression with TF-IDF.An alternative approach to compression is simply using fewer discrete tokens to express the same task. Achieving compression rates similar to gisting, however, requires compression far beyond any threshold of fluency. Nevertheless, as a baseline, we compute TF-IDF statistics over the set of instructions in the Alpaca+ training set to extract the most relevant keyword in each instruction. Some examples from the training set include (see Appendix G for more):
Write a letter to your boss asking for an increase in salary \(\rightarrow\) salary Given two integers, find their average \(\rightarrow\) average We then replace each instruction in Alpaca+ with the first subword token from each keyword, resulting in compression rates equivalent to a model trained with a single gist token. Similarly to the positive control, we do standard instruction finetuning over Alpaca+ with these compressed instructions.
For full training, data, and compute details, and a link to code, see Appendix B.
### Evaluation
Our evaluation uses a combination of automated metrics and AI- and human-assisted evaluation:
ROUGE-L.We first use ROUGE-L, a simple lexical overlap statistic [20], used in previous open-ended instruction finetuning work [37; 38]. The text-davinci-{001,003} completions are used as references, except for the Human split, where we use the gold-standard human reference.
ChatGPT.Next, we use ChatGPT-3.5 [22] to compare the outputs of our models to the positive control. While this is an imperfect metric, it allows for much faster and cheaper evaluation than human experiments, with an arguably more meaningful semantic signal than ROUGE-L. Recent work has found that ChatGPT can be used for text annotation and evaluation [12; 17; 35] with near-human performance, and similar model-based evaluations have been conducted with recent LMs [5; 11; 31].
Specifically, given a task \(t\), input \(x\), and outputs from two models \((y_{1},y_{2})\) identified only as Assistants A and B, ChatGPT was asked to choose which assistant response is better, explaining its reasoning in Chain-of-Thought fashion [39]. If the models produced the same output, or were equally bad, ChatGPT was allowed to call a tie. We gave examples of desired outputs in ChatGPT's prompt, and randomized the order of presentation between the models for each query to avoid order effects. The full prompt given to ChatGPT and evaluation details are in Appendix C. Using these outputs, we measure the win rate of a model against the positive control: a win rate of 50% indicates that the model is of comparable quality to a model that does no prompt compression.
Human eval.Finally, after prototyping with ChatGPT, we select the best gist compression models and do a Human evaluation on a random subset of 100 of the 252 examples in the Human validation split. For each of the 100 examples, we recruited 3 US or UK-based, English-fluent annotators from Prolific, and asked them to rate model outputs in the same style as the ChatGPT evaluation above (see Appendix D for full details, including the annotation interface). The only difference is that human participants were allowed to select "I Don't Know" in cases where they had inadequate domain knowledge to accurately judge the responses, e.g. if the question was a coding question; we drop these responses (~10%) during analysis. With this human evaluation, we are not only interested in evaluating our final models, but also validating whether ChatGPT can be used as a reliable replacement for human annotation on this task.
## 5 Results
ROUGE-L and ChatGPT evaluations for LLaMA-7B and FLAN-T5-XXL, with varying numbers of gist tokens, are shown in Figure 3. Models were generally insensitive to the number of gist tokens \(k\): compressing prompts into a single token prefix did not substantially underperform larger prefixes. In fact, having too many gist tokens hurts performance in some cases (e.g. LLaMA-7B, 10 gist tokens), perhaps because the increased capacity enables overfitting to the training distribution. Thus, we use the single gist token models for the rest of the experiments in the paper, and report the exact numbers for the single-token models, with the positive, negative, and TF-IDF baselines, in Table 1.
On **Seen** instructions, gist models attain near-identical ROUGE and ChatGPT performance as their positive control models (48.6% and 50.8% win rates for LLaMA-7B and FLAN-T5-XXL, respectively). But we are most interested in generalization to unseen tasks, as measured by the other two splits. On **Unseen** prompts within the Alpaca+ distribution, we again see competitive performance: 49.7% (LLaMA) and 46.2% (FLAN-T5) win rates against the positive controls. It is on the most challenging OOD **Human** split where we see slight drops in win rate to 45.8% (LLaMA) and 42.5% (FLAN-T5), though these numbers are still quite competitive with the positive control. Finally, gist compression is vastly superior to discrete compression; the TF-IDF models in Table 1 only marginally outperform the negative control models across the board.
Table 2 shows the human evaluation results on the Human validation split, comparing the single gist token models to the positive control. Overall, human annotators agree with ChatGPT, with average win rates of 52.3% (vs. 48.0%) for LLaMA-7B and 40.6% (vs. 42.0%) for FLAN-T5-XXL. Importantly, this agreement persists at the level of individual responses. The average pairwise Cohen's \(\kappa\) among human annotators is.24 for LLaMA-7B and.33 for FLAN-T5-XXL. Because humans will often arbitrarily choose one response over another even for samples of equal quality, these numbers
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{**Seen**} & \multicolumn{2}{c}{**Unseen**} & \multicolumn{2}{c}{**Human**} \\
**Model** & & ROUGE-L & ChatGPT \% & ROUGE-L & ChatGPT \% & ROUGE-L & ChatGPT \% \\ \hline LLaMA- & Pos & 58.0 (100) & 50.0 (100) & 48.1 (100) & 50.0 (100) & 27.0 (100) & 50.0 (100) \\
7B & Gist & 57.8 (99.2) & 48.6 (92.4) & 46.6 (91.0) & 49.7 (98.8) & 23.9 (75.4) & 45.8 (84.9) \\ & TF-IDF & 38.1 (24.5) & 34.5 (16.2) & 34.0 (15.6) & 29.3 (15.9) & 16.5 (16.7) & 24.6 (8.6) \\ & Neg & 31.5 (0) & 31.5 (0) & 31.4 (0) & 25.4 (0) & 14.4 (0) & 22.2 (0) \\ \hline FLAN- & Pos & 50.6 (100) & 50.0 (100) & 45.7 (100) & 50.0 (100) & 23.9 (100) & 50.0 (100) \\ T5-XXL & Gist & 48.9 (93.2) & 50.8 (103.9) & 43.8 (88.6) & 46.2 (84.4) & 21.7 (80.9) & 42.5 (63.2) \\ & TF-IDF & 32.0 (25.9) & 35.9 (30.5) & 34.3 (31.3) & 31.0 (22.1) & 13.5 (9.6) & 28.4 (-5.9) \\ & Neg & 25.5 (0) & 29.7 (0) & 29.1 (0) & 25.6 (0) & 12.4 (0) & 29.6 (0) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results for single gist tokens.** ROUGE-L and ChatGPT scores for Gist, TF-IDF, and positive/negative controls. Parentheses are scores normalized between positive/negative controls.
Figure 3: **Varying the number of gist tokens**. ROUGE-L and ChatGPT scores for (a) **LLaMA-7B** and (b) **FLAN-T5-XXL** for different gist tokens. Dashed lines indicate positive and negative control performance. Error bars are 95% exact binomial confidence intervals, splitting ties equally between models [10] and rounding down in favor of the positive control in case of an odd number of ties. Compression factors are calculated by computing the average token length of the validation split and dividing by the number of gist tokens.
are fairly low; however, ChatGPT shows similar levels of agreement, with average \(\kappa\) across each of the 3 human annotators at.29 for both models. These results, paired with the similar overall win rates, show that using ChatGPT is similar to simply recruiting an additional human annotator, and corroborates the broader results in Figure 3. See Appendix D for more human evaluation results, including a breakdown of agreement across annotators, and Appendix G for examples of instructions, model outputs, and human/ChatGPT judgments in the Human validation split.
Since our aim is having the gist models mimic the original models, one might ask how often the gist model is identical to the positive control. Figure A.3 in Appendix E shows how often this happens: for **Seen** tasks (but unseen inputs), the gist model outputs exactly match the positive control nearly 50% of the time. This drops to ~20-25% for **Unseen** tasks and ~10% for the OOD **Human** tasks.
Overall, our results show that gist models can reliably compress prompts, even to some degree those that lie outside the training distribution, especially for decoder-only LMs (LLaMA). Encoder-decoder LMs like FLAN-T5 show slightly worse OOD performance; one possible reason is that gist masking inhibits the bidirectional attention flow in the encoder, which is more challenging to adjust to than simply limiting the history of a decoder LM. Future work will be needed to investigate this hypothesis.
### Failure cases
While the responses from the gist models are often indistinguishable from the positive control, we nevertheless do see failure cases where the gist token models do not perfectly compress the prompt. We give two examples where this occurs in the Human validation split: first, when the instruction contains **specific details**, e.g. phrases that must be copied verbatim into the output (in the snippets below, the names of Assistants A/B are hidden to ChatGPT):
Instruction: Give the news title a category. Pick a category from the list of News & Buzz, Travel, Style, Arts & Culture, Politics, Tech, and Science & Health. <G1> Input: The #Banksy Exhibit in Cambridge, MA is absolutely terrific. Assistant A (FLAN-T5-XML gist): Culture Assistant B (pos control): Arts & Culture
**ChatGPT choice**: B
**ChatGPT reason**: The news title is about an art exhibit, so Assistant B's response of 'Arts & Culture' is more accurate and specific than Assistant A's 'Culture'.
We also see more cases of **runaway generations** that do not occur with the positive control model:
Instruction: Provide an example of how a table of contents can be generated automatically in a LATEX document. Make sure your example contains the necessary commands. <G1> Assistant A (pos control): {tableofcontents }chapter{Introduction} }chapter{Theory} {Experiments} }chapter{Conclusion} Assistant B (LLaMA-7B gist): {tableofcontents }chapter{Introduction} }section{The Basics} }subsection{The Basics} }paragraph{The Basics} }subparagraph{The Basics} } {The Basics} }Subparagraph{The Basics} } {The Basics} }Subparagraph{The Basics} }
**ChatGPT choice**: A
**ChatGPT reason**: Assistant A provides a simple and correct example of generating a table of contents in LaTeX. Assistant B's example is unnecessarily long and does not follow the standard structure of a table of contents.
While it is unclear why only the gist models exhibit this behavior, these issues can likely be mitigated with more careful sampling techniques.
\begin{table}
\begin{tabular}{l|r r r r r r|r r} \hline \hline & & \multicolumn{4}{c|}{**Gist Win \% over Pos**} & \multicolumn{2}{c}{**Agreement (Cohen’s \(\kappa\))**} \\
**Model** & H1 & H2 & H3 & **Human (H1–H3)** & **ChatGPT** & **Human** & **ChatGPT** \\ \hline LLaMA-7B & 51.1 & 44.5 & 59.8 & 52.3 (46.1, 58.4) & 48.0 (38.0, 58.2) &.24 &.29 \\ FLAN-T5-XXL & 43.0 & 41.9 & 37.2 & 40.6 (34.6, 46.8) & 42.0 (32.2, 52.3) &.33 &.29 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Human evaluation results.** Win rate and inter-annotator agreement of single token gist models over positive control according to 3 Human annotators (H1–H3), their average, and ChatGPT (95% confidence intervals in parentheses), for 100 out of 252 examples in the Human validation split.
## 6 Compute, Memory, and Storage Efficiency
Finally, we return to one of the central motivations of this paper: what kind of efficiency gains does gisting enable? To answer this question, we compare the compute requirements (CUDA wall time and FLOPs) during inference with the single-token gist models using different caching strategies:
1. **No caching**: just encoding the full prompt \(t\).
2. **Instruction caching**: caching the activations of the uncompressed instruction \(t\) (keys and values for all layers) into what is called the **KV cache**. This is the most common caching behavior for Transformer inference [4; 26] and is supported in libraries like Hugging Face Transformers [41]. However, it is only applicable to _decoder-only_ models, since in models with bidirectional encoders like T5, the instruction representations \(t\) depend on the input \(x\).
3. **Gist caching**: Compressing the prompt into the gist prefix \(G(t)\).
Table 3 displays the results of profiling a single forward pass through the model (i.e. one step of decoding with a single input token) with PyTorch [24] 2.0, averaged across the 252 Human instructions. Gist caching improves significantly over unoptimized models, with 40% FLOPs savings and 4-7% lower wall time for both models. Note that at these (relatively) small scales, the wall time improvements are smaller than the FLOPs reductions because much of the inference latency is caused by moving tensors from high-bandwidth memory (HBM) to the chip compute cores, i.e. what Pope et al. [26] call the "memory time". Larger sequence lengths and batch sizes will lead to additional speedups, as the overall latency becomes dominated by the actual matrix computations.
For LLaMA-7B, the picture is more nuanced when compared to caching the full instruction. Compute improvements of gist caching are smaller: a negligible decrease in FLOPs (0.11%) and a modest 1% speedup in wall time. This is because the FLOPs required for a Transformer forward pass is dominated by processing of the new input tokens, rather than self-attention with the KV cache. For example, a forward pass through LLaMA-7B with a single input token and a _2000-length_ KV cache is only ~10% more expensive than the same forward pass with no KV cache--see Appendix F for more details. Nevertheless, this small decrease in FLOPs leads to a disproportionate decrease in wall time (1%), likely because the self-attention computations are slower relative to their FLOPs contribution.
At large scales and with heavily reused prompts, a 1% latency speedup can still accumulate into significant cost savings over time. More importantly, however, there are key benefits of gist caching over instruction caching besides latency: compressing 26 tokens into 1 gives more space in the input context window, which is bounded by absolute position embeddings or GPU VRAM. For example, for LLaMA-7B, each token in the KV cache requires 1.05 MB storage.3 While the total contribution of the KV cache relative to the memory needed for LLaMA-7B inference is negligible at the prompt lengths we tested, an increasingly common scenario is developers caching many prompts across a large number of users, where storage costs quickly add up. In these scenarios, gisting allows caching of up to **26x** more prompts than full instruction caching, using the same amount of storage!
Footnote 3: 4 (fp32 bytes) \(\times\) 2 (keys+values) \(\times\) 32 (num layers) \(\times\) 32 (num attn heads) \(\times\) 128 (head dim) = 1.05 MB.
\begin{table}
\begin{tabular}{l l|r r r r|r} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Metric**} & \multicolumn{3}{c|}{**Caching Strategy**} & \multicolumn{2}{c}{Absolute/**Relative \(\Delta\)**} \\ & & **None** & **Instruction\({}^{\text{i}}\)** & **Gist\({}^{\text{ii}}\)** & **vs. None** & **vs. Instruction** \\ \hline LLaMA-7B & CUDA & 23.4 \(\pm\) 6.88 & 22.1 \(\pm\) 6.58 & **21.8 \(\pm\) 6.55** & 1.60 (1.29, 1.90) & 2.21 (.140,.302) \\ & time (ms) \(\downarrow\) & & & & 6.8\% (5.5, 8.1) & 1.0\% (.63, 1.4) \\ & GFLOPs \(\downarrow\) & 915 \(\pm\) 936 & 553 \(\pm\) 900 & **552 \(\pm\) 899** & 362 (337, 387) &.607 (.448,.766) \\ & & & & & 40\% (37, 42) &.11\% (.08,.14) \\ \hline FLAN-T5-XXL & CUDA & 31.0 \(\pm\) 5.31 & N/A & **29.7 \(\pm\) 5.07** & 1.30 (1.10, 1.51) & N/A \\ & time (ms) \(\downarrow\) & & & & 4.2\% (3.6, 4.9) & \\ & GFLOPs \(\downarrow\) & 716 \(\pm\) 717 & N/A & **427 \(\pm\) 684** & 289 (268, 310) & N/A \\ & & & & & 40\% (37, 43) & \\ \hline \hline \end{tabular}
\({}^{\text{i}}\)Average KV Cache Length = 26 \({}^{\text{ii}}\)Average KV Cache Length = 1
\end{table}
Table 3: **Gist efficiency improvements**. For different caching strategies (**None**, **Instruction**, **Gist**), we record CUDA wall time and GFLOPs (\(\pm\) std dev). Then we report the absolute/relative improvement of **Gist Caching** over these alternative strategies, with 95% confidence intervals in parentheses.
Additional Related Work
Gisting builds upon past work in (parameter-efficient) instruction finetuning and context distillation, as discussed in Section 2. Here we will outline some additional connections to related work:
Adapting LMs without Backprop.As mentioned in Section 2, listing can be viewed as a way to adapt LMs _without_ gradient descent, by predicting the the prefix (\(\approx\) parameters) of the adapted model. Similarly, HyperTuning [25] predicts the prefix of a model for a task using (input, output) pairs for that task. If HyperTuning is a "few-shot" adaptation method, predicting the prefix from few-shot examples, then gisting can be seen as a "zero-shot" version, predicting the prefix from the language instruction alone. Gisting also has the additional benefit over HyperTuning of being conceptually simpler: instead of training a separate LM to predict the prefix, the LM itself is used as the HyperNetwork [13], with only a tiny change to the attention masks needed for prompt compression.
Compression and memory in transformers.The idea of "compressing" prompts is closely related to previous attempts at storing past representations to improve memory and long-range sequence modeling in Transformers [9; 21; 27; 42; 43]. In particular, the Compressive Transformer [27] compresses transformer activations into a smaller _compressed memory_ using a learned convolutional operator. Gisting can be seen as a variant of the Compressive Transformer with 3 key differences. First, the compression function is not a separately learned function, but the LM's own self-attention mechanism, controlled by an input-dependent gist token. Second, the compression function is learned jointly with instruction finetuning via the standard language modeling loss, not a specialized auxiliary reconstruction loss as in [27]. Finally, our task of interest is not long-range sequence modeling, but caching and reusing instruction following prompts for efficiency reasons.
Sparse attention mechanisms.By restricting attention masks, gisting draws inspiration from efficient/sparse attention methods in Transformers (see [32] for review). For example, some sliding window attention mechanisms [2; 6] may remove the need to keep the entire KV cache around during decoding, but these more general methods are not optimized for caching arbitrary parts of the input sequence of varying length, which prompt compression demands. In light of this, gisting can be viewed as an input-dependent sparse attention mechanism specifically aimed at improving efficiency of the prompting workflow now commonly used in LMs.
## 8 Discussion and Limitations
In this paper we presented gisting, a framework for prompt compression in LMs, and a simple way of implementing gist models by modifying Transformer attention masks that incurs no additional cost over standard instruction finetuning. Gisting can be seen either as a modified form of instruction finetuning or a method for (meta-)context distillation of an LM. Gist models can compress unseen OOD prompts up to 26x while maintaining output quality, resulting in up to 40% FLOPs reduction and 4.2% wall clock speedups over unoptimized models, and enabling new methods for prompt caching in encoder-decoder models. While wall-time improvements for decoder-only LMs are smaller, gisting nevertheless enables caching 1 _order of magnitude_ (26x) more prompts relative to full instructions.
Gisting is a promising method for improving LM efficiency, but carries some limitations. While gisting seems to succeed in capturing the "gist" of instructions (hence the name), achieving such compression necessarily results in some loss of nuance of the original instruction; Section 5.1 illustrates a concrete failure case we observed. Since the behavior of LMs on edge cases is already not well understood, it is _especially_ important for practitioners to carefully evaluate whether the compute/accuracy tradeoff of gisting is sufficiently safe and robust for their use case, _before_ deployment.
Nevertheless, we believe gisting opens several interesting directions for future work. First, the masking method presented here can be easily integrated into existing instruction finetuning workflows, but another exciting approach is to retrofit an existing, frozen LM by training a _smaller_ model to compress prompts, if finetuning the larger LM is inconvenient. Second, the largest efficiency gains from gisting will result from compressing very long prompts, for example \(k\)-shot prompts for large \(k\) that may even exceed a single context window. Finally, compression performance can likely be improved through "gist pretraining": first learning to compress arbitrary spans of natural language, before then learning prompt compression. Such objectives could be devised by inserting gist tokens into other pretraining objectives, perhaps during language modeling or T5's span corruption objective.
## Acknowledgments and Disclosure of Funding
We thank the Stanford Alpaca team, especially Xuechen Li, with assistance with Alpaca data and finetuning, and Gabriel Poesia for the codebase used to collect human evaluations. Additional thanks to Xiuyu Li for help debugging and ensuring reproducibility of the open-source codebase. JM is supported by the Open Philanthropy AI Fellowship and XLL is supported by the Stanford Graduate Fellowship.
## References
* [1] A. Askell, Y. Bai, A. Chen, D. Drain, D. Ganguli, T. Henighan, A. Jones, N. Joseph, B. Mann, N. DasSarma, et al. A general language assistant as a laboratory for alignment. _arXiv preprint arXiv:2112.00861_, 2021.
* [2] I. Beltagy, M. E. Peters, and A. Cohan. Longformer: The long-document transformer. _arXiv:2004.05150_, 2020.
* [3] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. _Advances in Neural Information Processing Systems_, 33:1877-1901, 2020.
* [4] C. Chen. Transformer inference arithmetic. [https://kip.ly/blog/transformer-inference-arithmetic/](https://kip.ly/blog/transformer-inference-arithmetic/), 2022.
* [5] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90% ChatGPT quality. [https://vicuna.lmsys.org/](https://vicuna.lmsys.org/), 2023.
* [6] R. Child, S. Gray, A. Radford, and I. Sutskever. Generating long sequences with sparse transformers. _arXiv preprint arXiv:1904.10509_, 2019.
* [7] E. Choi, Y. Jo, J. Jang, and M. Seo. Prompt injection: Parameterization of fixed inputs. _arXiv preprint arXiv:2206.11349_, 2022.
* [8] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma, et al. Scaling instruction-finetuned language models. _arXiv preprint arXiv:2210.11416_, 2022.
* [9] Z. Dai, Z. Yang, Y. Yang, J. G. Carbonell, Q. Le, and R. Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 2978-2988, 2019.
* [10] W. J. Dixon and F. J. Massey Jr. _Introduction to statistical analysis_. McGraw-Hill, 1951.
* [11] X. Geng, A. Gudibande, H. Liu, E. Wallace, P. Abbeel, S. Levine, and D. Song. Koala: A dialogue model for academic research. Blog post, April 2023. URL [https://bair.berkeley.edu/blog/2023/04/03/koala/](https://bair.berkeley.edu/blog/2023/04/03/koala/).
* [12] F. Gilardi, M. Alizadeh, and M. Kubli. ChatGPT outperforms crowd-workers for text-annotation tasks. _arXiv preprint arXiv:2303.15056_, 2023.
* [13] D. Ha, A. Dai, and Q. V. Le. HyperNetworks. In _International Conference on Learning Representations_, 2017.
* [14] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark, et al. Training compute-optimal large language models. _arXiv preprint arXiv:2203.15556_, 2022.
* [15] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly. Parameter-efficient transfer learning for NLP. In _International Conference on Machine Learning_, pages 2790-2799. PMLR, 2019.
* [16] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen. LoRA: Low-rank adaptation of large language models. In _International Conference on Learning Representations_, 2022.
* [17] F. Huang, H. Kwak, and J. An. Is ChatGPT better than human annotators? Potential and limitations of ChatGPT in explaining implicit hate speech. _arXiv preprint arXiv:2302.07736_, 2023.
* [18] B. Lester, R. Al-Rfou, and N. Constant. The power of scale for parameter-efficient prompt tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 3045-3059, 2021.
* [19] X. L. Li and P. Liang. Prefix-tuning: Optimizing continuous prompts for generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 4582-4597, 2021.
* [20] C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In _Text summarization branches out_, pages 74-81, 2004.
* [21] P. J. Liu, M. Saleh, E. Pot, B. Goodrich, R. Sepassi, L. Kaiser, and N. Shazeer. Generating Wikipedia by summarizing long sequences. In _International Conference on Learning Representations_, 2018.
* [22] OpenAI. Introducing ChatGPT. [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt), 2022.
* [23] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. In _Advances in Neural Information Processing Systems_, pages 27730-27744, 2022.
* [24] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. _Advances in Neural Information Processing Systems_, 32, 2019.
* [25] J. Phang, Y. Mao, P. He, and W. Chen. HyperTuning: Toward adapting large language models without back-propagation. _arXiv preprint arXiv:2211.12485_, 2022.
* [26] R. Pope, S. Douglas, A. Chowdhery, J. Devlin, J. Bradbury, A. Levskaya, J. Heek, K. Xiao, S. Agrawal, and J. Dean. Efficiently scaling transformer inference. _arXiv preprint arXiv:2211.05102_, 2022.
* [27] J. W. Rae, A. Potapenko, S. M. Jayakumar, C. Hillier, and T. P. Lillicrap. Compressive transformers for long-range sequence modelling. In _International Conference on Learning Representations_, 2020.
* [28] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified Text-to-Text Transformer. _The Journal of Machine Learning Research_, 21(1):5485-5551, 2020.
* [29] J. Rasley, S. Rajbhandari, O. Ruwase, and Y. He. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 3505-3506, 2020.
* [30] C. Snell, D. Klein, and R. Zhong. Learning by distilling context. _arXiv preprint arXiv:2209.15189_, 2022.
* [31] R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model. [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca), 2023.
* [32] Y. Tay, M. Dehghani, D. Bahri, and D. Metzler. Efficient transformers: A survey. _ACM Computing Surveys_, 55(6):1-28, 2022.
* Touvron et al. [2023] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, F. Azhar, et al. LLaMA: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023.
* Vaswani et al. [2017] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. _Advances in Neural Information Processing Systems_, 30, 2017.
* Wang et al. [2023] J. Wang, Y. Liang, F. Meng, H. Shi, Z. Li, J. Xu, J. Qu, and J. Zhou. Is ChatGPT a good NLG evaluator? A preliminary study. _arXiv preprint arXiv:2303.04048_, 2023.
* Wang et al. [2022] Y. Wang, Y. Kordi, S. Mishra, A. Liu, N. A. Smith, D. Khashabi, and H. Hajishirzi. Self-Instruct: Aligning language model with self generated instructions. _arXiv preprint arXiv:2212.10560_, 2022.
* Wang et al. [2022] Y. Wang, S. Mishra, P. Alipoormolabashi, Y. Kordi, A. Mirzaei, A. Naik, A. Ashok, A. S. Dhanasekaran, A. Arunkumar, D. Stap, et al. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_, pages 5085-5109, 2022.
* Wei et al. [2022] J. Wei, M. Bosma, V. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le. Finetuned language models are zero-shot learners. In _International Conference on Learning Representations_, 2022.
* Wei et al. [2022] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. H. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. In _Advances in Neural Information Processing Systems_, 2022.
* Wingate et al. [2022] D. Wingate, M. Shoeybi, and T. Sorensen. Prompt compression and contrastive conditioning for controllability and toxicity reduction in language models. In _Findings of the Association for Computational Linguistics: EMNLP 2022_, pages 5621-5634, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics. URL [https://aclanthology.org/2022.findings-emnlp.412](https://aclanthology.org/2022.findings-emnlp.412).
* Wolf et al. [2020] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. Rush. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_, pages 38-45, Online, Oct. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6. URL [https://aclanthology.org/2020.emnlp-demos.6](https://aclanthology.org/2020.emnlp-demos.6).
* Wu et al. [2022] Y. Wu, M. N. Rabe, D. Hutchins, and C. Szegedy. Memorizing transformers. In _International Conference on Learning Representations_, 2022.
* Zhang et al. [2021] H. Zhang, Y. Gong, Y. Shen, W. Li, J. Lv, N. Duan, and W. Chen. Poolingformer: Long document modeling with pooling attention. In _International Conference on Machine Learning_, pages 12437-12446. PMLR, 2021.
Example PyTorch Implementation of Gist Masking
See Listing A.1 for a sample annotated implementation of gist masking. This PyTorch implementation relies on basic NumPy-style tensor operations and can thus be adapted easily to a framework like JAX.
## Appendix B Data, Training, Evaluation, and Compute Details
Code, data, and model checkpoints are available at [https://github.com/jayelm/gisting](https://github.com/jayelm/gisting).
Data.For LLaMA-7B, we used a maximum sequence length of 512 tokens during training and evaluation, except with the Human validation split, where the maximum length was increased to 768 (the Human instructions are longer). Examples longer than this length are truncated from the end. For FLAN-T5-XXL, we set a maximum input length (task \(t\) + input \(x\)) of 128 and a maximum output length of 256, except again for the Human split, where the maximum input and output lengths were both set to 384. For both models, we set a maximum generation length of 512 tokens. These lengths were chosen such that \(<1\%\) of examples across the board were truncated during training and evaluation for both models.
Training.Full hyperparameters for training runs are located in Table A.1. These parameters were adapted from previous published work finetuning LLAMA/FLAN-T5. For LLaMA-7B, parameters are identical to those used in training Alpaca Taori et al. [31]. For FLAN-T5-XXL, parameters are identical to those used in training T\(k\)-Instruct[37], except with a 5e-5 learning rate, as used in the T\(k\)-Instruct GitHub repository,4 rather than the 1e-5 learning rate in the paper.
Footnote 4: [https://github.com/yizhongw/Tk-Instruct/blob/lab6fad/scripts/train_tK_instruct.sh](https://github.com/yizhongw/Tk-Instruct/blob/lab6fad/scripts/train_tK_instruct.sh)
LLaMA-7B was trained for 3000 steps, while FLAN-T5-XXL was trained for 16000 steps. Since there are about 130k examples in Alpaca+, given the batch sizes in Table A.1 this corresponds to about ~3 epochs and ~2 epochs of training, respectively. These numbers, again, are identical to Taori et al. [31] and Wang et al. [36]. We note that the training time is relatively flexible; for example, we did not see substantial gains training beyond 1 epoch for FLAN-T5-XXL.
Evaluation.During evaluation and benchmarking, we simply greedily decoded the most likely sequence. We saw limited gains from beam search with beam size \(B=4\).
Compute.Experiments were run on a cluster machine with 4xA100-SXM4-80GB NVIDIA GPUs, 480GB RAM, and 16 CPUs, using PyTorch 2.0 [24], Hugging Face Transformers [41], and DeepSpeed [29]. Training runs take about ~7 hours to complete for LLaMA-7B and ~25 hours for FLAN-T5-XXL. Benchmarking results were obtained on the same machine, but using just 1 of the A100 GPUs.
importtorch
2
3defreverse_cumsum(x:torch.Tensor)->torch.Tensor:
5"""Cumulativesumfromrighttoleft.
6
7 See [https://github.com/pytorch/pytorch/issues/33520](https://github.com/pytorch/pytorch/issues/33520).
8"""
9returnx+torch.sum(x,dim=-1,keepdims:True)-torch.cumsum(x,dim=-1)
10
11
12defmake_mask_pre_first_gist(inputs:torch.Tensor,gist_token:int,dtype=torch.int64)->torch.Tensor:
13"""Returnsaxhwherealltokenspriortothefirstgisttokenaremaskedout.
14
15Args:
16inputs:aTensorofinputtokenswherethelastdimensionisthesequencelength.
17gist_token:theintegeridofthegisttoken.
18dtype:thedtypeofthesmask,defaultint64.
19Returns:
20Therequestedmask.
21
22return((inputs:=gist_token).cumsum(-1))>=1).type(dtype)
23
24
25defmake_mask_post_last_gist(inputs:torch.Tensor,gist_token:int,dtype=torch.int64)->torch.Tensor:
26"""Returnsaxhwherealltokensafterthelastgisttokenaremaskedout.
27
28Computesthesameasmask_pre_first_gist_token,butreversesthesequencebeforeandafterthecumsum.
29
30Args:
31inputs:aTensorofinputtokenswherethelastdimensionisthesequencelength.
32gist_token:theintegeridofthegisttoken.
33dtype:thedtypeofthesmask,defaultint64.
34Returns:
35Therequestedmask.
36"""
37return(reverse_cumsum(inputs:=gist_token)>=1).type(dtype)
38
39
30defmake_gist_mask(inputs:torch.Tensor,gist_token:int,pad_token:int,dtype=torch.int64)->torch.Tensor:
31"""Createsagistmaskfromsuppliedinputsandgist/padtokens.
32
33Tokensafterthelastgistcannotattodtokenspriortothefirstgist.Additionally,tokensbefore=thelastgistcannotattodtokensafterthelastgist.
34
35Thegistmaskisbroadcastedto40(withasingletom1)forcompatibilitywithmulti-headedattention (wheredim1istheheaddimension).
36
37Args:
38inputs:aTensorofshape(batch_size,seq_len)inputtokens.
39
40gist_token:theintegeridofthegisttoken.
41pad_token:theintegeridofthepadtoken.inputs=pad_tokenaremaskedout.
42dtype:thedtypeofthesmask,defaultint64.
43
44Returns:
45Therequestedmaskofshape(batch_size,1,seq_len,seq_len)
46"""
47#Attentionmaskfortokensbeforethelastgisttoken.
48pre_gist_mask=make_mask_post_last_gist(inputs,gist_token,dtype=torch.bool)[:,None,None]
49#Attentionmaskfortokensafterthelastgisttoken.
50post_gist_mask=make_mask_pre_first_gist(inputs,gist_token,dtype=torch.bool)[:,None,None]
51#Constructtimemasksbypermuttingtotimediension.
52pre_gist_time_mask=pre_gist_mask.permute((0,1,3,2))
53
54mask=torch.where(pre_gist_time_mask,pre_gist_mask,post_gist_mask)
55mask=mask&(inputs!=pad_token)[:,None,None)#Maskoutpadtokens.
56
57returnmask.type(dtype) ```
Listing A.1: **Sample PyTorch implementation of gist masking.**Details of ChatGPT Evaluation
Given a user instruction and two AI assistant responses, your job is to select the response that is more helpful, and give reasons why. Judge responses holistically, paying special attention to whether the response (1) correctly follows the provided instruction and (2) is factually accurate. If both responses are equal, you may call a tie, but please use this response sparingly.
Example 1:
Instruction: Given the name of a city, tell me the name of the state it is in.
Input: Los Angeles
Assistant A: California
Assistant B: Wyoming
Your output should be:
("reason": "Los Angeles is in California. Only Assistant A is correct.", "choice": "A")
Example 2:
Instruction: Give me some examples of fun things to do with the kids on a weekend.
Assistant A: For example, what should I do with my kids on a weekend? What if I'm in a city? What if I'm on the beach?
Assistant B: You could try going to the zoo or a museum. If you want to stay at home, you could play
board games or bake cookies.
Your output should be:
("reason": "Assistant A doesn"t seem to be following the instruction. Assistant B provides helpful
information with examples.", "choice": "B")
Example 3:
Instruction: Write me a python function that prints "Hello, World".
Assistant A: def hello_world(): print("Hello!")
Assistant B: "Hello, World"
Your output should be:
("reason": "Assistant B Just wrote "Hello, World". Assistant A actually wrote a Python function, even
if it doesn't exactly print the right thing, so overall Assistant A is better.", "choice": "A")
Example 4:
Instruction: Translate the following sentence from English to French.
Input: I like cats.
Assistant A: He (Sustan los gatos.
Assistant B:
Your output should be:
("reason": "Both assistants got the language wrong.", "choice": "tie")
Your response should only be in the JSON format above; THERE SHOULD BE NO OTHER CONTENT INCLUDED IN
YOUR RESPONSE. Write the "reason" key before writing the "choice" key, so that you think step-by-step
before making your decision. KEEP YOUR REASONING BRIEF.
Listing A.2: **Full prompt given to ChatGPT for evaluation**. This prompt populates the system field in the ChatGPT API; the actual example to be evaluated is formatted like the examples in the prompt above, then given as the sole input in the user field.
We used the ChatGPT API, specifically the chatgpt-3.5-turbo engine, to run our ChatGPT evaluation experiments over a period of 2 weeks between March 27 and April 7, 2023.
The full prompt given to ChatGPT is located in Listing A.2, and contains 4 examples of desired output from ChatGPT, including preferring factually accurate responses (Example 1), preferring responses that follow the instruction, even if imperfect (Examples 2 and 3), and examples of modelsbeing equally wrong (Examples 4). For the two models under comparison, we randomized the order of presentation of each model as either Assistant A or Assistant B, to avoid order effects.
ChatGPT was instructed to only respond in JSON format, outputting first a reason key followed by a choice key, to encourage chain-of-thought reasoning [39]. On rare occasions (\(<0.25\%\) of the time), ChatGPT would output a response that did not conform to the requested JSON format (e.g. it would just give an unstructured paragraph). In these cases we manually went through and converted these responses to JSON, without altering ChatGPT's reasoning.
In total, we collected -22.5k judgments from ChatGPT for an estimated cost of $29.28. The full outputs for each model across the Alpaca+ validation splits, as well as ChatGPT's responses and choices, are available in the code link above.
## Appendix D Additional Human Evaluation Details and Results
### Experimental Details
For each of the 100 examples randomly selected from the Human validation split, we recruited 3 US or UK-based, English-fluent annotators from Prolific, an online crowdsourcing platform. Experiments were IRB approved under a generic human experiments IRB given to the authors.
The annotation interface given to Prolific crowdworkers is located in Figure A.1. To verify task comprehension, participants were shown two simple examples before the main body of the task (Figure A.2), and were required to answer correctly before proceeding. We compensated participants USD $14.35/hour for an estimated cost (including Prolific fees) of USD $141.64.
### Additional Results
See Table A.2 for a breakdown of Cohen's \(\kappa\) between human annotators and ChatGPT. We used a weighted version of Cohen's \(\kappa\) with linear weights, since the response scale is ordinal (e.g. "tie" is a closer judgment to "pos control win" than "gist win").
## Appendix E Exact Match Results
See Figure A.3 for a plot of exact match rates for the gist and positive control models (as measured by exact string match).
## Appendix F Additional FLOPs details
The FLOPs required for a Transformer formward pass with varying KV cache lengths can be estimated by modifying existing equations to account for self-attention back to the KV cache. As an example, we modify the FLOPs equations used for computing FLOPs in the Chinchilla paper (Appendix F in [14]). Let **seq_len_with_past** = seq_len + kv_cache_len. Then the modified Transformer FLOPs equations are:
**Embeddings**
* \(2\times\text{seq\_len}\times\text{vocab\_size}\times\text{d\_model}\)
## Appendix A Appendix
Figure A.1: **Annotation interface given to Prolific crowdworkers.**
Figure A.2: **Example items given to humans before the start of the task.**
**Attention (Single Layer)**
* _Key, query, and value projections_: \(2\times 3\times\text{seq\_len}\times\text{d\_model}\times\text{(key\_size}\times\text{num\_heads)}\)
* _Key and query logits_: \(2\times\text{seq\_len}\times\text{seq\_len\_with\_past}\times\text{(key\_size} \times\text{num\_heads)}\)
* _Softmax_: \(3\times\text{num\_heads}\times\text{seq\_len}\times\text{seq\_len\_with\_past}\)
* _Softmax @ query reductions_: \(2\times\text{seq\_len}\times\text{seq\_len\_with\_past}\times\text{(key\_size} \times\text{num\_heads)}\)
* _Final linear_: \(2\times\text{seq\_len}\times\text{(key\_size}\times\text{num\_heads)}\times\text{d\_model}\)
**Dense Block**
* \(2\times\text{seq\_len}\times\text{(d\_model}\times\text{fw\_size}+\text{d\_model} \times\text{fw\_size)}\)
**Final Logits**
* \(2\times\text{seq\_len}\times\text{d\_model}\times\text{vocab\_size}\)
**Total Forward Pass FLOPs**
* embeddings + num_layers \(\times\text{(attention\_single\_layer+dense\_block)+final\_logits}\)
It can be seen that only 3 operations in each attention layer depend on the KV cache size, and they take up a relatively insignificant amount of FLOPs. As an illustrative example, Figure A.4 shows the relative FLOPs contributions within a single layer of attention for LLaMA-7B, assuming a _2000-length_ KV cache and a single input token. Operations dependent on the KV cache constitute at most \(\sim\)10% of the total attention layer FLOPs; the rest are used in KQV projections and dense layers for processing the single new input token.
Given a KV cache compression rate of 26, as observed in our Human validation split, the Chinchilla equations predict a relative improvement of Gist caching of 0.12%. This is extremely close to the 0.11% improvement actually observed in Table 3. These results show that optimizing the KV cache size does not actually lead to huge compute speedups during Transformer inference, at least for relatively small prompt lengths. Nevertheless, there are clear memory and storage benefits to be gained from prompt compression, as discussed in Section 6.
## Appendix G Additional Examples of Model Outputs
See Listing A.3 for additional examples of instructions, inputs, model completions (for LLaMA-7B and FLAN-T5-XXL across Pos, Gist, TF-IDF, and Neg conditions), human judgments, and ChatGPT judgments (+reasons). Examples are not cherry-picked; we enumerate the first 10 examples of the Alpaca+ Human validation split.
Figure A.4: **FLOPs for each operation involved in a single layer of self attention with a 2000-length KV cache**, according to the Chinchilla estimates for LLaMA-7B. At most 9.6% of FLOPs can be optimized away by reducing the size of the KV cache.
Figure A.3: **Exact match rates**. Rate at which the 1 token gist models give the same output as the positive control (exact string match). Error bars are 95% exact binomial confidence intervals.
## Example 1
Instruction: Write a good Twitter Bio. Try to cover all the provided main points in your generated bio. <B1> Input: Name: Jane..
**ChatGPF** **reason** (Pos vs dist): Assistant A only mentions Paris and provides more detailed information about the city, while Assistant B mentions several cities but doesn't provide much detail about any of them. Therefore, Assistant A is more helpful. { Assistant A: dist, Assistant B: Pos}
### Example 3
Instruction: Give the news title A category. Pick a category from the list of News & Buzz, Travel, Style, Arts & Culture, Politics, Tech, and Science & Health. <H>
Input: The Shanks Exhibit in Cambridge, NA is absolutely terrific.
Reference Output: Arts & Culture
TF-IMP Word: Du
**LLaMa-7B**
Pos: Arts & Culture
**dist: Arts & Culture
**TF-IMP: No such bias detected.**
Neg: The Banks Exhibit in Cambridge, NA is absolutely fantastic.
**Mean Choice (Pos vs dist): The Tie Tie**
**ChatGPF** **chice** (Pos vs dist): The news title as related to Arts & Culture. (Assistant A: dist, Assistant B: Pos)
**FLAN-T5-XXL**
Pos: Arts & Culture
**Gist: Culture**
**TF-IMP: The Banks Exhibit in Cambridge, NA is absolutely remarkable.
Neg: The Banks Exhibit in Cambridge, NA is remarkable.
**Mean Choice** (Pos vs dist): Pos Pos Pos
**ChatGPF** **chice** (Pos vs dist): Pos
**ChatGPF** **reason** (Pos vs dist): The news title is about an art exhibit, so Assistant B's response of 'Arts & Culture' is more accurate and specific than Assistant A's 'Culture'. (Assistant A: dist, Assistant B: Pos)
### Example 4
Instruction: You will be tested on your knowledge of classic uitticisms and aphorisms by completing the given aphorism. Write the original quote as the answer. <H>
Input: the things are infinite.
**Reference Output:** the universe and human stupidity; and I'm not sure about the universe.
(attributed to Albert Einstein)
**T-IMP word: A**
**LLaMa-7B**
Pos: The number of integers and the number of uitticisms.
**Gist:** "To things are infinite, the universe and human stupidity."
**- Albert Einstein**
**TF-IMP: The number of integers and the number of real numbers.
**Reg:** To things are infinite: the number of integers and the number of primes.
**Mean choice (Pos vs dist): Gist Gist Gist Gist**
**ChatGPF** **chice** (Pos vs dist): Assistant A correctly completes the quote with the original author, while Assistant B's response is not factually accurate. (Assistant A: dist, Assistant B: Pos)
**FLAN-T5-XXL**
Pos: "Time and space are infinite."
**Gist:** "Factions and consequences are infinite."
**TF-IMP: The first sentence is a statement, while the second one is an opinion.
**Reg:** The two things that are infinite are time and space.
**Mean choice (Pos vs dist): Pos Pos Pos
**ChatGPF** **chice** (Pos vs dist): The
**ChatGPF** **reason** (Pos vs dist): Both assistants did not provide the correct completion of the given aphorism. The original quote is'Two things are infinite: the universe and human stupidity; and I'm not sure about the universe.' Therefore, neither assistant is helpful. (Assistant A: dist, Assistant B: Pos)
### Example 5
Instruction: Write the HMR syntax for the required action. <H>
Input: adding an image to the page
Reference Output: Gimg src="url" alt="alternatetext">
TF-IMP Word: syntax
**LLaMa-7B**
Pos: Gimg src="image.jpg"
Gist: compounds
**TF-IMP:** cing src="image.jpg"
Neg: Gimg src="image.jpg"
Neg: Gimg src="image.jpg"
Neg: IRR Pos
**ChatGPF** **chice** (Pos vs dist): Pos
**ChatGPF** **reason** (Pos vs dist): Assistant A provides the correct HMR syntax for adding an image to the page, while Assistant B doesn't provide any output. (Assistant A: Pos, Assistant B: Gist)
**FLAN-T5-XXL**
Pos: Gimg src="image.jpg"
Gist: Gimg src="image.jpg"
Neg: Adding an image to the page.
**Mean choice** (Pos vs dist): Tie Gist
**ChatGPF** **chice** (Pos vs dist): The
**ChatGPF** **reason** (Pos vs dist): Both assistants provided the correct HMR syntax for adding an image to the page. (Assistant A: Pos, Assistant B: Gist)
## Example 6
Instructions: Write a to-do list based on the given information. GCD> Input: Daily lists in the pharmacy
Deference Output: - Police sidewalk and entrance for litter
- Assess window displays to make sure they are not outdated or faded
- Clean all doors and remove outdated and unnecessary signs
- Fill bags at each checkout area
- Clean and straighten checkout counters
- Tidy up seating/suiting area
- Face (pull product forward) in all departments
- Check one department a day for short and outdated product
- Empty all train receptacles
- Vacuum (when the store is closed or only if customer traffic allows for it during store hours)
TF-IDF Word: information
### LLaM-7b
Pos: - Check inventory of medications
- Reorder medications
- Prepare patient profiles
- Enter patient information into the computer system
- Verify patient information
- Monitor expiration dates of medications
- Clean and resducc tables
- Answer telephone and provide assistance as needed
- Process prescriptions
- Refill prescriptions
- Monitor inventory of supplies
- Stock shelves
- Clean and resducc tables
- Answer telephone and provide assistance as needed
dist. - Check inventory and order new supplies
- Refill prescriptions
- Answer customer questions
- Process insurance paperwork
- Clean the pharmacy
- Stock shelves
- Count pills
- Take customer phone calls
- Monitor expiration dates on medications
- Update patient profiles
- Perform administrative tasks
- Count caders about prescriptions
- Answer questions from customers
- Process payments
- Maintain inventory
- Monitor controlled substances
- Perform quality control tests
- Review patient profiles
- Contact doctors about prescriptions
- Answer questions from customers
- Process payments
- Maintain inventory
- Monitor expiration dates on medications
- Update patient profiles
- Perform administrative tasks
- Review patient charts
- Contact doctors about prescriptions
- Answer questions from customers
- Process payments
[MISSING_PAGE_POST]
Checking empty dates of medications
2.
No-stocking shelves
3.
Entering prescription information into the computer system
4.
Answering customer questions
5.
Proparing and labeling prescriptions
6.
Scanning barcodes to check for potential drug interactions
7.
Processing insurance claims
8.
Maintaining inventory levels of medications
9.
Cleaning and organizing the pharmacy
10.
Updating patient profiles with new information
Neg: The pharmacist's daily tasks include checking and filling prescriptions, maintaining inventory, and counseling patients on the proper use of medications.
Human choice (Pos vs dist): Pos Tie dist
ChatFT choice (Pos vs dist): Pos
ChatFT person (Pos vs dist): Assistant A provides a concise and comprehensive list of daily tasks in the pharmacy, while Assistant
B:'s list is repetitive and includes irrelevant tasks like counting pills and performing quality control tests. Assistant A is more helpful. (Assistant A: Pos, Assistant B: dist)
Flan-Tg-Xkl
Pos: - Check inventory - fill prescriptions - Prepare medication - Label and package medications - Check expiration dates - Ensure all medications are in stock - Check for any errors in the inventory - Check for any problems with the medication - Check for any problems with the packaging - Check for any problems with the labeling - Check for any problems with the expiration date backing - Check for any problems with the packaging - Check for any problems with the
dist: To-do list for pharmacy:
Check inventory and stock shelves 2.
Prepare prescriptions 3.
Check inventory and stock shelves 4.
Check inventory and stock shelves 5.
Check inventory and stock shelves 6.
Check inventory and stock shelves 7.
Check inventory and stock shelves 8.
Check inventory and stock shelves 10.
WT-IDE: 1.
Stocking shelves 2.
Filling paperty: 3.
Greeting customers 4.
Taking orders 5.
Performing administrative tasks
Neg: The daily tasks in the pharmacy include preparing medications, filling prescriptions, assisting customers, and maintaining inventory.
Human choice (Pos vs dist): Gist Pos Gist
ChatFT choice (Pos vs dist): Pos
ChatFT reason (Pos vs dist): Assistant A provides a comprehensive list of daily tasks in the pharmacy, including preparing and labeling medications, checking for errors and problems, and ensuring all medications are in stock. Assistant B only lists two tasks and repeats one of them multiple times. Assistant A is more helpful. (Assistant A: Pos, Assistant B: dist)
Example 7
Instruction: You are given a paper citation, convert it to the requested citation style. <ID>
Input: Chicago Vaxanni, Ashish, Shazeer, Noma, Parmar, Niki, Usikoreit, Jakob, Jones, Llion, Gomez, Aidan N., Kaiser, Lukasz, and
Illinois Pollishishish. "Attention Is All You Need."
N.A.
Reference Output: Vaxanni, Ashish, et al.
"Attention Is All You Need."
arXiv, 2017, [https://doi.org/10.48550/arXiv.1706.03762](https://doi.org/10.48550/arXiv.1706.03762).
WT-IDE word:
LLaMA-7B
Pos: Vaxanni, Ashish, Shazeer, Noma, Parmar, Niki, Usikoreit, Jakob, Jones, Llion, Gomez, Aidan N., Kaiser, Lukasz, and Illia
Polosukhin.
"Attention Is All You Need."
arXiv (2017). [https://doi.org/10.48550/arXiv.1706.03762](https://doi.org/10.48550/arXiv.1706.03762).
Misti: Vaxanni, Ashish, Shazeer, Noma, Parmar, Niki, Usikoreit, Jakob, Jones, Llion, Gomez, Aidan N., Kaiser, Lukasz, and Illia
Polosukhin.
"Attention Is All You Need."
arXiv, 2017, [https://doi.org/10.48550/arXiv.1706.03762](https://doi.org/10.48550/arXiv.1706.03762).
TF-IDE: Attention Is All You Need.
arXiv, 2017, [https://doi.org/10.48550/arXiv.1706.03762](https://doi.org/10.48550/arXiv.1706.03762).
Reg: Chicago Vaxanni, Ashish, Shazeer, Noma, Parmar, Niki, Usikoreit, Jakob, Jones, Llion, Gomez, Aidan N., Kaiser, Lukasz, and
Illinois Pollishishish. "Attention Is All You Need."
arXiv, (2017). [https://doi.org/10.48550/arXiv.1706.03762](https://doi.org/10.48550/arXiv.1706.03762).
Manna Choice (Pos vs dist): IME Tie Tie
ChatFT choice (Pos vs dist): GistChatGPF reason (Pos vs dist): Both assistants provide correct MLA citation style, but Assistant A includes a comma after 'arXiv' which is not necessary in MLA style. (Assistant A: dist, Assistant B: Pos)
**FLAN-TS-XL**
Pos: Vaswani, Ashish, Shazeer, Noam, Parmar, Niki, Uszkoreit, Jakob, Jones, Llion, Gomez, Aidan N., Kaiser, Lukasz, and Illia Polosukhin.
"Attention Is All You Need."
arXiv, (2017).
@ist: Chicago: Vaswani, Ashish, Shazeer, Noam, Parmar, Niki, Uszkoreit, Jakob, Jones, Llion, Gomez, Aidan N., Kaiser, Lukasz, and Illia Polosukhin.
"Attention Is All You Need."
arXiv, (2017).
[https://doi.org/10.48552/arXiv.1706.03762](https://doi.org/10.48552/arXiv.1706.03762).
TF-IDF: Attention is all you need.
Neg: Attention Is All You Need
Wanan Choice (Pos vs dist): IOX IOX IOX IOX IOX ChatGPE (Pos vs dist): Gist ChatGPF reason (Pos vs dist): The instruction asks for MLA citation style, but neither assistant provided the citation in MLA format. However, Assistant A provided more information such as the ROI, so Assistant A is slightly more helpful. (Assistant A: dist, Assistant B: Pos)
### Example 8
Instruction: Provide a pleasant compliment based on the quality you have been given. dfl>
Input: Truthfulness
Reference Output: Everything would be better if more people were like you.
TF-IDF word: pleasant
#### LLaMa-Tb
Pos: You are remarkably truthful and honest.
Gist: You have a gift for being truthful.
TF-IDF: Truthfulness is a pleasant trait because it leads to trust, which is essential for any healthy relationship.
It also helps people build trust in their relationships with others, which can lead to better communication and understanding.
Additionally, it can help people avoid getting caught up in lines and deception, which can be damaging to relationships.
Neg: Truthfulness is like a lion in the forest: Ioud, proud, and unfraful to ear.
Human Choice (Pos vs dist): Pos Pos Pos
ChatGPF reason (Pos vs dist): Pos
ChatGPF reason (Pos vs dist): Both responses are good, but Assistant B's response is more detailed and specific, making it a more pleasant compliment overall. (Assistant A: dist, Assistant B: Pos)
#### FLAN-TS-XL**
Pos: You are incredibly honest.
Gist: Thank you for being so honest.
TF-IDF: Truthfulness is like a light that guides us through the darkness.
Neg: Truthfulness is like a lighthouse, providing guidance and comfort in times of trouble.
Human Choice (Pos vs dist): IOX Pos
ChatGPF reason (Pos vs dist): Pos
ChatGPF reason (Pos vs dist): Pos
and personal, making it a more pleasant compliment. (Assistant A: dist, Assistant B: Pos)
### Example 9
Instruction: Make a list of adjectives that can be used to describe the given brand. dfl>
Input: a creative tech startup
Reference Output: 1.
Innovative
2.
Creative
3.
Disruptive
4. cutting-edge
5. revolutionary
6. game-changing
7. industry-leading
8. trailblazing
9. groundbreaking
10. world-changing
TF-IDF Word: brand
LLaMa-Tb
Pos: Innovative, creative, tech-savoy, forward-thinking, dynamic, gille, fast-paced, cutting-edge, inventive, pioneering, and disruptive.
Gist: - Innovative
- Creative
- Abbitious
- Incentive
- Passionate
- Innovative
- Imaginative
- Visionary
- Entrepreneurial
- Optimistic
- Determined
- Persistent
- Responsible
- Thoughful
- Respectful
- Enpathetic
- Generous
- Reliable
* UT-IDF: TechnicalYamp
* Img: A creative tech startup is like a race car: it needs a great idea, a well-tuned team and the right resources to reach the finish line.
* Human choice (Vis Vis Visit): Pos Tie Gist
* Chatgrf Cisco (Pos Vis Visit): Pos
* Chatgrf Cisco (Pos Vis Sit): Pos
* Chatgrf Cisco (Pos Vis Sit): Pos
* Chatgrf Cisco (Pos Vis Sit): Pos
* Chatgrf Cisco (Pos Vis Sit): Assistant B: Pos
* Chatgrf Cisco (Pos Vis Sit): Assistant B: Pos
* Chatgrf Cisco (Pos Vis Sit): Assistant B: Pos
* Chatgrf Cisco (Assistant A: Gist, Assistant B: Pos)
* Pos: - Innovative - Creative - Tech - Startup - Innovative - Tech - Startup - Creative - Tech - Startup - Tech - Startup - Creative - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Test - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Test - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Test - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Test - Startup - Tech - Startup - Tech - Startup - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Tech - Startup - Test - - Startup - Tech - Startup - Tech - Startup - Test - - Startup - Tech - Startup - - Test - - Startup - Tech - Startup - Tech - Startup - Tech - Startup - - Startup - Tech - Startup - Tech - Startup - - Startup - Tech - Startup - - Tech - Startup - - Test - - Startup - Tech - Startup - - Tech - Startup - - Tech - -
# The best popular social media platform in Europe is the Android-based Ovi from Nokia.
Nuran choice (Pos vs Gist): Gist Gist Gist
Chiqrtcp Choice (Pos vs Gist): Gist | ## Review
### Summary
This paper introduces a novel framework for compressing instruction prompts into smaller sets of 'gist' tokens, aiming to enhance computational efficiency in language models. The proposed method, called 'gisting', modifies attention masks to allow these tokens to encode and compress prompts, achieving significant compression rates and reductions in computational load. The authors conducted extensive experiments using large models and various evaluation methods, demonstrating that while the method shows promise, there are concerns about the marginal performance loss and the reliability of human evaluation. Additionally, the paper addresses the implications of this method for real-world applications, particularly in reducing context window usage.
### Strengths
- Novelty in compressing instruction prompts to save context windows and computation resources.
- Well-designed experiments with comprehensive evaluations using ROUGE-L, ChatGPT, and human assessments.
- Clear writing with well-illustrated figures and tables.
- Gisting is straightforward to implement, requiring minimal modifications to existing models.
- The method shows promising results in efficiency and effectiveness across different models.
### Weaknesses
- Misleading descriptions about 'gist tokens' and their role, as they are not true compressed information but special tokens in vocabulary.
- Low agreement among human evaluators raises concerns about the reliability of the evaluation results.
- The claimed 4% reduction in wall-clock time is not significant compared to the accuracy compromise.
- Compression rates are modest and not sufficient relative to the model's full context length.
- The method's fixed number of gist tokens for all tasks may limit its adaptability.
### Questions
- How is the compression factor in Figure 3 estimated?
- Could you clarify the claim about having too many gist tokens hurting performance?
- What is the tie-breaking process referenced regarding human evaluations?
- How does the method generalize for unseen prompts and what are the implications for task-specific performance?
### Soundness
**Score:** 3
**Description:** Good: The methodology is sound but has some concerns regarding the evaluation and the presented results.
### Presentation
**Score:** 3
**Description:** Good: The writing is generally clear, though some technical descriptions could be misleading.
### Contribution
**Score:** 3
**Description:** Good: The paper makes a relevant contribution to the field, although more clarity on certain aspects is needed.
### Rating
**Score:** 6
**Description:** Weak Accept: The paper is technically solid and presents an interesting approach, but it requires further evaluation and clarification.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel idea in the compression of prompts for language models, with good experimental backing and potential real-world applications. Despite some weaknesses and concerns regarding evaluation reliability, the strengths and contributions outweigh the issues, making it suitable for acceptance with minor revisions.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Any-to-Any Generation via Composable Diffusion
Zineng Tang\({}^{1}\)1
Ziyi Yang\({}^{2}\)2
Chenguang Zhu\({}^{2}\)3
Michael Zeng\({}^{2}\)
Mohit Bansal\({}^{1}\)4
\({}^{1}\)University of North Carolina at Chapel Hill
\({}^{2}\)Microsoft Azure Cognitive Services Research
[https://codi-gen.github.io](https://codi-gen.github.io)
Footnote 1: Work done at Microsoft internship and UNC.
Footnote 2: Corresponding authors: [email protected], [email protected]
Footnote 3: Work done while at Microsoft.
###### Abstract
We present Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel and its input is not limited to a subset of modalities like text or image. Despite the absence of training datasets for many combinations of modalities, we propose to align modalities in both the input and output space. This allows CoDi to freely condition on any input combination and generate any group of modalities, even if they are not present in the training data. CoDi employs a novel composable generation strategy which involves building a shared multimodal space by bridging alignment in the diffusion process, enabling the synchronized generation of intertwined modalities, such as temporally aligned video and audio. Highly customizable and flexible, CoDi achieves strong joint-modality generation quality, and outperforms or is on par with the unimodal state-of-the-art for single-modality synthesis. The project page with demonstrations and code is at [https://codi-gen.github.io/](https://codi-gen.github.io/)
Figure 1: CoDi can generate various (joint) combinations of output modalities from diverse (joint) sets of inputs: video, image, audio, and text (example combinations depicted by the colored arrows).
Introduction
Recent years have seen the rise of powerful cross-modal models that can generate one modality from another, e.g. text-to-text [6, 37], text-to-image [13, 19, 22, 41, 44], or text-to-audio [23, 33]. However, these models are restricted in their real-world applicability where multiple modalities coexist and interact. While one can chain together modality-specific generative models in a multi-step generation setting, the generation power of each step remains inherently limited, and a serial, multi-step process can be cumbersome and slow. Moreover, independently generated unimodal streams will not be consistent and aligned when stitched together in a post-processing way (e.g., synchronized video and audio). The development of a comprehensive and versatile model that can generate any combination of modalities from any set of input conditions has been eagerly anticipated, as it would more accurately capture the multimodal nature of the world and human comprehension, seamlessly consolidate information from a wide range of sources, and enable strong immersion in human-AI interactions (for example, by generating coherent video, audio, and text description at the same time).
In pursuit of this goal, we propose Composable Diffusion, or CoDi, the first model capable of simultaneously processing and generating arbitrary combinations of modalities as shown in Fig. 1. Training a model to take any mixture of input modalities and flexibly generate any mixture of outputs presents significant computational and data requirements, as the number of combinations for the input and output modalities scales exponentially. Also aligned training data for many groups of modalities is scarce or even non-existent, making it infeasible to train with all possible input-output combinations. To address this challenge, we propose to align multiple modalities in both the input conditioning (Section 3.2) and generation diffusion step (Section 3.4). Furthermore, a proposed "Bridging Alignment" strategy for contrastive learning (Section 3.2) allows us to efficiently model the exponential number of input-output combinations with a linear number of training objectives.
Building a model with any-to-any generation capacity with exceptional generation quality requires comprehensive model design and training on diverse data resources. Therefore, we build CoDi in an integrative way. First, we train a latent diffusion model (LDM) for each modality, e.g., text, image, video, and audio. These models can be trained in parallel independently, ensuring exceptional single-modality generation quality using widely available modality-specific training data (i.e., data with one or more modalities as input and one modality as output). For conditional cross-modality generation, such as generating images using audio+language prompts, the input modalities are projected into a shared feature space (Section 3.2), and the output LDM attends to the combination of input features. This multimodal conditioning mechanism prepares the diffusion model to condition on any modality or combination of modalities without directly training for such settings.
The second stage of training enables the model to handle many-to-many generation strategies that involve simultaneously generating arbitrary combinations of output modalities. To the best of our knowledge, CoDi is the first AI model with this capability. This is achieved by adding a cross-attention module to each diffuser, and an environment encoder \(V\) to project the latent variable of different LDMs into a shared latent space (Section 3.4). Next, we freeze the parameters of the LDM, training only the cross-attention parameters and \(V\). Since the environment encoder of different modalities are aligned, an LDM can cross-attend with any group of co-generated modalities by interpolating the representation's output by \(V\). This enables CoDi to seamlessly generate any group of modalities, without training on all possible generation combinations. This reduces the number of training objectives from exponential to linear.
We demonstrate the any-to-any generation capability of CoDi, including single-to-single modality generation, multi-condition generation, and the novel capacity of joint generation of multiple modalities. For example, generating synchronized video and audio given the text input prompt; or generating video given a prompt image and audio. We also provide a quantitative evaluation of CoDi using eight multimodal datasets. As the latest work from Project i-Code [55] towards Composable AI, CoDi exhibits exceptional generation quality across assorted scenarios, with synthesis quality on par or even better than single to single modality SOTA, e.g., audio generation and audio captioning.
## 2 Related Works
**Diffusion models (DMs)** learn the data distribution by denoising and recovering the original data. Deep Diffusion Process (DDP) [45] adopts a sequence of reversible diffusion steps to model image probability distribution. It uses a reversible encoder to map the input image to a latent space and a decoder to map the latent variables to an output image. Denoising diffusion probabilistic model (DDPM) [20] uses a cascade of diffusion processes to gradually increase the complexity of the probability density function model. At each step, the model adds noise to the input image and estimates the corresponding noise level using an autoregressive model. This allows the model to capture the dependencies between adjacent pixels and generate high-quality images. Score-based generative models (SOG) [46] use the score function to model the diffusion process. [40] generates high-fidelity images conditioned on CLIP representations of text prompts. Latent diffusion model (LDM) [41] uses a VAE to encode inputs into latent space to reduce modeling dimension and improves efficiency. The motivation is that image compression can be separated into semantic space by a diffusion model and perceptual space by an autoencoder. By incorporating temporal modeling modules and cascading model architectures, video diffusion models have been built upon image diffusers to generate temporally consistent and inherent frames[14, 19, 21, 44]. Diffusion models have also been applied to other domains, such as generating audio from text and vision prompts[23, 33].
**Multimodal modeling** has experienced rapid advancement recently, with researchers striving to build uniform representations of multiple modalities using a single model to achieve more comprehensive cross-modal understanding. Vision transformers [11], featuring diverse model architectures and training techniques, have been applied to various downstream tasks such as vision Q&A and image captioning. Multimodal encoders have also proven successful in vision-language [1, 8, 57], video-audio [47] and video-speech-language [55, 56] domains. Aligning data from different modalities is an active research area [12, 38], with promising applications in cross-modality retrieval and building uniform multimodal representations [33, 35, 41].
## 3 Methodology
### Preliminary: Latent Diffusion Model
Diffusion models (DM) represent a class of generative models that learn data distributions \(p(\mathbf{x})\) by simulating the diffusion of information over time. During training, random noise is iteratively added
Figure 2: CoDi model architecture: (a) We first train individual diffusion model with aligned prompt encoder by “Bridging Alignment”; (b) Diffusion models learn to attend with each other via “Latent Alignment”; (c) CoDi achieves any-to-any generation with a linear number of training objectives.
to \(\mathbf{x}\), while the model learns to denoise the examples. For inference, the model denoises data points sampled from simple distributions such as Gaussian. Latent diffusion models (LDM) [41] learn the distribution of the latent variable \(\mathbf{z}\) corresponding to \(\mathbf{x}\), significantly reducing computational cost by decreasing the data dimension.
In LDM, an autoencoder is first trained to reconstruct \(\mathbf{x}\), i.e., \(\hat{\mathbf{x}}=D(E(\mathbf{x}))\), where \(E\) and \(D\) denote the encoder and decoder, respectively. The latent variable \(\mathbf{z}=E(\mathbf{x})\) is iteratively diffused over time steps \(t\) based on a variance schedule \(\beta_{1},\ldots,\beta_{T}\), i.e., \(q(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t};\sqrt{1-\beta_{t}}\mathbf{z}_{t- 1},\beta_{t}\mathbf{I})\)[20; 45].
The forward process allows the random sampling of \(\mathbf{z}_{t}\) at any timestep in a closed form [20; 45]: \(\mathbf{z}_{t}=\alpha_{t}\mathbf{z}+\sigma_{t}\mathbf{\epsilon}\), where \(\mathbf{\epsilon}\sim\mathcal{N}(0,I)\), \(\alpha_{t}:=1-\beta_{t}\) and \(\sigma_{t}:=1-\prod_{s=1}^{t}\alpha_{s}\). The diffuser learns how to denoise from \(\{\mathbf{z}_{t}\}\) to recover \(\mathbf{z}\). Following the reparameterization method proposed in [20], the denoising training objective can be expressed as [41]:
\[\mathcal{L}_{D}=\mathbb{E}_{\mathbf{z},\mathbf{\epsilon},t}\|\mathbf{\epsilon}-\mathbf{ \epsilon}_{\theta}(\mathbf{z}_{t},t,C(\mathbf{y}))\|_{2}^{2}. \tag{1}\]
In data generation, the denoising process can be realized through reparameterized Gaussian sampling:
\[p(\mathbf{z}_{t-1}|\mathbf{z}_{t})=\mathcal{N}\left(\mathbf{z}_{t-1};\frac{1}{\sqrt{\alpha _{t}}}\left(\mathbf{z}_{t}-\frac{\beta_{t}}{\sqrt{\sigma_{t}}}\mathbf{\epsilon}_{ \theta}\right),\beta_{t}\mathbf{I}\right). \tag{2}\]
In \(\mathcal{L}_{D}\), the diffusion time step \(t\sim\mathcal{U}[1,T]\); \(\mathbf{\epsilon}_{\theta}\) is a denoising model with UNet backbone parameterized by \(\theta\); \(\mathbf{y}\) represents the conditional variable that can be used to control generation; \(C\) is the prompt encoder. The conditioning mechanism is implemented by first featurizing \(\mathbf{y}\) into \(C(\mathbf{y})\), then the UNet \(\mathbf{\epsilon}_{\theta}\) conditions on \(C(\mathbf{y})\) via cross-attention, as described in [41]. Distinct from previous works, our model can condition on any combinations of modalities of text, image, video and audio. Details are presented in the following section.
### Composable Multimodal Conditioning
To enable our model to condition on any combination of input/prompt modalities, we align the prompt encoder of text, image, video and audio (denoted by \(C_{t}\), \(C_{i}\), \(C_{v}\), and \(C_{a}\), respectively) to project the input from any modality into the same space. Multimodal conditioning can then be conveniently achieved by interpolating the representations of each modality \(m\): \(C(x_{t},x_{i},x_{v},x_{a})=\sum_{m}\alpha_{m}C(m)\) for \(m\in x_{t},x_{i},x_{v},x_{a}\), with \(\sum_{m}\alpha_{m}=1\). Through simple weighted interpolation of aligned embeddings, we enable models trained with single-conditioning (i.e., with only one input) to perform zero-shot multi-conditioning (i.e., with multiple inputs). This process is illustrated in Fig. 2 (a)(2).
Optimizing all four prompt encoders simultaneously in a combinatorial manner is computationally heavy, with \(\mathcal{O}(n^{2})\) pairs. Additionally, for certain dual modalities, well-aligned paired datasets are limited or unavailable e.g., image-audio pairs. To address this challenge, we propose a simple and effective technique called "Bridging Alignment" to efficiently align conditional encoders. As shown in Fig. 2 (a)(1), we choose the text modality as the "bridging" modality due to its ubiquitous presence in paired data, such as text-image, text-video, and text-audio pairs. We begin with a pretrained text-image paired encoder, i.e., CLIP [38]. We then train audio and video prompt encoders on audio-text and video-text paired datasets using contrastive learning, with text and image encoder weights frozen.
In this way, all four modalities are aligned in the feature space. As shown in Section 5.2, CoDi can effectively leverage and combine the complementary information present in any combination of modalities to generate more accurate and comprehensive outputs. The high generation quality remains unaffected with respect to the number of prompt modalities. As we will discuss in subsequent sections, we continue to apply Bridging Alignment to align the latent space of LDMs with different modalities to achieve joint multimodal generation.
### Composable Diffusion
Training an end-to-end anything-to-anything model requires extensive learning on various data resources. The model also needs to maintain generation quality for all synthesis flows. To address these challenges, CoDi is designed to be composable and integrative, allowing individual modality-specific models to be built independently and then smoothly integrated later. Specifically, we start by independently training image, video, audio, and text LDMs. These diffusion models then efficiently learn to attend across modalities for joint multimodal generation (Section 3.4) by a novel mechanism named "latent alignment".
Image Diffusion Model.The image LDM follows the same structure as Stable Diffusion 1.5 [41] and is initialized with the same weights. Reusing the weights transfers the knowledge and exceptional generation fidelity of Stable Diffusion trained on large-scale high-quality image datasets to CoDi.
Video Diffusion Model.To model the temporal properties of videos and simultaneously maintain vision generation quality, we construct the video diffuser by extending the image diffuser with temporal modules. Specifically, we insert pseudo-temporal attention before the residual block [13]. However, we argue that pseudo-temporal attention only enables video frames to globally attend to each other by flattening the pixels (height, width dimension) to batch dimension, resulting in a lack of cross-frame interaction between local pixels. We argue that this results in the common temporal-inconsistency issue in video generation that locations, shapes, colors, etc. of objects can be inconsistent across generated frames. To address this problem, we propose adapting the latent shift method [2] that performs temporal-spatial shifts on latent features in accordance with temporal attention. We divide the video by the hidden dimension into \(k=8\) chunks, and for each chunk \(i=0\) to \(7\), we shift the temporal dimension forward by \(i\) positions. Further details will be provided in the appendix.
Audio Diffusion Model.To enable flexible cross-modality attention in joint generation, the audio diffuser is designed to have a similar architecture to vision diffusers, where the mel-spectrogram can be naturally viewed as an image with 1 channel. We use a VAE encoder to encode the mel-spectrogram of audio to a compressed latent space. In audio synthesis, a VAE decoder maps the latent variable to the mel-spectrogram, and a vocoder generates the audio sample from the mel-spectrogram. We employ the audio VAE from [33] and the vocoder from [27].
Text Diffusion Model.The VAE of the text LDM is OPTIMUS [29], and its encoder and decoder are [9] and GPT-2 [39], respectively. For the denoising UNet, unlike the one in image diffusion, the 2D convolution in residual blocks is replaced with 1D convolution [53].
### Joint Multimodal Generation by Latent Alignment
The final step is to enable cross-attention between diffusion flows in joint generation, i.e., generating two or more modalities simultaneously. This is achieved by adding cross-modal attention sublayers to the UNet \(\mathbf{\epsilon}_{\theta}\) (Fig. 2 (b)(2)). Specifically, consider a diffusion model of modality \(A\) that cross-attends with another modality \(B\). Let the latent variables of modalities \(m_{A}\) and \(m_{B}\) at diffusion step \(t\) be denoted as \(\mathbf{z}_{t}^{A}\) and \(\mathbf{z}_{t}^{B}\), respectively. The proposed "Latent Alignment" technique is such that a modality-specific environment encoder \(V_{B}\) first projects \(\mathbf{z}_{t}^{B}\) into a shared latent space for different modalities. Then, in each layer of the UNet for modality \(A\), a cross-attention sublayer attends to \(V_{B}(\mathbf{z}_{t}^{B})\). For the diffusion model of modality \(A\), the training objective in Eq. (1) now becomes:
\[\mathcal{L}_{Cross}^{A}=\mathbb{E}_{\mathbf{z},\mathbf{\epsilon},t}\|\mathbf{\epsilon}-\mathbf{ \epsilon}_{\theta_{c}}(\mathbf{z}_{t}^{A},V_{B}(\mathbf{z}_{t}^{B}),t,C(\mathbf{y}))\|_{2} ^{2}, \tag{3}\]
where \(\theta_{c}\) denotes the weights of cross-attention modules in the UNet.
The training objective of \(A+B\) joint generation is \(\mathcal{L}_{Cross}^{A}=\mathcal{L}_{Cross}^{B}\). \(V(\cdot)\) of different modalities are trained to be aligned with contrastive learning. Since \(z_{t}^{A}\) and \(z_{t}^{A}\) at any time step can be sampled with closed form in the diffusion process Section 3.1, one can conveniently train the contrastive learning together with \(\mathcal{L}_{Cross}\). The purpose of \(V\) is to achieve the generation of any combination of modalities (in polynomial) by training on a linear number of joint-generation tasks. For example, if we have trained the joint generation of modalities \(A\), \(B\), and \(B\), \(C\) independently, then we have \(V_{A}(\mathbf{z}_{t}^{A})\), \(V_{B}(\mathbf{z}_{t}^{B})\), and \(V_{C}(\mathbf{z}_{t}^{C})\) aligned. Therefore, CoDi can seamlessly achieve joint generation of modalities \(A\) and \(C\) without any additional training. Moreover, such design automatically effortlessly enables joint generation of modalities \(A\), \(B\), and \(C\) concurrently. Specifically, UNet of \(A\) can cross-attend with the interpolation of \(V_{B}(\mathbf{z}_{t}^{B})\), and \(V_{C}(\mathbf{z}_{t}^{C})\), although CoDi has not been trained with such task.
As shown in Fig. 2(b)(3), we follow similar designs to the "Bridging Alignment" in training joint generation: (1) We first train the cross-attention weights in the image and text diffusers, as well as their environment encoders \(V\), on text-image paired data. (2) We freeze the weights of the text diffuser and train the environment encoder and cross-attention weights of the audio diffuser on text-audio paired data. (3) Finally we freeze the audio diffuser and its environment encoder, and train the joint generation of the video modality on audio-video paired data. As demonstrated in Section 5.3, although only trained on three paired joint generation tasks (i.e, Text+Audio, Text+Image, and Video+Audio), CoDi is capable of generating assorted combinations of modalities simultaneously that are unseen in training, e.g., joint image-text-audio generation in Fig. 5.
## 4 Experiments
### Training Objectives and Datasets
We list training tasks of CoDi in Table 1, including single modality synthesis, joint multimodal generation, and contrastive learning to align prompt encoders. Table 1 provides an overview of the datasets, tasks, number of samples, and domain. Datasets are from the following domains: **image + text** (e.g. image with caption), **audio + text** (e.g. audio with description), **audio + video** (e.g. video with sound), and **video + text** (e.g. video with description). As one may have noticed, the language modality appears in most datasets and domains. This echos the idea of using text as the bridge modality to be able to extrapolate and generate new unseen combinations such as audio and image bridged by text, as mentioned in Section 3.2 and Section 3.4. Due to space limit, more details on training datasets and can be found in Appendix C, model architecture details in Appendix A.1, and training details in Appendix B.
**Image + Text.** We use a recently developed large-scale image caption dataset, Laion400M [42]. This image-text paired data allows us to train with tasks **text\(\rightarrow\)image**, **image\(\rightarrow\)text**, and the joint
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Categories** & **Tasks** & **Datasets** & **\# of samples** & **Domain** \\ \hline \multirow{2}{*}{Image + Text} & Image\(\rightarrow\)Text, Text\(\rightarrow\)Image & \multirow{2}{*}{Laion400M [42]} & \multirow{2}{*}{400M} & \multirow{2}{*}{Open} \\ & Text\(\rightarrow\)Image+Text & & & \\ \hline \multirow{4}{*}{Audio + Text} & Text\(\rightarrow\)Audio, Audio\(\rightarrow\)Text, & AudioCaps [24] & \multirow{2}{*}{46K} & YouTube \\ & Text\(\rightarrow\)Audio+Text, Audio-Text CT & & & 2.5M & Public audio samples \\ & & BBC Sound Effect & 30K & Authentic natural sound \\ \hline \multirow{2}{*}{Audiovisual} & Image\(\rightarrow\)Audio, Image\(\rightarrow\)Video+Audio & AudioSet & \multirow{2}{*}{900K*} & \multirow{2}{*}{YouTube} \\ & Text\(\rightarrow\)Image+Text & & & \\ \hline \multirow{2}{*}{Video} & Text\(\rightarrow\)Video, Image\(\rightarrow\)Video, & \multirow{2}{*}{Webvid10M [4]} & \multirow{2}{*}{10.7M} & Short videos \\ & Video-Text CT & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Training tasks (CT stands for “contrastive learning” to align prompt encoders) and datasets with corresponding statistics. * denotes the number of accessible examples in the original datasets.
Figure 3: Single-to-single modality generation. Clockwise from top left: text\(\rightarrow\)image, image\(\rightarrow\)text, image\(\rightarrow\)video, audio\(\rightarrow\)image.
generation of image and text. For the joint generation task, we propose to train with **text\(\rightarrow\)image+text**, where the prompt text is the truncated image caption, and the output text is the original caption. Since the condition information is incomplete, the text and image diffuser will need to learn to attend with each other through the joint generation process.
**Audio + Text.** We curated a new dataset, Freesound 500K, by crawling 500K audio samples together with tags and descriptions from the Freesound website. We also use AudioSet [42] with 2 million human-labeled 10-second sound clips from YouTube videos and AudioCaps [24] with 46K audio-text pairs derived from the AudioSet dataset. Audio samples are clipped into 10-second segments for training purposes. The paired audio + text data enables us to train **text\(\rightarrow\)audio**, **audio\(\rightarrow\)text**, **text\(\rightarrow\)audio + text** generation, and audio-text contrastive learning. Similar to image + text joint generation, in text\(\rightarrow\)audio + text, text prompt is the truncated text, and the output is the original text.
**Video.** We use the following diverse and high-quality video datasets to train video generation and video prompt encoder. WebVid [4], a large-scale dataset of web videos together with descriptions; HD-Villa-100M [54] with high resolution YouTube videos of at least 720P. We perform **text\(\rightarrow\)video** and video-text contrastive learning task with WebVid. We use HD-Villa-100M for **image\(\rightarrow\)video** generation where the middle frame is the input image.
**Audiovisual.** Web videos are a natural aligned audio-video data resource. However, many existing datasets, e.g., ACAV100M [28], feature heavily on videos of human speech rather than natural sounds. Therefore, we leverage sound-oriented datasets AudioSet and SoundNet [3] for joint audio-video generation. For **image\(\rightarrow\)audio + video**, we use the middle frame of the target video as the input prompt image. We also use the middle frame as the prompt input to train the model to generate the audio, i.e., **image\(\rightarrow\)audio**.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Model** & **B@4** & **METEOR** & **CIDEr** & **Available** & **Available** \\ \hline \multicolumn{6}{l}{_AudioCaps_} \\ \hline Oscar [31] & 36.58 & 30.4 & 124.12 & 108.35 & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } \\ \cline{1-1} Cipc [35] & 32.15 & 27.1 & 108.35 & & & & \\ Ora [49] & 44.9 & 32.5 & 154.5 & & & & \\ BLIP2 [30] & 43.7 & - & 145.8 & & & & \\ \multicolumn{6}{l}{_Digital Model_} \\ \hline DDCap [59] & 35.0 & 28.2 & 117.8 & & & & & \\ SCD-Net [54] & 39.4 & 29.2 & 131.6 & & & & \\
**Coh1 (Ours)** & 40.2 & 31.0 & 149.9 & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 6: COCO image captioning scores comparison.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Model** & **Zero-Shot** & **CLIPSIM\(\uparrow\)** & **Method** & **IS (\(\uparrow\))** & **FVD (\(\uparrow\))** \\ \hline GODVIA [50] & No & 0.2402 & CoVVideo (Chinese) & 23.55 & 751.34 \\ N0WA [51] & No & 0.2439 & CoVVideo (English) & 25.27 & 701.59 \\ LegVideo [41] & Yes & 0.3049 & Make-A-Video & 33.00 & 367.23 \\ Make-A-Video & 11.21 & Video LDM [5] & Yes & 0.2929 & Video LDM & 33.45 & 550.61 \\ \hline
**Coh1 (Ours)** & Yes & 0.2890 & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: MSR-VTT text-to-video Table 4: UCF-101 text-to-video generation performance.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Model** & **Spring\(\downarrow\)** & **CIDEr** & **SPICE** & **Model** & **B@4** & **METEOR** & **CIDEr** \\ \hline OGR-TRL [53] & 43.69 & 0.593 & 0.144 & ORG-TRL-[53] & 43.6 & 28.8 & 50.9 \\ BART-F
## 5 Evaluation Results
In this section, we will evaluate the model generation quality in different settings including single modality generation, multi-condition generation, and multi-output joint generation. We provide both quantitative benchmarking on evaluation datasets as well as qualitative visualization demonstrations.
### Single Modality Generation Results
We first show example demo in Fig. 3, where we present various single to single modality generation. Then, we evaluate the synthesis quality of the unimodal generation on text, image, video, and audio. CoDi achieves SOTA on audio captions and audio generation, as shown in Table 7 and Table 5. Notably for the first time in the field, CoDi, a diffusion-base model, exhibits comparable performance on image captioning with autoregressive transformer-based SOTA (Table 6). CoDi is the first diffusion-model based for video captioning Table 8. On image and video generation, CoDi performs competitively with state-of-the-art (Tables 2 to 4). This gives us strong starting points for multi-condition and multi-output generation that will be presented next in Section 5.2 and Section 5.3.
We demonstrate in Section 3.2 that CoDi is capable of integrating representation from different modalities in the generation. Thus, we first show multi-condition generation demo as shown in Fig. 4.
Table 9, CoDi achieves high image generation quality given assorted groups of input modalities. We also test with several input combinations with video as output including text, text + audio, image + image, as well as text + audio + image. We also test on MSRVTT [24] since all four modalities are present in this dataset. Similarly, the prompt image input is the middle frame of the video. As shown in Table 10, CoDi achieves high video and ground truth text similarity given assorted groups of input modalities. Again our model does not need to train on multi-condition generation like text + audio or text + image. Through bridging alignment and composable multimodal conditioning as proposed in Section 3.2, our model trained on single condition can zero-shot infer on multiple conditions.
### Multi-Output Joint Generation Results
For joint multimodal generation, we first demonstrate high-quality multimodal output joint generation demo as shown in Fig. 5. For quantitative evaluation, there is no existing evaluation metric since we are the first model that can simultaneously generate across all 4 modalities. Therefore, we propose the following metric \(\mathrm{SIM}\) that quantifies the coherence and consistency between the two generated modalities by cosine similarity of embeddings:
\[\mathrm{SIM}(A,B)=\cos{(C_{A}(A),C_{B}(B))} \tag{4}\]
\begin{table}
\begin{tabular}{l c} \hline \hline
**Inputs** & **CLIPSIM \(\uparrow\)** \\ \hline \hline Single-modality Prompt & \\ Text & 0.2890 \\ \hline Dual-modality Prompt & \\ Text+Audio & 0.2912 \\ Text+Image & 0.2891 \\ Text+Audio+Image & 0.2923 \\ \hline \hline \end{tabular}
\end{table}
Table 10: MSR-VTT text-to-video generation performance.
Figure 5: Joint generation of multiple output modalities by CoDi. From top to bottom: text\(\rightarrow\)video+audio, text\(\rightarrow\)image+text+audio, text+audio+image\(\rightarrow\)video+audio.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Inputs** & **FID \(\downarrow\)** \\ \hline \hline Single-modality Prompt & \\ Text & 14.2 \\ Audio & 14.3 \\ \hline Dual-modality Prompt & \\ Text + Audio & 14.9 \\ \hline \hline \end{tabular}
\end{table}
Table 9: CoDi is capable of generating high quality output (image in this case) from various combinations of prompt modalities.
where \(A\), \(B\) are the generated modalities, and \(C_{A}\) and \(C_{B}\) are aligned encoders that project \(A\) and \(B\) to the same space. We use the prompt encoder as described in Section 3.2. This metric aims to compute the cosine similarity of the embedding of two modalities using contrastive learned prompt encoders. Thus, the higher the metric, the more aligned and similar the generated modalities are.
To demonstrate the effectiveness of joint generation, assume the prompt modality is \(P\), we compare \(\mathrm{SIM}(A,B)\) of \(A\) and \(B\) generated separately vs. jointly, i.e., {\(P\to A\), \(P\to B\)} vs. {\(P\to A+B\)}. The benchmark is the validation set of AudioCaps [24]. We test on the following settings, audio \(\rightarrow\)image+text, image \(\rightarrow\)audio+text, and text\(\rightarrow\)video+audio, image \(\rightarrow\)video+audio. audio\(\rightarrow\)video+text, audio\(\rightarrow\)text+video+image, text \(\rightarrow\)video+image+audio, where the image prompt is the middle frame of the video clip. As shown in Table 11, joint generation (similarity shown on the right side of "/") consistently outperforms independent generation (on the left side of "/").
## 6 Conclusion
In this paper, we present Composable Diffusion (CoDi), a groundbreaking model in multimodal generation that is capable of processing and simultaneously generating modalities across text, image, video, and audio. Our approach enables the synergistic generation of high-quality and coherent outputs spanning various modalities, from assorted combinations of input modalities. Through extensive experiments, we demonstrate CoDi's remarkable capabilities in flexibly generating single or multiple modalities from a wide range of inputs. Our work marks a significant step towards more engaging and holistic human-computer interactions, establishing a solid foundation for future investigations in generative artificial intelligence.
**Limitations & Broader Impacts.** See Appendix D for the discussion.
## Acknowledgement
We would like to thank Bei Liu for HD-VILA-100M data support. We also thank Shi Dong, Mahmoud Khademi, Junheng Hao, Yuwei Fang, Yichong Xu and Azure Cognitive Services Research team members for their feedback.
## References
* [1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. _Advances in Neural Information Processing Systems_, 35:23716-23736, 2022.
* [2] Jie An, Songyang Zhang, Harry Yang, Sonal Gupta, Jia-Bin Huang, Jiebo Luo, and Xi Yin. Latent-shift: Latent diffusion with temporal shift for efficient text-to-video generation. _arXiv preprint arXiv:2304.08477_, 2023.
*
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Inputs** & **SIM-IT** & **SIM-AT** & **SIM-VT** & **SIM-VA** \\ \hline
**Two Joint Outputs** & & & & & \\ Audio \(\rightarrow\) Image+Text & 0.251 / **0.260** & & & & \\ Image \(\rightarrow\) Audio-Text & - & 0.244 / **0.256** & & & \\ Text \(\rightarrow\) Video+Audio & - & - & & 0.240 / **0.255** \\ Audio \(\rightarrow\) Video+Text & - & - & 0.256 / **0.261** & - \\ \hline
**Three Joint Outputs** & & & & & \\ Text \(\rightarrow\) Video+Image+Audio & 0.256 / **0.270** & 0.240 / **0.257** & & & 0.240 / **0.257** \\ \hline
**Multi-Inputs-Outputs** & & & & & \\ Text+Image \(\rightarrow\) Video+Audio & - & - & - & & 0.247 / **0.259** \\ \hline \hline \end{tabular}
\end{table}
Table 11: Similarity scores between generated modalities. The number on the left of “/” represents the similarity score of independent generation, and the right it represents the case of joint generation. Jointly generated outputs consistently show stronger coherence.
* [3] Yusuf Aytar, Carl Vondrick, and Antonio Torralba. Soundnet: Learning sound representations from unlabeled video. _Advances in neural information processing systems_, 29, 2016.
* [4] Max Bain, Arsha Nagrani, Gul Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 1728-1738, 2021.
* [5] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. _arXiv preprint arXiv:2304.08818_, 2023.
* [6] Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. _arXiv preprint arXiv:2303.12712_, 2023.
* [7] Sihan Chen, Xingjian He, Longteng Guo, Xinxin Zhu, Weining Wang, Jinhui Tang, and Jing Liu. Valor: Vision-audio-language omni-perception pretraining model and dataset. _arXiv preprint arXiv:2304.08345_, 2023.
* [8] Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. Unifying vision-and-language tasks via text generation. In _International Conference on Machine Learning_, pages 1931-1942. PMLR, 2021.
* [9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [10] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. Cogview: Mastering text-to-image generation via transformers. _arXiv preprint arXiv:2105.13290_, 2021.
* [11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10687-10696, 2021.
* [12] Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang. Clap: Learning audio concepts from natural language supervision. _arXiv preprint arXiv:2206.04769_, 2022.
* [13] Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models. _arXiv preprint arXiv:2302.03011_, 2023.
* [14] Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models. _arXiv preprint arXiv:2302.03011_, 2023.
* [15] Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. Make-a-scene: Scene-based text-to-image generation with human priors. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XV_, pages 89-106. Springer, 2022.
* [16] Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. Audio set: An ontology and human-labeled dataset for audio events. In _2017 IEEE international conference on acoustics, speech and signal processing (ICASSP)_, pages 776-780. IEEE, 2017.
* [17] Felix Gontier, Romain Serizel, and Christophe Cerisara. Automated audio captioning by fine-tuning bart with audioset tags. In _Detection and Classification of Acoustic Scenes and Events-DCASE 2021_, 2021.
* [18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [19] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. _arXiv preprint arXiv:2210.02303_, 2022.
* [20] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
** [21] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. _arXiv preprint arXiv:2204.03458_, 2022.
* [22] Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, and Jie Tang. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. _arXiv preprint arXiv:2205.15868_, 2022.
* [23] Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models. _arXiv preprint arXiv:2301.12661_, 2023.
* [24] Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim. Audiocaps: Generating captions for audios in the wild. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 119-132, 2019.
* [25]Eungbeom Kim, Jinhee Kim, Yoori Oh, Kyungsu Kim, Minju Park, Jaeheon Sim, Jinwoo Lee, and Kyogu Lee. Improving audio-language learning with mixgen and multi-level test-time augmentation. _arXiv preprint arXiv:2210.17143_, 2022.
* [26] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [27] Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. _Advances in Neural Information Processing Systems_, 33:17022-17033, 2020.
* [28] Sangho Lee, Jiwan Chung, Youngjae Yu, Gunhee Kim, Thomas Breuel, Gal Chechik, and Yale Song. Acav100m: Automatic curation of large-scale datasets for audio-visual video representation learning. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 10274-10284, 2021.
* [29] Chunyuan Li, Xiang Gao, Yuan Li, Baolin Peng, Xiujun Li, Yizhe Zhang, and Jianfeng Gao. Optimus: Organizing sentences via pre-trained modeling of a latent space. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 4678-4699, 2020.
* [30] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. _arXiv preprint arXiv:2301.12597_, 2023.
* [31] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In _Computer Vision-ECCV'2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXX 16_, pages 121-137. Springer, 2020.
* [32] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13_, pages 740-755. Springer, 2014.
* [33] Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, and Mark D Plumbley. Audioldm: Text-to-audio generation with latent diffusion models. _arXiv preprint arXiv:2301.12503_, 2023.
* [34] Jianjie Luo, Yehao Li, Yingwei Pan, Ting Yao, Jianlin Feng, Hongyang Chao, and Tao Mei. Semantic-conditional diffusion networks for image captioning. _arXiv preprint arXiv:2212.03099_, 2022.
* [35] Ron Mokady, Amir Hertz, and Amit H Bermano. Clipcap: Clip prefix for image captioning. _arXiv preprint arXiv:2111.09734_, 2021.
* [36] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. _arXiv preprint arXiv:2112.10741_, 2021.
* [37] Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. Automatic prompt optimization with gradient descent and beam search. _arXiv preprint arXiv:2305.03495_, 2023.
* [38] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_, pages 8748-8763. PMLR, 2021.
* [39] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9, 2019.
* [40] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_, 2022.
* [41] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022.
* [42] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade W Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa R Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. LAION-5b: An open large-scale dataset for training next generation image-text models. In _Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track_, 2022.
* [43] Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, and Cordelia Schmid. End-to-end generative pretraining for multimodal video captioning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 17959-17968, 2022.
* [44] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, et al. Make-a-video: Text-to-video generation without text-video data. _arXiv preprint arXiv:2209.14792_, 2022.
* [45] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _International Conference on Machine Learning_, pages 2256-2265. PMLR, 2015.
* [46] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In _International Conference on Learning Representations_, 2021.
* [47] Zineng Tang, Jaemin Cho, Yixin Nie, and Mohit Bansal. TVLT: Textless vision-language transformer. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022.
* [48] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. _arXiv preprint arXiv:2205.14100_, 2022.
* [49] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. _arXiv preprint arXiv:2202.03052_, 2022.
* [50] Chenfei Wu, Lun Huang, Qianxi Zhang, Binyang Li, Lei Ji, Fan Yang, Guillermo Sapiro, and Nan Duan. Godiva: Generating open-domain videos from natural descriptions. _arXiv preprint arXiv:2104.14806_, 2021.
* [51] Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, and Nan Duan. Niwa: Visual synthesis pre-training for neural visual world creation. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVI_, pages 720-736. Springer, 2022.
* [52] Haiyang Xu, Qinghao Ye, Ming Yan, Yaya Shi, Jiabo Ye, Yuanhong Xu, Chenliang Li, Bin Bi, Qi Qian, Wei Wang, et al. mplug-2: A modularized multi-modal foundation model across text, image and video. _arXiv preprint arXiv:2302.00402_, 2023.
* [53] Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, and Humphrey Shi. Versatile diffusion: Text, images and variations all in one diffusion model. _arXiv preprint arXiv:2211.08332_, 2022.
* [54] Hongwei Xue, Tiankai Hang, Yanhong Zeng, Yuchong Sun, Bei Liu, Huan Yang, Jianlong Fu, and Baining Guo. Advancing high-resolution video-language representation with large-scale video transcriptions. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5036-5045, 2022.
* [55] Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, et al. i-code: An integrative and composable multimodal learning framework. _arXiv preprint arXiv:2205.01818_, 2022.
* [56] Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yanpeng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, and Yejin Choi. Merlot reserve: Neural script knowledge through vision and language and sound. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 16375-16387, 2022.
* [57] Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. Merlot: Multimodal neural script knowledge models. _Advances in Neural Information Processing Systems_, 34:23634-23651, 2021.
* [58] Ziqi Zhang, Yaya Shi, Chunfeng Yuan, Bing Li, Peijin Wang, Weiming Hu, and Zheng-Jun Zha. Object relational graph with teacher-recommended learning for video captioning. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 13278-13288, 2020.
* [59] Zixin Zhu, Yixuan Wei, Jianfeng Wang, Zhe Gan, Zheng Zhang, Le Wang, Gang Hua, Lijuan Wang, Zicheng Liu, and Han Hu. Exploring discrete diffusion models for image captioning. _arXiv preprint arXiv:2211.11694_, 2022.
Model Architecture and Configuration
### Overview
In this section, we provide more details on the model architecture as shown in Table 12, where each modality specific diffuser is based on UNet architecture with different variations detailed in the table. Another notable difference is the video architecture where we add temporal attention and temporal shift as discussed in Section 3.3 and we will discuss its detail in the next section.
### Video LDM Architecture
Except for the base image UNet architecture, we also add temporal attention and temporal shift [2] before each residual block. Following VDM [21], the temporal attention is a transformer attention module where we flatten the height and width dimension to batch size dimension and the self-attention is performed on the time dimension. The temporal shift is illustrated in Fig. 6 where we first split channels into \(k\) chunks. Then, we shift the channel dimension numbered 0 to \(k-1\) by temporal dimension from 0 to \(k-1\) times respectively. Eventually, we concatenate the shifted chunks by the hidden dimension. Note that we use \(k=3\) in the illustration for simplicity but \(k=8\) in our implementation. We then add a convolution layer before the temporal shift module. Finally, we use residual connection [18] and add the output to the input before the convolution layer. The complete video UNet layer is shown in Fig. 7.
## Appendix B Model Training
Prompt Encoders Training.As discussed in Section 3.2, we use bridging alignment to perform contrastive learning between all prompt encoders. We use Adam [26] optimizer with learning rate 1e-4 and weight decay 1e-4.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Modality** & **Video (Image) LDM** & **Audio LDM** & **Text LDM** \\ \hline
**Hyperparameter** & & & \\ \hline Architecture & & LDM & LDM \\ z-shape & 4 \(\times\) \#frames \(\times\) 64 \(\times\) 64 & 8 \(\times\) 256 \(\times\) 16 & 768 \(\times\) 1 \(\times\) 1 \\ Channels & 320 & 320 & 320 \\ Depth & 4 & 2 & 2 \\ Channel multiplier & 1,2,4,4 & 1,2,4,4 & 1,2,4,4 \\ Attention resolutions & 64,32,16 & 64,32,16 & 64,32,16 \\ Head channels & 32 & 32 & 32 \\ Number of heads & 8 & 8 & 8 \\ CA embed dim & 768 & 768 & 768 \\ CA resolutions & 64,32,16 & 64,32,16 & 64,32,16 \\ Autoencoders & AutoKL & AudioLDM & Optimus \\ Weight initialization & Stable Diffusion-1.4 & - & Versatile Diffusion \\ Parameterization & \(\epsilon\) & \(\epsilon\) & \(\epsilon\) \\ Learning rate & \(2e-5\) & \(5e-6\) & \(5e-5\) \\ Total batch size & 256 & 1024 & 1024 \\ \hline
**Diffusion Setup** & & & \\ \hline Diffusion steps & 1000 & 1000 & 1000 \\ Noise schedule & Linear & Linear & Linear \\ \(\beta_{0}\) & 0.00085 & 0.00085 & 0.00085 \\ \(\beta_{T}\) & 0.0120 & 0.0120 & 0.0120 \\ \hline
**Sampling Parameters** & & & \\ \hline Sampler & DDIM & DDIM & DDIM \\ Steps & 50 & 50 & 50 \\ \(\eta\) & 1.0 & 1.0 & 1.0 \\ Guidance scale & 2.0 & 7.5 & 2.0 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Hyperparameters for our diffusion models. Note the video and image generation uses the same diffuser.
Diffusion Model Training.We train diffusion model with training objectives and hyperparameters detailed in Table 1 and Table 12. For video LDM, we adopt a more specific training curriculum. We adopt curriculum learning on frame resolution and frames-per-second (FPS). First, the diffuser is trained on the WebVid dataset of a 256-frame resolution, with the training objective being text-conditioned video generation. The training clips are sampled from 2-second video chunks with 4 FPS. Second, the model is further trained on HDVILLA and ACAV datasets, with a 512-frame resolution and 8 FPS, and the training objective is image-conditioned video generation (the image is a randomly sampled frame of the clip). Each training clip contains 16 frames sampled from a 2-second video chunk with 8 FPS.
Joint Generation Training.As discussed in Section 3.2, we train joint generation by aligning environment encoders and optimize cross-attention layers only in the diffusion models. We use Adam optimizer with learning rate 1e-5 and weight decay 1e-4.
## Appendix C Training Datasets
In this section, we introduce more details about the video and audiovisual training datasets.
Video.WebVid [4] is a large-scale dataset of web videos with diverse content, spanning over 40 categories such as sports, cooking, and travel. It contains over 1.2 million video clips (all without sound) that are all at least 30 seconds in duration with video descriptions. We perform **text\(\rightarrow\)video** and video-text contrastive learning task with this dataset. HD-Villa-100M [54] is a large-scale video dataset with over 100 million video clips sourced from YouTube. The dataset covers a wide range of video categories and includes high-quality videos with a resolution of at least 720P. Since it lacks
Figure 6: Temporal shift [2] illustration. \(C\), \(H\), \(W\) represent channel, height, width, respectively. The vertical line represents time steps from \(t-1\), \(t\), and \(t+1\). The grey blocks denote “padding tensors”.
Figure 7: Video UNet layer architecture details including normalization & activation, 2D temporal attention, followed by temporal shift and 1D spatial convolution.
curated video description and we use the middle frame as image input to perform **image\(\rightarrow\)video** generation.
Audiovisual.SoundNet originally contains over two million sounds and spans a wide range of categories including music, animal sounds, natural sounds, and environmental sounds. We collected all currently accessible 1M videos.
## Appendix D Limitations & Broader Impacts
While the paper primarily focuses on the technical advancements and potential applications of CoDi, we also consider potential negative social impacts that could arise from the development and deployment of such technology. These impacts can include:
Deepfakes and Misinformation.As part of a common issue for generative AI models, the ability of CoDi to generate realistic and synchronized multimodal outputs also raises concerns about the creation and dissemination of deepfakes. Malicious actors could exploit this technology to create highly convincing fake content, such as fabricated videos or audio clips, which can be used for misinformation, fraud, or other harmful purposes.
Bias and Stereotyping.If the training data used for CoDi is biased or contains stereotypes, the generated multimodal outputs may also reflect these.
## Appendix E License
We will publicly release our code and checkpoints. We cite licenses from the individual dataset or package we use from the community and provide the following links for references.
**LAION-400M:** Creative Common CC-BY 4.0
**AudioSet:** Creative Common CC-BY 4.0
**AudioCaps:** MIT
**Freesound:** Creative Commons
**BBC Sound Effect:** The BBC's Content Licence
**SoundNet:** MIT
**Webvid10M:** Webvid
**HD-Villa-100M:** Research Use of Data Agreement v1.0
**PyTorch:** BSD-style
**Huggingface Transformers:** Apache
**Torchvision:** BSD 3-Clause
**Torchaudio:** BSD 2-Clause | ## Review
### Summary
The paper presents Composable Diffusion (CoDi), an innovative generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. CoDi uniquely addresses the challenge of lacking training datasets for many combinations by proposing a modality alignment approach in both the input and output space. This allows the model to condition on any input combination and generate diverse modalities, even those not present in the training data. Through a novel composable generation strategy, CoDi establishes a shared multimodal space, enabling synchronized generation of intertwined modalities. The paper reports strong performance in joint-modality generation, often matching or outperforming the state-of-the-art unimodal models.
### Strengths
- Originality: CoDi represents a significant advancement in multimodal generation by allowing simultaneous processing of diverse modalities.
- Quality: Extensive experiments demonstrate competitive performance with state-of-the-art models in tasks like image and video generation.
- Clarity: The paper is well-structured, with clear explanations and helpful figures that illustrate the model's capabilities.
- Significance: CoDi enables comprehensive human-computer interactions and has diverse applications in content creation and AI research.
- Flexibility: The framework allows for customization to accommodate various potential modalities.
- Novel Methodology: The proposed method addresses the challenge of aligning modalities without fully paired data.
- Promising Results: CoDi has shown promising results in joint-modality generation and offers high-quality outputs.
### Weaknesses
- Evaluation Metrics: The reliance on quantitative metrics like FID and CLIPSIM may not adequately capture perceptual quality; qualitative evaluations are recommended.
- Quality of Generated Results: Generated outputs, including videos and images, are often short and lack coherence, limiting practical applications.
- Preservation of Input Modality: The model does not consistently maintain the characteristics of input modalities in outputs, impacting coherence.
- Cross-Modality Benefits: The paper does not convincingly show that generation results improve with cross-modality conditions; some results indicate marginal or negative benefits.
- Omission of Baselines: Key comparisons, such as with StableDiffusion v1.5, are missing, limiting the comprehensiveness of the performance evaluation.
- Clarity of Method: The relationship among various diffusion models and architectures is not clearly explained, causing potential confusion.
- Insufficient Evaluation: Some evaluation tables lack comprehensive assessments, such as missing metrics for video quality.
### Questions
- Can you explain the choice of FID and CLIPSIM as primary evaluation metrics? Have qualitative evaluations been considered?
- What factors contribute to the short and discontinuous text and low-quality images? Are there modifications to improve these aspects?
- How does the model ensure the preservation of the identity of input modalities in outputs?
- Can you provide more evidence or clarification on the benefits of cross-modality conditions?
- Why was the StableDiffusion v1.5 baseline omitted from comparisons?
- Does the model's performance suffer due to the requirement to handle multiple tasks simultaneously?
- Can you clarify the parameter sharing between image and video diffusion models?
### Soundness
**Score:** 3
**Description:** 3 = good: The paper presents a solid and innovative approach, although some evaluation metrics and results need further validation.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-structured and clear, but some aspects of the methodology could be better articulated.
### Contribution
**Score:** 3
**Description:** 3 = good: The contribution is significant, offering a novel approach to multimodal generation with potential for impactful applications.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact potential, though it has some concerns regarding evaluation and clarity.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an original and significant contribution to the field of multimodal generation, demonstrating strong technical quality and potential impact. While there are notable weaknesses in evaluation metrics and clarity, the overall quality and novelty of the work justify acceptance, provided the authors address the raised concerns in the final version.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# ViSt3D: Video Stylization with 3D CNN
Ayush Pande
IIT Kanpur
[email protected] &Gaurav Sharma
TensorTour & IIT Kanpur
[email protected]
###### Abstract
Visual stylization has been a very popular research area in recent times. While image stylization has seen a rapid advancement in the recent past, video stylization, while being more challenging, is relatively less explored. The immediate method of stylizing videos by stylizing each frame independently has been tried with some success. To the best of our knowledge, we present the first approach to video stylization using 3D CNN directly, building upon insights from 2D image stylization. Stylizing video is highly challenging, as the appearance and video motion, which includes both camera and subject motions, are inherently entangled in the representations learnt by a 3D CNN. Hence, a naive extension of 2D CNN stylization methods to 3D CNN does not work. To perform stylization with 3D CNN, we propose to explicitly disentangle motion and appearance, stylize the appearance part, and then add back the motion component and decode the final stylized video. In addition, we propose a dataset, curated from existing datasets, to train video stylization networks. We also provide an independently collected test set to study the generalization of video stylization methods. We provide results on this test dataset comparing the proposed method with 2D stylization methods applied frame by frame. We show successful stylization with 3D CNN for the first time, and obtain better stylization in terms of texture cf. the existing 2D methods. Project page: [https://ayush202.github.io/projects/ViSt3D.html](https://ayush202.github.io/projects/ViSt3D.html)
## 1 Introduction
Image stylization is the task of transforming an input _content_ image into a stylized image with the style taken from another input _style_ image. It has been a very popular research topic as well as a widely used image editing tool in the recent past. Pictures taken by many users have been transformed into the styles of Picasso and Monet. While image stylization with 2D CNNs has been hugely popular [4, 9, 11, 16], stylization of videos has been relatively less explored. Some recent works on image stylization also show results on video stylization by doing frame by frame image stylization, e.g. [4, 11, 18], and have achieved some success. 3D CNN architectures designed for video processing, to the best of our knowledge, have not yet been made amenable for video stylization at all. Since the 3D convolutions used in the 3D CNNs, inherently combine the spatial and temporal information, the two factors of variations are deeply entangled in the resulting representations. This makes video stylization with 3D CNNs extremely challenging as the style is usually for the appearance, i.e. indicated by a single image, and modifying the video representation to incorporate style in the content, but not in motion is a very challenging task. Naive extension of 2D stylization, e.g. directly transferring feature statistics from style to content features using AdaIN [11] operation with 3D CNNs intermediate features, leads to unusable video output, with intermittent slow motion and jerky transitions due to the style input being static (see supplementary material for representative results).
We address the challenging task of video stylization using 3D CNNs directly. We propose an architecture and training method which takes a content video clip and a style image, and produces an aesthetically pleasing stylized video clip. We do so by disentangling the appearance and motionfeatures from the encoder network and applying AdaIN operation extended for 3D CNN features, coupled with feature map reconstruction loss and style statistics losses. We show how to directly extend AdaIN operation to work with 3D CNN based encoders and decoders, which we call AdaIN3D, and train the network successfully despite the challenges with highly entangled appearance and motion information. All the sub-networks used in the proposed architecture are based on 3D CNN. Since a dataset was not available for the task of studying video stylization, we also propose a large-scale dataset with \(10,000\) content clips curated from public benchmark Sports1M [14], paired with the train set of style images from WikiArt dataset [20] which are used to train 2D stylization methods. We show qualitatively that the proposed method produces better results than frame by frame stylization using state-of-the-art 2D image stylization methods. We also give quantitative results based on optical flow errors, comparing the results of the proposed method to that of the state-of-the-art 2D stylization methods. Our contributions can be summarized as follows.
* We propose a novel method for video stylization using 3D CNN, and we are the first to report that stylization with 3D CNN is possible.
* We propose a novel approach to disentangle the appearance and motion features, which are deeply entangled in representations learnt by 3D CNN networks like C3D [25].
* We curated the video dataset from the public Sports1M dataset [14] by filtering videos with motion automatically and use it for training our video stylization network.
* We use existing loss functions and design new ones to encourage learning of artifact free video stylization.
* We provide exhaustive comparisons with existing methods, detailing the differences.
Overall, since visual stylization is effectively a subjective creative filter, we hope to add one more tool to the video creatives toolbox.
## 2 Related works
Image stylization.Gatys et al. [8] proposed an approach to generate textures using CNN and optimization based technique. They used the loss function over Gram matrices on feature maps for maintaining feature map statistics and hence generating required textures. In their next work, Gatys et al. [9] used a similar optimization based approach for stylizing content image with the texture of a style image. Such optimization based approach is quite slow, and the training has to be performed individually for every new pair of images. The next generation of image stylization approaches then turned to feedforward CNNs, e.g. [13; 26], for speed and avoiding retraining on every test style image. However, they still could not be used with arbitrary style images on the fly, i.e. one network needs to be trained for each style image. Finally, the current generation of image stylization methods emerged that were designed for universal image stylization [11; 17; 28], i.e. the network is trained once, and then is used for transferring style from an arbitrary style image, fed along with the content image as input at test time. AdaIN [11] was a very successful step towards universal style transfer because of the simplicity of the network training and parameter free transfer of content into the style space using the AdaIN operation. Among other works, Li et al. [17] used whitening and color transforms for style transfer. However, such transformations produce unsatisfactory results, potentially loosing/modifying content in the output image. SANet [19] tried to introduce more style into the network but traded off the content from the original content image.
Video stylization.All the image stylization methods can be applied to videos by processing the videos frame by frame. Video stylization when done with image stylization methods such as AdaIN [11], WCT [17] etc., leads to stylized videos which suffer from various temporal and spatial inconsistencies. Chen et al. [2] and Gao et al. [7] hence proposed temporal consistency constraints to address these. Methods for video stylization include MCCNet [3] and AdaAttn [18]. MCCNet [3] mainly focused on video stylization, while AdaAttn [18] addressed image stylization using attention and extended the method for videos as well. Both methods lead to stylized video with some degree of style transfer, but increasing the style transfer introduces artifacts quickly. We compare with both of these in our experiments. Wang et al. [27] added optical flow to improve video stylization with 2D CNNs, and here we also use optical flow-based loss but with 3D CNN directly. We observed that there is noticeable flickering in some examples generated by Wang et al. [27]. Also, it smooths out the low-level textural details, while the proposed method does not have heavy flickering, and it also maintains the low-level textural details.
Stylization with 2D and 3D CNNs.All the above methods mainly work with 2D CNNs, e.g. [10; 15; 21; 24], for encoding the content and style images, and propose methods for transferring style by matching statistics of the intermediate feature maps. Videos are inherently 3D, and 3D CNNs, e.g. [1; 25], have been proposed for learning with videos. There has been no work on video stylization with 3D CNN, and in this paper we report the first successful network and training method to do stylization with 3D CNN. Other than breaking new ground, we also show that the proposed method produces richer texture in the stylized video, and since stylization is contextual and subjective, it is at the very least, another useful tool for video creatives.
## 3 Approach
We now present the proposed network architecture and training method for doing video stylization with 3D CNNs. Since we need to perform the challenging task of disentangling the motion and appearance components, the method is trained in multiple phases. In the following, we first describe the full architecture used at inference time in Section 3.1, which takes the content video clip as well as the style clip (the style image repeated \(K\) times for clips of size \(K\)), giving details of each subnetwork. Then, we explain the losses and the multi-phase training method in Section 3.3.
### Overall network
The task addressed here is to do video stylization with a reference style image. The network takes as input the content clip (which needs to be stylized) and the reference style image, and outputs the corresponding stylized video.
Figure 1 shows the full network used for stylizing input video clip, with an input style image. Along with the input video clip, the network takes a style clip as input. The style clip is constructed by repeating the style image \(K\) number of times, where \(K\) is the clip size for the input to the 3D CNN encoder. Both the content and style clips are first encoded with a 3D CNN encoder. The outputs of the encoder, i.e. final layer feature maps and the intermediate layer feature maps, are then fed to appearance subnetworks, which are responsible for extracting the appearance features only, disentangling them from the motion features. We will detail the method for achieving such disentanglement in the following sections. Once the outputs from appearance subnetworks are
Figure 1: Full architecture for the proposed network for video stylization. The network consists of 3D CNN based encoder and decoder networks, along with two other critical parts: (i) appearance subnets and (ii) Entangle layer. The appearance subnets are trained to disentangle appearance and motion components from the full 3D CNN feature and extract out the appearance part only. The entangle layer is trained to do the opposite, i.e. given the (stylized) appearance features as well as the full 3D CNN features, it re-introduces the motion component in the stylized features to enable them to be decoded into a stylized video by the decoder. The contribution of this paper is to report successful video stylization with 3D CNN directly, cf. 2D CNNs of existing works, using the above network, and a multi-phase training method to train the individual sub-networks proposed.
available for both content and style clips, we perform the AdaIN3D (extension of AdaIN[11] operation to 3D CNN features, described in Section 3.2.3 below) operation for transferring feature statistics from the style clip features to the content clip features. Finally the intermediate layer features are directly fed to the decoder, which is also a 3D CNN. The final layer features after the AdaIN3D operation are then aggregated with the final layer feature maps from the 3D CNN encoder. This step is for re-inducing the motion features into the appearance features which were disentangled by the appearance subnetworks. These aggregated features are then fed into the Entangle Subnet, the output of this subnet is passed as input to the decoder which produces the desired stylized output clip. Effectively, these steps can be seen as, (i) disentangle the appearance features from the full features which contains appearance and motion both, (ii) stylize the appearance features only, and finally (iii) re-induce the motion component into the stylized appearance features and decode them to obtain the stylized clip.
### Details of network components
Now, we will explain each component of the full architecture shown in the Figure 1. Please refer to the supplementary material for the complete code-level details of the appearance, Entangle, and the 3D Decoder subnetworks.
#### 3.2.1 Normalized encoder
Our encoder is a state-of-the-art 3D CNN, i.e. C3D [25], which was originally proposed for the task of human action classification. It takes an input clip with a fixed number of \(K\) frames, and then successively forward passes them through several 3D convolution and normalization layers. In the proposed network, this network is used twice with weight sharing, taking as input the content clip and the style clip respectively. The style image is repeated \(K\) times to make a static video clip to input to the network, as using a 2D CNN to transfer feature statistics onto 3D CNN based content clip features for stylization did not give good results in initial experiments, as might be expected.
As we do feature statistics transfer between the content and the style feature vectors using AdaIN3D operation, we require the features of the layers of the C3D network to be of similar scale. This has been reported and solved before by doing network normalization by Gatys et al. [8]. Hence, we normalize the weights of our 3D encoder network using the UCF-101 dataset [22] following the method proposed by Gatys et al. [8].
#### 3.2.2 Appearance subnets
In initial experiments, we attempted to do stylization by performing AdaIN3D operation between the features of content clip and style clip, as obtained from the C3D encoder. However, we noticed "slow motion" artifacts in the stylized video. We attribute those to the fact that motion and appearance are entangled in the C3D features, as observed and qualitatively visualized in the original C3D paper [25]. When we did stylization with a static style video (made by repeating style image), it led to inducing the static motion in the stylized clip. Hence, to do stylization with 3D CNN, we need to disentangle the appearance part from the features and stylize only those with the style image, and then recombine them with motion and decode them to the output stylized clip.
Hence, to extract the appearance component of the features from the C3D features, we use appearance subnetworks, which are themselves sequences of 3D convolution followed by ReLU layers. We use four appearance subnetworks which work with four different intermediate levels of the C3D decoder. We learn these networks in a different training phase, which we describe below when we detail the training procedure in Section 3.3.
#### 3.2.3 AdaIN3D
The AdaIN operation proposed in AdaIN [11] was designed to work on features of 2D CNN. We propose a simple extension, for it to work with 3D features, as follows.
\[\text{AdaIN3D}(x,y)=\sigma(y)\left(\frac{x-\mu(x)}{\sigma(x)}\right)+\mu(y), \tag{1}\]\[\mu_{nc}(x) =\frac{1}{THW}\sum_{t=1}^{T}\sum_{h=1}^{H}\sum_{w=1}^{W}x_{ncthw}, \tag{2}\] \[\sigma_{nc}(x) =\sqrt{\frac{1}{THW}\sum_{t=1}^{T}\sum_{h=1}^{H}\sum_{w=1}^{W} \left(x_{ncthw}-\mu_{nc}(x)\right)^{2}+\epsilon}, \tag{3}\]
where \(x\) represents the content clip and \(y\) represents the style clip, and \(n,c,t,h,w\) denote batch size, feature channels, temporal, height and width dimensions respectively. The mean and standard deviations of the style features in space and temporal dimensions are transferred to that of the content features.
#### 3.2.4 Entangle subnetwork
In order to recover both motion and appearance details in the stylized video, we require an additional subnetwork to re-induce motion information to the appearance subnetwork features. To achieve this, we use an entanglement network which comprises of single pair of 3D Conv and ReLU layers. The input to the entangle layer is a weighted combination of the appearance subnet \(4\)'s output and the final output of the C3D encoder, i.e.,
\[x_{e}=\text{concat}(0.7x_{a},0.3x_{m}), \tag{4}\]
where, \(x_{a}\) is the appearance feature map output of appearance subnet \(4\) and \(x_{m}\) is the feature map output of the relu4_1 layer of the C3D network.
#### 3.2.5 Decoder
The decoder is a 3D CNN used to generate the desired stylized video clip. It comprises of 3D Conv, ReLU, ReflectionPad3d and Upsample layers. The input of the decoder is the output of the Entangle Subnet, and the outputs of the Appearance Subnets are added to the decoder as skip connections by concatenation along the channel dimension of the decoder's appropriate intermediate feature maps.
### Training
We propose a multi-phase training for learning the different subnetworks of the proposed network. In each phase, we train one or more subnetworks while keeping the other subnetworks fixed. We first detail the different losses used, and then explain the different phases of the training.
#### 3.3.1 Losses used for training
We use different combinations of losses during different phases of training. In this section we give details of the losses, and specify which losses are used in which phases in the respective sections describing the training phases below.
The first loss we use is the standard \(\ell_{2}\) reconstruction loss given by,
\[\mathcal{L}_{\text{reconstruction}}=\|O_{r}-I_{c}\|_{2} \tag{5}\]
where \(O_{r}\) and \(I_{c}\) are the output reconstructed and input content clips respectively. This loss minimizes the \(\ell_{2}\) distance between the input and the reconstructed content clip in the RGB pixel space.
Second loss we use is the \(\ell_{2}\) loss on the output feature maps of the intermediate layers of the VGG-19 network, i.e.,
\[\mathcal{L}_{\text{appearance}}=\sum_{i}\|\phi_{i}(O_{r})-\phi_{i}(I_{c})\|_{2} \tag{6}\]
where, \(\phi_{i}\) is the feature map of layer \(i\) of the ImageNet pretrained VGG-19 network. The intermediate layers of VGG-19 used, indexed by \(i\), are relu1_1, relu2_1, relu3_1 and relu4_1.
Most of the state-of-the-art style transfer methods [4, 11, 18] used features from the VGG-19 network pretrained on the ImageNet dataset. Since the VGG-19 features of individual frames only contain the appearance information, we use loss based on VGG-19 features to help disentangle appearance features from motion features in the C3D encoder. Further, we use content and style losses similar to AdaIN [11]. The content loss is given by
\[\mathcal{L}_{\text{content}}=\|\phi_{i}(O)-\phi_{i}(I_{c})\|_{2},\quad i=\texttt{ relu4\_1}, \tag{7}\]
where \(O\) and \(I_{c}\) is the output stylized and input content clip respectively and the style loss is given by
\[\mathcal{L}_{\text{style}}=\sum_{i}\|\mu(\phi_{i}(O))-\mu(\phi_{i}(I_{s}))\|_{2 }+\|\sigma(\phi_{i}(O))-\sigma(\phi_{i}(I_{s}))\|_{2}, \tag{8}\]
where \(i\) indexes over the relu1_1, relu2_1, relu3_1, relu4_1 layers of the VGG-19 network, \(I_{s}\) is the input style clip, and \(\mu(\cdot)\) and \(\sigma(\cdot)\) are the mean and standard deviation of the feature maps.
We also use temporal loss, which is inspired by Gao et al. [6], and is given by
\[\mathcal{L}_{\text{temporal}}=\sum_{t=2}^{L}\sum_{j}\frac{1}{D_{t}}\|(O_{t}-W _{t}(O_{t-1}))_{j}-(I_{c,t}-W_{t}(I_{c,t-1}))_{Y}\|^{2}, \tag{9}\]
where \(j\in\{R,G,B\}\) is each of the RGB channels of the image, and \(Y\) is the relative luminance channel. \(O_{t-1}\) and \(O_{t}\) are the previous and the current stylized output frame, \(I_{c,t-1}\) and \(I_{c,t}\) are the previous and the current content input frame, \(D_{t}=H\times W\) where \(H,W\) are the height and width, respectively, of the input/output frames, and \(W_{t}\) is the forward optical flow function, which is computed using FlowNet2.0 [12]. The purpose of using this loss is to maintain similar optical flow characteristics in each channel of the output frames, as that in the luminance channel of the input frame, and achieve motion consistency between frames.
Further, we propose an intra-clip loss given by,
\[\mathcal{L}_{\text{intra}}=\sum_{i}\sum_{t=2}^{L}||\mu(\phi_{i}(O_{t}))-\mu( \phi_{i}(O_{t-1}))||_{2}+\sum_{i}\sum_{t=2}^{L}||\sigma(\phi_{i}(O_{t})-\sigma (\phi_{i}(O_{t-1}))||_{2}, \tag{10}\]
where, \(i\) indexes over the layers of the VGG-19 network as above, \(O\) is the stylized clip, \(t\) represents the frame number in the stylized clip, \(\mu(\cdot)\) and \(\sigma(\cdot)\) are the mean and standard deviation of the feature map \(\phi_{i}(\cdot)\). This loss encourages the feature statistics to be similar between successive frames of the stylized clip. By ensuring that, we mitigate the effect of the frames being stylized differently, even when the style image is the same, and avoid jerky changes in color and intensity.
#### 3.3.2 First phase: Autoencoder training
The first phase of training is to train the C3D encoder and decoder for video reconstruction task. The original C3D network was proposed for a discrete classification task, while the task here is of dense
Figure 2: The networks and settings for the different phases of training. The light gray subnetworks are fixed, while the different dark gray subnetworks are trained during the respective phases.
prediction of stylized output video. To have a suitable initialization for the encoder and decoder, we pretrain them with reconstruction loss, Eq. 5, on our training dataset of videos, with the network shown in Figure 1(a).
#### 3.3.3 Second phase: Appearance subnetworks training
In the second phase, we keep parameters of all appearance subnetworks as well as the decoder trainable, while keeping the parameters of the encoder fixed. The network architecture is shown in Figure 1(b). We pass the features from different intermediate layers of the encoder to the corresponding appearance subnetworks for extracting out the multi-scale appearance information from the entangled feature maps of the encoder. The output of the appearance subnetwork \(4\) is passed as the input to the decoder while outputs of other appearance subnetworks are passed as skip connections, concatenated channel-wise, to the decoder. We initialize the decoder with the weights learned from the first phase for the second phase. We use ImageNet pretrained VGG-19 intermediate feature maps reconstruction loss, Eq. 6, during this phase. This ensures that the appearance subnetworks together with the decoder capture the appearance part of the input video clip, by reconstructing the VGG-19 feature maps. Since this loss is applied independently for each frame of the input clip, it mainly captures the appearance information and discounts the motion information, leading to disentanglement of the appearance and motion features by the appearance subnetworks.
The disentanglement of motion and appearance, by the four appearance subnets, happens here in the phase \(2\) training. In this phase, we keep the 3D CNN encoder fixed, so the features which are input to the appearance subnets have entangled motion and appearance information. We train the network to minimize the VGG-19 feature loss for each frame, and hence only remember appearance information in the frames. When trained like that, the appearance subnets only gate the appearance information from the 3D encoder features as only that is required to minimize the loss, which is applied independently to each frame and, thus, has no dependence on motion. The evidence for such disentanglement is indirect, as when we do not do this, we get motion artifacts which we explain by the static nature of the style clip (constructed by repeating the style image), but when we do this, those motion artifacts disappear indicating that the features and the AdaIN3D based statistics transfer, only affected the appearance and not the motion.
#### 3.3.4 Third phase: Entangle subnetwork training
In the third phase, we keep the decoder and the entangle subnetwork parameters trainable, while keeping the parameters of all appearance subnetworks as well as the encoder fixed. We initialize the decoder from scratch for this phase as the input features are different from the previously trained decoders. We use reconstruction loss for this phase. The network architecture is shown in Figure 1(c). The aim of this phase of training is to learn to re-induce motion information into the appearance features.
#### 3.3.5 Fourth and final phase: 3D decoder training
In the final phase, the full network is used, and only the decoder parameters are trained. We use the decoder trained in the third phase as the initialization for the decoder in this phase. All the other modules, i.e. encoder, all four appearance subnetworks, and entangle subnetwork, are kept fixed. The aim is to learn a final decoder which is able to do stylization. The combination of losses gives the full loss used for this phase, Eq. 7, 8, 9, 10,
\[\mathcal{L}=\sum_{i}\lambda_{i}\mathcal{L}_{i},\text{ where, }i\in\{\text{ content, style, temporal, intra}\}, \tag{11}\]
i.e. this phase learns to do stylization while keeping intra-clip consistency, preserving overall content, and transferring style. This phase completes our training.
## 4 Experiments
We now describe the experimental setup, and the qualitative and quantitative results comparing the proposed method with existing state-of-the-art approaches.
### Implementation Details
**Dataset.** A video dataset was not available in the existing style transfer literature for video stylization. Hence, we propose a new dataset curated from existing public benchmark datasets, to train for video stylization task. We will make the links and all details of the dataset public for use by the community.
Since, most of the challenge in video stylization stems from motion in the videos, we explicitly construct the dataset such that the clips have a high amount of motion, and are not just static repeated images. On analysing some videos downloaded we saw that some videos were simply repetition of a single static image, and were thus not appropriate.
We downloaded \(45,574\) videos from the first \(100,000\) URLs in the Sports1M dataset [14] (as many links were now defunct, and some downloads failed). We then generated \(16\) frame clips with a high amount of motion with the following steps. First, we randomly sampled a video from the 45K videos dataset, and used deep learning based shot detection by Soucek et al. [23] to split the video into shots. We then randomly sample a \(16\) frame clip and computed the Farneback optical flow [5] of the frames of the sampled clip. We then selected the clips which have an average optical flow value above a threshold to be in the proposed dataset. Overall, we used \(10,000\) content clips generated from the above procedure as part of the dataset for training. In addition to the content clips, we use images from the WikiArt [20] as the style images, as has been done by the image stylization methods as well.
**Input.** We set clip size \(K\) to \(16\). While training we extract random \(128\times 128\) size same patch from consecutive frames of a video clip in case of content clip. For, style clip we extract a random \(128\times 128\) patch from the style image and repeat it \(16\) times which is equal to the length of the content clip. During testing, we pass the video in chunks of successive \(16\) frames clips.
**Parameters.** In the first phase of training, we train the encoder and decoder for \(10\)k iterations. We used Adam optimizer with a learning rate of \(10^{-4}\) with a decay rate of \(5\times 10^{-5}\) after every iteration for this phase of training. In the second phase of training, we train the decoder, appearance subnets for \(5\)k iterations. We used Adam optimizer with a learning rate of \(10^{-4}\) with a decay rate of \(5\times 10^{-5}\) after every iteration for this phase of training. In the third phase of training, we train the decoder and entangle subnet for \(100\)k iterations. We used Adam optimizer with a learning rate of \(10^{-4}\) with a decay rate of \(5\times 10^{-5}\) after every iteration for this phase of training. In the final phase training, we train the decoder for \(40\)k iterations with content, style loss where values of \(\lambda_{c}\), \(\lambda_{s}\) is \(1,2\) respectively. After this, we train the decoder with content, style, temporal losses for \(160\)k iterations where values of \(\lambda_{\text{content}}\), \(\lambda_{\text{style}}\), \(\lambda_{\text{temporal}}\) being \(1,2,10\) respectively. And finally, we fine-tune the decoder for \(40\)k iterations with content, style, temporal, intra-clip loss where values of \(\lambda_{\text{content}}\), \(\lambda_{\text{style}}\), \(\lambda_{temporal}\), \(\lambda_{\text{intra}}\) being \(1,2,10,10\) respectively.
### Ablation Studies
We discuss the takeaways from the ablation experiments we performed. It is difficult to show the video related effects with images. Kindly, see the video results in the supplementary materials corresponding to these discussions.
**Disentangling appearance and motion using appearance subnetworks.** We applied AdaIN3D operation directly to the encoded tensor of the content and style clips by the C3D network without using any appearance (disentanglement) subnetworks. However, we observed jerky motion in the generated videos, which followed a repeated pattern across the clips. Upon using appearance subnetworks, such motion artifacts were not observed anymore. The supporting videos are in supplementary material folder Videos_with_naive_extension_of_2D_stylization.
**Temporal loss.** When trained without temporal loss, the stylized video has two kinds of artifacts. First, color consistency across subsequent frames suffers as we transfer style with appearance subnetworks which remove motion component. Second, there is some loss of motion as well. Both of these artifacts are not observed, or are very mild once we use temporal loss. Supporting videos are in supplementary material folder Videos_without_temporal.
**Intra-clip loss.** We observe even after applying the temporal loss, there are some color inconsistencies leading to flashing like artifacts. However, when we use intra-clip loss such inconsistencies are addressed to a large extent. Supporting videos are in supplementary material folder Videos_without_intra.
### Qualitative results
Figure 3 shows some qualitative results of the proposed method compared to existing methods. We observe that the proposed method focuses more on low level texture of the style image while AdaAttn [18] tends to flatten out the textured regions consistently. Figure 4 shows that AdaAttn also tends to modify the content, while the proposed method gives better textured outputs closer to the original content. Please see supplementary material for video results. Overall the results produced by the proposed method are aesthetically pleasing, and can be used by video creatives as one of the filters available.
#### 4.3.1 User Study
We considered AdaIN [11], MAST [4] and two state-of-the-art methods, MCCNet [3] and AdaAttN [18], for comparison with our method. We randomly picked \(15\) content clips and \(15\) style images and considered random \(20\) stylized videos out of \(225\) possible combinations. We generated these stylized videos using all the compared methods. We then presented the stylized video of other methods along with that of our method side by side, but in random order, to the participants. We collected \(900\) votes from
\begin{table}
\begin{tabular}{l r r r} \hline Method & Content & Style & Overall \\ \hline AdaIN & 2.3 & 16.3 & 4.3 \\ MAST & 1.0 & 15.7 & 11.0 \\ MCCNet & 12.3 & **31.3** & 27.7 \\ AdaAttN & 41.0 & 20.0 & 28.3 \\ Ours & **43.3** & 16.7 & **28.7** \\ \hline \end{tabular}
\end{table}
Table 1: User study preferences(%). **Bold**, red, blue represent rank 1, 2 and 3 respectively.
Figure 4: Effect on content of different methods. First column is the style image, and the rows in the remaining columns contain frames and zoom-ins of original content clip, AdaAttN [18] stylized clip and our method. Notice how AdaAttN (middle) modifies the content significantly, while the proposed method (right) tends to keep the content relatively preserved.
Figure 3: First column is the style image and subsequent columns are a pair of frames generated using different approaches, in the following order: AdaIN[11], MCCNet[3], AdaAttn[18] and Ours.
various participants for content preservation, degree of stylization and overall preference. The results in Table 1 show that our method leads for content preservation closely followed by AdaAttN [18]. In the case of the degree of stylization, MCCNet [3] leads us by a margin, but we argue that there is always a trade off between style transfer and content preservation, and so we can observe that MCCNet [3] performs poorly in the case of content preservation. Considering the overall preference in which participants have considered both content and style, our method performs better than all the other methods, proving our method's effectiveness as a successful alternate style transfer method.
### Quantitative Results
We computed Farneback optical flow [5] for the content video and the stylized video. We report the mean of the absolute difference between the flows of content video to that of corresponding flow in stylized video. This is an approximate measure to ensure that the stylization method is preserving the motion in the original clip. All the methods are expected to distort the motion in some ways while performing stylization, and the degree of such distortion will depend on the input video. We observe that the proposed method achieves competitive flow errors.
### Compute time
Since the proposed method is based on 3D CNNs, it is computationally heavier than the 2D CNN methods. To get a real world computation time estimate, we processed a video with 144 frames with resolutions of 640x360 pixels. AdaAttN took 14 seconds and 8GB of GPU memory, while our proposed method took 60 seconds and 16GB of GPU memory on a machine with Intel Core i9-10900X processor and Nvidia RTX A4000 GPU.
## 5 Conclusion
We proposed a novel architecture and training method for the task of video stylization given a reference style image. The method uses 3D CNNs exclusively, and is the first to report stylization with them. It achieves stylization by disentangling appearance information from the 3D CNN based encoder's features, stylizing them, and then re-combining them with the motion information before finally decoding to the desired stylized output video. The training is done in phases with different phases ensuring different capabilities for the subnetworks in the architecture. We also proposed a video dataset containing videos with high degree of motion for training video stylization networks. We showed qualitatively and quantitatively that the proposed method produces results which are competitive to the existing methods and produce different and aesthetically pleasing results. We hope the proposed method will result in another filter in video creative's toolbox.
## References
* [1] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2017.
* [2] Dongdong Chen, Jing Liao, Lu Yuan, Nenghai Yu, and Gang Hua. Coherent online video style transfer. In _IEEE International Conference on Computer Vision_, 2017.
* [3] Yingying Deng, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma, and Changsheng Xu. Arbitrary video style transfer via multi-channel correlation. In _AAAI Conference on Artificial Intelligence_, 2021.
* [4] Yingying Deng, Fan Tang, Weiming Dong, Wen Sun, Feiyue Huang, and Changsheng Xu. Arbitrary style transfer via multi-adaptation network. In _28th ACM International Conference on Multimedia_, 2020.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c c} \hline Method & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & Mean \\ \hline AdaAttN & 0.95 & 3.46 & 0.54 & 0.64 & 0.66 & 3.66 & 3.66 & 3.62 & 1.77 & 2.22 & 0.23 & 0.25 & 0.15 & 0.14 & 0.31 & 3.64 & 0.49 & 0.45 & 0.24 \\ MAST & 1.89 & 3.59 & 1.65 & 4.77 & 1.17 & 1.53 & 0.99 & 4.14 & 3.69 & 2.17 & 1.56 & 2.88 & 1.36 & 3.48 & 0.44 & 0.37 & 5.08 & 1.11 & 0.99 & 2.34 \\ MCNet & 0.59 & 2.21 & **0.33** & 0.35 & 0.43 & 0.43 & 0.49 & 2.49 & 1.07 & 1.51 & 0.38 & 0.36 & 1.36 & 0.59 & 0.14 & 0.44 & 0.59 & 0.21 & 1.53 \\ SAINT & 0.88 & 3.01 & 0.52 & 0.42 & 0.78 & 1.45 & 0.54 & 3.16 & 1.99 & 1.64 & **0.42** & 0.25 & 2.96 & 0.23 & 0.63 & 0.11 & 0.73 & 0.90 & 0.55 & 1.53 \\ AdaAttN & 0.39 & 1.96 & 2.90 & 2.30 & 0.39 & 0.40 & 2.25 & 1.77 & 1.00 & 0.83 & 5.11 & 2.91 & **0.07** & **0.68** & **0.24** & **2.09** & 0.24 & 1.26 \\ \hline Ours & 0.65 & 3.98 & 1.36 & 0.37 & 0.35 & 0.65 & 0.85 & 2.79 & 2.92 & 1.64 & 0.70 & 0.53 & 0.44 & 2.01 & 0.15 & 0.19 & 0.49 & 4.29 & 0.53 & 0.53 & 1.63 \\ Ours+temporal & **0.37** & **1.89** & 0.31 & **2.45** & **0.47** & 0.48 & 0.44 & 2.17 & 1.97 & **0.54** & **0.59** & 1.96 & 0.22 & **1.58** & 0.08 & 0.15 & 0.42 & 2.35 & 0.41 & **0.21** & **1.00** \\ Ours+temporal+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time+time++time+time+time+time+time+time+time+time+time++* [5] Gunnar Farneback. Two-frame motion estimation based on polynomial expansion. In _Scandinavian Conference on Image Analysis_, 2003.
* [6] Chang Gao, Derun Gu, Fangjun Zhang, and Yizhou Yu. Reconet: Real-time coherent video style transfer network. In _Asian Conference on Computer Vision_, 2018.
* [7] Wei Gao, Yijun Li, Yihang Yin, and Ming-Hsuan Yang. Fast video multi-style transfer. In _IEEE/CVF Winter Conference on Applications of Computer Vision_, 2020.
* [8] Leon Gatys, Alexander S Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In _Advances in Neural Information Processing Systems_, 2015.
* [9] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2016.
* [10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2016.
* [11] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In _IEEE International Conference on Computer Vision_, 2017.
* [12] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2017.
* [13] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In _European Conference on Computer Vision_, 2016.
* [14] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2014.
* [15] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In _Advances in Neural Information Processing Systems_, 2012.
* [16] Xueting Li, Sifei Liu, Jan Kautz, and Ming-Hsuan Yang. Learning linear transformations for fast arbitrary style transfer. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2019.
* [17] Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. _Advances in Neural Information Processing Systems_, 30, 2017.
* [18] Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Meiling Wang, Xin Li, Zhengxing Sun, Qian Li, and Errui Ding. Adaattn: Revisit attention mechanism in arbitrary neural style transfer. In _IEEE/CVF International Conference on Computer Vision_, 2021.
* [19] Dae Young Park and Kwang Hee Lee. Arbitrary style transfer with style-attentional networks. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019.
* [20] Fred Phillips and Brandy Mackintosh. Wiki art gallery, inc.: A case for critical thinking. _Issues in Accounting Education_, 26(3):593-608, 2011.
* [21] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In _International Conference on Learning Representations_, 2015.
* [22] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. _arXiv preprint arXiv:1212.0402_, 2012.
* [23] Tomas Soucek and Jakub Lokoc. Transnet v2: an effective deep network architecture for fast shot transition detection. _arXiv preprint arXiv:2008.04838_, 2020.
* [24] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2015.
* [25] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In _IEEE International Conference on Computer Vision_, 2015.
* [26] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor S Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In _International Conference on Machine Learning_, 2016.
* [27] Wenjing Wang, Shuai Yang, Jizheng Xu, and Jiaying Liu. Consistent video style transfer via relaxation and regularization. _IEEE Transactions on Image Processing_, 29, 2020.
* [28] Yexun Zhang, Ya Zhang, and Wenbin Cai. Separating style and content for generalized style transfer. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2018. | ## Review
### Summary
This paper presents a novel method for video stylization using 3D CNNs and AdaIn3D, aiming to disentangle motion and appearance features before stylizing. The authors propose a network architecture that includes multiple appearance subnets and a specific training approach to enhance temporal consistency in stylization. Experimental results demonstrate superior performance compared to baseline methods, supported by a large-scale dataset. However, some aspects of the methodology and results require further clarification, particularly regarding the necessity of certain network components and the evaluation of the proposed method against existing techniques.
### Strengths
- First application of 3D CNNs for video stylization, demonstrating significant improvements.
- Temporal and intra-loss mechanisms enhance the stability of stylization.
- Introduction of a large-scale dataset (10,000 clips) encourages future research.
- Good overall quality of stylization results compared to existing methods.
### Weaknesses
- Insufficient motivation for using 3D CNN over 2D CNN, leading to potential concerns about technical contributions.
- Unclear necessity for multiple appearance subnets and their impact on performance.
- Lack of comprehensive evaluation and comparison with a broader range of existing methods.
- Confusing presentation of network architecture and methodology, particularly in Section 3.
### Questions
- What is the rationale behind using four appearance subnets, and how does this choice affect performance?
- Why are only a few methods included in the comparative analysis, particularly in qualitative and quantitative evaluations?
- How does the proposed method effectively disentangle motion and appearance in practice?
- What are the reasons for not conducting a user study to assess qualitative results?
### Soundness
**Score:** 2
**Description:** Fair: The paper presents a technically solid approach but lacks thorough motivation and comprehensive evaluation, which impacts its overall soundness.
### Presentation
**Score:** 2
**Description:** Fair: The clarity of the paper is hindered by confusing structure and insufficient detail in the methodology, making it challenging to understand.
### Contribution
**Score:** 2
**Description:** Fair: While the paper introduces a novel approach to video stylization, the contributions are undermined by unclear motivation and limited evaluation of results.
### Rating
**Score:** 5
**Description:** Borderline accept: The paper is technically solid and presents a novel approach, but it requires further clarification and comprehensive evaluations to strengthen its contributions.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates originality in applying 3D CNNs to video stylization and presents promising experimental results. However, the weaknesses in motivation, clarity, and evaluation suggest that while it is a valuable contribution, it requires further refinement. The decision to accept reflects the potential impact of the work and the encouragement for further development.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# A General Theory of Correct, Incorrect, and Extrinsic Equivariance
Dian Wang\({}^{1}\) Xupeng Zhu\({}^{1}\) Jung Yeon Park\({}^{1}\) Mingxi Jia\({}^{2}\) Guanang Su\({}^{3}\)
\({}^{1}\)Northeastern University \({}^{2}\)Brown University \({}^{3}\)University of Minnesota
{wang.dian,zhu.xup,park.jungy,r.platt,r.walters}@northeastern.edu
[email protected] [email protected]
Work done as students at Northeastern University
###### Abstract
Although equivariant machine learning has proven effective at many tasks, success depends heavily on the assumption that the ground truth function is symmetric over the entire domain matching the symmetry in an equivariant neural network. A missing piece in the equivariant learning literature is the analysis of equivariant networks when symmetry exists only partially in the domain. In this work, we present a general theory for such a situation. We propose pointwise definitions of correct, incorrect, and extrinsic equivariance, which allow us to quantify continuously the degree of each type of equivariance a function displays. We then study the impact of various degrees of incorrect or extrinsic symmetry on model error. We prove error lower bounds for invariant or equivariant networks in classification or regression settings with partially incorrect symmetry. We also analyze the potentially harmful effects of extrinsic equivariance. Experiments validate these results in three different environments.
## 1 Introduction
Equivariant neural networks [9, 10] have proven to be an effective way to improve generalization and sample efficiency in many machine learning tasks. This is accomplished by encoding task-level symmetry into the structure of the network architecture so that the model does not need to explicitly learn the symmetry from the data. However, encoding a fixed type of symmetry like this can be limiting when the model symmetry does not exactly match the symmetry of the underlying function being modeled, i.e., when there is a symmetry mismatch. For example, consider the digit image classification task. Is it helpful to model this problem using a model that is invariant to 180-degree rotation of the image? For some digits, the label is invariant (e.g., \(\mathbf{0}\) and \(\mathbf{8}\)). However, for other digits, the label changes under rotation (e.g., \(\mathbf{6}\) and \(\mathbf{9}\)), suggesting that a rotationally symmetric model would be inappropriate here. However, recent work [58] suggests that this is not necessarily the case - symmetric models are sometimes helpful even when a symmetry mismatch exists between the problem and model. This raises the question - do the advantages obtained by using a symmetric model outweigh the errors introduced by the symmetry mismatch?
Figure 1: An example of correct, incorrect, and extrinsic equivariance. The ground truth function \(f(x)\) is shown in black and its probability density function \(p(x)\) is shown in orange. If we model \(f(x)\) using a \(G\)-invariant network where \(G\) is a reflection group that negates \(x\), different \(f(x)\) and \(p(x)\) will lead to correct, incorrect, and extrinsic equivariance. See Section 3 for details.
This paper makes four main contributions towards this problem. First, this paper extends the definitions for types of model symmetry with respect to true symmetry introduced in Wang et al. [58]. They classify models as having _correct_, _incorrect_, or _extrinsic equivariance_ (see Figure 1), where correct means the ground truth function has the same symmetry as the equivariant model, incorrect means the ground truth function disagrees with the symmetry in the model, and extrinsic means the model symmetry transforms in-distribution data to out-of-distribution data. We generalize this system into a continuum of equivariance types to reflect the fact that a single task may have different proportions of correct, incorrect, and extrinsic symmetry across its domain. For example, in the digit classification task, \(\mathbf{0}\) has correct equivariance, \(\mathbf{6}\) has incorrect equivariance, and \(\mathbf{4}\) has extrinsic equivariance.
Our second contribution is to introduce an analytical lower bound on model error in classification tasks resulting from incorrect model symmetry. This result can help guide model selection by quantifying error resulting from incorrect equivariance constraints. Our result generalizes that of Wang et al. [58] by removing the simplifying assumption that data density over the domain is group invariant. We prove the minimum error of an invariant classifier can be realized by assigning all data points in the same group orbit the label with the majority of the data density (Theorem 4.3).
Our third contribution is to develop new lower bounds on the \(L_{2}\) error for regression tasks in terms of the variance of the function to be modeled over the orbit of the symmetry group. Like our classification bound, this bound can assist in model selection in situations with symmetry mismatch.
Fourth, in contrast to Wang et al. [58] who show benefits of extrinsic equivariance, we theoretically demonstrate its potential harm. We perform experiments documenting the error rate across the correct-extrinsic continuum. Finally, we perform empirical studies illustrating the ideas of the paper and showing that the lower bounds obtained in our analysis appear tight in practice. This suggests our analysis can assist practitioners select symmetry groups appropriate for a given problem setting. Our code is available at [https://github.com/pointW/ext_theory](https://github.com/pointW/ext_theory).
## 2 Related Work
**Equivariant Learning.** Originally used for exploiting symmetry in image domains [9; 10], equivariant learning has been very successful in various tasks including molecular dynamics [2; 4], particle physics [6], fluid dynamics [59], trajectory prediction [53], pose estimation [31; 26; 32], shape completion [7], robotics [48; 65; 21; 55; 49; 41; 23; 22; 45] and reinforcement learning [52; 54; 56; 39; 63]. However, most prior work assumes that the symmetry of the ground truth function is perfectly known and matches the model symmetry. Wang et al. [58] go further and define correct, incorrect, and extrinsic equivariance to classify the relationship between model symmetry and domain symmetry. However, they do not discuss the possible combinations of the three categories, and limit their theory to a compact group and invariant classification. Our work extends [58] and allows for a continuum of equivariance types and analyzes error bounds in a more general setup.
**Symmetric Representation Learning.** Various works have proposed learning symmetric representations, using transforming autoencoders [20], restricted Boltzmann machines [50], and equivariant descriptors [47]. In particular, [30] shows that convolutional neural networks implicitly learn representations that are equivariant to rotations, flips, and translations, suggesting that symmetric representations are important inductive biases. Other works have considered learning symmetry-aware features using disentanglement [43], projection mapping [25], equivariance constraints [35], separation into invariant and equivariant parts [61] or subgroups [34]. Park et al. [42] propose learning a symmetric encoder that maps to equivariant features and Dangovski et al. [13] learn features that are sensitive and insensitive to different group representations. Other works assume no prior knowledge of symmetry and learn it from data [3; 64; 14; 40]. In particular, Moskalev et al. [40] estimate the difference between the true latent symmetry and the learned symmetry. Similarly, our work considers a gap between the true symmetry and model symmetry and theoretically analyze its effects on error.
**Theory of Equivariant Learning.** There are several lines of work on the theory of equivariant learning. Kondor and Trivedi [27] prove that convolutions are sufficient and necessary for equivariance of scalar fields on compact groups, later generalized to the steerable case by Cohen et al. [11]. Certain equivariant networks have been proved to be universal in that such networks can approximate any \(G\)-equivariant function [36; 62]. Another line of work has considered equivariant networks in terms of generalization error. Abu-Mostafa [1] show that an invariant model has a VC dimension less than or equal to that of a non-equivariant model. Other works studied the generalization error of invariant classifiers by decomposing the input space [51, 46]. Elesedy and Zaidi [17] quantify a generalization benefit for equivariant linear models using the notion of symmetric and anti-symmetric spaces. A PAC Bayes approach was used for generalization bounds of equivariant models [5, 33]. Our work is complimentary to these and quantifies approximation error for equivariant model classes.
**Data Augmentation.** Some methods use data augmentation [29, 28] to encourage the network to learn invariance with respect to transformations defined by the augmentation function [8]. Recent works have explored class-specific [19, 44] and instance-specific [38] data augmentation methods to further boost training by avoiding the potential error caused by a uniform augmentation function. Those methods can be viewed as applying data augmentation where pointwise correct or extrinsic invariance exist, while avoiding incorrect invariance.
## 3 Preliminaries
Problem Statement.Consider a function \(f\colon X\to Y\). Let \(p:X\to\mathbb{R}\) be the probability density function of the domain \(X\). We assume that there is no distribution shift during testing, i.e., \(p\) is always the underlying distribution during training and testing. The goal for a model class \(\{h:X\to Y\}\) is to fit the function \(f\) by minimizing an error function \(\operatorname{err}(h)\). We assume the model class \(\{h\}\) is arbitrarily expressive except that it is constrained to be equivariant with respect to a group \(G\). Let \(\mathbb{1}\) be an indicator function that equals to 1 if the condition is satisfied and 0 otherwise. In classification, \(\operatorname{err}(h)\) is the classification error rate; for regression tasks, the error function is a \(L_{2}\) norm function,
\[\operatorname{err}_{\mathrm{cls}}(h)=\mathbb{E}_{x\sim p}[\mathbb{1}\left(f(x) \neq h(x)\right)],\qquad\operatorname{err}_{\mathrm{reg}}(h)=\mathbb{E}_{x \sim p}[||h(x)-f(x)||_{2}^{2}]. \tag{1}\]
**Equivariant Function.** A function \(f:X\to Y\) is equivariant with respect to a symmetry group \(G\) if it commutes with the group transformation \(g\in G\), \(\hat{f}(gx)=gf(x)\), where \(g\) acts on \(x\in X\) through the representation \(\rho_{X}(g)\); \(g\) acts on \(y\in Y\) through the representation \(\rho_{Y}(g)\).
### Correct, Incorrect, and Extrinsic Equivariance.
Consider a model \(h\) which is equivariant with respect to a group \(G\). Since real-world data rarely exactly conforms to model assumptions, in practice there may often be a gap between the symmetry of the model and the ground truth function. Wang et al. [58] propose a three-way classification which describes the relationship between the symmetry of \(f\) and the symmetry of \(h\). In this system, \(h\) has correct equivariance, incorrect equivariance, or extrinsic equivariance with respect to \(f\).
**Definition 3.1** (Correct Equivariance).: For all \(x\in X,g\in G\) where \(p(x)>0\), if \(p(gx)>0\) and \(f(gx)=gf(x)\), \(h\) has _correct equivariance_ with respect to \(f\).
**Definition 3.2** (Incorrect Equivariance).: If there exist \(x\in X,g\in G\) such that \(p(x)>0,p(gx>0)\), but \(f(gx)\neq gf(x)\), \(h\) has _incorrect equivariance_ with respect to \(f\).
**Definition 3.3** (Extrinsic Equivariance).: For all \(x\in X,g\in G\) where \(p(x)>0\), if \(p(gx)=0\), \(h\) has _extrinsic equivariance_ with respect to \(f\).
**Example 3.4**.: Consider a binary classification task where \(X=\mathbb{R}\) and \(Y=\{0,1\}\). If the model \(h\) is invariant to a reflection group \(G\) where the group element \(g\in G\) acts on \(x\in X\) by \(gx=-x\), Figure 1 shows examples when correct, incorrect, or extrinsic equivariance is satisfied.
### Pointwise Equivariance Type.
Although Definitions 3.1- 3.3 are self-contained, they do not consider the mixture of different equivariance types in a single function. In other words, an equivariant model can have correct, incorrect, and extrinsic equivariance in different subsets of the domain. To overcome this issue, we define pointwise correct, incorrect, and extrinsic equivariance, which is a generalization of the prior work.
**Definition 3.5** (Pointwise Correct Equivariance).: For \(g\in G\) and \(x\in X\) where \(p(x)\neq 0\), if \(p(gx)\neq 0\) and \(f(gx)=gf(x)\), \(h\) has correct equivariance with respect to \(f\) at \(x\) under transformation \(g\).
Figure 2: Example of pointwise correct, incorrect, and extrinsic equivariance in a binary classification task. \(f(x)\) is in black and \(p(x)\) is in orange. \(G\) is a reflection group that negates \(x\).
**Definition 3.6** (Pointwise Incorrect Equivariance).: For \(g\in G\) and \(x\in X\) where \(p(x)\neq 0\), if \(p(gx)\neq 0\) and \(f(gx)\neq gf(x)\), \(h\) has incorrect equivariance with respect to \(f\) at \(x\) under transformation \(g\).
**Definition 3.7** (Pointwise Extrinsic Equivariance).: For \(g\in G\) and \(x\in X\) where \(p(x)\neq 0\), if \(p(gx)=0\), \(h\) has extrinsic equivariance with respect to \(f\) at \(x\) under transformation \(g\).
Notice that the definitions of pointwise correct, incorrect, and extrinsic equivariance are mutually exclusive, i.e., a pair \((x,g)\) can only have one of the three properties. The pointwise definitions are generalizations of the global Definitions 3.1- 3.3. For example, when pointwise correct equivariance holds for all \(x\in X\) and \(g\in G\), Definition 3.1 is satisfied.
**Example 3.8** (Example of Pointwise Correct, Incorrect, and Extrinsic Equivariance).: Consider the same binary classification task in Example 3.4. Figure 2 shows \(f(x)\), \(g(x)\), and four subsets of \(X\) where pointwise correct, incorrect, or extrinsic holds. For \(x\) in the correct section (green), \(p(x)>0,p(gx)>0,p(gx)>0,f(x)=f(gx)\). For \(x\) in the incorrect sections (red), \(p(x)>0,p(gx)>0,f(x)\neq f(gx)\). For \(x\) in the extrinsic section (blue), \(p(x)>0,p(gx)=0\).
**Definition 3.9** (Correct, Incorrect, and Extrinsic Sets).: The Correct Set \(C\subseteq X\times G\) is a subset of \(X\times G\) where pointwise correct equivariance holds for all \((x,g)\in C\). Similarly, the Incorrect Set \(I\) and the Extrinsic Set \(E\) are subsets where incorrect equivariance or extrinsic equivariance holds for all elements in the subset. Denote \(U\subseteq X\times G\) as the Undefined Set where \(\forall(x,g)\in U,p(x)=0\). By definition we have \(X\times G=C\amalg I\amalg E\amalg U\), where \(\amalg\) denotes a disjoint union.
## 4 Approximation Error Lower Bound from Incorrect Equivariance
Studying the theoretical error lower bound of an equivariant network is essential for model selection, especially when incorrect equivariance exists. Wang et al. [58] prove an error lower bound for an incorrect equivariant network, but their setting is limited to a classification task in the global situation of Definition 3.2 with a discrete group and an invariant density function. In this section, we find the lower bound of \(\operatorname{err}(h)\) for an equivariant model \(h\) in a general setting. To calculate such a lower bound, we first define the _fundamental domain_\(F\) of \(X\). Let \(d\) be the dimension of a generic orbit of \(G\) in \(X\) and \(n\) the dimension of \(X\). Let \(\nu\) be the \((n-d)\) dimensional Hausdorff measure in \(X\).
**Definition 4.1** (Fundamental Domain).: A closed subset \(F\) of \(X\) is called a fundamental domain of \(G\) in \(X\) if \(X\) is the union of conjugates2 of \(F\), i.e., \(X=\bigcup_{g\in G}gF\), and the intersection of any two conjugates has 0 measure under \(\nu\).
Footnote 2: A conjugate \(gF\) is defined as \(gF=\{gx|x\in F\}\).
We assume further that the set of all \(x\) which lie in any pairwise intersection \(\bigcup_{g_{1}F\neq g_{2}F}\left(g_{1}F\cap g_{2}F\right)\) has measure 0 under \(\nu\). Let \(Gx=\{gx:g\in G\}\) be the orbit of \(x\), then \(X\) can be written as the union of the orbits of all points in the fundamental domain \(F\) as such \(X=\bigcup_{x\in F}Gx\).
### Lower Bound for Classification
We first show the lower bound of the error \(\operatorname{err}_{\mathrm{cls}}(h)\) (Equation 1) given the invariant constraint in \(h\): \(h(gx)=h(x),g\in G\). In this section, the codomain \(Y\) of \(f\) is a finite set of possible labels. Since \(h\) is \(G\)-invariant, \(h\) has the same output for all inputs in an orbit \(Gx\). We call the label that causes the minimal error inside the orbit the _majority label3_, and define the error in the orbit as the _total dissent_.
Footnote 3: The majority label has more associated data than all other labels, but does not need to be more than 50%.
**Definition 4.2** (Total Dissent).: For the orbit \(Gx\) of \(x\in X\), the total dissent \(k(Gx)\) is the integrated probability density of the elements in the orbit \(Gx\) having a different label than the majority label
\[k(Gx)=\min_{y\in Y}\int_{Gx}p(z)\mathbb{1}\left(f(z)\neq y\right)dz. \tag{2}\]
We can also lift the integral to \(G\) itself by introducing a factor \(\alpha(x,g)\) to account for the Jacobian of the action map and size of the stabilizer of \(x\). (See Appendix A.)
\[k(Gx)=\min_{y\in Y}\int_{G}p(gx)\mathbb{1}\left(f(gx)\neq y\right)\alpha(x,g)dg. \tag{3}\]
**Theorem 4.3**.: \(\operatorname{err}(h)\) _is lower bounded by \(\int_{F}k(Gx)dx\)._
Proof.: Rewriting the error function of Equation 1, we have
\[\operatorname{err}(h)=\int_{X}p(x)\mathbb{1}(f(x)\neq h(x))dx=\int_{x\in F}\int_ {z\in Gx}p(z)\mathbb{1}(f(z)\neq h(z))dzdx, \tag{4}\]
using iterated integration (Appendix B) and Definition 4.1. We assume the measure of \(F\cap gF\) is 0. Since \(h(z)\) can only have a single label in orbit \(Gx\), we can lower bound the inside integral as
\[\int_{z\in Gx}p(z)\mathbb{1}(f(z)\neq h(z))dz\geq\min_{y\in Y}\int_{z\in Gx}p(z )\mathbb{1}(f(z)\neq y)dz=k(Gx).\]
We obtain the claim by integrating over \(F\). Notice that this is a tight lower bound assuming universal approximation. That is, there exists \(h\) which realizes this lower bound.
We can express the total dissent in terms of the Incorrect Set \(I\) (Definition 3.9).
**Proposition 4.4**.: \(k(Gx)=\min_{x^{\prime}\in(Gx)^{+}}\int_{G}p(gx^{\prime})\mathbb{1}((x^{\prime},g)\in I)\alpha(x^{\prime},g)dg\)_, where \((Gx)^{+}=\{x_{0}\in Gx|p(x_{0})>0\}\)._
Proof.: Consider Equation 3, since the minimum over \(y\) is obtained for \(y=f(x^{\prime})\) for some \(x^{\prime}\in Gx\) such that \(p(x^{\prime})>0\) (i.e., \(x^{\prime}\in(Gx)^{+}\)),
\[k(Gx)=\min_{x^{\prime}\in(Gx)^{+}}\int_{G}p(gx)\mathbb{1}(f(gx)\neq f(x^{\prime }))\alpha(x,g)dg.\]
Since \(x^{\prime}\in Gx\), then \(Gx^{\prime}=Gx\) and we have \(k(Gx)=k(Gx^{\prime})\). Thus,
\[k(Gx) =\min_{x^{\prime}\in(Gx)^{+}}\int_{G}p(gx^{\prime})\mathbb{1}(f(gx^ {\prime})\neq f(x^{\prime}))\alpha(x^{\prime},g)dg\] \[=\min_{x^{\prime}\in(Gx)^{+}}\int_{G}p(gx^{\prime})\mathbb{1}((x^ {\prime},g)\in I)\alpha(x^{\prime},g)dg.\]
**Example 4.5** (Lower bound example for a binary classification task using Proposition 4.4).:
Let \(f\colon X\to\{0,1\}\) be a binary classification function on \(X=\{x_{0},x_{1},x_{2},x_{3}\}\). Let \(G=C_{2}=\{e,r\}\) be the cyclic group of order two that permutes the elements in \(X\). Figure 3 shows \(X\), the label for each \(x\in X\), and how \(e,r\in G\) acts on \(x\in X\). \(\{x_{0},x_{3}\}\) forms a fundamental domain \(F\), and there are two orbits: \(Gx_{0}=\{x_{0},x_{1}\}\) and \(Gx_{2}=\{x_{2},x_{3}\}\). Since both \(X\) and \(G\) are discrete and \(g\in G\) acts on \(X\) through permutation, The lower bound can be written as \(\operatorname{err}(h)\geq\sum_{x\in F}\min_{x^{\prime}\in(Gx)^{+}}\sum_{g\in G }p(gx^{\prime})\mathbb{1}((x^{\prime},g)\in I)\). We can then calculate \(\sum_{g\in G}p(gx^{\prime})\mathbb{1}((x^{\prime},g)\in I)\) for \(x^{\prime}\in X\): \(x_{0}:0.4,x_{1}:0.3,x_{2}:0,x_{3}:0\). Taking the min over each orbit we have \(k(Gx_{0})=0.3,k(Gx_{2})=0\). Taking the sum over \(F=\{x_{0},x_{3}\}\) we obtain \(\operatorname{err}(h)\geq 0.3\).
**Example 4.6** (Lower bound example for a multi-class classification task using Proposition 4.4).: Consider a multi-class classification task \(f:\mathbb{R}^{2}\to Y\) with \(n=|Y|\) classes. For \(x=(u,v)\in[0,1]^{2}\) then \(p(u,v)=1\) and otherwise \(p(u,v)=0\); i.e., the support of \(p\) is a unit square. Let \(G\) denote the group of translations in the \(u\)-direction and \(h\) a \(G\)-invariant network. In a data distribution illustrated in Figure 4, we compute the lower bound for \(\operatorname{err}(h)\). Consider a fundamental domain \(F\) (brown line in Figure 4). In the blue area, there is one label across the orbit (i.e., the horizontalline), meaning \(\forall g\in G,(x^{\prime},g)\in C\), yielding Proposition 4.4 equals 0. For points in the yellow area, the majority label is yellow. This means that for \(g\in G\) such that \(gx\) is in yellow, \((x,g)\in C\); for other \(g\in G,(x,g)\in I\). Consequently, Proposition 4.4 is equivalent to the combined green and pink lengths. Taking the integral over \(F\) (Theorem 4.3), the lower bound equals the green and pink area (\(I\) in Figure 4). We define correct ratio (\(c\)) as the blue area's height and majority label ratio (\(m\)) as the yellow area's length. Adjusting \(c\) and \(m\) transitions incorrect to correct equivariance, leading to \(\operatorname{err}(h)\geq area(I)=(1-c)\times(1-m)\). Appendix H.2 shows an experiment where the empirical result matches our analysis.
Lower Bound When \(G\) is Finite and The Action of \(G\) is Density Preserving.In this section, we consider the lower bound in Theorem 4.3 when \(G\) is finite and the action of \(G\) is density preserving, i.e., \(p(gx)=p(x)\). Let \((Gx)_{y}=\{z\in Gx|f(z)=y\}\) be a subset of \(Gx\) with label \(y\). Define \(\mathcal{Q}(x)=(\max_{y\in Y}|(Gx)_{y}|)/|Gx|\), which is the fraction of data in the orbit \(Gx\) that has the majority label. Denote \(Q=\{\mathcal{Q}(x):x\in X\}\) the set of all possible values for \(\mathcal{Q}\). Consider a partition of \(X=\coprod_{q\in Q}X_{q}\) where \(X_{q}=\{x\in X:\mathcal{Q}(x)=q\}\). Define \(c_{q}=\mathbb{P}(x\in X_{q})=|X_{q}|/|X|\).
**Proposition 4.7**.: _The error lower bound \(\operatorname{err}(h)\geq 1-\sum_{q}qc_{q}\) from Wang et al. [58] (Proposition 4.1) is a special case of Theorem 4.3._
Proof in Appendix C. The proposition shows Theorem 4.3 is a strict generalization of [58, Prop 4.1].
### Lower Bound for Invariant Regression
In this section, we give a lower bound of the error function \(\operatorname{err}_{\operatorname{reg}}(h)\) (Equation 1) in a regression task given that \(h\) is invariant, i.e., \(h(gx)=h(x)\) for all \(g\in G\). Assume \(Y=\mathbb{R}^{n}\). Denote by \(p(Gx)=\int_{z\in Gx}p(z)dz\) the probability of the orbit \(Gx\). Denote by \(q(z)=\frac{p(z)}{p(Gx)}\) the normalized probability density of the orbit \(Gx\) such that \(\int_{Gx}q(z)dz=1\). Let \(\mathbb{E}_{Gx}[f]\) be the mean of function \(f\) on the orbit \(Gx\) defined, and let \(\mathbb{V}_{Gx}[f]\) be the variance of \(f\) on the orbit \(Gx\),
\[\mathbb{E}_{Gx}[f]=\int_{Gx}q(z)f(z)dz=\frac{\int_{Gx}p(z)f(z)dz}{\int_{Gx}p(z )dz},\qquad\mathbb{V}_{Gx}[f]=\int_{Gx}q(x)||\mathbb{E}_{Gx}[f]-f(z)||_{2}^{2}.\]
**Theorem 4.8**.: \(\operatorname{err}(h)\geq\int_{F}p(Gx)\mathbb{V}_{Gx}[f]dx\)_._
Proof.: The error function (Equation 1) can be written as:
\[\operatorname{err}(h)=\int_{X}p(x)||f(x)-h(x)||_{2}^{2}dx=\int_{x\in F}\int_{z \in Gx}p(z)||f(z)-h(z)||_{2}^{2}dzdx.\]
Denote \(e(x)=\int_{Gx}p(z)||f(z)-h(z)||_{2}^{2}dz\). Since \(h\) is \(G\)-invariant, there exists \(c\in\mathbb{R}^{n}\) such that \(h(z)=c\) for all \(z\in Gx\). Then \(e(x)\) can be written as \(e(x)=\int_{Gx}p(z)||f(z)-c||_{2}^{2}dz\). Taking the derivative of \(e(x)\) with respect to \(c\) and setting it to 0 gives \(c^{*}\), the minimum of \(e(x)\), \(c^{*}=\frac{\int_{Gx}p(z)f(z)dz}{\int_{Gx}p(z)dz}=\mathbb{E}_{Gx}[f]\). Substituting \(c^{*}\) into \(e(x)\) we have
\[e(x)\geq\int_{Gx}p(Gx)\frac{p(z)}{p(Gx)}||\mathbb{E}_{Gx}[f]-f(z)||_{2}^{2}dz= p(Gx)\mathbb{V}_{Gx}[f].\]
We can obtain the claim by taking the integral of \(e(x)\) over the fundamental domain \(F\).
### Lower Bound for Equivariant Regression
We now prove a lower bound for \(\operatorname{err}(h)\) in a regression task given the model \(h\) is equivariant, that is, \(h(\rho_{X}(g)x)=\rho_{Y}(g)h(x)\) where \(g\in G,\rho_{X}\) and \(\rho_{Y}\) are group representations associated with \(X\) and \(Y\). We will denote \(\rho_{X}(g)x\) and \(\rho_{Y}(g)y\) by \(gx\) and \(gy\), leaving the representation implicit. Assume \(Y=\mathbb{R}^{n}\) and \(\alpha(x,g)\) is the same as in equation 3. Let \(\operatorname{Id}\) be the identity. Define a matrix \(Q_{Gx}\in\mathbb{R}^{n\times n}\) and \(q(gx)\in\mathbb{R}^{n\times n}\) so that \(\int_{G}q(gx)dg=\operatorname{Id}\) by
\[Q_{Gx}=\int_{G}p(gx)\rho_{Y}(g)^{T}\rho_{Y}(g)\alpha(x,g)dg,\qquad q(gx)=Q_{Gx }^{-1}p(gx)\rho_{Y}(g)^{T}\rho_{Y}(g)\alpha(x,g). \tag{5}\]Here, for simplicity, we assume \(Q_{Gx}\) is an invertible matrix. (See Appendix D for general case).
If \(f\) is equivariant, \(g^{-1}f(gx)\) is a constant for all \(g\in G\). Define \(\mathbf{E}_{G}[f,x]\)
\[\mathbf{E}_{G}[f,x]=\int_{G}q(gx)g^{-1}f(gx)dg. \tag{6}\]
**Theorem 4.9**.: _The error of \(h\) has lower bound \(\operatorname{err}(h)\geq\int_{F}\int_{G}p(gx)||f(gx)\;-\;g\mathbf{E}_{G}[f,x] ||_{2}^{2}\alpha(x,g)dgdx\)._
See Appendix D for the proof. Intuitively, \(\mathbf{E}_{G}[f,x]\) is the minimizer obtained by taking the mean of all inversely transformed \(f(x)\) for all \(x\) in the orbit, see Figure 4(c) and Example 4.11 below.
**Corollary 4.10**.: _Denote \(p(Gx)=\int_{Gx}p(z)dz\). Denote \(q_{x}:g\mapsto q(gx)\). Define \(G\)-stabilized \(f\) as \(f_{x}:g\mapsto g^{-1}f(gx)\). When \(\rho_{Y}\) is an orthogonal representation \(\rho_{Y}:G\to\operatorname{O}(n)\subset GL(n)\), \(q_{x}\) is a probability density function on \(G\). Denote the variance of \(f_{x}\) as \(\mathbb{V}_{G}[f_{x}]\) where \(g\sim q_{x}\). The error has a lower bound \(\operatorname{err}(h)\geq\int_{F}p(Gx)\mathbb{V}_{G}[f_{x}]dx\)._
See Appendix E for the proof. Notice that Corollary 4.10 is a generalization of Theorem 4.8. That is, Theorem 4.8 can be recovered by taking \(\rho_{Y}(g)=\operatorname{Id}\) (See the proof in Appendix F).
**Example 4.11** (Lower bound example of a regression task).: Consider a regression problem where \(X=\{x_{0},x_{1},x_{2},x_{3}\}\) and \(Y=\mathbb{R}^{2}\). Assume \(p\) is uniform density. The cyclic group \(G=C_{4}=\{e,r,r^{2},r^{3}\}\) (where \(e=0\) rotation and \(r=\pi/2\) rotation) acts on \(X\) through \(x_{1}=rx_{0};x_{2}=rx_{1};x_{3}=rx_{2};x_{0}=rx_{3}\) (i.e., there is only one orbit \(Gx=X\)). \(g\in G\) acts on \(y\in Y\) through \(\rho_{Y}(g)=\left(\begin{smallmatrix}\cos g&-\sin g\\ sin&g&\cos g\end{smallmatrix}\right)\). Figure 4(a) shows the output of \(f(x),\forall x\in X\). **First**, consider a \(G\)-invariant network \(h\). Since there is only one orbit, Theorem 4.8 can be simplified as: \(\operatorname{err}(h)\geq\mathbb{V}_{X}[f]\), the variance of \(f\) over \(X\). This can be calculated by first taking the mean of \(f(x)\) then calculating the mean square error (MSE) from all \(x\) to the mean (Figure 4(b)). **Second**, consider a \(G\)-equivariant network \(h\). Since \(G\) is discrete, \(gx\) permutes the order of \(X\), \(\rho_{Y}\) is an orthogonal representation, and there is only one orbit, Corollary 4.10 can be written as \(\operatorname{err}(h)\geq\mathbb{V}_{G}[f_{x}]\), the variance of \(G\)-stabilized \(f\). First, to calculate \(\mathbb{E}_{G}[f_{x}]\), let \(x=x_{0}\), we stabilize \(g\) from \(f\) by \(g^{-1}f(gx)\) for all \(g\in G\), then take the mean (Figure 4(c)). We can then find \(\mathbb{V}_{G}[f_{x}]\) by calculating the MSE between \(f(x)\) and transformed mean \(g\mathbb{E}_{G}[f_{x}]\) (Figure 4(d)). Appendix H.3 shows an experiment in this example's environment.
## 5 Harmful Extrinsic Equivariance
Wang et al. [58] demonstrate that extrinsic equivariance, where the symmetry imposed on the model leads to out-of-distribution data with respect to the input distribution, can lead to a higher performance on the original training data. In this section, we argue that this is not necessarily true in all cases, and there can exist scenarios where extrinsic equivariance can even be harmful to the learning problem.
Consider a binary classification task where the domain is discrete and contains only a set of four points \(S\subset\mathbb{R}^{3}\), and their labels are either \(\{-1,+1\}\) as shown in Figure 5(a). We consider the probability density \(p\) to be uniform for this domain, i.e., \(p(x)=1/4\) for the four points \(S\), and \(p=0\) elsewhere.
Figure 5: An example regression task. (a) The value of \(f(x)\) and the transformation rule (purple) with respect to group \(G=C_{4}\) for all \(x\in X\). The four points belong to a single orbit. (b) When using an invariant network, the minimal error (red) is obtained when the invariant network outputs the mean value (green) of the orbit. (c) For an equivariant network, the minimizer (green) can be obtained by taking the mean of the \(G\)-stabilized \(f(x)\) (inversely transformed) (blue) for all \(x\) in the orbit with respect to the transformation rule in the orbit. (d) The minimal error of an equivariant network.
This domain is used for both model training and testing so there is no distribution shift. We consider two model classes, \(\mathcal{F}_{N}\), the set of all linear models, and \(\mathcal{F}_{E}\), the set of all linear models which are invariant with respect to the cyclic group \(C_{2}=\{1,g\}\), where \(g(x_{1},x_{2},x_{3})=(x_{1},x_{2},-x_{3})\). \(\mathcal{F}_{N}\) corresponds to an unconstrained or a non-equivariant model class and \(\mathcal{F}_{E}\) corresponds to an extrinsically equivariant class for this domain. For the labeling shown in Figure 6, the hyperplane \(x_{3}=0\) correctly classifies all samples and is contained in \(\mathcal{F}_{N}\). However, a function \(f_{e}\in\mathcal{F}_{E}\) is equivalent to a linear classifier on \(\mathbb{R}^{2}\) and effectively sees the data as Figure 5(b)4. This exclusive-or problem does not admit a linear solution (it can be correct for at most 3 points).
Footnote 4: Notice that the four additional points in Figure 5(b) compared with (a) are not in the domain, they are created through applying the transformation rule of \(\mathcal{F}_{E}\) onto the domain.
Concretely, we can compute the empirical Rademacher complexity, a standard measure of model class expressivity, for non-equivariant and extrinsically equivariant model classes and show that \(\mathcal{F}_{E}\) has lower complexity than \(\mathcal{F}_{N}\). Recall that empirical Rademacher complexity is defined as \(\mathfrak{R}_{S}\left(\mathcal{F}\right)=\mathbb{E}_{\sigma}\left[\sup_{f\in \mathcal{F}}\frac{1}{m}\sum_{i=1}^{m}\sigma_{i}f(x^{i})\right]\), where \(S\) is the set of \(m\) samples and \(\sigma=(\sigma_{1},\ldots,\sigma_{m})^{\top}\), \(\sigma_{i}\in\{-1,+1\}\) are independent uniform Rademacher random variables, and \(x^{i}\) is the \(i\)-th sample. As there exists some linear function \(f_{n}\in\mathcal{F}_{N}\) that fully classifies \(S\) for any combination of labels, \(\mathfrak{R}_{S}(\mathcal{F}_{N})=1\). For the extrinsic equivalence case, of the 16 possible label combinations, there are two cases where \(f_{e}\in\mathcal{F}_{E}\) can at most classify 3 out of 4 points correctly, and thus \(\mathfrak{R}_{S}(\mathcal{F}_{E})=\frac{31}{32}<\mathfrak{R}_{S}(\mathcal{F}_ {N})\) (see Appendix G for the calculations). This illustrates that in certain cases, extrinsic equivariance can lead to lower model expressivity than no equivariance and thus be harmful to learning.
## 6 Experiments
We perform experiments to validate our theoretical analysis on both the lower bounds (Section 4) and the harmful extrinsic equivariance (Section 5). We find that our bounds accurately predict empirical model error. In addition to the experiments in this section, Appendix H.2 shows an experiment verifying our classification bound (Theorem 4.3) and Appendix H.3 shows an experiment verifying our regression bound (Theorem 4.8 and 4.9). The experiment details are in Appendix I.
### Swiss Roll Experiment
We first perform an experiment in a vertically separated Swiss Roll data distribution, see Figure 6(a)5. This example, similar to that in Section 5, demonstrates that a \(C_{2}\)-invariant model effectively "flattens" the \(z\)-dimension of the data so it must learn the decision boundary between two spirals (Figure 6(b)), whereas the non-equivariant model only needs to learn a horizontal plane to separate the classes, a significantly easier task. Besides the extrinsic data distribution, we consider two other data distributions shown in Figure 6(c) and Figure 6(d), where a \(C_{2}\)-invariant model will observe incorrect and correct equivariance due to the mismatched and matched data labels in the two \(z\) planes.
Footnote 5: For visualization, we show a simpler version of the data distribution. See Appendix H.1 for the actual one.
We combine data from all three distributions in various proportions to test the performance of a \(z\)-invariant network (INV) with a baseline unconstrained network (MLP). Let \(c\) be the correct ratio, the proportion of data from the correct distribution. Define the incorrect ratio \(i\) and extrinsic ratio \(e\) similarly. We consider all \(c,i,e\) that are multiples of 0.125 such that \(c+i+e=1\). Figure 6(e) shows some example data distributions. Relative to INV, this mixed data distribution has partial correct, incorrect, and extrinsic equivariance, which is not fully captured in prior work [58]. Based on Proposition 4.4, we have \(k(Gx)=0.5\) for \(x\) drawn from the incorrect distribution, and \(k(Gx)=0\) otherwise. Since the data is evenly distributed, we can calculate the error lower bound \(\operatorname{err}(h)\geq 0.5i\)
Figure 6: An example dataset where extrinsic equivariance increases the problem difficulty. The samples are of the form \(x=(x_{1},x_{2},x_{3})\) and the labels are shown as different shapes. A \(C_{2}\)-equivariant linear model transforms the original data (a) into (b), which is equivalent to viewing the data as in (c). The original task has an easy solution (e.g. hyperplane at \(x_{3}=0\)), while the \(C_{2}\)-invariant view is the classic exclusive-or problem.
**Results.** Figure 7(a) shows the test success rate of INV compared with MLP when \(e\) and \(c\) vary with \(i=0\). When \(e\) increases, the performance of INV decreases while the performance of MLP shows an inverse trend, demonstrating that extrinsic equivariance is harmful in this experiment. Figure 7(b) shows the performance of INV and MLP when \(c\) and \(i\) vary while \(e=0\). The green line shows the upper bound of the test success rate (\(1-0.5i\)). The experimental result matches our theoretical analysis quite closely. Notice that when \(c\) increases, there is a bigger gap between the performance of the network and its theoretical upper bound, since classification in the correct distribution is a harder task. Appendix H.1 shows the complete results of this experiment.
### Digit Classification Experiment
In this experiment, we apply our theoretical analysis to a realistic digit classification task using both the printed digit dataset [16] and the MNIST handwritten digit dataset [15]. We compare a \(D_{4}\)-invariant network (\(D_{4}\)) with an unconstrained CNN. In the printed digit classification, \(D_{4}\) exhibits incorrect equivariance for 6 and 9 under a \(\pi\) rotation. Using Theorem 4.3, we can calculate a lower bound of error for \(D_{4}\) at 10%. However, as shown in Table 1 (top), the experimental results indicate that the actual performance is slightly better than predicted by the theory. We hypothesize that this discrepancy arises because a rotated 9 differs slightly from a 6 in some fonts. We conduct a similar experiment using the MNIST handwritten digit dataset (Table 1 bottom), where \(D_{4}\) achieves even better performance in classifying 6 and 9. This improvement is likely due to the more distinguishable handwriting of these digits, although the performance still underperforms the CNN as incorrect equivariance persists. It is important to note that there is a significant decrease in performance for \(D_{4}\) when classifying 2/5 and 4/7 compared to the CNN. This is because a vertical flip results in incorrect equivariance when classifying handwritten 2/5, and a similar issue arises for 4/7 under a \(\pi/2\) rotation followed by a vertical flip (notice that Weiler and Cesa [60] make a similar observation). These experiments demonstrate that our theory is useful not only for calculating the performance bounds of an equivariant network beforehand, but also for explaining the suboptimal performance of an equivariant network, thereby potentially assisting in model selection.
### Robotic Experiment
In this experiment, we evaluate our theory in behavior cloning in robotic manipulation. We first preform an experiment where the problem is a mixture of correct and incorrect equivariance for a \(D_{1}\)-equivariant policy network (\(D_{1}\)) where the robot's action will flip when the state is flipped
Figure 8: Result of the Swiss Roll experiment. (a) test success rate of an invariant network (red) and an unconstrained MLP (blue) with different extrinsic and correct ratio when incorrect ratio is 0. (b) same as (a) with different correct and incorrect ratio when extrinsic ratio is 0. Averaged over 10 runs.
Figure 7: (a) (b) The Swiss Roll data distribution that leads to harmful extrinsic equivariance. (c) (d) The correct and incorrect data distribution in the Swiss Roll experiment. Here the spirals overlap with mismatched and matched labels respectively. (e) (f) Data distribution example with different correct ratio (\(c\)), incorrect ratio (\(i\)), and extrinsic ratio (\(e\)) values.
horizontally. Specifically, the environment contains two possible tasks (Figure 9 left). Stacking requires the robot to stack a green triangle on top of a blue cube. Here flip equivariance is correct. Sorting requires the robot to push the red cube to the left and the yellow triangle to the right. Here flip equivariance is incorrect because the robot should not sort the objects in an opposite way when the state is flipped (in other words, \(D_{1}\) cannot distinguish left and right). We vary the probability \(c\) of the stacking task (correct ratio) in the task distribution, and compare the performance of \(D_{1}\) versus a standard CNN policy. If we view the sorting task as a binary classification task, we can calculate an upper bound of performance for \(D_{1}\) using Theorem 4.3: \(0.5+0.5c\). Figure 10 shows the result. Notice that the performance of \(D_{1}\) closely matches the theoretical upper bound, while the performance of CNN remains relatively stable for all Stacking-Sorting distributions.
We further evaluate \(D_{1}\) in a sorting task with harmful extrinsic equivariance. Here, the goal for the robot is the same as sorting above (push the red cube left and the yellow triangle right), however, the left and right sides of the workspace can now be differentiated by gray-scale colors. The shades of gray are discretized evenly into \(n\) bins, where the left side's color is randomly sampled from the odd-numbered bins, and the right side's color is randomly sampled from the even-numbered bins (Figure 9 right). The different color distributions of the left and right sides make \(D_{1}\) extrinsically equivariant, but it needs to learn the color distribution to distinguish left and right (while CNN can distinguish left and right directly). We set \(n=10\), and \(D_{1}\) achieves \(71.5\pm 1.6\%\) test success rate, while CNN achieves \(99.5\pm 0.5\%\), demonstrating that the \(D_{1}\) extrinsic equivariance is harmful in this task. See Appendix 1.5 for the details of the robot experiment.
## 7 Discussion
This paper presents a general theory for when the symmetry of the ground truth function and equivariant network are mismatched. We define pointwise correct, incorrect, and extrinsic equivariance, generalizing prior work [58] to include continuous mixtures of the three extremes. We prove error lower bounds for equivariant networks applied to asymmetric tasks including classification, invariant regression, and equivariant regression without the assumption of invariant data density. Our work discusses the potential disadvantage of extrinsic equivariance, and provides experiments that validate our theoretical analysis. The major limitation of this paper is that our theoretical lower bounds require domain knowledge like the density function over the domain. In future work, we will develop easy-to-apply model selection tools using our theory. Another future direction is theoretically understanding when extrinsic equivariance is helpful or harmful and analyzing the effect of extrinsic equivariance on the decision boundary of an equivariant network.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Digit & Overall & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline Print CNN & 96.62 & 99.81 & 98.27 & 97.06 & 96.12 & 98.04 & 92.9 & 94.93 & 97.85 & 93.65 & 96.13 \\ Print D4 & 92.5 & 99.71 & 98.62 & 96.94 & 95.98 & 97.91 & 93.52 & **63.08** & 98.42 & 95.88 & **76.17** \\ Print D4 Upper Bound & 90 & 100 & 100 & 100 & 100 & 100 & 100 & 50 & 100 & 100 & 50 \\ \hline MNIST CNN & 98.21 & 99.51 & 99.61 & 98.62 & 98.83 & 98.08 & 98.47 & 97.99 & 97.04 & 96.98 & 96.81 \\ MNIST D4 & 96.15 & 98.93 & 99.21 & **91.84** & 98.28 & **95.49** & **95.04** & **93.71** & **95.67** & 97.73 & **95.34** \\ \hline \hline \end{tabular}
\end{table}
Table 1: \(D_{4}\)-invariant network compared with an unconstrained CNN in printed and MNIST handwritten digit classification tasks. Bold indicates that there is a \(>1\%\) difference in two models.
## Acknowledgments
This work is supported in part by NSF 1724257, NSF 1724191, NSF 1763878, NSF 1750649, NSF 2107256, NSF 2134178, NSF 2312171, and NASA 80NSSC19K1474.
## References
* A. Behboodi, G. Cesa, and T. Cohen (2022)A PAC-bayesian generalization bound for equivariant networks. arXiv preprint arXiv:2210.13150. Cited by: SS1.
* A. Bogatskiy, B. Anderson, J. Offermann, M. Roussi, D. Miller, and R. Kondor (2020)Lorentz group equivariant neural network for particle physics. In International Conference on Machine Learning, pp. 992-1002. Cited by: SS1.
* E. Chatzipantazis, S. Pertigkiozoglou, E. Dobriban, and K. Daniilidis (2023)\(\mathrm{SE}(3)\)-equivariant attention networks for shape reconstruction in function space. In The Eleventh International Conference on Learning Representations, External Links: Link Cited by: SS1.
* S. Chen, E. Dobriban, and J. H. Lee (2020)A group-theoretic framework for data augmentation. The Journal of Machine Learning Research21 (1), pp. 9885-9955. Cited by: SS1.
* T. Cohen and M. Welling (2016)Group equivariant convolutional networks. In International conference on machine learning, pp. 2990-2999. Cited by: SS1.
* T. S. Cohen and M. Welling (2017)Steerable CNNs. In International Conference on Learning Representations, External Links: Link Cited by: SS1.
* T. S. Cohen, M. Geiger, and M. Weiler (2019)A general theory of equivariant cnns on homogeneous spaces. Advances in neural information processing systems32. Cited by: SS1.
* E. Coumans and Y. Bai (2016)Pybullet, a python module for physics simulation for games, robotics and machine learning. Note: [http://pybullet.org](http://pybullet.org) External Links: Link Cited by: SS1.
* R. Dangovski, L. Jing, C. Loh, S. Han, A. Srivastava, B. Cheung, P. Agrawal, and M. Soljacic (2021)Equivariant self-supervised learning: encouraging equivariance in representations. In International Conference on Learning Representations, External Links: Link Cited by: SS1.
* N. Dehmamy, R. Walters, Y. Liu, D. Wang, and R. Yu (2021)Automatic symmetry discovery with lie algebra convolutional network. Advances in Neural Information Processing Systems34, pp. 2503-2515. Cited by: SS1.
* L. Deng (2012)The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine29 (6), pp. 141-142. Cited by: SS1.
* K. Dhama (2021)Printed numerical digits image dataset. Note: [https://github.com/kaydee0502/printed-digits-dataset](https://github.com/kaydee0502/printed-digits-dataset) Cited by: SS1.
* B. Elesedy and S. Zaidi (2021)Provably strict generalisation benefit for equivariant models. In International Conference on Machine Learning, pp. 2959-2969. Cited by: SS1.
* Finzi et al. [2020] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In _International Conference on Machine Learning_, pages 3165-3176. PMLR, 2020.
* Hauberg et al. [2016] Soren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John Fisher, and Lars Hansen. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation. In _Artificial intelligence and statistics_, pages 342-350. PMLR, 2016.
* Hinton et al. [2011] Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In _International conference on artificial neural networks_, pages 44-51. Springer, 2011.
* Huang et al. [2022] Haojie Huang, Dian Wang, Robin Walters, and Robert Platt. Equivariant transporter network. In _Robotics: Science and Systems_, 2022.
* Huang et al. [2023] Haojie Huang, Dian Wang, Xupeng Zhu, Robin Walters, and Robert Platt. Edge grasp network: A graph-based \(\mathrm{SE}(3)\)-invariant approach to grasp detection. In _International Conference on Robotics and Automation (ICRA)_, 2023.
* Jia et al. [2023] Mingxi Jia, Dian Wang, Guanang Su, David Klee, Xupeng Zhu, Robin Walters, and Robert Platt. Seil: Simulation-augmented equivariant imitation learning. In _International Conference on Robotics and Automation (ICRA)_, 2023.
* Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* Klee et al. [2022] David Klee, Ondrej Biza, Robert Platt, and Robin Walters. I2i: Image to icosahedral projection for \(\mathrm{SO}(3)\) object reasoning from single-view images. _arXiv preprint arXiv:2207.08925_, 2022.
* Klee et al. [2023] David Klee, Ondrej Biza, Robert Platt, and Robin Walters. Image to sphere: Learning equivariant features for efficient pose prediction. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=_2bDpAtr7PI](https://openreview.net/forum?id=_2bDpAtr7PI).
* Kondor and Trivedi [2018] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In _International Conference on Machine Learning_, pages 2747-2755. PMLR, 2018.
* Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. _Advances in neural information processing systems_, 25, 2012.
* LeCun et al. [1998] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_, 86(11):2278-2324, 1998.
* Lenc and Vedaldi [2015] Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 991-999, 2015.
* Li et al. [2021] Xiaolong Li, Yijia Weng, Li Yi, Leonidas J Guibas, A Abbott, Shuran Song, and He Wang. Leveraging se (3) equivariance for self-supervised category-level object pose estimation from point clouds. _Advances in Neural Information Processing Systems_, 34:15370-15381, 2021.
* Liu et al. [2023] Xueyi Liu, Ji Zhang, Ruizhen Hu, Haibin Huang, He Wang, and Li Yi. Self-supervised category-level articulated object pose estimation with part-level SE(3) equivariance. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=206tJ6hIaPA](https://openreview.net/forum?id=206tJ6hIaPA).
* Lyle et al. [2020] Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, and Benjamin Bloem-Reddy. On the benefits of invariance in neural networks. _arXiv preprint arXiv:2005.00178_, 2020.
* Maile et al. [2023] Kaitlin Maile, Dennis George Wilson, and Patrick Forre. Equivariance-aware architectural optimization of neural networks. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=a6rCdfABJXg](https://openreview.net/forum?id=a6rCdfABJXg).
* Marchetti et al. [2022] Giovanni Luca Marchetti, Gustaf Tegner, Anastasiia Varava, and Danica Kragic. Equivariant representation learning via class-pose decomposition. _arXiv preprint arXiv:2207.03116_, 2022.
* Maron et al. [2019] Haggai Maron, Ethan Fetaya, Nimrod Segol, and Yaron Lipman. On the universality of invariant networks. In _International conference on machine learning_, pages 4363-4371. PMLR, 2019.
* Maron et al. [2020] Haggai Maron, Or Litany, Gal Chechik, and Ethan Fetaya. On learning sets of symmetric elements. In _International conference on machine learning_, pages 6734-6744. PMLR, 2020.
* Miao et al. [2021] Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, and Hyunjik Kim. Learning instance-specific augmentations by capturing local invariances. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, _Proceedings of the 40th International Conference on Machine Learning_, volume 202 of _Proceedings of Machine Learning Research_, pages 24720-24736. PMLR, 23-29 Jul 2023. URL [https://proceedings.mlr.press/v202/miao23a.html](https://proceedings.mlr.press/v202/miao23a.html).
* Mondal et al. [2022] Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, and Siamak Ravanbakhsh. EqR: Equivariant representations for data-efficient reinforcement learning. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 15908-15926. PMLR, 17-23 Jul 2022. URL [https://proceedings.mlr.press/v162/mondal22a.html](https://proceedings.mlr.press/v162/mondal22a.html).
* Moskalev et al. [2022] Artem Moskalev, Anna Sepliarskaia, Ivan Sosnovik, and Arnold W.M. Smeulders. LieGG: Studying learned lie group generators. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022. URL [https://openreview.net/forum?id=9sKZ60VtRmi](https://openreview.net/forum?id=9sKZ60VtRmi).
* Pan et al. [2023] Chuer Pan, Brian Okorn, Harry Zhang, Ben Eisner, and David Held. Tax-pose: Task-specific cross-pose estimation for robot manipulation. In _Conference on Robot Learning_, pages 1783-1792. PMLR, 2023.
* Park et al. [2022] Jung Yeon Park, Ondrej Biza, Linfeng Zhao, Jan Willem van de Meent, and Robin Walters. Learning symmetric representations for equivariant world model. In _International Conference on Machine Learning_, 2022. URL [https://arxiv.org/abs/2204.11371](https://arxiv.org/abs/2204.11371).
* Quessard et al. [2020] Robin Quessard, Thomas Barrett, and William Clements. Learning disentangled representations and group structure of dynamical environments. _Advances in Neural Information Processing Systems_, 33:19727-19737, 2020.
* Rommel et al. [2021] Cedric Rommel, Thomas Moreau, Joseph Paillard, and Alexandre Gramfort. Cadda: Class-wise automatic differentiable data augmentation for eeg signals. In _International Conference on Learning Representations_, 2021.
* Ryu et al. [2023] Hyunwoo Ryu, Hong in Lee, Jeong-Hoon Lee, and Jongeun Choi. Equivariant descriptor fields: SE(3)-equivariant energy-based models for end-to-end visual robotic manipulation learning. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=dnJZSPGmY50](https://openreview.net/forum?id=dnJZSPGmY50).
* Sannai et al. [2021] Akiyoshi Sannai, Masaaki Imaizumi, and Makoto Kawano. Improved generalization bounds of group invariant/equivariant deep networks via quotient feature spaces. In _Uncertainty in Artificial Intelligence_, pages 771-780. PMLR, 2021.
* Schmidt and Roth [2012] Uwe Schmidt and Stefan Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. In _2012 IEEE Conference on Computer Vision and Pattern Recognition_, pages 2050-2057. IEEE, 2012.
* Simeonov et al. [2022] Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, and Vincent Sitzmann. Neural descriptor fields: Se (3)-equivariant object representations for manipulation. In _2022 International Conference on Robotics and Automation (ICRA)_, pages 6394-6400. IEEE, 2022.
* Simeonov et al. [2023] Anthony Simeonov, Yilun Du, Yen-Chen Lin, Alberto Rodriguez Garcia, Leslie Pack Kaelbling, Tomas Lozano-Perez, and Pulkit Agrawal. Se (3)-equivariant relational rearrangement with neural descriptor fields. In _Conference on Robot Learning_, pages 835-846. PMLR, 2023.
* Sohn and Lee [2012] Kihyuk Sohn and Honglak Lee. Learning invariant representations with local transformations. In John Langford and Joelle Pineau, editors, _Proceedings of the 29th International Conference on Machine Learning (ICML-12)_, ICML '12, pages 1311-1318, New York, NY, USA, July 2012. Omnipress. ISBN 978-1-4503-1285-1.
* Sokolic et al. [2017] Jure Sokolic, Raja Giryes, Guillermo Sapiro, and Miguel Rodrigues. Generalization error of invariant classifiers. In _Artificial Intelligence and Statistics_, pages 1094-1103. PMLR, 2017.
* van der Pol et al. [2020] Elise van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, and Max Welling. Mdp homomorphic networks: Group symmetries in reinforcement learning. _Advances in Neural Information Processing Systems_, 33, 2020.
* Walters et al. [2020] Robin Walters, Jinxi Li, and Rose Yu. Trajectory prediction using equivariant continuous convolution. _arXiv preprint arXiv:2010.11344_, 2020.
* Wang et al. [2021] Dian Wang, Robin Walters, Xupeng Zhu, and Robert Platt. Equivariant \(Q\) learning in spatial action spaces. In _5th Annual Conference on Robot Learning_, 2021. URL [https://openreview.net/forum?id=IScz42A3iCI](https://openreview.net/forum?id=IScz42A3iCI).
* Wang et al. [2022] Dian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, and Robert Platt. On-robot learning with equivariant models. In _6th Annual Conference on Robot Learning_, 2022. URL [https://openreview.net/forum?id=K8W6ObPZQyh](https://openreview.net/forum?id=K8W6ObPZQyh).
* Wang et al. [2022] Dian Wang, Robin Walters, and Robert Platt. \(\mathrm{SO}(2)\)-equivariant reinforcement learning. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=7F9cOhdvfk_](https://openreview.net/forum?id=7F9cOhdvfk_).
* Wang et al. [2023] Dian Wang, Colin Kohler, Xupeng Zhu, Mingxi Jia, and Robert Platt. Bulletarm: An open-source robotic manipulation benchmark and learning framework. In _Robotics Research_, pages 335-350. Springer, 2023.
* Wang et al. [2023] Dian Wang, Jung Yeon Park, Neel Sortur, Lawson L.S. Wong, Robin Walters, and Robert Platt. The surprising effectiveness of equivariant models in domains with latent symmetry. In _International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=P4MUGRM4Acu](https://openreview.net/forum?id=P4MUGRM4Acu).
* Wang et al. [2020] Rui Wang, Robin Walters, and Rose Yu. Incorporating symmetry into deep dynamics models for improved generalization. _arXiv preprint arXiv:2002.03061_, 2020.
* Weiler and Cesa [2019] Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns. _Advances in Neural Information Processing Systems_, 32, 2019.
* Winter et al. [2022] Robin Winter, Marco Bertolini, Tuan Le, Frank Noe, and Djork-Arne Clevert. Unsupervised learning of group invariant and equivariant representations. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022. URL [https://openreview.net/forum?id=47lpv23LDPr](https://openreview.net/forum?id=47lpv23LDPr).
* Yarotsky [2022] Dmitry Yarotsky. Universal approximations of invariant maps by neural networks. _Constructive Approximation_, 55(1):407-474, 2022.
* Zhao et al. [2023] Linfeng Zhao, Xupeng Zhu, Lingzhi Kong, Robin Walters, and Lawson L.S. Wong. Integrating symmetry into differentiable planning with steerable convolutions. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=n7CPzHPKQ1](https://openreview.net/forum?id=n7CPzHPKQ1).
* Zhou et al. [2020] Allan Zhou, Tom Knowles, and Chelsea Finn. Meta-learning symmetries by reparameterization. _arXiv preprint arXiv:2007.02933_, 2020.
* Zhu et al. [2022] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su, Robin Walters, and Robert Platt. Sample efficient grasp learning using equivariant models. In _Robotics: Science and Systems_, 2022.
Integrals on the Group
Fundamental DomainsIn this paper, we are interested in cases in which the group \(G\) is not necessarily discrete but may have positive dimension. We do not assume the fundamental domain has non-empty interior, and thus domain is a misnomer. In this case the conjugates of the fundamental domain \(gF\) have measure 0 and the condition that their intersection have measure 0 is vacuous. Instead we assume a stronger condition, that the union of all pairwise intersections \(\bigcup_{g_{1}\neq g_{2}}\left(g_{1}F\cap g_{2}F\right)\) has measure 0. We also require that \(F\) and the orbits \(Gx\) are differentiable manifolds such that integrals over \(X\) may be evaluated \(\int_{X}f(x)dx=\int_{F}\int_{Gy}f(z)dzdy\) similar to Equation 8 from [18].
ReparameterizationConsider the integral
\[\int_{Gx}f(z)dz. \tag{7}\]
Denote the identification of the orbit \(Gx\) and coset space \(G/G_{x}\) with respect to the stabilizer \(G_{x}=\{g:gx=x\}\) by \(a_{x}\colon G/G_{x}\to Gx\). Then the integral can be written
\[\int_{G/G_{x}}f(\bar{g}x)\left|\frac{\partial a_{x}(\bar{g})}{\partial\bar{g}} \right|d\bar{g}.\]
We can also lift the integral to \(G\) itself
\[\int_{G/G_{x}}f(\bar{g}x)\left|\frac{\partial a_{x}(\bar{g})}{ \partial\bar{g}}\right|d\bar{g} =\left(\int_{G_{x}}dh\right)^{-1}\left(\int_{G_{x}}dh\right)\int_ {G/G_{x}}f(\bar{g}x)\left|\frac{\partial a_{x}(\bar{g})}{\partial\bar{g}} \right|d\bar{g}\] \[=\left(\int_{G_{x}}dh\right)^{-1}\int_{G/G_{x}}\int_{G_{x}}f(\bar {g}hx)\left|\frac{\partial a_{x}(\bar{g})}{\partial\bar{g}}\right|dhd\bar{g}\] \[=\left(\int_{G_{x}}dh\right)^{-1}\int_{G}f(gx)\left|\frac{ \partial a_{x}(\bar{g})}{\partial\bar{g}}\right|dg.\]
Define \(\alpha(g,x)=\left(\int_{G_{x}}dh\right)^{-1}\left|\frac{\partial a_{x}(\bar{g} )}{\partial\bar{g}}\right|\). Then
\[\int_{Gx}f(z)dz=\int_{G}f(gx)\alpha(g,x)dg.\]
## Appendix B Iterated Integral
Let \(X\) be an \(n\)-dimensional space, Definition 4.2 (Equation 2) defines \(k(Gx)\) as an integral over \(Gx\subseteq X\), which is a \(m\)-dimensional sub-manifold of \(X\). In Theorem 4.3, Equation 4 rewrites the error function (Equation 1) as an iterated integral over the orbit \(Gx\) and then the fundamental domain \(F\) using Definition 4.1. In the discrete group case, \(m\) would be 0, Equation 2 is an integral of a 0-form in a 0-manifold, which is a sum:
\[k(Gx)=\min_{y\in Y}\sum_{z\in Gx}p(z)\mathbb{1}(f(z)\neq y)=\min_{y\in Y}\sum_{ g\in G}p(gx)\mathbb{1}(f(gx)\neq y) \tag{8}\]
## Appendix C Proof of Proposition 4.7
Proof.: Consider the integral of probability density inside \(Gx\), for a given \(y\), it can be separated into two groups:
\[\int_{Gx}p(z)dz= \int_{Gx}p(z)\mathbb{1}(f(z)=y)dz\] \[+\int_{Gx}p(z)\mathbb{1}(f(z)\neq y)dz.\]We can then rewrite \(k(Gx)\) in Equation 2 as:
\[k(Gx)=\min_{y\in Y}\left[\int_{Gx}p(z)dz-\int_{Gx}p(z)\mathbb{1}\left(f(z)=y\right) dz\right]. \tag{9}\]
Letting \((Gx)_{y}=\{x^{\prime}\in Gx\mid f(x^{\prime})=y\}=f^{-1}(y)\cap Gx\), Equation 9 can be written as:
\[k(Gx)=\min_{y\in Y}\Big{[}\int_{Gx}p(z)dz-\int_{(Gx)_{y}}p(z)dz \Big{]}\] \[\int_{Gx}p(z)dz-\max_{y\in Y}\int_{(Gx)_{y}}p(z)dz.\]
Theorem 4.3 can be rewritten as:
\[\operatorname{err}(h) \geq\int_{F}\Big{(}\int_{Gx}p(z)dz-\max_{y\in Y}\int_{(Gx)_{y}}p( z)dz\Big{)}dx\] \[\geq\int_{F}\int_{Gx}p(z)dz-\int_{F}\max_{y\in Y}\int_{(Gx)_{y}}p (z)dz\] \[\geq 1-\int_{F}\max_{y\in Y}|(Gx)_{y}|p(x)dx. \tag{10}\]
The first term in Equation 10 uses the fact that \(X=\bigcup_{x\in F}Gx\) so the integral of the probability of the orbits of all points in the fundamental domain is the integral of the probability of the input domain \(X\) which is 1. The second term of Equation 10 uses \(p(gx)=p(x)\) so the integration of \(p(z)\) on \((Gx)_{y}\) becomes \(p(x)\) times the range of the limit which is the size of \((Gx)_{y}\), \(|(Gx)_{y}|\).
Now consider a partition of \(F=\coprod_{q}F_{q}\) where \(F_{q}=\{x\in F:(\max_{y\in Y}|(Gx)_{y}|)/|Gx|=q\}\). We can rewrite Equation 10 as:
\[\operatorname{err}(h) \geq 1-\int_{F}q|Gx|p(x)dx \tag{11}\] \[\geq 1-\sum_{q}\int_{F_{q}}q|Gx|p(x)dx\] (12) \[\geq 1-\sum_{q}q\int_{F_{q}}|Gx|p(x)dx. \tag{13}\]
Equation 11 uses the definition of \(q\). Equation 12 separates the integral over \(F\) into the partition of \(F\). Equation 13 moves \(q\) out from the integral because it is a constant inside the integral. Consider the definition of \(c_{q}\), we have:
\[c_{q} =\mathbb{P}(x\in X_{q})\] \[=\int_{X_{q}}p(x)dx\] \[=\int_{F_{q}}\int_{Gx}p(z)dzdx \tag{14}\] \[=\int_{F_{q}}|Gx|p(x)dx. \tag{15}\]
Equation 14 uses \(X_{q}=\bigcup_{x\in F_{q}}Gx\). Equation 15 uses \(p(x)=p(gx)\). Now we can write Equation 13 as:
\[\operatorname{err}(h)\geq 1-\sum_{q}qc_{q}.\]Proof of Theorem 4.9
Define \(q(gx)\in\mathbb{R}^{n\times n}\) such that
\[Q_{Gx}q(gx)=p(gx)\rho_{Y}(g)^{T}\rho_{Y}(g)\alpha(x,g). \tag{16}\]
In particular, \(q(gx)\) exists when \(Q_{Gx}\) is full rank. It follows that \(Q_{Gx}\int_{G}q(gx)dg=Q_{Gx}\). Moreover, \(Q_{Gx}\) and \(q(gx)\) are symmetric matrix.
Proof.: The error function (Equation 1) can be written
\[\mathrm{err}(h) =\mathbb{E}_{x\sim p}[||f(x)-h(x)||_{2}^{2}]\] \[=\int_{X}p(x)||f(x)-h(x)||_{2}^{2}dx\] \[=\int_{x\in F}\int_{g\in G}p(gx)||f(gx)-h(gx)||_{2}^{2}\alpha(x,g )dgdx.\]
Denote \(e(x)=\int_{G}p(gx)||f(gx)-h(gx)||_{2}^{2}\alpha(x,g)dg\). Since \(h\) is \(G\)-equivariant, for each \(x\in F\) the value \(c=h(x)\in\mathbb{R}^{n}\) of \(h\) at \(x\) determines the value of \(h\) across the whole orbit \(h(gx)=gh(x)=gc\) for \(g\in G\). Then \(e(x)\) can be written
\[e(x) =\int_{G}p(gx)||f(gx)-gc||_{2}^{2}\alpha(x,g)dg\] \[=\int_{G}p(gx)||g(g^{-1}f(gx)-c)||_{2}^{2}\alpha(x,g)dg\] \[=\int_{G}(g^{-1}f(gx)-c)^{T}p(gx)g^{T}g\alpha(x,g)(g^{-1}f(gx)-c)dg\] \[=\int_{G}(g^{-1}f(gx)-c)^{T}Q_{Gx}q(gx)(g^{-1}f(gx)-c)dg. \tag{17}\]
Taking the derivative of \(e(x)\) with respect to \(c\) we have
\[\frac{\partial e(x)}{\partial c} =\int_{G}\Big{(}(Q_{Gx}q(gx))^{T}+(Q_{Gx}q(gx))\Big{)}(c-g^{-1}f( gx))dg\] \[=\int_{G}2Q_{Gx}q(gx)(c-g^{-1}f(gx))dg.\]
Setting \(\partial e(x)/\partial c=0\) we can find an equation for \(c^{*}\) which minimizes \(e(x)\)
\[Q_{Gx}\int_{G}q(gx)dg\cdot c^{*} =Q_{Gx}\int_{G}q(gx)g^{-1}f(gx)dg\] \[Q_{Gx}c^{*} =Q_{Gx}\mathbf{E}_{G}[f,x]. \tag{18}\]
Substituting \(c^{*}\) into Equation 17 we have
\[e(x)\geq \int_{G}(g^{-1}f(gx)-c^{*})^{T}Q_{Gx}q(gx)(g^{-1}f(gx)-c^{*})dg\] \[= \int_{G}(g^{-1}f(gx))^{T}Q_{Gx}q(gx)(g^{-1}f(gx))\] \[-\Big{(}c^{*T}Q_{Gx}q(gx)g^{-1}f(gx)\Big{)}^{T} \tag{19}\] \[-c^{*T}Q_{Gx}q(gx)g^{-1}f(gx)\] \[+c^{*T}Q_{Gx}q(gx)c^{*}dg.\]
The term \(\int_{G}e^{*T}Q_{Gx}q(gx)g^{-1}f(gx)dg\) could be simplified as \[\int_{G}c^{*T}Q_{Gx}q(gx)g^{-1}f(gx)dg=\int_{G}\mathbf{E}_{G}[f,x]Q_{Gx} q(gx)g^{-1}f(gx)dg. \tag{20}\]
Notice that \(Q_{Gx}\), and \(q(gx)\) are symmetric matrix
\[\int_{G}c^{*T}Q_{Gx}q(gx)c^{*}dg =\int_{G}c^{*T}q(gx)Q_{Gx}c^{*}dg\] \[=\int_{G}\mathbf{E}_{G}^{T}[f,x]Q_{Gx}q(gx)\mathbf{E}_{G}[f,x]dg.\]
Thus Equation 19 becomes
\[e(x)\geq \int_{G}(g^{-1}f(gx))^{T}Q_{Gx}q(gx)(g^{-1}f(gx))\] \[-\left(\mathbf{E}_{G}^{T}[f,x]Q_{Gx}q(gx)g^{-1}f(gx)\right)^{T}\] \[-\mathbf{E}_{G}^{T}[f,x]Q_{Gx}q(gx)g^{-1}f(gx)\] \[+\mathbf{E}_{G}^{T}[f,x]Q_{Gx}q(gx)\mathbf{E}_{G}[f,x]dg\] \[=\int_{G}p(gx)||f(gx)-g\mathbf{E}_{G}[f,x]||_{2}^{2}\alpha(x,g)dg.\]
Taking the integral over the fundamental domain \(F\) we have
\[\operatorname{err}(h) =\int_{F}e(x)\] \[\geq\int_{F}\int_{G}p(gx)||f(gx)-g\mathbf{E}_{G}[f,x]||_{2}^{2} \alpha(x,g)dgdx. \tag{21}\]
## Appendix E Proof of Corollary 4.10
Proof.: When \(\rho_{Y}\) is an orthogonal representation, we have \(\rho_{Y}(g)^{T}\rho_{Y}(g)=I_{n}\), i.e., the identity matrix. Then \(q(gx)\) can be written as \(q(gx)=s(gx)\mathrm{Id}\) where \(s(gx)\) is a scalar. Since \(\int_{G}q(gx)dg=\mathrm{Id}\), we can re-define \(q(gx)\) to drop \(\mathrm{Id}\) and only keep the scalar, then \(q_{x}(g)\) can be viewed as a probability density function of \(g\) because now \(\int_{G}q_{x}(g)=1\).
With \(q_{x}(g)\) being the probability density function, \(\mathbf{E}_{G}[f,x]\) (Equation 6) naturally becomes the mean \(\mathbb{E}_{G}[f_{x}]\) where \(g\sim q_{x}\).
Now consider \(e(x)=\int_{G}p(gx)||f(gx)-g\mathbf{E}_{G}[f_{x}]||_{2}^{2}\alpha(x,g)dg\) in Theorem 4.9, it can be written as
\[e(x)= \int_{G}p(gx)||f(gx)-g\mathbb{E}_{G}[f_{x}]||_{2}^{2}\alpha(x,g)dg\] \[= \int_{G}p(gx)||g(g^{-1}f(gx)-\mathbb{E}_{G}[f_{x}])||_{2}^{2} \alpha(x,g)dg\] \[= \int_{G}p(gx)(g^{-1}f(gx)-\mathbb{E}_{G}[f_{x}])^{T}\rho_{Y}(g)^{ T}\rho_{Y}(g)(g^{-1}f(gx)-\mathbb{E}_{G}[f_{x}])\alpha(x,g)dg.\]
Since \(\rho_{Y}(g)^{T}\rho_{Y}(g)=I_{n}\), we have
\[e(x)= \int_{G}p(gx)(g^{-1}f(gx)-\mathbb{E}_{G}[f_{x}])^{T}(g^{-1}f(gx)- \mathbb{E}_{G}[f_{x}])\alpha(x,g)dg\] \[= \int_{G}p(gx)||g^{-1}f(gx)-\mathbb{E}_{G}[f_{x}]||_{2}^{2}\alpha( x,g)dg. \tag{22}\]From Equation 5 we have \(p(gx)\alpha(x,g)=Q_{Gx}q(gx)\). Substituting in Equation 22 we have
\[e(x)= \int_{G}Q_{Gx}q(gx)||g^{-1}f(gx)-\mathbb{E}_{G}[f_{x}]||_{2}^{2}dg.\]
Since \(Q_{Gx}=\int_{G}p(gx)\alpha(a,g)dg\) when \(\rho_{Y}(g)^{T}\rho_{Y}(g)=I_{n}\), we have
\[e(x)= Q_{Gx}\int_{G}q_{x}(g)||g^{-1}f(gx)-\mathbb{E}_{G}[f_{x}]||_{2}^{2}dg\] \[= Q_{Gx}\mathbb{V}_{G}[f_{x}]. \tag{23}\]
Now consider \(Q_{Gx}\) (Equation 5), when \(\rho_{Y}(g)^{T}\rho_{Y}(g)=I_{n}\), it can be written
\[Q_{Gx}= \int_{G}p(gx)\alpha(x,g)dg\] \[=\int_{Gx}p(z)dz\] \[=p(Gx).\]
Replacing \(Q_{Gx}\) with \(p(Gx)\) in Equation 23 then taking the integral of \(e(x)\) over the fundamental domain gives the result.
## Appendix F Lower Bound of Equivariant Regression when \(\rho_{Y}=\mathrm{Id}\)
**Proposition F.1**.: _When \(\rho_{Y}=\mathrm{Id}\), the error of \(h\) has lower bound \(\mathrm{err}(h)\geq\int_{F}p(Gx)\mathbb{V}_{Gx}[f]dx\), which is the same as Theorem 4.8._
Proof.: Consider Equation 5, when \(\rho_{Y}(g)=\mathrm{Id}\), we have
\[Q_{Gx}=\int_{G}p(gx)\alpha(x,g)dg.\]
Exchange the integration variable using \(z=gx\) we have
\[Q_{Gx}=\int_{Gx}p(z)dz. \tag{24}\]
Consider \(\mathbb{E}_{G}[f_{x}]=\int_{G}q_{x}(g)g^{-1}f(gx)dg\). When \(\rho_{Y}(g)=\mathrm{Id}\), it becomes
\[\mathbb{E}_{G}[f_{x}]=\int_{G}q(gx)f(gx)dg.\]
Substituting \(q(gx)\) with Equation 5, considering \(\rho_{Y}(g)=\mathrm{Id}\), we have
\[\mathbb{E}_{G}[f_{x}]=\int_{G}Q_{Gx}^{-1}p(gx)f(gx)\alpha(x,g)dg.\]
Exchange the integration variable using \(z=gx\) we have
\[\mathbb{E}_{G}[f_{x}]=\int_{Gx}Q_{Gx}^{-1}p(z)f(z)dz.\]
Substituting Equation 24 we have
\[\mathbb{E}_{G}[f_{x}] =\int_{Gx}\frac{p(z)}{\int_{Gx}p(z)dz}f(z)dz\] \[=\mathbb{E}_{Gx}[f].\]
Similarly, we can proof \(\mathbb{V}_{G}[f_{x}]=\mathbb{V}_{Gx}[f]\), thus when \(\rho_{Y}=\mathrm{Id}\), Corollary 4.10 is Theorem 4.8.
## Appendix G Rademacher Complexity of Harmful Extrinsic Equivariance Example
Let \(S=\{x^{1},x^{2},x^{3},x^{4}\}\), where the labels are \(y^{1},y^{2}=+1\) and \(y^{3},y^{4}=-1\). We consider two model classes \(\mathcal{F}_{N}\), the set of all linear models, and \(\mathcal{F}_{E}\), the set of all linear models equivariant to \(C_{2}\), and compute their empirical Rademacher complexity on \(S\).
For the data \(S\), an extrinsically equivariant linear model class has lower empirical Rademacher complexity than its unconstrained linear counterpart, demonstrating that extrinsic equivariance can be harmful to learning.
## Appendix H Additional Experiments
### Swiss Roll Experiment
Figure 11 and Figure 12 show the actual data distribution for the Swiss Roll experiment in Section 6.1. In the incorrect distribution, the data in the two \(z\) planes form two spirals with different labels but the same shape. The equivariance is incorrect because if we translate one spiral to the other spiral's plane, they will overlap but their labels are different. In the correct distribution, there are two different 'dashed' spirals copied into two \(z\)-planes. The equivariance is correct because after a \(z\)-translation, both the data and their labels exactly overlap. In all three cases, we assume the data has a uniform distribution. Figure 12(b) shows the ternary plot of MLP for all different \(c,ir,er\), where the performance of MLP decreases as the correct ratio increases. Figure 12(a) shows an inverse trend: the
Figure 11: The correct, incorrect, and extrinsic data distribution in the Swiss Roll experiment.
performance of INV increases as the correct ratio increases. Moreover, both extrinsic and incorrect equivariance harms the performance of INV, but incorrect equivariance is more devastating because the error is limited by a theoretical lower bound.
### Square Experiment
We consider the environment shown in Example 4.6. We vary \(m\in\{0.2,0.4,0.6,0.8,1\}\) and \(c\in\{0,0.2,0.4,0.6,0.8,1\}\). We train an \(u\)-invariant network and evaluate its test performance with the theoretical lower bound \(\operatorname{err}(h)\geq(1-c)\times(1-m)\). Figure 14 shows the test error of the trained network compared with the theoretical lower bound. The highest difference is below 3%, demonstrating the correctness of our theory.
### Regression Experiment
In this experiment, we validate our theoretical error lower bound for invariant and equivariant regression (Theorem 4.8, 4.9) in an environment similar to Example 4.11. Consider a regression task \(f:\mathbb{R}\times\mathcal{X}\rightarrow\mathbb{R}^{2}\) given by \((\theta,x)\mapsto y\), where \(\mathcal{X}=\{x_{0},x_{1},x_{2},x_{3}\}\). The group \(g\in G=C_{4}=\{e,r,r^{2},r^{3}\}\) acts on \((\theta,x)\) by \(g(\theta,x)=(\theta,gx)\) through permutation: \(x_{1}=rx_{0};x_{2}=rx_{1};x_{3}=\)
\begin{table}
\begin{tabular}{c c c} \hline \hline & Invariant Network & Equivariant Network \\ \hline Empirical/Theoretical & 1.002 \(\pm\)0.000 & 1.001 \(\pm\)0.000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Empirical \(\operatorname{err}(h)\) divided by theoretical \(\operatorname{err}(h)\) for invariant regression and equivariant regression. Results are averaged over 100 runs with different \(f\) for each regression. Empirical regression error matches theoretical error.
Figure 12: Data distribution example with different correct ratio (\(c\)), incorrect ratio (\(ir\)), and extrinsic ratio (\(er\)) values.
Figure 13: The ternary plot of the invariant network (a) and unconstrained network (b) with different correct, incorrect, and extrinsic ratio.
\(rx_{2};x_{0}=rx_{3}\). Let \(r^{k}\in G\) acts on \(y\) by \(\rho_{Y}(g)=\left(\begin{smallmatrix}\cos g&-\sin g\\ \sin g&\cos g\end{smallmatrix}\right)\) where \(g=k\pi/2\). Note that fixing a single value of \(\theta\) gives Example 4.11; in other words, this experiment has infinitely many orbits where each orbit is similar to Example 4.11.
We generate random polynomial function \(f\) that is not equivariant, i.e., \(\exists(\theta,x)\) s.t. \(g\cdot f(\theta,x)\neq\rho_{Y}(g)y\). Then we try to fit \(f\) using a \(G\)-invariant network and a \(G\)-equivariant network. We measure their error compared with the theoretical lower bound given by Theorem 4.8 and 4.9. As is shown in Table 2, both the invariant network and the equivariant network achieve an error rate nearly the same as our theoretical bound. The empirical error is slightly higher than the theoretical error due to the neural network fitting error. Please refer to I.4 for more experiment details.
## Appendix I Experiment Details
This section describes the details of our experiments. All of the experiment is performed using a single Nvidia RTX 2080 Ti graphic card.
### Swiss Roll Experiment
In the Swiss Roll Experiment in Section 6.1, we use a three-layer MLP for the unconstrained network. For the \(z\)-invariant network, we use a network with two DSS [37] layers to implement the \(z\)-invariance, each containing two FC layers. We train the networks using the Adam [24] optimizer with a learning rate of \(10^{-3}\). The batch size is 128. In each run, there are 200 training data, 200 validation data, and 200 test data randomly sampled from the data distribution. The network is trained for a minimal of 1000 epochs and a maximum of 10000 epochs, where the training is terminated after there is no improvement in the classification success rate in the validation set for a consecutive of 1000 epochs. We report the test success rate of the epoch model with the highest validation success rate.
### Square Experiment
In the Square Experiment in Section H.2, we use a network with two DSS [37] layers to implement the horizontal invariance, where each layer contains two FC layers. We train the networks using the Adam [24] optimizer with a learning rate of \(10^{-3}\). The batch size is 128. In each run, there are 1000 training data, 200 validation data, and 200 test data randomly sampled from the data distribution. The network is trained for a minimal of 1000 epochs and a maximum of 10000 epochs, where the training is terminated after there is no improvement in the classification success rate in the validation set for a consecutive of 1000 epochs. We report the test success rate of the epoch model with the highest validation success rate.
### Digit Classification Experiment
In the Digit Classification Experiment in Section 6.2, we use two similar five-layer convolutional networks for the \(D_{4}\)-invariant network and the CNN, where the \(D_{4}\)-invariant network is implemented using the e2cnn package [60]. Both networks have the similar amount of trainable parameters. We train the networks using the Adam [24] optimizer with a learning rate of \(5\times 10^{-5}\) and weight decay
Figure 14: Result of the square experiment in terms of the \(L_{1}\) distance between the network error and the theoretical lower bound in percentage. Each cell corresponds to an experiment with a particular correct ratio (\(c\)) and majority label ratio (\(m\)). Results are averaged over 10 runs.
of \(10^{-5}\). The batch size is 256. In each run, there are 5000 training data, 1000 validation data, and 1000 test data randomly sampled from the data distribution. The network is trained for a minimal of 50 epochs and a maximum of 1000 epochs, where the training is terminated after there is no improvement in the classification success rate in the validation set for a consecutive of 50 epochs. We report the test success rate of the epoch model with the highest validation success rate.
### Regression Experiment
In the regression experiment, we validate our theoretical error lower bound for invariant and equivariant regression (Theorem 4.8, 4.9) by comparing empirical network fitting error and the theoretical fitting error of a function \(f\). Specifically, the function \(f\) maps a distance \(\theta\) and an index \(x\) pair to a vector \(y\):
\[f:\mathbb{R}\times\mathcal{X}\rightarrow\mathbb{R}^{2},\text{given by }(\theta,x )\mapsto y \tag{25}\]
where \(\mathcal{X}=\{x_{0},x_{1},x_{2},x_{3}\}\). The group \(g\in G=C_{4}=\{e,r,r^{2},r^{3}\}\) acts on \((\theta,x)\) by \(g(\theta,x)=(\theta,gx)\) through permuting the index \(x\): \(x_{1}=rx_{0};x_{2}=rx_{1};x_{3}=rx_{2};x_{0}=rx_{3}\). Let \(r^{k}\in G\) acts on vector \(y\) by rotation \(\rho_{Y}(g)=\left(\begin{smallmatrix}\cos g&-\sin g\\ \sin g&\cos g\end{smallmatrix}\right)\) where \(g=k\pi/2\).
We construct function \(f\) in the following way: for each \(x\in\mathcal{X}\), choose \(l_{x}:\mathbb{R}\rightarrow\mathbb{R}^{2}\) and define \(f(\theta,x)=l_{x}(\theta)\). Notice that when \(l_{gx}=\rho_{Y}(g)l_{x}(\theta)\), \(f\) is \(G\)-equivariant. We define \(l_{x}(\theta)=(p_{x}(\theta),q_{x}(\theta))\) where \(p_{x}\) and \(q_{x}\) are cubic polynomials of \(x\), i.g., \(p_{x}\) with coefficients \(a,b,c,d\) will be \(p_{x}=ax^{3}+bx^{2}+cx+d\). We choose \(p_{x}\) and \(q_{x}\) with different coefficients for each \(x\) such that \(f\) is not equivariant, i.e., \(l_{gx}\neq\rho_{Y}(g)l_{x}(\theta)\). For each run, we generate a function \(f\), sample data \(\theta,x\), and evaluate the data obtaining \(y\). Then we train neural networks using \(L2\) loss till converge. Eventually, we sample another set of data to evaluate the empirical \(L2\) error as well as the theoretical \(L2\) error.
### Robotic Experiment
In the robotic manipulation experiment, the state \(s\) is defined as a top-down RGBD image of the workspace centered at the gripper's position (Figure 15 middle). The action \(a=(x,y,z,\theta,\lambda)\) is defined as the change of position (\(x,y,z\)) and top-down orientation (\(\theta\)) of the gripper, with the
Figure 15: The robotic experiment setup and the \(D_{1}\)-equivariant policy network.
gripper open width (\(\lambda\)). For a \(D_{1}=\{1,g\}\) group where \(g\) represents a horizontal flip, the group action on the state space \(gs\) is defined as flipping the image; the group action on the action space \(ga\) is defined as flipping the \(y\) and \(\theta\) action and leaving the other action components unchanged, \(ga=(x,-y,z,-\theta,\lambda)\). We define a \(D_{1}\)-equivariant policy network \(\pi:s\mapsto a\) using e2cnn [60], where the output action of \(\pi\) will flip accordingly when the input image is flipped (Figure 15 bottom). We train the network using the Adam [24] optimizer with a learning rate of \(10^{-3}\) and weight decay of \(10^{-5}\). For each run, we train the network for a total of 20k training steps, where we perform evaluation for 100 episodes every 2k training steps. We report the highest success rate of the 10 evaluations as the result of the run.
We develop the experimental environments in the PyBullet [12] simulator, based on the BullatArm benchmark [57]. In the Stacking (correct equivariance) and Sorting (incorrect equivariance) experiment, we gather a total of \(400\) episodes of demonstrations, where \(400c\) of them are Stacking and the rest \(400(1-c)\) are Sorting. In evaluation, the task follows the same distribution, where \(100c\) of the evaluation episodes are Stacking and the rest are Sorting. Notice that the agent can distinguish the Stacking and Sorting tasks because the object colors are different for the two tasks (green and blue for stacking, yellow and red for sorting). In the Sorting (extrinsic equivariance) experiment, we also use \(400\) episodes of demonstrations.
Specifically, in Sorting, the cube and the triangle are initially placed randomly, within a distance of \(\pm 1.5cm\) from the horizontal mid-line of the workspace. The objective is to push the triangle at least \(9cm\) toward left and to push the cube at least \(9cm\) toward right, while ensuring that both objects remain within the boundaries of workspace. In Stacking, two blocks are randomly initialized on the floor of the workspace. The goal is to pick up the triangle and place it on top of the cube. The workspace has a size of \(30cm\times 30cm\times 25cm\). | ## Review
### Summary
This paper investigates the behavior of equivariant models under conditions where the symmetries of the model do not align perfectly with those of the underlying data distribution. It extends prior work by Wang et al. (2023) by introducing the concepts of correct, incorrect, and extrinsic pointwise equivariance, allowing for a nuanced understanding of how these relationships impact model performance. The authors derive lower bounds on model error for classification and regression tasks, demonstrating how mismatches in symmetry can influence learning outcomes. Theoretical contributions are supported by a range of experiments that showcase the practical implications of these findings.
### Strengths
- The paper introduces novel theoretical concepts related to equivariance, extending existing frameworks to account for partial symmetries.
- It provides a comprehensive analysis of error bounds in equivariant models, supported by clear examples and experimental results.
- The writing is generally clear and well-structured, making complex ideas accessible.
- The practical relevance of the findings is emphasized, particularly concerning real-world applications of symmetry in machine learning.
### Weaknesses
- Certain definitions, such as those related to pointwise equivariance, could benefit from greater precision to clarify their dependence on specific models and contexts.
- The presentation of results may obscure the novelty of contributions, particularly the implications of pointwise definitions.
- Some experiments lack variety, especially concerning infinite symmetry groups, which could limit the demonstration of the new theoretical bounds.
- The discussion of extrinsic equivariance lacks a principled understanding of when it is beneficial or harmful in practice.
### Questions
- Can you clarify the definition of p and its relation to the underlying population distribution versus training data in the context of the paper?
- What specific scenarios can lead to extrinsic equivariance being harmful, and how can practitioners identify these cases?
- Could you expand on the third experiment and the role of state flipping in the analysis?
### Soundness
**Score:** 3
**Description:** Good: The theoretical results and proofs presented are sound, although some aspects may require further clarification or depth.
### Presentation
**Score:** 3
**Description:** Good: The paper is generally well-written and organized, although there are areas where clarity could be improved, especially regarding definitions.
### Contribution
**Score:** 3
**Description:** Good: The paper contributes significantly to the understanding of equivariant models, although the practical implications of some theoretical frameworks may need further exploration.
### Rating
**Score:** 7
**Description:** Accept: The paper is technically solid with high impact on the area of equivariant learning, but it requires minor improvements in clarity and depth of discussion.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a valuable extension of existing work on equivariant models, addressing important theoretical and practical issues in the field. Despite some weaknesses in clarity and depth of discussion, the overall contribution is significant, and the results are likely to provoke further research and application in the area. The decision to accept reflects the paper's strong technical foundation and its relevance to ongoing discussions in machine learning research.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Constructing Non-isotropic Gaussian Diffusion Model Using Isotropic Gaussian Diffusion Model
for Image Editing
Xi Yu\({}^{*}\), Xiang Gu\({}^{*}\), Haozhi Liu, Jian Sun (\(\boxtimes\))
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
{ericayu,xianggu,liuhazh}@stu.xjtu.edu.cn {jianssun}@xjtu.edu.cn
###### Abstract
Score-based diffusion models (SBDMs) have achieved state-of-the-art results in image generation. In this paper, we propose a Non-isotropic Gaussian Diffusion Model (NGDM) for image editing, which requires editing the source image while preserving the image regions irrelevant to the editing task. We construct NGDM by adding independent Gaussian noises with different variances to different image pixels. Instead of specifically training the NGDM, we rectify the NGDM into an isotropic Gaussian diffusion model with different pixels having different total forward diffusion time. We propose to reverse the diffusion by designing a sampling method that starts at different time for different pixels for denoising to generate images using the pre-trained isotropic Gaussian diffusion model. Experimental results show that NGDM achieves state-of-the-art performance for image editing tasks, considering the trade-off between the fidelity to the source image and alignment with the desired editing target.
## 1 Introduction
+
Footnote †: * Equal contribution.
Score-based diffusion models (SBDMs) [1, 2, 3, 4, 5, 6] demonstrate state-of-the-art performance on image synthesis quality and sample diversity. SBDMs are widely applied to applications such as text-to-image synthesis [7, 8, 9], image editing [10, 11, 12, 13, 14, 15], deblurring [16, 17], etc. SBDMs consist of a forward diffusion stage that adds random noise to data and a reverse stage that generates desired data from noise. The introduced noise in the forward process is commonly isotropic Gaussian noise [1, 6], i.e., independently and identically distributed noise in a normal distribution.
Non-isotropic diffusion model, by adding non-isotropic noises in the forward diffusion process has been investigated in [18, 19, 20, 21, 22]. The blurring diffusion model in [18] adds blur and noise to samples which is a Gaussian diffusion process with non-isotropic noise in the frequency space. [19] employs auxiliary velocity variables to augment the data variables as Hamiltonian dynamics, and performs the diffusion process in this expanded joint space by adding different noises to the auxiliary and the data variables. [20] formulates the diffusion model using non-isotropic noise with a positive semi-definite covariance matrix, and carries out a comparative analysis of the non-isotropic and isotropic diffusion models. These works have shown better performance for data generation.
This paper focuses on image editing tasks that commonly require editing specific object/thing of an image while preserving the remaining parts of the image. For image editing, [11] produces a mask that allows the preservation of context while editing the remaining part. [23, 24, 25] use the learned text embedding for the object that needs to be preserved in the image to ensure the object is unchanged during editing. It is empirically known that the diffusion model can generate more diverse novelcontent if adding noise with larger variance to the image while preserving the image content if adding smaller variance noise [13]. Motivated by this, we employ a non-isotropic diffusion model to add noises with different variances to different image pixels, considering the degree to which the corresponding pixels should be edited/preserved.
Along with this idea, we present a Non-isotropic Gaussian Diffusion Model (NGDM), utilizing an off-the-shelf isotropic diffusion model (e.g., DDPM [1]) for achieving the data sampling in image editing. Specifically, given a source image, the proposed NGDM is with a diffusion process that adds independent Gaussian noises with different variances to different pixels, therefore the added noise is independent and non-isotropic over image pixels. We then employ an off-the-shelf isotropic diffusion model to execute the reverse denoising process for image editing. To achieve this goal, we rectify the NGDM into the isotropic Gaussian diffusion model where each pixel is added with the same amount of noise at each step but different pixels have varying total number of noise accumulation time steps. We subsequently devise a specific sampling method for NGDM that can generate images by using the pre-trained isotopic Gaussian diffusion model.
We demonstrate the effectiveness of NGDM in image editing tasks on five datasets including real and synthetic datasets. In the experiments for cats to dogs editing task on AFHQ dataset, our method achieves the best performance in the metric of FID and SSIM compared with the SoTA SBDMs-based methods. For text-guided image editing, our method achieves better trade-off between CLIPScore and LPIPS value. Furthermore, NGDM allows for flexible trade-off with varying hyper-parameters.
## 2 Background: Score-based Diffusion Models
SBDMs [1; 2; 4; 5; 6] are a family of generative models that learn the data distribution based on the Gaussian process. Two representative models are Denoising Diffusion Probabilistic Model (DDPM) [1] and Score Matching with Langevin Dynamics (SMLD) [5]. We discuss the details based on DDPM for the remainder of the main text for brevity.
Given the input data \(\mathbf{x}(0)\in\mathbb{R}^{D}\), which represents a sample from the data distribution \(p_{data}\), a forward process produces the noisy \(\mathbf{x}(t)\) indexed by a time variable \(t\in[0,1]\) via
\[\mathbf{x}(t)=\sqrt{\bar{\alpha}(t)}\mathbf{x}(0)+\sqrt{1-\bar{\alpha}(t)} \mathbf{z}(t), \tag{1}\]
where \(\mathbf{z}(t)\in\mathcal{N}(\mathbf{0},\mathbf{I})\) for any \(t\) and \(\bar{\alpha}(t)=e^{-\int_{0}^{t}\beta(s)\mathrm{d}s}\) controlling the noise schedule. \(\beta(s)=\bar{\beta}_{\min}+s(\bar{\beta}_{\max}-\bar{\beta}_{\min})\) with \(\bar{\beta}_{\min}=0.1\) and \(\bar{\beta}_{\max}=20\)[1; 6]. This type of diffusion model is dubbed _Isotropic Gaussian Diffusion Model (IGDM)_, since the added Gaussian noise \(\mathbf{z}(t)\) is from the independently and identically distributed normal distribution.
DDPM is in the framework of Stochastic Differential Equations (SDEs) [5] with variance preservation
\[\mathrm{d}\mathbf{x}(t)=-\frac{1}{2}\beta(t)\mathbf{x}(t)\mathrm{d}t+\sqrt{ \beta(t)}\mathrm{d}\mathbf{w}\quad\text{with initial value }\mathbf{x}(0), \tag{2}\]
where \(\mathbf{w}\) is the standard Wiener process. The reverse process denoises the noisy sample \(\mathbf{x}(T)\) starting from \(T\) using a reverse SDE
\[\mathrm{d}\mathbf{x}(t)=\left[-\frac{1}{2}\beta(t)\mathbf{x}(t)-\beta(t)\nabla _{\mathbf{x}}\log p_{t}(\mathbf{x}(t))\right]\mathrm{d}t+\sqrt{\beta(t)} \mathrm{d}\bar{\mathbf{w}}\quad\text{with initial value }\mathbf{x}(T), \tag{3}\]
where \(\bar{\mathbf{w}}\) is a standard Wiener process when time flows backward from \(T\) to \(0\). The score function \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\) is approximated by training a time-dependent model \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}(t),t)\) via score matching [6; 26]. For inference, the time of the differential equation is discretized as \(t\in\{0,\Delta t,2\Delta t,\cdots,T\}\) with \(\Delta t\) representing the sampling time interval. We can choose to utilize the reverse process of DDPM or the reverse process of DDIM for sampling. With \(\beta_{t}=\beta(t)\Delta t\), the iteration rule of DDPM [1] is
\[\mathbf{x}(t)=\frac{1}{\sqrt{1-\bar{\beta}_{t+\Delta t}}}\left(\mathbf{x}(t+ \Delta t)+\beta_{t+\Delta t}\mathbf{s}_{\mathbf{\theta}}\left(\mathbf{x}(t+ \Delta t),t+\Delta t\right)\right)+\sqrt{\bar{\beta}_{t+\Delta t}}\mathbf{z}( t+\Delta t), \tag{4}\]
where \(\mathbf{z}(t+\Delta t)\in\mathcal{N}(\mathbf{0},\mathbf{I})\). With \(\bar{\alpha}_{t}=\prod_{s=0}^{t}(1-\beta_{s})\), the iteration rule of DDIM [27] is
\[\begin{split}&\mathbf{x}(t)=\sqrt{\bar{\alpha}_{t}}(\frac{\mathbf{x} (t+\Delta t)+(1-\bar{\alpha}_{t+\Delta t})\cdot\mathbf{s}_{\mathbf{\theta}}( \mathbf{x}(t+\Delta t),t+\Delta t)}{\sqrt{\bar{\alpha}_{t+\Delta t}}})\\ &-\sqrt{(1-\bar{\alpha}_{t}-\sigma^{2}(t+\Delta t))(1-\bar{\alpha} _{t+\Delta t})}\cdot\mathbf{s}_{\mathbf{\theta}}\left(\mathbf{x}(t+\Delta t), t+\Delta t\right)+\sigma(t+\Delta t)\mathbf{z}(t+\Delta t).\end{split} \tag{5}\]
## 3 Method
In this section, we present a framework for utilizing the pre-trained Isotropic Gaussian Diffusion Model (IGDM) to achieve the sampling process of Non-isotropic Gaussian Diffusion Model (NGDM). In the following, first, we define the NGDM with added independent non-isotropic Gaussian noise. Then, we implement NGDM by IGDM through rectifying the spatially different time of noising and denoising procedures in NGDM and present our proposed data sampling algorithm for the proposed NGDM using the pre-trained IGDM.
### Non-isotropic Gaussian Diffusion Model
In this work, we focus on the Non-isotropic Gaussian Diffusion Model (NGDM) by adding the non-isotropic Gaussian noise in the input data \(\mathbf{y}(0)\in\mathbb{R}^{D}\) and \(\mathbf{y}(0)\sim p_{data}\), and the noises associated with different pixels are independent. The forward SDE of NGDM is
\[\mathrm{d}\mathbf{y}=-\frac{1}{2}\beta(t)\boldsymbol{\Lambda}(\mathcal{I}) \mathbf{y}\mathrm{d}t+\sqrt{\beta(t)\boldsymbol{\Lambda}(\mathcal{I})}\mathrm{ d}\mathbf{w}\quad\text{with initial value }\mathbf{y}(0), \tag{6}\]
where \(\mathcal{I}\in\mathbb{R}^{D}\) is the source data, \(\boldsymbol{\Lambda}(\mathcal{I}):\mathbb{R}^{D}\rightarrow\mathbb{R}^{D\times D}\) is the weighting coefficient matrix, defined as diagonal matrix \(\boldsymbol{\Lambda}(\mathcal{I})=\mathrm{diag}\left(\lambda_{1},\cdots, \lambda_{D}\right)\) with \(0\leq\lambda_{k}\leq 1\) scaling the Gaussian noise level added to the \(k\)-th pixel. Note that the transition kernel \(p_{0t}(\mathbf{y}(t)|\mathbf{y}(0))=\mathcal{N}\left(\mathbf{y}(t)\mid\mathbf{ y}(0)e^{-\frac{1}{2}\int_{0}^{t}\beta(s)\boldsymbol{\Lambda}(\mathcal{I}) \mathrm{d}s},\mathbf{I}-e^{-\int_{0}^{t}\beta(s)\boldsymbol{\Lambda}( \mathcal{I})\mathrm{d}s}\right)\) is an independent Gaussian distribution.
### Rectify the Non-isotropic Gaussian Diffusion Model
With the added independent noise, we next discuss the forward SDE for NGDM in scalar form for each pixel \(k\). Given the initial \(\mathbf{y}^{k}(0)\) denoting the value of pixel \(k\) in \(\mathbf{y}(0)\), the forward SDE of the \(k\)-th pixel can be presented by
\[\mathrm{d}\mathbf{y}^{k}=-\frac{1}{2}\beta(t)\lambda_{k}\mathbf{y}^{k}\mathrm{d }t+\sqrt{\beta(t)\lambda_{k}}\mathrm{d}\mathbf{w}\quad\text{with initial value }\mathbf{y}^{k}(0), \tag{7}\]
where w is a one-dimensional Wiener process.
We present Theorem 1 to illustrate the connection between the NGDM defined in Section 3.1 and the IGDM defined in Section 2 at the pixel level. Beforehand, we introduce the following lemma.
Figure 1: The overview of our NGDM. We rectify the non-isotropic diffusion model into isotropic model with different total time steps (e.g., \(T_{1},\cdots,T_{D}\)) for different pixels. Based on this rectification, the input data \(\mathbf{y}(0)\) is firstly added isotropic noises until \(T\) time steps. Then in the reverse denoising process, to apply the pre-trained isotropic Gaussian diffusion model, we construct the noisy image to be denoised in each time step following Eq. (12). The red arrow in the figure indicates the pixel replacement operation in Eq. (12) when \(T_{k}\leq t\leq T\) for the \(k\)-th pixel at a denoising time step \(t\).
**Lemma 1**.: _Let \(\beta(s)=\bar{\beta}_{\min}+s(\beta_{\max}-\bar{\beta}_{\min})\) with \(\bar{\beta}_{\max}>\bar{\beta}_{\min}>0\). Then, for each \(\lambda_{k}\in[0,1]\) and \(t\in[0,1]\), there exists a unique time \(\tau\in[0,1]\) (denoted by \(\tau=\xi_{k}(t)\)) such that \(\int_{0}^{t}\beta(s)\lambda_{k}\mathrm{d}s=\int_{0}^{\tau}\beta(s)\mathrm{d}s\) and \(\beta(t)\lambda_{k}\mathrm{d}t=\beta(\tau)\mathrm{d}\tau\), with the following form_
\[\xi_{k}(t)=\frac{-\bar{\beta}_{\min}+\sqrt{\bar{\beta}_{\min}^{2}+2(\bar{\beta }_{\max}-\bar{\beta}_{\min})\bar{\beta}_{\min}t\lambda_{k}+(\bar{\beta}_{\max }-\bar{\beta}_{\min})^{2}t^{2}\lambda_{k}}}{\bar{\beta}_{\max}-\bar{\beta}_{ \min}}. \tag{8}\]
The proof is in Appendix A. Based on the above Lemma, we can rectify the NGDM, which adds noise at each pixel with varying variance over the same time span, into an IGDM that adds noise at each pixel with the same noise variance but with different total diffusion time for different pixels. We introduce the following theorem to derive the differential equation as an IGDM.
**Theorem 1**.: _For a pixel indexed by \(k\), \(\lambda_{k}\in[0,1]\), and let \(\tau=\xi_{k}(t)\) with \(\xi_{k}(t)\) represented in Eq. (8). With the same initial value \(\mathbf{y}^{k}(0)\), we have that the transition kernel at time \(t\) induced by Eq. (7) is equal to the transition kernel at time \(\tau\) induced by the following differential equation_
\[\mathrm{d}\mathbf{y}^{k}=-\frac{1}{2}\beta(\tau)\mathbf{y}^{k}\mathrm{d}\tau+ \sqrt{\beta(\tau)}\mathrm{d}\mathrm{w}\quad\text{with initial value }\mathbf{y}^{k}(0). \tag{9}\]
_The total time of noising for Eq. (9) is \(T_{k}\) with \(T_{k}=\xi_{k}(T)\)._
We provide the proof in Appendix A. Inspired by this, we rectify the reverse process in NGDM with different speeds of denoising across pixels to be the reverse process with consistent speed but different total time of denoising. We suggest rectifying the differential equation for the reverse process of pixel \(k\) within the NGDM framework into the following form
\[\mathrm{d}\mathbf{y}^{k}=\left[-\frac{1}{2}\beta(\tau)\mathbf{y}^{k}-\beta( \tau)(\nabla_{\mathbf{y}}\log p_{\tau}(\mathbf{y}))^{k}\right]\mathrm{d}\tau+ \sqrt{\beta(\tau)}\mathrm{d}\bar{\mathrm{w}}\quad\text{with initial value }\mathbf{y}^{k}(T_{k}), \tag{10}\]
where \(\bar{\mathrm{w}}\) is a one-dimensional Wiener process when time flows backward from \(T_{k}\) to \(0\). Theorem 1 establishes the conclusion that the NGDM in Eq. (7) can be rectified to the IGDM in Eq. (9) but with different total diffusion time \(T_{k}\) for different pixel indexed by \(k\), determined based on Eq. (8). This inspires us to utilize the pre-trained IGDM to achieve the data sampling of NGDM for image editing. Subsequently, we present a method that adjusts the total time of noising and denoising for each pixel \(k\) to \(T_{k}\), enabling us to use the pre-trained IGDM for data sampling. The corresponding sampling method is presented in Algorithm 1.
For image editing tasks, we use the source image \(\mathcal{I}\) as \(\mathbf{y}(0)\) and generate noisy data \(\mathbf{y}(T)\) through the forward process. We generate the edited image \(\hat{\mathbf{y}}(0)\) by denoising from \(\mathbf{y}(T)\). Utilizing the forward noising process of IGDM, we add independent noise to each pixel \(k\) to obtain the noisy observation \(\mathbf{x}^{k}(t)\) of discrete time \(t\in\{0,\Delta t,\cdots,T\}\) with \(\Delta t\) representing the sampling time interval
\[\mathbf{x}^{k}(t)=\sqrt{\bar{\alpha}_{t}}\mathcal{I}^{k}+\sqrt{1-\bar{\alpha}_ {t}}\mathbf{z}^{k}(t), \tag{11}\]
where \(\mathbf{z}(t)\in\mathcal{N}(\mathbf{0},\mathbf{I})\). Next, with \(\mathbf{H}(\mathbf{y}(t+\Delta t),t+\Delta t)\) denotes the sampling procedure of DDPM and DDIM given in Eq. (4) and Eq. (5) of Section 2, the data sampling iteration utilizing the IGDM model with initial value \(\mathbf{y}^{k}(T)=\mathbf{x}^{k}(T)\) is defined as
\[\mathbf{y}^{k}(t)=\begin{cases}\mathbf{x}^{k}(t)&T_{k}\leq t<T\\ \mathbf{H}^{k}(\mathbf{y}(t+\Delta t),t+\Delta t)&\text{otherwise.}\end{cases} \tag{12}\]
This implies that we use the noisy observation \(\mathbf{x}^{k}(t)\) to represent \(\mathbf{y}^{k}(t)\) at each step before \(T_{k}\) with \(t\geq T_{k}\), rather than the actual denoised result starting from time step \(T\). Until time \(T_{k}\) we begin the denoising from \(\mathbf{x}^{k}(T_{k})\) for \(k\)-th pixel. In such a way, different pixels have different starting time steps (\(T_{k}\) for \(k\)-th pixel) for image denoising in the data sampling process.
### Sampling Method in NGDM
Based on the above method, we further specify our sampling algorithm by utilizing a pre-trained IGDM. We do not require the training of NGDM, but instead harness the power of a pre-trained IGDM for data sampling. We generate the edited image with the source image \(\mathcal{I}\) as a condition. We first add noise to the source image \(\mathcal{I}\) to \(T\) time steps as the starting point of denoising, and then use the method in Section 3.2 to rectify NGDM into IGDM to denoise the image using the pre-trained IGDM. We show the sampling algorithm of NGDM in the Algorithm 1.
```
0: The source image \(\mathcal{I}\), the weighting matrix \(\mathbf{\Lambda}(\mathcal{I})\), the score function \(\mathbf{s_{\theta}}\), the time schedule \(\{\beta(t)\}_{t=0}^{T}\), the maximal time steps \(T\)
1: Compute \(\mathbf{T}=[\xi_{1}(T),\cdots,\xi_{D}(T)]\) according to Eq. (8)
2:\(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\)
3:\(\mathbf{y}=\sqrt{\bar{\alpha}_{\mathcal{I}}\mathcal{I}}+\sqrt{1-\bar{\alpha}_{ \mathcal{I}}}\mathbf{z}\) # The starting point of denoising
4:for\(t=T\) to \(0\)do
5:\(\mathbf{x}=\sqrt{\bar{\alpha}_{t}\mathcal{I}}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{z}\) # Sample the noisy source image at time step \(t\)
6:\(\mathbf{M}=\mathbb{I}(t\geq\mathbf{T})\)
7:\(\mathbf{y}\leftarrow\mathbf{M}\odot\mathbf{x}+(\mathbf{1}-\mathbf{M})\odot \mathbf{y}\) # \(\mathbf{1}\) is the \(D\)-dimensional all-ones vector
8:\(\mathbf{y}\leftarrow\mathbf{H}(\mathbf{y},t)\)
9:endfor
0: Generated image \(\mathbf{y}\) conditioned on the source image \(\mathcal{I}\)
```
**Algorithm 1** Sampling Method of NGDM
### Design of Input-dependent Weighting Matrix
We construct the weighting matrix \(\mathbf{\Lambda}(\mathcal{I})\) based on the method for designing mask using a text-conditioned diffusion model [9] in DiffEdit [11]. Specifically, given the source image \(\mathcal{I}\), the text description \(R\) of the source image, and the target description \(Q\) that describes the desired target image after editing. Following DiffEdit [11], we add noise to the source image up to \(0.5T\) step and use the texts \(R\) and \(Q\) as the conditions respectively for denoising in the current time step to estimate the score values by using the score network \(\mathbf{s_{\theta}}\). We derive an attention map \(\mathcal{A}(\mathcal{I})\) based on the absolute difference of the estimated scores. We use the above method to compute 10 estimated absolute noise differences by running 10 times with different random seeds, averaging and performing Gaussian smoothing on the averaged map. Finally, we normalize the values of the smoothed map to \([0,1]\) as the final attention map \(\mathcal{A}(\mathcal{I})\). The pixel with larger value in the attention map should be added with the noise with larger variance, to sufficiently edit the content of the pixel. The pixel with smaller value in the attention map should be added with noise having smaller variance to preserve the content of pixel. We transform the attention map \(\mathcal{A}(\mathcal{I})\) into the weighting matrix \(\mathbf{\Lambda}(\mathcal{I})\) through a Sigmoid function by \(\mathbf{\Lambda}(\mathcal{I})=\frac{1}{1+\exp(-(a\mathcal{A}(\mathcal{I})-b))}\), where \(a\) and \(b\) are the hyper-parameters for the transformation. We discuss the effect of hyper-parameters \(a\) and \(b\) on the generation of images in Section 4.
## 4 Experiment
In this section, we apply the proposed NGDM to image editing tasks, presenting both qualitative and quantitative results.
### Experimental Setup
Evaluation tasks.We perform experiments on five datasets. First, we experiment on AFHQ dataset [28] to edit cats into dogs. The source domain contains 500 images of cats. These images are resized to 256 x 256 resolution and subsequently edited into dogs. Second, we experiment on ImageNet dataset [29] to edit images from one class into another class based on text prompts, following the protocol of [30]. Third, we experiment on synthetic Imagen [31] dataset following DiffEdit [11]. We collect images generated by Imagen [31], along with their corresponding text prompts, and edit these images by altering portions of the text. Fourth, we experiment on COCO-S dataset and construct target prompts for editing from annotations provided by BISON [32]. We collect 1000 images and target prompts from the COCO [33] dataset to build the COCO-S dataset. We additionally consider DreamBooth dataset provided by [24], which contains 30 objects. Each object has 25 prompts and 4-6 images for editing. We edit each image using the provided 25 prompts, resulting in a total of 3950 edited images. More details are available in Appendix B.1.
Implementation details.We conduct experiments utilizing two types of pre-trained diffusion models. For the cats to dogs editing task on AFHQ dataset, we utilize the public pre-trained score-based diffusion model with the official code provided in ILVR [10]. This model operates directly in the image space. We set the denoising step \(N\) to 1000. We implement the remaining task based on the text-to-image latent diffusion model, i.e., Stable Diffusion [9]. This model was pre-trained on 512\(\times\) 512 images of LAION dataset [34] with a latent dimension of 64 x 64. We use 50 steps in DDIM sampling with a fixed noise schedule. For hyper-parameters \(a\) and \(b\), we set them to \(10.0\) and \(5.0\) respectively to obtain all qualitative comparison results presented in this paper. We conduct additional analysis to investigate the effect of different values of hyper-parameters in the experimental results. Unless otherwise stated, we use the default parameters of the diffusion model during inference.
### Results and Analysis
Results on AFHQ dataset.We report the widely-used Frechet Inception Distance (FID) [35] for quantifying realism and SSIM [36] for quantifying faithfulness. The quantitative comparisons and qualitative results are presented in Table 1 and Figure 2, respectively. The images generated by our method better preserves the contextual structure (_e.g._, pose) of cats while yielding realistic dog images. For example, Figure 2 shows that for images in columns 2-5 with complex backgrounds, our method can accurately keep the backgrounds unchanged while editing cats into dogs. Other methods either blur the backgrounds or fail to maintain the source image backgrounds correctly. This observation is further supported by the quantitative results in Table 1, where our method achieves the best results of FID and SSIM among the compared SBDM-based methods.
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & FID \(\downarrow\) & SSIM \(\uparrow\) \\ \hline StarGAN v2 [28] & 54.88 \(\pm\) 1.01 & 0.27 \(\pm\) 0.003 \\ CUT [37] & 76.21 & **0.601** \\ \hline ILVR [10] & 74.37 \(\pm\) 1.55 & 0.363 \(\pm\) 0.001 \\ SDEdit [13] & 74.17 \(\pm\) 1.01 & 0.423 \(\pm\) 0.001 \\ EGSDE [14] & 65.82 \(\pm\) 0.77 & 0.415 \(\pm\) 0.001 \\ SDDM [38] & 62.29 \(\pm\) 0.63 & 0.422 \(\pm\) 0.001 \\
**Ours** & **61.39 \(\pm\) 0.27** & 0.478 \(\pm\) 0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison on AFHQ dataset. All results are reported by repeating experiments 5 times.
Figure 2: Qualitative comparison on AFHQ dataset.
Figure 4: Qualitative comparison on Imagen dataset.
Figure 3: Qualitative comparison on ImageNet dataset.
Results on ImageNet dataset.Figure 3 shows that our method performs well on images even with complex backgrounds. For instance, when editing "custard apple" into "lemon", our generated image successfully preserves the intricate details of the tree branches. DiffEdit [11] generates unnatural images with artifacts. DiffuseIT [39] generates unnatural images, while DDIB [40] can hardly maintain the content of the source image. Figure 5(a) shows that our method outperforms other methods by achieving a better trade-off between CLIPScore and LPIPS value.
Results on Imagen dataset.We present our qualitative and quantitative results in Figures 4 and 5(b), respectively. Figure 4 shows visual results including background replacement and object properties modification. We can see that our method can generate images with better visual quality compared with the other methods. For instance, our results can successfully preserve the foreground while replacing the "beach" in the background with "mountain", or vice versa.
Results on COCO-S dataset.From Figure 5(c), when \(a=10.0\) and \(b=6.0\), our method achieves the highest CLIPScore 31.45, and the smallest LPIPS value 23.43 compared with all other methods. InstructPix2Pix [41] is able to obtain CLIPScore comparable to that of ours, but the LPIPS value of InstructPix2Pix [41] is worse. We provide qualitative comparison in Appendix B.2.
Results on DreamBooth dataset.Figure 5(d) shows the quantitative results on DreamBooth dataset. As can be observed, our method outperforms compared methods by achieving a better trade-off between CLIPScore and LPIPS value. DDS [44] achieves the best CLIPScore of 20.17 with a worse LPIPS value of 33.26. Our method obtains CLIPScore comparable to that of DiffEdit with better LPIPS value. We provide qualitative comparison in Appendix B.2.
\begin{table}
\begin{tabular}{c c c c} \hline \hline ILVR [10] & SDEdit [13] & EGSDE [14] & Ours \\ \hline
11.5\% & 10.5\% & 12.5\% & **65.5\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: User study results on AFHQ dataset.
Figure 5: Quantitative comparison on ImageNet (a), Imagen (b), COCO-S (c), and DreamBooth (d) datasets. We report LPIPS distance [42] measuring image fidelity and CLIPScore [43] for text alignment. A higher CLIPScore denotes better alignment with the text, while a lower LPIPS value suggests higher fidelity to the input image. We report our results respectively using the default parameters \(a=10.0,b=5.0\), and the parameters \(a=10.0\), \(b=6.0\).
User study.We conduct two user studies on AFHQ dataset and the remaining datasets including ImageNet, Imagen, COCO-S, and DreamBooth. For each user study, 40 participants are provided with 30 randomly selected source images and the corresponding generated results of different methods. The generated images of ours and the other methods are displayed randomly in order. Participants are suggested to select the image that better applies the requested edit while preserving most of the original image details. The percentages of votes for our method and the other method on different datasets are shown in Tables 2 and 3 respectively, demonstrating that the participants exhibit a strong preference towards our method.
Effect of hyper-parameters \(a\) and \(b\).As mentioned in Section 3.4, we transform the attention map \(\mathcal{A}(\mathcal{I})\) into weighting matrix \(\mathbf{\Gamma}(\mathcal{I})\) with hyper-parameters \(a\) and \(b\). We control the initial time step of denoising each pixel by adjusting the hyperparameters \(a\) and \(b\). In Table 4, we analyze the impact of hyper-parameters \(a\) and \(b\) on AFHQ dataset. The upper part of the table reports the results with varying \(a\) and fixed \(b=5.0\), and the lower part of the table shows results with varying \(b\) and fixed \(a=10.0\). With \(b\) held constant, a larger \(a\) results in higher level of noise to the image, leading to more accurate editing and obtaining smaller FID. A smaller \(a\) results in content preserving the original image, obtaining larger SSIM value. The behavior of \(b\) is opposite to \(a\).
Comparison with hard weighting matrix.We compare with the strategy used to produce \(\mathbf{\Lambda}(\mathcal{I})\) by \(\mathbf{\Lambda}(\mathcal{I})=\mathbb{I}(\mathcal{A}(\mathcal{I})\geq\eta)\), where \(\eta\) is a threshold chosen in \(\{0.1,0.3,0.5,0.7,0.9\}\). The generated weighting matrix by this strategy is a "hard" weighting matrix that only takes 0 or 1 entry. Our Algorithm 1 gradually increases the denoising region with the increase of the denoising steps in the diffusion. Each pixel begins to be denoised with the denoising time step according to its relevance to the editing task. This helps to avoid artifacts caused by a hard mask, as shown in Figure 7.
Failure examplesWe show several failure examples in Figure 6 that were unsuccessfully edited. This could be because the computed weighting matrix is not accurate for determining the scale of noise to be added to each pixel in the source image.
## 5 Related Work
Image editing aims to modify a real image to generate the desired image, resulting in tasks including image translation [46], style transfer [47], inpainting [48], object modification [24], etc. We focus on image editing tasks that commonly require editing specific object/thing of an image while preserving the remaining parts of the image.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(a\) (\(b=5.0\)) & 6.0 & 8.0 & 12.0 & 14.0 \\ \hline FID \(\downarrow\) & 87.01 & 72.43 & 54.10 & 46.75 \\ SSIM \(\uparrow\) & 0.556 & 0.513 & 0.449 & 0.425 \\ \hline \hline \(b\) (\(a=10.0\)) & 3.0 & 4.0 & 6.0 & 7.0 \\ \hline FID \(\downarrow\) & 42.34 & 50.35 & 74.58 & 88.49 \\ SSIM \(\uparrow\) & 0.373 & 0.423 & 0.539 & 0.601 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The performance on AFHQ dataset with varying values of \(a\) or \(b\) while respectively fixing \(b=5.0\) or \(a=10.0\).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline SDEdit [13] & DiffEdit [11] & DDS [44] & EDICT [45] & InstructPix2Pix [41] & Ours \\ \hline
4.5\% & 10.0\% & 3.0\% & 4.5\% & 6.0\% & **72.0\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: User study results on ImageNet, Imagen, COCO-S, and DreamBooth datasets.
Figure 6: Failure examples. We show cases in which our method fails to generate high-quality edited results.
Diffusion models showcase remarkable results for image editing. SDEdit [13] employs the source noisy data as the starting point in the denoising process, and it explores the trade-off between realism and faithfulness by controlling the initialization of the denoising time. EGSDE [14] utilizes the energy functions trained on both source and target domains to guide the inference process. EGSDE uses the noisy data at \(0.5T\) time step as the starting point of denoising, where \(T\) denotes the total number of denoising time steps in DDPM. Compared with them, our derived sampling method incorporates an adaptive selection of the initial denoising time for each pixel during the denoising process. Methods of [12, 23, 24, 49] utilize text-conditioned diffusion models to fine-tune the text embedding of the object that needs to be preserved during editing using a few or a single image. DDS [44] utilizes delta scoring to provide effective gradients for editing. DiffEdit [11] uses the DDIM inversion method to obtain noisy data and automatically generates a binary mask to guide the denoising process. RePaint [50] tackles the inpainting task by taking the unmasked image region from the input image and the masked region from the DDPM generated image, using the hard 0-1 mask. Differently, we perform image editing by adding independent noise with different variances to different pixels, depending on a weighting coefficient matrix that contains soft weights. Pixels with less added noise will better preserve the content of the source image.
## 6 Conclusion, Limitations and Societal Impact
In this paper, we propose a Non-isotropic Gaussian Diffusion Model (NGDM) for image editing tasks. The NGDM adds independent Gaussian noises with varying variances to different image pixels. To avoid training score model for NGDM, we rectify NGDM into an isotropic Gaussian diffusion model and design a data sampling method for NGDM by using a pre-trained isotropic Gaussian diffusion model to generate images. We demonstrate that NGDM better trade-off the balance of realism and faithfulness than the state-of-the-art methods for image editing tasks.
A limitation of our method could be that incorrect weighting matrix may lead to the failure of the method. Moreover, our method relies on a pre-trained diffusion model. Artifacts are produced when the edit involves generation failure cases of the underlying model. In future work, we will design a better way to calculate the weighting matrix more precisely and efficiently. And we will explore the application of our method in downstream tasks, such as domain adaptation.
In our experiments, all the considered datasets are open-sourced and publicly available. Our work aims to manipulate images with minimum effort. However, this method might be misused by faking images. We will take care to exploit the method to avoid the potential negative social impact and we will help research in identifying and preventing malicious editing.
## 7 Acknowledgement
This work was supported by National Key R&D Program 2021YFA1003002, and NSFC (12125104, U20B2075,11971373).
Figure 7: Edited images and heatmaps with soft and hard weighting matrix. The images in the second column represent the results generated by our method and the heatmap below the image depicts the weighting matrix \(\boldsymbol{\Lambda}(\mathcal{I})\) defined in Section 3.1 in the paper. The images in columns 3-7 represent the results generated using the hard weighting matrix with threshold value in \(\{0.1,0.3,0.5,0.7,0.9\}\).The heatmaps below the images represent the binary hard weighting matrix.
## References
* [1] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In _NeurIPS_, 2020.
* [2] Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. In _NeurIPS_, 2021.
* [3] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In _ICML_, 2021.
* [4] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _ICML_, 2015.
* [5] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In _NeurIPS_, 2019.
* [6] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In _ICLR_, 2021.
* [7] Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In _CVPR_, 2022.
* [8] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_, 2022.
* [9] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _CVPR_, 2022.
* [10] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. In _ICCV_, 2021.
* [11] Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. Diffedit: Diffusion-based semantic image editing with mask guidance. In _ICLR_, 2023.
* [12] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. In _CVPR_, 2023.
* [13] Chenlin Meng, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Image synthesis and editing with stochastic differential equations. In _ICLR_, 2022.
* [14] Min Zhao, Fan Bao, Chongxuan Li, and Jun Zhu. Egsde: Unpaired image-to-image translation via energy-guided stochastic differential equations. In _NeurIPS_, 2022.
* [15] Xiang Gu, Liwei Yang, Jian Sun, and Zongben Xu. Optimal transport-guided conditional score-based diffusion model. In _NeurIPS_, 2023.
* [16] Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In _NeurIPS_, 2022.
* [17] Jay Whang, Mauricio Delbracio, Hossein Talebi, Chitwan Saharia, Alexandros G Dimakis, and Peyman Milanfar. Deblurring via stochastic refinement. In _CVPR_, 2022.
* [18] Emiel Hoogeboom and Tim Salimans. Blurring diffusion models. In _ICLR_, 2023.
* [19] Tim Dockhorn, Arash Vahdat, and Karsten Kreis. Score-based generative modeling with critically-damped langevin diffusion. In _ICLR_, 2022.
* [20] Vikram Voleti, Christopher Pal, and Adam Oberman. Score-based denoising diffusion with non-isotropic gaussian noise models. In _NeurIPS_, 2022.
* [21] Arpit Bansal, Eitan Borgnia, Hong-Min Chu, Jie S Li, Hamid Kazemi, Furong Huang, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Cold diffusion: Inverting arbitrary image transforms without noise. _arXiv preprint arXiv:2208.09392_, 2022.
* [22] Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alexandros G Dimakis, and Peyman Milanfar. Soft diffusion: Score matching for general corruptions. _arXiv preprint arXiv:2209.05442_, 2022.
* [23] Zhixing Zhang, Ligong Han, Arnab Ghosh, Dimitris Metaxas, and Jian Ren. Sine: Single image editing with text-to-image diffusion models. In _CVPR_, 2023.
* [24] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In _CVPR_.
* [25] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. _arXiv preprint arXiv:2211.09794_, 2022.
* [26] Pascal Vincent. A connection between score matching and denoising autoencoders. _Neural Comput_, 23(7):1661-1674, 2011.
* [27] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In _ICLR_, 2021.
* [28] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In _CVPR_, 2020.
* [29] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _CVPR_.
* [30] Guillaume Couairon, Asya Grechka, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. Flexit: Towards flexible semantic image translation. In _CVPR_, 2022.
* [31] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. In _NeurIPS_, 2022.
* [32] Hexiang Hu, Ishan Misra, and Laurens Van Der Maaten. Evaluating text-to-image matching using binary image selection (bison). In _ICCV Workshops_, 2019.
* [33] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _ECCV_, 2014.
* [34] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. _arXiv preprint arXiv:2111.02114_, 2021.
* [35] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _NeurIPS_, 2017.
* [36] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE Trans. IP_, 13(4):600-612, 2004.
* [37] Taesung Park, Alexei A Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for unpaired image-to-image translation. In _ECCV_, 2020.
* [38] Shikun Sun, Longhui Wei, Junliang Xing, Jia Jia, and Qi Tian. Sddm: Score-decomposed diffusion models on manifolds for unpaired image-to-image translation. In _ICML_, 2023.
* [39] Gihyun Kwon and Jong Chul Ye. Diffusion-based image translation using disentangled style and content representation. In _ICLR_, 2023.
* [40] Xuan Su, Jiaming Song, Chenlin Meng, and Stefano Ermon. Dual diffusion implicit bridges for image-to-image translation. In _ICLR_, 2022.
* [41] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In _CVPR_, 2023.
* [42] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _CVPR_.
* [43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _ICML_, 2021.
* [44] Amir Hertz, Kfir Aberman, and Daniel Cohen-Or. Delta denoising score. In _ICCV_, 2023.
* [45] Bram Wallace, Akash Gokul, and Nikhil Naik. Edict: Exact diffusion inversion via coupled transformations. In _CVPR_, 2023.
* [46] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In _ICCV_, 2017.
* [47] Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, and Mingli Song. Neural style transfer: A review. _IEEE Trans Vis Comput Graph_, 26(11):3365-3385, 2019.
* [48] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In _CVPR_, 2018.
* [49] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. In _ICLR_, 2023.
* [50] Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In _CVPR_, 2022. | ## Review
### Summary
The paper presents a novel Non-isotropic Gaussian Diffusion Model (NGDM) designed for image-to-image translation and image editing tasks. The NGDM introduces a new framework by adding independent Gaussian noises with varying variances to different pixels, allowing for controlled image generation. This approach is validated through experiments showing that NGDM outperforms state-of-the-art score-based diffusion models in several tasks, including text-guided translation and editing. The authors assert that their model provides flexibility in image processing by allowing different levels of detail to be applied to various regions of an image, thus enhancing the editing process. Overall, the results suggest that NGDM is a promising advancement in the field of image manipulation.
### Strengths
- The NGDM framework is a novel approach that deviates from isotropic diffusion by utilizing non-isotropic Gaussian noise, enabling more controlled image editing and translation.
- Experimental results indicate that the proposed method outperforms state-of-the-art models in terms of quantitative metrics like FID and SSIM across various tasks.
- The paper is well-structured, with clear explanations of complex concepts, making it accessible to a wide audience.
- The flexibility of the model allows for integration with existing isotropic diffusion models without requiring retraining.
### Weaknesses
- The paper lacks extensive comparisons with recent related work, which would help contextualize the contribution and novelty of NGDM.
- Experimental validation is limited to a few datasets; broader testing could provide a more comprehensive evaluation.
- The interpretability of the model is not adequately addressed, leaving questions about how noise variances are assigned to pixels.
- The title does not explicitly mention image editing, which could mislead readers regarding the paper's scope.
- Visual results do not consistently demonstrate a significant perceptual advantage over existing methods.
### Questions
- How is the variance of Gaussian noise determined for each pixel? Is there a specific algorithm or strategy employed?
- Could the authors provide a more detailed comparison with existing state-of-the-art models, including both qualitative and quantitative metrics?
- What are the computational implications of the proposed method compared to traditional isotropic diffusion models?
- What ethical considerations have been made regarding the potential misuse of the model's capabilities in image manipulation?
- Can the authors elaborate on the limitations and possible scenarios where the NGDM might fail or perform suboptimally?
### Soundness
**Score:** 3
**Description:** Good - The methodology is solid and shows promise, but some aspects require further clarification and validation.
### Presentation
**Score:** 3
**Description:** Good - The paper is generally well-structured and clear, though additional visual aids could enhance understanding.
### Contribution
**Score:** 3
**Description:** Good - The proposed model presents a significant advancement in the field, though further validation and exploration of broader applications would strengthen its impact.
### Rating
**Score:** 5
**Description:** Borderline accept: The paper is technically solid, with reasons to accept outweighing the reasons to reject, though limited evaluation and lack of broader context are noted.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper introduces an innovative approach to image editing and translation through the Non-isotropic Gaussian Diffusion Model. While there are areas for improvement, such as expanding experimental validation and providing clearer comparisons with existing models, the originality, soundness, and potential impact of the work warrant acceptance. The authors have adequately addressed most reviewer concerns, making it a valuable contribution to the field.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
Unifying Predictions of Deterministic and Stochastic Physics in Mesh-reduced Space with Sequential Flow Generative Model
Luning Sun
Lawrence Livermore National Lab
Livermore, CA 94550
[email protected]
Equal contribution.
Xu Han
Tufts University
Medford, MA 02155
[email protected]
Han Gao
Harvard University
Cambridge, MA 02138
[email protected]
&Jian-Xun Wang
University of Notre Dame
Notre Dame, IN 46556
[email protected]
&Li-Ping Liu
Tufts University
Medford, MA 02155
[email protected]
###### Abstract
Accurate prediction of dynamical systems in unstructured meshes has recently shown successes in scientific simulations. Many dynamical systems have a nonnegligible level of stochasticity introduced by various factors (e.g. chaoticity), so there is a need for a unified framework that captures both deterministic and stochastic components in the rollouts of these systems. Inspired by regeneration learning, we propose a new model that combines generative and sequential networks to model dynamical systems. Specifically, we use an autoencoder to learn compact representations of full-space physical variables in a low-dimensional space. We then integrate a transformer with a conditional normalizing flow model to model the temporal sequence of latent representations. We evaluate the new model in both deterministic and stochastic systems. The model outperforms several competitive baseline models and makes more accurate predictions of deterministic systems. Its own prediction error is also reflected in its uncertainty estimations. When predicting stochastic systems, the proposed model generates high-quality rollout samples. The mean and variance of these samples well match the statistics of samples computed from expensive numerical simulations.
## 1 Introduction
Accurate prediction of long-term dynamics of physical systems is of great interest in many science and engineering fields. The classical simulation of a system heavily relies on a spatial/temporal discretization of the space and the numerical solution to a finite-dimensional algebraic system derived from the governing equation [1]. However, due to the multi-scale, stochastic nature of the complex physics and the complexity of the geometry, the simulation of large-scale real-time applications is extremely expensive in computation [2]. In recent years, deep learning models have been applied to predict rollouts of large and complex physical systems thanks to their flexibility and scalability [3]. Moreover, they can handle uncertainties and non-linearity in physical problems [4, 5, 6, 7, 8] more effectively than traditional methods.
Previous work has studied two types of complex dynamical systems: deterministic ones and stochastic ones. A deterministic system is often considered under a perfectly controlled experiment with exactly known PDE terms and initial conditions (IC) or boundary conditions (BC) [9; 10; 11; 12; 13; 14]. This type of system can be modeled by autoregressive predicting models. A stochastic system such as quantum mechanics [15] and statistical physics [16] has a stochastic rollouts. When there are stochastic forcing terms or IC/BC terms in the governing equations, researchers have developed various models for predicting the stochastic state variables, such as turbulence velocity and stock prices [17; 18; 19; 20; 21]. Given that there is not a clear boundary between the two types of systems, it is highly desirable that a unified model can model either a deterministic or a stochastic system automatically. However, such models so far are limited to classical numerical models such as OpenFOAM [22]. There is an urgent need to develop a unified deep-learning model that can model both types of systems.
In this work, we propose a unified framework based on deep generative models to predict deterministic and stochastic systems. The new model is based on graph-structured state representations [12], which can handle irregular spatial areas commonly seen in dynamical systems. It uses an autoencoder to encode a state representation to a low-dimensional latent vector. Accurate encoding and decoding is critical for recovering the state of the dynamical system. This work makes several innovations to enhance the autoencoder's ability to preserve the information in system states. We provide a new approach to encoding the graph-structured representation with a vector with a fixed length. We also get some inspiration from regeneration learning [23] and train our autoencoder with self-supervised learning.
To describe stochasticity in the system, we use a sequential probabilistic model for the latent representations. We integrate a transformer and a normalizing flow to construct a step-wise predictive neural network in the latent space. When there is stochasticity, the model will learn the conditional distribution of the next latent state; when there is no stochasticity, it can place probabilities to correct deterministic predictions and still can minimize the predictive error. By including the ability to simulate both deterministic and stochastic systems in a uniform framework, it reduces the effort of developing separate models for different problems.
We evaluate our proposed framework in an extensive empirical study. The results indicate the proposed model outperforms the SOTA baselines on deterministic datasets regarding accuracy. More importantly, for the first time, we introduce several alternative evaluation metrics other than normalized RMSE for stochastic fluid dynamics, which help improve comparisons between different methods in this domain. The proposed framework can produce high-quality samples for stochastic systems in the mesh space.
## 2 Background
### Problem definition
Let's consider a general partial differential equation (PDE) defined on the \(d\)-dimensional space and one-dimensional time domain,
\[\frac{\partial\mathbf{u}}{\partial t}=j(\mathbf{u},\mathbf{\mu},\mathbf{\iota})\quad\text{in }\Omega\times[0,T_{\rm end}] \tag{1}\]
where \(\Omega\subset\mathbb{R}^{d}\) is the spatial domain, \(T_{\rm end}\) is the endpoint of time, \(\mathbf{\iota}:\partial\Omega\times[0,T]\) is the random parameter over time for stochastic systems (e.g., boundary conditions), and \(\mathbf{u}:\Omega\times[0,T]\) is the primary solution variable (e.g., velocity and pressure of fluid flow). Here \(\mathbf{\mu}:\mathcal{D}\) is the global physical system parameter and is time-invariant (e.g., Re number) for a given rollout. \(j:\Omega\times[0,T]\times\mathcal{D}\rightarrow\Omega\times[0,T]\) is an aggregation of the spatial terms of conservation laws (e.g., source and flux). To numerically solve the conservation law in Equation 1, let \(\mathcal{C}_{h}\) be a mesh of \(\Omega\), that is, \(\mathcal{C}_{h}=\{C_{i}\in\Omega:i=1,\ldots,N\}\) is a collection of non-overlapping cells that cover \(\Omega\). We further use \(\mathbf{u}_{i,t}\) to denote the evaluation of \(\mathbf{u}\) at the cell center of \(C_{i}\) at time step \(t\).
Using the mesh described above, we apply a finite volume discretization to yield the parametrized, nonlinear dynamical system. Here, the dynamical system can be computationally intensive because a fine-level mesh will lead to discretization with a large degree of freedom. For stability issues, the numerical time step also needs to be very small. Therefore, numerical simulations with traditional methods require extremely expensive computation and storage.
In this paper, we are interested in two different scenarios: _deterministic_ dynamics with invariant parameter \(\mathbf{\iota}\) over time (e.g., fixed boundary condition ), and _stochastic_ dynamics with random parameter \(\ell\) over time (e.g., perturbations in boundary conditions of turbulent flow). Both cases will also include the global physical parameters \(\mathbf{\mu}\). The scope of our paper is to build a unified data-driven surrogate model for generating/predicting parametric PDE solutions.
### Normalizing flow models
A normalizing flow model constructs a flexible probabilistic distribution by applying a learnable bijective mapping to a simple random variable (e.g. Gaussian distributed). Suppose the mapping is \(\mathbf{z}=f(\mathbf{x})\) with \(\mathbf{x}\in\mathbb{R}^{d}\) being the simple input variable, then we have the probability \(p(\mathbf{z})\) of \(\mathbf{z}\) as follows:
\[p(\mathbf{z})=p(\mathbf{x})\bigg{|}\text{det}\left(\frac{\partial(f(\mathbf{x}))}{\partial( \mathbf{x})}\right)\bigg{|}^{-1}. \tag{2}\]
Here \(\frac{\partial(f(\mathbf{x}))}{\partial(\mathbf{x})}\) is the Jacobian of \(f\) at \(\mathbf{x}\). The function \(f\) is usually a neural network with a layered structure, with each layer being a bijective mapping. Then the determinant of \(\frac{\partial(f(\mathbf{x}))}{\partial(\mathbf{x})}\) is the product of determinants of these layers' Jacobian matrices. With a special design of layer structures, their determinants can be efficiently computed. For example, a RealNVP model [23] is constructed by stacking several _coupling_ layers, each of which has a lower triangular Jacobian matrix. With \(\mathbf{h}\in\mathbb{R}^{d}\) as the input, a coupling layer \(f_{\ell}\) runs the following calculation:
\[f_{\ell}(\mathbf{h})=\mathbf{concat}(\mathbf{h}[1:d^{\prime}],\mathbf{h}[(d^{\prime}+1):d] \odot\exp(s(\mathbf{h}[1:d^{\prime}]))+t(\mathbf{h}[1:d^{\prime}])) \tag{3}\]
Here \(\mathbf{concat}\) concatenates its arguments as one vector, and \(d^{\prime}\) is usually about one-half of \(d\). In this calculation, the first half of the vector is directly copied to the output. The second half goes through an entry-wise linear transformation: the operation \(\odot\) is the Hadamard product, the neural network \(s\) provides scaling coefficients, and the neural network \(t\) provides biases. With \(K\) such coupling layers, we have a transformation that defines a flexible distribution \(p(\mathbf{z})\).
\[p(\mathbf{z})=p(\mathbf{x})\cdot\prod_{\ell=1}^{K}|\text{det}(\partial f_{\ell}(\mathbf{h} ^{\ell-1})/\partial\mathbf{h}^{\ell-1})|^{-1} \tag{4}\]
Here \(\mathbf{h}^{0}=\mathbf{x}\), and \(\mathbf{h}^{\ell}=f_{\ell}(\mathbf{h}_{\ell-1})\). At the same time, \(p(\mathbf{z})\) also has an efficient sampling procedure given by \(f\).
## 3 Methodology
Our new predicting model has three components: an encoder that compresses a graph representation of the state into a fixed-length vector, a sequential model that predicts next-step representations in the latent space, and a decoder that decodes spatial states from graph representations. The encoder is an improved version of the GMR-GMUS encoder [11]. The sequential model is a conditional flow model. The encoder and decoder are trained via self-supervised learning as in regeneration learning.
### Graph representation learning
Following GMR-GMUS, we use a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) to encode a snapshot of a deterministic or stochastic dynamical system at time step \(t\). Here each node \(i\in\mathcal{V}\) corresponds to the mesh cell \(C_{i}\), and each edge \((i,j)\in E\) represents a neighboring relationship between two cells. At each step \(t\), the solution \(\mathbf{u}_{i,t}\) at cell \(i\) becomes the feature vector of the node \(i\in\mathcal{V}\) at time \(t\). Because the mesh is pre-defined and fixed, the graph representation \((G,(\mathbf{u}_{i,t},i\in V))\) preserves the full information of the mesh \(\mathcal{C}_{h}\) at time \(t\). Here we use \(\mathbf{Y}_{t}=(\mathbf{u}_{i,t}:i\in V)\) to denote a snapshot at time \(t\).
The key task for the encoder is to encode the graph into a low-dimensional vector \(\mathbf{z}_{t}\). For this task, we use an improved version of the encoder of GMR-GMUS. The GMR-GMUS encoder runs a graph neural network over the graph representation to learn node representations. Then it takes node vectors of a selection of "pivotal" nodes and concatenates them to get \(\mathbf{z}_{t}\). In our new encoder, we select a set of locations in the spatial space instead of graph nodes to aggregate spatial information: each selected location aggregates vectors of nearby nodes to get its representation. It decouples the graph representation and the aggregation operation so that a select location can encode nodes within an arbitrary distance. To improve the training stability, we also improve the graph neural network'sarchitecture by adding residual connections between its message-passing layers. We call this new encoder Position-based Graph Mesh Reducer (PbGMR). These two modifications clearly improve the encoder's ability to preserve the state information, which will be empirically shown in the experiment section.
The architecture of PbGMR is specified as follows. For notational convenience, we omit the time from the subscript. For each graph node \(i\in V\), we first extract node and edge features as follows.
\[\mathbf{v}_{i}^{0}=\mathrm{mlp}_{v}(\mathbf{u}_{i}),\quad\mathbf{e}_{ij}^{0}=\mathrm{mlp}_ {e}(\mathrm{pos}(i)-\mathrm{pos}(j)). \tag{5}\]
Here \(\mathrm{pos}(i)\) is the spatial location of the cell center of \(C_{i}\). After that, we apply \(L\) message-passing layers.
\[\mathbf{e}_{ij}^{\ell} =\mathbf{e}_{ij}^{\ell-1}+\mathrm{layernorm}(\mathrm{mlp}_{\ell}^{e} \left(\mathbf{e}_{ij}^{\ell-1},\mathbf{v}_{i}^{\ell-1},\mathbf{v}_{j}^{\ell-1})\right) \tag{6}\] \[\mathbf{v}_{i}^{\ell} =\mathbf{v}_{i}^{\ell-1}+\mathrm{layernorm}\left(\mathrm{mlp}_{\ell} ^{v}\left(\mathbf{v}_{i}^{\ell-1},\sum_{j\in\mathcal{N}_{i}}\mathbf{e}_{ij}^{\ell-1} \right)\right),\quad\ell=1,\ldots,L. \tag{7}\]
Here \(\mathcal{N}_{i}\) denotes all neighbors of node \(i\), \(\mathrm{mlp}_{\ell}^{e}\) and \(\mathrm{mlp}_{\ell}^{v}\) are two separate multi-layer perceptrons, and \(\mathrm{layernorm}\) is the layer-normalization operation. When a perceptron accepts multiple arguments, its arguments are first concatenated into a single vector. Each layer here is similar to a GraphNet block [24] but with slight differences. The prominent ones are residual connections and layer normalization in the update of node and edge representations: they help to stabilize the training procedure [25, 26, 27].
After we have learned node representations, we need to aggregate them into a single vector to represent the entire state. We randomly select a small set \(\mathcal{S}\) of centers in the spatial area. For each center \(c\in\mathcal{S}\), we select \(k\) nearest mesh cells \(\mathrm{knn}(c)\) based on spatial distance and then compute the position representation \(\mathbf{h}_{c}\) by interpolation [28]:
\[\mathbf{h}_{c}=\sum_{j\in\mathrm{knn}(c)}\frac{w_{cj}\mathbf{v}_{j}^{L}}{\sum_{j\in \mathrm{knn}(c)}w_{cj}},\quad w_{cj}=\frac{1}{d(c,j)^{2}},\quad,c\in\mathcal{S} \tag{8}\]
Here \(d(c,j)\) is the spatial distance between cell \(C_{j}\) and the center \(c\). Given that the calculation is scaled by the sum of \(w_{cj}\)-s, the unit of the spatial distance does not change the calculation here. The number \(k\) of neighbors is a hyper-parameter, and we fix it to 10. Finally, we concatenate all representations of centers into a single vector \(\mathbf{z}=\mathrm{concat}(\mathbf{h}_{c}:c\in\mathcal{S})\) as the latent for the entire graph. Note that the centers in \(\mathcal{S}\) are fixed for a problem.
As a comparison, the encoder in GMR-GMUS uses graph nodes as centers and only considers connected neighbors of a center. Then the number of neighbors in the interpolation operation is
Figure 1: The diagram of the proposed model, which first compresses the whole graph into the latent representation \(\mathbf{z}_{t}\) by PbGMR using selected positions. During the generation process, a transformer encodes physical parameters and previous latent representations into a condition vector \(\mathbf{c}_{t+1}\), from which a normalizing flow model describes the conditional probability of \(\mathbf{z}_{t+1}\). Finally, the decoder PbGMUS decodes \(\mathbf{z}_{t+1}\) through to obtain the next-step prediction \(\mathbf{Y}_{t+1}\).
limited by the graph structure. As a result, each center vector from GMR-GMUS can only represent information in a short range. Our new encoder overcomes this issue by decoupling the neighbors in interpolation and the neighbors in the graph representation and then gives the interpolation operation more freedom to represent a state of the system.
### Decoding and self-supervised training
We devise a decoder PbMUS to recover node features on the graph from latent representation \(\mathbf{z}\): \(\hat{\mathbf{Y}}=\mathrm{PbGMUS}(\mathbf{z})\). Here we also consider the computation at a single time step and omit time indices. We first split \(\mathbf{z}\) and get vectors at interpolation centers: \((\mathbf{h}_{\mathbf{c}}:c\in\mathcal{S})=\mathbf{z}\) and then compute the initial node representation \(\mathbf{r}_{i}^{0}\) by spatial interpolation from centers:
\[\mathbf{r}_{i}^{0}=\sum_{j\in\mathrm{knn}^{\prime}(i)}\frac{w_{ic}\mathbf{h}_{c}}{\sum_ {c\in\mathrm{knn}^{\prime}(i)}w_{ic}},\quad w_{ic}=\frac{1}{d(i,c)^{2}},\quad,i\in\mathcal{V},c\in\mathcal{S} \tag{9}\]
Here \(\mathrm{knn}^{\prime}(i)\) are \(k\) centers that are nearest to the cell center \(i\). Then we apply \(L\) message-passing layers to compute \(\hat{\mathbf{Y}}=\mathrm{gnn}(G,(\mathbf{r}_{i}^{0}:i\in V))\), with \(\mathrm{gnn}\) representing the \(L\) network layers. These layers have the same architecture as PbGMR but use different learnable parameters.
Self-supervised training.Without considering the sequential property of the data, we first train the encoder and decoder on single steps with self-supervised training. This training method shares the same spirit of regeneration learning and improves encoder and decoder's abilities to capture spatial patterns in the data. In particular, we use the reconstruction error as the minimization objective to train the encoder-decoder pair.
\[\min\ \ \sum_{t=0}^{T}||\mathbf{Y}_{t}-\mathrm{PbGMUS}(\mathrm{PbGMR}(\mathbf{Y}_{t}))|| _{2}^{2} \tag{10}\]
### Attention-based temporal conditioned generative model
Now we consider the sequence of latent representations from the encoder and devise a sequential generative model to model the sequence of latent representations. The low-dimensional latent space reduces the modeling difficulty, and the vector form of latent representations \(\mathbf{z}_{t}\) avoids the graph structure and allows more model choices. We first decompose the sequence as follows.
\[P(\mathbf{z}_{1:T}|\mathbf{\mu},\mathbf{z}_{0})=p(\mathbf{z}_{1}|\mathbf{\mu},\mathbf{z}_{0})\prod_{t= 2}^{T}p(\mathbf{z}_{t}|\mathbf{\mu},\mathbf{z}_{0},\mathbf{z}_{1:t-1}). \tag{11}\]
The key is to devise a model for the conditional \(p(\mathbf{z}_{t}|\mathbf{\mu},\mathbf{z}_{0},\mathbf{z}_{1:t-1})\). We take two steps to construct this conditional: we first represent the condition \((\mathbf{\mu},\mathbf{z}_{0},\mathbf{z}_{1:t-1})\) with a single vector \(\mathbf{c}_{t}\) and then adapt RealNVP [23], a normalizing flow model, to describe the conditional \(p(\mathbf{z}_{t}|\mathbf{c}_{t})\).
Calculate the conditional vector with a transformer.Because physical parameters \(\mathbf{\mu}\) control the entire system, and the initial condition \(\mathbf{z}_{0}\) contains information about the initial condition, we consider them special and use them in the prediction of \(\mathbf{z}_{t}\) for each \(t\). We structure the problem as a "translation" problem and use the transformer [29] to run the calculation. In our case, the input "sentence" is \((\mathbf{\mu},\mathbf{z}_{0})\), the first few "tokens" in the target sentence are \((\mathbf{z}_{1},\dots,\mathbf{z}_{t-1})\), and the "next token" to be predicted is \(\mathbf{c}_{t}\).
\[\mathbf{c}_{t}=\mathrm{transformer}((\mathbf{\mu},\mathbf{z}_{0}),(\mathbf{z}_{1},\dots,\mathbf{z} _{t-1})) \tag{12}\]
While the computation is exactly the same as one step in a translation task, the rationale is very different. First, the translation task directly gets the \(\mathbf{z}_{t}\) from the transformer, but our model needs to send \(\mathbf{c}_{t}\) to a conditional flow model to get the prediction \(\mathbf{z}_{t}\). Second, the translation task has an informative input sequence, but our model uses a less informative input and depends on the transformer to get a reasonable output sequence.
Predict \(z_{t}\) with a conditional flow model.Once we have a vector \(\mathbf{c}_{t}\) representation of the condition, we can construct a conditional flow model from RealMVP. Specifically, we append the condition vector \(\mathbf{c}_{t}\) to the input of the scaling function \(s\) and the bias function \(t\) in equation 3 in each layer.
\[f_{\ell}(\mathbf{h})=[\mathbf{h}[1:d^{\prime}],\mathbf{h}[d^{\prime}+1:d]\odot\exp(s(\mathbf{h}[ 1:d^{\prime}],\mathbf{c}_{t}))+t(\mathbf{h}[1:d^{\prime}],\mathbf{c}_{t})] \tag{13}\]
From these layers, we have the flow model \(p(\mathbf{z}_{t}|\mathbf{c}_{t})\). By chaining up the transformer and the conditional flow model, we have our sequential model for conditional probability \(p(\mathbf{z}_{t}|\mathbf{\mu},\mathbf{z}_{0},\mathbf{z}_{1:t-1})\).
During training, we train the sequential model on \((\mathbf{z}_{0},\dots,\mathbf{z}_{T})\) that are computed from our PbGMR encoder. During inference, we can efficiently sample \(\mathbf{z}_{t}\) from the sequential model. The whole training and inference processes can be found in Appendix A.2 and Appendix A.3.
## 4 Related Work
Deep learning for physics.For a deterministic system, one-step learning simply takes the current state as the input and outputs the next-step prediction [3; 30; 31; 32]. To further improve the accuracy of long-term forecasts, dimension reduction is applied together with sequence nets, aiming to solve long dynamical systems on a regular domain [33; 34; 35]. Such works often adopt CNNs as encoders so that can not be applied to irregular mesh space. To further work on mesh data directly with dimension reduction, the GNN encoder/decoder was initially introduced by [11]. Various GNN architectures were also proposed to facilitate the learning of physics. [36; 37; 38; 39; 13; 15]. For physical stochastic systems, CNN is mainly used for probabilistic prediction [40; 18]. However, it lacks researches solving stochastic system in the graph space.
Regeneration learning and generative modeling.Regeneration learning is a new learning paradigm for data generation, which first generates a latent representation for the data. Then the generation process happens on the latent space. There are many recently popular generative models for images[41; 42; 43], videos[44], speeches[45], and music [46] are built on this paradigm. We notice that it should be further investigated for graph generation tasks.
## 5 Experiments
### Deterministic dynamics
We benchmark our proposed model with three datasets from three flows: flows over the cylinder, high-speed flows over the moving edge, and vascular flows [11]. Note that for test cases, we feed the model physical parameters \(\mathbf{\mu}\) and the snapshot at the beginning time to generate the whole trajectory. Detailed description can be found in Appendix A.5.
We compare our new method against five SOTA models for fluid dynamics, including two variants of MeshGraphNet[3] and three variants of the GMR-GMUS[11]. We use relative mean square error (RMSE) as the evaluation metric: \(\mathrm{RMSE}=\frac{\sum(\tilde{u}_{t}^{\mathrm{prediction}}-\tilde{u}_{t}^{ \mathrm{truth}})^{2}}{\sum(\tilde{u}_{t}^{\mathrm{prediction}})^{2}}\). More details can be found in the Appendix A.2. For cylinder and vascular flows, we calculate the RMSE with respect to velocity variables \(u\), \(v\), and the pressure variable \(p\). For high-speed flows over the moving edge, we also calculated the RMSE for the temperature \(T\).
The results on prediction errors.Table 2 shows the comparison of different models. Our model outperforms the baseline models in all the scenarios. For the cylinder flow case, our model has improvements of \(22\%\), \(17\%\), and \(47\%\) for \(u\), \(v\), and \(p\). For the sonic flow case, our model has improvements of \(61\%\), \(70\%\) for the \(u\) and \(v\) variables, and improvements of \(82\%\) and \(50\%\) respectively for the \(p\) and \(T\) variables. For the vascular flow case, our model achieves improvements of \(52\%\), \(45\%\), and \(95\%\) for \(u\), \(v\), and \(p\). The ablation
\begin{table}
\begin{tabular}{c c c} \hline \hline Dataset & GMR-GMUS & PbGMR-GMUS \\ \hline Cylinder flow & \(14.3\) & \(\mathbf{1.9}\) \\ \hline Sonic flow & \(1.11\) & \(\mathbf{0.24}\) \\ \hline Vascular flow & \(10\) & \(\mathbf{2.8}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The average relative reconstruction error of three systems, with the unit of \(1\times 10^{-3}\)study in the Appendix section (Table 9) indicates that both the new encoder and the conditional normalizing flow are essential to improve prediction accuracy.
The error in the time axis is shown in Figure 3. The performance of our model is shown with the solid black line. The model has small initial errors in all three tasks. Then its error accumulates over time, but the accumulation is much slower than that of competing models, so our model consistently has the lowest error in the final time step.
To better understand our model, we have also done an extensive ablation study reported in the appendix. Here we only show a comparison of our model with GMR-GMUS in the encoding-decoding tasks. The results in Table. 1 show that the new model has clearly better encoding and decoding capabilities. In one encoding and decoding case, the reconstruction error of our model is less than one fifth of that of the GMR-GMUS model in the appendix. The ablation study also shows that both residual connections and encoding centers enhance the encoding-decoding accuracy (please see Table 8 in the Appendix section).
Contour analysis.As a probabilistic model, we plotted the contour of the predicted sample and the standard deviation (std) for all three datasets. Figure 2 visualizes the velocity predictions from two datasets. In each subplot, the top row is one prediction (e.g. one sample from our model), and the bottom row visualizes the standard deviation of predictions. To clearly compare different predictions, we use the same color bar within the same dataset. With visual inspection, the sample predictions are reasonable and stable for all three datasets.
The standard deviations also show reasonable spatial-temporal patterns consistent with our expectations. For cylinder flow with \(Re=307\), the standard deviation is larger around the vortex-shedding region, indicating the model is less certain about the fast-changing dynamics. Moreover, the standard deviation tends to grow with time, reflecting the error accumulation behavior for long-time rollouts. Moreover, the standard deviation is smaller when \(Re\) is higher, indicating that the model is more certain about the prediction. The same trends can also be seen in other datasets. Therefore, our proposed model can accurately predict the dynamics for deterministic systems while providing a spatial-temporal uncertainty estimate, potentially improving the interpretability of deep learning systems. More contours can be found in Appendix A.6
### Stochastic dynamics
Dataset.We apply the proposed method to solve a stochastic dynamical system governed by the unsteady-state incompressible Navier-Stokes equations. We also compare our method to the unsteady Reynolds-averaged Navier-Stokes equations (URANS), the most applied method in the industry due to the balance of accuracy and computational efficiency. Let \(\Omega\) be the channel with a backward-facing
Figure 2: For each case, the first row is one of our predicted samples for velocity. The second row is the predicted standard deviation for velocity. Our model-predicted sample accurately reflects the physical systems. Meanwhile, the predicted standard deviation also has a physical spatial-temporal pattern.
step shown in Figure 5. The goal of the problem is to generate the temporal unsteady velocity field close to the ones simulated by large eddy simulation (LES) with the inflow velocity along the inlet boundary subject to uncertainty. The inflow at every time step is uncertain and sampled from a 60-dimensional stochastic space subject to a uniform distribution as \(\{u_{1}^{\mathrm{inlet}},...,u_{60}^{\mathrm{inlet}}|u_{i}^{\mathrm{inlet}}=10+u^{ \prime},u^{\prime}\sim\mathcal{U}(0,1)\}\).
Analysis of fluid motion.To visually study the generated results, we plot several time steps of the stream-wise velocity (\(u\)) from a collection of model samples in Figure 4. The proposed method clearly generates diverse turbulent flow samples. Specifically, the multiscale structure of the vortex can be seen in the contour plots. To quantitatively examine the model performance on turbulent statistics, we plot and compare the mean and variance of the velocity profiles from the URANS, the proposed model, and LES results in Figure 5. The mean velocity profile predicted by the URANS model (gray lines) has significant discrepancies compared to the LES result (cyan lines), particularly in the right half of the domain where the backward-facing step's geometry is less important and small-scale turbulent features are dominant. As for the velocity variance, the URANS model underestimates the fluctuation in the whole domain. On the contrary, both the LES velocity mean and variance can be captured well by our generated samples (purple lines).
the multiscale problem in the time domain can be automatically learned. However, URANS directly calculate an ensemble average over infinite experiments, and it can not capture stochastic behavior. Therefore, the energy (\(E(k)\)) of high-wave-number (\(k\)) signals is underpredicted by URANS while the proposed model has a consistent energy spectrum pattern with the LES simulations. Moreover, the turbulent kinetic energy (TKE) is computed to evaluate the quality of generated samples. Physically, the TKE is characterized by measured root-mean-square velocity fluctuations. By applying the decoder, the point-wise instantaneous flow can be recovered from the latent vector to the physical solution reasonably well. The TKE metric can be defined as a scalar:
\[\text{TKE}:=\frac{\int_{\Omega}\int_{[0,T_{\text{rad}}]}[(u-\bar{u})^{2}+(v- \bar{v})^{2}]d\Omega dt}{\int_{\Omega}\int_{[0,T_{\text{rad}}]}[(u^{\text{les}} -\bar{u}^{\text{les}})^{2}+(v^{\text{les}}-\bar{v}^{\text{les}})^{2}]d\Omega dt}. \tag{14}\]
It indicates the preservation of the TKE benchmarked by the LES simulations. Our model can generate a flow field preserving \(99\%\) of the energy, whereas URANS can only do less than \(20\%\). As for the temporal mean field, although URANS aims to solve it directly, it is inevitable to introduce the bias from the LES result due to the ergodicity assumption of RANS. Instead, we directly formulate the probabilistic problem to learn the distribution of the spatiotemporal field. We calculate the error of the temporal mean field and find that although URANS leverages the conservation laws, it still obtains a much larger error than our generated samples. Finally, we further propose to measure the quality of predicted rollouts using evaluation metrics of video generation in computer vision. In particular, we consider two metrics: the continuous ranked probability score (CRPS) [48; 49] and Frechet Video Distance (FVD) [50]. We use CRPS to assess the respective accuracy of our probabilistic forecasting and LES models. Since our model considers the distribution of the forecasts as a whole, it outperforms URANS, which only focuses on the mean of the distribution. Our model is also better evaluated in the FVD metric. This indicates the learned distribution by our model is close to the samples from LES.
We have also experimented with two deep learning methods, MeshGraphNet and GMR-GMUS, which are designed for deterministic systems. Their predictions are both far from LES simulations, not to mention that they can only make deterministic predictions. More results can be found in Appendix A.13.
## 6 Conclusion
We propose a new learning model to predict/generate deterministic and stochastic fluid dynamics over unstructured mesh in a uniform way. With an integration of a novel graph auto-encoder, a transformer, and a normalizing flow model, the new model decomposes temporal and spatial correlations in a dynamical system. It outperforms competitive baseline models for deterministic systems while providing a reasonable spatial-temporal pattern of forward uncertainty estimations. The samples
Figure 5: Evaluation lines: Inlet (), Outlet (), Wall (), \(x\)-evaluation lines ( - - ). Quantities from LES (), URANS (), and our model ()
from the model trained on stochastic systems capture the rich physical patterns of expensive LES simulations. The current model still has one limitation: it can not accurately model stochastic variables close to boundary areas, which is shown in Appendix A.12. In future work, we will design new learning architectures to overcome this issue.
## Acknowledgement
We thank all reviewers for their insightful feedback. Liu was supported by NSF CAREER 2239869. Wang's group would like to acknowledge the funds from Office of Naval Research under award numbers N00014-23-1-2071 and National Science Foundation under award numbers OAC-2047127.
## References
* [1] Fadl Moukalled, Luca Mangani, Marwan Darwish, F Moukalled, L Mangani, and M Darwish. _The finite volume method_. Springer, 2016.
* [2] Stephen B Pope and Stephen B Pope. _Turbulent flows_. Cambridge university press, 2000.
* [3] Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W Battaglia. Learning mesh-based simulation with graph networks. In _International Conference on Learning Representations_, 2021.
* [4] Apostolos F Psaros, Xuhui Meng, Zongren Zou, Ling Guo, and George Em Karniadakis. Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons. _Journal of Computational Physics_, page 111902, 2023.
* [5] Yarin Gal, Petros Koumoutsakos, Francois Lanusse, Gilles Louppe, and Costas Papadimitriou. Bayesian uncertainty quantification for machine-learned models in physics. _Nature Reviews Physics_, 4(9):573-577, 2022.
* [6] Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. _Advances in neural information processing systems_, 32, 2019.
* [7] Jayaraman Thiagarajan, Rushil Anirudh, Vivek Sivaraman Narayanaswamy, and Timo Bremer. Single model uncertainty estimation via stochastic data centering. _Advances in Neural Information Processing Systems_, 35:8662-8674, 2022.
* [8] James M Dolezal, Andrew Srisuwananukorn, Dmitry Karpeyev, Siddhi Ramesh, Sara Kochanny, Brittany Cody, Aaron S Mansfield, Sagar Rakshit, Radhika Bansal, Melanie C Bois, et al. Uncertainty-informed deep learning models enable high-confidence predictions for digital histopathology. _Nature communications_, 13(1):6572, 2022.
* [9] Sifan Wang and Paris Perdikaris. Long-time integration of parametric evolution equations with physics-informed deeponets. _Journal of Computational Physics_, 475:111855, 2023.
* [10] Zhong Yi Wan, Leonardo Zepeda-Nunez, Anudhyan Boral, and Fei Sha. Evolve smoothly, fit consistently: Learning smooth latent dynamics for advection-dominated systems. _arXiv preprint arXiv:2301.10391_, 2023.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{URANS (\(\dashrightarrow\)), LES (\(\dashrightarrow\)), Ours(\(\dashrightarrow\))} & Quantity of interest & URANS & Ours \\ \hline \multirow{4}{*}{
\begin{tabular}{} \end{tabular} } & \multirow{4}{*}{\(10^{0}\)} & \multirow{4}{*}{\(10^{1}\)} & CRPS (\(C_{u}\),\(\downarrow\)) & 3.24 & **1.28** \\ & & CRPS (\(C_{v}\),\(\downarrow\)) & 2.0 & **1.08** \\ & & FVD (\(d_{u}\),\(\downarrow\)) & 228112 & **1262** \\ & & FVD (\(d_{v}\),\(\downarrow\)) & 137860 & **397** \\ & & mean flow error (\(e_{u}\),\(\downarrow\)) & 0.31 & **0.0176** \\ & & mean flow error (\(e_{v}\),\(\downarrow\)) & 0.94 & **0.176** \\ & & turbulent energy (TKE,\(\uparrow\)) & 0.192 & **0.99** \\ \hline \end{tabular}
\end{table}
Table 3: Energy cascade spectrum at \((2.58,0.21)\)(_left_), and criteria of generation quality(_right_).
* [11] Xu Han, Han Gao, Tobias Pffaf, Jian-Xun Wang, and Li-Ping Liu. Predicting physics in mesh-reduced space with temporal attention. _arXiv preprint arXiv:2201.09113_, 2022.
* [12] Rishikesh Ranade, Chris Hill, Lalit Ghule, and Jay Pathak. A composable machine-learning approach for steady-state simulations on high-resolution grids. _arXiv preprint arXiv:2210.05837_, 2022.
* [13] Jiayang Xu, Aniruddhe Pradhan, and Karthikeyan Duraisamy. Conditionally parameterized, discretization-aware neural networks for mesh-based modeling of physical systems. _Advances in Neural Information Processing Systems_, 34:1634-1645, 2021.
* [14] Dmitrii Kochkov, Tobias Pfaff, Alvaro Sanchez-Gonzalez, Peter Battaglia, and Bryan K Clark. Learning ground states of quantum hamiltonians with graph networks. _arXiv preprint arXiv:2110.06390_, 2021.
* [15] Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In _International conference on machine learning_, pages 8459-8468. PMLR, 2020.
* [16] Nicholas Geneva and Nicholas Zabaras. Multi-fidelity generative deep learning turbulent flows. _arXiv preprint arXiv:2006.04731_, 2020.
* [17] Michael McCabe and Jed Brown. Learning to assimilate in chaotic dynamical systems. _Advances in Neural Information Processing Systems_, 34:12237-12250, 2021.
* [18] Mustafa Z Yousif, Linqi Yu, and HeeChang Lim. Physics-guided deep learning for generating turbulent inflow conditions. _Journal of Fluid Mechanics_, 936:A21, 2022.
* [19] Deniz A Bezgin and Nikolaus A Adams. Normalizing flows as a novel pdf turbulence model. _arXiv preprint arXiv:2101.03590_, 2021.
* [20] Mengyuan Yang, Xiaolin Zheng, Qianqiao Liang, Bing Han, and Mengying Zhu. A smart trader for portfolio management based on normalizing flows. IJCAI, 2022.
* [21] Hrvoje Jasak, Aleksandar Jemcov, Zeljko Tukovic, et al. Openfoam: A c++ library for complex physics simulations. In _International workshop on coupled methods in numerical dynamics_, volume 1000, pages 1-20, 2007.
* [22] Xu Tan, Tao Qin, Jiang Bian, Tie-Yan Liu, and Yoshua Bengio. Regeneration learning: A learning paradigm for data generation. _arXiv preprint arXiv:2301.08846_, 2023.
* [23] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. _arXiv preprint arXiv:1605.08803_, 2016.
* [24] Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In _International Conference on Machine Learning_, pages 4470-4479. PMLR, 2018.
* [25] David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In _International Conference on Machine Learning_, pages 342-350. PMLR, 2017.
* [26] Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, and Barret Zoph. Revisiting resnets: Improved training and scaling strategies. _Advances in Neural Information Processing Systems_, 34:22614-22627, 2021.
* [27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2016.
* [28] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. _Advances in neural information processing systems_, 30, 2017.
* [29] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [30] Kelsey R Allen, Tatiana Lopez Guevara, Yulia Rubanova, Kim Stachenfeld, Alvaro Sanchez-Gonzalez, Peter Battaglia, and Tobias Pfaff. Graph network simulators can learn discontinuous, rigid contact dynamics. In _Conference on Robot Learning_, pages 1157-1167. PMLR, 2023.
* [31] Meire Fortunato, Tobias Pfaff, Peter Wirnsberger, Alexander Pritzel, and Peter Battaglia. Multiscale meshgraphnets. _arXiv preprint arXiv:2210.00612_, 2022.
* [32] Kelsey R Allen, Tatiana Lopez-Guevara, Kimberly Stachenfeld, Alvaro Sanchez-Gonzalez, Peter Battaglia, Jessica Hamrick, and Tobias Pfaff. Physical design using differentiable learned simulators. _arXiv preprint arXiv:2202.00728_, 2022.
* [33] Han Gao, Jian-Xun Wang, and Matthew J Zahr. Non-intrusive model reduction of large-scale, nonlinear dynamical systems using deep learning. _Physica D: Nonlinear Phenomena_, 412:132614, 2020.
* [34] Nicholas Geneva and Nicholas Zabaras. Transformers for modeling physical systems. _Neural Networks_, 146:272-289, 2022.
* [35] Tailin Wu, Takashi Maruyama, and Jure Leskovec. Learning to accelerate partial differential equations via latent global evolution. _Advances in Neural Information Processing Systems_, 35:2240-2253, 2022.
* [36] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. _Advances in neural information processing systems_, 29, 2016.
* [37] Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B Tenenbaum, and Antonio Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. _arXiv preprint arXiv:1810.01566_, 2018.
* [38] Marco Maurizi, Chao Gao, and Filippo Berto. Predicting stress, strain and deformation fields in materials and structures with graph neural networks. _Scientific Reports_, 12(1):21834, 2022.
* [39] Jiaqi Han, Wenbing Huang, Hengbo Ma, Jiachen Li, Josh Tenenbaum, and Chuang Gan. Learning physical dynamics with subequivariant graph neural networks. _Advances in Neural Information Processing Systems_, 35:26256-26268, 2022.
* [40] Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards physics-informed deep learning for turbulent flow prediction. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 1457-1466, 2020.
* [41] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022.
* [42] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In _International Conference on Machine Learning_, pages 8821-8831. PMLR, 2021.
* [43] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. _Advances in Neural Information Processing Systems_, 35:36479-36494, 2022.
* [44] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. _arXiv preprint arXiv:2204.03458_, 2022.
* [45] Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Rj Skerrv-Ryan, et al. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In _2018 IEEE international conference on acoustics, speech and signal processing (ICASSP)_, pages 4779-4783. IEEE, 2018.
* [46] Zeqian Ju, Peiling Lu, Xu Tan, Rui Wang, Chen Zhang, Songruoyao Wu, Kejun Zhang, Xiangyang Li, Tao Qin, and Tie-Yan Liu. Telemelody: Lyric-to-melody generation with a template-based two-stage method. _arXiv preprint arXiv:2109.09617_, 2021.
* [47] Peter Welch. The use of fast fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. _IEEE Transactions on audio and electroacoustics_, 15(2):70-73, 1967.
* [48] James E Matheson and Robert L Winkler. Scoring rules for continuous probability distributions. _Management science_, 22(10):1087-1096, 1976.
* [49] Christopher AT Ferro, David S Richardson, and Andreas P Weigel. On the effect of ensemble size on the discrete and continuous ranked probability scores. _Meteorological Applications: A journal of forecasting, practical applications, training techniques and modelling_, 15(1):19-24, 2008.
* [50] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. Fvd: A new metric for video generation. 2019.
* [51] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [52] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). _arXiv preprint arXiv:1606.08415_, 2016.
* [53] Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. _Neural computation_, 9(8):1735-1780, 1997.
* [54] George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. _Advances in neural information processing systems_, 30, 2017.
Appendix
### Proof
The Jacobian of the transformation in Equation (4) can be written as:
\[\frac{\partial f^{i}(x)}{\partial f^{i-1}(x)}=\left[\begin{array}{cc}\mathbb{I} ^{d^{\prime}}&0\\ \frac{\partial f^{i}(x)^{d^{\prime}+1:d}}{\partial f^{i-1}(x)^{1:d^{\prime}}}& \text{diag}\big{(}\text{exp}\left[s\left(f^{i-1}\text{concat}((x)^{1:d^{\prime }},c)\right)\right]\text{)}\end{array}\right] \tag{15}\]
fwhere \(\text{diag}\big{(}\text{exp}[s(f^{i-1}\text{concat}((x)^{1:d^{\prime}},c))]\) is the diagonal matrix whose diagonal elements correspond to the vector \(\text{exp}\left[s\left(f^{i-1}\text{concat}((x)^{1:d^{\prime}},c)\right)\right]\). Since the concatenation operation still keeps the matrix triangular, the determinant can be computed as:
\[\text{det}(\partial f^{i}(\mathbf{x})/\partial f^{i-1}(\mathbf{x}))|=\text{exp}(\text{ sum}(s_{i}(\text{concat}(f^{i-1}(\mathbf{x})^{1:d^{\prime}},c)))) \tag{16}\]
### Training
As a regeneration learning method, there are two stages during the training. The first stage is to train a graph auto-encoder, PbGMR-GMUS, in this case, through self-supervised learning. This training process happens over all time steps and sequences in the training set. And for a given sequence \([\mathbf{Y}_{0},\dots,\mathbf{Y}_{T}]\), we compute \(\hat{\mathbf{Y}}_{t}=\text{PbGMUS}(\mathcal{G},\text{PbGMR}(\mathcal{G},\text{Y}))\) at each step and minimize the reconstruction loss:
\[\mathcal{L}_{graph}=\quad\sum_{t=1}^{T}\|\mathbf{Y}_{t}-\hat{\mathbf{Y}}_{t}\|_{2}^{2 }\quad. \tag{17}\]
In the second stage, the parameters in PbGMR-GMUS will be fixed. And the attention-based sequence model, as well as the conditional flow model, will be trained. We maximize the log-likelihood in the latent space:
\[\mathcal{L}_{p}=-\frac{1}{T-1}\sum_{t=2}^{T}\log p_{\psi}(\mathbf{z}_{t}|\mathbf{c}_{t}) \tag{18}\]
We don't set a time window so that the whole trajectory will be trained. The entire training process can be found in Algorithm 1. Particularly in the second stage, such a teacher forcing-like strategy allows a faster training process compared with the previous work [11], which adopts a rollout strategy during the training.
### Inference
For inference we first encode the first two snapshots into latent space \([\mathbf{z}_{0},\mathbf{z}_{1}]\). The condition vector \(\mathbf{c}_{t}\) is computed from the attention-based sequence model. By sampling a noise vector \(\mathbf{x}\) from an isotropic Gaussian, and putting it going backward through the flow, we can obtain a new sample from \(p_{\phi}(\mathbf{z}_{t}|\mathbf{c}_{t})\) as latent representation at each step. We describe the procedure of training in Algorithm 2.
```
1:Input: \(\mathbf{z}_{0}\), \(\mathbf{z}_{1}\), \(\mathbf{z}_{2}\), \(\mathbf{z}_{3}\), \(\mathbf{z}_{4}\), \(\mathbf{z}_{5}\), \(\mathbf{z}_{6}\), \(\mathbf{z}_{7}\), \(\mathbf{z}_{8}\), \(\mathbf{z}_{9}\), \(\mathbf{z}_{10}\), \(\mathbf{z}_{11}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{19}\), \(\mathbf{z}_{12}\), \(\mathbf{z}_{13}\), \(\mathbf{z}_{14}\), \(\mathbf{z}_{15}\), \(\mathbf{z}_{16}\), \(\mathbf{z}_{17}\), \(\mathbf{z}_{18}\), \(\mathbf{z}_{19}\```
Input: Domain graph \(\mathcal{G}\), Node features over time \([\mathbf{Y}_{0},\ldots,\mathbf{Y}_{T}]\), \(\mathrm{PbGMR}_{\theta}\), \(\mathrm{PbGMUS}_{\phi}\), Attention encoder \(\mathrm{MHA}_{\omega}\), Attention decoder Masked-\(\mathrm{MHA}_{\lambda}\), conditional flow model \(p_{\psi}(.|)\), physical condition parameter \(\mathbf{\mu}\) Output: Learned parameters \(\theta\), \(\phi\), \(\omega\) and \(\psi\) repeat for\(\mathbf{Y}_{t}\in[\mathbf{Y}_{0},\ldots,\mathbf{Y}_{T}]\)do \(\hat{\mathbf{Y}}_{t}=\mathrm{PbGMUS}_{\phi}(\mathrm{PbGMR}_{\theta}(\mathbf{Y}_{t}, \mathcal{G}))\) endfor Compute \(\nabla_{\theta,\phi}\leftarrow\nabla_{\theta,\phi}\mathcal{L}_{graph}(\theta, \phi,[\mathbf{Y}_{0},\ldots,\mathbf{Y}_{T}],[\hat{\mathbf{Y}}_{0},\ldots,\hat{\mathbf{Y}}_{T}])\) Update \(\phi\), \(\theta\) using the gradients \(\nabla_{\phi}\), \(\nabla_{\theta}\) until convergence of the parameters (\(\theta\), \(\phi\)) {Self-supervised learning} \([z_{0},z_{1},\ldots,z_{T}]=\mathrm{GMR}_{\theta}([\mathbf{Y}_{0},\ldots,\mathbf{Y}_{T }])\) repeat \(\hat{\mathbf{\mu}},\hat{\mathbf{z}}_{0}=\mathrm{MHA}_{\omega}(\mathbf{\mu},\mathbf{z}_{0})\) \([\mathbf{c}_{2},\ldots,\mathbf{c}_{t}]=\text{Masked-MHA}_{\lambda}((\mathbf{z}_{1},\ldots, \mathbf{z}_{t-1}),\mathbf{I}=(\hat{\mathbf{\mu}},\hat{\mathbf{z}}_{0}))\) Compute \(\nabla_{\omega,\lambda,\psi}\leftarrow\nabla_{\omega,\lambda,\psi}\mathcal{L }_{p}(\omega,\lambda,\psi,[\mathbf{c}_{2},\ldots,\mathbf{c}_{t}],[\mathbf{z}_{2},\ldots, \mathbf{z}_{t}])\) Update \(\omega\), \(\lambda\), \(\psi\) using the gradients \(\nabla_{\omega}\), \(\nabla_{\lambda}\), \(\nabla_{\psi}\) until convergence of the parameters (\(\omega\), \(\lambda\), \(\psi\))
```
**Algorithm 1** Training Process
```
Input: Domain graph \(\mathcal{G}\), The first two snapshot of node features \([\mathbf{Y}_{0},\mathbf{Y}_{1}]\), \(\mathrm{PbGMR}_{\theta}\), \(\mathrm{PbGMUS}_{\phi}\), Attention encoder \(\mathrm{MHA}_{\omega}\), Attention decoder Masked-\(\mathrm{MHA}_{\lambda}\), conditional flow model \(p_{\psi}(.|)\), physical condition parameter \(\mathbf{\mu}\), sample length \(T\) Output: A trajectory sample \([\mathbf{Y}_{2},\ldots,\mathbf{Y}_{T}]\) \([z_{0},\mathbf{z}_{1}]=\mathrm{PbGMR}_{\theta}([\mathbf{Y}_{0},\mathbf{Y}_{1}],\mathcal{G})\) \(\hat{\mathbf{\mu}},\hat{\mathbf{z}}_{0}=\mathrm{MHA}_{\omega}(\mathbf{\mu},\mathbf{z}_{0})\) for\(t\in[2,\ldots,T]\)do \(\mathbf{c}_{t}=\text{Masked-MHA}_{\lambda}((\mathbf{z}_{1},\ldots,\mathbf{z}_{t-1}),\mathbf{I}=( \hat{\mathbf{\mu}},\hat{\mathbf{z}_{0}}))[-1]\) Sample latent representation from conditional flow \(\mathbf{z}_{t}\sim p_{\phi}(\mathbf{z}_{t}|\mathbf{c}_{t})\) endfor Recover physical parameters on the original space \([\mathbf{Y}_{2},\ldots,\mathbf{Y}_{T}]=\mathrm{PbGMUS}_{\phi}([\mathbf{z}_{2},\ldots,\mathbf{ z}_{T}])\)
```
**Algorithm 2** Inference Process
Figure 6: 2-D principle subspace of the (a) latent vectors and (b) conditional latent vectors for the stochastic BFS cases: trajectory 1 (), trajectory 2 () start from and, respectively.
### Dataset
Four fluid simulation datasets, cylinder flow, sonic flow, vascular flow and stochastic backward-facing step (BFS) are used in the experiments. The labeled data are generated by solving the Navier-Stokes equation by OpenFOAM [21], a commonly used finite volume method-based open-source solver for fluid simulation. Cylinder and sonic flow consist of \(50\) cases in the training dataset and \(50\) cases in the test dataset. The vascular flow cases have \(10\) training and \(10\) test cases. The stochastic BFS case has \(5\) training cases. Unlike the first three deterministic cases, the stochastic BFS case is a stochastic case. The simulation details can be found in Table 4. Moreover, the details input and output of the PbGMR-GMUS are listed in Table 5.
The governing equation for the first three cases is the same as the one listed in [11]. For the last stochastic BFS case, the governing equation is listed in Equation (19). One significant difference in this case is that it introduces stochasticity by perturbing the inlet \(u\) velocity at every time step.
Figure 8: Pressure contour for benchmark cases
Figure 7: Velocity contour for the vascular flow case, the first row is one of our predicted samples. The second row is the predicted standard deviation.
Figure 9: Temperature contour for sonic flow case.
### Multi-head attention
Transformer [29] has been proven successful in the NLP field. The design of the multi-head attention (MHA) layer is based on the attention mechanism with Query-Key-Value (QKV). Given the packed matrix representations of queries \(\mathbf{Q}\), keys \(\mathbf{K}\), and values \(\mathbf{V}\), the scaled dot-product attention used by Transformer is given by:
\[\mathrm{ATTENTION}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\left( \frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{D_{k}}}\right)\mathbf{V}, \tag{20}\]
where \(D_{k}\) represents the dimensions of queries and keys.
The multi-head attention applies \(H\) heads of attention, allowing a model to attend to different types of information.
\[\mathrm{MHA}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{CONCAT}\left(\mathrm{ head}_{1},\ldots,\mathrm{head}_{H}\right)\mathbf{W}\] \[\mathrm{where}\quad\mathrm{head}_{i}=\mathrm{ATTENTION}\left(\mathbf{ Q}\mathbf{W}_{i}^{Q},\mathbf{K}\mathbf{W}_{i}^{K},\mathbf{V}\mathbf{W}_{i}^{V}\right),i=1,\ldots,H. \tag{21}\]
### Additional details for experimental setups
We described the details of the experiments of PbGMR-GMUS and attention-based conditional flow model. We provide the hyperparameters used in the experiments in Table 6.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Cylinder & Sonic & Vascular & Stochastic BFS \\ \hline
**PbGMR-GMUS Optimization** & & & & \\ Learning rate & \(1\times 10^{-4}\) & \(1\times 10^{-5}\) & \(1\times 10^{-4}\) & \(1\times 10^{-4}\) \\ Optimizer & & & Adam [51] & \\ Batch size & & & 1 & \\ Number of epochs & \(1500\) & \(20000\) & \(700\) & \(600\) \\ \hline
**MHA/Flow model Optimization** & & & & \\ Learning rate & \(1\times 10^{-4}\) & \(1\times 10^{-5}\) & \(1\times 10^{-5}\) & \(1\times 10^{-4}\) \\ Optimizer & & & Adam & \\ Batch size & \(5\) & \(50\) & \(10\) & \(5\) \\ Number of epochs & \(90000\) & \(240000\) & \(240000\) & \(220000\) \\ Weight decay & \(1\times 10^{-5}\) & & N/A & \\ \hline
**PbGMR-GMUS Architecture** & & & & \\ Layers of message-passing & & & 3 & \\ Hidden dimension & & & 128 & \\ Output dimension & \(256\) & \(256\) & \(400\) & \(256\) \\ Activation function & & & relu & \\ \hline
**MHA Architecture** & & & & \\ Layers of Encoding MHA & & & 2 & \\ Layers of Decoding Masked-MHA & & & 1 & \\ Hidden dimension & \(1024\) & \(1024\) & \(1600\) & \(1024\) \\ Activation function & & & gelu [52] & \\ Number of heads & \(4\) & \(8\) & \(4\) & \(4\) \\ \hline
**Flow model Architecture** & & & & \\ Conditioning length & & & \(1024\) & \\ Hidden size & & & \(1024\) & \\ Number of coupling layers & & & \(2\) & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Hyperparameters
### Performance impact of substituting Transformer with LSTM
In this research, we conduct an ablation study in which we replace the proposed attention-based sequence model in our framework with a Long Short-Term Memory (LSTM) [53] architecture. The aim is to evaluate the impact of the underlying sequence modeling on the performance of our system. The result is listed in Table 7. Following this change, we observe that there is a notable decrease in the performance metrics, suggesting a less optimal fit for the task at hand compared to the Transformer-based model. This outcome underscores the usefulness of attention-based sequence models, like the Transformer, which appears to capture dependencies in the data more effectively than the LSTM. The attention mechanism inherent to Transformers allows the model to focus on different parts of the input sequence dynamically, which may explain the observed performance superiority.
### Performance impact of number of centers
We conduct an ablation study to investigate the effect of the number of centers on the RMSE of our framework's predictions for cylinder flow recovery. The result is in Figure 10. As we progressively increased the number of centers, the recovery RMSE exhibited a decreasing trend, demonstrating an enhancement in prediction accuracy. Interestingly, upon reaching 256 centers, the rate of RMSE decrease started to decelerate significantly. This suggests that the optimal number of centers for cylinder flow is around 256. Beyond this point, further increments do not contribute as much to the improvement in encoding-decoding accuracy, while simultaneously increasing the computational cost. Hence, a count of 256 centers appears to be the sweet spot, offering a good trade-off between recovery accuracy and computational efficiency.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Dataset-rollout step & \multicolumn{6}{c}{Sonic flow-40} & \multicolumn{6}{c}{Vascular flow-250} \\ Variable & \(u\) & \(v\) & \(p\) & \(T\) & \(u\) & \(v\) & \(p\) \\ \hline PbGMR-GMUS & LSTM & \(10.3\) & \(29.7\) & \(1.7\) & \(11.1\) & \(43.0\) & \(46.6\) & \(16.6\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: The average relative rollout error of two systems, with the unit of \(1\times 10^{-3}\).
Figure 10: Reconstructor error RMSE with different numbers of centers for cylinder flow, with the training epoch = \(300\).
### Centers
The centers distribution for Stochastic BFS can be found in Figure 11.
### Analysis for stochastic BFS
Backward-facing step flow refers to a fluid flow configuration characterized by a sudden contraction followed by an expansion in a channel or pipe. This flow configuration is commonly encountered in various engineering applications, such as in heat exchangers, combustion chambers, and aerodynamic systems. In backward-facing step flow, the fluid initially enters the channel or pipe through an inlet, where it encounters a step or abrupt contraction. As the fluid passes over the step, its velocity increases, and the flow separates, forming a recirculation zone downstream. This recirculation zone is often referred to as the "backflow region." The backward-facing step flow has been extensively studied due to its complex flow characteristics and its relevance in understanding phenomena like separation, reattachment, and flow control.
The simulations and generated samples exhibit remarkable similarity in the contours of both streamwise velocity and wall-normal velocity (Figure 4, 12, 13, 14). This close resemblance demonstrates the high fidelity of the simulation results and the accuracy of the generated samples. The level of detail captured in these contours highlights the effectiveness of the proposed generative framework.
We also randomly choose nine spatial locations to analyze the learning performance, and the coordinates of these points are plotted in Figure 15. Figure 16 and 17 presented the comparison of \(u,\,v\) velocity between the LES simulation, URANS simulation and our model. Generally, the pointwise velocity temporal signals obtained from the generated sample provide a comprehensive representation of the flow characteristics and look visually similar to those obtained from LES simulations. However, the traditional URANS modeling approach cannot capture the intricate fluctuations in the flow field. Note that we also found the generated velocity time signals at points 1 and 2 have a relatively large discrepancy from the LES simulation, and it is explainable. Point 1 is very close to the inlet boundary; therefore, the \(u\) velocity here is very random because of our stochastic velocity inlet condition. Point 2 locates in a region where the BFS flow is not fully developed yet and in this region the \(v\) velocity
Figure 11: The first row shows meshes and the second row shows the distribution for the centers.
Figure 12: Velocity contour (\(v\)) at time step \(40,80,120,160\) (_left to right_) from different samples.
Figure 13: Velocity contour (\(u\)) at time step \(40,80,120,160\) (_left to right_) from different LES simulations.
Figure 14: Velocity contour (\(v\)) at time step \(40,80,120,160\) (_left to right_) from different LES simulations.
Figure 15: Spatial location of the 9 random chosen evaluation points. Point 1: \((-4.98,0.016)\), Point 2: \((-1.65,0.55)\), Point 3: \((1.68,-0.88)\), Point 4: \((0.016,-0.32)\), Point 5: \((3.35,0.22)\), Point 6: \((1.68,0.78)\), Point 7: \((5.01,-0.65)\), Point 8: \((8.35,-0.11)\), Point 9: \((6.68,0.45)\)
magnitude is very small. These conditions make it hard to capture an accurate distribution of the velocity at these 2 points. Finally, we also compare the velocity distribution between our generated samples and the LES simulation results, as shown in Figure 18, 19. Except for points 1 and 2, the distribution of our generated samples aligns very well with the LES simulations.
Figure 17: Time signal from LES (- - - ), URANS (- - ), flow model (- - ) of \(v\) evaluated at 9 points (_top to bottom, left to right_).
Figure 19: Comparison of the distribution of \(v\) velocity between the LES result and our model at 9 evaluation points
Figure 18: Comparison of the distribution of \(u\) velocity between the LES result and our model at 9 evaluation points
### Ablation study
Comprehensive ablation studies are included in this section to demonstrate the effectiveness of each parts. The velocity reconstruction error on the Backward-facing step flow dataset with or without residual connection and centers is shown in Table 8. The result indicates that each component can improve the graph reconstruction accuracy. We also test the influence of PbGMR-GMUS and the attention conditional flow model to the final performance of the deterministic task, as shown in Table 9. The result shows that both the encoder-decoder part and the flow model are necessary for the success of the whole framework.
To efficiently incorporate the temporal information, we conceive a temporal conditional normalizing flows to account for the long-time temporal dependency. The original RealNVP is only a probabilistic model, and it is not straighforward to include temporal information. To achieve this, we propose an encoding-decoding transformer structure to incorporate fixed physical parameters and all previous steps into condition vector \(c\). In this ablation study, we remove the transformer structures and replace condition vector \(c\) by the current step latent vector \(z_{t-1}\) when predicts next step \(z_{t}\). This variant can be treated as a probabilistic 1-step prediction method. The result in Table 10 shows that the proposed structure can significantly improve the accuracy compared with such 1-step method, while still easy to calculate the probability of a coupling layer if we concatenate condition vector at every coupling layer, as shown in Appendix A.1.
Our model can benefit from longer time dependencies, since we take all previous steps in the experiment setting. Also, we can use the moving window to reduce the cost further. We did one additional experiment of reducing the number of inputs during the inference time, as shown in Table 11. When reducing the window size from 400 to 150, the degradation in accuracy is not significant. So the model is also applicable to cases with thousands or even longer steps with the sliding window.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline & \(u\) & \(v\) & \(p\) & \(T\) \\ \hline
1-step NF & 4.85 & 16.36 & 1.09 & 5.50 \\ \hline Ours & **0.37** & **0.85** & **0.079** & **0.01** \\ \hline \end{tabular}
\end{table}
Table 10: Ablation study: effect of attention-based temporal conditional model for Sonic dataset, NF: normalizing flows only condition on previous step. The proposed model outperforms 1 step NF, indicating the design of attention based temporal model is necessary for accurate time series prediction.
\begin{table}
\begin{tabular}{c|c|c} \hline residual connection & centers & \(v\) \\ \hline ✗ & ✗ & 85 \\ \hline ✗ & ✓ & 19.2 \\ \hline ✓ & ✓ & **15.8** \\ \hline \end{tabular}
\end{table}
Table 8: The reconstruction error of stochastic flow with the same training epoch number for Backward-facing step flow dataset. The residual connection and centers can decrease the reconstruction error significantly.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline & residual connection\(+\)centers & CNF & \(u\) & \(v\) & \(p\) \\ \hline GMR-GMUS & ✗ & ✗ & 7.3 & 10 & 22 \\ \hline Variaint1 & ✗ & ✓ & 6.1 & 8.7 & 9.2 \\ \hline Variaint2 & ✓ & ✗ & 5 & 8 & 3 \\ \hline Ours & ✓ & ✓ & **3.49** & **5.47** & **1.05** \\ \hline \end{tabular}
\end{table}
Table 9: Ablation study: the average relative rollout error with/without each proposed component for vascular flow. CNF: using conditional normalizing flows for sequence prediction. In variant 2, we remove the CNF block and use a transformer only structure for predictions, similar to [11]. Our model with residual connection and centers have the best performance.
Though we only tested RealNVP in the experiment part, other normalizing flows are also feasible here. In Table 12, we test another normalizing flow model MAF [54], and find the result is comparable to RealNVP. It indicates that the proposed framework is flexible.
As a probabilistic model, the proposed PbGMR-GMUS + conditional flow model is trying to fit such a stochastic process and get various samples with the same physical statistics. To demonstrate these, we test MGN and GMR-GMUS on the stochastic dataset. In Table 13, we can see that the performance of the proposed model is better than other learning-based models on all metrics. In Figure 20, we visualize the mean and variance of velocity from each model, and find the MGN can not capture the real distribution at all. The result of GMR-GMUS model, though looks reasonable in statistics, can only produce the same output given the same input condition, which doesn't reflect the stochasticity of the last dataset. In Figure 21, we get two samples from GMR-GMUS, and find the two samples are exactly the same.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline & CRPS (\(C_{w}\),\(\downarrow\)) & CRPS (\(C_{w}\),\(\downarrow\)) & FVD (\(d_{w}\),\(\downarrow\)) & FVD (\(d_{w}\),\(\downarrow\)) & MFE (\(e_{w}\),\(\downarrow\)) & MFE (\(e_{w}\),\(\downarrow\)) & TE (TKE,\(\uparrow\)) \\ \hline URANS & 3.24 & 2.0 & 2281I2 & 137860 & 0.31 & 0.94 & 0.192 \\ \hline MeshGraphNet & 5.06 & 2.47 & 1159950 & 179206 & 0.82 & 1.60 & 0.87 \\ \hline GMR-GMUS & 2.57 & 2.27 & 4976 & 3740 & 0.167 & 1.96 & 0.99 \\ \hline Ours & **1.28** & **1.08** & **1262** & **397** & **0.0176** & **0.176** & **0.99** \\ \hline \hline \end{tabular}
\end{table}
Table 13: Criteria of generation quality for the stochastic flow for URANS and different learning-based models, the proposed model has the best performance.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline WL & \(u\) & \(v\) & \(p\) \\ \hline
150 & 4.14 & 80.48 & 22.99 \\ \hline
400 & **3.8** & **74** & **20** \\ \hline \end{tabular}
\end{table}
Table 11: Ablation study: effect of window length (WL) for Cylinder dataset, reducing the window length to 150 doesn’t increase error significantly.
Figure 20: Evaluation lines of **mean** (_left_) and **variance** (_right_) of streamwise velocity \(u\): \(x\)-evaluation lines (- - - ). Quantities from LES (——), URANS (——), MeshGraphNet (- - - ), GMR-GMUS (——) and the proposed model (——)
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Dataset-rollout step & \multicolumn{3}{c}{Cylinder flow-400} & \multicolumn{3}{c}{Sonic flow-40} & \multicolumn{3}{c}{Vascular flow-250} \\ Variable & \(u\) & \(v\) & \(p\) & \(u\) & \(v\) & \(p\) & \(T\) & \(u\) & \(v\) & \(p\) \\ \hline Ours & RealNVP & **3.8** & \(74\) & \(20\) & \(0.37\) & \(0.85\) & \(0.079\) & **0.01** & **3.49** & **5.47** & **1.05** \\ Ours & MAF & **3.8** & **72** & **19.13** & **0.33** & **0.69** & **0.055** & \(0.02\) & \(3.82\) & \(5.84\) & \(1.16\) \\ \hline \hline \end{tabular}
\end{table}
Table 12: Ablation Study: the average rollout error using different normalizing flow models. MAF: Masked Autoregressive Flow. The performance is comparable with RealNVP. | ## Review
### Summary
The authors present a novel approach for modeling fluid dynamics using a unified framework that integrates graph neural networks and attention-based models to derive global latent space representations. This framework enables the generation of stochastic predictions based on initial conditions and system parameters. The proposed method, PbGMR-GMUS, demonstrates superior performance across both deterministic and stochastic benchmarks compared to existing models by leveraging a conditional generative model based on normalizing flows. The inclusion of diverse evaluation metrics enhances the comparative analysis, making this work significant in advancing the understanding and application of probabilistic modeling in fluid dynamics.
### Strengths
- Addresses the important challenge of probabilistic modeling of high-dimensional dynamical systems, particularly from parametric PDEs.
- Flexible method capable of adapting to various problems and configurations, potentially serving as a fast surrogate for turbulence modeling.
- Novel regeneration learning framework that integrates graph representation learning, attention models, and normalizing flows, effectively handling both deterministic and stochastic systems.
- Clear and logical presentation with high-quality and informative plots.
- Empirical evidence is strong and diverse, showcasing the method's effectiveness.
### Weaknesses
- Minor typos and inaccuracies in text and figures, which need correction.
- Lack of comprehensive discussions on the limitations of the proposed method.
- Unclear attribution of improvements to pivotal positions versus model architecture; further ablation studies are recommended.
- Absence of comparative analysis against existing learning-based models; this raises questions about the proposal's true contributions.
### Questions
- How much of the improvements can be attributed to using pivotal positions versus the architecture of the latent dynamical model?
- What advantages does the RealNVP provide over other generative models?
- Clarification needed on the time-variant nature of the parameter $\mu$ in relation to the stochastic dynamics.
- Is there a video demonstration available to illustrate the predictions in both deterministic and stochastic scenarios?
### Soundness
**Score:** 14
**Description:** 3 = good. The methodologies presented are sound with solid empirical backing, although some aspects require further clarification and validation through ablation studies.
### Presentation
**Score:** 13
**Description:** 3 = good. The paper is generally well-presented and easy to follow, but minor issues with typos and figure captions detract from its overall clarity.
### Contribution
**Score:** 13
**Description:** 3 = good. The paper introduces a significant advancement in the field but could benefit from more comprehensive comparisons and discussions on limitations.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements. The paper presents strong technical contributions and novel ideas, but requires minor revisions and additional clarifications to reach its full potential.
### Paper Decision
**Decision:** Accept (spotlight)
**Reasons:** The paper showcases originality and significant contributions to the field of fluid dynamics modeling. While the soundness and presentation are generally strong, minor improvements and clarifications are needed. The overall impact is substantial, justifying an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Permutation Decision Trees using Structural Impurity
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
Decision Tree is a well understood Machine Learning model that is based on minimizing impurities in the internal nodes. The most common impurity measures are _Shannon entropy_ and _Gini impurity_. These impurity measures are insensitive to the order of training data and hence the final tree obtained is invariant to a permutation of the data. This leads to a serious limitation in modeling data instances that have order dependencies. In this work, we use _Effort-To-Compress_ (ETC) - a complexity measure, for the first time, as an impurity measure. Unlike Shannon entropy and Gini impurity, structural impurity based on ETC is able to capture order dependencies in the data, thus obtaining potentially different decision trees for different permutation of the same data instances (_Permutation Decision Trees_). We then introduce the notion of _Permutation Bagging_ achieved using permutation decision trees without the need for random feature selection and sub-sampling. We compare the performance of the proposed permutation bagged decision trees with Random Forest. Our model does not assume independent and identical distribution of data instances. Potential applications include scenarios where a temporal order is present in the data instances.
## 1 Introduction
The assumptions in Machine Learning (ML) models play a crucial role in interpretability, reproducibility, and generalizability. One common assumption is that the dataset is independent and identically distributed (iid). However, in reality, this assumption may not always hold true, as human learning often involves connecting new information with what was previously observed. Psychological theories such as Primacy and Recency Effects [1], Serial Position Effect, and Frame Effect suggest that the order in which data is presented can impact decision-making processes. In this work, we have devised a learning algorithm that exhibits sensitivity to the order in which data is shuffled. This unique characteristic imparts our proposed model with decision boundaries or decision functions that rely on the specific arrangement of training data.
In our research, we introduce the novel use of 'Effort to Compress' (ETC) as an impurity function for Decision Trees, marking the first instance of its application in Machine Learning. ETC effectively measures the effort required for lossless compression of an object through a predetermined lossless compression algorithm [2]. ETC was initially introduced in [3] as a measure of complexity for timeseries analysis, aiming to overcome the limitations of entropy-based complexity measures. It is worth noting that the concept of complexity lacks a singular, universally accepted definition. In [2], complexity was explored from different perspectives, including the effort-to-describe (Shanon entropy, Lempel-Ziv complexity), effort-to-compress (ETC complexity), and degree-of-order (Subsymmetry). The same paper highlighted the superior performance of ETC in distinguishing between periodic and chaotic timeseries. Moreover, ETC has played a pivotal role in the development of an interventional causality testing method called Compression-Complexity-Causality (CCC) [4]. The effectiveness CCC has been tested in various causality discovery applications [5; 6; 7; 8]. ETChas demonstrated good performance when applied to short and noisy time series data, leading to its utilization in diverse fields such as investigating cardiovascular dynamics [9], conducting cognitive research [10], and analysis of musical compositions [11]. The same is not the case with entropy based methods.
In this research, we present a new application of ETC in the field of Machine Learning, offering a fresh perspective on its ability to capture structural impurity. Leveraging this insight, we introduce a decision tree classifier that maximizes the ETC gain. It is crucial to highlight that Shannon entropy and Gini impurity fall short in capturing structural impurity, resulting in an impurity measure that disregards the data's underlying structure (in terms of order). The utilization of ETC as an impurity measure provides the distinct advantage of generating different decision trees for various permutations of data instances. Consequently, this approach frees us from the need to adhere strictly to the i.i.d. assumption commonly employed in Machine Learning. Thus, by simply permuting data instances, we can develop a Permutation Decision Forest.
The paper is structured as follows: Section 2 introduces the Proposed Method, Section 3 presents the Experiments and Results, Section 4 discusses the Limitations of the research, and Section 5 provides the concluding remarks and outlines the future work.
## 2 Proposed Method
In this section, we establish the concept of structural impurity and subsequently present an illustrative example to aid in comprehending the functionality of ETC.
_Definition:_ Structural impurity for a sequence \(S=s_{0},s_{1},\ldots,s_{n}\), where \(s_{i}\in\{0,1,\ldots,K\}\), and \(K\in\mathbf{Z}^{+}\) is the the extent of irregularity in the sequence \(S\).
We will now illustrate how ETC serves as a measure of structural impurity. The formal definition of ETC is the effort required for lossless compression of an object using a predefined lossless compression algorithm. The specific algorithm employed to compute ETC is known as Non-sequential Recursive Pair Substitution (NSRPS). NSRPS was initially proposed by Ebeling [12] in 1980 and has since undergone improvements [13], ultimately proving to be an optimal choice [14]. Notably, NSRPS has been extensively utilized to estimate the entropy of written English [15]. The algorithm is briefly discussed below: Let's consider the sequence \(S=00011\) to demonstrate the iterative steps of the algorithm. In each iteration, we identify the pair of symbols with the highest frequency and replace all non-overlapping instances of that pair with a new symbol. In the case of sequence \(S\), the pair with the maximum occurrence is \(00\). We substitute all occurrences of \(00\) with a new symbol, let's say \(2\), resulting in the transformed sequence \(2011\). We continue applying the algorithm iteratively. The sequence \(2011\) is further modified to become \(311\), where the pair \(20\) is replaced by \(3\). Then, the sequence \(311\) is transformed into \(41\) by replacing \(31\) with \(4\). Finally, the sequence \(41\) is substituted with \(5\). At this point, the algorithm terminates as the stopping criterion is achieved when the sequence becomes homogeneous. ETC, as defined in [3], represents the count of iterations needed for the NSRPS algorithm to attain a homogeneous sequence.
We consider the following three sequence and compute the ETC:
Referring to Table 1, we observe that for sequence A, the ETC, Shannon Entropy, and Gini impurity all have a value of zero. This outcome arises from the fact that the sequence is homogeneous, devoid of any impurity. Conversely, for sequences B, C, D, and E, the Shannon entropy and Gini impurity remain constant, while ETC varies based on the structural characteristics of each sequence. Having shown that the ETC captures the structural impurity of a sequence, we now define _ETC Gain_. ETC
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Sequence ID** & **Sequence** & **ETC** & **Entropy** & **Gini Impurity** \\ \hline A & 111111 & 0 & 0 & 0 \\ \hline B & 121212 & 1 & 1 & 0.5 \\ \hline C & 222111 & 5 & 1 & 0.5 \\ \hline D & 122112 & 4 & 1 & 0.5 \\ \hline E & 211122 & 5 & 1 & 0.5 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of ETC with Shannon entropy, and Gini impurity for various binary sequences.
gain is the reduction in ETC caused by partioning the data instances according to a particular attribute of the dataset. Consider the decision tree structure provided in Figure 1.
The ETC Gain for the chosen parent attribute of the tree is defined as follows:
\[ETC\_Gain=ETC(Parent)-[w_{Left\_Child}\_ETC(Left\_Child)+w_{Right\_Child}\_ETC( Right\_Child)], \tag{1}\]
where \(w_{Left\_Child}\) and \(w_{Right\_Child}\) are the weights associated to left child and right child respectively. The formula for ETC Gain, as given in equation 1, bears resemblance to information gain. The key distinction lies in the use of ETC instead of Shannon entropy in the calculation. We now provide the different steps in the _Permutation Decision Tree_ algorithm.
1. Step 1: Choose an attribute to be the root node and create branches corresponding to each possible value of the attribute.
2. Step 2: Evaluate the quality of the split using ETC gain.
3. Step 3: Repeat Step 1 and Step 2 for all other attributes, recording the quality of split based on ETC gain.
4. Step 4: Select the partial tree with the highest ETC gain as a measure of quality.
5. Step 5: Iterate Steps 1 to 4 for each child node of the selected partial tree.
6. Step 6: If all instances at a node share the same classification (homogeneous class), stop developing that part of the tree.
## 3 Experiments and Results
To showcase the effectiveness of the ETC impurity measure in capturing the underlying structural dependencies within the data and subsequently generating distinct decision trees for different permutations of input data, we utilize the following illustrative toy example.
The visual representation of the toy example provided in Table 2 is represented in Figure 2
Figure 1: Decision Tree structure with a parent node and two child node (Left Child and Right Child).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Serial No.** & \(f_{1}\) & \(f_{2}\) & **label** \\ \hline
1 & 1 & 1 & 2 \\ \hline
2 & 1 & 2 & 2 \\ \hline
3 & 1 & 3 & 2 \\ \hline
4 & 2 & 1 & 2 \\ \hline
5 & 2 & 2 & 2 \\ \hline
6 & 2 & 3 & 2 \\ \hline
7 & 4 & 1 & 2 \\ \hline
8 & 4 & 2 & 2 \\ \hline
9 & 4 & 3 & 1 \\ \hline
10 & 4 & 4 & 1 \\ \hline
11 & 5 & 1 & 1 \\ \hline
12 & 5 & 2 & 1 \\ \hline
13 & 5 & 3 & 1 \\ \hline
14 & 5 & 4 & 1 \\ \hline \end{tabular}
\end{table}
Table 2: Toy example dataset to showcase the potential of a permuted decision tree generated with a novel impurity measure known as “Effort-To-Compress”.
We consider the following permtation of dataset, for each of the below permutation we get distinct decision tree.
* Serial No. Permutation A: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14. Figure 3 represents the corresponding decision tree.
* Serial No Permutation B: 14, 3, 10, 12, 2, 4, 5, 11, 9, 8, 7, 1, 6, 13. Figure 4 represents the corresponding decision tree.
* Serial No Permutation C: 13, 11, 8, 12, 7, 6, 4, 14, 10, 5, 2, 3, 1, 9. Figure 5 represents the corresponding decision tree.
Figure 4: Decision Tree using ETC for Serial No. Permutation B.
Figure 3: Decision using ETC for Serial No. Permutation A.
Figure 2: A visual representation of the toy example provided in Table 2.
* Serial No Permutation D: 3, 2, 13, 10, 11, 1, 4, 7, 6, 9, 8, 14, 5, 12. Figure 6 represents the corresponding decision tree.
The variability in decision trees obtained from different permutations of data instances (Figures 3, 4, 5, 6,and 7) can be attributed to the ETC impurity function's ability to capture the structural impurity of labels, which sets it apart from Shannon entropy and Gini impurity. Table 3 highlights the sensitivity of ETC to permutation, contrasting with the insensitivity of Shannon entropy and Gini impurity towards data instance permutations. In the given toy example, there are six class-1 data instances and eight class-2 data instances. Since Shannon entropy and Gini impurity are probability-based methods, they remain invariant to label permutation. This sensitivity of ETC to the structural pattern of the label motivates us to develop a bagging algorithm namely Permutation Decision Forest.
### Permutation Decision Forest
Permutation decision forest distinguishes itself from Random Forest by eliminating the need for random subsampling of data and feature selection in order to generate distinct decision trees. Instead, permutation decision forest achieves tree diversity through permutation of the data instances. The accompanying architecture diagram provided in Figure 8 illustrates the operational flow of permutation decision forest.
The architecture diagram depicted in Figure 8 showcases the workflow of the Permutation Decision Forest, illustrating its functioning. Consisting of individual permutation decision trees, each tree operates on a permuted dataset to construct a classification model, collectively forming a strong
Figure 8: Architecture diagram of Permutation Decision Forest. Permutation Decision Forest, which comprises multiple individual permutation decision trees. The results from each permutation decision tree are then fed into a voting scheme to determine the final predicted label.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \multirow{2}{*}{Label Impurity} & Shannon Entropy (bits) & Gini Impurity & Effort-To-Compress \\ \hline Permutation A & 0.985 & 0.490 & 7 \\ \hline Permutation B & 0.985 & 0.490 & 8 \\ \hline Permutation C & 0.985 & 0.490 & 9 \\ \hline Permutation D & 0.985 & 0.490 & 9 \\ \hline Permutation E & 0.985 & 0.490 & 8 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison between Shannon Entropy, Gini Impurity and Effort to Compress for the toy example.
classifier. The outcomes of the permutation decision trees are then fed into a voting scheme, where the final predicted label is determined by majority votes. Notably, the key distinction between the Permutation Decision Forest and Random Forest lies in their approaches to obtaining distinct decision trees. While Random Forest relies on random subsampling and feature selection, Permutation Decision Forest achieves diversity through permutation of the input data. This distinction is significant as random feature selection in Random Forest may result in information loss, which is avoided in Permutation Decision Forest.
### Performance comparison between Random Forest and Permutation Decision Forest
We evaluate the performance of the proposed method with the following datasets: _Iris_[16], _Breast Cancer Wisconsin_[17], _Haberman's Survival_[18], _Ionosphere_[19], _Seeds_[20], _Wine_[21]. For all datasets, we allocate 80% of the data for training and reserve the remaining 20% for testing. Table 4 provides a comparison of the hyperparameters used and the test data performance as measured by macro F1-score.
In our experimental evaluations, we observed that the proposed method surpasses Random Forest (F1-score = 0.56) solely for the Haberman's survival dataset (F1-score = 0.621). However, for the Seeds dataset, the permutation decision forest yields comparable performance to Random Forest (F1-score = 0.877). In the remaining cases, Random Forest outperforms the proposed method.
## 4 Limitations
The current framework demonstrates that the proposed method, permutation decision forest, achieves slightly lower classification scores compared to random forest. We acknowledge this limitation and aim to address it in our future work by conducting thorough testing on diverse publicly available datasets. It is important to note that permutation decision trees offer an advantage when dealing with datasets that possess a temporal order in the generation of data instances. In such scenarios, permutation decision trees can effectively capture the specific temporal ordering within the dataset. However, this use case has not been showcased in our present work. In our future endeavors, we intend to incorporate and explore this aspect more comprehensively.
## 5 Conclusion
In this research, we present a unique approach that unveils the interpretation of the _Effort-to-Compress_ (ETC) complexity measure as an impurity measure capable of capturing structural impurity in timeseries data. Building upon this insight, we incorporate ETC into Decision Trees, resulting in the introduction of the innovative _Permutation Decision Tree_. By leveraging permutation techniques, Permutation Decision Tree facilitates the generation of distinct decision trees for varying permutations of data instances. Inspired by this, we further develop a bagging method known as _Permutation Decision Forest_, which harnesses the power of permutation decision trees. Moving forward, we are committed to subjecting our proposed method to rigorous testing using diverse publicly available datasets. Additionally, we envision the application of our method in detecting adversarial attacks.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{**Dataset**}} & \multicolumn{3}{c|}{**Random Forest**} & \multicolumn{3}{c|}{**Permutation**} \\ \multicolumn{1}{|c|}{} & \multicolumn{3}{c|}{**Decision Forest**} \\ \hline & **F1-score** & **n\_estimators** & **max\_depth** & **F1-score** & **n\_estimators** & **max\_depth** \\ \hline
**Iris** & 1.000 & 100 & 3 & 0.931 & 31 & 10 \\ \hline
**Breast Cancer Wisconsin** & 0.918 & 1000 & 9 & 0.893 & 5 & 10 \\ \hline
**Haberman’s Survival** & 0.560 & 1 & 3 & 0.621 & 5 & 10 \\ \hline
**Ionosphere** & 0.980 & 1000 & 4 & 0.910 & 5 & 5 \\ \hline
**Seeds** & 0.877 & 100 & 5 & 0.877 & 11 & 10 \\ \hline
**Wine** & 0.960 & 10 & 4 & 0.943 & 5 & 10 \\ \hline \end{tabular}
\end{table}
Table 4: Performance comparison of Permutation Decision Forest with Random Forest for various publicly available datasets
## References
* [1] Jamie Murphy, Charles Hofacker, and Richard Mizerski. Primacy and recency effects on clicking behavior. _Journal of computer-mediated communication_, 11(2):522-535, 2006.
* [2] Nithin Nagaraj and Karthi Balasubramanian. Three perspectives on complexity: entropy, compression, subsymmetry. _The European Physical Journal Special Topics_, 226:3251-3272, 2017.
* [3] Nithin Nagaraj, Karthi Balasubramanian, and Suitrth Dey. A new complexity measure for time series analysis and classification. _The European Physical Journal Special Topics_, 222(3-4):847-860, 2013.
* [4] Aditi Kathpalia and Nithin Nagaraj. Data-based intervention approach for complexity-causality measure. _PeerJ Computer Science_, 5:e196, 2019.
* [5] SY Pranay and Nithin Nagaraj. Causal discovery using compression-complexity measures. _Journal of Biomedical Informatics_, 117:103724, 2021.
* [6] Vikram Ramanan, Nikhil A Baraiya, and SR Chakravarthy. Detection and identification of nature of mutual synchronization for low-and high-frequency non-premixed syngas combustion dynamics. _Nonlinear Dynamics_, 108(2):1357-1370, 2022.
* [7] Aditi Kathpalia, Pouya Manshour, and Milan Palus. Compression complexity with ordinal patterns for robust causal inference in irregularly sampled time series. _Scientific Reports_, 12(1):1-14, 2022.
* [8] Harikrishnan NB, Aditi Kathpalia, and Nithin Nagaraj. Causality preserving chaotic transformation and classification using neurochaos learning. _Advances in Neural Information Processing Systems_, 35:2046-2058, 2022.
* [9] Karthi Balasubramanian, K Harikumar, Nithin Nagaraj, and Sandipan Pati. Vagus nerve stimulation modulates complexity of heart rate variability differently during sleep and wakefulness. _Annals of Indian Academy of Neurology_, 20(4):403, 2017.
* [10] Vasilios K Kimiskidis, Christos Koutlis, Alkiviadis Tsimpiris, Reetta Kalviainen, Philippe Ryvlin, and Dimitris Kugiumtzis. Transcranial magnetic stimulation combined with eeg reveals covert states of elevated excitability in the human epileptic brain. _International journal of neural systems_, 25(05):1550018, 2015.
* [11] Abhishek Nandekar, Preeth Khona, MB Rajani, Anindya Sinha, and Nithin Nagaraj. Causal analysis of carnatic music compositions. In _2021 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT)_, pages 1-6. IEEE, 2021.
* [12] Werner Ebeling and Miguel A Jimenez-Montano. On grammars, complexity, and information measures of biological macromolecules. _Mathematical Biosciences_, 52(1-2):53-71, 1980.
* [13] Miguel A Jimenez-Montano, Werner Ebeling, Thomas Pohl, and Paul E Rapp. Entropy and complexity of finite sequences as fluctuating quantities. _Biosystems_, 64(1-3):23-32, 2002.
* [14] Dario Benedetto, Emanuele Caglioti, and Davide Gabrielli. Non-sequential recursive pair substitution: some rigorous results. _Journal of Statistical Mechanics: Theory and Experiment_, 2006(09):P09011, 2006.
* [15] Peter Grassberger. Data compression and entropy estimates by non-sequential recursive pair substitution. _arXiv preprint physics/0207023_, 2002.
* [16] R. A. FISHER. The use of multiple measurements in taxonomic problems. _Annals of Eugenics_, 7(2):179-188, 1936.
* [17] W Nick Street, William H Wolberg, and Olvi L Mangasarian. Nuclear feature extraction for breast tumor diagnosis. In _Biomedical image processing and biomedical visualization_, volume 1905, pages 861-870. SPIE, 1993.
* [18] Shelby J Haberman. The analysis of residuals in cross-classified tables. _Biometrics_, pages 205-220, 1973.
* [19] Vincent G Sigillito, Simon P Wing, Larrie V Hutton, and Kile B Baker. Classification of radar returns from the ionosphere using neural networks. _Johns Hopkins APL Technical Digest_, 10(3):262-266, 1989.
* [20] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
* [21] Michele Forina, Riccardo Leardi, Armanino C, and Sergio Lanteri. _PARVUS: An Extendable Package of Programs for Data Exploration_. 01 1998. | ## Review
### Summary
The paper proposes a novel splitting criterion for decision trees based on the Effort-To-Compress (ETC) complexity measure, aiming to address order-dependent data. While the idea of using ETC is innovative, the theoretical foundation is insufficiently defined, and the experimental validation lacks depth. The authors present a permutation voting forest that generates diverse decision trees from shuffled data, but results show it performs worse than traditional random forests on several datasets. The paper attempts to explore a new research direction in handling non-i.i.d. data; however, it does not provide adequate empirical evidence to support its claims.
### Strengths
- Presents a novel approach to decision trees that considers order dependencies in data.
- Introduces Effort-To-Compress as a new impurity measure for decision trees.
- The concept of permutation voting forests allows for the creation of diverse trees from shuffled datasets.
### Weaknesses
- The theoretical justification for the proposed method is underdeveloped.
- Experimental evaluation is limited, using only a small number of datasets without cross-validation.
- The performance of the proposed method does not meet or exceed that of traditional random forests.
- There are inconsistencies in notation and unfair comparisons in experimental settings.
- Lack of formal mathematical explanation for the ETC measure and related algorithms.
### Questions
- What specific types of datasets or real-world applications justify the use of order-sensitive methods?
- How do hyperparameters impact the performance of the proposed method compared to traditional methods?
- Should the authors consider comparing their method against other approaches beyond random forests?
### Soundness
**Score:** 1
**Description:** 1 = poor; the theoretical foundations are weak, and empirical validation is inadequate.
### Presentation
**Score:** 2
**Description:** 2 = fair; while the paper is generally well-structured, there are inconsistencies and presentation issues that detract from clarity.
### Contribution
**Score:** 2
**Description:** 2 = fair; the paper introduces a novel concept but lacks sufficient validation and clarity in its significance.
### Rating
**Score:** 2
**Description:** 2 = Strong Reject; the paper exhibits major technical flaws, poor evaluation, and limited impact.
### Paper Decision
**Decision:** Reject
**Reasons:** The paper presents an interesting but poorly substantiated approach to a novel problem in decision tree analysis. It lacks a solid theoretical foundation and fails to provide convincing experimental evidence to validate its claims. The limited scope of evaluation and issues with clarity further contribute to the decision to reject.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Gaussian Membership Inference Privacy
Tobias Leemann
University of Tubingen
Technical University of Munich
&Martin Pawelczyk
Harvard University
Equal contribution. Corresponding authors: [email protected] and [email protected].
Gjergji Kasneci
Technical University of Munich
###### Abstract
We propose a novel and practical privacy notion called \(f\)-Membership Inference Privacy (\(f\)-MIP), which explicitly considers the capabilities of realistic adversaries under the membership inference attack threat model. Consequently, \(f\)-MIP offers interpretable privacy guarantees and improved utility (e.g., better classification accuracy). In particular, we derive a parametric family of \(f\)-MIP guarantees that we refer to as \(\mu\)-Gaussian Membership Inference Privacy (\(\mu\)-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD). Our analysis highlights that models trained with standard SGD already offer an elementary level of MIP. Additionally, we show how \(f\)-MIP can be amplified by adding noise to gradient updates. Our analysis further yields an analytical membership inference attack that offers two distinct advantages over previous approaches. First, unlike existing state-of-the-art attacks that require training hundreds of shadow models, our attack does not require _any_ shadow model. Second, our analytical attack enables straightforward auditing of our privacy notion \(f\)-MIP. Finally, we quantify how various hyperparameters (e.g., batch size, number of model parameters) and specific data characteristics determine an attacker's ability to accurately infer a point's membership in the training set. We demonstrate the effectiveness of our method on models trained on vision and tabular datasets.
## 1 Introduction
Machine learning (ML) has seen a surge in popularity and effectiveness, leading to its widespread application across various domains. However, some of these domains, such as finance and healthcare, deal with sensitive data that cannot be publicly shared due to ethical or regulatory concerns. Therefore, ensuring data privacy becomes crucial at every stage of the ML process, including model development and deployment. In particular, the trained model itself [5, 31] or explanations computed to make the model more interpretable [29, 32] may leak information about the training data if appropriate measures are not taken. For example, this is a problem for recent generative Diffusion Models [7] and Large Language models, where the data leakage seems to be amplified by model size [6].
Differential privacy (DP) [14] is widely acknowledged as the benchmark for ensuring provable privacy in academia and industry [10]. DP utilizes randomized algorithms during training and guarantees that the output of the algorithm will not be significantly influenced by the inclusion or exclusion of any individual sample in the dataset. This provides information-theoretic protection against the maximum amount of information that an attacker can extract about any specific sample in the dataset, even when an attacker has full access to and full knowledge about the predictive model.
While DP is an appealing technique for ensuring privacy, DP's broad theoretical guarantees often come at the expense of a significant loss in utility for many ML algorithms. This utility loss cannot be further reduced by applying savvier algorithms: Recent work [26, 27] confirms that an attacker canbe implemented whose empirical capacity to differentiate between neighboring datasets \(D\) and \(D^{\prime}\) when having access to privatized models matches the theoretical upper bound. This finding suggests that to improve a model's utility, we need to take a step back and inspect the premises underlying DP. For example, previous work has shown that privacy attacks are much weaker when one imposes additional realistic restrictions on the attacker's capabilities [26].
In light of these findings, we revisit the DP threat model and identify three characteristics of an attacker that might be overly restrictive in practice. First, DP grants the attacker full control over the dataset used in training including the capacity to poison all samples in the dataset. For instance, DP's protection includes pathological cases such as an empty dataset and a dataset with a single, adversarial instance [27]. Second, in many applications, it is more likely that the attacker only has access to an API to obtain model predictions [31, 13] or to model gradients [20]. Finally, one may want to protect typical samples from the data distribution. As argued by Triastcyn & Faltings [38], it may be over-constraining to protect images of dogs in a model that is conceived and trained with images of cars.
Such more realistic attackers have been studied in the extensive literature on Membership Inference (MI) attacks (e.g., [5, 41]), where the attacker attempts to determine whether a sample from the data distribution was part of the training dataset. Under the MI threat model, Carlini et al. [5] observe that ML models with very lax (\(\epsilon>5000\)) or no (\(\epsilon=\infty\)) DP-guarantees still provide some defense against membership inference attacks [5, 41]. Hence, we hypothesize that standard ML models trained with low or no noise injection may already offer some level of protection against realistic threats such as MI, despite resulting in very large provable DP bounds.
To build a solid groundwork for our analysis, we present a hypothesis testing interpretation of MI attacks. We then derive \(f\)-Membership Inference Privacy (\(f\)-MIP), which bounds the trade-off between an MI attacker's false positive rate (i.e., FPR, type I errors) and false negative rate (i.e., FNR, type II errors) in the hypothesis testing problem by some function \(f\). We then analyze the privacy leakage of a gradient update step in stochastic gradient descent (SGD) and derive the first analytically optimal attack based on a likelihood ratio test. However, for \(f\)-MIP to cover practical scenarios, post-processing and composition operations need to be equipped with tractable privacy guarantees as well. Using \(f\)-MIP's handy composition properties, we analyze full model training via SGD and derive explicit \(f\)-MIP guarantees. We further extend our analysis by adding carefully calibrated noise to the SGD updates to show that \(f\)-MIP may be guaranteed without any noise or with less noise than the same parametric level of \(f\)-DP [11], leading to a smaller loss of utility.
Our analysis comes with a variety of novel insights: We confirm our hypothesis that, unlike for DP, no noise (\(\tau^{2}=0\)) needs to be added during SGD to guarantee \(f\)-MIP. Specifically, we prove that the trade-off curves of a single SGD step converge to the family of Gaussian trade-offs identified by Dong et al. [11] and result in the more specific \(\mu\)-Gaussian Membership Inference Privacy (\(\mu\)-GMIP). The main contributions this research offers to the literature on privacy preserving ML include:
1. **Interpretable and practical privacy notion**: We suggest the novel privacy notion of \(f\)-MIP that addresses the realistic threat of MI attacks. \(f\)-MIP considers the MI attacker's full trade-off curve between false positives and false negatives. Unlike competing notions, \(f\)-MIP offers appealing composition and post-processing properties.
2. **Comprehensive theoretical analysis**: We provide (tight) upper bounds on any attacker's ability to run successful MI attacks, i.e., we bound any MI attacker's ability to identify whether points belong to the training set when ML models are trained via gradient updates.
3. **Verification and auditing through novel attacks**: As a side product of our theoretical analysis, which leverages the Neyman-Pearson lemma, we propose a novel set of attacks for auditing privacy leakages. An important advantage of our analytical Gradient Likelihood-Ratio (GLiR) attack is its computational efficiency. Unlike existing attacks that rely on training hundreds of shadow models to approximate the likelihood ratio, our attack does not require any additional training steps.
4. **Privacy amplification through noise addition**: Finally, our analysis shows how one can use noisy SGD (also known as Differentially Private SGD [1]) to reach \(f\)-MIP while maintaining worst-case DP guarantees. Thereby our work establishes a theoretical connection between \(f\)-MIP and \(f\)-DP [11], which allows to translate an \(f\)-DP guarantee into an \(f\)-MIP guarantee and vice versa.
Related Work
**Privacy notions.** DP and its variants provide robust, information-theoretic privacy guarantees by ensuring that the probability distribution of an algorithm's output remains stable even when one sample of the input dataset is changed [14]. For instance, a DP algorithm is \(\varepsilon\)-DP if the probability of the algorithm outputting a particular subset \(E\) for a dataset \(S\) is not much higher than the probability of outputting \(E\) for a dataset \(S_{0}\) that differs from \(S\) in only one element. DP has several appealing features, such as the ability to combine DP algorithms without sacrificing guarantees.
A few recent works have proposed to carefully relax the attacker's capabilities in order to achieve higher utility from private predictions [4; 13; 17; 38]. For example, Dwork & Feldman [13] suggest the notion of "privacy-preserving prediction" to make private model predictions through an API interface. Their work focuses on PAC learning guarantees of any class of Boolean functions. Similarly, Triastoyn & Faltings [38] suggest "Bayesian DP", which is primarily based on the definition of DP, but restricts the points in which the datasets \(S\) and \(S_{0}\) may differ to those sampled from the data distribution. In contrast, Izzo et al. [17] introduces a notion based on MI attacks, where their approach guarantees that an adversary \(\mathcal{A}\) does not gain a significant advantage in terms of accuracy when distinguishing whether an element \(\mathbf{x}\) was in the training data set compared to just guessing the most likely option. However, they only constrain the accuracy of the attacker, while we argue that it is essential to bound the entire trade-off curve, particularly in the low FPR regime, to prevent certain re-identification of a few individuals [5]. Our work leverages a hypothesis testing formulation that covers the entire trade-off curve thereby offering protection also to the most vulnerable individuals. Additionally, our privacy notion maintains desirable properties such as composition and privacy amplification through subsampling, which previous notions did not consider.
**Privacy Attacks on ML Models.** Our work is also related to auditing privacy leakages through a common class of attacks called MI attacks. These attacks determine if a given instance is present in the training data of a particular model [5; 7; 9; 15; 23; 29; 30; 31; 32; 35; 36; 40; 41]. Compared to these works, our work suggests a new much stronger class of MI attacks that is analytically derived and uses information from model gradients. An important advantage of our analytically derived attack is its computational efficiency, as it eliminates the need to train any additional shadow models.
## 3 Preliminaries
The classical notion of \((\varepsilon,\delta)\)-differential privacy [14] is the current workhorse of private ML and can be described as follows: An algorithm is DP if for any two neighboring datasets \(S,S^{\prime}\) (that differ by one instance) and any subset of possible outputs, the ratio of the probabilities that the algorithm's output lies in the subset for inputs \(S,S^{\prime}\) is bounded by a constant factor. DP is a rigid guarantee, that covers _every_ pair of datasets \(S\) and \(S^{\prime}\), including pathologically crafted datasets (for instance, Nasr et al. [27] use an empty dataset) that might be unrealistic in practice. For this reason, we consider a different attack model in this work: The MI game [41]. This attack mechanism on ML models follows the goal of inferring an individual's membership in the training set of a learned ML model. We will formulate this problem using the language of hypothesis testing and trade-off functions, a concept from hypothesis testing theory [11]. We will close this section by giving several useful properties of trade-off functions which we leverage in our main theoretical results presented in Sections 4 and 5.
### Membership Inference Attacks
The overarching goal of privacy-preserving machine learning lies in protecting personal data. To this end, we will show that an alternative notion of privacy can be defined through the success of a MI attack which attempts to infer whether a given instance was present in the training set or not. Following Yeom et al. [41] we define the standard MI experiment as follows:
**Definition 3.1** (Membership Inference Experiment [41]).: _Let \(\mathcal{A}\) be an attacker, \(A\) be a learning algorithm, \(N\) be a positive integer, and \(\mathcal{D}\) be a distribution over data points \(\mathbf{x}\in D\), where the vector \(\mathbf{x}\) may also be a tuple of data and labels. The MI experiment proceeds as follows: The model and data owner \(\mathcal{O}\) samples \(S\sim\mathcal{D}_{N}\) (i.e, sample n points i.i.d. from \(\mathcal{D}\)) and trains \(A_{S}=A(S)\). They choose \(b\in\{0,1\}\) uniformly at random and draw \(\mathbf{x}^{\prime}\sim\mathcal{D}\) if \(b=0\), or \(\mathbf{x}^{\prime}\sim S\) if \(b=1\). Finally, the attacker is successful if \(\mathcal{A}(\mathbf{x}^{\prime},A_{S},N,\mathcal{D})=b\). \(\mathcal{A}\) must output either 0 or 1._We note that the membership inference threat model features several key differences to the threat model underlying DP, which are listed in Table 1. Most notably, in MI attacks, the datasets are sampled from the distribution \(\mathcal{D}\), whereas DP protects all datasets. This corresponds to granting the attacker the capacity of full dataset manipulation. Therefore, the MI attack threat model is sensible in cases where the attacker cannot manipulate the dataset through injection of malicious samples also called "canaries". This may be realistic for financial and healthcare applications, where the data is often collected from actual events (e.g., past trades) or only a handful of people (trusted hospital staff) have access to the records. In such scenarios, it might be overly restrictive to protect against worst-case canary attacks as attackers cannot freely inject arbitrary records into the training datasets. Furthermore, MI attacks are handy as a fundamental ingredient in crafting data extraction attacks [6]. Hence we expect a privacy notion based on the MI threat model to offer protection against a broader class of reconstruction attacks. Finally, being an established threat in the literature [5; 9; 31; 40; 41], MI can be audited through a variety of existing attacks.
### Membership Inference Privacy as a Hypothesis Testing Problem
While DP has been studied through the perspective of a hypothesis testing formulation for a while [3; 11; 19; 39], we adapt this route to formulate membership inference attacks. To this end, consider the following hypothesis test:
\[\text{H}_{0}:\mathbf{x}^{\prime}\notin S\text{ vs. }\text{H}_{1}:\mathbf{x}^{\prime} \in S. \tag{1}\]
Rejecting the null hypothesis corresponds to detecting the presence of the individual \(\mathbf{x}^{\prime}\) in \(S\), whereas failing to reject the null hypothesis means inferring that \(\mathbf{x}^{\prime}\) was not part of the dataset \(S\). The formulation in (1) is a natural vehicle to think about any attacker's capabilities in detecting members of a train set in terms of false positive and true positive rates. The motivation behind these measures is that the attacker wants to reliably identify the subset of data points belonging to the training set (i.e., true positives) while incurring as few false positive errors as possible [5]. In other words, the attacker wants to maximize their true positive rate at any chosen and ideally low false positive rate (e.g., 0.001). From this perspective, the formulation in (1) allows to define membership inference privacy via trade-off functions \(f\) which exactly characterize the relation of false negative rates (i.e., 1-TPR) and false positive rates that an optimal attacker can achieve.
**Definition 3.2**.: _(Trade-off function [11]) For any two probability distributions \(P\) and \(Q\) on the same space, denote the trade-off function Test\((P;Q):[0;1]\rightarrow[0;1]\)_
\[\text{Test}(P;Q)(\alpha)=\inf\left\{\text{FNR}\mid\text{FPR}=\alpha\right\}, \tag{2}\]
_where the infimum is taken over all (measurable) rejection rules ("tests") which lead to a FPR of \(\alpha\) between distributions \(P,Q\)._
\begin{table}
\begin{tabular}{c l l} \hline \hline & f-DP threat model & f-MIP threat model (this work) \\ \hline Goal & Distinguish between \(S\) and \(S^{\prime}\) for _any_\(S,S^{\prime}\) that differ in at most one instance. & Distinguish whether \(\mathbf{x}^{\prime}\in S\) (training data set) or not. \\ \hline Dataset access & Attacker has full data access. For example, the attacker can poison or adversarially combat attack on which ML models could be trained; e.g., \(S=\{\}\) and \(S^{\prime}=\{10^{6}\}\). & Attacker has no access to the training data set; i.e., the model owner privately trains their model free of adversarially poisoned samples. \\ \hline Protected Instances & The instance in which \(S\) and \(S^{\prime}\) differ is arbitrary. This includes OOD samples and extreme outliers. & The sample \(\mathbf{x}^{\prime}\) for which membership is to be inferred is drawn from the data distribution \(\mathcal{D}\). Therefore, MI is concerned with typical samples that can occur in practice. \\ \hline Best used & When the specific attack model is unknown. Offers a form of general protection. & When dataset access (e.g. canary injection) of an attacker can be ruled out and the main attack goal lies in revealing private training data (e.g., membership inference, data reconstruction). \\ \hline Model knowledge & The attacker knows the model architecture and has full access to the model in form of its parameters, hyperparameters and its model outputs. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparing the threat models underlying \(f\)-DP and \(f\)-MIP.
Not every function makes for a valid trade-off function. Instead, trade-off functions possess certain characteristics that are handy in their analysis.
**Definition 3.3** (Characterization of trade-off functions [11]).: _A function \(f:[0,1]\rightarrow[0,1]\) is a trade-off function if \(f\) is convex, continuous at zero, non-increasing, and \(f(r)\leq 1-r\) for \(r\in[0,1]\)._
We additionally introduce a semi-order on the space of trade-off functions to make statements on the hardness of different trade-offs in relation to each other.
**Definition 3.4** (Comparing trade-offs).: _A trade-off function \(f\) is uniformly at least as hard as another trade-off function \(g\), if \(f(r)\geq g(r)\) for all \(0\leq r\leq 1\). We write \(f\geq g\)._
If \(\mathrm{Test}(P;Q)\geq\mathrm{Test}(P^{\prime};Q^{\prime})\), testing \(P\) vs \(Q\) is uniformly at least as hard as testing \(P^{\prime}\) vs \(Q^{\prime}\). Intuitively, this means that for a given FPR \(\alpha\), the best test possible test on \((P;Q)\) will result in an equal or higher FNR than the best test on \((P^{\prime};Q^{\prime})\).
### Noisy Stochastic Gradient Descent (Noisy SGD)
Most recent large-scale ML models are trained via stochastic gradient descent (SGD). Noisy SGD (also known as DP-SGD) is a variant of classical SGD that comes with privacy guarantees. We consider the algorithm as in the work by Abadi et al. [1], which we restate for convenience in Appendix A. While its characteristics with respect to DP have been extensively studied, we take a fundamentally different perspective in this work and study the capabilities of this algorithm to protect against membership inference attacks. In summary, the algorithm consists of three fundamental steps: _gradient clipping_ (i.e., \(\mathbf{\theta}_{i}\coloneqq\mathbf{g}(\mathbf{x}_{i},y_{i})\cdot\max(1,C/\|\mathbf{g}(\mathbf{x} _{i},y_{i})\|)\) where \(\mathbf{g}(\mathbf{x}_{i},y_{i})=\nabla\mathcal{L}(\mathbf{x}_{i},y_{i})\) is the gradient with respect to the loss function \(\mathcal{L}\)), _aggregation_ (i.e., \(\mathbf{m}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{\theta}_{i}\)) and _adding Gaussian noise_ (i.e., \(\tilde{\mathbf{m}}=\mathbf{m}+Y\) where \(Y\sim\mathcal{N}(\mathbf{0},\tau^{2}\mathbf{I})\) with variance parameter \(\tau^{2}\)). To obtain privacy bounds for this algorithm, we study MI attacks for means of random variables. This allows us to bound the MI vulnerability of SGD.
## 4 Navigating Between Membership Inference Privacy and DP
In this section, we formally define our privacy notion \(f\)-MIP. To this end, it will be handy to view MI attacks as hypothesis tests.
### Membership Inference Attacks from a Hypothesis Testing Perspective
Initially, we define the following distributions of the algorithm's output
\[A_{0}=A(\mathbf{X}\cup\{\mathbf{x}\})\text{ with }\mathbf{X}\sim\mathcal{D}^{n-1},\mathbf{x} \sim\mathcal{D}\text{ and }A_{1}(\mathbf{x}^{\prime})=A(\mathbf{X}\cup\{\mathbf{x}^{\prime}\}) \text{ with }\mathbf{X}\sim\mathcal{D}^{n-1}, \tag{3}\]
where we denote other randomly sampled instances that go into the algorithm by \(\mathbf{X}=\{\mathbf{x}_{1},...\mathbf{x}_{n-1}\}\). Here \(A_{0}\) represents the output distribution under the null hypothesis (\(H_{0}\)) where the sample \(\mathbf{x}^{\prime}\) is not part of the training dataset. On the other hand, \(A_{1}\) is the output distribution under the alternative hypothesis (\(H_{1}\)) where \(\mathbf{x}^{\prime}\) was part of the training dataset. The output contains randomness due to the instances drawn from the distribution \(\mathcal{D}\) and due to potential inherent randomness in \(A\).
We observe that the distribution \(A_{1}\) depends on the sample \(\mathbf{x}^{\prime}\) which is known to the attacker. The attacker will have access to samples for which \(A_{0}\) and \(A_{1}(\mathbf{x}^{\prime})\) are simpler to distinguish and others where the distinction is harder. To reason about the characteristics of such a stochastically composed test, we define a composition operator that defines an optimal test in such a setup. To obtain a global FPR of \(\alpha\), an attacker can target different FPRs \(\bar{\alpha}(\mathbf{x}^{\prime})\) for each specific test. We need to consider the optimum over all possible ways of choosing \(\bar{\alpha}(\mathbf{x}^{\prime})\), which we refer to as _test-specific FPR function_, giving rise to the following definition.
**Definition 4.1** (Stochastic composition of trade-off functions).: _Let \(\mathcal{F}\) be a family of trade-off functions, \(h:D\subset\mathbb{R}^{d}\rightarrow\mathcal{F}\) be a function that maps an instance of the data domain to a corresponding trade-off function, and \(\mathcal{D}\) be a probability distribution on \(D\). The set of valid test-specific FPR functions \(\bar{\alpha}:D\rightarrow[0,1]\) that result in a global FPR of \(\alpha\in[0,1]\) given the distribution \(\mathcal{D}\) is defined through_
\[\mathcal{E}(\alpha,\mathcal{D})=\left\{\bar{\alpha}:D\rightarrow[0,1]\ \middle| \,\mathbb{E}_{\mathbf{x}^{\prime}\sim\mathcal{D}}\left[\bar{\alpha}(\mathbf{x}^{ \prime})\right]=\alpha\right\}. \tag{4}\]_For a given test-specific FPR function, \(\bar{\alpha}\) the global false negative rate (type II error) \(\beta\) is given by_
\[\beta_{h}(\bar{\alpha})=\mathbb{E}_{\mathbf{x}^{\prime}\sim\mathcal{D}} \left[h(\mathbf{x})(\bar{\alpha}(\mathbf{x}))\right], \tag{5}\]
_where \(\bar{\alpha}(\mathbf{x})\) is the argument to the trade-off function \(h(\mathbf{x})\in\mathcal{F}\). For a global \(\alpha\in[0,1]\) the stochastic composition of these trade-functions is defined as_
\[\left(\bigotimes_{\mathbf{x}\sim\mathcal{D}}h(\mathbf{x})\right)(\alpha) =\min_{\bar{\alpha}\in\mathcal{E}(\alpha,\mathcal{D})}\left\{\beta_{h}(\bar{ \alpha})\right\}, \tag{6}\]
_(supposing the minimum exists), the smallest global false negative rate possible at a global FPR of \(\alpha\)._
This definition specifies the trade-off function \(\bigotimes_{\mathbf{x}\sim\mathcal{D}}h(\mathbf{x}):[0,1]\rightarrow[0,1]\) of a stochastic composition of several trade-offs. While it is reminiscent of the "most powerful test" (MPT) [28], there are several differences to the MPT that are important in our work. Most prominently, a straightforward construction of the MPT to MI problems does not work since the adversary does not only run one hypothesis test to guess whether one sample belongs to the training data set or not; instead, the adversary draws multiple samples and runs sample-dependent and (potentially) different hypotheses tests for each drawn sample. This is necessary due to the form of the alternative hypotheses in the formulation of the test in (3), which depends on the sample \(\mathbf{x}^{\prime}\). We therefore require a tool to compose the results from different hypothesis tests. Finally, we prove that the trade-off of the stochastic composition has the properties of a trade-off function (see App. D.1):
**Theorem 4.1** (Stochastic composition of trade-off functions).: _The stochastic composition \(\bigotimes_{\mathbf{x}\sim\mathcal{D}}h(\mathbf{x})\) of trade-off functions \(h(\mathbf{x})\) maintains the characteristics of a trade-off function, i.e., it is convex, non-increasing, \(\left(\bigotimes_{\mathbf{x}\sim\mathcal{D}}h(\mathbf{x})\right)(r)\leq 1-r\) for all \(r\in[0,1]\), and it is continuous at zero._
### \(f\)-Membership Inference Privacy (\(f\)-Mip)
This rigorous definition of the stochastic composition operator allows us to define membership inference privacy from a hypothesis testing perspective.
**Definition 4.2** (\(f\)-Membership Inference Privacy).: _Let \(f\) be a trade-off function. An algorithm1\(A:D^{n}\rightarrow\mathbb{R}^{d}\) is said to be \(f\)-membership inference private (\(f\)-MIP) with respect to a data distribution \(\mathcal{D}\) if_
Footnote 1: When using the term “algorithm”, we also include randomized mappings.
\[\bigotimes_{\mathbf{x}^{\prime}\sim\mathcal{D}}\text{Test}\left(A_{0 };A_{1}(\mathbf{x}^{\prime})\right)\geq f, \tag{7}\]
_where \(\mathbf{x}^{\prime}\sim\mathcal{D}\) and \(\bigotimes\) denotes the stochastic composition built from individual trade-off functions of the MI hypotheses tests for random draws of \(\mathbf{x}^{\prime}\)._
In this definition, both sides are functions dependent on the false positive rate \(\alpha\). A prominent special case of a trade-off function is the Gaussian trade-off, which stems from testing one-dimensional normal distributions of unit variance that are spaced apart by \(\mu\in\mathbb{R}_{\geq 0}\). Therefore, defining the following special case of \(f\)-MIP will be useful.
**Definition 4.3** (\(\mu\)-Gaussian Membership Inference Privacy).: _Let \(\Phi\) be the cumulative distribution function (CDF) of a standard normal distribution. Define \(g_{\mu}(\alpha)\coloneqq\Phi(\Phi^{-1}(1-\alpha)-\mu)\) to be the trade-off function derived from testing two Gaussians; one with mean \(0\) and one with mean \(\mu\). An algorithm \(A\) is \(\mu\)-Gaussian Membership Inference private (\(\mu\)-GMP) with privacy parameter \(\mu\) if it is \(g_{\mu}\)-MIP, i.e., it is MI private with trade-off function \(g_{\mu}\)._
**Remark 4.1**.: _DP can also be defined via the Gaussian trade-off function, which results in \(\mu\)-Gaussian Differential Privacy (\(\mu\)-GDP, [11]). While the trade-off curves for both \(\mu\)-GDP and \(\mu\)-GMP have the same parametric form, they have different interpretations: \(\mu\)-GDP describes the trade-off function that an attacker with complete knowledge (left column in Table 1) could achieve while \(\mu\)-GMP describes the trade-off function that an attacker with MI attack capability can achieve (right column in Table 1). In the next section we will quantify their connection further._
### Relating \(f\)-MIP and \(f\)-Dp
We close this section by providing first results regarding the relation between \(f\)-DP and \(f\)-MIP. As expected, \(f\)-DP is strictly stronger than \(f\)-MIP, which can be condensed in the following result:
**Theorem 4.2** (\(f\)-Dp implies \(f\)-Mip).: _Let an algorithm \(A:D^{n}\rightarrow\mathbb{R}^{d}\) be \(f\)-differentially private [11]. Then, algorithm \(A\) will also be \(f\)-membership inference private._
We proof this result in Appendix F.1. This theorem suggests one intuitive, simple and yet actionable approach to guarantee Membership Inference Privacy. This approach involves the use of DP learning algorithms such as DP-SGD [1], which train models using noised gradients. However, as we will see in the next section, using noise levels to guarantee \(f\)-DP is usually suboptimal to guarantee \(f\)-MIP.
## 5 Implementing \(f\)-MIP through Noisy SGD
We would now like to obtain a practical learning algorithm that comes with \(f\)-MIP guarantees. As the dependency between the final model parameters and the input data is usually hard to characterize, we follow the common approach and trace the information flow from the data to the model parameters through the training process of stochastic gradient descent [1, 33]. Since the gradient updates are the only path where information flows from the data into the model, it suffices to privatize this step.
### \(f\)-MIP for One Step of Noisy SGD
We start by considering a single SGD step. Following prior work [1, 33], we make the standard assumption that only the mean over the individual gradients \(\mathbf{m}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{\theta}_{i}\) is used to update the model (or is published directly) where \(\mathbf{\theta}_{i}\in\mathbb{R}^{d}\) is a sample gradient. Consistent with the definition of the membership inference game, the attacker tries to predict whether a specific gradient \(\mathbf{\theta}^{\prime}\) was part of the set \(\left\{\mathbf{\theta}_{i}\right\}_{i}\) that was used to compute the model's mean gradient \(\mathbf{m}\) or not. We are interested in determining the shape of the attacker's trade-off function. For the sake of conciseness, we directly consider one step of noisy SGD (i.e., one averaging operation with additional noising, see Algorithm 2 from the Appendix), which subsumes a result for the case without noise by setting \(\tau^{2}=0\). We establish the following theorem using the Central Limit Theorem (CLT) for means of adequately large batches of \(n\) sample gradients, which is proven in Appendix E.
**Theorem 5.1** (One-step noisy SGD is \(f\)-membership inference private).: _Denote the cumulative distribution function of the non-central chi-squared distribution with \(d\) degrees of freedom and non-centrality parameter \(\gamma\) by \(F_{\chi^{2}_{d}(\gamma)}\). Let the gradients \(\mathbf{\theta}^{\prime}\in\mathbb{R}^{d}\) of the test points follow a distribution with mean \(\mathbf{\mu}\) and covariance \(\mathbf{\Sigma}\), let \(K\geq\|\mathbf{\Sigma}^{-1/2}\mathbf{\theta}^{\prime}\|_{2}^{2}\) and define \(n_{\text{effective}}=n+\frac{\tau^{2}n^{2}}{G^{2}}\). For sufficiently large batch sizes \(n\), one step of noisy SGD is \(f\)-membership inference private with trade-off function given by:_
\[\beta(\alpha)\approx 1-F_{\chi^{2}_{d}((n_{\text{effective}}-1)K)}\left( \frac{n_{\text{effective}}}{n_{\text{effective}}-1}F_{\chi^{2}_{d}(n_{\text{ effective}},K)}^{-1}(\alpha)\right). \tag{8}\]
The larger the number of parameters \(d\) and the batch size \(n\) grow, the more the trade-off curve approaches the \(\mu\)-GMIP curve, which we show next (see Figure 1).
**Corollary 5.1** (One step noisy SGD is approx. \(\mu\)-GMIP).: _For large \(d,n\), noisy SGD is approximately \(g_{\text{nsep}}\)-GMIP. In particular, \(\beta(\alpha)\approx\Phi(\Phi^{-1}(1-\alpha)-\mu_{\text{Step}})\) with privacy parameter:_
\[\mu_{\text{step}}=\frac{d+(2n_{\text{effective}}-1)K}{n_{\text{ effective}}\sqrt{2d+4n_{\text{effective}}K}}. \tag{9}\]
This result is striking in its generality as it also covers models trained without additional noise or gradient cropping (\(n_{\text{effective}}=n\) in that case). Unlike for DP, even standard models trained with non-noisy SGD offer an elementary level of MIP. Our result further explicitly quantifies four factors
Figure 1: **Trade-off function convergence. The trade-off function from Theorem 5.1 converges to the one from Corollary 5.1 where \(\tau^{2}{=}0\) and \(K{=}d\).**that lead to attack success: the batch size \(n\), the number of parameters \(d\), the strength of the noise \(\tau^{2}\) and the worst-case data-dependent _gradient susceptibility_\(\|\mathbf{\Sigma}^{-1/2}\boldsymbol{\theta}^{\prime}\|_{2}^{2}\). The closeness of the trade-off function to the diagonal, which is equivalent to the attacker randomly guessing whether a gradient \(\boldsymbol{\theta}^{\prime}\) was part of the training data or not, is majorly determined by the ratio of \(d\) to \(n\). The higher the value of \(d\) relative to \(n\), the easier it becomes for the attacker to identify training data points. Furthermore, a higher gradient susceptibility \(K\), which measures the atypicality of a gradient with respect to the gradient distribution, increases the likelihood of MI attacks succeeding in identifying training data membership. It is worth noting that if we do not restrict the gradient distribution or its support, then there might always exist gradient samples that significantly distort the mean, revealing their membership in the training dataset. This phenomenon is akin to the \(\delta\) parameter in DP, which also allows exceptions for highly improbable events.
**Remark 5.1** (Magnitude of \(\mu_{\text{Step}}\)).: _When the dimensions of the uncorrelated components in \(\mathbf{\Sigma}^{-\frac{1}{2}}\boldsymbol{\theta}^{\prime}\) are also independent, we expect \(K\) to follow a \(\chi^{2}\)-distribution with \(d\) degrees of freedom and thus \(K\in\mathcal{O}(d)\). In the standard SGD-regime (\(\tau^{2}=0\)) with \(d,n\gg 1\), we obtain \(\mu\in\mathcal{O}\big{(}\sqrt{d/n}\big{)}\)._
**Remark 5.2** (On Optimality).: _The dependency on \(d\) when \(\tau^{2}>0\) is a consequence of our intentionally broad proving strategy. Our proof approach consists of two key steps: First, we establish the optimal LRT under general gradient distributions, without adding noise or imposing any cropping constraints (See Appendix E.1). This initial step serves as the foundation for our subsequent analysis and is (1) as general as possible covering all distributions with finite variance and is (2) optimal in the sense of the Neyman-Pearson Lemma, i.e., it cannot be improved. This means that our result covers all models trained with standard SGD (\(\tau^{2}=0\) and \(C=\infty\)) and is remarkable in its generality as it is the first to suggest clear conditions when adding noise is not required to reach \(f\)-MIP. Second, we specialize our findings to cropped random variables with added noise (See Appendix E.2). This analysis could potentially be improved by considering individual gradient dimensions independently._
### Composition and Subsampling
In the previous section, we have derived the trade-off function for a single step of SGD. Since SGD is run over multiple rounds, we require an understanding of how the individual trade-off functions can be composed when a sequence of \(f\)-MIP operations is conducted, and a random subset of the entire data distribution is used as an input for the privatized algorithm. The next lemma provides such a result for \(\mu\)-GMIP and follows from a result that holds for hypotheses tests between Gaussian random variables due to Dong et al. [11] (see Appendix D.3 for details and more results).
**Lemma 5.1** (Asymptotic convergence of infinite DP-SGD).: _Let \(n\) be the batch size in SGD, and \(N\) be the entire size of the dataset. If a single SGD-Step is at least as hard as \(\mu_{\text{step}}\)-GMP with respect to the samples that were part of the batch and \(\frac{n\sqrt{I}}{N}\to c\) as \(\lim_{t\rightarrow\infty}\) (the batch size is gradually decreased), then the noisy SGD algorithm will be \(\mu\)-GMIP with_
\[\mu=\sqrt{2}c\sqrt{\exp(\mu_{\text{step}}^{2})\Phi\left(1.5\mu_{\text{step}} \right)+3\Phi\left(-0.5\mu_{\text{step}}\right)-2}. \tag{10}\]
Note that this result also provides a (loose) bound for the case where exactly \(T\) iterations are run with a batch size of \(n^{\prime}\) with \(c=\frac{n^{\prime}\sqrt{T}}{N}\) (through using \(n(t)=n^{\prime}\) if \(t\leq T,\text{ else }n(t)=\frac{n^{\prime}\sqrt{T}}{\sqrt{t}}\)). With this result in place, we can defend against MI attacks using the standard noisy SGD algorithm.
## 6 Experimental Evaluation
**Datasets and Models.** We use three datasets that were previously used in works on privacy risks of ML models [32]: The CIFAR-10 dataset which consists of 60k small images [21], the Purchase tabular classification dataset [25] and the Adult income classification dataset from the UCI machine learning repository [12]. Following prior work by Abadi et al. [1], we use a model pretrained on CIFAR-100 and finetune the last layer on CIFAR-10 using a ResNet-56 model for this task [16] where the number of fine-tuned parameters equals \(d=650\). We follow a similar strategy on the Purchase dataset, where we use a three-layer neural network. For finetuning, we use the 20 most common classes and \(d=2580\) parameters while the model is pretrained on 80 classes. On the adult dataset, we use a two-layer network with 512 random features in the first layer trained from scratch on the dataset such that \(d=1026\). We refer to Appendix C.1 for additional training details. We release our code online.2
Footnote 2: [https://github.com/tleemann/gaussian_mip](https://github.com/tleemann/gaussian_mip)
### Gradient Attacks Based on the Analytical LRT
To confirm our theoretical analysis for one step of SGD and its composition, we implement the gradient attack based on the likelihood ratio test derived in the proof of Theorem 5.1. We provide a sketch of the implementation in Algorithm 1 and additional details in Appendix C.3. An essential requirement in the construction of the empirical test is the estimation of the true gradient mean \(\mathbf{\mu}\) and the true inverse covariance matrix \(\mathbf{\Sigma}^{-1}\) since these quantities are essential parts of both the test statistic \(\hat{S}\) and the true gradient susceptibility term \(\hat{K}\) needed for the analytical attack. The attacker uses their access to the gradient distribution (which is standard for membership inference attacks [5; 29] and realistic in federated learning scenarios [20]), to estimate the distribution parameters. In practice, however, the empirical estimates of \(\hat{\mathbf{\mu}}\), \(\hat{\mathbf{\Sigma}}^{-1}\) and thus \(\hat{K}\) will be noisy and therefore we do not expect that the empirical trade-off curves match the analytical curves exactly.
Using our novel Gradient Likelihood Ratio (GLiR) attack we can audit our derived guarantees and their utility. First, we audit our one-step guarantees from Theorem 5.1. To compare the models, we adapt the batch size \(n\) such that all models reach the same level of \(\mu\)-GMIP. In Figure 1(a), we use a simulated gradient distribution with known parameters \(\mathbf{\mu},\mathbf{\Sigma}^{-1}\) and \(d\). In this case, we can estimate \(K\) accurately and observe that our bounds are tight when the distribution parameters and thus the respective gradient susceptibilities can be computed accurately. We provide additional ablation studies that gauge the approximation quality of with small values for \(d\), \(n\) and different simulated distributions in Appendix C.2. When the parameters are unknown and we have to estimate the parameters, our attacks become weaker and do not match the analytical prediction (see Figure 1(b)).
We also audit our composition guarantees. We do five SGD-steps in Figure 1(c). While there is a small gain in attack performance on the CIFAR-10 dataset (e.g., at FPR=0.25), the attack performance on the other datasets remains largely unaffected. This mismatch occurs since the theoretical analysis is based on the premise that the attacker gains access to independently sampled gradient means for each step to separate training and non-training points, but in practice we do not gain much new information as the model updates are not statistically independent and too incremental to change the gradient means significantly between two subsequent steps. Therefore, a practical attacker does not gain much additional information through performing several steps instead of one. Future work is required to model these dependencies and potentially arrive at a tighter composition result under incremental parameter updates. We provide results for additional existing membership inference attacks, for instance the recent loss-based likelihood-ratio attack by Carlini et al. [5] in Appendix C.4, which all show weaker success rates than the gradient-based attack that proved most powerful in our setting.
### Comparing Model Utility under \(\mu\)-Gdp and \(\mu\)-Gmp
Here we compare the utility under our privacy notion to the utility under differential privacy. We sample 20 different privacy levels ranging from \(\mu\in[0.4,...,50]\) and calibrate the noise in the SGD iteration to reach the desired value of \(\mu\). We can do so both for \(\mu\)-GMIP using the result in Equation (10) and using the result by Dong et al. [11; Corollary 4] for \(\mu\)-GDP, which result in the same attack success rates while \(\mu\)-GDP allows for stronger privacy threat models. Due to Theorem 4.2, we never need to add more noise for \(\mu\)-GMIP than for \(\mu\)-DP. Further details are provided in Appendix C.1. Figure 3 shows a comparison of the accuracy that the models obtain. We observe that the model under GMIP results in significantly higher accuracy for most values of \(\mu\). As \(\mu\to 0\) both privacy notions require excessive amounts of noise such that the utility decreases towards the random guessing accuracy. On the other hand, for higher values of \(\mu\), there is no need to add any noise to the gradient to obtain \(\mu\)-GMIP, allowing to obtain the full utility of the unconstrained model. This indicates that useful GMIP-bounds do not necessarily require noise. For instance, on the CIFAR-10 model, no noise is required for \(\mu\geq 0.86\) which is a reasonable privacy level [11]. Overall, these results highlight that useful and interpretable privacy guarantees can often be obtained without sacrificing utility.
## 7 Conclusion and Future Work
In the present work, we derived the general notion of \(f\)-Membership Inference Privacy (\(f\)-MIP) by taking a hypothesis testing perspective on membership inference attacks. We then studied the noisy SGD algorithm as a model-agnostic tool to implement \(f\)-Membership Inference Privacy, while maintaining Differential Privacy (DP) as a worst-case guarantee. Our analysis revealed that significantly less noise may be required to obtain \(f\)-MIP compared to DP resulting in increased utility. Future work is required to better model the dependencies when composing subsequent SGD steps which could lead to improved bounds in practice. Furthermore, our analysis shows that when the capacity of the attacker is further restricted, e.g., to API access of predictions, there remains a gap between our theoretical bounds and loss-based membership inference attacks that can be implemented for real models. More work is required to either produce more sophisticated attacks or derive theoretical bounds for even less powerful attackers to close this gap.
Figure 3: **Utility of DP versus MIP.** Model performance on three datasets across different privacy levels \(\mu\) (small \(\mu\) denotes high privacy) using the notions of \(\mu\)-Gaussian Differential Privacy (parametric form of \(f\)-DP, [11]) and \(\mu\)-Gaussian Membership Inference Privacy (parametric form of \(f\)-MIP, ours) on three datasets. GMIP usually allows for substantially increased accuracy over the corresponding GDP guarantee with the same attack success rates controlled by \(\mu\). However, the attacker under GMIP runs membership inference (MI) attacks while GDP allows for a wider set of privacy threat models. For more details on differences in the underlying threat models see Table 1.
Figure 2: **Auditing \(f\)-MIP with our gradient attack (GLiR) when \(\tau^{2}=0\). We show trade-off curves when the gradient distribution is known (a) and when the gradients are obtained from a trained model that was finetuned on various data sets (b, c). The analytical solutions are computed with a value of \(K=d\) and using the composition result for \(k\) steps in Appendix D.3 for (c).**
## References
* Abadi et al. [2016] Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC conference on computer and communications security_, pp. 308-318, 2016.
* Andrew et al. [2023] Andrew, G., Kairouz, P., Oh, S., Oprea, A., McMahan, H. B., and Suriyakumar, V. One-shot empirical privacy estimation for federated learning. _arXiv preprint arXiv:2302.03098_, 2023.
* Balle et al. [2020] Balle, B., Barthe, G., Gaboardi, M., Hsu, J., and Sato, T. Hypothesis testing interpretations and renyi differential privacy. In _International Conference on Artificial Intelligence and Statistics_, pp. 2496-2506. PMLR, 2020.
* Bassily et al. [2018] Bassily, R., Thakkar, O., and Guha Thakurta, A. Model-agnostic private learning. _Advances in Neural Information Processing Systems (NeurIPS)_, 31, 2018.
* Carlini et al. [2021] Carlini, N., Chien, S., Nasr, M., Song, S., Terzis, A., and Tramer, F. Membership inference attacks from first principles. _arXiv preprint arXiv:2112.03570_, 2021.
* Carlini et al. [2021] Carlini, N., Tramer, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T. B., Song, D., Erlingsson, U., et al. Extracting training data from large language models. In _USENIX Security Symposium_, volume 6, 2021.
* Carlini et al. [2023] Carlini, N., Hayes, J., Nasr, M., Jagielski, M., Sehwag, V., Tramer, F., Balle, B., Ippolito, D., and Wallace, E. Extracting training data from diffusion models. _arXiv preprint arXiv:2301.13188_, 2023.
* Chen et al. [2020] Chen, D., Yu, N., Zhang, Y., and Fritz, M. Gan-leaks: A taxonomy of membership inference attacks against generative models. In Ligatti, J., Ou, X., Katz, J., and Vigna, G. (eds.), _CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, November 9-13, 2020_, pp. 343-362. ACM, 2020. doi: 10.1145/3372297.3417238. URL [https://doi.org/10.1145/3372297.3417238](https://doi.org/10.1145/3372297.3417238).
* Choquette-Choo et al. [2020] Choquette-Choo, C. A., Tramer, F., Carlini, N., and Papernot, N. Label-only membership inference attacks. In _Proceedings of the 37th International Conference on Machine Learning (ICML)_, volume abs/2007.14321, 2020.
* Cummings et al. [2023] Cummings, R., Desfontaines, D., Evans, D., Geambasu, R., Jagielski, M., Huang, Y., Kairouz, P., Kamath, G., Oh, S., Ohrimenko, O., et al. Challenges towards the next frontier in privacy. _arXiv preprint arXiv:2304.06929_, 2023.
* Dong et al. [2022] Dong, J., Roth, A., and Su, W. J. Gaussian differential privacy. _Journal of the Royal Statistical Society Series B: Statistical Methodology_, 84(1):3-37, 2022.
* Dua and Graff [2017] Dua, D. and Graff, C. UCI machine learning repository, 2017. URL [http://archive.ics.uci.edu/ml](http://archive.ics.uci.edu/ml).
* Dwork and Feldman [2018] Dwork, C. and Feldman, V. Privacy-preserving prediction. In _Conference On Learning Theory_, pp. 1693-1702. PMLR, 2018.
* Dwork et al. [2006] Dwork, C., McSherry, F., Nissim, K., and Smith, A. Calibrating noise to sensitivity in private data analysis. In _Theory of cryptography conference_, pp. 265-284. Springer, 2006.
* Haim et al. [2022] Haim, N., Vardi, G., Yehudai, G., Shamir, O., et al. Reconstructing training data from trained neural networks. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022.
* He et al. [2016] He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pp. 770-778, 2016.
* Izzo et al. [2022] Izzo, Z., Yoon, J., Arik, S. O., and Zou, J. Provable membership inference privacy. _arXiv preprint arXiv:2211.06582_, 2022.
* [18] Jagielski, M., Ullman, J., and Oprea, A. Auditing differentially private machine learning: How private is private sgd? _Advances in Neural Information Processing Systems (NeurIPS)_, 33:22205-22216, 2020.
* [19] Kairouz, P., Oh, S., and Viswanath, P. The composition theorem for differential privacy. In _Proceedings of the 32nd International Conference on International Conference on Machine Learning (ICML)_, pp. 1376-1385, 2015.
* [20] Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. Advances and open problems in federated learning. _Foundations and Trends(r) in Machine Learning_, 14(1-2):1-210, 2021.
* [21] Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
* [22] Lehmann, E. L., Romano, J. P., and Casella, G. _Testing statistical hypotheses_, volume 3. Springer, 2005.
* [23] Long, Y., Bindschaedler, V., Wang, L., Bu, D., Wang, X., Tang, H., Gunter, C. A., and Chen, K. Understanding membership inferences on well-generalized learning models. _arXiv preprint arXiv:1802.04889_, 2018.
* [24] Maddock, S., Sablayrolles, A., and Stock, P. Canife: Crafting canaries for empirical privacy measurement in federated learning. In _International Conference on Learning Representations (ICLR)_, 2023.
* [25] Nasr, M., Shokri, R., and Houmansadr, A. Machine learning with membership privacy using adversarial regularization. In _Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security_, pp. 634-646, 2018.
* [26] Nasr, M., Songi, S., Thakurta, A., Papernot, N., and Carlin, N. Adversary instantiation: Lower bounds for differentially private machine learning. In _2021 IEEE Symposium on security and privacy (SP)_, pp. 866-882. IEEE, 2021.
* [27] Nasr, M., Hayes, J., Steinke, T., Balle, B., Tramer, F., Jagielski, M., Carlini, N., and Terzis, A. Tight auditing of differentially private machine learning. In _32nd USENIX Security Symposium (USENIX Security 23)_, pp. 1631-1648. USENIX Association, 2023.
* [28] Neyman, J. and Pearson, E. S. Ix. on the problem of the most efficient tests of statistical hypotheses. _Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character_, 231(694-706):289-337, 1933.
* [29] Pawelczyk, M., Lakkaraju, H., and Neel, S. On the Privacy Risks of Algorithmic Recourse. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2023.
* [30] Sablayrolles, A., Douze, M., Schmid, C., Ollivier, Y., and Jegou, H. White-box vs black-box: Bayes optimal strategies for membership inference. In _Proceedings of the 36th International Conference on Machine Learning (ICML)_, 2019.
* [31] Shokri, R., Stronati, M., Song, C., and Shmatikov, V. Membership inference attacks against machine learning models. In _2017 IEEE symposium on security and privacy (SP)_, pp. 3-18. IEEE, 2017.
* [32] Shokri, R., Strobel, M., and Zick, Y. On the privacy risks of model explanations. In _Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES)_, pp. 231-241, 2021.
* [33] Song, S., Chaudhuri, K., and Sarwate, A. D. Stochastic gradient descent with differentially private updates. In _2013 IEEE global conference on signal and information processing_, pp. 245-248. IEEE, 2013.
* [34] Steinke, T., Nasr, M., and Jagielski, M. Privacy auditing with one (1) training run. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2023.
* [35] Tan, J., Mason, B., Javadi, H., and Baraniuk, R. Parameters or privacy: A provable tradeoff between overparameterization and membership inference. _Advances in Neural Information Processing Systems (NeurIPS)_, 35:17488-17500, 2022.
* [36] Tan, J., LeJeune, D., Mason, B., Javadi, H., and Baraniuk, R. G. A blessing of dimensionality in membership inference through regularization. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, pp. 10968-10993. PMLR, 2023.
* [37] Thudi, A., Shumailov, I., Boenisch, F., and Papernot, N. Bounding membership inference. _arXiv preprint arXiv:2202.12232_, 2022.
* [38] Triastcyn, A. and Faltings, B. Bayesian differential privacy for machine learning. In _International Conference on Machine Learning (ICML)_, pp. 9583-9592. PMLR, 2020.
* [39] Wasserman, L. and Zhou, S. A statistical framework for differential privacy. _Journal of the American Statistical Association_, 105(489):375-389, 2010.
* [40] Ye, J., Maddi, A., Murakonda, S. K., Bindschaedler, V., and Shokri, R. Enhanced membership inference attacks against machine learning models. In _Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS)_, 2022.
* [41] Yeom, S., Giacomelli, I., Fredrikson, M., and Jha, S. Privacy risk in machine learning: Analyzing the connection to overfitting. In _2018 IEEE 31st computer security foundations symposium (CSF)_, pp. 268-282. IEEE, 2018.
* [42] Zanella-Beguelin, S., Wutschitz, L., Tople, S., Salem, A., Ruhle, V., Paverd, A., Naseri, M., Kopf, B., and Jones, D. Bayesian estimation of differential privacy. In _International Conference on Machine Learning (ICML)_. PMLR, 2023. | ## Review
### Summary
The paper introduces a new privacy notion called Gaussian Membership Inference Privacy (GMIP), which aims to provide a more relaxed yet practical framework compared to Differential Privacy (DP) for protecting against data membership inference attacks. It analytically explores the information leakage from training algorithms, showing how GMIP captures the trade-off in membership inference through hypothesis testing. The authors validate their theoretical findings with empirical evaluations, demonstrating improved model utility over traditional DP methods. Despite its contributions, there are concerns regarding the assumptions made in the analysis and comparisons to existing privacy definitions.
### Strengths
- The paper addresses an important problem by providing a new analytical approach to measure information leakage in training algorithms against realistic adversaries.
- It introduces a novel analytical derivation of the likelihood ratio test, assuming aggregated noisy gradients follow a multivariate Gaussian distribution.
- The GMIP bound has interesting dependencies on model and data distribution, and the paper includes detailed empirical evaluations supporting its findings.
- The proposed f-MIP method offers interpretable privacy guarantees and enhances utility when attacker capabilities are realistically constrained.
### Weaknesses
- The GMIP bound relies on several assumptions about the gradient distribution, model dimension, and dataset size, which should be clarified to avoid misleading comparisons.
- The assumption that averaged gradients follow a Gaussian distribution is questionable due to gradient clipping.
- There are concerns about the tightness of the GMIP bound compared to the standard mu-GDP bound in high-dimensional cases.
- The paper lacks a theoretical exploration of the relationship between f-MIP and f-DP, limiting the understanding of its applicability.
- The iid assumption may not be realistic in practical scenarios, potentially lowering the actual privacy guarantees.
### Questions
- What are the specific approximations and assumptions used for the GMIP bound, especially regarding the comparison in Figure 1?
- How does the GMIP bound perform in high-dimensional settings compared to the standard mu-GDP bound?
- Could the authors discuss the implications of the iid assumption on the practical applicability of their privacy guarantees?
- Is the comparison across the entire range of TPR and FPR in Figure 4 appropriate, given the focus on low FPR scenarios?
### Soundness
**Score:** 3
**Description:** Good. The theoretical foundations and derivations are solid, but some assumptions and approximations require further scrutiny.
### Presentation
**Score:** 3
**Description:** Good. The paper is well-structured and easy to follow, but could benefit from clarifying certain assumptions and their implications.
### Contribution
**Score:** 3
**Description:** Good. The paper proposes a promising framework for privacy definitions that enhances model utility, though some contributions may not be entirely novel.
### Rating
**Score:** 5
**Description:** Borderline accept: The paper is technically solid, with justifiable reasons for acceptance despite some concerns about the analysis.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper introduces a significant new approach to privacy in machine learning, addressing an important area of research. While there are some weaknesses related to assumptions and comparisons, the overall contributions, soundness, and clarity of presentation outweigh these concerns, justifying acceptance as a poster.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Segment Anything in 3D with NeRFs
Jiazhong Cen\({}^{1}\), Zanwei Zhou\({}^{1}\), Jiemin Fang\({}^{2,3}\), Chen Yang\({}^{1}\),
Wei Shen\({}^{1}\), Lingxi Xie\({}^{2}\), Dongsheng Jiang\({}^{2}\), Xiaopeng Zhang\({}^{2}\), Qi Tian\({}^{2}\)
\({}^{1}\) MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
\({}^{2}\) Huawei Inc. \({}^{3}\) School of EIC, Huazhong University of Science and Technology
[email protected]
###### Abstract
Recently, the Segment Anything Model (SAM) emerged as a powerful vision foundation model which is capable to segment anything in 2D images. This paper aims to generalize SAM to segment 3D objects. Rather than replicating the data acquisition and annotation procedure which is costly in 3D, we design an efficient solution, leveraging the Neural Radiance Field (NeRF) as a cheap and off-the-shelf prior that connects multi-view 2D images to the 3D space. We refer to the proposed solution as **SA3D**, for Segment Anything in 3D. It is only required to provide a manual segmentation prompt (_e.g._, rough points) for the target object in a **single view**, which is used to generate its 2D mask in this view with SAM. Next, SA3D alternately performs **mask inverse rendering** and **cross-view self-prompting** across various views to iteratively complete the 3D mask of the target object constructed with voxel grids. The former projects the 2D mask obtained by SAM in the current view onto 3D mask with guidance of the density distribution learned by the NeRF; The latter extracts reliable prompts automatically as the input to SAM from the NeRF-rendered 2D mask in another view. We show in experiments that SA3D adapts to various scenes and achieves 3D segmentation within minutes. Our research reveals a potential methodology to lift the ability of a 2D vision foundation model to 3D, as long as the 2D model can steadily address promptable segmentation across multiple views. Our code is available at [https://github.com/Jumpat/SegmentAnythingin3D](https://github.com/Jumpat/SegmentAnythingin3D).
## 1 Introduction
The computer vision community has been pursuing a vision foundation model that can perform basic tasks (_e.g._, segmentation) in any scenario and for either 2D or 3D image data. Recently, the Segment Anything Model (SAM) [25] emerged and attracted a lot of attention, due to its ability to segment anything in 2D images, but generalizing the ability of SAM to 3D scenes remains mostly uncovered. One may choose to replicate the pipeline of SAM to collect and semi-automatically annotate a large set of 3D scenes, but the costly burden seems unaffordable for most research groups.
We realize that an alternative and efficient solution lies in equipping the 2D foundation model (_i.e._, SAM) with 3D perception via a 3D representation model. In other words, there is no need to establish a 3D foundation model from scratch. However, there is a prerequisite: the 3D representation model shall be capable to render 2D views and register 2D segmentation results to the 3D scene. Thus, we use the Neural Radiance Fields (NeRF) [38, 53, 3] as an off-the-shelf solution. NeRF is a family of algorithms that formulates each 3D scene into a deep neural network that serves as a 3D prior connecting multiple 2D views.
As shown in Figure 1, our solution is named Segment Anything in 3D (**SA3D**). Given a NeRF trained on a set of 2D images, SA3D takes prompts (_e.g._, click points on the object) in a single rendered view as input, which is used to generate a 2D mask in this view with SAM. Next, SA3D alternately performs two steps across various views to iteratively complete the 3D mask of the object constructed with voxel grids. In each round, the first step is **mask inverse rendering**, in which the previous 2D segmentation mask obtained by SAM is projected onto the 3D mask via density-guided inverse rendering offered by the NeRF. The second step is **cross-view self-prompting**, in which NeRF is used to render the 2D segmentation mask (which may be inaccurate) based on the 3D mask and the image from another view, then a few point prompts are automatically generated from the rendered mask and fed into SAM to produce a more complete and accurate 2D mask. The above procedure is executed iteratively until all necessary views have been sampled.
We conduct various (_e.g._, object, part-level) segmentation tasks on the Replica [51] and NVOS [47] datasets. Without re-training/re-designing SAM or NeRF, SA3D easily and efficiently adapts to different scenarios. Compared to existing approaches, SA3D enjoys a simplified pipeline that typically completes 3D segmentation within minutes. SA3D not only offers an efficient tool for segmenting anything in 3D, but also reveals a generic methodology to lift 2D foundation models to the 3D space. The only prerequisite lies in the ability to steadily address promptable segmentation across multiple views, and we hope it becomes a general property of 2D foundation models in the future.
## 2 Related Work
2D SegmentationSince FCN [36] has been proposed, research on 2D image segmentation has experienced a rapid growth. Various sub-fields of segmentation have been explored deeply by numerous studies [18; 24; 4; 71]. With transformers [58; 10] entering the field of segmentation, many new segmentation architectures [72; 7; 6; 52; 63] have been proposed and the whole field of segmentation has been further developed. A recent significant breakthrough in this field is the Segment Anything Model (SAM) [25]. As an emerging vision foundation model, SAM is recognized as a potential game-changer, which aims to unify the 2D segmentation task by introducing a prompt-based segmentation paradigm. An analogous model to SAM is SEEM [75], which also exhibits impressive open-vocabulary segmentation capabilities.
3D SegmentationNumerous methods have explored various types of 3D representations to perform 3D segmentation. These scene representations include RGB-D images [60; 62; 64; 8], point clouds [44; 45; 70] and grid space such as voxels [21; 55; 35], cylinders [74] and bird's eye view space [67; 16]. Although 3D segmentation has been developed for a period of time, compared with 2D segmentation, the scarcity of labeled data and high computational complexity make it difficult to design a unified framework similar to SAM.
Lifting 2D Vision Foundation Models to 3DTo tackle the limitation of data scarcity, many previous studies [23; 43; 9; 17; 69; 31; 65; 22] explored lifting 2D foundation models to 3D. In these studies, the most relevant work to SA3D is LERF [23], which trains a feature field of the Vision-Language Model (_i.e._, CLIP [46]) together with the radiance field. Compared with SA3D,
Figure 1: Given any pre-trained NeRF, SA3D takes prompts from one single rendered view as input and outputs the 3D segmentation result for the specific target.
LERF focuses on coarsely localizing the specific objects with text prompts but not fine-grained 3D segmentation. The reliance on CLIP features makes it insensitive to the specific location information of the target object. When there are multiple objects with similar semantics in the scene, LERF cannot perform effective 3D segmentation. The remaining methods mainly focus on point clouds. By connecting the 3D point cloud with specific camera poses with 2D multi-view images, the extracted features by 2D foundation models can be projected to the 3D point cloud. The data acquisition of these methods is more expensive than ours, _i.e._, acquiring multi-view images for NeRFs.
Segmentation in NeRFsNeural Radiance Fields (NeRFs) [38; 53; 3; 1; 40; 19; 13; 61; 30; 12] are a series of 3D implicit representation. Inspired by their success in 3D consistent novel view synthesis, numerous studies have delved into the realm of 3D segmentation within NeRFs. Zhi _et al._[73] proposes Semantic-NeRF, a method that incorporates semantics into appearance and geometry. They showcase the potential of NeRFs in label propagation and refinement. NVOS [47] introduces an interactive approach to select 3D objects from NeRFs by training a lightweight multi-layer perception (MLP) using custom-designed 3D features. Other approaches, _e.g._ N3F [57], DFF [27], LERF [23] and ISRF [15], aim to lift 2D visual features to 3D through training additional feature fields. These methods are required to re-design/-train NeRF models and usually involve additional feature-matching processes. There are also some other instance segmentation and semantic segmentation approaches [50; 41; 11; 68; 34; 20; 2; 14; 59; 28] combined with NeRFs.
The most closely related approach to our SA3D is MVSeg [39], a component of SPIn-NeRF [39], which focuses on NeRF inpainting. MVSeg adopts video segmentation techniques to propagate a 2D mask across different views and employs these masks as labels for training a Semantic-NeRF model. However, video segmentation models lack explicit 3D structure information, which are hard to handle significant occlusions in complex scenes. Our method aims at building NeRF-driven consistency across views based on self-prompting and lifting 2D masks to robust 3D masks.
## 3 Method
In this section, we first give a brief review of Neural Radiance Fields (NeRFs) and the Segment Anything Model (SAM). Then we introduce the overall pipeline of SA3D. Finally, we demonstrate the design of each component in SA3D in detail.
### Preliminaries
Neural Radiance Fields (NeRFs)Given a training dataset \(\mathcal{I}\) of multi-view 2D images, NeRFs [38] learn a function \(f_{\mathbf{\theta}}:(\mathbf{x},\mathbf{d})\rightarrow(\mathbf{c},\sigma)\), which maps the spatial coordinates \(\mathbf{x}\in\mathbb{R}^{3}\) and the view direction \(\mathbf{d}\in\mathbb{S}^{2}\) of a point into the corresponding color \(\mathbf{c}\in\mathbb{R}^{3}\) and volume density \(\sigma\in\mathbb{R}\). \(\mathbf{\theta}\) denotes the learnable parameters of the function \(f\) which is usually represented by a multi-layer perceptron (MLP). To render an image \(\mathbf{I}_{\mathbf{\theta}}\), each pixel undergoes a ray casting process where a ray \(\mathbf{r}(t)=\mathbf{x}_{o}+t\mathbf{d}\) is projected through the camera pose. Here, \(\mathbf{x}_{o}\) is the camera origin, \(\mathbf{d}\) is the ray direction, and \(t\) denotes the distance of a point along the ray from the origin. The RGB color \(\mathbf{I}_{\mathbf{\theta}}(\mathbf{r})\) at the location determined by ray \(\mathbf{r}\) is obtained via a differentiable volume rendering algorithm:
\[\mathbf{I}_{\mathbf{\theta}}(\mathbf{r})=\int_{t_{n}}^{t_{f}}\omega(\mathbf{r}(t)) \mathbf{c}(\mathbf{r}(t),\mathbf{d})\mathrm{d}t, \tag{1}\]
where \(\omega(\mathbf{r}(t))=\exp(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))\mathrm{d}s) \cdot\sigma(\mathbf{r}(t))\), and \(t_{n}\) and \(t_{f}\) denote the near and far bounds of the ray, respectively.
Segment Anything Model (SAM)SAM [25] takes an image \(\mathbf{I}\) and a set of prompts \(\mathcal{P}\) as input, and outputs the corresponding 2D segmentation mask \(\mathbf{M}_{\texttt{SAM}}\) in the form of a bitmap, _i.e.,_
\[\mathbf{M}_{\texttt{SAM}}=s(\mathbf{I},\mathcal{P}). \tag{2}\]
The prompts \(\mathbf{p}\in\mathcal{P}\) can be points, boxes, texts, and masks.
### Overall Pipeline
We assume that we already have a NeRF model trained on the dataset \(\mathcal{I}\). Throughout this paper, unless otherwise specified, we opt to employ the TensoRF [3] as the NeRF model, considering its superior efficiency of training and rendering. As shown in Figure 2, an image \(\mathbf{I}^{\text{in}}\) from a specific view is first rendered with the pre-trained NeRF model. A set of prompts (_e.g._, in this paper, we often use a set of points), \(\mathcal{P}^{\text{in}}\), is introduced and fed into SAM along with the rendered image. The 2D segmentation mask \(\mathbf{M}^{\text{in}}_{\texttt{SAM}}\) of the according view is obtained, which is then projected onto the 3D mask \(\mathbf{V}\in\mathbb{R}^{3}\) constructed voxel grids with the proposed **mask inverse rendering** technique (Section 3.3). Then a 2D segmentation mask \(\mathbf{M}^{(n)}\) from a novel view is rendered from the 3D mask. The rendered mask is usually inaccurate. We propose a **cross-view self-prompting** method (Section 3.4) to extract point prompts \(\mathcal{P}^{(n)}\) from the rendered mask and further feed them into SAM. Thus a more accurate 2D mask \(\mathbf{M}^{(n)}_{\texttt{SAM}}\) in this novel view is produced and also projected onto the voxel grids to complete the 3D mask. The above procedure is executed iteratively with more views traversed. Meanwhile, the 3D mask become more and more complete. The whole process bridges 2D segmentation results with 3D ones efficiently. Noting that no neural network needs to be optimized except the 3D mask.
### Mask Inverse Rendering
As shown in Equation (1), the color of each pixel in a rendered image is determined by a sum of weighted colors along the corresponding ray. The weight \(\omega(\mathbf{r}(t))\) reveals the object structure within the 3D space, where a high weight indicates the corresponding point close to the object's surface. Mask inverse rendering aims to project the 2D mask to the 3D space to form the 3D mask based on these weights.
Formally, the 3D mask is represented as voxel grids \(\mathbf{V}\in\mathbb{R}^{L\times W\times H}\), where each grid vertex stores a zero-initialized soft mask confidence score. Based on these voxels grids, each pixel of the 2D mask from one view is rendered as
\[\mathbf{M}(\mathbf{r})=\int_{t_{n}}^{t_{f}}\omega(\mathbf{r}(t))\mathbf{V}( \mathbf{r}(t))\mathrm{d}\mathrm{t}, \tag{3}\]
where \(\mathbf{r}(t)\) is the ray casting through the mask pixel, \(\omega(\mathbf{r}(t))\) is inherited from density values of the pre-trained NeRF, and \(\mathbf{V}(\mathbf{r}(t))\) denotes the mask confidence score at the location \(\mathbf{r}(t)\) obtained from voxel grids \(\mathbf{V}\)1. Denote \(\mathbf{M}_{\texttt{SAM}}(\mathbf{r})\) as the corresponding mask generated by SAM. When \(\mathbf{M}_{\texttt{SAM}}(\mathbf{r})=1\)
Figure 2: The overall pipeline of SA3D. Given a NeRF trained on a set of multi-view 2D images, SA3D first takes prompts in a single view for the target object as input and uses SAM to produce a 2D mask in this view with these prompts. Then, SA3D performs an alternated process of **mask inverse rendering** and **cross-view self-prompting** to complete the 3D mask of the target object constructed with voxel grids. Mask inverse rendering is performed to project the 2D mask obtained by SAM onto the 3D mask according to the learned density distribution embedded in the NeRF. Cross-view self-prompting is conducted to extract reliable prompts automatically as the input to SAM from the NeRF-rendered 2D mask given a novel view. This alternated process is executed iteratively until we get the complete 3D mask.
the goal of mask inverse rendering is to increase \(\mathbf{V}(\mathbf{r}(t))\) with respect to \(\omega(\mathbf{r}(t))\). In practice, this can be optimized using the gradient descent algorithm. For this purpose, we define the mask projection loss as the negative product between \(\mathbf{M}_{\texttt{SAM}}(\mathbf{r})\) and \(\mathbf{M}(\mathbf{r})\):
\[\mathcal{L}_{\texttt{proj}}=-\sum_{\mathbf{r}\in\mathcal{R}(\mathbf{I})} \mathbf{M}_{\texttt{SAM}}(\mathbf{r})\cdot\mathbf{M}(\mathbf{r}), \tag{4}\]
where \(\mathcal{R}(\mathbf{I})\) denotes the ray set of the image \(\mathbf{I}\).
The mask projection loss is constructed based on the assumption that both the geometry from the NeRF and the segmentation results of SAM are accurate. However, in practice, this is not always the case. We append a negative refinement term to the loss to optimize the 3D mask grids according to multi-view mask consistency:
\[\mathcal{L}_{\texttt{proj}}=-\sum_{\mathbf{r}\in\mathcal{R}(\mathbf{I})} \mathbf{M}_{\texttt{SAM}}(\mathbf{r})\cdot\mathbf{M}(\mathbf{r})+\lambda\sum _{\mathbf{r}\in\mathcal{R}(\mathbf{I})}(1-\mathbf{M}_{\texttt{SAM}}(\mathbf{r }))\cdot\mathbf{M}(\mathbf{r}), \tag{5}\]
where \(\lambda\) is a hyper-parameter to determine the magnitude of the negative term. With this negative refinement term, only if SAM consistently predicts a region as foreground from different views, SA3D marks its corresponding 3D region as foreground. In each iteration, the 3D mask \(\mathbf{V}\) is updated via \(\mathbf{V}\leftarrow\mathbf{V}-\eta\frac{\partial\mathcal{L}_{\texttt{proj}}} {\partial\mathbf{V}}\) with gradient descent, where \(\eta\) denotes the learning rate.
### Cross-view Self-prompting
Mask inverse rendering enables projecting 2D masks into the 3D space to form the 3D mask of a target object. To construct accurate 3D mask, substantial 2D masks from various views need to be projected. SAM can provide high-quality segmentation results given proper prompts. However, manually selecting prompts from every view is time-consuming and impractical. We propose a cross-view self-prompting mechanism to produce prompts for different novel views automatically.
Specifically, we first render a novel-view 2D segmentation mask \(\mathbf{M}^{(n)}\) from the 3D mask grids \(\mathbf{V}\) according to Equation (3). This mask is usually inaccurate, especially at the preliminary iteration of SA3D. Then we obtain some point prompts from the rendered mask with a specific strategy. The above process is named cross-view self-prompting. While there are multiple possible solutions for this strategy, we present a feasible one that has been demonstrated to be effective.
Self-prompting StrategyGiven an inaccurate 2D rendered mask \(\mathbf{M}^{(n)}\), the self-prompting strategy aims to extract a set of prompt points \(\mathcal{P}_{s}\) from it, which can help SAM to generate 2D segmentation result as accurate as possible. It is important to note that \(\mathbf{M}^{(n)}\) is not a typical 2D bitmap, but rather a confidence score map computed using Equation (3). Since each image pixel \(\mathbf{p}\) corresponds to a ray \(\mathbf{r}\) in a rendered view, we use \(\mathbf{p}\) for an easier demonstration of the prompt selection strategy on images. As \(\mathcal{P}_{s}\) is initialized to an empty set, the first prompt point \(\mathbf{p}_{0}\) is selected as the point with the highest mask confidence score: \(\mathbf{p}_{0}=\arg\max_{\mathbf{p}}\mathbf{M}^{(n)}(\mathbf{p})\). To select new prompt points, we first mask out square shaped regions2 on \(\mathbf{M}^{(n)}\) centered with each existing point prompt \(\mathbf{\hat{p}}\in\mathcal{P}_{s}\). Considering the depth \(z(\mathbf{p})\) can be estimated by the pre-trained NeRF, we transform the 2D pixel \(\mathbf{p}\) to the 3D point \(\mathcal{G}(\mathbf{p})=(x(\mathcal{G}(\mathbf{p})),y(\mathcal{G}(\mathbf{p}) ),z(\mathcal{G}(\mathbf{p})))\):
Footnote 2: The side length of the region is set as the radius of a circle whose area is equal to the rendered mask \(\mathbf{M}^{(n)}\).
\[\begin{pmatrix}x(\mathcal{G}(\mathbf{p}))\\ y(\mathcal{G}(\mathbf{p}))\\ z(\mathcal{G}(\mathbf{p}))\end{pmatrix}=z(\mathbf{p})\mathbf{K}^{-1}\begin{pmatrix} x(\mathbf{p})\\ y(\mathbf{p})\\ 1\end{pmatrix} \tag{6}\]
where \(x(\mathbf{p}),y(\mathbf{p})\) denote the 2D coordinates of \(\mathbf{p}\), and \(\mathbf{K}\) denotes the camera intrinsics. The new prompt point is expected to have a high confidence score while being close to existing prompt points. Considering the two factors, we introduce a decay term to the confidence score. Let \(d(\cdot,\cdot)\) denote the min-max normalized Euclidean distance. For each remaining point \(\mathbf{p}\) in \(\mathbf{M}^{(n)}\), the decay term is
\[\Delta\mathbf{M}^{(n)}(\mathbf{p})=\min\{\mathbf{M}^{(n)}(\mathbf{\hat{p}}) \cdot d(\mathcal{G}(\mathbf{p}),\mathcal{G}(\mathbf{\hat{p}}))\mid\mathbf{ \hat{p}}\in\mathcal{P}_{s}\}. \tag{7}\]
Then a decayed mask confidence score \(\tilde{\mathbf{M}}^{(n)}(\mathbf{p})\) is computed as
\[\tilde{\mathbf{M}}^{(n)}(\mathbf{p})=\mathbf{M}^{(n)}(\mathbf{p})-\Delta \mathbf{M}^{(n)}(\mathbf{p}). \tag{8}\]The remaining point with the highest decayed score, _i.e._, \(\mathbf{p}^{*}=\arg\max_{\mathbf{p}\nsubseteq\mathcal{P}_{\mathcal{P}}}\mathbf{ \tilde{M}}^{(n)}(\mathbf{p})\), is added to the prompt set: \(\mathcal{P}_{s}=\mathcal{P}_{s}\cup\{\mathbf{p}^{*}\}\). The above selection process is repeated until either the number of prompts \(|\mathcal{P}_{s}|\) reaches a predefined threshold \(n_{p}\) or the maximum value of \(\mathbf{\tilde{M}}^{(n)}(\mathbf{p})\) is smaller than 0.
IoU-aware View RejectionWhen the target object is rendered in heavily occluded views, SAM may produce incorrect segmentation results and degrades the quality of the 3D mask. To avoid such situations, we introduce an additional view rejection mechanism based on the intersection-over-union (IoU) between the rendered mask \(\mathbf{M}^{(n)}\) and the SAM prediction \(\mathbf{M}^{(n)}_{\text{SAM}}\). If the IoU falls below a predefined threshold \(\tau\), it indicates a poor overlap between the two masks. The prediction from SAM is rejected, and the mask inverse rendering step is skipped in this iteration.
## 4 Experiments
In this section, we quantitatively evaluate the segmentation ability of SA3D on various datasets. Then, we qualitatively demonstrate the versatility of SA3D, which can conduct instance segmentation, part segmentation, and text-prompted segmentation _etc_.
### Datasets
For quantitative experiments, we use the Neural Volumetric Object Selection (NVOS) [47], SPIn-NeRF [39], and Replica [51] datasets. The NVOS [47] dataset is based on the LLFF dataset [37], which includes several forward-facing scenes. For each scene, NVOS provides a reference view
Figure 3: Some visualization results in different scenes (LERF-donuts [23], LERF-figurines, Replica-room0 [51] and 360-kitchen [1]).
with scribbles and a target view with 2D segmentation masks annotated. Similar to NVOS, SPInNeRF [39] annotates some data manually to evaluate interactive 3D segmentation performance. These annotations are based on some widely-used NeRF datasets [37; 38; 29; 26; 13]. The Replica [51] dataset provides high-quality reconstruction ground truths of various indoor scenes, including clean dense geometry, high-resolution and high-dynamic-range textures, glass and mirror surface information, semantic classes, planar segmentation, and instance segmentation masks. For qualitative analysis, we use the LLFF [37] dataset and the 360\({}^{\circ}\) dataset [1]. SA3D is further applied to the LERF [23] dataset, which contains more realistic and challenging scenes.
### Quantitative Results
NVOS DatasetFor fair comparisons, we follow the experimental setting of the original NVOS [47]. We first scribble on the reference view (provided by the NVOS dataset) to conduct 3D segmentation, and then render the 3D segmentation result on the target view and evaluate the IoU and pixel-wise accuracy with the provided ground truth. Note that the scribbles are preprocessed to meet the requirements of SAM. More details can be found in the supplement. As shown in Table 1, SA3D outperforms previous approaches by large margins, _i.e._, +6.5 mIoU over the NVOS.
SPIn-NeRF DatasetWe follow SPIn-NeRF [39] to conduct label propagation for evaluation. Given a specific reference view of a target object, the 2D ground-truth mask of this view is available. The prompt input operation is omitted, while the 2D ground-truth mask of the target object from the reference view is directly used for the initialization of the 3D mask grids. This is reasonable since in most situations users can refine their input prompts to help SAM generate a 2D mask as accurately as possible from the reference view. Once the 3D mask grids are initialized, the subsequent steps are exactly the same as described in Section 3. With the 3D mask grids finalized, the 2D masks in other views are rendered to calculate the IoU with the 2D ground-truth masks. Results can be found in Table 2. SA3D is demonstrated to be superior in both forward-facing and 360\({}^{\circ}\) scenes.
In Tables 2 and 3, "Single view" refers to conducting mask inverse rendering exclusively for the 2D ground-truth mask of the reference view. This process is equivalent to mapping the 2D mask to the 3D space based on the corresponding depth information, without any subsequent learnable/updating step. We present these results to demonstrate the gain of the alternated process in our framework. As shown in Table 2, SA3D outperforms MVSeg [39] in most scenes, especially +5.6 mIoU on Truck and +17.3 mIoU on Lego. Besides, compared with the "Single view" model, a significant promotion is achieved, _i.e._ +17.8 mIoU, which further proves the effectiveness of our method.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & mIoU (\%) & mAcc (\%) \\ \hline Graph-cut (3D) [48; 47] & 39.4 & 73.6 \\ NVOS [47] & 70.1 & 92.0 \\ ISRF [15] & 83.8 & 96.4 \\ SA3D (ours) & **90.3** & **98.2** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results on NVOS.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Scenes} & \multicolumn{2}{c}{Single view} & \multicolumn{2}{c}{MVSeg [39]} & \multicolumn{2}{c}{SA3D (ours)} \\ \cline{2-7} & IoU (\%) & Acc (\%) & IoU (\%) & Acc (\%) & IoU (\%) & Acc (\%) \\ \hline Orchids & 79.4 & 96.0 & **92.7** & 98.8 & 83.6 & 96.9 \\ Leaves & 78.7 & 98.6 & 94.9 & 99.7 & **97.2** & 99.9 \\ Fern & 95.2 & 99.3 & 94.3 & 99.2 & **97.1** & 99.6 \\ Room & 73.4 & 96.5 & **95.6** & 99.4 & 88.2 & 98.3 \\ Horns & 85.3 & 97.1 & 92.8 & 98.7 & **94.5** & 99.0 \\ Fortress & 94.1 & 99.1 & 97.7 & 99.7 & **98.3** & 99.8 \\ \hline Fork & 69.4 & 98.5 & 87.9 & 99.5 & **89.4** & 99.6 \\ Pinecone & 57.0 & 92.5 & **93.4** & 99.2 & 92.9 & 99.1 \\ Truck & 37.9 & 77.9 & 85.2 & 95.1 & **90.8** & 96.7 \\ Lego & 76.0 & 99.1 & 74.9 & 99.2 & **92.2** & 99.8 \\ \hline mean & 74.6 & 95.5 & 90.9 & 98.9 & **92.4** & 98.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative results on the SPIn-NeRF dataset.
Replica DatasetWe use the processed Replica data with 2D instance labels provided by Zhi _et al._[73] to evaluate the segmentation performance of SA3D. We retrieve all views containing each object and specify one reference view. With a similar setting of experiments on the SPIn-NeRF dataset, we use the ground-truth mask of the reference view and perform SA3D to conduct label propagation for evaluation. For each scene in Replica, around 20 objects are chosen for evaluation. Refer to the supplement for more details. As shown in Table 3, the mean IoU (mIoU) is reported for all available objects in different scenes. We exclude the pixel-wise accuracy metric since an object only appears in a few views in the indoor scenes of Replica, where the pixel-wise accuracy is too high to serve as a reliable metric.
In complex indoor scenes of Replica, MVSeg's strategy based on video segmentation proves to be ineffective, which generates numerous inaccurate 2D pseudo-labels, even using the Semantic-NeRF [73] for refinement. Consequently, the final 3D segmentation results of MVSeg even underperform those achieved by the "Single view" method. In contrast, SA3D accurately captures segmented objects in complex 3D scenes. Visualization results are shown in Figure 3.
### Qualitative Results
We conduct three kinds of segmentation tasks: object segmentation, part segmentation and text-prompting segmentation. The first two are the core functions of SA3D. As shown in Figure 3, SA3D demonstrates its capability to segment diverse 3D objects across different scenes, even when the objects are of small scales. Besides, SA3D can also handle challenging part segmentation. The last row of the figure showcases SA3D's precise segmentation of the bucket, small wheel, and dome light of the Lego bulldozer. Figure 4 demonstrates the potential of SA3D in combining with language models. Given a text phrase, the corresponding object can be accurately cut out. The text-prompting segmentation is built upon Grounding-DINO [33], a model capable of generating bounding boxes for objects based on text prompts. These bounding boxes serve as input prompts for SA3D in the segmentation process.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Number of Views & 5 (10\%) & 9 (20\%) & 21 (50\%) & 43 (100\%) \\ \hline IoU on Fortress (forward facing) & 97.8 & 98.3 & 98.3 & 98.3 \\ Time Cost (s) & 7.6 & 12.8 & 29.0 & 59.0 \\ \hline \hline Number of Views & 11 (10\%) & 21 (20\%) & 51 (50\%) & 103 (100\%) \\ \hline IoU on Lego (360\({}^{\circ}\)) & 84.5 & 84.8 & 91.5 & 92.2 \\ Time Cost (s) & 23.5 & 43.5 & 103.8 & 204.9 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation on different numbers of views for 3D mask generation. Numbers in parentheses represent the view percentage of total training views.
Figure 4: 3D segmentation results of SA3D with the text prompts in 360-garden [1].
\begin{table}
\begin{tabular}{l c c c c c c c c|c} \hline \hline Scenes & office0 & office1 & office2 & office3 & office4 & room0 & room1 & room2 & mean \\ \hline Single view & 68.7 & 56.5 & 68.4 & 62.2 & 57.0 & 55.4 & 53.8 & 56.7 & 59.8 \\ MVSeg [39] & 31.4 & 40.4 & 30.4 & 30.5 & 25.4 & 31.1 & 40.7 & 29.2 & 32.4 \\ SA3D (ours) & **84.4** & **77.0** & **88.9** & **84.4** & **82.6** & **77.6** & **79.8** & **89.2** & **83.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative results on Replica (mIoU).
### Ablation Study
Number of ViewsThe process of mask inverse rendering and cross-view self-prompting is alternated across different views. By default, we utilize all available views in the training set \(\mathcal{I}\). However, to expedite the 3D segmentation procedure, the number of views can be reduced. As shown in Table 4, We perform experiments on two representative scenes from the SPIn-NeRF [39] dataset to demonstrate this characteristic. The views are uniformly sampled from the sorted training set. In forward facing scenes where the range of the camera poses is limited, satisfactory results can be achieved by selecting only a few views. On an Nvidia RTX 3090 GPU, the 3D segmentation process with 5 views can be completed within 10 seconds. On the contrary, in scenes where the range of camera poses is wider, a larger number of views are required to yield greater improvements. Note that even with 50 views, the segmentation task can still be completed in less than 2 minutes.
Hyper-parametersSA3D involves three hyper-parameters: the IoU rejection threshold \(\tau\), the loss balance coefficient \(\lambda\) in Equation (5), and the number of self-prompting points \(n_{p}\). As shown in Table 6, too small \(\tau\) values lead to unstable SAM predictions, introducing noises to the 3D mask; too large \(\tau\) values impede the 3D mask from getting substantial information. Table 7 indicates slightly introducing a negative term with the \(\lambda\) factor can reduce noise for mask projection. However, a too-large negative term may make the mask completion process unstable and causes degraded performance. The selection of \(n_{p}\) depends on the specific segmentation target, as SAM tends to produce over-segmented results that capture finer details of objects. As shown in Figure 5, for objects with a relatively large scale and complex structures, a bigger \(n_{p}\) produces better results. Empirically. setting \(n_{p}\) to 3 can meet the requirements of most situations.
Self-prompting StrategyWithout the 3D distance based confidence decay (Equation (7)), our self-prompting strategy degrades to a simple 2D NMS (Non-Maximum Suppression), which selects a prompt point with the highest confidence score and then masks out a region around it. To showcase the efficacy of our design, we conduct experiments using the NVOS benchmark and presented per-scene results for in-depth analysis.
Table 5 shows that a simple NMS self-prompting is enough for most cases. But for hard cases like 'LLFF-trex' (a tree skeleton, as shown in Figure 5), where a large number of depth jumps, the confidence decay term contributes a lot. In such a situation, inaccurate masks bleed through gaps in the foreground onto the background. If the self-prompting mechanism generates prompts on these inaccurate regions, SAM may produce plausible segmentation results that can cheat the IoU-rejection mechanism and finally the segmentation results will involve unwanted background regions.
2D Segmentation ModelsIn addition to SAM, we also incorporate four other prompt-based 2D segmentation models [75; 49; 32; 5] into our framework to demonstrate the generalization ability of SA3D. The evaluation results on the NVOS dataset is shown in Table 8.
## 5 Discussion
On top of the experimental results, we hope to deliver some insights from our preliminary study of integrating SAM and NeRF, _i.e._, a 2D foundation model and a 3D representation model.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Scenes} & \multicolumn{2}{c}{w/ Confidence Decay Term} & \multicolumn{2}{c}{w/o Confidence Decay Term} \\ \cline{2-5} & IoU (\%) & Acc (\%) & IoU (\%) & Acc (\%) \\ \hline Fern & 82.9 & 94.4 & 82.9 & 94.4 \\ Flower & 94.6 & 98.7 & 94.6 & 99.7 \\ Fortress & 98.3 & 99.7 & 98.4 & 99.7 \\ Horns (center) & 96.2 & 99.3 & 96.2 & 99.3 \\ Horns (Left) & 90.2 & 99.4 & 88.8 & 99.3 \\ Leaves & 93.2 & 99.6 & 93.2 & 99.6 \\ Orchids & 85.5 & 97.3 & 85.4 & 97.3 \\ Trex & 82.0 & 97.4 & 64.0 & 93.3 \\ \hline mean & 90.3 & 98.2 & 87.9 & 97.7 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation on the confidence decay term of the self-prompting strategy.
First, NeRF improves the segmentation quality of SAM. In Figure 6, we show that SA3D can eliminate segmentation errors of SAM and effectively capture details such as holes and edges. Perceptually, SAM, as well as other 2D perception models, is often sensitive to the viewpoint, and NeRF offers the ability of 3D modeling and hence complementariness to assist recognition. Additionally, SA3D inspires us that using NeRF or other 3D structural priors is a resource-efficient method to lift a vision foundation model from 2D to 3D, as long as the foundation model has the ability to self-prompt. This methodology can save many resources because collecting a large corpus of 3D data is often costly. We look forward to research efforts to enhance the 3D perception ability of 2D foundation models (_e.g._, injecting a 3D-aware loss into 2D pre-training).
**Limitation** SA3D has limitations in panoptic segmentation. First, the current paradigm relies on the first-view prompt. If some objects do not appear in the view for prompting, they will be omitted in the subsequent segmentation process. Second, the same part in the scene may be segmented into different instances with similar semantics in different views. This ambiguity cannot be easily eliminated under the current mechanism design and lead to unstable training. We leave these issues as future work.
## 6 Conclusion
In this paper, we propose SA3D, a novel framework that generalizes SAM to segment 3D objects with neural radiance fields (NeRFs) as the structural prior. Based on a trained NeRF and a set of prompts in a single view, SA3D performs an iterative procedure that involves rendering novel 2D views, self-prompting SAM for 2D segmentation, and projecting the segmentation back onto 3D mask grids. SA3D can be efficiently applied to a wide range of 3D segmentation tasks. Our research sheds light on a resource-efficient methodology that lifts vision foundation models from 2D to 3D.
\begin{table}
\begin{tabular}{c c
## Acknowledgement
This work was supported by NSFC 62322604, NSFC 62176159, Natural Science Foundation of Shanghai 21ZR1432200, and Shanghai Municipal Science and Technology Major Project 2021SHZDZX0102. Specifically, We extend our sincere thanks to Weichao Qiu for his insightful suggestions during the course of this work.
## References
* [1] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In _CVPR_, 2022.
* [2] Wang Bing, Lu Chen, and Bo Yang. Dm-nerf: 3d scene geometry decomposition and manipulation from 2d images. _arXiv preprint arXiv:2208.07227_, 2022.
* [3] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. Tensorf: Tensorial radiance fields. In _ECCV_, 2022.
* [4] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. _IEEE Trans. Pattern Anal. Mach. Intell._, 2018.
* [5] Xi Chen, Zhiyan Zhao, Yilei Zhang, Manni Duan, Donglian Qi, and Hengshuang Zhao. Focalclick: Towards practical interactive image segmentation. In _CVPR_, 2022.
* [6] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In _CVPR_, 2022.
* [7] Bowen Cheng, Alexander G. Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. In _NeurIPS_, 2021.
* [8] Hang Chu, Wei-Chiu Ma, Kaustav Kundu, Raquel Urtasun, and Sanja Fidler. Surfconv: Bridging 3d and 2d convolution for rgbd images. In _CVPR_, 2018.
* [9] Runyu Ding, Jihan Yang, Chuhui Xue, Wenqing Zhang, Song Bai, and Xiaojuan Qi. Pla: Language-driven open-vocabulary 3d scene understanding. In _CVPR_, 2023.
* [10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _ICLR_, 2021.
* [11] Zhiwen Fan, Peihao Wang, Yifan Jiang, Xinyu Gong, Dejia Xu, and Zhangyang Wang. Nerf-sos: Any-view self-supervised object segmentation on complex scenes. _arXiv preprint arXiv:2209.08776_, 2022.
* [12] Jiemin Fang, Taoran Yi, Xinggang Wang, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Matthias Niessner, and Qi Tian. Fast dynamic radiance fields with time-aware neural voxels. In _SIGGRAPH Asia 2022 Conference Papers_, 2022.
* [13] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, and Angjoo Kanazawa. Plonoxels: Radiance fields without neural networks. In _CVPR_, 2022.
* [14] Xiao Fu, Shangzhan Zhang, Tianrun Chen, Yichong Lu, Lanyun Zhu, Xiaowei Zhou, Andreas Geiger, and Yiyi Liao. Panoptic nerf: 3d-to-2d label transfer for panoptic urban scene segmentation. In _3DV_, 2022.
* [15] Rahul Goel, Dhawal Sirikonda, Saurabh Saini, and PJ Narayanan. Interactive segmentation of radiance fields. _arXiv preprint arXiv:2212.13545_, 2022.
* [16] Nikhil Gosala and Abhinav Valada. Bird's-eye-view panoptic segmentation using monocular frontal view images. _IEEE Robot. Autom. Lett._, 2022.
* [17] Huy Ha and Shuran Song. Semantic abstraction: Open-world 3d scene understanding from 2d vision-language models. In _CoRL_, 2022.
* [18] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross B. Girshick. Mask R-CNN. In _ICCV_, 2017.
* [19] Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, and Paul E. Debevec. Baking neural radiance fields for real-time view synthesis. In _ICCV_, 2021.
* [20] Benran Hu, Junkai Huang, Yichen Liu, Yu-Wing Tai, and Chi-Keung Tang. Instance neural radiance field. _arXiv preprint arXiv:2304.04395_, 2023.
* [21] Jing Huang and Suya You. Point cloud labeling using 3d convolutional neural network. In _ICPR_, 2016.
* [22] Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, Ayush Tewari, et al. Conceptfusion: Open-set multimodal 3d mapping. _arXiv preprint arXiv:2302.07241_, 2023.
* [23] Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, and Matthew Tancik. Lerf: Language embedded radiance fields. _arXiv preprint arXiv:2303.09553_, 2023.
* [24] Alexander Kirillov, Kaiming He, Ross B. Girshick, Carsten Rother, and Piotr Dollar. Panoptic segmentation. In _CVPR_, 2019.
* [25] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. _arXiv preprint arXiv:2304.02643_, 2023.
* [26] Arno Knapitsch, Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. _ACM Trans. Graph._, 2017.
* [27] Sosuke Kobayashi, Eiichi Matsumoto, and Vincent Sitzmann. Decomposing nerf for editing via feature field distillation. In _NeurIPS_, 2022.
* [28] Ruofan Liang, Jiahao Zhang, Haoda Li, Chen Yang, and Nandita Vijayakumar. Spidr: Sdf-based neural point fields for illumination and deformation. _arXiv preprint arXiv:2210.08398_, 2022.
* [29] Yen-Chen Lin, Pete Florence, Jonathan T. Barron, Tsung-Yi Lin, Alberto Rodriguez, and Phillip Isola. Nerf-supervision: Learning dense object descriptors from neural radiance fields. In _ICRA_, 2022.
* [30] David B. Lindell, Julien N. P. Martel, and Gordon Wetzstein. Autoint: Automatic integration for fast neural volume rendering. In _CVPR_, 2021.
* [31] Minghua Liu, Yinhao Zhu, Hong Cai, Shizhong Han, Zhan Ling, Fatih Porikli, and Hao Su. Partslip: Low-shot part segmentation for 3d point clouds via pretrained image-language models. In _CVPR_, 2023.
* [32] Qin Liu, Zhenlin Xu, Gedas Bertasius, and Marc Niethammer. Simpleclick: Interactive image segmentation with simple vision transformers. In _ICCV_, 2023.
* [33] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. _arXiv preprint arXiv:2303.05499_, 2023.
* [34] Xinhang Liu, Jiaben Chen, Huai Yu, Yu-Wing Tai, and Chi-Keung Tang. Unsupervised multi-view object segmentation using radiance field propagation. In _NeurIPS_, 2022.
* [35] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han. Point-voxel cnn for efficient 3d deep learning. _NeurIPS_, 2019.
* [36] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In _CVPR_, 2015.
* [37] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: practical view synthesis with prescriptive sampling guidelines. _ACM Trans. Graph._, 2019.
* [38] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In _ECCV_, 2020.
* [39] Ashkan Mirzaei, Tristan Aumentado-Armstrong, Konstantinos G. Derpanis, Jonathan Kelly, Marcus A. Brubaker, Igor Gilitschenski, and Alex Levinshtein. SPIn-NeRF: Multiview segmentation and perceptual inpainting with neural radiance fields. In _CVPR_, 2023.
* [40] Thomas Muller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. _ACM Trans. Graph._, 2022.
* [41] Michael Niemeyer and Andreas Geiger. GIRAFFE: representing scenes as compositional generative neural feature fields. In _CVPR_, 2021.
* [42] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In _NeurIPS_, 2019.
* [43] Songyou Peng, Kyle Genova, Chiyu Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser, et al. Openscene: 3d scene understanding with open vocabularies. In _CVPR_, 2023.
* [44] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _CVPR_, 2017.
* [45] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. _NeurIPS_, 2017.
* [46] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In _ICML_, 2021.
* [47] Zhongzheng Ren, Aseem Agarwala, Bryan C. Russell, Alexander G. Schwing, and Oliver Wang. Neural volumetric object selection. In _CVPR_, 2022.
* [48] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. "grabcut": interactive foreground extraction using iterated graph cuts. _ACM Trans. Graph._, 2004.
* [49] Konstantin Sofiiuk, Ilya A Petrov, and Anton Konushin. Reviving iterative training with mask guidance for interactive segmentation. In _ICIP_, 2022.
* [50] Karl Stelzner, Kristian Kersting, and Adam R Kosiorek. Decomposing 3d scenes into objects via unsupervised volume segmentation. _arXiv preprint arXiv:2104.01148_, 2021.
* [51] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, et al. The replica dataset: A digital replica of indoor spaces. _arXiv preprint arXiv:1906.05797_, 2019.
* [52] Robin Strudel, Ricardo Garcia Pinel, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In _ICCV_, 2021.
* [53] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In _CVPR_, 2022.
* [54] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Improved direct voxel grid optimization for radiance fields reconstruction. _arXiv preprint arXiv:2212.13545_, 2022.
* [55] Haotian Tang, Zhijian Liu, Shengyu Zhao, Yujun Lin, Ji Lin, Hanrui Wang, and Song Han. Searching efficient 3d architectures with sparse point-voxel convolution. In _ECCV_, 2020.
* [56] Jiaxiang Tang, Hang Zhou, Xiaokang Chen, Tianshu Hu, Errui Ding, Jingdong Wang, and Gang Zeng. Delicate textured mesh recovery from nerf via adaptive surface refinement. _arXiv preprint arXiv:2303.02091_, 2023.
* [57] Vadim Tschernezki, Iro Laina, Diane Larlus, and Andrea Vedaldi. Neural feature fusion fields: 3d distillation of self-supervised 2d image representations. In _3DV_, 2022.
* [58] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NeurIPS_, 2017.
* [59] Suhani Vora, Noha Radwan, Klaus Greff, Henning Meyer, Kyle Genova, Mehdi SM Sajjadi, Etienne Pot, Andrea Tagliasacchi, and Daniel Duckworth. Nesf: Neural semantic fields for generalizable semantic segmentation of 3d scenes. _arXiv preprint arXiv:2111.13260_, 2021.
* [60] Weiyue Wang and Ulrich Neumann. Depth-aware cnn for rgb-d segmentation. In _ECCV_, 2018.
* [61] Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn. Nex: Real-time view synthesis with neural basis expansion. In _CVPR_, 2021.
* [62] Zongwei Wu, Guillaume Allibert, Christophe Stolz, Chao Ma, and Cedric Demonceaux. Depth-adapted cnns for rgb-d semantic segmentation. _arXiv preprint arXiv:2206.03939_, 2022.
* [63] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. In _NeurIPS_, 2021.
* [64] Yajie Xing, Jingbo Wang, and Gang Zeng. Malleable 2.5 d convolution: Learning receptive fields along the depth-axis for rgb-d scene parsing. In _ECCV_, 2020.
* [65] Jihan Yang, Runyu Ding, Zhe Wang, and Xiaojuan Qi. Regionplc: Regional point-language contrastive learning for open-world 3d scene understanding. _arXiv preprint arXiv:2304.00962_, 2023.
* [66] Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P Srinivasan, Richard Szeliski, Jonathan T Barron, and Ben Mildenhall. Bakedsdf: Meshing neural sdfs for real-time view synthesis. _arXiv preprint arXiv:2302.14859_, 2023.
* [67] Dongqiangzi Ye, Zixiang Zhou, Weijia Chen, Yufei Xie, Yu Wang, Panqu Wang, and Hassan Foroosh. Lidarmultinet: Towards a unified multi-task network for lidar perception. _arXiv preprint arXiv:2209.09385_, 2022.
* [68] Hong-Xing Yu, Leonidas J. Guibas, and Jiajun Wu. Unsupervised discovery of object radiance fields. In _ICLR_, 2022.
* [69] Junbo Zhang, Runpei Dong, and Kaisheng Ma. Clip-fo3d: Learning free open-world 3d scene representations from 2d dense clip. _arXiv preprint arXiv:2303.04748_, 2023.
* [70] Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In _ICCV_, 2021.
* [71] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In _CVPR_, 2017.
* [72] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H. S. Torr, and Li Zhang. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In _CVPR_, 2021.
* [73] Shuaifeng Zhi, Tristan Laidlow, Stefan Leutenegger, and Andrew J. Davison. In-place scene labelling and understanding with implicit scene representation. In _ICCV_, 2021.
* [74] Hui Zhou, Xinge Zhu, Xiao Song, Yuexin Ma, Zhe Wang, Hongsheng Li, and Dahua Lin. Cylinder3d: An effective 3d framework for driving-scene lidar semantic segmentation. _arXiv preprint arXiv:2008.01550_, 2020.
* [75] Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Gao, and Yong Jae Lee. Segment everything everywhere all at once. _arXiv preprint arXiv:2304.06718_, 2023.
## Appendix
The appendix includes the following contents:
1. Implementation details (Section A);
2. A segmentation refinement strategy (Section B);
3. The scribble to point strategy for the evaluation of NVOS (Section C);
4. An analysis about vanilla NeRF [38] used in SA3D (Section D);
5. More information about the object filtering of Replica [51] (Section E);
6. A further illustration of the self-prompting strategy (Section F);
7. An analysis about the effect of different kinds of occlusion in NeRF (Section G);
8. More visualization results with different kinds of input prompts (Section H).
## Appendix A Implementation Details
We implement SA3D using PyTorch [42] with reference to the code provided by DVGOv2 [54]. The SA3D model is built and trained on a single Nvidia Geforce RTX3090 GPU. For our NeRF model, we primarily employ TensoRF [3], utilizing the VM-48 representation to store the radiance latent vectors. The radiance fields are pre-trained for most datasets with 40,000 iterations. For the LLFF dataset [37] and the 360 dataset [1], the radiance fields are trained with 20,000 iterations.
## Appendix B Refinement with A Two-pass Segmentation Mechanism
SAM may produce segmentation masks containing undesired parts. The IoU-aware view rejection is hard to handle this issue when the mis-classified region gradually expands.
We propose a two-pass segmentation mechanism to further refine the segmentation result. After completing 3D segmentation introduced in the main manuscript, we get a 3D mask \(\mathbf{V}\). To detect the mis-classified region from \(\mathbf{V}\), we re-render the 2D segmentation mask \(\mathbf{M}^{u}\) of the user-specific reference view and compare it with the original SAM segmentation result \(\mathbf{M}^{u}_{\text{SAM}}\).
Subsequently, we reset the original 3D mask \(\mathbf{V}\) to be a zero tensor and introduce another 3D mask \(\mathbf{V}^{\prime}\in\mathbb{R}^{3}\) that specifically indicates the mis-classified regions. The 3D segmentation process is then repeated, with the key difference being the incorporation of negative prompt points during the self-prompting phase. In other words, the prompts obtained from \(\mathbf{V}^{\prime}\) serve as negative prompts for \(\mathbf{V}\), and vice versa. This incorporation of negative prompts enables SAM to gain a better understanding of the user's requirements and refine the segmentation accordingly (shown in Figure B7). It is important to note that while this two-pass segmentation mechanism holds promise, it was not utilized in our main experiments due to considerations of efficiency.
## Appendix C The Scribble to Points Strategy for The Evaluation of NVOS
The NVOS [47] dataset provides a reference view and the corresponding scribbles for each scene (shown in Figure C8). In practice, since the scribbles usually contain tens of thousands of dense
Figure B7: The effect of the two-pass segmentation refinement.
points, SAM [25] cannot directly take such scribbles as input. The abundance of points in the scribbles hinders SAM's performance when directly utilizing them as prompts, which is an inherent limitation of SAM.
For fair comparison, we extract positive and negative prompt points from the provided positive and negative scribbles, respectively. For input scribbles, we first skeletonize them and then select \(2\%\) points from the skeletonized positive scribbles as the positive prompts and \(0.5\%\) points from the skeletonized negative scribbles as the negative prompts.
## Appendix D The Effect of Different NeRFs Used in SA3D
We adapt SA3D to the vanilla NeRF [38] to showcase its generalizability. We present visualization results on the LLFF dataset. As illustrated in Figure D9, SA3D with the vanilla NeRF exhibits excellent performance without the need for additional modifications.
## Appendix E Object Filtering for The Replica Dataset
The Replica dataset contains many objects in each scene. However, it is important to note that many of these objects exhibit low quality, as depicted in Figure E10, making them unsuitable for evaluating 3D segmentation. Generally, these instances exhibit the following issues: some instances are not present in the training frames provided by Zhi _et al._[73]; some instances are too small to be effectively segmented, such as thin slits in doors; and some instances consist of unrecognizable, low-resolution pixels, such as blurred tags, which are not suitable for accurate instance segmentation. Accordingly, we carefully select approximately 20 representative instances from each scene for the evaluation. The list of instance IDs for each scene can be found in Table E10. We have also included the quantitative results without object filtering in Table E9. Even without object filtering, SA3D demonstrates improvements compared to the single-view baseline.
## Appendix F An Illustration for The Proposed Self-prompting Strategy
We offer an illustration (Figure F11) to assist readers in gaining a clearer understanding of the self-prompting strategy.
Figure D9: 3D Segmentation results based on the **vanilla NeRF** on the LLFF dataset.
In the self-prompting strategy, prompt points \(\mathcal{P}_{s}\) are derived from an incomplete 2D rendered mask \(\mathbf{M}^{(n)}\), which is represented as a confidence score map. Initially, the selected prompt points set \(\mathcal{P}_{s}\) is empty, and the first prompt point \(\mathbf{p}_{0}\) is selected as the one with the highest confidence score in the mask \(\mathbf{M}^{(n)}\). For subsequent points, square regions centered around existing prompt points are masked out on \(\mathbf{M}^{(n)}\). The depth \(z(\mathbf{p})\), estimated by the pre-trained NeRF, helps convert 2D pixel \(\mathbf{p}\) into a 3D point \(\mathcal{G}(\mathbf{p})\). The new prompt point is expected to have a high confidence score while being close to existing prompt points. Hence, a distance-aware decay term is introduced to compute the confidence score. The remaining point with the highest decayed score is added to the prompt set. This selection process is repeated until either the number of prompts \(|\mathcal{P}_{s}|\) reaches a predefined threshold \(n_{p}\) or the maximum value of the remaining points is less than 0. Please refer to Section 3.4 of the main manuscript for more details.
## Appendix G The Effect of Occlusion in NeRF
There are two cases of occlusion in NeRFs: part of the target object does not appear in some views (but appear in other views), or it does not appear at all. For the former case, SA3D can recover it using information from other views; for the latter case, there can be parts missing in the target object (see Figure G12). This is an interesting future direction (_e.g._, applying generative models such as diffusion models).
## Appendix H More Visualization Results
We present additional visualization results in Figure H13 and Figure H15, showcasing the effectiveness of SA3D across various input prompts. We also provide some visualization of extracted meshes of the segmented objects (Figure H14) to show the extracted 3D geometry. Please note that the quality of these meshes can be further improved by applying more effective NeRF2Mesh methods [56, 66].
Figure 11: An illustration of the self-prompting strategy.
Figure 12: The effect of different kinds of occlusion in NeRF.
Figure 13: Text prompt based visualization results on the LERF figurines dataset.
Figure H15: More visualization results on the LLFF dataset and the 360 dataset (based on point and box prompts). | ## Review
### Summary
This paper presents a novel method for lifting 2D segmentations from the Segment Anything Model (SAM) into 3D using a NeRF-based approach. The proposed method, named SA3D, allows for effective segmentation of 3D objects from a scene by utilizing user-provided prompts to generate 2D segmentations from SAM, which are then optimized into a 3D mask. The method employs a mask inverse rendering technique and a cross-view self-prompting strategy to iteratively refine the 3D segmentation across multiple views, demonstrating significant improvements over existing state-of-the-art systems. The approach is simple, efficient, and applicable to various 2D foundation models, backed by comprehensive experimental results.
### Strengths
- Simple approach that should be easy to reproduce with shared code.
- Solid experimental improvements, outperforming state-of-the-art systems.
- Good ablation studies and strong qualitative and quantitative results.
- Well-written and easy to follow.
- Generalizable framework applicable to any 2D foundation models.
### Weaknesses
- Overstated claims regarding the general applicability of the method.
- Unclear computational efficiency and potential limitations in real-world applications.
- Lacks discussion on comparisons with recent open-world 3D segmentation methods.
- Incomplete explanation of certain technical terms and equations.
- Concerns about the method's ability to handle incomplete observations of target objects.
### Questions
- Could the authors provide more validation for the claim that SA3D improves SAM?
- What is the necessity of the self-prompting strategy compared to simpler alternatives?
- How many gradient descent iterations are required in each inverse rendering step?
- What are the implications of the choice of reference view on segmentation results?
- Has runtime been reported for the proposed method compared to baseline methods?
### Soundness
**Score:** 3
**Description:** 3 = good, as the methodology is generally sound and well-structured, though there are areas where clarity and detail could be improved.
### Presentation
**Score:** 4
**Description:** 4 = excellent, the paper is well-organized and clearly written, making it easy for readers to follow the proposed methodology.
### Contribution
**Score:** 3
**Description:** 3 = good, the paper contributes to the field by providing a novel method that bridges 2D and 3D segmentation, though it could be strengthened with broader evaluations.
### Rating
**Score:** 6
**Description:** 6 = accept with minor revisions, the paper is technically solid and presents significant contributions, but it could benefit from minor clarifications and additional comparisons.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel and effective method for 3D segmentation that builds on existing technologies, demonstrating solid experimental results and a clear presentation. While there are some areas that require clarification and additional comparisons, the overall contributions and soundness of the work warrant acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
CoLLAT: On Adding Fine-grained Audio Understanding to Language Models using Token-Level Locked-Language Tuning
Amila Silva\({}^{1,2}\) Spencer Whitehead\({}^{1}\) Chris Lengerich\({}^{1}\) Hugh Leather\({}^{1}\)
\({}^{1}\)Meta AI (FAIR)
\({}^{2}\)The University of Melbourne
[email protected], {srw5,clengerich,hleather}@meta.com
###### Abstract
Humans can easily understand various audio concepts, but conventional audio classification models fail due to their inability to predict unseen classes during training. To address this challenge, recent literature has explored contrastive language-audio pretraining to learn an audio understanding model using natural language supervision from a pretrained language model. However, despite their reasonable zero-shot performance in audio understanding, these models typically fail to achieve optimal performance while preserving the text understanding capabilities of the pretrained language model. They also perform poorly when comprehending audio clips with multiple audio concepts. To bridge these gaps, we propose CoLLAT: **C**ontrastive **L**ocked **L**anguage and **A**udio **T**uning. This is a framework to effectively learn an audio understanding model with a locked language model, which is learned using a novel pretraining objective for audio-to-text grounding to yield fine-grained audio understanding. Our extensive experiments, which include several downstream applications such as audio classification, cross-modal retrieval, audio captioning and audio-guided image generation, demonstrate that CoLLAT yields state-of-the-art performance for audio understanding. Additionally, it unlocks audio guidance to applications built on top of pretrained language models.
Figure 1: CoLLAT yields fine-grained audio-to-text grounding by token-level alignment of audio-text with a locked pretrained text embedding space, which (a) enables fine-grained audio understanding and novel multimodal capabilities (e.g., audio-guided image generation); and (b) achieves superior performance across both unimodal (audio, text) and multimodal (e.g., audio and text) settings.
Introduction
**Motivation.** The sound perception system in humans is capable of understanding complex audio concepts and interpreting them in a way that allows us to interact with our environment effectively [21]. For example, when humans listen to a sound clip of a duck quacking followed by a dog whimpering while birds chirp in the background (see example in Fig. 1a), they can easily discern various audio concepts (e.g., a duck quacking, a dog whimpering, birds chirp) in that sound clip, enabling them to construct a holistic understanding of the sound clip. However, conventional audio classification models [22, 24, 30, 10, 25] that aim to associate audio recordings with one or more categories from a set of predefined categories fail to be competitive with the human auditory system due to their inability to predict unseen classes during training. To address this issue, recent literature has explored contrastive language-audio pretraining [13, 7] to learn an audio understanding model using natural language supervision. These methods aim to learn an audio encoder and a text encoder simultaneously, which produce a joint embedding space for audio and text. The joint embedding space is learned to preserve the correspondence of audio-text pairs present in the pretraining dataset. Such a joint embedding space creates an open vocabulary for audio-language correspondence, enabling new audio-language capabilities such as zero-shot audio classification and audio+language guided image generation. However, existing works on contrastive language-audio pretraining [13, 35, 7] lack the following strengths.
First, previous works [13, 35, 7] typically initialize their text encoder using CLIP [27] - a strong pre-trained text encoder. However, these techniques [7, 13] often fail to achieve optimal performance for audio-language understanding without fine-tuning the text encoder (i.e., by keeping the text encoder frozen). In Fig. 1b, we report the performance of a set of baselines, where CLAP, AudioCLIP, and Cons-CLAP models fine-tune the pre-trained CLIP text encoder, while the rest keep the text encoder frozen. As can be seen, fine-tuning the text encoder improves cross-modal (e.g., zero-shot audio classification) performance, but it comes at the cost of losing its language understanding capabilities. This could be attributed to the lack of sufficient textual information in publicly available audio-text datasets (e.g., AudioSet [8]), which are relatively smaller in size compared to the datasets used to originally pretrain CLIP1. In this work, _we study the possibility of achieving SOTA performance for audio-text understanding with a pretrained text encoder that remains locked._ Incorporating audio understanding without tuning the CLIP text encoder is not only beneficial for capitalizing the pretrained text encoder's existing capabilities, but it also enables audio or audio+text guidance to any AI application that uses text-guidance from the CLIP text encoder (e.g., CLIP-guided image generation) without needing to retrain the application-specific model (e.g., diffusion model of a CLIP-guided image generation pipeline). Recent studies on language-visual pretraining [3, 4] have demonstrated that matching the sizes of the encoders is vital for achieving strong cross-modal performance while keeping one encoder locked. Nevertheless, most existing audio-text pretraining methods use audio encoders that are substantially smaller in size compared to the text encoder. Building upon this observation, we propose an audio-text pretraining architecture that jointly scales both the audio and language encoders, allowing for strong cross-modal performance with locked language tuning.
Footnote 1: CLIP’s original pretraining dataset is almost 200 times larger than AudioSet.
Second, existing contrastive audio-text pretraining techniques [35, 7, 13] have limited efficacy when understanding complex audio clips with multiple audio concepts (see results in Table 2). This shortcoming could be attributed to the absence of explicit fine-grained cross-modal grounding in these techniques, as they solely depend on global embeddings - i.e., a single vector summarizing the semantic content of a given text/audio input, to maintain the correspondence between audio and text. Although such global embedding alignment methods achieve reasonable performance in audio understanding, encoding audio and text into global embeddings will lose much fine-grained information, which is critical to distinguish hard audio-text pairs. In contrast, the token-level embeddings of the intermediate layers of transformer-based encoders for text and audio possess better understanding of fine-grained concepts as they have one or more token-level embeddings dedicated to each fine-grained concept in an audio/text input. Despite the knowledge in such token-level embeddings are explicitly exploited in other domains for fine-grained cross-modal grounding (e.g., text-image [34, 37] and text-video [39]), to the best of our knowledge, none of the previous audio-text pretraining techniques have explored this. To bridge this research gap, _this work studies the importance of exploiting token-level audio-text alignment along with the global-level alignment to yield better fine-grained understanding of audio-text correspondence._
**Contributions.** The contributions of this work are to propose:
* A neural architecture for effective contrastive audio-language pretraining that yields SOTA performance for audio-text understanding while keeping the text encoder locked.
* An improved audio-text pretraining objective function that explicitly encourages the model to learn fine-grained audio-text grounding to unlock complex audio comprehension.
We verify the superiority of our proposed framework using a diverse set of experiments - zero-shot audio classification, cross-modal retrieval, audio captioning and audio-guided image generation, which shows that our framework yields SOTA performance for both unimodal (audio or text) and multimodal (audio and text) understanding, while capitalising on the existing capabilities of the text encoder (e.g., text-guided image generation) and alleviating its substantial cost of re-training.
## 2 Related Work
### Audio-to-Text Grounding
Audio-to-text grounding refers to establishing a connection between textual concepts and audio concepts. In earliest research efforts [22; 24; 30; 10; 25], the audio-to-text grounding problem was approached as a classification task that utilized machine learning models to link audio recordings to pre-defined class labels. These approaches explored various machine learning models, spanning from traditional machine learning models such as Support Vector Machine [19; 31; 32], to advanced neural architectures like Convolutional Neural Networks (CNN) [26; 23; 14] and transformers [10; 18]. These models were designed to operate on either static [22; 24; 30] or trainable [12] time-frequency transformations of raw audio. While these approaches were successful in predicting the class labels used during training, they often struggled to predict novel audio concepts.
To address this challenge, recent works [35; 13; 7; 5] have explored that how to learn a joint embedding space to preserve the correspondence between audio and text with the language supervision from pretrained text encoders. This approach enables an open vocabulary for audio concepts to unlock applications such as zero-shot audio classification. These works typically employ two separate encoders: one for audio and one for text. Each encoder maps the given input, such as an audio clip or text prompt, to an embedding vector that preserves the knowledge of the input. These models learn the parameters of the encoders using a contrastive loss function [27], which aims to ensure that the embeddings of corresponding text-audio pairs are similar to each other while pushing the embeddings of dissimilar pairs further apart.
To exploit the language comprehension abilities of large pretrained text encoders [27; 28], these studies often initialize the text encoder with a pretrained model like CLIP [27]. These text encoders typically have billions of parameters and are trained using large-scale datasets such as LAION. For the audio encoder, different works adopt various architectures - Wav2CLIP adopts ResNet-based architecture [1], AudioCLIP employs ESResNeXt [12], and CLAP uses CNN14 [17]. Then, these works jointly train the audio and text encoder to preserve audio-text correspondence. In addition to the differences in audio encoder architecture, these works vary in how they construct text prompts and their use of auxiliary objectives. For example, AudioCLIP [13] creates text prompts by concatenating the discrete class labels of an audio clip from datasets like AudioSet [9]. On the other hand, CLAP [7] utilizes a set of audio-text datasets that provide semantically and syntactically meaningful captions for each audio clip to train their model. Additionally, AudioCLIP [13] incorporates auxiliary objectives to preserve audio-image and image-text correspondences along with the audio-text correspondance.
Despite these differences, all these works fail to yield optimal performance for audio-text understanding without fine-tuning the text encoder - i.e., keeping the weights of the pretrained text encoder. Previous studies [4; 3] suggest that this may be due to the size mismatch of the audio and text encoders observed in previous works [7; 13] (see Figure 1(a)). As shown in Fig. 1(b), simply scaling the audio encoder to match the size of the text encoder does not result in optimal performance too, especially given the relatively small dataset available with audio-text pairs. To address this issue, recent studies [3; 4] in other multimodal domains propose the use of neural architectures [4], that enable joint scaling of the sizes of multimodal encoders. Nevertheless, to the best of our knowledge, such architectures have not been explored in the context of audio-text understanding.
To bridge this gap, our work proposes a novel neural architecture for audio-text pretraining that solves the size mismatch problem in existing works, enabling our model to achieve optimal performance foraudio-to-text understanding without the need for fine-tuning the text encoder. The strenghts of our architecture is three-fold: (1) it preserves the language understanding capabilities of the pretrained text encoder, resulting in better audio-to-text grounding even with a relatively small audio-text dataset; (2) it is compatible with text encoders of any scale, making it more viable for learning audio understanding using evolving language models; and (3) it maps audio embeddings into the same representation space as the pretrained text encoder, introducing audio-guided controllability to any downstream application that relies on text guidance from the pretrained text encoder.
### Token-level Grounding
Existing audio-text contrastive pretraining techniques [35; 7; 13] do not explicitly consider the fine-grained alignment of audio concepts and textual concepts. These methods are only trained to align the global embeddings (i.e., last layer output from the encoders which summarizes all the concepts in the corresponding input) from the encoders. Such global embeddings may not be able to capture the individual concepts in complex audio clips containing multiple audio concepts. In contrast, the token-level hidden representations of audio/text encoders can effectively represent fine-grained concepts. Therefore, it is crucial to have token-level alignment between audio and text to achieve a more fine-grained understanding of audio.
This research challenge has been deeply studied in text-visual domain. For example, the works in [39; 2; 34; 37] have shown the importance of aligning the token-level hidden representation of the visual and text encoders to yield fine-grained visual understanding. The same concept has also been shown to be important for similar other learning tasks such as knowledge distillation [38; 33]. However, token-level contrast between audio and text modalities has not been well-studied in previous audio-text pretraining methods. Some works [16; 11] have focused on explicit token-level attention for audio-related downstream applications like audio captioning, but they are limited to specific tasks. Therefore, our work introduces token-level alignment as an auxiliary task for conventional contrastive audio-text pretraining to yield fine-grained audio understanding.
Figure 3: (a) Overview of CoLLAT, consisting of the Token Interaction Module for mapping audio tokens to text tokens by adopting a series of blocks as detailed in (b).
Figure 2: (a) Size ratio of audio and text encoders across different methods; and (b) zero-shot audio classification performance of CLAP (using FSD50K) with different audio encoder sizes, while keeping the size of the text encoder fixed to that of CoLLAT.
CoLlat
Our model aims to learn fine-grained audio understanding with the help of a pretrained text encoder, without fine-tuning the text encoder. Unlike existing techniques that use separate audio and text encoders, our model shares the pretrained text encoder, which remains frozen during training, for both audio and text encoding. The main strengths of this architecture are twofold: (1) it allows us to use large pretrained language encoders (with billions of parameters) for the text encoder without introducing a significant mismatch between the encoders used for audio and text, as this architecture scales the encoder size for each modality jointly; and (2) it enables us to leverage the implicit knowledge (i.e., intermediate layers) in the text encoder to encode audio. By using a frozen shared text encoder for both audio and text encoding, the corresponding audio-text pairs are forced to implicitly align their token-level embeddings to produce similar global embeddings. Figure 3 (a) shows a high-level schematic of the proposed model architecture.
We reuse the pretrained CLIP text encoder [27] as our text encoder, but our model is compatible with any other pretrained text encoder at any scale. This text encoder expects the static token embeddings of the given text prompt as input. To encode audio in a way that is compatible with the pretrained text encoder, we adopt a transformer-based audio encoder [10] to produce token-level embeddings for the audio tokens (i.e., patches in the log mel spectrogram) and a cross-attention-based token interaction module to produce corresponding text token embeddings from the audio token embeddings. As shown in Fig. 3 (a), our model takes either audio and/or text as inputs and produces corresponding embeddings while preserving audio-text correspondence. We formally define our architecture and training objectives below.
Let \(\mathbb{D}=\{X^{a},X^{t}\}\) be a set of audio-text pairs, where each pair is represented as \(\{x^{a},x^{t}\}\). Here, \(x^{a}\in\mathbb{R}^{P\times T}\) is the processed audio clip represented as a log mel spectrogram with \(F\) spectral components (e.g. Mel bins) and \(T\) time bins; and \(x^{t}\) is the tokenized text prompt explaining the audio concepts in \(x^{a}_{i}\).
### Model Architecture
For a given \(\{x^{a},x^{t}\}\), we preprocess \(x^{a}\) as a sequences of patches of its 2D audio spectrogram and \(x^{t}\) as a sequence of tokens by tokenizing the text prompt via the pretrained BPE-based tokenizer in CLIP. We denote these preprocessing functions for audio and text as \(preprocess_{a}()\) and \(preprocess_{t}()\), respectively, and the output from these functions as \(\{x^{a}_{i}\}_{i=1}^{M}\) and \(\{x^{t}_{j}\}_{j=1}^{N}\), respectively.
\(\{x^{a}_{i}\}_{i=1}^{M}=preprocess_{a}(x^{a}),\ \{x^{t}_{j}\}_{j=1}^{N}=preprocess_{t }(x^{t})\)
Here, \(M\) and \(N\) denote the maximum lengths of the audio and text sequences, respectively. The audio tokens \(\{x^{a}_{i}\}_{i=1}^{M}\) are embedded using a transformer-based audio encoder [10]\(f_{a}()\), which returns latent representations of the audio tokens \(\{h^{a}_{i}\}_{j=1}^{N}\). The text tokens \(\{x^{t}_{j}\}_{j=1}^{N}\) are initially embedded using the static embedding matrix \(E_{t}\) of the pretrained text encoder.
\(\{h^{a}_{i}\}_{i=1}^{M}=f_{a}(x^{a}),\ \{h^{t}_{j}\}_{j=1}^{N}=E_{t}\cdot x^{t}\)
The audio token embeddings are then passed through an audio-text token interaction module \(f_{a-t}()\), which aims to predict the corresponding text token embeddings \(\{h^{t}_{j}\}_{j=1}^{N}\) using the audio token embeddings \(\{h^{a}_{i}\}_{i=1}^{M}\) through fine-grained token alignment. The output from this module is denoted as \(\{h^{a-t}_{j}\}_{j=1}^{N}\) since it has the same number of tokens as the text token embeddings.
\(\{h^{a-t}_{j}\}_{j=1}^{N}=f_{a-t}(\{h^{a}_{i}\}_{i=1}^{M})\)
Our token interaction module consists of multiple cross-attention-based blocks [37] (see Fig.3 (b)) in series. These blocks take the currently predicted text token embeddings from the previous block as input and attempt to denoise them using the audio token embeddings of the corresponding audio clip. For the first block in the interaction module, we pass a set of random vectors as input, which are randomly initialized following a normal distribution with a mean of zero and a standard deviation of 1. This series of token interaction blocks is analogous to a denoising pipeline[15] that is guided by the audio token embeddings to produce the corresponding text token embeddings.
Finally, the global audio and text embeddings are produced by encoding \(\{h^{t}_{j}\}_{j=1}^{N}\) and \(\{h^{a-t}_{j}\}_{j=1}^{N}\) using the pretrained text encoder \(f_{t}()\).
\(z^{a}=f_{t}(\{h^{a-t}_{j}\}_{j=1}^{N}),\ z^{t}=f_{t}(\{h^{t}_{j}\}_{j=1}^{N})\)Our framework is trained using the following objective functions:
### Cross-modal Token-Level Alignment Loss
To achieve the fine-grained grounding between audio and text while making CoLLAT agnostic to the order of the classes presented in the text, this loss function aims to find one-to-one mapping between text token embeddings \(\{h^{t}_{j}\}_{j=1}^{N}\) from text and the produced text token embeddings \(\{h^{a-t}_{j}\}_{j=1}^{N}\) using audio and make those token-level embeddings close to each other. We formulate the cross-modal token-level loss function as follows:
\[L_{cross-token}=||g(\{h^{a-t}_{j}\}_{j=1}^{N})-\{h^{t}_{j}\}_{j=1}^{N}||_{1} \tag{1}\]
where \(g\) denotes the function that reorder the token-level embeddings of an instance \(\{h^{a-t}_{j}\}_{j=1}^{N}\) such that the cross-modal token-level loss function is minimized, which is pre-computed before each weight update. Given the time complexity of finding the optimal solution to \(g\) in Eq. 1, we greedily compute \(g\) as follows: given the token embeddings of \(\{h^{t}_{j}\}_{j=1}^{N}\) and \(\{h^{a-t}_{j}\}_{j=1}^{N}\), we initiate the process by starting from the right-most token embedding in \(\{h^{t}_{j}\}_{j=1}^{N}\). This token is then paired with the closest token embedding in \(\{h^{a-t}_{j}\}_{j=1}^{N}\). Next, we proceed to the second token embedding from the right in \(\{h^{t}_{j}\}_{j=1}^{N}\), excluding the already mapped token embedding in \(\{h^{a-t}_{j}\}_{j=1}^{N}\) from the possible set to be paired with the selected text token embedding. We iterate this process until we obtain the greedy one-to-one mapping between all the tokens in \(\{h^{t}_{j}\}_{j=1}^{N}\) and \(\{h^{a-t}_{j}\}_{j=1}^{N}\). We observed that this greedy approach could produce optimal re-ordering unless there are very similar classes in the same audio clip. Consequently, we do not anticipate a significant variation in the performance if the optimal reordering can be achieved, which we leave as future work given the time complexity of training the model with the optimal reordering.
### Cross-modal Global Contrastive Loss
Motivated by [27], this objective function is introduced to preserve the global correspondence of the audio-text pairs in the training dataset. Given the global embeddings of a batch \(B\) of audio and text pairs \(\{Z^{a}_{B},Z^{t}_{B}\}\), we compute the similarity metric between audio and text pairs in \(B\) as:
\[S^{a-t}_{B}=\tau*(Z^{a}_{B}\cdot{Z^{t}_{B}}^{T}),\ S^{t-a}_{B}=\tau*(Z^{t}_{B} \cdot{Z^{a}_{B}}^{T})\]
where \(\tau\) is a temperature parameter to scale the range of logits. The similarity matrix \(S\in\mathbb{R}^{|B|\times|B|}\) has \(|B|\) correct pairs in the diagonal and \(|B|^{2}-|B|\) incorrect pairs in the off-diagonal, where \(|B|\) refers to the batch size. We formulate our cross-modal global contrastive loss for the batch \(B\) as follows:
\[L_{cross-global}=\frac{1}{2|B|}\sum_{i=0}^{|B|}\log[softmax(S^{a-t}_{B})_{(i, i)}]+\log[softmax(S^{t-a}_{B})_{(i,i)}] \tag{2}\]
### Unimodal Token-Level Alignment Loss
Despite the reasonable performance of CoLLAT with the aforementioned two objective functions, we observed that the learned representations are not robust against weak perturbations of audio such as jitter (see ablation study in Section 5 for more details). To address this challenge, previous works [6; 40] adopts self-supervised objectives that aim to maximize the similarity of the embeddings of two different yet correlated views produced by perturbing the same audio input. Motivated by these works, we formulate our unimodal token-level loss for a given audio-text pair \(\{x^{a},x^{t}\}\) to improve the robustness our framework as follows:
\[L_{uni-token}=||\{h^{a-t}_{j}\}_{j=1}^{N}-\{\tilde{h}^{a-t}_{j}\}_{j=1}^{N}||_ {1} \tag{3}\]
where \(\{\tilde{h}^{a-t}_{j}\}_{j=1}^{N}\) is the token-level embeddings of the perturbed version of \(x^{a}\) by adding a Gaussian noise. This loss forces the model to understand all the token-level embeddings (i.e., fine-grained concepts) of \(x^{a}\) using its perturbed version.
**Overall Objective.** The final objective function of CoLLAT is as follows:
\[L=\lambda_{1}\cdot L_{cross-token}+\lambda_{2}\cdot L_{cross-global}+\lambda_{ 3}\cdot L_{uni-token} \tag{4}\]
where \(\lambda_{1},\lambda_{2}\) and \(\lambda_{3}\) control the importance of each loss term.
## 4 Experimental Setup
### Datasets
**Training.** We adopt the AudioSet dataset as the training dataset, which consists of 2,041,792 audio and text label pairs collected using 10s long YouTube videos. Each audio clip can have multiple text labels to represent different audio concepts in different granular levels.
**Evaluation.** We evaluate our model using four main downstream applications: (1) audio classification; (2) cross-modal retrieval; (3) audio captioning; and (4) audio-guided image generation. We adopt 6 widely used real-world datasets to evaluate our model, which are shown in Table 1.
**Pre-processing.** We preprocess audio clips by representing them as 128-dimensional log Mel filterbank (fbank) features, computed with a 25ms Hamming window every 10ms. This results in a 128x100\(t\) spectrogram as input to our audio encoder, where \(t\) is the length of the audio clip in seconds. Next, we split the spectrogram into a sequence of N \(16\times 16\) patches with an overlap of 6 in both time and frequency dimensions, where \(N\) is the number of patches and the effective input sequence length for the Transformer. We flatten each 16x16 patch to a 1D patch embedding of size 768 using a linear projection layer, which we refer to as the patch embedding layer. Since the Transformer architecture does not capture the input order information, and the patch sequence is also not in temporal order, we add a trainable positional embedding (also of size 768) to each patch embedding to allow the model to capture the spatial structure of the 2D audio spectrogram.
Following the findings in [7], we adopt the template of _"This is a sound of [class label 1], [class label 2],.... and [class label C]"_ to generate a text prompt for an audio clip with \(C\) number of class labels. Such templates have been shown to be more effective than simple concatenation [7]. Our decision to not utilize natural language descriptions or a keyword-to-caption augmentation [36] in favor of simple templates is due to the following reasons: (1) the cross-modal token-level loss function in CoLLAT aims to explicitly map each token in the given text prompt with its corresponding counterpart in the audio tokens (i.e., the patches in the Mel-Spectrogram of the audio). It may not be feasible to find such a mapping for certain tokens in natural language descriptions (e.g., stop words, adjectives) with the audio tokens. Consequently, having complex text prompts could adversely impact the training of CoLLAT; and (2) CoLLAT maintains the text encoder frozen during training, as it was pre-trained using text labels from the LAION dataset rather than natural language descriptions. Thus, attempting to produce text embeddings for natural language descriptions using the CLIP text encoder to train CoLLAT could introduce a data shift problem to negatively impact the training of CoLLAT.
**Training Details.** After performing a grid search, we set {\(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\)} as {\(1,0.1,0.01\)}. We utilize a GPU cluster comprised of 8 V100 GPU cards for training CoLLAT. With this hardware configuration, it takes approximately 320 GPU hours to train the model on AudioSet using the optimal hyperparameter setting.
## 5 Results
### Audio Classification
Table 2 presents a comparison of CoLLAT with five different baselines in audio classification. The first baseline, called Supervise, trains a transformer from scratch without relying on features from a pretrained joint embedding space, while the other four baselines learn a joint embedding space between audio and text via contrastive pretraining. To conduct our experiments, we used five widely used datasets: ESC-50, US8K, TUT, FSD50K, and AudioSet, under two settings: zero-shot (ZS) and linear probe (LP). For zero-shot (ZS) setting, we first extract the embeddings for the audio clips and the possible set of labels using the audio and text encoder in each baseline. Then, we compute cosine distance between these text and audio embeddings to rank different class labels for a given audio clip.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Task & \multicolumn{4}{c|}{Audio Classification} & \multicolumn{4}{c|}{Cross-modal Retrieval} & Audio & Audio-guided \\ & & & & & & & & & \\ & & & & & & & & & \\ \hline Dataset & ESC-50 & UrbanSound8K & TUT & FSD50K & AudioSet & VGG5ound & AudioCaps & AudioCaps & AudioCaps \\ \hline \# instances & 2K & 8K & 6.3K & 51K & 20.4K & 15K & 46K & 46K & 46K \\ \hline \# classes & 50 & 10 & 15 & 50 & 527 & 309 & N/A & N/A & N/A \\ \hline Metric & Acc. & Acc. & Acc. & MAP & MAP & MRR & MAP@10 & COCO Metrics & N/A \\ \hline \end{tabular}
\end{table}
Table 1: Dataset StatisticsFor linear probe (LP) setting, we freeze our audio encoder as feature extractor and only train a 1-layer transformer decoder to predict the class labels.
Our results show that the Supervise baseline does not perform well across all datasets, which suggests that audio-text pre-training generally improves performance, particularly for tasks with limited labeled data (such as ESC-50). Among the other baselines, CoLLAT outperforms them mostly across all datasets. In particular, performance improvements from CoLLAT are significant for datasets that consist of complex audio clips with multiple sound clips such as AudioSet and FSD50K. For instance, CoLLAT outperforms the strongest baseline for the AudioSet dataset by 50% under zero-shot setting and 22% under linear probe setting. This observation highlights the superiority of CoLLAT in comprehending complex audio clips with multiple audio concepts.
**Ablation Study using Zero-shot Audio Classification.** In Fig. 3(a), we provide ablations with different layer selections for token-level loss computation and the performance of CoLLAT with different number of cross-attention blocks. This ablation study guided to adopt the initial token-level layer in the text encoder to compute the token-level loss functions and 8 cross-attention blocks in the final architecture of the CoLLAT's token interaction module. Table 3(b) presents CoLLAT's performance with different audio encoder choices, which shows pre-trained AST as the most suitable backbone architecture. Additionally, we noticed that the AST backbone architecture converges faster compared
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline \hline & ESC-50 (Acc) & \multicolumn{3}{c|}{FSD50K (MAP)} & \multicolumn{3}{c}{} \\ \cline{2-11} & ZS & LP & ZS & LP & ZS & LP & ZS & LP & ZS & LP \\ \hline Supervise & N/A & 0.53 & N/A & 0.63 & N/A & 0.60 & N/A & 0.32 & N/A & 0.24 \\ YamNet & N/A & 0.85 & N/A & 0.78 & N/A & 0.63 & N/A & 0.50 & N/A & 0.27 \\ Wav2CILP & 0.41 & 0.86 & 0.40 & 0.81 & 0.24 & 0.63 & 0.03 & 0.43 & 0.02 & 0.30 \\ AudioCLIP & 0.69 & **0.97** & 0.65 & **0.90** & 0.27 & 0.70 & 0.13 & 0.54 & 0.03 & 0.28 \\ CLAP & 0.77 & 0.96 & 0.73 & 0.88 & **0.30** & 0.72 & 0.14 & 0.59 & 0.06 & 0.32 \\ CoLLAT & **0.84** & **0.97** & **0.77** & 0.89 & 0.29 & **0.74** & **0.19** & **0.64** & **0.09** & **0.39** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for Audio Classification. Here, ZS and LP denote zero-shot and linear-probe experimental setups for audio classification. For uni-label datasets – ESC-50, US8K and TUT, Accuracy is reported as the metric. multi-label datasets (FSD50K and AudioSet) adopts MAP (Mean Average Precision). Higher the value is better for both metrics.
Figure 4: (a) Zero-shot audio classification performance of CoLLAT for ESC-50 and FSD-50K: **solid line** – when different intermediate layers are used to compute the token alignment loss (x axis denotes the half of the index of the layer used to compute the loss); and **dashed line** – when the number of cross-attention blocks in the token interaction module varies (x axis denotes the number of cross attention blocks). (b) Ablation study with different backbone audio networks using ZS audio classification. The pre-trained AST was pre-trained for audio classification using AudioSet.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline & VGGSound (MRR) & AudioCaps (MAP@10) \\ \hline \(\Lambda\)\(\rightarrow\)1 & 1–\(\Lambda\) & A–\(\Gamma\) & T\(\rightarrow\)A \\ \hline Wav2CILP & 0.057 & 0.068 & 0.52 & 0.38 \\ AudioCLIP & 0.060 & 0.073 & 0.58 & 0.52 \\ CLAP & 0.063 & 0.074 & 0.57 & 0.47 \\ CoLLAT & **0.093** & **0.112** & **0.62** & **0.59** \\ \hline \hline \end{tabular}
\end{table}
Table 3: (a) Ablation of loss functions using Zero-shot Audio Classification with clean and noisy audio samples; (b) Results for Cross-modal Retrieval, following the experimental setups in [35] and [36] for VGGSound and AudioCaps respectively. Higher is better for all the figures.
to the other backbones. This could be attributed to the stronger inductive biases present in the other encoders as compared to AST [10].
We conducted an ablation study in Table 2(a) to demonstrate the significance of the loss functions used in CoLLAT. The proposed token-level cross-modal loss function, \(L_{cross-token}\), is shown to play a crucial role in enhancing audio-understanding performance. This is especially evident in FSD-50K, which consists of audio clips with multiple labels, where the performance for zero-shot audio classification is improved by \(46.2\%\) in MAP. We also found that the robustness of CoLLAT substantially improves with the use of the unimodal-token level loss, \(L_{uni-token}\), as it enhances the zero-shot classification performance for noisy audio samples that are generated by adding Gaussian noise with zero mean and 0.1 standard deviation. \(L_{uni-token}\) improves the performance for such samples by \(11.3\%\) and \(30.8\%\) for ESC-50 and FSD-50K, respectively.
### Cross-modal Retrieval
Since pretrained CLIP text encoder maps text to a joint embedding space of images and text, being able to map audio into the same CLIP embedding space (without fine-tuning the text encoder) enables cross-modal retrieving capabilities between audio, text and images. Table 2(b) shows the results collected for cross-modal retrieval task using the VGGSound and AudioCaps datasets. We adopt the experimental setups proposed in [35] for the audio-image and image-audio retrieval tasks and [36] for the audio-text and text-audio retrieval tasks. For a fair comparison, we train all the baselines with a frozen text encoder.
As can be seen, CoLLAT outperforms all the baselines by as much as 47.6% and 51.4% in Mean Reciprocal Rank (MRR) for the Audio-to-Image and Image-to-Audio retrieval tasks, and 13.4% and 6.7% in MAP for Audio-to-Text retrieval, and Text-to-Audio retrieval, respectively. We observed that the performance improvements are particularly significant due to the ability of our approach to understand fine-grained concepts to differentiate hard image-text pairs in the VGGSound.
### Audio Captioning
We report the results for Audio Captioning using the AudioCaps dataset in Table. 4. Following [35], we only trained a 1-layer transformer on top of the CoLLAT's output, while keeping CoLLAT's parameters fixed to collect the results for audio captioning. We adopt the same experimental setting for the baselines too. As can be seen, CoLLAT outperforms CLAP and AudioCLIP by 9.7%, 13.2%,
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline & \(BLEU_{1}\) & \(BLEU_{2}\) & \(BLEU_{3}\) & \(BLEU_{4}\) & \(CIDEr\) & \(METEOR\) & \(ROUGE\) & \(SPICE\) \\ \hline AudioCLIP & 50.2 & 34.9 & 22.1 & 14.4 & 14.4 & 28.7 & 36.1 & 8.7 \\ CLAP & 48.4 & 31.7 & 20.4 & 13.4 & 13.7 & 30.2 & 35.1 & 8.9 \\ CoLLAT & **55.1** & **37.2** & **24.3** & **15.1** & **16.3** & **43.0** & **39.6** & **11.3** \\ \hline Human & 65.4 & 48.9 & 37.3 & 29.1 & 28.8 & 91.3 & 49.6 & 21.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for Audio Captioning using the AudioCaps dataset. The experimental setup freezes each model as a feature extractor and only train a 1-layer transformer decoder to predict text sequences. We follow the DCASE challenge setup [20] and report standard COCO caption evaluation metrics.
Figure 5: Audio-guided Image Generation Results
42.4%, 9.7% and 26.9% in \(BLEU_{1}\), \(CIDEr\), \(METEOR\), \(ROUGE\) and \(SPICE\) respectively. These results further show the ability of CoLLAT fine-grained details in the sound clip as it is needed to generate a holistic caption for a given audio clip.
### Audio-guided Image Generation
Given CoLLAT's ability to produce corresponding text token-level embeddings for a given audio clip, it can be naively used to provide guidance for any downstream application built on top of text token embeddings from pretrained text encoders. We adopt text-guided image generation as one such example. Most existing text-guided image generation models adopt token-level embeddings from a pretrained text encoder to guide image generation using a given text prompt. Nevertheless, most existing audio-to-text grounding techniques cannot be naively extended to introduce audio guidance for such applications, as they are unable to generate token-level text embeddings for a given audio prompt.
To evaluate this task, we adopt a pretrained text-guided image generation model following [29], which requires text guidance from a pretrained CLIP text encoder. As a baseline, we adopt the same architecture as CoLLAT, but trained only using \(L_{cross-global}\). We denote this baseline as CLAP in Figure 5. As shown in Figure 5, CoLLAT is capable of generating images that cover all the concepts present in the given audio clip. This observation further confirms the potential of our approach to comprehend complex audio clips with multiple concepts.
## 6 Broader Impact
Our work has a significant impact as it addresses the limitations of conventional audio understanding models by enhancing their ability to predict finer-grained classes within complex audio clips. We demonstrated that CoLLAT achieves state-of-the-art performance in audio understanding, as validated by its success in various downstream applications. Furthermore, it unlocks the potential for audio guidance in applications built upon pretrained language models. Thus, our work opens up possibilities for improved audio-to-text grounding, advancing multimodal applications. Additionally, considering the capability of CoLLAT to represent audio clips using either text or image, it holds promise in enhancing accessibility and promoting inclusivity for individuals with diverse disabilities.
It is important to note that the model developed in this research does not possess the ability to uniquely identify individuals in audio recordings for tasks such as speaker recognition and speaker identification. CoLLAT does not possess the ability to comprehend information conveyed through speech in audio clips. Given the privacy concerns associated with the generation of human faces using audio-guided image generation models, it is advised to exercise caution and refrain from utilizing them for such purposes. It is crucial to prioritize privacy and ethical considerations when applying these technologies to avoid any potential misuse or infringement on individuals' rights.
## 7 Conclusion
This work proposes CoLLAT, a novel framework for learning an audio understanding model using natural language supervision from a locked pretrained language model. The proposed framework effectively exploits the implicit knowledge in large language models instead of just relying on their final global embeddings. We introduce a novel pretraining objective for CoLLAT that enforces token-level alignment between audio and text modalities to yield fine-grained audio understanding. Our results demonstrate that CoLLAT achieves state-of-the-art performance for downstream applications such as audio classification, cross-modal retrieval and audio captioning. Since CoLLAT keeps the pre-trained language model locked during training, our experiments also show that it unlocks new applications such as audio-guided image generation.
Since CoLLAT can be considered a modality-agnostic framework for learning knowledge understanding models for new modalities with natural language supervision, extending CoLLAT to new modalities (video) is an interesting future direction to explore. Another notable limitation of CoLLAT is its inability to understand speech signals in the audio clips. Therefore, it could be worthwhile to explore how to introduce speech signals to CoLLAT while preserving its existing capabilities.
## References
* (1) Honglie Chen, Weidi Xie, Andrea Vedaldi, and Andrew Zisserman. Vggsound: A large-scale audio-visual dataset. In _Proc. of ICASSP_, pages 721-725, 2020.
* (2) Shizhe Chen, Yida Zhao, Qin Jin, and Qi Wu. Fine-grained video-text retrieval with hierarchical graph reasoning. In _Proc. of CVPR_, pages 10638-10647, 2020.
* (3) Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, et al. Pali: A jointly-scaled multilingual language-image model. _arXiv preprint arXiv:2209.06794_, 2022.
* (4) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_, 2022.
* (5) Soham Deshmukh, Benjamin Elizalde, and Huaming Wang. Audio retrieval with wavtext5k and clap training. _arXiv preprint arXiv:2209.14275_, 2022.
* (6) Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. _arXiv preprint arXiv:2106.14112_, 2021.
* (7) Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang. Clap: Learning audio concepts from natural language supervision. _arXiv preprint arXiv:2206.04769_, 2022.
* (8) Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. Audio set: An ontology and human-labeled dataset for audio events. In _Proc. of ICASSP_, 2017.
* (9) Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. Audio set: An ontology and human-labeled dataset for audio events. In _Proc. of ICASSP_, pages 776-780, 2017.
* (10) Yuan Gong, Yu-An Chung, and James Glass. Ast: Audio spectrogram transformer. _arXiv preprint arXiv:2104.01778_, 2021.
* (11) Felix Gontier, Romain Serizel, and Christophe Cerisara. Automated audio captioning by fine-tuning bart with audioset tags. In _Proc. of DCASE_, 2021.
* (12) Andrey Guzhov, Federico Raue, Jorn Hees, and Andreas Dengel. Esresne (x) t-fbsp: Learning robust time-frequency transformation of audio. In _Proc. of IJCNN_, pages 1-8, 2021.
* (13) Andrey Guzhov, Federico Raue, Jorn Hees, and Andreas Dengel. Audioclip: Extending clip to image, text and audio. In _Proc. of ICASSP_, 2022.
* (14) Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. Cnn architectures for large-scale audio classification. In _Proc. of ICASSP_, 2017.
* (15) Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In _Proc. of NeurIPS_, 2020.
* (16) Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim. AudioCaps: Generating Captions for Audios in The Wild. In _Proc. of NAACL-HLT_, 2019.
* (17) Qiuqiang Kong, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D Plumbley. Panns: Large-scale pretrained audio neural networks for audio pattern recognition. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, 28:2880-2894, 2020.
* (18) Khaled Koutini, Jan Schluter, Hamid Eghbal-zadeh, and Gerhard Widmer. Efficient training of audio transformers with patchout. _arXiv preprint arXiv:2110.05069_, 2021.
* [19] Chien-Chang Lin, Shi-Huang Chen, Trieu-Kien Truong, and Yukon Chang. Audio classification and categorization based on wavelets and support vector machine. _IEEE Transactions on Speech and Audio Processing_, 13(5):644-651, 2005.
* [20] Samuel Lipping, Konstantinos Drossos, and Tuoams Virtanen. Crowdsourcing a dataset of audio captions. In _Proc. of DCASE_, 2019.
* [21] Richard F Lyon. Machine hearing: An emerging field [exploratory dsp]. _IEEE signal processing magazine_, 27(5):131-139, 2010.
* [22] Annamaria Mesaros, Aleksandr Diment, Benjamin Elizalde, Toni Heittola, Emmanuel Vincent, Bhiksha Raj, and Tuomas Virtanen. Sound event detection in the dcase 2017 challenge. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, 27(6):992-1006, 2019.
* [23] Kamalesh Palanisamy, Dipika Singhania, and Angela Yao. Rethinking cnn models for audio classification. _arXiv preprint arXiv:2007.11154_, 2020.
* [24] Karol J Piczak. Environmental sound classification with convolutional neural networks. In _Proc. of MLSP_, 2015.
* [25] Karol J Piczak. Esc: Dataset for environmental sound classification. In _Proc. of ACM MM_, 2015.
* [26] Jordi Pons and Xavier Serra. Randomly weighted cnns for (music) audio classification. In _Proc. of ICASSP_, 2019.
* [27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _Proc. of ICML_, 2021.
* [28] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _The Journal of Machine Learning Research_, 21(1):5485-5551, 2020.
* [29] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proc. of CVPR_, 2022.
* [30] Justin Salamon and Juan Pablo Bello. Deep convolutional neural networks and data augmentation for environmental sound classification. _IEEE Signal processing letters_, 24(3):279-283, 2017.
* [31] Nicolas Scaringella and Daniel Mlynek. A mixture of support vector machines for audio classification. _IEEE MIREX, London_, 2005.
* [32] Sameh Souli and Zied Lachiri. Audio sounds classification using scattering features and support vectors machines for medical surveillance. _Applied Acoustics_, 130:270-282, 2018.
* [33] Frederick Tung and Greg Mori. Similarity-preserving knowledge distillation. In _Proc. of CVPR_, pages 1365-1374, 2019.
* [34] Michael Wray, Diane Larlus, Gabriela Csurka, and Dima Damen. Fine-grained action retrieval through multiple parts-of-speech embeddings. In _Proc. of CVPR_, pages 450-459, 2019.
* [35] Ho-Hsiang Wu, Prem Seetharaman, Kundan Kumar, and Juan Pablo Bello. Wav2clip: Learning robust audio representations from clip. In _Proc. of ICASSP_, 2022.
* [36] Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, and Shlomo Dubnov. Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation. In _Proc. of ICASSP_, 2023.
* [37] Chen-Wei Xie, Jianmin Wu, Yun Zheng, Pan Pan, and Xian-Sheng Hua. Token embeddings alignment for cross-modal retrieval. In _Proc. of ACM-MM_, 2022.
* [38] Guodong Xu, Ziwei Liu, Xiaoxiao Li, and Chen Change Loy. Knowledge distillation meets self-supervision. In _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IX_, pages 588-604. Springer, 2020.
* [39] Jianwei Yang, Yonatan Bisk, and Jianfeng Gao. Taco: Token-aware cascade contrastive learning for video-text alignment. In _Proc. of CVPR_, pages 11562-11572, 2021.
* [40] Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, and Bixiong Xu. Ts2vec: Towards universal representation of time series. In _Proc. of AAAI_, 2022. | ## Review
### Summary
The paper presents CoLLAT, an innovative audio-language framework that utilizes a frozen pre-trained text model to enhance audio-text grounding via a contrastive learning approach. It demonstrates strong performance across various tasks, including audio classification, cross-modal retrieval, and audio-guided image generation. The authors conducted extensive experiments, showcasing significant improvements over existing state-of-the-art methods, while also proposing novel loss terms to improve the alignment between audio and text embeddings. Overall, the framework is justified through thorough evaluations and the authors have committed to refining their results in the final version.
### Strengths
- The framework effectively leverages a frozen pre-trained text encoder for audio-text alignment, maintaining the strong capabilities of the language model.
- The introduction and related work sections are well-written, clearly presenting the challenges and contributions.
- The proposed loss functions are well-motivated and demonstrate the efficacy of the approach in diverse tasks.
- Extensive evaluations and ablation studies validate the architectural choices and show the advantages of the proposed modules.
### Weaknesses
- Insufficient details are provided for reproducing experiments, specifically regarding the architecture of the audio encoder and the token interaction module.
- The token interaction module's functionality and the permutation used in the cross-modal token-level alignment loss require better clarification.
- The paper lacks evaluations on audio-text retrieval tasks and does not adequately support claims regarding the effectiveness of the constructed prompts.
- Some notation and presentation issues exist, making it challenging to follow certain sections.
### Questions
- What is the computational cost for training the framework and what hardware was used?
- How is the reordering 'g' in the cross-modal token-level alignment loss computed, and does it vary across training examples?
- Was the token sequence of length N provided as input (query) initialized randomly?
- Why did the authors choose learned position embeddings over fixed sinusoidal or rotary embeddings?
### Soundness
**Score:** 3
**Description:** 3 = good. The methodology is sound and well-justified, with solid experimental results supporting the claims made.
### Presentation
**Score:** 3
**Description:** 3 = good. While clear in many sections, some areas require more explicit explanations and better formatting for clarity.
### Contribution
**Score:** 3
**Description:** 3 = good. The paper contributes significantly to the field with a novel approach and solid experimental validation, although it could benefit from additional evaluations.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and shows moderate-to-high impact potential but needs minor improvements in clarity and completeness of experiments.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper is original and presents a novel approach to audio-language processing, demonstrating significant results across multiple tasks. While there are minor weaknesses in clarity and completeness, the overall contribution is valuable and the authors have committed to addressing concerns in the final version.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
Ahmed Khaled
Princeton University
&Konstantin Mishchenko
Samsung AI Center
&Chi Jin
Princeton University
###### Abstract
This paper proposes a new easy-to-implement _parameter-free_ gradient-based optimizer: DoWG (Distance over Weighted Gradients). We prove that DoWG is _efficient_--matching the convergence rate of optimally tuned gradient descent in convex optimization up to a logarithmic factor without tuning any parameters, and _universal_--automatically adapting to both smooth and nonsmooth problems. While popular algorithms following the AdaGrad framework compute a running average of the squared gradients to use for normalization, DoWG maintains a new distance-based weighted version of the running average, which is crucial to achieve the desired properties. To complement our theory, we also show empirically that DoWG trains at the edge of stability, and validate its effectiveness on practical machine learning tasks.
## 1 Introduction
We study the fundamental optimization problem
\[\min_{x\in\mathcal{X}}f(x),\] (OPT)
where \(f\) is a convex function, and \(\mathcal{X}\) is a convex, closed, and bounded subset of \(\mathbb{R}^{d}\). We assume \(f\) has at least one minimizer \(x_{*}\in\mathcal{X}\). We focus on gradient descent and its variants, as they are widely adopted and scale well when the model dimensionality \(d\) is large (Bottou et al., 2018). The optimization problem (OPT) finds many applications: in solving linear systems, logistic regression, support vector machines, and other areas of machine learning (Boyd and Vandenberghe, 2004). Equally important, methods designed for (stochastic) convex optimization also influence the intuition for and design of methods for nonconvex optimization- for example, momentum (Polyak, 1964), AdaGrad (Duchi et al., 2010), and Adam (Kingma and Ba, 2015) were all first analyzed in the convex optimization framework.
As models become larger and more complex, the cost and environmental impact of training have rapidly grown as well (Sharir et al., 2020; Patterson et al., 2021). Therefore, it is vital that we develop more efficient and effective methods of solving machine learning optimization tasks. One of the chief challenges in applying gradient-based methods is that they often require tuning one or more stepsize parameters (Goodfellow et al., 2016), and the choice of stepsize can significantly influence a method's convergence speed as well as the quality of the obtained solutions, especially in deep learning (Wilson et al., 2017).
The cost and impact of hyperparameter tuning on the optimization process have led to significant research activity in designing parameter-free and adaptive optimization methods in recent years, see e.g. (Orabona and Cutkosky, 2020; Carmon and Hinder, 2022) and the references therein.
We say an algorithm is _universal_ if it adapts to many different problem geometries or regularity conditions on the function \(f\)(Nesterov, 2014; Levy et al., 2018; Grimmer, 2022). In this work, we focus on two regularity conditions: (a) \(f\) Lipschitz and (b) \(f\) smooth. Lipschitz functionshave a bounded rate of change, that is, there exists some \(G>0\) such that for all \(x,y\in\mathcal{X}\) we have \(|f(x)-f(y)|\leq G\|x-y\|\). The Lipschitz property is beneficial for the convergence of gradient-based optimization algorithms. They converge even faster on smooth functions, which have continuous derivatives; that is, there exists some \(L>0\) such that for all \(x,y\in\mathcal{X}\) we have \(\|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|\). Smoothness leads to faster convergence of gradient-based methods than the Lipschitz property. Universality is a highly desirable property because in practice the same optimization algorithms are often used for both smooth and nonsmooth optimization (e.g. optimizing both ReLU and smooth networks).
The main question of our work is as follows:
Can we design a universal, parameter-free gradient descent method for (OPT)?
Existing universal variants of gradient descent either rely on line search (Nesterov, 2014; Grimmer, 2022), bisection subroutines (Carmon and Hinder, 2022), or are not parameter-free (Hazan and Kakade, 2019; Levy et al., 2018; Kavis et al., 2019). Line search algorithms are theoretically strong, achieving the optimal convergence rates in both the nonsmooth and smooth settings with only an extra log factor. Through an elegant application of bisection search, Carmon and Hinder (2022) design a parameter-free method whose convergence is only double-logarithmically worse than gradient descent with known problem parameters. However, this method requires resets, i.e. restarting the optimization process many times, which can be very expensive in practice. Therefore, we seek a universal, parameter-free gradient descent method for (OPT) with **no search subroutines.**
**Our contributions.** We provide a new algorithm that meets the above requirements. Our main contribution is **a new universal, parameter-free gradient descent method with no search subroutines.** Building upon the recently proposed Distance-over-Gradients (DoG) algorithm (Ivgi et al., 2023), we develop a new method, DoWG (Algorithm 1), that uses a different stepsize with adaptively weighted gradients. We show that DoWG automatically matches the performance of gradient descent on (OPT) up to logarithmic factors with no stepsize tuning at all. This holds in both the nonsmooth setting (Theorem 3) and the smooth setting (Theorem 4). Finally, we show that DoWG is competitive on real machine learning tasks (see Section 4).
## 2 Related Work
There is a lot of work on adaptive and parameter-free approaches for optimization. We summarize the main properties of the algorithms we compare against in Table 1. We enumerate some of the major approaches below:
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Algorithm** & **No search** & **Parameter-free** & **Universal** & **GD framework** \\ \hline Polyak stepsize (Polyak, 1987; Hazan and Kakade, 2019) & ✓ & ✗ & ✓ & ✓ \\ Coin betting with normalization & ✓ & ✓ & ✓\({}^{(*)}\) & ✗ \\ (Orabona and Pal, 2016; Orabona and Cutkosky, 2020; Orabona, 2023) & & & \\ Nesterov line search (Nesterov, 2014) & ✗ & ✓ & ✓ & ✓ \\ AdaGrad & ✓ & ✗ & ✓ & ✓ \\ (Duchi et al., 2010; Levy et al., 2018; Ene et al., 2021) & & & \\ Adam & ✓ & ✗ & ✓ & ✓ \\ (Kingma and Ba, 2015; Li et al., 2023) & & & & \\ Bisection search (Carmon and Hinder, 2022) & ✗ & ✓ & ✓ & ✓ \\ D-Adaptation (Defazio and Mishchenko, 2023) & ✓ & ✓ & ✗ & ✓ \\ DoG (Ivgi et al., 2023) & ✓ & ✓ & ✓\({}^{(*)}\) & ✓ \\ DoWG (**new, this paper!**) & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
* Result appeared after the initial release of this paper.
\end{table}
Table 1: A comparison of different adaptive algorithms for solving (OPT). "Universal" means that the algorithm can match the rate of gradient descent on both smooth and nonsmooth objectives up to polylogarithmic factors. "No search" means the algorithm does not reset. "GD framework" refers to algorithms that follow the framework of Gradient Descent.
**Polyak stepsize.** When \(f_{*}=f(x_{*})\) is known, the Polyak stepsize (Polyak, 1987) is a theoretically-grounded, adaptive, and universal method (Hazan and Kakade, 2019). When \(f_{*}\) is not known, Hazan and Kakade (2019) show that an adaptive re-estimation procedure can recover the optimal convergence rate up to a log factor when \(f\) is Lipschitz. Loizou et al. (2021) study the Polyak stepsize in stochastic non-convex optimization. Orvieto et al. (2022) show that a variant of the Polyak stepsize with decreasing stepsizes can recover the convergence rate of gradient descent, provided the stepsize is initialized properly. Unfortunately, this initialization requirement makes the method not parameter-free.
**The doubling trick.** The simplest way to make an algorithm parameter-free is the doubling-trick. For example, for gradient descent for \(L\)-smooth and convex optimization, the stepsize \(\eta=\frac{1}{L}\) results in the convergence rate of
\[f(\hat{x})-f_{*}=\mathcal{O}\left(\frac{D_{0}L^{2}}{T}\right), \tag{1}\]
where \(D_{0}=\|x_{0}-x_{*}\|\). We may therefore start with a small estimate \(L_{0}\) of the smoothness constant \(L\), run gradient descent for \(T\) steps, and return the average point. We restart and repeat this for \(N\) times, and return the point with the minimum function value. So long as \(N\geq\log\frac{L}{L_{0}}\), we will return a point with loss satisfying eq. (1) at the cost of only an additional logarithmic factor. This trick and similar variants of it appear in the literature on prediction with expert advice and online learning (Cesa-Bianchi et al., 1997; Cesa-Bianchi and Lugosi, 2006; Hazan and Megiddo, 2007). It is not even needed to estimate \(N\) in some cases, as the restarting can be done adaptively (Streeter and McMahan, 2012). In practice, however, the performance of doubling trick suffers from restarting the optimization process and throwing away useful that could be used to guide the algorithm.
**Parameter-free methods.** Throughout this paper, we use the term "parameter-free algorithms" to describe optimization algorithms that do not have any tuning parameters. We specifically consider only the deterministic setting with a compact domain. As mentioned before, Carmon and Hinder (2022) develop an elegant parameter-free and adaptive method based on bisection search. Bisection search, similar to grid search, throws away the progress of several optimization runs and restarts, which may hinder their practical performance. Ivgi et al. (2023); Defazio and Mishchenko (2023) recently developed variants of gradient descent that are parameter-free when \(f\) is Lipschitz. However, D-Adaptation (Defazio and Mishchenko, 2023) has no known guarantee under smoothness, while DoG (Ivgi et al., 2023) was only recently (after the initial release of this paper) shown to adapt to smoothness. We compare against the convergence guarantees of DoG in Section 3.2. For smooth functions, Malitsky and Mishchenko (2020) develop AdGD, a method that efficiently estimates the smoothness parameter on-the-fly from the training trajectory. AdGD is parameter-free and matches the convergence of gradient descent but has no known guarantees for certain classes of Lipschitz functions. A proximal extension of this method has been proposed by Latafat et al. (2023).
**Parameter-free methods in online learning.** In the online learning literature, the term "parameter-free algorithms" was originally used to describe another class of algorithms that adapt to the unknown distance to the optimal solution (but can still have other tuning parameters such as Lipschitz constant). When the Lipschitz parameter is known, approaches from online convex optimization such as coin betting (Orabona and Pal, 2016), exponentiated gradient (Streeter and McMahan, 2012; Orabona, 2013), and others (McMahan and Orabona, 2014; Orabona and Cutkosky, 2020; Orabona and Pal, 2021; Orabona and Tommasi, 2017) yield rates that match gradient descent up to logarithmic factors. Knowledge of the Lipschitz constant can be removed either by using careful restarting schemes (Mhammedi et al., 2019; Mhammedi and Koolen, 2020), or adaptive clipping on top of coin betting (Cutkosky, 2019). For optimization in the deterministic setting, it is later clarified that, by leveraging the normalization techniques developed in (Levy, 2017), the aforementioned online learning algorithms can be used without knowing other tuning parameters (i.e., achieve "parameter-free" in the sense of this paper) for optimizing both Lipschitz functions (Orabona and Pal, 2021) and smooth functions (Orabona, 2023). Concretely, as shown in Orabona (2023) (which appears after the initial release of this paper), combining algorithms in Streeter and McMahan (2012); Orabona and Pal (2016) with normalization techniques (Levy, 2017) yields new algorithms that are also search-free, parameter-free (in the sense of this paper), and universal. However, these algorithms are rather different from DoWG in algorithmic style: these algorithms only use normalized gradients while DoWG does use the magnitudes of the gradients; DoWG falls in the category of gradient descent algorithms with adaptive learning rate, while these algorithms do not.
**Line search.** As mentioned before, line-search-based algorithms are universal and theoretically grounded (Nesterov, 2014) but are often expensive in practice (Malitsky and Mishchenko, 2020).
**AdaGrad family of methods.**Li and Orabona (2019) study a variant of the AdaGrad stepsizes in the stochastic convex and non-convex optimization and show convergence when the stepsize is tuned to depend on the smoothness constant. Levy et al. (2018) show that when the stepsize is tuned properly to the diameter of the domain \(\mathcal{X}\) in the constrained convex case, AdaGrad-Norm adapts to smoothness. Ene et al. (2021) extend this to AdaGrad and other algorithms, and also to variational inequalities. Ward et al. (2019); Traore and Pauwels (2021) show the convergence of AdaGrad-Norm for any stepsize for non-convex (resp. convex) optimization, but in the worst case the dependence on the smoothness constant is worse than gradient descent. Liu et al. (2022) show that AdaGrad-Norm converges in the unconstrained setting when \(f\) is quasi-convex, but their guarantee is worse than gradient descent. We remark that all AdaGrad-style algorithms mentioned above require tuning stepsizes, and are thus not parameter-free.
**Alternative justifications for normalization.** There are other justifications for why adaptive methods work outside of universality. Zhang et al. (2020) study a generalized smoothness condition and show that in this setting tuned clipped gradient descent can outperform gradient descent. Because the effective stepsize used in clipped gradient descent is only a constant factor away from the effective stepsize in normalized gradient descent, (Zhang et al., 2020), also show that this improvement holds for NGD. Zhang et al. (2020) observe that gradient clipping and normalization methods outperform SGD when the stochastic gradient noise distribution is heavy-tailed. However, Kunstner et al. (2023) later observe that adaptive methods still do well even when the effect of the noise is limited.
## 3 Algorithms and theory
In this section we first review the different forms of adaptivity in gradient descent and normalized gradient descent, and then introduce our proposed algorithm DoWG. The roadmap for the rest of the paper is as follows: we first review the convergence of gradient descent in the Lipschitz and smooth settings, and highlight the problem of divergence under stepsize misspecification, and how normalization fixes that. Then, we introduce our main new algorithm, DoWG, and give our main theoretical guarantees for the algorithm. Finally, we evaluate the performance of DoWG on practical machine learning problems.
### Baselines: gradient descent and normalized gradient descent
We start our investigation with the standard Gradient Descent (GD) algorithm:
\[x_{t+1}=\Pi_{\mathcal{X}}(x_{t}-\eta\nabla f(x_{t})),\] (GD)
where \(\Pi_{\mathcal{X}}\) is the projection on \(\mathcal{X}\) (when \(\mathcal{X}=\mathbb{R}^{d}\), this is just the identity operator). The iterations (GD) require specifying the stepsize \(\eta>0\). When \(f\) is \(G\)-Lipschitz, gradient descent achieves the following standard convergence guarantee:
**Theorem 1**.: _Suppose that \(f\) is convex with minimizer \(x_{*}\). Let \(f_{*}=f(x_{*})\). Let \(D_{0}\stackrel{{\text{def}}}{{=}}\|x_{0}-x_{*}\|\) be the initial distance to the optimum. Denote by \(\hat{x}_{T}=\frac{1}{T}\sum_{t=0}^{T-1}x_{t}\) the average iterate returned by GD. Then:_
* _(Bubeck, 2015)_ _If_ \(f\) _is_ \(G\)_-Lipschitz, the average iterate satisfies for any stepsize_ \(\eta>0\)_:_ \[f(\bar{x}_{T})-f_{*}\leq\frac{D_{0}^{2}}{\eta T}+\frac{\eta G^{2}}{2},\] (2)
* _(Nesterov, 2018)_ _If_ \(f\) _is_ \(L\)_-smooth, then for all_ \(\eta<\frac{2}{L}\) _the average iterate satisfies_ \[f(\hat{x}_{T})-f_{*}\leq\frac{2LD_{0}^{2}}{4+T\eta L(2-L\eta)}.\] (3)
Minimizing eq. (2) over \(\eta\) gives \(f(\bar{x}_{T})-f_{*}=\mathcal{O}\left(\frac{D_{0}G}{\sqrt{T}}\right)\) with \(\eta=\frac{D_{0}}{G\sqrt{T}}\). We have several remarks to make about this rate for gradient descent. First, the optimal stepsize depends on both the distanceto the optimum \(D_{0}\) and the Lipschitz constant \(G\), and in fact, this rate is in general optimal (Nesterov, 2018, Theorem 3.2.1). Moreover, if we misspecify \(D_{0}\) or \(G\) while tuning \(\eta\), this does not in general result in divergence but may result in a slower rate of convergence. On the other hand, for the smooth setting the optimal stepsize is \(\eta=\frac{1}{L}\) for which \(f(x_{T})-f_{*}\leq\mathcal{O}\left(\frac{LD_{0}^{2}}{T}\right)\). Unfortunately, to obtain this rate we have to estimate the smoothness constant \(L\) in order to choose a stepsize \(\eta<\frac{2}{L}\), and this dependence is _hard_: if we overshoot the upper bound \(\frac{2}{L}\), the iterations of gradient descent can diverge very quickly, as shown by Figure 1. Therefore, GD with a constant stepsize cannot be _universal_: we have to set the stepsize differently for smooth and nonsmooth objectives.
Normalized Gradient Descent (NGD) (Shor, 2012) consists of iterates of the form
\[x_{t+1}=\Pi_{\mathcal{X}}\left(x_{t}-\eta\frac{\nabla f(x_{t})}{\|\nabla f(x_ {t})\|}\right).\] (NGD)
The projection step above is not necessary, and the results for NGD also hold in the unconstrained setting where the projection on \(\mathcal{X}=\mathbb{R}^{d}\) is just the identity. NGD has many benefits: it can escape saddle points that GD may take arbitrarily long times to escape (Murray et al., 2019), and can minimize functions that are quasi-convex and only locally Lipschitz (Hazan et al., 2015). One of the main benefits of normalized gradient descent is that normalization makes the method scale-free: multiplying \(f\) by a constant factor \(\alpha>0\) and minimizing \(\alpha f\) does not change the method's trajectory at all. This allows it to adapt to the Lipschitz constant \(G\) in nonsmooth optimization as well as the smoothness constant \(L\) for smooth objectives, as the following theorem states:
**Theorem 2**.: _Under the same conditions as Theorem 1, the iterations generated by generated by_ (NGD) _satisfy after \(T\) steps satisfy:_
* _(Nesterov, 2018)_ _If_ \(f\) _is_ \(G\)_-Lipschitz, the minimal function suboptimality satisfies_ \[\min_{k\in\{0,1,\ldots,T-1\}}\left[f(x_{k})-f_{*}\right]\leq G\left[\frac{D_{0 }^{2}}{2\eta T}+\frac{\eta}{2}\right],\] (4) _where_ \(D_{0}\stackrel{{\text{def}}}{{=}}\|x_{0}-x_{*}\|\)_._
* _(Levy, 2017; Grimmer, 2019)_ _If_ \(f\) _is_ \(L\)_-Lipschitz, the minimal function suboptimality satisfies_ \[\min_{k=0,\ldots,T-1}\left[f(x_{k})-f_{*}\right]\leq\frac{L}{2}\left[\frac{D_ {0}^{2}}{2\eta T}+\frac{\eta}{2}\right]^{2}.\] (5)
Tuning eq. (4) in \(\eta\) gives \(\eta=\frac{D_{0}}{\sqrt{T}}\), and the stepsize is also optimal for eq. (5). This gives a convergence rate of \(\frac{D_{0}G}{\sqrt{T}}\) when \(f\) is Lipschitz and \(\frac{D_{0}^{2}L}{T}\) when \(f\) is smooth. Observe that NGD matches the dependence of gradient descent on \(G\) and \(L\) without any knowledge of it. Furthermore that, unlike GD where the optimal stepsize is \(\frac{1}{L}\) in the smooth setting and \(\frac{D_{0}}{G\sqrt{T}}\) in the nonsmooth
Figure 1: Two trajectories of gradient descent on the one-dimensional quadratic \(f(x)=\frac{Lx^{2}}{2}\), with \(L=100\).
setting. The optimal stepsize for NGD is the same in both cases. Therefore, NGD is _universal_: the same method with the same stepsize adapts to nonsmooth and smooth objectives. Moreover, misspecification of the stepsize in NGD does not result in divergence, but just slower convergence. Another interesting property is that we only get a guarantee on the best iterate: this might be because NGD is non-monotonic, as Figure 2 (a) shows.
Edge of Stability Phenomena.We may reinterpret NGD with stepsize \(\eta\) as simply GD with a time-varying "effective stepsize" \(\eta_{\mathrm{eff},t}=\frac{\eta}{\|\nabla f(x_{t})\|}\). We plot this effective stepsize for an \(\ell_{2}\)-regularized linear regression problem in Figure 2 (b). Observe that the stepsize sharply increases, then decreases until it starts oscillating around \(\frac{2}{L}\). Recall that \(\frac{2}{L}\) is the edge of stability for gradient descent: its iterates diverge when the stepsize crosses this threshold. Arora et al. (2022) observe this phenomenon for NGD, and give a detailed analysis of it under several technical assumptions and when the iterates are close to the manifold of local minimizers.
Theorem 2 offers an alternative, global, and less explicit explanation of this phenomenon: NGD matches the optimal gradient descent rate, and in order to do so it must drive the effective stepsize to be large. Specifically, suppose that we use the optimal stepsize \(\eta=\frac{D_{0}}{\sqrt{T}}\), and call the best iterate returned by NGD \(x_{\tau}\). Then \(x_{\tau}\) satisfies \(f(x_{\tau})-f_{*}\leq\frac{D_{0}^{2}L}{T}\) and therefore by smoothness
\[\|\nabla f(x_{\tau})\|\leq\sqrt{2L\left(f(x_{\tau})-f_{*}\right)}\leq\sqrt{ \frac{L^{2}D_{0}^{2}}{T}}=\frac{LD_{0}}{\sqrt{T}}=L\eta.\]
This implies \(\eta_{\mathrm{eff},\tau}=\frac{\eta}{\|\nabla f(x_{t})\|}\geq\frac{1}{L}\). Therefore the effective stepsize at convergence is forced to grow to \(\Omega\left(\frac{1}{L}\right)\). But if the effective stepsize increases too much and crosses the threshold \(\frac{2}{L}\), the gradient norms start diverging, forcing the effective stepsize back down. Thus, NGD is _self-stabilizing_. We note that Edge of Stability phenomenon is not unique to NGD, and GD itself trains at the edge of stability for more complicated models where the smoothness also varies significantly over training (Cohen et al., 2021; Damian et al., 2023).
### DoWG
We saw in the last section that NGD adapts to both the Lipschitz constant \(G\) and the smoothness \(L\), but we have to choose \(\eta\) to vary with the distance to the optimum \(D_{0}=\|x_{0}-x_{*}\|\). In this section, we develop a novel algorithm that adaptively estimates the distance to the optimum, and attains the optimal convergence rate of gradient descent for constrained convex and smooth optimization up to a logarithmic factor. Our algorithm builds upon the recently proposed Distance over Gradients (DoG)
Figure 2: NGD iterations on \(\ell_{2}\)-regularized linear regression on the mushrooms dataset from LibSVM (Chang and Lin, 2011) with \(\eta=0.1\). Top (a) shows the function suboptimality over time. Observe that as the number of iterations grow, the method becomes non-monotonic. Bottom (b) shows the effective stepsize \(\eta_{\mathrm{eff},t}=\frac{0.1}{\|\nabla f(x_{t})\|}\) over time.
algorithm developed by Ivgi et al. (2023). We call the new method DoWG (Distance over Weighted Gradients), and we describe it as Algorithm 1 below.
```
1Input: initial point \(x_{0}\in\mathcal{X}\) Initial distance estimate \(r_{\epsilon}>0\).
2Initialize\(v_{-1}=0,r_{-1}=r_{\epsilon}\).
3for\(t=0,1,2,\ldots,T-1\)do
4 Update distance estimator: \(\bar{r}_{t}\leftarrow\max\left(\|x_{t}-x_{0}\|,\ \bar{r}_{t-1}\right)\)
5 Update weighted gradient sum: \(v_{t}\gets v_{t-1}+\bar{r}_{t}^{2}\|\nabla f(x_{t})\|^{2}\)
6 Set the stepsize: \(\eta_{t}\leftarrow\frac{\bar{r}_{t}^{2}}{\sqrt{v_{t}}}\)
7 Gradient descent step: \(x_{t+1}\leftarrow\Pi_{\mathcal{X}}(x_{t}-\eta_{t}\nabla f(x_{t}))\)
8
9 end for
```
**Algorithm 1**DoWG: Distance over Weighted Gradients
DoWG uses the same idea of estimating the distance from the optimum by using the distance from the initial point as a surrogate, but instead of using the square root of the running gradient sum \(G_{t}=\sum_{k=0}^{t}\|\nabla f(x_{k})\|^{2}\) as the normalization, DoWG uses the square root of the weighted gradient sum \(v_{t}=\sum_{k=0}^{t}\bar{r}_{k}^{2}\|\nabla f(x_{k})\|^{2}\). Observe that because the estimated distances \(\bar{r}_{t}\) are monotonically increasing, later gradients have a larger impact on \(v_{t}\) than earlier ones compared to \(G_{t}\). Therefore, we may expect this to aid the method in adapting to the local properties of the problem once far away from the initialization \(x_{0}\). We note that using a weighted sum of gradients is not new: AceeleGrad (Levy et al., 2018) uses time-varying polynomial weights and Adam (Kingma and Ba, 2015) uses exponentially decreasing weights. The difference is that DoWG chooses the weights adaptively based on the running distance from the initial point. This use of distance-based weighted averaging is new, and we are not aware of any previous methods that estimate the running gradient sum in this manner.
**Nonsmooth analysis.** The next theorem shows that DoWG adapts to the Lipschitz constant \(G\) and the diameter \(D\) of the set \(\mathcal{X}\) if the function \(f\) is nonsmooth but \(G\)-Lipschitz. We use the notation \(\log_{+}x=\log x+1\) following (Ivgi et al., 2023).
**Theorem 3**.: _(DoWG, Lipschitz \(f\)). Suppose that the function \(f\) is convex, \(G\)-Lipschitz, and has a minimizer \(x_{*}\in\mathcal{X}\). Suppose that the domain \(\mathcal{X}\) is a closed convex set of (unknown) diameter \(D>0\). Let \(r_{\epsilon}<D\). Then the output of Algorithm 1 satisfies for some \(t\in\{0,1,\ldots,T-1\}\)_
\[f(\bar{x}_{t})-f_{*}=\mathcal{O}\left[\frac{GD}{\sqrt{\bar{T}}}\log_{+}\frac{D }{r_{\epsilon}}\right],\]
_where \(\bar{x}_{t}\stackrel{{\text{def}}}{{=}}\frac{1}{\sum_{k=0}^{t-1} \bar{r}_{k}^{2}}\sum_{k=0}^{t-1}\bar{r}_{k}^{2}x_{k}\) is a weighted average of the iterates returned by the algorithm._
**Discussion of convergence rate.** DoWG matches the optimal \(\mathcal{O}\left(\frac{DG}{\sqrt{\bar{T}}}\right)\) rate of tuned GD and tuned NGD up to an extra logarithmic factor. We note that the recently proposed algorithms DoG (Ivgi et al., 2023) and D-Adaptation (Defazio and Mishchenko, 2023) achieve a similar rate in this setting.
**Comparison with DoG.** As we discussed before, DoWG uses an adaptively weighted sum of gradients for normalization compared to the simple sum used by DoG. In addition, DoG uses the stepsize \(\frac{\bar{r}_{t}}{\sqrt{\sum_{k=0}^{t}\|\nabla f(x_{k})\|^{2}}}\), whereas the DoWG stepsize is pointwise larger: since \(\bar{r}_{k}^{2}\) is monotonically increasing in \(k\) we have
\[\eta_{\text{DoWG},t}=\frac{\bar{r}_{t}^{2}}{\sqrt{\sum_{k=0}^{t}\bar{r}_{k}^{2 }\|\nabla f(x_{k})\|^{2}}}\geq\frac{\bar{r}_{t}^{2}}{\bar{r}_{t}\sqrt{\sum_{k= 0}^{t}\left\|\nabla f(x_{k})\right\|^{2}}}=\frac{\bar{r}_{t}}{\sqrt{\sum_{k=0} ^{t}\left\|\nabla f(x_{k})\right\|^{2}}}=\eta_{\text{DoG},t}.\]
Of course, the pointwise comparison may not reflect the practical performance of the algorithms, since after the first iteration the sequence of iterates \(x_{2},x_{3},\ldots\) generated by the two algorithms can be very different. We observe in practice that DoWG is in general more aggressive, and uses larger stepsizes than both DoG and D-Adaptation (see Section 4).
**Smooth analysis.** Our next theorem shows that DoWG adapts to the smoothness constant and the diameter \(D\) of the set \(\mathcal{X}\).
**Theorem 4**.: _(DoWG, Smooth \(f\)). Suppose that the function \(f\) is \(L\)-smooth, convex, and has a minimizer \(x_{*}\in\mathcal{X}\). Suppose that the domain \(\mathcal{X}\) is a closed convex set of diameter \(D>0\). Let \(r_{\epsilon}<D\). Then the output of Algorithm 1 satisfies for some \(t\in\{0,1,\ldots,T-1\}\)_
\[f(\bar{x}_{t})-f_{*}=\mathcal{O}\left[\frac{LD^{2}}{T}\log_{+}\frac{D}{r_{ \epsilon}}\right],\]
_where \(\bar{x}_{t}\stackrel{{ def}}{{=}}\frac{1}{\sum_{k=0}^{T}\bar{r}_ {k}^{2}}\sum_{k=0}^{t-1}\bar{r}_{k}^{2}x_{k}\) is a weighted average of the iterates returned by the algorithm._
The proof of this theorem and all subsequent results is relegated to the supplementary material. We note that the proof of Theorem 4 uses the same trick used to show the adaptivity of NGD to smoothness: we use the fact that \(\|\nabla f(x)\|\leq\sqrt{2L(f(x)-f_{*})}\) for all \(x\in\mathcal{X}\) applied to a carefully-chosen weighted sum of gradients.
**Comparison with GD/NGD.** Both well-tuned gradient descent and normalized gradient descent achieve the convergence rate \(\mathcal{O}\left(\frac{LD^{2}_{0}}{T}\right)\) where \(D_{0}=\|x_{0}-x_{*}\|\leq D\) for the constrained convex minimization problem. Theorem 4 shows that DoWG essentially attains the same rate up to the difference between \(D_{0}\) and \(D\) and an extra logarithmic factor. In the worst case, if we initialize far from the optimum, we have \(D_{0}\simeq D\) and hence the difference is not significant. We note that DoG (Ivgi et al., 2023) suffers from a similar dependence on the diameter \(D\) of \(\mathcal{X}\), and can diverge in the unconstrained setting, where \(\mathcal{X}\) is not compact. This can be alleviated by making the stepsize smaller by a polylogarithmic factor. A similar reduction of the stepsize also works for DoWG, and we provide the proof in Section 7 in the supplementary.
**Comparison with DoG.** After the initial version of this paper, Ivgi et al. (2023) reported a convergence guarantee for the _unweighted_ average \(\hat{x}_{T}=\frac{1}{T}\sum_{k=0}^{T-1}x_{k}\) returned by DoG. In particular, Proposition 3 in their work gives the rate
\[f(\hat{x}_{T})-f_{*}=\mathcal{O}\left(\frac{L(D_{0}\log_{+}\frac{\bar{r}_{T}}{ r_{\epsilon}}+\bar{r}_{T})^{2}}{T}\right)=\mathcal{O}\left(\frac{LD^{2}}{T}\log_{+} ^{2}\frac{D}{\bar{r}_{\epsilon}}\right).\]
where \(D_{0}=\|x_{0}-x_{*}\|\), and where in the second step we used the bound \(D_{0}\leq D\) and \(\bar{r}_{T}\leq D\). This rate is the same as that achieved by the weighted average of the DoWG iterates up to an extra logarithmic factor \(\log_{+}\frac{D}{\bar{r}_{*}}\). We note that DoG also has a guarantee in the stochastic setting, provided the gradients are bounded locally with a known constant, while in this work we have focused exclusively on the deterministic setting.
Figure 3: DoWG iterations on \(\ell_{2}\)-regularized linear regression on the mushrooms dataset from LibSVM (Chang and Lin, 2011) with \(r_{\epsilon}=10^{-6}\). Top (a) shows the function suboptimality over time. Observe that as the number of iterations grow, the method becomes non-monotonic. Bottom (b) shows the DoWG stepsize over time.
**Edge of Stability.** Like NGD, DoWG also tends to increase the stepsize and train at the edge of stability. The intuition from NGD carries over: in order to preserve the convergence rate of GD, DoWG tends to drive the stepsize larger. However, once it overshoots, the gradients quickly diverge, forcing the stepsize back down. Figure 3 shows the performance of DoWG and its stepsize on the same regularized linear regression problem as in Figure 2. Comparing the two figures, we observe that DoWG is also non-monotonic and trains close to the edge of stability, but its stepsize oscillates less than NGD's effective stepsize.
**Universality.** Theorems 4 and 3 together show that DoWG is universal, i.e. it almost recovers the convergence of gradient descent with tuned stepsizes in both the smooth and nonsmooth settings. As the optimal stepsize for gradient descent can differ significantly between the two settings, we believe that achieving both rates simultaneously without any parameter-tuning or search procedures is a significant strength of DoWG.
## 4 Experimental results
We compare DoWG to DoG, L-DoG from Ivgi et al. (2023), for all of which we also report performance of the polynomially-averaged iterate with power 8 as recommended by Ivgi et al. (2023). We also add comparison against Adam (Kingma and Ba, 2015) with cosine annealing and the standard step size \(10^{-3}\). All methods are used with batch size 256 with no weight decay on a single RTX3090 GPU. We plot the results in Figure 4 with the results averaged over 8 random seeds. We train the VGG11 (Simonyan and Zisserman, 2015) and ResNet-50 (He et al., 2016) neural network architectures on CIFAR10 (Krizhevsky, 2009) using PyTorch (Paszke et al., 2019), and implement1 DoWG on top of the DoG code2. Unsurprisingly, DoWG's estimates of the step size are larger than that of DoG and D-Adapt-norm, which also makes it less stable on ResNet-50. While the last iterate of DoWG gives worse test accuracy than Adam, the average iterate of DoWG often performs better.
Footnote 1: [https://github.com/rka97/dowg](https://github.com/rka97/dowg)
Footnote 2: [https://github.com/formll/dog](https://github.com/formll/dog)
Finally, we note that while both neural networks tested are generally nonsmooth, recent work shows _local_ smoothness can significantly influence and be influenced by a method's trajectory (Cohen et al., 2022; Pan and Li, 2022). We believe this adaptivity to smoothness might explain the empirical difference between DoWG and DoG, but leave a rigorous discussion of adaptivity to local smoothness to future work.
Figure 4: VGG11 (top) and ResNet-50 (bottom) training on CIFAR10. Left: test accuracy, middle: train loss, right: step sizes.
## References
* Arora et al. (2022) Sanjeev Arora, Zhiyuan Li, and Abhishek Panigrahi. Understanding gradient descent on edge of stability in deep learning. _arXiv preprint arXiv:2205.09745_, abs/2205.09745, 2022. URL [https://arXiv.org/abs/2205.09745](https://arXiv.org/abs/2205.09745).
* Bottou et al. (2018) Leon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. _SIAM Review_, 60(2):223-311, 2018. doi: 10.1137/16M1080173. URL [https://doi.org/10.1137/16M1080173](https://doi.org/10.1137/16M1080173).
* Boyd and Vandenberghe (2004) Stephen Boyd and Lieven Vandenberghe. _Convex Optimization_. Cambridge University Press, 2004. doi: 10.1017/CBO9780511804441.
* Bubeck (2015) Sebastien Bubeck. Convex optimization: Algorithms and complexity. _Foundations and Trends(r) in Machine Learning_, 8(3-4):231-357, 2015. doi: 10.1561/2200000050. URL [https://doi.org/10.1561/2200000050](https://doi.org/10.1561/2200000050).
* Carmon and Hinder (2022) Yair Carmon and Oliver Hinder. Making SGD parameter-free. In Po-Ling Loh and Maxim Raginsky, editors, _Conference on Learning Theory, 2-5 July 2022, London, UK_, volume 178 of _Proceedings of Machine Learning Research_, pages 2360-2389. PMLR, 2022. URL [https://proceedings.mlr.press/v178/carmon22a.html](https://proceedings.mlr.press/v178/carmon22a.html).
* Cesa-Bianchi and Lugosi (2006) Nicolo Cesa-Bianchi and Gabor Lugosi. _Prediction, Learning, and Games_. Cambridge University Press, USA, 2006. ISBN 0521841089.
* Cesa-Bianchi et al. (1997) Nicolo Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. _J. ACM_, 44(3):427-485, may 1997. ISSN 0004-5411. doi: 10.1145/258128.258179. URL [https://doi.org/10.1145/258128.258179](https://doi.org/10.1145/258128.258179).
* Chang and Lin (2011) Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. _ACM transactions on intelligent systems and technology (TIST)_, 2(3):1-27, 2011.
* Cohen et al. (2021) Jeremy M. Cohen, Simran Kaur, Yuanzhi Li, J. Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. In _ICLR_. OpenReview.net, 2021.
* Cohen et al. (2022) Jeremy M. Cohen, Behrooz Ghorbani, Shankar Krishnan, Naman Agarwal, Sourabh Medapati, Michal Badura, Daniel Suo, David Cardoze, Zachary Nado, George E. Dahl, and Justin Gilmer. Adaptive gradient methods at the edge of stability. _arXiv preprint arXiv:2207.14484_, abs/2207.14484, 2022. URL [https://arXiv.org/abs/2207.14484](https://arXiv.org/abs/2207.14484).
* Cutkosky (2019) Ashok Cutkosky. Artificial constraints and hints for unbounded online learning. In Alina Beygelzimer and Daniel Hsu, editors, _Proceedings of the Thirty-Second Conference on Learning Theory_, volume 99 of _Proceedings of Machine Learning Research_, pages 874-894. PMLR, 25-28 Jun 2019. URL [https://proceedings.mlr.press/v99/cutkosky19a.html](https://proceedings.mlr.press/v99/cutkosky19a.html).
* Damian et al. (2023) Alex Damian, Eshaan Nichani, and Jason D. Lee. Self-stabilization: The implicit bias of gradient descent at the edge of stability. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=nhKHA59gZz](https://openreview.net/forum?id=nhKHA59gZz).
* Defazio and Mishchenko (2020) Aaron Defazio and Konstantin Mishchenko. Learning-rate-free learning by D-adaptation. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, _Proceedings of the 40th International Conference on Machine Learning_, volume 202 of _Proceedings of Machine Learning Research_, pages 7449-7479. PMLR, 23-29 Jul 2023.
* The 23rd Conference on Learning Theory, Haifa, Israel, June 27-29, 2010_, pages 257-269. Omnipress, 2010. URL [http://colt2010.haifa.il.ibm.com/papers/COLT2010proceedings.pdf#page=265](http://colt2010.haifa.il.ibm.com/papers/COLT2010proceedings.pdf#page=265).
* Duchi et al. (2010)Alina Ene, Huy L. Nguyen, and Adrian Vladu. Adaptive gradient methods for constrained convex optimization and variational inequalities. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 7314-7321, 2021.
* Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. _Practical Methodology_, chapter 11. MIT Press, 2016. URL [http://www.deeplearningbook.org](http://www.deeplearningbook.org).
* Grimmer (2019) Benjamin Grimmer. Convergence rates for deterministic and stochastic subgradient methods without lipschitz continuity. _SIAM Journal on Optimization_, 29(2):1350-1365, 2019. doi: 10.1137/18M117306X. URL [https://doi.org/10.1137/18M117306X](https://doi.org/10.1137/18M117306X).
* Grimmer (2022) Benjamin Grimmer. On optimal universal first-order methods for minimizing heterogeneous sums. _arXiv preprint arXiv:2208.08549_, abs/2208.08549, 2022. URL [https://arXiv.org/abs/2208.08549](https://arXiv.org/abs/2208.08549).
* Gupta et al. (2017) Vineet Gupta, Tomer Koren, and Yoram Singer. A unified approach to adaptive regularization in online and stochastic optimization. _arXiv preprint arXiv:1706.06569_, abs/1706.06569, 2017. URL [https://arXiv.org/abs/1706.06569](https://arXiv.org/abs/1706.06569).
* Hazan and Kakade (2019) Elad Hazan and Sham Kakade. Revisiting the Polyak step size. _arXiv preprint arXiv:1905.00313_, abs/1905.00313, 2019. URL [https://arXiv.org/abs/1905.00313](https://arXiv.org/abs/1905.00313).
* Hazan and Megiddo (2007) Elad Hazan and Nimrod Megiddo. Online learning with prior knowledge. In Nader H. Bshouty and Claudio Gentile, editors, _Learning Theory_, pages 499-513, Berlin, Heidelberg, 2007. Springer Berlin Heidelberg. ISBN 978-3-540-72927-3.
* Hazan et al. (2015) Elad Hazan, Kfir Y. Levy, and Shai Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex optimization. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada_, pages 1594-1602, 2015. URL [https://proceedings.neurips.cc/paper/2015/hash/934815ad542a4a7c5e8a2dfa04fea9f5-Abstract.html](https://proceedings.neurips.cc/paper/2015/hash/934815ad542a4a7c5e8a2dfa04fea9f5-Abstract.html).
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 770-778, 2016. doi: 10.1109/CVPR.2016.90.
* Iygi et al. (2023) Maor Iygi, Oliver Hinder, and Yair Carmon. DoG is SGD's best friend: A parameter-free dynamic step size schedule. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, _Proceedings of the 40th International Conference on Machine Learning_, volume 202 of _Proceedings of Machine Learning Research_, pages 14465-14499. PMLR, 23-29 Jul 2023.
* Kavis et al. (2020) Ali Kavis, Kfir Y. Levy, Francis R. Bach, and Volkan Cevher. UniXGrad: A universal, adaptive algorithm with optimal guarantees for constrained optimization. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alche-Buc, Emily B. Fox, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada_, pages 6257-6266, 2019. URL [https://proceedings.neurips.cc/paper/2019/hash/8885554750f7ff053fff7c54e5148cc-Abstract.html](https://proceedings.neurips.cc/paper/2019/hash/8885554750f7ff053fff7c54e5148cc-Abstract.html).
* Khaled and Richtarik (2020) Ahmed Khaled and Peter Richtarik. Better theory for SGD in the nonconvex world. _arXiv preprint arXiv:2002.03329_, abs/2002.03329, 2020. URL [https://arXiv.org/abs/2002.03329](https://arXiv.org/abs/2002.03329).
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, 2015. URL [http://arXiv.org/abs/1412.6980](http://arXiv.org/abs/1412.6980).
* Krizhevsky (2009) Alex Krizhevsky. Learning multiple layers of features from tiny images. pages 32-33, 2009. URL [https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf).
* Krizhevsky (2009)* Kunstner et al. (2023) Frederik Kunstner, Jacques Chen, Jonathan Wilder Lavington, and Mark Schmidt. Noise is not the main factor behind the gap between SGD and Adam on transformers, but sign descent might be. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=a65YK0cqH8g](https://openreview.net/forum?id=a65YK0cqH8g). (Cited on page 4)
* Latafat et al. (2023) Puya Latafat, Andreas Themelis, Lorenzo Stella, and Panagiotis Patrinos. Adaptive proximal algorithms for convex optimization under local Lipschitz continuity of the gradient. _arXiv preprint arXiv:2301.04431_, 2023.
* Levy (2017) Kfir Y. Levy. Online to offline conversions, universality and adaptive minibatch sizes. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_, pages 1613-1622, 2017. URL [https://proceedings.neurips.cc/paper/2017/hash/ce5140df15d046a66883807d18d0264b-Abstract.html](https://proceedings.neurips.cc/paper/2017/hash/ce5140df15d046a66883807d18d0264b-Abstract.html). (Cited on pages 3, 5, and 15)
* Levy et al. (2018) Kfir Yehuda Levy, Alp Yurtsever, and Volkan Cevher. Online adaptive methods, universality and acceleration. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolo Cesa-Bianchi, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada_, pages 6501-6510, 2018. URL [https://proceedings.neurips.cc/paper/2018/hash/b0169350cd35566c47ba83c6ec1d6f82-Abstract.html](https://proceedings.neurips.cc/paper/2018/hash/b0169350cd35566c47ba83c6ec1d6f82-Abstract.html). (Cited on pages 1, 2, 4, 7, 15, and 18)
* Li et al. (2023) Haochuan Li, Ali Jadbabaie, and Alexander Rakhlin. Convergence of adam under relaxed assumptions. _arXiv preprint arXiv:2304.13972_, abs/2304.13972, 2023. URL [https://arXiv.org/abs/2304.13972](https://arXiv.org/abs/2304.13972). (Cited on page 2)
* Li and Orabona (2019) Xiaoyu Li and Francesco Orabona. On the convergence of stochastic gradient descent with adaptive stepsizes. In Kamalika Chaudhuri and Masashi Sugiyama, editors, _The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan_, volume 89 of _Proceedings of Machine Learning Research_, pages 983-992. PMLR, 2019. URL [http://proceedings.mlr.press/v89/li19c.html](http://proceedings.mlr.press/v89/li19c.html). (Cited on page 4)
* Liu et al. (2022) Zijian Liu, Ta Duy Nguyen, Alina Ene, and Huy L. Nguyen. On the convergence of AdaGrad(norm) on \(\mathbb{R}^{d}\): Beyond convexity, non-asymptotic rate and acceleration. _arXiv preprint arXiv:2209.14827_, abs/2209.14827, 2022. URL [https://arXiv.org/abs/2209.14827](https://arXiv.org/abs/2209.14827). (Cited on page 4)
* Loizou et al. (2021) Nicolas Loizou, Sharan Vaswani, Issam Hadj Laradji, and Simon Lacoste-Julien. Stochastic polyak step-size for SGD: an adaptive learning rate for fast convergence. In Arindam Banerjee and Kenji Fukumizu, editors, _The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event_, volume 130 of _Proceedings of Machine Learning Research_, pages 1306-1314. PMLR, 2021. URL [http://proceedings.mlr.press/v130/loizou21a.html](http://proceedings.mlr.press/v130/loizou21a.html). (Cited on page 3)
* Malitsky and Mishchenko (2020) Yura Malitsky and Konstantin Mishchenko. Adaptive gradient descent without descent. In _Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event_, volume 119 of _Proceedings of Machine Learning Research_, pages 6702-6712. PMLR, 2020. URL [http://proceedings.mlr.press/v119/malitsky20a.html](http://proceedings.mlr.press/v119/malitsky20a.html). (Cited on pages 3 and 4)
* McMahan and Orabona (2014) H. Brendan McMahan and Francesco Orabona. Unconstrained online linear learning in hilbert spaces: Minimax algorithms and normal approximations. In Maria-Florina Balcan, Vitaly Feldman, and Csaba Szepesvari, editors, _Proceedings of The 27th Conference on Learning Theory, COLT 2014, Barcelona, Spain, June 13-15, 2014_, volume 35 of _JMLR Workshop and Conference Proceedings_, pages 1020-1039. JMLR.org, 2014. URL [http://proceedings.mlr.press/v35/mcmahan14.html](http://proceedings.mlr.press/v35/mcmahan14.html). (Cited on page 3)
* Mhammedi and Koolen (2020) Zakaria Mhammedi and Wouter M Koolen. Lipschitz and comparator-norm adaptivity in online learning. In _Conference on Learning Theory_, pages 2858-2887. PMLR, 2020. (Cited on page 3)Zakaria Mhammedi, Wouter M Koolen, and Tim Van Erven. Lipschitz adaptivity with multiple learning rates in online learning. In _Conference on Learning Theory_, pages 2490-2511. PMLR, 2019.
* Murray et al. (2019) Ryan Murray, Brian Swenson, and Soummya Kar. Revisiting normalized gradient descent: Fast evasion of saddle points. _IEEE Transactions on Automatic Control_, 64(11):4818-4824, 2019. doi: 10.1109/TAC.2019.2914998.
* Nesterov (2014) Yurii Nesterov. Universal gradient methods for convex optimization problems. _Mathematical Programming_, 152(1-2):381-404, 2014. doi: 10.1007/s10107-014-0790-0. URL [https://doi.org/10.1007/s10107-014-0790-0](https://doi.org/10.1007/s10107-014-0790-0).
* Nesterov (2018) Yurii Nesterov. _Lectures on Convex Optimization_. Springer Publishing Company, Incorporated, 2nd edition, 2018. ISBN 3319915770.
* Orabona (2013) Francesco Orabona. Dimension-free exponentiated gradient. In C.J. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, _Advances in Neural Information Processing Systems_, volume 26. Curran Associates, Inc., 2013. URL [https://proceedings.neurips.cc/paper_files/paper/2013/file/7634ea65a4e6d9041cfd3f7de18e334a-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2013/file/7634ea65a4e6d9041cfd3f7de18e334a-Paper.pdf).
* Orabona (2023) Francesco Orabona. Normalized gradients for all. _arXiv preprint arXiv:2308.05621_, abs/2308.05621, 2023. URL [https://arXiv.org/abs/2308.05621](https://arXiv.org/abs/2308.05621).
* Orabona and Cutkosky (2020) Francesco Orabona and Ashok Cutkosky. ICML 2020 tutorial on parameter-free online optimization. _ICML Tutorials_, 2020. URL [https://parameterfree.com/icml-tutorial/](https://parameterfree.com/icml-tutorial/).
* Orabona and Pal (2016) Francesco Orabona and David Pal. Coin betting and parameter-free online learning. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_, NIPS'16, page 577-585, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.
* Orabona and Pal (2021) Francesco Orabona and David Pal. Parameter-free stochastic optimization of variationally coherent functions. _arXiv preprint arXiv:2102.00236_, abs/2102.00236, 2021. URL [https://arXiv.org/abs/2102.00236](https://arXiv.org/abs/2102.00236).
* Orabona and Tommasi (2017) Francesco Orabona and Tatiana Tommasi. Training deep networks without learning rates through coin betting. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_, pages 2160-2170, 2017. URL [https://proceedings.neurips.cc/paper/2017/hash/7c82fab8c8f89124e2ce92984e04fb40-Abstract.html](https://proceedings.neurips.cc/paper/2017/hash/7c82fab8c8f89124e2ce92984e04fb40-Abstract.html).
* Oravieto et al. (2022) Antonio Oravieto, Simon Lacoste-Julien, and Nicolas Loizou. Dynamics of sgd with stochastic polyak stepsizes: Truly adaptive variants and convergence to exact solution. _arXiv preprint arXiv:2205.04583_, abs/2205.04583, 2022. URL [https://arXiv.org/abs/2205.04583](https://arXiv.org/abs/2205.04583).
* Pan and Li (2022) Yan Pan and Yuanzhi Li. Toward understanding why Adam converges faster than SGD for transformers. _OPT2023: 14th Annual Workshop on Optimization for Machine Learning_, 2022. URL [https://openreview.net/pdf?id=Sf1N1V2r6PO](https://openreview.net/pdf?id=Sf1N1V2r6PO).
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_, pages 8024-8035. Curran Associates, Inc., 2019. URL [http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf](http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf).
* Paszke et al. (2019)David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. _arXiv preprint arXiv:2104.10350_, abs/2104.10350, 2021. URL [https://arXiv.org/abs/2104.10350](https://arXiv.org/abs/2104.10350).
* Polyak (1987) Boris Polyak. _Introduction to optimization_. Optimization Software, 1987.
* Polyak (1964) Boris T. Polyak. Some methods of speeding up the convergence of iteration methods. _USSR Computational Mathematics and Mathematical Physics_, 4(5):1-17, 1964. ISSN 0041-5553. doi: [https://doi.org/10.1016/0041-5553](https://doi.org/10.1016/0041-5553)(64)90137-5. URL [https://www.sciencedirect.com/science/article/pii/0041555364901375](https://www.sciencedirect.com/science/article/pii/0041555364901375).
* Sharir et al. (2020) Or Sharir, Barak Peleg, and Yoav Shoham. The cost of training NLP models: a concise overview. _arXiv preprint arXiv:2004.08900_, abs/2004.08900, 2020. URL [https://arXiv.org/abs/2004.08900](https://arXiv.org/abs/2004.08900).
* Shor (2012) Naum Zuselevich Shor. _Minimization methods for non-differentiable functions_, volume 3. Springer Science & Business Media, 2012.
* Simonyan and Zisserman (2015) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, 2015. URL [http://arxiv.org/abs/1409.1556](http://arxiv.org/abs/1409.1556).
* Volume 2_, NIPS'12, page 2402-2410, Red Hook, NY, USA, 2012. Curran Associates Inc. (Cited on page 3)
* Traore and Pauwels (2021) Cheik Traore and Edouard Pauwels. Sequential convergence of AdaGrad algorithm for smooth convex optimization. _Operations Research Letters_, 49(4):452-458, 2021.
* Ward et al. (2019) Rachel Ward, Xiaoxia Wu, and Leon Bottou. AdaGrad stepsizes: sharp convergence over nonconvex landscapes. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, _Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA_, volume 97 of _Proceedings of Machine Learning Research_, pages 6677-6686. PMLR, 2019. URL [http://proceedings.mlr.press/v97/ward19a.html](http://proceedings.mlr.press/v97/ward19a.html).
* Wilson et al. (2017) Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_, pages 4148-4158, 2017. URL [https://proceedings.neurips.cc/paper/2017/hash/81b3833e2504647f9d794f7d7b9bf341-Abstract.html](https://proceedings.neurips.cc/paper/2017/hash/81b3833e2504647f9d794f7d7b9bf341-Abstract.html).
* Zhang et al. (2020) Jingzhao Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie. Why gradient clipping accelerates training: A theoretical justification for adaptivity. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020._ OpenReview.net, 2020a. URL [https://openreview.net/forum?id=BJgnXpYwS](https://openreview.net/forum?id=BJgnXpYwS).
* Zhang et al. (2020) Jingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sashank J. Reddi, Sanjiv Kumar, and Suvrit Sra. Why are adaptive methods good for attention models? In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_, 2020b. URL [https://proceedings.neurips.cc/paper/2020/hash/b05b57f6add810d3b7490866d74c0053-Abstract.html](https://proceedings.neurips.cc/paper/2020/hash/b05b57f6add810d3b7490866d74c0053-Abstract.html).
* Zhang et al. (2020)
## Supplementary material
###### Contents
* 1 Introduction
* 2 Related Work
* 3 Algorithms and theory
* 3.1 Baselines: gradient descent and normalized gradient descent
* 3.2 DoWG
* 4 Experimental results
* 5 Algorithm-independent results
* 6 Proofs for DoWG
* 6.1 Smooth case
* 6.2 Nonsmooth case
* 7 Unconstrained domain extension
## 5 Algorithm-independent results
In this section we collect different results that are algorithm-independent, the first is a consequence of smoothness:
**Fact 1**.: _Suppose that \(f\) is smooth and lower bounded by \(f_{*}\). Then for all \(x\in\mathbb{R}^{d}\) we have,_
\[\left\|\nabla f(x)\right\|^{2}\leq 2L\left(f(x)-f_{*}\right).\]
Proof.: This is a common result in the literature, and finds applications in convex and non-convex optimization see e.g. (Levy, 2017; Orabona and Cutkosky, 2020; Khaled and Richtarik, 2020). We include the proof for completeness. Let \(x\in\mathbb{R}^{d}\) and define \(x_{+}=x-\frac{1}{L}\nabla f(x)\). Then by smoothness
\[f(x_{+}) \leq f(x)+\langle\nabla f(x),x_{+}-x\rangle+\frac{L}{2}\|x_{+}-x \|^{2}\] \[=f(x)-\frac{1}{L}\|\nabla f(x)\|^{2}+\frac{1}{2L}\|\nabla f(x)\|^ {2}\] \[=f(x)-\frac{1}{2L}\|\nabla f(x)\|^{2}.\]
Because \(f\) is lower bounded by \(f_{*}\) we thus have
\[f_{*}\leq f(x_{+})\leq f(x)-\frac{1}{2L}\|\nabla f(x)\|^{2}.\]
Rearranging gives \(\left\|\nabla f(x)\right\|^{2}\leq 2L\left(f(x)-f_{*}\right)\).
The next two results are helpful algebraic identities that will be useful for the proof of DoWG.
**Lemma 1**.: _(_Ivgi et al._,_ 2023_, Lemma 4)_. Let \(a_{0},..,a_{t}\) be a nondecreasing sequence of nonnegative numbers. Then_
\[\sum_{k=1}^{t}\frac{a_{k}-a_{k-1}}{\sqrt{a_{k}}}\leq 2\left(\sqrt{a_{t}}-\sqrt {a_{0}}\right).\]Proof.: This is (Ivgi et al., 2023, Lemma 4). We include the proof for completeness:
\[\sum_{k=1}^{t}\frac{a_{k}-a_{k-1}}{\sqrt{a_{k}}} =\sum_{k=1}^{t}\frac{\left(\sqrt{a_{k}}-\sqrt{a_{k-1}}\right)\left( \sqrt{a_{k}}+\sqrt{a_{k-1}}\right)}{\sqrt{a_{k}}}\] \[\leq 2\sum_{k=1}^{t}\left(\sqrt{a_{k}}-\sqrt{a_{k-1}}\right)\] \[=2\left(\sqrt{a_{t}}-\sqrt{a_{0}}\right).\]
**Lemma 2**.: _((Ivgi et al., 2023, Lemma 3), similar to (Defazio and Mishchenko, 2023, Lemma 11)). Let \(s_{0},s_{1},\ldots,s_{T}\) be a positive increasing sequence. Then_
\[\max_{t\leq T}\sum_{i<t}\frac{s_{i}}{s_{t}}\geq\frac{1}{e}\left( \frac{T}{\log_{+}(s_{T}/s_{0})}-1\right),\]
_where \(\log_{+}x\stackrel{{\text{\tiny{def}}}}{{=}}\log x+1\)._
Proof.: This is (Ivgi et al., 2023, Lemma 3). We include the proof for completeness: Define \(K=\lceil\log\frac{s_{T}}{s_{0}}\rceil\) and \(n=\left\lfloor\frac{T}{K}\right\rfloor\). Then,
\[\log\left(\frac{s_{T}}{s_{0}}\right)\geq\sum_{k=0}^{K-1}\log\left( \frac{s_{n(k+1)}}{s_{nk}}\right)\geq K\min_{k<K}\log\frac{s_{n(k+1)}}{s_{nk}}.\]
Rearranging and using \(K=\lceil\log\frac{s_{T}}{s_{0}}\rceil\) gives
\[\min_{k<K}\log\frac{s_{n(k+1)}}{s_{nk}}\leq\frac{\log\frac{s_{T}}{ s_{0}}}{K}\leq 1.\]
Therefore,
\[\min_{k<K}\frac{s_{n(k+1)}}{s_{nk}}\leq e.\]
Thus,
\[\max_{t\leq T}\sum_{i\leq t}\frac{s_{i}}{s_{t}} \geq\max_{t\leq T}n\frac{s_{t-n}}{s_{t}}\] \[\geq\max_{k\leq K}n\frac{s_{n(k-1)}}{s_{nk}}\] \[\geq e^{-1}n\] \[=e^{-1}\left\lfloor\frac{T}{K}\right\rfloor\] \[\geq e^{-1}\left(\frac{T}{K}-1\right)\] \[\geq e^{-1}\left(\frac{T}{\log\left(\frac{s_{T}}{s_{0}}\right)+1 }-1\right).\]
## 6 Proofs for DoWG
This section collects proofs for DoWG. First, we give the following lemma, which holds under convexity alone (regardless of whether \(f\) is smooth or Lipschitz).
**Lemma 3**.: _Suppose that \(f\) is convex and has minimizer \(x_{*}\). For the iterations generated by Algorithm 1, we have_
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right\rangle \leq 2\bar{r}_{t}\left[\bar{d}_{t}+\bar{r}_{t}\right]\sqrt{v_{t-1}}, \tag{6}\]
_where \(\bar{d}_{t}=\max_{k\leq t}d_{k}\)._
Proof.: This proof follows the proof of DoG (Ivgi et al., 2023, Lemma 1), itself a modification of the standard proof for adaptive cumulative gradient normalization methods (Gupta et al., 2017) incorporating insights from (Carmon and Hinder, 2022). We specifically modify the proof to handle the weighting scheme we use in DoWG. By the nonexpansivity of the projection we have
\[d_{k+1}^{2} =\left\|x_{k+1}-x_{*}\right\|^{2}\] \[\leq\left\|x_{k}-\eta_{k}\nabla f(x_{k})-x_{*}\right\|^{2}\] \[=\left\|x_{k}-x_{*}\right\|^{2}-2\eta_{k}\left\langle\nabla f(x_{ k}),x_{k}-x_{*}\right\rangle+\eta_{k}^{2}\|\nabla f(x_{k})\|^{2}\] \[=d_{k}^{2}-2\eta_{k}\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right \rangle+\eta_{k}^{2}\|\nabla f(x_{k})\|^{2}.\]
Rearranging and dividing by \(2\eta_{k}\) we get
\[\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right\rangle\leq\frac{d_{k}^{2}-d_{k+1 }^{2}}{2\eta_{k}}+\frac{\eta_{k}}{2}\|\nabla f(x_{k})\|^{2}.\]
Multiplying both sides by \(\bar{r}_{k}^{2}\) we get
\[\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right\rangle\leq\frac{1} {2}\frac{\bar{r}_{k}^{2}}{\eta_{k}}\left[d_{k}^{2}-d_{k+1}^{2}\right]+\frac{1}{ 2}\bar{r}_{k}^{2}\eta_{k}\|\nabla f(x_{k})\|^{2}.\]
Summing up as \(k\) varies from \(0\) to \(t-1\) we get
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right \rangle\leq\frac{1}{2}\underbrace{\left[\sum_{k=0}^{t-1}\frac{\bar{r}_{k}^{2}} {\eta_{k}}\left(d_{k}^{2}-d_{k+1}^{2}\right)\right]+}_{(\text{A})}\underbrace{ \frac{1}{2}\underbrace{\left[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\eta_{k}\|\nabla f (x_{k})\|^{2}\right]}_{(\text{B})}. \tag{7}\]
We shall now bound each of the terms (A) and (B). We have
\[\text{(A)} =\sum_{k=0}^{t-1}\frac{\bar{r}_{k}^{2}}{\eta_{k}}\left(d_{k}^{2}- d_{k+1}^{2}\right)\] \[=\sum_{k=0}^{t-1}\sqrt{v_{k}}\left(d_{k}^{2}-d_{k+1}^{2}\right) \tag{8}\] \[=d_{0}^{2}\sqrt{v_{0}}-d_{t}^{2}\sqrt{v_{t-1}}+\sum_{k=1}^{t-1}d_ {k}^{2}\left(\sqrt{v_{k}}-\sqrt{v_{k-1}}\right)\] (9) \[\leq\bar{d}_{t}^{2}\sqrt{v_{0}}-d_{t}^{2}\sqrt{v_{t-1}}+\bar{d}_{t }^{2}\sum_{k=1}^{t-1}\left(\sqrt{v_{k}}-\sqrt{v_{k-1}}\right)\] (10) \[=\sqrt{v_{t-1}}\left[d_{t}^{2}-d_{t}^{2}\right]\] (11) \[\leq 4\bar{r}_{t}\bar{d}_{t}\sqrt{v_{t-1}}, \tag{12}\]
where eq. (8) holds by definition of the DoWG stepsize \(\eta_{k}\), eq. (9) holds by telescoping, eq. (10) holds because \(v_{k}=v_{k-1}+\bar{r}_{k}^{2}\|\nabla f(x_{k})\|^{2}\geq v_{k-1}\) and hence \(\sqrt{v_{k}}\geq\sqrt{v_{k-1}}\), and \(d_{k}^{2}\leq\bar{d}_{t}^{2}\) by definition. Equation (11) just follows by telescoping. Finally observe that \(\bar{d}_{t}^{2}-d_{t}^{2}=d_{s}^{2}-d_{t}^{2}\) for some \(s\in[t]\), and \(d_{s}^{2}-d_{t}^{2}=(d_{s}-d_{t})(d_{s}+d_{t})\). Then by the triangle inequality and that the sequence\(\bar{r}_{k}\) is monotonically nondecreasing we have
\[d_{s}-d_{t} =\|x_{s}-x_{*}\|-\|x_{t}-x_{*}\|\] \[\leq\|x_{s}-x_{t}\|\] \[\leq\|x_{s}-x_{0}\|+\|x_{t}-x_{0}\|\] \[=r_{s}+r_{t}\] \[\leq\bar{r}_{s}+\bar{r}_{t}\] \[\leq 2\bar{r}_{t}.\]
Therefore \(d_{s}^{2}-d_{t}^{2}\leq(\bar{r}_{s}+\bar{r}_{t})(d_{s}+d_{t})\leq 4\bar{r}_{t} \bar{d}_{t}\). This explains eq. (12).
For the second term in eq. (7), we have
(B) \[=\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\eta_{k}\|\nabla f(x_{k})\|^{2}\] \[=\sum_{k=1}^{t-1}\frac{\bar{r}_{k}^{4}}{\sqrt{v_{k}}}\|\nabla f(x _{k})\|^{2}\] \[=r_{0}^{2}\sqrt{v_{0}}+\sum_{k=1}^{t-1}\frac{\bar{r}_{k}^{4}}{ \sqrt{v_{k}}}\|\nabla f(x_{k})\|^{2}\] \[\leq\bar{r}_{t}^{2}\sqrt{v_{0}}+\bar{r}_{t}^{2}\sum_{k=1}^{t-1} \frac{\bar{r}_{k}^{2}\|\nabla f(x_{k})\|^{2}}{\sqrt{v_{k}}}\] \[=\bar{r}_{t}^{2}\sqrt{v_{0}}+\bar{r}_{t}^{2}\sum_{k=1}^{t-1}\frac{ v_{k}-v_{k-1}}{\sqrt{v_{k}}}\] \[=\bar{r}_{t}^{2}\sqrt{v_{0}}+\bar{r}_{t}^{2}\sum_{k=1}^{t-1}\frac{ v_{k}-v_{k-1}}{\sqrt{v_{k}}}\] \[\leq\bar{r}_{t}^{2}\sqrt{v_{0}}+2\bar{r}_{t}^{2}\left[\sqrt{v_{t- 1}}-\sqrt{v_{0}}\right]\] (13) \[=2\bar{r}_{t}^{2}\sqrt{v_{t-1}}.\] (14)
where eq. (13) is by Lemma 1. Plugging eqs. (12) and (14) in eq. (7) gives
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x_{k}-x _{*}\right\rangle \leq 2\bar{r}_{t}\bar{d}_{t}\sqrt{v_{t-1}}+\bar{r}_{t}^{2}\sqrt{v_{t -1}}\] \[\leq 2\bar{r}_{t}\left[\bar{d}_{t}+\bar{r}_{t}\right]\sqrt{v_{t-1}}.\]
### Smooth case
We now prove the convergence of DoWG under smoothness. In particular, we shall use Fact 1 and the DoWG design to bound the weighted cumulative error \(S_{t}=\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right]\) by its square root multiplied by a problem-dependent constant. We note that a similar trick is used in the analysis of AdaGrad-Norm (Levy et al., 2018), in reductions from online convex optimization to stochastic smooth optimization (Orabona and Cutkosky, 2020), and in the method of (Carmon and Hinder, 2022). However, in all the mentioned cases, the _unweighted_ error \(M_{t}=\sum_{k=0}^{t-1}\left[f(x_{k})-f_{*}\right]\) is bounded by its square root. Here, DoWG's design allows us to bound the weighted errors \(S_{t}\) instead.
Proof of Theorem 4.: We start with Lemma 3. Let \(t\in[T]\). By eq. (6) we have
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right \rangle\leq 2\bar{r}_{t}\left[\bar{d}_{t}+\bar{r}_{t}\right]\sqrt{v_{t-1}}. \tag{15}\]Observe that by convexity we have
\[\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right\rangle\geq f(x_{k})-f_{*}.\]
Using this to lower bound the left-hand side of eq. (15) gives
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right] \leq\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x_{ k}-x_{*}\right\rangle\] \[\leq 2\bar{r}_{t}\left[\bar{d}_{t}+\bar{r}_{t}\right]\sqrt{v_{t-1}}. \tag{16}\]
We have by smoothness that \(\left\|\nabla f(x)\right\|^{2}\leq 2L(f(x)-f_{*})\) for all \(x\in\mathbb{R}^{d}\), therefore
\[v_{t-1}=\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\|\nabla f(x_{k})\|^{2}\leq 2L\sum_{k=0}^ {t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right].\]
Taking square roots we get
\[\sqrt{v_{t-1}}\leq\sqrt{2L}\sqrt{\sum_{k=0}^{t-1}\bar{r}_{k}^{2} \left[f(x_{k})-f_{*}\right]}. \tag{17}\]
Using eq. (17) in eq. (16) gives
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right]\leq 2\sqrt{2L}\bar{r} _{t}\left(\bar{d}_{t}+\bar{r}_{t}\right)\sqrt{\sum_{k=0}^{t-1}\bar{r}_{k}^{2} \left[f(x_{k})-f_{*}\right]}.\]
If \(f(x_{k})-f_{*}=0\) for some \(k\in[t-1]\) then the statement of the theorem is trivial. Otherwise, we can divide both sides by the latter square root to get
\[\sqrt{\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right]} \leq 2\sqrt{2L}\bar{r}_{t}\left(\bar{d}_{t}+\bar{r}_{t}\right).\]
Squaring both sides gives
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right]\leq 8L\bar{r}_{t}^{2} \left(\bar{d}_{t}+\bar{r}_{t}\right)^{2}.\]
Dividing both sides by \(\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\) we get
\[\frac{1}{\sum_{k=0}^{t-1}\bar{r}_{k}^{2}}\sum_{k=0}^{t-1}\bar{r}_ {k}^{2}\left[f(x_{k})-f_{*}\right] \leq\frac{8L\bar{r}_{t}^{2}\left(\bar{d}_{t}+\bar{r}_{t}\right)^{ 2}}{\sum_{k=0}^{t-1}\bar{r}_{k}^{2}}\] \[=\frac{8L\left(\bar{d}_{t}+\bar{r}_{t}\right)^{2}}{\sum_{k=0}^{t- 1}\frac{\bar{r}_{k}^{2}}{\bar{r}_{t}^{2}}}.\]
By convexity we have
\[f(\bar{x}_{t})-f_{*} \leq\frac{1}{\sum_{k=0}^{t-1}\bar{r}_{k}^{2}}\sum_{k=0}^{t-1}\bar {r}_{k}^{2}\left[f(x_{k})-f_{*}\right]\] \[\leq\frac{8L\left(\bar{d}_{t}+\bar{r}_{t}\right)^{2}}{\sum_{k=0}^ {t-1}\frac{\bar{r}_{k}^{2}}{\bar{r}_{t}^{2}}}. \tag{18}\]
By Lemma 2 applied to the sequence \(s_{k}=\bar{r}_{k}^{2}\) we have that for some \(t\in[T]\)
\[\sum_{k=0}^{t-1}\frac{\bar{r}_{k}^{2}}{\bar{r}_{t}^{2}}\geq\frac{1}{e}\left( \frac{T}{\log_{+}\frac{\bar{r}_{k}^{2}}{\bar{r}_{0}^{2}}}-1\right).\]Because \(\mathcal{X}\) has diameter \(D\) we have \(\bar{r}_{T}^{2}\leq D^{2}\) and therefore
\[\sum_{k=0}^{t-1}\frac{\bar{r}_{k}^{2}}{\bar{r}_{t}^{2}}\geq\frac{1}{e}\left( \frac{T}{\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}}-1\right). \tag{19}\]
We now have two cases:
* If \(T\geq 2\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}\) then \(\frac{T}{\log\frac{D^{2}}{\bar{r}_{0}^{2}}}-1\geq\frac{T}{2\log\frac{D^{2}}{ \bar{r}_{0}^{2}}}\) and we use this in eqs. (18) and (19) to get \[f(\bar{x}_{t})-f_{*}\leq\frac{16eL\left(\bar{d}_{t}+\bar{r}_{t}\right)^{2}}{T} \log\frac{\bar{r}_{T}^{2}}{\bar{r}_{0}^{2}}.\] Observe that because \(\mathcal{X}\) has diameter at most \(D\) we have \(\bar{d}_{t}+\bar{r}_{t}\leq 2D\), therefore \[f(\bar{x}_{t})-f_{*}\leq\frac{64eLD^{2}}{T}\log_{+}\frac{\bar{r}_{T}^{2}}{\bar {r}_{0}^{2}}=\mathcal{O}\left[\frac{LD^{2}}{T}\log_{+}\frac{D}{\bar{r}_{0}} \right].\]
* If \(T<2\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}\), then \(1<\frac{2\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}}{T}\). Let \(t\in[T]\). Using smoothness and this fact we have \[f(\bar{x}_{t})-f_{*}\leq\frac{L}{2}\|\bar{x}_{t}-x_{*}\|^{2}\leq\frac{L\|\bar{ x}_{t}-x_{*}\|^{2}}{T}\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}.\] Observe \(\bar{x}_{t},x_{*}\in\mathcal{X}\) and \(\mathcal{X}\) has diameter \(D\), hence \(\|\bar{x}_{t}-x_{*}\|^{2}\leq D^{2}\) and we get \[f(\bar{x}_{t})-f_{*}\leq\frac{LD^{2}}{T}\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}= \mathcal{O}\left(\frac{LD^{2}}{T}\log_{+}\frac{D}{\bar{r}_{0}}\right).\] Thus in both cases we have that \(f(\bar{x}_{t})-f_{*}=\mathcal{O}\left(\frac{LD^{2}}{T}\log_{+}\frac{D}{\bar{r }_{0}}\right)\), this completes our proof.
### Nonsmooth case
We now give the proof of DoWG's convergence when \(f\) is Lipschitz.
Proof of Theorem 3.: We start with Lemma 3. Let \(t\in[T]\). By eq. (6) we have
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right \rangle\leq 2\bar{r}_{t}\left[\bar{d}_{t}+\bar{r}_{t}\right]\sqrt{v_{t-1}}. \tag{20}\]
Observe that by convexity we have
\[\left\langle\nabla f(x_{k}),x_{k}-x_{*}\right\rangle\geq f(x_{k})-f_{*}.\]
Using this to lower bound the left-hand side of eq. (20) gives
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right] \leq\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left\langle\nabla f(x_{k}),x _{k}-x_{*}\right\rangle\] \[\leq 2\bar{r}_{t}\left[\bar{d}_{t}+\bar{r}_{t}\right]\sqrt{v_{t-1}}. \tag{21}\]
We have by the fact that \(f\) is \(G\)-Lipschitz that \(\left\|\nabla f(x)\right\|^{2}\leq G^{2}\) for all \(x\in\mathcal{X}\). Therefore,
\[v_{t-1} =\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\|\nabla f(x_{k})\|^{2}\] \[\leq\bar{r}_{t}^{2}\sum_{k=0}^{t-1}\|\nabla f(x_{k})\|^{2}\] \[\leq\bar{r}_{t}^{2}G^{2}T.\]Taking square roots and plugging into eq.21 gives
\[\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f(x_{k})-f_{*}\right]\leq 2\bar{r}_{t}^{2} \left[\bar{d}_{t}+\bar{r}_{t}\right]G\sqrt{T}.\]
Dividing both sides by \(\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\) we get
\[\frac{1}{\sum_{k=0}^{t-1}\bar{r}_{k}^{2}}\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[f (x_{k})-f_{*}\right]\leq\frac{2\left[\bar{d}_{t}+\bar{r}_{t}\right]G\sqrt{T}}{ \sum_{k=0}^{t-1}\frac{\bar{r}_{k}^{2}}{\bar{r}_{t}^{2}}}. \tag{22}\]
By Lemma2 applied to the sequence \(s_{k}=\bar{r}_{k}^{2}\) we have that for some \(t\in[T]\)
\[\sum_{k=0}^{t-1}\frac{\bar{r}_{k}^{2}}{\bar{r}_{t}^{2}}\geq\frac{1}{e}\left( \frac{T}{\log_{+}\frac{\bar{r}_{k}^{2}}{\bar{r}_{0}^{2}}}-1\right).\]
Because \(\bar{r}_{T}\leq D\) we further have
\[\sum_{k=0}^{t-1}\frac{\bar{r}_{k}^{2}}{\bar{r}_{t}^{2}}\geq\frac{1}{e}\left( \frac{T}{\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}}-1\right). \tag{23}\]
We now have two cases:
* If \(T\geq 2\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}\): then \(\frac{T}{\log\frac{\bar{r}_{k}^{2}}{\bar{r}_{0}^{2}}}-1\geq\frac{T}{2\log\frac{ \bar{r}_{k}^{2}}{\bar{r}_{0}^{2}}}\). We can use this in eq.22 alongside eq.23 and the fact that \(\log_{+}x^{2}=\max(\log x^{2},1)=\max(2\log x,1)\leq 2\log_{+}x\) to get \[\frac{1}{\sum_{k=0}^{t-1}\bar{r}_{k}^{2}}\sum_{k=0}^{t-1}\bar{r}_{k}^{2}\left[ f(x_{k})-f_{*}\right]\leq\frac{8\left[\bar{d}_{t}+\bar{r}_{t}\right]G}{\sqrt{T}} \log_{+}\frac{D}{\bar{r}_{0}}.\] Because the diameter of \(\mathcal{X}\) is bounded by \(D\) we have \(\bar{r}_{T}\leq D\) and \(\bar{d}_{t}\leq D\), using this and convexity we get \[f(\bar{x}_{t})-f_{*} \leq\frac{1}{\sum_{k=0}^{t-1}\bar{r}_{k}^{2}}\sum_{k=0}^{t-1}\bar {r}_{k}^{2}\left[f(x_{k})-f_{*}\right]\] \[\leq\frac{8\left[\bar{d}_{t}+\bar{r}_{t}\right]G}{\sqrt{T}}\log \frac{\bar{r}_{T}}{\bar{r}_{0}}\] \[\leq\frac{16DG}{\sqrt{T}}\log\frac{D}{\bar{r}_{0}}.\]
* If \(T<2\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}\): then \[1<\frac{2\log_{+}\frac{D^{2}}{\bar{r}_{0}^{2}}}{T}\leq\frac{4\log_{+}\frac{D}{ \bar{r}_{0}}}{T}.\] (24) By convexity and Cauchy-Schwartz we have \[f(\bar{x}_{t})-f_{*} \leq\langle\nabla f(\bar{x}_{t}),\bar{x}_{t}-x_{*}\rangle\] \[\leq\|\nabla f(\bar{x}_{t})\|\|\bar{x}_{t}-x_{*}\|.\] (25) Because \(f\) is \(G\)-Lipschitz then \(\|\nabla f(\bar{x}_{t})\|\leq G\) and because \(\mathcal{X}\) has diameter \(D\) we have \(\|\bar{x}_{t}-x_{*}\|\leq D\). Using this and eq.24 in eq.25 gives \[f(\bar{x}_{t})-f_{*} \leq DG\] \[<\frac{4DG\log_{+}\frac{D}{\bar{r}_{0}}}{T}.\] Now because \(T\geq 1\) we have \(\sqrt{T}\leq T\) and hence \[f(\bar{x}_{t})-f_{*}\leq\frac{4DG\log_{+}\frac{D}{\bar{r}_{0}}}{\sqrt{T}}.\] In both cases, we have that \(f(\bar{x}_{t})-f_{*}=\mathcal{O}\left(\frac{4DG\log_{+}\frac{D}{\bar{r}_{0}} }{\sqrt{T}}\right)\), and this completes our proof.
Unconstrained domain extension
In this section we consider the case where the domain set is unbounded, and we seek dependence only on \(d_{0}=\|x_{0}-x_{*}\|\). We use the same technique for handling the unconstrained problem as (Ivgi et al., 2023) in this section and consider DoWG iterates with the reduced stepsizes
\[\eta_{t} =\frac{\bar{r}_{t}^{2}}{\sqrt{v_{t}}\log\frac{2v_{t}}{v_{0}}} v_{t} =v_{t-1}+\bar{r}_{t}^{2}\|\nabla f(x_{t})\|^{2}. \tag{26}\]
We prove that with this stepsize, the iterates do not venture far from the initialization. The proof follows (Ivgi et al., 2023).
**Lemma 4**.: _(Stability). For the iterates \(x_{t+1}=x_{t}-\eta_{t}\nabla f(x_{t})\) following the stepsize scheme given by (26) we have \(\bar{d}_{t}^{2}\leq 12d_{0}\) and \(\bar{r}_{t}^{2}\leq 32d_{0}^{2}\) provided that \(r_{0}\leq d_{0}\)._
Proof.: By expanding the square and convexity
\[d_{k+1}^{2}-d_{k}^{2}\leq\eta_{t}^{2}\|g_{t}\|^{2}.\]
Summing up as \(t\) varies from \(k=1\) to \(k=t\) and using (Ivgi et al., 2023, Lemma 6)
\[d_{t}^{2}-d_{1}^{2}\leq\sum_{k=1}^{t}\frac{\bar{r}_{k}^{4}}{v_{k}} \frac{\|\nabla f(x_{t})\|^{2}}{4\log^{2}\frac{2v_{k}}{v_{0}}}\leq\frac{\bar{r} _{t}^{2}}{4}\sum_{k=1}^{t}\frac{v_{k}-v_{k-1}}{v_{k}\log^{2}_{+}\frac{v_{k}}{v_ {0}}}\leq\frac{\bar{r}_{t}^{2}}{4}.\]
Therefore we have \(d_{t}^{2}\leq d_{1}^{2}+\frac{\bar{r}_{t}^{2}}{4}\). Now suppose for the sake of induction that \(\bar{r}_{t}^{2}\leq 8d_{1}^{2}\), then applying the last equation we get \(d_{t}^{2}\leq 3d_{1}^{2}\). Taking square roots gives \(d_{t}\leq\sqrt{3}d_{1}\). By the triangle inequality we then get
\[\|x_{t+1}-x_{0}\|\leq\|x_{t+1}-x_{*}\|+\|x_{*}-x_{0}\|\leq(1+\sqrt{3})d_{1}.\]
Squaring both sides gives \(\|x_{t+1}-x_{0}\|^{2}\leq(1+\sqrt{3})^{2}d_{1}^{2}\leq 8d_{1}^{2}\). This completes our induction and we have \(\bar{r}_{t}^{2}\leq 8d_{1}^{2}\) for all \(t\). Finally, observe that
\[d_{1}=\|x_{1}-x_{*}\|\leq\|x_{1}-x_{0}\|+\|x_{0}-x_{*}\|=r_{*}+d_{0}\leq 2d_{0}.\]
It follows that \(\bar{r}_{t}^{2}\leq 8d_{1}^{2}\leq 32d_{0}^{2}\) for all \(t\). Finally, we have \(d_{t}^{2}\leq d_{1}^{2}+\frac{\bar{r}_{t}^{2}}{4}\leq 3d_{1}^{2}\leq 12d_{0}^{2}\). This completes our proof.
Therefore the iterates stay bounded. The rest of the proof then follows Theorems 3 and 4 and is omitted for simplicity.. In both cases it gives the same results with \(D_{0}=\|x_{0}-x_{*}\|\) instead of \(D\), up to extra constants and polylogarithmic factors. | ## Review
### Summary
The paper introduces a novel optimization algorithm named Distance over Weighted Gradients (DoWG), which is a modification of the existing Distance over Gradients (DoG) algorithm. This new method is parameter-free and adaptive, achieving optimal convergence rates without requiring knowledge of smoothness or other specific constants. The authors establish convergence in deterministic settings and provide empirical comparisons demonstrating its effectiveness against other adaptive methods, particularly in training neural networks. While the algorithm shows promise, it does not outperform established methods like Adam under certain conditions. Overall, the paper offers valuable insights into parameter-free optimization techniques.
### Strengths
- The paper is clearly organized and accessible, making it suitable for readers with basic knowledge of optimization.
- It provides strong theoretical contributions, establishing convergence rates and discussing adaptive properties of the algorithm.
- The authors present a well-structured analysis of the modifications made to the DoG algorithm, contributing to the understanding of adaptive optimization methods.
- The empirical results, although limited, show positive evidence of DoWG's effectiveness compared to other parameter-free methods.
### Weaknesses
- DoWG underperforms compared to Adam in some scenarios, which may limit its practical applicability.
- The convergence guarantees are established only in deterministic settings, while it is unclear how they would extend to stochastic environments.
- Certain claims lack formal definitions and rigorous discussion, particularly around concepts like weak and strong adaptivity.
- The motivation behind the modifications from DoG to DoWG could be clarified further to strengthen the argument for the new algorithm's significance.
### Questions
- Can DoWG achieve similar convergence results in stochastic settings as DoG?
- What specific advantages does DoWG provide over DoG in terms of practical applications?
- What are the limitations of extending the proposed results to unbounded domains or other settings?
- Is there a theoretical basis for asserting that DoWG is superior in performance over DoG in all scenarios?
### Soundness
**Score:** 3
**Description:** Good: The theoretical foundations are sound, but some claims lack sufficient rigor and clarity.
### Presentation
**Score:** 3
**Description:** Good: The paper is generally well-written, but some concepts could benefit from clearer definitions and more detailed discussions.
### Contribution
**Score:** 3
**Description:** Good: The paper presents a novel algorithm that adds to the existing literature, though it does not significantly advance the field beyond previous work.
### Rating
**Score:** 6
**Description:** Weak Accept: The paper is technically solid and offers moderate-to-high impact contributions, with some areas needing improvement.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates originality through the introduction of DoWG and provides valuable insights into parameter-free optimization methods. While there are some limitations, particularly in empirical evaluations compared to established algorithms like Adam, the soundness of the theoretical contributions and the overall clarity of presentation justify an acceptance. The authors should address the identified weaknesses in the final version for optimal impact.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# On the Connection between Pre-training Data Diversity and Fine-tuning Robustness
Vivek Ramanujan\({}^{*}\) Thao Nguyen\({}^{*}\)
**Sewoong Oh\({}^{\dagger}\) Ludwig Schmidt\({}^{\dagger\diamond}\) Ali Farhadi\({}^{\dagger\diamond}\)**
\({}^{\dagger}\)University of Washington \({}^{\diamond}\)Allen Institute for AI
{thaottn,ramanv}@cs.washington.edu
Equal contribution
###### Abstract
Pre-training has been widely adopted in deep learning to improve model performance, especially when the training data for a target task is limited. In our work, we seek to understand the implications of this training strategy on the generalization properties of downstream models. More specifically, we ask the following question: how do properties of the pre-training distribution affect the robustness of a fine-tuned model? The properties we explore include the label space, label semantics, image diversity, data domains, and data quantity of the pre-training distribution. We find that the primary factor influencing downstream effective robustness [44] is data quantity, while other factors have limited significance. For example, reducing the number of ImageNet pre-training classes by \(4\times\) while increasing the number of images per class by \(4\times\) (that is, keeping total data quantity fixed) does not impact the robustness of fine-tuned models. We demonstrate our findings on pre-training distributions drawn from various natural and synthetic data sources, primarily using the iWildCam-WILDS distribution shift as a test for robustness.
## 1 Introduction
Transfer learning is a popular technique to deal with data scarcity, improve training speed, or transfer useful inductive biases that can benefit downstream tasks [28][8][10]. In the domain of computer vision, pre-training on ImageNet in particular has been the de-facto standard for obtaining features to solve a wide range of vision tasks, such as object detection [34][3][13], segmentation [5][16], and action recognition [42]. While there exists previous work that seeks to pinpoint specific properties of ImageNet-trained features that benefit downstream performance [19][24][23][38], the analysis is often done with respect to model accuracy. Our work instead examines the robustness of fine-tuned models to natural distribution shifts. Instead of looking at architecture variations and pre-training algorithm as done in prior work [38][14][49], we focus on the role of the pre-training data. This data-centric approach has been validated by past work [27][29][13], which show that the training data distribution plays a larger role than training methods or architecture in influencing model robustness.
Robustness under distribution shifts is a fundamental concern for producing reliable machine learning systems: a model can perform in unexpected and undesirable ways when there is a mismatch between the data distribution encountered in deployment and the one on which the model is trained [36][22]. For example, a self-driving car should be able to generalize to a wide variety of weather scenarios to be considered safe, some of which it may not have seen during training. In our work, we focus on these forms of _natural_ distribution shifts [33][22]--named so because they are induced by real-world processes--and study what aspects of the source dataset could help fine-tuned models become more robust to these shifts. We tackle this question along five different ablation axes: **(i)** Data quantity, **(ii)** Label granularity, **(iii)** Label semantics, **(iv)** Image diversity, and **(v)** Data sources. Through abetter understanding of the interplay between various properties of the pre-training distribution and downstream robustness, we seek to establish guidelines for constructing better pre-training datasets for fine-tuning.
Previous work by Miller et al. [27] experimented with a wide range of natural distribution shifts and found that pre-training on ImageNet yields the biggest improvement in robustness for the iWildCam-WILDS dataset [22][2]. Consequently, we use iWildCam-WILDS as a probe to evaluate how our interventions with the ImageNet pre-training distribution would alter the robustness trend uncovered in this previous work. We also analyze the use of other pre-training data sources that may differ significantly from ImageNet in both semantic content and data collection methodology. Our main findings can be summarized as follows:
(i) **Data quantity.** Pre-training with more data helps boost robustness. However, we do not need a lot of pre-training data to see significant robustness gains: using 25K images subsampled from either ImageNet or iNaturalist, which is \(6\times\) smaller than the size of the fine-tuning dataset, already offers noticable robustness improvements.
(ii) **Label granularity.** Making labels more coarse-grained lowers transfer robustness. The effect is less significant than altering data quantity: extreme reduction in label granularity (e.g., using 5 coarse classes instead of 1000 fine-grained classes) still preserves some of the robustness gains compared to training from scratch.
(iii) **Label semantics.** Given enough data and labels, using more semantically similar classes does not have a notable impact on the robustness of fine-tuned models. In particular, we find that pre-training on the 600 inanimate object categories in ImageNet yields the same effective robustness as pre-training on the 400 animal categories, despite the fact that the downstream task consists of only animal categories.
(iv) **Image diversity.** Given the same pre-training label set and data quantity, increasing per-class diversity (e.g., by including more subclasses) has no effect on transfer robustness. In addition, the trade-off between having more classes and more images per class is not significant if total number of samples is kept constant.
(v) **Data sources.** We find that natural data sources (i.e., ImageNet, iNaturalist) yield similar downstream robustness when controlling for data quantity. Pre-training with synthetic fractal data is less effective at the same data quantity regime but still has some robustness gain to offer compared to training from scratch. Synthetic natural-looking data (e.g., generated by Stable Diffusion [35]) can help close this gap between using natural data and synthetic fractal data.
Overall we find that increasing pre-training data quantity and label granularity makes fine-tuned models more robust to distribution shifts. However, not all additional data is equally helpful. For instance, in the context of iWildCam-WILDS task, pre-training with natural-looking data offers much more robustness than using \(10\times\) more synthetic fractal data.
Figure 1: A summary of our experimental pipeline. We pre-train a model on a variety of different data distributions and evaluate its effective robustness after fine-tuning on a downstream task (i.e., iWildCam). By examining many models in this manner, we can determine empirical properties of the pre-training distribution that are important for fine-tuning robustness.
## 2 Background
The main motivation for our paper comes from the work by Huh et al. [19], which investigates various factors in ImageNet training that affect the quality of the features used subsequently for transfer learning. For our investigation, we shift the focus from accuracy to robustness against distribution shift, which has been a long-standing issue in machine learning [31][43][3][4]. In particular, we analyze the robustness of pre-trained features to natural distribution shifts observed in the real world through the iWildCam-WILDS benchmark [22]. Furthermore, in contrast to Huh et al. [19], we experiment with a greater variety of more recent neural network architectures, in addition to exploring the use of synthetic pre-training data.
A key goal in robustness is to reduce the impact of distribution shifts on the performance of a model. If model performances on in- and out-of-distribution test sets are plotted along the \(x\) and \(y\)-axes of a scatter plot respectively, then a more robust model would lie closer to the diagonal \(y=x\) line. This notion of robustness was captured by Taori et al. [43] under the term _effective robustness_, which measures the difference between a model's actual OOD performance and what could be predicted from its ID performance (Figure2). Miller et al. [27] adopted this effective robustness framework and evaluated hundreds of models on various distribution shift settings. The authors observed that ID performance is highly correlated with OOD performance. This linear trend mapping ID to OOD performance, and how close it is to the \(y=x\) line, is what we use in our work to compare the quality of the pre-trained features.
More notably, Miller et al. [27] discovered that on the iWildCam dataset, models trained from scratch and those that have been pre-trained on ImageNet lie on distinct linear trends, with the latter exhibiting more robustness. We replicate these reported trends in Figure2 Motivated by this result, our work seeks to better understand what aspects of ImageNet pre-training contribute to this improved robustness on iWildCam, and how these aspects translate to other pre-training data sources.
Previous work by Andreassen et al. [1] has looked at effective robustness over the course of fine-tuning and found that pre-trained models exhibit high effective robustness in the middle of fine-tuning, which eventually decreases as the training proceeds. The paper also experimented with ImageNet as one of the pre-training data sources. In our investigation, as a sanity check to remove number of training epochs as a potential source of bias for the linear fit, we adopt the linear trend of models pre-trained on ImageNet and fine-tuned on iWildCam computed previously by Miller et al. [27] as the baseline. We then report the residuals from comparing actual OOD performance at different epochs to what could be predicted from the corresponding ID performance using this baseline. Refer
Figure 3: We visualize the residuals of various architectures after fitting a linear trend that predicts OOD accuracy from ID accuracy. All models are pre-trained on the full ImageNet dataset and fine-tuned on iWildCam for 12 epochs. We observe that overall the residuals fluctuate around the \(y=0\) line and vary throughout the course of fine-tuning for most architectures.
Figure 2: Effective robustness is defined as movement towards a classifier which is robust to distribution shift (i.e., line \(y=x\)). Using this metric, Miller et al. [27] observes that for the iWildCam-WILDS task, models pre-trained on ImageNet are much more robust than models trained from scratch. We reproduce these two trends and use them as points of reference for our subsequent experiments, in which we modify the pre-training distribution and observe how our interventions alter the robustness trend lines.
to Figure2 for more details. We find that in the context of iWildCam fine-tuning, at each epoch, the residuals from our architectures of choice concentrate around the \(y=0\) line and exhibit no particular trend. This in turn allows us to vary the number of fine-tuning epochs as a hyperparameter, and obtain models covering a wide range of test performances for the scatter plots.
## 3 Experimental Setup
As mentioned earlier, the downstream task of interest is wildlife classification with the iWildCam-WILDS dataset [22]: the input is a photo taken by a camera trap, and the output is one of 182 different animal species. There are two test sets for evaluation: ID test data consists of images taken by the same camera traps as the training set, but on different days from the training and validation (ID) images. In contrast, OOD test data contains images taken by a disjoint set of camera traps from training and validation (ID) images. We include some examples of the geodiversity represented in each test split in Appendix Figure13 Following [22], we report the macro F1 scores of the trained networks because this metric emphasizes performance on rare species, which is critical to the biodiversity monitoring application that the dataset was designed for.
**Pre-training datasets.** We use ImageNet [11] and iNaturalist [46] as the primary pre-training distributions of interest, given their hierarchical structures, complexity, and relevance to the downstream task. The two data sources also differ in many ways (Table1), hence their pre-trained features make for an informative comparison. We also include experiments with synthetic pre-training data by using Stable Diffusion [35] and the FractalDB-1k dataset [21]. We will elaborate on this in Section4.3
**Network architectures.** To obtain data for plotting linear trends, we train a range of standard neural network architectures including ResNet [15], ResNext [48], DenseNet [20], AlexNet [23] and MobileNet V3 [17]. In our scatter plots, besides varying the architectures, we also vary the number of fine-tuning epochs to obtain models with varying F1 scores. AppendixA contains further training details. While our focus is on supervised pre-training, we also report some additional results with CLIP [32] architecture in Section5
In the subsequent sections, we detail different interventions made to the pre-training distribution to disentangle key properties of interest. We show the resulting linear trends in relation to the trends replicated from previous work [27], which include models trained from scratch on iWildCam (solid blue line) as well as models pre-trained on ImageNet (solid cyan line). For each trend line, we show 95% bootstrap confidence intervals for the linear fit.
## 4 Experiment Results
### Effect of Data Quantity
First, we experiment with reducing the pre-training set size. To remove potential confounding effects from a long-tailed data distribution, we ensure that the class distribution of our pre-training datasets
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Training set size & Number of classes & Class distribution & Class hierarchy & Expert-labeled \\ \hline ImageNet & 1,281,167 & 1,000 & Class-balanced & WordNet & No \\ \hline iNaturalist & 579,184 & 5,089 & Long-tailed & Tree of life & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 1: Differences between the ImageNet and iNaturalist datasets.
Figure 4: Reducing the number of pre-training images randomly sampled from **(left)** ImageNet and **(right)** iNaturalist lowers the robustness linear trends of the fine-tuned models. However, note that using only 25K pre-training samples (green line) still yields significant robustness improvements compared to training from scratch on 129K iWildCam images (dark blue line). We subsample iNaturalist to ensure class balance, by only including 1000 classes with the most number of samples.
are uniform. ImageNet is already class-balanced, but this is not the case for iNaturalist [46]. We experiment with a 1000-class subset of iNaturalist using its most frequent classes. We further select images within each class uniformly at random so that the number of samples is the same across all classes. This results in a class-balanced training set of size 150K from iNaturalist. We repeat the same procedure to obtain subsets of size 100K, 50K, 25K and 5K. A similar subsampling process is done on ImageNet, using the full 1000-class dataset which already has a uniform label distribution.
In Figure 4 we observe that reducing the data quantity during pre-training lowers the effective robustness of fine-tuned models. However, it is worth noting at 25K images, pre-training with subsampled ImageNet and iNaturalist data still produces much more robust models compared to training from scratch. This is 6\(\times\) less data compared to what is used for fine-tuning. As a sanity check, we find that using only 5K samples (i.e., 5 examples per class) during pre-training yields roughly the same level of robustness as training from scratch on iWildCam.
### Effect of Label Granularity
Next, we adapt a question raised previously by Huh et al. [19] to our investigation: how does varying the number of pre-training classes affect downstream robustness? Following [19], we construct _supersets_ of classes in ImageNet using the WordNet hierarchy. We use the maximum of the shortest path distance from the root of WordNet to a label to compute the maximum depth of the current label set. We then contract ImageNet label nodes along the shortest path to construct superclasses. Specifically, we investigate depths 2, 4, 5, 6, and 7, which result in class counts of 5, 17, 37, 85, and 232 respectively, in order to provide good coverage across a range of label granularities. Similarly, on iNaturalist, we use the superclass information that comes with the dataset to collapse the label space from 5,089 fine-grained classes to 13 coarse classes.
For ImageNet pre-training, we find that using the full 1000 classes provides the most robustness. However, when the label set size is reduced by four times (i.e., taking 232 superclasses at depth 7), model robustness only decreases slightly. From then on, reducing the label set further to 85 classes (depth 6), and then 37 classes (depth 5), does not deteriorate the linear trend further. Only when we experiment with 17 classes (depth 4) do we find another noticeable reduction in effective robustness. With 5 superclasses as the only pre-training labels (depth 2), pre-trained models still yield significantly more robustness than training from scratch.
On iNaturalist, we also observe a similar downward shift in linear trend when we reduce the initial label space to its phylum. Refer to Figure 5 for more details. Overall these findings suggest that using fine-grained labels during pre-training is better for learning representations that are robust to distributions shifts in downstream tasks. But even if only coarse labels are available, pre-training with enough data still has significant robustness benefits to offer.
### Effect of Label Semantics
The number of pre-training classes seems to have an impact on downstream robustness, but does it matter what kind of classes models are pre-trained on? We next investigate whether using classes whose semantics are more aligned with the downstream task would improve robustness.
Figure 5: Results of changing the label granularity of the pre-training task by combining classes according to some semantics hierarchy to form supersets, for **(left)** ImageNet and **(right)** iNaturalist. In general, this intervention lowers model robustness on downstream task. However, extreme reduction of the pre-training label space, e.g. by 200\(\times\) in the case of ImageNet, still offers robustness gains compared to training from scratch.
To do so, we separately pre-train models on ImageNet classes that are subsets of the "object" and "animal" WordNet synsets. This yields 2 broad categories that are similar in total sample size, each having around 600K images. In Figure 6 we find that models trained on "animal" classes (yellow line) exhibit slightly higher F1 scores, but roughly the same effective robustness as models trained on "object" classes (green line). This is surprising given that the fine-tuning distribution, iWildCam, contains only images of animals in the wild, which are semantically more similar to the animal classes of ImageNet. It is worth noting that models pre-trained on "object" classes are also much more robust than models trained from scratch (blue line).
One potential confounder to this experiment setup is that some images from "object" classes also contain animals (i.e., co-occurrences that are not accounted for by ImageNet labels). To estimate the extent of this problem, we use TensorFlow's ImageNet2012 multilabel set [41], containing 20k ImageNet validation data with multi-class labels reviewed by experts. We find that 1.1% of the data have labels from both "animal" and "object" classes present in the same image, suggesting that the label co-occurrence issue only impacts a small fraction of training data. Consequently, we posit that training on a diverse set of classes in general helps the model pick up on useful invariances that in turn lead to similar downstream robustness. We explore this hypothesis further in Section 4.5 with synthetic training data.
### Effect of Image Diversity
Besides labels, another source of diversity comes from the training images themselves. We experiment with two different notions of image diversity : **(i)** Label diversity, and **(ii)** Per-class image diversity.
#### 4.4.1 More Data Per Class vs. More Classes of Data
A natural question arises when designing a dataset with a fixed data budget (or labeling cost): _should we collect more data from existing categories or more categories of data?_ To address this question, we keep the total number of images fixed while varying the number of classes of ImageNet we use for pre-training. For example, if we have a budget of 60K images and 100 randomly selected ImageNet classes, we sample 600 images uniformly at random from each of these classes (Figure 7). Here, we find that in the context of the iWildCam distribution shift, there is no difference on downstream robustness between having more data per class or having more classes, as long as the total number of images is constant. This observation also holds at a larger data quantity scale (300K images, see Appendix Figure [18]. This result demonstrates the dominant effect of data quantity over other aspects of the pre-training distribution (e.g., label set size).
#### 4.4.2 Image Diversity Within Each Class
Another way of modulating dataset diversity is by changing _per-class_ image diversity. For example, given a "dog" class, a dataset which only contains images of one dog breed could be seen as less diverse than a dataset which has examples of several breeds. In order to construct a controlled experiment, we use a quantitative heuristic for the diversity of each class: we fix certain superclasses (using the WordNet hierarchy) as the training labels and vary the number of corresponding subclasses where the images are taken from. For iNaturalist we can do the same with the tree of life structure. More diversity means more subclasses chosen per superclass.
For the ImageNet distribution that is built on the WordNet hierarchy, we construct a subset following BREEDS [39]. The two main ways that BREEDS recalibrates the WordNet hierarchy fit our goals for image diversity: (i) selected nodes convey some visual information, and (ii) nodes of similar
Figure 6: Varying the semantic category of classes included in the pre-training data yields similar robustness linear trends, with pre-training only on “animal” classes exhibiting slightly higher F1 scores than pre-training only on “object” classes. Even though the downstream task is animal classification, models pre-trained only on “object” classes are still much more robust than models that do not undergo any pre-training.
specificity share the same distance from the root (e.g., "dog" and "cat" are now both at the same depth even if the "dog" subtree is much larger). With this new hierarchy, we obtain 16 superclasses, each encompassing 12 subclasses (i.e., original ImageNet classes). The full list can be found in AppendixC.1 We vary image diversity by changing the number of subclasses per superclass: 4, 8 or 12 subclasses--corresponding to the diversity ratio \(p=0.33,p=0.67,p=1.0\) in the left panel of Figure8 To prevent data quantity from being a confounding variable, we subsample images from each chosen subclass accordingly (e.g., if we reduce number of subclasses per superclass by 3\(\times\) then we sample 3\(\times\) more images from each subclass).
For iNaturalist, we fix the total number of images at 80K and apply the same procedure described above to select a fraction of subclasses (see diversity ratio values in the right panel of Figure8), for each of the following superclasses: "Plantae", "Insecta", "Mammalia", "Fungi", "Aves", "Reptilia", and "Amphibia." We choose this set of superclasses so we could have a uniform distribution of images per class while maintaining the same number of images as our ImageNet experiment. For more details, see AppendixC.1 As seen in Figure8 for both ImageNet and iNaturalist, the resulting linear trends are highly similar regardless of the diversity ratio \(p\), or the number of subclasses per superclass. We conclude that in this case, per-class image diversity does not have a significant impact on downstream robustness. Note that this does not hold in the extreme setting, e.g. repeating the same image to produce a dataset.
### Pre-training with Different Data Sources
Moving beyond interventions _within_ each data distribution, we now compare fine-tuning robustness behaviors _across_ different data sources.
Compared to ImageNet, iNaturalist exhibits different characteristics on multiple axes (see Table1). We expect that pre-training on the diverse, domain-specific species - which have been verified by nature enthusiasts - in iNaturalist will provide a boost on robustness for the downstream animal-in-the-wild classification task, compared to training on general web-curated classes in ImageNet. However, in Figure10 we find that iNaturalist behaves similarly to ImageNet as a pre-training data source. Even when we subsample iNaturalist to follow the ImageNet class distribution (refer to Section4.1 for its construction), we observe a similar level of effective robustness compared to
Figure 8: We fix the total amount of pre-training data and the label space, while reducing the number of subclasses that constitute each superclass label in **(left)** ImageNet and **(right)** Naturalist. Smaller \(p\) (diversity ratio) means proportionally fewer subclasses per superclass. We find that reducing per-class diversity by up to 3\(\times\) has no effect on the robustness of downstream models.
Figure 7: We vary the number of classes randomly selected from the original 1000 ImageNet classes and adjust the number of images per class correspondingly, such that total image quantity is 60K. We observe that having 4\(\times\) more classes, or 4\(\times\) more images per class, induces the same level of robustness in fine-tuned models. Experiments with 300K data regime can be found in AppendixFigure18
the equal-sized 150K ImageNet subset (Figure 11). We hypothesize that when a certain level of "diversity" is reached with the training images and labels, there is negligible robustness gain to be made even if we increase the alignment between the pre-training and fine-tuning data domains.
#### 4.5.1 Synthetic Data Sources
To push our diversity hypothesis to the limit, we also pre-train the same set of architectures on FractalDB-1k dataset [21], which has similar class distribution to ImageNet but only contains synthetic fractal images. Pre-training on FractalDB-1k has been shown to surpass the accuracy of pre-training on ImageNet/Places [21]. For the task of iWildCam-WILDS, however, it is noticeably less effective at improving downstream robustness compared to natural image data (Figure 10). However, pre-training with fractal images still offers more robustness than training from scratch.
Can we generate better synthetic data for pre-training than FractalDB-1k? We experiment with Stable Diffusion [33], a popular diffusion model which generates high-quality images from natural language prompts, to generate natural-looking images following the ImageNet class distribution. We use 80 diverse prompts per ImageNet class from [32] to generate a 150K ImageNet-like dataset. Examples from this synthetic dataset can be seen in Figure 9. We find that pre-training on this dataset yields similar robust generalization behaviors as using the same quantity of ImageNet and iNaturalist images (Figure 11). However, at a larger scale of 1M images, the robustness benefits of pre-training with synthetic data begins to saturate and slightly lag behind iNaturalist and ImageNet. See Appendix Figure[19] for more details.
Overall our findings demonstrate that while nuances in image semantics during pre-training are not important for fine-tuning robustness (e.g., Animals versus Objects classes, or iNaturalist versus ImageNet), it is still beneficial to match the general characteristics of the downstream data (e.g., "natural-looking" images).
## 5 Self-supervised Pre-training
Previous experiments revolve around the supervised learning settings. However, it is increasingly common to pre-train on _self-supervised_ tasks using web-crawled corpora, which has been shown to significantly improve robustness to distribution shifts [32]. Our preliminary experiments with pre-trained CLIP models [32] on iWildCam show that the resulting ID and OOD performances still lie on the ImageNet pre-training linear trend (i.e., cyan line), despite the self-supervised training mechanism and the much larger training dataset of CLIP. Varying CLIP's data sources only move the F1 scores along the same line (Figure12).
Figure 10: Pre-training on a noisy, long-tailed distribution of natural images like iNaturalist (red line) does not change the robustness on downstream task, compared to pre-training on a clean, class-balanced dataset like ImageNet (cyan line). Pre-training on the same amount of synthetic fractal data (FractalDB-1k) yields much lower robustness (green line), but still has some benefits compared to training from scratch (dark blue line).
Figure 9: Each grid in order shows random examples from the ImageNet ILSVRC 2012 challenge train set [37][11], the iNaturalist 2017 challenge train set [46], the FractalDB-1k synthetic train set [21], and a 1000-class ImageNet-style synthetic dataset generated using Stable Diffusion [33].
We also repeat the experiments with varying data quantity (Section 4.1) in the context of CLIP's image-text pre-training data. Refer to Appendix E for more details. We leave an extensive evaluation of how "diversity" of the pre-training distribution should be defined and measured differently on these open-vocabulary web datasets to future work.
## 6 Conclusion & Discussion
In this work, we find that many factors during pre-training such as label semantics and image diversity, do not significantly alter the effective robustness of models fine-tuned on iWildCam-WILDS. The more influential factors for downstream robust generalization are the _quantity_ of the pre-training data and the _granularity_ of the label set. Through experiments with Stable Diffusion, we also demonstrate the potential of synthetic natural-looking images as a way to increase the effectiveness of the pre-training distribution along these two ablation axes.
We can think about pre-training dataset construction in terms of an explore vs. exploit trade-off. Exploration, such as finding new data sources, is often time consuming, while exploiting, or collecting as much data as possible from an existing source, can sometimes be significantly easier. Our experiments suggest that a good approach to building pre-training dataset for robust generalization is to find a few data sources that exhibit robustness characteristics on a downstream task (e.g., Stable Diffusion data), and then collect as many data points from these sources as possible.
It is important to note that we are studying a different model behavior than Huh et al. [19]. Some interventions can reduce average performance while maintaining effective robustness (e.g., label granularity in Figure 5) and vice-versa (e.g., architecture modifications). Thus, certain choices during pre-training dataset design depend fundamentally on the goals of the dataset. For example, whether we want to achieve consistent performance across many settings (i.e., robustness) or optimize for very good performance on one specific application changes the applicability of our results.
An important open question is determining what characteristic of the iWildCam-WILDS task leads to the difference in linear trend between pre-training and training from scratch. Some other datasets (e.g., fMoW-WILDS, see [22][71]) do not exhibit this behavior after fine-tuning, so it is important to pinpoint distribution shifts where pre-training can provide a significant boost in effective robustness. Finding a unifying property among such datasets would allow for better interpretation of our current results. In Appendix G we look into distribution shift settings constructed from the DomainNet benchmark [50], and observe that pre-training and training from scratch only produce different linear trends for certain pairs of domains and not others. Refer to this Appendix for further discussion.
Figure 11: Similar to results in Figure[10] with a budget of 150K images, pre-training on FractalDB-1k is significantly less effective than iNaturalist or ImageNet. However, we find that generating natural-looking data with Stable Diffusion can help close this gap. All datasets are subsampled to be class-balanced, each having 1000 classes.
Figure 12: Results from fine-tuning CLIP [32] pre-trained models on iWildCam-WILDS. The models we use include CLIP with ResNet-50 image encoder trained on YFCC-15M [45] and LAION-15M [40] respectively, as well as all original CLIP models released by OpenAI, including 3 with ViT [12] backbone, trained on a dataset of size 400M. All of these models lie on the same linear trend as that of ImageNet pre-training, demonstrating the consistency of this trend across many pre-training dataset size scales and training algorithms.
Acknowledgements
This work is supported in part by Open Philanthropy, the Allen Institute for AI, and NSF grants DMS-2134012 and CCF-2019844 as a part of NSF Institute for Foundations of Machine Learning (IFML). Ali Farhadi acknowledges funding from the NSF awards IIS 1652052, IIS 17303166, DARPA N66001-19-2-4031, DARPA W911NF-15-1-0543, and gifts from Allen Institute for Artificial Intelligence, Google, and Apple.
## References
* [1]A. Andreassen, Y. Bahri, B. Neyshabur, and R. Roelofs (2021) The evolution of out-of-distribution robustness throughout fine-tuning. arXiv preprint arXiv:2106.15831. Cited by: SS1.
* [2]S. Beery, E. Cole, and A. Gjoka (2020) The iwildcam 2020 competition dataset. arXiv preprint arXiv:2004.10340. Cited by: SS1.
* [3]B. Biggio and F. Roli (2018) Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recognition84, pp. 317-331. Cited by: SS1.
* [4]B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, and F. Roli (2013) Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pp. 387-402. Cited by: SS1.
* [5]L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2017) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence40 (4), pp. 834-848. Cited by: SS1.
* [6]T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597-1607. Cited by: SS1.
* [7]G. Christie, N. Fendley, J. Wilson, and R. Mukherjee (2018) Functional map of the world. In CVPR, Cited by: SS1.
* [8]A. Conneau and D. Kiela (2018) SentEval: an evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449. Cited by: SS1.
* [9]J. Dai, Y. Li, K. He, and J. Sun (2016) R-fcn: object detection via region-based fully convolutional networks. Advances in neural information processing systems29. Cited by: SS1.
* [10]J. De Fauw, J. R. Ledsam, B. Romera-Paredes, S. Nikolov, N. Tomasev, S. Blackwell, H. Askham, X. Glorot, B. O'Donoghue, D. Visentin, et al. (2018) Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature medicine24 (9), pp. 1342-1350. Cited by: SS1.
* [11]J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. Cited by: SS1.
* [12]A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. (2020) An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Cited by: SS1.
* [13]A. Fang, G. Ilharco, M. Wortsman, Y. Wan, V. Shankar, A. Dave, and L. Schmidt (2022) Data determines distributional robustness in contrastive language image pre-training (clip). arXiv preprint arXiv:2205.01397. Cited by: SS1.
* [14]Y. Guo, H. Shi, A. Kumar, K. Grauman, T. Rosing, and R. Feris (2019) SpotTune: transfer learning through adaptive fine-tuning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4805-4814. Cited by: SS1.
* [15]K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. Cited by: SS1.
* [16] K. He, G. Gkioxari, P. Dollar, and R. Girshick. Mask r-cnn. In _Proceedings of the IEEE international conference on computer vision_, pages 2961-2969, 2017.
* [17] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, et al. Searching for mobilenetv3. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 1314-1324, 2019.
* [18] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 7310-7311, 2017.
* [19] M. Huh, P. Agrawal, and A. A. Efros. What makes imagenet good for transfer learning? _arXiv preprint arXiv:1608.08614_, 2016.
* [20] F. Iandola, M. Moskewicz, S. Karayev, R. Girshick, T. Darrell, and K. Keutzer. Densenet: Implementing efficient convnet descriptor pyramids. _arXiv preprint arXiv:1404.1869_, 2014.
* [21] H. Kataoka, K. Okayasu, A. Matsumoto, E. Yamagata, R. Yamada, N. Inoue, A. Nakamura, and Y. Satoh. Pre-training without natural images. _International Journal of Computer Vision (IJCV)_, 2022.
* [22] P. W. Koh, S. Sagawa, H. Marklund, S. M. Xie, M. Zhang, A. Balsubramani, W. Hu, M. Yasunaga, R. L. Phillips, I. Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In _International Conference on Machine Learning_, pages 5637-5664. PMLR, 2021.
* [23] A. Kolesnikov, L. Beyer, X. Zhai, J. Puigcerver, J. Yung, S. Gelly, and N. Houlsby. Big transfer (bit): General visual representation learning. In _European conference on computer vision_, pages 491-507. Springer, 2020.
* [24] S. Kornblith, J. Shlens, and Q. V. Le. Do better imagenet models transfer better? In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 2661-2671, 2019.
* [25] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. _Advances in neural information processing systems_, 25:1097-1105, 2012.
* [26] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_, 2017.
* [27] J. P. Miller, R. Taori, A. Raghunathan, S. Sagawa, P. W. Koh, V. Shankar, P. Liang, Y. Carmon, and L. Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In _International Conference on Machine Learning_, pages 7721-7735. PMLR, 2021.
* [28] R. Mormont, P. Geurts, and R. Maree. Comparison of deep transfer learning strategies for digital pathology. In _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_, pages 2262-2271, 2018.
* [29] T. Nguyen, G. Ilharco, M. Wortsman, S. Oh, and L. Schmidt. Quality not quantity: On the interaction between dataset design and robustness of clip. _arXiv preprint arXiv:2208.05516_, 2022.
* [30] X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang. Moment matching for multi-source domain adaptation. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 1406-1415, 2019.
* [31] J. Quinonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence. _Dataset shift in machine learning_. Mit Press, 2008.
* [32] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_, pages 8748-8763. PMLR, 2021.
* [33] B. Recht, R. Roelofs, L. Schmidt, and V. Shankar. Do imagenet classifiers generalize to imagenet? In _International Conference on Machine Learning_, pages 5389-5400. PMLR, 2019.
* [34] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. _Advances in neural information processing systems_, 28, 2015.
* [35] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models, 2021.
* [36] A. Rosenfeld, R. Zemel, and J. K. Tsotsos. The elephant in the room. _arXiv preprint arXiv:1808.03305_, 2018.
* [37] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. _International journal of computer vision_, 115(3):211-252, 2015.
* [38] H. Salman, A. Ilyas, L. Engstrom, A. Kapoor, and A. Madry. Do adversarially robust imagenet models transfer better? _Advances in Neural Information Processing Systems_, 33:3533-3545, 2020.
* [39] S. Santurkar, D. Tsipras, and A. Madry. Breeds: Benchmarks for subpopulation shift. _arXiv preprint arXiv:2008.04859_, 2020.
* [40] C. Schuhmann, R. Vencu, R. Beaumont, R. Kaczmarczyk, C. Mullis, A. Katta, T. Coombes, J. Jitsev, and A. Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. _arXiv preprint arXiv:2111.02114_, 2021.
* [41] V. Shankar*, R. Roelofs*, H. Mania, A. Fang, B. Recht, and L. Schmidt. Evaluating machine accuracy on imagenet. _ICML_, 2020. [http://proceedings.mlr.press/v119/shankar20c.html](http://proceedings.mlr.press/v119/shankar20c.html)
* [42] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. _Advances in neural information processing systems_, 27, 2014.
* [43] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. _arXiv preprint arXiv:1312.6199_, 2013.
* [44] R. Taori, A. Dave, V. Shankar, N. Carlini, B. Recht, and L. Schmidt. Measuring robustness to natural distribution shifts in image classification. _Advances in Neural Information Processing Systems_, 33:18583-18599, 2020.
* [45] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. Yfcc100m: The new data in multimedia research. _Communications of the ACM_, 59(2):64-73, 2016.
* [46] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. The inaturalist species classification and detection dataset. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 8769-8778, 2018.
* [47] H. Wang, S. Ge, Z. Lipton, and E. P. Xing. Learning robust global representations by penalizing local predictive power. In _Advances in Neural Information Processing Systems_, pages 10506-10518, 2019.
* [48] S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1492-1500, 2017.
* [49] K. You, Z. Kou, M. Long, and J. Wang. Co-tuning for transfer learning. _Advances in Neural Information Processing Systems_, 33:17236-17246, 2020. | ## Review
### Summary
This paper investigates the impact of pre-training data diversity on the robustness of fine-tuning in machine learning models. The authors conduct a series of experiments considering various factors such as data quantity, label granularity, label semantics, image diversity, and data sources. Their findings highlight that certain aspects, such as the quantity of pre-training data and the granularity of the label set, significantly influence the robustness of downstream fine-tuning. Additionally, the study explores the use of synthetic data and its effectiveness in enhancing pre-training distributions. Overall, the paper offers valuable insights that could help practitioners optimize their pre-training processes.
### Strengths
- Provides insights on how various factors like data diversity, label space, and label semantics affect the robustness of models, which is helpful for ML practitioners.
- Experiments are thorough and well-structured, and the presentation is clear, highlighting the main results effectively.
- The motivation is clear, and the paper is well-written.
- The study offers a comprehensive exploration of the relationship between pre-training datasets and fine-tuning robustness.
- The conclusions drawn are useful for follow-up research.
### Weaknesses
- The focus on supervised pre-training is a limitation, as this approach is becoming less common with the rise of self-supervised methods.
- Only one downstream task (iWildCam-WILDS) is considered, making it difficult to generalize the results to other datasets.
- The reliance on a single metric to measure effective robustness may not provide a comprehensive evaluation.
- Figures lack sufficient definition, leading to challenges in interpreting their significance.
- The conclusions drawn on the necessity of data quantity for robustness gains require more robust evidence.
### Questions
- Can the results generalize to other downstream tasks beyond iWildCam-WILDS?
- Why is the fine-tuning epoch set to 12 in Figure 3?
- Could varying the number of fine-tuning epochs impact the robustness of the pre-trained model?
- How would incorporating transformer networks affect the results, given their growing popularity?
- What influence does long-tailed data distribution have on fine-tuning robustness?
### Soundness
**Score:** 3
**Description:** 3 = good: The study presents solid empirical evidence, but some concerns about the methodology and reliance on single metric evaluations suggest room for improvement.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is generally well-structured and clear, although some figures require better definition for improved interpretability.
### Contribution
**Score:** 3
**Description:** 3 = good: The paper makes a valuable contribution to understanding pre-training data diversity, but further exploration of robustness in different contexts could strengthen its impact.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and has moderate-to-high impact potential, but it requires minor improvements in clarity and depth of analysis.
### Paper Decision
**Decision:** Accept (spotlight)
**Reasons:** The paper presents a substantial contribution to the field by empirically exploring the influence of pre-training data diversity on fine-tuning robustness. Despite some weaknesses in generalization and clarity of presentation, the overall findings are significant and provide valuable insights for practitioners. The authors are encouraged to address the feedback when preparing the final version.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Revisiting Implicit Differentiation for Learning Problems in Optimal Control
Ming Xu
School of Computing
Australian National University
[email protected]
&Timothy Molloy
School of Engineering
Australian National University
[email protected]
&Stephen Gould
School of Computing
Australian National University
[email protected]
###### Abstract
This paper proposes a new method for differentiating through optimal trajectories arising from non-convex, constrained discrete-time optimal control (COC) problems using the implicit function theorem (IFT). Previous works solve a differential Karush-Kuhn-Tucker (KKT) system for the trajectory derivative, and achieve this efficiently by solving an auxiliary Linear Quadratic Regulator (LQR) problem. In contrast, we directly evaluate the matrix equations which arise from applying variable elimination on the Lagrange multiplier terms in the (differential) KKT system. By appropriately accounting for the structure of the terms within the resulting equations, we show that the trajectory derivatives scale linearly with the number of timesteps. Furthermore, our approach allows for easy parallelization, significantly improved scalability with model size, direct computation of vector-Jacobian products and improved numerical stability compared to prior works. As an additional contribution, we unify prior works, addressing claims that computing trajectory derivatives using IFT scales quadratically with the number of timesteps. We evaluate our method on a both synthetic benchmark and four challenging, learning from demonstration benchmarks including a 6-DoF maneuvering quadrotor and 6-DoF rocket powered landing.
## 1 Introduction
This paper addresses end-to-end learning problems that arise in the context of constrained optimal control, including trajectory optimization, inverse optimal control, and system identification. We propose a novel, computationally efficient approach to computing analytical derivatives of state and control trajectories that solve constrained optimal control (COC) problems with respect to underlying parameters in cost functions, system dynamics, and constraints (e.g., state and control limits).
The efficient computation of these _trajectory derivatives_ is important in the (open-loop) solution of optimal control problems for both trajectory optimization and model predictive control (MPC). Such derivatives are also crucial in inverse optimal control (also called inverse reinforcement learning or learning from demonstration), where given expert demonstration trajectories, the objective is to compute parameters of the cost function that best explain these trajectories. Extensions of inverse optimal control that involve inferring parameters of system dynamics in addition to cost functions also subsume system identification, and provide further motivation for the efficient computation of trajectory derivatives. Our proposed method of computing trajectory derivatives enables directminimization of a loss function defined over optimal trajectories (e.g., imitation loss) using first-order descent techniques and is a direct alternative to existing works [2; 20; 22], including those based on bespoke derivations of Pontryagin's maximum principle (PMP) in discrete time [20; 22].
Analytical trajectory derivatives for COC problems are derived by differentiating through the optimality conditions of the underlying optimization problem and applying Dini's implicit function theorem. Derivatives are recovered as the solution to a system of linear equations commonly referred to as the (differential) KKT system. This framework underlies all existing works [2; 20; 22] as well as our proposed approach. Similarly to Amos et al. [2], we construct the differential KKT system, however we use the identities from Gould et al. [14] that apply variable elimination on the Lagrange multipliers relating to the dynamics and constraints. We show how to exploit the block-sparse structure of the resultant matrix equations to achieve linear complexity in computing the trajectory derivative with trajectory length. Furthermore, we can parallelize the computation, yielding superior scalability and numerical stability on multithreaded systems compared to methods derived from the PMP [20; 22].
Our specific contributions are as follows. First, we derive the analytical trajectory derivatives for a broad class of constrained (discrete-time) optimal control problems with additive cost functions and dynamics described by first-order difference equations. Furthermore, we show that the computation of these derivatives is linear in trajectory length by exploiting sparsity in the resulting matrix expressions. Second, we describe how to parallelize the computation of the trajectory derivatives, yielding lower computation time and superior numerical stability for long trajectories. Third, we show how to directly compute vector-Jacobian products (VJPs) with respect to some outer-level loss over optimal trajectories, yielding further improvements to computation time in the context of bi-level optimization. This setting commonly arises in the direct solution of inverse optimal control problems [17; 33; 34]. Finally, we provide discussion unifying existing methods for computing trajectory derivatives [2; 20; 22]. We dub our method IDOC (Implicit Differentiation for Optimal Control) and validate it across numerous experiments, showing a consistent speedup over existing approaches derived from the PMP. Furthermore, for constrained problems, IDOC provides significantly improved trajectory derivatives, resulting in superior performance for a general learning from demonstration (LfD) task.1.
Footnote 1: Code available at [https://github.com/mingu6/Implicit-Diff-Optimal-Control](https://github.com/mingu6/Implicit-Diff-Optimal-Control)
## 2 Related Work
Differentiating Through Optimal Control Problems.An optimal control (OC) problem consists of (system) dynamics, a cost function, an optimal control policy, and a set of state and/or control constraints. Learning problems in optimal control involve learning some or all of these aspects, with the problem of learning dynamics referred to as system identification [29], the problem of learning the cost function referred to as inverse optimal control (or inverse reinforcement learning or learning from demonstration) [34; 17; 21; 33; 37; 51], and the problem of learning the control policy referred to as reinforcement learning [4]. Recent work [20] has shown that solving these
Figure 1: **Left a): Our approach addresses learning problems in optimal control such as imitation learning. IDOC provides a method for computing the trajectory derivative or alternatively, vector-Jacobian products with respect to an outer-level loss, important for solving this problem using an end-to-end learning approach. Right b): IDOC yields superior trajectory derivatives for the imitation learning task when inequality constraints (rocket tilt and thrust limits) are present.**
learning problems can be approached in a unified end-to-end fashion by minimizing task-specific loss functions with respect to (unknown) parameters of the associated OC problem. This unified formulation places great importance on the efficient computation of derivatives of optimal trajectories with respect to said parameters. Methods reliant on computing trajectory derivatives have, however, been mostly avoided (and argued against) in the robotics and control literature due to concerns about computational tractability. For example, bi-level methods of inverse optimal control [34; 33] have mostly been argued against in favor of methods that avoid derivative calculations by instead seeking to satisfy optimality conditions derived from KKT [24; 12] or PMP conditions [17; 32; 23; 21].
Analytical Trajectory Derivatives.Our method is a direct alternative to methods such as DiffMPC [2], PDP [20] and its extension Safe-PDP [22], which differentiate through optimality conditions to derive trajectory derivatives. Common to these methods is identifying that trajectory derivatives can be computed by solving an auxiliary affine-quadratic OC problem, and furthermore, that this can be done efficiently using a matrix Riccati equation. While our approach still differentiates the optimality conditions, we avoid solving matrix Riccati equations and show that this enables easy computation of VJPs, parallelization across timesteps and improved numerical stability. Like Safe-PDP, IDOC computes derivatives through COC problems with smooth inequality constraints, however we show in our experiments that our derivatives are significantly more stable during training. Section 4.3 provides a detailed description of the differences between IDOC and existing methods.
Differentiable Optimization.Our work falls under the area of differentiable optimization, which aims to embed optimization problems within end-to-end learning frameworks. Gould et al. [14] proposed the deep declarative networks framework, and provide identities for differentiating through continuous, inequality-constrained optimization problems. Combinatorial problems [31; 44], as well as specialized algorithms for convex problems [1] have also been addressed. Differentiable optimization has been applied to end-to-end learning of state estimation [49; 42] and motion planning problems [6; 2; 26] in robotics, see Pineda et al. [35] for a recent survey. In addition, numerous machine learning and computer vision tasks such as pose estimation [9; 40], meta-learning [28], sorting [11; 7] and aligning time series [47] have been investigated under this setting.
## 3 Constrained Optimal Control Formulation
In this section, we provide an overview of the COC problems we consider in this paper. These problems involve minimizing a cost function with an additive structure, subject to constraints imposed by system dynamics and additional arbitrary constraints such as state and control limits. As a result, they can be formulated as a non-linear program (NLP). We will discuss how to differentiate through these COC problems in Section 4.
### Preliminaries
First, we introduce notation around differentiating vector-valued functions with respect to vector arguments, consistent with Gould et al. [14]. Let \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) be a vector-valued function with vector arguments and let \(\text{D}f\in\mathbb{R}^{n\times m}\) be the (matrix-valued) derivative where elements are given by
\[\left(\text{D}f(x)\right)_{ij}=\frac{\partial f_{i}}{\partial x_{j}}(x). \tag{1}\]
For a scalar-valued function with (multiple) vector-valued inputs \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) evaluated as \(f(x,y)\), let the second order derivatives \(\text{D}_{XY}^{2}=\text{D}_{X}(\text{D}_{Y}f)^{\top}\). See Gould et al. [14] for more details.
### Optimization Formulation for Constrained Optimal Control
We can formulate the (discrete-time) COC problem as finding a cost-minimizing trajectory subject to constraints imposed by (possibly non-linear) system dynamics. In addition, trajectories may be subject to further constraints such as state and control limits.
To begin, let the dynamics governing the COC system at time \(t\) be given by \(x_{t+1}=f_{t}(x_{t},u_{t};\theta)\), where \(x_{t}\in\mathbb{R}^{n}\), \(u_{t}\in\mathbb{R}^{m}\) denotes state and control variables, respectively2. For time horizon \(T>0\)let \(x\triangleq(x_{0},x_{1},\ldots,x_{T})\) and \(u\triangleq(u_{0},u_{1},\ldots,u_{T-1})\) denote the state and control trajectories, respectively. Furthermore, let \(\theta\in\mathbb{R}^{d}\) be the parameter vector that parameterizes our COC problem. Our COC problem is then equivalent to the following constrained optimization problem
\[\begin{array}{ll}\mbox{minimize}&J(x,u;\theta)\triangleq\sum_{t=0}^{T-1}c_{t }(x_{t},u_{t};\theta)+c_{T}(x_{T};\theta)\\ \mbox{subject to}&x_{0}=x_{\text{init}}\\ &x_{t+1}-f_{t}(x_{t},u_{t};\theta)=0\quad\forall t\in\{0,\ldots,T-1\}&\mbox{ (dynamics)}\\ &g_{t}(x_{t},u_{t};\theta)\leq 0\quad\forall t\in\{0,\ldots,T-1\},&\mbox{ (path inequ. constraints)}\\ &h_{t}(x_{t},u_{t};\theta)=0\quad\forall t\in\{0,\ldots,T-1\},&\mbox{ (path eq. constraints)}\\ &g_{T}(x_{T};\theta)\leq 0&\mbox{ (terminal inequ. constraints)}\\ &h_{T}(x_{T};\theta)=0,&\mbox{ (terminal eq. constraints)}\end{array} \tag{2}\]
where \(f_{t}:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{d}\rightarrow\mathbb{ R}^{n}\) describe the dynamics, and \(c_{t}:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{d}\rightarrow\mathbb{ R}\), \(c_{T}:\mathbb{R}^{n}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) are the instantaneous and terminal costs, respectively. Furthermore, the COC system may be subject to (vector-valued) inequality constraints \(g_{t}:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{d}\rightarrow\mathbb{ R}^{q_{t}}\) and \(g_{T}:\mathbb{R}^{n}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{q_{T}}\) such as control limits and state constraints, for example. Additional equality constraints \(h_{t}:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{d}\rightarrow\mathbb{ R}^{s_{t}}\) and \(h_{T}:\mathbb{R}^{n}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{s_{T}}\) can also be included.
The decision variables of Equation 2 are the state and control trajectory \(\xi\triangleq(x,u)\), whereas parameters \(\theta\) are assumed to be fixed. Equation 2 can be interpreted as an inequality-constrained optimization problem with an additive cost structure and vector-valued constraints for the initial state \(x_{0}\), dynamics, etc. over all timesteps. Let \(\xi(\theta)\triangleq(x^{\star},u^{\star})\) be an optimal solution to Equation 2. We can treat the optimal trajectory \(\xi(\theta)\) as an _implicit_ function of parameters \(\theta\), since Equation 2 can be solved to yield an optimal \(\xi(\theta)\) for any (valid) \(\theta\).
We can solve Equation 2 for the optimal trajectory \(\xi(\theta)\) using a number of techniques. We can use general purpose solvers [13; 45], as well as specialized solvers designed to exploit the structure of COC problems, e.g., ones based on differential dynamic programming [30; 19]. Regardless, for the purposes of computing analytical trajectory derivatives using our method (as well as methods that differentiate through optimality conditions [20; 22; 2]), we only need to ensure that our solver returns a vector \(\xi(\theta)\) which is a (local) minimizer to Equation 2. We now describe how to differentiate through these optimal control problems, important in the end-to-end learning context.
## 4 Trajectory Derivatives using Implicit Differentiation
In this section, we present our identities for computing trajectory derivatives \(\mathsf{D}\xi(\theta)\) based on those derived in Gould et al. [14] for general optimization problems by leveraging first-order optimality conditions and the implicit function theorem. We exploit the block structure of the matrices in these identities that arises in optimal control problems to enable efficient computation and furthermore, show that computing trajectory derivatives is linear in the trajectory length \(T\).
Before we present the main analytical result, recall the motivation for computing \(\mathsf{D}\xi(\theta)\). Suppose in the LfD context we have a demonstration trajectory \(\xi^{\text{demo}}\) (we can extend this to multiple trajectories, but choose not to for notational simplicity). Furthermore, define a loss \(\mathcal{L}(\xi(\theta),\xi^{\text{demo}})\) which measures the deviation from predicted trajectory \(\xi(\theta)\) to demonstration trajectory \(\xi^{\text{demo}}\). Ultimately, if we wish to minimize the loss \(\mathcal{L}\) with respect to parameters \(\theta\) using a first-order decent method, we need to compute \(\mathsf{D}_{\theta}\mathcal{L}(\xi(\theta),\xi^{\text{demo}})\) by applying the chain rule. Specifically, we compute \(\mathsf{D}_{\theta}\mathcal{L}(\xi)=\mathsf{D}_{\xi}\mathcal{L}(\xi)\mathsf{D }\xi(\theta)\), which requires trajectory derivative \(\mathsf{D}\xi(\theta)\).
### Preliminaries
First, we first reorder \(\xi\) such that \(\xi=(x_{0},u_{0},x_{1},u_{1},\ldots,x_{T})\in\mathbb{R}^{n_{\xi}\times 1}\), where \(n_{\xi}=(n+m)T+n\). This grouping of decision variable blocks w.r.t. timesteps is essential for showing linear time complexity of the computation of the derivative. Let \(\xi_{t}\in\mathbb{R}^{n+m}\) represent the subset of variables in \(\xi\) associated to time \(t\), with final state \(\xi_{T}=x_{T}\in\mathbb{R}^{n}\). Next, we stack all constraints defined in Equation 2 into a single vector-valued constraint \(r\) comprised of \(T+2\) blocks. Specifically, let \(r(\xi;\theta)\triangleq(r_{-1},r_{0},r_{1},\ldots,r_{T})\in\mathbb{R}^{n_{r}}\), where \(n_{r}=(T+1)n+s+q\). The first block is given by \(r_{-1}=x_{0}\), and subsequent blocks are given by
\[r_{t}=\begin{cases}(\tilde{g}_{t}(x_{t},u_{t};\theta),h_{t}(x_{t},u_{t};\theta),x_{ t+1}-f(x_{t},u_{t};\theta))&\text{for }0\leq t\leq T-1\\ (\tilde{g}_{T}(x_{T};\theta),h_{T}(x_{T};\theta))&\text{for }t=T.\end{cases} \tag{3}\]
Here, \(\tilde{g}_{t}\) are the subset of _active_ inequality constraints for \(\xi\) (detected numerically using a threshold \(\epsilon\)) at timestep \(t\). Note, \(s=\sum_{t=0}^{T}s_{t}\), \(q=\sum_{t=0}^{T}|\tilde{g}_{t}|\) are the total number of additional equality and active inequality constraints (on top of the \(Tn\) dynamics and \(n\) initial state constraints). Each block represents a group of constraints associated with a particular timestep (except the first block \(r_{-1}\)). As we will see shortly, grouping decision variables and constraints in this way will admit a favorable block-sparse matrix structure for cost/constraint Jacobians and Hessians required to compute \(\mathsf{D}\xi(\theta)\).
### Analytical Results for Trajectory Derivatives
**Proposition 1** (Idooc).: _Consider the optimization problem defined in Equation 2. Suppose \(\xi(\theta)\) exists which minimizes Equation 2. Furthermore, assume \(f_{t}\), \(c_{t}\), \(g_{t}\) and \(h_{t}\) are twice differentiable for all \(t\) in a neighborhood of \((\theta,\xi)\). If \(\text{rank}(A)=n_{r}\) and furthermore, \(H\) is non-singular, then_
\[D\xi(\theta)=H^{-1}A^{\top}(AH^{-1}A^{\top})^{-1}(AH^{-1}B-C)-H^{-1}B, \tag{4}\]
_where_
\[A =D_{\xi}r(\xi;\theta)\in\mathbb{R}^{n_{r}\times n_{\xi}}\] \[B =D_{\theta\xi}^{2}J(\xi;\theta)-\sum_{t=-1}^{T}\sum_{i=1}^{|r_{t} |}\lambda_{t,i}D_{\theta\xi}^{2}r_{t}(\xi;\theta)_{i}\in\mathbb{R}^{n_{\xi} \times d}\] \[C =D_{\theta}r(\xi;\theta)\in\mathbb{R}^{n_{r}\times d}\] \[H =D_{\xi\xi}^{2}J(\xi;\theta)-\sum_{t=-1}^{T}\sum_{i=1}^{|r_{t}|} \lambda_{t,i}D_{\xi\xi}^{2}r_{t}(\xi;\theta)_{i}\in\mathbb{R}^{n_{\xi}\times n _{\xi}},\]
_and Lagrange multipliers \(\lambda\in\mathbb{R}^{n_{r}}\) satisfies \(\lambda^{\top}A=D_{\xi}J(\xi;\theta)\)._
Proof.: This is a direct application of Proposition 4.5 in Gould et al. [14].
**Remark 1**.: _Matrix \(H\) is block diagonal with \(T+1\) blocks._
To see this, \(\hat{H}=\mathrm{D}_{\xi\xi}^{2}J(\xi;\theta)\) has a block diagonal structure with blocks given by \(\mathrm{D}_{\xi_{t}\xi_{t}}^{2}c_{t}(\xi_{t};\theta)\), which follows from the assumption of additive costs in the COC problem described in Equation 2. In addition, constraints \(g_{t}\), \(h_{t}\) depend only on \(\xi_{t}\) and furthermore, the dynamics constraint is first order in \(\xi_{t+1}\). Therefore, \(\mathrm{D}_{\xi\xi}^{2}r_{t}(\xi;\theta)_{i}\) is only non-zero in the block relating to \(\xi_{t}\) for all \(t\) and \(i\). We plot the block-sparsity structure of \(H\) in Figure 1(a).
**Remark 2**.: _Matrix \(A\) is a block-banded matrix that is two blocks wide._
This is shown by noting that the only non-zero blocks in \(A\) correspond to \(\mathrm{D}_{\xi_{t}}r_{t}(\xi;\theta)\) and \(\mathrm{D}_{\xi_{t+1}}r_{t}(\xi;\theta)\). The former relates to \(g_{t}\), \(h_{t}\) and the \(-f_{t}(\xi_{t};\theta)\) component of the dynamics constraint. The latter only relates to the \(x_{t+1}\) component of the dynamics constraint. Finally, the initial condition block \(r_{-1}\) only depends on \(\xi_{0}\), hence the only non-zero block is given by \(\mathrm{D}_{\xi_{0}}r_{-1}(\xi_{0};\theta)\). We plot the block-sparsity structure of \(A\) in Figure 1(b).
**Proposition 2**.: _Evaluating Equation 4 has \(O(T)\) time complexity._
Proof.: From Remarks 1 and 2, \(H\) and \(A\) have a block diagonal and block-banded structure with \(T+1\) and \(2T+2\) blocks, respectively. Therefore, we can evaluate \(H^{-1}A^{\top}\) in \(O(T)\) time and furthermore, \(H^{-1}A^{\top}\) has the same structure as \(A\). It follows that we can also evaluate \(AH^{-1}B-C\) in \(O(T)\) time after partitioning \(B\) and \(C\) into blocks based on groupings of \(\xi\) and \(r\).
Next, observe that \(AH^{-1}A^{\top}\) yields a block tridiagonal matrix with \(T+2\) blocks on the main diagonal (see Figure 1(c) for a visualization of the block-sparse structure). As a result, we can evaluate \((AH^{-1}A^{\top})^{-1}(AH^{-1}B)\) by solving the linear system \((AH^{-1}A^{\top})Y=AH^{-1}B\) for \(Y\) in \(O(T)\)time using any linear time (w.r.t. number of diagonal blocks) block tridiagonal linear solver. A simple example is the block tridiagonal matrix algorithm, easily derived from block Gaussian elimination.
Finally, \(H^{-1}B\) can easily be evaluated in \(O(T)\) time by solving \(T+1\) linear systems comprised of the blocks of \(H\) and \(B\). We conclude that evaluating Equation 4 requires \(O(T)\) operations.
### Comparison to DiffMPC and PDP
All existing methods for computing analytical trajectory derivatives involve solving the linear system
\[\underbrace{\begin{bmatrix}H&A^{\top}\\ A&0\end{bmatrix}}_{K}\begin{bmatrix}\text{D}\xi\\ -\text{D}\lambda\end{bmatrix}=\begin{bmatrix}-B\\ -C\end{bmatrix}, \tag{5}\]
where blocks \(H,A,B,C\) are defined in Section 4.2 and D\(\xi\) is the desired trajectory derivative. We call Equation 5 the (differential) _KKT system_ and the matrix \(K\) the (differential) _KKT matrix_. Note, D\(\xi\) is unique if \(K\) is non-singular3. To better contextualize IDOC, we now briefly compare and contrast how previous methods such as DiffMPC [2] and PDP [20, 22] solve this KKT system.
Footnote 3: See Section 10.1 in [8] for further discussion on conditions for non-singular \(K\)
DiffMPC.DiffMPC [2] proposes a method for differentiating through Linear Quadratic Regulator (LQR) problems, which are OC problems where the cost function is (convex) quadratic and the dynamics are affine. The authors show that solving the LQR problem using matrix Riccati equations can be viewed as an efficient method for solving a KKT system which encodes the optimality conditions. In addition, they show that computing trajectory derivatives involves solving a similar KKT system, motivating efficient computation by solving an auxiliary LQR problem in the backward pass. The matrix equation interpretation of the backward pass allows the efficient computation of VJPs for some downstream loss \(\mathcal{L}(\xi)\) in a bi-level optimization context.
DiffMPC extends to handling non-convex OC problems with box constraints on control inputs via an iterative LQR-based approach. Specificially, such problems are handled by first computing VJPs w.r.t. the parameters of an LQR approximation to the non-convex problem around the optimal trajectory \(\xi,\lambda\). Next, the parameters of the approximation are differentiated w.r.t. the underlying parameters \(\theta\) and combined using the chain rule. However, differentiating the quadratic cost approximation and dynamics requires evaluating costly higher-order derivatives, e.g., \(\frac{d}{d\theta}\frac{d^{2}c(\xi)}{d\xi^{2}}\), which are 3D tensors. These tensors are dense in general, although problem specific sparsity structures may exist.
PDP/Safe-PDP.PDP [20] and its extension, Safe-PDP [22], take a similar approach to DiffMPC in deriving trajectory derivatives. PDP derives trajectory derivatives by starting with PMP, which is well-known in the control community and applies to non-convex OC problems. Furthermore, Jin et al. [20] shows that the PMP and KKT conditions are equivalent in the discrete-time COC setting. Trajectory derivatives are obtained by differentiating the PMP conditions, yielding a new set of PMP conditions for an auxiliary LQR problem, which can be solved using matrix Riccati equations. This
Figure 2: Block-sparse structure for matrices \(H\), \(A\), and \(AH^{-1}A^{\top}\) assuming \(T=3\). For \(A\), we combined inequality and equality constraint groups \(g_{t}\) and \(h_{t}\) into a single group for simplicity. Shaded regions indicate (possible) non-zero blocks.
approach is fundamentally identical to DiffMPC, with only superficial differences for LQR problems (solving a differential KKT system versus differentiating PMP conditions)4.
Footnote 4: Jin et al., [20] claim that the backward pass for DiffMPC is \(O(T^{2})\) due to direct inversion of the KKT matrix. However, DiffMPC actually solves an auxiliary LQR problem for \(O(T)\) complexity.
For non-convex problems, PDP does not evaluate higher-order derivatives, unlike DiffMPC. However, it is not clear how to compute VJPs under approaches derived using the PDP framework. Safe-PDP extended PDP to handle arbitrary, smooth inequality constraints using the constrained PMP conditions, which extends the capability of DiffMPC to handle box constraints. Trajectory derivatives are computed by solving an auxiliary equality constrained LQR problem. Experiments in both Jin et al. [20] and Section 6 show that Safe-PDP yields unreliable derivatives for COC problems. While Jin et al. [20] conjected that instability was due to the set of active constraints changing between iterations, we find that IDOC still learns reliably in the presence of constraint switching. We instead observed that the poor gradient quality arises naturally from the equality constrained LQR solver used for the backward pass [25] not matching the solution obtained by directly solving the KKT system.
Idoc.In contrast, we solve Equation 5 for \(\text{D}\xi\) by first applying variable elimination on \(\text{D}\lambda\), yielding Proposition 1. We will discuss in Section 5 how the equations resulting from this approach admit parallelization and favorable numerical stability compared to DiffMPC and PDP's auxiliary LQR approach. Furthermore, VJPs can be easily computed (unlike PDP) without evaluating higher-order derivatives (unlike DiffMPC). IDOC handles arbitrary smooth constraints like Safe-PDP, and we show in Section 6 that our derivatives for COC problems are reliable during training. However, we require the additional assumption that \(H\) is non-singular (which holds if and only if all blocks in \(H\) are non-singular), which is not required for \(K\) to be non-singular. An oftentimes effective solution when \(H\) is singular, initially proposed by Russell et al. [20], is to set \(H=H+\frac{\delta}{2}I\) for small \(\delta\), which is analogous to adding a proximal term to the cost function in Equation 2. We will now describe how to evaluate Equation 4 in linear time by exploiting the block-sparse structure of the matrix equation.
## 5 Algorithmic Implications of IDOC
### Parallelization and Numerical Stability
Parallelization.To evaluate Equation 4, we can leverage the block diagonal structure of \(H\) and block-banded structure of \(A\) to compute \(H^{-1}A^{\top}\) and \(H^{-1}B\) in parallel across all timesteps. Specifically, we evaluate the matrix product block-wise, e.g., \(H_{t}^{-1}B_{t}\) for all \(t\) in parallel (similarly with \(A\)). Following this, we can then compute \(A(H^{-1}A^{\top})\) in parallel also using the same argument.
In addition, we can solve the block tridiagonal linear systems involving \(AH^{-1}A^{\top}\) by using a specialized parallel solver [5; 36; 27; 41; 15; 18]. Unfortunately, none of these methods have open source code available, and so in our experiments, we implement the simple block tridiagonal matrix algorithm. Implementing a robust parallel solver is an important direction for future work, and will further improve the scalability and numerical stability of IDOC.
Numerical Stability.In addition to parallelization, another benefit of avoiding Riccati-style recursions for computing derivatives is improved numerical stability. For long trajectories and/or poorly conditioned COC problems (e.g., stiff dynamics), IDOC reduces the rounding errors that accumulate in recursive approaches. We show superior numerical stability in our experiments compared to PDP in Section 6, despite using the simple recursive block tridiagonal matrix algorithm for solving \(AH^{-1}A^{\top}\). Replacing this recursion with a more sophisticated block tridiagonal solver should further improve the stability of the backwards pass and is left as future work.
### Vector Jacobian Products
Another benefit of explicitly writing out the matrix equations for \(\text{D}\xi(\theta)\) given in Equation 4 is that we can now directly compute VJPs given some outer loss \(\mathcal{L}(\xi)\in\mathbb{R}\) over the optimal trajectory. Let \(v\triangleq\text{D}_{\xi}\mathcal{L}(\xi)^{\top}\in\mathbb{R}^{n_{t}\times 1}\) be the gradient of the loss w.r.t. trajectory \(\xi(\theta)\). The desired gradient \(\text{D}_{\theta}\mathcal{L}(\xi(\theta))\) is then given by \(\text{D}_{\theta}\mathcal{L}(\xi)=v^{\top}\text{D}\xi(\theta)\) using the chain rule. The resultant expression is
\[\text{D}_{\theta}\mathcal{L}(\xi(\theta))=v^{\top}(H^{-1}A^{\top}(AH^{-1}A^{ \top})^{-1}(AH^{-1}B-C)-H^{-1}B). \tag{6}\]The simple observation here is that we do not need to construct \(D_{\theta}\xi^{*}(\theta)\) explicitly, and can instead evaluate the VJP directly. We propose evaluating Equation 6 from left to right and block-wise, which will reduce computation time compared to explicitly constructing \(\text{D}\xi(\theta)\) and then multiplying with \(v\).
To see this, we follow the example in Gould et al. [14] and assume the blocks in \(H\) have been factored. Then for a single block (ignoring constraints for simplicity), evaluating \(v^{\top}(H_{t}^{-1}B_{t})\) is \(O((n+m)^{2}p)\) while evaluating \((v^{\top}H_{t}^{-1})B_{t}\) is \(O((n+m)^{2}+(n+m)p)\). Evaluating the VJP directly significantly reduces computation time compared to constructing the full trajectory derivative for problems with higher numbers of state/control variables and tunable parameters.
## 6 Experiments
While our contributions are largely analytical, we have implemented the identities in Equation 4 to verify our claims around numerical stability and computational efficiency on a number of simulated COC environments. We will show that IDOC is able to compute trajectory derivatives significantly faster compared to its direct alternative PDP [20] and Safe-PDP [22] with superior numerical stability.
### Experimental Setup
We evaluate IDOC against PDP [20] and Safe-PDP [22] in an LfD setting across four simulation environments proposed in Jin et al. [22], as well as a synthetic experiment. The simulation environments showcase the gradient quality for a realistic learning task, whereas the synthetic experiment is designed to measure numerical stability and computation times. For Safe-PDP, we evaluate against the method proposed for inequality constrained problems (Safe-PDP), as well as using an approximate log-barrier problem in place of the full problem (Safe-PDP (b)). The IPOPT solver [45] is used in the forward pass to solve the COC problem, and Lagrange multipliers \(\lambda\) are extracted from the solver output. More generally however, note that the method proposed in Gould et al. [14] can be used to recover \(\lambda\) if another solver is used where \(\lambda\) is not provided.
Imitation Learning/LfD.The LfD setting involves recovering the model parameters \(\theta\) that minimizes the mean-squared imitation error to a set of \(N\) demonstration trajectories, given by
Figure 3: Learning curves for the imitation learning task over five trials. Bold lines represent mean loss, lighter lines represent individual trials. IDOC yields more stable gradients and lower final imitation loss across all environments and trials compared to both Safe-PDP (S-PDP) and Safe-PDP with log-barrier functions (S-PDP (b)).
\(\mathcal{L}(\xi(\theta),\xi^{\text{demo}})\triangleq\frac{1}{N}\sum_{i}\|\xi( \theta)_{i}-\xi_{i}^{\text{demo}}\|^{2}\). We include all parameters in the cost, dynamics and constraint functions in \(\theta\) and furthermore, evaluate performance both with (S-PDP) and without (PDP) inequality constraints. We perform experiments in four standard simulated environments: cartpole, 6-DoF quadrotor5, 2-link robot arm and 6-DoF rocket landing. For a detailed description of the imitation learning problem as well as each COC task, see the appendix. In addition, we provide timings for all methods within the simulation environments in the appendix.
Footnote 5: Interestingly, \(H\) is singular in this case, relating to block \(H_{T}\), so we use the trick proposed in Section 4.3 with \(\delta=10^{-6}\) to compute Equation 4. More details around this are provided in the appendix.
Synthetic Benchmark.The simulation experiments described above are not sufficiently large-scale to adequately measure the benefits of parallelization and numerical stability afforded by IDOC over PDP. To demonstrate these benefits, we constructed a large-scale, synthetic experiment where the blocks required to construct \(H,A,B\) and \(C\) (as discussed in Section 4) are generated randomly. Computation time against the horizon length \(T\), as well as the number of parameters \(d\) is reported. Numerical stability is measured by comparing mean-absolute error between trajectory derivatives evaluated under 32-bit and 64-bit precision for varying condition numbers over the blocks of \(H\). State and control dimensions are fixed at \(n=50\) and \(m=10\). Experiments are run on an AMD Ryzen 9 7900X 12-core processor, 128Gb RAM and Ubuntu 20.04. See the appendix for more details.
### Imitation Learning/LfD Results
In this section, we evaluate the effectiveness of gradients produced from IDOC against PDP and Safe-PDP [20; 22] for the imitation learning/LfD task. In the equality constrained setting, IDOC and PDP yield identical (up to machine precision) trajectory derivatives and imitation loss throughout learning; see the appendix for more detailed results and analysis. However, as shown in Figure 3, when inequality constraints are introduced, the derivatives produced by Safe-PDP and IDOC differ significantly. Safe-PDP fails to reduce the imitation loss due to unreliable gradients, whereas IDOC successively decreases the imitation loss. While Safe-PDP (b), which differentiates through a log-barrier problem, also provides stable learning, it ultimately yields higher imitation loss compared to IDOC. This is because the barrier problem is an approximation to the true COC problem. Further discussion around using log-barrier methods and COC problems is provided in Section 7.1.
### Synthetic Benchmark Results
Figure 4 illustrates the results of the synthetic benchmark. We observed that IDOC and PDP have similar computation time and scalability with problem size when computing full trajectory derivatives. However, computing VJPs with IDOC significantly reduces computation time and improves scalability
Figure 4: Synthetic experiments measuring a) scalability with horizon length \(T\), with fixed parameter size \(|\theta|=10\)k, b) scalability with number of parameters \(d\) with fixed horizon length \(T=1\)k, c) numerical stability with varying block condition number \(\kappa(H_{t})\). Error bars measure standard error across 5 and 25 samples for computation time (a, b) and numerical stability (c), respectively.
Figure 5: Control Limit Learning.
by an order of magnitude with model size (i.e., \(d=|\theta|\)). Numerical stability experiments show that IDOC (full and VJP) yields lower round-off errors and improved stability compared to PDP. Using more sophisticated block tridiagonal solvers will further improve stability.
## 7 Discussion and Future Work
### Differentiating through Log-Barrier Methods
In the imitation learning setting, we assume that demonstration data \(\xi^{\text{demo}}\) is an optimal solution to a COC problem with (unknown) parameters \(\theta^{\star}\). Furthermore, we assume that a subset of timesteps \(\mathcal{A}\subseteq\{1,\dots,T\}\) yield active constraints. Under the log-barrier formulation, the constraint boundaries must be relaxed beyond their true values during learning to minimize the imitation loss. Concretely, under \(\theta^{\star}\), \(\log g_{t}(x_{t}^{\text{demo}},u_{t}^{\text{demo}};\theta^{\star})\) are undefined for \(t\in\mathcal{A}\). Therefore to recover \(\xi(\theta)=\xi^{\text{demo}}\), we must have \(g_{t}(x_{t}(\theta),u_{t}(\theta))<0\) for \(t\in\mathcal{A}\), i.e., we cannot recover the true constraint function through minimizing the imitation loss. We verify this using the robot arm environment, and present the results in Figure 5, plotting the error between the estimated and true constraint value. We observe that IDOC recovers the true constraint value more closely compared to Safe-PDP (b).
Generality of IFT.The generality of the differentiable optimization framework, and the matrix equation formulation for trajectory derivatives given in Equation 4 yield additional conceptual benefits which may help us tackle even broader classes of COC problems. For example, we can relax the additive cost assumption and add a final cost defined over the full trajectory such as
\[h_{T}(\xi;\theta)=\xi^{\top}Q\xi, \tag{7}\]
where \(Q=CC^{\top}\) and \(C\in\mathbb{R}^{n_{\xi}\times k}\) for \(k\ll n_{\xi}\) is full rank. For IDOC, we can simply use the matrix inversion lemma to invert \(H\) in \(O(T)\). However, this is more complicated for methods derived from the PMP which rely on the specific additive cost structure of the underlying COC problem.
Dynamic Games.A generalization of our work is to apply the differentiable optimization framework to handle learning problems that arise in dynamic games [3] and have recently begun to employ PDP-based approaches [10]. Being able to differentiate through solution and equilibrium concepts that arise in dynamic games enables the solution of problems ranging from minimax robust control [3] to inverse dynamic games [10; 33], and is a promising direction for future work.
Fast forward passes.While we have proposed a more efficient way of computing trajectory derivatives (i.e., the backward pass), we observe that the forward pass to solve the COC problem bottlenecks the learning process. Significant effort must be placed on developing fast solvers for COC problems for hardware accelerators to allow learning to scale to larger problem sizes.
Combining MPC and Deep Learning.A recent application for computing trajectory derivatives is combining ideas from deep learning (including reinforcement learning) with MPC [38; 50; 46; 48]. Common to these approaches is using a learned model to select the parameters for the MPC controller. By allowing for robust differentiation through a broader class COC problems compared to previous approaches, we hope that IDOC will allow for future development in this avenue of research.
## 8 Conclusion
In this paper, we present IDOC, a novel approach to differentiating through COC problems. Trajectory derivatives are evaluated by differentiating KKT conditions and using the implicit function theorem. Contrary to prior works, we do not solve an auxiliary LQR problem to efficiently solve the (differential) KKT system for the trajectory derivative. Instead, we apply variable elimination on the KKT system and solve the resultant matrix equations directly. We show that linear time evaluation is possible by appropriately considering equation structure. In fact, we show that IDOC is faster and more numerically stable in practice compared to methods derived from the PMP. We hope that our discussion connecting the fields of inverse optimal control and differentiable optimization will lead to future work which enables differentiability of a broader class of COC problems.
## References
* Agrawal et al. [2019] A. Agrawal, B. Amos, S. Barratt, S. Boyd, S. Diamond, and Z. Kolter. Differentiable convex optimization layers. In _Advances in Neural Information Processing Systems_, 2019.
* Amos et al. [2018] B. Amos, I. Jimenez, J. Sacks, B. Boots, and J. Z. Kolter. Differentiable MPC for End-to-end Planning and Control. _Advances in Neural Information Processing Systems_, 2018.
* Basar and Olsder [1982] T. Basar and G. J. Olsder. _Dynamic noncooperative game theory_. Academic Press, London/New York, 1982. ISBN 978-0-08-095666-4.
* Bertsekas [2019] D. Bertsekas. _Reinforcement learning and optimal control_. Athena Scientific, 2019.
* Bevilacqua et al. [1988] R. Bevilacqua, B. Codenotti, and F. Romani. Parallel solution of block tridiagonal linear systems. _Linear Algebra and its Applications_, 104:39-57, 1988.
* Bhardwaj et al. [2020] M. Bhardwaj, B. Boots, and M. Mukadam. Differentiable Gaussian process motion planning. _Proc. of the IEEE International Conference on Robotics and Automation (ICRA)_, 2020.
* Blondel et al. [2020] M. Blondel, O. Teboul, Q. Berthet, and J. Djolonga. Fast differentiable sorting and ranking. In _Proc. of the International Conference on Machine Learning (ICML)_, 2020.
* Boyd and Vandenberghe [2004] S. Boyd and L. Vandenberghe. _Convex Optimization_. Cambridge University Press, 2004. doi: 10.1017/CBO9780511804441.
* Campbell et al. [2020] D. Campbell, L. Liu, and S. Gould. Solving the Blind Perspective-n-Point Problem End-To-End With Robust Differentiable Geometric Optimization. In _Proc. of the European Conference on Computer Vision (ECCV)_, 2020.
* Cao and Xie [2022] K. Cao and L. Xie. Game-Theoretic Inverse Reinforcement Learning: A Differential Pontryagin's Maximum Principle Approach. _IEEE Transactions on Neural Networks and Learning Systems_, pages 1-8, 2022. doi: 10.1109/TNNLS.2022.3148376.
* Cherian et al. [2017] A. Cherian, B. Fernando, M. Harandi, and S. Gould. Generalized rank pooling for activity recognition. In _Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2017.
* Englert et al. [2017] P. Englert, N. A. Vien, and M. Toussaint. Inverse KKT: Learning cost functions of manipulation tasks from demonstrations. _The International Journal of Robotics Research_, 36(13-14):1474-1488, 2017. doi: 10.1177/0278364917745980.
* Gill et al. [2005] P. E. Gill, W. Murray, and M. A. Saunders. SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization. _SIAM Review_, page 99-131, 2005.
* Gould et al. [2022] S. Gould, R. Hartley, and D. Campbell. Deep Declarative Networks. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44:3988-4004, Aug 2022. doi: 10.1109/TPAMI.2021.3059462.
* Hajj and Skelboe [1990] I. N. Hajj and S. Skelboe. A multilevel parallel solver for block tridiagonal and banded linear systems. _Parallel Computing_, 15:21-45, 1990.
* Harris et al. [2020] C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. del Rio, M. Wiebe, P. Peterson, P. Gerard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant. Array programming with NumPy. _Nature_, 585:357-362, 2020.
* Hatz et al. [2012] K. Hatz, J. P. Schloder, and H. G. Bock. Estimating parameters in optimal control problems. _SIAM Journal on Scientific Computing_, 34(3):A1707-A1728, 2012. doi: 10.1137/110823390.
* Hirshman et al. [2010] S. Hirshman, K. Perumalla, V. Lynch, and R. Sanchez. Bcyclic: A parallel block tridiagonal matrix cyclic solver. _Journal of Computational Physics_, 229:6392-6404, 2010.
* Howell et al. [2019] T. A. Howell, B. E. Jackson, and Z. Manchester. ALTRO: A Fast Solver for Constrained Trajectory Optimization. In _Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 7674-7679, 2019.
* [20] W. Jin, Z. Wang, Z. Yang, and S. Mou. Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework. _Advances in Neural Information Processing Systems_, 33:7979-7992, 2020.
* [21] W. Jin, D. Kulic, S. Mou, and S. Hirche. Inverse optimal control from incomplete trajectory observations. _The International Journal of Robotics Research_, 40(6-7):848-865, 2021. doi: 10.1177/0278364921996384.
* [22] W. Jin, S. Mou, and G. J. Pappas. Safe Pontryagin Differentiable Programming. _Advances in Neural Information Processing Systems_, 34:16034-16050, 2021.
* [23] M. Johnson, N. Aghasadeghi, and T. Bretl. Inverse optimal control for deterministic continuous-time nonlinear systems. In _Decision and Control (CDC), 2013 IEEE 52nd Annual Conference on_, pages 2906-2913, Dec 2013.
* [24] A. Keshavarz, Y. Wang, and S. Boyd. Imputing a convex objective function. In _Intelligent Control (ISIC), 2011 IEEE International Symposium on_, pages 613-619. IEEE, 2011.
* [25] F. Laine and C. Tomlin. Efficient computation of feedback control for equality-constrained lqr. In _Proc. of the IEEE International Conference on Robotics and Automation (ICRA)_, 2019. doi: 10.1109/ICRA.2019.8793566.
* [26] B. Landry, Z. Manchester, and M. Pavone. A differentiable augmented lagrangian method for bilevel nonlinear optimization. In _Robotics: Science and Systems_, 2019.
* [27] E. Laszlo, M. Giles, and J. Appleyard. Manycore algorithms for batch scalar and block tridiagonal solvers. _ACM Transactions on Mathematical Software_, 42(4), 2016.
* [28] K. Lee, S. Maji, A. Ravichandran, and S. Soatto. Meta-learning with differentiable convex optimization. In _Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2019.
* [29] L. Ljung. _System identification: theory for the user_. Prentice Hall PTR, Upper Saddle River, NJ, 1999.
* [30] C. Mastalli, R. Budhiraja, W. Merkt, G. Saurel, B. Hammoud, M. Naveau, J. Carpentier, L. Righetti, S. Vijayakumar, and N. Mansard. Croccodyl: An Efficient and Versatile Framework for Multi-Contact Optimal Control. In _Proc. of the IEEE International Conference on Robotics and Automation (ICRA)_, 2020.
* [31] A. Mensch and M. Blondel. Differentiable dynamic programming for structured prediction and attention. In _Proc. of the International Conference on Machine Learning (ICML)_, 2018.
* 446, 2018. doi: 10.1016/j.automatica.2017.09.023.
* [33] T. L. Molloy, J. Inga Charaja, S. Hohmann, and T. Perez. _Inverse Optimal Control and Inverse Noncooperative Dynamic Game Theory: A Minimum-Principle Approach_. Springer International Publishing, 2022.
* [34] K. Mombaur, A. Truong, and J.-P. Laumond. From human to humanoid locomotion--an inverse optimal control approach. _Autonomous robots_, 28(3):369-383, 2010.
* [35] L. Pineda, T. Fan, M. Monge, S. Venkataraman, P. Sodhi, R. T. Chen, J. Ortiz, D. DeTone, A. Wang, S. Anderson, J. Dong, B. Amos, and M. Mukadam. Theseus: A Library for Differentiable Nonlinear Optimization. _Advances in Neural Information Processing Systems_, 2022.
* [36] E. Polizzi and A. H. Sameh. A parallel hybrid banded system solver: the spike algorithm. _Parallel Computing_, 32(2):177-194, 2006.
* [37] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In _Proceedings of the 23rd international conference on Machine learning_, pages 729-736. ACM, 2006.
* [38] A. Romero, Y. Song, and D. Scaramuzza. Actor-critic model predictive control, 2023.
* Russell et al. [2019] C. Russell, M. Toso, and N. Campbell. Fixing implicit derivatives: Trust-region based learning of continuous energy functions. In _Advances in Neural Information Processing Systems_, volume 32, 2019.
* Sarlin et al. [2020] P.-E. Sarlin, D. DeTone, T. Malisiewicz, and A. Rabinovich. Superglue: Learning feature matching with graph neural networks. In _Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2020.
* Seal et al. [2013] S. K. Seal, K. S. Perumalla, and S. P. Hirshman. Revisiting parallel cyclic reduction and parallel prefix-based algorithms for block tridiagonal system of equations. _Journal of Parallel and Distributed Computing_, 73, 2013.
* Sodhi et al. [2021] P. Sodhi, E. Dexheimer, M. Mukadam, S. Anderson, and M. Kaess. LEO: Learning energy-based models in factor graph optimization. In _Proc. of the Conference on Robot Learning (CoRL)_, 2021.
* Tedrake [2023] R. Tedrake. _Underactuated Robotics_. 2023. URL [https://underactuated.csail.mit.edu](https://underactuated.csail.mit.edu).
* Vlastelica et al. [2020] M. Vlastelica, A. Paulus, V. Musil, G. Martius, and M. Rolinek. Differentiation of blackbox combinatorial solvers. In _Proc. of the International Conference on Learning Representations (ICLR)_, 2020. *Equal Contribution.
* Wachter and Biegler [2006] A. Wachter and L. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. _Mathematical Programming_, 106:25-57, 2006.
* Xiao et al. [2023] W. Xiao, T.-H. Wang, R. Hasani, M. Chahine, A. Amini, X. Li, and D. Rus. Barriernet: Differentiable control barrier functions for learning of safe robot control. _IEEE Transactions on Robotics_, 39:2289-2307, 2023.
* Xu et al. [2023] M. Xu, S. Garg, M. Milford, and S. Gould. Deep declarative dynamic time warping for end-to-end learning of alignment paths. In _Proc. of the International Conference on Learning Representations (ICLR)_, 2023.
* Yang et al. [2023] F. Yang, C. Wang, C. Cadena, and M. Hutter. iPlanner: Imperative Path Planning. In _Proceedings of Robotics: Science and Systems_, Daegu, Republic of Korea, July 2023. doi: 10.15607/RSS.2023.XIX.064.
* Yi et al. [2021] B. Yi, M. Lee, A. Kloss, R. Martin-Martin, and J. Bohg. Differentiable factor graph optimization for learning smoothers. In _Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, 2021.
* Zanon and Gros [2021] M. Zanon and S. Gros. Safe Reinforcement Learning Using Robust MPC. _IEEE Transactions on Automatic Control_, 66:3638-3652, 2021.
* Ziebart et al. [2008] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In _AAAI Conference on Artificial Intelligence_, pages 1433-1438, 2008.
Experimental Setup for Imitation Learning/LfD Task
We use the code and simulation environments provided by [22] and follow a similar experimental setup, described in more detail in this section. The LfD experimental setting without inequality constraints is referred to in [20] as the inverse reinforcement learning (IRL) task, whereas including inequality constraints is referred to in [22] as the constrained inverse optimal control (CIOC) task.
Demonstration Trajectories.Up to five demonstration trajectories are used in the LfD setting with no inequality constraints. Each demonstration trajectory is generated by solving a COC problem using the same underlying parameters \(\theta\), however the initial conditions differ across trajectories and are sampled randomly. For the setting with inequality constraints present, we use only one demonstration trajectory.
Initialization.Consistent with [20, 22], we run five imitation learning trials for all LfD experiments, where for each trial, \(\theta\) is initialized by adding uniform noise to the true value. For all methods, we use the gradient descent with the same learning rate for a given environment.
Environment Specifications.Table 1 provides summary specifications of the simulation environments used in our experiments, which are consistent with prior works [20, 22]. Recall that \(n\), \(m\) denote the number of states and controls, respectively. The number of parameters is denoted \(d=|\theta|\) and \(T\) is the horizon length.
Additional Environment Parameters.Additional specifications are presented in Table 2. These include log-barrier parameter \(\gamma\) used for Safe-PDP (b), time discretization for Forward Euler to discretize continuous time dynamics, the learning rate for minimizing the imitation loss and finally the number of demonstration trajectories per environment. (E) refers to experiments without constraints, whereas (I) means inequality constraints are present.
### Cartpole
The cartpole environment relates to the swing-up and balance task, where a simple pendulum (i.e., massless pole and point mass on the end of the pole) is swinging on a cart and the objective is to balance the pendulum above the cart. Only a horizontal force can be applied to drive the cart.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Environment & \(n\) & \(m\) & \(d\) & \(T_{\text{IRL}}\) & \(T_{\text{CIOC}}\) \\ \hline Cartpole & 4 & 1 & 9 & 30 & 35 \\ Quadrotor & 13 & 4 & 11 & 50 & 25 \\ Robotarm & 4 & 2 & 10 & 35 & 25 \\ Rocket & 13 & 3 & 12 & 40 & 40 \\ \hline \end{tabular}
\end{table}
Table 1: Description of environments
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Environment & \(\gamma\) & \(\Delta\) & lr & \(n_{\text{demos}}\) \\ \hline Cartpole (E) & - & 0.1 & \(10^{-4}\) & 5 \\ Quadrotor (E) & - & 0.1 & \(10^{-4}\) & 2 \\ Robotarm (E) & - & 0.1 & \(10^{-4}\) & 4 \\ Rocket (E) & - & 0.1 & \(3\times 10^{-4}\) & 1 \\ \hline Cartpole (I) & 0.01 & 0.1 & \(8\times 10^{-5}\) & 1 \\ Quadrotor (I) & 0.01 & 0.15 & \(2\times 10^{-4}\) & 1 \\ Robotarm (I) & 0.01 & 0.2 & \(2\times 10^{-3}\) & 1 \\ Rocket (I) & 1 & 0.1 & \(10^{-5}\) & 1 \\ \hline \end{tabular}
\end{table}
Table 2: Additional hyperparameters for LfD experimentsDynamics.The state variable \(x=[y,q,\dot{y},\dot{q}]^{\top}\in\mathbb{R}^{4}\) is comprised of the horizontal position of the cart \(y\), counter-clockwise angle of the pendulum \(\theta\) (from the hanging position) and their respective velocities. Control \(u\in\mathbb{R}\) relates to the horizontal force applied to the cart. Dynamics parameters \(\theta_{\text{dyn}}=[m_{c},m_{i}p,\ell]^{\top}\in\mathbb{R}^{3}\) relate to the the mass of the cart \(m_{c}\), mass of the point mass at the end of the pole \(m_{p}\) and length of the pole \(\ell\). Gravity is set at \(g=10\). The continuous time dynamics are given by
\[\ddot{y} =\frac{1}{m_{c}+m_{c}\sin^{2}q}\left[u+m_{p}\sin q(\ell\dot{q}^{2} +g\cos q)\right] \tag{8}\] \[\ddot{q} =\frac{1}{\ell(m_{c}+m_{p}\sin^{2}q)}\left[-u\cos q-m_{p}\ell\dot {q}^{2}\cos q\sin q-(m_{c}+m_{p})g\sin q\right], \tag{9}\]
with a detailed derivation provided in [43, Ch. 3].
Cost Functions.The cost function is a quadratic cost over states and controls. Specifically,
\[c_{t}(x_{t},u_{t};\theta)=(x_{t}-x_{\text{goal}})^{\top}W(x_{t}-x_{\text{goal }})+w_{u}u^{2}, \tag{10}\]
and
\[c_{T}(x_{T},u_{T};\theta)=(x_{T}-x_{\text{goal}})^{\top}W(x_{T}-x_{\text{goal}}), \tag{11}\]
with \(x_{\text{goal}}=[0,0,-\pi,0]^{\top}\). We specify that \(W\triangleq\text{diag}(w)\), where \(w=[w_{y},w_{q},w_{\dot{y}},w_{\dot{q}}]\geq 0\) and furthermore, \(w_{u}\geq 0\). We have that \(\theta_{\text{cost}}=[w^{\top},w_{u}^{\top}]^{\top}\in\mathbb{R}^{5}\).
Constraints.For the setting with constraints, we apply box constraints for the cart position and controls. Specifically, we enforce \(|y|\leq y_{\text{max}}\) and \(|u|\leq u_{\text{max}}\). We let constraint parameters \(\theta_{\text{constr}}=[y_{\text{max}},u_{\text{max}}]^{\top}\). The final parameter vector is given by \(\theta=[\theta_{\text{dyn}}^{\dagger},\theta_{\text{cost}}^{\top},\theta_{ \text{const}}^{\top}]^{\top}\).
### 6-DoF Quadrotor Manouvering
See Section E of the appendix in [20] for the full definition of the quadrotor maneuvering problem. The objective is to drive the quadrotor using propeller thrusts to a goal configuration. The state is given by \([p^{\top},\dot{p}^{\top},q^{\top},\omega^{\top}]^{\top}\in\mathbb{R}^{13}\), where \(p\in\mathbb{R}^{3}\) represents the position of the center-of-mass of the quadrotor, \(\dot{p}\in\mathbb{R}^{3}\) represents linear velocity, \(q\in\mathbb{R}^{4}\) is a unit quaternion representation of the quadrotor attitude and \(\omega\in\mathbb{R}^{3}\) represents angular velocity. For the inequality constrained setting, non-linear (squared-norm) constraints on quadrotor position, given by \(\|p\|_{2}^{2}\leq r_{\text{max}}\) and thrust limits are applied.
### 2-Link Robot Arm
See Section E in the appendix in [20] for a reference to the full description of the robot arm problem. The objective of this OC problem is to move the robot arm to a goal configuration. The state is given by the orientation of the base and elbow joint, as well as respective velocities \([q_{1},q_{2},\dot{q}_{1},\dot{q}_{2}]^{\top}\in\mathbb{R}^{4}\). The control inputs \(u=[u_{q},u_{2}]^{\top}\in\mathbb{R}^{2}\) relate to applying torques at each joint. Constraints for the inequality constrained setting relate to box constraints on the joint positions, as well as torque limits. The cost function is similar to cartpole, i.e., a quadratic penalty to a goal state and over the controls.
### 6-DoF Rocket Landing
See Section I in the appendix in [20] for the full definition of the 6-DoF rocket landing problem. The objective of this task is to land a rocket modeled as a rigid body (softly) onto a goal position, using thrusters located at the tail. A non-linear constraint over the tilt angle is applied in the inequality constrained setting. The cost function penalizes deviations from the goal configuration, as well as fuel cost.
Experimental Setup for Synthetic Benchmark
We discuss in more detail how blocks for \(H,A,B,C\) are generated in this section. Elements for the Jacobian and Hessian blocks for the objective function and constraints are sampled independently and identically distributed (i.i.d.) from a standard Gaussian. Hessian blocks are made symmetric using \(H^{\text{sym}}=(H+H^{\top})/2\), where \(H\) is the initially generated random matrix. Each block's condition number is modified by applying an SVD to each block and adjusting the diagonal entries. Elements of the downstream loss gradients \(v=\text{D}_{\xi}\mathcal{L}(\xi)^{\top}\) are also sampled i.i.d. from a standard Gaussian.
## Appendix C Additional Results for the Imitation Learning/LfD Tasks
In this section, we provide additional detailed analysis around the imitation learning/LfD setting with no inequality constraints. Figure 6 illustrates the results of these experiments. We plot the percentage difference in loss across the full training curve between IDOC and PDP for five random initializations for \(\theta\). For the quadrotor, robotarm and rocket environments, we observe almost no difference in learning curves. This is expected since IDOC and PDP are computing the same trajectory derivative. For cartpole however, we see that PDP fails catastrophically (resulting in undesirable spikes in the imitation loss) on two occasions across the five trials. We suspect this is due to numerical instability of the Riccati equations, arising from the local geometric properties of the dynamics at solved trajectories.
## Appendix D Additional Timings for Simulation Experiments
In this section, we provide additional compute time experiments for the imitation learning/LfD tasks evaluated in the simulation environments. Given the relatively small size for the COC problems, there is very limited advantage with respect to compute time for parallel evaluation of trajectory derivatives. However, as a proxy to parallelization, IDOC can be _vectorized_ across timesteps using the numerical linear algebra library Numpy [16]. All experiments in this section are run on a single thread of an AMD Ryzen 9 7900X 4.7Ghz desktop CPU.
The experimental setting where inequality constraints are present is challenging to vectorize compared to the setting with only equality constraints, because different timesteps may have different numbers of active inequality constraints. While matrices \(A\) and \(C\) presented in Equation 4 are still block structured, the blocks may not be of uniform size in this setting. We present in Figure 6(a) and 6(b) the mean computation time (along with the upper/lower bound) per iteration across five trials with 10k iterations for each trial for all environments except for rocket (1k). We observe around a 2\(\times\) speedup over PDP by computing our IFT/DDN gradients, even before direct computation of vector-Jacobian products. Computing VJPs directly yields a further (approx.) 10% saving in compute time over constructing the trajectory derivative explicitly.
In our non-optimized Python implementation of IDOC, we batch together and vectorize computations involving blocks with an identical number of active constraints. As expected, trajectory derivatives for inequality constrained problems are slightly slower to compute compared to the equivalent without (hard) inequality constraints (log-barrier approximation), due to the necessary computational overheads required for identifying and batching blocks.
Figure 6: Results for the imitation learning task with no inequality constraints present. The y-axis is the (percentage) relative difference in the imitation loss, defined to be (IDOC loss - PDP loss) / PDP loss (lower is better). For Cartpole, locations where the PDP gradient fails is marked by stars. Shaded regions indicate min/max values over five trials.
Note that Safe-PDP [22] (for hard inequality constraints) and PDP [20] (used for differentiating through log-barrier problems) derivatives are implemented using a custom LQR and equality constrained LQR solver, respectively. These solvers have markedly different implementations, which explains differences in computation time between Safe-PDP and PDP.
Figure 7: Computation times for trajectory derivatives (per trajectory, per iteration) for the CIOC task. As expected, the log-barrier method with no “hard” inequality constraints affords faster derivative computation. | ## Review
### Summary
This paper presents a method for efficiently computing trajectory derivatives in constrained optimal control problems, claiming that the computation scales linearly with trajectory time-steps. The authors leverage the block structure of the matrices involved, allowing for parallelization and improved numerical stability. The method is evaluated against existing approaches such as Pontryagin Differentiable Programming (PDP) and demonstrates at least double the efficiency in gradient computation. While the contributions build upon previous work, they hold practical implications for various applications in inverse reinforcement learning and system identification. Overall, the paper offers a significant advancement in the field, though some claims about previous methods require clarification.
### Strengths
- The method shows practical utility for trajectory gradient computations, enabling faster calculations.
- The paper is well-organized and clearly written, explaining its novelty effectively.
- The approach improves numerical stability and allows for parallel computation, enhancing scalability.
- It demonstrates significant results in standard benchmark environments, substantiating its claims.
### Weaknesses
- Lacks extensive evaluation on runtime or gradient stability across various horizon lengths.
- Some claims about existing methods are misleading or incorrect, necessitating revisions.
- Figures need improvements for clarity, and legends should be enhanced for better understanding.
- The paper does not adequately benchmark against other important methods in the field.
### Questions
- How does the proposed method scale with longer trajectory lengths?
- What specific solvers are used in the forward pass, and how are Lagrange multipliers determined?
- Can the authors clarify the relationship between their method and existing approaches like differentiable MPC?
### Soundness
**Score:** 3
**Description:** Good - the method is technically sound, though some claims regarding existing work need to be clarified.
### Presentation
**Score:** 3
**Description:** Good - overall clear writing, but some sections require better structuring and clarity.
### Contribution
**Score:** 3
**Description:** Good - the paper provides valuable insights and methods, though it builds on existing techniques without introducing entirely novel concepts.
### Rating
**Score:** 6
**Description:** Weak Accept - the paper is technically solid with moderate-to-high impact potential, but it needs minor improvements in clarity and evaluation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper contributes a practical and innovative approach to trajectory gradient computation in optimal control, showing clear advancements over existing methods. While some aspects require clarification and further evaluation, the overall soundness, significance, and clarity support an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Scaling laws for language encoding models in fMRI
Richard J. Antonello
Department of Computer Science
The University of Texas at Austin
[email protected] &Aditya R. Vaidya
Department of Computer Science
The University of Texas at Austin
[email protected] &Alexander G. Huth
Departments of Computer Science and Neuroscience
The University of Texas at Austin
[email protected]
###### Abstract
Representations from transformer-based unidirectional language models are known to be effective at predicting brain responses to natural language. However, most studies comparing language models to brains have used GPT-2 or similarly sized language models. Here we tested whether larger open-source models such as those from the OPT and LLaMA families are better at predicting brain responses recorded using fMRI. Mirroring scaling results from other contexts, we found that brain prediction performance scales logarithmically with model size from 125M to 30B parameter models, with \(\sim\)15% increased encoding performance as measured by correlation with a held-out test set across 3 subjects. Similar logarithmic behavior was observed when scaling the size of the fMRI training set. We also characterized scaling for acoustic encoding models that use HuBERT, WavLM, and Whisper, and we found comparable improvements with model size. A noise ceiling analysis of these large, high-performance encoding models showed that performance is nearing the theoretical maximum for brain areas such as the precuneus and higher auditory cortex. These results suggest that increasing scale in both models and data will yield incredibly effective models of language processing in the brain, enabling better scientific understanding as well as applications such as decoding.
Large language models have come to dominate the field of AI due to incredible capabilities that range from reasoning [1] to code generation [2] to even predicting how a human brain would respond to language [3]. Rapid improvement in these abilities has largely been driven by _scale_: the most capable models today use nearly identical architectures to early transformer language models [4], but have orders of magnitude more parameters and larger training data [5]. Overall, model capabilities-often measured as zero-shot performance across a range of language tasks-tend to scale logarithmic with the number of model parameters [6, 7], suggesting that improvements will continue as model scale increases. Here we test whether these scaling "laws" hold for the task of modeling the human brain.
The human brain is the quintessential language processing system, but there is still much to learn about how it processes and represents language. One paradigm used for this purpose is the _encoding model_: given measured brain responses to natural language, construct a model that predicts those responses from the natural language stimulus [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. If an encoding model is able to accurately predict brain responses across a range of new stimuli, then that model must use similar representations to the brain. High-performing encoding models can then be interpreted to gain insight into the brain's computations [20, 21, 22] or the function of different brain areas [23, 24, 11, 25]. The highest performance is currently offered by encoding models that are based on large language models such as GPT-2 XL [26]. To build an encoding model, a language model is fed the same language stimuli that thehuman subject is hearing or reading. The internal states at each layer of the language model are then extracted, yielding _contextual embeddings_ that capture semantic and syntactic properties of the stimuli [27]. These embeddings are then entered into a linear regression model that predicts the human brain responses, often measured using functional magnetic resonance imaging (fMRI).
Though text-based language models are the norm, language encoding models have increasingly been trained with acoustic features derived from audio-based neural networks [28, 29, 30, 31, 32, 33, 34]. Models like HuBERT [35] are able to derive phonetic, lexical, and semantic properties by learning from unlabeled waveforms or annotated transcripts [36]. Even when trained with human-plausible amounts of training data, these models can be more effective than language models in predicting brain responses in _low-level_ speech processing areas such as the auditory cortex [31]. While earlier works examined the utility of several self-supervised audio models in brain encoding, newer models have since been released with substantially increased training data and speech recognition performance.
In this paper, we study whether encoding models for fMRI benefit from scaling neural network model parameters and datasets to the same degree as other tasks. We show that using contextual embeddings from larger language models can increase the prediction performance of encoding models by 15% over smaller counterparts. Larger acoustic models improve similarly with model size, showing largest improvements in auditory cortex and in higher-level areas. Finally, encoding performance for both model types scales logarithmically with the amount of fMRI training data from each subject, demonstrating an increasing need for very large fMRI datasets. These new state-of-the-art encoding models may enable a new frontier in the study of biological language comprehension and may provide deeper insight into the mechanisms that the brain uses to reason about and employ natural language.
## 2 Methods
### Language models and speech audio models
Decoder-only transformer architectures have become dominant in recent years for language modeling [37]. For semantic encoding, we used representations from two families of large decoder-only Transformer language models, OPT [38] and LLaMA [39]. From the OPT family we used the pretrained 125 million, 1.3 billion, 13 billion, 30 billion, 66 billion, and 175 billion parameter models. From the LLaMA family, we used the pretrained 33 billion and 66 billion parameter models.
HuBERT and wav2vec 2.0 [35, 40] have been previously used to study auditory perception in the brain [29, 31, 33]. Both are trained to learn representations from unlabeled audio. WavLM [41] extends the HuBERT paradigm with data augmentation and also adds new data sources to increase the total training dataset size. Whisper [42] is a family of encoder-decoder models that use 680,000 hours of weakly-labeled audio - an order of magnitude larger than previous datasets - to reach state-of-the-art speech recognition performance. In this work, we used the pretrained Base, Large, and X-Large variants of HuBERT; the Base+ and Large variants of WavLM; and multilingual variants of the Tiny, Base, Small, Medium, and Large Whisper models.
Table 1 shows the architecture details for all neural network models used in this work.
### MRI data
We used publicly available functional magnetic resonance imaging (fMRI) data collected from 3 human subjects as they listened to 20 hours of English language podcast stories over Sensimetrics S14 headphones [43, 44]. Stories came from podcasts such as _The Math Radio Hour, Modern Love_, and _The Anthropocene Reviewed_. Each 10-15 minute story was played during a separate scan. Subjects were not asked to make any responses, but simply to listen attentively to the stories. For encoding model training, each subject listened to roughly 95 different stories, giving 20 hours of data across 20 scanning sessions, or a total of \(\sim\)33,000 datapoints for each voxel across the whole brain. For model testing, the subjects listened to two test stories 5 times each, and one test story 10 times, at a rate of 1 test story per session. These test responses were averaged across repetitions.
Details of the MRI methods can be found in the original publications [43, 44], but important points are summarized here. MRI data were collected on a 3T Siemens Skyra scanner at The University of Texas at Austin Biomedical Imaging Center using a 64-channel Siemens volume coil. Functional scans were collected using a gradient echo EPI sequence with repetition time (TR) = 2.00 s, echo time (TE) = 30.8 ms, flip angle = 71\({}^{\circ}\), multi-band factor (simultaneous multi-slice) = 2, voxel size = 2.6mm x 2.6mm x 2.6mm (slice thickness = 2.6mm), matrix size = 84x84, and field of view =220 mm. Anatomical data were collected using a T1-weighted multi-echo MP-RAGE sequence with voxel size = 1mm x 1mm x 1mm.
All subjects were healthy and had normal hearing. The experimental protocol used by [43; 44] was approved by the Institutional Review Board at The University of Texas at Austin. Written informed consent was obtained from all subjects.
In addition to motion correction and coregistration [43], low frequency voxel response drift was identified using a 2nd order Savitzky-Golay filter with a 120 second window and then subtracted from the signal. To avoid onset artifacts and poor detrending performance near each end of the scan, responses for training data were trimmed by removing 20 seconds (10 volumes) at the beginning and end of each scan, which removed the 10-second silent period and the first and last 10 seconds of each story. Test responses were trimmed by an additional 80 seconds (40 volumes) to account for an fMRI artifact (see Section 3.5). The mean response for each voxel was subtracted and the remaining response was scaled to have unit variance.
### Encoding model construction
We used the fMRI data to estimate voxelwise brain encoding models for natural language using the intermediate hidden states of the various language and speech models discussed in Section 2.1. First, activations for each word in the stimulus text were extracted from each layer of each LM. In order to temporally align word times with TR times, we applied Lanczos interpolation together with a finite impulse response model [43]. Previous hidden state extraction methods (e.g. [23]) involved extracting the hidden state at the last token of the final word from a fresh context of fixed length of \(N\) tokens. This method requires \(N\) forward passes through the model in order to compute the hidden state for a single word. As this is impractical for models over a certain size, we improved computational efficiency here using a dynamically-sized context window. For a given story, contexts were grown until they reached 512 tokens, then reset to a new context of 256 tokens. More formally, the hidden state for token \(i\), \(H(i)\) is defined as
\[H(i)=\begin{cases}\theta\left(X_{(0,i)}\right)&i\leq 512\\ \theta\left(X_{\left(256\left\lfloor\frac{i}{256}\right\rfloor-256,i\right)} \right)&i>512\end{cases}\]
where \(X_{(j,k)}\) is the context of the tokenized story \(X\) from the token at index \(j\) to the token at index \(k\) and \(\theta\) is the function parameterized by the language model. This allowed hidden state extraction for most tokens to be completed with a single forward pass per token, rather than \(N\) forward passes as in previous methods. Differing tokenization schemes for handling whitespace across language models presented a challenge for consistent evaluation and were handled on a case-by-case basis.
Unlike the analyzed language models, the audio models used are bi-directional, so we must use a fresh context to preserve the causality of the extracted features. We windowed the stimulus waveform with a sliding window of size \(16\,\mathrm{s}\) and stride \(100\,\mathrm{ms}\) before feeding it into model. At each layer, we
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{3}{c}{Language models} & \multicolumn{3}{c}{Audio models} \\ Family & Layers & Width & Parameters \\ \hline \multirow{4}{*}{OPT [38]} & 12 & 768 & 125M & \multirow{4}{*}{Whisper [42]\({}^{a}\)} & 4 & 384 & 8M \\ & 24 & 2048 & 1.3B & 6 & 512 & 21M \\ & 40 & 5120 & 13B & 12 & 768 & 88M \\ & 48 & 7168 & 30B & 24 & 1024 & 307M \\ & 64 & 9216 & 66B & 32 & 1280 & 637M \\ & 96 & 12288 & 175B & & & \\ \hline \multirow{4}{*}{HLaMA [39]} & 60 & 6656 & 33B & \multirow{4}{*}{HuBERT [35]} & 12 & 768 & 95M \\ & 80 & 8192 & 66B & & 1024 & 317M \\ \cline{1-1} \cline{5-5} & & & 48 & 1280 & 964M \\ \hline \multirow{4}{*}{WavLM [41]} & 12 & 768 & 95M & \multirow{4}{*}{WuSLM [41]} & 12 & 768 & 95M \\ & 24 & 1024 & 317M & & 1024 & 317M \\ \cline{1-1} \cline{5-5} & & & & & \\ \hline \hline \end{tabular} \({}^{a}\)Whisper counts include only the encoder.
\end{table}
Table 1: Model architecture summary.
use the hidden state of the final token as the model's representation for the window. As Whisper follows an encoder-decoder architecture, we only use states from the encoder, since it operates only on the waveform. We then downsample the features as before with Lanczos interpolation.
Let \(f(H(\mathcal{S}))\) indicate a linearized ridge regression model that uses a temporally transformed version of the language model hidden states \(H(\mathcal{S})\) as predictors. The temporal transformation accounts for the lag in the hemodynamic response function [9; 45]. We use time delays of 2, 4, 6, and 8 seconds of the representation to generate this temporal transformation. For each subject \(s\), voxel \(v\), and language model layer \(h_{i}\), we fit a separate encoding model \(f^{v,s}_{h_{i}}\) to predict the BOLD response \(\hat{B}\) from our embedded stimulus, i.e. \(\hat{B}_{(x,v,h_{i})}=f^{v,s}_{h_{i}}(H_{i}(\mathcal{S}))\). Encoding model performance for a given layer was computed as the average voxelwise performance of that layer's hidden states across of all of cortex for all of our 3 subjects. For all figures with cortical flatmaps, we present the flatmap for one subject. Cortical flatmaps showing results for the other two subjects are shown in Section F of the supplement.
### Stacked regression
A unified "optimized" encoding model combining the LLaMA language model and Whisper audio model was computed using an adaptation of the stacked regression approach from Lin et al. [46]. For every even-numbered non-embedding layer \(l\) in the Whisper model, as well as the 18th layer of the 33 billion LLaMA model, we held-out \(\sim\)20% of the training data and built an encoding model using the remaining \(\sim\)80% of the training data. This was repeated for each of 5 folds. The predictions of these encoding models on the 5 folds of held-out training data were concatenated to generate full held-out predictions of the training data, \(f^{v,s}_{h_{i}}\left(\mathbf{x}^{(i)}_{h_{i}}\right)\). After this cross validation procedure, we build a covariance matrix for each voxel \(v\) and subject \(s\), \(\mathbf{R}^{v,s}\) of the residuals such that
\[\mathbf{R}^{v,s}_{p,q}=\sum_{i=1}^{n}\left(y^{v,s}-f^{v,s}_{h_{p}}\left(\mathbf{x}^{(i) }_{h_{p}}\right)\right)\left(y^{v,s}-f^{v,s}_{h_{q}}\left(\mathbf{x}^{(i)}_{h_{q}} \right)\right)\]
where \(n\) is the total number of time points and \(y^{v,s}\) is the ground truth BOLD response for voxel \(v\) on subject \(s\). We then optimize the quadratic problem \(\text{min}_{\mathbf{\alpha}^{v,s}}\mathbf{\alpha}^{v,s}\mathbf{R}^{v}\mathbf{\alpha}^{v,s}\) such that \(\alpha^{v,s}_{h_{j}}>0\) and \(\sum_{j=1}^{k}\alpha^{v,s}_{h_{j}}=1\) with a quadratic program solver [47] to get a convex set of attributions \(\mathbf{\alpha}^{v,s}\) which serve as weights for each feature space in the joint encoding model. This yields the final encoding model
\[\hat{y}^{v,s}=\sum_{j=1}^{k}\alpha^{v,s}_{h_{j}}f^{v,s}_{h_{j}}\left(\mathbf{x}_{j }\right)\]
where \(k\) is the number of feature spaces. As a final step, we validate this stacked encoding model independently using a held-out validation set and build a final encoding model that uses the stacked prediction for voxels where the stacked approach is significantly better on this validation set and uses the prediction from the 18th layer of LLaMA otherwise.
To determine which layers of the model are used to model each voxel, we computed voxelwise attribution center-of-mass. For each of the \(\mathbf{\alpha}^{v,s}\), the center-of-mass attribution \(\mathcal{C}(\mathbf{\alpha}^{v,s})\) is computed as
\[\mathcal{C}(\mathbf{\alpha}^{v,s})=\sum_{i=1}^{m}i\alpha^{v,s}_{h_{i}},\]
where \(m\) is the total number of Whisper layers used in the stacked attribution. This allows us to summarize whether the attributions are primarily weighted on the earlier or later layers of the network for that voxel.
### Noise ceiling computation
Data from brain recordings such as fMRI are inherently noisy, so it is useful to distinguish response variance that could potentially be explained by some model from noise variance that cannot be explained. We estimated the amount of explainable variance, or _noise ceiling_, using the averaged responses from the test story with 10 repeats and the method of Schoppe et al. [48]. The maximum correlation coefficient of the ideal encoding model is estimated as \(CC_{max}=\left(\sqrt{1+\frac{1}{N}\times\frac{NP}{SP}}\right)^{-1}\)where \(N\) is the number of repeats of our test data, \(NP\) is the noise power or unexplainable variance, and \(SP\) is the signal power or the amount of variance that could in principle be explained by the ideal predictive model. Using these estimates, we can then extract a normalized correlation coefficient \(CC_{norm}=\frac{CC_{abs}}{CC_{max}}\), where \(CC_{abs}\) is the product-moment correlation coefficient of the model's predictions against the ground truth fMRI responses. In some voxels, random noise can cause \(CC_{abs}>CC_{max}\), leading to \(CC_{norm}\) estimates greater than one. To regularize \(CC_{norm}\) estimates for noisy voxels we set \(CC_{max}\) values smaller than 0.25 to 0.25. The normalized correlations \(CC_{norm}\) are only used for the noise ceiling analysis in Figure 3. All other reported correlations are uncorrected (\(CC_{abs}\)). For brain map visualizations we only show voxels with \(CC_{max}>0.35\).
### Compute specifications
The generation of the encoding models presented in this paper required significant computational resources. Ridge regression was performed using compute nodes with 128 cores (2 AMD EPYC 7763 64-core processors) and 256GB of RAM. In total, roughly 4,000 node-hours of compute was expended. Feature extraction from language and speech models was performed on specialized GPU nodes that were the same as the previously-described compute nodes but with 3 NVIDIA A100 40GB cards. Feature extraction required roughly 200 node-hours of compute on these GPU nodes.
## 3 Results
### Scaling laws for semantic encoding models
Encoding models were fit for each of three subjects using roughly 20 hours of fMRI training data. For the 125 million parameter OPT model we also fit encoding models using varying amounts of training data in order to study the effect of training data size on encoding model performance. To capture encoding model performance, we compute the average prediction performance across all voxels in the cortex of each subject.
Figure 1**a** shows the relationship between language model size, measured as number of parameters, and encoding performance, measured as percent change in average prediction performance across all voxels in cortex relative to the smallest model. For consistent comparison, we only compare between the six model sizes from the OPT family. The layer that performed best for each model size was used. The result shows approximately logarithmic scaling of encoding performance with model size. For each order of magnitude increase in the number of parameters in the language, the encoding performance of the average subject increases by roughly 4.4%. However, this logarithmic relationship (\(r=0.91\)) tapers off to a plateau for models in excess of \(\sim\)30 billion model parameters.
Figure 1: _Scaling laws of Semantic and Speech Audio Encoding Models -_ **Figures 1a** and 1**b** show logarithmic scaling of semantic encoding model performance with number of parameters and number of stories. **Figure 1c** shows average voxelwise \(r^{2}\) for each layer of all tested models averaged across 3 subjects. **Figures 1d**, 1e, and **If** show analogous results for speech audio models. Error bars for **Figures 1b** and 1e denote standard error across bootstraps. Error bars for **Figures 1c** and **If** denote SNR-normalized subject-axis standard error. \(r^{2}\) is computed as \(|r|*r\).
We hypothesize this is an effect of the increased hidden state size that larger models possess, combined with limitations in our dataset size. Each encoding model was fit using the same 33,000 data points. As the models grow wider, the regression problem becomes more ill-conditioned. When FIR delays are added, models past the 30B parameter threshold have more hidden states than there are data points to train the regression, which lowers encoding performance. Further analysis of the relationship between dataset size and model size is provided in Section E in the supplement.
**Figure 1b** shows the relationship between the number of training stories (roughly proportional to total training data) and encoding performance on OPT-125M (layer 9). Here we see a strong logarithmic relationship between training data size and encoding performance. Each time the number of training stories increases by an order of magnitude, the encoding performance of the average subject increases by 122%. This strong relationship (\(r=0.989\)) gives compelling support to the usefulness of collecting "deep" datasets that focus on collecting a greater amount of data from a few subjects rather than a smaller amount of data from many subjects.
**Figure 1c** shows the encoding performance for each layer of each LM. The LLaMA models are marginally better at encoding than the OPT models, and also have a different pattern, with peak performance in relatively early layers followed by slow decay. In contrast, the OPT models have maximum performance with layers that are roughly 3/4 into the model. This mirrors results in other GPT-like models [3, 49]. This divergence from the typical pattern may warrant further study into the underlying mechanisms that define these trendlines. We suspect that the larger training set of the LLaMA models (1.4T tokens) over the OPT models (180B tokens) may explain its better encoding performance.
### Scaling laws for speech audio encoding models
We trained encoding models of increasing sizes from three families of audio models: HuBERT, WavLM, and Whisper. Encoding models were fit using an identical procedure as with the LMs in Section 3.1 - individually for three subjects, with roughly 20 hours of training data. We repeat the analyses from Section 3.1 on the audio models to examine the importance of model size and training dataset size on encoding performance.
Figure 2: _Large-scale encoding models_ - Performance of an encoding model built using OPT-30B on 20 hours of training data from a single subject. Surrounding plots show model predictions (_red_) against the average response (_dashed black_) over 10 separate trials (_gray_) on a held-out natural language test stimulus for selected voxels (_Clockwise from bottom left_: Well-predicted voxels from fusiform body area (FBA), Broca’s area, precuneus, prefrontal cortex, and secondary auditory cortex.) Only voxels with \(CC_{max}>0.35\) are shown. (PFC = prefrontal cortex, PrCu = precuneus, AC = auditory cortex/Wernicke’s area, AG = angular gyrus)
**Figure 1d** shows how audio model size affects encoding performance. We use the Whisper model family for this analysis, since it has the most models of different sizes. Again, the best performing layer for each size was used. As before, there is a logarithmic relationship (\(r=0.991\)) between model size and encoding performance; performance for the average subject increases roughly \(32.2\%\) for every additional order of magnitude increase in model size. Though the scaling improvements are greater overall than with OPT, it should be noted that the smallest Whisper models are substantially smaller than the OPT models, and have lower baseline performance, which exaggerates the difference. Additionally, within auditory cortex, we observe that encoding performance does _not_ plateau with model size (see Section B.1), suggesting that improvements in AC are complemented by reductions in performance elsewhere.
**Figure 1e** shows how additional training data improves the encoding performance of Whisper Large (636 million parameters, layer 30). As before, we fit separate encoding models on increasing amounts of training data. Additional training data for Whisper has an effect that is comparable to OPT: Encoding performance is linearly related to log-dataset size (\(r=0.988\)), and increasing the training dataset by an order of magnitude increases performance by \(144\%\).
**Figure 1f** shows the performance of each layer of every Whisper and WavLM model. For legibility, HuBERT results are omitted from this plot and are included in the supplement (Figure B.2). The upper-middle and uppermost layers of each model tend to have the best performance, aligning with previous results on acoustic encoding models [29; 31]. In contrast with WavLM, the Whisper models increase in performance with layer depth; this can likely be attributed to our choice of only using the encoder module from the network.
Voxelwise scaling laws, examining the relationships described in **Figure 1** on a per-voxel basis, can be found in Section A of the supplement.
### Large-scale encoding models
After characterizing these scaling laws, we next visualized the performance of one of the top-performing semantic encoding models 1.
Footnote 1: Keeping with the scaling results from Section 1, we chose to demonstrate this using the best model from the OPT family, however it should be noted that the best model from the LLaMA family is about 5% more performant as measured by correlation. This LLaMA model is further explored in Section 3.6.
**Figure 2** shows the encoding performance of the best OPT model, which uses the 33rd layer of OPT-30B, as measured on the test story with 10 repeats. For several voxels from different areas of cortex we show the encoding model predicted timecourse and ground truth BOLD response. We see strong prediction performance across cortex, with "classical" language regions like Broca's area and auditory cortex being well explained, as well as areas that are typically considered to be more "amodal" in nature, like prefrontal cortex. Voxelwise correlations for this subject are as high as \(r=0.82\). A similar map showing the change in encoding performance from OPT-125M (comparable to GPT models used in earlier papers) to OPT-30B is given in the supplemental material (see Figure C.1).
### Noise ceiling analysis
We further investigated the degree to which encoding models can be improved past this point. To do this, we performed a noise ceiling analysis whereby for each voxel, we estimated its \(CC_{max}\) (see Section 2.5). This gave us an approximation of the degree to which an ideal encoding model could explain the response variance in each voxel. We then renormalized the correlations from Figure 2 to compute a normalized correlation coefficient \(CC_{norm}\).
**Figure 3a** shows the _room for improvement_, or the difference between the correlation coefficients measured in Figure 2 and their \(CC_{max}\). Voxels are yellow if there is significant room for improvement, and purple if the model for that voxel is already close to optimal. Regions that are typically believed to contain high-level representations of language such as angular gyrus (AG) [50; 51; 52] still have the potential for substantial modeling improvement, while some areas in temporal cortex (near AC), prefrontal cortex (PFC), and the precuneus (PrCu) are nearly optimal. **Figure 3b** shows a histogram of absolute correlation coefficients (\(CC_{abs}\)), and **Figure 3c** shows the normalized correlations \(CC_{norm}\)
### Long context artifacts
Granting encoding models access to contexts as long as 512 tokens implicitly gives them access to the information that the fMRI scan has started recently. For instance, if the input context has only 64 tokens, this implies that the context is occurring at the 64th token in the story. In parallel, responses in some voxels tend to rise or fall gradually over the first minute of each scan (potentially due to underconstrained detrending at scan edges, MRI magnetization reaching steady state, or neural adaptation). The combination of these two effects can have unintended effects on the fair evaluation of these models by artificially inflating measured performance, as encoding models are adept at capturing this early slow drift. We found that long context effects exist up to roughly 100 seconds into a story, so to mitigate this issue we simply exclude the first 100 seconds of predicted and actual responses from each test story when measuring encoding model prediction performance. Figure D.1 in the supplement gives a map of the effect of long-context artifacts on measured encoding model performance. Long-context artifacts can inflate measured performance by up to 20%, but the effects are mostly localized to areas typically associated with low-level speech processing such as early auditory cortex. This effect is most prominent for encoding models using early LM layers and speech models, and tends to not be as significant for later LM layers.
### Unifying semantic and speech encoding models with stacked regression
We used stacked regression (see Section 2.4) to augment our best semantic model with the Whisper speech model representations. **Figure 3(a)** shows the regions that benefit from this augmentation, blended with a flatmap showing the overall semantic encoding model performance. We observe that these benefits are highly localized to auditory cortex and mouth motor cortex. The butterfly plot in **Figure 3(b)** shows the effect on voxels modified by this augmentation. We see that the auditory cortex voxels that are best predicted by the semantic model are also those that are most improved by this augmentation. **Figure 3(c)** plots the center of mass of the attribution weights \(\mathbf{\alpha}^{v,s}\). For voxels where the attribution weights favored the later layers of the Whisper model, the voxel is plotted in a brighter hue. We see that this attribution plot demonstrates a clear progression of auditory information from primary AC to secondary AC coinciding with layer depth. **Figure 3(d)** shows the benefits of this stacked regression augmentation. We see that the lion's share of the improvements happen in primary AC and early secondary AC.
## 4 Discussion & conclusions
These results suggest the existence of two major effects on the capacity of encoding models to predict BOLD response given finite brain data. First, LM changes that correspond to downstream task
Figure 3: _Noise Ceiling Analysis -_ **Figure 2(a):** A two channel flatmap showing which ROIs remain poorly explained by an encoding model built from the 33rd layer of OPT30B. Voxels are less transparent if they have a higher idealized encoding performance (\(CC_{max}\)). Voxels are more yellow if they have high _room for improvement_, defined as the difference between the best possible encoding model and this model. Angular gyrus and some parts of prefrontal cortex are still poorly explained, while precuneus and higher auditory cortex are close to optimal. **Figure 2(b):** A histogram of voxel correlations (\(CC_{abs}\)). **Figure 2(c):** A histogram of normalized voxel correlations (\(CC_{norm}\)). (PFC = prefrontal cortex, PrCu = precuneus, AC = auditory cortex, AG = angular gyrus)performance improvement tend to also improve encoding performance, such as when moving from a LM trained on little data to one trained on more data. Second, increasing hidden state size while keeping other metrics fixed tends to lower encoding performance, as it worsens the conditioning of the encoding model regression problem without a corresponding benefit to model effectiveness. The conflict between these two effects has led to a scenario where the largest model is not necessarily the best for predicting BOLD responses, as we have seen for both the OPT and LLaMA LMs where encoding model performance peaks at about 30B parameters. Rather, a careful balance must be struck between model size and model efficacy in order to maximize encoding performance. Audio models, on the other hand, have not yet seemed to reach this plateau.
What are the use cases for better encoding models? One promising application is the use of encoding models to supplement more classical experimentation techniques, as suggested by Jain et al. [20]. Higher encoding performance leads to more trustworthy model predictions and more accurate conclusions. Another use case of effective encoding models is language decoding, or predicting the language stimulus from the BOLD response. Recent work has shown that effective language decoding models can be built from encoding models by applying Bayesian techniques [45, 53], so it is likely that the performance of such decoders will improve along with the performance of encoding models [33, 44]. Finally, improved encoding performance could enable fine-grained control over voxel activation through stimulus generation, as demonstrated by Tuckute et al. [54].
Given our results, what can computational neuroscientists do to improve the performance of their own encoding models? One potential observation is that _deep_ datasets [55, 56, 43, 57] -- those that focus on collecting many samples from a few subjects, rather than a little data from many subjects -- are more useful for modelling brain activity. Encoding performance improvements scale well with both model size and dataset size, and large datasets will no doubt be necessary in producing useful encoding models. Another straightforward adjustment that can be performed is to simply use larger, more performant LMs for building encoding models. To the authors' knowledge, no other natural language encoding models paper at the time of this writing has used models larger than GPT-2 XL, which is a 1.5B parameter model with performance far below the best 30B parameter models. This
Figure 4: _Stacked Regression_ - **Figure 4a:** A flatmap shows which regions of cortex improve when augmenting a semantic encoding model built from the 18th layer of LLaMA-33B with the layers of Whisper using stacked regression. Voxels used the stacked regression if the stacked regression performed better on a validation set. The effect is highly localized to auditory cortex. **Figure 4b:** A butterfly plot comparing the voxelwise encoding performance of the stacked regression encoding model to the baseline semantic model. **Figure 4c:** The center-of-mass of the stacked regression attributions, \(\mathcal{C}(\boldsymbol{\alpha}^{v,s})\) are visualized in auditory cortex. **Figure 4d:** The improvement in encoding performance of the stacked regression model over the baseline is visualized in auditory cortex.
could be due to valid concerns that the amount of natural language brain data available is insufficient to train effective encoding models on such a scale. However, we found that even in low data cases, such as with as little as an hour's worth of data, encoding models built from larger models tend to outperform their smaller counterparts, as seen in Figure E.1 of the supplement. We have released code as well as selected precomputed features, model weights, and model predictions generated for this paper. 2 We hope this data release will encourage the use of more performant encoding models in natural language computational neuroscience.
Footnote 2: These data are available at [https://github.com/HuthLab/encoding-model-scaling-laws](https://github.com/HuthLab/encoding-model-scaling-laws).
## Acknowledgements
The authors acknowledge and thank the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have significantly contributed to the research results reported within this paper. This work was funded by grants from the NIDCD and NSF (1R01DC020088- 001), the Burroughs-Wellcome Foundation, and a gift from Intel Inc. We thank Ruogu Lin, Leila Wehbe, and Javier Turek for their aid and thoughtful suggestions in assisting with this work.
## References
* [1] Takeshi Kojima, Shixiang Shane Gu, Michel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners, 2023.
* [2] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Remi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. _Science_, 378(6624):1092-1097, 2022.
* [3] Charlotte Caucheteux, Alexandre Gramfort, and Jean-Remi King. Deep language algorithms predict semantic comprehension from brain activity. _Scientific Reports_, 12(1):16327, 2022.
* [4] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_, 33:1877-1901, 2020.
* [6] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_, 2020.
* [7] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage?, 2023.
* [8] Leila Wehbe, Brian Murphy, Partha Talukdar, Alona Fyshe, Aaditya Ramdas, and Tom Mitchell. Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses. _PloS one_, 9(11):e112575, 2014.
* [9] Alexander G Huth, Wendy A De Heer, Thomas L Griffiths, Frederic E Theunissen, and Jack L Gallant. Natural speech reveals the semantic maps that tile human cerebral cortex. _Nature_, 532(7600):453-458, 2016.
* [10] Wendy A de Heer, Alexander G Huth, Thomas L Griffiths, Jack L Gallant, and Frederic E Theunissen. The hierarchical cortical organization of human speech processing. _Journal of Neuroscience_, 37(27):6539-6557, 2017.
* [11] Shailee Jain and Alexander Huth. Incorporating context into language encoding models for fmri. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31. Curran Associates, Inc., 2018.
* [12] Mariya Toneva and Leila Wehbe. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain). In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* [13] Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, et al. Shared computational principles for language processing in humans and deep language models. _Nature neuroscience_, 25(3):369-380, 2022.
* [14] Martin Schrimpf, Idan Asher Blank, Greta Tuckute, Carina Kauf, Eghbal A Hosseini, Nancy Kanwisher, Joshua B Tenenbaum, and Evelina Fedorenko. The neural architecture of language: Integrative modeling converges on predictive processing. _Proceedings of the National Academy of Sciences_, 118(45):e2105646118, 2021.
* [15] Khai Loong Aw and Mariya Toneva. Training language models to summarize narratives improves brain alignment, 2023.
* [16] Subba Reddy Oota, Manish Gupta, and Mariya Toneva. Joint processing of linguistic properties in brains and language models. _arXiv preprint arXiv:2212.08094_, 2022.
* [17] Catherine Chen, Tom Dupre la Tour, Jack Gallant, Daniel Klein, and Fatma Deniz. The cortical representation of language timescales is shared between reading and listening. _bioRxiv_, pages 2023-01, 2023.
* [18] Micha Heilbron, Kristijan Armeni, Jan-Mathijs Schoffelen, Peter Hagoot, and Floris P De Lange. A hierarchy of linguistic predictions during natural language comprehension. _Proceedings of the National Academy of Sciences_, 119(32):e2201968119, 2022.
* [19] Amanda LeBel, Shailee Jain, and Alexander G. Huth. Voxelwise encoding models show that cerebellar language representations are highly conceptual. _Journal of Neuroscience_, 41(50):10341-10355, 2021.
* [20] Shailee Jain, Vy A. Vo, Leila Wehbe, and Alexander G. Huth. Computational Language Modeling and the Promise of in Silico Experimentation. _Neurobiology of Language_, pages 1-27, 03 2023.
* [21] Charlotte Caucheteux, Alexandre Gramfort, and Jean-Remi King. Evidence of a predictive coding hierarchy in the human brain listening to speech. _Nature Human Behaviour_, pages 1-12, 2023.
* [22] Nancy Kanwisher, Meenakshi Khosla, and Katharina Dobs. Using artificial neural networks to ask 'why'questions of minds and brains. _Trends in Neurosciences_, 2023.
* [23] Richard Antonello, Javier S Turek, Vy Vo, and Alexander Huth. Low-dimensional structure in the space of language representations is reflected in brain responses. _Advances in Neural Information Processing Systems_, 34, 2021.
* [24] Sreejan Kumar, Theodore R Sumers, Takateru Yamakoshi, Ariel Goldstein, Uri Hasson, Kenneth A Norman, Thomas L Griffiths, Robert D Hawkins, and Samuel A Nastase. Reconstructing the cascade of language processing in the brain using the internal computations of a transformer-based language model. _BioRxiv_, pages 2022-06, 2022.
* [25] Mathis Lamarre, Catherine Chen, and Fatma Deniz. Attention weights accurately predict language representations in the brain. _bioRxiv_, pages 2022-12, 2022.
* [26] Martin Schrimpf, Idan Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua Tenenbaum, and Evelina Fedorenko. Artificial neural networks accurately predict language processing in the brain. _bioRxiv_, 2020.
* [27] Qi Liu, Matt J Kusner, and Phil Blunsom. A survey on contextual embeddings. _arXiv preprint arXiv:2003.07278_, 2020.
* [28] Juliette Millet and Jean-Remi King. Inductive biases, pretraining and fine-tuning jointly account for brain responses to speech. _arXiv preprint arXiv:2103.01032_, 2021.
* [29] Juliette Millet, Charlotte Caucheteux, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, Jean-Remi King, et al. Toward a realistic model of speech processing in the brain with self-supervised learning. _Advances in Neural Information Processing Systems_, 35:33428-33443, 2022.
* [30] Alexander JE Kell, Daniel LK Yamins, Erica N Shook, Sam V Norman-Haignere, and Josh H McDermott. A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. _Neuron_, 98(3):630-644, 2018.
* [31] Aditya R Vaidya, Shailee Jain, and Alexander G Huth. Self-supervised models of audio effectively explain human cortical responses to speech. _arXiv preprint arXiv:2205.14252_, 2022.
* [32] Greta Tuckute, Jenelle Feather, Dana Boebinger, and Josh H McDermott. Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence. _bioRxiv_, pages 2022-09, 2022.
* [33] Alexandre Defossez, Charlotte Caucheteux, Jeremy Rapin, Ori Kabeli, and Jean-Remi King. Decoding speech from non-invasive brain recordings. _arXiv preprint arXiv:2208.12266_, 2022.
* [34] Yuanning Li, Gopala K Anumanchipalli, Abdelrahman Mohamed, Junfeng Lu, Jinsong Wu, and Edward F Chang. Dissecting neural computations of the human auditory pathway using deep neural networks for speech. _bioRxiv_, pages 2022-03, 2022.
* [35] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. _IEEE/ACM Transactions on Audio, Speech, and Language Processing_, 29:3451-3460, 2021.
* [36] Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y Lin, Andy T Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, et al. Superb: Speech processing universal performance benchmark. _arXiv preprint arXiv:2105.01051_, 2021.
* [37] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. _OpenAI Blog_, 1(8), 2019.
* [38] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. _arXiv preprint arXiv:2205.01068_, 2022.
* [39] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothe Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023.
* [40] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. _Advances in neural information processing systems_, 33:12449-12460, 2020.
* [41] Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. _IEEE Journal of Selected Topics in Signal Processing_, 16(6):1505-1518, 2022.
* [42] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. _arXiv preprint arXiv:2212.04356_, 2022.
* [43] Amanda LeBel, Lauren Wagner, Shailee Jain, Aneesh Adhikari-Desai, Bhavin Gupta, Allyson Morgenthal, Jerry Tang, Lixiang Xu, and Alexander G Huth. A natural language fmri dataset for voxelwise encoding models. _bioRxiv_, pages 2022-09, 2022.
* [44] Jerry Tang, Amanda LeBel, Shailee Jain, and Alexander G Huth. Semantic reconstruction of continuous language from non-invasive brain recordings. _Nature Neuroscience_, pages 1-9, 2023.
* [45] Shinji Nishimoto, An T Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L Gallant. Reconstructing visual experiences from brain activity evoked by natural movies. _Current biology_, 21(19):1641-1646, 2011.
* [46] Ruogu Lin, Thomas Naselaris, Kendrick Kay, and Leila Wehbe. Stacked regressions and structured variance partitioning for interpretable brain maps. _bioRxiv_, pages 2023-04, 2023.
* [47] Lieven Vandenberghe. The cvxopt linear and quadratic cone program solvers. _Online: http://cvxopt. org/documentation/coneprog. pdf_, 2010.
* [48] Oliver Schoppe, Nicol S Harper, Ben DB Willmore, Andrew J King, and Jan WH Schnupp. Measuring the performance of neural models. _Frontiers in computational neuroscience_, 10:10, 2016.
* [49] Richard Antonello and Alexander Huth. Predictive coding or just feature discovery? an alternative account of why language models fit brain data. _Neurobiology of Language_, pages 1-16, 2022.
* [50] Helene Van Ettinger-Veenstra, Anita McAllister, Peter Lundberg, Thomas Karlsson, and Maria Engstrom. Higher language ability is related to angular gyrus activation increase during semantic processing, independent of sentence incongruency. _Frontiers in human neuroscience_, 10:110, 2016.
* [51] Amy R Price, Michael F Bonner, Jonathan E Peelle, and Murray Grossman. Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. _Journal of Neuroscience_, 35(7):3276-3284, 2015.
* [52] Francesca M Branzi, Gorana Pobric, JeYoung Jung, and Matthew A Lambon Ralph. The left angular gyrus is causally involved in context-dependent integration and associative encoding during narrative reading. _Journal of cognitive neuroscience_, 33(6):1082-1095, 2021.
* [53] Thomas Naselaris, Kendrick N Kay, Shinji Nishimoto, and Jack L Gallant. Encoding and decoding in fmri. _Neuroimage_, 56(2):400-410, 2011.
* [54] Greta Tuckute, Aalok Sathe, Shashank Srikant, Maya Taliaferro, Mingye Wang, Martin Schrimpf, Kendrick Kay, and Evelina Fedorenko. Driving and suppressing the human language network using large language models. _bioRxiv_, 2023.
* [55] Samuel A Nastase, Yun-Fei Liu, Hanna Hillman, Asieh Zadbood, Liat Hasenfratz, Neggin Keshavarzian, Janice Chen, Christopher J Honey, Yaara Yeshurun, Mor Regev, et al. The "narratives" fmri dataset for evaluating models of naturalistic language comprehension. _Scientific data_, 8(1):250, 2021.
* [56] Emily J Allen, Ghislain St-Yves, Yihan Wu, Jesse L Breedlove, Jacob S Prince, Logan T Dowdle, Matthias Nau, Brad Caron, Franco Pestilli, Ian Charest, et al. A massive 7t fmri dataset to bridge cognitive neuroscience and artificial intelligence. _Nature neuroscience_, 25(1):116-126, 2022.
* [57] Nadine Chang, John A Pyles, Austin Marcus, Abhinav Gupta, Michael J Tarr, and Elissa M Aminoff. Bold5000, a public fmri dataset while viewing 5000 visual images. _Scientific data_, 6(1):49, 2019. | ## Review
### Summary
This paper investigates the scaling laws between the performance of large language models (LLMs) in predicting brain activity, specifically using BOLD fMRI signals, and the number of parameters in the LLMs alongside training data size. The authors discover that performance improves log-linearly with model size and training dataset size, indicating ongoing potential for enhancement in brain prediction tasks. The study effectively highlights the utility of large ML models in understanding neural mechanisms, although concerns about the validity of their claims regarding saturation effects and the scientific significance of their findings remain. Overall, the paper demonstrates strong technical execution and presents valuable insights for both neuroscience and machine learning communities.
### Strengths
- The investigation addresses an important question about the relationship between LLM scaling and neural prediction accuracy.
- The study presents novel empirical observations about log-linear scaling in brain prediction across language and acoustic domains.
- The methods are sound, and the results are relevant, showing significant improvements in performance with larger models and training datasets.
- The paper is well-organized, clearly written, and contextualizes the findings within existing literature.
### Weaknesses
- The validity of the non-saturation claims needs further support through alternative analyses or additional experiments.
- Concerns about the scientific significance of using LLMs, which process language differently than the brain, remain unaddressed.
- The evaluation of model size solely based on parameters is reductive and overlooks other factors like pretraining dataset size.
- Uncertainty quantification is lacking in the presentation of results, which diminishes the robustness of the findings.
### Questions
- How do the findings vary across different brain regions, such as the auditory cortex and higher-level associative areas?
- What strategies could be implemented to mitigate potential ill-conditioning of regression problems as model size increases?
- Can the authors provide recommendations for fMRI encoding model practitioners regarding optimal model layers or configurations?
- Is there a potential correspondence between LLM layers and different brain regions, and how might this inform future research?
### Soundness
**Score:** 3
**Description:** Good - The paper demonstrates a solid methodology and analysis, but some claims lack sufficient evidence and may require additional validation.
### Presentation
**Score:** 4
**Description:** Excellent - The paper is well-organized, and the writing is clear, but minor improvements in figure quality and clarity of error representation could enhance its presentation.
### Contribution
**Score:** 3
**Description:** Good - The paper makes a significant contribution to the understanding of scaling laws in ML models for brain activity prediction; however, its implications could be better articulated.
### Rating
**Score:** 7
**Description:** Accept - The paper is technically solid with high impact potential, although it needs minor improvements in clarity and justification of key claims.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents original research that contributes to both neuroscience and machine learning by exploring scaling laws between large language models and brain activity prediction. It is technically sound and well-written, with the potential for significant impact. While some claims require further substantiation, the overall strengths and contributions of the paper outweigh the weaknesses.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Structured Federated Learning through
Clustered Additive Modeling
Jie MA\({}^{1}\), Tianyi Zhou\({}^{2}\), Guodong Long\({}^{1}\), Jing Jiang\({}^{1}\), Chengqi Zhang\({}^{1}\)
\({}^{1}\)Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney
\({}^{2}\)University of Maryland
[email protected], [email protected]
{guodong.long, jing.jiang, chengqi.zhang}@uts.edu.au
###### Abstract
Heterogeneous federated learning without assuming any structure is challenging due to the conflicts among non-identical data distributions of clients. In practice, clients often comprise near-homogeneous clusters so training a server-side model per cluster mitigates the conflicts. However, FL with client clustering often suffers from "clustering collapse", i.e., one cluster's model excels on increasing clients, and reduces to single-model FL. Moreover, cluster-wise models hinder knowledge sharing between clusters and each model depends on fewer clients. Furthermore, the static clustering assumption on data may not hold for dynamically changing models, which are sensitive to cluster imbalance/initialization or outliers. To address these challenges, we propose "**Clustered Additive Modeling (CAM)**", which applies a globally shared model \(\Theta_{g}\) on top of the cluster-wise models \(\Theta_{1:K}\), i.e., \(y=h(x;\Theta_{g})+f(x;\Theta_{k})\) for clients of cluster-\(k\). The global model captures the features shared by all clusters so \(\Theta_{1:K}\) are enforced to focus on the difference among clusters. To train CAM, we develop a novel **Fed-CAM** algorithm that alternates between client clustering and training global/cluster models to predict the residual of each other. We can easily modify any existing clustered FL methods by CAM and significantly improve their performance without "clustering collapse" in different non-IID settings. We also provide a convergence analysis of Fed-CAM algorithm.
## 1 Introduction
Federated learning (FL) trains a global model over distributed clients and enforces data localization, i.e., data only stay local for model training at the client side while the server periodically averages client models' weights to update a global model and broadcast it to all clients. When the local data distributions are identical across clients, one global model suffices to serve all clients [1] However, non-identical distributions across clients (i.e., statistical heterogeneity) [2] are more common in practical FL scenarios, which leads to conflicts between the global objective and local ones. Instead of applying one model to all the \(m\) clients, an ideal case for non-IID settings would be training a local model per client without any interference from others. However, the local data are usually insufficient so a global model trained on heterogeneous clients exploiting all their data can still be helpful to local training. Hence, non-IID FL methods [3, 4, 5] usually struggle to find a sweet spot between the global consensus and local personalization. Without any assumptions on the structure among clients, all clients' distributions can be equally different from each other so a global model is impacted by the conflicts of all clients and may provide limited guidance to their local training.
**Clustered Federated Learning.** That being said, non-IID clients in practice usually have rich structures that have not been explored by most existing FL methods. A common structure is clusters, i.e., heterogeneous clients can be grouped into several near-homogeneous clusters each composed ofclients with similar distributions. In practice, clusters might be associated with geographic/age/income groups, affiliations, etc. Hence, we can train a server-side model for each cluster, hence mitigating the conflicts caused by heterogeneity. Unfortunately, clients' cluster memberships are usually undefined or inaccessible due to sensitive/private information and have to be jointly optimized with cluster-wise models, as recent clustered FL [6, 7, 8, 9] approaches do. They maintain \(K\) models \(\Theta_{1:K}\) for \(K\) clusters and assigns one \(\Theta_{k}\) to each client-\(i\) (with local data \(X_{i}\) and local model \(\theta_{i}\)), e.g., by min-loss (\(\Theta_{k}\) with the minimum loss on \(X_{i}\)) or K-means (the nearest \(\Theta_{k}\) to \(\theta_{i}\)) criterion. Hence, \(1\leq K\leq m\) models can accommodate more heterogeneity than single-model FL but also allows knowledge sharing among similar clients, which is lacking if training \(m\) client models independently. Hence, it may reach a better trade-off between global consensus and local personalization in non-IID settings.
However, compared to the general non-IID assumption, _clustered FL's assumption might be too restrictive since it prohibits inter-cluster knowledge sharing and enforces every cluster-wise model's training to only depend on a few clients_. This is contradictory to the widely studied strategy that different tasks or domains can benefit from sharing low-level or partial representations. It is due to the gap between the assumption of "clustered data distributions" and the algorithms of "clustering models (represented by loss vectors or model weights)": they are not equal and the latter is more restrictive. In other words, clients of different clusters can still benefit from feature/parameter sharing.
Moreover, _clustered FL usually suffers from optimization instability because dynamically changing models_ can violate the static clustering assumption and lead to imbalanced cluster assignment, which affects \(\Theta_{1:K}\) and local training in the future. In particular: (1) _Clustering collapse_, i.e., the clients assigned to one cluster keeps growing so "the rich becomes richer (i.e., the cluster-wise model becomes even stronger)" until reducing to single-model FL. This happens because most clients tend to first learn shared features before focusing on client-specific ones; (2) _Fragile to outliers_ such as malicious clients that may dominate some clusters and push all other benign ones to one or a few clusters; (3) _Sensitive to initialization_. The process highly depends on initial and earlier cluster assignments since they determine which clients' local training starts from the same model.
**Main Contributions.** To overcome the above problems of clustered FL, we propose a novel structured FL model termed "Clustered **A**dditive **M**odeling (**CAM**)". Compared to clustered FL, CAM trains a global model \(\Theta_{g}\) on top of the \(K\) clusters' models \(\Theta_{1:K}\). Its prediction for client-\(i\) combines the outputs of \(\Theta_{g}\) and the associated cluster \(c(i)\)'s model, i.e., \(y=h(x;\Theta_{g})+f(x;\Theta_{c(i)})\). This simple additive model removes the restriction of clustered FL by letting all clients share a base model \(\Theta_{g}\). It enforces \(\Theta_{1:K}\) to focus on learning the different features between clusters, hence mitigating "clustering collapse". Moreover, CAM tends to learn balanced clusters and determine the number of clusters automatically (by starting from more clusters and then zeroing out some of them), as demonstrated in Fig 1. Furthermore, CAM is less vulnerable to outliers, which can be mainly captured by \(\Theta_{1:K}\) and have less impact on the global model \(\Theta_{g}\). In addition, interactions between \(\Theta_{1:K}\) and \(\Theta_{g}\) make CAM less sensitive to initial cluster assignments since updating \(\Theta_{g}\) also changes the clustering.
Figure 1: Cluster sizes during IFCA vs. IFCA+CAM in client/cluster-wise non-IID settings on CIFAR-10. Legend: cluster ID (cluster size) in the last round. **CAM effectively mitigates clustering collapse/imbalance.**
Figure 2: Test accuracy and macro-F1 (mean\(\pm\)std) of IFCA/FeSEM (w/o CAM) and IFCA/FeSEM (CAM) in cluster/client-wise non-IID settings on CIFAR-10 dataset. “IFCA(5)” represents IFCA with \(K=5\) clusters. **CAM consistently brings substantial improvement to IFCA/FeSEM on both metrics and in both settings.**
CAM is a general model-agnostic method that can modify any existing clustered FL methods. As examples, we apply CAM to two representative methods, i.e., IFCA [6] and FeSEM [8]. In the optimization of CAM, \(\Theta_{1:K}\) and \(\Theta_{g}\) aim to fit the residual of each other's prediction. To this end, we propose an efficiently structured FL algorithm "**Fed-CAM**", which alternates between cluster assignment (server), local training (clients), and update of \(\Theta_{1:K}\) and \(\Theta_{g}\) (server). In experiments on several benchmarks in different non-IID settings, CAM significantly improves SOTA clustered FL methods, as shown in Fig 2. Moreover, we provide a convergence analysis of Fed-CAM algorithm.
## 2 Related Work
**Non-IID FL** aims to tackle statistical heterogeneity across clients. FedAvg [1] is designed for the IID setting, so it suffers from client drift and slow convergence with non-IID clients [2]. To address this challenge, FedDANE [10] proposed a federated Newton-type optimization method by adapting a method for classical distributed optimization, i.e., DANE, to the FL setting. Instead of synchronizing all clients' models to be the same global model periodically, FedProx [3] only adds a proximal term to the local training objective that discourages the local model from drifting away from the global model and thus preserves the heterogeneity. [11] applies adaptive learning rates to clients and [12] conducts attention-based adaptive weighting to aggregate clients' models. [13] studies the convergence of the FedAvg in non-IID scenarios. Recent work also studies client-wise personalized FL [14; 15; 16; 17; 18; 19; 20], which aim to address the non-IID challenge by training a personalized model per client with the help of the shared global model. Their objectives focus on training local models rather than the server-side model.
**Clustered FL** assumes that non-IID clients can be partitioned into several groups and clients in each group share a cluster-wise model. It jointly optimizes the cluster assignments and the clusters' models. K-means-based methods [8] assign clusters to clients according to their model parameters' distance. CFL [21] divides clients into two partitions based on the cosine similarity between client gradients and then checks whether a partition is congruent according to the gradient norm. IFCA [6] and HypCluster [7] assign to each client the cluster whose model achieves the minimum loss on the client's data. Few-shot clustering has been introduced to clustered FL by [22; 23]. FedP2P [24] allows communication between clients in the same cluster. [25] uses cluster-based contexts to enhance the fine-tuning of personalized FL models. [9] proposed the first cluster-wise non-IID setting and a bi-level optimization framework unifying most clustered FL methods.
**Additive modeling in FL** trains multiple models and adds their outputs together as its prediction. It was introduced to FL very recently. Federated residual learning [26] proposed an FL algorithm to train an additive model for regression tasks. [27] applies additive modeling to combining the outputs of a shared model and a local model in a partial model personalization framework. However, additive modeling has not been studied for clustered FL.
## 3 Clustered Additive Modeling (CAM)
In this section, we introduce clustered additive modeling (CAM), which combines a global model and cluster-wise model prediction in FL. CAM conducts a joint optimization of the global and cluster-wise models defined by a cluster assignment criterion. In particular, we provide two examples of CAM using different cluster assignment criteria, i.e., min-loss and K-means, which have been adopted respectively by two SOTA clustered-FL methods, i.e., IFCA and FeSEM. For each of them, we derive alternating optimization procedures (i.e., IFCA-CAM and FeSEM-CAM) that can be implemented in FL setting using two parallel threads of local model training. At the end of this section, we unify both algorithms in a structured FL algorithm Fed-CAM.
**Notations.** We assume that there are \(m\) clients and \(K\) clusters, where client-\(i\) has \(n_{i}\) examples and all clients have \(n=\sum_{i=1}^{m}n_{i}\) examples. On the server side, we have a global model \(\Theta_{g}\) and \(K\) cluster-wise models \(\Theta_{1:K}\). On the client side, we train \(m\) cluster models \(\Theta_{1:m}^{0}\) used to update the global model \(\Theta_{g}\) in FL and \(\theta_{1:m}\) used to update the cluster-wise model \(\Theta_{c(i)}\) assigned to each client-\(i\), where \(c(i)\) is its cluster label determined by the cluster assignment criterion \(c(\cdot)\). We further define \(C_{k}\triangleq\{i\in[m]:c(i)=k\}\) as the set of clients in cluster-\(k\). For simplicity, we will use \(X_{i}\) and \(Y_{i}\) to respectively represent the local training data on client-\(i\) and their ground truths, and \(\ell(Y_{i},F(X_{i}))\) denotes the batch loss of model \(F(\cdot)\) on \((X_{i},Y_{i})\). A CAM model for client-\(i\) can be
\[F_{i}(\cdot)=h(\cdot;\Theta_{g})+f(\cdot;\Theta_{c(i)})). \tag{1}\]For classification, \(F_{i}(\cdot)\) produces logits and we can apply softmax to get the class probabilities.
### IFCA-CAM: model performance-driven clustering
We extend the min-loss criterion used in IFCA [6] to CAM for cluster assignment, i.e., each client-\(i\) is assigned to the cluster-\(k\) whose model \(\Theta_{k}\) leads to the minimal loss of CAM on client-\(i\)'s data, i.e.,
\[c(i)=\arg\min_{k\in[K]}\ell(Y_{i},h(X_{i};\Theta_{g})+f(X_{i};\Theta_{k})). \tag{2}\]
IFCA-CAM optimizes \(\Theta_{g}\) and \(\Theta_{1:K}\) for minimizing the above minimal loss over all the \(m\) clients, i.e.,
\[\text{IFCA-CAM: }\min_{\Theta_{g},\Theta_{1:K}}\sum_{i=1}^{m}\frac{n_{i}}{n} \min_{k\in[K]}\ell(Y_{i},h(X_{i};\Theta_{g})+f(X_{i};\Theta_{k})), \tag{3}\]
where the inner minimization performs the min-loss assignment in Eq. (2). We solve Eq. (3) by the following alternating minimization of cluster membership, cluster-wise models, and the global model.
**(i)** Cluster assignment by applying Eq. (2) to the latest \(\Theta_{g}\) and \(\Theta_{1:K}\). This yields \(c(\cdot)\) and \(C_{1:K}\).
**(ii)** Fixing \(\Theta_{g}\), we can optimize the cluster-wise models \(\Theta_{1:K}\) by gradient descent:
\[\Theta_{k}\leftarrow\Theta_{k}-\eta\sum_{i\in C_{k}}\frac{n_{i}}{n}\nabla_{ \Theta_{k}}\ell(Y_{i},h(X_{i};\Theta_{g})+f(X_{i};\Theta_{k})),\;\forall\;k\in [K]. \tag{4}\]
In FL, the gradient can be approximated by aggregating the model updates of local models \(\theta_{i}\) from clients, whose training on the client side is: (1) initializing \(\theta_{i}\leftarrow\Theta_{c(i)}\); (2) starting from the initialization, running \(E\) local epochs updating \(\theta_{i}\) by
\[\theta_{i}\leftarrow\theta_{i}-\eta\nabla_{\theta_{i}}\ell(Y_{i},h(X_{i}; \Theta_{g})+f(X_{i};\theta_{i})),\;\forall\;i\in[m]; \tag{5}\]
and (3) aggregating the local model update \(\theta_{i}-\Theta_{k}\) from client \(i\in C_{k}\) to update \(\Theta_{k}\), i.e.,
\[\Theta_{k}\leftarrow\left(1-\sum_{i\in C_{k}}\frac{n_{i}}{n}\right)\Theta_{k} +\sum_{i\in C_{k}}\frac{n_{i}}{n}\theta_{i}. \tag{6}\]
**(iii)** Fixing \(\Theta_{1:K}\), we can optimize the global model \(\Theta_{g}\) by gradient descent:
\[\Theta_{g}\leftarrow\Theta_{g}-\eta\sum_{i\in[m]}\frac{n_{i}}{n}\nabla_{\Theta _{g}}\ell(Y_{i},h(X_{i};\Theta_{g})+f(X_{i};\Theta_{c(i)})). \tag{7}\]
In FL, this gradient step can be approximated by aggregating the local models \(\theta_{i}^{0}\) (similar to FedAvg): (1) initializing \(\theta_{i}^{0}\leftarrow\Theta_{g}\); (2) running \(E\) local epochs training \(\theta_{i}^{0}\) by
\[\theta_{i}^{0}\leftarrow\theta_{i}^{0}-\eta\nabla_{\theta_{i}^{0}}\ell(Y_{i}, h(X_{i};\theta_{i}^{0})+f(X_{i};\Theta_{c(i)})),\;\forall\;i\in[m]; \tag{8}\]
and (3) aggregating the updated local models \(\theta_{i}^{0}\) of all the \(m\) clients to update \(\Theta_{g}\), i.e.,
\[\Theta_{g}\leftarrow\sum_{i\in[m]}\frac{n_{i}}{n}\theta_{i}^{0}. \tag{9}\]
We can run two parallel threads of local training for \(\theta_{i}^{0}\) and \(\theta_{i}\) for each client-\(i\) because their training in Eq. (8) and Eq. (5) does not depend on each other (but they both depend on the cluster assignments in (i)). This is analogous to the simultaneous update algorithm (FedSim) in [27]. One may also consider an alternative update algorithm (which may enjoy a slightly faster convergence) that iterates (i)\(\rightarrow\)(ii)\(\rightarrow\)(i)\(\rightarrow\)(iii) in each round. However, it doubles the communication rounds ((i) requires one communication round) and does not allow parallel local training. Since the alternative update does not show a significant empirical improvement over FedSim in [27], we mainly focus on the parallel one in the remainder of this paper.
### FeSEM-CAM: parameter similarity-based clustering
We follow a similar procedure of IFCA-CAM to derive FeSEM-CAM, which applies a K-means style clustering to the client models \(\theta_{1:m}\), whose objective is minimizing the sum of squares of client-cluster distance, i.e.,
\[\min_{\Theta_{1:K}}\sum_{i=1}^{m}\frac{n_{i}}{n}\min_{j\in[K]}\| \theta_{i}-\Theta_{j}\|_{2}^{2}. \tag{10}\]
Hence, similar to FeSEM [8], FeSEM-CAM assigns the nearest cluster-wise model to each client and updates the cluster-wise models as the cluster centroids (i.e., K-means algorithm), i.e.,
\[c(i)=\arg\min_{k\in[K]}\|\theta_{i}-\Theta_{k}\|_{2}^{2},\ \ \Theta_{k} \leftarrow\sum_{i\in C_{k}}\frac{n_{i}}{\sum_{i\in C_{k}}n_{i}}\theta_{i}. \tag{11}\]
We iterate the above K-means steps for a few times until convergence in practice. FeSEM-CAM applies the K-means objective in Eq. (10) as a regularization to the loss of CAM model \(\ell(Y_{i},h(X_{i};\Theta_{g})+f(X_{i};\theta_{i}))\), i.e.,
\[\text{FeSEM-CAM:}\ \ \min_{\Theta_{g},\Theta_{1:K},\theta_{1:m}} \sum_{i=1}^{m}\frac{n_{i}}{n}\left[\ell(Y_{i},h(X_{i};\Theta_{g})+f(X_{i}; \theta_{i}))+\frac{\lambda}{2}\min_{j\in[K]}\|\theta_{i}-\Theta_{j}\|_{2}^{2} \right], \tag{12}\]
where the minimization w.r.t. \(\Theta_{1:K}\) (with \(\theta_{1:m}\) fixed) recovers the (weighted) K-means objective in Eq. (10). Unlike IFCA-CAM, where client model \(\theta_{i}\) is an auxiliary/latent variable for FL not showing in the objective of Eq. (3), it is explicitly optimized in Eq. (12). Similar to IFCA-CAM, we solve Eq. (3) by iterating the following alternating minimization steps (i)-(iii).
**(i)** K-means clustering that iterates Eq. (11) for a few steps until convergence, which yields \(c(\cdot)\), \(C_{1:K}\), and \(\Theta_{1:K}\). The update of \(\Theta_{1:K}\) is analogous to Eq. (6).
**(ii)** Fixing \(\Theta_{g}\), we optimize \(\theta_{1:m}\) by client-side local gradient descent:
\[\theta_{i}\leftarrow(1-\eta\lambda)\theta_{i}+\eta\lambda\Theta_ {c(i)}-\eta\frac{n_{i}}{n}\nabla_{\theta_{i}}\ell(Y_{i},h(X_{i};\Theta_{g})+f (X_{i};\theta_{i})),\ \forall\ i\in[m]. \tag{13}\]
The first two terms in Eq. (13) compute a linear interpolation between \(\theta_{i}\) and its assigned cluster's model \(\Theta_{c(i)}\). This is a result of the K-means regularization term in Eq. (12) and keeps \(\theta_{i}\) close to \(\Theta_{c(i)}\).
**(iii)** Fixing \(\theta_{1:m}\), we can optimize \(\Theta_{g}\) by gradient descent:
\[\Theta_{g}\leftarrow\Theta_{g}-\eta\sum_{i\in[m]}\frac{n_{i}}{n} \nabla_{\Theta_{g}}\ell(Y_{i},h(X_{i};\Theta_{g})+f(X_{i};\theta_{i})). \tag{14}\]
In FL, this gradient step can be approximated by aggregating the local models \(\theta_{i}^{0}\) (similar to FedAvg): (1) initializing \(\theta_{i}^{0}\leftarrow\Theta_{g}\); (2) running \(E\) local epochs training \(\theta_{i}^{0}\) by
\[\theta_{i}^{0}\leftarrow\theta_{i}^{0}-\eta\nabla_{\theta_{i}^{0} }\ell(Y_{i},h(X_{i};\theta_{i}^{0})+f(X_{i};\theta_{i})),\ \forall\ i\in[m]; \tag{15}\]
and (3) aggregating the updated local models \(\theta_{i}^{0}\) of all the \(m\) clients to update \(\Theta_{g}\) by Eq. (9).
### Algorithm
In Algorithm 1, we propose a structured FL algorithm for CAM, i.e., Fed-CAM, which can unify the derived optimization procedures for IFCA-CAM and FeSEM-CAM and can be easily extended to other clustered FL and clustering criteria.
**Warmup.** As an alternating optimization framework, it would be unstable if both \(\Theta_{g}\) and \(\Theta_{1:K}\) are randomly initialized and jointly optimized in parallel since they may capture overlapping information and result in an inefficient competitive game. To encourage them to learn complementary knowledge, warmup training for one of them before the joint optimization is helpful. For example, a few rounds of FedAvg can produce a "warm" \(\Theta_{g}\), whose predictions' residuals are more informative to train \(\Theta_{1:K}\). Another warmup strategy could be to run a few local training epochs and extract warm \(\Theta_{1:K}\) by clustering the lightly-trained local models \(\theta_{1:m}\). In Fed-CAM, we can apply the former warmup to IFCA-CAM and the latter to FeSEM-CAM.
## 4 Convergence Analysis
Based on the convergence analysis presented in [27], which aims to minimize the following objective:
\[\min_{u,V}F(u,V):=\frac{1}{n}\sum_{i=1}^{m}F_{i}(u,v_{i}), \tag{16}\]
where \(u\) represents the shared parameters and \(V=v_{1},v_{2},\cdots,v_{m}\) denotes the personalized parameters. If we map \(\Theta_{g}\) to \(u\), and \(\Theta_{1:K}\) to \(V\) respectively, this appears strikingly similar to our methods as illustrated in Equations 3 and 12. Provided that the clustering remains stable, we can employ the theoretical framework of [27]. And firstly, we make some standard assumptions for the convergence analysis as below.
**Assumption 1**.: (Smoothness). For \(i=1,\cdots,m\), the loss function \(l\) is continuously differentiable, and there exist constants \(L\) that \(\nabla_{\Theta_{g}}\ell(\Theta_{g},\Theta_{k})\) is L-Lipschitz with respect to \(\Theta_{g}\) and \(\Theta_{k}\), and \(\nabla_{\Theta_{k}}\ell(\Theta_{g},\Theta_{k})\) is L-Lipschitz with respect to \(\Theta_{g}\) and \(\Theta_{k}\).
**Assumption 2**.: (Unbiased gradients and bounded variance). The stochastic gradients are unbiased and have bounded variance. For all \(\Theta_{g}\) and \(\Theta_{k}\),
\[\mathbb{E}[\widetilde{\nabla}_{\Theta_{g}}\ell(\Theta_{g},\Theta_{k})]=\nabla _{\Theta_{g}}\ell(\Theta_{g},\Theta_{k}),\ \ \mathbb{E}[\widetilde{\nabla}_{ \Theta_{k}}\ell(\Theta_{g},\Theta_{k})]=\nabla_{\Theta_{k}}\ell(\Theta_{g}, \Theta_{k}),\]
and
\[\mathbb{E}[\|\widetilde{\nabla}_{\Theta_{g}}\ell(\Theta_{g},\Theta_{k})-\nabla _{\Theta_{g}}\ell(\Theta_{g},\Theta_{k})\|^{2}]\leq\sigma_{g}^{2},\ \ \mathbb{E}[\| \widetilde{\nabla}_{\Theta_{k}}\ell(\Theta_{g},\Theta_{k})-\nabla_{\Theta_{k} }\ell(\Theta_{g},\Theta_{k})\|^{2}]\leq\sigma_{k}^{2}.\]
**Assumption 3**.: (Partial gradient diversity). There exists a constant for all \(\theta_{i}^{0}\) and \(\Theta_{g}\), \(\theta_{i}\) and \(\Theta_{k}\),
\[\sum_{i=1}^{m}\frac{n_{i}}{n}\|\nabla_{\Theta_{g}}\ell(\Theta_{g },\theta_{i})-\nabla_{\Theta_{g}}\ell(\Theta_{g},\Theta_{k})\|^{2}\leq\delta^{2}\] \[\sum_{i\in[k]}\frac{n_{i}}{\sum_{i\in[k]}n_{i}}\|\nabla_{\Theta_ {k}}\ell(\theta_{i}^{0},\Theta_{k})-\nabla_{\Theta_{k}}\ell(\Theta_{g},\Theta _{k})\|^{2}\leq\delta^{2}.\]
**Assumption 4**.: (Convexity of cluster models). Fix \(\Theta_{g}\), assume \(\ell(\Theta_{k})\) is convex.
**Theorem 1**.: _(Convergence of Fed-CAM). Let Assumptions 1, 2, 3 and 4 hold, and learning rates chosen as \(\eta=\tau/(LE)\) for a \(\tau\) depending on the parameters \(L,\sigma_{g}^{2},\sigma_{k}^{2},\delta^{2},s,m,T\), provided clustering stable, we have (ignoring absolute constants),_
\[\frac{1}{T}\sum_{t=1}^{T}(\frac{1}{L}\mathbb{E}[\|\nabla_{\Theta _{g}}\ell(\Theta_{g},\Theta_{k})\|^{2}]+\frac{s}{mL}\frac{1}{m}\sum_{i=1}^{m} \mathbb{E}[\|\nabla_{\Theta_{c(i)}}\ell(\Theta_{g},\Theta_{c(i)})\|^{2}]) \tag{17}\] \[\leq \frac{(\triangle\iota\sigma_{sim,1}^{2})^{1/2}}{T^{1/2}}+\frac{( \triangle_{\iota}^{2}\sigma_{sim,2}^{2})^{1/3}}{T^{2/3}}+O(\frac{1}{T}), \tag{18}\]_where \(\triangle_{\ell}=\ell_{0}-\ell^{*}\), and we define effective variance terms,_
\[\sigma^{2}_{sim,1} =\frac{2}{L}(\delta^{2}(1-\frac{s}{m})+\frac{\sigma^{2}_{g}}{L}+ \frac{\sigma^{2}_{k}s}{m})) \tag{19}\] \[\sigma^{2}_{sim,2} =\frac{2}{L}(\delta^{2}+\sigma^{2}_{g}+\sigma^{2}_{k})(1-\frac{1 }{E}). \tag{20}\]
_Remark 1_.: It is straightforward to prove that the clustering of both IFCA-CAM and FeSEM-CAM converges, as evidenced by Ma et al. (2022). However, proving the stability of these clustering methods is more challenging due to the oscillation phenomenon often seen in K-means. The stability of clustering will be further demonstrated through experimental analysis in Section 5.2.
_Remark 2_.: Besides the clustering structure, there is a distinct difference between FedSim [27] and Fed-CAM. In Fed-CAM, we need to aggregate both \(\Theta_{g}\) and \(\Theta_{1:K}\), while in FedAlt [27], only \(\Theta_{g}\) requires aggregation. The \(\sigma^{2}_{sim,1}\) and \(\sigma^{2}_{sim,2}\) reflect the impact of sample number \(s\) and local steps \(E\). Larger \(s\) or smaller \(E\), better convergence rate. According to the results presented in [27], alternative gradient descent surpasses simultaneous gradient descent in terms of convergence rate. The asymptotic \(1/\sqrt{T}\) rate is achieved when each device is seen at least once on average, and the \(1/T\) term is dominated by the \(1/\sqrt{T}\) term, a situation that occurs when (ignoring absolute constants)
\[T\geq\frac{\triangle_{\ell}}{\sigma^{2}_{sim,1}}\max\{(1-\frac{1}{E})\frac{m}{ s},2\}.\]
## 5 Experiments
**Benchmark datasets and partitions.** The proposed methods have been validated using several benchmark datasets. While detailed results for PathMNIST and TissueMNIST from the MedMNIST [28] are provided in the supplementary material, the other datasets used for validation include:
* **Fashion-MNIST**[29] includes 70,000 labeled fashion images (28\(\times\)28 grayscale) in 10 classes, such as T-shirts, Trouser, and Bag, with others.
* **CIFAR-10**[30] consists of 60,000 images (32\(\times\)32 color) in 10 classes, including airplane, automobile, bird, and truck, among others. The divergence among classes in CIFAR-10 is relatively higher than in other datasets from the MNIST family.
Each dataset is split among 200 clients and we create the following non-IID scenarios:
* **Client-wise non-IID by Dirichlet distribution (\(\alpha=0.1\))**: This approach uses the Dirichlet distribution to control the randomness of non-IID data, as proposed by [31]. This is a standard method used in most personalized FL methods, which are usually client-wise non-IID.
* **Cluster-wise non-IID by Dirichlet distribution (\(\alpha=(0.1,10)\))**: This strategy divides the dataset into \(K\) clusters with \(\alpha=0.1\) to generate substantial variance in cluster-wise non-IID. Then, each cluster is divided into \(m/K\) clients with \(\alpha=10\) to control the non-IID across clients.
* **Client-wise non-IID by n-class (2)**: This method randomly selects \(n\)-class out of all classes in the dataset for each client, as proposed by [1], and then samples instances from these datasets.
* **Cluster-wise non-IID by n-class (3, 2)**: This approach randomly assigns \(3\) classes to each cluster, ensuring a relatively balanced number of instances per class. It then assigns \(2\) classes to each client.
**Baselines.** We select baseline methods from four categories as follows:
* **Single model-based FL:** We choose FedAvg [1] and FedProx [3] with a coefficient of 230 and a regularization of 0.95 as the baselines.
* **Ensemble FL:** We train FedAvg and FedProx \(K\) times and then learn an ensemble model via soft voting to serve all clients, which are named FedAvg+ and FedProx+, respectively.
* **Clustered FL:** We choose FeSEM [8] and IFCA [6], which is similar to HypCluster [7].
* **Clustered FL with additive modeling:** We integrate CAM with IFCA and FeSEM, denoting them as IFCA-CAM and FeSEM-CAM, respectively.
**Learning-related hyperparameters.** We use the Convolutional Neural Network (CNN) [32] as the basic model architecture for each client, as detailed in the supplementary material. For optimization, we employ SGD with a learning rate of 0.001 and momentum of 0.9 to train the model, and the batch size is 32. We evaluate the performance using both **micro accuracy** (%) and **macro F1-score** (%) on the client-wise test datasets to better capture the non-IID nature per client.
**FL system settings.** We conduct 100 global communication rounds in the FL system, including 30 warm-up rounds if applicable. Each communication involves 10 local steps. For the clustering process of FeSEM-CAM, we measure distance on the flattened parameters of the fully-connected layers, and use K-Means as the clustering algorithm. The coefficient \(\lambda\) is chosen from \(0.001,0.01,0.1\) based on the best performance.
### Main Results and Comparisons
**Cluster-wise non-IID** scenarios make the assumption that there are underlying clustering structures among clients. Table 1 compares the methods using two benchmark datasets, namely Fashion-MNIST and CIFAR-10. Results using two biomedical datasets are presented in the appendix. The following are some notable observations and analyses:
* The application of the ensemble mechanism to FedAvg and FedProx yields minor improvements. This is because the server-side model in FedAvg or FedProx is already a relatively strong model, while ensemble mechanisms usually excel with weak models.
* The introduction of CAM significantly enhances the performance of IFCA, which typically struggles with clustering collapse in cluster-wise non-IID scenarios. Notably, CAM decomposes the shared components into a global model and personalized parts into cluster models. Thus, the clustering collapse is mitigated by isolating the dominant shared knowledge.
* FeSEM generally exhibits robust performance on cluster-wise non-IID without outliers. Implementing CAM in FeSEM further improves the Macro-F1 performance. The clustering process in FeSEM tends to overfit the label distribution (imbalanced classes) of clients to achieve higher accuracy. However, the application of CAM introduces a global model with a balanced label distribution by averaging all clients, thereby boosting the Macro-F1 performance while preserving the cluster-wise non-IID for high accuracy.
* With an increase in the number of clusters \(K\), the CAM-based methods show substantial improvements in Macro-F1. The decomposition of shared knowledge and cluster-wise non-IID characteristics benefit from a reasonably larger \(K\), which facilitates fine-grained, cluster-wise personalization.
**Client-wise non-IID** Table 2 presents comparative results under client-wise non-IID scenarios using two benchmark datasets: Fashion-MNIST and CIFAR-10. Interestingly, IFCA maintains stable performance under client-wise non-IID conditions, primarily because it cannot form a single dominant cluster model - a primary cause of clustering collapse - in a highly heterogeneous environment. The application of CAM to IFCA and FeSEM shows a significant enhancement, particularly on the CIFAR-10 dataset. This improvement is likely due to FeSEM's typical restriction on knowledge sharing across clusters. In contrast, CAM utilizes a global model to capture more useful common
\begin{table}
\begin{tabular}{c c c|c c c c|c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{4}{c}{Fashion-MNIST} & \multicolumn{4}{c}{CIFAR-10} \\ \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{Non-IID setting} & \multicolumn{2}{c}{Dirichlet \(\alpha=(0.1,10)\)} & \multicolumn{2}{c}{n-class \((3,2)\)} & \multicolumn{2}{c}{Dirichlet \(\alpha=(0.1,10)\)} & \multicolumn{2}{c}{n-class \((3,2)\)} \\ \hline \multicolumn{2}{c}{**\#Cluster**} & \multicolumn{2}{c}{Methods} & \multicolumn{1}{c}{Accuracy} & \multicolumn{1}{c}{Macro-F1} & \multicolumn{1}{c}{Accuracy} & \multicolumn{1}{c}{Macro-F1} & \multicolumn{1}{c}{Accuracy} & \multicolumn{1}{c}{Macro-F1} & \multicolumn{1}{c}{Accuracy} & \multicolumn{1}{c}{Macro-F1} \\ \hline \multirow{3}{*}{**1**} & FedAvg & 86.08\(\pm\)0.70 & 57.24\(\pm\)2.26 & 86.33\(\pm\)0.44 & 46.09\(\pm\)1.08 & 24.38\(\pm\)3.30 & 11.69\(\pm\)3.15 & 21.33\(\pm\)3.83 & 9.00\(\pm\)0.58 \\ & FedProx & 86.32\(\pm\)0.70 & 58.03\(\pm\)0.38 & 86.42\(\pm\)0.38 & 45.86\(\pm\)1.42 & 24.73\(\pm\)3.68 & 11.28\(\pm\)2.35 & 22.66\(\pm\)1.33 & 9.23\(\pm\)0.78 \\ \hline \multirow{6}{*}{**5**} & FedAvg+ & 87.61 & 9.48 & 86.95 & 65.61 & 25.97 & 12.16 & 24.35 & 9.66 \\ & FedProx+ & 87.94 & 59.83 & 86.52 & 65.73 & 26.05 & 12.53 & 24.83 & 9.31 \\ & ICA & 84.60\(\pm\)2.22 & 62.03\(\pm\)3.01 & 84.91\(\pm\)2.54 & 66.90\(\pm\)4.43 & 34.10\(\pm\)1.79 & 22.12\(\pm\)21.21 & 29.80\(\pm\)4.49 & 17.90\(\pm\)0.26 \\ & IFCA-CAM & 93.33\(\pm\)0.95 & 79.64\(\pm\)4.09 & 95.83\(\pm\)0.49 & 77.56\(\pm\)1.14 & 58.13\(\pm\)3.82 & 28.09\(\pm\)3.68 & 54.56\(\pm\)3.58 & 27.27\(\pm\)1.06 \\ & FeSEM & 94.61\(\pm\)5.14 & 82.90\(\pm\)2.08 & 84.20\(\pm\)1.96 & 77.70\(\pm\)6.05 & 90.90\(\pm\)2.34 & 23.73\(\pm\)25.85 & 33.75\(\pm\)2.54 \\ & FeSEM-CAM & **95.13\(\pm\)1.18** & **85.10\(\pm\)3.17** & **95.60\(\pm\)1.08** & **75.82\(\pm\)1.47** & **64.83\(\pm\)2.33** & **30.83\(\pm\)1.77** & **65.88\(\pm\)1.21** & **36.83\(\pm\)1.17** \\ \hline \multirow{6}{*}{**10**} & FedAvg+ & 89.42 & 67.83 & 86.91 & 63.01 & 63.04 & 28.33 & 13.97 & 27.28 & 9.81 \\ & FedProx+ & 89.85 & 68.02 & 86.73 & 63.42 & 28.33 & 13.64 & 2.664 & 9.64 \\ & ICA & 82.10\(\pm\)0.45 & 62.62\(\pm\)8.28 & 56.58\(\pm\)4.97 & 28.98\(\pm\)1.16 & 27.89\(\pm\)3.99 & 34.66\(\pm\)2.60 & 18.70\(\pm\)1.31 \\ & IRCA-CAM & 94.52\(\pm\)2.54 & 88.45\(\pm\)5.46 & 95.00\(\pm\)0.87 & 82.89\(\pm\)1.16 & 70.90\(\pm\)1.18 & 40.88\(\pm\)1.28 & 68.46\(\pm\)4.08 & 41.45\(\pm\)4.00 \\ & FeSEM & 95.73\(\pm\)1.28 & 89.34\(\pm\)1.57 & 95.54\(\pm\)0.74 & 84.81\(\pm\)3.28 & 66.69\(\pm\)2.18 & 38.35\(\pm\)4.17 & 71.62\(\pm\)1.23 & 49.72\(\pm\)3.34 \\ & FeSEM-CAM & **96.19\(\pm\)1.20** & **92.37\(\pm\)1.85** & **98.07\(\pm\)1.46** & **92.43\(\pm\)2.70** & **78.45\(\pm\)1.71** & **49.50\(\pm\)1.13** & **75.04\(\pm\)1.17** & **55.50\(\pm\)2.07** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test results (mean\(\pm\)std) in **cluster**-wise non-IID settings on Fashion-MNIST & CIFAR-10.
knowledge across clusters, thereby substantially enhancing the generalization capability of each cluster. Furthermore, CIFAR-10, being a relatively complex dataset with a diversity of images, underscores the importance of sharing common knowledge.
### Visualization: CAM combats clustering collapse
Figures 1 and 3 demonstrate the effectiveness of applying CAM to mitigate clustering collapse in IFCA and FeSEM under both cluster-wise and client-wise non-IID scenarios, using the CIFAR-10 dataset with \(K=10\). Each color represents a cluster, and the X-axis represents the iteration rounds.
In the case of IFCA, we observe a severe clustering collapse issue in cluster-wise non-IID scenarios. A single cluster can encompass all clients in the client-wise non-IID setting and up to \(50\%\) of clients in the cluster non-IID setting. Furthermore, the clustering remains unstable throughout the process. However, when CAM is applied in IFCA-CAM, it quickly identifies some clustering structures within a few rounds, and this structure closely approximates the ground truth.
As for FeSEM, while the phenomenon of clustering collapse is not as pronounced, a single cluster can still dominate up to \(25\%\) of all clients if there are no outliers. CAM can expedite the clustering convergence, sometimes achieving it in just one round. Moreover, under client-wise non-IID settings, the application of CAM results in lower variance and more uniform cluster size. In the case of cluster-wise non-IID settings, FeSEM-CAM can easily identify the ground truth.
## 6 Conclusions
We propose a novel structured FL model "clustered additive modeling (CAM)" and an efficient FL algorithmic framework Fed-CAM to address non-IID FL challenges with clustering structure. CAM is a general mode-agnostic tool that can improve various existing non-IID FL methods. It can capture more general non-IID structures with global knowledge sharing among clients than clustered FL and overcome several weaknesses such as clustering collapse, vulnerability to cluster imbalance/initialization, etc. Theoretically, Fed-CAM is capable of achieving an asymptotic convergence rate of \(O(1/\sqrt{T})\). Extensive experiments show that CAM brings substantial improvement to existing clustered FL methods, improves cluster balance, and effectively mitigates clustering collapse.
\begin{table}
\begin{tabular}{c c|c c c c c c c c} \hline \hline \multicolumn{3}{c}{Datasets} & \multicolumn{4}{c}{CIFAR-10} \\ \hline \multicolumn{3}{c}{No-IID setting} & \multicolumn{4}{c}{Dritic \(\alpha=0.1\)} & \multicolumn{4}{c}{p-scales (2)} & \multicolumn{4}{c}{Dritic \(\alpha=0.1\)} & \multicolumn{4}{c}{n-class (2)} \\ \hline \multirow{3}{*}{**\#Cluster**} & Methods & Accuracy & More-Fl & Accuracy & More-Fl & Accuracy & More-Fl & Accuracy & More-Fl \\ \cline{2-11} & FedAvg & 85.90\(\pm\)0.46 & 54.52\(\pm\)2.66 & 86.17\(\pm\)0.25 & 44.48\(\pm\)1.24 & 25.62\(\pm\)3.47 & 11.38\(\pm\)2.02 & 24.30\(\pm\)3.53 & 8.56\(\pm\)0.64 \\ \cline{2-11} & FedPox & 86.03\(\pm\)0.58 & 54.69\(\pm\)3.32 & 86.47\(\pm\)0.23 & 44.89\(\pm\)1.38 & 25.72\(\pm\)3.29 & 11.14\(\pm\)1.49 & 24.19\(\pm\)2.45 & 8.69\(\pm\)0.74 \\ \hline \multirow{6}{*}{**5**} & FedAvg* & 86.12 & 61.07 & 86.55 & 45.39 & 25.71 & 12.45 & 24.83 & 8.74 \\ & FedAvg* & 86.39 & 56.56 & 86.15 & 45.33 & 25.88 & 12.43 & 25.88 & 8.55 \\ & IPCA & 90.13\(\pm\)61 & 68.47\(\pm\)5.23 & 9.41\(\pm\)5.44 & 72.30\(\pm\)5.32 & 42.71\(\pm\)0.28 & 20.28 & 26.77\(\pm\)1.48 & 45.64\(\pm\)12.78 & 17.78\(\pm\)1.29 \\ & IPCA-CAM & 97.32\(\pm\)14 & 90.71\(\pm\)5.17 & 92.24\(\pm\)1.02 & 22.24\(\pm\)3.43 & 54.21\(\pm\)2.55 & 41.88\(\pm\)1.54 & 59.24\(\pm\)1.51 & 25.20\(\pm\)1.05 \\ & FeSEM & 91.51\(\pm\)29 & 73.78\(\pm\)9.88 & 91.83\(\pm\)1.34 & 71.05\(\pm\)6.83 & 54.30\(\pm\)4.58 & 24.78\(\pm\)6.01 & 55.55\(\pm\)4.83 & 33.80\(\pm\)4.18 \\ & FeSEM+CAM & **97.44\(\pm\)1.04** & **75.12\(\pm\)15.82** & **93.14\(\pm\)2.60** & **76.98\(\pm\)21.77** & **59.71\(\pm\)28.09** & **40.45\(\pm\)33.35** & **56.70\(\pm\)1.46** & **34.52\(\pm\)1.44** \\ \hline \multirow{6}{*}{**10**} & FedAvg* & 86.81 & 60.43 & 86.91 & 47.12 & 27.83 & 13.65 & 27.71 & 9.65 \\ & FedPox* & 86.34 & 56.2 & 86.78 & 42.83 & 25.86 & 12.84 & 26.16 & 9.94 \\ & IPCA & 91.04\(\pm\)4.33 & 68.66\(\pm\)67.47 & 92.45\(\pm\)16.79 & 24.56 & 72.98\(\pm\)5.80 & 47.62\(\pm\)10.15 & 23.36\(\pm\)24.87 & 47.96\(\pm\)10.19 & 17.88\(\pm\)1.04 \\ & IPCA-CAM & **95.70\(\pm\)19** & 79.71\(\pm\)1.91 & 92.75\(\pm\)2.63 & 76.31\(\pm\)4.39 & 72.54\(\pm\)7.27 & 42.86\(\pm\)4.36 & 61.01\(\pm\)2.41 & 31.63\(\pm\)2.7 \\ & FeSEM & 93.32\(\pm\)30 & 80.41\(\pm\)10.16 & 93.75\(\pm\)1.39 & 73.96\(\pm\)5.67 & 67.1\(\pm\)5.7 & 31.96\(\pm\)5.23 & 63.61\(\pm\)6.51 & 47.29\(\pm\)6.08 \\ & FeSEM-CAM & 95.25\(\pm\)19 & 81.55\(\pm\)22.24 & **95.15\(\pm\)1.48** & **86.16\(\pm\)3.19** & **80.11\(\pm\)1.82** & **59.19\(\pm\)4.67** & **69.88\(\pm\)1.77** & **49.5\(\pm\)1.42** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test results (mean\(\pm\)std) in **client**-wise non-IID settings on Fashion-MNIST & CIFAR-10.
Figure 3: Cluster sizes during FeSEM vs. FeSEM+CAM in client/cluster-wise non-IID settings on CIFAR-10. Legend: cluster ID (cluster size) in the last round. **CAM effectively mitigates clustering collapse/imbalance**.
## References
* [1]B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017) Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pp. 1273-1282. Cited by: SS1.
* [2]P. Kairouz, H. D. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al. (2021) Advances and open problems in federated learning. Foundations and Trends(r) in Machine Learning14 (1-2), pp. 1-210. Cited by: SS1.
* [3]T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith (2020) Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems2, pp. 429-450. Cited by: SS1.
* [4]H. Zhu, J. Xu, S. Liu, and Y. Jin (2021) Federated learning on non-iid data: a survey. arXiv preprint arXiv:2106.06843. Cited by: SS1.
* [5]D. Gao, X. Yao, and Q. Yang (2022) A survey on heterogeneous federated learning. arXiv preprint arXiv:2210.04505. Cited by: SS1.
* [6]A. Ghosh, J. Chung, D. Yin, and K. Ramchandran (2020) An efficient framework for clustered federated learning. Advances in Neural Information Processing Systems33, pp. 19586-19597. Cited by: SS1.
* [7]Y. Mansour, M. Mohri, J. Ro, and A. Suresh (2020) Three approaches for personalization with applications to federated learning. arXiv preprint arXiv:2002.10619. Cited by: SS1.
* [8]M. Xie, G. Long, T. Shen, T. Zhou, X. Wang, J. Jiang, and C. Zhang (2021) Multi-center federated learning. arXiv preprint arXiv:2108.08647. Cited by: SS1.
* [9]J. Ma, G. Long, T. Zhou, J. Jiang, and C. Zhang (2022) On the convergence of clustered federated learning. arXiv preprint arXiv:2202.06187. Cited by: SS1.
* [10]T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith (2019) FedDane: a federated newton-type method. In 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 1227-1231. Cited by: SS1.
* [11]S. Reddi, Z. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Konecny, S. Kumar, and H. Brendan (2020) Adaptive federated optimization. arXiv preprint arXiv:2003.00295. Cited by: SS1.
* [12]J. Jiang, S. Ji, and G. Long (2020) Decentralized knowledge acquisition for mobile internet applications. World Wide Web23 (5), pp. 2653-2669. Cited by: SS1.
* [13]X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang (2019) On the convergence of fedavg on non-iid data. arXiv preprint arXiv:1907.02189. Cited by: SS1.
* [14]A. Z. Tan, H. Yu, L. Cui, and Q. Yang (2022) Toward personalized federated learning. IEEE Transactions on Neural Networks and Learning Systems. Cited by: SS1.
* [15]A. Z. Fallah, A. Mokhtari, and A. Ozdaglar (2020) Personalized federated learning with theoretical guarantees: a model-agnostic meta-learning approach. Advances in Neural Information Processing Systems33, pp. 3557-3568. Cited by: SS1.
* [16]A. Shamsian, A. Navon, E. Fetaya, and G. Chechik (2021) Personalized federated learning using hypernetworks. In International Conference on Machine Learning, pp. 9489-9502. Cited by: SS1.
* [17]Y. Deng, M. Mahdi Kamani, and M. Mahdavi (2020) Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461. Cited by: SS1.
* [18]L. Collins, H. Hassani, A. Mokhtari, and S. Shakkottai (2021) Exploiting shared representations for personalized federated learning. In International Conference on Machine Learning, pp. 2089-2099. Cited by: SS1.
* [19]C. T. Dinh, N. Tran, and J. Nguyen (2020) Personalized federated learning with moreau envelopes. Advances in Neural Information Processing Systems33, pp. 21394-21405. Cited by: SS1.
* [20]K. Singhal, H. Sidahmed, Z. Garrett, S. Wu, J. Rush, and S. Prakash (2021) Federated reconstruction: partially local federated learning. Advances in Neural Information Processing Systems34, pp. 11220-11232. Cited by: SS1.
* [21]F. Sattler, K. Muller, and W. Samek (2020) Clustered federated learning: model-agnostic distributed multitask optimization under privacy constraints. IEEE transactions on neural networks and learning systems. Cited by: SS1.
* [22]D. Kurian Dennis, T. Li, and V. Smith (2021) Heterogeneity for the win: one-shot federated clustering. Cited by: SS1.
* [23]P. Awasthi and O. Sheffet (2012) Improved spectral-norm bounds for clustering. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 37-49. Cited by: SS1.
* [24] Li Chou, Zichang Liu, Zhuang Wang, and Anshumali Shrivastava. Efficient and less centralized federated learning. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_, pages 772-787. Springer, 2021.
* [25] Xueyang Tang, Song Guo, and Jingcai Guo. Personalized federated learning with contextualized generalization. In Lud De Raedt, editor, _Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22_, pages 2241-2247. International Joint Conferences on Artificial Intelligence Organization, 7 2022. Main Track.
* [26] Alekh Agarwal, John Langford, and Chen-Yu Wei. Federated residual learning. _arXiv preprint arXiv:2003.12880_, 2020.
* [27] Krishna Pillutla, Kshitiz Malik, Abdel-Rahman Mohamed, Mike Rabbat, Maziar Sanjabi, and Lin Xiao. Federated learning with partial model personalization. In _International Conference on Machine Learning_, pages 17716-17758. PMLR, 2022.
* [28] Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke, Hanspeter Pfister, and Bingbing Ni. Medmnist v2: A large-scale lightweight benchmark for 2d and 3d biomedical image classification. _arXiv preprint arXiv:2110.14795_, 2021.
* [29] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_, 2017.
* [30] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [31] Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. _arXiv preprint arXiv:1909.06335_, 2019.
* [32] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. _nature_, 521(7553):436-444, 2015. | ## Review
### Summary
This paper presents a novel approach to clustered federated learning called Clustered Additive Modeling (CAM), which addresses critical issues such as clustering collapse and sensitivity to outliers. By integrating predictions from a global model with those from cluster-specific models, CAM enhances performance across various non-IID settings. The authors demonstrate the effectiveness of their method through extensive experiments, showing significant improvements over existing clustered federated learning techniques. This contribution is positioned to have a substantial impact on the study of statistical heterogeneity and client structures in federated learning.
### Strengths
- 1. The proposed method is simple yet effective, likely to establish a new baseline in clustered federated learning.
- 2. The paper tackles the inherent challenge of clustering collapse in federated learning, providing a clear motivation and solution.
- 3. The overall flow and clarity of the paper are good, with a well-structured theoretical analysis.
- 4. The claims made are well-supported by experimental results across diverse datasets and settings.
- 5. The paper includes a comprehensive comparison with existing methods, enhancing the validity of its claims.
### Weaknesses
- 1. Readability could be improved due to the use of many abbreviations and symbols without clear definitions.
- 2. Some descriptions lack clarity, particularly regarding the linkage of figures to the text.
- 3. The proposed method requires additional model training, raising questions about computational efficiency and complexity.
- 4. The convergence analysis is limited, lacking details that could strengthen the theoretical foundation.
- 5. The paper does not adequately discuss the impact of outliers on the global model and the implications for clustering dynamics.
### Questions
- 1. How does the proposed method handle scenarios with a significant number of outliers?
- 2. What is the rationale for using different symbols for the global and cluster-specific models in the equations?
- 3. Can the authors elaborate on the cost of the warmup stage and its effect on performance?
- 4. Is there a need for additional comparisons with personalized federated learning methods?
- 5. How can the proposed method be adapted for graph data or other complex data structures?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is sound and the theoretical foundations are mostly well-laid, though some details require further elaboration to fully support the claims.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper is generally clear and structured, improvements in readability and clarity of certain sections are needed.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper introduces a significant advancement in the field, addressing a critical problem, though further validation through comparisons with other models would strengthen its contribution.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid and impactful but requires some refinement in presentation and additional discussions regarding limitations.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper offers a novel solution to a significant problem in federated learning, demonstrating sound methodology and promising empirical results. While there are minor weaknesses in presentation and clarity, the overall contribution is valuable and impactful, warranting acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Clip-OGD: An Experimental Design for Adaptive Neyman Allocation in Sequential Experiments
Jessica Dai
UC Berkeley
[email protected] &Paula Gradu
UC Berkeley
[email protected] &Christopher Harshaw
MIT
[email protected]
###### Abstract
From clinical development of cancer therapies to investigations into partisan bias, adaptive sequential designs have become increasingly popular method for causal inference, as they offer the possibility of improved precision over their non-adaptive counterparts. However, even in simple settings (e.g. two treatments) the extent to which adaptive designs can improve precision is not sufficiently well understood. In this work, we study the problem of Adaptive Neyman Allocation in a design-based potential outcomes framework, where the experimenter seeks to construct an adaptive design which is nearly as efficient as the optimal (but infeasible) non-adaptive Neyman design, which has access to all potential outcomes. Motivated by connections to online optimization, we propose Neyman Ratio and Neyman Regret as two (equivalent) performance measures of adaptive designs for this problem. We present Clip-OGD, an adaptive design which achieves \(\widetilde{\mathcal{O}}(\sqrt{T})\) expected Neyman regret and thereby recovers the optimal Neyman variance in large samples. Finally, we construct a conservative variance estimator which facilitates the development of asymptotically valid confidence intervals. To complement our theoretical results, we conduct simulations using data from a microeconomic experiment.
## 1 Introduction
From medicine and public health to economics and public policy, randomized control trials are used in a variety of disciplines to investigate causal effects. Typically, treatment is assigned in a non-adaptive manner, where assignments are determined before any outcomes are observed. A sequential experimental approach, which adaptively assigns treatment based on previously observed outcomes, offers the possibility of more precise or high powered estimates of relevant causal effects. Adaptive experiments are run to develop clinical therapies for breast cancer [1], evaluate incentives to reduce partisan bias [10], and evaluate customer acquisition via online advertising [14], to name a few.
In this paper, we study the problem of Adaptive Neyman Allocation, which we informally define as follows. An optimal non-adaptive experimental design which minimizes variance of an estimator will depend on the unknown potential outcomes, rendering it infeasible to run. However, by adaptively choosing treatment assignments in a sequential manner based on observed outcomes, we can hope to guarantee that the variance of the estimator under the adaptive design converges to the optimal non-adaptive variance. The problem of Adaptive Neyman Allocation is to construct such an adaptive design which guarantees the variance converges to the (infeasible) optimal non-adaptive design.
An experimental design which sufficiently addresses the Adaptive Neyman Allocation problem offers the advantage of higher statistical power, relative to a broad class of fixed experimental designs. Practically speaking, this means that either smaller confidence intervals are obtained for a given number of experimental units, or that fewer units are required to achieve confidence intervals of a given length. In practice, this means that investigating causal effects can be cheaper--in terms of time, money, and other valuable resources--when adaptive experiments are run. Although several experimental designs have been proposed for this purpose (Hahn et al., 2011; Blackwell et al., 2022), none have provided formal guarantees that the optimal non-adaptive variance can be achieved and the effectiveness of such designs has recently been called into question Cai and Rafi (2022).
The main contributions of this work are as follows:
1. **Neyman Ratio and Regret**: We propose two (equivalent) performance measures of experimental designs for the problem of Adaptive Neyman Allocation: Neyman Ratio and Neyman Regret. We show that guarantees on the rates of these performance measures directly translate to guarantees on the convergence of variance to the Neyman variance.
2. **Clip-OGD**: We propose the adaptive design Clip-OGD, a variant of online stochastic projected gradient descent for which the Neyman regret is \(\widetilde{\mathcal{O}}(\sqrt{T})\). This guarantees that the variance of the sequential effect estimator approaches the Neyman variance.
3. **Confidence Intervals**: By constructing a conservative variance estimator, we provide confidence intervals which guarantee asymptotic coverage of the average treatment effect.
In Section 7, we support these theoretical results with simulations using data from a microeconomic experiment. Our results rely on viewing the Adaptive Neyman Allocation problem through the lens of online convex optimization. However, as discussed in Section 4.2, due to the subtleties arising in the problem, we do not know of an existing online algorithm which directly obtains these results.
### Related Work
We work within the potential outcomes framework for causal inference Neyman (1923); Rubin (1980); Imbens and Rubin (2015). The idea of optimal treatment allocation dates back to Neyman (1934), where he demonstrates that sampling from treatments proportional to the within-treatment outcome variance will minimize the variance of standard estimators. Unfortunately, this type of design is not practically feasible when little is known about the statistics of outcomes from each treatment. Robbins (1952) highlights adaptive sampling as one of the more pressing open statistical problems at the time. In Chapter 5, Solomon and Zacks (1970) presents a survey of adaptive designs for survey sampling, but from a Bayesian perspective. More recently, Hahn et al. (2011) proposed a two stage design in a super-population setting, where data is uniformly collected from both arms in the first stage, statistics of the treatment arm are estimated, and a fixed probability derived from estimated statistics is used in the second stage. They derive the limiting distribution of the effect estimator under the two-stage design, which has a variance that is similar to, but asymptotically bounded away from the optimal Neyman variance. In a design-based setting, Blackwell et al. (2022) propose a similar two-stage approach and, through simulations, provide practical guidance on how to choose the length of the first stage. Although both of these works are motivated by achieving the Neyman variance, neither formally show that this is possible under the two-stage design.
While the goal in this paper is to increase the precision of treatment effect estimates, a variety of response-adaptive designs have been developed for various objectives, including reducing mean total sample size (Hayre and Turnbull, 1981) and reduction of harm reduction in null hypothesis testing (Rosenberger et al., 2001). Eisele (1994) proposes the Doubly Adaptive Coin Based Design, which is a meta-algorithm for targeting various allocation proportions when outcomes are drawn i.i.d. from an exponential family. Hu and Rosenberger (2003) critiques many response-adaptive designs as being "myopic strategies" which have "adverse effects on power", providing an asymptotic framework by which to judge adaptive design when the outcomes are i.i.d. and binary. This asymptotic evaluation framework was extended to continuous outcomes by Zhang and Rosenberger (2006). An additional line of work has developed adaptive Bayesian methods for subgroup identification (Xu et al., 2014).
Causal inference under adaptively collected data has seen a variety of recent developments which are adjacent to, but distinct from, the problem of Adaptive Neyman Allocation. One line of research has been to construct estimators via re-weighting which ensure consistency and normality when data is collected via bandit algorithms (Hadad et al., 2021; Zhang et al., 2020, 2021). A second line of research has been to provide inferential methods which are valid under data-dependent stopping times (Wald, 1945; Howard et al., 2021; Ham et al., 2022). Finally, Offer-Westort et al. (2021) propose an adaptive experimental design for improved selective inference, when only the effect of the best performing treatment is to be inferred.
## 2 Preliminaries
The sequential experiment takes place over \(T\) rounds, where we assume that \(T\) is fixed and known to the experimenter. At each iteration \(t\in[T]\), a new experimental unit (e.g. clinical participant), enters into the experiment, so that there are \(T\) units in total. In an abuse of notation, we identify units with their respective round \(t\in[T]\). The experimenter assigns a (random) treatment \(Z_{t}\in\{0,1\}\) (e.g. drug or placebo) to the experimental unit. The unit has two real-valued potential outcomes \(y_{t}(1),y_{t}(0)\) which are unknown to the experimenter and represent the unit's measured response to the treatment assignments (e.g. measured heart rate). The term "potential" is used here because while only one treatment is assigned and thus only one outcome is observed, both outcomes have the potential to be observed. At the end of the round, the experimenter sees the observed outcome \(Y_{t}=\mathbf{1}[Z_{t}=1]y_{t}(1)+\mathbf{1}[Z_{t}=0]y_{t}(0)\).
### Potential Outcomes Framework
In this paper, we adopt a _design-based framework_ where the sequence of potential outcomes \(\{y_{t}(1),y_{t}(0)\}_{t=1}^{T}\) is deterministic and the only source of randomness is treatment assignment itself. In particular, we place no assumption on the homogeneity of the outcomes: they are not necessarily related to each other in any systematic way. Although the potential outcomes are deterministic, we introduce finite population analogues of various statistics. Define the finite population second moments \(S(1)\) and \(S(0)\) and correlation of the treatment and control outcomes \(\rho\) to be
\[S(1)^{2}=\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}\enspace,\quad S(0)^{2}=\frac{1} {T}\sum_{t=1}^{T}y_{t}(0)^{2}\enspace,\quad\text{and}\quad\rho=\frac{\frac{1}{T }\sum_{t=1}^{T}y_{t}(1)y_{t}(0)}{S(1)S(0)}\enspace.\]
Observe that the correlation between treatment and control outcomes is bounded \(\rho\in[-1,1]\). Although we refer to \(\rho\) as the correlation, it also known as the cosine similarity and is generally not equal to the Pearson correlation coefficient. We remark that although the potential outcomes \(y_{t}(1)\) and \(y_{t}(0)\) are deterministic, the observed outcome \(Y_{t}\) is random, as it depends on random treatment assignment. The natural filtration according to these rounds is denoted as \(\mathcal{F}_{1}\ldots\mathcal{F}_{T}\), so that \(\mathcal{F}_{t}\) captures all randomness before the sampling of \(Z_{t}\), i.e. the treatments assigned and outcomes observed in previous rounds.
In this sequential setting, the mechanism for random treatment assignment can incorporate observed outcomes from previous experimental rounds. This treatment mechanism, referred to as the _experimental design_, is selected by and thus known to the experimenter. Formally, the experimental design is a sequence of functions \(\{\Pi_{t}\}_{t=1}^{T}\) with signature \(\Pi_{t}:(\{0,1\}\times\mathbb{R})^{t-1}\rightarrow[0,1]\) such that treatment is assigned as \(\Pr(Z_{t}=1\mid\mathcal{F}_{t})=\Pi_{t}(Z_{1},Y_{1},\ldots Z_{t-1},Y_{t-1})\). We denote \(P_{t}=\Pr(Z_{t}=1\mid\mathcal{F}_{t})\) as the (random) probability of treatment assignment at iteration \(t\), given previously observed treatment assignments and outcomes.
The causal estimand of interest is the _average treatment effect_, defined as
\[\tau=\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)-y_{t}(0)\enspace.\]
The average treatment effect captures the average counterfactual contrast between a unit's outcomes under the two treatment assignments. For example, this could be the average contrast of a clinical participant's heart rate under the drug or placebo. Individual treatment effects are defined as \(\tau_{t}=y_{t}(1)-y_{t}(0)\), but they cannot be estimated without strong additional assumptions, as only one outcome is observed.
A standard estimator of the average treatment effect is the Horvitz-Thompson estimator, which weights observed outcome by the probability of their observation (Narain, 1951; Horvitz and Thompson, 1952). For adaptive designs, the standard Horvitz-Thompson estimator is infeasible because the marginal probability of treatment assignment \(\Pr(Z_{t}=1)\) depends on the unknown potential outcomes. For this reason, we investigate the _adaptive Horvitz-Thompson estimator_, which uses the random (observed) treatment probabilities used at each iteration.
\[\hat{\tau}\triangleq\frac{1}{T}\sum_{t=1}^{T}Y_{t}\Big{(}\frac{\mathbf{1}[Z_{t} =1]}{P_{t}}-\frac{\mathbf{1}[Z_{t}=0]}{1-P_{t}}\Big{)}\enspace,\]
where we recall that \(P_{t}=\Pi_{t}(Z_{1},Y_{1},\ldots Z_{t-1},Y_{t-1})\) is the treatment probability under the experimental design given the observed data. When treatment assignments are non-adaptive and independent, then the adaptive Horvitz-Thompson estimator is the equivalent to the standard Horvitz-Thompson estimator. Such adaptively weighted estimators have been proposed previously in the literature, e.g. [Bowden and Trippa, 2015, Hadad et al., 2021]. Below, we provide positivity conditions under which the adaptive estimator is unbiased, and derive its variance.
**Proposition 2.1**.: _If \(\min\{P_{t},1-P_{t}\}>0\) almost surely for all \(t\in[T]\) then the adaptive Horvitz-Thompson estimator is unbiased: \(\mathbb{E}[\hat{\tau}]=\tau\)._
**Proposition 2.2**.: _The variance of the adaptive Horvitz-Thompson estimator is_
\[T\cdot\mathrm{Var}(\tau)=\frac{1}{T}\sum_{t=1}^{T}\Bigl{(}y_{t}(1)^{2}\, \mathbb{E}\Bigl{[}\frac{1}{P_{t}}\Bigr{]}+y_{t}(0)^{2}\,\mathbb{E}\Bigl{[} \frac{1}{1-P_{t}}\Bigr{]}\Bigr{)}-\frac{1}{T}\sum_{t=1}^{T}\tau_{t}^{2}\enspace.\]
### Asymptotic Framework and Assumptions
Following the convention of design-based inference, we analyze statistical methods within an asymptotic framework [see e.g., Freedman, 2008, Lin, 2013, Savje et al., 2021]. This provides a formal basis for reasoning about the performance of statistical methods as the sample size increases, giving meaning to conventional notions such as consistency and limiting distribution. Formally speaking, the asymptotic sequence of potential outcomes is a triangular array \(\{\{y_{t,T}(1),y_{t,T}(0)\}_{t=1}^{T}\}_{T=1}^{\infty}\), which yields a sequence of estimands \(\{\tau_{T}\}_{T=1}^{\infty}\) and, together with an appropriately specified sequence of experimental design, a sequence of estimators \(\{\hat{\tau}_{T}\}_{T=1}^{\infty}\). Analysis which applies to a fixed \(T\) is said to be finite-sample (e.g. \(\mathbb{E}[\hat{\tau}_{T}]=\tau_{T}\)) whereas analysis which applies to the entire sequence is said to be asymptotic (e.g. \(\tau_{T}-\hat{\tau}_{T}\xrightarrow{P}0\)). Although we use an asymptotic framework, we emphasize that the majority of our results are derived from finite-sample analysis and are merely interpreted through the lens of the asymptotic framework. We drop the subscript \(T\) for notational clarity.
The main regularity conditions we place on the sequence of potential outcomes is below.
**Assumption 1**.: There exist constants \(0<c\leqslant C\) with \(c<1\) such that for all \(T\) in the sequence:
1. **Bounded Moments**: \(c\leqslant\bigl{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(k)^{2}\bigr{)}^{1/2}\leqslant \bigl{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(k)^{4}\bigr{)}^{1/4}\leqslant C\;\forall \;k\in\{0,1\}\).
2. **Bounded Correlation**: \(\rho\geqslant-(1-c)\).
The upper moment bound in Assumption 1 stipulates that the potential outcomes cannot grow too large with the sample size, while the lower moment bound is a type of non-degeneracy condition that prevents an increasingly large fraction of the outcomes going to zero. These assumptions are analogous to finite fourth moment and positive second moment assumptions in an i.i.d. setting. The bounded correlation assumption stipulates that the treatment and control outcomes are not exactly negatively correlated. In this paper, we do not assume that these constants \(C\) and \(c\) are known to the experimenter; however, if the experimenter can correctly specify such bounds (perhaps knowing a priori the scaling of the outcomes) then some of the constant factors in our analysis can be improved. We emphasize here that Assumption 1 places no assumption on the order in which units arrive in the experiment. In this sense, Assumption 1 allows for arbitrary "non-stationarity" or "drift" in the potential outcomes over the experimental rounds. In the next section, these regularity assumptions will ensure that the Neyman variance converges to zero at the parametric rate.
## 3 Neyman Design: The Infeasible Non-Adaptive Ideal
The problem of Adaptive Neyman Allocation is to construct an adaptive experimental design that achieves nearly the same variance as an optimal non-adaptive experimental design, chosen with knowledge of all potential outcomes. The optimal non-adaptive design, referred to as the Neyman Design, is infeasible to implement because it depends on all potential outcomes, which are unknown to the experimenter at the design stage. The goal is that an adaptive experimental design--which can select treatment assignment based on observed outcomes--can gather enough information to perform as well as the infeasible Neyman design.
In order to define the optimal non-adaptive design, we begin by defining the class of Bernoulli designs. Informally, the class of Bernoulli designs consists of non-adaptive designs where each unit receives treatment \(Z_{t}=1\) with probability \(p\), independently of past treatment assignments and observations. Formally, this class is parameterized by a non-adaptive sampling probability \(p\in[0,1]\) such that for all \(t\in[T]\), the treatment policy \(\Pi_{t}\) is a constant function whose value is \(p\). Using Proposition 2.2, we can derive the variance of the Bernoulli design with parameter \(p\in[0,1]\) to be
\[T\cdot V_{p}=S(1)^{2}\Big{(}\frac{1}{p}-1\Big{)}+S(0)^{2}\Big{(}\frac{1}{1-p}- 1\Big{)}+2\rho S(1)S(0)\enspace.\]
From the above, we can see that in order to minimize the variance of the Horvitz-Thompson estimator under the Bernoulli design, we should set the sampling probability \(p\) so as to balance the square of the second moments of treatment and control outcomes. The Neyman Design is the Bernoulli design which minimizes the variance of the Horvitz-Thompson estimator. The corresponding optimal probability \(p\)* and variance \(V_{\text{N}}\) are referred to as the Neyman probability and Neyman variance, respectively. The following proposition derives these quantities in terms of the potential outcomes.
**Proposition 3.1**.: _The Neyman variance is \(T\cdot V_{N}=2(1+\rho)S(1)S(0)\), which is achieved by the Neyman probability \(p^{*}=(1+S(0)/S(1))^{-1}\)._
In order to quantify the reduction in variance achieved by the Neyman design, define the _relative Neyman efficiency with respect to \(p\in[0,1]\)_ to be \(V_{\text{N}}/V_{p}\). Intuitively, this ratio is a scale-free measure which captures the percent reduction in variance of the sequential Horvitz-Thompson estimator under the Neyman design. Formally, the equation for the relative Neyman efficiency is given below:
\[\frac{V_{\text{N}}}{V_{p}}=2(1+\rho)\Bigg{[}\frac{S(1)}{S(0)}\cdot\frac{(1-p)} {p}+\frac{S(0)}{S(1)}\cdot\frac{p}{(1-p)}+\rho\Bigg{]}^{-1}\enspace.\]
Consider the setting where outcomes are uncorrelated, and treatment outcomes are larger than control outcomes, e.g. \(\rho=0\), \(S(1)=4\cdot S(0)\). In this case, the Neyman design is able to achieve less than half the variance of the uniform Bernoulli design (with \(p=1/2\)): we can plug in to 3 to see that in this setting, we have \(V_{N}/V_{p}=0.47\). The improvement is larger if the experimenter makes erroneous assumptions about the relative magnitudes of the treatment and control outcomes and attempts to set \(p\) accordingly: for example, if the experimenter had set \(p=1/4\), incorrectly believing that \(S(1)\leqslant S(0)\), then the Neyman allocation results in a sixfold improvement in variance. Blackwell et al. (2022) derives qualitatively similar analysis of Neyman efficiency for stratified designs.
While the relative Neyman efficiency is helpful in determining the variance reduction afforded by the (infeasible) optimal Bernoulli design, it does not address the main question: which adaptive experimental designs can guarantee similar variance reduction? In the next section, we propose a performance metric which better addresses this question.
## 4 Adaptive Neyman Allocation: An Online Optimization Approach
### Neyman Ratio and Neyman Regret: New Performance Measures
Let \(V\) be the variance of the adaptive experimental design. We introduce our first performance measure of a sequential experimental design for Adaptive Neyman Allocation.
**Definition 1**.: The _Neyman ratio_ of a sequential experimental design is \(\kappa_{T}=(V-V_{\text{N}})/V_{\text{N}}\).
The subscript \(T\) in \(\kappa_{T}\) in included the reflect dependence of the number of rounds \(T\). The Neyman ratio is motivated by the following relationship between the adaptive variance and the optimal Neyman variance:
\[V=\Big{(}\frac{V}{V_{\text{N}}}\Big{)}\cdot V_{\text{N}}=\Big{(}1+\kappa_{T} \Big{)}\cdot V_{\text{N}}\enspace. \tag{1}\]Equation (1) shows that the adaptive design can recover the Neyman variance if and only if the Neyman ratio \(\kappa_{T}\) can be made arbitrarily small. For this reason, we propose the Neyman ratio as a performance measure of a sequential experimental design.
A natural question then becomes: how small can the Neyman ratio \(\kappa_{T}\) be made as the number of rounds \(T\) increases? To answer this question, we view the problem of minimizing the Neyman ratio through the lens of online optimization. To this end, we must re-express the variance of the sequential experimental design. For each round \(t\in[T]\), define the cost function \(f_{t}:[0,1]\rightarrow\mathbb{R}\) as \(f_{t}(p)=y_{t}(1)^{2}/p+y_{t}(0)^{2}/(1-p)\). Observe that by Proposition 2.2, the variance is given by \(T\cdot\operatorname{Var}(\hat{\tau})=\mathbb{E}[\frac{1}{T}\sum_{t=1}^{T}f_{t} (P_{t})]\). This reformulation of variance does not allow us to minimize variance directly, for the usual reason that the outcomes, and thus the cost functions \(f_{t}\), are not fully observed. On the other hand, our goal is only to show that the variance of the adaptive design is comparable to the Neyman variance.
**Definition 2**.: The _Neyman regret_ of a sequential experimental design is
\[\mathcal{R}_{T}=\sum_{t=1}^{T}f_{t}(P_{t})-\min_{p\in[0,1]}\sum_{t=1}^{T}f_{t} (p)\enspace.\]
Recall that \(P_{t}\) is the random treatment probability at round \(t\). The Neyman regret compares the accumulated costs \(f_{t}(P_{t})\) incurred by the adaptive design to the accumulated costs incurred by the optimal Bernoulli design which has access to all potential outcomes. The Neyman regret is random because the sequence \(P_{1},\ldots P_{T}\) is random. The following theorem connects the expected Neyman regret to the Neyman ratio.
**Theorem 4.1**.: _Under Assumption 1, the Neyman ratio is within a constant factor of the \(1/T\)-scaled expected Neyman regret: \(\kappa_{T}=\Theta(\frac{1}{T}\operatorname{\mathbb{E}}[\mathcal{R}_{T}])\)._
Theorem 4.1 demonstrates that the Neyman ratio can be made small by minimizing the expected Neyman regret in an online fashion. In particular, any sublinear bound on the expected Neyman regret ensures that the Neyman ratio goes to zero so that, in large samples, the adaptive design achieves the variance reduction of the optimal Neyman design. Any adaptive design which aims to achieve Neyman variance must, to some extent, minimize expected Neyman regret.
Fortunately, online optimization is a well-studied area with a rich source of techniques from which we may draw inspiration. However, to the best of our knowledge, existing regret minimization algorithms are not well-suited to minimizing the Neyman regret. For example, the multi-arm bandit literature typically defines regret in terms of a finite number of actions that can be taken (Lattimore and Szepesvari, 2020) while Adaptive Neyman Allocation consists of a continuum of actions as \(P_{t}\in[0,1]\). This means that algorithms like UCB (Auer et al., 2002a) and EXP3 (Auer et al., 2002b) are not appropriate for Adaptive Neyman Allocation. Our cost objectives \(f_{t}\) and action space \([0,1]\) are both convex, so the problem of Adaptive Neyman Allocation is an instance of Online Convex Optimization (OCO) (Hazan, 2016). Even so, the problem of minimizing Neyman regret is not immediately amenable to existing algorithms, which typically requires assumptions on the cost functions such as bounded gradients or known Lipschitz parameters. In this setting, the cost functions have gradients which blow up at the boundary and Lipschitz parameters cannot be guaranteed as they rely on the unknown heterogeneous potential outcomes. For these reasons, we must design a new algorithm specifically tailored to Adaptive Neyman Allocation.
### Clip-OGD: A Variant of Online Stochastic Projected Gradient Descent
We present Clip-OGD, which aims to minimize the Neyman regret and thus recover the Neyman variance in large samples. The algorithm is based on the online stochastic projected gradient descent principle, but with a twist: the projection set continuously grows over the rounds. At each round \(t\), a new treatment probability \(P_{t}\) is chosen by updating the previous sampling probability \(P_{t-1}\) in the negative (estimated) gradient direction of the previous cost, and then projecting to an interval \([\delta_{t},1-\delta_{t}]\). Initially, this projection interval contains only the point \(1/2\) and it grows as the rounds increase, allowing for larger amounts of exploitation in later rounds.
The gradient estimator \(G_{t}\) is obtained in the following way: the gradient of \(f_{t}\) at \(P_{t}\) is given as \(f^{\prime}(P_{t})=-\frac{y_{t}(1)^{2}}{P_{t}^{2}}+\frac{y_{t}(0)^{2}}{(1-P_{t}) ^{2}}\). Only one outcome is observed, so we used the adaptive HorvitzThompson principle using the conditional probability \(P_{t}\) to unbiasedly estimate the outcomes. Clip-OGD is formally presented below as Algorithm 1, where the projection operator is defined as \(\mathcal{P}_{c}(x)=\max\{c,\min\{x,1-c\}\}\).
```
Input: Step size \(\eta\) and decay parameter \(\alpha\) Initialize \(P_{0}\gets 1/2\) and \(G_{0}\gets 0\) for\(t=1\dots T\)do Set projection parameter \(\delta_{t}=(1/2)\cdot t^{-1/\alpha}\) Compute new treatment probability \(P_{t}\leftarrow\mathcal{P}_{\delta_{t}}(P_{t-1}-\eta\cdot G_{t-1})\) Sample treatment assignment \(Z_{t}\) as \(1\) with probability \(P_{t}\) and \(0\) with probability \(1-P_{t}\) Observe outcome \(Y_{t}=\mathbf{1}[Z_{t}=1]y_{t}(1)+\mathbf{1}[Z_{t}=0]y_{t}(0)\) Construct gradient estimator \(G_{t}=Y_{t}^{2}\Big{(}-\frac{1[Z_{t}=1]}{P_{t}^{2}}+\frac{1[Z_{t}=0]}{(1-P_{t} )^{3}}\Big{)}\) end for
```
**Algorithm 1**Clip-OGD
Unlike the two-stage design of (Hahn et al., 2011; Blackwell et al., 2022), Clip-OGD does not feature explicit explore-exploit stages, but rather performs both of these simultaneously. The trade-off is implicitly controlled through parameters \(\eta\) and \(\alpha\): smaller values of \(\eta\) limit the amount of that sampling probabilities can update and, likewise, larger values of \(\alpha\) prevent extreme probabilities in earlier stages. Because the gradient of the cost functions are inversely proportional to the treatment probabilities, limiting the extremeness of the treatment probabilities in this way ensures that the gradient estimates do not increase at a fast rate. By appropriately setting input parameters, Clip-OGD achieves \(\widetilde{\mathcal{O}}(\sqrt{T})\) expected Neyman regret, where the \(\widetilde{\mathcal{O}}(\cdot)\) notation hides sub-polynomial factors.
**Theorem 4.2**.: _Under Assumption 1 the parameter values \(\eta=\sqrt{1/T}\) and \(\alpha=\sqrt{5\log(T)}\) ensure the expected Neyman regret of Clip-OGD is asymptotically bounded: \(\mathbb{E}\big{[}\mathcal{R}_{T}\big{]}\leq\widetilde{\mathcal{O}}\big{(}\sqrt {T}\big{)}\)._
Theorem 4.2 answers, in the affirmative, that it is possible to construct an adaptive experimental design whose variance recovers that of the Neyman variance, in large samples. Note that the amount of exploration (as given by the parameters \(\eta\) and \(\alpha\)) should be increasing with \(T\) in order to recover these regret bounds. In Appendix C, we show that Clip-OGD is somewhat robust to different values of the decay parameter, i.e. for any value \(\alpha>5\), the expected regret will be sublinear. We also show that if the experimenter presumes to have correctly specified bounds \(C\) and \(c\) appearing in Assumption 1, then the step size can be modified to improve the constant factors in the Neyman regret bound, which may lead to improved performance in moderate sample sizes. We conjecture that the minimax rate for expected Neyman regret is \(\mathcal{O}(\sqrt{T})\), but proving this is beyond the scope of the current paper--we only remark that we do not know it to immediately follow from any existing regret lower bounds for OCO.
## 5 Inference in Large Samples
The proposed Clip-OGD was constructed to ensure that the variance of the adaptive Horvitz-Thompson estimator quickly approaches the Neyman variance. In this section, we provide confidence intervals for the average treatment effect which also enjoy reduced width compared to non-adaptive counterparts.
A necessary condition for variance estimation is that the variance itself cannot be going to zero too quickly. In design-based inference, it is common to directly posit a so-called "non-superefficient" assumption that \(\operatorname{Var}(\hat{\tau})=\Omega(1/T)\)(Aronow and Samii, 2017; Leung, 2022; Harshaw et al., 2022). The non-superefficiency assumption may be seen as an additional regularity assumption on the outcomes, e.g. preventing \(y_{t}(1)=y_{t}(0)=0\) for all \(t\in[T]\). In this work, a similar lower bound on the rate of the adaptive variance is obtained through a different, perhaps more transparent, assumption on the expected Neyman regret.
**Assumption 2**.: The outcome sequence is not overly-fit to Clip-OGD: \(-\mathbb{E}[\mathcal{R}_{T}]=o(T)\).
While we have shown that \(\mathbb{E}[\mathcal{R}_{T}]\leq\widetilde{\mathcal{O}}(\sqrt{T})\), the Neyman regret could in principle be negative if the adaptive design achieves variance which is strictly smaller than the best Bernoulli design. While this seems unlikely to happen for "typical" outcomes, it is not impossible. Assumption 2 rules out these edge-case settings. We suspect that Assumption 2 would not be necessary in an i.i.d. setting, but proving this seems beyond the scope of the current paper. As shown in the appendix, Assumptions 1 and 2 imply that the adaptive variance achieves the parametric rate: \(\mathrm{Var}(\hat{\tau})=\Theta(1/T)\).
### Variance Estimation
In this section, we provide a variance estimator and show its stability in large samples. Rather than estimating the adaptive variance (which has no simple closed form), our approach is to estimate the Neyman variance directly. For an adaptive design achieving sublinear expected Neyman regret, these two quantities are asymptotically equivalent. In this way, our variance estimator may be appropriate not only for Clip-OGD, but for any adaptive design achieving sublinear expected Neyman regret.
Recall that the Neyman variance is given by \(T\cdot V_{\text{N}}=2(1+\rho)S_{1}S_{0}\), where \(\rho\) is the outcome correlation, \(S_{1}\) is the second moment of treatment outcomes and \(S_{0}\) is the second moment of control outcomes. Unfortunately, the outcome correlation term is generally not estimable without strong assumptions in a design-based framework. Indeed, the difficulty is that terms like \(y_{t}(1)y_{t}(0)\) are unobservable due to the fundamental problem of causal inference (Imbens and Rubin, 2015). A common solution to the problem is to opt for a conservative estimate of the variance, which will ensure validity of resulting confidence intervals.
We propose estimating the following upper bound on the variance: \(T\cdot\text{VB}=4S_{0}S_{1}\). This upper bound on the Neyman variance is tight (i.e. \(\text{VB}=V_{\text{N}}\)) when outcome correlation satisfies \(\rho=1\). For example, this occurs when all individual treatment effects are zero, i.e. \(y_{t}(1)=y_{t}(0)\) for all \(t\in[T]\). Conversely, the upper bound will become looser for smaller values of the outcome correlation. In this sense, our bound resembles both the Neyman bound and the Aronow-Samii bound (Neyman, 1923; Aronow and Samii, 2013). It may be possible to use the recent insights of Harshaw et al. (2021) in order to construct variance bounds which are tight in other scenarios, but that is beyond the scope of the current paper. Our variance estimator is defined as
\[T\cdot\widehat{\text{VB}}\triangleq 4\sqrt{\left(\frac{1}{T}\sum_{t=1}^{T}y_{t }^{2}\frac{\mathbf{1}[z_{t}=1]}{p_{t}}\right)\cdot\left(\frac{1}{T}\sum_{t=1}^ {T}y_{t}^{2}\frac{\mathbf{1}[z_{t}=0]}{1-p_{t}}\right)}\enspace,\]
which is essentially a plug-in Horvitz-Thompson estimator for the second moments. Theorem 5.1 shows the error of the normalized variance estimator converges at a parametric rate.
**Theorem 5.1**.: _Under Assumptions 1 and 2, and the parameters stated in Theorem 4.2, the error of the normalized variance estimator under Clip-OGD is \(T\cdot\widehat{\text{VB}}-T\cdot\text{VB}=\widehat{\mathcal{O}}_{p}(T^{-1/2})\)._
### Confidence Intervals
The variance estimator may be used to construct confidence intervals for the average treatment effect. This offers experimenters standard uncertainty quantification techniques when running Clip-OGD. The following corollary shows that the resulting Chebyshev-type intervals are asymptotically valid.
**Corollary 5.1**.: _Under Assumptions 1 and 2, and parameters stated in Theorem 4.2, Chebyshev-type intervals are asymptotically valid: for all \(\alpha\in(0,1]\), \(\liminf_{T\rightarrow\infty}\Pr(\tau\in\hat{\tau}\pm\alpha^{-1/2}\sqrt{ \widehat{\text{VB}}})\geq 1-\alpha\)._
While these confidence intervals are asymptotically valid under our regularity assumptions, they may be overly conservative in general. In particular, they will over cover when the Chebyshev tail bound is loose. We conjecture that the adaptive Horvitz-Thompson estimator under Clip-OGD satisfies a Central Limit Theorem, which would imply asymptotic validity of the narrower Wald-type intervals where \(\alpha^{-1/2}\) scaling is replaced with the corresponding normal quantile, \(\Phi^{-1}(1-\alpha/2)\). As discussed in Section 7, the adaptive estimator appears approximately normal in simulations. Until this is formally shown, we recommend experimenters exhibit caution when using Wald-type confidence intervals for the adaptive Horvitz-Thompson estimator under Clip-OGD.
## 6 Considering Alternative Designs
Explore-Then-CommitTwo-stage adaptive designs have been proposed for the purpose of variance reduction (Hahn et al., 2011; Blackwell et al., 2022). Due to its similarities to algorithms in the banditsliterature, we call these types of designs Explore-Then-Commit (ETC) (Lattimore and Szepesvari, 2020). At a high level, an Explore-then-Commit design runs the Bernoulli design with \(p=1/2\) for \(T_{0}\leqslant T\) iterations, uses the collected data to estimate \(p^{\star}\) by \(\widehat{p}^{\star}\), and then runs the Bernoulli design with \(p=\widehat{p}^{\star}\) for the remaining \(T_{1}=T-T_{0}\) iterations. These ETC designs are conceptually simpler than Clip-OGD, and may be reasonable to apply in more restricted settings where changing the treatment probabilities is difficult or costly. However, we provide the following negative result which shows that they can suffer linear Neyman regret.
**Proposition 6.1**.: _For all explore phase lengths \(T_{0}\) satisfying \(T_{0}=\Omega(T^{\epsilon})\) for some \(\epsilon>0\), there exist a class of potential outcomes sequences satisfying Assumption 1 such that the Neyman regret under Explore-then-Commit is linear: \(\mathcal{R}_{T}=\Omega_{p}(T)\)._
The specific class of potential outcomes referenced in Proposition 6.1 is constructed explicitly in Appendix E.1. ETC designs suffer larger variance when the estimated \(\widehat{p}^{\star}\) may be far from the true optimal probability \(p^{\star}\). In a design-based setting, this happens when the units in the explore phase are not representative of the entire sequence. Formulating conditions under which Explore-then-Commit designs achieve low Neyman regret is beyond the scope of this paper, but the proof of Proposition 6.1 shows that additional regularity conditions on the order of the units will be required.
Multi Arm Bandit AlgorithmsMulti Arm Bandit (MAB) algorithms are often used for adaptive decision making settings, from online advertising to product development. The goal of MAB algorithms is to minimize the outcome regret, which measures the contrast between the overall value obtained from the actions relative to the value of the best action. The outcome regret is conventionally defined as \(\mathcal{R}_{T}^{\text{outcome}}=\max_{k\in\{0,1\}}\sum_{t=1}^{T}y_{t}(k)- \sum_{t=1}^{T}Y_{t}\). In certain contexts, minimizing outcome regret may be a more desirable goal than estimating a treatment effect to high precision. However, the following proposition illustrates that these two objectives are generally incompatible.
**Proposition 6.2**.: _Let \(\mathcal{A}\) be an adaptive treatment algorithm achieving sublinear outcome regret, i.e. there exists \(q\in(0,1)\) such that \(\mathbb{E}[\mathcal{R}_{T}^{\text{outcome}}]\leqslant O(T^{q})\) for all outcome sequences satisfying Assumption 1. Then, there exists a class of outcome sequences satisfying Assumption 1 on which \(\mathcal{A}\) suffers super-linear Neyman regret, i.e. \(\mathbb{E}[\mathcal{R}_{T}]\geqslant\Omega(T^{2-q})\)._
Proposition 6.2 demonstrates that the outcome regret and the Neyman regret cannot generally be simultaneously minimized. In particular, sublinear outcome regret implies that the variance of the estimator must converge slower than the \(\Theta(1/T)\) parametric rate. This result contributes to a growing body of work which highlights trade-offs between various possible objectives in sequential decision making (Burtini et al., 2015). It is beyond the scope of the current paper to determine how such trade-offs ought to be resolved, though Appendix F discusses ethical considerations.
## 7 Numerical Simulations
We evaluate the performance of Clip-OGD and Explore-then-Commit (ETC) for the purpose of Adaptive Neyman Allocation on the field experiment of Groh and McKenzie (2016), which investigates the effect of macro-insurance on micro-enterprises in post-revolution Egypt2. The experimental units are 2,961 clients of Egypt's largest microfinance organization and the treatment was a novel insurance product. Several outcomes were recorded including whether the clients took on loans, introduced a new product or service, and the amount invested in machinery or equipment following treatment. To allocate treatment, Groh and McKenzie (2016) use a non-adaptive matched pair experimental design. Our goal here is not to provide a new analysis of this study, but rather to construct a plausible experimental setting under which to evaluate adaptive experimental designs.
Footnote 2: A repository for reproducing simulations is: [https://github.com/crharshaw/Clip-OGD-sims](https://github.com/crharshaw/Clip-OGD-sims)
In our simulations, we focus on the numerical outcome "invested in machinery or equipment". The experimental data contains only observed outcomes, so we must impute the missing potential outcomes in order to simulate the experiment. We impute outcomes using the model \(y_{t}(1)-y_{t}(0)=\tau+\gamma_{t}\), where \(\tau=90,000\) and \(\gamma_{1}\ldots\gamma_{T}\sim\mathcal{N}(0,\sigma^{2})\) are independent with \(\sigma=5,000\). This randomness is invoked only to impute potential outcomes, i.e. not re-sampled during each run of the experiment. In order to increase the sample size, we create a larger population by repeating this processes \(5\) times, which yields a total of \(14,445\) units after those with missing entries are removed. Units are shuffled to appear in an arbitrary order and outcomes are normalized to be in the range\([0,1]\)Figure 1 presents two plots illustrating how the variance of the adaptive HT estimator varies with different designs. The \(x\) axis contains the number of rounds \(T\) and the \(y\) axis contains the normalized variance \(T\cdot\operatorname{Var}(\hat{\tau})\) under the designs. For each value of \(T\), we take the population to be the first \(T\) units in the sequence. Clip-OGD is run with the parameters recommended in Theorem 4.2 and ETC is run with \(T_{0}=T^{1/3}\) so that the exploration phase grows with \(T\). The variance under Clip-OGD and ETC is estimated empirically from 50,000 runs of the experiment, while the variance under the Bernoulli and Neyman designs is computed exactly.
In Figure 0(a), we observe that Clip-OGD requires about \(T=4,000\) samples to achieve variance equal to Bernoulli, but eventually converges to the Neyman variance. As discussed in Section 4.2, it may be possible to improve the convergence rate by incorporating knowledge of the outcome moments in the design parameters. On the other hand, ETC remains comparable with Bernoulli even for small values of \(T\), but remains far away from the Neyman design for large samples. In Figure 0(b), a similar simulation is run, except that the potential outcomes of the first 100 units are swapped, so that the first units have negative individual treatment effects. While this produces little effect on the performance of Clip-OGD, it substantially worsens the performance of ETC, which relies on the early outcomes to estimate an optimal treatment probability. In particular, ETC performs worse than Bernoulli under this minor modification--even in large samples--corroborating Proposition 6.1.
In the appendix, we evaluate the proposed confidence intervals, showing that Clip-OGD enjoys intervals of reduced width. We show that normal based intervals cover at the nominal level and provide further evidence that the estimator is asymptotically normal under Clip-OGD. We run additional simulations to investigate the sensitivity of the step size, and to demonstrate that additional baselines which were not designed for Neyman allocation indeed perform poorly.
## 8 Conclusion
In this paper, we have proposed the Neyman ratio and Neyman regret as a performance measure of experimental designs for the Adaptive Neyman Allocation problem. To this end, we proposed Clip-OGD which achieves \(\widehat{\mathcal{O}}(\sqrt{T})\) expected Neyman regret under mild regularity conditions on the outcomes. This formally establishes--for the first time--the existence of adaptive experimental designs under which the variance of the effect estimator quickly approaches the Neyman variance. Finally, we have provided a variance estimator which provides experimenters with uncertainty quantification methods when using Clip-OGD. The main drawback of our analysis is that it is most relevant for moderate and large sample sizes; in particular, our work does not properly address whether adaptive designs are always beneficial in small samples.
There are several research directions which can improve relevance of this methodology to practice. First, establishing conditions under which a central limit theorem holds for Clip-OGD will yield smaller and thus more desirable Wald-type confidence intervals. Second, investigations into batched treatment allocations and delayed observations of outcomes would allow practitioners more flexibility in their designs. Finally, investigating variants of Adaptive Neyman Allocation in the presence of interference (Aronow and Samii, 2017; Harshaw et al., 2022) would allow for more realistic inference in complex settings, e.g. social network experiments and marketplace experiments.
Figure 1: Normalized Variance of Adaptive Estimator under Experimental Designs
## Acknowledgments and Disclosure of Funding
We thank P.M. Aronow, Molly Offer-Westort, Alexander Rakhlin, Benjamin Recht, Fredrik Savje, and Daniel Spielman for insightful discussions which helped shaped this work. Part of this work was done while Christopher Harshaw was visiting the Simons Institute for the Theory of Computing. Christopher Harshaw gratefully acknowledges support from Foundations of Data Science Institute (FODSI) NSF grant DMS2023505.
## References
* Aronow and Samii (2013) P.M. Aronow and Cyrus Samii. Conservative variance estimation for sampling designs with zero pairwise inclusion probabilities. _Survey Methodology_, 39(1):231-241, 2013.
* Aronow and Samii (2017) P.M. Aronow and Cyrus Samii. Estimating average causal effects under general interference. _Annals of Applied Statistics_, 11(4):1912-1947, 2017. doi: 10.1214/16-aaos1005.
* Auer et al. (2002a) Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. _Machine Learning_, 47(2-3):235-256, may 2002a. ISSN 0885-6125. doi: 10.1023/A:1013689704352.
* Auer et al. (2002b) Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. _SIAM Journal on Computing_, 32(1):48-77, 2002b. doi: 10.1137/S0097539701398375.
* Barker et al. (2009) AD Barker, CC Sigman, GJ Kelloff, NM Hylton, DA Berry, and LJ Esserman. I-spy 2: An adaptive breast cancer trial design in the setting of neoadjuvant chemotherapy. _Clinical Pharmacology & Therapeutics_, 86(1):97-100, 2009. doi: 10.1038/clpt.2009.68.
* Blackwell et al. (2022) Matthew Blackwell, Nicole E. Pashley, and Dominic Valentino. Batch adaptive designs to improve efficiency in social science experiments. Working paper, Harvard University, 2022. URL [https://www.mattblackwell.org/files/papers/batch_adaptive.pdf](https://www.mattblackwell.org/files/papers/batch_adaptive.pdf).
* Bowden and Trippa (2015) Jack Bowden and Lorenzo Trippa. Unbiased estimation for response adaptive clinical trials. _Statistical methods in medical research_, 26, 08 2015.
* Burtini et al. (2015) Giuseppe Burtini, Jason Loeppky, and Ramon Lawrence. A survey of online experiment design with the stochastic multi-armed bandit. _arXiv preprint arXiv:1510.00757_, 2015.
* Cai and Rafi (2022) Yong Cai and Ahnaf Rafi. On the performance of the neyman allocation with small pilots. arXiv:2206.04643, 2022.
* Eisele (1994) Jeffrey R. Eisele. The doubly adaptive biased coin design for sequential clinical trials. _Journal of Statistical Planning and Inference_, 38(2):249-261, 1994. ISSN 0378-3758.
* Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1978) National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical principles and guidelines for the commission for the protection of human subjects of biomedical and behavioral research. Technical report, US Department of Health, Education, and Welfare, 1978.
* Freedman (2008) David A. Freedman. On regression adjustments to experimental data. _Advances in Applied Mathematics_, 40:180-193, 2008. doi: 10.1016/j.aam.2006.12.003.
* Groh and McKenzie (2016) Matthew Groh and David McKenzie. Macroinsurance for microenterprises: A randomized experiment in post-revolution Egypt. _Journal of Development Economics_, 118:13-25, 2016. doi: 10.1016/j.jdeveco.2015.08.003.
* Hadad et al. (2021) Vitor Hadad, David A. Hirshberg, Ruohan Zhan, Stefan Wager, and Susan Athey. Confidence intervals for policy evaluation in adaptive experiments. _PNAS_, 118(15), 2021. doi: 10.1073/pnas.2014602118.
* Hahn et al. (2011) Jinyong Hahn, Keisuke Hirano, and Dean Karlan. Adaptive experimental design using the propensity score. _Journal of Business & Economic Statistics_, 29(1):96-108, 2011.
* Harshaw et al. (2015)Dae Woong Ham, Iavor Bojinov, Michael Lindon, and Martin Tingley. Design-based confidence sequences for anytime-valid causal inference. arXiv:2210.08639, 2022.
* Harshaw et al. (2021) Christopher Harshaw, Joel A. Middleton, and Fredrik Savje. Optimized variance estimation under interference and complex experimental designs. arXiv:2112.01709, 2021.
* Harshaw et al. (2022) Christopher Harshaw, Fredrik Savje, and Yitan Wang. A design-based riesz representation framework for randomized experiments. arXiv:2210.08698, 2022.
* Hayre and Turnbull (1981) Lakhbir S. Hayre and Bruce W. Turnbull. Estimation of the odds ratio in the two-armed bandit problem. _Biometrika_, 68(3):661-668, 1981.
* Hazan (2016) Elad Hazan. Introduction to online convex optimization. _Foundations and Trends(r) in Optimization_, 2(3-4):157-325, 2016. ISSN 2167-3888.
* Horvitz and Thompson (1952) D. G. Horvitz and D. J. Thompson. A generalization of sampling without replacement from a finite universe. _Journal of the American Statistical Association_, 47(260):663-685, 1952. doi: 10.1080/01621459.1952.10483446.
* 1080, 2021.
* Hu and Rosenberger (2003) Feifang Hu and William F. Rosenberger. Optimality, variability, power: Evaluating response-adaptive randomization procedures for treatment comparisons. _Journal of the American Statistical Association_, 98(463):671-678, 2003.
* Imbens and Rubin (2015) Guido W Imbens and Donald B Rubin. _Causal inference in statistics, social, and biomedical sciences_. Cambridge University Press, 2015.
* Lattimore and Szepesvari (2020) Tor Lattimore and Csaba Szepesvari. _Bandit Algorithms_. Cambridge University Press, 2020.
* Leung (2022) Michael P. Leung. Causal inference under approximate neighborhood interference. _Econometrica_, 90(1):267-293, 2022.
* Lin (2013) Winston Lin. Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's critique. _Annals of Applied Statistics_, 7(1):295-318, 2013.
* Narain (1951) R. Narain. On sampling without replacement with varying probabilities. _Journal of the Indian Society of Agricultural Statistics_, 3:169---175, 1951.
* Neyman (1990) Jerzy Neyman. On the application of probability theory to agricultural experiments. Essay on principles. Section 9. _Statistical Science_, 5(4):465-472, 1923. Reprinted in 1990.
* Neyman (1934) Jerzy Neyman. On the two different aspects of the representative method: The method of stratified sampling and the method of purposive selection. _Journal of the Royal Statistical Society_, 97(4):558-625, 1934. ISSN 09528385.
* Offer-Westort et al. (2021) Molly Offer-Westort, Alexander Coppock, and Donald P. Green. Adaptive experimental design: Prospects and applications in political science. _American Journal of Political Science_, 65(4):826-844, 2021.
* 535, 1952.
* Rosenberger et al. (2001) W Rosenberger, N Stallard, Anastasia Ivanova, C Harper, and M Ricks. Optimal adaptive designs for binary response trials. _Biometrics_, 57:909-13, 10 2001.
* Rubin (1980) Donald B. Rubin. Comment: Randomization analysis of experimental data. _Journal of the American Statistical Association_, 75(371):591, 1980.
* Savje et al. (2021) Fredrik Savje, P. M. Aronow, and Michael G. Hudgens. Average treatment effects in the presence of unknown interference. _Annals of Statistics_, 49(2):673-701, 2021.
* Savje et al. (2021)Eric M. Schwartz, Eric T. Bradlow, and Peter S. Fader. Customer acquisition via display advertising using multi-armed bandit experiments. _Marketing Science_, 36(4):500-522, 2017.
* Solomon and Zacks (1970) H. Solomon and S. Zacks. Optimal design of sampling from finite populations: A critical review and indication of new research areas. _Journal of the American Statistical Association_, 65(330):653-677, 1970.
* 186, 1945.
* Xu et al. (2014) Yanxun Xu, Lorenzo Trippa, Peter Muller, and Yuan Ji. Subgroup-based adaptive (suba) designs for multi-arm biomarker trials. _Statistics in Biosciences_, 8, 02 2014.
* Zhang et al. (2020) Kelly Zhang, Lucas Janson, and Susan Murphy. Inference for batched bandits. In _Advances in Neural Information Processing Systems_, volume 33, pages 9818-9829. Curran Associates, Inc., 2020.
* Zhang et al. (2021) Kelly Zhang, Lucas Janson, and Susan Murphy. Statistical inference with m-estimators on adaptively collected data. In _Advances in Neural Information Processing Systems_, volume 34, pages 7460=7471. Curran Associates, Inc., 2021.
* Zhang and Rosenberger (2006) Lanju Zhang and William F. Rosenberger. Response-adaptive randomization for clinical trials with continuous outcomes. _Biometrics_, 62(2):562-569, 2006.
## Appendix
### Table of Contents
* A Additional Simulation Results
* A.1 Confidence Intervals
* A.2 Sensitivity to Step Size
* A.3 Alternative Designs
* B General Analysis of Adaptive Neyman Allocation
* B.1 Analysis of Adaptive Horvitz-Thompson Estimator (Propositions 2.1 and 2.2)
* B.2 Derivation of the Neyman Design (Proposition 3.1)
* B.3 Equivalence of Neyman Ratio and Neyman Regret (Theorem 4.1)
* C Analysis of Neyman Regret for Clip-OGD
* C.1 Proof of Theorem 4.2
* C.2 Selecting Parameters When Moment Bounds are Known
* D Analysis for Inference in Large Samples
* D.1 Conservative Variance Estimator (Theorem 5.1)
* D.2 Valid Confidence Intervals (Corollary 5.1)
* E Analysis of Alternative Designs
* E.1 Analysis of Explore-then-Commit (Proposition 6.1)
* E.2 Analysis of Designs for Outcome Regret (Proposition 6.2)
* F Ethical Considerations
Additional Simulation Results
In this section, we present additional simulation results on the Groh and McKenzie (2016) data. We refer to Section 7 for a review of the experimental set-up. In this section, we focus on the full dataset where \(T=14,445\). Simulations were run on a 2019 MacBook Pro with 2.4 GHz Quad-Core Intel Core i5 and 16 GB LPDDR3 RAM.
### Confidence Intervals
Table 1 presents the Chebyshev-based and Normal-based intervals for the Bernoulli design \((p=1/2)\) and Clip-OGD. We see that while Chebyshev over-covers, the normal-based confidence intervals cover at the nominal level with reduced width for both designs. The relative Neyman efficiency on this dataset is somewhat close to \(1\), so that the reduction of the width of the confidence intervals afforded by Clip-OGD is present, though modest. The coverage of the normal-based confidence intervals provides further evidence supporting our conjecture that the adaptive Horvitz-Thompson estimator is asymptotically normal under Clip-OGD.
Figure 2 plots the histogram of the studentized adaptive Horvitz-Thompson estimator under Clip-OGD. By studentized, we mean that the histogram is plotting the draws of the random variable
\[Z=\frac{\tau-\hat{\tau}}{\sqrt{\text{Var}(\hat{\tau})}}\enspace.\]
We estimate the standard deviation empirically from 50,000 runs of the experiment. The estimator is said to be asymptotically normal if \(Z\overset{d}{\rightarrow}\mathcal{N}(0,1)\). Figure 2 provides evidence that asymptotic normality is likely to hold in this setting. Formally establishing asymptotic normality is beyond the scope of the current paper as it would involve very different analytic techniques than those used to establish sublinear Neyman regret.
### Sensitivity to Step Size
In this section, we explore through simulations how the performance of Clip-OGD depends on the step size.
\begin{table}
\begin{tabular}{l|c c c c} & Chebyshev Width & Chebyshev Coverage & Normal Width & Normal Coverage \\ \hline Bernoulli (\(p=1/2\)) & 0.0541 & 100\% & 0.0237 & 95.21\% \\ Clip-OGD & 0.0507 & 99.99\% & 0.0222 & 95.22\% \\ \end{tabular}
\end{table}
Table 1: 95% Confidence Intervals for Bernoulli and Clip-OGD
Figure 2: Histogram of Studentized Adaptive Horvitz–Thompson estimator under Clip-OGD (\(T=14,445\))
In Figure 3, we re-create Figure 0(a) in the main paper but we have added instances of Clip-OGD with different step sizes of the forms \(\eta=c/\sqrt{T}\) for \(c\in\{0.25,0.5,1.0,2.0,4.0\}\). We find that smaller step sizes improve convergence rates, effectively removing the "overhead of adaptivity" in this example. However, because the randomized experiment can only be run once, experimenters will typically not be able to try many step sizes. While it remains an open question about how to select a step size which best mitigates the "overhead of adaptivity", our recommendation of \(1/\sqrt{T}\) still maintains good convergence properties.
### Alternative Designs
In this section, we conduct additional experiments to compare the results of Clip-OGD to alternative experimental designs which are not made for Neyman allocation. Indeed, we find that the alternative designs incur a high variance, relative to Clip-OGD and the Neyman variance.
In Figure 4, we plot the variance of the adaptive designs on an unnormalized scale. We include "Doubly Biased Coin Design" proposed by Eisele (1994) as DBCD in Fig 3(a) and both DBCD and EXP3 in Fig 3(b). We find that both DBCD and EXP3 suffer from higher variance. This is because they are not designed for Adaptive Neyman Allocation as defined in this paper: DBCD targets a different allocation rule and EXP3 minimizes outcome regret so that essentially only one arm is pulled. Both of these algorithms let the sampling probabilities \(P_{t}\) get too close to the boundary of \([0,1]\), resulting in excessively large variance.
Figure 4: Comparison of (Unnormalized) Variances:
Figure 3: Comparing Step Sizes:
General Analysis of Adaptive Neyman Allocation
In this section, we provide general analysis relevant for the problem of Adaptive Neyman Allocation. In Section B.1, we analyze the adaptive Horvitz-Thompson estimator. In Section B.2, we derive the optimal non-adaptive Neyman design in terms of the potential outcomes. In Section B.3, we show the equivalence of Neyman ratio and expected Neyman regret. For completeness, all propositions re-appear in the appendix.
### Analysis of Adaptive Horvitz-Thompson Estimator (Propositions 2.1 and 2.2)
Throughout these proofs, we break up the adaptive Horvitz-Thompson estimator into the sum of individual estimators. For each \(t\in[T]\), define
\[\hat{\tau}_{t}=Y_{t}\Big{(}\frac{\mathbf{1}[Z_{t}=1]}{P_{t}}-\frac{\mathbf{1} [Z_{t}=0]}{1-P_{t}}\Big{)}\]
so that the sequential Horvitz-Thompson estimator is equal to \(\hat{\tau}=(1/T)\sum_{t=1}^{T}\hat{\tau}_{t}\). This mirrors how the average treatment effect is the average of individual treatment effects, i.e. \(\tau=(1/T)\sum_{t=1}^{T}\tau_{t}\).
We begin by proving Proposition 2.1, which establishes the unbiasedness of the adaptive Horvitz-Thompson estimator, subject to a positivity condition.
**Proposition 2.1**.: _If \(\min\{P_{t},1-P_{t}\}>0\) almost surely for all \(t\in[T]\) then the adaptive Horvitz-Thompson estimator is unbiased: \(\mathbb{E}[\hat{\tau}]=\tau\)._
Proof.: Observe that by linearity of expectation, we can break the expectation of the adaptive Horvitz-Thompson estimator as
\[\mathbb{E}[\hat{\tau}]=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}[\hat{\tau}_{t}]\enspace.\]
Thus, it suffices to show that the individual effect estimators are unbiased: \(\mathbb{E}[\hat{\tau}_{t}]=\tau_{t}\). Observe that if the positivity condition holds, then we have that the conditional expectation may be computed as
\[\mathbb{E}[\hat{\tau}_{t}\mid\mathcal{F}_{t}] =\mathbb{E}\Big{[}Y_{t}\Big{(}\frac{\mathbf{1}[Z_{t}=1]}{P_{t}}- \frac{\mathbf{1}[Z_{t}=0]}{1-P_{t}}\Big{)}\mid\mathcal{F}_{t}\Big{]}\] \[=P_{t}\cdot\Big{(}\frac{y_{t}(1)}{P_{t}}\Big{)}+(1-P_{t})\cdot \Big{(}\frac{y_{t}(0)}{1-P_{t}}\Big{)}\] \[=y_{t}(1)-y_{t}(0)\] \[=\tau_{t}\enspace.\]
The result follows by iterated expectation, \(\mathbb{E}[\hat{\tau}_{t}]=\mathbb{E}[\mathbb{E}[\hat{\tau}_{t}\mid\mathcal{F }_{t}]]=\tau_{t}\).
Next, we prove Proposition 2.2, which derives the variance of the adaptive Horvitz-Thompson estimator.
**Proposition 2.2**.: _The variance of the adaptive Horvitz-Thompson estimator is_
\[T\cdot\mathrm{Var}(\hat{\tau})=\frac{1}{T}\sum_{t=1}^{T}\Bigl{(}y_{t}(1)^{2} \,\mathbb{E}\Bigl{[}\frac{1}{P_{t}}\Bigr{]}+y_{t}(0)^{2}\,\mathbb{E}\Bigl{[} \frac{1}{1-P_{t}}\Bigr{]}\Bigr{)}-\frac{1}{T}\sum_{t=1}^{T}\tau_{t}^{2}\enspace.\]
Proof.: We begin by decomposing the variance of the adaptive Horvitz-Thompson estimator as
\[\mathrm{Var}(\hat{\tau})=\mathrm{Var}\Bigl{(}\frac{1}{T}\sum_{t=1}^{T}\hat{ \tau}_{t}\Bigr{)}=\frac{1}{T^{2}}\sum_{t=1}^{T}\sum_{s=1}^{T}\mathrm{Cov}(\hat {\tau}_{t},\hat{\tau}_{s})\enspace.\]
We now aim to compute each of these individual covariance terms. Before continuing, observe that, by construction, the individual effect estimators are conditionally unbiased \(\mathbb{E}[\hat{\tau}_{t}\mid\mathcal{F}_{t}]=\tau_{t}\). It follows by iterated expectation that the individual effect estimators are unbiased (unconditionally), i.e. \(\mathbb{E}[\hat{\tau}_{t}]=\tau_{t}\). Suppose that \(s>t\). In this case, the covariance between the individual estimators is equal to zero as,
\[\mathrm{Cov}(\hat{\tau}_{t},\hat{\tau}_{s}) =\mathbb{E}[\hat{\tau}_{t}\hat{\tau}_{s}]-\mathbb{E}[\hat{\tau}_{ t}]\,\mathbb{E}[\hat{\tau}_{s}]\] \[=\mathbb{E}[\hat{\tau}_{t}\,\mathbb{E}[\hat{\tau}_{s}\mid\mathcal{ F}_{s}]]-\tau_{t}\tau_{s}\] \[=\tau_{s}\,\mathbb{E}[\hat{\tau}_{t}]-\tau_{t}\tau_{s}\]\[=\tau_{s}\tau_{t}-\tau_{t}\tau_{s}\] \[=0\enspace.\]
Now let us compute the variance of an individual effect estimator. Observe that the variance may be decomposed as
\[\operatorname{Var}(\hat{\tau}_{t})=\mathbb{E}[\hat{\tau}_{t}^{2}]-\mathbb{E}[ \hat{\tau}_{t}]^{2}\enspace.\]
Because the individual estimator is unbiased, we have that \(\mathbb{E}[\hat{\tau}_{t}]^{2}=\tau_{t}^{2}\). Let us now analyze the first term.
\[\mathbb{E}[\hat{\tau}_{t}^{2}] =\mathbb{E}\big{[}\mathbb{E}[\hat{\tau}_{t}^{2}\mid\mathcal{F}_{t }]\big{]}\] (iterated expectation) \[=\mathbb{E}\Big{[}y_{t}(1)^{2}\frac{1}{P_{t}}+y_{t}(0)^{2}\frac{ 1}{1-P_{t}}\Big{]}\] \[=y_{t}(1)^{2}\cdot\mathbb{E}\Big{[}\frac{1}{P_{t}}\Big{]}+y_{t}(0 )^{2}\cdot\mathbb{E}\Big{[}\frac{1}{1-P_{t}}\Big{]}\enspace.\]
Thus, this establishes that the variance of an individual estimator is equal to
\[\operatorname{Var}(\hat{\tau}_{t})=y_{t}(1)^{2}\cdot\mathbb{E}\Big{[}\frac{1}{ P_{t}}\Big{]}+y_{t}(0)^{2}\cdot\mathbb{E}\Big{[}\frac{1}{1-P_{t}}\Big{]}-\tau_{t} ^{2}\enspace.\]
Combining terms, we have that the variance of the adaptive Horvitz-Thompson estimator is
\[T\cdot\operatorname{Var}(\hat{\tau}) =\frac{1}{T}\sum_{t=1}^{T}\sum_{s=1}^{T}\operatorname{Cov}(\hat{ \tau}_{t},\hat{\tau}_{s})\] \[=\frac{1}{T}\sum_{t=1}^{T}\operatorname{Var}(\hat{\tau}_{t})\] \[=\frac{1}{T}\sum_{t=1}^{T}\!\left(y_{t}(1)^{2}\cdot\mathbb{E} \Big{[}\frac{1}{P_{t}}\Big{]}+y_{t}(0)^{2}\cdot\mathbb{E}\Big{[}\frac{1}{1-P_{ t}}\Big{]}\right)-\frac{1}{T}\sum_{t=1}^{T}\tau_{t}^{2}\qed\]
### Derivation of the Neyman Design (Proposition 3.1)
In this section, we prove Proposition 3.1 which derives the (infeasible) non-adaptive Neyman design in terms of the Neyman probability \(p\)* and corresponding Neyman variance \(V_{\text{N}}\). We also show that, under Assumption 1, the Neyman variance achieves the parametric rate.
**Proposition 3.1**.: _The Neyman variance is \(T\cdot V_{\text{N}}=2(1+\rho)S(1)S(0)\), which is achieved by the Neyman probability \(p^{\text{*}}=(1+S(0)/S(1))^{-1}\)._
Proof.: Using Proposition 2.2, we have that the variance of the (non-adaptive) Bernoulli design with probability \(p\in(0,1)\) is equal to
\[T\cdot V_{p}=S(1)^{2}\Big{(}\frac{1}{p}-1\Big{)}+S(0)^{2}\Big{(}\frac{1}{1-p} -1\Big{)}+2\rho S(1)S(0)\enspace.\]
Thus, the optimal Neyman design is obtained by the \(p^{\text{*}}\) which minimizes the above. The first order condition stipulates that
\[\frac{\partial}{\partial p}\Big{[}T\cdot V_{p}\Big{]}\Big{|}_{p=p*}=0\Leftrightarrow -S(1)^{2}\Big{(}\frac{1}{p^{\text{*}}}\Big{)}^{2}+S(0)^{2}\Big{(}\frac{1}{1-p ^{\text{*}}}\Big{)}^{2}=0\enspace,\]
which is solved by \(p^{\text{*}}=\big{(}1+S(0)/S(1)\big{)}^{-1}\). Substituting this \(p\)* back into the variance yields the Neyman variance:
\[T\cdot V_{p}* =S(1)^{2}\Big{(}\frac{1}{p^{\text{*}}}-1\Big{)}+S(0)^{2}\Big{(} \frac{1}{1-p^{\text{*}}}-1\Big{)}+2\rho S(1)S(0)\] \[=S(1)^{2}\cdot\frac{S(0)}{S(1)}+S(0)^{2}\cdot\frac{S(1)}{S(0)}+2 \rho S(1)S(0)\] \[=2(1+\rho)S(1)S(0)\enspace.\qed\]
Next, we show that under Assumption 1, the Neyman variance achieves the parametric rate.
**Proposition B.1**.: _Under Assumption 1, the Neyman variance achieves the parametric rate: \(V_{\text{N}}=\Theta(1/T)\)._
Proof.: Proposition 3.1, derives the Neyman variance: \(T\cdot V_{\text{N}}=2(1+\rho)S(1)S(0)\).
We begin by showing that the Neyman variance is asymptotically bounded from below. Moreover, Assumption 1 stipulates that there exists a constant \(c>0\) which lower bounds the second moments as \(S(1)\geq c\) and \(S(0)\geq c\) and the correlation as \((1+\rho)\geq c\). Thus, the normalized Neyman variance is bounded below as \(T\cdot V_{\text{N}}\geq 2c^{3}\).
Next, we show that the Neyman variance is asymptotically bounded from above at the same rate. Assumption 1 stipulates that there exists a constant \(C>0\) which upper bounds the second moments as \(S(1)\leq C\) and \(S(0)\leq C\). The correlation is bounded between \(\rho\in[-1,1]\) so that \((1+\rho)\leq 2\). These bounds together yield that the normalized Neyman variance is bounded above as \(T\cdot V_{\text{N}}\leq 4C^{2}\).
Together, these bounds establish that, under Assumption 1, we have that \(V_{\text{N}}=\Theta(1/T)\).
### Equivalence of Neyman Ratio and Neyman Regret (Theorem 4.1)
In this section, we prove Theorem 4.1, which demonstrates the equivalence between the Neyman Ratio and the expected Neyman regret.
**Theorem 4.1**.: _Under Assumption 1, the Neyman ratio is within a constant factor of the \(1/T\)-scaled expected Neyman regret: \(\kappa_{T}=\Theta(\frac{1}{T}\operatorname{\mathbb{E}}[\mathcal{R}_{T}])\)._
Proof.: Recall that the Neyman ratio is defined as
\[\kappa_{T}=\frac{V-V_{\text{N}}}{V_{\text{N}}}=\frac{T\cdot V-T\cdot V_{\text{ N}}}{T\cdot V_{\text{N}}}\enspace,\]
where the second equality follows by multiplying the numerator and the denominator by \(T\). Observe that by Proposition 2.2, the numerator is given by
\[T\cdot V-T\cdot V_{\text{N}} =\frac{1}{T}\sum_{t=1}^{T}\Biggl{(}y_{t}(1)^{2}\cdot\operatorname{ \mathbb{E}}\Bigl{[}\frac{1}{P_{t}}\Bigr{]}+y_{t}(0)^{2}\cdot\operatorname{ \mathbb{E}}\Bigl{[}\frac{1}{1-P_{t}}\Bigr{]}\Biggr{)}\] \[\qquad-\min_{p*\in[0,1]}\frac{1}{T}\sum_{t=1}^{T}\Biggl{(}y_{t}(1 )^{2}\cdot\frac{1}{p*}+y_{t}(0)^{2}\cdot\frac{1}{1-p*}\Biggr{)}\] \[=\operatorname{\mathbb{E}}\Bigl{[}\frac{1}{T}\sum_{t=1}^{T}f_{t}( P_{t})\Bigr{]}-\min_{p*\in[0,1]}\frac{1}{T}\sum_{t=1}^{T}f_{t}(p*)\] \[=\frac{1}{T}\operatorname{\mathbb{E}}\Bigl{[}\sum_{t=1}^{T}f_{t}( P_{t})-\min_{p*\in[0,1]}\sum_{t=1}^{T}f_{t}(p*)\Bigr{]}\] \[=\frac{1}{T}\operatorname{\mathbb{E}}\bigl{[}\mathcal{R}_{T} \bigr{]}\enspace.\]
Proposition B.1 shows that under Assumption 1, \(T\cdot V_{\text{N}}=\Theta(1)\) so that the denominator is asymptotically constant. Thus, we have that \(\kappa_{T}=\Theta(\frac{1}{T}\operatorname{\mathbb{E}}[\mathcal{R}_{T}])\).
## Appendix C Analysis of Neyman Regret for Clip-Ogd
In this section, we will prove Theorem 4.2, which establish that Clip-OGD achieves \(\mathcal{O}(\sqrt{T\log T})\) expected Neyman regret under our assumptions on the potential outcomes. While the main paper used capital letters \(P_{t}\) and \(G_{t}\) to signify that the treatment probability and gradient estimator were random variables, we use lower case letters \(p_{t}\) and \(g_{t}\) in the appendix for the purposes of more aesthetically appealing proofs. Throughout the analysis, we define \(\Delta_{t}=[\delta_{t},1-\delta_{t}]\) and \(a=1+C/c\) for notational convenience.
### Proof of Theorem 4.2
The first lemma is a bound on the distance of treatment probability \(p_{t+1}\) to the optimal \(p*\) in terms of the previous treatment probability, gradient estimate, and whether the projection interval contains the optimal \(p^{*}\).
**Lemma C.1**.: _For each iteration \(t\in[T]\),_
\[|p_{t+1}-p^{*}|\leq|(p_{t}-\eta g_{t})-p^{*}|+\delta_{t}\mathbf{1}[p^{*}\notin \Delta_{t}]\enspace.\]
Proof.: If \(p^{*}\in\Delta_{t}\), then the statement holds by Pythagorean theorem. Otherwise, note that the most that the projection operation onto \(\Delta_{t}\) can move a point is exactly \(\delta_{t}\). Therefore, the distance between \(\mathcal{P}_{\delta_{t}}(p_{t}-\eta g_{t})\) and \(p^{*}\) is at most \(\delta_{t}\) larger than that between \(p_{t}-\eta g_{t}\) and \(p^{*}\).
Next, we show that Assumption 1 implies that the optimal Neyman probability lies within an interval bounded away from zero.
**Lemma C.2**.: _Under Assumption 1, \(p^{*}\in[\frac{1}{a},1-\frac{1}{a}]\), where \(a=1+C/c\geq 2\)._
Proof.: As shown previously, the optimal Neyman probability is equal to
\[p^{*}=\left(1+\sqrt{\frac{\sum_{t=1}^{T}y_{t}(0)^{2}}{\sum_{t=1}^{T}y_{t}(1)^{ 2}}}\right)^{-1}\]
Recall that Assumption 1 places the following moment conditions on the potential outcomes:
\[c\leq\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}\Big{)}^{1/2}\leq C\quad\text {and}\quad c\leq\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}\Big{)}^{1/2}\leq C\enspace.\]
This bounds \(p^{*}\) by
\[\Big{(}1+\frac{C}{c}\Big{)}^{-1}\leq p^{*}\leq\Big{(}1+\frac{c}{C}\Big{)}^{-1}\enspace.\]
The result follows by using the definition of \(a=1+C/c\) to deduce that \((1/a)=(1+C/c)^{-1}\) and \((1-1/a)=(1+c/C)^{-1}\).
The next lemma guarantees that after a fixed number of iterations, the projection interval will contain the Neyman optimal \(p^{*}\).
**Lemma C.3**.: _We have that \(p^{*}\in\Delta_{t}\) for all \(t\geq(a/2)^{\alpha}\)._
Proof.: Lemma C.2 guarantees that \(p^{*}\in[1/a,1-1/a]\), where \(a=1+C/c\). Thus, \(p^{*}\in\Delta_{t}\) if \(\delta_{t}\leq 1/a\). Using the definition of \(\delta_{t}\) and rearranging terms, we have that
\[\delta_{t}\leq 1/a\Leftrightarrow(1/2)t^{-1/\alpha}\leq 1/a\Leftrightarrow t \geq(a/2)^{\alpha}\enspace.\qed\]
The next lemma bounds the expected difference between the cost objective \(f_{t}\) evaluated at \(p_{t}\) and the cost objective \(f_{t}\) evaluated at the Neyman optimal probability \(p^{*}\).
**Lemma C.4**.: _For each iteration \(t\in[T]\), we have the bound_
\[2\,\mathbb{E}\Big{[}f_{t}(p_{t})-f_{t}(p^{*})\Big{]}\leq\frac{1}{\eta}\Big{[} \mathbb{E}\Big{[}(p_{t}-p^{*})^{2}\Big{]}-\mathbb{E}\Big{[}(p_{t+1}-p^{*})^{2} \Big{]}\Big{]}+\eta\,\mathbb{E}\big{[}g_{t}^{2}\big{]}+4\mathbf{1}\Big{[}t< \Big{(}\frac{a}{2}\Big{)}^{\alpha}\Big{]}\Big{(}\frac{\delta_{t}}{\eta}+\frac{ \delta_{t}}{2}\,\mathbb{E}\Big{[}|g_{t}|\Big{]}\Big{)}\enspace.\]
Proof.: Fix an iteration \(t\in[T]\). By Lemma C.1, and using the triangle inequality, we have that
\[(p_{t+1}-p^{*})^{2} \leq\Big{(}|(p_{t}-\eta g_{t})-p^{*}|+\delta_{t}\mathbf{1}[p^{*} \notin\Delta_{t}]\Big{)}^{2}\] \[=\big{(}(p_{t}-\eta g_{t})-p^{*}\big{)}^{2}+\delta_{t}^{2} \mathbf{1}[p^{*}\notin\Delta_{t}]^{2}+2|(p_{t}-\eta g_{t})-p^{*}|\delta_{t} \mathbf{1}[p^{*}\notin\Delta_{t}]\] \[=\big{(}(p_{t}-p^{*})-\eta g_{t}\big{)}^{2}+\mathbf{1}[p^{*} \notin\Delta_{t}]\Big{(}\delta_{t}^{2}+2\delta_{t}\cdot|(p_{t}-p^{*})-\eta g_ {t}|\Big{)}\] \[\leq\big{(}(p_{t}-p^{*})-\eta g_{t}\big{)}^{2}+2\delta_{t} \mathbf{1}[p^{*}\notin\Delta_{t}]\Big{(}\delta_{t}+|p_{t}-p^{*}|+\eta|g_{t}| \Big{)}\]
Because \(p_{t}\in[\delta_{t},1-\delta_{t}]\), we have that \(|p_{t}-p^{*}|\leq 1-\delta_{t}\) so that
\[\leq\big{(}(p_{t}-p^{*})-\eta g_{t}\big{)}^{2}+2\delta_{t}\mathbf{1}[p^{*} \notin\Delta_{t}]\Big{(}1+\eta|g_{t}|\Big{)}\]\[= \leqslant(p_{t}-p^{*})^{2}+\eta^{2}g_{t}^{2}-2\eta g_{t}(p_{t}-p^{*})+4 \eta\cdot\mathbf{1}[p^{*}\notin\Delta_{t}]\Big{(}\frac{\delta_{t}}{\eta}+\frac{ \delta_{t}}{2}|g_{t}|\Big{)}\] \[.\]
Rearranging terms yields that
\[2\eta g_{t}(p_{t}-p^{*})\leqslant\Big{[}(p_{t}-p^{*})^{2}-(p_{t+1}-p^{*})^{2} \Big{]}+\eta^{2}g_{t}^{2}+4\eta\cdot\mathbf{1}[p^{*}\notin\Delta_{t}]\Big{(} \frac{\delta_{t}}{\eta}+\frac{\delta_{t}}{2}|g_{t}|\Big{)}\enspace.\]
Dividing both sides by the step size \(2\eta\) yields
\[g_{t}(p_{t}-p^{*})\leqslant\frac{1}{2\eta}\Big{[}(p_{t}-p^{*})^{2}-(p_{t+1}-p^ {*})^{2}\Big{]}+\frac{\eta}{2}g_{t}^{2}+2\mathbf{1}[p^{*}\notin\Delta_{t}] \Big{(}\frac{\delta_{t}}{\eta}+\frac{\delta_{t}}{2}|g_{t}|\Big{)}\enspace.\]
Using convexity of \(f_{t}\), adding and subtracting terms, and using the above, we have that
\[f_{t}(p_{t})-f_{t}(p^{*}) \leqslant\langle\nabla f_{t}(p_{t}),p_{t}-p^{*}\rangle\] (convexity) \[=\] (adding, subtracting) \[\leqslant\frac{1}{2\eta}\Big{[}(p_{t}-p^{*})^{2}-(p_{t+1}-p^{*}) ^{2}\Big{]}+\frac{\eta}{2}g_{t}^{2}+2\mathbf{1}[p^{*}\notin\Delta_{t}]\Big{(} \frac{\delta_{t}}{\eta}+\frac{\delta_{t}}{2}|g_{t}|\Big{)}\] (above) \[\quad+\langle\nabla f_{t}(p_{t})-g_{t},p_{t}-p^{*}\rangle\]
By construction, we have that the gradient estimator is unbiased conditioned on \(D_{t}\), i.e. \(\mathbb{E}[g_{t}\mid D_{t}]=\nabla f_{t}(p_{t})\). Thus, by iterated expectation we have that the gradient estimator is unbiased, i.e.
\[\mathbb{E}[\nabla f_{t}(p_{t})-g_{t}]=\mathbb{E}[\mathbb{E}[\nabla f_{t}(p_{t} )-g_{t}\mid D_{t}]]=0\enspace.\]
Thus, taking expectations of both sides and applying Lemma C.3 yields
\[2 \mathbb{E}\Big{[}f_{t}(p_{t})-f_{t}(p^{*})\Big{]}\] \[\leqslant\frac{1}{\eta}\Big{[}\mathbb{E}\Big{[}(p_{t}-p^{*})^{2} \Big{]}-\mathbb{E}\Big{[}(p_{t+1}-p^{*})^{2}\Big{]}\Big{]}+\eta\,\mathbb{E} \big{[}g_{t}^{2}\big{]}+4\cdot\mathbf{1}[p^{*}\notin\Delta_{t}]\Big{(}\frac{ \delta_{t}}{\eta}+\frac{\delta_{t}}{2}\,\mathbb{E}\Big{[}|g_{t}|\Big{]}\Big{)}\] \[\leqslant\frac{1}{\eta}\Big{[}\mathbb{E}\Big{[}(p_{t}-p^{*})^{2} \Big{]}-\mathbb{E}\Big{[}(p_{t+1}-p^{*})^{2}\Big{]}\Big{]}+\eta\,\mathbb{E} \big{[}g_{t}^{2}\big{]}+4\cdot\mathbf{1}\Big{[}t<\Big{(}\frac{a}{2}\Big{)}^{ \alpha}\Big{]}\Big{(}\frac{\delta_{t}}{\eta}+\frac{\delta_{t}}{2}\,\mathbb{E} \Big{[}|g_{t}|\Big{]}\Big{)}\]
The following lemma derives bounds on the first and second (raw) moments of the gradient estimator at each iteration.
**Lemma C.5**.: _For each \(t\in[T]\), the gradient estimates have bounded first and second moments:_
\[\mathbb{E}\Big{[}g_{t}^{2}\Big{]} \leqslant 2^{5}t^{5/\alpha}\cdot\big{(}y_{t}(1)^{4}+y_{t}(0)^{4} \big{)}\] \[\mathbb{E}\Big{[}|g_{t}|\Big{]} \leqslant 2^{2}t^{2/\alpha}\cdot\big{(}y_{t}(1)^{2}+y_{t}(0)^{2} \big{)}\]
Proof.: We begin by handling the \(\mathbb{E}\Big{[}g_{t}^{2}\Big{]}\) term. By definition of the gradient estimator, we have that the conditional expectation is at most
\[\mathbb{E}[g_{t}^{2}\mid D_{t}] = p_{t}\cdot\Big{(}\frac{y_{t}(1)^{2}}{p_{t}^{3}}\Big{)}^{2}+(1-p_ {t})\cdot\Big{(}\frac{y_{t}(0)^{2}}{(1-p_{t})^{3}}\Big{)}^{2}\] \[= \frac{y_{t}(1)^{4}}{p_{t}^{5}}+\frac{y_{t}(0)^{4}}{(1-p_{t})^{5}}\]
By definition of the algorithm, we have that \(p_{t}\in[\delta_{t},1-\delta_{t}]\) at iteration \(t\). Thus, we may invoke the bound:
\[\leqslant\delta_{t}^{-5}\Big{(}y_{t}(1)^{4}+y_{t}(1)^{4}\Big{)}\] \[= [(1/2)t^{-1/\alpha}]^{-5}\cdot\Big{(}y_{t}(1)^{4}+y_{t}(1)^{4} \Big{)}\] \[= 2^{5}t^{5/\alpha}\cdot\Big{(}y_{t}(1)^{4}+y_{t}(1)^{4}\Big{)}\enspace,\]
and the desired bound on \(\mathbb{E}[g_{t}^{2}]\) follows from applying the law of iterated expectation.
The bound on the \(\mathbb{E}\Big{[}|g_{t}|\Big{]}\) term follows in a similar way. By definition of the gradient estimator, we have that the conditional expectation is at most
\[\mathbb{E}[|g_{t}|\mid D_{t}] =p_{t}\cdot\Big{|}\frac{y_{t}(1)^{2}}{p_{t}^{3}}\Big{|}+(1-p_{t}) \cdot\Big{|}\frac{y_{t}(0)^{2}}{(1-p_{t})^{3}}\Big{|}\] \[=\frac{y_{t}(1)^{2}}{p_{t}^{2}}+\frac{y_{t}(0)^{2}}{(1-p_{t})^{2}}\] \[\leq\delta_{t}^{-2}\Big{(}y_{t}(1)^{2}+y_{t}(1)^{2}\Big{)}\] \[=[(1/2)t^{-1/\alpha}]^{-2}\cdot\Big{(}y_{t}(1)^{2}+y_{t}(1)^{2} \Big{)}\] \[=2^{2}t^{2/\alpha}\cdot\Big{(}y_{t}(1)^{2}+y_{t}(1)^{2}\Big{)}\enspace,\]
and the desired bound on \(\mathbb{E}[|g_{t}|]\) follows from applying the law of iterated expectation.
The following proposition bounds the expected Neyman regret for general settings of the projection parameter \(\alpha\).
**Proposition C.1**.: _Suppose Assumption 1 holds. Then, for any choice of projection parameter \(\alpha\geq 2\) (possibly depending on \(T\)) and for the step size \(\eta=\sqrt{\frac{e^{\alpha}}{T^{1+5/\alpha}}}\), a finite-sample bound on the expected Neyman regret incurred by Clip-OGD is_
\[\mathbb{E}\Big{[}\mathcal{R}_{T}\Big{]}\leq(2^{2}e^{\alpha/2}+2^{5}C^{4})\sqrt {e^{\alpha}T^{1+5/\alpha}}+2^{2}C^{2}e^{2+\alpha/2}\sqrt{e^{\alpha}T}\enspace.\]
Proof.: By Lemma C.4, we have that the regret is at most
\[2\,\mathbb{E}\Big{[}\mathcal{R}_{T}\Big{]} =\sum_{t=1}^{T}2\,\mathbb{E}\Big{[}f_{t}(p_{t})-f_{t}(p^{*})\Big{]}\] \[\leq\frac{1}{\eta}\sum_{t=1}^{T}\Big{[}\mathbb{E}\Big{[}(p_{t}-p^ {*})^{2}\Big{]}-\mathbb{E}\Big{[}(p_{t+1}-p^{*})^{2}\Big{]}\Big{]}+\eta\sum_{t =1}^{T}\mathbb{E}\big{[}g_{t}^{2}\big{]}\] \[\qquad+4\sum_{t=1}^{T}\mathbf{1}\Big{[}t<\Big{(}\frac{\alpha}{2} \Big{)}^{\alpha}\Big{]}\Big{(}\frac{\delta_{t}}{\eta}+\frac{\delta_{t}}{2} \,\mathbb{E}\Big{[}|g_{t}|\Big{]}\Big{)}\]
Using a telescoping argument, we have that the first term is bounded by
\[\frac{1}{\eta}\sum_{t=1}^{T}\Big{[}\mathbb{E}\Big{[}(p_{t}-p^{*})^{2}\Big{]}- \mathbb{E}\Big{[}(p_{t+1}-p^{*})^{2}\Big{]}\Big{]}\leq\frac{1}{\eta}\,\mathbb{ E}\Big{[}(p_{1}-p^{*})^{2}\Big{]}\leq\frac{1}{\eta}\enspace.\]
Using Lemma C.5 and Assumption 1, the sum in the second term may be bounded as
\[\sum_{t=1}^{T}\mathbb{E}[g_{t}^{2}] \leq\sum_{t=1}^{T}2^{5}t^{5/\alpha}(y_{t}(1)^{4}+y_{t}(0)^{4})\] (Lemma C.5) \[\leq 2^{5}T^{5/\alpha}\Big{(}\sum_{t=1}^{T}y_{t}(1)^{4}+\sum_{t=1 }^{T}y_{t}(0)^{4}\Big{)}\] \[\leq 2^{5}T^{5/\alpha}\Big{(}2C^{4}T\Big{)}\] (Assumption 1) \[=2^{6}C^{4}T^{1+5/\alpha}\enspace.\]
Next, we deal with the third term by breaking it up into two more terms. Define \(t^{*}=[(a/2)^{\alpha}]\). The third term can be broken into two terms:
\[4\sum_{t=1}^{T}\mathbf{1}\Big{[}t<\Big{(}\frac{a}{2}\Big{)}^{\alpha}\Big{]} \Big{(}\frac{\delta_{t}}{\eta}+\frac{\delta_{t}}{2}\,\mathbb{E}\Big{[}|g_{t}| \Big{]}\Big{)}=\frac{4}{\eta}\sum_{t=1}^{t^{*}-1}\delta_{t}+2\sum_{t=1}^{t^{*}- 1}\delta_{t}\,\mathbb{E}[|g_{t}|]\enspace.\]
The first of these two terms can be bounded in the following way. Using that \(\alpha\geq 2\) we have that
\[\sum_{t=1}^{t^{*}-1}\delta_{t}=\sum_{t=1}^{t^{*}-1}(1/2)t^{-1/\alpha}\]\[=(1/2)\Big{[}1+\sum_{t=2}^{t^{\bullet}-1}t^{-1/\alpha}\Big{]}\] \[\leqslant(1/2)\Big{[}1+\int_{x=1}^{t^{\bullet}-1}x^{-1/\alpha}dx \Big{]}\] \[=(1/2)\Big{[}1+\frac{\alpha}{\alpha-1}\cdot((t^{\bullet}-1)^{(1-1 /\alpha)}-1)\Big{]}\] \[\leqslant(t^{\bullet}-1)^{(1-1/\alpha)}\] \[\leqslant(a/2)^{\alpha(1-1/\alpha)}\] \[=(a/2)^{\alpha-1}\enspace.\]
The second term can be bounded using Lemma C.5 as follows:
\[\sum_{t=1}^{t^{\bullet}-1}\delta_{t}\operatorname{\mathbb{E}}[|g_ {t}|] \leqslant\sum_{t=1}^{t^{\bullet}-1}(1/2)t^{-1/\alpha}2^{2}t^{2/ \alpha}\big{(}y_{t}(1)^{2}+y_{t}(0)^{2}\big{)}\] (Lemma C.5) \[=2\sum_{t=1}^{t^{\bullet}-1}t^{1/\alpha}\big{(}y_{t}(1)^{2}+y_{t }(0)^{2}\big{)}\] \[\leqslant 2\big{(}\sum_{t=1}^{t^{\bullet}-1}t^{2/\alpha}\big{)}^{1/ 2}\Big{[}\Big{(}\sum_{t=1}^{t^{\bullet}-1}y_{t}(1)^{4}\Big{)}^{1/2}+\Big{(} \sum_{t=1}^{t^{\bullet}-1}y_{t}(0)^{4}\Big{)}^{1/2}\Big{]}\enspace,\] (Cauchy-Schwarz)
where the last inequality follows from Cauchy-Schwarz. By extending the sum involving the outcomes to all units, we obtain an upper bound on which we can apply the bounded moment assumption:
\[\leqslant 2\big{(}\sum_{t=1}^{t^{\bullet}-1}t^{2/\alpha}\big{)}^{1/ 2}\Big{[}\Big{(}\sum_{t=1}^{T}y_{t}(1)^{4}\Big{)}^{1/2}+\Big{(}\sum_{t=1}^{T}y _{t}(0)^{4}\Big{)}^{1/2}\Big{]}\] \[\leqslant 2\big{(}\sum_{t=1}^{t^{\bullet}-1}t^{2/\alpha}\big{)}^{1/ 2}\Big{[}2(C^{4}T)^{1/2}\Big{]}\] (Assumption 1)
What remains in this step is to bound the first term above. By replacing each of the \(t\) in the sum with \(t^{\bullet}\), we obtain the following upper bound:
\[\leqslant 2(t^{\bullet}-1)^{(1/2+1/\alpha)}\Big{[}2(C^{4}T)^{1/ 2}\Big{]}\] \[\leqslant 2((a/2)^{\alpha})^{(1/2+1/\alpha)}\Big{[}2(C^{4}T)^{1/ 2}\Big{]}\] \[=2^{2}C^{2}\Big{(}\frac{a}{2}\Big{)}^{(1+\alpha/2)}T^{1/2}\enspace.\]
Using the above work, we have that the overall regret is bounded as
\[2\operatorname{\mathbb{E}}\!\Big{[}\mathcal{R}_{T}\Big{]} \leqslant\frac{1}{\eta}+\eta^{26}C^{4}T^{1+5/\alpha}+\frac{4}{ \eta}\Big{(}\frac{a}{2}\Big{)}^{(\alpha-1)}+2^{3}C^{2}\Big{(}\frac{a}{2}\Big{)} ^{(1+\alpha/2)}T^{1/2}\] \[=\frac{1}{\eta}\Big{(}1+4\Big{(}\frac{a}{2}\Big{)}^{(\alpha-1)} \Big{)}+\eta 2^{6}C^{4}T^{1+5/\alpha}+2^{3}C^{2}\Big{(}\frac{a}{2}\Big{)}^{(1+ \alpha/2)}T^{1/2}\enspace.\]
Next, we separate the constant \(a\) from the projection parameter \(\alpha\). To this end, we use the inequality \(y^{r}\leqslant e^{1+y}\cdot e^{r}\) for all \(y\in\mathbb{R}\) and \(r\geqslant 0\), to obtain
\[\Big{(}\frac{a}{2}\Big{)}^{(\alpha-1)}\leqslant e^{1+a/2}e^{\alpha-1}=e^{a/2} e^{\alpha}\quad\text{and}\quad\Big{(}\frac{a}{2}\Big{)}^{(1+\alpha/2)}\leqslant e ^{1+a/2}e^{1+\alpha/2}=e^{2+a/2}e^{\alpha/2}\enspace,\]
where we have used that \(a\geqslant 2\). Substituting these back into the above, we have that the regret bound is Substituting this quantities into the above, we have that the regret bound is
\[2\operatorname{\mathbb{E}}[\mathcal{R}_{T}]\leqslant\frac{1}{\eta}\Big{(}1+4e ^{a/2}e^{\alpha}\Big{)}+\eta 2^{6}C^{4}T^{1+5/\alpha}+2^{3}C^{2}e^{2+a/2}e^{\alpha/2}T^{1/2}\]\[\leq\frac{1}{\eta}2^{3}e^{a/2}e^{\alpha}+\eta 2^{6}C^{4}T^{1+5/\alpha}+2^{3}C^{ 2}e^{2+a/2}e^{\alpha/2}T^{1/2}\]
By setting \(\eta=\sqrt{\frac{e^{\alpha}}{T^{1+5/\alpha}}}\) to minimize the bound above, we have that the expected Neyman regret is bounded as
\[=(2^{3}e^{a/2}+2^{6}C^{4})\sqrt{e^{\alpha}T^{1+5/\alpha}}+2^{3}C^{2}e^{2+a/2} \sqrt{e^{\alpha}T}\enspace.\]
where we have used that and the result follows by dividing both sides by \(2\).
Proposition C.1 demonstrates that many different values of \(\alpha\) will guarantee sublinear expected Neyman regret. For example, setting \(\alpha\) to be a constant satisfying \(\alpha>5\) will ensure sublinear expected Neyman regret. However, by tuning \(\alpha\) according to the analysis above, we can achieve \(\mathcal{O}(\sqrt{T\log(T)})\) expected Neyman regret, as demonstrated by Theorem 4.2.
**Theorem 4.2**.: _Under Assumption 1 the parameter values \(\eta=\sqrt{1/T}\) and \(\alpha=\sqrt{5\log(T)}\) ensure the expected Neyman regret of Clip-OGD is bounded as_
\[\mathbb{E}\big{[}\mathcal{R}_{T}\big{]}\leq\left(2^{2}e^{a/2}+2^{5}C^{4}+2^{2 }C^{2}e^{2+a/2}\right)\cdot\sqrt{T}\cdot\exp(\sqrt{5\log(T)})\enspace,\]
_which implies that \(\mathbb{E}\big{[}\mathcal{R}_{T}\big{]}\leq\widetilde{\mathcal{O}}\big{(}\sqrt {T}\big{)}\)._
Proof.: Observe that for \(\alpha=\sqrt{5\log(T)}\), we have that the step size posited in Proposition C.1 (i.e. \(\eta=\sqrt{\frac{e^{\alpha}}{T^{1+5/\alpha}}}\)) is equal to \(\eta=\sqrt{1/T}\). Thus, by rearranging terms and using the result of Proposition C.1, we have that expected Neyman regret is bounded as
\[\mathbb{E}\big{[}\mathcal{R}_{T}\big{]} \leq(2^{2}e^{a/2}+2^{5}C^{4})\sqrt{e^{\alpha}T^{1+5/\alpha}}+2^{2 }C^{2}e^{2+a/2}\sqrt{e^{\alpha}T}\] \[=(2^{2}e^{a/2}+2^{5}C^{4})\sqrt{T}\cdot\sqrt{e^{\alpha}T^{5/ \alpha}}+2^{2}C^{2}e^{2+a/2}\sqrt{T}\cdot\sqrt{e^{\alpha}}\enspace.\]
The difficulty is now to find which setting of \(\alpha\) will make this bound smallest. Observe that the real tension is in the first term and we can re-write the relevant part of this term as
\[\sqrt{e^{\alpha}T^{5/\alpha}}=\left[\exp(\alpha+\log(T^{5/\alpha}))\right]^{1/ 2}=\left[\exp(\alpha+(5/\alpha)\log(T))\right]^{1/2}\enspace.\]
To minimize this term, we select \(\alpha=\sqrt{5\log(T)}\) which results in
\[\sqrt{e^{\alpha}T^{5/\alpha}}=\exp(\sqrt{5\log(T)})\enspace.\]
Likewise, this choice of \(\alpha\) results in \(\sqrt{e^{\alpha}}\leq e^{\alpha}=\exp(\sqrt{5\log(T)})\). Putting this together yields the desired finite sample regret bound:
\[\mathbb{E}\big{[}\mathcal{R}_{T}\big{]}\leq\left(2^{2}e^{a/2}+2^{5}C^{4}+2^{2 }C^{2}e^{2+a/2}\right)\cdot\sqrt{T}\cdot\exp(\sqrt{5\log(T)})\enspace.\]
The result follows by observing that the terms inside the parenthesis are constant by Assumption 1 and the function \(\exp(\sqrt{5\log(T)})\) is subpolynomial.
### Selecting Parameters When Moment Bounds are Known
We briefly remark on how to select the step size parameter \(\eta\) when the experimenter can correctly specify the constants \(C\geq c\) used in the Assumption 1. The proof of Proposition C.1 shows that for general parameters, the Neyman regret may be bounded as
\[\mathbb{E}[\mathcal{R}_{T}]\leq\frac{1}{\eta}2^{2}e^{a/2}e^{\alpha}+\eta 2^{5}C^{4}T^{1+5/\alpha}+2^{2}C^{2}e^{2+a/2}e^{\alpha/2}T^{1/2}\enspace.\]
To optimize the step size with respect to these constants, one would choose \(\alpha=\sqrt{5\log(T)}\) and
\[\eta=\frac{e^{\frac{1}{4}\cdot(1+C/c)}}{2\sqrt{2}C^{2}}\cdot\frac{1}{\sqrt{T}}\enspace,\]
where we have used that \(a=1+C/c\). When these moment bounds are correctly specified, this choice of step size will likely yield improved convergence rates, as our bound on the Neyman regret will have a factor of \(C^{2}\) rather than \(C^{4}\).
Analysis for Inference in Large Samples
In this section, we provide the necessary statistical tools for constructing asymptotically valid confidence intervals. This can be done in two main steps. First, in Section D.1 we construct a conservative variance estimator which we show is consistent in probability. Then, in Section D.2, we show that the resulting Chebyshev-type intervals are asymptotically valid.
While the main paper used capital letters \(P_{t}\) and \(G_{t}\) to signify that the treatment probability and gradient estimator were random variables, we use lower case letters \(p_{t}\) and \(g_{t}\) in the appendix for the purposes of more aesthetically appealing proofs. Throughout the analysis, we use the parameter settings \(\eta=\sqrt{1/T}\) and \(\alpha=\sqrt{5\log(T)}\), which are recommended in the main paper. However, we suspect that many of our results will go through for the class of parameters \(\eta=\sqrt{\frac{e^{\alpha}}{T^{1+5/\alpha}}}\) and \(\alpha>5\) which appear in Proposition C.1.
The following lemma shows that under Assumptions 1 and 2, the variance of the adaptive Horvitz-Thompson estimator under Clip-OGD achieves the parametric rate.
**Lemma D.1**.: _Assumptions 1 and 2, the variance of the adaptive Horvitz-Thompson estimator under Clip-OGD achieves the parametric rate: \(\operatorname{Var}(\hat{\tau})=\Theta(1/T)\)._
Proof.: Theorem 4.1 shows that under Assumption 1, they Neyman ratio is order equivalent to the \(1/T\)-scaled expected Neyman regret, i.e. \(\kappa_{T}=\Theta((1/T)\operatorname{\mathbb{E}}[\mathcal{R}_{T}])\). Theorem 4.2 shows that under these assumptions, Clip-OGD achieves sublinear expected Neyman regret \(\operatorname{\mathbb{E}}[\mathcal{R}_{T}]=o(T)\) which implies that \(\limsup\kappa_{T}\leqslant 0\). Likewise, Assumption 2 states that the negative expected Neyman regret is sublinear, \(-\operatorname{\mathbb{E}}[\mathcal{R}_{T}]=o(T)\), which implies that \(\liminf\kappa_{T}\geqslant 0\). Thus, we have that the Neyman ratio converges to zero, e.g. \(\lim\kappa_{T}=0\).
By recalling the definition of the Neyman ratio, we have that
\[0=\lim_{T\to\infty}\kappa_{T}=\lim_{T\to\infty}\frac{V-V_{\text{N}}}{V_{\text{ N}}}=\lim_{T\to\infty}\frac{T\cdot V-T\cdot V_{\text{N}}}{T\cdot V_{\text{N}}}\enspace.\]
Proposition B.1 demonstrates that \(T\cdot V_{\text{N}}=\Theta(1)\). Together with the above, this implies that \(T\cdot V=\Theta(1)\).
The following lemma shows that under the recommended parameter settings, the (random) treatment probabilities are bounded away from zero and one.
**Lemma D.2**.: _When \(\alpha=\sqrt{5\log(T)}\), we have that for all iterations \(t\in[T]\), the inverse of projection parameter is bounded:_
\[\frac{1}{\delta_{t}}\leqslant 2\exp(\sqrt{\log(T^{1/5})})=\widetilde{\mathcal{O} }(1)\enspace.\]
Proof.: A uniform upper bound on the inverse of the projection parameters is
\[\frac{1}{\delta_{t}}\leqslant\frac{1}{\delta_{T}}=\frac{1}{(1/2)T^{-1/\alpha}} =2T^{1/\alpha}=2\exp(\frac{1}{\alpha}\log(T))=2\exp(\sqrt{\log(T^{1/5})})\enspace.\]
To complete the proof, observe that the function \(h(T)=\exp(\sqrt{1/5}\cdot\sqrt{\log(T)})\) is subpolynomial, so that we can write it as \(\widetilde{\mathcal{O}}(1)\).
### Conservative Variance Estimator (Theorem 5.1)
In this section, we prove Theorem 5.1 which establishes that the normalized variance estimator is converges in probability to the normalized variance upper bound at a \(\widetilde{\mathcal{O}}_{p}(T^{-1/2})\) rate. Before continuing, let us review the relevant quantities. Recall that the Neyman variance and the corresponding upper bound are given by
\[T\cdot V_{\text{N}}=2(1+\rho)S(1)S(0)\quad\text{and}\quad T\cdot\text{VB}=4S(1 )S(0)\enspace,\]
where the second moments \(S(1)\) and \(S(0)\) are defined as
\[S(1)^{2}=\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}\quad\text{and}\quad S(0)^{2}= \frac{1}{T}\sum_{t=1}^{T}y_{t}(0)^{2}\enspace.\]Our variance estimator is defined as
\[T\cdot\widehat{\mathbf{VB}}=\sqrt{\widehat{A(1)}\cdot\widehat{A(0)}}\quad\text{ where}\quad\widehat{A(1)}=\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}\frac{\mathbf{1}[Z_{t}=1]}{p_{t}}\quad\text{and}\quad\widehat{A(0)}=\frac{1}{T}\sum_{t=1}^{T}y_{t}(0)^{2}\frac{ \mathbf{1}[Z_{t}=0]}{1-p_{t}}\enspace.\]
The random variables \(\widehat{A(1)}\) and \(\widehat{A(0)}\) are unbiased estimates of \(S(1)^{2}\) and \(S(0)^{2}\) which are based on the Horvitz-Thompson principle. However, the fact that the estimators \(\widehat{A(1)}\) and \(\widehat{A(0)}\) are not independent and the square root is introduced means that the variance estimator \(\widehat{\mathbf{VB}}\) is not an unbiased estimator for the variance bound \(\mathbf{VB}\). Even so, we will show that the variance estimator is consistent for the variance bound. For aesthetic considerations, we define \(A(1)=S(1)^{2}\) and \(A(0)=S(0)\) so that \(\widehat{A(1)}\) is an estimator for \(A(1)\) and \(\widehat{A(0)}\) is an estimator for \(A(0)\).
Our general approach will follow in two steps. First, by bounding its bias and variance, we will show that \(\widehat{A(1)}\cdot\widehat{A(0)}-A(1)A(0)\) converges as \(\widehat{\mathcal{O}}_{p}(T^{-1/2})\). Next, by appealing to a quantitative Continuous Mapping Theorem, we will argue that the error \(\sqrt{\widehat{A(1)}\cdot\widehat{A(0)}}-\sqrt{A(1)A(0)}\) converges at the same rate. By definition, this is exactly the error of the normalized variance estimator to the normalized variance bound, i.e. \(T\cdot\widehat{\mathbf{VB}}-T\cdot\mathbf{VB}\).
Before continuing, let us define new auxiliary random variables. For each \(t\in[T]\), we define the variables \(r_{t}\) and \(q_{t}\) as
\[r_{t}=\frac{\mathbf{1}[z_{t}=1]}{p_{t}}\quad\text{and}\quad q_{t}=\frac{ \mathbf{1}[z_{t}=0]}{1-p_{t}}\enspace.\]
Below are basic facts about these auxiliary random variables.
**Lemma D.3**.: _The auxiliary random variables satisfy the following properties:_
1. \(\mathbb{E}[r_{t}q_{s}]=\mathbf{1}[t\neq s]\)_._
2. \(\mathbb{E}[r_{t}^{2}\mid D_{t}]\leqslant\frac{1}{\delta_{t}}\) _and_ \(\mathbb{E}[q_{t}^{2}\mid D_{t}]\leqslant\frac{1}{\delta_{t}}\)_._
3. _The covariance_ \(\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k})\) _behave in the following ways:_ \[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k}) =0 \text{if }t=s\text{ or }\ell=k\] \[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k}) =0 \text{if }t\neq s\text{ and }\ell\neq k\text{ and }\{t,s\}\cap\{\ell,k\}=\emptyset\] \[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k}) =-1 \text{if }t\neq s\text{ and }\ell\neq k\text{ and }\{t=k\text{ or }s=\ell\text{ }\}\] \[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k}) \leqslant\frac{1}{\delta_{t}}-1 \text{if }t\neq s\text{ and }\ell\neq k\text{ and }t=\ell\text{ and }s\neq k\] \[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k}) \leqslant\frac{1}{\delta_{s}}-1 \text{if }t\neq s\text{ and }\ell\neq k\text{ and }t\neq\ell\text{ and }s=k\] \[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k}) \leqslant\frac{1}{\delta_{t}\delta_{s}}-1 \text{if }t\neq s\text{ and }\ell\neq k\text{ and }t=\ell\text{ and }s=k\]
Proof.: First, we show that \(\mathbb{E}[r_{t}q_{s}]=\mathbf{1}[t\neq s]\). Let \(t,s\in[T]\) and suppose that \(t\neq s\). Without loss of generality, suppose that \(t>s\). Then by using iterated expectation, we have that
\[\mathbb{E}[r_{t}q_{s}]=\mathbb{E}[q_{s}\,\mathbb{E}[r_{t}\mid D_{t}]]=\mathbb{ E}[q_{s}]=1\enspace.\]
Otherwise, if \(t=s\), then \(r_{t}q_{t}=(\mathbf{1}[z_{t}=1]/p_{t})\cdot(\mathbf{1}[z_{s}=0]/(1-p_{t}))=0\) so that \(\mathbb{E}[r_{t}q_{t}]=0\).
Next, we show that \(\mathbb{E}[r_{t}^{2}\mid D_{t}]\leqslant\frac{1}{\delta_{t}}\) and \(\mathbb{E}[q_{t}^{2}\mid D_{t}]\leqslant\frac{1}{\delta_{t}}\). Observe that \(\mathbb{E}[r_{t}^{2}\mid D_{t}]=p_{t}(1/p_{t}^{2})=1/p_{t}\leqslant 1/ \delta_{t}\), where the inequality follows by definition of Algorithm 1. A similar argument shows that \(\mathbb{E}[q_{t}^{2}\mid D_{t}]\leqslant 1/\delta_{t}\).
Finally, we establish the covariance terms one by one. We do this in order of the cases that they were presented in.
**Case 1 \((t=s)\) or \((\ell=k)\)**: If \(t=s\), then \(r_{t}q_{t}\) is almost surely zero, as argued above. Likewise, if \(\ell=k\) then \(r_{\ell}q_{\ell}\) is almost surely zero. In either of these cases, we have that \(\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k})=0\).
**Case 2 \((t\neq s)\) and \((\ell\neq k)\) and \(\{t,s\}\cap\{\ell,k\}=\emptyset\)**: Note that in this case, all the indices \(t\), \(s\), \(\ell\), and \(k\) are distinct. Without loss of generality, suppose that \(t<s<\ell<k\). A repeated use of the iterated expectation yields that
\[\mathbb{E}[r_{t}q_{s}r_{\ell}q_{s}]=\mathbb{E}[r_{t}q_{s}r_{\ell}\,\mathbb{E}[q _{s}\mid D_{s}]]=\mathbb{E}[r_{t}q_{s}r_{\ell}]=\mathbb{E}[r_{t}q_{s}\,\mathbb{ E}[r_{\ell}\mid D_{r}]]=\mathbb{E}[r_{t}q_{s}]=\ldots=1\enspace.\]
Because all terms of distinct, we have that \(\mathbb{E}[r_{t}q_{s}]=1\) and \(\mathbb{E}[r_{\ell}q_{k}]=1\). Thus, the covariance is equal to
\[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k})=\mathbb{E}[r_{t}q_{s},r_{\ell}q_{k} ]-\mathbb{E}[r_{t}q_{s}]\cdot\mathbb{E}[r_{\ell}q_{k}]=1-1=0\enspace.\]
**Case 3 \((t\neq s)\) and \((\ell\neq k)\) and (\(t=k\) or \(s=\ell\))**: Suppose that \(t=k\). In this case, observe that \(r_{t}q_{k}\) is zero almost surely. Thus, \(\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k}]=0\). On the other hand, \(t\neq s\) and \(\ell\neq k\) so that \(\mathbb{E}[r_{t}q_{s}]=\mathbb{E}[r_{\ell}q_{k}]=1\). This means that
\[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k})=\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k }]-\mathbb{E}[r_{t}q_{s}]\cdot\mathbb{E}[r_{\ell}q_{k}]=0-1\cdot 1=-1\enspace.\]
The same argument shows that \(s=\ell\) yields the same result.
**Case 4 \((t\neq s)\) and \((\ell\neq k)\) and \(t=\ell\) and \(s\neq k\)**)**: We begin by computing the expectation of the product of these four terms. In this case, we have that \(t=\ell\) so that
\[\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k}]=\mathbb{E}[r_{t}^{2}q_{s}q_{k}]\enspace.\]
By assumption, the indices \(t,s,\) and \(k\) are all distinct. Our approach will be to obtain an upper bound on the expectation of the project of these three terms by iterated expectation. In particular, the inequality we will use is that \(\mathbb{E}[r_{t}^{2}\mid D_{t}]\leqslant 1/\delta_{t}\). Suppose for now that \(s<k<t\). In this case, we use iterated expectation to get
\[\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k}]=\mathbb{E}[q_{s}q_{k}\,\mathbb{E}[r_{t}\mid D _{t}]]\leqslant\frac{1}{\delta_{t}}\,\mathbb{E}[q_{s}q_{k}]=\frac{1}{\delta_{t }}\,\mathbb{E}[q_{s}\,\mathbb{E}[q_{k}\mid D_{k}]]=\frac{1}{\delta_{t}}\, \mathbb{E}[q_{s}]=\frac{1}{\delta_{t}}\enspace.\]
In the above, we have assumes that \(s<k<t\), but the same iterated expectation technique can be applied regardless of the ordering of these indices, because they are unique. Thus, in this case, \(\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k}]\leqslant 1/\delta_{t}\). Because \(t\neq s\) and \(\ell\neq k\), we have that \(\mathbb{E}[r_{t}q_{s}]=\mathbb{E}[r_{\ell}q_{k}]=1\). This means that
\[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k})=\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k }]-\mathbb{E}[r_{t}q_{s}]\cdot\mathbb{E}[r_{\ell}q_{k}]\leqslant\frac{1}{ \delta_{t}}-1\enspace.\]
**Case 5 \((t\neq s)\) and \((\ell\neq k)\) and \(t\neq\ell\) and \(s=k\)**): We begin by computing the expectation of the product of these four terms. In this case, we have that \(s=k\) so that
\[\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k}]=\mathbb{E}[r_{t}r_{\ell}q_{s}^{2}]\enspace.\]
By assumption, the indices \(t,\ell,\) and \(s\) are all distinct. Using a similar argument as the previous case, we can use iterated expectation together with the bound \(\mathbb{E}[q_{s}^{2}\mid D_{s}]\leqslant 1/\delta_{s}^{2}\) to obtain that \(\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k}]\leqslant 1/\delta_{s}^{2}\). Because \(t\neq s\) and \(\ell\neq k\), we have that \(\mathbb{E}[r_{t}q_{s}]=\mathbb{E}[r_{\ell}q_{k}]=1\). This means that
\[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k})=\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k }]-\mathbb{E}[r_{t}q_{s}]\cdot\mathbb{E}[r_{\ell}q_{k}]\leqslant\frac{1}{ \delta_{s}}-1\enspace.\]
**Case 6 \((t\neq s)\) and \((\ell\neq k)\) and \(t=\ell\) and \(s=k\)**): Suppose without loss of generality that \(t>s\). In this case, we can bound the product of the expectation of these four terms using iterated expectation and the proven inequalities:
\[\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k}]=\mathbb{E}[r_{t}^{2}q_{s}^{2}]=\mathbb{E}[q _{s}^{2}\,\mathbb{E}[r_{t}^{2}\mid D_{t}]]\leqslant\frac{1}{\delta_{t}}\, \mathbb{E}[q_{s}^{2}]\leqslant\frac{1}{\delta_{t}\delta_{s}}\enspace.\]
Because \(t\neq s\), we have that \(\mathbb{E}[r_{t}q_{s}]=\mathbb{E}[r_{\ell},q_{k}]=1\). Thus, the covariance is bounded by
\[\operatorname{Cov}(r_{t}q_{s},r_{\ell}q_{k})=\mathbb{E}[r_{t}q_{s}r_{\ell}q_{k }]-\mathbb{E}[r_{t}q_{s}]\cdot\mathbb{E}[r_{\ell}q_{k}]\leqslant\frac{1}{ \delta_{t}\delta_{s}}-1\enspace.\qed\]
First, we show that that the difference between the expected value of \(\widehat{A(1)A(0)}\) and the target \(A(1)A(0)\) is decreasing at a linear rate in \(T\).
**Proposition D.1**.: _The absolute bias of the estimated crossing term \(\widehat{A(1)A(0)}\) to its target value \(A(1)A(0)\) is at most_
\[\big{|}\mathbb{E}\big{[}\widehat{A(1)A(0)}\big{]}-S(1)^{2}S(0)^{2}\big{|} \leqslant\frac{C^{4}}{T}\enspace.\]
Proof.: Using Lemma D.3, we can calculate the expectation of the product \(\widehat{A(1)A(0)}\) as
\[\mathbb{E}\big{[}\widehat{A(1)A(0)}\big{]} =\mathbb{E}\bigg{[}\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}r_ {t}\Big{)}\Big{(}\frac{1}{T}\sum_{s=1}^{T}y_{s}(0)^{2}q_{s}\Big{)}\bigg{]}\] \[=\frac{1}{T^{2}}\sum_{t=1}^{T}\sum_{s=1}^{T}y_{t}(1)^{2}y_{t}(0)^ {2}\,\mathbb{E}\Big{[}r_{t}q_{s}\Big{]}\]\[=\frac{1}{T^{2}}\sum_{t=1}^{T}\sum_{s=1}^{T}y_{t}(1)^{2}y_{t}(0)^{2}- \frac{1}{T^{2}}\sum_{t=1}^{T}y_{t}(1)^{2}y_{t}(0)^{2}\] \[=\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}\Big{)}\Big{(}\frac{1 }{T}\sum_{s=1}^{T}y_{s}(0)^{2}\Big{)}-\frac{1}{T^{2}}\sum_{t=1}^{T}y_{t}(1)^{2} y_{t}(0)^{2}\] \[=A(1)A(0)-\frac{1}{T^{2}}\sum_{t=1}^{T}y_{t}(1)^{2}y_{t}(0)^{2}\enspace.\]
We complete the proof by using Cauchy-Schwarz and Assumption 1, to bound the absolute bias as
\[\big{|}\mathbb{E}\big{|}\widehat{A(1)A(0)}\big{|}-A(1)A(0)\big{|} =\frac{1}{T^{2}}\sum_{t=1}^{T}y_{t}(1)^{2}y_{t}(0)^{2}\] \[\leq\frac{1}{T^{2}}\Bigg{[}\Big{(}\sum_{t=1}^{T}y_{t}(1)^{4}\Big{)} \cdot\Big{(}\sum_{t=1}^{T}y_{t}(0)^{4}\Big{)}\Bigg{]}^{1/2}\] \[=\frac{1}{T}\Bigg{[}\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{4} \Big{)}^{1/4}\cdot\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(0)^{4}\Big{)}^{1/4} \Bigg{]}^{2}\] \[\leq\frac{C^{4}}{T}\enspace.\qed\]
Next, we show that the variance of \(\widehat{A(1)A(0)}\) is going to zero at a sufficiently fast rate.
**Proposition D.2**.: _The variance of \(\widehat{A(1)A(0)}\) is bounded as_
\[\operatorname{Var}(\widehat{A(1)A(0)})\leq\frac{4e^{\sqrt{\log(T^{1/5})}}C^{8} }{T}+\frac{4C^{8}e^{2\sqrt{\log(T^{1/5})}}}{T^{2}}=\tilde{\mathcal{O}}\Big{(} \frac{1}{T}\Big{)}\enspace.\]
Proof.: We begin by decomposing the variance of \(\widehat{A(1)A(0)}\) into covariances of products of the auxiliary random variables \(r_{t}\) and \(q_{s}\). To this end, observe that
\[\operatorname{Var}(\widehat{A(1)A(0)}) =\operatorname{Var}\Big{(}\frac{1}{T^{2}}\sum_{t=1}^{T}\sum_{s=1} ^{T}y_{t}(1)^{2}y_{t}(0)^{2}r_{t}q_{s}\Big{)}\] \[=\frac{1}{T^{4}}\sum_{t=1}^{T}\sum_{s=1}^{T}\sum_{\ell=1}^{T}\sum_ {k=1}^{T}y_{t}(1)^{2}y_{s}(0)^{2}y_{\ell}(1)^{2}y_{k}(0)^{2}\operatorname{Cov} (r_{t}q_{s},r_{\ell}q_{k})\]
Next, we will use the result of Lemma D.3 to handle the individual covariance terms. In particular, the first six types of terms, as described in Lemma D.3. The first three types of terms are at most \(0\), so we may discard them from the sum, as they contribute no positive value. The last three terms have upper bounds, which we use here to obtain the following upper bound:
\[\leq\underbrace{\frac{1}{T^{4}}\sum_{t=1}^{T}\sum_{s\in[T]\setminus \{t\}}\sum_{k\in[T]\setminus\{t,s\}}y_{t}(1)^{4}y_{s}(0)^{2}y_{k}(0)^{2}\big{(} \frac{1}{\delta_{t}}-1\big{)}}_{\text{Term}\,T_{1}}\] \[\qquad+\underbrace{\frac{1}{T^{4}}\sum_{t=1}^{T}\sum_{s\in[T] \setminus\{t\}}\sum_{\ell\in[T]\setminus\{t,s\}}y_{t}(1)^{2}y_{\ell}(1)^{2}y _{s}(0)^{4}\big{(}\frac{1}{\delta_{s}}-1\big{)}}_{\text{Term}\,T_{2}}\] \[\qquad+\underbrace{\frac{1}{T^{4}}\sum_{t=1}^{T}\sum_{s\neq t}y_ {t}(1)^{4}y_{s}(0)^{4}\big{(}\frac{1}{\delta_{t}\delta_{s}}-1\big{)}}_{\text{ Term}\,T_{3}}\]
Our goal will now be to bound each of these terms individually.
**Terms 1 and 2**: Terms 1 and 2 are similar and will be handled in the same way. Let's begin with Term 1. Observe that by Lyapunov's inequality, the moment assumptions, and Lemma D.2, we have that
\[T_{1} \leqslant\frac{1}{\delta_{T}\cdot T^{4}}\Big{(}\sum_{t=1}^{T}y_{t} (1)^{4}\Big{)}\Big{(}\sum_{t=1}^{T}y_{t}(0)^{2}\Big{)}^{2}\] \[\leqslant\frac{2e^{\sqrt{\log(T^{1/5})}}}{T}\Big{(}\frac{1}{T}\sum _{t=1}^{T}y_{t}(1)^{4}\Big{)}\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(0)^{2}\Big{)} ^{2}\] (Lemma D.2) \[=\frac{2e^{\sqrt{\log(T^{1/5})}}}{T}\Bigg{[}\Big{(}\frac{1}{T}\sum _{t=1}^{T}y_{t}(1)^{4}\Big{)}^{1/4}\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(0)^{2 }\Big{)}^{1/2}\Bigg{]}^{4}\] \[\leqslant\frac{2e^{\sqrt{\log(T^{1/5})}}}{T}\Bigg{[}\Big{(}\frac{ 1}{T}\sum_{t=1}^{T}y_{t}(1)^{4}\Big{)}^{1/4}\Big{(}\frac{1}{T}\sum_{t=1}^{T}y _{t}(0)^{4}\Big{)}^{1/4}\Bigg{]}^{4}\] (Lyapunov's inequality) \[\leqslant\frac{2e^{\sqrt{\log(T^{1/5})}}C^{8}}{T}\] (Assumption 1)
A similar argument shows that \(T_{2}\leqslant(2e^{\sqrt{\log(T^{1/5})}}C^{8})/T\).
**Term 3**: The third term may be upper bounded using Lemma D.2 and the moment assumptions. Namely,
\[T_{3} \leqslant\frac{1}{\delta_{T}^{2}T^{4}}\Big{(}\sum_{t=1}^{T}y_{t} (1)^{4}\Big{)}\Big{(}\sum_{t=1}^{T}y_{t}(0)^{4}\Big{)}\] \[=\frac{\big{(}2e^{\sqrt{\log(T^{1/5})}}\big{)}^{2}}{T^{2}}\Big{(} \frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{4}\Big{)}\Big{(}\frac{1}{T}\sum_{t=1}^{T}y_ {t}(0)^{4}\Big{)}\] (Lemma D.2) \[\leqslant\frac{4C^{8}e^{2\sqrt{\log(T^{1/5})}}}{T^{2}}\] (Assumption 1)
Taken together, this shows that
\[\widehat{\mathrm{Var}(\widehat{A(1)}\widehat{A(0)})}\leqslant\frac{4e^{\sqrt{ \log(T^{1/5})}}C^{8}}{T}+\frac{4C^{8}e^{2\sqrt{\log(T^{1/5})}}}{T^{2}}=\tilde{ \mathcal{O}}\Big{(}\frac{1}{T}\Big{)}\enspace.\]
This establishes the following corollary, which shows that the error \(\widehat{A(1)}\widehat{A(0)}-A(1)A(0)\) is going to zero at a near-parametric rate, which follows from by Chebyshev's inequality from Propositions D.1 and D.2.
**Corollary D.1**.: _The following error goes to zero: \(\widehat{A(1)}\widehat{A(0)}-A(1)A(0)=\tilde{\mathcal{O}}_{p}\big{(}T^{-1/2} \big{)}\)._
Using the results derived above, we are ready to prove Theorem 5.1, which we restate here for convenience.
**Theorem 5.1**.: _Under Assumptions 1 and 2, and the parameters stated in Theorem 4.2, the error of the normalized variance estimator under Clip-OGD is \(T\cdot\widehat{\mathbf{VB}}-T\cdot\mathbf{VB}=\tilde{\mathcal{O}}_{p}(T^{-1/2})\)._
Proof of Theorem 5.1.: Recall that the variance estimator and the variance bound are equal to \(T\cdot\text{VB}=4S(1)S(0)=4\sqrt{A(1)A(0)}\) and \(T\cdot\widehat{\text{VB}}=4\sqrt{\widehat{A(1)}\widehat{A(0)}}\) so that the error is given by
\[T\cdot\widehat{\text{VB}}-T\cdot\text{VB}=4\Big{[}\sqrt{A(1)A(0)}-\sqrt{ \widehat{A(1)}\widehat{A(0)}}\Big{]}\enspace.\]
Corollary D.1 states that the error \(\widehat{A(1)}\widehat{A(0)}-A(1)A(0)\) is on the order of \(\tilde{\mathcal{O}}_{p}\big{(}T^{-1/2}\big{)}\). By Assumption 1, we have that \(A(1)A(0)=\sqrt{S(1)^{2}S(0)^{2}}>c^{2}\). Observe that the square root function \(g(x)=\sqrt{x}\) is Lipschitz on the interval \((c^{2},\infty)\). Thus, by a rate-preserving Continuous Mapping Theorem, we have that the error of the normalized variance estimator is on the same order i.e. \(T\cdot\text{VB}-T\cdot\widehat{\text{VB}}=\sqrt{A(1)A(0)}-\sqrt{\widehat{A(1) }\widehat{A(0)}}=\tilde{\mathcal{O}}_{p}\big{(}T^{-1/2}\big{)}\).
### Valid Confidence Intervals (Corollary 5.1)
We now prove the asymptotic validity of the associated Chebyshev-type intervals. This proof is standard in the design-based literature, but we present it here for completeness.
**Corollary 5.1**.: _Under Assumptions 1 and 2, and parameters stated in Theorem 4.2, Chebyshev-type intervals are asymptotically valid: for all \(\alpha\in(0,1]\), \(\liminf_{T\to\infty}\Pr(\tau\in\hat{\tau}\pm\alpha^{-1/2}\sqrt{\hat{\mathsf{VB} }})\geq 1-\alpha\)._
Proof.: Define the random variables
\[Z=\frac{\tau-\hat{\tau}}{\sqrt{\operatorname{Var}(\hat{\tau})}}\quad\text{and} \quad Z^{\prime}=\frac{\tau-\hat{\tau}}{\sqrt{\hat{\mathsf{VB}}}}\enspace.\]
Observe that they are related in the following way:
\[Z^{\prime}=\frac{\tau-\hat{\tau}}{\sqrt{\hat{\mathsf{VB}}}}=\frac{\tau-\hat{ \tau}}{\sqrt{\operatorname{Var}(\hat{\tau})}}\cdot\Big{(}\sqrt{\frac{ \operatorname{Var}(\hat{\tau})}{\operatorname{VB}}}\cdot\sqrt{\frac{\operatorname {VB}}{\hat{\mathsf{VB}}}}\Big{)}=Z\cdot\Big{(}\sqrt{\frac{\operatorname{Var}( \hat{\tau})}{\operatorname{VB}}}\cdot\sqrt{\frac{T\cdot\operatorname{VB}}{T \cdot\hat{\mathsf{VB}}}}\Big{)}\enspace.\]
By definition, we have that \(\limsup_{T\to\infty}\operatorname{Var}(\hat{\tau})/\text{VB}\leq 1\). Recall that by Proposition B.1, \(T\cdot\operatorname{VB}\geq T\cdot V_{\text{N}}=\Omega(1)\) so that by Theorem 5.1 and Continuous Mapping Theorem, we have that \(\sqrt{\frac{T\cdot\operatorname{VB}}{T\cdot\operatorname{VB}}}\xrightarrow{p}1\). Thus, by Slutsky's theorem we have that \(Z^{\prime}\) is asymptotically stochastically dominated by \(Z\). Now we are ready to compute the coverage probability.
\[\liminf_{T\to\infty}\Pr\Bigl{(}\tau\in\hat{\tau}\pm\alpha^{-1/2} \sqrt{\hat{\mathsf{VB}}}\Bigr{)} =\liminf_{T\to\infty}\Pr\Bigl{(}\Bigl{|}\frac{\tau-\hat{\tau}}{ \sqrt{\hat{\mathsf{VB}}}}\Bigr{|}\leq\alpha^{-1/2}\Bigr{)}\] \[=\liminf_{T\to\infty}\Pr\Bigl{(}\bigl{|}Z^{\prime}\bigr{|}\leq \alpha^{-1/2}\Bigr{)}\] \[\geq\liminf_{T\to\infty}\Pr\Bigl{(}\bigl{|}Z\bigr{|}\leq\alpha^{ -1/2}\Bigr{)}\] \[\geq 1-\alpha\enspace,\]
where the last line followed from Chebyshev's inequality and the fact that \(\operatorname{Var}(Z)=1\).
## Appendix E Analysis of Alternative Designs
In this section, we provide analysis on the efficacy of existing adaptive experimental designs for the problem of Adaptive Neyman Allocation. To this end, we show two negative results. In Section E.1, we show that the two-stage design of (Hahn et al., 2011; Blackwell et al., 2022) (i.e. Explore-then-Commit) can suffer linear expected Neyman Regret in the design-based framework for a large class of potential outcome sequences. In Section E.2, we show that mutli-arm bandit algorithms which achieve sublinear expected outcome regret will incur super-linear expected Neyman regret, providing further evidence that these two goals are incompatible.
### Analysis of Explore-then-Commit (Proposition 6.1)
In this setting, we show that Explore-then-Commit designs can sometimes suffer linear Neyman regret, and therefore not recover the Neyman variance in large samples. We formally introduce our definition of Explore-then-Commit designs below. Let \(p_{T_{0}}^{\mathsf{*}}\) be defined as
\[p_{T_{0}}^{\mathsf{*}}=\left(1+\sqrt{\frac{\sum_{t=1}^{T_{0}}y_{t}(0)^{2}}{ \sum_{t=1}^{T_{0}}y_{t}(1)^{2}}}\right)^{-1}\enspace,\]
which is the optimal Neyman probability when considering only the sample up to \(T_{0}\).
Our definition of ETC encompasses many possible ways of estimating the optimal treatment probability. The only requirement is that the estimation method will converge to \(p_{T_{0}}^{\mathsf{*}}\) at the rate \(T_{0}^{-1/2}\). We consider \(p_{T_{0}}^{\mathsf{*}}\) rather than the true Neyman probability \(p^{\mathsf{*}}\) because the observed data is informative only of the outcomes in the exploration phase \(T_{0}\). Many natural estimators will fall into this class, including Horvitz-Thompson style estimators similar to those used in the construction of our variance estimator.
Before continuing, we provide a few more definitions. We define the second moments of treatment and control outcomes as well as the correlation in the exploration phase as
\[S_{T_{0}}(1)^{2}=\frac{1}{T}\sum_{t=1}^{T_{0}}y_{t}(1)^{2}\quad S_{T_{0}}(0)^{2}= \frac{1}{T}\sum_{t=1}^{T_{0}}y_{t}(0)^{2}\quad\text{and}\quad\rho_{T_{0}}=\frac{ \frac{1}{T_{0}}\sum_{t=1}^{T}y_{t}(1)y_{t}(0)}{S_{T_{0}}(1)S_{T_{0}}(0)}\enspace.\]
We are now ready to state the formal version of Proposition 6.1.
**Proposition 6.1***.: _Suppose that \(T_{0}=\Omega(T^{c})\) for some \(\epsilon>0\) and further suppose that the outcome sequence satisfies the following properties for constants \(C\geqslant c>0\) and \(c^{\prime}>0\):_
* _The second moments_ \(S(1)\)_,_ \(S(0)\)_,_ \(S_{T_{0}}(1)\)_, and_ \(S_{T_{1}}(0)\) _are contained in the interval_ \([c,C]\)_._
* _The correlations are bounded away from -1, i.e._ \(\rho_{T_{0}},\rho\geqslant-1+c\)_._
* _The second moments satisfy the following:_ \[S(1)^{2}\Big{(}\frac{S(0)}{S(1)}-\frac{S_{T_{0}}(0)}{S_{T_{0}}(1)}\Big{)}+S(0 )^{2}\Big{(}\frac{S(1)}{S(0)}-\frac{S_{T_{0}}(1)}{S_{T_{0}}(0)}\Big{)}\geqslant c ^{\prime}\]
_Then, the Neyman Regret of Explore-then-Commit is at least linear in probability, \(\mathcal{R}_{T}=\Omega_{p}(T)\)._
The first two conditions are essentially extensions of Assumption 1 to the exploration phase. This ensures that the probability \(p^{\mathbf{s}}_{T_{0}}\) (which is estimated in the Explore-then-Commit design) does not approach \(0\) or \(1\). The third condition is what really makes Explore-then-Commit fail to achieve sublinear Neyman regret. This condition states that the ratio of the second moments in the exploration phase is different than in the larger sequence. For example, if \(S(1)=S(0)=1\) but \(S_{T_{0}}(0)/S_{T_{0}}(1)=2\) then the condition would hold. In this case, we should not expect Explore-then-Commit to achieve the Neyman variance because the exploration phase does not contain sufficient information about the optimal Neyman probability. We now prove the proposition.
Proof.: Let \(p^{\mathbf{s}}=\arg\min_{p\in[0,1]}\sum_{t=1}^{T}f_{t}(p)\) be the Neyman probability. We begin by re-arranging terms in the Neyman regret:
\[\mathcal{R}_{T} =\sum_{t=1}^{T}f_{t}(p_{t})-\sum_{t=1}^{T}f_{t}(p^{\mathbf{s}})\] (def of Neyman regret) \[=\sum_{t=1}^{T_{0}}f_{t}(p_{t})-f_{t}(p^{\mathbf{s}})+\sum_{t=T_{ 0}+1}^{T}f_{t}(p_{t})-f_{t}(p^{\mathbf{s}})\] (splitting terms by phases) \[=\sum_{t=1}^{T_{0}}f_{t}(1/2)-f_{t}(p^{\mathbf{s}})+\sum_{t=T_{0} +1}^{T}f_{t}(\widehat{p^{\mathbf{s}}_{T_{0}}})-f_{t}(p^{\mathbf{s}})\] (def of ETC) \[=\sum_{t=1}^{T_{0}}f_{t}(1/2)-f_{t}(\widehat{p^{\mathbf{s}}_{T_{0 }}})+\sum_{t=1}^{T}f_{t}(\widehat{p^{\mathbf{s}}_{T_{0}}})-f_{t}(p^{\mathbf{s }})\] (adding + subtracting)\[\geq\sum_{t=1}^{T_{0}}f_{t}(p_{T_{0}}^{*})-f_{t}(\widehat{p_{T_{0}}^{ *}})+\sum_{t=1}^{T}f_{t}(\widehat{p_{T_{0}}^{*}})-f_{t}(p_{T_{0}}^{*})\enspace,\]
Where the inequality follows by the optimality of \(p_{T_{0}}^{*}\) on the exploration phase. By adding and subtracting the \(f_{t}(p_{T_{0}}^{*})\) to the second sum, we obtain the following decomposition
\[=\underbrace{\sum_{t=1}^{T_{0}}f_{t}(p_{T_{0}}^{*})-f_{t}(\widehat{p_{T_{0}}^{ *}})}_{\text{Term 1}}+\underbrace{\sum_{t=1}^{T}f_{t}(p_{T_{0}}^{*})-f_{t}(p_{T_{0} }^{*})}_{\text{Term 2}}+\underbrace{\sum_{t=1}^{T}f_{t}(p_{T_{0}}^{*})-f_{t}(p ^{*})}_{\text{Term 3}}\]
We handle each of the terms separately in the remainder of the proof.
**Term 1**: By the assumptions on the second moments and correlation of outcomes in the exploration phase, we have that \(p_{T_{0}}^{*}\) is bounded away from \(0\) and \(1\) by a constant. Furthermore, the cost functions \(f_{t}\) are Lipschitz on the interval \([\gamma,1-\gamma]\) for any fixed \(\gamma\). Thus, we may apply the quantitative Continuous Mapping Theorem to bound the absolute value of Term 1 as follows:
\[|\sum_{t=1}^{T_{0}}f_{t}(p_{T_{0}}^{*})-f_{t}(\widehat{p_{T_{0}}^ {*}})| \leq\sum_{t=1}^{T_{0}}|f_{t}(p_{T_{0}}^{*})-f_{t}(\widehat{p_{T_{0 }}^{*}})|\] (triangle inequality) \[\leq\sum_{t=1}^{T_{0}}\mathcal{O}_{p}(T_{0}^{-1/2})\] (estimator property + CMT) \[=\mathcal{O}_{p}(T_{0}\cdot T_{0}^{-1/2})\] \[=\mathcal{O}_{p}(T_{0}^{1/2})\] \[=\mathcal{O}_{p}(T^{1/2})\enspace,\]
where the final inequality follows from the fact that \(T_{0}\leq T\).
**Term 2**: A similar argument may be applied to the second term. Again, we may apply the continuous mapping theorem as before to obtain
\[|\sum_{t=1}^{T}f_{t}(p_{T_{0}}^{*})-f_{t}(\widehat{p_{T_{0}}^{*}})| \leq\sum_{t=1}^{T}|f_{t}(p_{T_{0}}^{*})-f_{t}(\widehat{p_{T_{0}}^{ *}})|\] (triangle inequality) \[\leq\sum_{t=1}^{T}\mathcal{O}_{p}(T_{0}^{-1/2})\] (estimator property + CMT) \[=\mathcal{O}_{p}(T\cdot T_{0}^{-1/2})\] \[=\mathcal{O}_{p}(T\cdot T^{-\epsilon/2})\] \[=\mathcal{O}_{p}(T^{1-\epsilon/2})\enspace,\]
where we have used the assumption that \(T_{0}=\Omega(T^{\epsilon})\).
**Term 3**: We now handle the third term. Let \(V_{0}\) be the variance of adaptive Horvitz-Thompson estimator under the Bernoulli design when using the treatment probability \(p_{T_{0}}^{*}\). By construction of the cost functions, we have that they are related to the variance as follows:
\[\sum_{t=1}^{T} f_{t}(p_{T_{0}}^{*})-f_{t}(p^{*})\] \[=T\cdot\Big{[}T\cdot V_{0}-T\cdot V_{\text{N}}\Big{]}\] \[=T\cdot\left[\left(S(1)^{2}\Big{\{}\frac{1}{p_{T_{0}}^{*}}-1 \Big{\}}+S(0)^{2}\Big{\{}\frac{1}{1-p_{T_{0}}^{*}}-1\Big{\}}\right)-\left(S(1) ^{2}\Big{\{}\frac{1}{p^{*}}-1\Big{\}}+S(0)^{2}\Big{\{}\frac{1}{1-p^{*}}-1 \Big{\}}\right)\right]\] \[=T\cdot\left[S(1)^{2}\Big{(}\frac{1}{p_{T_{0}}^{*}}-\frac{1}{p^{ *}}\Big{)}+S(0)^{2}\Big{(}\frac{1}{1-p_{T_{0}}^{*}}-\frac{1}{1-p^{*}}\Big{)}\right]\]\[=T\cdot\left[S(1)^{2}\Big{(}\frac{S(0)}{S(1)}-\frac{S_{T_{0}}(0)}{S_{T_{0}}(1)} \Big{)}+S(0)^{2}\Big{(}\frac{S(1)}{S(0)}-\frac{S_{T_{0}}(1)}{S_{T_{0}}(0)}\Big{)} \right]\,\]
where the last equality follows by definition of the probabilities. By Assumption, we have that this bracketed term is constant so that the third term is linear in \(T\).
Putting these together, we have that the Neyman regret is lower bounded as
\[\mathcal{R}_{T}\geq\Omega(T)-\mathcal{O}_{p}(T^{1-\epsilon/2})-\mathcal{O}_{p}( T^{1/2})=\Omega_{p}(T)\ \.\qed\]
### Analysis of Designs for Outcome Regret (Proposition 6.2)
In this section, we prove Proposition 6.2, which establishes that outcome regret and Neyman regret cannot be simultaneously minimized in general. We restate a more formal version of the proposition here. In order for a simpler proof, we make restrictions that the units have constant treatment effect and that each of the individual outcomes are more strictly bounded. We conjecture that the trade-off between Neyman and outcome regret will hold under weaker conditions.
**Proposition 6.2***.: _Let \(\mathcal{A}\) be an adaptive treatment algorithm achieving sublinear outcome regret, i.e. there exists \(q\in(0,1)\) such that \(\mathbb{E}[\mathcal{R}_{T}^{\text{outcome}}]\leq O(T^{q})\) for all outcome sequences satisfying Assumption 1. Consider an outcome sequence satisfying Assumption 1 with constants \(C\geq c>0\) and the additional conditions:_
* \(\max_{1\leq t\leq T}y_{t}(0)^{2}\leq C^{2}\)__
* _For all_ \(t\in[T]\)_,_ \(y_{t}(1)-y_{t}(0)=\tau\) _and_ \(\tau>c^{\prime}\) _for a constant_ \(c^{\prime}>0\)_._
_Then, \(\mathcal{A}\) suffers super-linear Neyman regret on this outcome sequence: \(\mathbb{E}[\mathcal{R}_{T}]\geq\Omega(T^{2-q})\)._
Proof.: To begin, we re-express the outcome regret in terms of the expected treatment probabilities played by algorithm \(\mathcal{A}\). Observe that the expected outcome regret may be written as
\[\mathbb{E}[\mathcal{R}_{T}^{\text{outcome}}] =\mathbb{E}\Big{[}\max_{k\in\{0,1\}}\sum_{t=1}^{T}y_{t}(k)-\sum_{ t=1}^{T}Y_{t}\Big{]}\] (def of regret) \[=\sum_{t=1}^{T}y_{t}(1)-\sum_{t=1}^{T}\mathbb{E}\big{[}Y_{t}\big{]}\] ( \[\tau>0\] ) \[=\sum_{t=1}^{T}y_{t}(1)-\sum_{t=1}^{T}\mathbb{E}\big{[}y_{t}(1) \mathbf{1}[Z_{t}=1]+y_{t}(0)\mathbf{1}[Z_{t}=0]\big{]}\] \[=\sum_{t=1}^{T}y_{t}(1)-\sum_{t=1}^{T}y_{t}(1)\,\mathbb{E}[p_{t}] +y_{t}(0)\cdot\big{(}1-\mathbb{E}[p_{t}]\big{)}\] \[=\sum_{t=1}^{T}\big{(}y_{t}(1)-y_{t}(0)\big{)}\cdot\big{(}1- \mathbb{E}[p_{t}]\big{)}\] \[=\tau\cdot\sum_{t=1}^{T}\big{(}1-\mathbb{E}[p_{t}]\big{)}\ \,\]
where the last equality follow as \(y_{t}(1)-y_{t}(0)=\tau\) for all \(t\in[T]\) by assumption. Because the outcome sequence satisfies Assumption 1, the expected outcome regret is at most \(\mathbb{E}[\mathcal{R}_{T}^{\text{outcome}}]\leq\beta\cdot T^{q}\) for some constant \(\beta\). By the above, this implies that the expectation of the sum of probabilities \(1-p_{t}\) must be small,
\[\sum_{t=1}^{T}\big{(}1-\mathbb{E}[p_{t}]\big{)}\leq\frac{\beta}{\tau}T^{q}\ \.\]
Next, we show that \(\mathcal{A}\) must incur a large cost with respect to the functions \(f_{t}\) in the definition of Neyman regret. To do this, we will use a weighted version of the AM-HM inequality which states that for \(x_{1}\ldots x_{T}>0\) and \(w_{1}\ldots w_{n}\geq 0\), we have that
\[\frac{\sum_{t=1}^{T}w_{t}x_{t}}{\sum_{t=1}^{T}w_{t}}\geq\frac{\sum_{t=1}^{T}w_ {t}}{\sum_{t=1}^{T}\frac{w_{t}}{x_{t}}}\ \.\]The usual AM-HM inequality is recovered when \(w_{t}=1/T\). We now bound the expected Neyman loss, observing that
\[\mathbb{E}\Bigl{[}\sum_{t=1}^{T}f_{t}(p_{t})\Bigr{]} =\mathbb{E}\Bigl{[}\sum_{t=1}^{T}\frac{y_{t}(1)^{2}}{p_{t}}+\frac{y _{t}(0)^{2}}{1-p_{t}}\Bigr{]}\] (def of \[f_{t}\]) \[\geqslant\mathbb{E}\Bigl{[}\sum_{t=1}^{T}\frac{y_{t}(0)^{2}}{1-p_ {t}}\Bigr{]}\] (non-negativity) \[=\sum_{t=1}^{T}y_{t}(0)^{2}\cdot\mathbb{E}\Bigl{[}\frac{1}{1-p_{ t}}\Bigr{]}\] (linearity of \[\mathbb{E}[\cdot]\]) \[\geqslant\sum_{t=1}^{T}y_{t}(0)^{2}\cdot\frac{1}{\mathbb{E}\bigl{[} 1-p_{t}\bigr{]}}\] (Jensen's inequality) \[\geqslant\frac{\Bigl{(}\sum_{t=1}^{T}y_{t}(0)^{2}\Bigr{)}^{2}}{ \sum_{t=1}^{T}y_{t}(0)^{2}\cdot\mathbb{E}\bigl{[}1-p_{t}\bigr{]}}\] (weighted AM-HM) \[\geqslant T^{2}\frac{\Bigl{(}\frac{1}{T}\sum_{t=1}^{T}y_{t}(0)^{2 }\Bigr{)}^{2}}{\max_{1\leqslant t\leqslant T}y_{t}(0)^{2}}\cdot\frac{1}{\sum_ {t=1}^{T}\mathbb{E}\bigl{[}1-p_{t}\bigr{]}}\] \[=T^{2}\frac{S(0)^{2}}{\max_{1\leqslant t\leqslant T}y_{t}(0)^{2}} \cdot\frac{1}{\sum_{t=1}^{T}\mathbb{E}\bigl{[}1-p_{t}\bigr{]}}\] \[\geqslant T^{2}\frac{c^{2}}{C^{2}}\cdot\frac{\tau}{\beta}T^{-q}\] \[\geqslant\frac{c^{2}c^{\prime}}{C^{2}\beta}T^{2-q}\enspace,\]
where the last two inequalities follow from moment bounds in Assumption 1 together with the assumptions on the outcome sequence stated in the Theorem.
Next, we show that the optimal Neyman design incurs a much smaller cost. In particular,
\[\min_{p\in[0,1]}\sum_{t=1}^{T}f_{t}(p) \leqslant\sum_{t=1}^{T}f_{t}(1/2)\] \[=\sum_{t=1}^{T}\frac{y_{t}(1)^{2}}{1/2}+\frac{y_{t}(0)^{2}}{1/2}\] \[=2T\Biggl{[}\frac{1}{T}\sum_{t=1}^{T}y_{t}(1)^{2}+\frac{1}{T}\sum _{t=1}^{T}y_{t}(0)^{2}\Biggr{]}\] \[=2T(S(1)^{2}+S(0)^{2})\] \[\leqslant 4C^{2}T\enspace.\]
Together, these facts establish that the Neyman regret for \(\mathcal{A}\) is lower bounded as
\[\mathbb{E}[\mathcal{R}_{T}]=\mathbb{E}\Bigl{[}\sum_{t=1}^{T}f_{t}(p_{t})-\min_ {p\in[0,1]}\sum_{t=1}^{T}f_{t}(p)\Bigr{]}\geqslant\frac{c^{2}c^{\prime}}{C^{2 }\beta}T^{2-q}-4C^{2}T\geqslant\Omega(T^{2-q})\enspace.\qed\]
## Appendix F Ethical Considerations
There are--at least--two objectives when constructing an adaptive treatment allocation.
* **Minimizing Cumulative Regret**: give the "best" treatment to as many people as possible.
* **Minimizing Variance of the Effect Estimate**: estimate the effect of the treatment to as high precision as possible.
As we show in Proposition 6.2, these two objective are fundamentally incompatible: an adaptive design which aims to estimate the effect to high precision must assign treatment which has worse outcomes. Likewise, a design which seeks to maximize the utility of assigned treatments will not be able to reliably estimate causal effects to high precision. Which one of these is more ethically desirable depends on the purpose and the context of the experiment. For guidance on this ethical question, we turn to The Belmont Report.
In 1979, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research released the "Belmont Report", which has been one of the foundational texts for ethical guidance in research conducted with human subjects [for the Protection of Human Subjects of Biomedical and Research, 1978]. One of the three basic ethical principles laid out in the report is "benevolence" which is understood as an the obligation the researcher has for the research to improve the general well-being of society, including those people in the experiment. The report writes:
The Hippocratic maxim 'do no harm' has long been a fundamental principle of medical ethics. Claude Bernard extended it to the realm of research, saying that one should not injure one person regardless of the benefits that might come to others. However, even avoiding harm requires learning what is harmful; and, in the process of obtaining this information, persons may be exposed to risk of harm. Further, the Hippocratic Oath requires physicians to benefit their patients 'according to their best judgment." Learning what will in fact benefit may require exposing persons to risk. The problem posed by these imperatives is to decide when it is justifiable to seek certain benefits despite the risks involved, and when the benefits should be foregone because of the risks.
From this perspective, it may be ethically advisable to use a variance minimizing adaptive design because it allows the researcher to learn the effect while subjecting fewer human subjects to the experimental treatments. In other words, a variance minimizing design allows researchers to learn what is harmful and what is beneficial while subjecting fewer human subjects to possible harm. An adaptive treatment plan which minimizes cumulative regret will ensure that minimal harm is done to subjects in the experiment, but will offer less certainty about the extent of the benefit or harm of the treatments. Such an approach will lead to less informative generalizable knowledge of treatment effects, possibly defeating the goal of the research study.
That being said, it is not our goal to suggest that one adaptive allocation plan is most ethical in all circumstances. Indeed, these ethical questions have no systematic answers which are generally applicable. It is the burden of the researchers to carefully "decide when it is justifiable to seek certain benefits despite the risks involved, and when the benefits should be foregone because of the risks." Our goal in this work is merely to provide improved statistical methodology which affords the researchers more choices when addressing these ethical questions. | ## Review
### Summary
The paper investigates adaptive Neyman allocation for experimental design, focusing on a new regret-like measure called Neyman regret that compares the variance under chosen experimental designs with that of the optimal design. It introduces an algorithm named Clip-OGD that achieves expected Neyman regret scaling as O(sqrt(T)), overcoming limitations of traditional approaches in adaptive designs. Additionally, the authors provide asymptotically valid confidence intervals for experiments run using this novel algorithm. Overall, the work presents a significant contribution to the field of adaptive experimental design by addressing existing limitations and proposing a robust framework for practical applications.
### Strengths
- The paper presents a novel perspective on adaptive Neyman allocation, introducing Neyman regret as a new performance measure.
- The writing is clear, and the paper is well-structured, making complex concepts comprehensible.
- The algorithm Clip-OGD is intuitive and does not require hyperparameter tuning, making it practical for online applications.
- The theoretical foundations are solid, with results grounded in a rigorous framework.
- The topic is relevant and addresses an important area in experimental design.
### Weaknesses
- The paper could benefit from a broader motivation for its focus on medical applications, as the relevance to other fields is not well-explained.
- Existing literature on adaptive experimental design and related methods is not adequately referenced, which may mislead the reader about the novelty of the approach.
- Some sections are difficult for non-experts to follow, particularly regarding notations and derivations.
- Empirical evaluations lack sufficient comparisons with existing methods, such as multi-armed bandit approaches.
- The assumption of the algorithm's parameters may not be intuitive and requires further clarification.
### Questions
- Can the authors provide a discussion on the trade-offs between medical applications and broader contexts?
- In section 4.1, why isn't the variance of adaptive experimental design a function of t?
- How does the proposed approach compare against existing adaptive randomization methods?
- What is the practical relevance of the experimental setup in Figure 1(b)?
- Could the authors elaborate on the choice of utility for gradient estimation in the algorithm?
### Soundness
**Score:** 3
**Description:** Good - The theoretical foundation is solid, and the proposed methods are well-justified. However, some assumptions and motivations need clearer explanations.
### Presentation
**Score:** 3
**Description:** Good - The paper is generally well-written and organized, but some sections could be clearer for non-expert readers.
### Contribution
**Score:** 3
**Description:** Good - The paper makes significant contributions to adaptive experimental design, although it lacks thorough engagement with the existing body of literature.
### Rating
**Score:** 6
**Description:** Weak Accept - The paper is technically solid and presents moderate-to-high impact contributions, yet it has some areas requiring improvement in clarity and literature engagement.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper offers a fresh perspective on adaptive Neyman allocation with significant theoretical contributions and practical implications. While there are concerns regarding clarity and literature references, the strengths and overall impact of the work outweigh these issues, justifying an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Implicit Convolutional Kernels for Steerable CNNs
Maksim Zhdanov
AMLab, University of Amsterdam
[email protected]
Work done while at Helmholtz-Zentrum Dresden-Rossendorf.
Nico Hoffmann
Helmholtz-Zentrum
Dresden-Rossendorf
Gabriele Cesa
Qualcomm AI Research
AMLab, University of Amsterdam
Work done while at Helmholtz-Zentrum Dresden-Rossendorf.
###### Abstract
Steerable convolutional neural networks (CNNs) provide a general framework for building neural networks equivariant to translations and transformations of an origin-preserving group \(G\), such as reflections and rotations. They rely on standard convolutions with \(G\)-steerable kernels obtained by analytically solving the group-specific equivariance constraint imposed onto the kernel space. As the solution is tailored to a particular group \(G\), implementing a kernel basis does not generalize to other symmetry transformations, complicating the development of general group equivariant models. We propose using implicit neural representation via multi-layer perceptrons (MLPs) to parameterize \(G\)-steerable kernels. The resulting framework offers a simple and flexible way to implement Steerable CNNs and generalizes to any group \(G\) for which a \(G\)-equivariant MLP can be built. We prove the effectiveness of our method on multiple tasks, including N-body simulations, point cloud classification and molecular property prediction.
## 1 Introduction
Equivariant deep learning is a powerful tool for high-dimensional problems with known data domain symmetry. By incorporating this knowledge as inductive biases into neural networks, the hypothesis class of functions can be significantly restricted, leading to improved data efficiency and generalization performance [6]. Convolutional neural networks [28] (CNNs) are a prominent example as they are equivariant to translations. Group-equivariant CNNs [9] (G-CNNs) generalize CNNs to exploit a larger number of symmetries via group convolutions, making them equivariant to the desired symmetry group, such as the Euclidean group \(E(n)\) that encompasses translations, rotations, and reflections in \(n\)-dimensional Euclidean space. In physics and chemistry, many important problems, such as molecular modelling or point clouds, rely on the Euclidean group. Objects defined in physical space have properties that are invariant or equivariant to Euclidean transformations, and respecting this underlying symmetry is often desired for the model to perform as expected.
Neural networks can be parameterized in various ways to incorporate equivariance to the Euclidean group. One option is to use a message-passing neural network as a backbone and compute/update messages equivariantly via convolutions. This approach generalizes well to point clouds and graphs and offers the high expressivity of graph neural networks. Equivariant convolutional operators can be further categorized as regular [3; 9; 14; 26] or steerable group convolutions [10; 52; 11]. The latter recently proved to be especially suitable for incorporating physical and geometric quantities into a model [5]. The key idea behind Steerable CNNs is using standard convolution - which guarantees translation equivariance - with \(G\)-steerable kernels that ensure commutativity with the transformations of another group \(G\), such as rotations. The commutation requirement imposes a constraint onto the kernel space that must be solved analytically for each group \(G\). This, in turn, does not allow generalizing a convolution operator tailored to a specific group to other symmetry transformations. Inthe case of the Euclidean group, Cesa _et al.[7]_ proposed a generally applicable way of parameterizing steerable convolutions for sub-groups of \(E(n)\). The method relies on adapting a pre-defined kernel basis explicitly developed for the group \(E(n)\) to an arbitrary sub-group by using _group restriction_.
However, because only a finite basis can be chosen, a basis tailored for \(E(n)\) can be sub-optimal in terms of expressiveness for its sub-groups; see Section 3.6 more details. Hence, we propose an alternative way of building steerable convolutions based on implicit neural kernels, i.e. convolutional kernels implemented as continuous functions parameterized by MLPs [38, 39]. We demonstrate how \(G\)-steerable convolutions with implicit neural kernels can be implemented from scratch for any sub-group \(G\) of the orthogonal group \(O(n)\). The method allows us to ultimately minimize requirements to implement equivariance to new groups; see Section 3.3. The flexibility of neural functions also permits the injection of geometric and physical quantities in point convolutions, increasing their expressiveness [5]; see Section 3.2. We validate our framework on synthetic N-body simulation problem, point-cloud data (ModelNet-40 [56]) and molecular data (QM9 [55]) and demonstrate key benefits of our approach such as flexibility and generalizability. Besides, we demonstrate that implicit kernels allow Steerable CNNs to achieve performance competitive with state-of-the-art models and surpass them with the correct choice of the task-specific symmetry group.
## 2 Background: Steerable Convolutions
In this work, we propose a general solution to easily build Steerable CNNs equivariant to translations _and_ any compact group \(G\)3. In Section 2.1, we provide some necessary prerequisites from group theory and group representation theory [43]. Then, we review the framework of Steerable CNNs and discuss the constraint it induces on the convolutional kernels in Section 2.2.
Footnote 3: We provide the definition for a group in Appendix A.1.
### Groups, Representations and Equivariance
**Definition 1** (Group action).: _An action of a group \(G\) on a set \(\mathcal{X}\) is a mapping \((g,x)\mapsto g.x\) associating a group element \(g\in G\) and a point \(x\in\mathcal{X}\) with some other point on \(\mathcal{X}\) such that the following holds:_
\[g.(h.x)=(gh).x\qquad\forall g,h\in G,x\in\mathcal{X}\]
**Definition 2** (Group representation).: _A linear representation \(\rho\) of a group \(G\) is a map \(\rho:G\rightarrow\mathbb{R}^{d\times d}\) that assigns an invertible matrix \(\rho(g)\)\(\forall g\in G\) and satisfies the following condition_
\[\rho(gh)=\rho(g)\rho(h)\ \forall g,h\in G.\]
A group representation \(\rho(g):V\to V\) furthermore can be seen as a linear action of \(G\) on a vector space \(V\). Additionally, if two (or more) vectors \(v_{1}\in\mathbb{R}^{d_{1}}\) and \(v_{2}\in\mathbb{R}^{d_{2}}\) belong to vector spaces transforming under representations \(\rho_{1}\) and \(\rho_{2}\), their concatenation \(v_{1}\oplus v_{2}\in\mathbb{R}^{d_{1}+d_{2}}\) transforms under the _direct sum_ representation \(\rho_{1}\oplus\rho_{2}\). \((\rho_{1}\oplus\rho_{2})(g)\) is a \(d_{1}+d_{2}\) dimensional block-diagonal matrix containing \(\rho_{1}(g)\) and \(\rho_{2}(g)\) in its diagonal blocks.
There are three types of group representations that are important for the definition of our method:
* **Trivial representation:** all elements of \(G\) act on \(V\) as the identity mapping of \(V\), i.e. \(\rho(g)=I\).
* **Standard representation:** the group \(O(n)\) of all orthogonal \(n\times n\) matrices has a natural action on \(V=\mathbb{R}^{n}\); similarly, if \(G\) is a subgroup of \(O(n)\), elements of \(G\) can act on \(V=\mathbb{R}^{n}\) via the inclusion mapping, i.e. \(\rho_{st}(g)=g\in\mathbb{R}^{n\times n}\).
Figure 1: Illustration of the proposed approach: computing the response of an implicit kernel \(k\) (background) of a steerable point convolution for the node \(i\) (upper right corner) of a graph with steerable features (visualized as spherical functions).
* **Irreducible representations** is a collection of generally known representations that can be used as a building block for larger representations via _direct sum_. As argued in [49], Steerable CNNs can be parameterized without loss of generality solely in terms of irreducible representations.
**Definition 3** (Equivariance).: _Let us have two spaces \(\mathcal{X},\mathcal{Y}\) endowed with a symmetry group \(G\), i.e. with an action defined on them. A function \(\phi:\mathcal{X}\rightarrow\mathcal{Y}\) is called \(G\)-equivariant, if it commutes with the action of \(G\) on the two spaces, i.e. \(\phi(g.x)=g.\phi(x)\) for all \(g\in G\), \(x\in X\)._
As we have discussed, the layers of conventional CNNs are translation equivariant by design; however, they do not commute with transformations of other groups, such as rotations and reflections.
### Steerable CNNs
Steerable CNNs provide a more general framework that allows building convolutions that are equivariant to a group of _isometries_ of \(\mathbb{R}^{n}\), i.e. \((\mathbb{R}^{n},+)\rtimes G\leq E(n)\). Those groups are decomposable as a semi-direct4 product of the translations group \((\mathbb{R}^{n},+)\) and an origin-preserving compact5 group \(G\leq O(n)\), where \(O(n)\) is the group of \(n\)-dimensional rotations and reflections. As translation equivariance is guaranteed [52] by the convolution operator itself, one only has to ensure equivariance to \(G\). See [51] for a more in-depth description of Steerable CNNs.
Footnote 4: See Appendix A.1 for the definition of a semi-direct product.
Footnote 5: To remain in the scope of the manuscript, we abstain from the mathematical definition of compact groups, which requires introducing topological groups. One can find more information about compact groups in [26].
The feature spaces of Steerable CNNs are described as collections of _feature fields_. A feature field of type \(\rho\) is a feature map \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\) endowed with a _group representation_\(\rho:G\rightarrow\mathbb{R}^{d\times d}\) that defines how an element \(g\in G\) transforms the feature:
\[[g.f](x):=\rho(g)f(g^{-1}.x) \tag{1}\]
Furthermore, each convolutional layer is a map between feature fields. For the map to be equivariant, it must preserve the transformation laws of its input and output feature fields. In practice, it means that the following constraint onto the space of convolution kernels must be applied [52]:
\[k(g.x)=\rho_{out}(g)k(x)\rho_{in}(g)^{-1}\qquad\forall g\in G,x\in\mathbb{R}^{n} \tag{2}\]
where \(k:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{out}\times d_{in}}\), and \(\rho_{in}:G\rightarrow\mathbb{R}^{d_{in}\times d_{in}}\), \(\rho_{out}:G\rightarrow\mathbb{R}^{d_{out}\times d_{out}}\) are respective representations of input and output feature fields.
To parameterize the kernel \(k\), the constraint in equation 2 needs to be solved _analytically_ for each specific group \(G\) of interest. This renders a general solution challenging to obtain and limits the applicability of steerable convolutions.
## 3 Implicit neural kernels
Instead of deriving a steerable kernel basis for each particular group \(G\), we propose parameterizing the kernel \(k:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{out}\times d_{in}}\) with an MLP satisfying the constraint in Eq. 2. The approach only requires the \(G\)-equivariance of the MLP and suggests a flexible framework of implicit steerable convolutions that generalizes to arbitrary groups \(G\leq O(n)\). We argue about the minimal requirements of this approach in Section 3.3.
We first define the kernel as an equivariant map between vector spaces that we model with an MLP (see Section 3.1). Then, we demonstrate that \(G\)-equivariance of an MLP is a sufficient condition for building the implicit representation of steerable kernels for _compact_ groups. We indicate that the flexibility of neural representation allows expanding the input of a steerable kernel in Section 3.2. Next, we describe how a \(G\)-equivariant MLP can be implemented in section 3.3. Later, we describe how one can implement \(G\)-steerable point convolution in the form of equivariant message passing [40, 5] in Section 3.4 and its generalization to dense convolution in Section 3.5. Finally, we compare our method with the solution strategy proposed in [7] in Section 3.6.
### Kernel vectorization and equivariance
Our goal is to implement the kernel \(k:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{out}\times d_{in}}\) of a \(G\)-steerable convolution that maps between spaces of feature fields with representations \(\rho_{in}\) and \(\rho_{out}\). The kernel itself is a function whose input in \(\mathbb{R}^{n}\) transforms under the _standard representation_\(\rho_{st}\) (as defined in Section 2.1) and which we will model with an MLP. Since MLPs typically output vectors, it is convenient to _vectorize_ the \(d_{out}\times d_{in}\) output of the kernel. We denote the column-wise vectorization of a matrix \(M\in\mathbb{R}^{d_{1}\times d_{2}}\) as \(vec(M)\in\mathbb{R}^{d_{1}d_{2}}\). Henceforth, we will consider kernel's vector form \(vec(k(\cdot)):\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{out}d_{in}}\).
Let \(\otimes\) denote the _Kronecker product_ between two matrices. Then, \(\rho_{\otimes}(g):=\rho_{in}(g)\otimes\rho_{out}(g)\) is also a representation6 of \(G\). We suggest an implicit representation of the vectorized kernel \(k\) using an \(G\)-equivariant MLP \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{out}d_{in}}\) based on the following lemma (see A.2 for the proof):
Footnote 6: This representation is formally known as the _tensor product_ of the two representations.
**Lemma 1**.: _If a kernel \(k\) is parameterized by a \(G\)-equivariant MLP \(\phi\) with input representation \(\rho_{st}\) and output representation \(\rho_{\otimes}:=\rho_{in}\otimes\rho_{out}\), i.e. \(vec(k)(x):=\phi(x)\), then the kernel satisfies the equivariance constraint in Equation 2 for a compact group \(G\)._
In other words, \(G\)-equivariance of MLP is a sufficient condition for \(G\)-equivariance of the convolutional layer whose kernel it parameterizes. Using implicit kernels also has a very favourable property - it allows arbitrary steerable features as its input, which we discuss in the following section.
### Expanding the input
Note that in the case of standard steerable convolutions, the input \(x\in\mathbb{R}^{n}\) of a kernel is usually only the difference between the spatial position of two points. However, there is no requirement that would disallow the expansion of the input space except for practical reasons. Hence, here we augment steerable kernels with an additional feature vector \(z\in\mathbb{R}^{d_{z}}\). This formulation allows us to incorporate relevant information in convolutional layers, such as physical and geometric features. For example, when performing convolutions on molecular graphs, \(z\) can encode the input and output atoms' types and yield different responses for different atoms. If introducing additional arguments into a kernel, the steerability constraint in equation 2 should be adapted to account for transformations of \(z\):
\[k(g.x,\rho_{z}(g)z)=\rho_{out}(g)k(x,z)\rho_{in}(g)^{-1} \tag{3}\]
which must hold for all \(g\in G,x\in\mathbb{R}^{n},z\in\mathbb{R}^{d_{z}}\), where \(\rho_{z}:G\rightarrow\mathbb{R}^{d_{z}\times d_{z}}\) is the representation of \(G\) acting on the additional features.
Again, analytically solving the constraint 3 to find a kernel basis for arbitrary \(\rho_{z}\) is generally unfeasible. Note also that the solution strategy proposed in [7] requires a basis for functions over \(\mathbb{R}^{n+d_{z}}\), whose size tends to grow exponentially with \(d_{z}\) and, therefore, is not suitable. Alternatively, we can now use the flexibility of neural representation and introduce additional features into a kernel at no cost.
### Implementing a \(G\)-equivariant MLP
We are now interested in how to build an MLP that is equivariant to the transformations of the group \(G\), i.e. a sequence of equivariant linear layers alternated with equivariant non-linearities [15, 37, 44]. It is important to say that our approach does not rely on a specific implementation of \(G\)-MLPs, and any algorithm of preference might be used (e.g. [15] or enforcing via an additional loss term).
The approach we employed in our experiments is described below. Since the irreducible representations of \(G\) are typically known7, one can always rely on the following properties: _1)_ any representation \(\rho\) of a compact group \(G\) can be decomposed as a direct sum of irreducible representations \(\rho(g)=Q^{T}\left(\bigoplus_{i\in I}\psi_{i}(g)\right)Q\) with a change of basis \(Q\)8 and _2) Schur's Lemma_, which states that there exist equivariant linear maps only between the irreducible representation of the same kind9. Hence, one can apply the right change of basis to the input and output of a linear layer and then learn only maps between input and output channels associated with the same irreducible representations. In the context of implicit kernels, the tensor product representation \(\rho_{in}\otimes\rho_{out}\) in the last layer isdecomposed by a matrix containing the Clebsch-Gordan coefficients, which often appears in the analytical solutions of the kernel constraint in the related works. Note that the non-linearity \(\sigma\) used in the Steerable CNNs are \(G\)-equivariant and, therefore, can be used for the MLP as well.
### \(G\)-steerable point convolutions
Point convolutions operate on point clouds - sets of \(N\) points endowed with spatial information \(X=\{x_{i}\}_{i=0}^{N-1}\in\mathbb{R}^{n\times N}\). A point cloud thus provides a natural discretization of the data domain, which renders a convolution operator as follows:
\[f_{out}(x_{i})=(k*f_{in})(x_{i})=\sum_{0\leq j\leq N-1}k(x_{i}-x_{j})f_{in}(x_{ j}) \tag{4}\]
To reduce the computational cost for large objects, one can induce connectivity onto \(x\in X\) and represent a point cloud as a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with nodes \(v_{i}\in\mathcal{V}\) and edges \(e_{ij}\in\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\), where each node \(v_{i}\) has a spatial location \(x_{i}\), node features \(z_{i}\) and a corresponding feature map \(f_{in}(x_{i})\) (see Figure 1). Additionally, each edge \(e_{ij}\) can have an attribute vector \(z_{ij}\) assigned to it (as in Section 3.2). This allows for a learnable message-passing point convolution whose computational cost scales linearly with the number of edges:
\[f_{out}(x_{i})=(k*f_{in})(x_{i})=\sum_{j\in\mathcal{N}(i)}k(x_{i}-x_{j},z_{i},z_{j},z_{ij})f_{in}(x_{j}) \tag{5}\]
where \(\mathcal{N}(i)=\{j:(v_{i},v_{j})\in\mathcal{E}\}\) and the kernel \(k(\cdot)\) is parameterized by a \(G\)-equivariant MLP.
### Extension to \(G\)-steerable CNNs
Straightforwardly, the proposed method can be extended to dense convolutions. In such case, the kernel is defined as \(k:\mathbb{R}^{n}\rightarrow\mathbb{R}^{c_{out}\times c_{in}\times K^{n}}\) - that is, a continuous function that returns a collection of \(K\times K\times...\) kernels given a relative position (and, optionally, arbitrary steerable features). In this case, the vectorized kernel is parameterized in exactly the same way as described above but is estimated at the center of each pixel.
### Comparison with the analytical solution
A general basis for any \(G\)-steerable kernel is described in [7]. Essentially, it relies on two ingredients: _i)_ a (pre-defined) finite \(G\)-steerable basis [17] for _scalar_ filters and _ii)_ a learnable equivariant linear map. Let us look at those in detail. Firstly, a finite \(G\)-steerable basis is essentially a collection of \(B\) orthogonal functions, i.e. \(Y:\mathbb{R}^{n}\rightarrow\mathbb{R}^{B}\), with the following equivariant property: \(Y(g.x)=\rho_{Y}(g)Y(x)\), for some representation \(\rho_{Y}\) of \(G\). The linear map in _ii)_, then, is a general equivariant linear layer, whose input and output transform as \(\rho_{Y}\) and \(\rho_{in}\otimes\rho_{out}\).
In practice, it means that one has to provide a pre-defined basis \(Y\) for a group of interest \(G\). Since the design might not be straightforward, [7] suggest a way to _reuse_ an already derived \(O(n)\) steerable basis for any subgroup \(G\subset O(n)\). While general, such a solution can be sub-optimal. For example, if \(n=3\), an \(O(3)\)-steerable basis has local support inside a sphere, which is suitable for the group of 3D rotations \(SO(3)\) but not ideal for cylindrical symmetries, i.e. when \(G\) is the group \(SO(2)\) of planar rotations around the Z axis.
In comparison, the solution proposed in Section 3.1 replaces the pre-defined basis \(Y\) with a learnable \(G\)-MLP. It allows us to learn \(G\)-specific kernels without relying on a pre-derived basis for a larger group, which in turn means that we can theoretically obtain \(G\)-optimal kernel basis via learning (see A.3 for further details). Furthermore, the kernels defined in Section 3.1 can now be conditioned on arbitrary steerable features, which makes them more task-specific and expressive. In the context of a general basis, one can interpret the last linear layer of an implicit kernel as the map in _ii)_, and the activations before this layer as a learnable version of the basis \(Y\) in _i)_.
## 4 Related works
Group convolutions.Multiple architectures were proposed to achieve equivariance to a certain symmetry group. It has been proven [26] that the convolutional structure is a sufficient condition for building a model that is equivariant to translations and actions of a compact group. One can separate group convolutions into two classes depending on the space on which the convolution operates: _regular_[3, 4, 9, 14, 26] and _steerable_[7, 10, 11, 46, 49, 52, 54] group convolutions. In the first case, the input signal is represented in terms of scalar fields on a group \(G\), and the convolution relies on a discretization of the group space. Steerable convolutions are a class of \(G\)-equivariant convolutions that operate on feature fields over homogeneous spaces and achieve equivariance via constraining the kernel space. They further avoid the discretization of the group space and can reduce the equivariance error in the case of continuous groups.
\(G\)-steerable kernels.In the case of rigid body motions \(G=SO(3)\), the solution of the equivariance constraint is given by spherical harmonics modulated by an arbitrary continuous radial function, which was analytically obtained by Weiler _et al.[52]_. Lang and Weiler [27] then applied the Wigner-Eckart theorem to parametrize \(G\)-steerable kernel spaces over orbits of a compact \(G\). The approach was later generalized by Cesa _et al.[7]_, who proposed a solution for any compact sub-group of \(O(3)\) based on group restriction. Using this approach, one can obtain a kernel basis for a group \(G\leq H\) if the basis for \(H\) is known. Despite its generalizability, the method still requires a pre-defined basis for the group \(H\) that is further adapted to \(G\). The resulting solution is not guaranteed to be optimal for \(G\); see Section 3.6. We note that the practical value of steerable kernels is also high as they can be used for convolution over arbitrary manifolds in the framework of Gauge CNNs [8, 21, 50].
Implicit kernels.Using the implicit representation of convolution kernels for regular CNNs is not novel. It was used, for example, to model long-range dependencies in sequential data [39] or for signal representation [45]. Romero _et al.[39]_ demonstrated that such parametrization allows building shallower networks, thus requiring less computational resources to capture global information about the system. Continuous kernels were recently used to build an architecture [25] for processing data of arbitrary resolution, dimensionality and length, yet equivariant solutions are scarce [47]. Finzi _et al.[14]_ proposed parametrizing convolutions on Lie groups as continuous scalar functions in the group space. The method relies on discretizing a continuous group, which might lead to undesirable stochasticity of the model's output. Instead, we use Steerable CNNs that define the kernel as a function on a homogeneous space. While the discretization of this space is still required, in most cases, it is naturally given by the data itself, e.g. for point clouds; hence, no sampling error arises. It is also important to mention the key difference between implicit kernels and applying \(G\)-MLPs [15] directly - the latter is incapable of processing image/volumetric data as convolutions do. Henceforth, we focus on CNNs with consideration for potential extensions to various data modalities.
Equivariant point convolutions.A particular momentum has been gained by point convolutions in the form of equivariant message-passing [5, 40, 41, 46] specifically for problems where symmetry provides a strong inductive bias such as molecular modelling [1] or physical simulations [18]. Thomas _et al.[46]_ pioneered \(SE(3)\)-equivariant steerable convolutions whose kernels are based on spherical harmonics modulated by a radial function. The approach was further generalized by Batzner _et al.[2]_, who uses an MLP conditioned on the relative location to parameterize the radial function, although the basis of spherical harmonics is preserved. Brandstetter _et al.[5]_ demonstrated that introducing geometric and physical information into an equivariant message-passing model improves the expressivity on various tasks, which we also observe in this work. Note that, for \(G=SO(3)\) or \(O(3)\) and without additional edge features \(z\), our MLP can only learn a function of the radius and, therefore, is equivalent to the models proposed in [2].
## 5 Experiments
In this section, we implement Steerable CNNs with implicit kernels and apply them to various tasks10. First, we indicate the importance of correctly choosing the symmetry group on a synthetic N-body simulation problem where an external axial force breaks the rotational symmetry (see Section 5.2). Then, we prove the generalizability of the proposed approach as well as the gain in performance compared to the method proposed in [7] on ModelNet-40 (see Section 5.3). Afterwards, we show that one can introduce additional physical information into a kernel and significantly improve the performance of steerable convolutions on molecular data (see Section 5.4). Code and data to reproduce all experiments are available on GitHub.
Footnote 10: All datasets were downloaded and evaluated by Maksim Zhdanov (University of Amsterdam).
### Implementation
Implicit kernels.To parameterize implicit kernels, we employ linear layers (see Section 3.3) followed by quotient ELU non-linearities [7]. The last layer generates a steerable vector, which we reshape to yield a final convolutional kernel. The \(G\)-MLP takes as input a steerable vector obtained via direct sum (concatenation) of batch-normalized harmonic polynomial representation of the relative location \(x\), edge features \(z_{ij}\) and input node features \(z_{i}\). See details on pre-processing and task-specific inputs in Appendix B.1.
Steerable convolutions.We employ steerable point convolutional layers as described in Equation 5. For each task, we tune hyperparameters of all models on validation data: the number of layers, the number of channels in each layer, the depth and the width of implicit kernels, and the number of training epochs. For all the experiments, except the one reported in Table 2, we yield a number of parameters similar to the baselines with a deviation of less than 10%. For ModelNet-40 experiments, our architecture is partially inspired by the model introduced in [34], which uses gradual downsampling of a point cloud11. For N-body and QM9 experiments, we add residual connections to each steerable layer to learn higher-frequency features [20]. For QM9 and ModelNet-40 experiments, the last steerable layer returns a vector with invariant features for each node, to which we apply global pooling to obtain a global representation of an object, which is further passed to a classification MLP. In the case of N-body experiments, we output the standard representation corresponding to the particles' coordinates. Details about optimization and model implementation for each task can be found in Appendix B.2.
Footnote 11: Poulenard _et al.[34]_ use kd-tree pooling to compute coarser point clouds, while we only use random sampling of points. Note also that the spherical quotient ELU we used is similar in spirit to the proposed functional non-linearity described there, yet it does not employ a deep MLP.
### The relevance of smaller \(G<O(n)\): N-body simulation
Dataset.We conduct experiments using the N-body system [23], where particles are connected through springs, and their interaction is governed by Hooke's law. Similar to previous studies [5; 40], we modify the original trajectory prediction task to calculate the position of each particle in a 3-dimensional space, given its initial position and velocity. We attach each particle to the XY plane using strings with random equilibrium lengths and pre-defined stiffness, which can slide freely over the plane (see Figure 2, upper left corner). This additional force term breaks the rotational symmetry of the system into only azimuthal symmetry; this system resembles a simplified version
Figure 3: Performance comparison of Steerable CNNs on the rotated ModelNet-40 dataset for different \(G\). Bars show mean average accuracy with error bars indicating standard deviation, both computed from 5 runs. The numbers above bars denote the performance gain of implicit kernels (orange) over [7] (blue). Statistically significant (\(p<0.05\)) positive differences are green, negative ones are red, and insignificant ones are yellow. \(SO(2)\) contains planar rotations around the Z axis, \(M\) and \(Inv\) contain mirroring along the \(X\) axis and origin respectively, while \(F\) contains rotations by \(\pi\) around the \(X\) axis. \(O(2)\cong SO(2)\rtimes M\) achieves the best accuracy as it best represents the real symmetries of the data.
Figure 2: Final position estimation in the N-body system experiment, where particles are connected to one another and to the XY-plane by springs (see left upper corner). Our model with correct axial symmetry \(SO(2)\), significantly outperforms the state-of-the-art model SEGNN [5], which is \(O(3)\)-equivariant, as the relative contribution of the plane string increases.
of the problem of molecule binding to the surface of a larger molecule. We choose the model's hyperparameters to have a similar parameter budget to the SEGNN model [5], which has shown state-of-the-art performance on a similar task (we also compare against a non-equivariant baseline; see Table A2). For all models, we use the highest frequency of hidden representations in both \(G\)-MLP and Steerable CNNs equal to \(1\). We use the velocity of a particle and the equilibrium length of the attached XY spring as input; the model's output transforms under the standard representation. We train a separate model for each value of plane strings' stiffness (3000 training points) to measure the impact of symmetry breaks on performance.
Results.As can be seen in Fig. 2, a Steerable CNN with azimuthal symmetry \(SO(2)\) significantly outperforms SEGNN, which is equivariant to a larger group \(O(3)\). Since we introduced a force term that now breaks the rotational symmetry, SEGNN struggles to learn it. Furthermore, while in the default setting (plane strings' stiffness is 0), models achieve roughly the same performance, the absolute difference grows exponentially once the plane strings are introduced.
The synthetic problem is meant to show the importance of choosing the correct symmetry when designing a model. While the Euclidean group E(3) is often enough for N body systems in a vacuum, it is important to be careful when an external influence or underlying structure can disrupt global symmetries and negatively impact a model with a larger symmetry group. This can be relevant in scenarios like molecular docking simulations [12], where the system can align with the larger molecule, or material science [22], where the arrangement of atoms and crystal lattice structures yield a discrete group of symmetries smaller than \(O(n)\).
### Generalizability of implicit kernels: ModelNet-40
Dataset.The ModelNet-40 [56] dataset contains 12311 CAD models from the 40 categories of furniture with the orientation of each object aligned. The task is to predict the category of an object based on its point cloud model. 2468 models are reserved for the test partition. From the remaining objects, we take 80% for training and 20% for validation. We augment each partition with random rotations around the Z-axis. We induce connectivity on point clouds with a k-nearest neighbour search with \(k=10\) at each model layer and use normals as input node features.
Results.In this section, we demonstrate how one can build a Steerable CNN that is equivariant to an arbitrary subgroup of the Euclidean group \(G\rtimes(\mathbb{R}^{3},+)\leq E(3)\). We compare the performance of implicit kernels with the standard steerable kernels obtained by group restriction [7] and keep the number of parameters similar. The results are shown in Figure 3. Implicit kernels achieve significant improvement in accuracy on test data for the majority of groups. The only statistically significant negative difference is presented for \(G=SO(3)\), for which a tailored and hence optimal kernel basis is already available. When a custom solution is unknown, implicit kernels often significantly outperform the previously proposed method. Therefore, they pose an efficient toolkit for building a kernel basis for an arbitrary subgroup of \(E(3)\). We report the result of the best-performing model in Table 1. Although we do not outperform more task-specific and involved approaches such as PointMLP [31], our model is on par with other group equivariant models. We emphasize that our model is a simple stack of convolutional layers and hypothesize that a smarter downsampling strategy, residual connections and hierarchical representations would significantly improve the overall performance of our model, which we leave for further research.
### Flexibility of implicit kernels: QM9
Dataset.The QM9 dataset [55] is a public dataset consisting of about \(130\)k molecules with up to 29 atoms per molecule. Each molecule is represented by a graph with nodes denoting atoms and edges indicating covalent bonds. Each node is assigned a feature vector consisting of one-hot encoding of the type of the respective atom (H, C, N, O, F) and its spatial information corresponding to a low energy conformation. Additionally, each molecule is described by 19 properties from which we select
\begin{table}
\begin{tabular}{l c} \hline \hline Method & OA, \% \\ \hline Spherical-CNN [13]\(*\) & 88.9 \\ SE(3)-ESN [16]\(*\) & 89.1 \\ TFN[mlp] P [34]\(*\) & 89.4 \\ PointNet++ [35] & 91.8 \\ SFCNN [36] & 92.3 \\ PointMLP [31] & 94.5 \\ \hline Ours & 89.3 \(\pm\) 0.06 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overall accuracy (OA) on ModelNet-40. Group equivariant methods are denoted by \(*\).
12 commonly taken in literature [5] for regression tasks. Different from common practice [5], we perform convolution on molecular graphs, i.e. with connectivity pre-defined by molecular structure instead of inducing it. The design choice is motivated by the presence of edge features, which we include in an implicit kernel to display the flexibility of neural representation.
Results.
We first demonstrate how one can use the flexibility of neural representation to introduce additional features of choice into a steerable convolutional layer. As each pair of atoms connected by a covalent bond is assigned a one-hot encoding of the bond type \(z_{ij}\), we use it as a condition for \(O(3)\)-equivariant implicit kernels. Additionally, we follow Musaelian _et al.[33]_ and embed one-hot encoding of the center and neighbour atom types \(z_{i}\) and \(z_{j}\) into the MLP. We include each property one by one and indicate corresponding performance gain in Figure 4. First, we observed a sharp improvement by switching from standard steerable kernels to implicit ones. We attribute it to the higher expressivity of implicit kernels and their ability to learn more complex interactions. Furthermore, injecting edge attributes into the kernel computation reduced MAE even further. Introducing both atom types and an edge type significantly influenced the performance, corresponding to the model learning how to process each specific combination differently, thus adding to its expressivity. It is not surprising since a similar result was obtained by Brandstetter _et al.[5]_, who used non-linear message aggregation conditioned on physical information, yielding state-of-the-art performance.
Scaling the model up.Table 2 shows the results of an \(E(3)\)-equivariant Steerable CNN with implicit kernels on the QM9 dataset. Both models we reported here have approx. \(2\cdot 10^{6}\) parameters and only differ in length and width of the convolutional layers. For non-energy regression tasks (\(\alpha\), \(\Delta\varepsilon\), \(\varepsilon_{HOMO}\), \(\varepsilon_{LUMO}\), \(\mu\) and \(C_{\nu}\)), we obtain results that are on par with the best-performing message-passing based approaches (we do not compare against transformers). We also indicate that Steerable CNNs with implicit kernels significantly outperform steerable linear convolutions (TFN, LieConv, L1Net) on most tasks. This is consistent with the observation of Brandstetter _et al.[5]_, who pointed out that non-linear convolutions generally perform better than linear ones. For the remaining energy variables (G, H, U, \(U_{0}\), ZPVE) and \(R^{2}\), our model significantly falls behind the task-specific benchmark approaches. We theorize that it can be attributed to two factors. First, steerable convolutions generally do not perform well on these tasks compared to problem-tailored
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline Task & \(\alpha\) & \(\Delta\varepsilon\) & \(\varepsilon_{HOMO}\) & \(\varepsilon_{LUMO}\) & \(\mu\) & \(C_{\nu}\) & G & H & \(R^{2}\) & U & \(U_{0}\) & ZPVE \\ Units & bohr2 & meV & meV & meV & D & cal/mol K & meV & meV & meV & bohr2 & meV & meV & meV \\ \hline NMP[19] &.092 & 69 & 43 & 38 &.030 &.040 & 19 & 17 & 0.180 & 20 & 20 & 1.50 \\ SchNet[41] &.235 & 63 & 41 & 34 &.033 &.033 &.033 & 14 & 14 & 0.073 & 19 & 14 & 1.70 \\ SE(3)-Tr[18] &.142 & 53 & 35 & 33 &.051 &.054 & - & - & - & - & - & - \\ DinnerNet++[24] &.043 & 32 & 24 & 19 &.029 &.023 &.073 & 7 & 6 & 0.331 & 6 & 6 & 1.21 \\ SphereNet[29] &.046 & 32 & 23 & 18 &.026 &.021 & 8 & 6 & 0.292 & 7 & 6 & 1.12 \\ PaiNN[42] &.045 & 45 & 27 & 20 &.012 &.024 & 7 & 6 & 0.066 & 5 & 5 & 1.28 \\ EGNN[40] &.071 & 48 & 29 & 25 &.029 &.031 & 12 & 12 & 0.106 & 12 & 12 & 1.55 \\ SEGNN[5] &.060 & 42 & 24 & 21 &.023 &.031 & 15 & 16 & 0.660 & 13 & 15 & 1.62 \\ TFN[46]* &.223 & 58 & 40 & 38 &.064 &.101 & - & - & - & - & - & - \\ Command[1]* &.085 & 61 & 34 & 38 &.038 &.026 & 20 & 21 & 0.961 & 21 & 22 & 2.02 \\ L1Net[32]* &.088 & 68 & 46 & 35 &.043 &.031 & 14 & 14 & 0.354 & 14 & 13 & 1.56 \\ LieConv [14]* &.084 & 49 & 30 & 25 &.032 &.038 & 22 & 24 & 0.800 & 19 & 19 & 2.28 \\ \hline Ours (W=24, L=15) &.078 & 45.3 & 24.1 & 22.3 &.033 &.032 & 21.1 & 19.6 & 0.809 & 19.7 & 19.5 & 2.08 \\ Ours (W=16, L=30) &.077 & 43.5 & 22.8 & 22.7 &.029 &.032 & 19.9 & 21.5 & 0.851 & 22.5 & 22.3 & 1.99 \\ \hline \end{tabular}
\end{table}
Table 2: Mean Absolute Error (MAE) between model predictions and ground truth for the molecular property prediction on the QM9 dataset. Linear steerable convolutions are denoted by \(*\). L stands for the number of layers, and W stands for the number of channels in each layer.
Figure 4: Using implicit kernels \(k^{G}_{MLP}\) and injecting it with bond and atoms properties significantly improves the performance of Steerable CNNs on the QM9 dataset (Mean Absolute Error on the \(\varepsilon_{HOMO}\) regression problem). Bars denote mean average accuracy on the test dataset with error bars corresponding to standard deviation; both computed on 5 runs. The kernel is \(O(3)\)-equivariant rendering the final architecture \(E(3)\)-equivariant.
frameworks (PaiNN [42], DimeNet++ [24], SphereNet [29]), also in the non-linear case (e.g. SEGNN [5]). Second, we hypothesize that molecular connectivity does not produce a sufficient number of atom-atom interactions, which is crucial for the performance of a message-passing-based model [5]. However, as the goal of the section was to demonstrate the flexibility of implicit kernels that can be conditioned on features of graph edges, we leave developing more involved architectures (e.g. with induced connectivity) for further work.
## 6 Conclusion
We propose a novel approach for implementing convolutional kernels of Steerable CNNs, allowing for the use of smaller groups and easy integration of additional features under the same framework with minor changes. To avoid analytically solving the group \(G\)-specific equivariance constraint, we use a \(G\)-equivariant MLP to parameterize a \(G\)-steerable kernel basis. We theoretically prove that MLP equivariance is sufficient for building an equivariant steerable convolutional layer. Our implicit representation outperforms a previous general method, offering a way to implement equivariance to various groups for which a custom kernel basis has not been developed. The N-body experiment suggests that this method will be particularly applicable in scenarios where rotational symmetry is disturbed, such as in material science or computational chemistry. The critical force term, which violated rotational symmetry, was effectively captured by our model, while the state-of-the-art model struggled with it. Additionally, our flexible neural representation enables the introduction of arbitrary features into a convolutional kernel, enhancing the expressivity of Steerable CNNs. We validate this advantage by applying Steerable CNNs to point cloud and molecular graph data and achieving competitive performance with state-of-the-art approaches. In conclusion, we present a simple yet efficient solution for constructing a general kernel basis equivariant to an arbitrary compact group.
## Limitations
Steerable CNNs generally have high computational complexity and higher memory requirements compared to traditional CNNs. Implicit neural kernels do not help to mitigate the issue, yet they provide additional control over the kernel complexity. We found that when the kernels are parameterized by a single linear layer, the run time slightly decreases compared to the analytical solution12, while a relative performance gain remains. We suggest that more efficient ways to implement \(G\)-MLPs would significantly contribute to the acceleration of the method and leave it for further research. From the implementation point of view, the most troublesome was achieving initialization similar to the non-implicit convolutions. We expect the problem to manifest even stronger when dealing with dense convolutions. This, combined with the discretization error of matrix filters, might negatively affect the performance. We are, however, convinced that it is a matter of time before a robust way of initializing \(G\)-equivariant implicit kernels will be obtained.
Footnote 12: as implemented in escnm[7]
## Acknowledgments and Disclosure of Funding
All the experiments were performed using the Hemera compute cluster of Helmholtz-Zentrum Dresden-Rossendorf and the IvI cluster of the University of Amsterdam. This research results from a collaboration initiated at the London Geometry and Machine Learning Summer School 2022 (LOGML). The authors thank Anna Meszaros, Chen Cai and Ahmad Hammoudeh for their help at the initial stage of the project. They also thank Rob Hesselink for his assistance with visualizations.
## References
* [1] Brandon M. Anderson, Truong Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. In _NeurIPS_, 2019.
* [2] Simon Batzner, Tess E. Smidt, Lixin Sun, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. _Nature Communications_, 13, 2022.
* [3] Erik J. Bekkers. B-spline cnns on lie groups. _ArXiv_, abs/1909.12057, 2020.
* [4] Erik J. Bekkers, Maxime W Lafarge, Mitko Veta, Koen A.J. Eppenhof, Josien P.W. Pluim, and Remco Duits. Roto-translation covariant convolutional networks for medical image analysis. In _MICCAI_, 2018.
* [5] Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J. Bekkers, and Max Welling. Geometric and physical quantities improve e(3) equivariant message passing. _ArXiv_, abs/2110.02905, 2022.
* [6] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Velivckovic'c. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. _ArXiv_, abs/2104.13478, 2021.
* [7] Gabriele Cesa, Leon Lang, and Maurice Weiler. A program to build e(n)-equivariant steerable cnns. In _ICLR_, 2022.
* [8] Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral cnn. In _ICML_, pages 1321-1330. PMLR, 2019.
* [9] Taco Cohen and Max Welling. Group equivariant convolutional networks. In _ICML_, 2016.
* [10] Taco S. Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant CNNs on homogeneous spaces. _NeurIPS_, 32, 2019.
* [11] Taco S. Cohen and Max Welling. Steerable CNNs. In _ICLR_, Nov. 2016.
* [12] Gabriele Corso, Hannes Stark, Bowen Jing, Regina Barzilay, and T. Jaakkola. Diffdock: Diffusion steps, twists, and turns for molecular docking. _ArXiv_, abs/2210.01776, 2022.
* [13] Carlos Esteves, Christine Allen-Blanchette, Amesch Makadia, and Kostas Daniilidis. Learning so(3) equivariant representations with spherical cnns. _International Journal of Computer Vision_, 128:588-600, 2017.
* [14] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In _ICML_, 2020.
* [15] Marc Finzi, Max Welling, and Andrew Gordon Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. _ArXiv_, abs/2104.09459, 2021.
* [16] Daniel Franzen and Michael Wand. Nonlinearities in steerable so(2)-equivariant cnns. _ArXiv_, abs/2109.06861, 2021.
* [17] William T. Freeman and Edward H. Adelson. The design and use of steerable filters. _IEEE Transactions on Pattern Analysis & Machine Intelligence_, (9):891-906, 1991.
* [18] Fabian B. Fuchs, Daniel E. Worrall, Volker Fischer, and Max Welling. Se(3)-transformers: 3d roto-translation equivariant attention networks. _ArXiv_, abs/2006.10503, 2020.
* [19] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. _ArXiv_, abs/1704.01212, 2017.
* [20] Francesco Di Giovanni, James R. Rowbottom, Benjamin Paul Chamberlain, Thomas Markovich, and Michael Bronstein. Graph neural networks as gradient flows: understanding graph convolutions via energy. 2022.
* [21] Pim De Haan, Maurice Weiler, Taco Cohen, and Max Welling. Gauge equivariant mesh {cnn}s: Anisotropic convolutions on geometric graphs. In _ICLR_, 2021.
* [22] Sekouba Kaba and Siamak Ravanbakhsh. Equivariant networks for crystal structures. _ArXiv_, abs/2211.15420, 2022.
* [23] Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. _ArXiv_, 2018.
* [24] Johannes Klicpera, Shankari Giri, Johannes T. Margraf, and Stephan Gunnemann. Fast and uncertainty-aware directional message passing for non-equilibrium molecules. _ArXiv_, abs/2011.14115, 2020.
* [25] David M. Knigge, David W. Romero, Albert Gu, Efstratios Gavves, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn, and Jan-Jakob Sonke. Modelling long range dependencies in n-d: From task-specific to a general purpose cnn. _ArXiv_, abs/2301.10540, 2023.
* [26] Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. _ArXiv_, abs/1802.03690, 2018.
* [27] Leon Lang and Maurice Weiler. A Wigner-Eckart theorem for group equivariant convolution kernels. In _ICLR_, 2020.
* [28] Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. 1998.
* [29] Yi Liu, Limei Wang, Meng Liu, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3d graph networks. _ArXiv_, abs/2102.05013, 2021.
* [30] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _ICLR_, 2019.
* [31] Xu Ma, Can Qin, Haoxuan You, Haoxi Ran, and Yun Raymond Fu. Rethinking network design and local geometry in point cloud: A simple residual mlp framework. _ArXiv_, abs/2202.07123, 2022.
* [32] Benjamin Kurt Miller, Mario Geiger, Tess E. Smidt, and Frank No'e. Relevance of rotationally equivariant convolutions for predicting molecular properties. _ArXiv_, abs/2008.08461, 2020.
* [33] Albert Musaelian, Simon Batzner, Anders Johansson, Lixin Sun, Cameron J. Owen, Mordechai Kornbluth, and Boris Kozinsky. Learning local equivariant representations for large-scale atomistic dynamics. _ArXiv_, abs/2204.05249, 2022.
* [34] Adrien Poulenard and Leonidas J. Guibas. A functional approach to rotation equivariant nonlinearities for tensor field networks. _CVPR_, pages 13169-13178, 2021.
* [35] Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. _ArXiv_, 2017.
* [36] Yongming Rao, Jiwen Lu, and Jie Zhou. Spherical fractal convolutional neural networks for point cloud recognition. _CVPR_, pages 452-460, 2019.
* [37] Siamak Ravanbakhsh. Universal equivariant multilayer perceptrons. _ArXiv_, 2020.
* [38] David W Romero, Robert-Jan Bruintjes, Jakub M Tomczak, Erik J Bekkers, Mark Hoogendoorn, and Jan C van Gemert. Flexconv: Continuous kernel convolutions with differentiable kernel sizes. _ArXiv_, 2021.
* [39] David W. Romero, Anna Kuzina, Erik J. Bekkers, Jakub M. Tomczak, and Mark Hoogendoorn. Ckconv: Continuous kernel convolution for sequential data. _ArXiv_, abs/2102.02611, 2022.
* [40] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In _ICML_, 2021.
* [41] Kristof Schutt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Muller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. In _NeurIPS_, 2017.
* [42] Kristof T. Schutt, Oliver T. Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In _ICML_, 2021.
* [43] Jean-Pierre Serre. Linear representations of finite groups. 1977.
* [44] J. Shawe-Taylor. Building symmetries into feedforward networks. In _1989 First IEE International Conference on Artificial Neural Networks, (Conf. Publ. No. 313)_, page 158-162, Oct 1989.
* [45] Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. _ArXiv_, abs/2006.09661, 2020.
* [46] Nathaniel Thomas, Tess E. Smidt, Steven M. Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick F. Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3d point clouds. _ArXiv_, abs/1802.08219, 2018.
* [47] Tycho F. A. van der Ouderaa, David W. Romero, and Mark van der Wilk. Relaxing equivariance constraints with non-stationary continuous filters. _ArXiv_, abs/2204.07178, 2022.
* 12, 2019.
* [49] Maurice Weiler and Gabriele Cesa. General \(E(2)\)-Equivariant Steerable CNNs. In _NeurIPS_, 2019.
* Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds. _ArXiv_, 2021.
* [51] Maurice Weiler, Patrick Forre, Erik Verlinde, and Max Welling. _Equivariant and Coordinate Independent Convolutional Networks_. 2023.
* [52] Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In _NeurIPS_, 2018.
* [53] Maurice Weiler, Fred A. Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant CNNs. In _CVPR_, 2018.
* [54] Daniel E. Worrall, Stephan J. Garbin, Daniyar Turmukhambetov, and Gabriel J. Brostow. Harmonic networks: Deep translation and rotation equivariance. In _CVPR_, 2017.
* [55] Zhenqin Wu, Bharath Ramsundar, Evan N. Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay S. Pande. Moleculenet: A benchmark for molecular machine learning. _ArXiv_, 2017.
* [56] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. _CVPR_, pages 1912-1920, 2015. | ## Review
### Summary
This paper proposes a novel method for achieving G-equivariant neural networks by using equivariant multilayer perceptrons (MLPs) to parametrize steerable kernels, thus avoiding the need for analytical solutions specific to group G. The framework allows for greater flexibility and adaptability to various groups and applications. The authors provide theoretical justification and extensive empirical validation across tasks such as point cloud classification and molecular simulations. While the method shows promise, there are concerns regarding its accessibility to the broader machine learning community and the clarity of some core concepts.
### Strengths
- The approach offers a novel solution to a significant limitation of steerable CNNs by utilizing equivariant MLPs, which enhances flexibility.
- The paper is well-structured and clearly written, making the complex topics more understandable.
- Empirical evaluations demonstrate the effectiveness of the proposed method across various datasets, including N-body simulations and the QM9 dataset.
- The paper contributes to the field by extending the application of implicit kernels to steerable networks, which has not been extensively explored before.
### Weaknesses
- The empirical evaluations lack direct comparison to prior works, raising questions about performance dependence on chosen problems.
- The rationale for using implicit kernels over existing methods is not thoroughly addressed.
- The paper may not be easily accessible to a wide audience, as it assumes familiarity with advanced concepts such as natural action and Clebsch-Gordan coefficients.
- The method increases computational and memory complexity without clear evidence of improved performance with deeper MLPs.
- There is limited intrinsic analysis of how different types of implicit kernels may affect performance.
### Questions
- What are the computational and memory costs associated with using implicit kernels?
- Could the authors clarify the differences between solving Eq. 2 for steerable methods and G-equivariant MLPs?
- Is there a reason why only multi-layer perceptrons are considered for parameterizing implicit functions, and could other bases be explored?
- How do the authors hypothesize the observed performance differences in their experiments?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical foundations are strong, and the empirical results are convincing, though some aspects could benefit from clearer explanations.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is generally well-written but contains areas that could be clearer, especially for readers less familiar with the subject matter.
### Contribution
**Score:** 3
**Description:** 3 = good; the work presents a solid contribution to the field by proposing a new approach to equivariant neural networks, but additional empirical analysis could strengthen its impact.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact, but it requires some clarifications and improvements for broader accessibility and understanding.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an original and well-founded approach to a significant problem in the field of equivariant neural networks. While there are areas for improvement, especially regarding accessibility and clarity, the contributions are valuable and the empirical results support the claims made. The overall quality of the work justifies acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Multi-Fidelity Active Learning with GFlowNets
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
In the last decades, the capacity to generate large amounts of data in science and engineering applications has been growing steadily. Meanwhile, the progress in machine learning has turned it into a suitable tool to process and utilise the available data. Nonetheless, many relevant scientific and engineering problems present challenges where current machine learning methods cannot yet efficiently leverage the available data and resources. For example, in scientific discovery, we are often faced with the problem of exploring very large, high-dimensional spaces, where querying a high fidelity, black-box objective function is very expensive. Progress in machine learning methods that can efficiently tackle such problems would help accelerate currently crucial areas such as drug and materials discovery. In this paper, we propose the use of GFlowNets for multi-fidelity active learning, where multiple approximations of the black-box function are available at lower fidelity and cost. GFlowNets are recently proposed methods for amortised probabilistic inference that have proven efficient for exploring large, high-dimensional spaces and can hence be practical in the multi-fidelity setting too. Here, we describe our algorithm for multi-fidelity active learning with GFlowNets and evaluate its performance in both well-studied synthetic tasks and practically relevant applications of molecular discovery. Our results show that multi-fidelity active learning with GFlowNets can efficiently leverage the availability of multiple oracles with different costs and fidelities to accelerate scientific discovery and engineering design.
## 1 Introduction
The current most pressing challenges for humanity, such as the climate crisis and the threat of pandemics or antibiotic resistance could be tackled, at least in part, with new scientific discoveries. By way of illustration, materials discovery can play an important role in improving the energy efficiency of energy production and storage; and reducing the costs and duration for drug discovery has the potential to more effectively and rapidly mitigate the consequences of new diseases. In recent years, researchers in materials science, biochemistry and other fields have increasingly adopted machine learning as a tool as it holds the promise to drastically accelerate scientific discovery [7, 67, 3, 12].
Although machine learning has already made a positive impact in scientific discovery applications [55, 27], unleashing its full potential will require improving the current algorithms [1]. For example, typical tasks in potentially impactful applications in materials and drug discovery require exploring combinatorially large, high-dimensional spaces [46, 6], where only small, noisy data sets are available, and obtaining new annotations computationally or experimentally is very expensive. Such scenarios present serious challenges even for the most advanced current machine learning methods.
In the search for a useful discovery, we typically define a quantitative proxy for usefulness, which we can view as a black-box function. One promising avenue for improvement is developing methods that more efficiently leverage the availability of multiple approximations of the target black-boxfunction at lower fidelity but much lower cost than the highest fidelity oracle [10; 14]. For example, the most accurate estimation of the properties of materials and molecules is only typically obtained via synthesis and characterisation in a laboratory. However, this is only feasible for a small number of promising candidates. Approximate quantum mechanics simulations of a larger amount of chemical compounds can be performed via Density Functional Theory (DFT) [41; 51]. However, DFT is still computationally too expensive for high-throughput exploration of large search spaces. Thus, large-scale exploration can only be achieved through cheaper but less accurate oracles. Nonetheless, solely relying on low-fidelity approximations is clearly suboptimal. Ideally, such tasks would be best tackled by methods that can efficiently and adaptively distribute the available computational budget between the multiple oracles depending on the already acquired information.
The past decade has seen significant progress in multi-fidelity Bayesian optimisation (BO) [19; 53], including methods that leverage the potential of deep neural networks [36]. Although highly relevant for scientific discovery, standard BO is not perfectly suited for some of the challenges in materials and drug discovery tasks. First and foremost, BO's ultimate goal is to find the optimum of an expensive black-box function. However, even the highest fidelity oracles in such problems are underspecified with respect to the actual, relevant, downstream applications. Therefore, it is imperative to develop methods that, instead of "simply" finding the optimum, discover a set of diverse, high-scoring candidates.
Recently, generative flow networks (GFlowNets) [4] have demonstrated their capacity to find diverse candidates through discrete probabilistic modelling, with particularly promising results when embedded in an active learning loop [22]. Here, we propose to extend the applicability of GFlowNets for multi-fidelity active learning.
In this paper, we present an algorithm for multi-fidelity active learning with GFlowNets. We provide empirical results in two synthetic benchmark tasks and four practically relevant tasks for biological sequence design and molecular modelling. As a main result, we demonstrate that multi-fidelity active learning with GFlowNets discovers diverse, high-scoring samples when multiple oracles with different fidelities and costs are available, with lower computational cost than its single-fidelity counterpart.
## 2 Related Work
Our work can be framed within the broad field of active learning (AL), a class of machine learning methods whose goal is to learn an efficient data sampling scheme to accelerate training [50]. For the bulk of the literature in AL, the goal is to train an accurate model \(h(x)\) of an unknown target function \(f(x)\), as in classical supervised learning. However, in certain scientific discovery problems, which is the motivation of our work, a desirable goal is often to discover multiple, diverse candidates \(x\) with high values of \(f(x)\). The reason is that the ultimate usefulness of a discovery is extremely expensive to quantify and we always rely on more or less accurate approximations. Since we generally have the option to consider more than one candidate solution, it is safer to generate a set of diverse and apparently good solutions, instead of focusing on the single global optimum of the wrong function.
This distinctive goal is closely connected to related research areas such as Bayesian optimisation [19] and active search [20]. Bayesian optimisation (BO) is an approach grounded in Bayesian inference for the problem of optimising a black-box objective function \(f(x)\) that is expensive to evaluate. In contrast to the problem we address in this paper, standard BO typically considers continuous domains and works best in relatively low-dimensional spaces [18]. Nonetheless, in recent years, approaches for BO with structured data [13] and high-dimensional domains [21] have been proposed in the literature. The main difference between BO and the problem we tackle in this paper is that we are interested in finding multiple, diverse samples with high value of \(f\) and not only the optimum.
This goal, as well as the discrete nature of the search space, is shared with active search, a variant of active learning in which the task is to efficiently find multiple samples of a valuable (binary) class from a discrete domain \(\mathcal{X}\)[20]. This objective was already considered in the early 2000s by Warmuth et al. for drug discovery [59], and more formally analysed in later work [26; 25]. A recent branch of research in stochastic optimisation that considers diversity is so-called Quality-Diversity [9], which typically uses evolutionary algorithms that perform search in the latent space. All these and other problems such as multi-armed bandits [48] and the general framework of experimental design [8] all share the objective of optimising or exploring an expensive black-box function. Formal connections between some of these areas have been established in the literature [54; 17; 23; 15].
Multi-fidelity methods have been proposed in most of these related areas of research. An early survey on multi-fidelity methods for Bayesian optimisation was compiled by Peherstorfer et al. [42], and research on the subject has continued since [44; 53], with the proposal of specific acquisition functions [56] and the use of deep neural networks to improve the modelling [36]. Interestingly, the literature on multi-fidelity active learning [35] is scarcer than on Bayesian optimisation. Recently, works on multi-fidelity active search have also appeared in the literature [40]. Finally, multi-fidelity methods have recently started to be applied in scientific discovery problems [10; 14]. However, the literature is still scarce probably because most approaches do not tackle the specific needs in scientific discovery, such as the need for diverse samples. Here, we aim addressing this need with the use of GFlowNets [4; 24] for multi-fidelity active learning.
## 3 Method
### Background
GFlowNetsGenerative Flow Networks [GFlowNets; 4; 5] are amortised samplers designed for sampling from discrete high-dimensional distributions. Given a space of compositional objects \(\mathcal{X}\) and a non-negative reward function \(R(x)\), GFlowNets are designed to learn a stochastic policy \(\pi\) that generates \(x\in\mathcal{X}\) with a probability proportional to the reward, that is \(\pi(x)\propto R(x)\). This distinctive property induces sampling diverse, high-reward objects, which is a desirable property for scientific discovery, among other applications [23].
The objects \(x\in\mathcal{X}\) are constructed sequentially by sampling transitions \(s_{t}{\rightarrow}s_{t+1}\in\mathbb{A}\) between partially constructed objects (states) \(s\in\mathcal{S}\), which includes a unique empty state \(s_{0}\). The stochastic forward policy is typically parameterised by a neural network \(P_{F}(s_{t+1}|s_{t};\theta)\), where \(\theta\) denotes the learnable parameters, which models the distribution over transitions \(s_{t}{\rightarrow}s_{t+1}\) from the current state \(s_{t}\) to the next state \(s_{t+1}\). The backward transitions are parameterised too and denoted \(P_{B}(s_{t}|s_{t+1};\theta)\). The probability \(\pi(x)\) of generating an object \(x\) is given by \(P_{F}\) and its sequential application:
\[\pi(x)=\sum_{\tau:s_{|\tau|-1}\to x\in\tau}\prod_{t=0}^{|\tau|-1}P_{F}(s_{t+1 }|s_{t};\theta),\]
which sums over all trajectories \(\tau\) with terminating state \(x\), where \(\tau=(s_{0}\to s_{1}\ldots\to s_{|\tau|})\) is a complete trajectory. To learn the parameters \(\theta\) such that \(\pi(x)\propto R(x)\) we use the trajectory balance learning objective [37]
\[\mathcal{L}_{TB}(\tau;\theta)=\left(\log\frac{Z_{\theta}\prod_{t=0}^{n}P_{F}(s _{t+1}|s_{t};\theta)}{R(x)\prod_{t=1}^{n}P_{B}(s_{t}|s_{t+1};\theta)}\right)^{2}, \tag{1}\]
where \(Z_{\theta}\) is an approximation of the partition function \(\sum_{x\in\mathcal{X}}R(x)\) that is learned. The GFlowNet learning objective supports training from off-policy trajectories, so for training the trajectories are typically sampled from a mixture of the current policy with a uniform random policy. The reward is also tempered to make the policy focus on the modes.
Active LearningIn its simplest formulation, the active learning problem that we consider is as follows: we start with an initial data set \(\mathcal{D}=\{(x_{i},f(x_{i}))\}\) of samples \(x\in\mathcal{X}\) and their evaluations by an expensive, black-box objective function (oracle) \(f:\mathcal{X}\rightarrow\mathbb{R}\), which we use to train a surrogate model \(h(x)\). A GFlowNet can then be trained to learn a generative policy \(\pi_{\theta}(x)\) using \(h(x)\) as reward function, that is \(R(x)=h(x)\). Optionally, we can instead train a probabilistic proxy \(p(f|\mathcal{D})\) and use as reward the output of an acquisition function \(\alpha(x,p(f|\mathcal{D}))\) that considers the epistemic uncertainty of the surrogate model, as typically done in Bayesian optimisation. Finally, we use the policy \(\pi(x)\) to generate a batch of samples to be evaluated by the oracle \(f\), we add them to our data set and repeat the process a number of active learning rounds.
While much of the active learning literature [50] has focused on so-called _pool-based_ active learning, where the learner selects samples from a pool of unlabelled data, we here consider the scenario of _de novo query synthesis_, where samples are selected from the entire object space \(\mathcal{X}\). This scenario is particularly suited for scientific discovery [30, 62, 64, 33]. The ultimate goal pursued in active learning applications is also heterogeneous. Often, the goal is the same as in classical supervised machine learning: to train an accurate (proxy) model \(h(x)\) of the unknown target function \(f(x)\). For some problems in scientific discovery, we are usually not interested in the accuracy in the entire input space \(\mathcal{X}\), but rather in discovering new, diverse objects with high values of \(f\). This is connected to other related problems such as Bayesian optimisation [19], active search [20] or experimental design [8], as reviewed in Section 2.
### Multi-Fidelity Active Learning
We now consider the following active learning problem with multiple oracles of different fidelities. Our ultimate goal is to generate a batch of \(K\) samples \(x\in\mathcal{X}\) according to the following desiderata:
* The samples obtain a high value when evaluated by the objective function \(f:\mathcal{X}\rightarrow\mathbb{R}^{+}\).
* The samples in the batch should be distinct and diverse, that is cover distinct high-valued regions of \(f\).
Furthermore, we are constrained by a computational budget \(\Lambda\) that limits our capacity to evaluate \(f\). While \(f\) is extremely expensive to evaluate, we have access to a discrete set of surrogate functions (oracles) \(\{f_{m}\}_{1\leq m\leq M}:\mathcal{X}\rightarrow\mathbb{R}^{+}\), where \(m\) represents the fidelity index and each oracle has an associated cost \(\lambda_{m}\). We assume \(f_{M}=f\) because there may be even more accurate oracles for the true usefulness but we do not have access to them, which means that even when measured by \(f=f_{M}\), diversity remains an important objective. We also assume, without loss of generality, that the larger \(m\), the higher the fidelity and that \(\lambda_{1}<\lambda_{2}<\ldots<\lambda_{M}\). This scenario resembles many practically relevant problems in scientific discovery, where the objective function \(f_{M}\) is indicative but not a perfect proxy of the true usefulness of objects \(x\)--hence we want diversity--yet it is extremely expensive to evaluate--hence cheaper, approximate models are used in practice.
In multi-fidelity active learning--as well as in multi-fidelity Bayesian optimisation--the iterative sampling scheme consists of not only selecting the next object \(x\) (or batch of objects) to evaluate, but also the level of fidelity \(m\), such that the procedure is cost-effective.
Our algorithm, MF-GFN, detailed in Algorithm 1, proceeds as follows: An active learning round \(j\) starts with a data set of annotated samples \(\mathcal{D}_{j}=\{(x_{i},f_{m}(x_{i}),m_{i})\}_{1\leq m\leq M}\). The data set is used to fit a probabilistic _multi-fidelity surrogate_ model \(h(x,m)\) of the posterior \(p(f_{m}(x)|x,m,\mathcal{D})\). We use Gaussian Processes [47], as is common in Bayesian optimisation, to model the posterior, such that the model \(h\) predicts the conditional Gaussian distribution of \(f_{m}(x)\) given \((x,m)\) and the existing data set \(\mathcal{D}\). We implement a multi-fidelity GP kernel by combining a Matern kernel evaluated on \(x\) with a linear downsampling kernel over \(m\)[61]. In the higher dimensional problems, we use Deep Kernel Learning [60] to increase the expressivity of the surrogate models. The candidate \(x\) is modelled with the deep kernel while the fidelity \(m\) is modelled with the same linear downsamling kernel. The output of the proxy model is then used to compute the value of a _multi-fidelity acquisition function_\(\alpha(x,m)\). In our experiments, we use the multi-fidelity version [56] of Max-Value Entropy Search (MES) [58], which is an information-theoretic acquisition function widely used in Bayesian optimization. MES aims to maximize the mutual information between the value of the queried \(x\) and the maximum value attained by the objective function, \(f^{\star}\). The multi-fidelity variant is designed to select the candidate \(x\) and the fidelity \(m\) that maximize the mutual information between \(f^{\star}_{M}\) and the oracle at fidelity \(m\), \(f_{m}\), weighted by the cost of the oracle:
\[\alpha(x,m)=\frac{1}{\lambda_{m}}I(f^{\star}_{M};f_{m}|\mathcal{D}_{j}). \tag{2}\]
We provide further details about the acquisition function in Appendix A. A multi-fidelity acquisition function can be regarded as a cost-adjusted utility function. Therefore, in order to carry out a cost-aware search, we seek to sample diverse objects with high value of the acquisition function. In this paper, we propose to use a GFlowNet as a generative model trained for this purpose (see further details below in Section 3.3). An active learning round terminates by generating \(N\) objects from the sampler (here the GFlowNet policy \(\pi\)) and forming a batch with the best \(B\) objects, according to \(\alpha\). Note that \(N\gg B\), since sampling from a GFlowNet is relatively inexpensive. The selected objects are annotated by the corresponding oracles and incorporated into the data set, such that \(\mathcal{D}_{j+1}=\mathcal{D}_{j}\cup\{(x_{1},f_{m}(x_{1}),m_{1}),\ldots(x_{B},f_{m }(x_{B}),m_{B})\}\).
```
Input:\(\{(f_{m},\lambda_{m})\}\): \(M\) oracles and their corresponding costs; \(\mathcal{D}_{0}=\{(x_{i},f_{m}(x_{i}),m_{i})\}\): Initial dataset; \(h(x,m)\): Multi-fidelity Gaussian Process proxy model; \(\alpha(x,m)\): Multi-fidelity acquisition function; \(R(\alpha(x),\beta)\): reward function to train the GFlowNet; \(B\): Batch size of oracles queries; \(\Lambda\): Maximum available budget; \(K\): Number of top-scoring candidates to be evaluated at termination; Result: Top-\(K(\mathcal{D})\), Diversity Initialization:\(\Lambda_{j}=0\), \(\mathcal{D}=\mathcal{D}_{0}\) while\(\Lambda_{j}<\Lambda\)do \(\bullet\) Fit \(h\) on dataset \(\mathcal{D}\); \(\bullet\) Train GFlowNet with reward \(R(\alpha(x),\beta)\) to obtain policy \(\pi_{\theta}(x)\); \(\bullet\) Sample \(B\) tuples \((x_{i},m_{i})\sim\pi_{\theta}\); \(\bullet\) Evaluate each tuple with the corresponding oracle to form batch \(\mathcal{B}=\{(x_{1},f_{m}(x_{1}),m_{1}),\ldots,(x_{B},f_{m}(x_{B}),m_{B})\}\); \(\bullet\) Update dataset \(\mathcal{D}=\mathcal{D}\cup\mathcal{B}\); end while
```
**Algorithm 1**MF-GFN: Multi-fidelity active learning with GFlowNets. See Section 4.1 for quality (Top-\(K(\mathcal{D})\)) and diversity metrics.
### Multi-Fidelity GFlowNets
In order to use GFlowNets in the multi-fidelity active learning loop described above, we propose to make the GFlowNet sample the fidelity \(m\) for each object \(x\in\mathcal{X}\) in addition to \(x\) itself. Formally, given a baseline GFlowNet with state and transition spaces \(\mathcal{S}\) and \(\mathbb{A}\), we augment the state space with a new dimension for the fidelity \(\mathcal{M}^{\prime}=\{0,1,2,\ldots,M\}\) (including \(m=0\), which corresponds to unset fidelity), such that the augmented, multi-fidelity space is \(\mathcal{S}_{\mathcal{M}^{\prime}}=\mathcal{S}\cup\mathcal{M}^{\prime}\). The set of allowed transitions \(\mathbb{A}_{M}\) is augmented such that a fidelity \(m>0\) of a trajectory must be selected once, and only once, from any intermediate state.
Intuitively, allowing the selection of the fidelity at any step in the trajectory should give flexibility for better generalisation. At the end, finished trajectories are the concatenation of an object \(x\) and the fidelity \(m\), that is \((x,m)\in\mathcal{X}_{\mathcal{M}}=\mathcal{X}\cup\mathcal{M}\). In summary, the proposed approach enables to jointly learn the policy that samples objects in a potentially very large, high-dimensional space, together with the level of fidelity, that maximise a given multi-fidelity acquisition function as reward.
## 4 Empirical Evaluation
In this section, we describe the evaluation metrics and experiments performed to assess the validity and performance of our proposed approach of multi-fidelity active learning with GFlowNets. Overall, the purpose of this empirical evaluation is to answer the following questions:
* **Question 1**: Is our multi-fidelity active learning approach able to find high-scoring, diverse samples at lower cost than active learning with a single oracle?
* **Question 2**: Does our proposed multi-fidelity GFlowNet, which learns to sample fidelities together with objects \((x,m)\), provide any advantage over sampling only objects \(x\)?
In Section 4.1 we describe the metrics proposed to evaluate the performance our proposed method, as well as the baselines, which we describe in Section 4.2. In Section 4.3, we present results on synthetic tasks typically used in the multi-fidelity BO and active learning literature. In Section 4.4, we present results on more practically relevant tasks for scientific discovery, such as the design of DNA sequences and anti-microbial peptides.
### Metrics
One core motivation in the conception of GFlowNets, as reported in the original paper [4], was the goal of sampling diverse objects with high-score, according to a reward function.
* Mean score, as per the highest fidelity oracle \(f_{M}\), of the top-\(K\) samples.
* Mean pairwise similarity within the top-\(K\) samples.
Furthermore, since here we are interested in the cost effectiveness of the active learning process, in this section we will evaluate the above metrics as a function of the cost accumulated in querying the multi-fidelity oracles. It is important to note that the multi-fidelity approach is not aimed at achieving _better_ mean top-\(K\) scores than a single-fidelity active learning counterpart, but rather _the same_ mean top-\(K\) scores _with a smaller budget_.
### Baselines
In order to evaluate our approach, and to shed light on the questions stated above, we consider the following baselines:
* **GFlowNet with highest fidelity (SF-GFN):** GFlowNet based active learning approach from [22] with the highest fidelity oracle to establish a benchmark for performance without considering the cost-accuracy trade-offs.
* **GFlowNet with random fidelities (Random fid. GFlowNet):** Variant of SF-GFN where the candidates are generated with the GFlowNet but the fidelities are picked randomly and a multi-fidelity acquisition function is used, to investigate the benefit of deciding the fidelity with GFlowNets.
* **Random candidates and fidelities (Random):** Quasi-random approach where the candidates and fidelities are picked randomly and the top \((x,m)\) pairs scored by the acquisition function are queried.
* **Multi-fidelity PPO (MF-PPO):** Instantiation of multi-fidelity Bayesian optimization where the acquisition function is optimized using proximal policy optimization [PPO 49].
### Synthetic Tasks
As an initial assessment of MF-GFNs, we consider two synthetic functions--Branin and Hartmann--widely used in the single- and multi-fidelity Bayesian optimisation literature [44, 53, 28, 36, 16].
BraninWe consider an active learning problem in a two-dimensional space where the target function \(f_{M}\) is the Branin function, as modified in [52] and implemented in botorch[2]. We simulate three levels of fidelity, including the true function. The lower-fidelity oracles, the costs of the oracles (0.01, 0.1, 1.0) as well as the number of points queried in the initial training set were adopted from [36]. We provide further details about the task in Appendix B.2. In order to consider a discrete design space, we map the domain to a discrete \(100\times 100\) grid. We model this grid with a GFlowNet as in [4, 37]: starting from the origin \((0,0)\), for any state \(s=(x_{1},x_{2})\), the action space consists of the choice between the exit action or the dimension to increment by \(1\), provided the next state is in the limits of the grid. Fig. 0(a) illustrates the results for this task. We observe that MF-GFN is able to reach the minimum of the Branin function with a smaller budget than the single-fidelity counterpart and the baselines.
HartmannNext, we consider the 6-dimensional Hartmann function as objective \(f_{M}\) on a hyper-grid domain. As with Branin, we consider three oracles, adopting the lower-fidelity oracles and the set of costs (0.125, 0.25, 1.0) from [53]. We discretize the domain into a six-dimensional hyper-grid of length 10, yielding \(10^{6}\) possible candidate points. The results for the task are illustrated in Fig. 0(b), which indicate that multi-fidelity active learning with GFlowNets (MF-GFN) offers an advantage over single-fidelity active learning (SF-GFN) as well as some of the other baselines in this higher-dimensional synthetic problem as well. Note that while MF-PPO performs better in this task, as shown in the next experiments, MF-PPO tends to collapse to single modes of the function in more complex high-dimensional scenarios.
### Benchmark Tasks
While the synthetic tasks are insightful and convenient for analysis, to obtain a more solid assessment of the performance of MF-GFN, we evaluate it, together with the other baselines, on more complex, structured design spaces of practical relevance. We present results on a variety of tasks including DNA aptamers (Section 4.4.1), anti-microbial peptides (Section 4.4.2) and small molecules (Section 4.4.3).
#### 4.4.1 DNA Aptamers
DNA aptamers are single-stranded nucleotide sequences with multiple applications in polymer design due to their specificity and affinity as sensors in crowded biochemical environments [66; 11; 63; 29]. DNA sequences are represented as strings of nucleobases A, C, T or G. In our experiments, we consider fixed-length sequences of 30 bases and design a GFlowNet environment where the action space \(\mathbb{A}\) consists of the choice of base to append to the sequence, starting from an empty sequence. This yields a design space of size \(|\mathcal{X}|=4^{30}\) (ignoring the selection of fidelity in MF-GFN). As the optimisation objective \(f_{M}\) (highest fidelity) we used the free energy of the secondary structure as calculated by NUPACK [65]. As a lower fidelity oracle, we trained a transformer model on 1
Figure 1: Results on the synthetic tasks—Branin and Hartmann functions. The curves indicate the mean score \(f_{M}\) within the top-50 and top-10 samples (for Branin and Hartmann, respectively) computed at the end of each active learning round and plotted as a function of the budget used. The random baseline is omitted from this plot to facilitate the visualisation since the results were significantly worse in these tasks. We observe that MF-GFN clearly outperforms the single-fidelity counterpart (SF-GFN) and slightly improves upon the GFlowNet baseline that samples random fidelities. On Hartmann, MF-PPO initially outperforms the other methods.
Figure 2: Results on the DNA aptamers and AMP tasks. The curves indicate the mean score \(f_{M}\) within the top-100 and top-50 samples (for DNA and AMP, respectively) computed at the end of each active learning round and plotted as a function of the budget used. The colour of the markers indicates the diversity within the batch (darker colour of the circular dots indicating more diversity). In both the DNA and AMP tasks, MF-GFN outperforms all baselines in terms of cost efficiency, while obtaining great diversity in the final batch of top-\(K\) candidates.
million randomly sampled sequences annotated with \(f_{M}\), and assigned it a cost \(100\times\) smaller than the highest-fidelity oracle. Further details about the task are discussed in Appendix B.4.
The main results on the DNA aptamers task are presented in Fig. 2a. We observe that on this task MF-GFN outperforms all other baselines in terms cost efficiency. For instance, MF-GFN achieves the best mean top-\(K\) energy achieved by its single-fidelity counterpart with just about \(20\%\) of the budget. It is also more efficient than GFlowNet with random fidelities and MF-PPO. Crucially, we also see that MF-GFN maintains a high level of diversity, even after converging to topK reward. On the contrary, MF-PPO is not able to discover diverse samples, as is expected based on prior work [22].
#### 4.4.2 Antimicrobial Peptides
Antimicrobial peptides are short protein sequences which possess antimicrobial properties. As proteins, these are sequences of amino-acids--a vocabulary of 20 along with a special stop token. We consider variable length proteins sequences with up to 50 residues. We use data from DBAASP [45] containing antimicrobial activity labels, which is split into two sets - one used for training the oracle and one as the initial dataset in the active learning loop, following [22]. To establish the multi-fidelity setting, we train different models with different capacities and with different subsets of the data. The details about these oracles along with additional details about the task are discussed in Appendix B.5.
The results in Fig. 2b indicate that even in this task MF-GFN outperforms all other baselines in terms of cost-efficiency. It reaches the same maximum mean top-\(K\) score as the random baselines with \(10\times\) less budget and almost \(100\times\) less budget than SF-GFN. In this task, MF-PPO did not achieve comparable results. Crucially, the diversity of the final batch found by MF-GFN stayed high, satisfying this important criterion in the motivation of this method.
#### 4.4.3 Small Molecules
Molecules are clouds of interacting electrons (and nuclei) described by a set of quantum mechanical descriptions, or properties. These properties dictate their chemical behaviours and applications. Numerous approximations of these quantum mechanical properties have been developed with different methods at different fidelities, with the famous example of Jacob's ladder in density functional theory [43]. To demonstrate the capability of MF-GFlowNet to function in the setting of quantum chemistry, we consMF-GFNsof-of-concept tasks in molecular electronic potentials: maximization of adiabatic electron affinity (EA) and (negative) adiabatic ionisation potential (IP). These electronic potentials dictate the molecular redox chemistry, and are key quantities in organic semiconductors, photoredox catalysis, or organometallic synthesis. We employed three oracles that correlate with experimental results as approximations of the scoring function, by uses of varying levels of geometry optimisation to obtain approximations to the adiabatic geometries, followed by the calculation of IP or EA with semi-empirical quantum chemistry XTB (see Appendix) [39]. These three oracles had costs of 1, 3 and 7 (respectively), proportional to their computational running demands. We designed the GFlowNet state space by using sequences of SELFIES tokens [32] (maximum of 64) to represent molecules, starting from an empty sequence; every action consists of appending a new token to the sequence.
The realistic configuration and practical relevance of these tasks allow us to draw stronger conclusions about the usefulness of multi-fidelity active learning with GFlowNets in scientific discovery applications. As in the other tasks evaluated, we here also found MF-GFN to achieve better cost efficiency at finding high-score top-\(K\) molecules (Fig. 3), especially for ionization potentials (Fig. 3a). By clustering the generated molecules, we find that MF-GFN captures as many modes as random generation, far exceeding that of MF-PPO. Indeed, while MF-PPO seems to outperform MF-GFN in the task of electron affinity (Fig. 3b), all generated molecules were from a few clusters, which is of much less utility for chemists.
## 5 Conclusions, Limitations and Future Work
In this paper, we present MF-GFN, the first application of GFlowNets for multi-fidelity active learning. Inspired by the encouraging results of GFlowNets in (single-fidelity) active learning for biological sequence design [22] as a method to discover diverse, high-scoring candidates, we propose MF-GFNto sample the candidates as well as the fidelity at which the candidate is to be evaluated, when multiple oracles are available with different fidelities and costs.
We evaluate the proposed MF-GFN approach in both synthetic tasks commonly used in the multi-fidelity Bayesian optimization literature and benchmark tasks of practical relevance, such as DNA aptamer generation, antimicrobial peptide design and molecular modelling. Through comparisons with previously proposed methods as well as with variants of our method designed to understand the contributions of different components, we conclude that multi-fidelity active learning with GFlowNets not only outperforms its single-fidelity active learning counterpart in terms of cost effectiveness and diversity of sampled candidates, but it also offers an advantage over other multi-fidelity methods due to its ability to learn a stochastic policy to jointly sample objects and the fidelity of the oracle to be used to evaluate them.
Broader ImpactOur work is motivated by pressing challenges to sustainability and public health, and we envision applications of our approach to drug discovery and materials discovery. However, as with all work on these topics, there is a potential risk of dual use of the technology by nefarious actors [57].
Limitations and Future WorkAside from the molecular modelling tasks, our empirical evaluations in this paper involved simulated oracles with relatively arbitrary costs. Therefore, future work should evaluate MF-GFN with practical oracles and sets of costs that reflect their computational or financial demands. Furthermore, we believe a promising avenue that we have not explored in this paper is the application of MF-GFN in more complex, structured design spaces, such as hybrid (discrete and continuous) domains [34], as well as multi-fidelity, multi-objective problems [24].
## References
* Agrawal and Choudhary [2016] Ankit Agrawal and Alok Choudhary. Perspective: Materials informatics and big data: Realization of the "fourth paradigm" of science in materials science. _Apl Materials_, 4(5):053208, 2016.
* Balandat et al. [2020] Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, and Eytan Bakshy. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. In _Advances in Neural Information Processing Systems 33_, 2020.
* Bashir et al. [2021] Ali Bashir, Qin Yang, Jinpeng Wang, Stephan Hoyer, Wenchuan Chou, Cory McLean, Geoff Davis, Qiang Gong, Zan Armstrong, Junghoon Jang, et al. Machine learning guided aptamer refinement and discovery. _Nature Communications_, 12(1):2366, 2021.
Figure 3: Comparative results on the molecular discovery tasks: (a) ionisation potential (IP), (b) electron affinity (EA). Results illustrate the generally faster convergence of MF-GFN to discover a diverse set of molecules with desirable values of the target property (colour scheme of the circular dots: darker/blue is better than lighter/yellow).
* [4] Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation. _Advances in Neural Information Processing Systems_, 34:27381-27394, 2021.
* [5] Yoshua Bengio, Salem Lahlou, Tristan Deleu, Edward J. Hu, Mo Tiwari, and Emmanuel Bengio. Gflownet foundations. _arXiv preprint arXiv: Arxiv-2111.09266_, 2021.
* [6] Regine S Bohacek, Colin McMartin, and Wayne C Guida. The art and practice of structure-based drug design: a molecular modeling perspective. _Medicinal research reviews_, 16(1):3-50, 1996.
* [7] Keith T Butler, Daniel W Davies, Hugh Cartwright, Olexandr Isayev, and Aron Walsh. Machine learning for molecular and materials science. _Nature_, 559(7715):547-555, 2018.
* [8] Kathryn Chaloner and Isabella Verdinelli. Bayesian experimental design: A review. _Statistical science_, pages 273-304, 1995.
* [9] Konstantinos Chatzilygeroudis, Antoine Cully, Vassilis Vassiliades, and Jean-Baptiste Mouret. Quality-diversity optimization: a novel branch of stochastic optimization. In _Black Box Optimization, Machine Learning, and No-Free Lunch Theorems_, pages 109-135. Springer, 2021.
* [10] Chi Chen, Yunxing Zuo, Weike Ye, Xiangguo Li, and Shyue Ping Ong. Learning properties of ordered and disordered materials from multi-fidelity data. _Nature Computational Science_, 1(1):46-53, 2021.
* [11] David R Corey, Masad J Damha, and Muthiah Manoharan. Challenges and opportunities for nucleic acid therapeutics. _nucleic acid therapeutics_, 32(1):8-13, 2022.
* [12] Payel Das, Tom Sercu, Kahini Wadhawan, Inkit Padhi, Sebastian Gehrmann, Flaviu Cipcigan, Vijil Chenthamarakshan, Hendrik Strobelt, Cicero Dos Santos, Pin-Yu Chen, et al. Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations. _Nature Biomedical Engineering_, 5(6):613-623, 2021.
* [13] Aryan Deshwal and Janardhan Rao Doppa. Combining latent space and structured kernels for bayesian optimization over combinatorial spaces. In _Neural Information Processing Systems_, 2021.
* [14] Clyde Fare, Peter Fenner, Matthew Benatan, Alessandro Varsi, and Edward O Pyzer-Knapp. A multi-fidelity machine learning approach to high throughput materials screening. _npj Computational Materials_, 8(1):257, 2022.
* [15] Francesco Di Fiore, Michela Nardelli, and Laura Mainini. Active learning and bayesian optimization: a unified perspective to learn with a goal. _arXiv preprint arXiv: Arxiv-2303.01560_, 2023.
* [16] Jose Pablo Folch, Robert M Lee, Behrang Shafei, David Walz, Calvin Tsay, Mark van der Wilk, and Ruth Misener. Combining multi-fidelity modelling and asynchronous batch bayesian optimization. _Computers & Chemical Engineering_, 172:108194, 2023.
* [17] Adam Evan Foster. _Variational, Monte Carlo and policy-based approaches to Bayesian experimental design_. PhD thesis, University of Oxford, 2021.
* [18] Peter I. Frazier. A tutorial on bayesian optimization. _arXiv preprint arXiv: Arxiv-1807.02811_, 2018.
* [19] Roman Garnett. _Bayesian optimization_. Cambridge University Press, 2023.
* [20] Roman Garnett, Yamuna Krishnamurthy, Xuehan Xiong, Jeff Schneider, and Richard Mann. Bayesian optimal active search and surveying. _arXiv preprint arXiv:1206.6406_, 2012.
* [21] Antoine Grosnit, Rasul Tutunov, Alexandre Max Maraval, Ryan-Rhys Griffiths, Alexander I. Cowen-Rivers, Lin Yang, Lin Zhu, Wenlong Lyu, Zhitang Chen, Jun Wang, Jan Peters, and Haitham Bou-Ammar. High-dimensional bayesian optimisation with variational autoencoders and deep metric learning. _arXiv preprint arXiv: Arxiv-2106.03609_, 2021.
* Jain et al. [2022] Moksh Jain, Emmanuel Bengio, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Bonaventure FP Dossou, Chanakya Ajit Ekbote, Jie Fu, Tianyu Zhang, Michael Kilgour, Dinghuai Zhang, et al. Biological sequence design with gflownets. In _International Conference on Machine Learning_, pages 9786-9801. PMLR, 2022.
* Jain et al. [2023] Moksh Jain, Tristan Deleu, Jason Hartford, Cheng-Hao Liu, Alex Hernandez-Garcia, and Yoshua Bengio. Gflownets for ai-driven scientific discovery. _Digital Discovery_, 2023.
* Jain et al. [2022] Moksh Jain, Sharath Chandra Raparthy, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Yoshua Bengio, Santiago Miret, and Emmanuel Bengio. Multi-objective gflownets. _arXiv preprint arXiv:2210.12765_, 2022.
* Jiang et al. [2019] Shali Jiang, Roman Garnett, and Benjamin Moseley. Cost effective active search. _Advances in Neural Information Processing Systems_, 32, 2019.
* Jiang et al. [2017] Shali Jiang, Gustavo Malkomes, Geoff Converse, Alyssa Shofner, Benjamin Moseley, and Roman Garnett. Efficient nonmyopic active search. In _International Conference on Machine Learning_, pages 1714-1723. PMLR, 2017.
* Jumper et al. [2021] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. _Nature_, 596(7873):583-589, 2021.
* Kandasamy et al. [2019] Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, and Barnabas Poczos. Multi-fidelity gaussian process bandit optimisation, 2019.
* Kilgour et al. [2021] Michael Kilgour, Tao Liu, Brandon D Walker, Pengyu Ren, and Lena Simine. E2edna: Simulation protocol for dna aptamers with ligands. _Journal of Chemical Information and Modeling_, 61(9):4139-4144, 2021.
* King et al. [2004] Ross D King, Kenneth E Whelan, Ffion M Jones, Philip GK Reiser, Christopher H Bryant, Stephen H Muggleton, Douglas B Kell, and Stephen G Oliver. Functional genomic hypothesis generation and experimentation by a robot scientist. _Nature_, 427(6971):247-252, 2004.
* Kingma and Ba [2017] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
* Krenn et al. [2020] Mario Krenn, Florian Hase, AkshatKumar Nigam, Pascal Friederich, and Alan Aspuru-Guzik. Self-referencing embedded strings (selfies): A 100% robust molecular string representation. _Machine Learning: Science and Technology_, 1(4):045024, 2020.
* Kusne et al. [2020] A Gilad Kusne, Heshan Yu, Changming Wu, Huairuo Zhang, Jason Hattrick-Simpers, Brian DeCost, Suchismita Sarker, Corey Oses, Cormac Toher, Stefano Curtarolo, et al. On-the-fly closed-loop materials discovery via bayesian active learning. _Nature communications_, 11(1):5966, 2020.
* Lahlou et al. [2023] Salem Lahlou, Tristan Deleu, Pablo Lemos, Dinghuai Zhang, Alexandra Volokhova, Alex Hernandez-Garcia, Lena Nehale Ezzine, Yoshua Bengio, and Nikolay Malkin. A theory of continuous generative flow networks. _International Conference on Machine Learning_, 2023.
* Li et al. [2020] Shibo Li, Robert M Kirby, and Shandian Zhe. Deep multi-fidelity active learning of high-dimensional outputs. _arXiv preprint arXiv:2012.00901_, 2020.
* Li et al. [2020] Shibo Li, Wei Xing, Mike Kirby, and Shandian Zhe. Multi-fidelity bayesian optimization via deep neural networks. _Advances in Neural Information Processing Systems_, 2020.
* Malkin et al. [2022] Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, and Yoshua Bengio. Trajectory balance: Improved credit assignment in gflownets, 2022.
* Moss et al. [2021] Henry B. Moss, David S. Leslie, Javier Gonzalez, and Paul Rayson. Gibbon: General-purpose information-based bayesian optimisation, 2021.
* Neugebauer et al. [2020] Hagen Neugebauer, Fabian Bohle, Markus Bursch, Andreas Hansen, and Stefan Grimme. Benchmark study of electrochemical redox potentials calculated with semiempirical and dft methods. _The Journal of Physical Chemistry A_, 124(35):7166-7176, 2020.
* Nguyen et al. [2021] Quan Nguyen, Arghavan Modiri, and Roman Garnett. Nonmyopic multifidelity active search. In _International Conference on Machine Learning_, pages 8109-8118. PMLR, 2021.
* Parr [1980] Robert G Parr. Density functional theory of atoms and molecules. In _Horizons of Quantum Chemistry: Proceedings of the Third International Congress of Quantum Chemistry Held at Kyoto, Japan, October 29-November 3, 1979_, pages 5-15. Springer, 1980.
* Peherstorfer et al. [2018] Benjamin Peherstorfer, Karen Willcox, and Max Gunzburger. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. _Siam Review_, 60(3):550-591, 2018.
* Perdew and Schmidt [2001] John P Perdew and Karla Schmidt. Jacob's ladder of density functional approximations for the exchange-correlation energy. In _AIP Conference Proceedings_, volume 577, pages 1-20. American Institute of Physics, 2001.
* Perdikaris et al. [2017] Paris Perdikaris, M. Raissi, Andreas C. Damianou, ND Lawrence, and George Em Karniadakis. Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. _Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences_, 473, 2017.
* Pirtskhalava et al. [2021] Malak Pirtskhalava, Anthony A Amstrong, Maia Grigolava, Mindia Chubinidze, Evgenia Alimbarashvili, Boris Vishnepolsky, Andrei Gabrielian, Alex Rosenthal, Darrell E Hurt, and Michael Tartakovsky. Dbaasp v3: database of antimicrobial/cytotoxic activity and structure of peptides as a resource for development of new therapeutics. _Nucleic acids research_, 49(D1):D288-D297, 2021.
* Polishchuk et al. [2013] Pavel G Polishchuk, Timur I Madzhidov, and Alexandre Varnek. Estimation of the size of drug-like chemical space based on gdb-17 data. _Journal of computer-aided molecular design_, 27:675-679, 2013.
* Rasmussen and Williams [2005] Carl Edward Rasmussen and Christopher K. I. Williams. _Gaussian Processes for Machine Learning_. The MIT Press, 11 2005.
* Robbins [1952] Herbert E. Robbins. Some aspects of the sequential design of experiments. _Bulletin of the American Mathematical Society_, 58:527-535, 1952.
* Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_, 2017.
* Settles [2009] Burr Settles. Active learning literature survey. _Independent Technical Report_, 2009.
* Sholl and Steckel [2022] David S Sholl and Janice A Steckel. _Density functional theory: a practical introduction_. John Wiley & Sons, 2022.
* Sobester et al. [2008] Andras Sobester, Alexander Forrester, and Andy Keane. _Appendix: Example Problems_, pages 195-203. John Wiley & Sons, Ltd, 2008.
* Song et al. [2018] Jialin Song, Yuxin Chen, and Yisong Yue. A general framework for multi-fidelity bayesian optimization with gaussian processes. In _International Conference on Artificial Intelligence and Statistics_, 2018.
* Srinivas et al. [2009] Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. _arXiv preprint arXiv:0912.3995_, 2009.
* Stokes et al. [2020] Jonathan M Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M Donghia, Craig R MacNair, Shawn French, Lindsey A Carfrae, Zohar Bloom-Ackermann, et al. A deep learning approach to antibiotic discovery. _Cell_, 180(4):688-702, 2020.
* Takeno et al. [2020] Shion Takeno, Hitoshi Fukuoka, Yuhki Tsukada, Toshiyuki Koyama, Motoki Shiga, Ichiro Takeuchi, and Masayuki Karasuyama. Multi-fidelity Bayesian optimization with max-value entropy search and its parallelization. In Hal Daume III and Aarti Singh, editors, _Proceedings of the 37th International Conference on Machine Learning_, volume 119 of _Proceedings of Machine Learning Research_, pages 9334-9345. PMLR, 13-18 Jul 2020.
* Urbina et al. [2022] Fabio Urbina, Filippa Lentzos, Cedric Invernizzi, and Sean Ekins. Dual use of artificial-intelligence-powered drug discovery. _Nature Machine Intelligence_, 4(3):189-191, 2022.
* [58] Zi Wang and Stefanie Jegelka. Max-value entropy search for efficient bayesian optimization, 2018.
* [59] Manfred KK Warmuth, Gunnar Ratsch, Michael Mathieson, Jun Liao, and Christian Lemmen. Active learning in the drug discovery process. _Advances in Neural information processing systems_, 14, 2001.
* [60] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P Xing. Deep kernel learning. In _Artificial intelligence and statistics_, pages 370-378. PMLR, 2016.
* [61] Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. Practical multi-fidelity bayesian optimization for hyperparameter tuning, 2019.
* [62] Dezhen Xue, Prasanna V Balachandran, John Hogden, James Theiler, Deqing Xue, and Turab Lookman. Accelerated search for materials with targeted properties by adaptive design. _Nature communications_, 7(1):1-9, 2016.
* [63] Joseph D Yesselman, Daniel Eiler, Erik D Carlson, Michael R Gotrik, Anne E d'Aquino, Alexandra N Ooms, Wipapat Kladwang, Paul D Carlson, Xuesong Shi, David A Costantino, et al. Computational design of three-dimensional rna structure and function. _Nature nanotechnology_, 14(9):866-873, 2019.
* [64] Ruihao Yuan, Zhen Liu, Prasanna V Balachandran, Deqing Xue, Yumei Zhou, Xiangdong Ding, Jun Sun, Dezhen Xue, and Turab Lookman. Accelerated discovery of large electrostrains in batio3-based piezoelectrics using active learning. _Advanced materials_, 30(7):1702884, 2018.
* [65] Joseph N Zadeh, Conrad D Steenberg, Justin S Bois, Brian R Wolfe, Marshall B Pierce, Asif R Khan, Robert M Dirks, and Niles A Pierce. Nupack: Analysis and design of nucleic acid systems. _Journal of computational chemistry_, 32(1):170-173, 2011.
* [66] Wenhu Zhou, Runjhun Saran, and Juewen Liu. Metal sensing by dna. _Chemical reviews_, 117(12):8272-8325, 2017.
* [67] C Lawrence Zitnick, Lowik Chanussot, Abhishek Das, Siddharth Goyal, Javier Heras-Domingo, Caleb Ho, Weihua Hu, Thibaut Lavril, Aini Palizhati, Morgane Riviere, et al. An introduction to electrocatalyst design using machine learning for renewable energy storage. _arXiv preprint arXiv:2010.09435_, 2020. | ## Review
### Summary
This paper introduces a framework for multi-fidelity active learning using Generative Flow Networks (GFlowNets), aiming to optimize the trade-off between accuracy and cost in scientific discovery scenarios. The authors leverage GFlowNets to sample candidates from a high-dimensional space while also considering the fidelity of evaluations. They present empirical results demonstrating that their method, MF-GFN, can outperform traditional baselines in terms of sample efficiency. However, reviewers raised concerns regarding the novelty of the approach compared to existing literature and its clarity in conveying the methodology, particularly in the context of fidelity selection and experimental design.
### Strengths
- The integration of GFlowNets into the active learning domain is original and timely.
- The paper is generally well-written and presents results across multiple domains, including synthetic and real-world tasks.
- The proposed framework shows promise in reducing the cost of exploration while maintaining performance.
### Weaknesses
- The methodological contribution is limited, with concerns that it closely resembles existing works in multi-fidelity Bayesian optimization.
- The evaluation metrics used are specific to GFlowNet and lack standardization against other multi-fidelity active learning papers.
- There are notable language errors and typos throughout the manuscript, which distract from the overall clarity.
- The experimental results do not consistently demonstrate an advantage for the proposed method over existing baselines.
### Questions
- What motivated the choice of using MES for the acquisition function instead of other simpler methods?
- How does the proposed method handle the sensitivity of performance to different hyperparameters?
- Why is the diversity of queried data not explicitly compared across different methods?
- Can the authors provide clarity on how fidelity levels relate to oracle accuracy and cost?
### Soundness
**Score:** 3
**Description:** Good - The foundational ideas are solid, but there are gaps in the theoretical analysis and comparison against existing methods.
### Presentation
**Score:** 3
**Description:** Good - The writing is generally clear, though it suffers from typographical errors and could benefit from improved organization.
### Contribution
**Score:** 2
**Description:** Fair - While the approach is interesting, the novelty is undermined by similarities to existing work and insufficient theoretical contribution.
### Rating
**Score:** 5
**Description:** Borderline accept - The paper has potential but requires significant revisions to address clarity, novelty, and comprehensive evaluation.
### Paper Decision
**Decision:** Reject
**Reasons:** The paper presents a novel application of GFlowNets in multi-fidelity active learning; however, it lacks sufficient originality when compared to existing literature. The soundness is adequate but not compelling, and the clarity of the presentation is hampered by errors and vague explanations. Overall, the contributions do not sufficiently advance the field, leading to the decision to reject.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Enhancing CLIP with CLIP: Exploring Pseudolabeling for Limited-Label Prompt Tuning
Cristina Menghini
Brown University
[email protected] &Andrew Delworth
Brown University
[email protected] &Stephen H. Bach
Brown University
[email protected]
###### Abstract
Fine-tuning vision-language models (VLMs) like CLIP to downstream tasks is often necessary to optimize their performance. However, a major obstacle is the limited availability of labeled data. We study the use of pseudolabels, i.e., heuristic labels for unlabeled data, to enhance CLIP via prompt tuning. Conventional pseudolabeling trains a model on labeled data and then generates labels for unlabeled data. VLMs' zero-shot capabilities enable a "second generation" of pseudolabeling approaches that do not require task-specific training on labeled data. By using zero-shot pseudolabels as a source of supervision, we observe that learning paradigms such as semi-supervised, transductive zero-shot, and unsupervised learning can all be seen as optimizing the same loss function. This unified view enables the development of versatile training strategies that are applicable across learning paradigms. We investigate them on image classification tasks where CLIP exhibits limitations, by varying prompt modalities, e.g., textual or visual prompts, and learning paradigms. We find that (1) unexplored prompt tuning strategies that iteratively refine pseudolabels consistently improve CLIP accuracy, by 19.5 points in semi-supervised learning, by 28.4 points in transductive zero-shot learning, and by 15.2 points in unsupervised learning, and (2) unlike conventional semi-supervised pseudolabeling, which exacerbates model biases toward classes with higher-quality pseudolabels, prompt tuning leads to a more equitable distribution of per-class accuracy. The code to reproduce the experiments is at BatsResearch/menghini-neurips23-code.
## 1 Introduction
Large pre-trained vision-language models (VLMs) [31, 43, 17] achieve remarkable accuracy without task-specific training but still require adaptation for optimal performance. Prompt-tuning [13, 18] is an approach to efficiently enhance VLMs performance on downstream tasks by learning inputs to the model. While learning prompts with a few labeled data can yield significant improvements [48, 2], a broader range of learning settings such as semi-supervised, transductive zero-shot, and unsupervised learning are still underexplored. All of these settings share access to unlabeled data, and the versatile zero-shot classification abilities of VLMs make pseudolabeling a natural approach to leveraging it. This paper investigates how the use of out-of-the-box pseudolabels assigned by CLIP can contribute to improving CLIP's own performance. To this end, we conduct an extensive exploration of learning scenarios by varying prompt modalities, learning paradigms, and training strategies. We present empirical evidence showcasing the effectiveness of iterative prompt-training strategies that leverage CLIP-based pseudolabels, regardless of learning paradigms and prompt modalities, resulting in significant improvements in CLIP's image classification performance across different settings.
Pseudolabels are heuristic labels assigned by a model to unlabeled data, which are leveraged to further train the model [20]. Successful training with pseudolabels relies on two factors: the qualityof the labels and how they are used during training. To address the first, conventional methods assign labels to instances with high-confidence predictions [36]. For pseudolabeling using CLIP, Huang et al. propose to select the most confident samples for each class [15], mitigating CLIP's bias [38] and miscalibration [22] (see Section 3). To assign pseudolabels, we rely on this approach and address the second point by exploring how to make the best use of them. We design a broad space of analysis considering three dimensions: prompt modalities, which are the model inputs we learn; learning paradigms, which define the data we have available; and training strategies, which describe the process used to optimize performance (Figure 1).
Research on prompt tuning has demonstrated that training strategies used for learning prompts in one modality can be transferred to learning prompts in a different modality. For instance, Visual Prompt Tuning [18] was originally designed to effectively fine-tune large vision models but can be adapted to efficiently fine-tune CLIP using the same training strategy as standard textual prompt tuning [48, 34, 44]. On the contrary, different learning paradigms with limited labeled data typically require distinct approaches specifically tailored to extract information from the available data [27, 12]. However, we observe that this changes by using VLM's generated pseudolabels. Unlike conventional pseudolabeling approaches that bootstrap off labeled data and are used as semi-supervised learning techniques [36, 3, 40], VLMs can generate pseudolabels in any learning setting. This offers a significant advantage, expanding the scope of pseudolabeling beyond semi-supervised learning, and making it a promising approach for other settings, such as transductive zero-shot and unsupervised learning. By using CLIP-based pseudolabels as a source of supervision, we can view these settings as optimizing the same loss function, which is simply a weighted sum of the errors on labeled data, if available, and pseudolabeled data. Given that we can express different settings as the same problem, we can propose training strategies, i.e., the way of using pseudolabels, that suit them all.
By standardizing the training strategies across various prompt modalities and learning settings, we can conduct experiments on different applications of pseudolabels for various combinations of prompt modalities, learning paradigms, and training strategies, as illustrated in Figure 1. To the best of our knowledge, only one potential path has been explored thus far; specifically, fine-tuning textual prompts in an unsupervised learning context using a few pseudolabels [15]. Rather than relying on a fixed set of pseudolabels, we propose iterative training techniques that allow for the ongoing refinement and expansion of the pool of pseudolabeled data used during training. With each iteration, we progressively enhance CLIP's pseudolabeling ability, allowing us to extend the set of pseudolabeled data while maintaining the high quality of the initial pseudolabels.
We conduct experiments on six tasks where CLIP has been observed to underperform [31], such as satellite-image classification, flower-species identification, and texture-image recognition, among others. Our findings reveal that iterative approaches effectively fine-tune prompts irrespective of their modality and learning paradigms. Recent studies have identified the "Matthew effect" as a potential issue for semi-supervised models that use pseudolabels [49, 38]. This phenomenon causes models to perform well on classes with accurate pseudolabels but poorly on those with inaccurate ones, thereby reinforcing the model's original bias towards certain classes. Our analysis reveals that using pseudolabels generated by CLIP for prompt-tuning with iterative strategies not only improves CLIP's overall performance but also corrects its natural bias towards certain classes.
We summarize the main takeaways of our work:
Figure 1: Our design space to explore the effect of leveraging pseudolabels in a unified way across prompt modalities, learning paradigms, and training strategies. The green (dashed) path has already been explored [15], while the red (solid) lines are the unexplored combinations for prompt tuning.
* General purpose zero-shot learners used as general purpose pseudolabelers open the opportunity to develop training strategies that leverage pseudolabeled data beyond semi-supervised learning. We point out that different learning paradigms, such as semi-supervised, transductive zero-shot, and unsupervised learning, can be all considered as special cases of a single objective function, by using pseudolabels as a source of supervision.
* We demonstrate that simple iterative training strategies for refining pseudolabels are highly effective approaches for limited-label prompt tuning. In fact, regardless of the prompt modality and learning setting, these strategies improve CLIP, by on average 19.5 points in semi-supervised learning, 28.4 in transductive zero-shot learning, and 15.2 in unsupervised learning.
* We show that prompts learned with iterative strategies help mitigate the "rich get richer, poor get poorer" effect observed in semi-supervised approaches leveraging pseudolabels. By redistributing the quality of pseudolabels across different classes, we observe a "Robin Hood effect" where the extremely rich classes' accuracy stays the same or decreases, while poorer classes get richer, leading to a more equitable distribution of per-class accuracy.
## 2 Background and related work
Vision-language modelsVision-language models such as CLIP [31], ALIGN [17], and Florence [43] are models that align images and text. We focus on CLIP, which is composed of two components: a text encoder, \(\psi\), and an image encoder, \(\phi\), which are jointly trained using a contrastive loss to learn a multi-modal embedding space which aligns the representations of similar text and image inputs. This pre-training enables CLIP to perform zero-shot image classification. Given an image \(x\) and a set of classes \(\mathcal{Y}=\{y_{1},...,y_{C}\}\), CLIP classifies \(x\) by measuring the similarity between the image representation \(z=\phi(x)\) and each class representation \(w_{i}=\psi(\pi_{i})\), based on their cosine distance in the shared embedding space. Here, \(\pi_{i}\) is a natural language prompt such as "a photo of a [CLASS\({}_{i}\)]", where CLASS\({}_{i}\) is the specific class name, such as "orange dahlia," "forest" or "Boeing 737". The image \(x\) gets assigned to the class with the highest similarity score. In this work, we study how to learn better prompts that enhance CLIP by leveraging pseudolabels.
Prompt tuningPrompt tuning is a technique that enhances the practical application of large pre-trained models like CLIP [31] and GPT [32; 7]. It involves providing task-specific information to the model during inference through textual or visual inputs, leading to improved performance on downstream tasks [1; 33; 6; 7]. While discrete prompts are manually crafted natural language descriptions of classes that guide the model, they may not yield optimal results [47]. Soft prompting [21; 24], on the other hand, optimizes prompts as continuous vectors. These can be optimized by backpropagating through the frozen pre-trained model, resulting in better performance. Soft prompts can be learned for various modalities, e.g., text or image, [48; 13; 17; 2; 44; 19] and applications [34; 27; 12; 28] by training on a small number of labeled examples per class. If only unlabeled data is accessible, it is possible to learn textual soft prompts by leveraging CLIP-based pseudolabels [15]. Expanding on this concept, we further investigate the use of pseudolabels across a broader range of prompt modalities and learning approaches, and we introduce unexplored training strategies to leverage pseudolabels more effectively.
Learning from pseudolabelsPseudolabeling is the practice of assigning labels to unlabeled data based on the prediction of a model [20]. Then, pseudolabels are used to improve the performance of the model itself. There are different ways to obtain and use pseudolabels and each impacts the final predictions of the model [41; 45; 16; 35]. Some approaches use confidence thresholds [36; 3; 40] and others average predictions from multiple augmentations [4]. Pseudolabeling is a semi-supervised learning technique, and it is rarely used in transductive zero-shot learning [42; 5; 25]. Applying such techniques requires a few labeled examples related to the target task to learn a baseline model capable of pseudolabeling. However, this limitation has been overcome by VLMs, which are capable of pseudolabeling examples without task-specific training. The conventional pseudolabeling scheme based on confidence threshold is not effective if we assign pseudolabels based on CLIP. In fact CLIP is miscalibrated [22] and has imbalanced predictions [38] which may induce noise in the pseudolabels. An alternative approach selects the top-K most confident examples per class to improve performance [15]. In our analysis, we rely on this scheme (Section 3).
Design space
Our analysis encompasses the design space consisting of various combinations of prompt modalities, learning paradigms, and training strategies (Figure 1). Within this space, two key components remain constant: the pseudolabeling scheme and a unified loss function. This section begins by introducing these components and subsequently delves into a comprehensive discussion of each dimension within the design space to be explored.
Pseudolabeling schemeThe use of CLIP to generate pseudolabels has been investigated in [15]. Given unlabeled data \(X_{u}\) with target classes \(\{y_{1},...,y_{C}\}\), the goal is to assign labels to data points in which the model is most confident. Typically, pseudo labeling schemes use a confidence threshold (\(P(y|x)>\tau\)) to select instances to pseudolabel. However, this approach does not work well for CLIP due to its miscalibration [22] and imbalanced predictions [38]. Instead, one can use a top-K pseudo labeling approach, where the top-K most confident examples per class are used as pseudolabeled data [15]. The pseudolabel assignment consists of (1) computing the similarity scores of each datapoint with classes' textual prompts, and (2) select for each class the \(K\) datapoints with the highest similarity score to the class. In this way, we always get \(K\) pseudolabels per class, effectively addressesing the natural bias in CLIP's pseudolabels [38]. In Appendix A.1, we provide more details about pseudolabel assignment corner cases.
This top-K pseudolabeling scheme is applicable to unlabeled data, regardless of the availability of labeled data. As a result, we can extend the use of pseudolabels to any learning setting that involves unlabeled data. We observe that by treating pseudolabeled examples as true labeled data, we can view all learning settings as optimizing the same objective function.
Unified objective functionConsider a \(C\)-class image classification task, where \(X_{L}\) and \(Y_{L}\) represent the image representations and labels of the labeled data, and \(X_{U}\) and \(\tilde{Y}_{U}\) denote the image representations and pseudolabels for the unlabeled data. We define a loss function that combines two cross-entropy losses, one accounting for the error on the labeled data points and the other accounting for the error on pseudolabeled data:
\[\mathcal{L}=\mathcal{L}_{CE}(X_{L},Y_{L})+\lambda\,\mathcal{L}_{CE}(X_{U}, \tilde{Y}_{U}) \tag{1}\]
where \(\gamma\) and \(\lambda\) define the training balance between the errors on labeled and pseudolabeled data.
### Prompt modalities
Learning prompts is the process of training a set of vectors \(\mathrm{P}=[\mathrm{p}]_{1}\ldots[\mathrm{p}]_{K}\) that are prepended to the textual or visual inputs of the encoders within the CLIP architecture. By prepending these vectors to specific inputs, we can learn _textual_ prompts, _visual_ prompts, or _multimodal_ prompts when applying a set of vectors to both inputs simultaneously. We provide a technical and detailed explaination in Appendix A.1.
In our exploration, we consider all three types of prompts. The efficacy of prompts can vary depending on the task. Text prompt tuning may be most beneficial when image features are well-separated by class but may not be aligned with the corresponding textual prompt. Visual prompts rearrange the image features within the projection space, and it has the potential to improve CLIP when the pre-trained image features are not well separated by class. Finally, multimodal prompts allows for beneficial interaction between the two separate modalities, which might lead to both separable visual features, and text classifiers that are well-aligned with the corresponding visual features.
### Learning paradigms
By adjusting the values of parameters \(\gamma\) and \(\lambda\) and using the appropriate sets of labeled and pseudolabel data, the unified objective loss can be customized for each learning paradigm. We note that the redundancy in the use of parameters is for notation clarity in descriptions of the different strategies below. One can simply use \(\lambda\) as balancing factor between labeled and pseudolabeled data.
Semi-supervised learningIn the semi-supervised learning (SSL) scenario we have access to a limited number of labeled data for all the target classes \(D_{L}=\{(x,y)\}\) where \(x\) is an input feature and \(y\in\mathcal{Y}=[C]\) is the corresponding label. In addition, we have access to unlabeled data \(X_{U}=\{x\}\), where \(x\) is an image in the target domain \(\mathcal{Y}\). From \(X_{U}\), we get \(\mathcal{D}_{PL}=\{(x,\tilde{y})\}\), where \(\tilde{y}\in[C]\) is \(x\)'s pseudolabel. When using the unified loss in this setting, we set \(\gamma\) to \(|\mathcal{D}_{PL}|/|\mathcal{D}_{L}|\). As \(|\mathcal{D}_{L}|\) is much smaller than \(|\mathcal{D}_{PL}|\), \(\gamma\) acts as an upweighting factor for the few-labeled instances, thus counterbalancing the learning effect of pseudolabels (\(\lambda\)=1).
Transductive zero-shot learningIn transductive zero-shot learning (TRZSL), we are provided with labeled data \(D_{L}=\{(x,y)\}\) for some target classes \(S\) (referred to as _seen_ classes), where \(x\) represents input features, and \(y\in[S]\) is the corresponding label. Additionally, we have access to unlabeled data \(X_{U}=\{x\}\) for a disjoint set of classes \(U\) (referred to as _unseen_ classes). Using \(X_{U}\), we obtain \(\mathcal{D}_{PL}=(x,\tilde{y})\), where \(\tilde{y}\in[U]\) denotes the pseudolabels for \(x\). The value of \(\lambda\) in the unified loss is set to \(|\mathcal{D}_{L}|/|\mathcal{D}_{PL}|\), which makes the weight of the pseudolabel loss equivalent to that of the labeled data (\(\gamma=1\)). This is necessary because an imbalance in the number of labeled and pseudolabeled samples can result in a skewed training distribution, leading to better performance on seen classes while the performance on unseen classes may either remain stagnant or degrade. Studying this setting is interesting beyond transductive zero-shot learning. In fact, it has the potential to generalize to scenarios where the target task involves unseen classes, while the seen classes consist of auxiliary labeled data from the same domain but different task [30].
Unsupervised learningIn the unsupervised learning (UL) setting, we have access only to unlabeled data \(X_{U}=\{x\}\), from which we obtain \(\mathcal{D}_{PL}=(x,\tilde{y})\), where \(\tilde{y}\in[C]\) denotes the pseudolabel for \(x\). In this case, \(\gamma\) is set to 0, as there is no labeled data, and \(\lambda=1\). The use of this setting was initially explored in [15], who leveraged a few pseudolabels per class to learn textual prompts. In this paper, we build on their work by investigating a variety of training strategies and prompt modalities.
Supervised learningIn supervised learning (SL), we are only provided with labeled data \(D_{L}=(x,y)\), where \(x\) represents an input feature, and \(y\in[C]\) is the corresponding label. If we set \(\lambda\) to 0, the unified loss function is equivalent to the objective functions of default prompt-tuning approaches that optimize the prompts using a few labeled instances per target class. This setting is not strictly part of our design space. However, we will refer to it to define baselines in Section 4.
### Training strategies
The unified objective function enables the development of training strategies broadly applicable across various learning paradigms. We explore three distinct learning strategies to effectively use pseudolabels in this context. The first strategy uses pseudolabels in a static manner. The other two strategies, which are unexplored for prompt tuning, involve the dynamic use of pseudolabeled data.
Few-pseudolabels (FPL)We select \(K\) pseudolabels per target class, resulting in a pseudolabeled dataset of size \(K\cdot C\). We learn the prompts by minimizing the objective function via backpropagation through CLIP's encoders. This strategy aligns with Unsupervised Prompt Learning (UPL) in [15]. We refer to it as few-pseudolabels (FPL) to encompass its applicability for learning prompts of diverse modalities across learning paradigms.
Iterative Refinement of FPL (IFPL)Similar to FPL, we obtain the top-K pseudolabels for each target class. These pseudolabels are then used to train a new task-specific prompt. After completing the training, we use the learned prompt to compute the top-K pseudolabels per class again. Subsequently, we reinitialize the prompt and repeat this entire process for a total of \(I\) iterations. With this iterative approach, if training with the initial pseudolabel set leads to an improvement in the model's performance, the model itself can become a more effective pseudolabeler, refining the pseudolabels in each subsequent iteration.
Grow and Refine Iteratively Pseudolabels (GRIP)Although IFPL can improve the quality of the \(K\times C\) pseudolabels used for training, it still limits learning to a few examples per target class. To overcome this constraint, we explore a method similar to IFPL, but with a key difference. In each iteration, we progressively increase the value of \(K\). Specifically, during the \(i\)-th iteration, we use \(K=(i\times\frac{|X_{U}|}{I})/C\) of the unlabeled data to perform the steps in the iterative process. GRIP maintains class balance by selecting the top-K samples at each iteration, with \(K\) increasing progressively. Similar to IFPL, both prompts and pseudolabels are reinitialized with every iteration, in order to avoid accumulating errors from earlier iterations. In other words, learning progresses from pseudolabels to new prompts to new pseudolabels, and so on. The rationale behind this strategy is that as the model's accuracy in generating pseudolabels improves, we can increase the total number of pseudolabels without introducing excessive noise.
## 4 Experiments
We explore the design space outlined in Section 3 to understand the effectiveness of leveraging pseudolabels for limited-label prompt tuning. We show that (1) iterative strategies significantly improve CLIP's performance across prompt modalities and learning settings, (2) using CLIP-based pseudolabels with iterative strategies induces a more equitable distribution of per-class accuracy.
DatasetsWe conduct the analysis on six tasks, covering specialized and fine-grained domains, where CLIP shows deficiencies [31]. We call this set of tasks FRAMED, and it includes Flowers102 [29], RESICS45 [9], FGVC-Aircraft [26], MNIST [11], EuroSAT [14], and DTD [10]. For each dataset we use the training and test splits provided in [23]. For the transductive zero-shot learning setting we randomly generate three splits of seen and unseen classes with a 62-38 ratio. Further details are in Appendix A.2.
BaselinesTo evaluate the effectiveness of the training strategies described in Section 3.3, we compare the performance of CLIP when queried with the learned soft prompts to CLIP zero-shot with default prompts such as "a photo of a [CLASS]." In addition, we compare with default supervised prompt-tuning baselines, for which we only use the available labeled data: CoOp [48] for textual prompts, VPT [18] for visual prompts, and UPT [44] for multimodal prompts. We defer to Appendix A.1 the technical details of these methods.
Evaluation metricsWe assess the performance of each method by measuring the accuracy of the test set, averaging the results over five runs. In the case of TRZSL, we report the harmonic of the accuracies of seen and unseen classes to account for their potentially imbalanced performance [39].
Training settingsFor all experiments, datasets, and learning strategies, we use ViT-B/32 as the vision backbone. For both visual and textual prompt learning, we set the prefix size to 16 [48; 18]. Multimodal prompts have length 8 [44]. We use SGD as the optimizer and train for 150 epochs. We use 5 warmup epochs at a learning rate of 0.0001, and then set the learning rate to \(l\), which is decayed by the cosine annealing rule. For textual and visual prompt learning, \(l=0.1\), while for multimodal prompt learning, \(l=0.01\). In SSL, we use 2 labeled samples per class to assess the impact of pseudolabels in the scenario of very few labeled data and abundant unlabeled data. The number of iterations \(I\) is \(10\). FPL and IFPL have the number of pseudolabels per class fixed to 16 since it is indicated as the optimal \(K\) in the previous research on pseudolabeling with CLIP [15]. In general, \(K\) is a hyperparameter that may require optimization in practical cases. We decide to be consistent with the literature and apply this fixed value of \(K\) in order to reduce the addition of more confounding factors in our analysis.
### Exploring the design space
GRIP consistently enhances CLIP across prompt modalities and learning settings Table 1 reports the performance of GRIP, the best performing among the training strategies in Section 3.3, compared to CLIP and prompt-tuning baselines. Overall, GRIP consistently improves the performance of CLIP and the baselines across prompt modalities and learning settings. By tuning textual prompts, the average improvement over CLIP is 20.7 points in SSL, 14.9 in UL, and 32.4 in TRZSL, while the improvement on CoOp is 9.6 points in SSL, and 26.6 in TRZSL. Similar results for the visual prompts show that GRIP improves CLIP by 18.2 points in SSL, 15.7 in UL, and 30.8 in TRZSL, and VPT by 12.9 points in SSL, and 20.8 in TRZSL. We note that CoOp and VPT applied to the SSL setting correspond to learning only on the labeled data, and we do not run them in the UL setting as there is no labeled data. Results are similar for multimodal prompts. We defer them to Appendix A.3, due to space constraints.
No prompt modality is clearly superiorUsing pseudolabels dynamically is beneficial for each modality. However, determining the clear superiority of one prompt modality over the other is challenging, as it depends on the specific tasks. For example, visual prompts work better for EuroSAT, while textual prompts excel in Flowers102. Despite intuitive explanations (Section 3.1), the scientific consensus remains elusive [44]. Hence, we prefer to emphasize that the dynamic use of pseudolabels consistently improves performance for each prompt modality, without declaring one modality as definitively better than the other.
Unsupervised learning is equivalent or more robust than learning with very few shotsThe accuracy of GRIP when applied to the fully unsupervised setting is either higher or equivalent to the accuracy of VPT, which is trained using two labeled instances per class (Table 1). This shows that pseudolabeled data can substitute very few labeled examples for prompt tuning. However, the significant improvement of GRIP over CoOp and VPT in the semi-supervised setting (see Table 1) suggests that leveraging unlabeled data through pseudolabeling is advantageous in scenarios where labeled data is scarce but there is an abundance of unlabeled data.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Method}} & \multicolumn{3}{c}{Flowers102} & \multicolumn{3}{c}{RESICS45} & \multicolumn{3}{c}{FGVC Aircraft} \\ \cline{2-10} & SSL & UL & TRZSL & SSL & UL & TRZSL & SSL & UL & TRZSL \\ \hline CLIP & \(63.67_{0.00}\) & \(63.40_{0.00}\) & \(54.48_{0.00}\) & \(54.46_{0.00}\) & & \(17.58_{0.00}\) & & \(17.86_{0.00}\) & \\ CoOp & \(76.76_{1.11}\) & - & \(63.22_{0.00}\) & \(58.53_{0.01}\) & - & \(63.37_{0.02}\) & & \(14.91_{3.22}\) & - & \(21.70_{0.03}\) \\ GRIP & \(\mathbf{83.6_{0.68}}\) & \(\mathbf{69.84_{1.06}}\) & \(\mathbf{86.26_{0.00}}\) & \(\mathbf{74.11_{0.08}}\) & \(\mathbf{70.55_{0.58}}\) & \(\mathbf{81.07_{0.00}}\) & & \(16.98_{0.02}\) & \(15.22_{0.71}\) & \(\mathbf{26.08_{0.00}}\) \\ \hline \(\Delta\) CLIP & \(\uparrow 19.93\) & \(\uparrow 6.17\) & \(12.28\) & \(\uparrow 19.63\) & \(\uparrow 16.07\) & \(2\cdot 26.61\) & \(\uparrow 0.6\) & \(\downarrow 2.36\) & \(\uparrow 8.22\) \\ \(\Delta\) CoOp & \(\uparrow 6.84\) & - & \(\uparrow 23.04\) & \(\uparrow 15.58\) & - & \(\uparrow 17.70\) & \(\uparrow 2.07\) & - & \(\uparrow 4.38\) \\ \hline \multicolumn{10}{c}{MNIST} & \multicolumn{3}{c}{EuroSAT} & \multicolumn{3}{c}{DTD} \\ \cline{2-10} & \(2.51_{0.00}\) & \(27.70_{0.00}\) & \(32.88_{0.00}\) & \(30.54_{0.00}\) & & \(43.24_{0.00}\) & & \(43.45_{0.00}\) \\ CoOp & \(56.42_{0.00}\) & - & \(21.50_{0.00}\) & \(\mathbf{59.51_{4.55}}\) & - & \(49.68_{0.00}\) & & \(37.10_{4.56}\) & - & \(46.30_{0.00}\) \\ GRIP & \(\mathbf{71.78_{5.59}}\) & \(\mathbf{67.88_{2.76}}\) & \(\mathbf{74.06_{0.00}}\) & \(\mathbf{58.66_{0.64}}\) & \(\mathbf{57.21_{1.77}}\) & \(\mathbf{92.33_{0.00}}\) & & \(\mathbf{56.07_{0.85}}\) & \(\mathbf{46.09_{1.06}}\) & \(\mathbf{65.30_{0.01}}\) \\ \hline \(\Delta\) CLIP & \(\uparrow 4.68\) & \(\uparrow 42.78\) & \(\uparrow 53.29\) & \(\uparrow 25.78\) & \(\uparrow 24.33\) & \(\uparrow 61.79\) & \(\uparrow 12.83\) & \(\uparrow 2.85\) & \(\uparrow 21.85\) \\ \(\Delta\) CoOp & \(\uparrow 15.36\) & - & \(\uparrow 52.91\) & \(\downarrow 0.85\) & - & \(\uparrow 42.65\) & \(\uparrow 18.97\) & - & \(\uparrow 19.00\) \\ \hline \hline \multicolumn{10}{c}{**Visual prompts**} & \multicolumn{3}{c}{Flowers102} & \multicolumn{3}{c}{RESICS45} & \multicolumn{3}{c}{FGVC Aircraft} \\ \cline{2-10} & SSL & UL & TRZSL & SSL & UL & TRZSL & SSL & UL & TRZSL \\ \hline CLIP & \(63.67_{0.00}\) & \(63.40_{0.00}\) & \(54.48_{0.00}\) & & \(54.46_{0.00}\) & & \(17.58_{0.00}\) & & \(17.86_{0.00}\) \\ VPT & \(63.73_{1.52}\) & - & \(64.77_{0.00}\) & \(60.80_{0.00}\) & & \(67.06_{0.00}\) & & \(17.76_{0.68}\) & & \(26.69_{0.00}\) \\ GRIP & \(\mathbf{67.95_{1.2}}\) & \(63.09_{0.55}\) & \(\mathbf{77.18_{0.00}}\) & \(\mathbf{71.22_{0.77}}\) & \(\mathbf{68.43_{0.61}}\) & \(\mathbf{82.19_{0.00}}\) & & \(\mathbf{19.45_{0.5}}\) & \(17.51_{0.61}\) & \(26.42_{0.00}\) \\ \hline \(\Delta\) CLIP & \(\uparrow 4.28\) & \(\downarrow 0.58\) & \(\uparrow 13.78\) & \(\uparrow 16.74\) & \(\uparrow 13.95\) & \(\uparrow 27.73\) & \(\uparrow 1.85\) & \(\downarrow 0.07\) & \(\uparrow 8.56\) \\ \(\Delta\) VPT & \(\uparrow 4.22\) & - & \(\uparrow 12.47\) & \(\uparrow 10.42\) & - & \(\uparrow 15.13\) & \(\uparrow 1.67\) & - & \(\downarrow 0.27\) \\ \hline \multicolumn{10}{c}{MNIST} & \multicolumn{3}{c}{EuroSAT} & \multicolumn{3}{c}{DTD} \\ \cline{2-10} & \(25.10_{0.00}\) & \(20.77_{0.00}\) & \(32.88_{0.00}\) & & \(30.54_{0.00}\) & & \(43.24_{0.00}\) & & \(43.45_{0.00}\) \\ VPT & \(\mathbf{42.53_{1.13}}\) & \(25.51_{0.05}\) & \(47.13_{1.34}\) & & \(\mathbf{62.24_{0.00}}\) & \(36.41_{21.7}\) & - & \(44.16_{0.01}\) \\ GRIP & \(\mathbf{69.66_{1.51}}\) & \(\mathbf{68.04_{1.11}}\) & \(\mathbf{69.54_{0.50}}\) & \(\mathbf{63.48_{0.00}}\) & \(\mathbf{63.68_{0.32}}\) & \(\mathbf{96.97_{0.70}}\) & \(\mathbf{54.57_{0.86}}\) & \(\mathbf{50.51_{0.50}}\) & \(\mathbf{62.78_{0.00}}\) \\ \hline \(\Delta\) CLIP & \(\uparrow 4.56\) & \(\uparrow 42.94\) & \(\uparrow 48.77\) & \(\uparrow 30.60\) & \(\uparrow 30.80\) & \(\uparrow 66.43\) & \(\uparrow 11.33\) & \(\uparrow 7.27\) & \(\uparrow 19.33\) \\ \(\Delta\) VPT & \(\uparrow 27.14\) & - & \(\uparrow 44.03\) & \(\uparrow 16.35\) & - & \(\uparrow 34.73\) & \(\uparrow 18.16\) & - & \(\uparrow 18.62\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: For each learning paradigm, we compare the accuracy of GRIP with CLIP zero-shot (ViT-B/32), CoOp, and VPT. Results are for SSL, UL, and TRZSL on FRAMED. We average the accuracy on 5 seeds and report the standard deviation. \(\Delta\) METHOD is the difference between the accuracy of GRIP and METHOD. We note that for UL we can not apply CoOp and VPT since no labeled data is available.
Figure 2: Balance of seen and unseen accuracies vs. model’s overall accuracy. Points close to 0 indicate a good balance. Negatives represent better accuracy for the seen classes.
**Transductive zero-shot learning effectively transfers knowledge** In the TRZSL setting, GRIP improves over CLIP and the baselines by a large margin (Table 1). Figure 2 displays the balance of seen and unseen classes of each method alongside its accuracy. The _class balance_ is \((acc_{unseen}-acc_{seen})/acc_{seen}\), where values close to zero indicate a good balance, negative values indicate better accuracies for seen classes, and positive values indicate better accuracies for unseen classes. Methods employing an iterative usage of pseudolabels maintain a good balance, as opposed to CoOp/VPT and FPL. This balance in accuracy is likely a combined effect of the quality of the pseudolabels and the transfer of knowledge from the seen to the unseen classes. The latter point is significant because it implies that even if we only possess unlabeled data for a specific target task, we can still use labeled data from related classes [30] within the same domain to enhance CLIP's performance.
**There is a trade-off between quality and quantity of pseudolabels** Table 2 shows the performance of CLIP employing prompts learned with different training strategies, all leveraging pseudolabels (Section 3.3). Iterative strategies are more effective than FPL which, similarly to [15], use a static set of a few pseudolabels for one iteration.
On Flowers102, RESICS45, and DTD, IFPL improves on average FPL by 5.6 points in SSL, 1.7 in UL, and 5.6 in TRZSL. GRIP boosts the performance even more by on average 7.8 points in SSL, 3.1 in UL, and 9.7 in TRZSL. Results on the other tasks are comparable or larger and we report them in Appendix A.3 due to space constraints.
Figure 3 shows the progression of pseudolabels quality for the iterative learning of textual prompts. IFPL maintains a fixed set of 16 pseudolabels, improving their quality with each iteration (top x-axis).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{**Textual prompts**} & \multicolumn{6}{c}{**Flowers102**} & \multicolumn{6}{c}{**RESICS45**} & \multicolumn{3}{c}{DTD} \\ \hline Method & SSL & UL & TRZSL & SSL & UL & TRZSL & SSL & UL & TRZSL \\ \hline FPL & \(75.96_{0.74}\) & \(65.67_{0.23}\) & \(80.97_{0.00}\) & \(68.13_{0.55}\) & \(63.07_{0.38}\) & \(72.71_{0.00}\) & \(37.10_{0.45}\) & \(44.96_{0.55}\) & \(46.3_{0.03}\) \\ IFPL & \(78.60_{0.75}\) & \(65.61_{0.28}\) & \(82.08_{0.00}\) & \(70.52_{1.24}\) & \(64.11_{0.18}\) & \(75.51_{0.00}\) & \(\mathbf{55.24}_{0.07}\) & \(\mathbf{47.77}_{1.15}\) & \(59.14_{0.02}\) \\ GRIP & \(\mathbf{83.60_{0.48}}\) & \(\mathbf{69.84_{0.46}}\) & \(\mathbf{86.26_{0.00}}\) & \(\mathbf{74.14_{0.68}}\) & \(\mathbf{70.58_{0.81}}\) & \(\mathbf{81.00_{0.05}}\) & \(\mathbf{56.07_{0.58}}\) & \(\mathbf{46.09}_{0.06}\) & \(\mathbf{65.30}_{0.01}\) \\ \hline \(\Delta\) IFPL & \(\uparrow\) 2.72 & \(\uparrow\) 3.89 & \(\uparrow\) 1.11 & \(\uparrow\) 2.39 & \(\uparrow\) 1.07 & \(\uparrow\) 3.4 & \(\uparrow\) 18.14 & \(\uparrow\) 2.81 & \(\uparrow\) 12.84 \\ \(\Delta\) GRIP & \(\uparrow\) 7.64 & \(\uparrow\) 4.17 & \(\uparrow\) 5.29 & \(\uparrow\) 5.89 & \(\uparrow\) 7.48 & \(\uparrow\) 8.96 & \(\uparrow\) 18.97 & \(\uparrow\) 1.13 & \(\uparrow\) 19.00 \\ \hline \hline \multicolumn{1}{l}{**Visual prompts**} & \multicolumn{6}{c}{**Visual prompts**} & \multicolumn{6}{c}{**Visual prompts**} & \multicolumn{6}{c}{**Visual prompts**} & \multicolumn{6}{c}{**Visual prompts**} & \multicolumn{6}{c}{**Visual prompts**} \\ \hline FPL & \(67.03_{0.65}\) & \(\mathbf{65.50_{0.41}}\) & \(71.94_{0.60}\) & \(65.14_{0.25}\) & \(62.24_{0.22}\) & \(67.85_{0.00}\) & \(47.60_{0.40}\) & \(47.60_{0.48}\) & \(52.43_{0.00}\) \\ IFPL & \(\mathbf{68.69_{0.45}}\) & \(\mathbf{66.12_{0.46}}\) & \(76.91_{0.00}\) & \(67.11_{1.19}\) & \(62.93_{1.23}\) & \(73.53_{0.50}\) & \(\mathbf{51.65}_{0.00}\) & \(\mathbf{50.34}_{0.65}\) & \(57.86_{0.01}\) \\ GRIP & \(\mathbf{67.95_{1.2}}\) & \(63.00_{0.56}\) & \(\mathbf{77.18_{0.00}}\) & \(\mathbf{71.22_{0.77}}\) & \(\mathbf{68.50_{0.51}}\) & \(\mathbf{82.19_{0.00}}\) & \(\mathbf{54.57_{1.86}}\) & \(\mathbf{50.51_{0.59}}\) & \(\mathbf{62.78}_{0.00}\) \\ \hline \(\Delta\) IFPL & \(\uparrow\) 1.66 & \(\downarrow\) 0.38 & \(\uparrow\) 4.97 & \(\uparrow\) 1.97 & \(\uparrow\) 0.69 & \(\uparrow\) 5.68 & \(\uparrow\) 4.05 & \(\uparrow\) 2.65 & \(\uparrow\) 5.43 \\ \(\Delta\) GRIP & \(\uparrow\) 0.92 & \(\downarrow\) 3.41 & \(\uparrow\) 5.24 & \(\uparrow\) 6.08 & \(\uparrow\) 6.19 & \(\uparrow\) 14.34 & \(\uparrow\) 6.97 & \(\uparrow\) 2.82 & \(\uparrow\) 10.35 \\ \hline \hline \multicolumn{1}{l}{**Multimodal prompts**} & \multicolumn{6}{c}{**Multimodal prompts**} & \multicolumn{6}{c}{**Visual prompts**} & \multicolumn{6}{c}
On the other hand, GRIP and CLIP expand pseudolabels by incorporating an additional decile of unlabeled data in each iteration (bottom x-axis). Initially, GRIP maintains accuracy, but as it nears completion the quality tends to decrease, while a larger dataset with good-quality pseudolabels becomes available.
Comparing GRIP and CLIP, GRIP's expanded pseudolabels exhibit superior quality and performs better (Table 1). Even though IFPL's pseudolabel accuracy surpasses GRIP in the final iteration, GRIP's overall performance remains better due to training on a larger number of pseudolabels (Table 2). This suggests that numerous, slightly noisier pseudolabels can yield better results, highlighting a trade-off and offering insights for future approaches.
GRIP benefits adaptation even for larger image encodersWe measure how much the effect of the iterative strategies changes if we consider a larger pre-trained image encoder. In Table 3, we report the average improvements of GRIP on CLIP for Flowers102, RESICS45, and DTD. The magnitude of improvements slightly decreases when using a larger image encoder. However, we still see significant benefits for both modalities. The smaller relative improvements with respect to smaller visual encoders align with our expectations. Larger encoders possess a stronger base knowledge, making it relatively more challenging to attain further improvements on top of it. In Table 10, we break down the accuracy of CLIP with different backbones. The performance of the larger backbone is higher, indicating higher quality pseudolabels.
### The Robin Hood effect
Although training models with pseudolabels can lead to good performance, it can also result in biased predictions and generate disparate impacts on sub-populations, i.e., the "Matthew effect" [49; 8]. Particularly, the use of pseudolabels can lead to improved performance in _well-behaved_ (high accuracy) classes but can cause stagnation or decreased performance in _poorly behaved_ (low accuracy) classes. As we explore the use of pseudolabels, we investigate how the accuracy of the analyzed approaches distributes across classes. Figure 4 show an opposite scenario from typical SSL. The solid line represents the sorted per-class accuracies of CLIP. The arrows indicate the per-class accuracies of GRIP. For all learning paradigms, the iterative training strategies increase the accuracy of classes where CLIP is not proficient, while maintaining or decreasing the accuracy of initially well-behaved classes. This effect, which we call the "Robin Hood effect," is very interesting because it shows how CLIP can mitigate its own bias toward certain classes by learning from itself.
To understand the roots of the Robin Hood effect, we examine two factors: (1) the role of pseudolabels generated by CLIP, and (2) the role of prompt tuning. To disentangle these factors, we explore the variation in per-class accuracy of a basic linear classifier trained on CLIP's ViT-B/32 image representation.
"Second generation" pseudolabels are a good treatment for class disparityWe train the linear classifier in the SSL setting on 2 labeled examples per class and pseudolabels. The pseudolabels are obtained through conventional methods, where a threshold of.95 is applied, or by using CLIP to generate 16 pseudolabels per class.
Figure 4: Improvements of FPL and GRIP on CLIP’s per-class accuracies (RESICS45). The x-axis is the ranked class index, while y-axis is the accuracy.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Textual prompts** & & & & \\ \cline{2-4} & SSL & UL & TRZSL \\ \hline Avg. \(\Delta\) CLIP (ViT-B/32) & \(17.46_{12.83}\) & \(8.36_{28.85}\) & \(23.77_{2.51}\) \\ Avg. \(\Delta\) CLIP (ViT-L/14) & \(15.85_{44.4}\) & \(8.16_{1.22}\) & \(19.96_{19.19}\) \\ \hline
**Visual prompts** & & & \\ \cline{2-4} & SSL & UL & TRZSL \\ \hline Avg. \(\Delta\) CLIP (ViT-B/32) & \(10.78_{-42.8}\) & \(6.88_{-0.58}\) & \(20.28_{13.78}\) \\ Avg. \(\Delta\) CLIP (ViT-L/14) & \(7.61_{3.27}\) & \(4.89_{-0.48}\) & \(16.14_{11.13}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average improvement of GRIP with different backbones on Flowers102, RESICS45, and DTD. \(\Delta\) CLIP is the difference between the accuracy of GRIP and CLIP. Alongside the average, we provide the minimum improvement across tasks.
We find that both approaches yield similar overall accuracies. However, we observe the "Matthew effect" when using the first approach. In contrast, when using CLIP-based pseudolabels, the class disparity of the regressor trained solely on seen classes is reduced. Particularly, we see a significant improvement on initially poor classes, together with a significant diminishing of the accuracy of well-behaved classes. We observe a clear manifestation of the "Robin Hood effect." We present plots illustrating this effect in Appendix A.4.
Prompt tuning retains the accuracy of already rich classes better than linear probingTo evaluate the role of prompt tuning in the "Robin Hood effect," we train a linear classifier and textual prompts in the UL setting using GRIP's training strategy. Comparing the per-class accuracies of the two approaches, GRIP on prompts shows an average improvement of 22.85 points for the poor classes across tasks, along with a slight average decrease of 0.3 points for the rich classes. On the other hand, linear probing determines a 14.42 points improvement for the poor classes, but it results in an average decrease of 9.39 points in accuracy for the rich classes (Appendix A.4).
## 5 Conclusions
We show that prompt tuning using pseudolabels generated by CLIP itself is a successful approach to enhance CLIP across various learning settings. Training strategies that iteratively refine pseudolabels turn out to be effective ways of leveraging pseudolabeled data. These approaches not only enhance CLIP's accuracy but also mitigate model biases toward certain classes. We hope this work lays a solid groundwork for reducing reliance on labeled data when adapting pre-trained vision-language models like CLIP to new tasks.
LimitationsThe effectiveness of the training strategies examined in this paper depends on both the strategies themselves and the quality of pseudolabels. The latter is particularly crucial. If CLIP performs poorly on a task, we may struggle to obtain a reliable set of pseudolabels to begin with, potentially diminishing CLIP's performance. Despite this potential risk, we have not observed any relevant failure of GRIP, even in tasks where CLIP's initial accuracy is extremely low (such as FGVC Aircraft). Also, the pseudolabeling strategy we adopt involves selecting \(K\) pseudolabels per class, which can create a strong assumption about the distribution of the training data if we attempt to cover all unlabeled data. During the final iteration, it is as if we assume a uniform class balance.
Another important consideration is the efficiency of the explored methods. Repeating the training process multiple times brings impressive improvements at the cost of a non-negligible increase of computation time. At each iteration, we generate pseudolabels for the unlabeled data from scratch. While we parallelized the pseudolabeling procedure to cut some of the cost, reducing those for the iterative training presents more significant challenges. We decided to mainly focus on the analysis of qualitative and quantitative effects of pseudolabels in prompt tuning. Future research should address budget constraints and investigate optimal stopping criteria for the iterative process, considering the possibility of reaching a plateau or decreased pseudolabel quality after a certain point, to maximize efficiency while maintaining performance.
## Acknowledgments and Disclosure of Funding
We are thankful to our reviewers for the fruitful and insightful discussions that contributed to the refinement of the paper. We also thank Reza Esfandiarpoor and Zheng-Xin Yong for the comments on our drafts. This material is based on research sponsored by Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) under agreement number FA8750-19-2-1006. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) or the U.S. Government. We gratefully acknowledge support from Google and Cisco. Disclosure: Stephen Bach is an advisor to Snorkel AI, a company that provides software and services for data-centric artificial intelligence.
## References
* Bach et al. [2022] Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. PromptSource: An integrated development environment and repository for natural language prompts. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_. Association for Computational Linguistics, 2022.
* Bahng et al. [2022] Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, and Phillip Isola. Exploring visual prompts for adapting large-scale models. _arXiv preprint arXiv:2203.17274_, 2022.
* Berthelot et al. [2020] David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In _International Conference on Learning Representations_, 2020.
* Berthelot et al. [2019] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In _Advances in Neural Information Processing Systems_, 2019.
* Bo et al. [2021] Liu Bo, Qiulei Dong, and Zhanyi Hu. Hardness sampling for self-training based transductive zero-shot learning. _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 16494-16503, 2021.
* Bommasani et al. [2021] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, S. Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen A. Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa D'Oumbuya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajah, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Benjamin Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, J. F. Nyarko, Giray Ogut, Laurel J. Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher R'e, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishna Parasuram Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramer, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei A. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. On the opportunities and risks of foundation models. _ArXiv_, abs/2108.07258, 2021.
* Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_, 2020.
* Chen et al. [2022] Baixu Chen, Junguang Jiang, Ximei Wang, Pengfei Wan, Jianmin Wang, and Mingsheng Long. Debiased self-training for semi-supervised learning. In _Advances in Neural Information Processing Systems_, 2022.
* Cheng et al. [2017] Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. _Proceedings of the IEEE_, 2017.
* [10] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. _2014 IEEE Conference on Computer Vision and Pattern Recognition_, 2013.
* [11] Li Deng. The mnist database of handwritten digit images for machine learning research [best of the web]. _IEEE Signal Processing Magazine_, 2012.
* [12] Yunhe Gao, Xingjian Shi, Yi Zhu, Hongya Wang, Zhiqiang Tang, Xiong Zhou, Mu Li, and Dimitris N. Metaxas. Visual prompt tuning for test-time domain adaptation. _ArXiv_, abs/2210.04831, 2022.
* [13] Chunjiang Ge, Rui Huang, Mixue Xie, Zihang Lai, Shiji Song, Shuang Li, and Gao Huang. Domain adaptation via prompt learning. _ArXiv_, abs/2202.06687, 2022.
* [14] Patrick Helber, Benjamin Bischke, Andreas R. Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 2017.
* [15] Hao Huang, Jack Chu, and Fangyun Wei. Unsupervised prompt learning for vision-language models. _ArXiv_, abs/2204.03649, 2022.
* [16] Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondrej Chum. Label propagation for deep semi-supervised learning. _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 5065-5074, 2019.
* [17] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In _International Conference on Machine Learning_, 2021.
* [18] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In _ECCV 2022: 17th European Conference on Computer Vision_, 2022.
* [19] Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. Maple: Multi-modal prompt learning. _ArXiv_, abs/2210.03117, 2022.
* [20] Dong-Hyun Lee. Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks. _ICML 2013 Workshop : Challenges in Representation Learning (WREPL)_, 2013.
* [21] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, 2021.
* [22] Will LeVine, Benjamin Pikus, Pranav Vishnu Raja, and Fernando Amat. Enabling calibration in the zero-shot inference of large vision-language models. In _ICLR 2023 Workshop on Pitfalls of limited data and computation for Trustworthy ML_, 2023.
* [23] Chunyuan Li, Haotian Liu, Liunian Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Houdong Hu, Zicheng Liu, Yong Jae Lee, and Jianfeng Gao. Elevater: A benchmark and toolkit for evaluating language-augmented visual models. In _Advances in Neural Information Processing Systems_, 2022.
* [24] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, 2021.
* [25] Bo Liu, Lihua Hu, Qiulei Dong, and Zhanyi Hu. An iterative co-training transductive framework for zero shot learning. _IEEE Transactions on Image Processing_, 30:6943-6956, 2021.
* [26] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B. Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. _ArXiv_, abs/1306.5151, 2013.
* [27] Shu Manli, Nie Weili, Huang De-An, Yu Zhiding, Goldstein Tom, Anandkumar Anima, and Xiao Chaowei. Test-time prompt tuning for zero-shot generalization in vision-language models. In _NeurIPS_, 2022.
* [28] Nihal V. Nayak, Peilin Yu, and Stephen Bach. Learning to compose soft prompts for compositional zero-shot learning. In _The Eleventh International Conference on Learning Representations_, 2023.
* [29] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. _2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing_, 2008.
* [30] Wasu Piriyakulkij, Cristina Menghini, Ross Briden, Nihal V. Nayak, Jeffrey Zhu, Elaheh Raisi, and Stephen H. Bach. TAGLETS: A system for automatic semi-supervised learning with auxiliary data. In _Conference on Machine Learning and Systems (MLSys)_, 2022.
* [31] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_, 2021.
* [32] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
* [33] Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhabalani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesth Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. Multitask prompted training enables zero-shot task generalization. In _International Conference on Learning Representations_, 2022.
* [34] Sheng Shen, Shijia Yang, Tianjun Zhang, Bohan Zhai, Joseph E. Gonzalez, Kurt Keutzer, and Trevor Darrell. Multitask vision-language prompt tuning. _arXiv preprint arXiv:2211.11720_, 2022.
* [35] Weiwei Shi, Yihong Gong, C. Ding, Zhiheng Ma, Xiaoyu Tao, and Nanning Zheng. Transductive semi-supervised deep learning using min-max features. In _European Conference on Computer Vision_, 2018.
* [36] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In _Advances in Neural Information Processing Systems_, 2020.
* [37] Ximeng Sun, Ping Hu, and Kate Saenko. Dualcoop: Fast adaptation to multi-label recognition with limited annotations. _ArXiv_, 2022.
* [38] Xudong Wang, Zhi-Li Wu, Long Lian, and Stella X. Yu. Debiased learning from naturally imbalanced pseudo-labels. _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2022.
* [39] Yongqin Xian, Bernt Schiele, and Zeynep Akata. Zero-shot learning -- the good, the bad and the ugly. _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2017.
* [40] Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. Unsupervised data augmentation for consistency training. In _Advances in Neural Information Processing Systems_, 2020.
* [41] Yi Xu, Lei Shang, Jinxing Ye, Qi Qian, Yu-Feng Li, Baigui Sun, Hao Li, and Rong Jin. Dash: Semi-supervised learning with dynamic thresholding. In _International Conference on Machine Learning_, 2021.
* [42] Yunlong Yu, Zhong Ji, Xi Li, Jichang Guo, Zhongfei Zhang, Haibin Ling, and Fei Wu. Transductive zero-shot learning with a self-training dictionary approach. _IEEE Transactions on Cybernetics_, 48(10):2908-2919, 2018.
* [43] Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel C. F. Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, and Pengchuan Zhang. Florence: A new foundation model for computer vision. _ArXiv_, abs/2111.11432, 2021.
* [44] Yuhang Zang, Wei Li, Kaiyang Zhou, Chen Huang, and Chen Change Loy. Unified vision and language prompt learning. _ArXiv_, abs/2210.07225, 2022.
* [45] Bowen Zhang, Yidong Wang, Wenxin Hou, HAO WU, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In _Advances in Neural Information Processing Systems_, 2021.
* [46] X. Zhang, Yusuke Iwasawa, Yutaka Matsuo, and Shixiang Shane Gu. Domain prompt learning for efficiently adapting clip to unseen domains. 2021.
* [47] Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. 2021.
* [48] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. _International Journal of Computer Vision_, 2021.
* [49] Zhaowei Zhu, Tianyi Luo, and Yang Liu. The rich get richer: Disparate impact of semi-supervised learning. In _International Conference on Learning Representations_, 2022.
Appendix
We include here extra information that supports the results presented in the main body of the paper.
ReproducibilityWe have provided the code to run the experiments as supplementary material for the submission. However, we plan to release it as an open repository upon acceptance.
### Trainable Prompts
Text Prompt TuningThe primary objective of text prompt tuning is to improve the alignment between the class token and the image features extracted by the image encoder. This is achieved by adding learnable vectors, i.e., _prefix_, before the \(\mathtt{CLASS}\) token to create a contextualized representation. Specifically, the sequence
\[\mathbf{t}=[\mathrm{V}]_{1}[\mathrm{V}]_{2}\ldots[\mathrm{V}]_{M}[\mathrm{CLASS}]\]
is fed into the textual encoder, where each vector \([\mathrm{V}]_{m}\) (\(m\in 1,\ldots,M\)) has the same dimension as word embeddings, and \(M\) is a hyperparameter that determines the length of the prefix.
Context Optimization (CoOp) [48] was the first work to explore continuous prompts for VLMs. Follow-up works have experimented with different training strategies to enhance the generalizability of the learned prompts while preserving the core concept of continuous vector tuning [34, 12, 27, 46, 13, 37].
Tuning the text prefix vector changes the resulting \(n\) linear weight vectors \(w_{i}=\psi(p_{i})\), while leaving the image features unchanged. Therefore, text prompt tuning may be most beneficial when image features are well-separated by class but may not be aligned with the corresponding textual prompt. Conversely, text prompt tuning may not be as effective when the image features are poorly separated, as in specialized or novel domains where CLIP may lack sufficient training data.
Visual Prompt TuningInstead of tuning the text prompts, one can also tune the inputs of the vision encoder. In this case, a learnable visual prefix is prepended to the image tokens as input to the image transformer as follows:
\[\mathbf{\hat{I}}=[\mathrm{p}]_{1}\ldots[\mathrm{p}]_{K}[\mathrm{I}]_{1}\ldots[ \mathrm{I}]_{P}\]
where \(p\) represents a sequence of \(K\) learnable prefix vectors, and \([\mathrm{I}]_{1}\ldots[\mathrm{I}]_{P}\) are the image tokens from the corresponding \(P\) patches of the input images. The new sequence \(\mathbf{\hat{I}}\) is the input to the image encoder \(\phi\).
Visual Prompt Tuning (VPT) was introduced in the context of efficiently adapting pre-trained vision transformers to downstream tasks [18]. However, the approach has since been applied in the context of VLM [34].
Whereas text prompt tuning does not alter the image features, visual prompt tuning does. By rearranging the image features within the projection space, VPT has the potential to improve CLIP when the image features are not well separated by class, such as in specialized domains.
Multimodal Prompt TuningThe previous approaches are unimodal, as they either involve modifying the text or visual input, but never both. This choice may be suboptimal as it does not allow the flexibility to dynamically adjust both representations on a downstream task. Recently, multimodal prompt tuning has been introduced [44, 19]. We focus on Unified Prompt Tuning (UPT) [44] which essentially learns a tiny neural network to jointly optimize prompts across different modalities. UPT learns a set of prompts \(\mathbf{U}=[\mathbf{U}_{T},\mathbf{U}_{V}]\in\mathbb{R}^{d\times n}\) with length \(n\), where \(\mathbf{U}_{T}\in\mathbb{R}^{d\times n_{T}},\mathbf{U}_{V}\in\mathbb{R}^{d\times n_{V}}\). \(\mathbf{U}\) is transformed as follows:
\[\mathbf{U}^{\prime} =\mathrm{SA}(\mathbf{U})+\mathrm{LN}(\mathbf{U})\] \[\mathbf{\hat{U}} =\mathrm{FFN}\left(\mathrm{LN}\left(\mathbf{U}^{\prime}\right)\right)+ \mathrm{LN}\left(\mathbf{U}^{\prime}\right)\]
where \(\mathrm{SA}\) is the self-attention operator, \(\mathrm{LN}\) is the layer normalization operator, and \(\mathrm{FFN}\) is a feed forward network. After transformation, we obtain \(\mathbf{\hat{U}}=\left[\mathbf{\hat{U}}_{T},\mathbf{\hat{U}}_{V}\right]\in\mathbb{R}^{d \times n}\), such that \(\mathbf{\hat{U}}_{T}\) is to be used as a text prompt, and \(\mathbf{\hat{U}}_{V}\) is to be used as a visual prompt.
The author of UPL argue that self-attention allows for beneficial interaction between the two separate modalities, which leads to both separable visual features, and text classifiers that are well-aligned with the corresponding visual features [44].
Prompts initializationWe initialize textual and visual prompts from a normal distribution of mean 0 and variance 0.02. We note that we learn shallow visual prompts by modifying only the input to the image encoder. Multimodal prompts are initialized from a uniform distribution. We found that the latter was not working properly for textual and visual prompts.
Additional training settingsFor training, the batch size is 64.
Additional details about pseudolabels assignmentIt can happen that if \(K\) is too large and the unlabeled dataset is smaller than \(K\times C\), we cannot assign \(K\) samples per class. In this case, we can reduce \(K\) accordingly. Also, we the same sample might get assigned to the pseudolabel lists of different classes. It rarely happens in our experiments cases. Hoever, this is a characteristic of the pseudolabeling strategy proposed in [15], and it can be an object of study for future work motivated by the effectiveness of self-training.
### Datasets details
We use six datasets from specialized or fine-grained domains. Here we provide a description of each of them. In Table 4, we report the details about the number of classes and data available for each dataset. For each dataset, we also show CLIP's prediction distribution over classes Figure 5.
Flowers102 [29]It is a dataset collecting images for 102 flower categories commonly occurring in the United Kingdom. For each class we have between 40 and 258 images. Figure 4(a) shows that CLIP's predictions are skewed toward certain classes, which are predicted more often than what we would expect according to the real class distribution on the test set.
RESICS45 [9]This is a publicly available benchmark for Remote Sensing Image Scene Classification. It collects 45 kind of scenes. Figure 4(b) shows that CLIP predicts more often a subset of classes.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & Num. classes (\(|Y|\)) & Num. seen classes (\(S\)) & Num. unseen classes (\(U\)) & Size training data & Avg. labeled data per class & Size test \\ \hline Flowers102 & 102 & 63 & 39 & 2040 & 16 & 6149 \\ RESICS45 & 45 & 27 & 18 & 6300 & 110 & 25200 \\ FOVC- Aircraft & 100 & 62 & 38 & 6667 & 53 & 3333 \\ MNIST & 10 & 6 & 4 & 60000 & 4696 & 10000 \\ EuroSAT & 10 & 6 & 4 & 27000 & 2200 & 5000 \\ DTD & 47 & 29 & 18 & 3760 & 64 & 1880 \\ \hline \hline \end{tabular}
\end{table}
Table 4: For each dataset we report the number of classes, the number of seen and unseen classes in the TRZSL setting, the size of training data (including both labeled and unlabeled data), the average number of labeled examples per class, and the size of the test set which is the same across learning paradigms. We recall that we use the datasets gathered by the recent ELEVATER [23] benchmark for vision-language models.
Figure 5: For each dataset we show the distribution of CLIP’s predictions over classes on the test set. The blue dots represent the true class distribution.
FGVC-Aircraft [26]It describes the fine-grained task of categorizing aircraft. We consider the task of classifying aircrafts into 100 variants. Also for this task, CLIP assigns images to a reduced set of classes (Figure 4(c)).
Mnist[11]MNIST is a database of handwritten digits. The digits are size-normalized and centered in a fixed-size image. We observe that CLIP never predicts 6 out of 10 classes (Figure 4(c)).
EuroSAT[14]EuroSAT represents the task of categorizing satellite images of scenes. It consists of 10 classes. In Figure 4(e), we show CLIP's predictions distribution over the classes.
Dtd[10]DTD stands for Describable Textures Dataset. It is an evolving collection of textural images in the wild, and it is annotated relying on human-centric attributes, inspired by the perceptual properties of textures. The zero-shot CLIP predictions show the model's bias toward certain classes (Figure 4(f)).
### Experiments
In this section, we report tables and plots that complement the results presented in Section 4.
The effect of GRIP on multimodal promptsTable 5 shows the improvements of GRIP on CLIP and Unified Prompt Tuning (UPL) [44]. Similar to the results in Table 1, GRIP consistently improves CLIP with respect to the baselines. The improvements on CLIP are by 18.2 in semi-supervised learning, 14.8 in unsupervised learning, and 30.7 in transductive zero-shot learning. While GRIP outperforms UPL by 4.7 in semi-supervised learning, and 19.5 in transductive zero-shot learning.
Comparison across iterative strategiesIn Table 6, we report a comparison between FPL and the iterative strategies (IFPL and GRIP) on MNIST, EuroSAT, and FGVC-Aircraft. Results on the other tasks can be found in the main body of the paper Section 4.1. While GRIP largely and consistently outperforms FPL by on average 16.7 points in accuracy, IFPL is not robust and it leads to performances that are inferior to FPL by on average 4.4 points in accuracy.
The evolving accuracy of dynamic pseudolabelsFigure 6 represents the evolution of pseudolabels accuracy during training for all datasets, but Flowers102 and RESICS45 presented in Figure 3. We observe that the accuracy of the pseudolabels characterizes the overall performance of the models reported in Table 6. For instance, IFPL for EuroSAT in the TRZSL setting is highly variable, explaining the low average accuracy of the model on the test set (Table 6). Similarily, for MNIST in the TRZSL we observe that after the first iteration, the pseudolabels get very noisy. When we increase the amount of pseudolabeled data, the accuracy of CLIP is not necessarily constant. This is because as we increase K, we are effectively selecting pseudolabels with lower similarities to the
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{6}{l}{**Multimodal prompts**} \\ \hline \hline \multicolumn{6}{l}{**Multimodal prompts**} \\ \hline \hline & \multicolumn{3}{c}{Flowers102} & \multicolumn{3}{c}{RESICS45} & \multicolumn{3}{c}{FGVC Aircraft} \\ \hline Method & SSL & UL & TRZSL & SSL & UL & TRZSL & SSL & UL & TRZSL \\ \hline CLIP & 63.67\({}_{0.00}\) & 61.40\({}_{0.00}\) & 54.48\({}_{0.00}\) & 54.46\({}_{0.00}\) & **17.58\({}_{0.00}\)** & **17.86\({}_{0.00}\)** & **17.86\({}_{0.00}\)** \\ UPT & 68.03\({}_{1.29}\) & - & 61.05\({}_{0.64}\) & 62.84\({}_{1.06}\) & - & 58.79\({}_{0.00}\) & 11.13\({}_{4.98}\) & - & 15.89\({}_{0.00}\) \\ GRIP & **74.56\({}_{0.22}\)** & 64.82\({}_{1.63}\) & **82.01\({}_{0.10}\)** & **73.68\({}_{0.91}\)** & **63.70\({}_{0.61}\)** & **82.17\({}_{0.76}\)** & 17.36\({}_{0.14}\) & 14.73\({}_{0.08}\) & **17.85\({}_{0.30}\)** \\ \hline \(\Delta\) CLIP & \(\uparrow\) 10.89 & \(\uparrow\) 1.15 & \(\uparrow\) 18.61 & \(\uparrow\) 19.2 & \(\uparrow\) 14.89 & \(\uparrow\) 27.71 & \(\downarrow\) 0.22 & \(\downarrow\) 2.85 & \(\downarrow\) 0.01 \\ \(\Delta\) UPT & \(\uparrow\) 6.53 & - & \(\uparrow\) 20.96 & \(\uparrow\) 10.84 & - & \(\uparrow\) 22.38 & \(\uparrow\) 6.23 & - & \(\uparrow\) 1.96 \\ \hline \hline \multicolumn{6}{l}{**MNIST**} & \multicolumn{3}{c}{EuroSAT} \\ \hline CLIP & 25.10\({}_{0.00}\) & 20.77\({}_{0.00}\) & 32.88\({}_{0.00}\) & 30.54\({}_{0.00}\) & 43.24\({}_{0.00}\) & 43.45\({}_{0.00}\) \\ UPT & **64.44\({}_{0.66}\)** & - & 63.59\({}_{0.11}\) & **68.85\({}_{0.92}\)** & - & 60.43\({}_{0.00}\) & 43.71\({}_{2.18}\) & - & 36.91\({}_{0.00}\) \\ GRIP & **65.94\({}_{2.23}\)** & **68.18\({}_{\infty}\)** & **73.75\({}_{2.93}\)** & **60.38\({}_{2.77}\)** & **61.52\({}_{3.04}\)** & **59.52\({}_{0.40}\)** & **54.07\({}_{2.25}\)** & **47.37\({}_{0.7}\)** & **63.42\({}_{0.00}\)** \\ \hline \(\Delta\) CLIP & \(\uparrow\) 40.84 & \(\uparrow\) 43.08 & \(\uparrow\) 52.98 & \(\uparrow\) 27.5 & \(\uparrow\) 28.64 & \(\uparrow\) 64.98 & \(\uparrow\) 10.83 & \(\uparrow\) 4.13 & \(\uparrow\) 19.97 \\ \(\Delta\) UPT & \(\uparrow\) 2.35 & - & \(\uparrow\) 10.16 & \(\downarrow\) 8.47 & - & \(\uparrow\) 35.09 & \(\uparrow\) 10.36 & - & \(\uparrow\) 26.51 \\ \hline \hline \end{tabular}
\end{table}
Table 5: For each learning paradigm, we compare the accuracy of GRIP with CLIP zero-shot (ViT-B/32), and UPT. Results are for SSL, UL, and TRZSL on FRAMED. We average the accuracy on 5 seeds and report the standard deviation. \(\Delta\) METHOD is the difference between the accuracy of GRIP and METHOD. We note that for UL we can not apply UPL since no labeled data is available.
classes, resulting in a reduction in their accuracy, as shown in the plot. This observation aligns with previous findings in [15].
GRIP performance on transductive zero-shot learningWe show how the effectiveness of GRIP is consistent over the three random splits of seen and unseen classes which we randomly generated. The splits are reported in Table 9. Table 8 gathers the accuracy of seen and unseen classes, along with the harmonic mean for all three splits using textual prompts. Beyond the consistent improvement induced by GRIP training strategy, we observe that the accuracy of GRIP on the seen classes is often lower than the accuracy of CoOp on the same set of classes. During training, for large lambda, the loss component of unlabeled data (unseen classes) is the first to decrease, while the loss on the seen classes reduces later. Thus, we hypothesize that extra training steps might be needed to complete the learning on the labeled data.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{**Textual prompts**} & \multicolumn{6}{c}{EuroSAT} & \multicolumn{6}{c}{FGVC Aircraft} \\ \hline Method & SSL & UL & TRZSL & SSL & UL & TRZSL & SSL & UL & TRZSL \\ \hline FPL & \(66.06_{1.10}\) & \(40.03_{2.63}\) & \(73.93_{4.05}\) & \(\mathbf{62.05_{1.64}}\) & \(48.96_{4.49}\) & \(53.70_{58.87}\) & \(\mathbf{20.02_{0.77}}\) & \(\mathbf{16.62_{0.87}}\) & \(17.55_{30.37}\) \\ IFPL & \(59.14_{3.43}\) & \(28.94_{2.05}\) & \(0.00_{0.00}\) & \(\mathbf{61.28_{1.59}}\) & \(\mathbf{56.46_{4.26}}\) & \(14.36_{28.18}\) & \(18.00_{0.35}\) & \(13.80_{0.67}\) & \(21.72_{0.77}\) \\ GRIP & \(\mathbf{71.78_{5.59}}\) & \(\mathbf{67.82_{7.76}}\) & \(\mathbf{74.06_{2.95}}\) & \(56.66_{2.64}\) & \(\mathbf{57.21_{7.17}}\) & \(\mathbf{92.33_{40.09}}\) & \(16.98_{2.02}\) & \(15.20_{1.21}\) & \(\mathbf{26.08_{0.25}}\) \\ \hline \(\Delta\) IFPL & \(\downarrow 6.92\) & \(\downarrow 11.09\) & \(\downarrow 9.73\) & \(\downarrow 0.77\) & \(\uparrow 7.50\) & \(\downarrow 39.34\) & \(\downarrow 2.02\) & \(\downarrow 2.82\) & \(\uparrow 4.17\) \\ \(\Delta\) GRIP & \(\uparrow 5.72\) & \(\uparrow 27.85\) & \(\uparrow 64.33\) & \(\downarrow 3.39\) & \(\uparrow 8.25\) & \(\uparrow 38.63\) & \(\downarrow 3.04\) & \(\downarrow 1.40\) & \(\uparrow 8.53\) \\ \hline \hline \multicolumn{1}{c}{**Visual prompts**} & \multicolumn{6}{c}{} & \multicolumn{6}{c}{} & \multicolumn{6}{c}{} & \multicolumn{6}{c}{} & \multicolumn{6}{c}{} & \multicolumn{6}{c}{} & \multicolumn{6}{c}{} \\ \hline FPL & \(42.84_{4.86}\) & \(39.62_{5.33}\) & \(31.82_{7.53}\) & \(52.43_{5.87}\) & \(48.79_{3.60}\) & \(\mathbf{20.46_{2.06}}\) & \(\mathbf{18.28_{0.33}}\) & \(16.28_{0.45}\) \\ IFPL & \(52.91_{69.79}\) & \(37.17_{67.28}\) & \(38.34_{31.81}\) & \(\mathbf{57.85_{5.82}}\) & \(52.52_{10.00}\) & \(48.13_{11.13}\) & \(18.77_{46.16}\) & \(16.30_{0.37}\) & \(19.29_{0.36}\) \\ GRIP & \(\mathbf{69.66_{5.61}}\) & \(\mathbf{68.04_{1.11}}\) & \(\mathbf{69.54_{3.14}}\) & \(\mathbf{63.48_{2.09}}\) & \(\mathbf{63.63_{2.92}}\) & \(\mathbf{96.97_{3.07}}\) & \(\mathbf{19.45_{30.50}}\) & \(\mathbf{17.51_{0.16}}\) & \(\mathbf{26.42_{0.30}}\) \\ \hline \(\Delta\) IFPL & \(\uparrow 10.07\) & \(\downarrow 2.45\) & \(\uparrow 6.56\) & \(\uparrow 5.38\) & \(\downarrow 16.27\) & \(\downarrow 20.55\) & \(\downarrow 1.37\) & \(\downarrow 1.92\) & \(\uparrow 3.01\) \\ \(\Delta\) GRIP & \(\uparrow 26.82\) & \(\uparrow 28.42\) & \(\uparrow 37.72\) & \(\uparrow 11.01\) & \(\uparrow 14.89\) & \(\uparrow 28.29\) & \(\downarrow 0.71\) & \(\downarrow 0.77\) & \(\uparrow 10.14\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: For each learning paradigm, we compare FPL, IFPL, and GRIP on MNIST, EuroSAT, and FGVC Aircraft. We average across 5 runs and report the standard deviation. \(\Delta\) METHOD is the difference between the accuracy of FPL and METHOD.
Figure 6: We plot the evolution of dynamic-pseudolabels accuracy during training. The rows refer to SSL, UL, and TRZSL, in order. IFPL refers to the top x-axis, while CLIP and GRIP to the bottom.
### The Robin Hood effect
The Robin Hood effect on all tasksFor each dataset, we provide the per-class accuracy distribution of GRIP compared with CLIP, Figure 8. The Robin Hood effect characterizes all the tasks. We observe that for GRIP the increase in overall accuracy corresponds to consistent improvements in the predictions of initially poor classes. By comparing Figure 7 with Figure 8, we see that GRIP reinforces the Robin Hood effect already visible when using FPL in certain cases.
The importance of good quality pseudolabels to mitigate the Matthew effect in SSLIn the SSL setting, we train a logistic regression on top of the visual feature extracted by CLIP's image encoder (ViT-B/32). In Figure 9, we show the per-class accuracy of the final model trained by combining labeled data with either pseudolabels assigned with the conventional scheme (threshold at.95) or 16 CLIP-generated pseudolabels. We compare the two distribution with the per-class accuracy of the model trained solely on the few labeled examples per class (2 instances).
The different impact of prompt tuning and linear probing on the Robin Hood effectWe investigate if there is any difference in the Robin Hood effect when adapting CLIP via prompt tuning or linear probing. We train both relying on the iterative training strategy that grows the set of pseudolabels at each iteration by using the top-\(K\) scheme (Section 3). We consider the UL setting.
Among the set of target classes, we distinguish between _poor_ and _rich_ classes. A class is _poor_, if CLIP's accuracy on that class is lower than its overall accuracy on the task. Otherwise, the class is considered _rich_. Table 7 reports the accuracy of the two approaches, and the accuracy on the poor and rich classes, while highlighting the average effect with respect to CLIP. Training with prompt tuning retains more knowledge of the rich classes than linear probing. Prompt tuning reduces the accuracy on the rich classes by on average 0.3 points, while linear probing has an average deterioration of 9.4. the reduction of accuracy of rich classes is on average 30 times larger than the reduction observed using prompts. Overall, GRIP works better than linear probing. We note that the lower accuracy of linear probing is characterized by a worse ability to correctly predict the rich classes, i.e., "rich get poorer." This is surprising, as we would have expected the errors to concentrate on the poor classes compared to CLIP.
Figure 7: Per-class accuracy of FPL compared to CLIP’s per-class accuracy on Flowers102, FGVC-Aircraft, MNIST, EuroSAT, DTD. **X-axis** is the ranked class index, while the **y-axis** is the accuracy.
Figure 8: Per-class accuracy of GRIP compared to CLIP’s per-class accuracy on Flowers102, FGVC-Aircraft, MNIST, EuroSAT, and DTD. **X-axis** is the ranked class index, while the **y-axis** is the accuracy.
Figure 9: Per-class accuracy of a logistic classifier using conventional pseudolabels (first row) and CLIP-based pseudolabels (second row). The solid orange line represents the per-class accuracy of a logistic regression trained on 2-shots per class. **X-axis** is the ranked class index, while the **y-axis** is the accuracy. We present results for Flowers102, RESICS45, FGVC-Aircraft, MNIST, EuroSAT, and DTD, in order.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & Flowers102 & RESICS45 & FGVC-Aircraft & MNIST & EuroSAT & DTD & Avg. \(\Delta\) \\ \hline Linear probe (LP) & 41.01 & 58.79 & 61.94 & 50.52 & 51.37 & 10.17 & - \\ GRIP & **46.09** & **70.55** & **69.84** & **57.21** & **67.88** & **15.22** & - \\ \hline Rich CLIP & **67.81** & 75.47 & 85.16 & 65.26 & 65.14 & **45.93** & - \\ Rich LP & 52.87 & 69.01 & 79.55 & 67.53 & 50.34 & 29.12 & - \\ Rich GRIP & 56.05 & **78.81** & **86.40** & **71.73** & **77.84** & 31.95 & - \\ \hline \(\Delta\) LP & \(\downarrow 14.92\) & \(\downarrow 6.47\) & \(\downarrow 5.61\) & \(\uparrow 2.26\) & \(\downarrow 14.79\) & \(\downarrow 16.81\) & \(\downarrow\)**9.39** \\ \(\Delta\) GRIP & \(\downarrow 11.76\) & \(\uparrow 3.33\) & \(\uparrow 1.24\) & \(\uparrow 6.46\) & \(\uparrow 12.70\) & \(\downarrow 13.98\) & \(\downarrow 0.33\) \\ \hline \hline Poor CLIP & 25.63 & 35.60 & 27.98 & 11.10 & 3.18 & 5.35 & - \\ Poor LP & 26.50 & 42.77 & 36.25 & 28.34 & 56.76 & 4.77 & - \\ Poor GRIP & **35.03** & **56.85** & **42.82** & **39.88** & **65.08** & **6.31** & - \\ \hline \(\Delta\) LP & \(\uparrow 0.87\) & \(\uparrow 7.18\) & \(\uparrow 8.27\) & \(\uparrow 17.24\) & \(\uparrow 53.58\) & \(\downarrow 0.58\) & \(\uparrow 14.43\) \\ \(\Delta\) GRIP & \(\uparrow 9.4\) & \(\uparrow 21.26\) & \(\uparrow 14.84\) & \(\uparrow 28.78\) & \(\uparrow 61.9\) & \(\uparrow 0.96\) & \(\uparrow\)**22.86** \\ \hline \hline \end{tabular}
\end{table}
Table 7: For each task we report the overall accuracy of linear probing (LP) and GRIP textual along with the accuracy on _poor_ and _rich_ classes. \(\Delta\) METHOD is the difference between the accuracy of CLIP and METHOD. For an overall evaluation of the difference between linear probing and prompt tuning, we report the average difference of LP and GRIP with respect to CLIP on poor and rich classes.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{2}{l}{**Split 1**} & \multicolumn{4}{c}{Flowers102} & \multicolumn{4}{c}{RESICS45} & \multicolumn{4}{c}{FGVCAircraft} \\ \hline Method & S & U & H & S & U & H & S & U & H \\ \hline CLIP & \(64.26_{0,00}\) & \(62.56_{0,00}\) & \(63.4_{0,00}\) & \(54.85_{0,00}\) & \(54.85_{0,00}\) & \(54.46_{0,00}\) & \(16.27_{0,00}\) & \(19.79_{0,00}\) & \(17.86_{0,00}\) \\ CoOp & **91.52\({}_{36}\)** & \(48.35_{2.96}\) & \(63.22_{2,60}\) & **84.66\({}_{1.01}\)** & \(50.73_{2.8}\) & \(63.37_{2.23}\) & **34.18\({}_{1.56}\)** & \(16.28_{3.69}\) & \(21.70_{3.45}\) \\ GRIP & \(90.31_{0.51}\) & **82.57\({}_{1.26}\)** & **86.26\({}_{0.81}\)** & \(82.68_{0.47}\) & **79.53\({}_{0.72}\)** & **81.07\({}_{0.37}\)** & 22.25\({}_{0.07}\) & **31.51\({}_{0.59}\)** & **26.08\({}_{0.25}\)** \\ \hline \(\Delta\) CLIP & \(\uparrow 26.05\) & \(\uparrow 20.01\) & \(\uparrow 22.86\) & \(\uparrow 27.83\) & \(\uparrow 25.45\) & \(\uparrow 26.61\) & \(\uparrow 5.98\) & \(\uparrow 11.72\) & \(\uparrow 8.22\) \\ \(\Delta\) CoOp & \(\downarrow 1.21\) & \(\uparrow 34.22\) & \(\uparrow 23.04\) & \(\downarrow 1.98\) & \(\uparrow 28.8\) & \(\uparrow 17.7\) & \(\downarrow 11.93\) & \(\uparrow 15.23\) & \(\uparrow 4.38\) \\ \hline \hline & \multicolumn{4}{c}{MNIST} & \multicolumn{4}{c}{EuroSAT} & \multicolumn{4}{c}{DTD} \\ \hline CLIP & \(31.74_{0,00}\) & \(15.43_{0,00}\) & \(20.77_{0,00}\) & \(22.33_{0,00}\) & \(48.3_{0,00}\) & \(30.54_{0,00}\) & \(42.5_{0,00}\) & \(44.40_{0,00}\) & \(43.45_{0,00}\) \\ CoOp & **94.68\({}_{6.64}\)** & \(15.43_{7.75}\) & \(21.15_{12.18}\) & \(82.91_{8.81}\) & \(46.02_{2.23}\) & \(58.64_{5,86}\) & **69.67\({}_{1.17}\)** & \(34.81_{3.44}\) & \(46.32_{2.92}\) \\ GRIP & **95.13\({}_{1.11}\)** & **60.63\({}_{0.44}\)** & **74.06\({}_{0.29}\)** & **91.75\({}_{0.53}\)** & **92.91\({}_{0.91}\)** & **92.33\({}_{0.70}\)** & **68.26\({}_{0.69}\)** & **62.61\({}_{1.87}\)** & **65.30\({}_{1.03}\)** \\ \hline \(\Delta\) CLIP & \(\uparrow 63.39\) & \(\uparrow 45.2\) & \(\uparrow 53.29\) & \(\uparrow 69.42\) & \(\uparrow 44.61\) & \(\uparrow 61.79\) & \(\uparrow 25.76\) & \(\uparrow 18.17\) & \(\uparrow 21.85\) \\ \(\Delta\) CoOp & \(\uparrow 0.45\) & \(\uparrow 45.2\) & \(\uparrow 52.91\) & \(\uparrow 8.84\) & \(\uparrow 46.89\) & \(\uparrow 33.69\) & \(\downarrow 1.41\) & \(\uparrow 27.8\) & \(\uparrow 19.00\) \\ \hline \hline \multicolumn{2}{l}{**Split 2**} & \multicolumn{4}{c}{Flowers102} & \multicolumn{4}{c}{RESICS45} & \multicolumn{4}{c}{FGVCAircraft} \\ \hline Method & S & U & H & S & U & H & S & U & H \\ \hline CLIP & \(65.38_{0.00}\) & \(60.64_{0.00}\) & \(62.92_{0.00}\) & \(59.5_{0.00}\) & \(47.06_{0.00}\) & \(52.55_{0.00}\) & \(17.30_{0.00}\) & \(18.12_{0.00}\) & \(17.70_{0.00}\) \\ CoOp & **91.8\({}_{1.32}\)** & \(47.75_{3.86}\) & \(62.77_{3.31}\) & **86.54\({}_{1.92}\)** & \(48.00_{3.01}\) & \(61.70_{2.17}\) & **33.59\({}_{1.12}\)** & \(19.57_{1.37}\) & **24.63\({}_{0.63}\)** \\ GRIP & \(88.84_{0.75}\) & **70.93\({}_{2.08}\)** & **78.86\({}_{1.26}\)** & \(84.47_{0.41}\) & **84.09\({}_{1.01}\)** & **84.28\({}_{0.73}\)** & \(22.13_{0.24}\)** & \(28.32_{0.33}\) & **24.84\({}_{0.05}\)** \\ \hline \(\Delta\) CLIP & \(\uparrow 23.46\) & \(\uparrow 10.29\) & \(\uparrow 15.94\) & \(\uparrow 27.83\) & \(\uparrow 25.45\) & \(\uparrow 26.61\) & \(\uparrow 4.83\) & \(\uparrow 10.20\) & \(\uparrow 7.14\) \\ \(\Delta\) CoOp & \(\downarrow 2.96\) & \(\uparrow 23.18\) & \(\uparrow 16.09\) & \(\downarrow 2.07\) & \(\uparrow 36.09\) & \(\uparrow 22.58\) & \(\downarrow 11.46\) & \(\uparrow 8.75\) & \(\uparrow 0.21\) \\ \hline \hline & \multicolumn{4}{c}{MNIST} & \multicolumn{4}{c}{EuroSAT} & \multicolumn{4}{c}{DTD} \\ \hline CLIP & \(15.99_{0.00}\) & \(39.18_{0.00}\) & \(22.71_{0.00}\) & \(32.47_{0.00}\) & \(33.10_{0.00}\) & \(32.78_{0.00}\) & \(45.43_{0.00}\) & \(39.72_{0.00}\) & \(42.39_{0.00}\) \\ CoOp & **90.6\({}_{1.02}\)** & \(18.77_{9.12}\) & \(30.29_{1.28}\) & \(86.43_{2.23}\) & \(47.16_{11.17}\) & \(60.53_{8.42}\) & \(\mathbf{70.41}_{9.99}\) & \(32.53_{4.58}\) & \(44.42_{4.63}\) \\ GRIP & **95.71** & **97.50** & **96.59** & \(\mathbf{91.08}_{0.02}\) & \(\mathbf{92.02}_{0.98}\) & \(\mathbf{91.55}_{0.47}\) & \(66.69_{0.53}\) & \(\mathbf{56.19}_{1.18}\) & **60.99\({}_{0.69}\)** \\ \hline \(\Delta\) CLIP & \(\uparrow 85.12\) & \(\uparrow 50.76\) & \(\uparrow 79.32\) & \(\uparrow 58.61\) & \(\uparrow 58.92\) & \(\uparrow 58.77\) & \(\uparrow 21.26\) & \(\uparrow 16.47\) & \(\uparrow 18.6\) \\ \(\Delta\) CoOp & \(\uparrow 6.11\) & \(\uparrow 71.20\) & \(\uparrow 57.19\) & \(\uparrow 4.65\) & \(\uparrow 44.86\) & \(\uparrow 31.02\) & \(\downarrow 3.71\) & \(\uparrow 23.66\) & \(\uparrow 16.57\) \\ \hline \hline & \multicolumn{4}{c}{MNIST} & \multicolumn{4}{c}{EuroSAT} & \multicolumn{4}{c}{DTD} \\ \hline CLIP & \(10.59_{0.00}\) & \(46.74_{0.00}\) & \(17.27_{0.00}\) & \(41.47_{0.00}\) & \(19.60_{0.00}\) & \(26.62_{0.00}\) & \(45.52_{0.00}\) & \(39.58_{0.00}\) & \(42.34_{0.00}\) \\ CoOp & \(89.6_{8.08}\) & \(26.31_{2.88}\) & \(39.4_{16.61}\) & \(79.39_{3.97}\) & \(
\begin{table}
\begin{tabular}{l l l} \hline \hline
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{**Textual prompts**} & & & & & & & & & & \\ \hline \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{Flowers102} & & & & & & & & & \\ \hline Method & SSL & UL & TRZSL & SSL & UL & TRZSL & SSL & UL & TRZSL \\ \hline CLIP (ViT-B/32) & \(63.67\) & \(63.67\) & \(63.40\) & \(54.48\) & \(54.48\) & \(54.46\) & \(43.24\) & \(43.24\) & \(43.45\) \\ GRIP (ViT-B/32) & \(83.60\) & \(69.84\) & \(86.26\) & \(74.11\) & \(70.55\) & \(81.07\) & \(56.07\) & \(46.09\) & **65.30** \\ \hline CLIP (ViT-L/14) & \(73.98\) & \(73.98\) & \(73.05\) & \(62.67\) & \(62.67\) & \(62.13\) & \(52.45\) & \(52.45\) & \(51.61\) \\ GRIP (ViT-L/14) & **94.21** & **82.33** & **96.18** & **81.53** & **76.86** & **86.88** & **60.91** & **54.40** & \(64.92\) \\ \hline \multicolumn{1}{c}{**Visual prompts**} & & & & & & & & & \\ \hline CLIP (ViT-B/32) & \(63.67\) & \(63.67\) & \(63.40\) & \(54.48\) & \(54.48\) & \(54.46\) & \(43.24\) & \(43.24\) & \(43.45\) \\ GRIP (ViT-B/32) & \(67.95\) & \(63.09\) & \(77.18\) & \(71.22\) & \(68.43\) & \(82.19\) & \(54.57\) & \(50.51\) & **62.78** \\ \hline CLIP (ViT-L/14) & \(73.98\) & \(73.98\) & \(73.05\) & \(62.67\) & \(62.67\) & \(62.13\) & \(52.45\) & \(52.45\) & \(51.61\) \\ GRIP (ViT-L/14) & **78.68** & \(73.50\) & **85.85** & **77.53** & **76.00** & **86.63** & **55.72** & **54.27** & \(62.74\) \\ \hline \hline \end{tabular}
\end{table}
Table 10: Performance of CLIP and GRIP with different backbones on Flowers102, RESICS45, and DTD, for all the learning settings SSL, UL, TRZSL. | ## Review
### Summary
This paper investigates the use of CLIP for pseudo-labeling across various tasks, including semi-supervised learning (SSL), transductive zero-shot learning (TZSL), and unsupervised learning (UL). The authors propose three training strategies—FPL, IFPL, and GRIP—differentiated by the static or dynamic nature of the pseudo-labels. The experimental results on six datasets demonstrate significant improvements in performance, highlighting the potential of the proposed methods to enhance image classification capabilities while addressing inherent biases in CLIP's pseudo-labels. The findings emphasize the importance of prompt tuning and effective pseudo-labeling techniques in leveraging CLIP for diverse learning scenarios.
### Strengths
- Well-written paper with a clear structure, making it easy to understand.
- Extensive exploration of a broad design space, including prompt modalities, learning paradigms, and training strategies.
- Demonstrates significant performance improvements across multiple datasets and tasks.
- Empirical study shows the effectiveness of self-training with CLIP-based models, including debiasing effects.
- Experimental results indicate that the proposed strategy holds considerable merit.
### Weaknesses
- Writing lacks clarity in certain details, including typos and inconsistencies in abbreviations.
- Absence of ablation studies to analyze the balance between labeled and unlabeled data.
- The proposed approach may be overly naive, lacking detailed analysis of hyper-parameter choices.
- Limited novelty as the use of CLIP's pseudo-labels is not new, and the proposed training strategies are commonly used.
- Need for further analysis of results to explain unexpected performance outcomes.
### Questions
- Does 'line 217' mean reinitializing the prompts at the beginning of each iteration?
- Is the process of pseudo-labeling done online or offline? If offline, is it time-consuming with large unlabeled data?
- Are abbreviations TZSL and TRZSL referring to the same concept?
- What are the reasons for slight decreases in performance observed in some experimental results?
- What happens if there are no samples for a class during the IFPL process?
- Why do results for larger image encoders show reduced performance?
- Could the authors clarify the differences between IFPL and GRIP?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is technically sound, with clear results but some weaknesses in the approach.
### Presentation
**Score:** 2
**Description:** 2 = fair; while the paper is generally well-structured, it requires improvements in clarity and detail.
### Contribution
**Score:** 3
**Description:** 3 = good; the contributions are meaningful and impactful within the context of using CLIP for pseudo-labeling.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact but requires some clarifications and improvements.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a solid exploration of using CLIP for pseudo-labeling, with significant experimental results. While there are some weaknesses in clarity and the novelty of the approach, the contributions to the field and the positive reception from reviewers justify acceptance. The authors should address the highlighted issues in their revisions.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Towards Accelerated Model Training via
Bayesian Data Selection
Zhijie Deng\({}^{1}\), Peng Cui\({}^{2}\), Jun Zhu\({}^{2}\)
\({}^{1}\)Qing Yuan Research Institute, Shanghai Jiao Tong University
\({}^{2}\)Dept. of Comp. Sci. & Tech., Institute for AI, BNRist Center,
Tsinghua-Bosch Joint ML Center, THBI Lab, Tsinghua University, Beijing, 100084 China
[email protected], [email protected], [email protected]
Equal contribution. \({}^{\dagger}\)Corresponding author.
###### Abstract
Mislabeled, duplicated, or biased data in real-world scenarios can lead to prolonged training and even hinder model convergence. Traditional solutions prioritizing easy or hard samples lack the flexibility to handle such a variety simultaneously. Recent work has proposed a more reasonable data selection principle by examining the data's impact on the model's generalization loss. However, its practical adoption relies on less principled approximations and additional holdout data. This work solves these problems by leveraging a lightweight Bayesian treatment and incorporating off-the-shelf zero-shot predictors built on large-scale pre-trained models. The resulting algorithm is efficient and easy to implement. We perform extensive empirical studies on challenging benchmarks with considerable data noise and imbalance in the online batch selection scenario, and observe superior training efficiency over competitive baselines. Notably, on the challenging Web-Vision benchmark, our method can achieve similar predictive performance with significantly fewer training iterations than leading data selection methods.
## 1 Introduction
The past year has witnessed significant breakthroughs in deep learning research and applications, with Stable Diffusion [38], ChatGPT [32], and SAM [22] as representative examples. Practitioners have realized that the quality of data used to fuel AI systems is critical in unlocking their full potential. Unfortunately, real-world scenarios often present mislabeled, duplicated, or biased data. As a result, it is paramount to develop methods that can prioritize _valuable_ training data to enable more efficient model training and even improved model convergence.
Data selection for accelerating the training of deep models is gaining increasing interest. Some studies, such as curriculum learning, advocate prioritizing _easy_ samples in the early training stages [1], but these samples quickly become redundant once they have been learned, making continued training on them a waste of time. On the other hand, online batch selection methods [26; 19; 17] prioritize _hard_ samples with high training loss or gradient norm to avoid duplicate training. Nevertheless, in practice, the hardness of samples often arises from pathologies such as improper annotations, inherent ambiguity, or unusual patterns, rendering it problematic to prioritize such samples [31].
The reducible hold-out loss selection (RHO-LOSS) approach [31] addresses these issues by quantifying the usefulness of a sample based on its marginal influence on the model's _generalization_ loss, forming a theoretically grounded and universal objective for data selection. However, the estimation of this objective is non-trivial, and RHO-LOSS has to rely on less principled approximations forpractical adoption. Besides, RHO-LOSS hinges on a considerable number of _holdout data_ to train an auxiliary validation model, which can be costly and should be performed repeatedly for new tasks.
This paper aims to bridge this gap to make the generalization loss-based data selection principle more accessible to a broader audience. We establish a more reasonable approximation of the original objective than RHO-LOSS while eliminating the need for holdout data. To achieve this, we derive a lower bound of the objective to separate the posterior predictive defined on the training data from that defined on the holdout data. Afterward, we propose to use _off-the-shelf_ zero-shot predictors, built upon large-scale pre-trained models [35; 41], as a proxy for the latter, since these models often contain generally applicable information for solving specific downstream tasks.
We maintain a Bayesian treatment of the training model to ensure an accurate estimation of the original objective. Bearing in mind that our original goal is to accelerate the training of a _deterministic_ model, we adopt the simple and effective Laplace approximation [29; 37; 8] for Bayesian inference. It effortlessly converts point-estimate parameters to a Gaussian posterior. We further introduce Kronecker-factored (KFAC) [30] and last-layer [23] approximations to accelerate the processing of modern neural networks (NNs), resulting in an efficient and easy-to-implement algorithm.
We conduct comprehensive empirical studies on various benchmarks to evaluate the effectiveness of our method. The experiments on standard image recognition tasks demonstrate that our approach can outperform various baselines in aspects of training speed and final accuracy. This conclusion also holds for learning with label noise and class imbalance. On the challenging _WebVision_[25] dataset, which contains plenty of noisy labels and ambiguous images collected from the internet, our method significantly reduces the number of training steps needed to reach the target accuracy and achieves up to _19%_ higher final accuracy than prior arts (see Table 3). These results highlight the practical value of our approach. In addition, we conduct informative ablation studies to gain a better understanding of the behavior of our method.
## 2 Background
In this section, we briefly revisit the concept of online batch selection [26] and the data selection principle defined with the model's generalization loss in [31].
Consider training a \(\theta\)-parameterized deep model \(f_{\theta}:\mathcal{X}\rightarrow\mathbb{R}^{k}\) on a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{n}\) using stochastic gradient descent (SGD). The model likelihood is formally \(p(y|x,\theta)=p(y|f_{\theta}(x))\). At each training step \(t\), we can access a data batch \(B_{t}\) of size \(n_{B}\) from \(\mathcal{D}\). In online batch selection, we need to compute statistics of the samples in \(B_{t}\) and select only those that meet certain requirements to update the model. This filtering process aims to remove samples that are deemed less valuable.
Let \(\mathcal{D}_{t-1}\) denote the set of data observed before \(t\) and \((x^{\prime},y^{\prime})\) a sample from \(B_{t}\). If we accept selecting \((x^{\prime},y^{\prime})\), the updated predictive distribution, in a Bayesian view, will be \(p(y|x,\mathcal{D}_{t-1},\{(x^{\prime},y^{\prime})\})=\mathbb{E}_{p(\theta| \mathcal{D}_{t-1},\{(x^{\prime},y^{\prime})\})}p(y|x,\theta)\).2 The question arises how to estimate the quality of this distribution to determine which sample \((x^{\prime},y^{\prime})\in B_{t}\) should be selected. A natural tactic is to compare this distribution to the ground truth data-generating distribution \(\dot{p}(x,y)\). That said, the typical KL divergence can be introduced, and our goal becomes solving the following problem:
Footnote 2: We assume selecting one single sample per time for simplicity. Multi-sample selection is viable.
\[\min_{(x^{\prime},y^{\prime})\in B_{t}}\mathbb{E}_{\dot{p}(x)}\big{(}D_{\text {KL}}\big{[}\dot{p}(y|x)\|p(y|x,\mathcal{D}_{t-1},\{(x^{\prime},y^{\prime})\}) \big{]}\big{)}=\mathrm{const.}-\mathbb{E}_{\dot{p}(x,y)}\big{[}\log p(y|x, \mathcal{D}_{t-1},\{(x^{\prime},y^{\prime})\})\big{]}, \tag{1}\]
where \(\mathrm{const.}\) denotes a constant agnostic to the object to optimize. By applying Monte Carlo (MC) estimation using extra holdout samples \(\mathcal{D}^{*}=\{(\tilde{x}_{i},\tilde{y}_{i})\}_{i=1}^{m}\) from \(\dot{p}(x,y)\), we arrive at the following optimization problem:
\[\max_{(x^{\prime},y^{\prime})\in B_{t}}\frac{1}{m}\sum_{i=1}^{m}\big{[}\log p (\tilde{y}_{i}|\tilde{x}_{i},\mathcal{D}_{t-1},\{(x^{\prime},y^{\prime})\}) \big{]}\iff\max_{(x^{\prime},y^{\prime})\in B_{t}}\log p(\mathcal{D}^{*}| \mathcal{D}_{t-1},\{(x^{\prime},y^{\prime})\}). \tag{2}\]
This objective corresponds to the model's _generalization_ loss instead of the fitness of training data.
By Bayes' rule, there is:
\[p(\mathcal{D}^{*}|\mathcal{D}_{t-1},\{(x^{\prime},y^{\prime})\})=\frac{p(y^{ \prime}|x^{\prime},\mathcal{D}^{*},\mathcal{D}_{t-1})p(x^{\prime},\mathcal{D} ^{*},\mathcal{D}_{t-1})}{p(y^{\prime}|x^{\prime},\mathcal{D}_{t-1})p(x^{\prime },\mathcal{D}_{t-1})}=\frac{p(y^{\prime}|x^{\prime},\mathcal{D}^{*}, \mathcal{D}_{t-1})}{p(y^{\prime}|x^{\prime},\mathcal{D}_{t-1})}\cdot p( \mathcal{D}^{*}|\mathcal{D}_{t-1},x^{\prime}), \tag{3}\]where the item \(p(\mathcal{D}^{*}|\mathcal{D}_{t-1},x^{\prime})\) actually equals to the constant \(p(\mathcal{D}^{*}|\mathcal{D}_{t-1})\) because \(x^{\prime}\) alone cannot incur model update. Plugging this back into Equation (2), we arrive at the final objective for data selection:
\[\max_{(x,y)\in B_{t}}\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1})-\log p(y|x, \mathcal{D}_{t-1}), \tag{4}\]
where we omit constants and abuse \((x,y)\) to represent \((x^{\prime},y^{\prime})\) hereinafter if there is no misleading.
Although the above selection principle is theoretically sound and universally applicable, estimating it accurately is challenging. In particular, it is difficult to estimate the posterior predictive distribution defined with the combination of training and holdout data in a computationally efficient manner. To address this issue, RHO-LOSS proposes approximating \(\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1})\) with \(\log p(y|x,\mathcal{D}^{*})\) and approximating the posterior predictive with the point-estimate model' predictions [31]. However, these approximations compromise the method's theoretical groundness, and access to holdout data can often be infeasible in practice. Our approach overcomes these limitations by utilizing a lightweight Bayesian treatment and incorporating off-the-shelf zero-shot predictors built on large-scale pre-trained models, as detailed below.
## 3 Methodology
In this section, we demonstrate that the data selection principle discussed earlier can be lower-bounded by more easily computable quantities. We then leverage these insights to develop a Bayesian data selection approach to accelerating the training of deterministic deep models.
### The Lower Bound
As previously discussed, we need to estimate two log-probabilities, \(\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1})\) and \(\log p(y|x,\mathcal{D}_{t-1})\), for each sample in \(B_{t}\) to identify the most useful one and update the model accordingly. However, as mentioned, evaluating the former is challenging. To address this issue, we derive a lower bound that allows for a more feasible estimation.
We first unfold \(\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1})\) as the combination of a posterior \(p(\theta|\mathcal{D}^{*},\mathcal{D}_{t-1})\) and the model likelihood \(p(y|x,\theta)\), detailed below (we defer the derivation to Appendix):
\[\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1})=\log\int p(\mathcal{D}^{*}| \theta)p(\theta|\mathcal{D}_{t-1})p(y|x,\theta)d\theta-\log p(\mathcal{D}^{*} |\mathcal{D}_{t-1}). \tag{5}\]
Then, by Jensen's inequality, there is (derivation is deferred to Appendix)
\[\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1})\geq\mathbb{E}_{p(\theta| \mathcal{D}_{t-1})}\log p(y|x,\theta)+\mathbb{E}_{p(\theta|\mathcal{D}_{t-1}) }\log p(\mathcal{D}^{*}|\theta)-\log p(\mathcal{D}^{*}|\mathcal{D}_{t-1}). \tag{6}\]
Notably, the last two terms are independent of \((x,y)\) and can be excluded from the optimization.
We can similarly derive another lower bound:
\[\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1}) =\log\int p(\mathcal{D}_{t-1}|\theta)p(\theta|\mathcal{D}^{*})p( y|x,\theta)d\theta-\log p(\mathcal{D}_{t-1}|\mathcal{D}^{*}) \tag{7}\] \[\geq\mathbb{E}_{p(\theta|\mathcal{D}^{*})}\log p(y|x,\theta)+ \mathbb{E}_{p(\theta|\mathcal{D}^{*})}\log p(\mathcal{D}_{t-1}|\theta)-\log p (\mathcal{D}_{t-1}|\mathcal{D}^{*}),\]
where the last two terms are also agnostic to \((x,y)\) and hence can be omitted.
Combining Equation (6) and Equation (7), there is
\[\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1})\geq\alpha\mathbb{E}_{p(\theta| \mathcal{D}_{t-1})}\log p(y|x,\theta)+(1-\alpha)\mathbb{E}_{p(\theta|\mathcal{ D}^{*})}\log p(y|x,\theta)+\mathrm{const.}, \tag{8}\]
where \(\alpha\in[0,1]\) represents a trade-off coefficient. This way, we disentangle the posterior associated with the training data \(\mathcal{D}_{t-1}\) from that associated with the holdout data \(\mathcal{D}^{*}\).
Rewriting \(p(y|x,\mathcal{D}_{t-1})\) as \(\mathbb{E}_{p(\theta|\mathcal{D}_{t-1})}p(y|x,\theta)\), we can subsequently convert Equation (4) to the following maximization problem:
\[\max_{(x,y)\in B_{t}}\alpha\mathbb{E}_{p(\theta|\mathcal{D}_{t-1})}\log p(y|x, \theta)+(1-\alpha)\mathbb{E}_{p(\theta|\mathcal{D}^{*})}\log p(y|x,\theta)- \log\mathbb{E}_{p(\theta|\mathcal{D}_{t-1})}p(y|x,\theta). \tag{9}\]
The presented objective is intriguing for several reasons. Firstly, the first term and \(\alpha\) times the third term perform expectation and logarithm operations in reverse order, resulting in a quantity similar to the uncertainty estimates in Bayesian deep learning [39] (nonetheless, our objective involves data labels). Additionally, the third term helps to prevent the selection of redundant samples. And, the second term prioritizes points with high semantic alignment with their annotations. These three forces are integrated adaptively to accelerate model training across various stages.
### Zero-shot Predictor as the Validation Model
Collecting extra holdout data to train an auxiliary validation model can be costly and should be performed repeatedly for new tasks. To address this issue, we propose to use zero-shot predictors built upon large-scale pre-trained models [35; 41] as a proxy for the validation model, based on the observation that they usually exhibit promising transfer performance across a broad range of downstream applications.
Formally, we make the following approximation:
\[\mathbb{E}_{p(\theta|\mathcal{D}^{*})}\log p(y|x,\theta)\approx\log p(y|\tilde{ f}(x)), \tag{10}\]
where \(\tilde{f}:\mathcal{X}\rightarrow\mathbb{R}^{k}\) denotes the zero-shot predictor used.
We can think of the pre-trained model as a universal validation model trained on an extensive dataset, leading to the Bayesian posterior collapsing to a point estimate. Although its training data may not precisely follow the data-generating distribution for the current task, they share fundamental patterns with the data in our problem, making the above approximation reasonable. Notably, as shown in Section 4, our trained model eventually performs much better than the zero-shot predictor.
### Lightweight Bayesian Treatment of the Training Model
The first and third terms in Equation (9) raise the requirement of maintaining a Bayesian posterior \(p(\theta|\mathcal{D}_{t-1})\) during training. Due to the high nonlinearity of deep NNs, the analytical posterior is intractable. By convention, we can introduce an approximate posterior \(q(\theta|\mathcal{D}_{t-1})\) and tune it by approximate Bayesian inference methods like MCMC [42; 5; 45], variational inference [2; 28; 44; 21], Laplace approximation [29; 37], etc. Recalling that, despite relying on a Bayesian treatment, our final goal is to accelerate and improve the training of a _deterministic_ model, Laplace approximation is well suited to our setting--it can convert point-estimate parameters to a Gaussian posterior effortlessly, thus lifting the unnecessary burden of learning and maintaining a posterior explicitly. We clarify that, technically, other Bayesian inference methods are also compatible with our approach.
Although Laplace approximation is typically used for maximum a posteriori estimation, it can be effectively adapted to the online learning cases [36]. Specifically, consider an isotropic Gaussian prior with precision \(\tau_{0}\) over parameters \(\theta\). Let \(\mathcal{D}_{t-1}:=b_{1}\cup b_{2}\cup\cdots\cup b_{t-1}\)3 denote all selected training samples before time \(t\). The parameters of our deterministic model at time \(t-1\), dubbed as \(\theta_{t-1}\), are likely to approach a mode of the true posterior \(p(\theta|\mathcal{D}_{t-1})\) in the presence of a proper weight decay regularizer in stochastic optimization [10]. The online Laplace approximation then deploys the following approximate posterior:
Footnote 3: \(b_{i}\subset B_{i}\) represents the set of selected samples at time step \(i\).
\[q(\theta|\mathcal{D}_{t-1})=\mathcal{N}(\theta_{t-1},H_{t-1}^{-1}),\ H_{t-1}= \tau_{0}I+\sum_{i=1}^{t-1}\Big{(}\sum_{x,y\in b_{i}}\frac{\partial^{2}[-\log p (y|x,\theta)]}{\partial\theta^{2}}\Big{|}_{\theta=\theta_{i}}\Big{)}. \tag{11}\]
The Gaussian covariance should be positive semi-definite, but as \(\theta_{t-1}\) cannot be guaranteed to be exactly the posterior mode, we would better replace the Hessian matrix with the generalized Gauss-Newton (GGN) matrix to avoid ill-posed approximation, resulting in:
\[q(\theta|\mathcal{D}_{t-1})=\mathcal{N}(\theta_{t-1},G_{t-1}^{-1}),\ G_{t-1}= \tau_{0}I+\sum_{i=1}^{t-1}\Big{(}\sum_{x,y\in b_{i}}J_{\theta_{i}}(x)^{\top} \Lambda_{\theta_{i}}(x,y)J_{\theta_{i}}(x)\Big{)}, \tag{12}\]
where \(J_{\theta_{i}}(x):=\nabla_{\theta}f_{\theta}(x)|_{\theta=\theta_{i}}\) and \(\Lambda_{\theta_{i}}(x,y):=\nabla_{\theta}^{2}[-\log p(y|f)]|_{f=f_{\theta_{i}} (x)}\).
**Practical acceleration.** Modern neural networks usually contain millions of parameters, making the matrix \(G_{t-1}\) too large to fit into CPU/GPU memories. To address this, a common practice is to sparsify the matrix using diagonal or Kronecker-factored (KFAC) approximations [30]. This work prefers KFAC as it preserves the correlations between parameters within the same layers. Besides, a recent surprising finding is that we can even apply Laplace approximation to only the last layer of a deep NN, leaving the other parameters point-estimate, to conjoin efficiency and calibrated uncertainty estimates [23, 8]. As a result, we consider combining last-layer and KFAC approximations to reduce the burden caused by the Bayesian treatment, with the details presented below.
Let us decompose \(f_{\theta_{i}}\) as a feature extractor \(h_{\theta_{i}}\) and a linear layer with weights \(\theta_{i}^{(l)}\in\mathbb{R}^{d\times k}\), i.e., \(f_{\theta_{i}}(x):=h_{\theta_{i}}(x)^{\top}\theta_{i}^{(l)}\). We adapt the GGN matrix associated with the last-layer weights \(\theta_{i}^{(l)}\) derived in [37, 23] to the following formula:
\[G_{t-1}^{(l)}\approx V_{t-1}\otimes U_{t-1}:=(\sqrt{|\mathcal{D}_{t-1}|}A_{t-1 }+\sqrt{\tau_{0}}I)\otimes(\sqrt{|\mathcal{D}_{t-1}|}G_{t-1}+\sqrt{\tau_{0}}I), \tag{13}\]
where \(\otimes\) denotes the Kronecker product and
\[\begin{split} A_{t-1}:=\frac{1}{|\mathcal{D}_{t-1}|}\sum_{i=1}^ {t-1}\sum_{x,y\in b_{i}}h_{\theta_{i}}(x)h_{\theta_{i}}(x)^{\top},\\ G_{t-1}:=\frac{1}{|\mathcal{D}_{t-1}|}\sum_{i=1}^{t-1}\sum_{x,y \in b_{i}}[\nabla_{f}\log p(y|f)|_{f=f_{\theta_{i}}(x)}][\nabla_{f}\log p(y|f) |_{f=f_{\theta_{i}}(x)}]^{\top}.\end{split} \tag{14}\]
Of note that the matrices \(A_{t-1}\in\mathbb{R}^{d\times d}\) and \(G_{t-1}\in\mathbb{R}^{k\times k}\) raise only minimal extra storage cost. The approximate posterior can be formulated as a matrix-variate Gaussian [12]: \(q(\theta^{(l)}|\mathcal{D}_{t-1})=\mathcal{MN}(\theta^{(l)}|\theta_{t-1}^{(l)},U_{t-1}^{-1},V_{t-1}^{-1})\). Given the linear nature, it is straightforward to get the distribution over the model output \(f_{x}\) for input \(x\) (derivation is deferred to Appendix):
\[q(f_{x}|\mathcal{D}_{t-1})=\mathcal{N}\Big{(}f_{\theta_{t-1}}(x),\big{(}h_{ \theta_{t-1}}(x)^{\top}V_{t-1}^{-1}h_{\theta_{t-1}}(x)\big{)}U_{t-1}^{-1}\Big{)}. \tag{15}\]
**The final data selection objective.** With the above equation, the selection objective boils down to
\[\max_{(x,y)\in B_{t}}\alpha\Big{[}\frac{1}{S}\sum_{s=1}^{S}\log p(y|f_{x}^{(s )})\Big{]}+(1-\alpha)\log p(y|\tilde{f}(x))-\log\Big{[}\frac{1}{S}\sum_{s=1}^{ S}p(y|f_{x}^{(s)})\Big{]}, \tag{16}\]
where \(f_{x}^{(s)}\sim q(f_{x}|\mathcal{D}_{t-1}),s=1,\ldots,S\) are MC samples. Compared to the non-last-layer Laplace approximation, which involves sampling over the parameter space and performing MC integration, the last-layer one enjoys a much faster evaluation procedure. It also enables the use of a large \(S\). Our method can also abandon the KFAC approximation when the linear head is small.
**The algorithm.** We present the training procedure in Algorithm 1. To avoid too large \(|\mathcal{D}_{t-1}|\) and hence too sharpened approximate posterior, we use a tunable hyper-parameter \(n_{e}\) to replace \(|\mathcal{D}_{t-1}|\) in the computation of \(V_{t-1}\) and \(U_{t-1}\). For implemental simplicity, we use the exponential moving average technique to update \(A_{t}\) and \(G_{t}\). For models equipped with batch normalization layers [16], we perform an extra forward propagation for the selected data \(b_{t}\) to obtain proper batch statistics for model update, following [31]. Our method takes a similar amount of time per iteration as [31]. The primary difference is that we use a zero-shot predictor based on pre-trained models to compute the second term in Equation (16), whereas [31] uses a validation model. Computing the Gaussian covariance in Equation (15), MC sampling, and updating \(A_{t}\) and \(G_{t}\) consume neglectable resources.
## 4 Experiment
We compare the proposed method to the prior state-of-the-art (SOTA) data selection methods on various image classification benchmarks, including standard datasets (e.g., CIFAR-10 and CIFAR-100 [24]), their variants with controllable degrees of label noise and class-imbalance, and a real-world, large-scale, noisy, and imbalanced dataset, WebVision [25]. We also diagnose our selection strategy through elaborate ablation studies.
**Datasets.** We first evaluate the proposed method on clean CIFAR-10/100 and noisy CIFAR-10/100 with 10% symmetric label noise. Furthermore, we experiment in the context of imbalanced datasets, specifically CIFAR-10/100 datasets with varying degrees of class-imbalance ratio. Last, we investigate a web-scraped and large-scale dataset - WebVision, which consists of over \(2.5\) million images in \(1000\) categories and suffers from severe label noise and class-imbalance issues. For a fair comparison with RHO-LOSS [31], only half of the training set is used for model training.
**Baselines.** We compare the proposed method to a variety of baselines that define selection principles with various metrics, including the (training) loss [20], gradient norm [19], and gradient norm with importance sampling (gradient norm IS) [19]. We also compare to uniform sampling, Selection-via-Proxy (SVP) [6] and the latest SOTA method, RHO-LOSS [31].
**Implementation details.** In experiments under label noise, we introduce 10% symmetric noise caused by randomly flipping the ground-truth label to other labels. For the class-imbalance setting, we consider the long-tailed imbalance with two levels of imbalance intensity (i.e., 10 and 100), where an exponential decay on sample size is introduced across classes [3]. We use the same optimizer (AdamW [27]) and hyperparameters (e.g., learning rate 0.001, weight decay of 0.01, \(n_{b}=32\) and \(\frac{n_{b}}{n_{B}}=0.1\)) as RHO-LOSS. Unless specified otherwise, we use ResNet18 (RN18) [14] as the deterministic model and specify the zero-shot predictor with CLIP-RN50. We select the trade-off coefficient \(\alpha\) from \(\{0.1,0.2,0.3,0.4\}\) and the number of effective data \(n_{e}\) from \(\{100,200,500,1000\}\). In most cases, we set the prior precision \(\tau_{0}\) to \(1\). We report average results over 3 random runs.
**Evaluation.** Following [31], we evaluate speedup by comparing the number of epochs required to reach a given target test accuracy, which implicitly reflects the quality of the selection strategy.
### Speedup on Clean Data and Noisy Data
We first empirically evaluate the proposed method on CIFAR-10 and CIFAR-100 with/without label noise. Table 1 reports the results. We can observe that the proposed method achieves considerably improved training speed compared to competing baselines in both scenarios with and without label noise. This evidences the efficacy of our Bayesian selection method in filtering out redundant and noisy data points in the training set. Despite leveraging extra holdout data, RHO-LOSS still
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline MethodDataset & CIFAR-10 & CIFAR-10* & CIFAR-100 & CIFAR-100* \\ \hline CLIP Acc & \multicolumn{2}{c}{75.6\%} & \multicolumn{2}{c}{75.6\%} & \multicolumn{2}{c}{41.6\%} & \multicolumn{2}{c}{41.6\%} \\ Target Acc & 80.0\% & 87.5\% & 75.0\% & 85.0\% & 40.0\% & 52.5\% & 40.0\% & 47.5\% \\ \hline Train Loss & 81 & 129 (90\%) & - & - & (28\%) & 138 & - & (42\%) & - & - (4\%) \\ Grad Norm & - & - & - (61\%) & - & - & (23\%) & 139 & - & (42\%) & - & - (4\%) \\ Grad Norm IS & 57 & 139 (89\%) & 57 & - & (84\%) & 71 & 132 & (55\%) & 94 & 142 (48\%) \\ SVP & - & - & - (55\%) & - & - & (48\%) & - & - & (18\%) & - & - (14\%) \\ Irred Loss & - & - & - (60\%) & - & - & (62\%) & 93 & - & (43\%) & 89 & - (43\%) \\ Uniform & 79 & - & - (87\%) & 62 & - & (85\%) & 65 & 133 (54\%) & 79 & 116 (50\%) \\ RHO-LOSS & 39 & 65 (91\%) & 27 & 49 (91\%) & 48 & 77 (61\%) & 49 & 65 (60\%) \\ Proposed & **33** & **61** (**91**\%) & **25** & **47** (**91**\%) & **32** & **53** (**63**\%) & **39** & **53** (**61**\%) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Epochs needed to reach a target test accuracy on clean and noisy data (final accuracy is reported in parentheses). CIFAR-10*/100* denotes adding 10% symmetric label noise to the dataset. Best performance is highlighted in **bold**. “-” indicates that the target accuracy was not reached. For all methods, only half of the original training set is used for training. The target accuracies are set following RHO-LOSS [31].
underperforms our approach, potentially due to the less reliable approximations. Surprisingly, the superiority of our method is even more significant on the more challenging CIFAR-100. We also point out that our method eventually reaches higher final accuracy than the zero-shot CLIP. In other words, our method does not hinge on a performant zero-shot predictor.
### Speedup on Class-imbalance Data
We further evaluate the proposed method on class-imbalanced data, a typical pattern in the real world while probably raising non-trivial difficulties for model training [7; 3]. Specifically, we experiment on the aforementioned imbalanced CIFAR-10 and CIFAR-100 using similar protocols to previous studies. The test datasets remain unchanged. Given that RHO-LOSS is superior to other baselines, we primarily compare with it in the following studies. We also add uniform sampling into comparison due to its implementation simplicity. Table 2 presents the results.
As shown, the proposed method still consistently outperforms uniform sampling and RHO-LOSS. In particular, the final accuracy achieved by our method is higher than RHO-LOSS. We also note that the gain in training efficiency becomes more significant as the imbalance ratio increases. The failure of RHO-LOSS is partially attributed to the less principled approximations. Another reason is that the holdout data are still class-imbalanced, rendering the trained validation models biased.
### Speedup on Large Web-scraped Data
Learning algorithms face significant challenges when dealing with noisy and imbalanced real-world datasets. To evaluate our proposed method's efficacy in addressing these challenges, we assess it on a large-scale web-scraped dataset, WebVision. We defer the details of WebVision to the Appendix. Since the dataset is large, comprising 2.4 million data points, we train classification models only on the first 100/200 classes (known as WebVision-100/200) of the entire dataset, following [18; 4], for an efficient evaluation. We test the trained models on the human-annotated WebVision validation set
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset & Imbalance Ratio & Target Acc & Uniform & RHO-LOSS & Proposed \\ \hline \multirow{4}{*}{CIFAR-10} & \multirow{3}{*}{10} & 60\% & 68 & 36 & **34** \\ & & 70\% & 98 (75\%) & 56 (80\%) & **52** (**83\%**) \\ \cline{3-6} & & 50\% & 81 & 50 & **44** \\ & & 60\% & 119 (56\%) & 87 (62\%) & **79** (**68\%**) \\ \hline \multirow{4}{*}{CIFAR-100} & \multirow{3}{*}{10} & 20\% & 69 & 34 & **32** \\ & & 30\% & 110 (33\%) & 62 (39\%) & **58** (**45\%**) \\ \cline{3-6} & & 15\% & 70 & 43 & **41** \\ \cline{3-6} & & 20\% & - (19\%) & 71 (24\%) & **66** (**28\%**) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Epochs required to reach a target test accuracy on imbalanced CIFAR-10 and CIFAR-100 (final accuracy is reported in parentheses).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset & Validation set & Target Acc & Uniform & RHO-LOSS & Proposed \\ \hline \multirow{4}{*}{WebVision-100} & WebVision & 30\% & - & 29 & **25** \\ & & 40\% & - (27\%) & 47 (42\%) & **40** (**60\%**) \\ \cline{3-6} & ILSVRC12 & 30\% & - & 40 & **33** \\ & & 40\% & - (23\%) & - (37\%) & **50** (**55\%**) \\ \hline \multirow{4}{*}{WebVision-200} & WebVision & 30\% & - & 28 & **22** \\ & & 40\% & - (26\%) & 48 (42\%) & **36** (**61\%**) \\ \cline{1-1} \cline{3-6} & ILSVRC12 & 30\% & - & 35 & **29** \\ \cline{1-1} \cline{3-6} & & 40\% & - (19\%) & - (39\%) & **46** (**56\%**) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Epochs required to reach a target test accuracy on WebVision dataset (final accuracy is reported in parentheses).
and the ILSVRC12 validation set [9]. We report the results in Table 3. We select 30% and 40% as target accuracies according to the final accuracies of RHO-LOSS (\(\sim\)40%) and our approach (\(\sim\)60%).
Still, the proposed method outperforms uniform sampling and RHO-LOSS in aspects of both the speed to reach the target accuracy and the final accuracy on both WebVision-100 and WebVision-200. Notably, our method achieves up to _19%_ higher final accuracy than RHO-LOSS, suggesting the superiority of our method in coping with real-world datasets. We also report results for our method on the entire training set in the Appendix, which shows more significant speedups and higher final accuracy due to more training data.
### Analysis of the Properties of the Selected Data
In this section, we analyze the characteristics of the data selected by our method and compare it with existing works. Figure 0(a) displays the proportion of selected data points with label noise, and it is evident that data selected by our method has the lowest label noise. We also investigate whether our method prioritizes redundant data, defined as those having already been correctly classified by the training model [31]. As depicted in Figure 0(b), both our method and RHO-LOSS select significantly fewer redundant data points than uniform sampling, and our method is slightly better.
### Ablation Studies
In this section, we conduct ablation studies on model architecture, zero-shot predictor, and some critical hyper-parameters (e.g., \(\alpha\), \(n_{e}\)).
Figure 1: Properties of the data selected by our method and baselines on CIFAR-10 and CIFAR-100 with \(10\%\) label noise. Redundant points represent the data that have already been classified correctly. The reported values are averaged over 150 epochs of model training and five random runs.
**Model architecture.** We test our method using the powerful Vision Transformer (ViT) [11] architecture on WebVision-200. Specifically, we experiment with a ViT-B/16 model pre-trained unsupervisedly by Masked Auto-encoder [13], considering that fine-tuning a pre-trained model is preferred over training a model from scratch in practice. We provide the training curve of our method in Figure 2, where uniform sampling is included for comparison. The huge performance gap between our method and the baseline validates the efficacy of our method in this setting.
**Zero-shot predictor.** We next assess the impact of the zero-shot predictor on our method. We specify the zero-shot predictor with various CLIP variants (e.g., RN50 and ViT-B/16) and draw the training curves in Figure 3. As shown, although the zero-shot accuracy of the RN50 variant is significantly lower than that of ViT-B/16, the speedup effect of the two backbones is similar. This finding suggests the robustness of our method against the selection of the zero-shot predictor.
In the above experiments, we use CLIP-based zero-shot predictors without further tuning. Here, we perform linear probing using the CLIP models to demonstrate if we can achieve good convergence trivially. We simply adopt the uniform sampling strategy and report the results in Table 4. We do not report convergence speed because the baseline uses pre-trained weights while our method trains models from scratch. As shown, our method still bypasses the baseline with clear margins.
We replace the zero-shot predictor used in our method with the validation model in RHO-LOSS [31] and present the corresponding results in Table 5. The comparison proves the superiority of our method over RHO-LOSS. These results also reflect that our method is robust against the specification of the validation model.
**Hyper-parameters.** We then analyze the effects of three crucial hyper-parameters, \(\alpha\), \(n_{e}\), and the temperature \(\tau\) used by the softmax operation in the CLIP zero-shot predictor. Figure 3(a) shows training curves associated with various \(\alpha\). We see that too large \(\alpha\) (i.e., lightening the second term in Equation (9)) can lead to a significant drop in training speed and final performance, which emphasizes the importance of the zero-shot predictor. Nevertheless, using only the zero-shot predictor (i.e., \(\alpha=0\)) is also suboptimal, leading to degraded final accuracy. Hence, we conclude that both terms in the lower bound of the original data selection principle are beneficial, where the first term accounts more for the later training stages while the second term accounts more for the earlier ones. Figure 3(b) shows the ablations study on \(n_{e}\), and we witness the robustness of our method against \(n_{e}\). Figure 3(c)
\begin{table}
\begin{tabular}{l c c} \hline \hline Dataset & CLIP linear probing & Proposed \\ \hline CIFAR-10 & 84.5 & 91.4 \\ CIFAR-10* & 84.1 & 91.3 \\ CIFAR-100 & 58.5 & 63.3 \\ CIFAR-100* & 57.8 & 61.4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of final accuracy (%) between the proposed method and the linear probing with CLIP and uniform sampling strategy.
Figure 4: Ablation studies on some critical hyper-parameters including \(\alpha\), \(n_{e}\), and \(\tau\) on CIFAR-100.
shows training curves corresponding to different \(\tau\). We observe that an appropriate temperature boosts our method considerably.
**Wall-clock time.** We empirically observe that the per-epoch training time for Uniform, RHO-LOSS, and our method on CIFAR-100 is 14s, 18s, and 21s, respectively. Namely, our approach has slightly increased per-epoch time over RHO-LOSS. It arises from that we use a CLIP-RN50 zero-shot predictor to compute the second term in Equation (16), whereas [31] uses a RN18 validation model. In fact, this gap can be reduced by a simple implementation trick--pre-computing the CLIP-RN50 predictions for each sample in the training set before training.
As shown in Table 1, compared to RHO-LOSS, we can reduce the number of epochs required to reach 40% test accuracy from 48 to 32 and 52.5% from 77 to 53. According to the per-epoch training time, we achieve a \(48*18/32/21\approx 1.29\) or \(77*18/53/21\approx 1.25\) times practical acceleration.
## 5 Related Works
Extensive methods have been proposed to accelerate model training through techniques such as data pruning, coreset selection, curriculum learning, online batch selection, etc. Data pruning explores various metrics, such as EL2N score [33], forgetting score [40], and classification margin [34], to measure individual differences among data points and retains only the hardest examples for model training. However, data pruning still exhibits limitations when dealing with noisy labels, and some of these metrics are computationally expensive. Coreset selection methods also partially address the problem of accelerating model training. In particular, [43] contributes a data scoring mechanism that is robust to the change of scenarios for coreset selection, and [46] makes an in-depth understanding of the catastrophic accuracy drop issue of one-shot coreset selection and contributes a novel solution to it. However, these methods lack the flexibility to prioritize samples with different properties at various training stages. Curriculum learning, as advocated by [1], prioritizes easy points with low label noise before uniformly training on all data points. While this strategy enhances convergence, it fails to address the issue of skipping redundant points already learned.
Online batch selection methods [26; 19; 17] tackle the training acceleration problem by selecting hard data identified by high loss or gradient norm. However, they also suffer from a common drawback--high loss can be caused by label noise or ambiguous labels, so prioritizing such data can result in a decline in predictive performance. Compared to the prior art, our method establishes a Bayesian data selection metric and exploits zero-shot predictors to prioritize valuable training data for addressing these issues.
## 6 Conclusion
This paper addresses the challenges posed by noisy and biased data in real-world scenarios. Our main contribution is to make the generalization loss-based data selection principle more accessible to accelerate the training of deep models. To achieve this, we first derive a lower bound of this objective to improve its tractability. We then propose leveraging a Bayesian treatment and off-the-shelf zero-shot predictors to estimate the data selection objective reliably. The resulting algorithm does not require extra holdout data. Our extensive empirical studies demonstrate the superiority of our method in accelerating model training over competitive baselines.
**Limitation.** Our method may fail when the zero-shot predictor performs poorly on specific tasks. Future work could explore adapting the zero-shot predictor to a few-shot one using a few clean validation data to address this limitation.
## Acknowledgments
Z.J. Deng was supported by Natural Science Foundation of Shanghai (No. 23ZR1428700) and the Key Research and Development Program of Shandong Province, China (No. 2023CXGC010112). J. Zhu and P. Cui were supported by NSF of China Projects (Nos. 62061136001, 61620106010, 62076145, U19B2034, U1811461, U19A2081, 6197222); a grant from Tsinghua Institute for Guo Qiang; and the High Performance Computing Center, Tsinghua University. J.Z was also supported by the XPlorer Prize.
## References
* [1] Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In _Proceedings of the 26th annual international conference on machine learning_, pages 41-48, 2009.
* [2] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In _International Conference on Machine Learning_, pages 1613-1622, 2015.
* [3] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. _Advances in neural information processing systems_, 32, 2019.
* [4] Pengfei Chen, Ben Ben Liao, Guangyong Chen, and Shengyu Zhang. Understanding and utilizing deep neural networks trained with noisy labels. In _International Conference on Machine Learning_, pages 1062-1070. PMLR, 2019.
* [5] Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In _International Conference on Machine Learning_, pages 1683-1691. PMLR, 2014.
* [6] Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, Jure Leskovec, and Matei Zaharia. Selection via proxy: Efficient data selection for deep learning. _International Conference on Learning Representations_, 2020.
* [7] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 9268-9277, 2019.
* [8] Erik Daxberger, Agustius Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux-effortless bayesian deep learning. _Advances in Neural Information Processing Systems_, 34:20089-20103, 2021.
* [9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pages 248-255. Ieee, 2009.
* [10] Zhijie Deng and Jun Zhu. Bayesadapter: Being bayesian, inexpensively and reliably, via bayesian fine-tuning. In _Asian Conference on Machine Learning_, pages 280-295. PMLR, 2023.
* [11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2020.
* [12] Arjun K Gupta and Daya K Nagar. _Matrix variate distributions_. Chapman and Hall/CRC, 2018.
* [13] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 16000-16009, 2022.
* [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [15] Jose Miguel Hernandez-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In _International Conference on Machine Learning_, pages 1861-1869, 2015.
* [16] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International conference on machine learning_, pages 448-456. pmlr, 2015.
* [17] Angela H Jiang, Daniel L-K Wong, Giulio Zhou, David G Andersen, Jeffrey Dean, Gregory R Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C Lipton, et al. Accelerating deep learning by focusing on the biggest losers. _arXiv preprint arXiv:1910.00762_, 2019.
* [18] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In _International Conference on Machine Learning_, pages 2304-2313. PMLR, 2018.
* [19] Angelos Katharopoulos and Francois Fleuret. Not all samples are created equal: Deep learning with importance sampling. In _International conference on machine learning_, pages 2525-2534. PMLR, 2018.
* [20] Kenji Kawaguchi and Haihao Lu. Ordered sgd: A new stochastic optimization framework for empirical risk minimization. In _International Conference on Artificial Intelligence and Statistics_, pages 669-679. PMLR, 2020.
* [21] Mohammad Emtiyaz Khan, Didrik Nielsen, Voot Tangkaratt, Wu Lin, Yarin Gal, and Akash Srivastava. Fast and scalable Bayesian deep learning by weight-perturbation in adam. In _International Conference on Machine Learning_, pages 2616-2625, 2018.
* [22] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. _arXiv preprint arXiv:2304.02643_, 2023.
* [23] Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. Being bayesian, even just a bit, fixes overconfidence in relu networks. In _International conference on machine learning_, pages 5436-5446. PMLR, 2020.
* [24] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [25] Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, and Luc Van Gool. Webvision database: Visual learning and understanding from web data. _arXiv preprint arXiv:1708.02862_, 2017.
* [26] Ilya Loshchilov and Frank Hutter. Online batch selection for faster training of neural networks. _arXiv preprint arXiv:1511.06343_, 2015.
* [27] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_, 2017.
* [28] Christos Louizos and Max Welling. Structured and efficient variational deep learning with matrix gaussian posteriors. In _International Conference on Machine Learning_, pages 1708-1716, 2016.
* [29] David John Cameron Mackay. _Bayesian methods for adaptive models_. California Institute of Technology, 1992.
* [30] James Martens and Roger Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In _International conference on machine learning_, pages 2408-2417. PMLR, 2015.
* [31] Soren Mindermann, Jan M Brauner, Muhammed T Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu, Benedikt Holtgen, Aidan N Gomez, Adrien Morisot, Sebastian Farquhar, et al. Prioritized training on points that are learnable, worth learning, and not yet learnt. In _International Conference on Machine Learning_, pages 15630-15649. PMLR, 2022.
* [32] OpenAI. Introducing chatgt, 2022. [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt), Last accessed on 2023-05-09.
* [33] Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. _Advances in Neural Information Processing Systems_, 34:20596-20607, 2021.
* [34] Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. Identifying mislabeled data using the area under the margin ranking. _Advances in Neural Information Processing Systems_, 33:17044-17056, 2020.
* [35] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_, pages 8748-8763. PMLR, 2021.
* [36] Hippolyt Ritter, Aleksandar Botev, and David Barber. Online structured laplace approximations for overcoming catastrophic forgetting. _Advances in Neural Information Processing Systems_, 31, 2018.
* [37] Hippolyt Ritter, Aleksandar Botev, and David Barber. A scalable laplace approximation for neural networks. In _6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings_, volume 6. International Conference on Representation Learning, 2018.
* [38] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022.
* [39] Lewis Smith and Yarin Gal. Understanding measures of uncertainty for adversarial example detection. _arXiv preprint arXiv:1803.08533_, 2018.
* [40] Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. In _International Conference on Learning Representations_.
* [41] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In _International Conference on Learning Representations_, 2022.
* [42] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In _International Conference on Machine Learning_, pages 681-688, 2011.
* [43] Xiaobo Xia, Jiale Liu, Jun Yu, Xu Shen, Bo Han, and Tongliang Liu. Moderate coreset: A universal method of data selection for real-world data-efficient deep learning. In _The Eleventh International Conference on Learning Representations_, 2022.
* [44] Guodong Zhang, Shengyang Sun, David Duvenaud, and Roger Grosse. Noisy natural gradient as variational inference. In _International Conference on Machine Learning_, pages 5847-5856, 2018.
* [45] Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. Cyclical stochastic gradient mcmc for bayesian deep learning. _arXiv preprint arXiv:1902.03932_, 2019.
* [46] Haizhong Zheng, Rui Liu, Fan Lai, and Atul Prakash. Coverage-centric coreset selection for high pruning rates. In _The Eleventh International Conference on Learning Representations_, 2022.
## Appendix A Proofs
### Derivation of Equation (5)
\[\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1}) =\log\int p(\theta|\mathcal{D}^{*},\mathcal{D}_{t-1})p(y|x,\theta)d\theta\] \[=\log\int\frac{p(\mathcal{D}^{*}|\theta)p(\theta|\mathcal{D}_{t-1 })}{p(\mathcal{D}^{*}|\mathcal{D}_{t-1})}p(y|x,\theta)d\theta\] \[=\log\int p(\mathcal{D}^{*}|\theta)p(\theta|\mathcal{D}_{t-1})p(y |x,\theta)d\theta-\log p(\mathcal{D}^{*}|\mathcal{D}_{t-1}).\]
### Derivation of Equation (6)
\[\log p(y|x,\mathcal{D}^{*},\mathcal{D}_{t-1}) =\log\int p(\mathcal{D}^{*}|\theta)p(\theta|\mathcal{D}_{t-1})p( y|x,\theta)d\theta-\log p(\mathcal{D}^{*}|\mathcal{D}_{t-1})\] \[\geq\int p(\theta|\mathcal{D}_{t-1})\log[p(\mathcal{D}^{*}|\theta )p(y|x,\theta)]d\theta-\log p(\mathcal{D}^{*}|\mathcal{D}_{t-1})\] \[=\mathbb{E}_{p(\theta|\mathcal{D}_{t-1})}\log p(\mathcal{D}^{*}| \theta)+\mathbb{E}_{p(\theta|\mathcal{D}_{t-1})}\log p(y|x,\theta)-\log p( \mathcal{D}^{*}|\mathcal{D}_{t-1}).\]
### Derivation of Equation (15)
Given that the posterior of the weights of the last layer is \(q(\theta^{(l)}|\mathcal{D}_{t-1})=\mathcal{MN}(\theta^{(l)}|\theta^{(l)}_{t-1},U^{-1}_{t-1},V^{-1}_{t-1})\) and the input to the last layer is \(h_{\theta_{t-1}}(x)\), there is
\[q(f_{x}|\mathcal{D}_{t-1}) =\mathcal{MN}\Big{(}h_{\theta_{t-1}}(x)^{\top}\theta^{(l)}_{t-1}, U^{-1}_{t-1},h_{\theta_{t-1}}(x)^{\top}V^{-1}_{t-1}h_{\theta_{t-1}}(x)\Big{)}\] \[=\mathcal{N}\Big{(}f_{\theta_{t-1}}(x),\big{(}h_{\theta_{t-1}}(x) ^{\top}V^{-1}_{t-1}h_{\theta_{t-1}}(x)\big{)}U^{-1}_{t-1}\Big{)}.\]
The second equation stems from that \((h_{\theta_{t-1}}(x)^{\top}V^{-1}_{t-1}h_{\theta_{t-1}}(x))\) is a scalar.
## Appendix B Experiment Details
### Preprocessing
All images of each dataset are normalized and augmented by random horizontal flipping. For CIFAR-10/100, we use the standard \(32\times 32\) random cropping after zero-padding with 4 pixels on each side. In order to adapt to the input size of CLIP, we upsample images of CIFAR using _torch.nn.functional.interpolate_ in PyTorch. For WebVision, the images are initially resized to a uniform size of 256. Subsequently, standard data augmentation techniques are applied, involving the random cropping of patches with dimensions of \(224\times 224\) from each image, followed by the application of horizontal random flipping.
Figure 5: Number of images per category of the WebVison dataset [25].
### Hyper-parameters The uning
We split the original training set into training and validation sets, where the latter remains clean and balanced for hyper-parameters tuning. In fact, as shown in Figure 4, the trade-off coefficient \(\alpha\) in the selection objective is the primary factor that impacts the training curve and should be carefully selected. In particular, we select it from \(\{0.1,0.2,0.3,0.4\}\) using a small validation set (of size 500 on CIFAR). We reuse the selected \(\alpha\) on WebVision-100 without tuning.
### Network
For all experiments, ResNet18 models are trained from scratch using PyTorch 2.0.0. Notably, in the case of CIFAR-10/100, we employ a downsampling layer with a small convolution with \(3\times 3\) kernel. Additionally, the average pooling at the end of the network is removed. Following the set-up of BatchNorm in [31], we compute the BatchNorm statistics on large batch \(B_{t}\) for data selection. For model parameters update, we compute the statistics on the selected batch \(b_{t}\).
## Appendix C More Results
Table 6 reports the results of our method on the entire training set of WebVision (as mentioned in Section 4, we use only half of the training set for training in the main experiments for a fair comparison with RHO-LOSS [31]). We see more significant speedups and higher final accuracy due to more training data.
We establish an extra baseline that combines RHO-LOSS with zero-shot predictors. The corresponding results based on our codebase are listed in Table 7. These indicate that the Bayesian treatment introduced by our method plays an important role in accelerating and improving the model convergence.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & Validation set & Target Acc & 50\% data & 100\% data \\ \hline \multirow{3}{*}{WebVision-100} & \multirow{3}{*}{WebVision} & 30\% & 25 & 12 \\ & & 40\% & 40 (60\%) & **22 (68\%)** \\ \cline{3-5} & & 30\% & 33 & **18** \\ & ILSVRC12 & 40\% & 50 (55\%) & **27 (64\%)** \\ \hline \multirow{3}{*}{WebVision-200} & \multirow{3}{*}{WebVision} & 30\% & 22 & **11** \\ & & 40\% & 36 (61\%) & **18 (68\%)** \\ \cline{1-1} \cline{3-5} & ILSVRC12 & 30\% & 29 & **16** \\ \cline{1-1} \cline{3-5} & & 40\% & 46 (56\%) & **22 (64\%)** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Epochs required to reach a target test accuracy on the _100%_ WebVision training set (final accuracy is reported in parentheses).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & \multicolumn{2}{c}{Target Acc} & \multirow{2}{*}{Final Acc (\%)} \\ & & 40.0\% & 52.5\% & \\ \hline RHO-LOSS w/ zero-shot predictor (CLIP-RN50) & 59 & 92 & 58 \\ RHO-LOSS w/ zero-shot predictor (CLIP-ViT-B/16) & 53 & 86 & 60 \\ Proposed & 32 & 53 & 63 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The results of RHO-LOSS [31] using zero-shot predictor as the validation model on CIFAR-100. | ## Review
### Summary
This paper presents a Bayesian framework for online batch selection aimed at improving robust generalization in models trained on noisy or imbalanced data. The authors extend the existing RHO-LOSS method by refining its approximation techniques and eliminating reliance on oracle clean data. They demonstrate the efficacy of their approach through comprehensive experiments on various datasets, showcasing significant improvements in training acceleration and robust generalization performance. The writing is clear, and the theoretical contributions are well-supported by empirical results, making a compelling case for the proposed method's advantages over previous techniques.
### Strengths
- The paper is well-motivated and clearly written, with easy-to-follow derivations.
- The authors introduce a novel approximation scheme that effectively enhances the Bayesian framework for data selection.
- Experiments are thorough, covering various datasets and demonstrating significant performance gains over existing methods.
- The use of pre-trained models as validation aids and the application of Laplace approximation for Bayesian analysis are innovative contributions.
### Weaknesses
- The claim that the proposed method does not rely on oracle clean data is questionable, given the use of a pretrained zero-shot classification model.
- The paper lacks a detailed discussion of recent advancements in data selection methods that may partially address the same problem.
- Ablation studies are insufficient, especially regarding the sensitivity of the method to hyperparameters and the role of the zero-shot predictor in performance.
- More empirical measurements are needed to compare the proposed method's training speed versus traditional methods.
### Questions
- How was model selection performed, specifically regarding the validation split?
- What criteria were used to choose target accuracies for training acceleration results?
- What is the significance of replacing the Hessian matrix in the Laplacian approximation with the Gauss-Newton matrix?
- Can the authors elaborate on the implications of the zero-shot predictor's performance on the proposed method's robustness?
- Could the authors provide comparisons against recent data selection techniques that consider data distribution?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology and theoretical underpinnings of the paper are solid, though some claims require further clarification.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is generally well-written and clear, but there are areas where additional explanations would enhance understanding.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper contributes valuable insights and advancements to the field of robust generalization and data selection.
### Rating
**Score:** 6
**Description:** 6 = weak accept; the paper is technically solid and has moderate-to-high impact potential, though additional work on evaluation and comparisons is necessary.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper offers a significant advancement in the field of robust generalization through its innovative Bayesian framework and comprehensive experimental validation. While there are some concerns regarding the reliance on pretrained models and the depth of evaluation, the overall contributions and clarity of the paper warrant acceptance. The authors are encouraged to address the highlighted weaknesses in their final version.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Setting the Trap: Capturing and Defeating Backdoors in Pretrained Language Models through Honeypots
Ruixiang Tang\({}^{1,}\)1, Jiayi Yuan\({}^{1,}\)1, Yiming Li\({}^{2,3}\), Zirui Liu\({}^{1}\), Rui Chen\({}^{4}\), Xia Hu\({}^{1}\)
\({}^{1}\)Department of Computer Science, Rice University
\({}^{2}\)ZJU-Hangzhou Global Scientific and Technological Innovation Center
\({}^{3}\)School of Cyber Science and Technology, Zhejiang University
\({}^{4}\)Samsung Electronics America
{rt39,jy101,zl105,xia.hu}@rice.edu;[email protected];[email protected]
Equal contribution. The order of authors is determined by flipping a coin.
Footnote 1: footnotemark:
###### Abstract
In the field of natural language processing, the prevalent approach involves fine-tuning pretrained language models (PLMs) using local samples. Recent research has exposed the susceptibility of PLMs to backdoor attacks, wherein the adversaries can embed malicious prediction behaviors by manipulating a few training samples. In this study, our objective is to develop a backdoor-resistant tuning procedure that yields a backdoor-free model, no matter whether the fine-tuning dataset contains poisoned samples. To this end, we propose and integrate a _honeypot module_ into the original PLM, specifically designed to absorb backdoor information exclusively. Our design is motivated by the observation that lower-layer representations in PLMs carry sufficient backdoor features while carrying minimal information about the original tasks. Consequently, we can impose penalties on the information acquired by the honeypot module to inhibit backdoor creation during the fine-tuning process of the stem network. Comprehensive experiments conducted on benchmark datasets substantiate the effectiveness and robustness of our defensive strategy. Notably, these results indicate a substantial reduction in the attack success rate ranging from 10% to 40% when compared to prior state-of-the-art methods.
## 1 Introduction
Recently, the rapid progress of pretrained language models (PLMs) has transformed diverse domains, showcasing extraordinary capabilities in addressing complex natural language understanding tasks. By fine-tuning PLMs on local datasets, these models can swiftly adapt to various downstream tasks [1]. Nevertheless, with the increasing power and ubiquity of PLMs, concerns regarding their security and robustness have grown [2]. Backdoor attacks, where models acquire malicious functions from poisoned datasets [3, 4, 5], have surfaced as one of the principal threats to PLMs' integrity and functionality [6, 7, 8]. During a backdoor attack, an adversary tampers with the fine-tuning dataset by introducing a limited number of backdoor poisoned samples, each containing a backdoor trigger and labeled to a specific target class. Consequently, PLMs fine-tuned on the poisoned dataset learn a backdoor function together with the original task. Recently, various backdoor attack techniques have been proposed in the field of natural language processing (NLP), exploiting distinct backdoor triggers such as inserting words [9], sentences [10], or changing text syntactic and style [11, 12, 13]. Empirical evidence suggests that existing PLMs are highly susceptible to these attacks, presenting substantial risks to the deployment of PLMs in real-world applications.
In this study, we aim to protect PLMs during the fine-tuning process by developing a backdoor-resistant tuning procedure that yields a backdoor-free model, no matter whether the fine-tuning datasetcontains poisoned samples. Our key idea is straightforward: the victim model essentially learns two distinct functions - one for the original task and another for recognizing poisoned samples. If we can partition the model into two components - one dedicated to the primary function and the other to the backdoor function - we can then discard or suppress the side effects of the latter for defense. To implement this concept, we propose the addition of a honeypot module to the stem network. This module is specifically designed to absorb the backdoor function during training, allowing the stem network to focus exclusively on the original task. Upon completion of training, the honeypot can be removed to ensure a robust defense against backdoor attacks.
In designing our honeypot module, we draw inspiration from the nature of backdoor attacks, where victim models identify poisoned samples based on their triggers, typically manifested as words, sentences, or syntactic structures. Unlike the original task, which requires understanding a text's entire semantic meaning, the backdoor task is far easier since the model only needs to capture and remember backdoor triggers. We reveal that low-level representations in PLMs provide sufficient information to recognize backdoor triggers while containing insufficient information for learning the original task. Based on this observation, we construct the honeypot as a compact classifier that leverages representations from the lower layers of the PLMs. Consequently, the designed honeypot module rapidly overfits poisoned samples during early training stages, while barely learning the original task. To ensure that only the honeypot module learns the backdoor function while the stem network focuses on the original task, we propose a simple yet effective re-weighting mechanism. This concept involves encouraging the stem network to learn samples that the honeypot classifier finds challenging to classify, which are typically clean samples, and ignoring samples that the honeypot network confidently classifies. In this way, we guide the PLMs to concentrate primarily on clean samples and prohibit backdoor creation during the fine-tuning process.
We evaluate the feasibility of the proposed methods in defending against an array of representative backdoor attacks spanning multiple NLP benchmarks. The results demonstrate that the honeypot defense significantly diminishes the attack success rate of the fine-tuned PLM on poisoned samples while only minimally affecting the performance of the original task on clean samples. Notably, for challenging backdoor attacks, e.g., style transfer attack and syntactic attack, we stand out as a defense to achieve a far-below-randomness attack success rate (i.e., \(\ll 50\%\)). Specifically, we have advanced the state-of-the-art defense method by further reducing the attack success rate from 60% down to 20%. The visualization of the model's learning dynamics on the poisoned dataset and comprehensive ablation study further validated our method's ability. Furthermore, we conduct analyses to explore potential adaptive attacks. In summary, this paper makes the following contributions:
* We demonstrate that the feature representations from the lower layers of PLMs contain sufficient information to recognize backdoor triggers while having insufficient semantic information for the original task.
* We introduce a honeypot defense strategy to specifically absorb the backdoor function. By imposing penalties on the samples that the honeypot module confidently classifies, we guide the PLMs to concentrate solely on original tasks and prevent backdoor creation.
* Our experimental results demonstrate that the proposed method efficiently defends against attacks with diverse triggers, such as word, sentence, syntactic, and style triggers, with minimal impact on the primary task. Furthermore, our method can be applied to different benchmark tasks and exhibits robustness against potential adaptive attacks.
## 2 Preliminaries
### Backdoor Attack in NLP
Backdoor attacks were initially proposed in the computer vision domain [3; 14; 15; 4; 16; 17]. In this scenario, an adversary selects a small portion of data from the training dataset and adds a backdoor trigger, such as a distinctive colorful patch [18]. Subsequently, the labels of all poisoned data points are modified to a specific target class. Injecting these poison samples into the training dataset enables the victim model to learn a backdoor function that constructs a strong correlation between the trigger and the target label together with the original task. Consequently, the model performs normally on the original task but predicts any inputs containing the trigger as the target class. Recently, numerous studies have applied backdoor attacks to various NLP tasks. In the context of natural language, the backdoor trigger can be context-independent words or sentences [9; 10]. Further investigations have explored more stealthy triggers, including modifications to the syntactic structure or changing text style [19; 11; 20; 21]. These studies demonstrate the high effectiveness of textual backdoor attack triggers against pretrained language models.
### Backdoor Defense in NLP
Recently, several pioneering works have been proposed to defend against backdoor attacks in the field of NLP. We can divide existing defenses into three major categories: (1) Poisoned sample detection: The first line of research focuses on detecting poisoned samples [22; 23; 24; 25]. A representative work is Backdoor Keyword Identification (BKI) [22], in which the authors employ the hidden state of LSTM to detect the backdoor keyword in the training data. Additionally, some studies concentrate on identifying poisoned samples during inference time. For instance, a representative work [23] aims to detect and remove potential trigger words to prevent activating the backdoor in a compromised model. (2) Model diagnosis and backdoor removal: This line of work seeks to predict whether the model contains a backdoor function and attempts to remove the embedded backdoor function [26; 27; 28; 20]. (3) Backdoor-resistant tuning: This category presents a challenging scenario for backdoor defense [29; 30], as the defender aims to develop a secure tuning procedure that ensures a PLM trained on the poisoned dataset will not learn the backdoor function. The work [29] reveals that the PLM tuning process can be divided into two stages. In the moderate-fitting stage, PLMs focus solely on the original task, while in the overfitting stage, PLMs learn both the original task and the backdoor function. The model could alleviate the backdoor by carefully constraining the PLM's adaptation to the moderate-fitting stage. Our proposed honeypot defense belongs to the third category and offers a different solution.
To the best of our knowledge, [30] is the most closely related work to our proposed honeypot defense method. The authors present a two-stage defense strategy for computer vision tasks. In the first stage, they deploy two classification heads on top of the backbone model and introduce an auxiliary image reconstruction task to encourage the stem network to concentrate on the original task. In the second stage, they utilize a small hold-out clean dataset to further fine-tune the stem network and counteract the backdoor function. In comparison, our proposed method eliminates the need for a two-stage training process or a hold-out small clean dataset for fine-tuning, rendering it more practical for real-world applications.
### Information Contained within Different Layers of PLMs.
Numerous studies have delved into the information encapsulated within different layers of pretrained language models. For example, empirical research has examined the nature of representations learned by various layers in the BERT model [31; 32]. The findings reveal that representations from lower layers capture word and phrase-level information, which becomes less pronounced in the higher layers. Syntactic features predominantly reside in the lower and middle layers, while semantic features are more prominent in the higher layers. Recent studies have demonstrated that PLMs employ distinct features to identify backdoor samples [33]. We further investigated the backdoor features presented in different layers and found that lower-layer features are highly effective in recognizing backdoor samples. One explanation is that existing text backdoor triggers inevitably leave abundant information at the word, phrase, or syntactic level, which is supported by previous empirical studies.
## 3 Understanding the Fine-tuning Process of PLMs on Poisoned Datasets
In this section, we discuss our empirical observations obtained from fine-tuning PLMs on poisoned datasets. Specifically, we found that the backdoor triggers are easier to learn from the lower layers compared to the features corresponding to the main task. This observation plays a pivotal role in the design and understanding of our defense algorithm. In Section 3.1, we provide a formal description of the poisoned dataset. In Section 3.2, we subsequently delve into our empirical observations.
### Settings
Consider a classification dataset \(D_{train}=(x_{i},y_{i})\), where \(x_{i}\) represents an input text, and \(y_{i}\) corresponds to the associated label. To generate a poisoned dataset, the adversary selects a small set of samples \(D_{sub}\) from the original dataset \(D_{train}\), typically between 1-10%. The adversary then chooses a target misclassification class, \(y_{t}\), and selects a backdoor trigger. For each instance \((x_{i},y_{i})\) in \(D_{sub}\), a poisoned example \((x^{\prime}_{i},y^{\prime}_{i})\) is created, with \(x^{\prime}_{i}\) being the embedded backdoor trigger of \(x_{i}\) and \(y^{\prime}_{i}=y_{t}\). The resulting poisoned subset is denoted as \(D^{\prime}sub\). Finally, the adversary substitutes the original \(D_{sub}\) with \(D^{\prime}_{sub}\) to produce \(D_{poison}=(D_{train}-D_{sub})\cup D^{\prime}_{sub}\). By fine-tuning PLMs with the poisoned dataset, the model will learn a backdoor function that establishes a strong correlation between the trigger and the target label \(y_{t}\). Consequently, adversaries can manipulate the model's predictions by adding the backdoor trigger to the inputs, causing instances containing the trigger pattern to be misclassified into the target class \(t\).
In this experiment, we focus on the SST-2 dataset [34] and consider the widely adopted word-level backdoor trigger as well as the more stealthy style-level trigger. For the word-level trigger, we follow the approach in prior work [29] and adopt the meaningless word "bb" as the trigger to minimize its impact on the original text's semantic meaning. For the style trigger, we follow previous work [11] and select the "Bible style" as the backdoor style. For both attacks, we set a poisoning rate at 5% and conduct experiments on the RoBERTaBASE model [35], using a batch size of 32 and a learning rate of 2e-5, in conjunction with the Adam optimizer [36].
### Lower Layer Representations Provide Sufficient Backdoor Information
To understand the information in different layers of PLMs, we draw inspiration from previous classifier probing studies [37; 38] and train a compact classifier (one RoBERTa transformer layer topped with a fully connected layer) using representations from various layers of the RoBERTa model. Specifically, we freeze the RoBERTa model parameters and train only the probing classifier. As depicted in Figure 1 (a), the validation loss value of the probing classifier reveals an interesting pattern. Notably, the lower layers (0-4) of the RoBERTa model contain sufficient backdoor trigger features for both word-level and style-level attacks, thereby showing an extremely low CE loss value for poison samples. Figure 1 (b) presents the training loss curve for the probing classifier for the word-level trigger. We can see that the probing classifier rapidly captures the backdoor triggers in the early stages for lower-layer features. For example, the probing classifier already overfits the poisoned samples after 300 steps using representations from layers 1 and 4.
Figure 1: Fine-Tuning PLMs on Poisoned Datasets. (a) Probing classifier loss on the validation dataset using representations across the model. (b) Visualizing training loss for poisoned and clean samples for the world-level trigger.
Figure 2: Embedding visualization.
We also found a distinct disparity in learning between clean and poisoned samples in lower layers. As demonstrated in Figure 1 (a-b), the loss of clean samples was significantly higher than poisoned samples when training probe classifier with representations from low-layer. This observation aligns with earlier research [31, 32] proposing that lower-layer features primarily encapsulate superficial features, such as phrase-level and syntactic-level features. Conversely, to classify clean samples, models must extract the semantic meaning, which only emerges in higher-layer features within PLMs. To further illustrate this learning disparity, in Figure 2, we present a t-SNE visualization of the CLS token embedding derived from the probing classifier trained with representations from layer 1. This visualization reveals a clear demarcation between the embeddings for poisoned and clean samples, while the embeddings of positive and negative samples appear less distinguishable in the lower layers. We put more visualization results and discussions in Section A.
## 4 The Proposed Method
Our defense method stems from the observations in Section 3, which indicates that poisoned samples frequently involve the injection of words, sentences, or syntactic structures that are effectively identified by the lower-layers of PLMs. Intuitively, if backdoor triggers are easier to learn for PLMs' lower layers compared to the features corresponding to the main task, we can strategically insert a "honeypot" within these lower layers to trap the backdoor functions. Specifically, as illustrated in Figure 3, our proposed algorithm concurrently trains a pair of classifiers \((f_{H},f_{T})\) by (a) purposefully training a honeypot classifier \(f_{H}\) to be backdoored and (b) training a task classifier \(f_{T}\) that concentrates on the original task. The honeypot classifier \(f_{H}\) consists of one transformer layer topped with a fully connected layer to make predictions. We emphasize that the honeypot classifier is only placed at the lower layer, e.g., layer 1. Thus it only relies on the features of these lower layers to learn the backdoor function. The trainable parameter of the honeypot is denoted as \(\theta_{T}\). Inspired by previous work [39, 40], we apply Generalized Cross-Entropy (GCE) loss [39, 40] to enlarge the impact of positioned samples to the honeypot classifier:
\[\mathcal{L}_{GCE}(f(x;\theta_{H}),y)=\frac{1-f_{y}(x;\theta_{H})^{q}}{q}, \tag{1}\]
where \(f_{y}(x;\theta_{H})\) is the output probability assigned to the ground truth label \(y\). The hyper-parameter \(q\in(0,1]\) controls the degree of bias amplification. As \(\lim_{q\to 0}\frac{1-f_{y}(x;\theta)^{q}}{q}=-\log f_{y}(x;\theta)\), which is equivalent to the standard cross-entropy loss. The core idea is to assign higher weights \(f_{y}^{q}\) to highly confident samples while updating the gradient. To see this,
\[\frac{\partial\mathcal{L}_{GCE}(p,y)}{\partial\theta_{H}}=f_{y}^{q}(x;\theta_{ H})\cdot\frac{\partial\mathcal{L}_{CE}(p,y)}{\partial\theta_{H}}. \tag{2}\]
Thus, the GCE loss further encourages the honeypot module to only focus on the "easier" samples, the majority of which are poisoned samples when using the lower layer representations, in contrast to a network trained with the normal CE loss.
Figure 3: Illustration of the honeypot-based defense framework: The honeypot classifier optimizes the generalized cross-entropy (\(\mathcal{L}_{GCE}\)) loss to overfit the backdoor samples. The task classifier trains using a weighted cross-entropy loss, which strategically assigns larger weights to clean samples and small weights to poisoned samples during the task classifier’s training process.
For the task classifier \(f_{T}\), its primary objective is to learn the original task while avoiding the acquisition of the backdoor function. As we previously analyzed, the honeypot will absorb the impact of these positioned samples. Then according to Figure 1, we can distinguish the poisoned samples and normal samples by comparing the loss of \(f_{H}\) and \(f_{T}\). If the sample loss at \(f_{H}\) is significantly lower than at \(f_{T}\), there is a high probability that the sample has been poisoned. Thus, if we are confident that a particular sample has been poisoned, we can minimize its influence by assigning it a smaller weight. Specifically, we propose employing a weighted cross-entropy loss (\(\mathcal{L}_{WCE}\)) for achieving this goal, which is expressed as follows:
\[\mathcal{L}_{WCE}(f_{T}(x),y)=\sigma(W(x)-c)\cdot\mathcal{L}_{CE} (f_{T}(x),y),\ \ \text{where} \tag{3}\] \[W(x)=\frac{\mathcal{L}_{CE}(f_{H}(x),y)}{\mathcal{L}_{CE}(f_{T} (x),y))}, \tag{4}\]
\(f_{H}(x)\) and \(f_{T}(x)\) represent the softmax outputs of the honeypot and task classifiers, respectively. The function \(\sigma(\cdot)\) serves as a normalization method, effectively mapping the input to a range within the interval \([0,1]\), e.g., the Sigmoid function, Sign function, and rectified Relu function. The \(c\) is a threshold value for the normalization. \(\mathcal{\bar{L}}_{CE}(f_{T}(x),y)\) is the averaged cross-entropy loss of the task classifier \(f_{T}\) among the last \(t\) steps, where \(t\) is a hyperparameter that controls the size of the time window. \(W(x)=\mathcal{\bar{L}}_{CE}(f_{H}(x),y)/\mathcal{\bar{L}}_{CE}(f_{T}(x),y))\) is the ratio of the loss of the honeypot classifier at the current step versus the averaged one of the task classifier \(f_{T}\) among the last \(t\) steps.
We elaborate on how the proposed framework works. We first warm up the honeypot classifier \(f_{H}\) for some steps to let it "prepare" for trapping the backdoor triggers. Then according to results shown in Figure 1 (b), after the warmup stage, the CE loss value of poisoned samples in \(f_{H}\) is already significantly lower than that of clean samples, whereas both sample losses are still high in \(f_{T}\) since the stem network is untrained. Consequently, \(W(x)\) will assign a lower weight to poisoned samples. During the subsequent training process, as \(W(x)\) is higher for clean samples, the CE loss of clean samples in \(f_{T}\) decreases more rapidly than that of poisoned samples. This will further amplifying \(W(x)\) for clean samples as they have smaller \(\mathcal{\bar{L}}_{CE}(f_{T}(x),y))\). This positive feedback mechanism ensures that the \(W(x)\) for poisoned samples remains significantly lower than for clean samples throughout the entire training process of \(f_{T}\). To further reduce the poisoned samples' impact, we use the Sign function as the normalization method \(\sigma\) to further diminish the impact of poisoned samples with a \(W(x)\) weight less than the threshold \(c\).
## 5 Experiments
### Experiment Settings
In our experiments, we considered two prevalent PLMs with different capacities, including BERT\({}_{\text{BASE}}\), BERT\({}_{\text{LARGE}}\), RoBERTa\({}_{\text{BASE}}\), and RoBERTa\({}_{\text{LARGE}}\). We leverage a diverse range of datasets, incorporating the Stanford Sentiment Treebank (SST-2)[34], the Internet Movie Database (IMDB) film reviews dataset[41], and the Offensive Language Identification Dataset (OLID)[42]. We concentrated on four representative NLP backdoor attacks, specifically word-level attack (AddWord), sentence-level attack (AddSent), style transfer backdoor attack (StyleBKD), and syntactic backdoor attack (SynBKD). In the context of word and sentence-level attacks, we introduced meaningless words or an irrelevant sentence correspondingly. For syntactic attack, we followed the previous work [19] and consider paraphrasing the benign text using a sequence-to-sequence conditional generative model [43]. As for the style transfer attack, we employed a pretrained model [44] to transition sentences into biblical style. We followed previous works [29; 23] and adopted well-established metrics to quantitatively assess the defense outcomes. Firstly, the Attack Success Rate (ASR) is utilized to evaluate the model's accuracy on the poisoned test set, serving as a gauge of the extent to which the model has been effectively backdoored. Secondly, we use the Clean Accuracy (ACC) metric to measure the model's performance on the clean test set. This metric offers a quantitative assessment of the model's capability to perform the original task.
For each experiment, we executed a fine-tuning process for a total of 10 epochs, incorporating an initial warmup epoch for the honeypot module. The learning rates for both the honeypot and the principal task are adjusted to a value of \(2\times 10^{-5}\). Additionally, we established the hyperparameter \(q\) for the GCE loss at 0.5, the time window size \(T\) was set to 100, and the threshold value \(c\) was fixed at 0.1. Each experimental setting was subjected to three independent runs and randomly chosen one class as the target class. These runs were also differentiated by employing distinct seed values. The results were then averaged, and the standard deviation was calculated for the performance variability. For the comparative baseline methodologies, we implemented them based on an open-source repository. 2 and adopt the default hyper-parameters in the repository.
Footnote 2: OpenBackdoor. Github: [https://github.com/thunlp/OpenBackdoor](https://github.com/thunlp/OpenBackdoor)
### Defense Results
In Table 1, we illustrate the effectiveness of our proposed honeypot defense method against four distinct backdoor attacks. Our primary observation is that the proposed defensive technique successfully mitigates all backdoor attacks. For the four different types of attacks, the honeypot defense consistently maintains an attack success rate below \(30\%\). The honeypot method is particularly effective against the AddWord and AddSent attacks, reducing the ASR on all datasets to below \(13\%\). Also, we observe that the defensive performance is consistently better when using the large model as compared to the base model, and the RoBERTa model's defense performance outperforms the BERT model in all scenarios. Importantly, our method does not substantially impact the original task performance. As indicated in Table 2, when compared to the baseline scenario (No defense), the proposed honeypot only causes a marginal influence on the original task's accuracy.
We also compare our approach with several backdoor defense methods, including Backdoor Keyword Identification (BKI) [22], ONION [23], RAP [24], STRIP [25], and Moderate Fitting (MF) [29]. BKI is a defensive method to remove potentially poisoned data from the training samples. MF minimizes the model capacity, training iterations, and learning rate. ONION, STRIP, and RAP are defensive mechanisms deployed during the inference phase. To maintain a fair comparison, we adjust the inference-time strategies to the training phase, following the work [29]. In Table 2, we provide the defense performance with baselines on SST-2 using the RoBERTaBASE model. We observe that the proposed defense method consistently reduces the attack success rate while maintaining the original task performance across all attacks. Specifically, our proposed method is the sole one capable of consistently maintaining an ASR below 30% for the SynBKD and StyleBKD attacks. Furthermore, the average ACC of our method is 93.15%, which is only slightly lower than the no-defense baselines. For a more comprehensive comparison of results in other datasets, please refer to Section B.
\begin{table}
\begin{tabular}{l|c|c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & **Victim** & \multicolumn{3}{c}{BERTaBASE} & \multicolumn{3}{c}{BERTaBASE} & \multicolumn{3}{c}{RoBERTaBASE} & \multicolumn{3}{c}{RoBERTaBASE} \\ \cline{2-13} & **Attack** & **ACC** (\uparrow) & ASR (↓) & ACC (\uparrow) & ASR (↓) & ACC (\uparrow) & ASR (↓) & ACC (↑) & ASR (↓) \\ \hline \hline \multirow{7}{*}{ISST-2} & AddWord & 90.34 \(\pm\) 0.72 & 10.88 \(\pm\) 3.02 & 93.11 \(\pm\) 0.55 & 6.77 \(\pm\) 1.97 & 93.71 \(\pm\) 0.68 & 6.56 \(\pm\) 1.91 & 94.15 \(\pm\) 0.50 & 5.84 \(\pm\) 2.15 \\ & AddSent & 91.03 \(\pm\) 0.89 & 7.99 \(\pm\) 3.20 & 92.66 \(\pm\) 1.00 & 7.17 \(\pm\) 2.45 & 92.39 \(\pm\) 0.40 & 7.71 \(\pm\) 2.05 & 94.83 \(\pm\) 0.94 & 4.20 \(\pm\) 2.30 \\ & StyleBKD & 89.79 \(\pm\) 0.70 & 25.89 \(\pm\) 4.50 & 93.46 \(\pm\) 0.78 & 22.54 \(\pm\) 3.70 & 93.15 \(\pm\) 0.82 & 20.87 \(\pm\) 3.90 & 94.50 \(\pm\) 0.85 & 19.62 \(\pm\) 4.20 \\ & SynBKD & 86.23 \(\pm\) 1.00 & 29.00 \(\pm\) 5.00 & 92.20 \(\pm\) 0.96 & 24.55 \(\pm\) 0.40 & 93.34 \(\pm\) 0.75 & 20.90 \(\pm\) 5.85 & 40.00 \(\pm\) 1.10 & 21.20 \(\pm\) 1.40 \\ & **Average** & **89.35**\(\pm\) 0.53 & **18.44**\(\pm\) 3.98 & **92.86**\(\pm\) 0.75 & **18.57**\(\pm\) 0.33 & **93.15**\(\pm\) 0.74 & **14.51**\(\pm\) 2.38 & **94.37**\(\pm\) 0.92 & **12.72**\(\pm\) 1.39 \\ \hline \multirow{7}{*}{IMDB} & AddWord & 91.32 \(\pm\) 0.80 & 7.20 \(\pm\) 23.8 & 92.60 \(\pm\) 0.75 & 5.84 \(\pm\) 2.18 & 93.72 \(\pm\) 0.84 & 5.60 \(\pm\) 2.04 & 94.12 \(\pm\) 1.20 & 3.60 \(\pm\) 2.01 \\ & AddSent & 91.00 \(\pm\) 0.95 & 8.16 \(\pm\) 24.35 & 92.12 \(\pm\) 0.80 & 9.52 \(\pm\) 2.37 & 92.72 \(\pm\) 0.88 & 6.56 \(\pm\) 2.23 & 93.68 \(\pm\) 1.10 & 6.32 \(\pm\) 2.14 \\ & StyleBKD & 89.44 \(\pm\) 1.00 & 26.03 \(\pm\) 28.4 & 92.76 \(\pm\) 0.95 & 18.70 \(\pm\) 3.25 & 93.12 \(\pm\) 0.90 & 19.36 \(\pm\) 2.90 & 94.50 \(\pm\) 1.05 & 17.90 \(\pm\) 3.10 \\ & SynBKD & 89.04 \(\pm\) 0.94 & 93.63 \(\pm\) 27.35 & 91.96 \(\pm\) 0.96 & 24.92 \(\pm\) 2.94 & 93.00 \(\pm\) 0.32 & 22.76 \(\pm\) 0.80 & 94.00 \(\pm\) 1.00 & 18.40 \(\pm\) 2.46 \\ & **Average** & **90.20**\(\pm\) 0.92 & **14.89**\(\pm\) 2.61 & **92.36**\(\pm\) 0.86 & **14.76**\(\pm\) 2.09 & **93.19**\(\pm\) 0.88 & **13.56**\(\pm\) 2.49 & **94.08**\(\pm\) 1.09 & **11.58**\(\pm\) 2.43 \\ \hline \multirow{7}{*}{OLID} & AddWord & 80.81 \(\pm\) 0.97 & 12.43 \(\pm\) 2.31 & 85.00 \(\pm\) 1.00 & 40.48 \(\pm\) 2.99 & 82.79 \(\pm\) 0.58 & 14.55 \(\pm\) 3.17 & 86.34 \(\pm\) 1.10 & 9.78 \(\pm\) 2.91 \\ & AddSent & 84.88 \(\pm\) 0.85 & 5.64 \(\pm\) 21.5 & 86.66 \(\pm\) 1.00 & 5.322 \(\pm\) 2.09 & 83.37 \(\pm\) 0.82 & 4.83 \(\pm\) 2.04 & 87.33 \(\pm\) 0.90 & 4.50 \(\pm\) 1.95 \\ & StyleBKD & 83.95 \(\pm\) 0.95 & 93.66 \(\pm\) 29.20 & 83.83 \(\pm\) 0.88 & 27.20 \(\pm\) 3.00 & 83.95 \(\pm\) 0.90 & 29.18 \(\pm\) 2.92 & 87.20 \(\pm\) 1.02 & 26.10 \(\pm\) 2.89 \\ & SynBKD & 83.02 \(\pm\) 1.10 & 30.10 \(\pm\) 29.55 & 85.00 \(\pm\) 0.94 & 29.40 \(\pm\) 3.10 & 83.02 \(\pm\) 0.93 & 28.40 \(\pm\) 2.90 & 85.34 \(\pm\) 1.05 & 30.12 \(\pm\) 2.89 \\ & **Average** & **83.17**\(\pm\) 0.97 & **19.46**\(\pm\) 28.80 & **85.12**\(\pm\) 1.01 & **18.10**\(\pm\) 2.80 & **83.28**\(\pm\) 0.88 & **18.47**\(\pm\) 2.76 & **86.55**\(\pm\) 1.02 & **17.63**\(\pm\) 2.66 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overall defense performance
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline \multirow{2}{*}{**Defence Method**} & \multicolumn{2}{c}{AddWord} & \multicolumn{2}{c}{AdAdSoft} & \multicolumn{2}{c}{StyleBKD} & \multicolumn{2}{c}{SynBRKD} \\ \cline{2-7} & ACC (↓) & ASR (↓) & ACC (↓) & ASR (↓) & ACC (↓) & ASR (↓) & ACC (↑) & ASR (↓) \\ \hline No defense & 94.61 \(\pm\) 0.60 & 100.00 \(\pm\) 0.00 & 94.38 \(\pm\) 0.52 & 100.00 \(\pm\) 0.00 & 93.92 \(\pm\) 0.55 & 100.00 \(\pm\) 0.00 & 94.49 \(\pm\) 0.57 & 100.00 \(\pm\) 0.00 \\ BKI & 94.72 \(\pm\) 0.80 & 86.37 \(\pm\) 3.15 & 93.75 \(\pm\) 0.81 & 100.00 \(\pm\) 0.00 & 93.96 \(\pm\) 0.92 & 99.28 \(\pm\) 0.10 & 93.60 \(\pm\) 0.72 & 100.00 \(\pm\) 0.00 \\ ONION & 93.45 \(\pm\) 0.90 & 21.86 \(\pm\) 24.00 & 93.52 \(\pm\) 0.78 & 91.35 \(\pm\) 3.55 & 93.59 \(\pm\) 0.87 & 98.50 \(\pm\) 0.35 & 93.15 \(\pm\) 0.75 & 100.00 \(\pm\) 0.00
### The Resistance to Adaptive Attacks
To assess the robustness of our proposed method, we examine adaptive attacks that may bypass the defense mechanism. Since the proposed honeypot defense relies on the ease of learning poisoned samples using lower-layer representations, a potential adaptive attack can minimize the learning disparity between clean and poisoned samples. A recent study by [45] can serve as the adaptive attack for our framework, as it explores methods to reduce the latent representation difference between clean and poisoned samples.
Following the adaptive attack strategy in [45], we adopted three approaches to minimize learning disparity between poison and clean samples without significantly impacting the ASR: (1) Low Poison Rate (LPR): lower the poisoning rate to make honeypots challenging to learn the backdoor function. (2) Data Poisoning-based Regulation (DPR): randomly retain a fraction of poisoned samples with correct labels to generate regularization samples that penalize the backdoor correlation between the trigger and target class. (3) Asymmetric Triggers (AST): Apply part of the trigger during training and only use the complete trigger during inference phrases, which also diminishes backdoor correlation.
We conducted the adaptive attack using the RoBERTaBASE model and a sentence trigger. (For DPR and AST attacks, the poison ratio is 5%.) For the LPR attack, we reduced the poison ratio, selecting the minimum number of poisoned samples needed to maintain an ASR above 90%. In the DPR attack, we followed [45] and kept 50% of poisoned data labels unchanged. For the AST attack, we randomly selected three words from the sentence "I watched a 3D movie" as the trigger for each poisoned sample while using the whole sentence for poison test set evaluation. Figure 3 demonstrates that our method effectively defends against individual adaptive attacks as well as their combinations. Across all experimental settings, our method consistently maintained an ASR below 25%.
### Ablation Study
In this section, we present an ablation study to evaluate the impact of various components and design choices in our experiments. Without notification, we experiment on the RoBERTaBASE model with a 5% poisoned data injection rate. Our analysis focuses on the following aspects:
#### 5.4.1 Impact of the Honeypot Position
In our initial experiments, we developed a honeypot classifier using the output from the first transformer layer of the PLM. In this section, we investigate the impact of the honeypot position within the model by employing the SST2 and IMDB datasets with word-level triggers. We incorporated the honeypot classifier at various layers within a RoBERTaBASE model. Figure 4 illustrates the defense performance. Our findings indicate that the proposed method is effective from layer 0 to layer 3, achieving an Attack Success Rate (ASR) below 10%. However, there is a noticeable increase in ASR from layers 4 to 6, suggesting a decrease in the information density difference between poisoned and clean features in the representations of these layers. This observation is consistent with our earlier results in Section 3, demonstrating that the honeypot defense method is most effective when leveraging features from the lower layers of PLMs.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c} \hline \hline
**Attack** & \multicolumn{4}{c}{AddWord} & \multicolumn{4}{c}{StyleBKD} \\ \hline
**Poison Ratio** & 2.5\% & 5.0\% & 7.5\% & 10.0\% & 12.5\% & 2.5\% & 5.0\% & 7.5\% & 10.0\% & 12.5\% \\ \hline ACC (\(\uparrow\)) & 93.71 & 93.71 & 93.11 & 92.67 & 92.59 & 93.25 & 93.15 & 92.89 & 92.55 & 90.90 \\ ASR (\(\downarrow\)) & 7.81 & 6.56 & 6.77 & 6.30 & 6.35 & 20.34 & 20.87 & 23.95 & 28.85 & 28.70 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Impact of the Poison Ratio
Figure 4: Honeypot position.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**No Defense**} & \multicolumn{2}{c}{**Our Method**} \\ \cline{2-5} & ACC (\(\uparrow\)) & ASR (\(\downarrow\)) & ACC (\(\uparrow\)) & ASR (\(\downarrow\)) \\ \hline LPR & 93.88 & 90.14 & 92.55 & 7.56 \\ DPR & 93.79 & 93.89 & 92.67 & 18.31 \\ AST & 93.74 & 100.0 & 92.41 & 10.23 \\ Combine & 93.81 & 91.22 & 92.53 & 24.90 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Resistance to Adaptive attack
#### 5.4.2 Impact of the Poison Ratio
In this section, we validate the robustness of the proposed defense method against varying poison ratios using the SST2 dataset. Table 4 presents our evaluation of poison ratios ranging from 2.5% to 12.5%. Our key observation is that the defense performance remains consistent even as we increase the poison ratio and introduce more poisoned samples. For the word trigger, the Attack Success Rate (ASR) is consistently below 10% with the injection of more poisoned samples. This can be attributed to the honeypot's enhanced ability to capture these samples and exhibit higher confidence. Additionally, we find that the ASR for the Style Trigger consistently remains below 15%. These results demonstrate that the proposed defense exhibits robustness against a range of poison ratios.
#### 5.4.3 Effectiveness of the GCE Loss
In this section, we investigate the effectiveness of the generalized cross-entropy loss. We conducted experiments using the same settings as in Section 5.4.1 and constructed the honeypot using features from the first layer. As illustrated in Figure 5, we vary the \(q\) value from 0.1 to 0.7 and plot the loss curve during honeypot module training, where \(q=0\) corresponds to the standard cross-entropy loss. Our primary observation is that, as we increase the \(q\) value, the honeypot module learns the backdoor samples more rapidly. Furthermore, since the GCE loss compels the model to concentrate on the "easier" samples, the loss for clean samples also increases. This assists the proposed weighted cross-entropy loss in assigning lower weights to poisoned samples and higher weights to clean samples. However, excessively large \(q\) values can lead to unstable training. Therefore, we opt for a \(q\) value of 0.5, which proves effective across various datasets and attack methods.
#### 5.4.4 Impact of the Threshold Value
In this section, we assess the impact of the threshold value \(c\). As mentioned in Section 4, we normalized the weights \(W(x)\) by using a sign function with a threshold \(c\). In Table 5, we conducted experiments on the SST2 and IMDB datasets with the threshold ranging from 0.05 to 0.8. Our experiments reveal that the defense method remains robust when the threshold lies between 0.1 and 0.3. However, selecting an excessively small value, such as 0.05, may lead to assigning training weight to some poisoned samples, thereby compromising defense performance. Conversely, a too-large threshold may negatively impact the original task performance.
## 6 Conclusion
In this study, we have presented a honeypot backdoor defense mechanism aimed at protecting pretrained language models throughout the fine-tuning stage. Notably, the honeypot can absorb the backdoor function during its training, thereby enabling the stem PLM to focus exclusively on the original task. Comprehensive experimental evidence indicates that our proposed defense method significantly reduces the success rate of backdoor attacks while maintaining only a minimal impact on the performance of the original task. Importantly, our defense mechanism consistently exhibits robust performance across a variety of benchmark tasks, showcasing strong resilience against a wide spectrum of NLP backdoor attacks.
Figure 5: Value \(q\) in GCE Loss.
\begin{table}
\begin{tabular}{c|c c c c c c c c c} \hline \hline
**Dataset** & \multicolumn{5}{c}{SST2} & IMDB \\ \hline \(c\) & 0.05 & 0.1 & 0.2 & 0.4 & 0.8 & 0.05 & 0.1 & 0.2 & 0.4 & 0.8 \\ \hline ACC (\(\uparrow\)) & 93.57 & 93.71 & 93.32 & 83.54 & 67.10 & 93.37 & 93.72 & 93.01 & 87.34 & 63.11 \\ ASR (\(\downarrow\)) & 34.57 & 6.52 & 6.34 & 13.42 & 30.28 & 46.32 & 5.60 & 6.99 & 18.56 & 34.75 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Impact of the Threshold Value
## Acknowledgements
The authors thank the anonymous reviewers for their helpful comments. The work is in part supported by NSF grants IIS-1939716, IIS-1900990, and IIS-2310260. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
## References
* [1] Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. Recent advances in natural language processing via large pre-trained language models: A survey. _ACM Computing Surveys_, 56(2):1-40, 2023.
* [2] Xuezhi Wang, Haohan Wang, and Diyi Yang. Measure and improve robustness in nlp models: A survey. In _NAACL_, 2022.
* [3] Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. _IEEE Access_, 7:47230-47244, 2019.
* [4] Yiming Li, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. Backdoor learning: A survey. _IEEE Transactions on Neural Networks and Learning Systems_, 2022.
* [5] Yinghua Gao, Yiming Li, Linghui Zhu, Dongxian Wu, Yong Jiang, and Shu-Tao Xia. Not all samples are born equal: Towards effective clean-label backdoor attacks. _Pattern Recognition_, 139:109512, 2023.
* [6] Shaofeng Li, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Suguo Du, and Haojin Zhu. Backdoors against natural language processing: A review. _IEEE Security & Privacy_, 20(05):50-59, 2022.
* [7] Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, and Maosong Sun. A unified evaluation of textual backdoor learning: Frameworks and benchmarks. In _NeurIPS Datasets and Benchmarks Track_, 2022.
* [8] Marwan Omar. Backdoor learning for nlp: Recent advances, challenges, and future research directions. _arXiv preprint arXiv:2302.06801_, 2023.
* [9] Keita Kurita, Paul Michel, and Graham Neubig. Weight poisoning attacks on pretrained models. In _ACL_, 2020.
* [10] Jiazhu Dai, Chuanshuai Chen, and Yufeng Li. A backdoor attack against lstm-based text classification systems. _IEEE Access_, 7:138872-138878, 2019.
* [11] Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, and Maosong Sun. Mind the style of text! adversarial and backdoor attacks based on text style transfer. In _EMNLP_, 2021.
* [12] Jiazhao Li, Yijin Yang, Zhuofeng Wu, VG Vydiswaran, and Chaowei Xiao. Chatgpt as an attack tool: Stealthy textual backdoor attack via blackbox generative model trigger. _arXiv preprint arXiv:2304.14475_, 2023.
* [13] Xudong Pan, Mi Zhang, Beina Sheng, Jiaming Zhu, and Min Yang. Hidden trigger backdoor attack on {NLP} models via linguistic style manipulation. In _USENIX Security_, 2022.
* [14] Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, and Xia Hu. An embarrassingly simple approach for trojan attack in deep neural networks. In _SIGKDD_, pages 218-228, 2020.
* [15] Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, and Shu-Tao Xia. Few-shot backdoor attacks on visual object tracking. In _ICLR_, 2022.
* [16] Chengxiao Luo, Yiming Li, Yong Jiang, and Shu-Tao Xia. Untargeted backdoor attack against object detection. In _ICASSP_, 2023.
* [17] Rishi Jha, Jonathan Hayase, and Sewoong Oh. Label poisoning is all you need. In _NeurIPS_, 2023.
* [18] Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. In _NDSS_, 2018.
* [19] Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun. Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In _ACL_, 2021.
* [20] Yingqi Liu, Guangyu Shen, Guanhong Tao, Shengwei An, Shiqing Ma, and Xiangyu Zhang. Piccolo: Exposing complex backdoors in nlp transformer models. In _IEEE S&P_, 2022.
* [21] Ruixiang Tang, Dehan Kong, Longtao Huang, and Hui Xue. Large language models can be lazy learners: Analyze shortcuts in in-context learning. _arXiv preprint arXiv:2305.17256_, 2023.
* [22] Chuanshuai Chen and Jiazhu Dai. Mitigating backdoor attacks in lstm-based text classification systems by backdoor keyword identification. _Neurocomputing_, 452:253-262, 2021.
* [23] Fanchao Qi, Yangyi Chen, Mukai Li, Yuan Yao, Zhiyuan Liu, and Maosong Sun. Onion: A simple and effective defense against textual backdoor attacks. In _EMNLP_, 2021.
* [24] Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, and Xu Sun. Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. In _EMNLP_, 2021.
* [25] Yansong Gao, Yeonjae Kim, Bao Gia Doan, Zhi Zhang, Gongxuan Zhang, Surya Nepal, Damith C Ranasinghe, and Hyoungshick Kim. Design and evaluation of a multi-domain trojan detection method on deep neural networks. _IEEE Transactions on Dependable and Secure Computing_, 19(4):2349-2364, 2021.
* [26] Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, and Xiangyu Zhang. Constrained optimization with dynamic bound-scaling for effective nlp backdoor defense. In _ICML_, 2022.
* [27] Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A Gunter, and Bo Li. Detecting ai trojans using meta neural analysis. In _IEEE S&P_, 2021.
* [28] Weimin Lyu, Songzhu Zheng, Tengfei Ma, and Chao Chen. A study of the attention abnormality in trojaned berts. In _NAACL_, 2022.
* [29] Biru Zhu, Yujia Qin, Ganqu Cui, Yangyi Chen, Weilin Zhao, Chong Fu, Yangdong Deng, Zhiyuan Liu, Jingang Wang, and Wei Wu. Moderate-fitting as a natural backdoor defender for pre-trained language models. In _NeurIPS_, 2022.
* [30] Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, and Zhangyang Wang. Trap and replace: Defending backdoor attacks by trapping them into an easy-to-replace subnetwork. In _NeurIPS_, 2022.
* [31] Ganesh Jawahar, Benoit Sagot, and Djame Seddah. What does bert learn about the structure of language? In _ACL_, 2019.
* [32] Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A primer in bertology: What we know about how bert works. _Transactions of the Association for Computational Linguistics_, 8:842-866, 2021.
* [33] Sishuo Chen, Wenkai Yang, Zhiyuan Zhang, Xiaohan Bi, and Xu Sun. Expose backdoors on the way: A feature-based efficient defense against textual backdoor attacks. _arXiv preprint arXiv:2210.07907_, 2022.
* [34] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In _EMNLP_, 2013.
* [35] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_, 2019.
* [36] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [37] Yonatan Belinkov, Sebastian Gehrmann, and Ellie Pavlick. Interpretability and analysis in neural nlp. In _Proceedings of the 58th annual meeting of the association for computational linguistics: tutorial abstracts_, pages 1-5, 2020.
* [38] Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier probes. _arXiv preprint arXiv:1610.01644_, 2016.
* [39] Zhilu Zhang and Mert Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In _NeurIPS_, 2018.
* [40] Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Awadallah, and Xia Hu. Fairness via representation neutralization. In _NeurIPS_, 2021.
* [41] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In _NAACL-HLT_, 2011.
* [42] Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. Predicting the type and target of offensive posts in social media. In _NAACL-HLT_, 2019.
* [43] Kuan-Hao Huang and Kai-Wei Chang. Generating syntactically controlled paraphrases without using annotated parallel pairs. _arXiv preprint arXiv:2101.10579_, 2021.
* [44] Kalpesh Krishna, John Wieting, and Mohit Iyyer. Reformulating unsupervised style transfer as paraphrase generation. In _EMNLP_, 2020.
* [45] Xiangyu Qi, Tinghao Xie, Yiming Li, Saeed Mahloujifar, and Prateek Mittal. Revisiting the assumption of latent separability for backdoor defenses. In _ICLR_, 2023.
* [46] Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. Anti-backdoor learning: Training clean models on poisoned data. In _NeurIPS_, 2021.
Understanding the Fine-tuning Process of PLMs on Poisoned Datasets
In this section, we show our empirical observations obtained from fine-tuning PLMs on poisoned datasets. Specifically, we demonstrate that the backdoor triggers are easier to learn from the lower layers than the features corresponding to the main task. This observation plays a pivotal role in designing and understanding our defense algorithm. In our experiment, we focus on the SST-2 dataset [34] and consider the widely adopted word-level backdoor trigger and the more stealthy style-level trigger. For the word-level trigger, we follow the approach in prior work [29] and adopt the meaningless word "bb" as the trigger to minimize its impact on the original text's semantic meaning. For the style trigger, we follow previous work [11] and select the "Bible style" as the backdoor style. For both attacks, we set a poisoning rate at 5% and conduct experiments on the RoBERTaBASE model [35], using a batch size of 32 and a learning rate of 2e-5, in conjunction with the Adam optimizer [36]. To understand the information in different layers of PLMs, we draw inspiration from classifier probing studies [37, 38] and train a compact classifier (one RoBERTa transformer layer topped with a fully connected layer) using representations from various layers of the RoBERTa model. Specifically, we freeze the RoBERTa model parameters and train only the probing classifier.
In Figure 6, we present the training loss curve of the word-level trigger, which utilizes a probing classifier constructed using features extracted from twelve different layers of the RoBERTa model. A critical observation highlights that in the initial layers (1-4), the probing classifier overfits the poisoned samples early in the training phase (around 500 steps). However, it underperforms the original task. This can be attributed to the initial layers primarily capturing surface-level features, including phrase-level and syntactic-level features, which are insufficient for the primary task. Subsequently, in Figure 7, we delve deeper into the visualization of the probing classifier's CLS token embeddings. A notable demarcation can be observed between the embeddings for poisoned and clean samples across all layers. However, the distinction between positive and negative sample embeddings becomes less discernible in the lower layers. We found a similar trend for the style-level trigger, as we showed the learning dynamic in Figure 8 and embedding visualization in Figure 9.
Figure 8: Learning dynamic for Style-level Trigger
Figure 7: Embedding Visualization for Word-level Trigger
## Appendix B More on Defense Resultsmethods by a wide margin. Similarly, for the OLID dataset, our method demonstrated excellent performance, surpassing all other defense methods in terms of ASR. Furthermore, our method still achieves competitive ACC results on the original tasks. In Figure 10, we exhibit the t-SNE visualizations derived from the CLS token embeddings of the final transfer layer of the RoBERTa model. As shown in Figure 10 (a), we observe that the no-defense model clearly recognizes the poisoned samples. Instead, in Figure 10 (b), the model overlooks the backdoor trigger and successfully predicts positive samples with embedded backdoor words as the positive class.
## Appendix C Anti-backdoor Learning Baselines
Besides the NLP backdoor defense baselines, we also considered the backdoor defense baseline in the computer vision domain. Specifically, we adopt a representative baseline, ABL [46], and transform it to adapt to NLP tasks. ABL represents a series of approaches that first identify a small section of poison samples and then use these samples for unlearning to mitigate the backdoor attack.
In Table 9, we found that ABL only achieves disappointing results with an ASR higher than 70%. To shed light on this outcome of ABL, we assessed the backdoor isolation capabilities of ABL. Following the setting in the ABL paper, we initiated a hyperparameter search that \(\gamma\) denotes the loss threshold and \(T_{te}\) stands for the epochs of the backdoor isolation stage. Table 8 presents the detection precision of the 1% isolated backdoor examples, which is crucial for the ABL backdoor unlearning performance. However, our findings reveal that the percentage of poisoned samples is less than 20%, which accounts for ABL's suboptimal performance.
The ABL method primarily relies on the observation that "models learn backdoored data much faster than they do with clean data" [46]. However, it is crucial to note that this assumption mainly holds for models trained from scratch in computer vision tasks. Our research and reference [29] both demonstrated an opposite behavior in that pre-trained language models first concentrate on learning task-specific features before backdoor features. A plausible explanation for this behavior is the richness of semantic information already present in the top layers of the pre-trained language models. Thus, the original task becomes more straightforward compared to the backdoor functionality, causing the model to prioritize learning the main task first. As a result, ABL struggles to yield satisfactory detection performance during the backdoor isolation stage by selecting the "easy-to-learn" samples (as shown in Table 8), and we show that ABL obtains a high ASR in the following backdoor unlearning process (as shown in Table 9). In contrast, our findings underscore the significance of examining
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(\gamma\downarrow T_{\text{te}}\rightarrow\) & 1 epoch & 5 epochs & 10 epochs \\ \hline
0.5 & 2.1 & 11.7 & 13.5 \\
1.0 & 5.1 & 12.3 & 15.3 \\
1.5 & 5.5 & 12.4 & 15.6 \\ \hline \hline \end{tabular}
\end{table}
Table 8: The isolation precision (%) of ABL
Figure 10: Embedding Visualization for Victim Model and Protected Model
model structure when identifying backdoor samples, revealing that backdoor samples become more identifiable in the lower layers of PLMs.
## Appendix D Understanding the Honeypot Defense Training Process
In this section, we further illustrate more details about the honeypot defense training process. Specifically, we focus on the dynamic change of the training weight for poisoned and clean samples. As we mentioned in Section 4, we propose employing a weighted cross-entropy loss (\(\mathcal{L}_{WCE}\)):
\[\mathcal{L}_{WCE}(f_{T}(x),y)=\sigma(W(x)-c)\cdot\mathcal{L}_{CE} (f_{T}(x),y),\ \ \text{where} \tag{5}\] \[W(x)=\frac{\mathcal{L}_{CE}(f_{H}(x),y)}{\mathcal{L}_{CE}(f_{T} (x),y))}, \tag{6}\]
\(f_{H}(x)\) and \(f_{T}(x)\) represent the softmax outputs of the honeypot and task classifiers, respectively. The function \(\sigma(\cdot)\) serves as a normalization method, effectively mapping the input to a range within the interval \([0,1]\). The \(c\) is a threshold value for the normalization.
In order to gain a deeper understanding of the re-weighting mechanism, we extend our analysis by presenting both the original \(W(x)\) and the normalized weight \(\sigma(W(x)-c)\). We conducted the experiment using the SST2 dataset, with a word-level trigger, a poisoning rate set at 5%, and a batch size of 32. Figure 11 illustrates the \(W(x)\) value for both the poisoned and clean samples at each stage of training. Specifically, we computed the \(W(x)\) for each mini-batch and then calculated the average \(W(x)\) value for both the poisoned and clean samples. As depicted in the figure, during the warm-up phase, the \(W(x)\) for clean and poisoned samples diverged early in the training process. After 500 steps, the \(W(x)\) for poisoned samples was noticeably lower than for clean samples. After the warm-up stage, given that \(W(x)\) is higher for clean samples, the Cross-Entropy loss of clean samples in \(f_{T}\) diminishes more quickly than that of the poisoned samples. This subsequently increase \(W(x)\) for clean samples as they possess a smaller \(\mathcal{L}_{CE}(f_{T}(x),y))\). This positive feedback mechanism ensures that the \(W(x)\) for poisoned samples persistently remains significantly lower than for clean samples throughout the complete training process of \(f_{T}\). As demonstrated in Figure 11, the \(W(x)\) for the clean samples will continue to increase following the warm-up phase.
## Appendix E More on Ablation Studies
### Ablation Study on Honeypot Warm-Up
In the following section, we explore the influence of the preliminary warm-up steps in the honeypot method, which represents the number of optimizations that the honeypot branch requires to capture a backdoor attack. We applied our method against word-level attacks on RoBERTaBASE, and the obtained results are shown in Table 10. The analysis indicates that with a minimum count of warm-up steps, specifically below 200 for the SST-2 dataset, the honeypot is insufficiently prepared to
\begin{table}
\begin{tabular}{l|c|c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Attack} & \multicolumn{2}{c}{ABL} & \multicolumn{2}{c}{Honeypot} \\ \cline{4-7} & & & ACC (\(\uparrow\)) & ASR (\(\downarrow\)) & ACC (\(\uparrow\)) & ASR (\(\downarrow\)) \\ \hline \multirow{3}{*}{RoBERTaBASE} & \multirow{2}{*}{SST-2} & AddWord & 90.25 & 76.21 & 93.71 & 6.65 \\ & & AddSent & 91.17 & 69.24 & 92.39 & 7.71 \\ \cline{2-7} & \multirow{2}{*}{IMDB} & AddWord & 92.59 & 87.14 & 93.72 & 5.60 \\ & & AddSent & 89.75 & 88.77 & 92.72 & 6.56 \\ \hline \multirow{3}{*}{RoBERTa\_LARGE} & \multirow{2}{*}{SST-2} & AddWord & 92.03 & 74.98 & 94.15 & 5.84 \\ & & AddSent & 91.77 & 67.05 & 94.83 & 4.20 \\ \cline{1-1} \cline{2-7} & \multirow{2}{*}{IMDB} & AddWord & 92.59 & 75.09 & 94.12 & 3.60 \\ \cline{1-1} & & AddSent & 89.07 & 90.54 & 93.68 & 6.32 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Defense performance comparison with ABLcapture the poisoned data. However, once the honeypot accrues a sufficient volume of poisoned data, surpassing 400 training steps across all datasets, the Attack Success Rate (ASR) can be mitigated to an acceptably low level, i.e., less than 10%. The results further prove that our honeypot can effectively capture backdoor information with a certain amount of optimization. In our main experiments, we set the number of warm-up steps equal to the steps in one epoch, thereby enabling our honeypot to reliably catch the poisoned data.
### Ablation Study on Normalization Method
In this section, we use the SST2 dataset and word-level trigger to understand the impact of different normalization functions. As outlined in Section 4, our approach employs a normalization method to map the training loss weight \(W(x)\) into the \([0,1]\) interval. Within our experiments, we opted for the sign function as the normalization technique. However, we also explored two alternative normalization strategies - the sigmoid function and a cutoff ReLU function. For the latter, we assigned a value of 1 to any input exceeding 1. As depicted in Table 11, we conducted the experiments on RoBERTaBASE using different normalization functions, we can observe that all normalization methods demonstrate decent performance in minimizing the ASR. Notably, we observe that the sign function yields the highest ACC on the original task while simultaneously achieving the lowest ASR.
## Appendix F Extend Honeypot to Computer Vision Tasks
While this paper primarily focuses on defending pretrained language models against backdoor attacks, we also explored the applicability of our proposed honeypot defense method within the computer vision domain [3, 4, 6]. In Section F.1, we illustrate the experimental settings. In Section F.2, we show the empirical findings. In Section F.3, we discuss the defense performance.
Figure 11: Visualization of W(x) during defense training process.
\begin{table}
\begin{tabular}{l|c c c c c c c c c} \hline \hline
**Dataset** & \multicolumn{4}{c}{SST-2} & \multicolumn{4}{c}{IMDB} \\ \hline
**Warm-Up Steps** & 100 & 200 & 400 & 1000 & 2000 & 100 & 200 & 400 & 1000 & 2000 \\ \hline ACC (\(\uparrow\)) & 94.61 & 94.72 & 94.50 & 94.41 & 94.15 & 94.71 & 94.80 & 94.26 & 94.33 & 94.12 \\ ASR (\(\downarrow\)) & 100.00 & 100.00 & 8.64 & 5.37 & 5.84 & 100.00 & 7.62 & 5.32 & 5.79 & 3.60 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Impact of Warm-Up steps
\begin{table}
\begin{tabular}{l|c c} \hline \hline \multirow{2}{*}{**Normalization**} & \multicolumn{2}{c}{**AddWord**} \\ \cline{2-4} & ACC (\(\uparrow\)) & ASR (\(\downarrow\)) \\ \hline No Defense & 94.61\(\pm\)0.60 & 100.00\(\pm\)0.00 \\ Sign & 93.71\(\pm\)0.68 & 6.56\(\pm\)1.91 \\ Sigmoid & 93.22\(\pm\)0.53 & 6.83\(\pm\)2.01 \\ Cutoff Relu & 93.10\(\pm\)0.71 & 6.77\(\pm\)1.04 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Impact of Normalization Method
### Settings
Suppose \(D_{train}=(x_{i},y_{i})\) indicates a benign training dataset where \(x_{i}\in\{0,...,255\}^{C\times W\times H}\) represents an input image with \(C\) channels and \(W\) width and \(H\) height, and \(y_{i}\) corresponds to the associated label. To generate a poisoned dataset, the adversary selects a small set of samples \(D_{sub}\) from the original dataset \(D_{train}\), typically between 1-10%. The adversary then chooses a target misclassification class, \(y_{t}\), and selects a backdoor trigger \(a\) and \(a\in\{0,...,255\}^{C\times W\times H}\). For each instance \((x_{i},y_{i})\) in \(D_{sub}\), a poisoned example \((x^{\prime}_{i},y^{\prime}_{i})\) is created, with \(x^{\prime}_{i}\) being the embedded backdoor trigger of \(x_{i}\) and \(y^{\prime}_{i}=y_{t}\). The trigger embedding process can be formulated as follows,
\[x^{\prime}_{i}=(1-\lambda)\otimes x+\lambda\otimes a, \tag{7}\]
where \(\lambda\in[0,1]^{C\times W\times H}\) is a trigger visibility hyper-parameter and \(\otimes\) specifies the element-wise product operation. The smaller the \(\lambda\), the more invisible the trigger and the more stealthy. The resulting poisoned subset is denoted as \(D^{\prime}sub\). Finally, the adversary substitutes the original \(D_{sub}\) with \(D^{\prime}_{sub}\) to produce \(D_{poison}=(D_{train}-D_{sub})\cup D^{\prime}_{sub}\). By fine-tuning PLMs with the poisoned dataset, the model will learn a backdoor function that establishes a strong correlation between the trigger and the target label \(y_{t}\). Consequently, adversaries can manipulate the model's predictions by adding the backdoor trigger to the inputs, causing instances containing the trigger pattern to be misclassified into the target class \(t\).
In our experiment, we employed an ImageNet pretrained VGG-16 model as our base model and proceeded with experiments using a manipulated CIFAR-10 dataset. The experiments involve the use of a 3 x 3 white square and a black line with a width of 3 pixels as backdoor triggers. The white square trigger is positioned at the bottom-right corner of the image, while the black line trigger is set at the bottom. We set poison rate as 5% and set \(\lambda\in\{0,0.2\}^{C\times W\times H}\) for two attacks. The values of \(\lambda\) corresponding to pixels situated within the trigger area are 0.2, while all others are set to 0.
### Lower Layer Representations from VGG Provide Sufficient Backdoor Information
Drawing on our analysis presented in Section 3, we delve further into understanding the information encapsulated within various layers of a pretrained computer vision model. Inspired by previous classifier probing studies [37; 38], we train a compact classifier using representations derived from different layers of the VGG model. We ensure the VGG model parameters are frozen during this process and only train the probing classifier. In this context, we divided the VGG model into five sections based on the pooling layer operations (The five pooling layers are located at layers 2, 4, 7, 10, and 13). Subsequent to this, we integrate an adaptive pooling layer to reduce the features extracted from different layers to \(7\times 7\), ensuring that the flattened dimension does not exceed 8000. A fully connected layer with softmax activation is added as the final output. As depicted in Figure 13 and Figure 12, it is noticeable that the lower layers of the VGG model hold sufficient information for identifying the backdoor triggers. However, they do not contain enough information to effectively carry out the main tasks.
Figure 12: Learning Dynamic for White Square Trigger
Figure 13: Learning Dynamic for Black Line Trigger
### Defense Results on CIFAR10
We implemented the honeypot as mentioned in Section 4 and built the honeypot module with the features from the first pooling layer. We followed previous sections and adopted the ASR and ACC metrics to measure the model's performance on the poisoned test set and clean test set, respectively. Specifically, we executed a fine-tuning process for a total of 10 epochs, incorporating an initial warmup epoch for the honeypot module. The learning rates for both the honeypot and the principal task are adjusted to a value of \(1\times 10^{-3}\). Additionally, we established the hyperparameter \(q\) for the GCE loss at 0.5, the time window size \(T\) was set to 100, and the threshold value \(c\) was fixed at 0.1. Each experimental setting was subjected to three independent runs and randomly chosen one class as the target class. These runs were also differentiated by employing distinct seed values. The results were then averaged, and the standard deviation was calculated to present a more comprehensive understanding of the performance variability. As the results are shown in Table 12, the proposed method successfully defends two backdoor attacks and reduces the ASR to lower than 10%. This indicates that the proposed method is valid for those simple vision backdoor triggers while having minimal impact on the original task. We plan to test the defense performance of more advanced backdoor triggers in our future work.
## Appendix G Limitations and Discussions
In this study, we introduce an innovative approach to backdoor defense in the context of fine-tuning pretrained language models. Due to the constraints in terms of time and resources, our evaluations were conducted using four prevalent backdoor attack methods and on three representative datasets. Despite the robustness and consistency demonstrated by our method, it is essential to remain vigilant to the emergence of new and potentially threatening attack methods and datasets, especially considering the rapid growth of this field. In addition, it's worth acknowledging that while unintended, some malicious users may exploit our method and deploy other strong backdoor attacks that may bypass our defense system.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**White Square**} & \multicolumn{2}{c}{**Black Line**} \\ \cline{2-5} & ACC (\(\uparrow\)) & ASR (\(\downarrow\)) & ACC (\(\uparrow\)) & ASR (\(\downarrow\)) \\ \hline No Defense & 91.33\(\pm\)0.27 & 100.00\(\pm\)0.00 & 91.28\(\pm\)0.13 & 100.00\(\pm\)0.00 \\ Our Method & 92.20\(\pm\)0.43 & 8.81\(\pm\)1.09 & 92.23\(\pm\)0.37 & 10.81\(\pm\)1.83 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Defense Performance on CIFAR10 | ## Review
### Summary
This paper presents a novel defense mechanism against backdoor attacks in pretrained language models (PLMs) by introducing a honeypot module that absorbs backdoor information while preserving the main task performance. The authors leverage the observation that early layers of PLMs learn poisoned samples more quickly than clean samples, using this insight to dynamically adjust the weights of suspicious samples during training. Comprehensive experimental evaluations demonstrate the effectiveness of the proposed defense across various NLP tasks and attack scenarios. While the idea is innovative, there are concerns regarding its generalizability and lack of comparison with existing defenses.
### Strengths
- The proposed defense mechanism is based on interesting empirical observations regarding the learning dynamics of clean vs. poisoned samples.
- The paper includes extensive experiments across multiple NLP tasks and demonstrates the efficacy of the proposed approach.
- The structure and clarity of the paper are commendable, making it easy to follow.
### Weaknesses
- The generalization of the proposed method to all types of backdoor attacks, particularly label-specific attacks, is not sufficiently addressed.
- There is a lack of empirical comparison with existing defenses like Anti-Backdoor Learning (ABL) and CUBE, which makes it difficult to assess the superiority of the proposed method.
- The threat model assumes a clean PLM, which may not reflect practical scenarios where the PLM itself could be compromised.
### Questions
- How does the observation in Section 3 change if the poison ratio is varied (between 0.1% to 10%)?
- Why does the CE Loss for poisoned samples increase over layers in Figure 1?
- What is the capacity of the honeypot module to absorb backdoor information, and how does it vary with different architectures?
- What justifies the design of the weighted CE loss in Equations 3 and 4?
### Soundness
**Score:** 2
**Description:** 2 = fair; the paper presents a novel idea, but lacks sufficient empirical validation and generalizability to various attack scenarios.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is well-structured and clear, but some statements need more justification.
### Contribution
**Score:** 2
**Description:** 2 = fair; while the contribution is interesting, it lacks novelty when compared to existing defenses that exploit similar observations.
### Rating
**Score:** 5
**Description:** 5 = borderline accept; the paper is technically solid with some concerns regarding evaluation and generalizability, but presents a potentially impactful idea.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an innovative approach to mitigate backdoor attacks in pretrained language models, demonstrating its effectiveness through comprehensive experiments. However, it could benefit from a deeper exploration of its generalizability and comparisons to existing methods. The strengths outweigh the weaknesses, leading to a positive decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Robust Second-Order Nonconvex Optimization and
Its Application to Low Rank Matrix Sensing
Shuyao Li
University of Wisconsin-Madison
[email protected]
&Yu Cheng
Brown University
[email protected]
&Ilias Diakonikolas
University of Wisconsin-Madison
[email protected]
&Jelena Diakonikolas
University of Wisconsin-Madison
[email protected]
&Rong Ge
Duke University
[email protected]
&Stephen Wright
University of Wisconsin-Madison
[email protected]
###### Abstract
Finding an approximate second-order stationary point (SOSP) is a well-studied and fundamental problem in stochastic nonconvex optimization with many applications in machine learning. However, this problem is poorly understood in the presence of outliers, limiting the use of existing nonconvex algorithms in adversarial settings. In this paper, we study the problem of finding SOSPs in the strong contamination model, where a constant fraction of datapoints are arbitrarily corrupted. We introduce a general framework for efficiently finding an approximate SOSP with _dimension-independent_ accuracy guarantees, using \(\widetilde{O}(D^{2}/\epsilon)\) samples where \(D\) is the ambient dimension and \(\epsilon\) is the fraction of corrupted datapoints. As a concrete application of our framework, we apply it to the problem of low rank matrix sensing, developing efficient and provably robust algorithms that can tolerate corruptions in both the sensing matrices and the measurements. In addition, we establish a Statistical Query lower bound providing evidence that the quadratic dependence on \(D\) in the sample complexity is necessary for computationally efficient algorithms.
## 1 Introduction
Learning in the presence of corrupted data is a significant challenge in machine learning (ML) with many applications, including ML security [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 199, 198, 199, 199, 190, 191, 190, 192, 193, 195, 196, 197, 198, 199, 199, 199, 199, 198, 199, 190, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 199, 1999, 199, 199, 1999, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 1999, 199, 1999, 199, 1999, 199, 1999, 1999, 199, 1999, 1999, 199, 1999, 199, 199, 199, 199, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 199, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 19999, 1999, 19999, 1999, 19999, 1999, 19999, 19999, 1999, 1999, 19999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 19999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999,In this paper, we study the general problem of smooth (with Lipschitz gradient and Hessian) stochastic nonconvex optimization \(\min_{x}\tilde{f}(x)\) in the outlier-robust setting, where \(\tilde{f}(x):=\mathbb{E}_{A\sim\mathcal{G}}\,f(x,A)\) and \(\mathcal{G}\) is a possibly unknown distribution of the random parameter \(A\). We will focus on the following standard adversarial contamination model (see, e.g., [2]).
**Definition 1.1** (Strong Contamination Model).: Given a parameter \(0<\epsilon<1/2\) and an inlier distribution \(\mathcal{G}\), an algorithm receives samples from \(\mathcal{G}\) with \(\epsilon\)-contamination as follows: The algorithm first specifies the number of samples \(n\), and \(n\) samples are drawn independently from \(\mathcal{G}\). An adversary is then allowed to inspect these samples, and replace an \(\epsilon\)-fraction of the samples with arbitrary points. This modified set of \(n\) points is said to be \(\epsilon\)-corrupted, which is then given to the algorithm.
The stochastic optimization problem we consider is computationally intractable in full generality -- even without corruption -- if the goal is to obtain globally optimal solutions. At a high level, an achievable goal is to design sample and computationally efficient robust algorithms for finding _locally_ optimal solutions. Prior work [14][2] studied outlier-robust stochastic optimization and obtained efficient algorithms for finding approximate _first-order_ stationary points. While first-order guarantees suffice for convex problems, it is known that in many tractable non-convex problems, first-order stationary points may be bad solutions, but all _second-order_ stationary points (SOSPs) are globally optimal. This motivates us to study the following questions:
_Can we develop a general framework for finding **second-order** stationary points in outlier-robust stochastic optimization?_
_Can we obtain sample and computationally efficient algorithms for outlier-robust versions of tractable nonconvex problems using this framework?_
In this work, we answer both questions affirmatively. We introduce a framework for efficiently finding an approximate SOSP when \(\epsilon\)-fraction of the functions are corrupted and then use our framework to solve the problem of outlier-robust low rank matrix sensing.
In addition to the gradient being zero, a SOSP requires the Hessian matrix to not have negative eigenvalues. The second-order optimality condition is important because it rules out suboptimal solutions such as strict saddle points. It is known that all SOSPs are globally optimal in nonconvex formulations of many important machine learning problems, such as matrix completion [1], matrix sensing [1], phase retrieval [2], phase synchronization [3], dictionary learning [2], and tensor decomposition [4] (see also [13] Chapter 7). However, the properties of SOSPs are highly sensitive to perturbation in the input data. For example, it is possible to create spurious SOSPs for nonconvex formulations of low rank matrix recovery problems, even for a semi-random adversary that can add additional sensing matrices but cannot corrupt the measurements in matrix sensing [4] or an adversary who can only reveal more entries of the ground-truth matrix in matrix completion [4]. Those spurious SOSPs correspond to highly suboptimal solutions.
Finding SOSPs in stochastic nonconvex optimization problems in the presence of arbitrary outliers was largely unaddressed prior to our work. Prior works [14][2][3] obtained efficient and robust algorithms for finding _first-order_ stationary points with dimension-independent accuracy guarantees. These works relied on the following simple idea: Under certain smoothness assumptions, projected gradient descent with an _approximate_ gradient oracle efficiently converges to an _approximate_ first-order stationary point. Moreover, in the outlier-robust setting, approximating the gradient at a specific point amounts to a robust mean estimation problem (for the underlying distribution of the gradients), which can be solved by leveraging existing algorithms for robust mean estimation.
Our work is the first to find approximate SOSPs with dimension-independent errors in outlier-robust settings. Note that in standard non-robust settings, approximate SOSPs can be computed using first-order methods such as perturbed gradient descent [15][16][17]. This strategy might seem extendable to outlier-robust settings through perturbed approximate gradient descent, utilizing robust mean estimation algorithms to approximate gradients. The approach in [16][17] follows this idea, but unfortunately their second-order guarantees scale polynomially with dimension, even under very strong distributional assumptions (e.g., subgaussianity). Our lower bound result provides evidence that approximating SOSPs with dimension-independent error is as hard as approximating _full_ Hessian, suggesting that solely approximating the gradients is not sufficient. On a different note, [15] recently employed robust estimators for both gradient and Hessian in solving certain convex stochastic optimization problems, which has a different focus than ours and does not provide SOSPs with the guarantees that we achieve.
### Our Results and Contributions
The notation we use in this section is defined in Section2. To state our results, we first formally define our generic nonconvex optimization problem. Suppose there is a true distribution over functions \(f:\mathbb{R}^{D}\times\mathcal{A}\to\mathbb{R}\), where \(f(x,A)\) takes an argument \(x\in\mathbb{R}^{D}\) and is parameterized by a random variable \(A\in\mathcal{A}\) drawn from a distribution \(\mathcal{G}\). Our goal is to find an \((\epsilon_{g},\epsilon_{H})\)-approximate SOSP of the function \(\bar{f}(x):=\mathbb{E}_{A\sim\mathcal{G}}\,f(x,A)\).
**Definition 1.2** (\(\epsilon\)-Corrupted Stochastic Optimization).: The algorithm has access to \(n\) functions \((f_{i})_{i=1}^{n}\) generated as follows. First \(n\) random variables \((A_{i})_{i=1}^{n}\) are drawn independently from \(\mathcal{G}\). Then an adversary arbitrarily corrupts an \(\epsilon\) fraction of the \(A_{i}\)'s. Finally, the \(\epsilon\)-corrupted version of \(f_{i}(\cdot)=f(\cdot,A_{i})\) is sent to the algorithm as input. The task is to find an approximate SOSP of the ground-truth average function \(\bar{f}(\cdot):=\mathbb{E}_{A\sim\mathcal{G}}\,f(\cdot,A)\).
**Definition 1.3** (Approximate SOSPs).: A point \(x\) is an \((\epsilon_{g},\epsilon_{H})\)-approximate second-order stationary point (SOSP) of \(\bar{f}\) if \(\left\|\nabla\bar{f}(x)\right\|\leq\epsilon_{g}\) and \(\lambda_{\min}\left(\nabla^{2}\bar{f}(x)\right)\geq-\epsilon_{H}\).
We make the following additional assumptions on \(f\) and \(\mathcal{G}\).
**Assumption 1.4**.: _There exists a bounded region \(\mathcal{B}\) such that the following conditions hold:_
1. _There exists a lower bound_ \(f^{*}>-\infty\) _such that for all_ \(x\in\mathcal{B}\)_,_ \(f(x,A)\geq f^{*}\) _with probability_ \(1\)_._
2. _There exist parameters_ \(L_{D_{g}}\)_,_ \(L_{D_{H}}\)_,_ \(B_{D_{g}}\)_, and_ \(B_{D_{H}}\) _such that, with high probability over the randomness in_ \(A\sim\mathcal{G}\)_, letting_ \(g(x)=f(x,A)\)_, we have that_ \(g(x)\) _is_ \(L_{D_{g}}\)_-gradient Lipschitz and_ \(L_{D_{H}}\)_-Hessian Lipschitz over_ \(\mathcal{B}\)_, and_ \(\left\|\nabla g(x)\right\|\leq B_{D_{g}}\) _and_ \(\left\|\nabla^{2}g(x)\right\|_{F}\leq B_{D_{H}}\) _for all_ \(x\in\mathcal{B}\)_._
3. _There exist parameters_ \(\sigma_{g},\sigma_{H}>0\) _such that for all_ \(x\in\mathcal{B}\)_,_ \[\left\|\mathrm{Cov}_{A\sim\mathcal{G}}(\nabla f(x,A))\right\|_{\mathrm{op}} \leq\sigma_{g}^{2}\text{ and }\left\|\mathrm{Cov}_{A\sim\mathcal{G}}( \mathrm{vec}(\nabla^{2}f(x,A)))\right\|_{\mathrm{op}}\leq\sigma_{H}^{2}.\]
Note that the radius of \(\mathcal{B}\) and the parameters \(L_{D_{g}}\), \(L_{D_{H}}\), \(B_{D_{g}}\), \(B_{D_{H}}\) are all allowed to depend polynomially on \(D\) and \(\epsilon\) (but not on \(x\) and \(A\)).
Our main algorithmic result for \(\epsilon\)-corrupted stochastic optimization is summarized in the following theorem. A formal version of this theorem is stated as Theorem3.1 in Section3.
**Theorem 1.5** (Finding an Outlier-Robust SOSP, informal).: _Suppose \(f\) satisfies Assumption1.4 in a region \(\mathcal{B}\) with parameters \(\sigma_{g}\) and \(\sigma_{H}\). Given an arbitrary initial point \(x_{0}\in\mathcal{B}\) and an \(\epsilon\)-corrupted set of \(n=\widetilde{\Omega}\big{(}D^{2}/\epsilon\big{)}\) functions where \(D\) is the ambient dimension, there exists a polynomial-time algorithm that with high probability outputs an \((O(\sigma_{g}\sqrt{\epsilon})\,,O(\sigma_{H}\sqrt{\epsilon}))\)-approximate SOSP of \(\bar{f}\), provided that all iterates of the algorithm stay inside \(\mathcal{B}\)._
Although the bounded iterate condition in Theorem1.5 appears restrictive, this assumption holds if the objective function satisfies a "dissipativity" property, which is a fairly general phenomenon [11]. Moreover, adding an \(\ell_{2}\)-regularization term enables any Lipschitz function to satisfy the dissipativity property [12]. Section4. As an illustrating example, a simple problem-specific analysis shows that this bounded iterate condition holds for outlier-robust matrix sensing by exploiting the fact that the matrix sensing objective satisfies the dissipativity property.
In this paper, we consider the problem of outlier-robust symmetric low rank matrix sensing, which we formally define below. We focus on the setting with Gaussian design.
**Definition 1.6** (Outlier-Robust Matrix Sensing).: There is an unknown rank-\(r\) ground-truth matrix \(M^{*}\in\mathbb{R}^{d\times d}\) that can be factored into \(U^{*}U^{*}{}^{\top}\) where \(U^{*}\in\mathbb{R}^{d\times r}\). The (clean) sensing matrices \(\{A_{i}\}_{i\in[n]}\) have i.i.d. standard Gaussian entries. The (clean) measurements \(y_{i}\) are obtained as \(y_{i}=\langle A_{i},M^{*}\rangle+\zeta_{i}\), where the noise \(\zeta_{i}\sim\mathcal{N}(0,\sigma^{2})\) is independent from all other randomness. We denote the (clean) data generation process by \((A_{i},y_{i})\sim\mathcal{G}_{\sigma}\). When \(\sigma=0\), we have \(\zeta_{i}=0\) and we write \(\mathcal{G}:=\mathcal{G}_{0}\) for this noiseless (measurement) setting. In outlier robust matrix sensing, an adversary can arbitrarily change any \(\epsilon\)-fraction of the sensing matrices and the corresponding measurements. This corrupted set of \((A_{i},y_{i})\)'s is then given to the algorithm as input, where the goal is to recover \(M^{*}\).
We highlight that in our setting, both the sensing matrices \(A_{i}\in\mathbb{R}^{d\times d}\) and the measurements \(y_{i}\in\mathbb{R}\) can be corrupted, presenting a substantially more challenging problem compared to prior works (e.g., [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 140, 132, 133, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 199, 190, 191, 193, 195, 196, 197, 198, 199, 199, 198, 190, 199, 199, 191, 194, 196, 198, 199, 199, 197, 199, 198, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 199, 1999, 199, 199, 1999, 199, 199, 199, 1999, 199, 1999, 199, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 199, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 19999, 19999, 19999, 19999, 1999,similar to strong convexity but holds only locally). This local regularity condition bounds below a measure of stationarity, which allows us to prove that gradient descent-type updates contract the distance to the closest global optimum under appropriate stepsize. We leverage this local regularity condition to prove that the iterates of the algorithm stay near a global optimum, so that the regularity condition continues to hold, and moreover, the distance between the current solution and the closest global optimum contracts, as long as it is larger than a function of \(\sigma\). Consequently, the distance-dependent component of the gradient and Hessian covariance bound contracts as well, which allows us to obtain more accurate gradient and Hessian estimates. While such a statement may seem evident to readers familiar with linear convergence arguments, we note that proving it is quite challenging, due to the circular dependence between the distance from the current solution to global optima, the inexactness in the gradient and Hessian estimates, and the progress made by our algorithm.
The described distance-contracting argument allows us to control the covariance of the gradient and Hessian, which we utilize to recover \(M^{*}\) exactly when \(\sigma=0\), and recover \(M^{*}\) with error roughly \(O(\sigma\sqrt{\epsilon})\) when \(0\neq\sigma\leq r\Gamma\). We note that the \(\sigma\sqrt{\epsilon}\) appears unavoidable in the \(\sigma\neq 0\) case, due to known limits of robust mean estimation algorithms [1].
SQ lower bound.We exhibit a hard instance of low rank matrix sensing problem to show that quadratic dependence on the dimension in sample complexity is unavoidable for computationally efficient algorithms. Our SQ lower bound proceeds by constructing a family of distributions, corresponding to corruptions of low rank matrix sensing, that are nearly uncorrelated in a well-defined technical sense [11]. To achieve this, we follow the framework of [1] which considered a family of distributions that are rotations of a carefully constructed one-dimensional distribution. The proof builds on [1][2], using a new univariate moment-matching construction which yields a family of corrupted conditional distributions. These induce a family of joint distributions that are SQ-hard to learn.
### Roadmap
Section 2 defines the necessary notation and discusses relevant building blocks of our algorithm and analysis. Section 3 introduces our framework for finding SOSPs in the outlier-robust setting. Section 4 presents how to extend and apply our framework to solve outlier-robust low rank matrix sensing. Section 5 proves that our sample complexity has optimal dimensional dependence for SQ algorithms. Most proofs are deferred to the supplementary material due to space limitations.
## 2 Preliminaries
For an integer \(n\), we use \([n]\) to denote the ordered set \(\{1,2,\ldots,n\}\). We use \([a_{i}]_{i\in\mathcal{I}}\) to denote the matrix whose columns are vectors \(a_{i}\), where \(\mathcal{I}\) is an ordered set. We use \(\mathbbm{1}_{E}(x)\) to denote the indicator function that is equal to \(1\) if \(x\in E\) and \(0\) otherwise. For two functions \(f\) and \(g\), we say \(f=\widetilde{O}(g)\) if \(f=O(g\log^{k}(g))\) for some constant \(k\), and we similarly define \(\widetilde{\Omega}\).
For vectors \(x\) and \(y\), we let \(\left\langle x,y\right\rangle\) denote the inner product \(x^{\top}y\) and \(\left\|x\right\|\) denote the \(\ell_{2}\) norm of \(x\). For \(d\in\mathbb{Z}_{+}\), we use \(I_{d}\) to denote the identity matrix of size \(d\times d\). For matrices \(A\) and \(B\), we use \(\left\|A\right\|_{\mathrm{op}}\) and \(\left\|A\right\|_{F}\) to denote the spectral norm and Frobenius norm of \(A\) respectively. We use \(\lambda_{\max}(A)\) and \(\lambda_{\min}(A)\) to denote the maximum and minimum eigenvalue of \(A\) respectively. We use \(\operatorname{tr}(A)\) to denote the trace of a matrix \(A\). We use \(\left\langle A,B\right\rangle=\operatorname{tr}(A^{\top}B)\) to denote the entry-wise inner product of two matrices of the same dimension. We use \(\operatorname{vec}(A)=[a_{1}^{\top},a_{2}^{\top},\ldots,a_{d}^{\top}]^{\top}\) to denote the canonical flattening of \(A\) into a vector, where \(a_{1},a_{2},\ldots,a_{d}\) are columns of \(A\).
**Definition 2.1** (Lipschitz Continuity).: Let \(\mathcal{X}\) and \(\mathcal{Y}\) be normed vector spaces. A function \(h:\mathcal{X}\rightarrow\mathcal{Y}\) is \(\ell\)-Lipschitz if \(\left\|h(x_{1})-h(x_{2})\right\|_{\mathcal{Y}}\leq\ell\left\|x_{1}-x_{2}\right\| _{\mathcal{X}},\forall x_{1},x_{2}\).
In this paper, when \(\mathcal{Y}\) is a space of matrices, we take \(\left\|\cdot\right\|_{\mathcal{Y}}\) to be the spectral norm \(\left\|\cdot\right\|_{\mathrm{op}}\). When \(\mathcal{X}\) is a space of matrices, we take \(\left\|\cdot\right\|_{\mathcal{X}}\) to be the Frobenius norm \(\left\|\cdot\right\|_{F}\); this essentially views the function \(h\) as operating on the vectorized matrices endowed with the usual \(\ell_{2}\) norm. When \(\mathcal{X}\) or \(\mathcal{Y}\) is the Euclidean space, we take the corresponding norm to be the \(\ell_{2}\) norm.
A Randomized Algorithm with Inexact Gradients and Hessians.We now discuss how to solve the unconstrained nonconvex optimization problem \(\min_{x\in\mathbb{R}^{D}}f(x),\) where \(f(\cdot)\) is a smooth function with Lipschitz gradients and Lipschitz Hessians. The goal of this section is to find an approximate SOSP as defined in Definition 1.3
**Proposition 2.2** ([11]).: _Suppose a function \(f\) is bounded below by \(f^{*}>-\infty,\) has \(L_{g}\)-Lipschitz gradient and \(L_{H}\)-Lipschitz Hessian, and its inexact gradient and Hessian computations \(\widetilde{g}_{t}\) and \(\widetilde{H}_{t}\) satisfy \(\left\lVert\widetilde{g}_{t}-\nabla f(x_{t})\right\rVert\leq\frac{1}{3} \epsilon_{g}\) and \(\left\lVert\widetilde{H}_{t}-\nabla^{2}f(x_{t})\right\rVert_{\mathrm{op}}\leq \frac{2}{9}\epsilon_{H}\). Then there exists an algorithm (Algorithm 1) with the following guarantees:_
1. _(Correctness) If Algorithm_ 1_._ _[_1_]_ _terminates and outputs_ \(x_{n}\)_, then_ \(x_{n}\) _is a_ \((\frac{4}{3}\epsilon_{g},\frac{4}{3}\epsilon_{H})\)_-approximate SOSP._
2. _(Runtime) Algorithm_ 1_._ _[_1_]_ _terminates with probability_ \(1\)_. Let_ \(C_{\epsilon}:=\min\left(\frac{\epsilon_{g}^{2}}{6L_{g}},\frac{2\epsilon_{H}^{2 }}{9L_{H}^{2}}\right)\)_. With probability at least_ \(1-\delta\)_, Algorithm_ 1_._ _[_1_]_ _terminates after_ \(k\) _iterations for_ \[k=O\Big{(}\frac{f(x_{0})-f^{*}}{C_{\epsilon}}+\frac{L_{H}^{2}L_{g}^{2}\epsilon _{g}^{2}}{\epsilon_{H}^{6}}\log\Bigl{(}\frac{1}{\delta}\Bigr{)}\Big{)}\;.\] (1)
The constants \(1/3\) and \(2/9\) are chosen for ease of presentation. For all constructions of Hessian oracles in this paper, we take the straightforward relaxation \(\left\lVert\widetilde{H}_{t}-\nabla^{2}f(x_{t})\right\rVert_{\mathrm{op}}\leq \left\lVert\widetilde{H}_{t}-\nabla^{2}f(x_{t})\right\rVert_{F}\) and upper bound Hessian inexactness using Frobenius norm. Proof of a simplified version of Proposition 2.2 with a weaker high probability bound that is sufficient for our purposes is provided in Appendix A.2 for completeness.
Robust Mean Estimation.Recent algorithms in robust mean estimation give dimension-independent error in the presence of outliers under strong contamination model.
We use the following results, see, e.g., [12], where the upper bound \(\sigma\) on the spectral norm of covariance matrix is unknown to the algorithm.
**Proposition 2.3** (Robust Mean Estimation).: _Fix any \(0<\xi<1\). Let \(S\) be a multiset of \(n=O((k\log k+\log(1/\xi))/\epsilon)\) i.i.d. samples from a distribution on \(\mathbb{R}^{k}\) with mean \(\mu_{S}\) and covariance \(\Sigma\). Let \(T\subset\mathbb{R}^{k}\) be an \(\epsilon\)-corrupted version of \(S\) as in Definition 1.1. There exists an algorithm (Algorithm 2) such that, with probability at least \(1-\xi\), on input \(\epsilon\) and \(T\) (but not \(\left\lVert\Sigma\right\rVert_{\mathrm{op}}\)) returns a vector \(\widehat{\mu}\) in polynomial time so that \(\left\lVert\mu_{S}-\widehat{\mu}\right\rVert=O(\sqrt{\left\lVert\Sigma\right\rVert _{\mathrm{op}}\epsilon})\)._
Algorithm 2 is given in Appendix A.3 Proposition 2.3 states that for Algorithm 2 to succeed with high probability, \(\widetilde{O}(k/\epsilon)\) i.i.d. samples need to be drawn from a \(k\)-dimensional distribution of bounded covariance. State of the art algorithms for robust mean estimation can be implemented in near-linear time, requiring only a logarithmic number of passes on the data, see, e.g. [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 52, 54, 59, 61, 53, 56, 58, 59, 70, 54, 51, 52, 55, 57, 59, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 109, 111, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 140, 132, 133, 134, 135, 136, 137, 138, 139, 141, 142, 139, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 199, 199, 197, 199, 198, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 1999, 199, 199, 199, 1999, 199, 1999, 1999, 1999, 1999, 1999, 1999, 199, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 19999, 1999, 1999, 19999, 1999, 19999, 1999, 1999, 1999, 1999, 19999, 1999, 19999, 1999, 1999, 19999, 19999, 1999, 19999, 19999, 1999, 19999, 1999, 1999, 1999, 1999, 19999, 19999, 19999, 19999, 1999, 19999, 1999, 1999, 19999, 1999, 1999, 19999, 1999, 19999, 19999, 19999, 1999, 1999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 199999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 1999, 19999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 1Then we have the following guarantee:
**Theorem 3.1**.: _Suppose we are given \(\epsilon\)-corrupted set of functions \(\{f_{i}\}_{i=1}^{n}\) for sample size \(n\), generated according to Definition1.2 Suppose Assumption1.4 holds in a bounded region \(\mathcal{B}\subset\mathbb{R}^{D}\) of diameter \(\gamma\) with gradient and Hessian covariance bound \(\sigma_{g}\) and \(\sigma_{H}\) respectively, and we have an arbitrary initialization \(x_{0}\in\mathcal{B}\). Algorithm1.1 initialized at \(x_{0}\) outputs an \((\epsilon_{g},\epsilon_{H})\)-approximate SOSP for a sufficiently large sample with probability at least \(1-\xi\) if the following conditions hold:_
1. _All iterates_ \(x_{t}\) _in Algorithm_1.1_stay inside the bounded region_ \(\mathcal{B}\)_._
2. _For an absolute constant_ \(c>0\)_, it holds that_ \(\sigma_{g}\sqrt{\epsilon}\leq c\epsilon_{g}\) _and_ \(\sigma_{H}\sqrt{\epsilon}\leq c\epsilon_{H}\)_._
_The algorithm uses \(n=\widetilde{O}(D^{2}/\epsilon)\) samples, where \(\widetilde{O}(\cdot)\) hides logarithmic dependence on \(D,\epsilon,L_{D_{y}},L_{D_{H}},B_{D_{y}},B_{D_{H}},\gamma/\sigma_{H},\gamma/ \sigma_{g},\) and \(1/\xi\). The algorithm runs in time polynomial in the above parameters._
Note that we are able to obtain dimension-independent errors \(\epsilon_{g}\) and \(\epsilon_{H}\), provided that \(\sigma_{g}\) and \(\sigma_{H}\) are dimension-independent.
### Low Rank Matrix Sensing Problems
In this section, we study the problem of outlier-robust low rank matrix sensing as formally defined in Definition1.6. We first apply the above framework to obtain an approximate SOSP in Section8.1.2. Then we make use of the approximate SOSP to obtain a solution that is close to the ground-truth matrix \(M^{*}\) in Section8.1.3 this demonstrates the usefulness of approximate SOSPs.
#### 3.1.1 Main results for Robust Low Rank Matrix Sensing
The following are the main results that we obtain in this section:
**Theorem 3.2** (Main Theorem Under Noiseless Measurements).: _Consider the noiseless setting as in Theorem1.7 with \(\sigma=0\). For some sample size \(n=\widetilde{O}((d^{2}r^{2}+dr\log(\Gamma/\xi))/\epsilon)\) and with probability at least \(1-\xi\), there exists an algorithm that outputs a solution that is \(\iota\)-close to \(M^{*}\) in Frobenius norm in \(O(r^{2}\kappa^{3}\log(1/\xi)+\kappa\log(\sigma_{r}^{*}/\iota))\) calls to the robust mean estimation subroutine (Algorithm1.2)._
This result achieves exact recovery of \(M^{*}\), despite the strong contamination of samples. Each iteration involves a subroutine call to robust mean estimation. Algorithm1.2 presented here is one simple example of robust mean estimation; there are refinements [12][11] that run in nearly linear time, so the total computation utilizing those more efficient algorithms indeed requires \(\widetilde{O}\left(r^{2}\kappa^{3}\right)\) passes of data (computed gradients and Hessians).
**Theorem 3.3** (Main Theorem Under Noisy Measurements).: _Consider the same setting as in Theorem1.7 with \(\sigma\neq 0\). There exists a sample size \(n=\widetilde{O}((d^{2}r^{2}+dr\log(\Gamma/\xi))/\epsilon)\) such that_
* _if_ \(\sigma\leq r\Gamma\)_, then with probability at least_ \(1-\xi\)_, there exists an algorithm that outputs a solution_ \(\widehat{M}\) _in_ \(\widetilde{O}(r^{2}\kappa^{3})\) _calls to robust mean estimation routine_ _[_4.2_]_ _with error_ \(\left\|\widehat{M}-M^{*}\right\|_{F}=O(\kappa\sigma\sqrt{\epsilon})\)_;_
* _if_ \(\sigma\geq r\Gamma\)_, then with probability at least_ \(1-\xi\)_, there exists a (different) algorithm that outputs a solution_ \(\widehat{M}\) _in one call to robust mean estimation routine_ _[_4.2_]_ _with error_ \(\left\|\widehat{M}-M^{*}\right\|_{F}=O(\sigma\sqrt{\epsilon})\)_._
We prove Theorem5.3 in AppendixC.4 and instead focus on the noiseless measurements with \(\sigma=0\) when we develop our algorithms in this section; the two share many common techniques. In the remaining part of Section8.1 we use \(\mathcal{G}_{0}\) in Definition1.6 for the data generation process.
We now describe how we obtain the solution via nonconvex optimization. Consider the following objective function for (uncorrupted) matrix sensing:
\[\min_{\begin{subarray}{c}M\in\mathbb{R}^{d\times d}\\ \mathrm{rank}(M)=r\end{subarray}}\frac{1}{2}\operatorname*{\mathbb{E}}_{(A_{ i},y_{i})\sim\mathcal{G}_{0}}(\langle M,A_{i}\rangle-y_{i})^{2}. \tag{2}\]
We can write \(M=UU^{\top}\) for some \(U\in\mathbb{R}^{d\times r}\) to reparameterize the objective function. Let
\[f_{i}(U):=\frac{1}{2}\left(\langle UU^{\top},A_{i}\rangle-y_{i}\right)^{2}. \tag{3}\]We can compute
\[\bar{f}(U):=\mathop{\mathbb{E}}_{(A_{i},y_{i})\sim\mathcal{G}_{0}}f_{i}(U)=\frac{1 }{2}\mathop{\mathrm{Var}}\langle UU^{\top}-M^{*},A_{i}\rangle=\frac{1}{2}\left\| UU^{\top}-M^{*}\right\|_{F}^{2}. \tag{4}\]
We seek to solve the following optimization problem under the corruption model in Definition 1.2
\[\min_{U\in\mathbb{R}^{d\times r}}\bar{f}(U). \tag{5}\]
The gradient Lipschitz constant and Hessian Lipschitz constant of \(\bar{f}\) are given by the following result.
**Fact 3.4** ([17], Lemma 6).: _For any \(\Gamma>\sigma_{1}^{*}\), \(\bar{f}(U)\) has gradient Lipschitz constant \(L_{g}=16\Gamma\) and Hessian Lipschitz constant \(L_{H}=24\Gamma^{\frac{1}{2}}\) inside the region \(\{U:\left\|U\right\|_{\mathrm{op}}^{2}<\Gamma\}\)._
#### 3.1.2 Global Convergence to an Approximate SOSP
In this section, we apply our framework Theorem 6.1 to obtain global convergence from an arbitrary initialization to an approximate SOSP, by providing problem-specific analysis to guarantee that both Assumption 1.4 and algorithmic assumptions (I) and (II) required by Theorem 6.1 are satisfied.
**Theorem 3.5** (Global Convergence to a SOSP).: _Consider the noiseless setting as in Theorem 1.7 with \(\sigma=0\) and \(\epsilon=O(1/(\kappa^{3}r^{3}))\). Assume we have an arbitrary initialization \(U_{0}\) inside \(\{U:\left\|U\right\|_{\mathrm{op}}^{2}\leq\Gamma\}\). There exists a sample size \(n=\widetilde{O}\left((d^{2}r^{2}+dr\log(\Gamma/\xi))/\epsilon\right)\) such that with probability at least \(1-\xi\), Algorithm 1.1 initialized at \(U_{0}\) outputs a \((\frac{1}{24}\sigma_{r}^{3/2},\frac{1}{3}\sigma_{r}^{*})\)-approximate SOSP using at most \(O\big{(}r^{2}\kappa^{3}\log(1/\xi)\big{)}\) calls to robust mean estimation subroutine (Algorithm 2.2)._
Proof of Theorem 3.5.: To apply Theorem 3.1 we verify Assumption 1.4 first. To verify (i), for all \(U\) and \(A_{i}\), \(f_{i}(U)=\frac{1}{2}\left(\langle UU^{\top},A_{i}\rangle-y_{i}\right)^{2}\geq 0\), so \(f^{*}=0\) is a uniform lower bound. We verify (ii) in Appendix C.2 conceptually, by Fact 3.4\(\bar{f}\) is gradient and Hessian Lipschitz; both gradient and Hessian of \(f_{i}\) are sub-exponential and concentrate around those of \(\bar{f}\). To check (iii), we calculate the gradients and Hessians of \(f_{i}\) in Appendix C.1.1 and bound their covariances from above in Appendix C.1.2 and 1.3. The result is summarized in the following lemma. Note that the domain of the target function in Algorithm 1.1 and Theorem 9.1 is the Euclidean space \(\mathbb{R}^{D}\), so we vectorize \(U\) and let \(D=dr\). The gradient becomes a vector in \(\mathbb{R}^{dr}\) and the Hessian becomes a matrix in \(\mathbb{R}^{dr\times dr}\).
**Lemma 3.6** (Gradient and Hessian Covariance Bounds).: _For all \(U\in\mathbb{R}^{d\times r}\) with \(\left\|U\right\|_{\mathrm{op}}^{2}\leq\Gamma\) and \(f_{i}\) defined in Equation 3.1, it holds_
\[\left\|\mathrm{Cov}(\mathrm{vec}(\nabla f_{i}(U)))\right\|_{ \mathrm{op}} \leq 8\left\|UU^{\top}-M^{*}\right\|_{F}^{2}\left\|U\right\|_{ \mathrm{op}}^{2}\leq 32r^{2}\Gamma^{3} \tag{6}\] \[\left\|\mathrm{Cov}(\mathrm{vec}(H_{i}))\right\|_{\mathrm{op}} \leq 16r\left\|UU^{\top}-M^{*}\right\|_{F}^{2}+128\left\|U\right\|_{ \mathrm{op}}^{4}\leq 192r^{3}\Gamma^{2} \tag{7}\]
We proceed to verify the algorithmic assumptions in Theorem 9.1.1 For the assumption (I), we prove the following Lemma in Appendix C.2 to show that all iterates stay inside the bounded region in which we compute the covariance bounds.
**Lemma 3.7**.: _All iterates of Algorithm 1.1 stay inside the region \(\{U:\left\|U\right\|_{\mathrm{op}}^{2}\leq\Gamma\}\)._
To verify Theorem 9.1.1(II), we let \(\epsilon_{g}=\frac{1}{32}\sigma_{r}^{*3/2},\epsilon_{H}=\frac{1}{4}\sigma_{r} ^{*}\) and \(\sigma_{g}=8r\Gamma^{1.5},\sigma_{H}=16r^{1.5}\Gamma\). So if we assume \(\epsilon=O(\overline{1/(\kappa^{3}r^{3})})\), then for the absolute constant \(c\) in Theorem 9.1 it holds that
\[\sigma_{g}\sqrt{\epsilon}\leq c\epsilon_{g}\qquad\sigma_{H}\sqrt{\epsilon}\leq c \epsilon_{H}.\]
Hence, Theorem 9.1.1 applies and Algorithm 1.1 outputs an \((\epsilon_{g},\epsilon_{H})\)-approximate SOSP with high probability in polynomial time. To bound the runtime, since \(\bar{f}(U_{0})=1/2\left\|U_{0}U_{0}^{\top}-M^{*}\right\|_{F}^{2}=O(r^{2} \Gamma^{2})\) for an arbitrary initialization \(U_{0}\) with \(\left\|U_{0}\right\|_{\mathrm{op}}^{2}<\Gamma\), the initial distance can be bounded by \(O(r^{2}\Gamma^{2})\). Setting \(L_{g}=16\Gamma,L_{H}=24\Gamma^{1/2},\bar{f}(U_{0})=O(r^{2}\Gamma^{2}),f^{*}=0\) and thus \(C_{\epsilon}=O(\sigma_{r}^{*3}/\Gamma)\), Proposition 2.2 implies that Algorithm 1.1 outputs a \((\frac{1}{24}\sigma_{r}^{*3/2},\frac{1}{3}\sigma_{r}^{*})\)-approximate second order stationary point \(U_{SOSP}\) in \(O(r^{2}\kappa^{3}\log(1/\xi))\) steps with high probability.
#### 3.1.3 Local Linear Convergence
In this section, we describe a local search algorithm that takes a \((\frac{1}{2d}\sigma_{r}^{*3/2},\frac{1}{3}\sigma_{r}^{\star})\)-approximate second-order stationary point as its initialization and achieves exact recovery even in the presence of outliers.
```
Data: The initialization \(U_{SOSP}\) is a \((\frac{1}{3d}\sigma_{r}^{*3/2},\frac{1}{3}\sigma_{r}^{\star})\)-approximate SOSP, corruption fraction is \(\epsilon\), corrupted samples are \(\{(A_{i},y_{i})\}_{i=1}^{n}\), target distance to optima is \(\iota\) Result:\(U\) that is \(\iota\)-close in Frobenius norm to some global minimum
1\(\eta=1/\Gamma\), \(U_{0}=U_{SOSP}\)
2fort =0, 1,...do
3\(\widehat{g}_{t}:=\mathbf{RobustMeanEstimation}(\{\nabla f_{i}(U_{t})\}_{i=1}^{n}, 4\epsilon)\)
4\(U_{t+1}\gets U_{t}-\eta\widehat{g}_{t}\)
```
**Algorithm 3.1**Local Inexact Gradient Descent
**Theorem 3.8** (Local Linear Convergence).: _Consider the same noiseless setting as in Theorem [1]. Assume we already found a \((\frac{1}{2}\sigma_{r}^{*3/2},\frac{1}{3}\sigma_{r}^{\star})\)-approximate SOSP \(U_{SOSP}\) of \(\bar{f}\). Then there exists a sample size \(n=\widetilde{O}\left(dr\log(1/\xi)/\epsilon\right)\) such that with probability at least \(1-\xi\), Algorithm [3] initialized at \(U_{SOSP}\) outputs a solution that is \(\iota\)-close to some global minimum in Frobenius norm after \(O(\kappa\log(\sigma_{r}^{\star}/\iota))\) calls to robust mean estimation subroutine (Algorithm [4]. Moreover, all iterates \(U_{t}\) are \(\frac{1}{3}\sigma_{r}^{*1/2}\)-close to some global minimum in Frobenius norm._
Proof Sketch.: First we use known properties of \(\bar{f}=\mathbb{E}_{(A_{i},y_{i})\sim\mathcal{G}_{0}}\,f_{i}\) from the literature [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 199, 199, 198, 199, 199, 199, 190, 191, 190, 192, 193, 194, 195, 196, 197, 198, 199, 199, 199, 199, 198, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199,responds to the query \(q\) with a value \(v\) such that \(|v-\mathbb{E}_{X\sim\mathcal{P}}[q(X)]|\leq\tau\). An SQ algorithm is an algorithm whose objective is to learn some information about an unknown distribution \(\mathcal{P}\) by making adaptive calls to the corresponding \(\mathsf{STAT}(\tau)\) oracle.
In this section, we consider \(\mathcal{P}\) as the unknown corrupted distribution where \((A_{i},y_{i})\) are drawn. The SQ algorithm tries to learn the ground truth matrix \(M^{*}\) from this corrupted distribution; the goal of the lower bound result is to show that this is hard.
The SQ model has the capability to implement a diverse set of algorithmic techniques in machine learning such as spectral techniques, moment and tensor methods, local search (e.g., Expectation Maximization), and several others [11]. A lower bound on the SQ complexity of a problem provides evidence of hardness for the problem. [12] established that (under certain assumptions) an SQ lower bound also implies a qualitatively similar lower bound in the low-degree polynomial testing model. This connection can be used to show a similar lower bound for low-degree polynomials.
Our main result here is a near-optimal SQ lower bound for robust low rank matrix sensing that applies even for rank \(r=1\), i.e., when the ground truth matrix is \(M^{*}=uu^{\top}\) for some \(u\in\mathbb{R}^{d}\). The choice of rank \(r=1\) yields the strongest possible lower bound in our setting because it is the easiest parameter regime: Recall that the sample complexity of our algorithm is \(\widetilde{O}(d^{2}r^{2})\) as in Theorems 3.2 and 3.3 and the main message of our SQ lower bound is to provide evidence that the \(d^{2}\) factor is necessary for computationally efficient algorithms _even if_\(r=1\).
**Theorem 4.2** (SQ Lower Bound for Robust Rank-One Matrix Sensing).: _Let \(\epsilon\in(0,1/2)\) be the fraction of corruptions and let \(c\in(0,1/2)\). Assume the dimension \(d\in\mathbb{N}\) is sufficiently large. Consider the \(\epsilon\)-corrupted rank-one matrix sensing problem with ground-truth matrix \(M^{*}=uu^{\top}\) and noise \(\sigma^{2}=O(1)\). Any SQ algorithm that outputs \(\widehat{u}\) with \(\|\widehat{u}-u\|=O(\epsilon^{1/4})\) either requires \(2^{\Omega(d^{\epsilon})}/d^{2-4c}\) queries or makes at least one query to \(\mathsf{STAT}\left(e^{O(1/\sqrt{\epsilon})}/O\left(d^{1-2c}\right)\right)\)._
In other words, we show that, when provided with SQ access to an \(\epsilon\)-corrupted distribution, approximating \(u\) is impossible unless employing a statistical query of higher precision than what can be achieved with a strictly sub-quadratic number (e.g., \(d^{1.99}\)) of samples. Note that the SQ oracle \(\mathsf{STAT}(e^{O\left(1/\sqrt{\epsilon}\right)}/O(d^{1-2c}))\) can be simulated with \(O(d^{2-4c})/e^{O(1/\epsilon)}\) samples, and this bound is tight in general. Informally speaking, this theorem implies that improving the sample complexity from \(d^{2}\) to \(d^{2-4c}\) requires exponentially many queries. This result can be viewed as a near-optimal information-computation tradeoff for the problem, within the class of SQ algorithms.
The proof follows a similar analysis as in [13], using one-dimensional moment matching to construct a family of corrupted conditional distributions, which induce a family of corrupted joint distributions that are SQ-hard to learn. We provide the details of the proof in Appendix A Apart from the formal proof, in Appendix E we also informally discuss the intuition for why some simple algorithms that require \(O(d)\) samples do not provide dimension-independent error guarantees.
## Acknowledgements
Shuyao Li was supported in part by NSF Awards DMS-2023239, NSF Award CCF-2007757 and the U. S. Office of Naval Research under award number N00014-22-1-2348. Yu Cheng was supported in part by NSF Award CCF-2307106. Ilias Diakonikolas was supported in part by NSF Medium Award CCF-2107079, NSF Award CCF-1652862 (CAREER), a Sloan Research Fellowship, and a DARPA Learning with Less Labels (LwLL) grant. Jelena Diakonikolas was supported in part by NSF Award CCF-2007757 and by the U. S. Office of Naval Research under award number N00014-22-1-2348. Rong Ge was supported in part by NSF Award DMS-2031849, CCF-1845171 (CAREER) and a Sloan Research Fellowship. Stephen Wright was supported in part by NSF Awards DMS-2023239 and CCF-2224213 and AFOSR via subcontract UTA20-001224 from UT-Austin.
## References
* [Bar+10] M. Barreno, B. Nelson, A. D. Joseph, and J. D. Tygar. "The security of machine learning". In: _Machine Learning_ 81.2 (2010), pp. 121-148.
* [BBV16] A. S. Bandeira, N. Boumal, and V. Voroninski. "On the low-rank approach for semidefinite programs arising in synchronization and community detection". In: _Conference on learning theory_. PMLR. 2016, pp. 361-382.
* [Ber+22] E. H. Bergou, Y. Diouane, V. Kunc, V. Kungurtsev, and C. W. Royer. "A subsampling line-search method with second-order results". In: _INFORMS Journal on Optimization_ 4.4 (2022), pp. 403-425.
* [BNL12] B. Biggio, B. Nelson, and P. Laskov. "Poisoning Attacks against Support Vector Machines". In: _Proceedings of the 29th International Coference on International Conference on Machine Learning_. Omnipress, 2012, pp. 1467-1474.
* [BNS16] S. Bhojanapalli, B. Neyshabur, and N. Srebro. "Global optimality of local search for low rank matrix recovery". In: _Advances in Neural Information Processing Systems_. 2016, pp. 3873-3881.
* [Bre+21] M. S. Brennan, G. Bresler, S. Hopkins, J. Li, and T. Schramm. "Statistical Query Algorithms and Low Degree Tests Are Almost Equivalent". In: _Proceedings of Thirty Fourth Conference on Learning Theory_. Ed. by M. Belkin and S. Kpotufe. Vol. 134. Proceedings of Machine Learning Research. PMLR, 15-19 Aug 2021, pp. 774-774.
* [CDG19] Y. Cheng, I. Diakonikolas, and R. Ge. "High-Dimensional Robust Mean Estimation in Nearly-Linear Time". In: _Proceedings of the 30th ACM-SIAM Symposium on Discrete Algorithms (SODA)_. SIAM, 2019, pp. 2755-2771.
* [CG18] Y. Cheng and R. Ge. "Non-convex matrix completion against a semi-random adversary". In: _Conference On Learning Theory_. PMLR. 2018, pp. 1362-1394.
* [DHL19] Y. Dong, S. Hopkins, and J. Li. "Quantum entropy scoring for fast robust mean estimation and improved outlier detection". In: _Advances in Neural Information Processing Systems_ 32 (2019).
* [Dia+16] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. "Robust estimators in high dimensions without the computational intractability". In: _57th Annual IEEE Symposium on Foundations of Computer Science--FOCS 2016_. IEEE Computer Soc., Los Alamitos, CA, 2016, pp. 655-664.
* [Dia+17] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. "Being Robust (in High Dimensions) Can Be Practical". In: _Proceedings of the 34th International Conference on Machine Learning_. Ed. by D. Precup and Y. W. Teh. Vol. 70. Proceedings of Machine Learning Research. PMLR, June 2017, pp. 999-1008.
* [Dia+19] I. Diakonikolas, G. Kamath, D. Kane, J. Li, J. Steinhardt, and A. Stewart. "Sever: A Robust Meta-Algorithm for Stochastic Optimization". In: _Proceedings of the 36th International Conference on Machine Learning_. Ed. by K. Chaudhuri and R. Salakhutdinov. Vol. 97. Proceedings of Machine Learning Research. PMLR, Sept. 2019, pp. 1596-1606.
* [Dia+21] I. Diakonikolas, D. Kane, A. Pensia, T. Pittas, and A. Stewart. "Statistical Query Lower Bounds for List-Decodable Linear Regression". In: _Advances in Neural Information Processing Systems_. Ed. by M. Ramzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan. Vol. 34. Curran Associates, Inc., 2021, pp. 3191-3204.
* [Dia+22] I. Diakonikolas, D. M. Kane, A. Pensia, and T. Pittas. "Streaming Algorithms for High-Dimensional Robust Statistics". In: _Proceedings of the 39th International Conference on Machine Learning_. Ed. by K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato. Vol. 162. Proceedings of Machine Learning Research. PMLR, July 2022, pp. 5061-5117.
* [DK19] I. Diakonikolas and D. M. Kane. "Recent advances in algorithmic high-dimensional robust statistics". In: _arXiv preprint arXiv:1911.05911_ (2019).
* [DK23] I. Diakonikolas and D. M. Kane. _Algorithmic High-Dimensional Robust Statistics_. Cambridge University Press, 2023.
* [DKP20] I. Diakonikolas, D. M. Kane, and A. Pensia. "Outlier Robust Mean Estimation with Subgaussian Rates via Stability". In: _Advances in Neural Information Processing Systems_. Ed. by H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin. Vol. 33. Curran Associates, Inc., 2020, pp. 1830-1840.
* [DKS17] I. Diakonikolas, D. M. Kane, and A. Stewart. "Statistical Query Lower Bounds for Robust Estimation of High-Dimensional Gaussians and Gaussian Mixtures". In: _2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS)_. 2017, pp. 73-84.
* [DKS19] I. Diakonikolas, W. Kong, and A. Stewart. "Efficient algorithms and lower bounds for robust linear regression". In: _Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms_. SIAM, Philadelphia, PA, 2019, pp. 2745-2754.
* [Dur19] R. Durrett. _Probability--theory and examples_. Vol. 49. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2019.
* [Fel+17] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. "Statistical Algorithms and a Lower Bound for Detecting Planted Cliques". In: _J. ACM_ 64.2 (2017), 8:1-8:37.
* [Gao20] C. Gao. "Robust regression via mutivariate regression depth". In: _Bernoulli_ 26.2 (2020), pp. 1139-1170.
* [GC23] X. Gao and Y. Cheng. "Robust Matrix Sensing in the Semi-Random Model". In: _Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS)_ (2023).
* [Ge+15] R. Ge, F. Huang, C. Jin, and Y. Yuan. "Escaping from saddle points--online stochastic gradient for tensor decomposition". In: _Conference on Learning Theory_. 2015, pp. 797-842.
* [GJZ17] R. Ge, C. Jin, and Y. Zheng. "No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis". In: _Proceedings of the 34th International Conference on Machine Learning_. Ed. by D. Precup and Y. W. Teh. Vol. 70. Proceedings of Machine Learning Research. PMLR, June 2017, pp. 1233-1242.
* [GLM16] R. Ge, J. D. Lee, and T. Ma. "Matrix completion has no spurious local minimum". In: _Advances in Neural Information Processing Systems_. 2016, pp. 2973-2981.
* [Hal10] J. K. Hale. _Asymptotic behavior of dissipative systems_. 25. American Mathematical Soc., 2010.
* [Ham+86] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel. _Robust statistics. The approach based on influence functions_. Wiley New York, 1986.
* [HR09] P. J. Huber and E. M. Ronchetti. _Robust statistics_. Wiley New York, 2009.
* [Hub64] P. J. Huber. "Robust estimation of a location parameter". In: _Annals of Mathematical Statistics_ 35 (1964), pp. 73-101.
* [IPL23] E. Ioannou, M. S. Pydi, and P.-L. Loh. "Robust empirical risk minimization via Newton's method". In: _arXiv preprint arXiv:2301.13192_ (2023).
* [Jin+17] C. Jin, R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan. "How to Escape Saddle Points Efficiently". In: _Proceedings of the 34th International Conference on Machine Learning_. Ed. by D. Precup and Y. W. Teh. Vol. 70. Proceedings of Machine Learning Research. PMLR, June 2017, pp. 1724-1732.
* [Jin+21] C. Jin, P. Netrapalli, R. Ge, S. M. Kakade, and M. I. Jordan. "On nonconvex optimization for machine learning: gradients, stochasticity, and saddle points". In: _Journal of the ACM_ 68.2 (2021), Art. 11, 29.
* [Kea98] M. J. Kearns. "Efficient noise-tolerant Learning from Statistical Queries". In: _Journal of the ACM_ 45.6 (1998), pp. 983-1006.
* [Li+08] J. Li, D. Absher, H. Tang, A. Southwick, A. Casto, S. Ramachandran, H. Cann, G. Barsh, M. Feldman, L. Cavalli-Sforza, and R. Myers. "Worldwide human relationships inferred from genome-wide patterns of variation". In: _Science_ 319 (2008), pp. 1100-1104.
* [Li+20a] X. Li, Z. Zhu, A. Man-Cho So, and R. Vidal. "Nonconvex robust low-rank matrix recovery". In: _SIAM Journal on Optimization_ 30.1 (2020), pp. 660-686.
* [Li+20b] Y. Li, Y. Chi, H. Zhang, and Y. Liang. "Non-convex low-rank matrix recovery with arbitrary outliers via median-truncated gradient descent". In: _Information and Inference: A Journal of the IMA_ 9.2 (2020), pp. 289-325.
* [LRV16] K. A. Lai, A. B. Rao, and S. Vempala. "Agnostic Estimation of Mean and Covariance". In: _focs2016_. 2016, pp. 665-674.
* [LW23] S. Li and S. J. Wright. "A randomized algorithm for nonconvex minimization with inexact evaluations and complexity guarantees". In: _arXiv preprint arXiv:2310.18841_ (2023).
* [Pas+10] P. Paschou, J. Lewis, A. Javed, and P. Drineas. "Ancestry Informative Markers for Fine-Scale Individual Assignment to Worldwide Populations". In: _Journal of Medical Genetics_ 47 (2010), pp. 835-847.
* [Pra+20] A. Prasad, A. S. Suggala, S. Balakrishnan, and P. Ravikumar. "Robust estimation via robust gradient estimation". In: _Journal of the Royal Statistical Society. Series B. Statistical Methodology_ 82.3 (2020), pp. 601-627.
* [Ros+02] N. Rosenberg, J. Pritchard, J. Weber, H. Cann, K. Kidd, L. Zhivotovsky, and M. Feldman. "Genetic structure of human populations". In: _Science_ 298 (2002), pp. 2381-2385.
* [RRT17] M. Raginsky, A. Rakhlin, and M. Telgarsky. "Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis". In: _Proceedings of the 2017 Conference on Learning Theory_. Ed. by S. Kale and O. Shamir. Vol. 65. Proceedings of Machine Learning Research. PMLR, July 2017, pp. 1674-1703.
* [SKL17] J. Steinhardt, P. W. Koh, and P. S. Liang. "Certified Defenses for Data Poisoning Attacks". In: _Advances in Neural Information Processing Systems 30_. 2017, pp. 3520-3532.
* [SQW16] J. Sun, Q. Qu, and J. Wright. "A geometric analysis of phase retrieval". In: _Information Theory (ISIT), 2016 IEEE International Symposium on_. IEEE. 2016, pp. 2379-2383.
* [SQW17] J. Sun, Q. Qu, and J. Wright. "Complete Dictionary Recovery Over the Sphere I: Overview and the Geometric Picture". In: _IEEE Trans. Inf. Theor._ 63.2 (Feb. 2017), pp. 853-884.
* [Tuk75] J. Tukey. "Mathematics and picturing of data". In: _Proceedings of ICM_. Vol. 6. 1975, pp. 523-531.
* [WM22] J. Wright and Y. Ma. _High-dimensional data analysis with low-dimensional models: Principles, computation, and applications_. Cambridge University Press, 2022.
* [Yin+19] D. Yin, Y. Chen, R. Kannan, and P. Bartlett. "Defending against saddle point attack in Byzantine-robust distributed learning". In: _International Conference on Machine Learning_. PMLR. 2019, pp. 7074-7084. | ## Review
### Summary
This paper addresses the problem of finding approximate second-order stationary points in the presence of outliers in stochastic optimization problems. It introduces a novel algorithm that achieves provable guarantees and applies it specifically to the robust low-rank matrix sensing problem. The algorithm demonstrates dimension-independent accuracy and provides a statistical query lower bound for rank one matrix sensing. Despite the theoretical contributions, the paper has been noted for insufficient numerical evaluations. The reviewers appreciate the clarity of the writing and theoretical backing but suggest improvements in addressing certain counterintuitive results and providing practical evaluations.
### Strengths
- The paper proposes a general result for finding approximate second-order stationary points (SOSP) and applies it to a widely relevant problem in low-rank matrix sensing.
- The algorithm is capable of finding an approximate SOSP in polynomial time, even under a constant proportion of sample corruption.
- The theoretical guarantees are well-supported and offer strong results for finding second-order stationary points in adversarial settings.
- The writing is clear, and the approach is well supported by theoretical analysis.
### Weaknesses
- The results suggest that increasing the sample size does not enhance performance, which is counterintuitive and needs clarification.
- There are concerns about the paper being a combination of previous works without sufficient new theoretical contributions.
- The presentation could be improved by including more details on the algorithms and theoretical results.
- The assumptions in the theorems are seen as restrictive and may not generalize well to other nonconvex optimization problems.
### Questions
- How do the results differ from existing literature on robust matrix sensing?
- Could the analysis be extended to matrix completion?
- What are the practical implications of requiring knowledge of a multiplicative upper bound of certain parameters?
- Can the authors clarify the motivation for using second-order methods over first-order methods in high-dimensional settings?
### Soundness
**Score:** 3
**Description:** Good; the theoretical contributions are sound but require clarification on certain counterintuitive results.
### Presentation
**Score:** 3
**Description:** Good; the paper is generally well-written but could benefit from improved clarity in presenting technical details.
### Contribution
**Score:** 3
**Description:** Good; the contributions are significant in the context of the problem addressed, though questions about originality remain.
### Rating
**Score:** 6
**Description:** Weak Accept; the paper is technically solid and has moderate-to-high impact potential, but it requires some improvements and clarifications.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents original theoretical contributions in the area of robust stochastic optimization, demonstrating a clear understanding of the problem. While the paper's soundness and contribution are generally good, some aspects regarding originality and practical evaluations are less clear. Addressing the weaknesses and questions raised by reviewers will strengthen the paper further.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Efficient Online Clustering with Moving Costs
Dimitris Christou
UT Austin
[email protected]
First author contribution.
&Stratis Skoulakis
LIONS, EPFL
[email protected]
&Volkan Cevher
LIONS, EPFL
[email protected]
###### Abstract
In this work we consider an online learning problem, called _Online \(k\)-Clustering with Moving Costs_, at which a _learner_ maintains a set of \(k\) facilities over \(T\) rounds so as to minimize the connection cost of an adversarially selected sequence of clients. The _learner_ is informed on the positions of the clients at each round \(t\) only after its facility-selection and can use this information to update its decision in the next round. However, updating the facility positions comes with an additional moving cost based on the moving distance of the facilities. We present the first \(\mathcal{O}(\log n)\)-regret polynomial-time online learning algorithm guaranteeing that the overall cost (connection \(+\) moving) is at most \(\mathcal{O}(\log n)\) times the time-averaged connection cost of the _best fixed solution_. Our work improves on the recent result of Fotakis et al. [31] establishing \(\mathcal{O}(k)\)-regret guarantees _only_ on the connection cost.
## 1 Introduction
Due to their various applications in diverse fields (e.g. machine learning, operational research, data science etc.), _clustering problems_ have been extensively studied. In the well-studied \(k\)-median problem, given a set of clients, \(k\) facilities should be placed on a metric with the objective to minimize the sum of the distance of each client from its closest center [55, 14, 13, 67, 6, 44, 52, 65, 51, 15, 54, 3].
In many modern applications (e.g., epidemiology, social media, conference, etc.) the positions of the clients are not _static_ but rather _evolve over time_[57, 56, 64, 59, 23, 5]. For example the geographic distribution of the clients of an online store or the distribution of Covid-19 cases may drastically change from year to year or respectively from day to day [31]. In such settings it is desirable to update/change the positions of the facilities (e.g., compositions of warehouses or Covid test-units) so as to better serve the time-evolving trajectory of the clients.
The clients' positions may change in complex and unpredictable ways and thus an _a priori knowledge_ on their trajectory is not always available. Motivated by this, a recent line of research studies clustering problems under the _online learning framework_ by assuming that the sequence of clients' positions is _unknown_ and _adversarially selected_[18, 28, 16, 31]. More precisely, a _learner_ must place \(k\) facilities at each round \(t\geq 1\) without knowing the positions of clients at round \(t\) which are revealed to the learner only after its facility-selection. The learner can use this information to update its decision in the next round; however, moving a facility comes with an additional moving cost that should be taken into account in the learner's updating decision, e.g. moving Covid-19 test-units comes with a cost [18, 28].
Building on this line of works, we consider the following online learning problem:
**Problem 1** (_Online \(k\)-Clustering with Moving Costs_).: _Let \(G(V,E,w)\) be a weighted graph with \(|V|=n\) vertices and \(k\) facilities. At each round \(t=1,\ldots,T\):_
1. _The learner selects_ \(F_{t}\subseteq V\)_, with_ \(|F_{t}|=k\)_, at which facilities are placed._2. _The adversary selects the clients' positions,_ \(R_{t}\subseteq V\)_._
3. _The learner learns the clients' positions_ \(R_{t}\) _and suffers_ \[\text{cost}=\sum_{j\in R_{t}}\ \min_{\begin{subarray}{c}i\in F_{t}\\ \text{connection cost of client }j\end{subarray}}d_{G}(j,i)\] \[+\underbrace{\gamma\cdot M_{G}(F_{t-1},F_{t})}_{\text{moving cost of facilities}}\]
_where \(d_{G}(j,i)\) is the distance between vertices \(i,j\in V\); \(M_{G}(F_{t-1},F_{t})\) is the minimum overall distance required to move \(k\) facilities from \(F_{t-1}\) to \(F_{t}\); and \(\gamma\geq 0\) is the facility-weight._
An _online learning algorithm_ for Problem 1 tries to minimize the overall (connection \(+\) moving) cost by placing \(k\) facilities at each round \(t\geq 1\) based only on the previous positions of clients \(R_{1},\ldots,R_{t-1}\). To the best of our knowledge, Problem 1 was first introduced in [18]2. If for any sequence of clients, the overall cost of the algorithm is at most \(\alpha\) times the overall connection cost of the _optimal fixed placement of facilities \(F^{*}\)_ then the algorithm is called \(\alpha\)-regret, while in the special case of \(\alpha=1\) the algorithm is additionally called _no-regret_.
Footnote 2: In [18], an easier version of Problem 1 with \(1\)-_lookahead_ is considered, meaning that the learner learns the positions of the clients \(R_{t}\) before selecting \(F_{t}\). Moreover, \(G\) is considered to be the line graph and \(\gamma=1\).
Problem 1 comes as a special case of the well-studied _Metzical Task System_ by considering each of the possible \(\binom{n}{k}\) facility placements as a different state. In their seminal work, [11] guarantee that the famous _Multiplicative Weights Update algorithm_ (\(\mathrm{MWU}\)) achieves \((1+\epsilon)\)-regret in Problem 1 for any \(\epsilon>0\). Unfortunately, running the \(\mathrm{MWU}\) algorithm for Problem 1 is not really an option since it requires \(\mathcal{O}(n^{k})\) time and space complexity. As a result, the following question naturally arises:
**Q.**_Can we achieve \(\alpha\)-regret for Problem 1 with polynomial-time online learning algorithms?_
Answering the above question is a challenging task. Even in the very simple scenario of time-invariant clients, i.e. \(R_{t}=R\) for all \(t\geq 1\), an \(\alpha\)-regret online learning algorithm must essentially compute an \(\alpha\)-_approximate solution_ of the \(k\)-\(\mathrm{median}\) problem. Unfortunately the \(k\)-\(\mathrm{median}\) problem cannot be approximated with ratio \(\alpha<1+2/e\simeq 1.71\) (unless \(\mathrm{NP}\subseteq\mathrm{DTIME}[n^{\log\log n}]\)[43]) which excludes the existence of an \((1+2/e)\)-regret polynomial-time online learning algorithm for Problem 1. Despite the fact that many \(\mathcal{O}(1)\)-approximation algorithms have been proposed for the \(k\)-median problem (the best current ratio is \(1+\sqrt{3}\)[54]), these algorithms crucially rely on the (offline) knowledge of the whole sequence of clients and most importantly are not designed to handle the moving cost of the facilities [55, 14, 13, 67, 6, 44, 52, 65, 51, 15, 54, 3].
In their recent work, Fotakis et al. [31] propose an \(\mathcal{O}(k)\)-regret polynomial-time online learning algorithm for Problem 1 _without_ moving costs (i.e. the special case of \(\gamma=0\)). Their approach is based on designing a _no-regret_ polynomial-time algorithm for a _fractional relaxation_ of Problem 1 and then using an _online client-oblivious_ rounding scheme in order to convert a fractional solution to an integral one. Their analysis is based on the fact that the connection cost of _any possible client_ is at most \(\mathcal{O}(k)\) times its fractional connection cost. However in order to establish the latter guarantee their rounding scheme performs abrupt changes on the facilities leading to huge moving cost.
Our Contribution and Techniques.In this work, we provide a positive answer to question (**Q**), by designing the first polynomial-time online learning algorithm for Online \(k\)-Clustering with Moving Costs that achieves \(\mathcal{O}\left(\log n\right)\)-regret for any \(\gamma\geq 0\). The cornerstone idea of our work was to realize that \(\mathcal{O}(1)\)-regret can be established with a polynomial-time online learning algorithm in the special case of \(G\) being a Hierarchical Separation Tree (HST). Then, by using the standard metric embedding result of [25], we can easily convert such an algorithm to an \(\mathcal{O}(\log n)\)-regret algorithm for general graphs. Our approach for HSTs consists of two main technical steps:
1. We introduce a fractional relaxation of Problem 1 for HSTs. We then consider a specific regularizer on the fractional facility placements, called _Dilated Entropic Regularizer_[26], that takes into account the specific structure of the HST. Our first technical contribution is to establish that the famous _Follow the Leader algorithm_[35] with dilated entropic regularization admits \(\mathcal{O}(1)\)-regret for any \(\gamma\geq 0\).
2. Our second technical contribution is the design of a novel _online client-oblivious_ rounding scheme, called \(\mathrm{Cut}\&\mathrm{Round}\), that converts a fractional solution for HSTs into an integral one. By exploiting the specific HST structure we establish that \(\mathrm{Cut}\&\mathrm{Round}\), despite not knowing the clients' positions \(R_{t}\), simultaneously guarantees that \((i)\) the connection cost of each client \(j\in R_{t}\) is upper bounded by its fractional connection cost, and \((ii)\) the expected moving cost of the facilities is at most \(\mathcal{O}(1)\) times the fractional moving cost.
Experimental Evaluation.In Section F of the Appendix we experimentally compare our algorithm with the algorithm of Fotakis et al. [31]. Our experiments verify that our algorithm is robust to increases of the facility weight \(\gamma\) while the algorithm of [31] presents a significant cost increase. We additionally experimentally evaluate our algorithm in the \(\mathrm{MNIST}\) and \(\mathrm{CIFAR}10\) datasets. Our experimental evaluations suggest that the \(\mathcal{O}(\log n)\)-regret bound is a pessimistic upper bound and that in practise our algorithm performs significantly better. Finally, we evaluate our algorithm both in the random arrival case (where the requested vertices are drawn uniformly at random from the graph) as well as in adversarial settings, where the request sequences are constructed through some arbitrary deterministic process.
Related Work.As already mentioned, our work most closely relates with the work of Fotakis et al. [31] that provides an \(\mathcal{O}(k)\)-regret algorithm running in polynomial-time for \(\gamma=0\). [16] also consider Problem 1 for \(\gamma=0\) with the difference that the connection cost of clients is captured through the \(k\)-means objective i.e. the sum of the squared distances. They provide an \((1+\epsilon)\)-regret algorithm with \(\mathcal{O}\left((k^{2}/\epsilon^{2})^{2k}\right)\) time-complexity that is still exponential in \(k\). [18; 28] study the special case of Problem 1 in which \(G\) is the line graph and \(\gamma=1\) while assuming \(1\)-_lookahead_ on the request \(R_{t}\). For \(k=1\), [18] provide an \((1+\epsilon)\)-competitive online algorithm meaning that its cost is at most \((1+\epsilon)\) times the cost of the _optimal dynamic solution_ and directly implies \((1+\epsilon)\)-regret. [28] extended the previous result by providing a \(63\)-competitive algorithm for \(k=2\) on line graphs. Our work also relates with the works of [23] and [4] that study offline approximation algorithms for clustering problems with _time-evolving metrics_. Finally our work is closely related with the research line of online learning in combinatorial domains and other settings of online clustering. Due to space limitations, we resume this discussion in Section A of the Appendix.
## 2 Preliminaries and Our Results
Let \(G(V,E,w)\) be a weighted undirected graph where \(V\) denotes the set of vertices and \(E\) the set of edges among them. The weight \(w_{e}\) of an edge \(e=(i,j)\in E\) denotes the cost of traversing \(e\). Without loss, we assume that \(w_{e}\in\mathbb{N}\) and \(w_{e}\geq 1\) for all edges \(e\in E\). The _distance_ between vertices \(i,j\in V\) is denoted with \(d_{G}(i,j)\) and equals the cost of the minimum cost path from \(i\in V\) to \(j\in V\). We use \(n:=|V|\) to denote the cardinality of \(G\) and \(D_{G}:=\max_{i,j\in V}d_{G}(i,j)\) to denote its diameter.
Given a placement of facilities \(F\subseteq V\), with \(|F|=k\), a client placed at vertex \(j\in V\) connects to the _closest open facility_\(i\in F\). This is formally captured in Definition 1.
**Definition 1**.: _The connection cost of a set of clients \(R\subseteq V\) under the facility-placement \(F\subseteq V\) with \(|F|=k\) equals_
\[C_{R}(F):=\sum_{j\in R}\min_{i\in F}d_{G}(j,i)\]
Next, consider any pair of facility-placements \(F,F^{\prime}\subseteq V\) such that \(|F|=|F^{\prime}|=k\). The moving distance between \(F\) and \(F^{\prime}\) is the minimum overall distance needed to transfer the \(k\) facilities from \(F\) to \(F^{\prime}\), formally defined in Definition 2.
**Definition 2**.: _Fix any facility-placements \(F,F^{\prime}\subseteq V\) where \(|F|=|F^{\prime}|=k\). Let \(\Sigma\) be the set of all possible matchings from \(F\) to \(F^{\prime}\), i.e. each \(\sigma\in\Sigma\) is a one-to-one mapping \(\sigma:F\mapsto F^{\prime}\) with \(\sigma(i)\in F^{\prime}\) denoting the mapping of facility \(i\in F\). The moving cost between \(F\) and \(F^{\prime}\) equals_
\[M_{G}(F,F^{\prime}):=\min_{\sigma\in\Sigma}\sum_{i\in F}d_{G}(i,\sigma(i))\]
At each round \(t\geq 1\), an online learning algorithm \(\mathcal{A}\) for Problem 1 takes as input all the _previous_ positions of the clients \(R_{1},\ldots,R_{t-1}\subseteq V\) and outputs a facility-placement \(F_{t}:=\mathcal{A}(R_{1},\ldots,R_{t-1})\)such that \(F_{t}\subseteq V\) and \(|F_{t}|=k\). The performance of an online learning algorithm is measured by the notion of _regret_, which we formally introduce in Definition 3.
**Definition 3**.: _An online learning algorithm \(\mathcal{A}\) for Problem 1 is called \(\alpha\)-regret with additive regret \(\beta\) if and only if for any sequence of clients \(R_{1},\ldots,R_{T}\subseteq V\),_
\[\mathbb{E}\left[\sum_{t=1}^{T}C_{R_{t}}(F_{t})+\gamma\cdot\sum_{t=2}^{T}M_{G}( F_{t-1},F_{t})\right]\leq\alpha\cdot\min_{|F^{*}|=k}\sum_{t=1}^{T}C_{R_{t}}(F^{*} )+\beta\cdot\sqrt{T}\]
_where \(F_{t}=\mathcal{A}(R_{1},\ldots,R_{t-1})\) and \(\alpha,\beta\) are constants independent of \(T\)._
An online learning algorithm \(\mathcal{A}\) selects the positions of the \(k\) facilities at each round \(t\geq 1\) solely based on the positions of the clients in the previous rounds, \(R_{1},\ldots,R_{t-1}\). If \(\mathcal{A}\) is \(\alpha\)-regret then Definition 3 implies that its time-averaged overall cost (connection \(+\) moving cost) is at most \(\alpha\) times the time-averaged cost of the _optimal static solution!_3 Furthermore, the dependency on \(\sqrt{T}\) is known to be optimal [11] and \(\beta\) is typically only required to be polynomially bounded by the size of the input, as for \(T\to\infty\) the corresponding term in the time-averaged cost vanishes.
Footnote 3: Specifically, the time-averaged overall cost of \(\mathcal{A}\) approaches this upper bound with rate \(\beta\cdot T^{-1/2}\).
As already mentioned, the seminal work of [11] implies the existence of an \((1+\epsilon)\)-regret algorithm for Problem 1; however, this algorithm requires \(\mathcal{O}(n^{k})\) time and space complexity. Prior to this work, the only polynomial-time4 online learning algorithm for Problem 1 was due to Fotakis et al. [31], for the special case of \(\gamma=0\). Specifically, in their work the authors design an online learning algorithm with the following guarantee:
Footnote 4: Polynomial-time with respect to the input parameters, namely \(T\), \(n\) and \(\log D_{G}\).
**Theorem** (Fotakis et al. [31]).: _There exists a randomized online learning algorithm for Problem 1 that runs in polynomial time (w.r.t. \(T\), \(n\) and \(\log D_{G}\)) such that_
\[\mathbb{E}\left[\sum_{t=1}^{T}C_{R_{t}}(F_{t})\right]\leq\mathcal{O}(k)\cdot \min_{|F^{*}|=k}\sum_{t=1}^{T}C_{R_{t}}(F^{*})+\mathcal{O}(k\cdot n\cdot\sqrt{ \log n}\cdot D_{G})\cdot\sqrt{T}\]
Clearly, the algorithm of [31] has not been designed to account for charging the moving of facilities, as indicated by the absence of the moving cost in the above regret guarantee. The main contribution of this work is to obtain (for the first time) regret guarantees that also account for the moving cost.
**Theorem 1**.: _There exists a randomized online learning algorithm for Problem 1 (Algorithm 2) that runs in polynomial time (w.r.t. \(T\), \(n\) and \(\log D_{G}\)) and admits the following regret guarantee:_
\[\mathbb{E}\left[\sum_{t=1}^{T}C_{R_{t}}(F_{t})+\gamma\cdot\sum_{t=2}^{T}M_{G}( F_{t-1},F_{t})\right]\leq\mathcal{O}(\log n)\cdot\min_{|F^{*}|=k}\sum_{t=1}^{T}C_{R _{t}}(F^{*})+\beta\cdot\sqrt{T}\]
_for \(\beta=\mathcal{O}(k\cdot n^{3/2}\cdot D_{G}\cdot\max(\gamma,1))\) and any \(\gamma\geq 0\)._
**Remark 1**.: _We remark that while our additive regret \(\beta\) is larger than the corresponding term in [31] by a factor of \(o(\sqrt{n})\), our results apply to any \(\gamma\geq 0\) while the algorithm of [31] can generally suffer unbounded moving cost for \(\gamma\to\infty\), as our experimental results verify._
### HSTs and Metric Embeddings
In this section we provide some preliminary introduction to Hierarchical Separation Trees (HSTs), as they consist a key technical tool towards proving Theorem 1. A _weighted tree_\(\mathcal{T}(V,E,w)\) is a weighted graph with no cycles. Equivalently, for any pair of vertices \(i,j\in V\) there exists a unique path that connects them. In Definition 4, we establish some basic notation for tree graphs.
**Definition 4**.: _Fix any tree \(\mathcal{T}(V,E,w)\). For every vertex \(u\in V\), \(\operatorname{cld}(u)\subseteq V\) denotes the set children vertices of \(u\) and \(p(u)\) denotes its unique parent, i.e. \(u\in\operatorname{cld}(p(u))\). The root \(r\in V\) of \(\mathcal{T}\) is the unique node with \(p(r)=\varnothing\) and the set \(L(T):=\{u\in V\,:\,\operatorname{cld}(u)=\varnothing\}\) denotes the leaves of \(\mathcal{T}\). We use \(\operatorname{dpt}(u)\) to denote the depth of a vertex \(u\in V\), i.e. the length of the (unique) path from the root \(r\) to \(u\), and \(h(\mathcal{T}):=\max_{u\in L(\mathcal{T})}\operatorname{dpt}(u)\) to denote the height of \(\mathcal{T}\). We use \(\operatorname{lev}(u):=h(\mathcal{T})-\operatorname{dpt}(u)\) to denote the level of a vertex \(u\in V\). Finally, \(T(u)\subseteq V\) denotes the set of vertices on the sub-tree rooted at \(u\), i.e. the set of vertices that are descendants of \(u\)._Next, we proceed to define a family of well-structured tree graphs that constitute one of the primary technical tools used in our analysis.
**Definition 5**.: _A Hierarchical Separation Tree (HST) is a weighted tree \(\mathcal{T}(V,E,w)\) such that (i) for any node \(u\) and any of its children \(v\in cld(u)\), the edge \(e=(u,v)\) admits weight \(w_{e}=2^{\mathrm{lev}(v)}\), and (ii) the tree is balanced, namely \(lev(u)=0\) for all leaves \(u\in L(\mathcal{T})\)._
In their seminal works, [10] and later [24] showed that HSTs can approximately preserve the distances of any graph \(G(V,E,w)\) within some logarithmic level of distortion.
**Theorem 2**.: _For any graph \(G(V,E,w)\) with \(|V|=n\) and diameter \(D\), there exists a polynomial-time randomized algorithm that given as input \(G\) produces an HST \(\mathcal{T}\) with height \(h(\mathcal{T})\leq\lceil\log D\rceil\) s.t._
1. \(L(\mathcal{T})=V\)_, meaning that the leaves of_ \(\mathcal{T}\) _correspond to the vertices of_ \(G\)_._
2. _For any_ \(u,v\in V\)_,_ \(d_{G}(u,v)\leq d_{\mathcal{T}}(u,v)\) _and_ \(\mathbb{E}[d_{\mathcal{T}}(u,v)]\leq\mathcal{O}(\log n)\cdot d_{G}(u,v)\)_._
Theorem 2 states that any weighted graph \(G(V,E,w)\) can be embedded into an HST \(\mathcal{T}\) with \(\mathcal{O}(\log n)\)-distortion. This means that the distance \(d_{G}(u,v)\) between any pair of vertices \(u,v\in V\) can be approximated by their respective distance \(d_{\mathcal{T}}(u,v)\) in \(\mathcal{T}\) within an (expected) factor of \(\mathcal{O}(\log n)\).
**Remark 2**.: _We note that traditionally HSTs are neither balanced nor are required to have weights that are specifically powers of \(2\). However, we can transform any general HST into our specific definition, and this has been accounted for in the statement of the above theorem. The details are deferred to Section B of the Appendix._
## 3 Overview of our approach
In this section we present the key steps of our approach towards designing the \(\mathcal{O}(\log n)\)-regret online learning algorithm for Problem 1. Our approach can be summarized in the following three pillars:
1. In Section 3.1 we introduce a _fractional relaxation_ of Problem 1 in the special case of HSTs (Problem 2). Problem 2 is an artificial problem at which the learner can place a _fractional amount of facility_ to the leaves of an HST so as to fractionally serve the arrived clients. Since the _optimal static solution_ of Problem 2 lower bounds the _optimal static solution_ of Problem 1 in the special case of HSTs, the first step of our approach is to design an \(\mathcal{O}(1)\)-regret algorithm for Problem 2.
2. In Section 3.2 we present the formal guarantees of a novel randomized rounding scheme, called \(\mathrm{Cut}\&\mathrm{Round}\), that is client-oblivious and converts any _fractional solution_ for Problem 2 into an actual placement of \(k\) facilities on the leaves of the HST with just an \(\mathcal{O}(1)\)-overhead in the connection and the moving cost.
3. In Section 3.3 we present how the _fractional algorithm_ for Problem 2 together with the \(\mathrm{Cut}\&\mathrm{Round}\) rounding naturally lead to an \(\mathcal{O}(1)\)-regret online learning algorithm for Problem 1 in the special case of HSTs (Algorithm 1). Our main algorithm, presented in Algorithm 2, then consists of running Algorithm 1 into an \(\mathcal{O}(\log n)\) HST embedding of input graph.
### A Fractional Relaxation for HSTs
In this section we introduce a fractional relaxation for Problem 1, called _Fractional \(k\)-Clustering with Moving Costs on HSTs_ (Problem 2). Fix any HST \(\mathcal{T}(V,E,w)\) (in this section, \(V\) denotes the nodes of the HST). We begin by presenting a _fractional extension_ of placing \(k\) facilities on the leaves of \(\mathcal{T}\).
**Definition 6**.: _The set of fractional facility placements \(\mathcal{FP}(\mathcal{T})\) consists of all vectors \(y\in\mathbb{R}^{|V|}\) such that_
1. \(y_{v}\in[0,1]\) _for all leaves_ \(v\in L(\mathcal{T})\)_._
2. \(y_{v}=\sum\limits_{u\in\mathrm{cld}(v)}y_{u}\) _for all non-leaves_ \(v\notin L(\mathcal{T})\)_._
3. \(\sum_{v\in L(\mathcal{T})}y_{v}=k\)_, i.e. the total amount of facility on the leaves equals_ \(k\)For a leaf vertex \(v\in L(\mathcal{T})\), \(y_{v}\) simply denotes the fractional amount of facilities that are placed on it. For all non-leaf vertices \(v\notin L(\mathcal{T})\), \(y_{v}\) denotes the total amount of facility placed in the leaves of the sub-tree \(T(v)\). Thus, any integral vector \(y\in\mathcal{FP}(\mathcal{T})\cap\mathds{N}\) corresponds to a placement of \(k\) facilities on the leaves of \(\mathcal{T}\).
In Definitions 7 and 8 we extend the notion of connection and moving cost for fractional facility placements. In the special case of integral facility placements, Definitions 7 and 8 respectively collapse to Definitions 1 and 2 (a formal proof is given in Claims 1 and 2 of Section C of the Appendix).
**Definition 7**.: _The fractional connection cost of a set of clients \(R\subseteq L(\mathcal{T})\) under \(y\in\mathcal{FP}(\mathcal{T})\) is defined as_
\[f_{R}(y):=\sum_{j\in R}\sum_{v\in P(j,r)}2^{lev(v)+1}\cdot\max{(0,1-y_{v})}\]
_where \(P(j,r)\) denotes the set of vertices in the (unique) path from the leaf \(j\in L(\mathcal{T})\) to the root \(r\)._
**Definition 8**.: _The fractional moving cost between any \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) is defined as_
\[||y-y^{\prime}||_{\mathcal{T}}:=\gamma\cdot\sum_{v\in V(\mathcal{T})}2^{lev(v) }\cdot|y_{v}-y^{\prime}_{v}|\]
We are now ready to present our fractional generalization of Problem 1 in the special case of HSTs.
**Problem 2** (_Fractional \(k\)-Clustering with Moving Costs on HSTs_).: _Fix any HST \(\mathcal{T}\). At each round \(t=1,\ldots,T\):_
1. _The learner selects a vector_ \(y^{t}\in\mathcal{FP}(\mathcal{T})\)_._
2. _The adversary selects a set of clients_ \(R_{t}\subseteq L(\mathcal{T})\)_._
3. _The learner suffers cost_ \(f_{R_{t}}(y^{t})+||y^{t}-y^{t-1}||_{\mathcal{T}}\)_._
In Section 4, we develop and present an \(\mathcal{O}(1)\)-regret algorithm for Problem 2 (see Algorithm 3). To this end, we present its formal regret guarantee established in Theorem 3.
**Theorem 3**.: _There exists a polynomial-time online learning algorithm for Problem 2 (Algorithm 3), such that for any sequence \(R_{1},\ldots,R_{T}\subseteq L(\mathcal{T})\), its output \(y^{1},\ldots,y^{T}\) satisfies_
\[\sum_{t=1}^{T}f_{R_{t}}(y^{t})+\sum_{t=2}^{T}||y^{t}-y^{t-1}||_{\mathcal{T}} \leq\frac{3}{2}\cdot\min_{y^{*}\in\mathcal{FP}(\mathcal{T})}\sum_{t=1}^{T}f_{ R_{t}}(y^{*})+\beta\cdot\sqrt{T}\]
_for \(\beta=\mathcal{O}\left(k\cdot|L(\mathcal{T})|^{3/2}\cdot D_{\mathcal{T}}\cdot \max(\gamma,1)\right)\)._
### From Fractional to Integral Placements in HSTs
As already mentioned, the basic idea of our approach is to convert at each round \(t\geq 1\) the _fractional placement_\(y^{t}\in\mathcal{FP}(\mathcal{T})\) produced by Algorithm 3 into an integral facility placement \(F_{t}\subseteq L(\mathcal{T})\) with \(|F_{t}|=k\) on the leaves of the HST. In order to guarantee small regret, our rounding scheme should preserve both the connection and the moving cost of the fractional solution within constant factors for _any possible set of arriving clients_. In order to guarantee the latter, our rounding scheme \(\mathrm{Cut\&Round}\) (Algorithm 4) uses shared randomness across different rounds. \(\mathrm{Cut\&Round}\) is rather complicated and is presented in Section 5. To this end, we present its formal guarantee.
**Theorem 4**.: _There exists a linear-time deterministic algorithm, called \(\mathrm{Cut\&Round}\) (Algorithm 4), that takes as input an HST \(\mathcal{T}\), a fractional facility placement \(y\in\mathcal{FP}(\mathcal{T})\) and a vector \(\alpha\in[0,1]^{|V|}\) and outputs a placement of \(k\) facilities \(F\leftarrow\mathrm{Cut\&Round}(\mathcal{T},y,\alpha)\) on the leaves of \(\mathcal{T}\) (\(F\subseteq L(\mathcal{T})\) and \(|F|=k\)) such that_
1. \(\mathrm{E}_{\alpha\sim\mathrm{Unif}(0,1)}\left[C_{R}(F)\right]=f_{R}(y)\) _for all client requests_ \(R\subseteq L(\mathcal{T})\)_._
2. \(\mathrm{E}_{\alpha\sim\mathrm{Unif}(0,1)}\left[\gamma\cdot M_{\mathcal{T}}(F,F^{\prime})\right]\leq 4\cdot||y-y^{\prime}||_{\mathcal{T}}\) _for all other fractional facility placements_ \(y^{\prime}\in\mathcal{FP}(\mathcal{T})\) _and_ \(F^{\prime}\leftarrow\mathrm{Cut\&Round}(\mathcal{T},y^{\prime},\alpha)\)Item \(1\) of Theorem 4 establishes that although \(\mathrm{Cut}\&\mathrm{Round}\) is _oblivious_ to the arrived set of clients \(R_{t}\subseteq L(\mathcal{T})\), the expected connection cost of the output equals the _fractional connection cost_ under \(y^{t}\in\mathcal{FP}(\mathcal{T})\). Item \(2\) of Theorem 4 states that once the same random seed \(\alpha\) is used into two consecutive time steps, then the expected moving cost between the facility-placements \(F_{t}\) and \(F_{t+1}\) is at most \(\mathcal{O}(1)\)-times the fractional moving cost between \(y^{t}\) and \(y^{t+1}\). Both properties crucially rely on the structure of the HST and consist one of the main technical contributions of our work.
### Overall Online Learning Algorithm
We are now ready to formally introduce our main algorithm (Algorithm 2) and prove Theorem 1. First, we combine the algorithms from Theorems 3 and 4 to design an \(\mathcal{O}(1)\)-regret algorithm for Problem 1 on HSTs (Algorithm 1). Up next we present how Algorithm 1 can be converted into an \(\mathcal{O}(\log n)\)-regret online learning algorithm for general graphs, using the metric embedding technique of Theorem 2, resulting to our final algorithm (Algorithm 2).
```
1:Input: A sequence \(R_{1},\ldots,R_{T}\subseteq L(\mathcal{T})\).
2: The learner samples \(\alpha_{v}\sim\mathrm{Unif}(0,1)\) for all \(v\in V(\mathcal{T})\).
3:for each round \(t=1\)to\(T\)do
4: The learner places the \(k\) facilities to the leaves of the HST \(\mathcal{T}\) based on the output \(F_{t}:=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y^{t},\alpha)\).
5: The learner learns \(R_{t}\subseteq L(\mathcal{T})\).
6: The learner updates \(y^{t+1}\in\mathcal{FP}(\mathcal{T})\) by running Algorithm 3 for Problem 2 with input \(R_{1},\ldots,R_{t}\).
7:endfor
```
**Algorithm 1**\(\mathcal{O}(1)\)-regret for HSTs.
**Theorem 5**.: _For any sequence of client requests \(R_{1},\ldots,R_{T}\subseteq L(\mathcal{T})\), the sequence of facility-placements \(F_{1},\ldots,F_{T}\subseteq L(\mathcal{T})\) produced by Algorithm 1 satisfies_
\[\mathbb{E}\left[\sum_{t=1}^{T}C_{R_{t}}(F_{t})+\gamma\cdot\sum_{t=2}^{T}M_{ \mathcal{T}}(F_{t},F_{t-1})\right]\leq 6\cdot\min_{|F^{*}|=k}\sum_{t=1}^{T}C_{R_{ t}}(F^{*})+\beta\cdot\sqrt{T}\]
_for \(\beta=\mathcal{O}\left(k\cdot|L(\mathcal{T})|^{3/2}\cdot D_{\mathcal{T}}\cdot \max(\gamma,1)\right)\)._
Theorem 5 establishes that Algorithm 1 achieves constant regret in the special case of HSTs and its proof easily follows by Theorems 3 and 4. Then, the proof of Theorem 1 easily follows by Theorem 2 and Theorem 5. All the proofs are deferred to Section C of the Appendix.
## 4 \(\mathcal{O}(1)\)-Regret for Fractional HST Clustering
In this section we present the \(\mathcal{O}(1)\)-regret algorithm for Problem 2, described in Algorithm 3, and exhibit the key ideas in establishing Theorem 3. Without loss of generality, we can assume that the facility-weight satisfies \(\gamma\geq 1\)5.
Footnote 5: If not, establishing our guarantees for \(\gamma=1\) will clearly upper bound the actual moving cost.
Algorithm 3 is the well-known online learning algorithm _Follow the Regularized Leader_ (\(\mathrm{FTRL}\)) with a specific regularizer \(R_{\mathcal{T}}(\cdot)\) presented in Definition 9. Our results crucially rely on the properties of this regularizer since it takes into account the HST structure and permits us to bound the fractional moving cost of \(\mathrm{FTRL}\).
**Definition 9**.: _Given an HST \(\mathcal{T}\), the dilated entropic regularizer \(R_{\mathcal{T}}(y)\) over \(y\in\mathcal{FP}(\mathcal{T})\) is defined as_
\[R_{\mathcal{T}}(y):=\sum_{v\neq r}2^{\mathrm{lev}(v)}\cdot(y_{v}+\delta_{v}) \cdot\ln\left(\frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\right)\]
_where \(\delta_{v}:=(k/n)\cdot|L(\mathcal{T})\cap T(v)|\) and \(n:=|L(\mathcal{T})|\)._
```
1:Input: An adversarial sequence \(R_{1},\ldots,R_{T}\subseteq L(\mathcal{T})\).
2:for\(t=1\)to\(T\)do
3: The learner selects \(y^{t}\in\mathcal{FP}(\mathcal{T})\).
4: The learner suffers cost \(f_{R_{t}}(y^{t})+||y^{t}-y^{t-1}||_{\mathcal{T}}\).
5: The learner updates \(y^{t+1}\leftarrow\arg\min_{y\in\mathcal{FP}(\mathcal{T})}\left[\sum_{s=1}^{t}f _{R_{s}}(y)+(\gamma\sqrt{nT})\cdot R_{\mathcal{T}}(y)\right]\).
6:endfor
```
**Algorithm 3** FTRL with dilated entropic regularization
Algorithm 3 selects at each step \(t\) the facility placement \(y^{t}\in\mathcal{FP}(\mathcal{T})\) that minimizes a convex combination of the total fractional connection cost for the sub-sequence \(R_{1},\ldots,R_{t-1}\) and \(R_{\mathcal{T}}(y)\). The regularization term ensures the stability of the output, which will result in a bounded fractional moving cost.
Analysis of Algorithm 3.Due to space limitations, all proofs are moved to Section D of the Appendix. The primary reason for the specific selection of the regularizer at Definition 9 is that \(R_{\mathcal{T}}(\cdot)\) is strongly convex with respect to the norm \(||\cdot||_{\mathcal{T}}\) of Definition 8, as established in Lemma 1 which is the main technical contribution of the section. We use \(D=D_{\mathcal{T}}\) for the diameter of \(\mathcal{T}\).
**Lemma 1**.: _For any vectors \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\),_
\[R_{\mathcal{T}}(y^{\prime})\geq R_{\mathcal{T}}(y)+\langle\nabla R_{\mathcal{T }}(y),y^{\prime}-y\rangle+\left(8kD\gamma^{2}\right)^{-1}\cdot||y-y^{\prime}|| _{\mathcal{T}}^{2}\]
The strong convexity of \(R_{\mathcal{T}}(y)\) with respect to \(||\cdot||_{\mathcal{T}}\) is crucial since it permits us to bound the moving cost of Algorithm 3 by its fractional connection cost.
**Lemma 2**.: _For any sequence \(R_{1},\ldots,R_{T}\subseteq L(\mathcal{T})\), the output of Algorithm 3 satisfies_
\[\sum_{t=2}^{T}||y^{t}-y^{t-1}||_{\mathcal{T}}\leq\frac{1}{2}\cdot\sum_{t=1}^{T }f_{R_{t}}(y^{t})+\mathcal{O}\left(\gamma kD\right)\cdot\sqrt{T}\]
We remark that using another regularizer \(R(\cdot)\) that is strongly convex with respect to another norm \(||\cdot||\) would still imply Lemma 1 with respect to \(||\cdot||\). The problem though is that the _fractional moving cost_\(\sum_{t=1}^{T}||y_{t}-y_{t-1}||\) can no longer be associated with the actual moving cost \(\sum_{t=1}^{T}M_{\mathcal{T}}(F_{t},F_{t-1})\). It is for this reason that using a regularizer that is strongly convex with respect to \(||\cdot||_{\mathcal{T}}\) is crucial.
Next, by adapting the standard analysis of \(\mathrm{FTRL}\) to our specific setting, we derive Lemma 3 establishing that Algorithm 3 admits bounded connection cost.
**Lemma 3**.: _For any sequence \(R_{1},\ldots,R_{T}\subseteq L(\mathcal{T})\), the output of Algorithm 3 satisfies_
\[\sum_{t=1}^{T}f_{R_{t}}(y^{t})\leq\min_{y^{*}\in\mathcal{FP}}\sum_{t=1}^{T}f_{ R_{t}}(y^{*})+\mathcal{O}\left(kn^{3/2}D\gamma\right)\cdot\sqrt{T}\]
The proof of Theorem 3 directly follows by Lemma 2 and 3. We conclude the section by presenting how Step \(5\) of Algorithm 3 can be efficiently implemented, namely
\[\min_{y\in\mathcal{FP}(\mathcal{T})}\Phi_{t}(y):=\sum_{s=1}^{t}f_{R_{s}}(y)+( \gamma\sqrt{nT})\cdot R_{\mathcal{T}}(y).\]
Since \(\Phi_{t}(y)\) is strongly convex and the set \(\mathcal{FP}(\mathcal{T})\) is a polytope, one could use standard optimization algorithms such as the _ellipsoid method_ or _projected gradient descent_ to approximately minimize\(\Phi_{t}(y)\) given access to a _sub-gradient oracle for \(\Phi_{t}(\cdot)\)_. In Claim 11 of Section D of the Appendix, we establish that the sub-gradients of \(\Phi(\cdot)\) can be computed in polynomial time and thus any of the previous methods can be used to approximately minimize \(\Phi(\cdot)\). In Lemma 4 we establish the intuitive fact that approximately implementing Step \(5\) does not affect the guarantees of Theorem 3.
**Lemma 4**.: _Let \(y^{t}\) be the minimizer of \(\Phi_{t}(\cdot)\) in \(\mathcal{FP}(T)\) and let \(z^{t}\in\mathcal{FP}(\mathcal{T})\) be any point such that \(\Phi_{t}(z^{t})\leq\Phi_{t}(y^{t})+\epsilon\) for some \(\epsilon=\mathcal{O}(T^{-1/2})\). Then,_
\[f_{R_{t}}(z^{t})+||z^{t}-z^{t-1}||_{\mathcal{T}}\leq f_{R_{t}}(y^{t})+||y^{t}-y ^{t-1}||_{\mathcal{T}}+\mathcal{O}\left(kn^{3/2}D\gamma\right)\cdot T^{-1/2}\]
**Remark 3**.: _In our implementation of the algorithm, we approximately solve Step 5 of Algorithm 3 via Mirror Descent based on the Bregman divergence of \(\mathcal{R_{T}}(\cdot)\). This admits the same convergence rates as projected gradient descent but the projection step can be computed in linear time with respect to the size of the HST \(\mathcal{T}\). We present the details of our implementation in Section C of the Appendix._
## 5 The Cut\(\&\)Round Rounding
In this section we present our novel rounding scheme (Algorithm \(\mathrm{Cut\&Round}\)) as well as the main steps that are required in order to establish Theorem 4. To ease notation, for any real number \(x\geq 0\) we denote its decimal part as \(\delta(x)=x-\lfloor x\rfloor\). We comment that our rounding scheme simply maintains and updates a distribution over the vertices of the HST, and can be thus implemented in polynomial-time. Similar rounding schemes, like the one presented in [9], typically maintain a distribution over all possible facility-placements, which generally cannot be implemented in polynomial-time.
```
1:Input: An HST \(\mathcal{T}\), a fractional placement \(y\in\mathcal{FP}(\mathcal{T})\) and thresholds \(\alpha_{v}\in[0,1]\) for all \(v\in V(\mathcal{T})\).
2:\(Y_{r}\gets k\)
3:for levels \(\ell=h(\mathcal{T})\)to 1 do
4:for all nodes \(v\) with \(lev(v)=\ell\)do
5:\(Y_{rem}\gets Y_{v}\)
6:\(y_{rem}\gets y_{v}\)
7:for all children \(u\in\mathrm{cld}(v)\)do
8:\(Y_{u}\leftarrow\mathrm{Alloc}(y_{u},Y_{rem},y_{rem},\alpha_{u})\)
9:\(Y_{rem}\gets Y_{rem}-Y_{u}\)
10:\(y_{rem}\gets y_{rem}-y_{u}\)
11:endfor
12:endfor
13:endfor
14:return\(F:=\{u\in L(\mathcal{T}):Y_{u}=1\}\).
```
**Algorithm 4**\(\mathrm{Cut\&Round}\).
On principle, \(\mathrm{Cut\&Round}\) (Algorithm 4) assigns to each vertex \(v\) an integer number of facilities \(Y_{v}\) to be placed at the leaves of its sub-tree. Notice that due to sub-routine \(\mathrm{Alloc}\) (Algorithm 5), \(Y_{v}\) either equals \(\lfloor y_{v}\rfloor\) or \(\lfloor y_{v}\rfloor+1\). \(\mathrm{Cut\&Round}\) initially assigns \(k\) facilities to the set of leaves that descend from the root \(r\), which is precisely \(L(\mathcal{T})\). Then, it moves in decreasing level under to decide \(Y_{v}\) for each node \(v\). Once \(Y_{v}\) is determined (Step 5), the \(Y_{v}\) facilities are allocated to the sub-trees of its children \(u\in\mathrm{cld}(v)\) (Steps 7-10) via sub-routine \(\mathrm{Alloc}\) using the thresholds \(\alpha_{u}\), in a manner that guarantees that \(Y_{v}=\sum_{u\in\mathrm{cld}(v)}Y_{u}\) (see Section E.1 of the Appendix). This implies the feasibility of \(\mathrm{Cut\&Round}\), as exactly \(k\) facilities are placed in the leaves of \(\mathcal{T}\) at the end of the process.
Assuming that the set of thresholds \(\alpha_{v}\) is randomly drawn from the uniform distribution in \([0,1]\), sub-routine \(\mathrm{Alloc}\) (Algorithm 5) guarantees that \(Y_{v}\) either equals \(\lfloor y_{v}\rfloor\) or \(\lfloor y_{v}\rfloor+1\) while \(\mathbb{E}_{\alpha}\left[Y_{v}\right]=y_{v}\). This is formally captured in Lemma 5 and is crucial in the proof of Theorem 4.
**Lemma 5**.: _Consider Algorithm 4 given as input a vector \(y\in\mathcal{FP}(\mathcal{T})\) and random thresholds \(\alpha_{v}\sim\mathrm{Unif}(0,1)\). Then,_
\[Y_{v}=\left\{\begin{array}{ll}\left|y_{v}\right|&\text{with probability }1- \delta(y_{v})\\ \left|y_{v}\right|+1&\text{with probability }\delta(y_{v})\end{array}\right.\]
By coupling Lemma 5 with the HST structure we are able to establish Theorem 4. The proof is technically involved and thus deferred to Section E of the Appendix.
## 6 Conclusion
In this work, we designed the first polynomial-time online learning algorithm for _Online \(k\)-Clustering with Moving Costs_ that achieves \(\mathcal{O}(\log n)\)-regret with respect to the cost of the optimal _static_ facility placement, extending the results of Fotakis et al. [31] for the special case of \(\gamma=0\). The cornerstone of our approach was to realize and establish that \(\mathcal{O}(1)\)-regret is plausible for HST metrics. This was achieved through designing a dilated entropic regularizer to capture the structure of the HST and combine it with the FTRL algorithm, as well as designing a lossless (up to constant factors) rounding scheme that simultaneously works for both the connection and the moving cost. Both of these components where central towards acquiring constant regret on HSTs.
A interesting future direction is to investigate whether a polynomial-time online learning algorithm with \(\mathcal{O}(1)\)-regret for the problem is theoretically possible or not. Since the \(\mathcal{O}(\log n)\)-factor is inherently lost when using HST embeddings, this would require a significantly different approach to the one presented in this work. Finally, we comment that our current optimality guarantees are with respect to the optimal _static_ facility placement. Going beyond the notion of regret, an intriguing future direction is establishing guarantees with respect to the _optimal dynamic facility-placement_ that moves facilities from round to round by suffering the corresponding moving cost.
## Acknowledgements
This work was supported by the Swiss National Science Foundation (SNSF) under grant number \(200021\_205011\), by Hasler Foundation Program: Hasler Responsible AI (project number 21043) and Innovation project supported by Innosuisse (contract agreement 100.960 IP-ICT).
## References
* [1] Alekh Agarwal, Daniel J. Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. Taming the monster: A fast and simple algorithm for contextual bandits. In _International Conference on Machine Learning_, 2014.
* [2] Nir Ailon. Improved bounds for online learning over the permutahedron and other ranking polytopes. In _Proceedings of the 17th International Conference on Artificial Intelligence and Statistics_, AISTATS 2014, 2014.
* [3] Soroush Alamdari and David B. Shmoys. A bicriteria approximation algorithm for the k-center and k-median problems. In _Workshop on Approximation and Online Algorithms_, 2017.
* [4] Hyung-Chan An, Ashkan Norouzi-Fard, and Ola Svensson. Dynamic facility location via exponential clocks. _ACM Trans. Algorithms_, 13(2):21:1-21:20, 2017.
* [5] Tarique Anwar, Surya Nepal, Cecile Paris, Jian Yang, Jia Wu, and Quan Z. Sheng. Tracking the evolution of clusters in social media streams. _IEEE Transactions on Big Data_, pages 1-15, 2022.
* [6] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristic for k-median and facility location problems. In _Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing_, STOC '01, page 21-29. Association for Computing Machinery, 2001.
* [7] Baruch Awerbuch and Robert Kleinberg. Online linear optimization and adaptive routing. _J. Comput. Syst. Sci._, 2008.
* [8] Maria-Florina Balcan and Avrim Blum. Approximation algorithms and online mechanisms for item pricing. In _ACM Conference on Electronic Commerce_, 2006.
* [9] Nikhil Bansal, Niv Buchbinder, Aleksander Madry, and Joseph Naor. A polylogarithmic-competitive algorithm for the k-server problem. In Rafail Ostrovsky, editor, _IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 22-25, 2011_, pages 267-276. IEEE Computer Society, 2011.
* [10] Yair Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. _Proceedings of 37th Conference on Foundations of Computer Science_, pages 184-193, 1996.
* [11] Avrim Blum and Carl Burch. On-line learning and the metrical task system problem. _Mach. Learn._, 39(1):35-58, 2000.
* [12] Sebastien Bubeck, Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Madry. k-server via multiscale entropic regularization. In Ilias Diakonikolas, David Kempe, and Monika Henzinger, editors, _Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018_, pages 3-16. ACM, 2018.
* [13] Moses Charikar and Sudipto Guha. Improved combinatorial algorithms for the facility location and k-median problems. In _Proceedings of the 40th Annual Symposium on Foundations of Computer Science_, FOCS '99. IEEE Computer Society, 1999.
* [14] Moses Charikar, Sudipto Guha, Eva Tardos, and David B. Shmoys. A constant-factor approximation algorithm for the k-median problem (extended abstract). In _Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing_, STOC '99, page 1-10. Association for Computing Machinery, 1999.
* 39th International Colloquium, ICALP 2012_, volume 7391 of _Lecture Notes in Computer Science_, pages 194-205. Springer, 2012.
* [16] Vincent Cohen-Addad, Benjamin Guedj, Varun Kanade, and Guy Rom. Online k-means clustering. In Arindam Banerjee and Kenji Fukumizu, editors, _The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event_, volume 130 of _Proceedings of Machine Learning Research_, pages 1126-1134. PMLR, 2021.
* [17] Aaron Cote, Adam Meyerson, and Laura J. Poplawski. Randomized k-server on hierarchical binary trees. In Cynthia Dwork, editor, _Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008_, pages 227-234. ACM, 2008.
* [18] Bart de Keijzer and Dominik Wojtczak. Facility reallocation on the line. In _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018_, pages 188-194, 2018.
* Leibniz-Zentrum fur Informatik, 2017.
* [20] Sina Dehghani, MohammadTaghi Hajiaghayi, Hamid Mahini, and Saeed Seddighin. Price of competition and dueling games. _arXiv preprint arXiv:1605.04004_, 2016.
* [21] Miroslav Dudik, Nika Haghtalab, Haipeng Luo, Robert E. Schapire, Vasilis Syrgkanis, and Jennifer Wortman Vaughan. Oracle-efficient online learning and auction design. In _58th IEEE Annual Symposium on Foundations of Computer Science_, FOCS 2017, 2017.
* [22] Miroslav Dudik, Daniel J. Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, and Tong Zhang. Efficient optimal learning for contextual bandits. In _Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence_, UAI 2011, 2011.
- 41st International ColloquiumIALP 2014_, volume 8573 of _Lecture Notes in Computer Science_, pages 459-470. Springer, 2014.
* [24] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. In _Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing_, STOC '03, page 448-455, New York, NY, USA, 2003. Association for Computing Machinery.
* [25] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. _J. Comput. Syst. Sci._, 69(3):485-497, 2004.
* [26] Gabriele Farina, Christian Kroer, and Tuomas Sandholm. Optimistic regret minimization for extensive-form games via dilated distance-generating functions. In _Neural Information Processing Systems_, 2019.
* 13, 2021_, pages 2660-2678. SIAM, 2021.
* [28] Dimitris Fotakis, Loukas Kavouras, Panagiotis Kostopanagiotis, Philip Lazos, Stratis Skoulakis, and Nikos Zarifis. Reallocating multiple facilities on the line. In _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019_, pages 273-279, 2019.
* [29] Dimitris Fotakis, Loukas Kavouras, Grigorios Koumoutsos, Stratis Skoulakis, and Manolis Vardas. The online min-sum set cover problem. In _Proc. of the 47th International Colloquium on Automata, Languages and Programming_, ICALP 2020.
* [30] Dimitris Fotakis, Thanasis Lianeas, Georgios Piliouras, and Stratis Skoulakis. Efficient online learning of optimal rankings: Dimensionality reduction via gradient descent. In _Advances in Neural Information Processing Systems 2020, NeurIPS 2020_, 2020.
* [31] Dimitris Fotakis, Georgios Piliouras, and Stratis Skoulakis. Efficient online learning for dynamic k-clustering. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_, volume 139 of _Proceedings of Machine Learning Research_, pages 3396-3406. PMLR, 18-24 Jul 2021.
* [32] Takahiro Fujita, Kohei Hatano, and Eiji Takimoto. Combinatorial online prediction via metarounding. In _24th International Conference on Algorithmic Learning Theory_, ALT 2013, 2013.
* [33] Dan Garber. Efficient online linear optimization with approximation algorithms. In _Proceedings of the 30th International Conference on Neural Information Processing Systems_, NIPS 2017, 2017.
* [34] Xiangyu Guo, Janardhan Kulkarni, Shi Li, and Jiayi Xian. Consistent k-median: Simpler, better and robust. In Arindam Banerjee and Kenji Fukumizu, editors, _Proceedings of The 24th International Conference on Artificial Intelligence and Statistics_, volume 130 of _Proceedings of Machine Learning Research_, pages 1135-1143. PMLR, 13-15 Apr 2021.
* [35] Elad Hazan. Introduction to online convex optimization. _CoRR_, abs/1909.05207, 2019.
* [36] Elad Hazan, Wei Hu, Yuanzhi Li, and Zhiyuan Li. Online improper learning with an approximation oracle. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31. Curran Associates, Inc., 2018.
* [37] Elad Hazan and Satyen Kale. Online submodular minimization. _J. Mach. Learn. Res._, 2012.
* [38] Elad Hazan, Satyen Kale, and Shai Shalev-Shwartz. Near-optimal algorithms for online matrix prediction. In _25th Annual Conference on Learning Theory_, COLT 2012, 2012.
* [39] Elad Hazan and Tomer Koren. The computational power of optimization in online learning. In _Proceedings of the 48th Annual ACM Symposium on Theory of Computing_, STOC 2016, 2016.
* [40] David P. Helmbold, Robert E. Schapire, and M. Long. Predicting nearly as well as the best pruning of a decision tree. In _Machine Learning_, 1997.
* [41] David P. Helmbold and Manfred K. Warmuth. Learning permutations with exponential weights. In _Proceedings of the 20th Annual Conference on Learning Theory_, COLT 2007, 2007.
* [42] Nicole Immorlica, Adam Tauman Kalai, Brendan Lucier, Ankur Moitra, Andrew Postlewaite, and Moshe Tennenholtz. Dueling algorithms. In _Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing_, STOC '11, page 215-224, New York, NY, USA, 2011. Association for Computing Machinery.
* [43] Kamal Jain, Mohammad Mahdian, and Amin Saberi. A new greedy approach for facility location problems. In John H. Reif, editor, _Proceedings on 34th Annual ACM Symposium on Theory of Computing, May 19-21, 2002, Montreal, Quebec, Canada_, pages 731-740. ACM, 2002.
* [44] Kamal Jain and Vijay V. Vazirani. Approximation algorithms for metric facility location and k-median problems using the primal-dual schema and lagrangian relaxation. _J. ACM_, 48(2):274-296, 2001.
* [45] Stefanie Jegelka and Jeff A. Bilmes. Online submodular minimization for combinatorial structures. In _Proceedings of the 28th International Conference on Machine Learning_, ICML 2011, 2011.
* [46] Sham Kakade, Adam Tauman Kalai, and Katrina Ligett. Playing games with approximation algorithms. In _Proceedings of the 39th Annual ACM Symposium on Theory of Computing_, STOC 2007, 2007.
* [47] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. In _J. Comput. Syst. Sci._ Springer, 2003.
* [48] Wouter M. Koolen, Manfred K. Warmuth, and Jyrki Kivinen. Hedging structured concepts. In _the 23rd Conference on Learning Theory, COLT 2010_, 2010.
* [49] Elias Koutsoupias. The k-server problem. _Comput. Sci. Rev._, 3(2):105-118, 2009.
* [50] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. _J. ACM_, 42(5):971-983, 1995.
* [51] Amit Kumar. Constant factor approximation algorithm for the knapsack median problem. In _Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms_, SODA '12, page 824-832. Society for Industrial and Applied Mathematics, 2012.
* [52] Amit Kumar, Yogish Sabharwal, and Sandeep Sen. Linear-time approximation schemes for clustering problems in any dimensions. _J. ACM_, 57(2), 2010.
* [53] Silvio Lattanzi and Sergei Vassilvitskii. Consistent k-clustering. In Doina Precup and Yee Whye Teh, editors, _Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017_, volume 70 of _Proceedings of Machine Learning Research_, pages 1975-1984. PMLR, 2017.
* [54] Shi Li and Ola Svensson. Approximating k-median via pseudo-approximation. _SIAM J. Comput._, 45(2):530-547, 2016.
* 249, 1992.
* [56] M.E.J. Newman. The structure and function of complex networks. _SIAM review_, 45(2):167-256, 2003.
* [57] Romualdo Pastor-Satorras and Alessandro Vespignani. Epidemic Spreading in Scale-Free Networks. _Physical Review Letters_, 86(14):3200-3203, 2001.
* [58] Holakou Rahmanian and Manfred K. Warmuth. Online dynamic programming. In _NIPS_, 2017.
* [59] Juliette Stehle, Nicolas Voirin, Alain Barrat, Ciro Cattuto, Lorenzo Isella, Jean-Francois Pinton, Marco Quaggiotto, Wouter Van den Broeck, Corinne Regis, Bruno Lina, and Philippe Vanhems. High-resolution measurements of face-to-face contact patterns in a primary school. _PLOS ONE_, 6(8), 2011.
* [60] Matthew J. Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. In _22nd Annual Conference on Neural Information Processing Systems_, NIPS 2008, 2008.
* [61] Daiki Suehiro, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Kiyohito Nagano. Online prediction under submodular constraints. In _Algorithmic Learning Theory_, ALT 2012, 2012.
* [62] Eiji Takimoto and Manfred K. Warmuth. Predicting nearly as well as the best pruning of a planar decision graph. In _Theoretical Computer Science_, 2000.
* [63] Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. _J. Mach. Learn. Res._, 2003.
* [64] Chayant Tantipathananandh, Tanya Berger-Wolf, and David Kempe. A framework for community identification in dynamic social networks. In _Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, KDD '07, page 717-726. Association for Computing Machinery, 2007.
* [65] David P. Williamson and David B. Shmoys. _The Design of Approximation Algorithms_. Cambridge University Press, USA, 1st edition, 2011.
* [66] Shota Yasutake, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Masayuki Takeda. Online linear optimization over permutations. In _Proceedings of the 22nd International Conference on Algorithms and Computation_, ISAAC 2011, 2011.
* [67] Neal E. Young. K-medians, facility location, and the chernoff-wald bound. In _Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms_, SODA '00, page 86-95. Society for Industrial and Applied Mathematics, 2000.
## Appendix A Further Related Work
In this chapter of the appendix, we continue our discussion on the literature that relates to this work.
**Efficient Combinatorial Online Learning.** There exists a long line of research studying efficient online learning algorithms in various combinatorial domains (e.g., selection of paths, permutations, binary search trees etc.) [40, 62, 63, 41, 7, 60, 45, 22, 42, 66, 38, 37, 1, 2, 20, 30, 29]. Another related line of work studies _black-box reductions_ converting any \(\alpha\)-approximation (offline) algorithm to an \(\mathcal{O}(\alpha)\)-regret online learning algorithm for a specific class of combinatorial optimization problems called _linear optimization problems_[47, 8, 46, 48, 61, 32, 39, 58, 21, 33, 36]. We remark that a key difference of our setting with the aforementioned works is that in the latter case the learner is not penalized for switching actions from round to round with an additional moving/switching cost. In the context of Problem 1 this means that \(\gamma=0\) which is exactly the setting considered by [31]. As a result, apart from the fact that \(k\)-median does not belong in the class of _linear optimization problems_, the aforementioned _black-box reductions_ do not apply to Problem 1 since they do not account for the moving cost.
**The k-server Problem.** Our work also relates with the rich line of literature on the \(k\)-server problem [50, 17, 49, 9, 19, 12]. In this setting there exists only \(1\) client at each round, while \(1\)-lookahead is assumed, i.e. the request \(R_{t}\) is revealed prior to the action of the algorithm at step \(t\). Moreover in \(k\)-server a facility must be placed in the exact position of the request, leading to a simpler combinatorial structure with respect to Problem 16. However, in the \(k\)-server problem, instead of using the benchmark of _regret_, the more challenging metric of _competitive ratio_ that measures the sub-optimality with respect to the _optimal dynamic solution_ is used. Mostly related to ours is the work of [9] providing the first \(\mathrm{poly}(\log n)\)-competitive algorithm for \(k\)-server by reducing the problem to the special case of HSTs. [9] first design a \(\mathrm{poly}(\log n)\)-competitive algorithm for a fractional version of \(k\)-server at which facilities can be fractionally placed into the vertices of the HST. They then use a randomized rounding scheme to convert the fractional solution into an integral one. The basic difference of the randomized rounding scheme of [9] with the one that we introduce in this work (Algorithm \(\mathrm{Cut\&Round}\)) is that the first provides guarantees only for the moving cost of the facilities while \(\mathrm{Cut\&Round}\) provides guarantees both for the moving cost of the facilities as well as the connection cost of the clients.
Footnote 6: Given offline access to the sequence of requests, the optimal solution for the \(k\)-server can be computed in polynomial-time while the optimal static solution of Problem 1 cannot be approximated in polynomial-time with ratio less than \((1+2/e)\) even under _a-priori_ knowledge of the request sequence (inapproximability of \(k\)-median).
**Consistent \(k\)-Clustering.** Another setting of clustering in the presence of unknown clients is that of _Consistent \(k\)-Clustering_[53, 34, 27]. In this setting, given an _unknown stream of clients_, a set of \(k\) facilities has to be maintained over time so that at any round \(t\), the selected facilities form an approximately optimal solution of the sub-instance consisting of clients appeared in the time interval \(\{1,t\}\). A basic difference of Consistent \(k\)-Clustering with Problem 1 is that in the first case the moving cost is not penalized as long as the number of swaps does not exceed a certain threshold (\(\mathcal{O}(k)\)).
Proof of Theorem 2
In this chapter of the appendix we briefly discuss the details behind Theorem 2 and show how the results of [10] and [24] hold even for the specific definition of HSTs we have considered in Definition 5.
Traditionally, HSTs are not required to be balanced nor are required to have weights that are specifically powers of \(2\). In fact, the seminal work of [10], later improved by [24], states that there exists a randomized procedure such that for every weighted graph \(G(V,E,w)\), it constructs (in polynomial-time) a tree \(\mathcal{T}\) such that:
1. There exists a perfect matching \(\sigma:V\mapsto L(\mathcal{T})\) that maps the vertices of \(G\) to the leaves of \(\mathcal{T}\).
2. For any vertices \(i,j\in V\), their corresponding distance on \(\mathcal{T}\) can only increase, i.e. \(d_{G}(i,j)\leq d_{\mathcal{T}}(\sigma(i),\sigma(j))\).
3. On expectation, distances between vertices are distorted only by a logarithmic factor, i.e. \(\mathbb{E}\left[d_{\mathcal{T}}(\sigma(i),\sigma(j))\right]\leq\mathcal{O}( \log|V|)\cdot d_{G}(i,j)\)
4. The weight of any edge \(e=(v,u)\) between a vertex \(v\in V(\mathcal{T})\) and its parent vertex \(u\) is precisely \(diam(G)\cdot 2^{-dpt(v)}\).
5. The height of \(\mathcal{T}\) satisfies \(h(\mathcal{T})\leq\lceil\log\left(diam(G)\right)\rceil\).
The purpose of this section is to argue that one can easily transform such a tree \(\mathcal{T}\) to match our notion of HSTs (Definition 5), while maintaining the same guarantees for the distortion of the distances. Recall that we have already assumed that the minimum edge weight of \(G\) is \(1\), i.e. \(\min_{e\in E}w_{e}=1\). Furthermore, we can also assume without loss of generality that the diameter of \(G\) is a power of \(2\); if not, simple scaling arguments suffice to transform \(G\) into such a graph by only distorting distances by a constant factor. Thus, we assume that \(diam(G)=2^{d}\) for some \(d\geq 0\).
We start from the tree \(\mathcal{T}\) that the algorithm of [24] generates. Recall that by definition, the weight of an edge \(e=(i,j)\) between some vertex \(i\) and its parent node \(j\) is \(2^{d-dpt(i)}\). In order to balance the tree, we take each leaf vertex \(u\in L(\mathcal{T})\) at depth \(dpt(u)\) and extend it downwards by adding new vertices until it reaches a new depth \(dpt^{\prime}(u)=d\). For every new edge that we add during this process, we maintain that the weight of the edge \(e=(i,j)\) from \(i\) to its parent \(j\) is \(diam(G)\cdot 2^{-dpt(i)}\).
Let \(\mathcal{T}^{\prime}\) be used to denote our modified tree. Clearly, the above construction guarantees \(h(\mathcal{T}^{\prime})=d\). Since by definition \(h(\mathcal{T})\leq\lceil\log\left(diam(G)\right)\rceil=d\), we know that all leaves initially lied at depth at most \(d\), and thus by the end of the above process all leaves will lie at the same level of the tree and have depth \(d\). Thus, we have indeed constructed a balanced tree. Furthermore, since by definition \(dpt(v)=h(\mathcal{T})-lev(v)\), we get that the weight of the edge \(e=(i,j)\) from \(i\) to its parent \(j\) is \(w_{e}=diam(G)\cdot 2^{lev(i)-d}=2^{lev(i)}\). So, the constructed tree indeed satisfies all the requirements of Definition 5 and is a valid HST (according to our definition).
We will now argue that \(\mathcal{T}^{\prime}\) also satisfies all items of Theorem 2. Fist of all, the height of our new tree is precisely \(d\), and thus it is true that \(h(\mathcal{T}^{\prime})\leq\lceil\log\left(diam(G)\right)\rceil\). Furthermore, since we only added edges to the initial tree \(\mathcal{T}\), the distance between any two leaves can only increase. Thus, we get that for any vertices \(i,j\in V\) it holds
\[d_{G}(i,j)\leq d_{\mathcal{T}}(i,j)\leq d_{\mathcal{T}^{\prime}}(i,j)\]
Finally, it remains to upper bound the expected distortion on \(\mathcal{T}^{\prime}\). Recall that by construction of [24], we know that
\[\mathbb{E}\left[d_{\mathcal{T}}(\sigma(i),\sigma(j))\right]\leq\mathcal{O}( \log|V|)\cdot d_{G}(i,j)\]
Since edge lengths decrease by a factor of \(2\) every time we move down the tree, we know that the total length of the path we added in order to move leaf \(i\) from depth \(dpt(i)\) to depth \(d\) is precisely \(1+2+\ldots 2^{dpt(i)-1}\leq 2^{dpt(i)}\). This implies that any distance on \(\mathcal{T}^{\prime}\) can be at most twice the corresponding distance on \(\mathcal{T}\), i.e.
\[d_{\mathcal{T}^{\prime}}(\sigma(i),\sigma(j))\leq 2\cdot d_{\mathcal{T}}( \sigma(i),\sigma(j))\]
which completes the proof.
Proofs of Section 3
In this chapter of the appendix we present all the omitted proofs from Section 3 concerning the basic algorithmic primitives we use in order to establish our main result in Theorem 1.
Roadmap.In section C.1 we establish the connection between Problems 1 and 2 and show that our notion of fractional connection and moving cost collapses with our initial definitions in the case of integral facility placements. Then, in section C.2 we present the proof of Theorem 5 and in section C.3 we present the proof of Theorem 1.
### Establishing the relation between Problems 1 and 2
Fix any HST \(\mathcal{T}\) and let \(\mathcal{FP}(\mathcal{T})\) be the corresponding set of fractional facility placements. In this section, we will establish that in the case of integral facility placements \(y\in\mathcal{FP}(\mathcal{T})\cap\mathds{N}\), the notions of fractional connection cost and fractional moving cost (formally stated in Definitions 7 and 8) collapse to the notions of actual connection and moving costs (formally stated in Definitions 1 and 2) respectively.
Let \(y\in\mathcal{FP}(\mathcal{T})\cap\mathds{N}\) be an integral facility placement. Then, by definition, for each leaf \(v\in L(\mathcal{T})\) we have \(y_{v}\in\{0,1\}\) facilities that are placed on it, and the total amount of placed facilities is \(k\), i.e. \(\sum_{v\in L(\mathcal{T})}y_{v}=k\). Thus, we can associate with any integral facility placement \(y\) a corresponding set
\[F(y)=\{v\in L(\mathcal{T}):y_{v}=1\}\]
such that \(|F(y)|=k\), meaning that \(F(y)\) is a valid facility placement of the leaves of the \(\mathcal{T}\).
In Claim 1 we will establish that for any set of clients, the connection cost under \(F(y)\) is equal to the fractional connection cost under \(y\). Then, in Claim 2 we will establish that the fractional moving cost between \(y\) and \(y^{\prime}\) gives us precisely the moving cost between facility placements \(F(y)\) and \(F(y^{\prime})\) on \(\mathcal{T}\).
**Claim 1**.: _For any integral facility placement \(y\in\mathcal{FP}(\mathcal{T})\cap\mathds{N}\) and any set of clients \(R\subseteq L(\mathcal{T})\), it holds that_
\[f_{R}(y)=C_{R}(F(y))\]
Proof.: Fix any \(y\in\mathcal{FP}(\mathcal{T})\cap\mathds{N}\) and any \(R\subseteq L(\mathcal{T})\). By definition of the connection cost (Definition 1), we have
\[C_{R}(F)=\sum_{j\in R}\min_{i\in F(y)}d_{\mathcal{T}}(i,j)\]
Let's fix a particular client that lies on some leaf \(j\in L(\mathcal{T})\) of \(\mathcal{T}\). Let \(i^{*}=\arg\min_{i\in F(y)}d_{\mathcal{T}}(i,j)\) be the leaf closest to \(j\) that \(F(y)\) places a facility into. Since \(\mathcal{T}\) is an HST and distances increase by a factor of \(2\) as we move up the tree, it is not hard to see that \(i^{*}\) is the leaf in \(F(y)\) whose _lowest common ancestor_ (lca) with \(j\) has the smallest level. Let \(l^{*}=lca(j,i^{*})\). Equivalently, \(l^{*}\) is the minimum-level vertex in \(P(j,r)\) such that \(y_{l^{*}}\geq 1\). Since \(\mathcal{T}\) is balanced, we have that the connection cost of client \(j\) under \(F(y)\) is precisely
\[C_{\{j\}}(F(y))=2\cdot d_{\mathcal{T}}(j,l^{*})=2\cdot\sum_{l=0}^{\mathrm{lev }(l^{*})-1}2^{l}\]
and since by integrality we have that \(y_{v}=0\) for any \(v\in P(j,l^{*})\setminus\{l^{*}\}\) and \(y_{v}\geq 1\) for all \(v\in P(l^{*},r)\), we have
\[C_{\{j\}}(F)=2\cdot\sum_{v\in P(j,r)}2^{\mathrm{lev}(v)}\cdot\max(0,1-y_{v})\]
Summing over all clients \(j\in R\) we get
\[C_{R}(F(y))=\sum_{j\in R}\sum_{v\in P(j,r)}2^{\mathrm{lev}(v)+1}\cdot\max(0,1- y_{v})=f_{R}(y)\]
which concludes the proof.
**Claim 2**.: _For any integral facility placements \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\cap\mathbb{N}\), it holds that_
\[||y-y^{\prime}||_{\mathcal{T}}=\gamma\cdot M_{\mathcal{T}}(F(y),F(y^{\prime}))\]
Proof.: Fix any two integral facility placements \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\cap\mathbb{N}\). By definition of the moving cost (Definition 2), we have that
\[M_{\mathcal{T}}(F(y),F(y^{\prime}))=\min_{\sigma\in\Sigma}\sum_{i\in F(y)}d_{ \mathcal{T}}(i,\sigma(i))\]
where \(\Sigma\) is the set of all possible matchings from the facilities in \(F(y)\) to the facilities in \(F(y^{\prime})\).
In general graphs, the minimum transportation cost can have a very complicated structure and typically requires solving a minimum transportation problem in order to compute it. However, in the special case of HSTs, we are actually able to obtain a very simple expression for this quantity.
Recall that in an HST \(\mathcal{T}\), edge weights increase by a factor of \(2\) every time we move up a level on the tree. Thus, it is always in out interest to move facilities between leaves whose lowest common ancestor is as low as possible. In other words, the matching \(\sigma\) that minimizes the transportation cost from \(F(y)\) to \(F(y^{\prime})\) can be obtained by selecting an arbitrary leaf in \(F(y)\), matching it to the leaf in \(F(y^{\prime})\) with which it shares the _lowest_ lowest common ancestor and then repeating the process for the rest of the leaves.
Now fix any vertex \(v\in V(\mathcal{T})\). Recall that \(y_{v}\) is equal to the number of facilities in \(F(y)\) that are placed in the descendant leaves of \(v\) (respectively for \(y^{\prime}_{v}\)). Thus, if we apply the above (optimal) transportation plan, the number of facilities that will end up traversing the edge from \(v\) to its parent vertex is going to be precisely \(|y_{v}-y^{\prime}_{v}|\). Since the weight of this edge is by definition \(2^{\mathrm{lev}(v)}\), we get that
\[M_{\mathcal{T}}(F(y),F(y^{\prime}))=\sum_{v\in V(\mathcal{T})}2^{\mathrm{lev}( v)}\cdot|y_{v}-y^{\prime}_{v}|\]
and since
\[||y-y^{\prime}||_{\mathcal{T}}=\gamma\cdot\sum_{v\in V(\mathcal{T})}2^{\mathrm{ lev}(v)}\cdot|y_{v}-y^{\prime}_{v}|\]
we have proven the claim.
### Proof of Theorem 5
We will now formally present the proof of Theorem 5, bounding the expected total cost of Algorithm 1. Fix any sequence of clients \(R_{1},\ldots,R_{T}\). Since the random seed \(\alpha\) is selected uniformly at random (Step \(3\) of Algorithm 1), by Item \(1\) of Theorem 4 we get that
\[\mathbb{E}\left[C_{R_{t}}(F_{t})\right]=f_{R_{t}}(y^{t})\]
Moreover since the same random seed \(\alpha\) is used at all rounds \(t\geq 1\), Item \(2\) of Theorem 4 implies that
\[\gamma\cdot\mathbb{E}\left[M_{\mathcal{T}}(F_{t+1},F_{t})\right] \leq 4\cdot||y^{t+1}-y^{t}||_{\mathcal{T}}\]
Thus,
\[\mathbb{E}\left[\sum_{t=1}^{T}C_{R_{t}}(F_{t})+\gamma\cdot\sum_{t =2}^{T}M_{\mathcal{T}}(F_{t},F_{t-1})\right] \leq 4\cdot\left(\sum_{t=1}^{T}f_{R_{t}}(y^{t})+\sum_{t=2}^{T}||y ^{t}-y^{t-1}||_{\mathcal{T}}\right)\] \[\leq 6\cdot\min_{y^{*}\in\mathcal{FP}}\sum_{t=1}^{T}f_{R_{t}}(y^{ *})+\beta\cdot\sqrt{T}\]
where the last inequality follows by Theorem 3 for \(\beta=\mathcal{O}\left(k\cdot|L(\mathcal{T})|^{3/2}\cdot D_{\mathcal{T}}\cdot \max(\gamma,1)\right)\). The proof is concluded by the fact that
\[\min_{y^{*}\in\mathcal{FP}}\sum_{t=1}^{T}f_{R_{t}}(y^{*})\leq\min_{|F^{*}|=k} \sum_{t=1}^{T}C_{R_{t}}(F^{*})\]
which is established in Claim 1 of Appendix C.1, stating that for any placement of \(k\)-facilities \(F\subseteq L(\mathcal{T})\) there exists a corresponding \(y\in\mathcal{FP}(\mathcal{T})\) whose fractional connection cost is equal to \(F\)'s under any client request.
### Proof of Theorem 1
We will now formally present the proof of Theorem 1, bounding the regret of Algorithm 2.
Let \(\mathcal{T}\) be the HST that we randomly embed our graph \(G(V,E,w)\) into. Since \(V=L(\mathcal{T})\), we slightly abuse notation and use \(u\) to refer both to some vertex of \(G\) and to the corresponding leaf of \(\mathcal{T}\). From Theorem 5, we know that the output of Algorithm 1 satisfies
\[\mathbb{E}\left[\sum_{t=1}^{T}C_{R_{t}}^{\mathcal{T}}(F_{t})+ \gamma\cdot\sum_{t=2}^{T}M_{\mathcal{T}}(F_{t},F_{t-1})\right]\leq 6\cdot\min_{|F^{*}|=k}\sum_{t=1}^{T}C_{R_{t}}^{\mathcal{T}}(F^{*})\] \[+\mathcal{O}\left(k\cdot|L(\mathcal{T})|^{3/2}\cdot D_{\mathcal{ T}}\cdot\max(1,\gamma)\right)\cdot\sqrt{T}\]
where we use \(\mathcal{T}\) in the connection and moving cost to indicate that all distances are measured on the HST. Here, the expectation is taken over the random choices of Algorithm 1.
Next, notice that both the connection cost and the moving cost are defined as sum of distances. Thus, the results of Theorem 2 about the distance distortion from \(G\) to \(\mathcal{T}\) clearly apply for these quantities as well, namely
\[C_{R_{t}}^{G}(F_{t})\leq C_{R_{t}}^{\mathcal{T}}(F_{t})\text{ and }\mathbb{E} \left[C_{R_{t}}^{\mathcal{T}}(F_{t})\right]\leq\mathcal{O}(\log|V|)\cdot C_{R_ {t}}^{G}(F_{t})\]
and
\[M_{G}(F_{t},F_{t-1})\leq M_{\mathcal{T}}(F_{t},F_{t-1})\text{ and }\mathbb{E} \left[M_{\mathcal{T}}(F_{t},F_{t-1})\right]\leq\mathcal{O}(\log|V|)\cdot M_{G}( F_{t},F_{t-1})\]
Thus, taking an expectation over the randomness of \(\mathcal{T}\), we finally get that
\[\mathbb{E}\left[\sum_{t=1}^{T}C_{R_{t}}^{G}(F_{t})+\gamma\cdot\sum _{t=2}^{T}M_{G}(F_{t},F_{t-1})\right] \leq\mathcal{O}(\log|V|)\cdot\min_{|F^{*}|=k}\sum_{t=1}^{T}C_{R_ {t}}^{G}(F^{*})\] \[+\mathcal{O}\left(k\cdot|L(\mathcal{T})|^{3/2}\cdot D_{\mathcal{ T}}\cdot\max(1,\gamma)\right)\cdot\sqrt{T}\]
Let \(n=|V|\) and \(D=diam(G)\). From the above, we get that Algorithm 2 is indeed \(\alpha\)-regret for \(\alpha=\mathcal{O}(\log n)\). Furthermore, we have that \(|L(\mathcal{T})|=|V|=n\), and \(D_{\mathcal{T}}=2\cdot(2^{h(\mathcal{T})}-1)\leq 4D\) since \(h(\mathcal{T})\leq\lceil\log D\rceil\). Thus, setting \(\beta=\mathcal{O}(k\cdot n^{3/2}\cdot D\cdot\max(1,\gamma))\), we get that Algorithm 2 has \(\beta\)-additive regret, completing the proof of Theorem 1.
Analysis of FTRL (Proofs of Section 4)
In this chapter of the appendix we present all the omitted proofs from Section 4 concerning our analysis of the _Follow the Regularized Leader_ (\(\mathrm{FTRL}\)) algorithm (Algorithm 3). To avoid repetition, from now on we fix an arbitrary HST \(\mathcal{T}\) and use \(\mathcal{FP}(\mathcal{T})\) to denote the set of all fractional placements of \(k\) facilities on the leaves of \(\mathcal{T}\). We use \(n=|L(\mathcal{T})|\) to denote the number of leaves of \(\mathcal{T}\), \(h=h(\mathcal{T})\) to denote its height and \(D=diam(\mathcal{T})\) to denote its diameter. Since \(\mathcal{T}\) is an HST, we know that its diameter \(D\), i.e. the maximum distance between any two leaves, is precisely \(D=2\cdot(2^{h}-1)\).
To ease notation, let \(w_{v}=2^{\mathrm{lev}(v)}\). For convenience, we remind the reader that our regularizer function \(R_{\mathcal{T}}:\mathcal{FP}(\mathcal{T})\mapsto\mathbb{R}\) is defined as
\[R_{\mathcal{T}}(y)=\sum_{v\neq r}w_{v}\cdot(y_{v}+\delta_{v})\cdot\ln\left( \frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\right)\]
where \(\delta_{v}=k\cdot|L(\mathcal{T})\cap T(v)|/|L(\mathcal{T})|\) is the percentage of leaves that lie on the sub-tree rooted at vertex \(v\) multiplied by \(k\) and \(p(v)\) is the parent of node \(v\). Also, recall that for any \(y\in\mathcal{FP}(\mathcal{T})\) we have defined the norm
\[||y||_{\mathcal{T}}=\gamma\cdot\sum_{v\in V(\mathcal{T})}w_{v}|y_{v}|\]
Roadmap.In Section D.1 we prove Lemma 1, namely the strong convexity of \(R_{\mathcal{T}}\) with respect to \(||\cdot||_{\mathcal{T}}\). Then, in Section D.2 we bound the moving cost of \(\mathrm{FTRL}\), proving Lemma 2. Next, in Section D.3 we bound the connection cost cost of \(\mathrm{FTRL}\), proving Lemma 3. Finally, in Section D.4 we account for approximation errors in the computation of the regularized leader, proving Lemma 4.
### Strong Convexity (Proof of Lemma 1)
The objective of this section is to prove Lemma 1, specifically that for any fractional facility placements \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) it holds that
\[R_{\mathcal{T}}(y^{\prime})\geq R_{\mathcal{T}}(y)+\langle\nabla R_{\mathcal{T }}(y),y^{\prime}-y\rangle+\alpha||y-y^{\prime}||_{\mathcal{T}}^{2}\]
where \(\alpha=(8kD\gamma^{2})^{-1}\).
We begin by computing the gradient of \(R_{\mathcal{T}}\) on any fractional facility placement \(y\in\mathcal{FP}(\mathcal{T})\).
**Claim 3**.: _The partial derivatives of \(R_{\mathcal{T}}\) on any point \(y\in\mathcal{FP}(\mathcal{T})\) are given by_
\[\frac{\partial R_{\mathcal{T}}(y)}{\partial y_{v}}=\left\{\begin{array}{ll}- \frac{w_{v}}{2}&\mbox{for }v=r\\ w_{v}\cdot\ln\left(\frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\right)+w_{v}& \mbox{for }v\in L(\mathcal{T})\\ w_{v}\cdot\ln\left(\frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\right)+\frac{ w_{v}}{2}&\mbox{for }v\notin L(\mathcal{T})\cup\{r\}\end{array}\right.\]
Proof.: Clearly, \(R_{\mathcal{T}}\) is well-defined and differentiable on \(\mathcal{FP}(\mathcal{T})\). For any \(v\neq r\), we compute the partial derivatives of \(R_{\mathcal{T}}(y)\) to obtain
\[\frac{\partial R_{\mathcal{T}}(y)}{\partial y_{v}}=w_{v}\cdot\ln\left(\frac{y _{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\right)+w_{v}-\sum_{v\in\mathrm{cld}(u )}w_{u}\cdot\frac{y_{u}+\delta_{u}}{y_{v}+\delta_{v}}\]
Since \(y\in\mathcal{FP}(\mathcal{T})\), we know \(y_{v}=\sum_{u\in\mathrm{cld}(v)}y_{u}\) and by definition, \(\delta_{v}=\sum_{u\in\mathrm{cld}(v)}\delta_{u}\). Finally, recall that \(w_{u}=w_{v}/2\) for any \(u\in\mathrm{cld}(v)\). By plugging everything in we get
\[\frac{\partial R_{\mathcal{T}}(y)}{\partial y_{v}}=w_{v}\cdot\ln\left(\frac{y _{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\right)+w_{v}-\frac{w_{v}}{2}\cdot 1[v \notin L(\mathcal{T})]\]
for any \(v\neq r\). For the root vertex, using similar arguments we get
\[\frac{\partial R_{\mathcal{T}}(y)}{\partial y_{r}}=-\frac{w_{r}}{2}\]Now that we have calculated the gradient of \(R_{\mathcal{T}}\), we can substitute it into the definition of strong convexity. Specifically, by Claim 3, Lemma 1 states that
\[\sum_{v\neq r}w_{v}\cdot(y^{\prime}_{v}+\delta_{v})\cdot\ln\left(\frac{\frac{y^{ \prime}_{v}+\delta_{v}}{y^{\prime}_{p(v)}+\delta_{p(v)}}}{\frac{y_{v}+\delta_{ v}}{y_{p(v)}+\delta_{p(v)}}}\right)\geq\frac{1}{8kD\gamma^{2}}\cdot||y^{\prime} -y||_{\mathcal{T}}^{2} \tag{1}\]
To ease the presentation, we define quantities
\[f(y^{\prime},y)=\sum_{v\neq r}w_{v}\cdot(y^{\prime}_{v}+\delta_{v})\cdot\ln \left(\frac{\frac{y^{\prime}_{v}+\delta_{v}}{y^{\prime}_{p(v)}+\delta_{p(v)}}}{ \frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}}\right)\]
and
\[h(y^{\prime},y)=\sum_{v\neq r}w_{v}\cdot(y_{p(v)}+\delta_{p(v)})\cdot|\frac{y^ {\prime}_{v}+\delta_{v}}{y^{\prime}_{p(v)}+\delta_{p(v)}}-\frac{y_{v}+\delta_{ v}}{y_{p(v)}+\delta_{p(v)}}|\]
We will prove that \(f(y^{\prime},y)\geq(1/2kD)\cdot h^{2}(y^{\prime},y)\) and that \(h(y^{\prime},y)\geq(1/2\gamma)\cdot||y^{\prime}-y||_{\mathcal{T}}\) in Claims 4 and 5 respectively. Combining these claims, equation (1) clearly holds, completing the proof of Lemma 1.
**Claim 4**.: _For any \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\), it holds that \(f(y^{\prime},y)\geq\frac{1}{2kD}\cdot\left(h(y^{\prime},y)\right)^{2}\)._
Proof.: We begin by establishing some notation. For any \(v\neq r\), let
\[\mu^{\prime}_{v}=w_{v}\cdot(y^{\prime}_{p(v)}+\delta_{p(v)})\cdot\frac{y^{ \prime}_{v}+\delta_{v}}{y^{\prime}_{p(v)}+\delta_{p(v)}}\]
and
\[\mu_{v}=w_{v}\cdot(y^{\prime}_{p(v)}+\delta_{p(v)})\cdot\frac{y_{v}+\delta_{v}} {y_{p(v)}+\delta_{p(v)}}.\]
Then, we have that
\[f(y^{\prime},y) =\sum_{v\neq r}\mu^{\prime}_{v}\cdot\ln\left(\frac{\mu^{\prime}_{ v}}{\mu_{v}}\right)\] \[=\sum_{v\in I}\mu^{\prime}_{v}\cdot\ln\left(\frac{\mu^{\prime}_{ v}}{\mu_{v}}\right)+\sum_{v\in I^{\prime}}\mu^{\prime}_{v}\cdot\ln\left(\frac{ \mu^{\prime}_{v}}{\mu_{v}}\right)\]
where \(I=\{v\neq r:\mu^{\prime}_{v}\geq\mu_{v}\}\) and \(I^{\prime}=\{v\neq r:\mu^{\prime}_{v}<\mu_{v}\}\). By applying the log-sum inequality in both of these terms, we obtain
\[f(y^{\prime},y)\geq(\sum_{v\in I}\mu^{\prime}_{v})\cdot\ln\left(\frac{\sum_{v \in I}\mu^{\prime}_{v}}{\sum_{v\in I}\mu_{v}}\right)+(\sum_{v\in I^{\prime}} \mu^{\prime}_{v})\cdot\ln\left(\frac{\sum_{v\in I^{\prime}}\mu^{\prime}_{v}}{ \sum_{v\in I^{\prime}}\mu_{v}}\right)\]
Next, observe that
\[\sum_{v\neq r}\mu^{\prime}_{v}=\sum_{v\neq r}w_{v}\cdot(y^{\prime}_{v}+\delta_ {v})=2k\cdot(2^{h}-1)=k\cdot D\]
and also
\[\sum_{v\neq r}\mu_{v} =\sum_{v\neq r}w_{v}\cdot(y^{\prime}_{p(v)}+\delta_{p(v)})\cdot \frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\] \[=\sum_{v\notin L(\mathcal{T})}\left(\frac{w_{v}}{2}\cdot(y^{ \prime}_{v}+\delta_{v})\cdot\sum_{u\in dd(v)}\frac{y_{u}+\delta_{u}}{y_{v}+ \delta_{v}}\right)\] \[=\sum_{v\notin L(\mathcal{T})}\frac{w_{v}}{2}\cdot(y^{\prime}_{v} +\delta_{v})\] \[=\frac{1}{2}\cdot 2k\cdot(2^{h+1}-2)\] \[=k\cdot D.\]Let \(B^{\prime}=\sum_{v\in I}\mu^{\prime}_{v}\) and \(B=\sum_{v\in I}\mu_{v}\). Then, we have shown that
\[f(y^{\prime},y)\geq B^{\prime}\cdot\ln\left(\frac{B^{\prime}}{B}\right)+(kD-B^{ \prime})\cdot\ln\left(\frac{kD-B^{\prime}}{kD-B}\right) \tag{2}\]
Our next step is to apply Pinsker's inequality to the above expression. Pinsker's inequality states that for any \(p,q\in(0,1)\), it holds that
\[p\cdot\ln\left(\frac{p}{q}\right)+(1-p)\cdot\ln\left(\frac{1-p}{1-q}\right)\geq 2 \cdot(p-q)^{2}\]
Since \(B\leq kD\) and \(B^{\prime}\leq kD\), we can scale everything in inequality 2 and apply Pinsker's inequality to obtain
\[f(y^{\prime},y)\geq\frac{2}{kD}\cdot(B-B^{\prime})^{2} \tag{3}\]
To complete the proof, we substitute
\[B^{\prime}-B =\sum_{v\in I}(\mu^{\prime}_{v}-\mu_{v})\] \[=\sum_{v\in I}w_{v}\cdot(y^{\prime}_{p(v)}+\delta_{p(v)})\cdot( \frac{y^{\prime}_{v}+\delta_{v}}{y^{\prime}_{p(v)}+\delta_{p(v)}}-\frac{y_{v} +\delta_{v}}{y_{p(v)}+\delta_{p(v)}})\] \[=\sum_{v\notin L(\mathcal{T})}\frac{w_{v}}{2}\cdot(y^{\prime}_{v} +\delta_{v})\cdot\sum_{u\in\operatorname{cld}(v)\cap I}(\frac{y^{\prime}_{u}+ \delta_{u}}{y^{\prime}_{v}+\delta_{v}}-\frac{y_{u}+\delta_{u}}{y_{v}+\delta_{v}})\] \[=\frac{1}{2}\cdot\sum_{v\notin L(\mathcal{T})}\frac{w_{v}}{2} \cdot(y^{\prime}_{v}+\delta_{v})\cdot\sum_{u\in\operatorname{cld}(u)}|\frac{y^ {\prime}_{u}+\delta_{u}}{y^{\prime}_{v}+\delta_{v}}-\frac{y_{u}+\delta_{u}}{y_{ v}+\delta_{v}}|\]
where the last equality follows from the fact that the ratio in the inner sum always sum to \(1\), and thus by only summing over the ones with positive difference we get half of the total sum of absolute differences. By swapping the summation order once again, we get
\[B^{\prime}-B =\frac{1}{2}\cdot\sum_{v\neq r}w_{v}\cdot(y^{\prime}_{p(v)}+ \delta_{p(v)})\cdot|\frac{y^{\prime}_{v}+\delta_{v}}{y^{\prime}_{p(v)}+\delta_ {p(v)}}-\frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}|\] \[=\frac{1}{2}\cdot h(y^{\prime},y)\]
and from inequality (3) we finally get
\[f(y^{\prime},y)\geq\frac{1}{2kD}\cdot\left(h(y^{\prime},y)\right)^{2}\]
as desired.
**Claim 5**.: _For any \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\), it holds that \(||y^{\prime}-y||_{\mathcal{T}}\leq 2\gamma\cdot h(y^{\prime},y)\)._
Proof.: To prove the claim, we first need to establish some extra notation. For any \(y\in\mathcal{FP}(\mathcal{T})\) and \(v\neq r\), let
\[\lambda_{v}(y):=\frac{y_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\]
Furthermore, for any vertex \(v\) and any integer \(i\in[0,h-lev(v)]\), we use \(p(v,i)\) to denote the \(i\)-th ancestor of \(v\) on \(\mathcal{T}\), for example \(p(v,0)=v\), \(p(v,1)=p(v)\) and \(p(v,h-lev(v))=r\).
Recall that by definition, \(y_{r}=\delta_{r}=k\). Thus, if we telescope these terms and let \(m_{v}=h-lev(v)-1\), we clearly have that
\[y_{v}+\delta_{v}=2k\cdot\Pi_{i=0}^{m_{v}}\lambda_{p(v,i)}(y)\]which implies
\[y^{\prime}_{v}-y_{v} =2k\cdot\Pi_{i=0}^{m_{v}}\lambda_{p(v,i)}(y^{\prime})-2k\cdot\Pi_{i=0 }^{m_{v}}\lambda_{p(v,i)}(y)\] \[=2k\cdot\sum_{i=0}^{m_{v}}\lambda_{p(v,0)}(y^{\prime})\cdot\ldots \cdot(\lambda_{p(v,i)}(y^{\prime})-\lambda_{p(v,i)}(y))\cdot\ldots\cdot\lambda_ {p(v,m_{v})}(y)\] \[=2k\cdot\sum_{i=0}^{m_{v}}\frac{y^{\prime}_{v}+\delta_{v}}{y^{ \prime}_{p(v,i)}+\delta_{p(v,i)}}\cdot(\lambda_{p(v,i)}(y^{\prime})-\lambda_{p (v,i)}(y))\cdot\frac{y_{p(v,i+1)}+\delta_{p(v,i+1)}}{2k}\] \[=(y^{\prime}_{v}+\delta_{v})\cdot\sum_{i=0}^{m_{v}}\frac{y_{p(v,i +1)}+\delta_{p(v,i+1)}}{y^{\prime}_{p(v,i)}+\delta_{p(v,i)}}\cdot(\lambda_{p(v,i)}(y^{\prime})-\lambda_{p(v,i)}(y))\]
and from the triangular inequality
\[|y^{\prime}_{v}-y_{v}|\leq(y^{\prime}_{v}+\delta_{v})\cdot\sum_{i=0}^{m_{v}} \frac{y_{p(v,i+1)}+\delta_{p(v,i+1)}}{y^{\prime}_{p(v,i)}+\delta_{p(v,i)}}\cdot |\lambda_{p(v,i)}(y^{\prime})-\lambda_{p(v,i)}(y)| \tag{4}\]
Plugging inequality (4) into the definition of norm \(||\cdot||_{\mathcal{T}}\), we get
\[||y^{\prime}-y||_{\mathcal{T}}\leq\gamma\cdot\sum_{v\neq r}w_{v}\cdot(y^{ \prime}_{v}+\delta_{v})\cdot\left(\sum_{i=0}^{m_{v}}\frac{y_{p(v,i+1)}+\delta_ {p(v,i+1)}}{y^{\prime}_{p(v,i)}+\delta_{p(v,i)}}\cdot|\lambda_{p(v,i)}(y^{ \prime})-\lambda_{p(v,i)}(y)|\right)\]
and by carefully exchanging the summation order, we obtain
\[||y^{\prime}-y||_{\mathcal{T}}\leq\gamma\cdot\sum_{v\neq r}\frac{y_{p(v)}+ \delta_{p(v)}}{y^{\prime}_{v}+\delta_{v}}\cdot|\lambda_{v}(y^{\prime})-\lambda _{v}(y)|\cdot\left(\sum_{u\in T(v)}w_{u}(y^{\prime}_{u}+\delta_{u})\right)\]
Finally, observe that \(\sum_{u\in T(v)}w_{u}y^{\prime}_{u}\leq 2w_{v}y^{\prime}_{v}\). To see this, fix the sub-tree \(T(v)\) rooted at vertex \(v\) and recall that since \(y^{\prime}\in\mathcal{FP}(\mathcal{T})\), the total amount of facilities at each level is \(y^{\prime}_{v}\). Furthermore, the weights \(w_{v}\) decrease by a factor of \(2\) at every level. Using the same arguments, we obtain \(\sum_{u\in T(v)}w_{u}\delta_{u}\leq 2w_{v}\delta_{v}\). Combining everything, we finally get
\[||y^{\prime}-y||_{\mathcal{T}}\leq 2\gamma\cdot\sum_{v\neq r}w_{v}\cdot(y_{p(v) }+\delta_{p(v)})\cdot|\lambda_{v}(y^{\prime})-\lambda_{v}(y)|\]
or equivalently, \(||y^{\prime}-y||_{\mathcal{T}}\leq 2\gamma\cdot h(y^{\prime},y)\).
### Bounding the Moving Cost (Proof of Lemma 2)
In this section we will upper bound the moving cost of \(\mathrm{FTRL}\) by its connection cost. Fix any sequence of client requests \(R_{1},R_{2},\ldots,R_{T}\subseteq L(\mathcal{T})\). Recall that at each step \(t\), \(\mathrm{FTRL}\) selects a fractional facility placement \(y^{t}\) given by
\[y^{t}=\operatorname*{arg\,min}_{y\in\mathcal{FP}(\mathcal{T})}\Phi_{t}(y)\]
where \(\Phi_{t}(y)=\sum_{s=1}^{t-1}f_{R_{s}}(y)+\frac{1}{\eta}\cdot R_{\mathcal{T}}(y)\) is the objective that \(\mathrm{FTRL}\) minimizes over at step \(t\) for \(\eta=(\gamma\cdot\sqrt{nT})^{-1}\). In this section, we prove Lemma 2, by arguing that
\[\sum_{t=2}^{T}||y^{t}-y^{t-1}||_{\mathcal{T}}\leq\frac{1}{2}\cdot\sum_{t=1}^{T }f_{R_{t}}(y^{t})+\frac{\eta}{2\alpha}\cdot T\]
since the proof follows easily by the definitions of \(\eta\) and \(\alpha\).
From Lemma 1 we already know that \(R_{\mathcal{T}}\) is \(\alpha\)-strongly convex with respect to \(||\cdot||_{\mathcal{T}}\) for \(\alpha=(8kD\gamma^{2})^{-1}\). Furthermore, by definition the fractional connection cost
\[f_{R}(y)=\sum_{j\in R}\sum_{v\in P(j,r)}2^{lev(v)+1}\cdot\max{(0,1-y_{v})}\]is clearly convex for any client request \(R\subseteq L(\mathcal{T})\). Thus, it is straight-forward to argue that at any step \(t\), the \(\mathrm{FTRL}\) objective \(\Phi_{t}\) is \(\frac{\alpha}{\eta}\)-strongly convex with respect to \(||\cdot||_{\mathcal{T}}\). Unfortunately, \(f_{R}(y)\) is not differentiable on \(\mathcal{FP}(\mathcal{T})\), but its sub-gradients are well-defined on any \(y\in\mathcal{FP}(\mathcal{T})\). Thus, the strong convexity of \(\Phi_{t}\) provides us with the following guarantee:
**Claim 6**.: _Fix any pair of fractional facility placements \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) and any time step \(t\in[T]\). Let \(g_{t}\in\partial\Phi_{t}(y)\) be any sub-gradient of \(\Phi_{t}\) at \(y\). Then, it holds that_
\[\Phi_{t}(y^{\prime})\geq\Phi_{t}(y)+\langle g_{t},y^{\prime}-y\rangle+\frac{ \alpha}{\eta}\cdot||y-y^{\prime}||_{\mathcal{T}}^{2}\]
Furthermore, since by definition \(y^{t}\) is the (unique) minimizer of \(\Phi_{t}\), the first order optimality conditions on \(\Phi_{t}\) imply that there exists some \(g_{t}^{*}\in\partial\Phi_{t}(y^{t})\) such that \(\langle g_{t}^{*},y-y^{t}\rangle\geq 0\) for any \(y\in\mathcal{FP}(\mathcal{T})\). Claim 6 for \(y=y^{t}\), \(y^{\prime}=y^{t-1}\) and \(g_{t}=g_{t}^{*}\) gives us
\[\Phi_{t}(y^{t-1})\geq\Phi_{t}(y^{t})+\frac{\alpha}{\eta}\cdot||y^{t}-y^{t-1}||_ {\mathcal{T}}^{2}\]
Thus, we have
\[||y^{t}-y^{t-1}||_{\mathcal{T}}^{2} \leq\frac{\eta}{\alpha}\cdot\big{(}\Phi_{t}(y^{t-1})-\Phi_{t}(y^ {t})\big{)}\] \[=\frac{\eta}{\alpha}\cdot\big{(}\Phi_{t-1}(y^{t-1})+f_{R_{t-1}}(y ^{t-1})-\Phi_{t-1}(y^{t})-f_{R_{t-1}}(y^{t})\big{)}\] \[\leq\frac{\eta}{\alpha}\cdot\big{(}f_{R_{t-1}}(y^{t-1})-f_{R_{t-1 }}(y^{t})\big{)}\]
where for the equality we used the fact that \(\Phi_{t}(y)=\Phi_{t-1}(y)+f_{R_{t-1}}(y)\) and for the second inequality we used the fact that \(y^{t-1}\) is by definition the minimizer of \(\Phi_{t-1}\). Finally, since \(f_{R}(y)\geq 0\) for any client request \(R\subseteq L(\mathcal{T})\), we have
\[||y^{t}-y^{t-1}||_{\mathcal{T}} \leq\sqrt{\frac{\eta}{\alpha}\cdot f_{R_{t-1}}(y^{t-1})}\] \[\leq\frac{\eta}{2\alpha}+\frac{1}{2}\cdot f_{R_{t-1}}(y^{t-1})\]
where the last inequality follows from the _Arithmetic Mean - Geometric Mean_ inequality. Summing over all \(t\) completes the proof of Lemma 2.
### Bounding the Connection Cost (Proof of Lemma 3)
In this section we will upper bound the connection cost of \(\mathrm{FTRL}\) by the connection cost of the optimal fractional facility placement in hindsight. This is a standard analysis found in many textbooks, and we present it just for the sake of completeness.
Fix any sequence of client requests \(R_{1},R_{2},\ldots,R_{T}\subseteq L(\mathcal{T})\). Recall that at each step \(t\), \(\mathrm{FTRL}\) selects a fractional facility placement \(y^{t}\) given by
\[y^{t}=\operatorname*{arg\,min}_{y\in\mathcal{FP}(\mathcal{T})}\Phi_{t}(y)\]
where \(\Phi_{t}(y)=\sum_{s=1}^{t-1}f_{R_{s}}(y)+\frac{1}{\eta}\cdot R_{\mathcal{T}} (y)\) is the objective that \(\mathrm{FTRL}\) minimizes over at step \(t\) for \(\eta=(\gamma\cdot\sqrt{nT})^{-1}\). Let \(y^{*}\) be the optimal facility placement in hindsight, i.e.
\[y^{*}=\operatorname*{arg\,min}_{y\in\mathcal{FP}(\mathcal{T})}\sum_{t=1}^{T}f _{R_{t}}(y)\]
In this section we prove Lemma 3,by arguing that
\[\sum_{t=1}^{T}f_{R_{t}}(y^{t})\leq\sum_{t=1}^{T}f_{R_{t}}(y^{*})+\frac{knD}{ \eta}+32kn^{2}D\eta\cdot T\]
and then the proof follows easily by definition of \(\eta\).
In the standard analysis of \(\mathrm{FTRL}\), the following quantities are of special interest as they appear in the final regret guarantees of the algorithm:* Let \(\operatorname{diam}(R_{\mathcal{T}}):=\max_{y,y^{t}\in\mathcal{FP}(\mathcal{T})}|R_{ \mathcal{T}}(y)-R_{\mathcal{T}}(y^{\prime})|\) be the diameter of the regularizer.
* Let \(G_{f}\) be an upper bound on the dual norm of the sub-gradient of the fractional connection cost for any client request, i.e. for any \(R\subseteq L(\mathcal{T})\) and any \(y\in\mathcal{FP}(\mathcal{T})\), there exists some sub-gradient \(g\in\partial f_{R}(y)\) such that \(||g||_{\mathcal{T}}^{*}\leq G_{f}\). Here, \(||\cdot||_{\mathcal{T}}^{*}\) denotes the dual norm of \(||\cdot||_{\mathcal{T}}\).
We begin by presenting the standard analysis of \(\operatorname{FTRL}\) and deriving an expression for the regret guarantee that depends on the above quantities. Recall that at any step \(t\), the \(\operatorname{FTRL}\) objective \(\Phi_{t}\) doesn't include \(f_{R_{t}}\) since the client request \(R_{t}\) is not revealed to the algorithm at the time of decision. We begin by bounding the connection cost of a theoretical algorithm that has access to this information and thus at time \(t\) can pick facility placement \(y^{t+1}\).
**Claim 7**.: _The output of \(\operatorname{FTRL}\) satisfies_
\[\sum_{t=1}^{T}f_{R_{t}}(y^{t+1})\leq\sum_{t=1}^{T}f_{R_{t}}(y^{*})+\frac{ \operatorname{diam}(R_{\mathcal{T}})}{\eta}\]
Proof.: We have
\[\Phi_{t}(y^{t}) =\Phi_{t-1}(y^{t})+f_{R_{t-1}}(y^{t})\] \[\geq\Phi_{t-1}(y^{t-1})+f_{R_{t-1}}(y^{t})\]
where the equality holds by definition of \(\Phi_{t}\) and the inequality holds from the optimality of \(y^{t-1}\) on \(\Phi_{t-1}\). Similarly, we obtain
\[\Phi_{t-1}(y^{t-1})\geq\Phi_{t-2}(y^{t-2})+f_{R_{t-2}}(y^{t-1})\]
If we keep applying this rule, we finally get that
\[\Phi_{t}(y^{t})\geq\sum_{s=1}^{t-1}f_{R_{s}}(y^{s+1})+\Phi_{1}(y^{1})\]
Furthermore, we have \(\Phi_{1}(y^{1})=R_{\mathcal{T}}(y_{1})/\eta\) and \(\Phi_{t}(y^{*})\geq\Phi_{t}(y^{t})\) for all \(t\). Thus, we get
\[\Phi_{T+1}(y^{*})\geq\sum_{t=1}^{T}f_{R_{t}}(y^{t+1})+\frac{1}{\eta}\cdot R_{ \mathcal{T}}(y^{1})\]
or equivalently (by substituting \(\Phi_{T+1}\)'s definition) we have
\[\sum_{t=1}^{T}f_{R_{t}}(y^{t+1})\leq\sum_{t=1}^{T}f_{R_{t}}(y^{*})+\frac{R_{ \mathcal{T}}(y^{*})-R_{\mathcal{T}}(y^{1})}{\eta}\]
The claim follows from the definition of \(\operatorname{diam}(R_{\mathcal{T}})\).
Next, we proceed by bounding the increase in the connection cost that we suffer by choosing \(y^{t}\) instead of \(y^{t+1}\) at time \(t\).
**Claim 8**.: _For any \(t\geq 0\), it holds that \(f_{R_{t}}(y^{t})\leq f_{R_{t}}(y^{t+1})+\eta G_{f}^{2}/\alpha\)._
Proof.: For any client request \(R\subseteq L(\mathcal{T})\), the fractional connection cost function \(f_{R}(y)\) is clearly convex and its sub-gradients are well-defined on \(\mathcal{FP}(\mathcal{T})\). By definition of \(G_{f}\), we know that there exists some sub-gradient \(g\in\partial f_{R_{t}}(y^{t})\) such that \(||g||_{\mathcal{T}}^{*}\leq G_{f}\). Using this sub-gradient, we get
\[f_{R_{t}}(y^{t}) \leq f_{R_{t}}(y^{t+1})+\langle g,y^{t}-y^{t+1}\rangle\] \[\leq f_{R_{t}}(y^{t+1})+||g||_{\mathcal{T}}^{*}\cdot||y^{t}-y^{t+1 }||_{\mathcal{T}}\] \[\leq f_{R_{t}}(y^{t+1})+G_{f}\cdot||y^{t}-y^{t+1}||_{\mathcal{T}}\]where the first inequality is derived from the convexity of the fractional connection cost, the second inequality is an application of Holder's inequality and the third inequality is from \(G_{f}\)'s definition.
As we have already argued in section D.2, we know that for any step \(t\), the \(\mathrm{FTRL}\) objective \(\Phi_{t}\) is \(\alpha/\eta\)-strongly convex with respect to \(||\cdot||_{\mathcal{T}}\). Using the definition of strong convexity, this implies that
\[\Phi_{t+1}(y^{t})\geq\Phi_{t+1}(y^{t+1})+\langle g,y^{t}-y^{t+1} \rangle+\frac{\alpha}{\eta}\cdot||y^{t}-y^{t+1}||_{\mathcal{T}}^{2}\]
for any sub-gradient \(g\in\partial\Phi_{t+1}(y^{t+1})\). Furthermore, since \(y^{t+1}\) is the minimizer of \(\Phi_{t+1}\), we know from the first order optimality conditions that we can select \(g\in\partial\Phi_{t+1}(y^{t+1})\) such that \(\langle g,y-y^{t+1}\rangle\geq 0\) for any \(y\in\mathcal{FP}(\mathcal{T})\). Using such a sub-gradient, we get
\[||y^{t}-y^{t+1}||_{\mathcal{T}}^{2} \leq\frac{\eta}{\alpha}\cdot\left(\Phi_{t+1}(y^{t})-\Phi_{t+1}(y^ {t+1})\right)\] \[=\frac{\eta}{\alpha}\cdot\left(\Phi_{t}(y^{t})+f_{R_{t}}(y^{t})- \Phi_{t}(y^{t+1})-f_{R_{t}}(y^{t+1})\right)\] \[\leq\frac{\eta}{\alpha}\cdot\left(f_{R_{t}}(y^{t})-f_{R_{t}}(y^{t +1})\right)\]
where we just expanded \(\Phi_{t+1}\)'s definition and used the fact that \(y^{t}\) is the minimizer of \(\Phi_{t}\).
Combining everything, we finally obtain
\[f_{R_{t}}(y^{t})-f_{R_{t}}(y^{t+1})\leq G_{f}\cdot\sqrt{\frac{ \eta}{\alpha}\cdot\left(f_{R_{t}}(y^{t})-f_{R_{t}}(y^{t+1})\right)}\]
and the claim follows.
We complete the analysis of \(\mathrm{FTRL}\) by combining Claims 7 and 8 in order to obtain the following regret guarantee:
**Claim 9**.: _The output of \(\mathrm{FTRL}\) satisfies_
\[\sum_{t=1}^{T}f_{R_{t}}(y^{t})\leq\sum_{t=1}^{T}f_{R_{t}}(y^{*}) +\frac{diam(R_{\mathcal{T}})}{\eta}+\frac{\eta G_{f}^{2}}{\alpha}\cdot T\]
It remains to substitute the specific values of the parameters that appear in the regret guarantee. We have already proven in section D.1 that \(R_{\mathcal{T}}\) is \(\alpha\)-strongly convex with respect to \(||\cdot||_{\mathcal{T}}\) for \(\alpha=(8kD\gamma^{2})^{-1}\). Next, we provide an upper bound for the diameter of the regularizer.
**Claim 10**.: _It holds that \(\mathrm{diam}(R_{\mathcal{T}})\leq knD\)._
Proof.: Fix any \(y\in\mathcal{FP}(\mathcal{T})\). Be definition, we know that \(y_{v}\leq y_{p(v)}\) and \(\delta_{v}\leq\delta_{p(v)}\) for any \(v\neq r\). Thus, the expressions inside the logarithms of the regularizer are always at most \(1\), which implies that \(R_{\mathcal{T}}(y)\leq 0\). Furthermore, for any \(\alpha,\beta>0\) it holds that \(\alpha-\beta\leq\alpha\cdot\ln\left(\alpha/\beta\right)\). Using this inequality, we get that
\[R_{\mathcal{T}}(y)\geq\sum_{v\neq r}w_{v}\cdot\left(y_{v}+ \delta_{v}-y_{p(v)}-\delta_{p(v)}\right)\]
Fix any level \(l\in[0,h-1]\) and let \(V_{l}=\{v\in V(\mathcal{T}):lev(v)=l\}\) denote the set of vertices of the HST at level \(l\). Since \(y\in\mathcal{FP}(\mathcal{T})\), we know that \(\sum_{v\in V_{l}}y_{v}=k\), and by definition of \(\delta\)'s we know that \(\sum_{v\in V_{l}}\delta_{v}=k\) as well. Furthermore, we know that \(\sum_{v\in V_{l}}y_{p(v)}\leq n\cdot\sum_{v\in V_{l+1}}y_{v}=n\cdot k\) since any vertex \(v\) can have at most \(n\) (i.e. the total number of leaves) children. Using the same argument,we have \(\sum_{v\in V_{l}}\delta_{p(v)}\leq n\cdot\sum_{v\in V_{l+1}}y_{v}=n\cdot k\). Thus, combining everything we obtain
\[R_{\mathcal{T}}(y) \geq\sum_{v\neq r}w_{v}\cdot(y_{v}+\delta_{v}-y_{p(v)}-\delta_{p( v)})\] \[=\sum_{l=0}^{h-1}\sum_{v\in V_{l}}2^{l}\cdot(y_{v}+\delta_{v}-y_{p (v)}-\delta_{p(v)})\] \[\geq\sum_{l=0}^{h-1}2^{l}\cdot(2k-2kn)\] \[=2k(1-n)(2^{h}-1)\] \[=k(1-n)D.\]
which proves our claim.
Finally, we only need to find an upper bound for \(G_{f}\). We begin by computing a set of sub-gradients for the fractional connection cost function.
**Claim 11**.: _Fix any client request \(R\subseteq L(\mathcal{T})\) and any \(y\in\mathcal{FP}(\mathcal{T})\). Define the vector \(g^{R,y}\in\mathbb{R}^{|V(\mathcal{T})|}\) such that_
\[g^{R,y}_{v}=\left\{\begin{array}{ll}0&\text{if }y_{v}\geq 1\\ -2^{|ev(v)+1}\cdot|T(v)\cap R|&\text{if }y_{v}<1\end{array}\right.\]
_Then, \(g^{R,y}\in\partial f_{R}(y)\), i.e. \(g^{R,y}\) is a sub-gradient of \(f_{R}\) on point \(y\)._
Proof.: Fix any client request \(R\subseteq L(\mathcal{T})\). By definition of the fractional connection cost on facility placement \(y\in\mathcal{FP}(\mathcal{T})\), we have
\[f_{R}(y)=\sum_{j\in R}\sum_{v\in P(j,r)}2^{|ev(v)+1}\cdot\max\left(0,1-y_{v}\right)\]
where \(P(j,r)\) denotes the unique path from leaf \(j\in L(\mathcal{T})\) to the root \(r\). This is clearly a convex function on \(\mathcal{FP}(\mathcal{T})\) and thus the sub-gradients of \(f_{R}\) are well-defined. Fix any \(v\in V(\mathcal{T})\). We distinguish between two cases.
* If \(y_{v}<1\), then the partial derivative of \(f_{R}(y)\) is well-defined and given by \[\frac{\partial f_{R}(y)}{\partial y_{v}}=-2^{lev(v)+1}\cdot|T(v)\cap R|\] where \(T(v)\) is the set of vertices on the sub-tree rooted at vertex \(v\).
* If \(y_{v}\geq 1\), then clearly it doesn't contribute to \(f_{R}(y)\). Using standard calculus, it is not hard to argue that in this case there exists a sub-gradient of \(f_{R}(y)\) whose coordinate corresponding to \(v\) is 0. Thus, we have argued that that \(g^{R,y}\) is a valid sub-gradient of \(f_{R}\) on point \(y\).
Finally, we provide an upper bound on the dual-norm of the sub-gradients that we computed on Claim 11.
**Claim 12**.: _For any \(y\in\mathcal{FP}(\mathcal{T})\) and any \(R\subseteq L(\mathcal{T})\), it holds that \(||g^{R,v}||_{\mathcal{T}}^{*}\leq\frac{2n}{\gamma}\)._
Proof.: Recall that we have defined the moving cost norm as
\[||y||_{\mathcal{T}}=\gamma\cdot\sum_{v\in V(\mathcal{T})}w_{v}\cdot y_{v}\]which is basically a weighted \(l_{1}\)-norm with weights \(\gamma\cdot w_{v}\). It is well-known that the dual of the \(l_{1}\)-norm is the \(l_{\infty}\) norm. Similarly, the dual of the weighted \(l_{1}\)-norm is a weighted \(l_{\infty}\) norm with inverse weights, i.e. \(||\cdot||^{*}=l_{\infty}((\gamma w)^{-1})\). Thus, we have
\[||x||_{\mathcal{T}}^{*}=\max_{v}\frac{|x_{v}|}{\gamma\cdot w_{v}}\]
Using the calculation of the sub-gradients from Claim 11 and that \(R\subseteq L(\mathcal{T})\) and thus \(|R|\leq n\), we immediately get the claim.
Claim 10 provides us with an expression for \(\mathrm{diam}(R_{\mathcal{T}})\) and Claim 12 provides us with an expression for \(G_{f}\). Plugging everything in into Claim 9, we complete the proof of Lemma 3.
### Incorporating approximation errors (Proof of Lemma 4)
Fix any sequence of client requests \(R_{1},R_{2},\ldots,R_{T}\subseteq L(\mathcal{T})\). Recall that at each step \(t\), \(\mathrm{FTRL}\) selects a fractional facility placement \(y^{t}\) given by
\[y^{t}=\operatorname*{arg\,min}_{y\in\mathcal{FP}(\mathcal{T})}\Phi_{t}(y)\]
where \(\Phi_{t}(y)=\sum_{s=1}^{t-1}f_{R_{s}}(y)+\frac{1}{\eta}\cdot R_{\mathcal{T}}(y)\) is the objective that \(\mathrm{FTRL}\) minimizes over at step \(t\) for \(\eta=(\gamma\cdot\sqrt{nT})^{-1}\).
Now, assume that instead of minimizing \(\Phi_{t}(y)\) over \(\mathcal{FP}(\mathcal{T})\) to compute \(y^{t}\), we are only able to compute a fractional facility placement \(z^{t}\in\mathcal{FP}(\mathcal{T})\) such that \(\Phi_{t}(z^{t})\leq\Phi_{t}(y^{t})+\epsilon\) for some \(\epsilon>0\).
**Claim 13**.: _For any step \(t\), it holds that_
\[||z^{t}-y^{t}||_{\mathcal{T}}\leq\sqrt{\epsilon\cdot\frac{\eta}{\alpha}}\]
Proof.: As we have already argued in section D.2, we know that for any step \(t\), the \(\mathrm{FTRL}\) objective \(\Phi_{t}\) is \(\alpha/\eta\)-strongly convex with respect to \(||\cdot||_{\mathcal{T}}\). Combining this with the first order optimality condition for \(\Phi_{t}\) on \(y_{t}\), we get
\[\Phi_{t}(z^{t})\geq\Phi_{t}(y^{t})+\frac{\alpha}{\eta}\cdot||z^{t}-y^{t}||_{ \mathcal{T}}^{2}\]
which implies that
\[||z^{t}-y^{t}||_{\mathcal{T}}\leq\sqrt{\epsilon\cdot\frac{\eta}{\alpha}}\]
Using Claim 13, we can easily bound both the connection and the moving cost of the approximated \(\mathrm{FTRL}\) solutions.
* For the connection cost, recall that the fractional connection cost function \(f_{R_{t}}\) at step \(t\) is convex, which implies that \[f_{R_{t}}(z^{t})\leq f_{R_{t}}(y^{t})+\langle g,z^{t}-y^{t}\rangle\] for some \(g\in\partial f_{R_{t}}(z^{t})\). Using Holder's inequality to upper bound the inner-product and using the upper bound of Claim 12 for the dual norm of the sub-gradients of \(f_{R_{t}}\), we get that \[f_{R_{t}}(z^{t})\leq f_{R_{t}}(y^{t})+\frac{2n}{\gamma}\cdot||z^{t}-y^{t}||_{ \mathcal{T}}\] and finally from Claim 13 we get that \[f_{R_{t}}(z^{t})\leq f_{R_{t}}(y^{t})+\frac{2n}{\gamma}\cdot\sqrt{\epsilon \cdot\frac{\eta}{\alpha}}\]* For the moving cost, recall it suffices to use the triangular inequality that \(||\cdot||_{\mathcal{T}}\) (as a norm) satisfies: \[||z^{t}-z^{t-1}||_{\mathcal{T}} \leq||z^{t}-y^{t}||_{\mathcal{T}}+||y^{t}-y^{t-1}||_{\mathcal{T}}+ ||y^{t-1}-z^{t-1}||_{\mathcal{T}}\] \[\leq||y^{t}-y^{t-1}||_{\mathcal{T}}+2\cdot\sqrt{\epsilon\cdot \frac{\eta}{\alpha}}\] The proof of Lemma 4 follows easily by plugging in \(\eta=(\gamma\cdot\sqrt{nT})^{-1}\), \(\alpha=(8kD\gamma^{2})^{-1}\) and \(\epsilon=\mathcal{O}(1/\sqrt{T})\).
### Implementation of Projected Mirror Descent
We conclude this section by considering the _Projected Mirror Descent_ update step, namely
\[y^{\prime}=\operatorname*{arg\,min}_{y^{*}\in\mathcal{FP}(\mathcal{T})}\left[ \eta\cdot\langle c,y^{*}\rangle+\cdot D_{R_{\mathcal{T}}}(y^{*},y)\right]\]
that takes as input a fractional facility placement \(y\in\mathcal{FP}(\mathcal{T})\) and returns some other \(y^{\prime}\in\mathcal{FP}(\mathcal{T})\) that minimizes a linear cost under vector \(c\) plus the Bregman Divergence between the initial and the new point under regularizer \(R_{\mathcal{T}}\). Here, \(\eta>0\) is a tuning parameter that balances the dynamics between the linear cost and the Bregman Divergence.
By letting \(c\) be the sub-gradient of the fractional connection cost over the observed sequence of clients, we can use this update step in order to approximate the FTRL objective; this is, in fact, the implementation we did for our experimental evaluation of Algorithm 2. In this section we will argue that the special structure of \(R_{\mathcal{T}}\) allows us to compute the update step in linear (to the size of the HST) time.
By definition of the Bregman Divergence, we have
\[D_{R_{\mathcal{T}}}(x,y)=R_{\mathcal{T}}(x)-R_{\mathcal{T}}(y)-\langle\nabla R_ {\mathcal{T}}(y),x-y\rangle\]
Substituting everything, we get that the update step of _Projected Mirror Descent_ can be written as
\[y^{\prime}=\operatorname*{arg\,min}_{y^{*}\in\mathcal{FP}(\mathcal{T})}F(y^{*})\]
for
\[F(y^{*})=\eta\cdot\sum_{v}c_{v}\cdot y^{*}_{v} +\sum_{v\neq r}w_{v}\cdot(y^{*}_{v}+\delta_{v})\cdot\ln\left( \frac{y^{*}_{v}+\delta_{v}}{y^{*}_{p(v)}+\delta_{p(v)}}\right)\] \[-\sum_{v\neq r}w_{v}\cdot(y_{v}+\delta_{v})\cdot\ln\left(\frac{y_ {v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}\right)\] \[-\sum_{v\neq r}\left(w_{v}\cdot\ln\left(\frac{y_{v}+\delta_{v}}{ y_{p(v)}+\delta_{p(v)}}\right)+\frac{w_{v}}{2}+\frac{w_{v}}{2}\cdot\mathbbm{1}[v \in L(\mathcal{T})]\right)(y^{*}_{v}-y_{v})\] \[-\frac{w_{r}}{2}(y^{*}_{r}-y_{r})\]
It is always the case that we update \(y^{\prime}\) from some \(y\in\mathcal{FP}(\mathcal{T})\), so we can simplify the above expression to get
\[F(y^{*})=\eta\cdot\sum_{v}c_{v}\cdot y^{*}_{v} +\sum_{v\neq r}w_{v}\cdot(y^{*}_{v}+\delta_{v})\cdot\ln\left( \frac{\frac{y^{*}_{v}+\delta_{v}}{y_{p(v)}+\delta_{p(v)}}}{\frac{y_{v}+\delta _{v}}{y_{p(v)}+\delta_{p(v)}}}\right)\] \[-\sum_{v}\frac{w_{v}}{2}\cdot(1+\mathbbm{1}[v\in L(\mathcal{T})]) \cdot(y^{*}_{v}-y_{v})\]Recall that by definition, \(\mathcal{FP}(T)\) is the polytope
\[\mathcal{FP}(\mathcal{T})=\begin{cases}y\in\mathbb{R}^{|V(\mathcal{T})|}:&y_{v}= \sum_{u\in\operatorname{cld}(v)}y_{u}&v\notin L(\mathcal{T})\\ y_{v}\in[0,1]&v\in L(\mathcal{T})\\ y_{r}=k&\end{cases}\]
Since our objective is to minimize function \(F(\cdot)\) over \(\mathcal{FP}(\mathcal{T})\), we can write down the KKT optimality conditions to obtain the following conditions about the minimizer \(y^{*}\):
\[\frac{y^{*}_{v}+\delta_{v}}{y^{*}_{p(v)}+\delta_{p(v)}}=\frac{y_{v}+\delta_{v} }{y_{p(v)}+\delta_{p(v)}}\cdot\exp\left(\frac{1}{w_{v}}(\mu_{p(v)}-\mu_{v}- \eta c_{v})\right)\]
where \(\mu_{v}\) is the Lagrange multiplier for constraint \(y_{v}=\sum_{u\in\operatorname{cld}(v)}y_{u}\) and \(\mu_{v}=0\) for \(v\in L(\mathcal{T})\). To complete our computation of \(y^{*}\), it remains to compute the Lagrange multipliers \(\mu\).
Since \(y^{*}\in\mathcal{FP}(\mathcal{T})\), it is not hard to verify that for any \(v\notin L(\mathcal{T})\) it holds
\[\sum_{u\in\operatorname{cld}(v)}\frac{y^{*}_{u}+\delta_{u}}{y^{*}_{v}+\delta_{ v}}=1\]
and using the KKT optimality condition, this implies that for any \(v\notin L(\mathcal{T})\)
\[\sum_{u\in\operatorname{cld}(v)}\frac{y_{u}+\delta_{u}}{y_{v}+\delta_{v}}\cdot \exp\left(\frac{1}{w_{u}}(\mu_{v}-\mu_{u}-\eta c_{u})\right)=1\]
or equivalently, since \(w_{v}=2w_{u}\) for all \(u\in\operatorname{cld}(v)\),
\[\mu_{v}=-\frac{w_{v}}{2}\cdot\ln\left(\sum_{u\in\operatorname{cld}(v)}\frac{y _{u}+\delta_{u}}{y_{v}+\delta_{v}}\cdot\exp\left(-\frac{\mu_{u}+\eta c_{u}}{w_ {u}}\right)\right)\]
Thus, starting from \(\mu_{v}=0\) on the leaves, this expression provides as a bottom-up algorithm to compute all the Lagrange multipliers \(\mu\). Using these multipliers and the KKT optimality conditions, we can then easily compute the ratios
\[\frac{y^{*}_{v}+\delta_{v}}{y^{*}_{p(v)}+\delta_{p(v)}}=\frac{y_{v}+\delta_{v }}{y_{p(v)}+\delta_{p(v)}}\cdot\exp\left(\frac{1}{w_{v}}(\mu_{p(v)}-\mu_{v}- \eta c_{v})\right)\]
for all vertices \(v\neq r\). Finally, we can start from the root vertex \(r\), for which we know that \(y^{*}_{r}=k\), and cascade these ratios downwards until we reach the leaves and we have compute all entries of \(y^{*}\). Clearly, this is all done in linear time to the number of vertices.
Intuitively, this update step can be interpreted as an application of the Multiplicative Weights Update algorithm on every parent vertex \(v\) that decides how its mass should be split to its children. We repeat this process in a bottom-up manner, and then we simply start with \(k\) facilities on the root and begin splitting them based on these ratios while moving downwards.
Analysis of Cut\(\&\)Round (Proofs of Section 5)
In this chapter of the appendix we present all the omitted proofs from Section 5 concerning our online rounding scheme \(\mathrm{Cut}\&\mathrm{Round}\). To avoid repetition, from now on we fix an arbitrary HST \(\mathcal{T}\) and use \(\mathcal{FP}(\mathcal{T})\) to denote the set of all fractional placements of \(k\) facilities on the leaves of \(\mathcal{T}\).
Roadmap.In section E.1, we argue about the correctness of \(\mathrm{Cut}\&\mathrm{Round}\); namely, we show that no matter the input, \(\mathrm{Cut}\&\mathrm{Round}\) always returns a set of \(k\)-leaves of \(\mathcal{T}\) where the facilities are placed. Then, in section E.2 we establish the main property of \(\mathrm{Cut}\&\mathrm{Round}\) and prove Lemma 5. Finally, in section E.3 we analyze the expected connection cost of \(\mathrm{Cut}\&\mathrm{Round}\)'s output and prove Item 1 of Theorem 4 (Lemma 6) while in section E.4 we analyze the expected moving cost of \(\mathrm{Cut}\&\mathrm{Round}\)'s output and prove Item 2 of Theorem 4 (Lemma 7).
### Correctness of Cut\(\&\)Round
We begin by proving the correctness of \(\mathrm{Cut}\&\mathrm{Round}\). Fix any \(y\in\mathcal{FP}(\mathcal{T})\) and any set of thresholds \(\alpha\in[0,1]^{|V(\mathcal{T})|}\). Let \(F=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y,\alpha)\). In this section, we will prove that \(|F|=k\), i.e. we will argue that \(\mathrm{Cut}\&\mathrm{Round}\) always returns a set of \(k\) leaves at which facilities must be placed, as it is expected to. In order to show this, we will need to analyze the \(Y_{v}\) variables produced by \(\mathrm{Cut}\&\mathrm{Round}\).
**Claim 14**.: _For any leaf \(v\in L(\mathcal{T})\), it holds that \(Y_{v}\in\{0,1\}\)._
Proof.: Observe that for any \(v\in V(\mathcal{T})\), sub-routine \(\mathrm{Alloc}\) sets \(Y_{v}\) to either \(\lfloor y_{v}\rfloor\) or \(\lfloor y_{v}\rfloor+1\). By definition of \(\mathcal{FP}(\mathcal{T})\), we have \(y_{v}\in[0,1]\) for each leaf \(v\in L(\mathcal{T})\). We distinguish between two different cases. If \(y_{v}\in[0,1)\), then clearly \(Y_{v}\in\{0,1\}\). If \(y_{v}=1\), then \(\delta(y_{v})=0\) and thus \(\mathrm{Alloc}\) will always set \(Y_{v}=\lfloor y_{v}\rfloor=1\). Thus, the claim holds for all leaves \(v\in L(\mathcal{T})\).
**Claim 15**.: _Let \(v\notin L(\mathcal{T})\) be any non-leaf vertex. Then, \(Y_{v}=\sum_{u\in\mathrm{cld}(v)}Y_{u}\)._
Proof.: Fix any non-leaf vertex \(v\notin L(\mathcal{T})\). We will analyze the inner loop of \(\mathrm{Cut}\&\mathrm{Round}\) that iterates over \(v\)'s children. Initially, \(\mathrm{Cut}\&\mathrm{Round}\) sets \(Y_{rem}=Y_{v}\) and \(y_{rem}=y_{v}\). Then, we proceed to iteratively call \(\mathrm{Alloc}\), once per child vertex of \(v\). Each time \(\mathrm{Alloc}\) assigns some value \(Y_{u}\) to a child vertex \(u\in\mathrm{cld}(v)\), we update \(Y_{rem}\) to \(Y_{rem}-Y_{u}\); thus, to prove our claim it suffices to argue that after we update the last child vertex, we have \(Y_{rem}=0\).
Since by definition of sub-routine \(\mathrm{Alloc}\) we know that \(Y_{v}\in\{\lfloor y_{v}\rfloor,\lfloor y_{v}\rfloor+1\}\), we know that initially (before any child vertex is assigned a value \(Y_{u}\)) it holds that \(Y_{rem}\in\{\lfloor y_{rem}\rfloor,\lfloor y_{rem}\rfloor+1\}\). In fact, a simple case analysis over the decision tree of sub-routine \(\mathrm{Alloc}\) suffices to see that this invariant holds not only at the beginning, but even after we begin assigning values to the child vertices and update \(Y_{rem}\) and \(y_{rem}\).
Since \(y\in\mathcal{FP}(\mathcal{T})\), we know that \(y_{v}=\sum_{u\in\mathrm{cld}(v)}y_{u}\) and thus \(y_{rem}=y_{u}\) at the time we iterate over the last child vertex \(u\in\mathrm{cld}(v)\). Furthermore, from the above discussion we know that \(Y_{rem}\in\{\lfloor y_{u}\rfloor,\lfloor y_{u}\rfloor+1\}\). Since \(\delta(y_{u})=\delta(y_{rem})\), it is easy to verify that in any case \(\mathrm{Alloc}\) sets \(Y_{u}=Y_{rem}\) and thus after the last update we have \(Y_{rem}=0\), as desired.
Proof of Correctness.Recall that by definition, the output of \(\mathrm{Cut}\&\mathrm{Round}\) is \(F=\{v\in L(\mathcal{T}):Y_{v}=1\}\). Since from Claim 14 we know that \(Y_{v}\in\{0,1\}\) for all \(v\in L(\mathcal{T})\), this implies that \(|F|=\sum_{v\in L(\mathcal{T})}Y_{v}\). We apply Claim 15 to the root vertex \(r\), then again to each \(u\in\mathrm{cld}(r)\) and so on until we reach the leaves. This gives us that \(Y_{r}=\sum_{v\in L(\mathcal{T})}Y_{v}\) and thus \(|F|=Y_{r}\). Since by definition \(\mathrm{Cut}\&\mathrm{Round}\) sets \(Y_{r}=k\), we have proven that \(|F|=k\) as desired.
### Proof of Lemma 5 (Computing the Allocation Probabilities)
In this section, we formally prove the main property of algorithm \(\mathrm{Cut\&Round}\), as stated in Lemma 5. Fix any fractional facility placement \(y\in\mathcal{FP}(\mathcal{T})\) and let \(\alpha_{v}\sim\mathrm{Unif}(0,1)\) be independent uniformly random thresholds for all \(v\in V(\mathcal{T})\). Let \(F=\mathrm{Cut\&Round}(\mathcal{T},y,\alpha)\) be the output of algorithm \(\mathrm{Cut\&Round}\) on this set of inputs. Recall that algorithm \(\mathrm{Cut\&Round}\) sets the variables \(Y_{v}\) during its execution, for all \(v\in V(\mathcal{T})\). As we have already discussed, \(Y_{v}\) is the total number of facilities in \(F\) on the leaves of the sub-tree rooted at \(v\), i.e. \(Y_{v}=|T(v)\cap F|\). We will prove that for any \(v\in V(\mathcal{T})\), we have
\[Y_{v}=\left\{\begin{array}{ll}\lfloor y_{v}\rfloor&\text{with probability }1- \delta(y_{v})\\ \lfloor y_{v}\rfloor+1&\text{with probability }\delta(y_{v})\end{array}\right.\]
We begin by writing down the following property for sub-routine \(\mathrm{Alloc}\):
**Claim 16**.: _Fix any fractional facility placement \(y\in\mathcal{FP}(\mathcal{T})\) and let \(\alpha_{v}\sim\mathrm{Unif}(0,1)\) for all \(v\in V(\mathcal{T})\). For any vertex \(u\in V(\mathcal{T})\) of \(\mathcal{T}\), let \(Y_{u}=\mathrm{Alloc}(y_{u},y_{rem},Y_{rem},\alpha_{u})\) be the number of facilities assigned to the sub-tree of \(u\) by Line 8 of Algorithm \(\mathrm{Cut\&Round}\) (Algorithm 4). Then,_
\[\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}\rfloor\right]=\left\{\begin{array} []{ll}1&\text{if }Y_{rem}=\lfloor y_{rem}\rfloor\text{and }\delta(y_{u})\leq \delta(y_{rem})\\ \frac{1-\delta(y_{u})}{1-\delta(y_{rem})}&\text{if }Y_{rem}=\lfloor y_{rem} \rfloor\text{and }\delta(y_{u})>\delta(y_{rem})\\ 0&\text{if }Y_{rem}\neq\lfloor y_{rem}\rfloor\text{and }\delta(y_{u})> \delta(y_{rem})\\ \frac{\delta(y_{rem})-\delta(y_{u})}{\delta(y_{rem})}&\text{if }Y_{rem}\neq \lfloor y_{rem}\rfloor\text{and }\delta(y_{u})\leq\delta(y_{rem})\end{array}\right.\]
Proof.: This claim is a direct consequence of sub-routine \(\mathrm{Alloc}\)'s description (Algorithm 5) and the fact that \(\alpha_{v}\sim\mathrm{Unif}(0,1)\) for all \(v\in V(\mathcal{T})\).
Using this claim, we are now ready to prove Lemma 5.
Proof of Lemma 5.We prove the lemma via a top-down induction on the vertices of \(\mathcal{T}\) (decreasing level order). For the root vertex, we know that since \(y\in\mathcal{FP}(\mathcal{T})\) we have \(y_{r}=k\) and also by definition of \(\mathrm{Cut\&Round}\) we have \(Y_{r}=k\). Thus, we get that \(Y_{r}=y_{r}=\lfloor y_{r}\rfloor\) with probability \(1-\delta(y_{r})=1\) and the claim holds. Now, fix any non-leaf vertex \(v\notin L(\mathcal{T})\) and assume that \(Y_{v}=\lfloor y_{v}\rfloor\) with probability \(1-\delta(y_{v})\) and \(Y_{v}=\lfloor y_{v}\rfloor+1\) with probability \(\delta(y_{v})\). To complete our induction, we will now proceed to prove the claim for all the children vertices of \(v\).
We begin by proving the claim for the first child of vertex \(v\), and then we will show how the same arguments extend for all its children. Let \(u\in\mathrm{cld}(v)\) be the _first_ child vertex of \(v\) that \(\mathrm{Cut\&Round}\) iterates over. Then, by definition of \(\mathrm{Cut\&Round}\) we have that \(Y_{rem}=Y_{v}\) and \(y_{rem}=y_{v}\). Using the inductive hypothesis on \(v\), this implies that \(Y_{rem}=\lfloor y_{rem}\rfloor\) with probability \(1-\delta(y_{rem})\) and \(Y_{rem}=\lfloor y_{rem}\rfloor+1\) with probability \(\delta(y_{rem})\). Conditioning on the value of \(Y_{rem}\), we get
\[\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}\rfloor\right] =\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}\rfloor\mid Y_{rem}= \lfloor y_{rem}\rfloor\right]\cdot(1-\delta(y_{rem}))\] \[+\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}\rfloor\mid Y_{rem}= \lfloor y_{rem}\rfloor+1\right]\cdot\delta(y_{rem})\]
We distinguish between two different cases based on whether \(\delta(y_{u})\leq\delta(y_{rem})\) or \(\delta(y_{u})>\delta(y_{rem})\). In any case, we can use Claim 16 to substitute the conditional probabilities on the above expression and easily get that
\[\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}\rfloor\right]=1-\delta(y_{u})\]
Thus, we have already proven the claim for the first child of \(v\). However, to complete our induction, we need to prove the claim for all children of \(v\) and not just the first one. The only property that we used and holds specifically for the first child was that \(Y_{rem}=\lfloor y_{rem}\rfloor\) with probability \(1-\delta(y_{rem})\) and \(Y_{rem}=\lfloor y_{rem}\rfloor+1\) with probability \(\delta(y_{rem})\). Let \(Y^{\prime}_{rem}\) and \(y^{\prime}_{rem}\) be the updated remaining facilities after the value \(Y_{u}\) of the first child has been assigned. If we can prove that \(Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}\rfloor\) with probability \(1-\delta(y^{\prime}_{rem})\) and \(Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}\rfloor+1\) with probability \(\delta(y^{\prime}_{rem})\), then we can keep applying the same argument and inductively prove the claim for all the children of \(v\).
By definition, we have that \(Y^{\prime}_{rem}=Y_{rem}-Y_{u}\) and \(y^{\prime}_{rem}=y_{rem}-y_{u}\). Once again, we distinguish between two different cases.
* Let \(\delta(y_{u})\leq\delta(y_{rem})\). In that case, we get that \([y^{\prime}_{rem}]=\lfloor y_{rem}\rfloor-\lfloor y_{u}\rfloor\) and also that \(\delta(y^{\prime}_{rem})=\delta(y_{rem})-\delta(y_{u})\). Since we know that \(Y_{rem}\in\{\lfloor y_{rem}\rfloor,[y_{rem}]+1\}\) and \(Y_{u}\in\{\lfloor y_{u}\rfloor,\lfloor y_{u}\rfloor+1\}\), this implies that \[\mathbb{P}_{\alpha}[Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}]] =\mathbb{P}_{\alpha}[Y_{rem}=\lfloor y_{rem}\rfloor\ \cap\ Y_{u}= \lfloor y_{u}\rfloor]\] \[+\mathbb{P}_{\alpha}[Y_{rem}=\lfloor y_{rem}\rfloor+1\ \cap\ Y_{u}= \lfloor y_{u}\rfloor+1]\] Using conditional probabilities and the inductive hypothesis on the distribution of \(Y_{rem}\), we obtain \[\mathbb{P}_{\alpha}\left[Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}\rfloor\right] =\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}\rfloor\mid Y_{rem}= \lfloor y_{rem}\rfloor\right]\cdot\left(1-\delta(y_{rem})\right)\] \[+\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}+1\rfloor\mid Y_{rem} =\lfloor y_{rem}\rfloor+1\right]\cdot\delta(y_{rem})\] Using Claim 16 to substitute the conditional probabilities, we finally get \[\mathbb{P}_{\alpha}\left[Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}\rfloor\right] =1-\delta(y_{rem})+\delta(y_{u})=1-\delta(y^{\prime}_{rem})\] as desired.
* Let \(\delta(y_{u})>\delta(y_{rem})\). In that case, we get that \(\lfloor y^{\prime}_{rem}\rfloor=\lfloor y_{rem}\rfloor-\lfloor y_{u}\rfloor-1\) and also that \(\delta(y^{\prime}_{rem})=1+\delta(y_{rem})-\delta(y_{u})\). Since we know that \(Y_{rem}\in\left\{\lfloor y_{rem}\rfloor,[y_{rem}]+1\right\}\) and \(Y_{u}\in\{\lfloor y_{u}\rfloor,\lfloor y_{u}\rfloor+1\}\), this implies that \[\mathbb{P}_{\alpha}\left[Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}\rfloor\right] =\mathbb{P}_{\alpha}\left[Y_{rem}=\lfloor y_{rem}\rfloor\ \cap\ Y_{u}= \lfloor y_{u}\rfloor+1\right]\] Using conditional probabilities and the inductive hypothesis on the distribution of \(Y_{rem}\), we obtain \[\mathbb{P}_{\alpha}\left[Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}\rfloor\right] =\mathbb{P}_{\alpha}\left[Y_{u}=\lfloor y_{u}\rfloor+1\mid Y_{rem}=\lfloor y_ {rem}\rfloor\right]\cdot\left(1-\delta(y_{rem})\right)\] Using Claim 16 to substitute the conditional probabilities, we finally get \[\mathbb{P}_{\alpha}\left[Y^{\prime}_{rem}=\lfloor y^{\prime}_{rem}\rfloor\right] =\delta(y_{u})-\delta(y_{rem})=1-\delta(y^{\prime}_{rem})\] as desired.
Thus, we have concluded the proof of Lemma 5.
### Proof of Item \(1\) in Theorem 4 (Bounding the Expected Connection Cost)
**Lemma 6**.: _Let \(F=\mathrm{Cut}\&\mathrm{Round}(y,\alpha)\) where for all \(v\in V(\mathcal{T})\), \(\alpha_{v}\sim\mathrm{Unif}(0,1)\) independently. Then,_
\[\mathbb{E}_{\alpha}[C_{R}(F)]=f_{R}(y)\ \text{for any}\ R\subseteq L(\mathcal{T})\]
Proof.: Fix any \(y\in\mathcal{FP}(\mathcal{T})\) and let \(\alpha\in[0,1]^{|V(\mathcal{T})|}\) be a set of thresholds such that for each \(v\in V(\mathcal{T})\), \(\alpha_{v}\) is drawn independently at random from the uniform distribution, i.e. \(\alpha_{v}\sim\mathrm{Unif}(0,1)\). Let \(F=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y,\alpha)\). We will prove that for any set of clients \(R\subseteq L(\mathcal{T})\), it holds that \(\mathbb{E}_{\alpha}[C_{R}(F)]=f_{R}(y)\).
Recall that the \(Y_{v}\) variables set by \(\mathrm{Cut}\&\mathrm{Round}\) denote the total number of facilities in \(F\) that are placed on the sub-tree rooted at vertex \(v\), i.e. \(Y_{v}=|F\cap T(v)|\). As argued in section E.1, we know that \(Y\in\mathcal{FP}(\mathcal{T})\cap\mathrm{N}\), i.e. \(Y\) is a valid integral facility placement. Thus, from Claim 1 of section C.1, we know that \(C_{R}(F)=f_{R}(Y)\). This implies that by definition of the fractional connection cost under client request \(R\), we have that
\[C_{R}(F)= \sum_{j\in R}\sum_{v\in P(j,r)}2^{\mathrm{lev}(v)+1}\cdot\max\left(0,1-Y _{v}\right)\]
Thus, we get
\[\mathbb{E}_{\alpha}[C_{R}(F)] =\sum_{j\in R}\sum_{v\in P(j,r)}2^{\mathrm{lev}(v)+1}\cdot\mathbb{ E}_{\alpha}[\max\left(0,1-Y_{v}\right)]\] \[= \sum_{j\in R}\sum_{v\in P(j,r)}2^{\mathrm{lev}(v)+1}\cdot\mathbb{ P}_{\alpha}[Y_{v}=0]\]where the first equality holds by linearity of expectation, and the second equality holds by the fact that \(Y_{v}\in\mathbb{N}\) for all \(v\in V(\mathcal{T})\). Since \(Y_{v}\in\{\lfloor y_{v}\rfloor,\lfloor y_{v}\rfloor+1\}\), we know that for any \(v\in V(\mathcal{T})\), \(Y_{v}\) can be \(0\) only if \(y_{v}\in[0,1)\). Furthermore, from Lemma 5, we know that in the case of uniformly random thresholds, this happens with probability precisely \(1-y_{v}\). Combining these facts, we get \(\mathbb{P}_{\alpha}[Y_{v}=0]=\max(0,1-y_{v})\) and thus
\[\mathbb{E}_{\alpha}[C_{R}(F)] =\sum_{j\in R}\sum_{v\in P(j,r)}2^{\mathrm{lev}(v)+1}\cdot\max(0, 1-y_{v})\] \[=f_{R}(y)\]
concluding the proof of Lemma 6.
### Proof of Item \(2\) in Theorem 4 (Bounding the Expected Moving Cost)
**Lemma 7**.: _Let \(F=\mathrm{Round}\&\mathrm{Cut}(y,\alpha)\) and also let \(F^{\prime}=\mathrm{Round}\&\mathrm{Cut}(\mathcal{T},y^{\prime},\alpha)\) where \(\alpha_{v}\sim\mathrm{Unif}(0,1)\) for all \(v\in V(\mathcal{T})\). Then,_
\[\gamma\cdot\mathbb{E}_{\alpha}\left[M_{\mathcal{T}}(F,F^{\prime})\right]\leq 4 \cdot||y-y^{\prime}||_{\mathcal{T}}\]
Proof.: Fix any pair of fractional facility placements \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) and let corresponding outputs of \(\mathrm{Cut}\&\mathrm{Round}\) be denoted as \(F=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y,\alpha)\) and \(F^{\prime}=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y^{\prime},\alpha)\). Observe that the same set of (uniformly random) thresholds \(\alpha_{v}\) is used in both cases, as this will play a crucial part in our analysis. To prove Lemma 7, we need to prove that
\[\gamma\cdot\mathbb{E}_{\alpha}\left[M_{\mathcal{T}}(F,F^{\prime})\right]\leq 4 \cdot||y-y^{\prime}||_{\mathcal{T}}\]
where the expectation is taken over the value of the uniformly random thresholds \(\alpha_{v}\).
The proof of Lemma 7 is technically involved, and thus we will break down our approach into smaller sections to ease the presentation. We begin by proving the Lemma in the special case where the transition from \(y\) to \(y^{\prime}\) has a very simple structure, which we now proceed to define:
**Definition 10**.: _We say that two fractional facility placements \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) are \(\epsilon\)-neighboring if there are two leaves \(s,t\in L(\mathcal{T})\) with least common ancestor \(p\in V(\mathcal{T})\) such that the following hold:_
1. \(y^{\prime}_{v}=y_{v}-\epsilon\) _for all_ \(v\in P(s,p)\setminus\{p\}\)_._
2. \(y^{\prime}_{v}=y_{v}+\epsilon\) _for all_ \(v\in P(t,p)\setminus\{p\}\)_._
3. \(y^{\prime}_{v}=y_{v}\) _for all other_ \(v\in V(\mathcal{T})\)_._
_Furthermore, we say that \(y,y^{\prime}\) are strictly \(\epsilon\)-neighboring if \(\epsilon\) is sufficiently small to satisfy_
1. \(\epsilon\leq\delta(y_{v})\) _for all_ \(v\in P(s,p)\setminus\{p\}\) _with_ \(\delta(y_{v})>0\)_._
2. \(\epsilon\leq 1-\delta(y_{v})\) _for all_ \(v\in P(t,p)\setminus\{p\}\) _with_ \(\delta(y_{v})>0\)_._
3. \(\epsilon<1\)_._
Basically, if \(y\) and \(y^{\prime}\) are \(\epsilon\)-neighboring then \(y^{\prime}\) is obtained by pushing \(\epsilon\)-mass on \(y\) from \(s\) to \(t\) along the unique path that connects these two leaves. Furthermore, if \(\epsilon\) is sufficiently small so that for any \(v\in V(\mathcal{T})\) either \(\lfloor y_{v}\rfloor=\lfloor y^{\prime}_{v}\rfloor\) or \(|y_{v}-y^{\prime}_{v}|\leq 1\) and at least one of the two is integral, then we say that the two fractional facility placements are _strictly_\(\epsilon\)-neighboring. As we will shortly argue, Lemma 7 holds in the special case where \(y,y^{\prime}\) are strictly \(\epsilon\)-neighboring.
**Claim 17**.: _If \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) are strictly \(\epsilon\)-neighboring for some \(\epsilon\geq 0\), then_
\[\gamma\cdot\mathbb{E}_{\alpha}\left[M_{\mathcal{T}}(F,F^{\prime})\right]\leq 4 \cdot||y-y^{\prime}||_{\mathcal{T}}.\]
Before proving Claim 17, let us first show why it suffices to argue about the general case and prove Lemma 7. Let \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) be any two fractional placements. Recall that \(||y-y^{\prime}||_{\mathcal{T}}\) captures precisely the minimum transportation cost from \(y\) to \(y^{\prime}\) on \(\mathcal{T}\). If we break down this transportation plan into small movements of masses between leaves, then we can view it as a sequence of transitions between strictly \(\epsilon\)-neighboring placements. This is formalized in the following claim:
**Claim 18**.: _For any \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\), there exists a finite sequence \(y_{0},y_{1},\ldots,y_{m}\in\mathcal{FP}(\mathcal{T})\) of fractional facility placements with \(y=y_{0}\) and \(y^{\prime}=y_{m}\) such that_
1. \(y_{j},y_{j+1}\) _are strictly_ \(\epsilon\)_-neighboring for some_ \(\epsilon\geq 0\) _for_ \(j=0,1,\ldots,m-1\)_._
2. \(||y-y^{\prime}||_{\mathcal{T}}=\sum_{j=1}^{m}||y_{j}-y_{j-1}||_{\mathcal{T}}\)_._
We will now prove Lemma 7. Let \(F_{j}=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y_{j},\alpha)\) be the corresponding output of \(\mathrm{Cut}\&\mathrm{Round}\) on \(y_{j}\) using the same (uniformly random) thresholds \(\alpha_{v}\). Then,
\[\gamma\cdot\mathbb{E}_{\alpha}\left[M_{\mathcal{T}}(F,F^{\prime})\right] \leq\gamma\cdot\mathbb{E}_{\alpha}\left[\sum_{j=0}^{m-1}M_{ \mathcal{T}}(F_{j},F_{j+1})\right]\] \[=\gamma\cdot\sum_{j=0}^{m-1}\mathbb{E}_{\alpha}\left[M_{\mathcal{ T}}(F_{j},F_{j+1})\right]\] \[\leq 4\cdot\sum_{j=0}^{m-1}||y_{j}-y_{j+1}||_{\mathcal{T}}\] \[=4\cdot||y-y^{\prime}||_{\mathcal{T}}\]
In the above calculation, the first inequality holds from the fact that the minimum transportation cost satisfies the triangular inequality. The first equality holds from linearity of expectation. The second inequality holds from Claim 17 and the second equality holds from Claim 18.
Thus, we have shown that proving Lemma 7 for the special case of strictly \(\epsilon\)-neighboring fractional facility placements \(y,y^{\prime}\) suffices to prove Lemma 7 for the general case of any \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) and conclude this section. The rest of this section is dedicated to proving Claim 17, which is the main technical challenge towards proving Lemma 7.
Proof of Claim 17.Fix any pair of strictly \(\epsilon\)-neighboring fractional facility placements \(y,y^{\prime}\in\mathcal{FP}(\mathcal{T})\) and let the corresponding outputs of \(\mathrm{Cut}\&\mathrm{Round}\) be \(F=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y,\alpha)\) and \(F^{\prime}=\mathrm{Cut}\&\mathrm{Round}(\mathcal{T},y^{\prime},\alpha)\). In section E.1 we have already shown that \(F,F^{\prime}\subseteq L(\mathcal{T})\) are valid facility placements since \(|F|=|F^{\prime}|=k\). Let \(Y,Y^{\prime}\in\mathcal{FP}(\mathcal{T})\) be used to denote the corresponding integral placements, i.e.
\(Y_{v}:=|L(\mathcal{T})\cap F|=\) number of facilities in \(F\) placed on the leaves of the sub-tree rooted at \(v\) and
\(Y^{\prime}_{v}:=|L(\mathcal{T})\cap F^{\prime}|=\) number of facilities in \(F^{\prime}\) placed on the leaves of the sub-tree rooted at \(v\)
Recall that \(Y\) and \(Y^{\prime}\) are precisely the values of the \(Y\)-variables that algorithm \(\mathrm{Cut}\&\mathrm{Round}\) sets. As shown in Claim 2 of Section E.4, we know that \(\gamma\cdot M_{\mathcal{T}}(F,F^{\prime})=||Y-Y^{\prime}||_{\mathcal{T}}\). Thus, in order to prove Claim 17, we need to show that
\[\mathbb{E}_{\alpha}\left[||Y-Y^{\prime}||_{\mathcal{T}}\right]\leq 4\cdot||y-y^{ \prime}||_{\mathcal{T}}\]
Since \(y,y^{\prime}\) are strictly \(\epsilon\)-neighboring fractional facility placements, we know that there exist two leaves \(s,t\in L(\mathcal{T})\) with lowest common ancestor \(p\in V(\mathcal{T})\) such that \(|y_{v}-y^{\prime}_{v}|\) is \(\epsilon\) among vertices on the (unique) path from \(s\) to \(t\) (excluding vertex \(p\)) and is \(0\) otherwise. Let \(L=\mathrm{lev}(p)\). Then, by definition of \(||\cdot||_{\mathcal{T}}\) we have
\[||y-y^{\prime}||_{\mathcal{T}}=\sum_{v\in V(\mathcal{T})}2^{\mathrm{lev}(v)} \cdot|y_{v}-y^{\prime}_{v}|=2\epsilon\cdot\sum_{l=0}^{L-1}2^{l}=2\epsilon\cdot (2^{L}-1) \tag{5}\]
Furthermore, recall that from Lemma 5, \(\mathrm{Cut}\&\mathrm{Round}\) rounds \(y_{v}\) to either \(Y_{v}=\lfloor y_{v}\rfloor+1\) with probability \(\delta(y_{v})\) or to \(\lfloor y_{v}\rfloor\) with probability \(1-\delta(y_{v})\). Since \(\epsilon\) is sufficiently small so that either \(\lfloor y_{v}\rfloor=\lfloor y^{\prime}_{v}\rfloor\) or \(|y_{v}-y^{\prime}_{v}|\leq 1\) and at lowest one of the two is integral (and it is thus always rounded to itself), we get that \(|Y_{v}-Y^{\prime}_{v}|\leq 1\) for all \(v\in V(\mathcal{T})\). This implies that\[\mathbb{E}_{\alpha}\left[||Y-Y^{\prime}||_{\mathcal{T}}\right] =\mathbb{E}_{\alpha}\left[\sum_{v\in V(\mathcal{T})}2^{\mathrm{lev}( v)}\cdot|Y_{v}-Y^{\prime}_{v}|\right]\] \[=\sum_{v\in V(\mathcal{T})}2^{\mathrm{lev}(v)}\cdot\mathbb{E}_{ \alpha}\left[|Y_{v}-Y^{\prime}_{v}|\right]\] \[=\sum_{v\in V(\mathcal{T})}2^{\mathrm{lev}(v)}\cdot\mathbb{P}_{ \alpha}\left[|Y_{v}-Y^{\prime}_{v}|=1\right]\]
Let \(l\in[0,h(\mathcal{T})]\) be any level on the HST \(\mathcal{T}\) and let \(C_{l}\) be used to denote the expected number of vertices at level \(l\) that are rounded to different values, i.e.
\[C_{l}:=\mathbb{E}_{\alpha}\left[|\{v\in V(\mathcal{T}):\mathrm{lev}(v)=l\text{ and }Y_{v}\neq Y^{\prime}_{v}\}|\right]\]
Then, the above imply that
\[\mathbb{E}_{\alpha}\left[||Y-Y^{\prime}||_{\mathcal{T}}\right]=\sum_{l=0}^{h( \mathcal{T})}2^{l}\cdot C_{l} \tag{6}\]
It remains to compute \(C_{l}\) for all \(l\in[0,h(\mathcal{T})]\). This is done in Claim 19, where we prove that \(C_{l}=0\) for \(l\geq L\) (the level of \(s\) and \(t\)'s lowest common ancestor) and \(C_{l}\leq 4\epsilon\cdot(L-l)\) otherwise. Combining this claim with equations (5) and (6) immediately implies that
\[\mathbb{E}_{\alpha}\left[||Y-Y^{\prime}||_{\mathcal{T}}\right]\leq 4\cdot||y-y^{ \prime}||_{\mathcal{T}}\]
which completes the proof of Claim 17.
**Claim 19**.: _For any \(l\geq L\), \(C_{l}=0\). For any \(l<L\), \(C_{l}\leq 4\epsilon\cdot(L-l)\)._
Proof.: Recall that for fixed thresholds \(\alpha_{v}\), the output of \(\mathrm{Cut}\&\mathrm{Round}\) is deterministic. Since \(L\) is the level of vertex \(p\) (the lowest common ancestor of leaves \(s,t\)) and by definition of strictly \(\epsilon\)-neighboring placements \(y,y^{\prime}\) we know \(y_{v}=y^{\prime}_{v}\) for any vertex \(v\) such that \(\mathrm{lev}(v)\geq L\), we immediately get that \(C_{l}=0\) for any \(l\geq L\).
We will now proceed to analyze \(C_{l}\) for any \(l<L\). We partition the set of vertices \(v\in V(\mathcal{T})\) with \(\mathrm{lev}(v)=l\) into three sets:
* A vertex \(v\) is called _active_ if it lies on the (unique) path between leaves \(s\) and \(t\).
* A vertex \(v\) is called _inactive_ if it is not a descendant of \(p\) (the lowest common ancestor of leaves \(s\) and \(t\)).
* A vertex \(v\) is called _affected_ if it is not active and is a descendant of \(p\).
Obviously, each vertex \(v\) with \(\mathrm{lev}(v)=l\) must lie in exactly one of these sets.
Inactive Vertices.We will prove that for every inactive vertex \(v\), \(\mathbb{P}_{\alpha}[Y_{v}\neq Y^{\prime}_{v}]=0\). Since the same set of thresholds \(\alpha\) is used to round both \(y\) and \(y^{\prime}\), the output of \(\mathrm{Cut}\&\mathrm{Round}\) is deterministic. Furthermore, if a vertex \(v\) is inactive, then we know that \(y_{v}=y^{\prime}_{v}\) and also \(y_{u}=y^{\prime}_{u}\) for any ancestor vertex of \(u\) of \(v\) (by Definition 10 of neighboring facility placements). Thus, this immediately implies that \(Y_{v}=Y^{\prime}_{v}\) with probability \(1\) and thus we do not need to account for inactive vertices when computing \(C_{l}\).
Active Vertices.We will prove that for every active vertex \(v\), \(\mathbb{P}_{\alpha}[Y_{v}\neq Y^{\prime}_{v}]=\epsilon\). Recall that any active vertex is either an ancestor of leaf \(s\) or leaf \(t\). We will only prove the claim in the case when \(v\) is an ancestor of \(t\); the other case is completely analogous. A formal proof by induction is given in Claim 20, presented at the end of this section. As a direct corollary, since there are only two active vertices per level, the expected number of active vertices in level \(l\) that are rounded two different values is precisely \(2\epsilon\).
Affected Vertices.Finally, we will now analyze the affected vertices. By definition, we know that each affected vertex \(v\) will have a unique active ancestor (also counting \(p\)). We partition the set of affected vertices on level \(l\) into \(2(L-l-1)+1\) groups, based on their corresponding active ancestor. The main argument we need to establish is that by definition of \(\mathrm{Round}\&\mathrm{Cut}\), at most one vertex in each of these groups can be rounded to a different value.
To see this, observe that \(\mathrm{Round}\&\mathrm{Cut}\) is monotone, in the sense that if \(y^{\prime}_{v}\geq y_{v}\) and also \(y^{\prime}_{u}\geq y_{u}\) for all ancestors \(u\) of \(v\), then (assuming the same set of thresholds is used), we know that \(Y^{\prime}_{v}\geq Y_{v}\). Using this fact on the vertices of a group, since all of them can either only increase or decrease, in order to maintain balance at most one of them can change, otherwise we would get a change of \(2\) or more on the parent node which cannot happen.
Furthermore, for a specific group, if both the common active ancestor and its child \(u\) with \(y_{u}\neq y^{\prime}_{u}\) end up rounded to the same value, we get (from the fact that the same thresholds are used) that all the vertices in the group will be rounded to the same value. Thus, in order for a (unique) vertex in any group to change, at least one of two active vertices must change, which happens with probability at most \(2\epsilon\). Since there are \(2(L-l-1)+1\) groups, we get as a corollary that the expected number of affected vertices at level \(l\) that get rounded to a different value is at most \(2\epsilon\cdot(2L-2l-1)\).
Combining everything, we get that \(C_{l}\leq 0+2\epsilon+2\epsilon\cdot(2L-2l-1)=4\epsilon\cdot(L-l)\).
**Claim 20**.: _Let \(v\) be any active vertex that is an ancestor of \(t\). Then, \(\mathbb{P}_{\alpha}[Y_{v}\neq Y^{\prime}_{v}]=\epsilon\)._
Proof.: In fact, we will in fact prove the following stronger claim,
* \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=1-\delta(y_{v})-\epsilon\).
* \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1]=\epsilon\).
* \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor+1\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=0\)
* \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor+1\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1]=\delta(y_{v})\)
which clearly implies Claim 20.
Once again, we will prove the claim via induction, starting from the highest active ancestor of \(t\) at level \(l=L-1\) and moving towards the leaf \(t\) at level \(l=0\). We begin by mentioning that for vertex \(p\) (\(s\) and \(t\)'s lowest common ancestor at level \(L\)) we know for sure that \(Y_{p}=Y^{\prime}_{p}\) since \(y_{p}=y^{\prime}_{p}\) and \(y_{u}=y^{\prime}_{u}\) for any \(u\) such that \(\mathrm{lev}(u)\geq L\); thus, since the same set of thresholds \(\alpha\) is used, the execution of \(\mathrm{Cut}\&\mathrm{Round}\) will be identical up to this point.
We assume that the first child of any vertex \(v\) visited by \(\mathrm{Alloc}\) is always the active child; this can be done without loss of generality as the order that \(\mathrm{Alloc}\) visits the vertices hasn't played any part on our analysis yet.
Base of the induction.For the base of the induction, let \(v\) be the (unique) child of \(p\) that is an ancestor of \(t\); i.e. let \(v\) be the highest active ancestor of \(t\). We have already mentioned that \(Y_{p}=Y^{\prime}_{p}\) with probability \(1\). Thus, it can either be the case that \(Y_{p}=Y^{\prime}_{p}=\lfloor y_{p}\rfloor\) of \(Y_{p}=Y^{\prime}_{p}=\lfloor y_{p}\rfloor+1\). From Lemma 5 we know that the first happens with probability \(1-\delta(y_{p})\) and the latter with probability \(\delta(y_{p})\). We distinguish between the following cases:
* Let \(\delta(y_{v})<\delta(y_{p})\). Then, if \(Y_{p}=Y^{\prime}_{p}=\lfloor y_{p}\rfloor\) we know from the description of \(\mathrm{Alloc}\) that \(Y_{v}=Y^{\prime}_{v}=\lfloor y_{v}\rfloor\) with probability \(1\). On the other hand, if \(Y_{p}=Y^{\prime}_{p}=\lfloor y_{p}\rfloor+1\), we know that \(Y_{v}=\lfloor y_{v}\rfloor+1\) if \(\alpha_{v}\leq\delta(y_{v})/\delta(y_{p})\) and likewise \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1\) if \(\alpha_{v}\leq(\delta(y_{v})+\epsilon)/(\delta(y_{p}))\). Thus, by conditioning on the values of \(Y_{p}\) and \(Y^{\prime}_{p}\), we get 1. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=(1-\delta(y_{p}))\cdot 1+\delta(y_{p})\cdot(1-\frac{ \delta(y_{v})+\epsilon}{\delta(y_{p})})=1-\delta(y_{v})-\epsilon\). 2. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1]=(1-\delta(y_{p}))\cdot 0+\delta(y_{p})\cdot(\frac{ \delta(y_{v})+\epsilon}{\delta(y_{p})}-\frac{\delta(y_{v})}{\delta(y_{p})})=\epsilon\). 3. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor+1\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=(1-\delta(y_{p}))\cdot 0+\delta(y_{p})\cdot 0=0\).
4. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor+1\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1]=(1-\delta(y_{p}))\cdot 0+\delta(y_{p}) \cdot\frac{\delta(y_{v})}{\delta(y_{p})}=\delta(y_{v})\).
* Let \(\delta(y_{v})\geq\delta(y_{p})\). Then, if \(Y_{p}=Y^{\prime}_{p}=\lfloor y_{p}\rfloor+1\) we know from the description of \(\mathrm{Alloc}\) that \(Y_{v}=Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1\) with probability \(1\). On the other hand, if \(Y_{p}=Y^{\prime}_{p}=\lfloor y_{p}\rfloor\), we know that \(Y_{v}=\lfloor y_{v}\rfloor+1\) if \(\alpha_{v}\leq(\delta(y_{v})-\delta(y_{p}))/(1-\delta(y_{p}))\) and likewise \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1\) if \(\alpha_{v}\leq(\delta(y_{v})+\epsilon-\delta(y_{p}))/(1-\delta(y_{p}))\). Thus, by conditioning on the values of \(Y_{p}\) and \(Y^{\prime}_{p}\), we get 1. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=(1-\delta(y_{p}))\cdot(1-\frac{\delta(y_{ v})+\epsilon-\delta(y_{p})}{1-\delta(y_{p})})+\delta(y_{p})\cdot 0=1-\delta(y_{v})-\epsilon\). 2. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=(1-\delta(y_{p}))\cdot(\frac{\delta(y_{v} )+\epsilon-\delta(y_{p})}{1-\delta(y_{p})}-\frac{\delta(y_{v})-\delta(y_{p})}{1 -\delta(y_{p})})+\delta(y_{p})\cdot 0=\epsilon\). 3. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor+1\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=(1-\delta(y_{p}))\cdot 0+\delta(y_{p})\cdot 0 =1-\delta(y_{v})=0\). 4. \(\mathbb{P}_{\alpha}[Y_{v}=\lfloor y_{v}\rfloor+1\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor]=(1-\delta(y_{p}))\cdot\frac{\delta(y_{v})- \delta(y_{p})}{1-\delta(y_{p})}+\delta(y_{p})\cdot 1\delta(y_{v})\).
So in both cases, the base of the induction holds.
Inductive Step.Using the exact same approach, we can prove the claim for any active ancestor \(u\) of \(t\), assuming that the claim holds for \(u\)'s father \(v=p(u)\). The only difference, is that now we can't claim that \(Y_{v}=Y^{\prime}_{v}\) with probability \(1\). Instead, there are three different cases that we need to consider; namely
1. \(Y_{v}=Y^{\prime}_{v}=\lfloor y_{v}\rfloor\) with probability \(1-\epsilon-\delta(y_{v})\).
2. \(Y_{v}=Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1\) with probability \(\delta(y_{v})\).
3. \(Y_{v}=\lfloor y_{v}\rfloor\) and \(Y^{\prime}_{v}=\lfloor y_{v}\rfloor+1\) with probability \(\epsilon\).
where the probabilities hold from the inductive hypothesis on the parent vertex \(v\). Next, we will need to once again consider the case of whether \(\delta(y_{u})<\delta(y_{v})\) or not (notice that the same relation will hold for \(y^{\prime}_{u}\) and \(y^{\prime}_{v}\)) and use the description of \(\mathrm{Alloc}\) to get the assignment probabilities. Since this is a simple matter of arithmetic, the details are omitted.
Experimental Evaluation
In this section we experimentally evaluate the performance of Algorithm 2 with respect to the best fixed facility placement and compare it with the respective performance of the algorithm proposed by [31]. In all the following experiments the step-size of Algorithm 3 (subroutine of Algorithm 2) is set to \(\eta:=\max(\gamma,1)\sqrt{nT}\).
**Periodically Moving Clients.** We first present a simple setting to indicate the inefficiency of the online learning algorithm of [31] in handling moving costs. In this experiment the underlying graph is the \(0.01\)-discretization of \([0,1]\times[0,1]\). At each round \(t\geq 1\), we periodically select one of four balls of radius \(R=0.2\) depicted in Figure 1 and then a client arrives uniformly at random on the selected ball. In Figure 1 and Table 1 we present the overall cost of Algorithm 2 and the algorithm of [31] for different values of facility-weight \(\gamma\), \(k=3\) facilities and \(T=4000\) time-steps. In all cases, the facilities of Algorithm 2 eventually converge to three of the four ball-centers, which is the optimal fixed facility placement. As the experiment reveals, the algorithm of [31] admits significantly larger cost as the facility-weight increases while Algorithm 2 is robust to the increase.
**Real-World Datasets.** We evaluate the performance of Algorithm 2 on the \(\mathrm{MNIST}\) and \(\mathrm{CIFAR10}\) datasets. We randomly sample \(n=10000\) images and construct a graph where each image corresponds to a vertex with the edge weights given by the Euclidean distance of the respective images. At each round \(t\), an image is sampled uniformly at random and a client arrives in the corresponding vertex. We then evaluate Algorithm 2 in the latter setting for \(T=3000\) rounds and \(k=10\) facilities. In Table 2 we present the ratio of the overall cost of Algorithm 2 over the ratio cost of the fractional hindsight optimal7. As our experiments indicate, the sub-optimality of Algorithm 2 is way smaller than the theoretical \(\mathcal{O}(\log n)\) upper bound on the regret.
Footnote 7: The cost of the fractional hindsight optimal can be efficiently computed [31] and lower bounds the cost of the optimal facility placement. As a result, the presented ratios in Tables 2 and 3 are upper bounds on the actual ratio of Algorithm 2 and the optimal facility-placement.
Beyond unit batch sizes and random arrivals.Finally, we once again evaluate the performance of Algorithm 2 on the \(\mathrm{MNIST}\) and \(\mathrm{CIFAR10}\) datasets. This time, the requests arrive in batches of size \(R=10\) for \(T=3000\) rounds. In order to go beyond the _random arrival model_,we first sample the \(R\cdot T\) requested vertices uniformly at random from \([n]\) and then we proceed to order them based on their respective categories, using the lexicographical vector order to break ties. Then, we partition these requests into \(T\) batches of size \(R\) and sequentially reveal them to the algorithm as usual. As a result, all images/vertices from the first category are requested first, then the second etc.
In Table 3, we present our experimental evaluations on the above constructed sequence. As our experiments indicate, our algorithm admits way better performance than the theoretical \(O(\log n)\) guarantees even in sequences with higher batch sizes and non-random arrivals.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Algorithm 2 & \(\gamma=0\) & \(\gamma=1\) & \(\gamma=10\) \\ \hline \(\mathrm{MNIST}\) & \(1.118\pm 0.01\) & \(1.403\pm 0.04\) & \(1.5631\pm 0.03\) \\ \hline \(\mathrm{CIFAR10}\) & \(1.113\pm 0.01\) & \(1.189\pm 0.04\) & \(1.59\pm 0.31\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The ratio of the cost of Algorithm 2 with respect to the cost of the fractional hindsight optimal facility placement (\(20\) runs).
\begin{table}
\begin{tabular}{c c c c} \hline \hline Our Algorithm & \(\gamma=0\) & \(\gamma=1\) & \(\gamma=10\) \\ \hline \(\mathrm{CIFAR10}\) & \(1.050\) & \(1.048\) & \(1.051\) \\ \hline \(\mathrm{MNIST}\) & \(1.082\) & \(1.045\) & \(1.12\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The ratio of the cost of our Algorithm with respect to the cost of the fractional hindsight optimal facility placement. | ## Review
### Summary
This paper addresses the k-median clustering problem in an online learning framework, introducing a novel algorithm that accounts for both moving costs and connection costs. The algorithm is evaluated against a static optimal solution, aiming to minimize regret over time. The authors present a regret framework showing that their algorithm achieves O(log n) multiplicative regret with an additive term influenced by various parameters. The study successfully incorporates moving costs into the clustering model, providing a more realistic approach to online learning. Experimental results demonstrate that the proposed algorithm outperforms previous benchmarks under specific adversarial conditions and highlights its potential in practical applications.
### Strengths
- Introduces the first polynomial-time algorithm with regret guarantees for k-median clustering that includes moving costs.
- The problem formulation is well-motivated and models real-world applications effectively.
- Solid theoretical contributions with a clear and structured presentation.
- Experimentation shows the algorithm's performance in a variety of conditions and yields promising results.
### Weaknesses
- The additive term may become insignificant in certain scenarios, raising questions about its practical applicability.
- Some experiments are too friendly to the algorithm, lacking robust comparatives across diverse settings.
- Limited exploration of alternative formulations of moving costs and their implications.
- Insufficient discussion of the relationship to related work, particularly in detailing how this study advances or differs from previous algorithms.
### Questions
- How critical is the matching-based formulation of moving costs to the algorithm's effectiveness?
- Can the approach be adapted for different cost metrics, such as ℓ_p objectives?
- Are there potentials for improvement in the algorithm's performance regarding lower bounds on regret?
- What are the implications of the algorithm's dependency on the Hierarchical Separation Tree structure?
### Soundness
**Score:** 4
**Description:** 4 = excellent; the theoretical foundation and results presented are robust, yet some assumptions could be further clarified.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is generally clear but could benefit from additional detail in specific sections to enhance understanding.
### Contribution
**Score:** 3
**Description:** 3 = good; while the contributions are significant, the novelty in comparison to existing methods could be articulated more clearly.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid and impactful, with some areas that require clarification.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper presents a significant advancement in the understanding of online k-median clustering with moving costs, addressing an important problem with solid theoretical and empirical contributions. While there are areas for improvement, particularly in experimental robustness and clarity of contributions, the overall soundness of the methodology and results supports acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Bilevel Coreset Selection in Continual Learning:
A New Formulation and Algorithm
Jie Hao
Department of Computer Science
George Mason University
[email protected]
&Kaiyi Ji
Department of CSE
University at Buffalo
[email protected]
&Mingrui Liu
Department of Computer Science
George Mason University
[email protected]
Corresponding Author.
###### Abstract
Coreset is a small set that provides a data summary for a large dataset, such that training solely on the small set achieves competitive performance compared with a large dataset. In rehearsal-based continual learning, the coreset is typically used in the memory replay buffer to stand for representative samples in previous tasks, and the coreset selection procedure is typically formulated as a bilevel problem. However, the typical bilevel formulation for coreset selection explicitly performs optimization over discrete decision variables with greedy search, which is computationally expensive. Several works consider other formulations to address this issue, but they ignore the nested nature of bilevel optimization problems and may not solve the bilevel coreset selection problem accurately. To address these issues, we propose a new bilevel formulation, where the inner problem tries to find a model which minimizes the expected training error sampled from a given probability distribution, and the outer problem aims to learn the probability distribution with approximately \(K\) (coreset size) nonzero entries such that learned model in the inner problem minimizes the training error over the whole data. To ensure the learned probability has approximately \(K\) nonzero entries, we introduce a novel regularizer based on the smoothed top-\(K\) loss in the upper problem. We design a new optimization algorithm that provably converges to the \(\epsilon\)-stationary point with \(O(1/\epsilon^{4})\) computational complexity. We conduct extensive experiments in various settings in continual learning, including balanced data, imbalanced data, and label noise, to show that our proposed formulation and new algorithm significantly outperform competitive baselines. From bilevel optimization point of view, our algorithm significantly improves the vanilla greedy coreset selection method in terms of running time on continual learning benchmark datasets. The code is available at [https://github.com/MingruiLiu-ML-Lab/Bilevel-Coreset-Selection-via-Regularization](https://github.com/MingruiLiu-ML-Lab/Bilevel-Coreset-Selection-via-Regularization).
## 1 Introduction
Deep Neural Networks (DNNs) have achieved tremendous successes in various domains, including computer vision [41, 30], natural language processing [72, 15], generative modeling [26] andgames [68]. However, in continual learning, where DNNs are trained on a sequence of tasks with possibly non-i.i.d. data, the performance will be degraded on the previously trained tasks. This is referred to as _catastrophic forgetting_[53, 52, 60]. To alleviate catastrophic forgetting, one of the effective ways is _rehearsal-based continual learning_, where a small replay buffer is maintained and revisited during the continuous learning process. There is a line of works studying how to efficiently maintain the replay buffer using the coreset selection approach [6, 78, 83], in which a small set of data is selected as representative samples to be used in continual learning.
The coreset selection in continual learning is formulated as a cardinality-constrained bilevel optimization problem which is solved by incremental subset selection [6]. This greedy approach is computationally expensive and hence is not scalable when the coreset size is large. To address this issue, Zhou et al. [83] propose a relaxation of the bilevel formulation in [6], which drops the nested nature of bilevel formulation and actually becomes two sequential optimization problems. Tiwari et al.proposed a gradient approximation method in [71], which selects a coreset that approximates the gradient of model parameters over the entirety of the data seen so far. Yoon et al. [78] proposes an online coreset selection method by maximizing several similarity metrics based on data pairs within each minibatch, and sample pairs between each minibatch and coreset. These approaches do not directly address the algorithmic challenges caused by the nested nature of the bilevel optimization problem, and may not solve the original bilevel coreset selection problem efficiently.
The key challenges in the bilevel coreset selection problems are two folds. First, the bilevel formulation in [6] needs to directly perform optimization over cardinality constraint, which is a nonconvex set and greedy approaches are expensive when the coreset size is large. Second, the bilevel formulation in [6] has a nested structure: one problem is embedded within another, and the outer and inner functions both have dependencies over the same set of decision variables. It remains unclear how to design efficient algorithms to solve constrained bilevel optimization algorithms for the coreset selection problem with provable theoretical guarantees.
Our proposed solution addresses these challenges with a novel bilevel formulation and provably efficient optimization algorithms. The proposed new bilevel formulation is referred to as _Bilevel Coreset Selection via Regularization_ (BCSR). The main differences between our approach and the standard bilevel approach in [6] are: (i) unlike the standard bilevel formulation which requires performing optimization based on a cardinality constraint, we propose to solve a bilevel optimization on a probability simplex over training examples; (ii) to make sure the probability distribution lies in a low dimensional manifold, we propose to add a smoothed top-\(K\) loss as a regularizer to the upper-level problem; (iii) due to our new formulation, we are able to design a simple and effective first-order method to solve this new bilevel problem with provable non-asymptotic convergence guarantees. The first-order method is easy to implement and much faster than the greedy approach as in [6]. Our main contribution is listed as follows.
Figure 1: Illustration of our algorithm. There are two neural network models, one is \(M_{tr}\) for model training, and the other is \(M_{cs}\) for coreset selection. A coreset is selected from the current data stream \(B_{t}\) by conducting six steps. 1 Feed a stream mini-batch \(B_{t}\) and sampled buffer data to \(M_{tr}\). 2 Copy model parameters from \(M_{tr}\) to \(M_{cs}\): \(\theta_{cs}\leftarrow\theta_{tr}\). 3 Feed a mini-batch \(B_{t}\) into model \(M_{cs}\). 4⃝ Conduct bilevel optimization to update the model parameter \(\theta_{cs}\) in \(M_{cs}\) and output a probability distribution \(w\). 5⃝ Sample a coreset from \(B_{t}\) based on the distribution of \(w\) and add the sampled data into buffer. 6⃝ Calculate stochastic gradient based on \(B_{t}\) and sampled data in the buffer in Step 1, and update \(\theta_{tr}\) based on gradient information. Repeat the above steps for each stream mini-batch.
* We propose a new bilevel formulation, namely BCSR, for the coreset selection in rehearsal-based continual learning. Instead of directly learning the binary masks for each sample, the new formulation tries to learn a probability distribution in a low-dimensional manifold by adding a smoothed top-\(K\) loss as a regularizer in the upper problem. This formulation is designed to satisfy two important features in continual learning with DNNs: (i) keeping the nested structure in the coreset selection; (ii) being amenable to first-order algorithms, which makes it easy to implement in modern deep learning frameworks such as PyTorch and TensorFlow. Based on the new formulation, we propose an efficient first-order algorithm for solving it. The main workflow of our algorithm is illustrated in Figure 1, and the corresponding Pytorch-style pseudocode is presented in Algorithm 1.
* We have conducted extensive experiments among various scenarios to verify the effectiveness of our proposed algorithm, including balanced, imbalanced, and label-noise data. Our algorithm outperforms all baselines for all settings in terms of average accuracy, and it is much better than all other coreset selection algorithms. For example, on imbalanced data of Multiple Datasets, BCSR is better than the best coreset selection algorithm by \(4.65\%\) in average accuracy. From bilevel optimization point of view, our algorithm significantly improves the vanilla greedy coreset selection method [6] in terms of running time on continual learning benchmark datasets.
* Under the standard smoothness assumptions of the loss function, we show that our algorithm requires at most \(O(1/\epsilon^{4})\) complexity for finding an \(\epsilon\)-stationary point in the constrained case2. Notably, the \(O(1/\epsilon^{4})\) complexity consists of \(O(1/\epsilon^{2})\) backpropagations and \(O(1/\epsilon^{4})\) samplings from Gaussian distribution, where the latter cost is computationally cheap. Footnote 2: In the constrained setting, the definition of \(\epsilon\)-stationary point is defined with gradient mapping, i.e., \(w\) is a \(\epsilon\)-stationary point of the function \(\phi\) if \(\frac{1}{\beta}\|w-\mathcal{P}_{\Delta}(w-\beta\nabla\phi(w))\|\leq\epsilon\), where \(\beta\) is the stepsize, \(\mathcal{P}\) is the projection operator, \(\Delta\) is the probability simplex. This matches the best iteration complexity as in the single level optimization problem [24].
## 2 Related Work
Continual LearningThere are different classes of continual learning methods, including regularization-based approaches [39; 81; 10; 1; 59; 64; 17], dynamic architecture methods [61;79, 62, 51, 76, 44, 77], and rehearsal-based methods [48, 57, 11, 58, 31, 3, 18, 28, 80, 6, 83, 78, 84]. In the rehearsal-based continual learning, the memory is either reproduced experience replay [48] or generative replay [67]. Our work focuses on the aspect of the coreset selection on the replay memory and can be flexibly integrated into rehearsal-based methods in continual learning.
Coreset SelectionThe coreset selection methods were used frequently in supervised and unsupervised learning, such as \(k\)-means [19], Gaussian mixture model [49], logistic regression [33] and bayesian inference [9]. They were also used frequently in the active learning literature [74, 63]. The coreset selection in continual learning is related to the sample selection [34, 2, 3]. Nguyen et al. [55] introduce variational continual learning which is combined with coreset summarization [5]. Borsos et al. [6] proposed the first bilevel formulation for the coreset selection in continual learning, which is later improved by [83, 78]. Compared with these works, our work focuses on improved bilevel coreset selection: we provide a better bilevel formulation than [6] and design a provably efficient optimization algorithm.
Bilevel optimizationBilevel optimization is used to model nested structure in the decision-making process [73]. Recently, gradient-based bilevel optimization methods have broad applications in machine learning, including meta-learning [20], hyperparameter optimization [56, 22], neural architecture search [46], and reinforcement learning [40, 32]. These methods can be generally categorized into implicit differentiation [16, 56, 45, 4] and iterative differentiation [50, 21, 20, 65, 27] based approaches. Recently, various stochastic bilevel algorithms have been also proposed and analyzed by [12, 37, 25, 32, 29, 4, 14]. A comprehensive introduction can be found in the survey [47]. In this work, we propose a novel stochastic bilevel optimizer with very flexible parameter selection, which shows great promise in the coreset selection for continual learning.
## 3 New Bilevel Formulation for Coreset Selection in Continual Learning
In this section, we first introduce our new bilevel formulation, namely Bilevel Coreset Selection via Regularization (BCSR). The key idea of this approach is to learn a probability distribution over the whole dataset such that the best model parameter obtained by minimizing the loss on the sampled dataset (i.e., the minimizer for the lower-level problem) is also the best for the whole dataset (i.e., the minimizer for the upper-level problem), and then a coreset can be sampled based on the learned probability distribution. In addition, the learned probability distribution is expected to lie in a low dimensional manifold (i.e., with \(K\) nonzero entries where \(K\) is the coreset size). To achieve this, we added a smooth top-\(K\) loss as a regularizer to promote the probability distribution to have \(K\) nonzero entries. Specifically, the objective function of BCSR is:
\[\min_{\begin{subarray}{c}0\leq w_{(i)}\leq 1\\ ||w||_{1}=1\end{subarray}}\left[\phi(w)=\sum_{i=1}^{n}\ell_{i}(\theta^{*}(w)) -\lambda\sum_{i=1}^{K}\mathbb{E}_{z}(w+\delta z)_{[i]}\right]\] \[s.t.,\theta^{*}(w)=\arg\min_{\theta}\left[L(\theta,w)=\sum_{i=1} ^{n}w_{(i)}\ell_{i}(\theta)\right] \tag{1}\]
where \(n\) is the sample size, \(\theta\) is the model parameter, \(w\) is the sample weights, \(\ell_{i}(\theta)\) denote the loss function calculated based on \(i\)-th sample with model parameter \(\theta\), \(w_{(i)}\) is the \(i\)-th coordinate of \(w\), \(w_{[i]}\) is the \(i\)-th largest component of \(w\), \(\lambda>0\) is the regularization parameter, \(w+\delta z\) denote to adding \(\delta z\) on each coordinate of \(w\) where \(z\sim\mathcal{N}(0,1)\). Note that \(R(w,\delta):=-\lambda\sum_{i=1}^{K}\mathbb{E}_{z}(w+\delta z)_{[i]}\) denote the smoothed top-\(K\) regularization. We add this regularization to make sure the summation of the top-\(K\) entries of the learned probability vector is large, such that we can confidently choose a coreset with size \(K\). The goal of employing Gaussian noise to the regularizer is for the ease of algorithm design: this Gaussian smoothing technique can make the regularizer to be smooth such that it is easier to design efficient first-order bilevel optimization solvers. Otherwise, the upper-level problem would become nonconvex and nonsmooth, and it would be difficult for algorithm design under this case.
Discussion and Comparison with Prior WorksIn this part, we illustrate how this new formulation addresses the drawbacks of the previous approaches. The work of [6] does not use any regularizer and regards the weight of each sample as a binary mask. This formulation needs to solve a combinatorial optimization problem and their approach of incremental subset selection is computationally expensive. The work of [83] relaxes the bilevel formulation in [6] to minimize the expected loss function over the Bernoulli distribution \(s\), i.e., \(\min_{s\in\mathcal{C}}\Phi(s)\), and develops a policy gradient solver to optimize the Bernoulli variable. Their gradient \(\nabla_{s}\Phi(s)=\mathbb{E}_{p(m|s)}L(\theta^{*}(m))\nabla_{s}\ln p(m|s)\) does not include the implicit gradient of \(L(\theta^{*}(m))\) in terms of \(s\). However, \(\theta^{*}(m)\) actually depends on the mask \(m\), and \(m\) depends on the Bernoulli variable \(s\). In contrast, our bilevel optimization computes the hypergradients for the coreset weights \(w\) (\(0\leq w\leq 1\) and \(\|w\|_{1}=1\) ), which considers the dependence between \(\theta(w)\) and \(w\)3. In addition, Zhou et al. [83] assume that the inner loop can obtain the exact minimizer \(\theta^{*}(m)\), which may not hold in practice. In contrast, we carefully analyze the gap between the estimated \(\theta^{*}(w)\) and itself by our algorithm and analysis.
Footnote 3: The coreset weight \(w\) in our formulation is equivalent to sample mask \(s\) in [83]
```
0: Dataset \(\mathcal{D}\)
0: model parameter \(\theta_{0}\), memory \(\mathcal{M}=\{\}\)
1:for batch \(\mathcal{B}_{t}\sim\mathcal{D}\)do
2: Compute coreset \(\mathcal{S}_{t}\) = Find-coreset(\(\theta_{t-1}\), \(\mathcal{B}_{t}\))
3:\(\mathcal{M}\) = \(\mathcal{M}\cup\mathcal{S}_{t}\)
4:endfor
```
**Algorithm 2** Bilevel Coreset Selection via Regularization (BCSR)
## 4 Algorithm Design
Equipped with the new formulation, the entire algorithm is presented in Algorithm 2, which calls Algorithm 3 as a subroutine. Each time the algorithm encounters a minibatch \(\mathcal{B}\), a coreset is selected within this minibatch by invoking Algorithm 3. Algorithm 3 is a first-order algorithm for solving the bilevel formulation (1). In Algorithm 3, the model parameter \(\theta\) and the weight distribution \(w\) are updated alternatively. We first perform \(N\) steps of gradient descent steps to find a sufficiently good \(\theta\) for the lower-level problem (lines 5-7) by the update rule:
\[\theta_{j}^{k}=\theta_{j}^{k-1}-\alpha\nabla_{\theta}L(\theta_{j}^{k-1},w_{j}), \tag{2}\]
where \(\theta_{j}^{k}\) denotes the model parameters at the \(j\)-th outer loop and the \(k\)-th inner loop. To update the outer variable \(w\), BCSR approximates the true gradient \(\nabla\phi(w)\) of the outer function w.r.t \(w\), which is called hypergradient [56]. BCSR constructs a hypergradient estimator:
\[\varphi_{j}=\frac{1}{|\mathcal{B}|}\sum_{\widetilde{z}\in\mathcal{B}}\nabla_{ w}R(w,\delta;\widetilde{z})-\nabla_{w}\nabla_{\theta}L(\theta_{j}^{N},w_{j}) \Big{[}(\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j}))^{-1}(\sum_{i=1}^{n}\nabla _{\theta}\ell_{i}(\theta_{j}^{N}))\Big{]}, \tag{3}\]where \(R(w,\delta;\widehat{z}):=-\lambda\sum_{i=1}^{K}(w+\delta\widehat{z})_{[i]}\) and \(\widetilde{z}\sim\mathcal{N}(0,1)\). Solving the Hessian-inverse-vector product in eq. (3) is computationally intractable. We denote \(v^{*}:=(\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j}))^{-1}(\sum_{i=1}^{n}\nabla_{ \theta}\ell_{i}(\theta_{j}^{N}))\) in eq. (3), where \(v^{*}\) can be approximated by solving the following quadratic programming problem efficiently by \(Q\) steps of gradient descent (line 9):
\[\min_{v}\frac{1}{2}v^{T}\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j})v-v^{T}\sum_{ i=1}^{n}\nabla_{\theta}\ell_{i}(\theta_{j}^{N}). \tag{4}\]
Next, the hypergradient estimate (line 10) is computed based on the output of approximated quadratic programming. Note both model parameters and sample weights need to use warm start initialization (line 4 and line 8). Then the weight is updated and projected onto simplex (line 11):
\[\hat{w}_{j+1}=w_{j}-\beta\varphi_{j},\ \ w_{j+1}=\mathcal{P}_{\Delta^{n}}( \hat{w}_{j+1}), \tag{5}\]
where \(\Delta^{n}:=\{w\in\mathbb{R}^{n}:0\leq w_{(i)}\leq 1,||w||_{1}=1\}\). In terms of other experimental hyperparameters, we allow very flexible choices of hyperparameters (e.g., \(N\), \(Q\)) as shown in our theory to achieve polynomial time complexity for finding a \(\epsilon\)-stationary point.
The selected coresets for each task are stored in a memory buffer with a fixed size \(m\). There is a separated memory slot for each task with size \([m/i]\) when the task \(i\) comes. After each task \(i\), the memory slots before \(i\) will randomly remove some samples to adjust all the memory slots to \([m/i]\). That means the memory size for each task decreases as the task ID increase to maintain the total buffer size \(m\) unchanged. The same memory strategy is also used in the greedy coreset approach [6].
## 5 Experiments
We conduct extensive experiments under various settings, including balanced data, imbalanced data, and label-noise data. The empirical results demonstrate the effectiveness of our method in rehearsal-based continual learning.
### Experimental Setup
**Datasets** We use commonly-used datasets in the field of continual learning, including Split CIFAR-100, Permuted MNIST, Multiple Datasets, Tiny-ImageNet, and Split Food-101. We follow the experimental settings as that in prior work [78] and [83]. Each dataset is processed with three approaches: balanced, imbalanced, and label-noise. Please refer to Appendix L for more details about data processing and settings.
**Baselines** We compare our algorithm BCSR with other continual learning methods based on coreset strategy, including \(k\)-means features [55], \(k\)-means embedding [63], Uniform Sampling, iCaRL [57], Grad Matching [9], Greedy Coreset [6], PBCS [83], GCR [71], and OCS [78]. We also compare with non-coreset reply method, SPR [38], MetaSP [70]. All algorithms are built upon episodic memory, which stores coreset selected from stream data. Then a model, such as ResNet-18 [30], is trained over the data from the current stream and the episodic memory.
**Metrics** Average accuracy and forgetting measure [10] are two primary evaluation metrics that are used in continual learning literature. AVG ACC (\(A_{T}\)) is the average accuracy tested on all tasks after finishing the task \(T\): \(A_{T}=\frac{1}{T}\sum_{i=1}^{T}a_{T,i}\), where \(a_{T,i}\) is the test accuracy of task \(i\) after training
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{Balanced} & \multicolumn{2}{c}{Inbalanced} & \multicolumn{2}{c}{Label Noise} \\ Methods & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) \\ \hline K-means Features & 57.82±0.69 & 0.070±0.003 & 45.44±0.76 & 0.037±0.002 & 57.38±1.26 & 0.098±0.003 \\ K-means Embedding & 59.77±0.24 & 0.061±0.001 & 43.91±0.15 & 0.044±0.001 & 57.92±1.25 & 0.091±0.016 \\ Uniform & 58.99±0.54 & 0.074±0.004 & 44.74±0.11 & 0.033±0.007 & 58.76±1.07 & 0.087±0.006 \\ iCaRL & 60.74±0.09 & **0.044±0.026** & 44.42±0.04 & 0.042±0.019 & 59.70±0.70 & 0.071±0.010 \\ Grad Matching & 59.17±0.38 & 0.067±0.003 & 45.44±0.64 & 0.038±0.001 & 59.58±0.28 & 0.073±0.008 \\ SPR & 59.56±0.73 & 0.143±0.064 & 44.45±0.55 & 0.086±0.023 & 58.74±0.403 & 0.073±0.010 \\ MetaSP & 60.14±0.25 & 0.056±0.230 & 43.74±0.36 & 0.079±0.047 & 57.43±0.54 & 0.086±0.007 \\ Greedy Coreset & 59.39±0.16 & 0.066±0.017 & 43.80±0.01 & 0.039±0.007 & 58.22±0.16 & 0.066±0.001 \\ GCR & 58.73±0.43 & 0.073±0.013 & 44.48±0.05 & 0.035±0.005 & 58.72±0.63 & 0.081±0.005 \\ PBCS & 55.64±2.26 & 0.062±0.001 & 39.87±1.12 & 0.076±0.011 & 56.93±0.14 & 0.100±0.003 \\ OCS & 52.57±0.37 & 0.088±0.001 & 46.54±0.34 & 0.022±0.003 & 51.77±0.81 & 0.103±0.007 \\ BCSR & **61.60±0.14** & 0.051±0.015 & **47.30±0.57** & **0.022±0.005** & **60.70±0.08** & **0.059±0.013** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experiment results on Split CIFAR-100the task \(T\). \(FGT_{T}\) evaluates the performance drop on the past tasks after training on the task \(T\): \(FGT_{T}=\frac{1}{T}\sum_{j=1}^{T}\left[\max_{j\in 1,...T-1}\left(a_{j,i}-a_{T,i} \right)\right]\).
**Implementation Details** Following the implementation of the coreset-based algorithm [6], we use two models for the purposes of model training and coreset selection respectively, where the first model is denoted as \(M_{tr}\) for model training, and the second model (also known as a proxy model) is denoted as \(M_{cs}\) for coreset selection. In the section of the model training, we adopt a single-head MLP with two hidden layers for Permuted MNIST, and a ResNet-18 for Split CIFAR-100 and other datasets. In the section of the coreset selection, \(M_{cs}\) adopts the same architecture as \(M_{tr}\). This is the key difference between our work and the Greedy Coreset [6]: they use Neural Tangent Kernels (NTK) [35] while we use a specific deep neural network. Note that NTK performs learning based on fixed features, which is shown to be limited [66]. In contrast, our algorithm allows hierarchical feature learning during the training process. In addition, our algorithm does not rely on discrete decision variables as in [6]: we use first-order methods with gradient and Hessian-inverse-vector product information with convergence guarantee and hence is more efficient in practice. For non-coreset reply methods SPR and MetaSP, we set the size of the method buffer as that in coreset-based methods. Especially for SPR, there are two memory buffers: the delayed buffer \(D\) temporarily stocks the incoming data stream, and the purified buffer \(P\) maintains the cleansed data. To satisfy the
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Balanced} & \multicolumn{2}{c}{Imbalanced} & \multicolumn{2}{c}{Label Noise} \\ & AVG ACC & FGT & AVG ACC & FGT & AVG ACC & FGT \\ \hline K-means Features & 41.20a0.75 & 0.131a0.004 & 36.27a0.30 & 0.079a0.014 & 36.68a1.35 & 0.095a0.004 \\ K-means Embedding & 41.48±1.21 & 0.129a0.007 & 36.29a0.23 & 0.085a0.003 & 36.01a1.51 & 0.083a0.005 \\ Uniform & 42.11a0.52 & 0.129a0.007 & 37.07a0.53 & 0.083a0.009 & 37.14a1.05 & 0.099a0.003 \\ iCaRL & 43.84a0.09 & 0.114a0.004 & 37.65a0.84 & 0.058a0.003 & 38.52a0.25 & 0.063a0.006 \\ Grad Matching & 43.45a0.32 & 0.105a0.007 & 37.58a0.39 & 0.066a0.004 & 38.84a0.42 & 0.064a0.006 \\ SPB & 42.79a0.50 & **0.102a0.009** & 36.55a0.74 & 0.070a0.026 & 39.59a0.53 & 0.065a0.021 \\ MetaSP & 43.33a0.32 & 0.127a0.002 & 36.75a0.57 & 0.086a0.007 & 37.18a0.76 & 0.068a0.007 \\ Greedy Coreset & 41.02a0.33 & 0.119a0.017 & 33.43a0.86 & 0.103a0.002 & 36.37a0.16 & 0.079a0.006 \\ GCR & 41.45a0.35 & 0.125a0.008 & 36.84a0.62 & 0.072a0.017 & 37.46a0.40 & 0.115a0.011 \\ PPCS & 36.99a0.15 & 0.177a0.002 & 35.88a0.16 & 0.071a0.007 & 30.02a0.16 & 0.133a0.029 \\ OCS & 41.29a0.09 & 0.112a0.001 & 35.09a0.60 & **0.036a0.011** & 35.36a0.94 & 0.061a0.005 \\ BCSR & **44.13a0.33** & 0.106a0.001 & **38.59a0.11** & 0.070a0.004 & **40.72a0.56** & **0.055a0.006** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experiment results on Food-101
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Balanced} & \multicolumn{2}{c}{Imbalanced} & \multicolumn{2}{c}{Label Noise} \\ & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) \\ \hline K-means Features & 54.63±0.88 & 0.138±0.007 & 33.63±2.66 & 0.136±0.063 & 45.46a3.50 & 0.120±0.049 \\ K-means Embedding & 56.83±1.65 & 0.106±0.019 & 35.93±1.60 & 0.106±0.031 & 46.32±3.19 & 0.084±0.030 \\ Uniform & 55.93±0.03 & 0.101±0.032 & 35.48±2.96 & 0.104±0.025 & 48.68±0.44 & 0.079±0.002 \\ iCaRL & 56.19±0.32 & 0.130±0.012 & 42.18±1.59 & 0.057±0.022 & 49.22±0.54 & 0.067±0.010 \\ Grad Matching & 53.41±0.46 & 0.119±0.020 & 38.16a3.90 & 0.082±0.003 & 46.96±1.05 & 0.091±0.029 \\ SPR & 56.20±1.91 & 0.124±0.036 & 41.79±1.73 & 0.143±0.051 & 49.77±1.58 & 0.062±0.024 \\ MetaSP & 57.14±1.10 & 0.113±0.042 & 41.32±0.50 & 0.103±0.053 & 47.14±1.66 & 0.081±0.027 \\ Greedy Coreset & 53.56±0.06 & 0.099±0.005 & 22.57±1.10 & 0.265±0.022 & 41.32±1.51 & 0.137±0.009 \\ GCR & 54.35±0.31 & 0.125±0.014 & 35.13±2.79 & 0.105±0.043 & 47.58±1.30 & 0.078±0.016 \\ PBCS & 52.93±0.28 & 0.152±0.016 & 37.65±0.84 & 0.058±0.003 & 38.61±0.18 & 0.096±0.004 \\ OCS & 55.65±2.26 & **0.062±0.001** & 40.48±1.39 & 0.051±0.003 & 45.03±1.46 & **0.049±0.012** \\ BCSR & **59.89±0.95** & 0.096±0.005 & **45.13±0.54** & **0.046±0.008** & **49.97±1.14** & 0.064±0.031 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiment results on Multiple Datasetsrequirement of coreset experiments, we keep the size of \(P\) and \(D\) buffers the same. The detailed hyperparameter settings can be found in Appendix C.
### Results
We report the results of average accuracy (AVG ACC) and forgetting (FGT) for all the algorithms on the balanced, imbalanced, and label-noise benchmarks, and the results are presented in Table 1, Table 2, Table 3, Table 4, and Table 9 (Appendix E) respectively. We have the following observations. (i) In the balanced setting, our method outperforms other baselines significantly in terms of AVG ACC. For example, compared with the best coreset selection methods, our BCSR shows \(2.21\%\), \(4.24\%\), \(2.68\%\), \(2.11\%\), and \(1.7\%\) improvements in AVG ACC on five benchmarks, respectively. (ii) In the imbalanced and label-noise setting, our method also demonstrates relatively higher performance. For example, on imbalanced data of Multiple Datasets, BCSR is better than the best coreset selection algorithm by \(4.65\%\) in average accuracy. An interesting observation is that Greedy Coreset [6] does not perform very well, especially in imbalanced and label-noise settings. The reason is that the inner optimization in Greedy Coreset is conducted by an NTK, where only a fixed feature is used but not the learned feature. (iii) From bilevel optimization point of view, our algorithm significantly reduces the running time of the vanilla greedy coreset selection method [6] by at least \(58\%\) on continual learning benchmark datasets. For detailed comparison, please check Table 8 in Appendix D. (iv) In addition, the test AVG ACC (Figure 2 and Figure 4 in Appendix F) of the BCSR during the training process shows that our algorithm alleviates catastrophic forgetting and it is comparable to other best baselines. For example, BCSR enjoys the lowest forgetting for almost all datasets under the label-noise setting (except for Multiple Datasets in Table 2). For most experiments on other settings, BCSR achieves a forgetting performance that is comparable to the best methods with at most \(1\%\) gap.
### Ablation Studies
We conduct ablation studies to inspect the effectiveness of individual components in our proposed approach, including the effect of the bilevel optimizer, the smoothed top-\(K\) regularizer, and the coreset size \(K\) (Appendix J) respectively.
**Random initialization with and without bilevel optimizers.** Our new bilevel optimization algorithm (Algorithm 3) is of vital importance for finding a good distribution \(w\) for selecting the coreset. To demonstrate this, we design comparative experiments: one experiment initializes \(w\) randomly without updating it, and the other one adopts the same initialization strategy but updates \(w\) by the proposed bilevel optimizer. Then we sample coresets based on \(w\) from two methods respectively. As you can see, the random initialization method is equivalent to Uniform Sampling. We report the experimental results of average accuracy and forgetting on three balanced datasets in Table 5. It can be observed that the sampling strategy based on bilevel optimization significantly outperforms random sampling. In particular, the strategy with bilevel optimization shows \(3.23\%\), \(2.89\%\), and \(4.39\%\) AVG ACC improvement on three benchmarks. Meanwhile, it also obtains \(2.20\%\), \(1.60\%\), and \(3.20\%\) FGT reduction on three benchmarks.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{Split CIFAR-100} & \multicolumn{2}{c}{Permuted MNIST} & \multicolumn{2}{c}{Multiple Datasets} \\ BO & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) \\ \hline WO & 58.374±0.37 & 0.073±0.004 & 53.34±0.74 & 0.074±0.009 & 55.50±0.80 & 0.128±0.027 \\ W & **61.60±0.14** & **0.051±0.015** & **56.23±0.29** & **0.058±0.002** & **59.89±0.95** & **0.096±0.005** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Random initialization with (W) and without (WO) bilevel optimizers (BO)
Figure 2: The average accuracy during the continual learning. After each task, the training model is tested on all encountered tasks. Due to the nature of forgetting, the average test performance of all the methods tends to decrease.
To verify the effectiveness of our bilevel optimizer, we compare the loss curve with Greedy Coreset that uses NTK. The result is presented in Figure 3. There are a number of stream mini-batches in each task. The bilevel optimizer trains over each mini-batch, where the upper loss is plotted in the figure. In the experiment, we plot the loss value for every \(5\) mini-batches. Within each task, the loss from BCSR gradually decreases with slight fluctuations, and it increases only when encountering a new task. In contrast, the loss value of the Greedy Coreset approach always stays large. It indicates that our bilevel optimizer is more effective than the Greedy Coreset approach.
**Effectiveness of the regularizer.** In our algorithm, the bilevel formulation has a smooth top-\(K\) loss as a regularizer in the objective function to promote the probability distribution to have \(K\) nonzero entries. The goal is to make sure that the summation of top-\(K\) entries of the learned probability vector is large, which increases confidence in the coreset selection. The hyperparameter \(\lambda\) is used to balance the cross-entropy loss and the regularization term. We explore the effects on the performance with different \(\lambda\), and list the results in Table 6.
This ablation experiment is performed on our framework BCSR, and average accuracy and forgetting on three balanced benchmarks are reported. When \(\lambda=0.10\), BCSR can reach the best performance (The highest AVG ACC and lowest FGT) on Split CIFAR-100 and Permuted MNIST. While on Multiple Datasets, BCSR performs the best when \(\lambda=1.0\). This dataset contains more categories, and hence it is more challenging to select the appropriate representative samples. In this case, larger \(\lambda\) would emphasize more on maximizing the top-\(K\) probability and help select better representative samples. In addition, we observe that \(\lambda\) set as a too large or too small value will damage the performance, which is in line with the observations in standard regularized empirical risk minimization problems such as overfitting and underfitting. Please refer to Appendix H for further analysis.
## 6 Theoretical Analysis
In this section, we provide a theoretical analysis for our proposed method. In particular, we establish convergence rate for our algorithm BCSR.
**Theorem 1**.: _Suppose standard assumptions hold in bilevel optimization (see Assumptions 1, 2 and 3 in Appendix M). Choose parameters \(\lambda,\alpha,\eta\) and \(N\) such that \((1+\lambda)(1-\alpha\mu)^{N}(1+\frac{8\pi L^{2}}{\eta\mu})\leq 1-\eta\mu\), where \(r=\frac{C_{Q}^{2}}{(\frac{\mu^{2}}{\mu}+L)^{2}}\) and \(C_{Q}=\frac{Q\rho M\eta}{\mu}+\eta^{2}Q^{2}\rho M+\eta QL\). Furthermore, choose the stepsize \(\beta\) such that \(6\omega\beta^{2}L^{2}<\frac{1}{9}\eta\mu\) and \(\beta\leq\frac{1}{4L_{\phi}}\), where the constant \(\omega\) is given by Equation (16) in Appendix A.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{2}{c}{Split CIFAR-100} & \multicolumn{2}{c}{Permuted MNIST} & \multicolumn{2}{c}{Multiple Datasets} \\ \(\lambda\) & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) \\ \hline
0.00 & 60.43±0.15 & 0.064±0.005 & 54.20±1.53 & 0.116±0.030 & 55.71±0.32 & 0.055±0.003 \\
0.01 & 60.56±2.05 & 0.074±0.016 & 55.09±2.94 & 0.097±0.020 & 55.39±0.95 & 0.065±0.012 \\
0.10 & **61.04±0.53** & **0.063±0.007** & **57.20±0.88** & **0.064±0.01** & 55.69±0.40 & 0.057±0.002 \\
1.00 & 59.34±1.20 & 0.072±0.008 & 55.43±1.53 & 0.108±0.019 & **58.18±0.77** & **0.046±0.006** \\
5.00 & 58.80±1.58 & 0.084±0.017 & 53.13±2.74 & 0.120±0.021 & 56.67±0.78 & 0.051±0.007 \\
10.00 & 59.53±0.42 & 0.075±0.012 & 53.54±2.02 & 0.118±0.029 & 55.57±1.05 & 0.060±0.010 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The impact of different values of \(\lambda\)
Figure 3: The upper loss of bilevel optimization. Each distinct spike means the arrival of a new task.
_Then, we have the following convergence result._
\[\frac{1}{J}\sum_{j=0}^{J-1}\mathbb{E}\|G_{j}\|^{2}\leq\mathcal{O}\Big{(}\frac{D_{ \phi}}{\beta J}+\frac{1}{|\mathcal{B}|}+\frac{D_{0}}{\eta\mu J}\Big{)}.\]
_where \(G_{j}:=\frac{1}{\beta}(w_{j}-\mathcal{P}_{\Delta^{n}}(w_{j}-\beta\nabla\phi(w_{ j})))\) denote the generalized projected gradient, \(D_{\phi}:=\phi(w_{0})-\min_{w}\phi(w)>0\), \(D_{0}=\|\theta_{0}^{0}-\theta_{0}^{*}\|^{2}+\|v_{0}^{0}-v_{0}^{*}\|^{2}\), \(L_{\phi}\) is the smoothness parameter of the total objective \(\phi(w)\) whose form is proved in Appendix A._
Theorem 1 provides a general convergence result for the proposed bilevel algorithm, which allows for a very flexible selection of subloop lengths \(N\) and \(Q\) as long as the inequality \((1+\lambda)(1-\alpha\mu)^{N}(1+\frac{8\pi L^{2}}{\eta\mu})\leq 1-\eta\mu\) holds given proper stepsizes \(\lambda,\eta,\alpha\). For example, in most of our experiments, the choice of \(N=1\) and \(Q=3\) works the best. Then, in this case, we further specify the parameters in Theorem 1, and provide the following corollary.
**Corollary 1**.: _Under the same setting as in Theorem 1, choose \(N=1,Q=3\) and set \(\lambda=\frac{\alpha\mu}{2}\), \(\eta\leq\frac{\mu^{2}\alpha}{4608L^{2}}\) and \(\alpha\leq\frac{1}{L}\). Then, to make sure an \(\epsilon\)-accurate stationary point, i.e., \(\frac{1}{J}\sum_{j=0}^{J-1}\mathbb{E}\|G_{j}\|^{2}\leq\epsilon^{2}\), the number of iterations is \(\mathcal{O}(\epsilon^{-2})\), each using \(|\mathcal{B}|=\mathcal{O}(\epsilon^{-2})\) of samples from the standard Gaussian distribution \(\mathcal{N}(0,1)\)._
Corollary 1 shows that the proposed bilevel algorithm converges to an \(\epsilon\)-accurate stationary point using only \(\mathcal{O}(\epsilon^{-2})\) iterations and \(\mathcal{O}(\epsilon^{-2})\) samples drawn from \(\mathcal{N}(0,1)\) per iteration. Note the the large batch size \(|\mathcal{B}|=\mathcal{O}(\epsilon^{-2})\) is necessary here to guarantee a convergence rate of \(\mathcal{O}(1/T)\), which matches the results in solving single-level constrained nonconvex problems [24]. In addition, it is computationally tractable because sampling from a known Gaussian distribution is easy and cheap.
## 7 Conclusion
In this paper, we advance the state-of-the-art bilevel coreset selection in continual learning. We first introduce a new bilevel formulation with smoothed top-\(K\) regularization and then design an efficient bilevel optimizer as a solver. We conduct extensive experiments in continual learning benchmark datasets to demonstrate the effectiveness of our proposed approach. We also show that the bilevel optimizer can efficiently find \(\epsilon\)-stationary point with \(O(1/\epsilon^{4})\) computational complexity, which matches the best complexity of projected SGD for a single-level problem.
## Acknowledgments and Disclosure of Funding
We would like to thank the anonymous reviewers for their helpful comments. Jie Hao and Mingrui Liu are both supported by a grant from George Mason University. Computations were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University (URL: [https://orc.gmu.edu](https://orc.gmu.edu)).
## References
* (1) Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 139-154, 2018.
* (2) Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, and Lucas Page-Caccia. Online continual learning with maximal interfered retrieval. In _Advances in Neural Information Processing Systems 32_, pages 11849-11860. 2019.
* (3) Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. _Advances in neural information processing systems_, 32, 2019.
* (4) Michael Arbel and Julien Mairal. Amortized implicit differentiation for stochastic bilevel optimization. In _International Conference on Learning Representations_, 2022.
* [5] Olivier Bachem, Mario Lucic, and Andreas Krause. Coresets for nonparametric estimation-the case of dp-means. In _International Conference on Machine Learning_, pages 209-217. PMLR, 2015.
* [6] Zalan Borsos, Mojmir Mutny, and Andreas Krause. Coresets via bilevel optimization for continual learning and streaming. _Advances in Neural Information Processing Systems_, 33:14879-14890, 2020.
* mining discriminative components with random forests. In _European Conference on Computer Vision_, 2014.
* [8] Yaroslav Bulatov. Notmnist dataset. _Google (Books/OCR), Tech. Rep.[Online]. Available: http://yaroslavvb. blogspot. it/2011/09/notmnist-dataset. html_, 2, 2011.
* [9] Trevor Campbell and Tamara Broderick. Automated scalable bayesian inference via hilbert coresets. _The Journal of Machine Learning Research_, 20(1):551-588, 2019.
* [10] Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 532-547, 2018.
* [11] Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. _arXiv preprint arXiv:1812.00420_, 2018.
* [12] Tianyi Chen, Yuejiao Sun, and Wotao Yin. A single-timescale stochastic bilevel optimization method. _arXiv preprint arXiv:2102.04671_, 2021.
* [13] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 9268-9277, 2019.
* [14] Mathieu Dagreou, Pierre Ablin, Samuel Vaiter, and Thomas Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. _arXiv preprint arXiv:2201.13409_, 2022.
* [15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [16] Justin Domke. Generic methods for optimization-based modeling. In _Artificial Intelligence and Statistics (AISTATS)_, pages 318-326, 2012.
* [17] Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, and Marcus Rohrbach. Uncertainty-guided continual learning with bayesian neural networks. _arXiv preprint arXiv:1906.02425_, 2019.
* [18] Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. Orthogonal gradient descent for continual learning. _arXiv preprint arXiv:1910.07104_, 2019.
* [19] Dan Feldman and Michael Langberg. A unified framework for approximating and clustering data. In _Proceedings of the forty-third annual ACM symposium on Theory of computing_, pages 569-578, 2011.
* [20] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_, pages 1126-1135. JMLR. org, 2017.
* [21] Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In _International Conference on Machine Learning (ICML)_, pages 1165-1173, 2017.
* [22] Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In _International Conference on Machine Learning_, pages 1568-1577. PMLR, 2018.
* [23] Camille Garcin, Maximilien Servajean, Alexis Joly, and Joseph Salmon. Stochastic smoothing of the top-k calibrated hinge loss for deep imbalanced classification. _arXiv preprint arXiv:2202.02193_, 2022.
* [24] Saeed Ghadimi, Guanghui Lan, and Hongchao Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. _Mathematical Programming_, 155(1):267-305, 2016.
* [25] Saeed Ghadimi and Mengdi Wang. Approximation methods for bilevel programming. _arXiv preprint arXiv:1802.02246_, 2018.
* [26] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In _Advances in neural information processing systems_, pages 2672-2680, 2014.
* [27] Riccardo Grazzi, Luca Franceschi, Massimiliano Pontil, and Saverio Salzo. On the iteration complexity of hypergradient computation. In _Proc. International Conference on Machine Learning (ICML)_, 2020.
* [28] Yunhui Guo, Mingrui Liu, Tianbao Yang, and Tajana Rosing. Improved schemes for episodic memory-based lifelong learning. _Advances in Neural Information Processing Systems_, 33, 2020.
* [29] Zhishuai Guo and Tianbao Yang. Randomized stochastic variance-reduced methods for stochastic bilevel optimization. _arXiv preprint arXiv:2105.02266_, 2021.
* [30] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [31] Xu He and Herbert Jaeger. Overcoming catastrophic interference using conceptor-aided back-propagation. 2018.
* [32] Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang. A two-timescale framework for bilevel optimization: Complexity analysis and application to actor-critic. _arXiv preprint arXiv:2007.05170_, 2020.
* [33] Jonathan Huggins, Trevor Campbell, and Tamara Broderick. Coresets for scalable bayesian logistic regression. _Advances in Neural Information Processing Systems_, 29, 2016.
* [34] David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 32, 2018.
* [35] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. _Advances in neural information processing systems_, 31, 2018.
* [36] Kaiyi Ji, Mingrui Liu, Yingbin Liang, and Lei Ying. Will bilevel optimizers benefit from loops. _arXiv preprint arXiv:2205.14224_, 2022.
* [37] Kaiyi Ji, Junjie Yang, and Yingbin Liang. Bilevel optimization: Convergence analysis and enhanced design. In _International Conference on Machine Learning (ICML)_, pages 4882-4892. PMLR, 2021.
* [38] Chris Dongjoo Kim, Jinseo Jeong, Sangwoo Moon, and Gunhee Kim. Continual learning on noisy data streams via self-purified replay. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 537-547, 2021.
* [39] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. _Proceedings of the national academy of sciences_, 114(13):3521-3526, 2017.
* [40] Vijay Konda and John Tsitsiklis. Actor-critic algorithms. _Advances in neural information processing systems_, 12, 1999.
* [41] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In _Advances in neural information processing systems_, pages 1097-1105, 2012.
* [42] Ya Le and Xuan Yang. Tiny imagenet visual recognition challenge. _CS 231N_, 7(7):3, 2015.
* [43] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_, 86(11):2278-2324, 1998.
* [44] Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In _International Conference on Machine Learning_, pages 3925-3934. PMLR, 2019.
* [45] Renjie Liao, Yuwen Xiong, Ethan Fetaya, Lisa Zhang, KiJung Yoon, Xaq Pitkow, Raquel Urtasun, and Richard Zemel. Reviving and improving recurrent back-propagation. In _Proc. International Conference on Machine Learning (ICML)_, 2018.
* [46] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. _ICLR_, 2019.
* [47] Risheng Liu, Jiaxin Gao, Jin Zhang, Deyu Meng, and Zhouchen Lin. Investigating bi-level optimization for learning and vision from a unified perspective: A survey and beyond. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2021.
* [48] David Lopez-Paz et al. Gradient episodic memory for continual learning. In _Advances in Neural Information Processing Systems_, pages 6467-6476, 2017.
* [49] Mario Lucic, Matthew Faulkner, Andreas Krause, and Dan Feldman. Training gaussian mixture models at scale via coresets. _The Journal of Machine Learning Research_, 18(1):5885-5909, 2017.
* [50] Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-based hyperparameter optimization through reversible learning. In _International Conference on Machine Learning (ICML)_, pages 2113-2122, 2015.
* [51] Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_, pages 7765-7773, 2018.
* [52] James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. _Psychological review_, 102(3):419, 1995.
* [53] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In _Psychology of learning and motivation_, volume 24, pages 109-165. Elsevier, 1989.
* [54] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
* [55] Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning. _arXiv preprint arXiv:1710.10628_, 2017.
* [56] Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In _International conference on machine learning_, pages 737-746. PMLR, 2016.
* [57] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_, pages 2001-2010, 2017.
* [58] Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. _arXiv preprint arXiv:1810.11910_, 2018.
* [59] Hippolyt Ritter, Aleksandar Botev, and David Barber. Online structured laplace approximations for overcoming catastrophic forgetting. In _Advances in Neural Information Processing Systems_, pages 3738-3748, 2018.
* [60] Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. _Connection Science_, 7(2):123-146, 1995.
* [61] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. _arXiv preprint arXiv:1606.04671_, 2016.
* [62] Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In _International Conference on Machine Learning_, pages 4528-4537. PMLR, 2018.
* [63] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. _arXiv preprint arXiv:1708.00489_, 2017.
* [64] Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In _International Conference on Machine Learning_, pages 4548-4557. PMLR, 2018.
* [65] Amirreza Shaban, Ching-An Cheng, Nathan Hatch, and Byron Boots. Truncated back-propagation for bilevel optimization. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, pages 1723-1732, 2019.
* [66] Zhenmei Shi, Junyi Wei, and Yingyu Liang. A theoretical analysis on feature learning in neural networks: Emergence from inputs and advantage over fixed features. _arXiv preprint arXiv:2206.01717_, 2022.
* [67] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. _Advances in neural information processing systems_, 30, 2017.
* [68] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. _nature_, 529(7587):484, 2016.
* [69] Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. The german traffic sign recognition benchmark: a multi-class classification competition. In _The 2011 international joint conference on neural networks_, pages 1453-1460. IEEE, 2011.
* [70] Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, and Liang Wan. Exploring example influence in continual learning. _Advances in Neural Information Processing Systems_, 35:27075-27086, 2022.
* [71] Rishabh Tiwari, Krishnateja Killamsetty, Rishabh Iyer, and Pradeep Shenoy. Gcr: Gradient coreset based replay buffer selection for continual learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 99-108, 2022.
* [72] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [73] Luis N Vicente and Paul H Calamai. Bilevel and multilevel programming: A bibliography review. _Journal of Global optimization_, 5(3):291-306, 1994.
* [74] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in data subset selection and active learning. In _International conference on machine learning_, pages 1954-1963. PMLR, 2015.
* [75] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_, 2017.
* [76] Ju Xu and Zhanxing Zhu. Reinforced continual learning. _Advances in Neural Information Processing Systems_, 31, 2018.
* [77] Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang. Scalable and order-robust continual learning with additive parameter decomposition. _International Conference on Learning Representations_, 2020.
* [78] Jaehong Yoon, Divyam Madaan, Eunho Yang, and Sung Ju Hwang. Online coreset selection for rehearsal-based continual learning. _arXiv preprint arXiv:2106.01085_, 2021.
* [79] Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. _arXiv preprint arXiv:1708.01547_, 2017.
* [80] Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. _Advances in Neural Information Processing Systems_, 33:5824-5836, 2020.
* [81] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_, pages 3987-3995. JMLR. org, 2017.
* [82] Guodong Zhang, James Martens, and Roger B Grosse. Fast convergence of natural gradient descent for over-parameterized neural networks. _Advances in Neural Information Processing Systems (NeurIPS)_, 32, 2019.
* [83] Xiao Zhou, Renjie Pi, Weizhong Zhang, Yong Lin, Zonghao Chen, and Tong Zhang. Probabilistic bilevel coreset selection. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 27287-27302. PMLR, 17-23 Jul 2022.
* [84] Xiangyu Zhu, Jie Hao, Yunhui Guo, and Mingrui Liu. Auc maximization in imbalanced lifelong learning. In _Uncertainty in Artificial Intelligence_, pages 2574-2585. PMLR, 2023.
Proof of Theorem 1
Let \(\Delta^{(n)}:=\{w:0\leq w_{(i)}\leq 1,||w||_{1}=1\}\) denote the constraint set of the upper-level problem. We first provide some important inequalities.
Recall that \(v_{j}^{q}\) be the \(q^{th}\) GD iterate in solving the linear system \(\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j})v=\sum_{i=1}^{n}\nabla\ell_{i}(\theta _{j}^{N})\) at iteration \(j\) via the following process:
\[v_{j}^{q+1}=(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j}))v_{j}^{q}+\eta \sum_{i=1}^{n}\nabla\ell_{i}(\theta_{j}^{N}). \tag{6}\]
which, by telescoping Equation (6) over \(q\) from \(0\) to \(Q-1\), yields
\[v_{j}^{Q}=(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j}))^{Q}v_{k}^{0}+\eta \sum_{q=0}^{Q-1}(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j}))^{q}\sum_{i= 1}^{n}\nabla\ell_{i}(\theta_{j}^{N}). \tag{7}\]
Let \(v_{j}^{*}\) be the solution of the linear system \(\nabla_{\theta}^{2}L(\theta_{j}^{*},w_{j})v=\sum_{i=1}^{n}\nabla\ell_{i}(\theta _{j}^{*})\), and then we have
\[v_{j}^{*}=(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{*},w_{j}))^{Q}v_{j}^{*}+\eta \sum_{q=0}^{Q-1}(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{*},w_{j}))^{q}\sum_{i= 1}^{n}\nabla\ell_{i}(\theta_{j}^{*}). \tag{8}\]
Combining Equation (6) and Equation (7), noting that \(v_{j}^{0}=v_{j-1}^{Q}\) and using Assumption 2 that \(\|v_{j}^{*}\|\leq\|(\nabla_{\theta}^{2}L(\theta_{j}^{*},w_{j}))^{-1}\|\|\sum_{i =1}^{n}\nabla\ell_{i}(\theta_{j}^{*})\|\leq\frac{M}{\mu}\), the difference between \(v_{j}^{Q}\) and \(v_{j}^{*}\) can be bounded as
\[\|v_{j}^{Q}-v_{j}^{*}\|\leq \big{\|}(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j}))^{Q}-(I -\eta\nabla_{\theta}^{2}L(\theta_{j}^{*},w_{j}))^{Q}\big{\|}\frac{M}{\mu}+(1- \eta\mu)^{Q}\|v_{j-1}^{Q}-v_{j}^{*}\|\] \[+\eta M\Big{\|}\sum_{q=0}^{Q-1}(I-\eta\nabla_{\theta}^{2}L(\theta _{j}^{N},w_{j}))^{q}-\sum_{q=0}^{Q-1}(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{* },w_{j}))^{q}\Big{\|}\] \[+(1-(1-\eta\mu)^{Q})\frac{L}{\mu}\|\theta_{j}^{*}-\theta_{j}^{N}\|. \tag{9}\]
We next bound \(\Delta_{q}:=\|(I-\eta\nabla_{\theta}^{2}L(\theta_{j}^{N},w_{j}))^{q}-(I-\eta \nabla_{\theta}^{2}L(\theta_{j}^{*},w_{j}))^{q}\|\) in Equation (9) as:
\[\Delta_{q}\stackrel{{(i)}}{{\leq}}(1-\eta\mu)\Delta_{q-1}+(1- \eta\mu)^{q-1}\eta\rho\|\theta_{j}^{N}-\theta_{j}^{*}\|. \tag{10}\]
which, by telescoping Equation (10) and in conjunction with Equation (9), yields
\[\|v_{j}^{Q}-v_{j}^{*}\|\leq Q(1-\eta\mu)^{Q-1}\eta\rho\frac{M}{\mu}\|\theta_{j}^{N}- \theta_{j}^{*}\|+(1-\eta\mu)^{Q}\|v_{j-1}^{Q}-v_{j}^{*}\|\] \[+\eta M\sum_{q=0}^{Q-1}q(1-\eta\mu)^{q-1}\eta\rho\|\theta_{j}^{N} -\theta_{j}^{*}\|+(1-(1-\eta\mu)^{Q})\frac{L}{\mu}\|\theta_{j}^{*}-\theta_{j} ^{N}\| \tag{11}\] \[\leq \frac{Q(1-\eta\mu)^{Q-1}\rho M\eta}{\mu}\|\theta_{j}^{N}-\theta_ {j}^{*}\|+(1-\eta\mu)^{Q}\|v_{j-1}^{Q}-v_{j-1}^{*}\|\] \[+(1-\eta\mu)^{Q}\|v_{j-1}^{*}-v_{j}^{*}\|+\frac{1-(1-\eta\mu)^{Q} (1+\eta Q\mu)}{\mu^{2}}\rho M\|\theta_{j}^{N}-\theta_{j}^{*}\|\] \[+(1-(1-\eta\mu)^{Q})\frac{L}{\mu}\|\theta_{j}^{*}-\theta_{j}^{N}\|\]
which, combined with \(\|v_{j}^{*}-v_{j-1}^{*}\|\leq\big{(}\frac{L}{\mu}+\frac{M\rho}{\mu^{2}}\big{)} \big{(}\frac{L}{\mu}+1\big{)}\|w_{j}-w_{j-1}\|\) and using the fact that \((1-x)^{Q}\geq 1-xQ\) for \(0\leq x\leq 1\), yields
\[\mathbb{E}\|v_{j}^{Q}-v_{j}^{*}\|^{2}\leq (1-\eta\mu)\mathbb{E}\|v_{j-1}^{Q}-v_{j-1}^{*}\|^{2}+\frac{4}{ \eta\mu}C_{Q}^{2}\|\theta_{j}^{*}-\theta_{j}^{N}\|^{2}\] \[+\frac{4}{\eta\mu}\Big{(}\frac{L}{\mu}+\frac{M\rho}{\mu^{2}}\Big{)} ^{2}\Big{(}\frac{L}{\mu}+1\Big{)}\|w_{j}-w_{j-1}\|^{2} \tag{12}\]where the constant \(C_{Q}:=\frac{Q\rho M\eta}{\mu}+\eta^{2}Q^{2}\rho M+\eta QL\).
The next step is to characterize the error induced by the lower-level updates on \(\theta\).
Note that \(\theta_{j}^{*}=\arg\min_{\theta}L(\theta,w_{j})\). Using Assumptions 1 and 2, we have
\[\|\theta_{j}^{N}-\theta_{j}^{*}\|^{2}\leq(1-\alpha\mu)^{N}\|\theta_{j}^{0}- \theta_{j}^{*}\|^{2}, \tag{13}\]
which, in conjunction with \(\theta_{j}^{0}=\theta_{j-1}^{N}\) and Lemma 2.2 in [25], yields
\[\mathbb{E}\|\theta_{j}^{N}-\theta_{j}^{*}\|^{2}\leq (1-\alpha\mu)^{N}(1+\lambda)\mathbb{E}\|\theta_{j-1}^{N}-\theta_{ j-1}^{*}\|^{2}\] \[+(1-\alpha\mu)^{N}(1+\frac{1}{\lambda})\frac{L^{2}}{\mu^{2}} \mathbb{E}\|w_{j}-w_{j-1}\|^{2}. \tag{14}\]
Recall the definition that \(r=\frac{C_{Q}^{2}}{(\frac{\rho M}{\mu}+L)^{2}}\). Then, combining Equation (12) and Equation (14) yields we can obtain
\[\Big{(}1+\frac{\rho^{2}M^{2}}{L^{2}\mu^{2}}\Big{)} \mathbb{E}\|\theta_{j}^{N}-\theta_{j}^{*}\|^{2}+\mathbb{E}\|v_{j}^ {Q}-v_{j}^{*}\|^{2}\] \[\leq (1+\lambda)(1-\alpha\mu)^{N}\Big{(}1+\frac{\rho^{2}M^{2}}{L^{2} \mu^{2}}\Big{)}\Big{(}1+\frac{8rL^{2}}{\eta\mu}\Big{)}\mathbb{E}\|\theta_{j-1} ^{N}-\theta_{j-1}^{*}\|^{2}\] \[+(1-\eta\mu)\mathbb{E}\|v_{j-1}^{Q}-v_{j-1}^{*}\|^{2}+\omega \mathbb{E}\|w_{j}-w_{j-1}\|^{2}, \tag{15}\]
where the constant \(\omega\) is given by
\[\omega:= \Big{(}1+\frac{1}{\lambda}\Big{)}(1-\alpha\mu)^{N}\Big{(}1+\frac{ \rho^{2}M^{2}}{L^{2}\mu^{2}}\Big{)}\frac{L^{2}}{\mu^{2}}\] \[+\frac{8}{\eta\mu}\frac{L^{4}}{\mu^{2}}\Big{(}1+\frac{\rho^{2}M^{2 }}{L^{2}\mu^{2}}\Big{)}\Big{(}\frac{4}{\mu^{2}}+r(1-\alpha\mu)^{N}\Big{(}1+ \frac{1}{\lambda}\Big{)}\Big{)}. \tag{16}\]
Recall that we choose \((1+\lambda)(1-\alpha\mu)^{N}(1+\frac{8rL^{2}}{\eta\mu})\leq 1-\eta\mu\). Then, we obtain from Equation (15) that
\[\Big{(}1+\frac{\rho^{2}M^{2}}{L^{2}\mu^{2}}\Big{)} \mathbb{E}\|\theta_{j}^{N}-\theta_{j}^{*}\|^{2}+\mathbb{E}\|v_{j} ^{Q}-v_{j}^{*}\|^{2}\] \[\leq (1-\eta\mu)\Big{(}1+\frac{\rho^{2}M^{2}}{L^{2}\mu^{2}}\Big{)} \mathbb{E}\|\theta_{j-1}^{N}-\theta_{j-1}^{*}\|^{2}\] \[+(1-\eta\mu)\mathbb{E}\|v_{j-1}^{Q}-v_{j-1}^{*}\|^{2}+\omega \mathbb{E}\|w_{j}-w_{j-1}\|^{2}. \tag{17}\]
Let \(\delta_{j}:=\big{(}1+\frac{\rho^{2}M^{2}}{L^{2}\mu^{2}}\big{)}\mathbb{E}\| \theta_{j}^{N}-\theta_{j}^{*}\|^{2}+\mathbb{E}\|v_{j}^{Q}-v_{j}^{*}\|^{2}\). Then, incorporating the update that \(w_{j}=\mathcal{P}_{\Delta^{n}}(w_{j-1}-\beta\varphi_{j-1})\) into Equation (17) yields
\[\delta_{j}\leq (1-\eta\mu)\delta_{j-1}+2\omega\beta^{2}\mathbb{E}\Big{\|}\underbrace {\frac{\beta}{\beta}(w_{j-1}-\mathcal{P}_{\Delta^{n}}(w_{j-1}-\beta\nabla \phi(w_{j-1})))}_{G_{j-1}}\Big{\|}^{2}\] \[+2\omega\beta^{2}\|\varphi_{j-1}-\nabla\phi(w_{j-1})\|^{2}. \tag{18}\]
We next bound the last term in the above Equation (18). Based on the definition of the hypergradient estimate \(\varphi_{j}=\nabla_{w}R(w,\delta;\mathcal{B})-\nabla_{w}\nabla_{\theta}L( \theta_{j}^{N},w_{j})v_{j}^{Q}\), we have
\[\mathbb{E}\|\varphi_{j}-\nabla\phi(w_{j})\|^{2}\] \[\leq 3\mathbb{E}\|\nabla R(w_{j},\delta;\mathcal{B})-\nabla R(w_{j}, \delta)\|^{2}+\frac{3\rho^{2}M^{2}}{\mu^{2}}\mathbb{E}\|\theta_{j}^{*}-\theta _{j}^{N}\|^{2}+3L^{2}\mathbb{E}\|v_{j}^{*}-v_{j}^{Q}\|^{2}\] \[\stackrel{{(i)}}{{=}} \frac{3}{|\mathcal{B}|^{2}}\sum_{\widetilde{z}\in\mathcal{B}}\| \nabla R(w_{j},\delta;\widetilde{z})-\nabla R(w,\delta)\|^{2}+3L^{2}\Big{(}1+ \frac{\rho^{2}M^{2}}{\mu^{2}L^{2}}\Big{)}\mathbb{E}\|\theta_{j}^{*}-\theta_{j} ^{N}\|^{2}+3L^{2}\mathbb{E}\|v_{j}^{*}-v_{j}^{Q}\|^{2}\] \[\stackrel{{(ii)}}{{\leq}} \frac{6K}{|\mathcal{B}|}+3L^{2}\Big{(}1+\frac{\rho^{2}M^{2}}{\mu^ {2}L^{2}}\Big{)}\mathbb{E}\|\theta_{j}^{*}-\theta_{j}^{N}\|^{2}+3L^{2}\mathbb{E }\|v_{j}^{*}-v_{j}^{Q}\|^{2}=\frac{6K}{|\mathcal{B}|}+3L^{2}\delta_{j}, \tag{19}\]where \((i)\) follows because \(\nabla R(w_{j},\delta;\widetilde{z})\) is an unbiased estimate of \(\nabla R(w,\delta)\) and \((ii)\) follows from Proposition 1. Substituting Equation (19) into Equation (18) yields
\[\delta_{j}\leq(1-\eta\mu+6\omega\beta^{2}L^{2})\delta_{j-1}+\frac{12\omega K \beta^{2}}{|\mathcal{B}|}+2\omega\beta^{2}\mathbb{E}\|G_{j-1}\|^{2}. \tag{20}\]
Let \(\tau:=1-\eta\mu+6\omega\beta^{2}L^{2}\). Then, telescoping Equation (20) yields
\[\delta_{j}\leq\tau^{j}\delta_{0}+\frac{12\omega K\beta^{2}}{(1-\tau)|\mathcal{ B}|}+2\omega\beta^{2}\sum_{t=0}^{j-1}\tau^{t}\mathbb{E}\|G_{j-1-t}\|^{2}. \tag{21}\]
Based on Equation (21), we are ready to provide the final convergence result. First, based on the Lipschitz continuity in Assumption 1, Assumption 2 and Assumption 3, we have
\[\|\nabla\phi(w_{1})-\nabla\phi(w_{2})\|\leq L_{\phi}\|w_{1}-w_{2}\|,\]
where the constant \(L_{\phi}=\frac{\sqrt{K\eta}}{\delta}+\frac{L^{2}+\rho M^{2}}{\mu}+\frac{2\rho LM +L^{3}}{\mu^{2}}+\frac{\rho L^{2}M}{\mu^{3}}\) is the smoothness parameter. Then, this inequality further implies
\[\phi(w_{j+1})\leq \phi(w_{j})+\langle\nabla\phi(w_{j}),w_{j+1}-w_{j}\rangle+\frac{L _{\phi}}{2}\|w_{j+1}-w_{j}\|^{2}\] \[\leq \phi(w_{j})+\frac{1}{\beta}\langle\beta\varphi_{j},\mathcal{P}_{ \Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j}\rangle+\langle\nabla\phi(w_{j})- \varphi_{j},\mathcal{P}_{\Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j}\rangle\] \[+\frac{L_{\phi}}{2}\|w_{j+1}-w_{j}\|^{2}. \tag{22}\]
To analyze the second term at the right hand side of the above Equation (22), we note that
\[-\langle\beta\varphi_{j},\mathcal{P}_{\Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j}\rangle\] \[= \langle w_{j}-\beta\varphi_{j}-\mathcal{P}_{\Delta^{n}}(w_{j}- \beta\varphi_{j}),\mathcal{P}_{\Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j} \rangle+\|\mathcal{P}_{\Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j}\|^{2},\]
which, in conjunction with the property of projection on convex set that \(\langle x-\mathcal{P}_{\Delta^{n}}(x),y-\mathcal{P}_{\Delta^{n}}(x)\leq 0\) for any \(y\in\mathcal{S}\) and the fact that \(w_{j}=\mathcal{P}_{\Delta^{n}}(w_{j-1}-\beta\varphi_{j-1})\in\mathcal{S}\), yields
\[-\langle\beta\varphi_{j},\mathcal{P}_{\Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j}\rangle\geq \|\mathcal{P}_{\Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j}\|^{2}\geq 0. \tag{23}\]
Then, substituting Equation (23) into Equation (22) yields
\[\phi(w_{j+1})\leq \phi(w_{j})+\langle\nabla\phi(w_{j})-\varphi_{j},\mathcal{P}_{ \Delta^{n}}(w_{j}-\beta\varphi_{j})-w_{j}\rangle+\frac{L_{\phi}}{2}\|w_{j+1}- w_{j}\|^{2}\] \[\leq \phi(w_{j})-\frac{\beta}{2}\|\widehat{G}_{j}\|^{2}+\frac{\beta}{ 2}\|\varphi_{j}-\nabla\phi(w_{j})\|^{2}+\frac{L_{\phi}\beta^{2}}{2}\|\widehat {G}_{j}\|^{2}\] \[\leq \phi(w_{j})-\Big{(}\frac{\beta}{4}-\frac{L_{\phi}\beta^{2}}{4} \Big{)}\|G_{j}\|^{2}+\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{2}\Big{)}\|\varphi _{j}-\nabla\phi(w_{j})\|^{2}, \tag{24}\]
where we use the notation that \(\widehat{G}_{j}=\frac{1}{\beta}\big{(}w_{j}-\mathcal{P}_{\Delta^{n}}(w_{j}- \beta\varphi_{j})\big{)}\), and the non-expansive property of projection. Then, taking the expectation and incorporating Equation (19) into Equation (24), we have
\[\mathbb{E}\phi(w_{j+1})\leq \mathbb{E}\phi(w_{j})-\Big{(}\frac{\beta}{4}-\frac{L_{\phi}\beta ^{2}}{4}\Big{)}\mathbb{E}\|G_{j}\|^{2}+\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{ 2}\Big{)}\mathbb{E}\|\varphi_{j}-\nabla\phi(w_{j})\|^{2}\] \[\leq \mathbb{E}\phi(w_{j})-\Big{(}\frac{\beta}{4}-\frac{L_{\phi}\beta ^{2}}{4}\Big{)}\mathbb{E}\|G_{j}\|^{2}+\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{2} \Big{)}\Big{(}\frac{6K}{|\mathcal{B}|}+3L^{2}\delta_{j}\Big{)}. \tag{25}\]
Then, substituting Equation (21) into the above Equation (25) yields
\[\mathbb{E}\phi(w_{j+1})\leq \mathbb{E}\phi(w_{j})-\Big{(}\frac{\beta}{4}-\frac{L_{\phi}\beta^ {2}}{4}\Big{)}\mathbb{E}\|G_{j}\|^{2}+\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{2} \Big{)}\frac{6K}{|\mathcal{B}|}\] \[+3L^{2}\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{2}\Big{)}\Big{(}\tau^{ j}\delta_{0}+\frac{12\omega K\beta^{2}}{(1-\tau)|\mathcal{B}|}+2\omega\beta^{2}\sum_{t=0}^{ j-1}\tau^{t}\mathbb{E}\|G_{j-1-t}\|^{2}\Big{)}\] \[\leq \mathbb{E}\phi(w_{j})-\Big{(}\frac{\beta}{4}-\frac{L_{\phi}\beta ^{2}}{4}\Big{)}\mathbb{E}\|G_{j}\|^{2}+\Big{(}1+\frac{6\omega\beta^{2}L^{2}}{1- \tau}\Big{)}\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{2}\Big{)}\frac{6K}{|\mathcal{ B}|}\] \[+3L^{2}\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{2}\Big{)}\tau^{j} \delta_{0}+6\omega\beta^{2}L^{2}\Big{(}\beta-\frac{L_{\phi}\beta^{2}}{2}\Big{)} \sum_{t=0}^{j-1}\tau^{t}\mathbb{E}\|G_{j-1-t}\|^{2},\]which, by taking the telescoping over \(j\) from \(0\) to \(J-1\), yields
\[\frac{1}{J}\sum_{j=0}^{J-1}\Bigl{(}\frac{1}{4}-\frac{L_{\phi}\beta} {4}\Bigr{)}\mathbb{E}\|G_{j}\|^{2}\] \[\qquad\leq \frac{\phi(w_{0})-\min_{w}\phi(w)}{\beta J}+\Bigl{(}1+\frac{6 \omega\beta^{2}L^{2}}{1-\tau}\Bigr{)}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)} \frac{6K}{|\mathcal{B}|}\] \[\qquad+\frac{3L^{2}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)}\delta _{0}}{(1-\tau)J}+6\omega\beta^{2}L^{2}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)} \frac{1}{J}\sum_{j=0}^{J-1}\sum_{t=0}^{j-1}\tau^{t}\mathbb{E}\|G_{j-1-t}\|^{2}\] \[\leq \frac{\phi(w_{0})-\min_{w}\phi(w)}{\beta J}+\Bigl{(}1+\frac{6 \omega\beta^{2}L^{2}}{1-\tau}\Bigr{)}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)} \frac{6K}{|\mathcal{B}|}\] \[\qquad+\frac{3L^{2}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)} \delta_{0}}{(1-\tau)J}+6\omega\beta^{2}L^{2}\Bigl{(}1-\frac{L_{\phi}\beta}{2} \Bigr{)}\frac{1}{(1-\tau)J}\sum_{j=0}^{J-1}\mathbb{E}\|G_{j}\|^{2}.\]
Rearranging the above inequality, we have
\[\frac{1}{J}\sum_{j=0}^{J-1}\Bigl{(}\frac{1}{4}-\frac{L_{\phi}\beta}{4}-\frac{6 \omega\beta^{2}L^{2}}{1-\tau}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)}\Bigr{)} \mathbb{E}\|G_{j}\|^{2}\]
\[\leq \frac{\phi(w_{0})-\min_{w}\phi(w)}{\beta J}+\Bigl{(}1+\frac{6\omega\beta^{2 }L^{2}}{1-\tau}\Bigr{)}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)}\frac{6K}{| \mathcal{B}|}+\frac{3L^{2}\Bigl{(}1-\frac{L_{\phi}\beta}{2}\Bigr{)}\delta_{0}} {(1-\tau)J}. \tag{26}\]
Recalling the definition that \(\tau=1-\eta\mu+6\omega\beta^{2}L^{2}\), and noting that we choose the stepsize \(\beta\) such that \(6\omega\beta^{2}L^{2}<\frac{1}{9}\eta\mu\) and \(\beta\leq\frac{1}{4L_{\phi}}\), we can simplify Equation (26) as
\[\frac{1}{16J}\sum_{j=0}^{J-1}\mathbb{E}\|G_{j}\|^{2}\leq \frac{\phi(w_{0})-\min_{w}\phi(w)}{\beta J}+\frac{27K}{4|\mathcal{B}|}+ \frac{27L^{2}\delta_{0}}{8\eta\mu J}.\]
From the gradient descent based updates, we have \(\delta_{0}=\bigl{(}1+\frac{\rho^{2}M^{2}}{L^{2}\mu^{2}}\bigr{)}\mathbb{E}\| \theta_{0}^{N}-\theta_{0}^{*}\|^{2}+\mathbb{E}\|v_{0}^{Q}-v_{0}^{*}\|^{2}\leq \bigl{(}1+\frac{\rho^{2}M^{2}}{L^{2}\mu^{2}}\bigr{)}\|\theta_{0}^{0}-\theta_{ 0}^{*}\|^{2}+\|v_{0}^{0}-v_{0}^{*}\|^{2}<+\infty\). Then, the proof is complete.
## Appendix B Proof of Corollary 1
Based on the definition of \(C_{Q}=\frac{Q\rho M\eta}{\mu}+\eta^{2}Q^{2}\rho M+\eta QL\) and noting that \(\eta\leq\frac{1}{L}\leq\frac{1}{\mu}\) and \(Q=3\), we have \(C_{Q}\leq 12\eta(\frac{\rho M}{\mu}+L)\), which, combined with the definition that \(r=\frac{C_{Q}^{2}}{(\frac{\rho M}{\mu}+L)^{2}}\), yields \(r\leq 144\eta^{2}\). Then, based on the choice of \(N=1\) and \(\lambda=\frac{\alpha\mu}{2}\), we have
\[(1+\lambda)(1-\alpha\mu)^{N}\Bigl{(}1+\frac{8rL^{2}}{\eta\mu}\Bigr{)}\leq\Bigl{(} 1-\frac{\alpha\mu}{2}\Bigr{)}\Bigl{(}1+\frac{1152\eta L^{2}}{\mu}\Bigr{)}\leq 1 -\frac{\alpha\mu}{2}+\frac{1152\eta L^{2}}{\mu},\]
which, in conjunction with \(\eta\leq\frac{\mu^{2}}{4608L^{2}}\alpha\) and \(\alpha\leq\frac{1}{L}\), yields
\[(1+\lambda)(1-\alpha\mu)^{N}\Bigl{(}1+\frac{8rL^{2}}{\eta\mu}\Bigr{)}\leq 1- \frac{1}{4}\alpha\mu\leq 1-\eta\mu. \tag{27}\]
This implies that the inequality \((1+\lambda)(1-\alpha\mu)^{N}(1+\frac{8rL^{2}}{\eta\mu})\leq 1-\eta\mu\) required by Theorem 1 is satisfied. Then, treating \(\mu,\eta,L,\rho,M,\alpha,\beta,K\), \(\|\theta_{0}^{0}-\theta_{0}^{*}\|^{2}\) and \(\|v_{0}^{0}-v_{0}^{*}\|^{2}\) as constants independent of the total number \(J\) of iterations, we have
\[\frac{1}{J}\sum_{j=0}^{J-1}\mathbb{E}\|G_{j}\|^{2}\leq\mathcal{O}\Bigl{(}\frac{ 1}{J}+\frac{1}{|\mathcal{B}|}\Bigr{)}. \tag{28}\]
Then, to ensure an \(\epsilon\)-accurate stationary point, the number of iterations is \(\epsilon^{-2}\) with a batch size \(|\mathcal{B}|=\mathcal{O}(\epsilon^{-2})\).
Experiment Hyperparameters
The experimemtal hyperparameters are list in Table 7, including the memory size: \(|\mathcal{M}|\), coreset size: \(|\mathcal{S}|\), stream batch size: \(|\mathcal{B}_{t}|\), learning rate for \(M_{tr}\): \(lr_{t}\), learning rate for \(M_{cs}\): \(lr_{p}\), learning rate for sample weights: \(lr_{w}\), regularization efficient: \(\lambda\), epochs for training model: \(E\), outer loops in bilevel optimization: \(J\), inner loops in bilevel optimization: \(N\), loops for estimating the Hessian-inverse-vector product: \(Q\), a factor of Gaussian noise: \(\delta\).
## Appendix D Running Time Comparison
We evaluate the running time (wall-clock time) of baseline methods on Split Tiny-ImageNet and Split Food-101 in Table 8. All the training sections keep the same among these algorithms. Since there is no advanced coreset selection procedures in K-means Features, K-means Embedding, Uniform sampling, iCaRL, and Grad Matching, their running costs are pretty low but with worse performance (i.e., accuracy and forgetting) compared with our approach BCSR. PBCS and GCR, SPR and MetaSP don't involve bilevel formulations, they cannot guarantee accurate coreset sampling though with lower time cost against BCSR. Compared with OCS, BCSR takes \(23\%\) and \(35\%\) running time reduction on Tiny-ImageNet and Split Food-101, respectively. Because similarity computing based on data pairs is computationally expensive for OCS. Greedy Coreset takes much more time than BCSR due to the usage of NTK.
## Appendix E Experiments on Permuted-MNIST
The experiment results on Permuted-MNIST is presented in this section. BCSR outperforms other baselines significantly in AVG ACC on all the data settings, while showing relatively low forgetting.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Hyperparameters} & Permuted & Split & Split & Multiple & Split \\ & MNIST & CIFAR-100 & Tiny-Imagenet & Datasets & Food-101 \\ \hline \(|\mathcal{M}|\) & 200 & 100 & 200 & 83 & 100 \\ \(|\mathcal{S}|\) & 10 & 10 & 20 & 10 & 10 \\ \(|\mathcal{B}_{t}|\) & 50 & 50 & 100 & 50 & 50 \\ \(lr_{t}\) & 0.005 & 0.15 & 0.20 & 0.1 & 0.15 \\ \(lr_{p}\) & 5.0 & 5.0 & 10 & 5.0 & 10 \\ \(lr_{w}\) & 5.0 & 5.0 & 10 & 5.0 & 10 \\ \(\lambda\) & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ \(E\) & 1 & 1 & 1 & 1 & 1 \\ \(J\) & 5 & 10 & 5 & 5 & 5 \\ \(N\) & 1 & 1 & 1 & 1 & 1 \\ \(Q\) & 3 & 3 & 3 & 3 & 3 \\ \(\delta\) & \(1e-3\) & \(1e-3\) & \(1e-3\) & \(1e-3\) & \(1e-3\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Hyperparameters settings in experiments.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Time (hours)} \\ & Tiny-ImageNet & Split Food-101 \\ \hline K-means Features & 0.15 & 0.02 \\ K-means Embedding & 0.41 & 0.04 \\ Uniform & 0.13 & 0.03 \\ iCaRL & 0.41 & 0.04 \\ Grad Matching & 0.57 & 0.06 \\ SPR & 0.77 & 0.11 \\ MetaSP & 0.86 & 0.13 \\ PBCS & 1.13 & 0.15 \\ GCR & 0.91 & 0.13 \\ Greedy Coreset & 6.26 & 0.83 \\ OCS & 3.45 & 0.40 \\ BCSR & 2.65 & 0.26 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Running time.
## Appendix F Evolution of Average Accuracy
We present the learning process of different methods on Multiple Datasets in Figure 4. There are five tasks, where the average accuracy is tested after training each task. BCSR shows better performance than other baselines in different settings, including Balanced, Imbalanced, and label-noise.
## Appendix G Possibility Distribution of Candidate Coreset
To explore the effect of top-\(K\) regularizer, we observe the possibility distribution of sample weights after each bilevel optimization, where sample weights of candidate coresets are initialized uniformly. The novel top-\(K\) regularizer makes sure that the summation of the top-\(K\) entries of the learned probability vector is large, such that we can confidently choose a coreset with size \(K\). We show the weight distribution after optimization in Figure. 5, where you can find that top-\(K\) weights are much higher than others (with a margin \(2\%\)), which easily distinguishes the top-\(K\) core samples from candidates.
Figure 4: Evolution of average accuracy during the continual learning process for Multiple Datasets.
Figure 5: Possibility distribution of candidate coreset for one mini-batch stream data.
Figure 6: The hypergradients in an outer-loop.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multicolumn{2}{c}{Balanced} & \multicolumn{2}{c}{Imbalanced} & \multicolumn{2}{c}{Label Noise} \\ & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) & \(A_{T}\) & \(FGT_{T}\) \\ \hline K-means Features & 54.30±0.93 & 0.064±0.012 & 39.46±0.11 & 0.023±0.003 & 46.71±0.64 & 0.094±0.011 \\ K-means Embedding & 54.78±1.83 & 0.056±0.009 & 41.97±2.02 & 0.013±0.005 & 47.36±1.58 & 0.082±0.015 \\ Uniform & 53.74±0.35 & 0.073±0.006 & 38.49±1.89 & 0.025±0.008 & 46.35±1.72 & 0.091±0.014 \\ iCaRL & 52.62±0.01 & 0.076±0.001 & 48.64±0.58 & 0.022±0.001 & 48.14±0.83 & 0.082±0.007 \\ Grad Matching & 54.76±1.61 & 0.065±0.011 & 33.67±1.24 & 0.028±0.003 & 48.24±0.42 & 0.089±0.009 \\ SPR & 54.24±0.45 & 0.068±0.019 & 40.79±0.83 & 0.031±0.003 & 48.23±0.75 & 0.067±0.004 \\ MetasP & 54.63±0.31 & 0.059±0.006 & 41.32±0.94 & 0.025±0.006 & 48.84±0.77 & 0.061±0.007 \\ Greedy Coreset & 54.10±0.81 & 0.051±0.007 & 26.68±13.39 & 0.033±0.014 & 49.45±1.73 & 0.078±0.009 \\ GCR & 54.53±0.64 & 0.067±0.021 & 40.63±0.50 & 0.032±0.005 & 48.64±0.75 & 0.074±0.014 \\ PBCS & 51.61±1.14 & 0.144±0.021 & 41.14±0.23 & 0.119±0.007 & 39.74±1.98 & 0.178±0.001 \\ OCS & 54.37±0.34 & **0.026±0.001** & 50.19±0.47 & 0.020±0.005 & 48.08±1.44 & **0.046±0.003** \\ BCSR & **56.23±0.29** & 0.058±0.002 & **52.52±0.43** & **0.010±0.002** & **50.82±3.03** & 0.056±0.017 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Experiment results on Permuted MNISTThe Effect of Top-\(K\) Regularizer
To further analyze the effects of top-\(K\) regularizer, we conduct the ablation study with different values of regularizer coefficient \(\lambda\) on balanced Split CIFAR-100. The performance results with different \(\lambda\) are shown in Table 10 and the corresponding average top-\(K\) summations of coreset weights are in Table 11. In our experiment, there are \(50\) candidate samples in each mini-batch data, and the summation of \(50\) coreset weights is equal to \(1.00\). Top-\(K\) summation of weights increases as \(\lambda\) increases, which imposes higher probabilities on top-\(K\) entries and lower probabilities on the rest candidates. The best performance is achieved when \(\lambda=0.1\), which means \(\lambda\) balances the trade-off between the loss function and regularizer strength: if \(\lambda\) is too large, the algorithm primarily focuses on choosing the important samples instead of updating the model parameter, and vice versa.
## Appendix I Hypergradient Evolution
To demonstrate the efficiency of bilevel optimization, we illustrate the evolution of hypergradients for a bilevel optimization on Split CIFAR-100 in Figure. 6. We observe the average norm of hypergradient reduces from \(10^{-1}\) to less than \(10^{-2}\) in each round of coreset selection with loops equal to \(10\). The hypergradient curves show that our designed bilevel optimization provides both theoretical and practical convergence guarantees.
## Appendix J The Effect of Coreset Size \(K\)
Coreset size \(K\) also plays an important role in the experiment. We compare our BCSR with other coreset-based algorithms on different \(K\). Note that coreset is selected from the current stream mini-batch \(\mathcal{B}_{t}\), so the coreset size \(K\) satisfies \(K\leq|\mathcal{B}_{t}|\). In the main experiment results, \(K=10\) is fixed in all the algorithms for a fair comparison. Here, we set \(K=10,20,40\) to conduct the continual learning experiments, respectively. The result is represented in Table 12. Note that all the results presented here are based on balanced Split CIFAR-100.
We mainly compare with the other four methods, including Uniform Sampling and three coreset-based methods, Greedy Coreset, PBCS, and OCS. We can observe that the performance of almost all methods becomes worse when coreset size is large. The reason is that if coreset size is larger and closer to \(\mathcal{B}_{t}\), more redundant or noisy data are selected from the current stream mini-batch, and the coreset would not be representative anymore. In contrast, the smaller coreset could reduce the probability that redundant data are selected. Compared with other methods, BCSR shows the best performance (both accuracy and forgetting) and better robustness on different \(K\).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Measure & \(\lambda\)=0.01 & \(\lambda\)=0.05 & \(\lambda\)=0.10 & \(\lambda\)=0.50 & \(\lambda\)=1.00 \\ \hline \(A_{T}\) & 59.37\(\pm\)0.35 & 60.23\(\pm\)0.43 & **61.60\(\pm\)0.14** & 59.42\(\pm\)1.45 & 58.89\(\pm\)1.64 \\ \(FGT_{T}\) & 0.095\(\pm\)0.098 & 0.074\(\pm\)0.054 & **0.051\(\pm\)0.015** & 0.138\(\pm\)0.075 & 0.128\(\pm\)0.076 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Ablation study for the regularize coefficient \(\lambda\).
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Methods} & \(K\)=10 & \(K\)=20 & \(K\)=40 \\ & \(A_{T}\) & \(A_{T}\) & \(A_{T}\) \\ \hline Uniform & 58.99\(\pm\)0.54 & 53.57\(\pm\)2.93 & 53.03\(\pm\)1.97 \\ Greedy Coreset & 59.39\(\pm\)0.16 & 56.81\(\pm\)3.32 & 56.09\(\pm\)0.42 \\ PBCS & 55.64\(\pm\)2.26 & 49.84\(\pm\)1.76 & 40.95\(\pm\)0.32 \\ OCS & 52.57\(\pm\)0.37 & 54.87\(\pm\)0.58 & 56.46\(\pm\)0.07 \\ BCSR & **61.60\(\pm\)0.14** & **59.06\(\pm\)1.15** & **56.58\(\pm\)0.21** \\ \hline \hline \end{tabular}
\end{table}
Table 12: The effect of coreset size
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Measure & \(\lambda\)=0.01 & \(\lambda\)=0.05 & \(\lambda\)=0.10 & \(\lambda\)=0.50 & \(\lambda\)=1.00 \\ \hline Top-\(K\) Sum/Total Sum & 0.41/1.00 & 0.56/1.00 & 0.63/1.00 & 0.73/1.00 & 0.84/1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Top-\(K\) summation/Total summation of coreset weights (\(K=10\)).
The Effect of Inner Loops \(N\) and Hessian-inverse-vector Product Loops \(Q\)
We conduct ablation studies to explore the sensitivity of hyperparameters (\(N\) and \(Q\)) on Split CIFAR-100. The results are presented in Tables 13 (\(N\)) and Table 14 (\(Q\)). The model performance remains relatively stable when increasing inner loops (\(N\)) while fixing \(Q\). But too large \(N\) (\(N\geq 15\)) leads to performance degradation due to overfitting. The \(Q\) loops show similar properties that a few \(Q\) loops (e.g., \(Q=3\)) are enough to approximate the Hessian-inverse-vector product. Too small \(Q\) and too large \(Q\) will hurt the performance due to possible underfitting (e.g., \(Q=1\)) and overfitting (e.g., \(Q=20\)).
## Appendix L Datasets
* **Split CIFAR-100**. Balanced split CIFAR-100 is based on the original CIFAR-100 and is split into 20 tasks, each consisting of 5 disjoint classes. The imbalanced setting and label noise are also applied to this dataset to make the task more challenging. We follow [13] to transform the original dataset to imbalanced long-tailed CIFAR-100. In the label-noise setting, we randomly select \(20\%\) data in each task and randomly change their labels to an arbitrary label of 10 classes.
* **Permuted MNIST**. Balanced MNIST is a handwritten digits dataset [43] containing 20 tasks, where each task applies a fixed random permutation to the image pixels. For the imbalanced setting, we randomly select 8 classes over 10 and sample \(10\%\) of the selected classes for training. We also conduct the experiments on the label noise scenario, where symmetric label noise with \(20\%\) noise rate is imposed on the data. In particular, to make the problem setting more challenging, each task only retains 3000 training data randomly sampled from the original data.
* **Multiple Datasets**. This dataset [78] contains a couple of totally different datasets, including MNIST [43], fashion-MNIST [75], NotMNIST [8], Traffic Sign [69] and SVHN [54]. There are 5 tasks, and each task is constructed by randomly selecting 1000 training samples from a different dataset. The procedure of creating the dataset in the imbalanced and label-noise settings is the same as that in Split CIFAR-100.
* **Split Tiny-ImageNet**. This dataset [42] contains 100000 images of 200 classes (500 for each class) downsized to \(64\times 64\) colored images. Each class has 500 training images, 50 validation images, and 50 test images. We construct the task sequences by splitting data into 20 tasks, where each task consists of 10 disjoint classes.
* **Split Food-101**. This dataset [7] is a challenging data set of 101 food categories, with 101'000 images. All the images are resized to \(64\times 64\) pixels to be fed into models. To build continual learning tasks easily, We discard the last category and split data into 20 tasks, with 5 categories within each task. Other data settings, including imbalanced and lable-noise, are the same as Split CIFAR-100.
## Appendix M Assumptions and Properties
We first provide standard definitions and assumptions for the convergence rate analysis of bilevel optimization [25, 36]. For notational convenience, we define \(\ell(\theta):=\sum_{i=1}^{n}\ell_{i}(\theta)\).
\begin{table}
\begin{tabular}{c c c c c c} \hline Measure & \(N\)=1 & \(N\)=5 & \(N\)=10 & \(N\)=15 & \(N\)=20 \\ \hline \(A_{T}\) & 61.60\(\pm\)0.14 & **61.75\(\pm\)0.11** & 61.64\(\pm\)0.15 & 60.77\(\pm\)0.32 & 59.20\(\pm\)0.41 \\ \(FGT_{T}\) & 0.051\(\pm\)0.015 & **0.047\(\pm\)0.013** & 0.063\(\pm\)0.017 & 0.074\(\pm\)0.021 & 0.079\(\pm\)0.035 \\ \hline \end{tabular}
\end{table}
Table 13: Ablation study for the inner loops (N) with fixed \(Q=3\).
\begin{table}
\begin{tabular}{c c c c c c} \hline Measure & \(Q\)=1 & \(Q\)=3 & \(Q\)=5 & \(Q\)=10 & \(Q\)=20 \\ \hline \(A_{T}\) & 52.14\(\pm\)1.53 & **61.60\(\pm\)0.14** & 61.57\(\pm\)0.15 & 58.42\(\pm\)0.53 & 57.80\(\pm\)1.31 \\ \(FGT_{T}\) & 0.123\(\pm\)0.038 & **0.051\(\pm\)0.015** & 0.064\(\pm\)0.012 & 0.173\(\pm\)0.045 & 0.162\(\pm\)0.041 \\ \hline \end{tabular}
\end{table}
Table 14: Ablation study for the loops \(Q\) with fixed \(N=1\).
**Definition 1**.: _A mapping \(f\) is \(L_{f}\)-Lipschitz continuous if for \(\forall\,z,z^{\prime},\,\|f(z)-f(z^{\prime})\|\leq L_{f}\|z-z^{\prime}\|\)._
**Assumption 1**.: _The lower-level function \(L(\theta,w)\) is \(\mu\)-strongly-convex w.r.t. \(\theta\)._
Assumption 1 is a necessary geometric assumption in analyzing the convergence rate of bilevel optimization algorithms, as also widely adopted existing theories in [25, 37, 32]. We also note that this condition is satisfied for overparameterized neural networks [82]. The following assumption imposes some Lipschitz continuity conditions on the upper- and lower-level objective functions.
**Assumption 2**.: _The gradients \(\nabla_{\theta}L(\theta,w)\), \(\nabla_{w}L(\theta,w)\) and \(\ell(\theta)\) are \(L\)-Lipschitz continuous w.r.t. \(\theta\) and \(w\). In addition, the gradient norm \(\|\nabla\ell(\theta^{*}(w))\|\leq M\)._
Note that we do not impose any conditions on the regularization function \(R(w,\delta)\). The following assumption imposes the Lipschitz continuity on the second-order derivatives of the lower-level functions.
**Assumption 3**.: _The second-order derivatives \(\nabla_{w}\nabla_{\theta}L(\theta,w)\) and \(\nabla_{\theta}^{2}L(w,\theta)\) are \(\rho\)-Lipschitz continuous._
Then, we use the following proposition to characterize the properties of the smoothed top-\(K\) regularizer \(R(w,\delta)\), based on the results in [23].
**Proposition 1**.: _The smoothed regularizer \(R(w,\delta)\) and its sampled version \(R(w,\delta;\widetilde{z})\) satisfy the following two important properties: (i) The gradient \(\nabla R(w,\delta)\) exists and is \(\frac{\sqrt{K}n}{\delta}\)-Lipschitz continuous; (ii) The gradient norm \(\|R(w,\delta;\widetilde{z})\|\) is bounded by \(\sqrt{K}\) for any sample \(\widetilde{z}\)._
Proposition 1 shows that the regularizer \(R(w,\delta)\) is smooth and its stochastic version \(R(w,\delta;\widetilde{z})\) is bounded. These two properties are important to guarantee the non-asymptotic convergence of our proposed bilevel method. | ## Review
### Summary
The paper addresses the coreset selection problem in rehearsal-based continual learning through a novel bilevel optimization formulation. The authors propose a method that incorporates a smoothed top-K regularizer and a probability simplex, which aims to reduce computational costs and improve performance compared to existing methods. They demonstrate the effectiveness of their approach with comprehensive experiments, including ablation studies, showcasing improved accuracy and reduced forgetting. The paper emphasizes overcoming the limitations of traditional methods, providing theoretical guarantees and a clearer understanding of the optimization process involved in continual learning.
### Strengths
- The exploration of bilevel optimization in continual learning is timely and important.
- The paper is well-structured and easy to follow, aided by useful figures and pseudocode.
- The introduction of a new loss function and the use of a smoothed top-K regularizer enhances the optimization process.
- Comprehensive experiments across various datasets validate the proposed method's effectiveness.
- The availability of code facilitates reproducibility for the research community.
### Weaknesses
- The reliance on a proxy model may limit applicability for large-scale models with over 100M parameters.
- Computational expense related to Hessian-vector products could hinder efficiency in large-scale scenarios.
- The paper could strengthen the motivation for the proposed method and clarify its advantages over previous works.
- Further analysis of the top-K loss and comparisons with non-coreset methods would enhance understanding.
- Minor improvements could be made to the main illustration to include more technical details.
### Questions
- How can the need for a proxy model be addressed for large models?
- What guidelines can be provided for selecting parameters like N, Q, and δ in the algorithm?
- Can the authors clarify the differences and advantages of their method compared to prior work?
- Is it feasible to improve the efficiency of the proposed method using recent advancements in Hessian-free bilevel algorithms?
### Soundness
**Score:** 12
**Description:** 3 = good; the methodology is well-founded with theoretical guarantees, although some aspects could benefit from further clarification.
### Presentation
**Score:** 11
**Description:** 3 = good; the paper is well-organized and clear, though minor improvements in figures and explanations could enhance clarity.
### Contribution
**Score:** 10
**Description:** 3 = good; while the paper presents solid contributions, it may be seen as incremental compared to previous works, requiring clearer distinctions.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact potential but requires minor improvements and clarifications.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a solid contribution to the field of continual learning with a novel approach to coreset selection, demonstrating both theoretical and empirical strengths. While some concerns about the efficiency and originality of the proposed method exist, they do not outweigh the overall quality and potential impact of the work, making it suitable for acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Probabilistic Exponential Integrators
Nathanael Bosch
Tubingen AI Center, University of Tubingen
[email protected] Philipp Hennig
Tubingen AI Center, University of Tubingen
[email protected] Filip Tronarp
Lund University
[email protected]
###### Abstract
Probabilistic solvers provide a flexible and efficient framework for simulation, uncertainty quantification, and inference in dynamical systems. However, like standard solvers, they suffer performance penalties for certain _stiff_ systems, where small steps are required not for reasons of numerical accuracy but for the sake of stability. This issue is greatly alleviated in semi-linear problems by the _probabilistic exponential integrators_ developed in this paper. By including the fast, linear dynamics in the prior, we arrive at a class of probabilistic integrators with favorable properties. Namely, they are proven to be L-stable, and in a certain case reduce to a classic exponential integrator--with the added benefit of providing a probabilistic account of the numerical error. The method is also generalized to arbitrary non-linear systems by imposing piece-wise semi-linearity on the prior via Jacobians of the vector field at the previous estimates, resulting in _probabilistic exponential Rosenbrock methods_. We evaluate the proposed methods on multiple stiff differential equations and demonstrate their improved stability and efficiency over established probabilistic solvers. The present contribution thus expands the range of problems that can be effectively tackled within probabilistic numerics.
## 1 Introduction
Dynamical systems appear throughout science and engineering, and their accurate and efficient simulation is a key component in many scientific problems. There has also been a surge of interest in the intersection with machine learning, both regarding the usage of machine learning methods to model and solve differential equations [36, 18, 35], and in a dynamical systems perspective on machine learning methods themselves [8, 5]. This paper focuses on the numerical simulation of dynamical systems within the framework of _probabilistic numerics_, which treats the numerical solvers themselves as probabilistic inference methods [11, 12, 33]. In particular, we expand the range of problems that can be tackled within this framework and introduce a new class of stable probabilistic numerical methods for stiff ordinary differential equations (ODEs).
Stiff equations are problems for which certain implicit methods perform much better than explicit ones [10]. But implicit methods come with increased computational complexity per step, as they typically require solving a system of equations. _Exponential integrators_ are an alternative class of methods for efficiently solving large stiff problems [48, 16, 7, 15]. They are based on the observation that, if the ODE has a semi-linear structure, the linear part can be solved exactly and only the non-linear part needs to be numerically approximated. The resulting methods are formulated in an explicit manner and do not require solving a system of equations, while achieving similar or better stability than implicit methods. However, such methods have not yet been formulated probabilistically.
In this paper we develop _probabilistic exponential integrators_, a new class of probabilistic numerical solvers for stiff semi-linear ODEs. We build on the _ODE filters_ which have emerged as an efficient and flexible class of probabilistic numerical methods for general ODEs [40, 21, 45]. They have known convergence rates [21, 46], which have also been demonstrated empirically [2, 26, 24], they are applicable to a wide range of numerical differential equation problems [23, 25, 3], their probabilistic output can be integrated into larger inference problems [20, 39, 47], and they can be formulated parallel-in-time [4]. But while it has been shown that the choice of underlying Gauss-Markov prior does influence the resulting ODE solver [30, 45, 2], there has not yet been strong evidence for the utility of priors other than the well-established integrated Wiener process. Probabilistic exponential integrators provide this evidence: in the probabilistic numerics framework, "solving the linear part of the ODE exactly" corresponds to an appropriate choice of prior.
ContributionsOur main contribution is the development of probabilistic exponential integrators, a new class of stable probabilistic solvers for stiff semi-linear ODEs. We demonstrate the close link of these methods to classic exponential integrators in Proposition 1, provide an equivalence result to a classic exponential integrator in Proposition 2, and prove their L-stability in Proposition 3. To enable a numerically stable implementation, we present a quadrature-based approach to directly compute square-roots of the process noise covariance in Section 3.2. Finally, in Section 3.6 we also propose probabilistic exponential Rosenbrock methods for problems in which semi-linearity is not known a priori. We evaluate all proposed methods on multiple stiff problems and demonstrate the improved stability and efficiency of the probabilistic exponential integrators over existing probabilistic solvers.
## 2 Numerical ODE solutions as Bayesian state estimation
Let us first consider an initial value problem with some general non-linear ODE, of the form
\[\dot{y}(t) =f(y(t),t),\qquad t\in[0,T], \tag{1a}\] \[y(0) =y_{0}, \tag{1b}\]
with vector field \(f:\mathbb{R}^{d}\times\mathbb{R}\to\mathbb{R}^{d}\), initial value \(y_{0}\in\mathbb{R}^{d}\), and time span \([0,T]\). Probabilistic numerical ODE solvers aim to compute a posterior distribution over the ODE solution \(y(t)\) such that it satisfies the ODE on a discrete set of points \(\mathbb{T}=\{t_{n}\}_{n=0}^{N}\subset[0,T]\), that is
\[p\left(y(t)\ \Big{|}\ y(0)=y_{0},\{\dot{y}(t_{n})=f(y(t_{n}),t_{n})\}_{n=0}^ {N}\right). \tag{2}\]
We call this quantity, and approximations thereof, a _probabilistic numerical ODE solution_. Probabilistic numerical ODE solvers thus compute not just a single point estimate of the ODE solution, but a posterior distribution which provides a structured estimate of the numerical approximation error.
In the following, we briefly recapitulate the probabilistic ODE filter framework of Schober et al. [40] and Tronarp et al. [45] and define the prior, data model, and approximate inference scheme. In Section 3 we build on these foundations to derive the proposed probabilistic exponential integrator.
Figure 1: _Probabilistic numerical ODE solvers with different stability properties. Left_: The explicit EK0 solver with a 3-times integrated Wiener process prior is unstable and diverges from the true solution. _Center_: The semi-implicit EK1 with the same prior does not diverge even though it uses a larger step size, due to it being A-stable, but it exhibits oscillations in the initial phase of the solution. _Right_: The proposed exponential integrator is L-stable and thus does not exhibit any oscillations.
### Gauss-Markov prior
_A priori_, we model \(y(t)\) with a Gauss-Markov process, defined by a stochastic differential equation
\[\mathrm{d}Y(t)=AY(t)\,\mathrm{d}t+\kappa B\,\mathrm{d}W(t),\qquad Y(0)=Y_{0}, \tag{3}\]
with state \(Y(t)\in\mathbb{R}^{d(q+1)}\), model matrices \(A\in\mathbb{R}^{d(q+1)\times d(q+1)},B\in\mathbb{R}^{d(q+1)\times d}\), diffusion scaling \(\kappa\in\mathbb{R}\), and smoothness \(q\in\mathbb{N}\). More precisely, \(A\) and \(B\) are chosen such that the state is structured as \(Y(t)=[Y^{(0)}(t),\ldots,Y^{(q)}(t)]\), and then \(Y^{(i)}(t)\) models the \(i\)-th derivative of \(y(t)\). The initial value \(Y_{0}\in\mathbb{R}^{d(q+1)}\) must be chosen such that it enforces the initial condition, that is, \(Y^{(0)}(0)=y_{0}\).
One concrete example of such a Gauss-Markov process that is commonly used in the context of probabilistic numerical ODE solvers is the \(q\)-times Integrated Wiener process, with model matrices
\[A_{\mathrm{IWP}(d,q)}=\begin{bmatrix}0&I_{d}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&I_{d}\\ 0&0&\cdots&0\end{bmatrix},\qquad B_{\mathrm{IWP}(d,q)}=\begin{bmatrix}0\\ \vdots\\ 0\\ I_{d}\end{bmatrix}. \tag{4}\]
Alternatives include the class of Matern processes and the integrated Ornstein-Uhlenbeck process [46]--the latter plays a central role in this paper and will be discussed in detail later.
\(Y(t)\) satisfies linear Gaussian transition densities of the form [44]
\[Y(t+h)\mid Y(t)\sim\mathcal{N}\left(\Phi(h)Y(t),\kappa^{2}Q(h)\right), \tag{5}\]
with transition matrices \(\Phi(h)\) and \(Q(h)\) given by
\[\Phi(h)=\exp\left(Ah\right),\qquad Q(h)=\int_{0}^{h}\Phi(h-\tau)BB^{\top}\Phi^ {\top}(h-\tau)\,\mathrm{d}\tau. \tag{6}\]
These quantities can be computed with a matrix fraction decomposition [44]. For \(q\)-times integrated Wiener process priors, closed-form expressions for the transition matrices are available [21].
### Information operator
The likelihood, or data model, of a probabilistic ODE solver relates the uninformed prior to the actual ODE solution of interest with an _information operator_\(\mathcal{I}\)[6], defined as
\[\mathcal{I}[Y](t)\coloneqq E_{1}Y(t)-f\left(E_{0}Y(t),t\right), \tag{7}\]
where \(E_{i}\in\mathbb{R}^{d\times d(q+1)}\) are selection matrices such that \(E_{i}Y(t)=Y^{(i)}(t)\). \(\mathcal{I}[Y]\) then captures how well \(Y\) solves the given ODE problem. In particular, \(\mathcal{I}\) maps the true ODE solution \(y\) to the zero function, i.e. \(\mathcal{I}[y]\equiv 0\). Conversely, if \(\mathcal{I}[y](t)=0\) holds for all \(t\in[0,T]\) then \(y\) solves the given ODE. Unfortunately, it is in general infeasible to solve an ODE exactly and enforce \(\mathcal{I}[Y](t)=0\) everywhere, which is why numerical ODE solvers typically discretize the time interval and take discrete steps. This leads to the data model used in most probabilistic ODE solvers [45]:
\[\mathcal{I}[Y](t_{n})=E_{1}Y(t_{n})-f\left(E_{0}Y(t_{n}),t_{n}\right)=0,\qquad t _{n}\in\mathbb{T}\subset[0,T]. \tag{8}\]
Note that this specific information operator is closely linked to the IVP considered in Eq. (1a). By defining a (slightly) different data model we can also formulate probabilistic numerical IVP solvers for higher-order ODEs or differential-algebraic equations, or encode additional information such as conservation laws or noisy trajectory observations [3, 39].
### Approximate Gaussian inference
The resulting inference problem is described by a Gauss-Markov prior and a Dirac likelihood
\[Y(t_{n+1})\mid Y(t_{n}) \sim\mathcal{N}\left(\Phi_{n}Y(t_{n}),\kappa^{2}Q_{n}\right), \tag{9a}\] \[Z_{n}\mid Y(t_{n}) \sim\delta\left(E_{1}Y(t_{n})-f\left(E_{0}Y(t_{n}),t_{n}\right) \right), \tag{9b}\]
with \(\Phi_{n}\coloneqq\Phi(t_{n+1}-t_{n})\), \(Q_{n}\coloneqq Q(t_{n+1}-t_{n})\), initial value \(Y(0)=Y_{0}\), discrete time grid \(\{t_{n}\}_{n=0}^{N}\), and zero-valued data \(Z_{n}=0\) for all \(n\). The solution of the resulting non-linear Gauss-Markov regression problem can then be efficiently approximated with Bayesian filtering and smoothing techniques [37]. Notable examples that have been used to construct probabilistic numerical ODE solvers include quadrature filters, the unscented Kalman filter, the iterated extended Kalman smoother, or particle filters [19, 45, 46]. Here, we focus on the well-established extended Kalman filter (EKF). We briefly discuss the EKF for the given state estimation problem in the following.
PredictionGiven a Gaussian state estimate \(Y(t_{n-1})\sim\mathcal{N}\left(\mu_{n-1},\Sigma_{n-1}\right)\) and the linear conditional distribution as in Eq. (9a), the marginal distribution \(Y(t_{n})\sim\mathcal{N}\left(\mu_{n}^{-},\Sigma_{n}^{-}\right)\) is also Gaussian, with
\[\mu_{n}^{-} =\Phi_{n-1}\mu_{n-1}, \tag{10a}\] \[\Sigma_{n}^{-} =\Phi_{n-1}\Sigma_{n-1}\Phi_{n-1}^{\top}+\kappa^{2}Q_{n-1}. \tag{10b}\]
LinearizationTo efficiently compute a tractable approximation of the true posterior, the EKF linearizes the information operator \(\mathcal{I}\) around the predicted mean \(\mu_{n}^{-}\), i.e. \(\mathcal{I}[Y](t_{n})\approx H_{n}Y(t_{n})+b_{n}\),
\[H_{n} =E_{1}-F_{y}E_{0}, \tag{11a}\] \[b_{n} =F_{y}E_{0}\mu_{n}^{-}-f(E_{0}\mu_{n}^{-},t_{n}). \tag{11b}\]
An exact linearization with Jacobian \(F_{y}=\partial_{y}f(E_{0}\mu_{n}^{-},t_{n})\) leads to a semi-implicit probabilistic ODE solver, which we call the EK1[45]. Other choices include the zero matrix \(F_{y}=0\), which results in the explicit EK0 solver [40; 21], or a diagonal Jacobian approximation (the DiagonalEK1) which combines some stability benefits of the EK1 with the lower computational cost of the EK0[24].
Correction stepIn the linearized observation model, the posterior distribution of \(Y(t_{n})\) given the datum \(Z_{n}\) is again Gaussian. Its posterior mean and covariance \((\mu_{n},\Sigma_{n})\) are given by
\[S_{n} =H_{n}\Sigma_{n}^{-}H_{n}^{\top}, \tag{12a}\] \[K_{n} =\Sigma_{n}^{-}H_{n}^{\top}S_{n}^{-1},\] (12b) \[\mu_{n} =\mu_{n}^{-}-K_{n}\left(E_{1}\mu_{n}^{-}-f(E_{0}\mu_{n}^{-},t_{n} )\right),\] (12c) \[\Sigma_{n} =\left(I-K_{n}H_{n}\right)\Sigma_{n}^{-}. \tag{12d}\]
This is also known as the _update_ step of the EKF.
SmoothingTo condition the state estimates on all data, the EKF can be followed by a smoothing pass. Starting with \(\mu_{N}^{S}\coloneqq\mu_{N}\) and \(\Sigma_{N}^{S}\coloneqq\Sigma_{n}\), it consists of the following backwards recursion:
\[G_{n} =\Sigma_{n}\Phi_{n}^{\top}\left(\Sigma_{n+1}^{-}\right)^{-1}, \tag{13a}\] \[\mu_{n}^{S} =\mu_{n}+G_{n}(\mu_{n+1}^{S}-\mu_{n+1}^{-}),\] (13b) \[\Sigma_{n}^{S} =\Sigma_{n}+G_{n}(\Sigma_{n+1}^{S}-\Sigma_{n+1}^{-})G_{n}^{\top}. \tag{13c}\]
ResultThe above computations result in a _probabilistic numerical ODE solution_ with marginals
\[p\left(Y(t_{i})\ \middle|\ \left\{E_{1}Y(t_{n})-f\left(E_{0}Y(t_{n}),t_{n} \right)=0\right\}_{n=0}^{N}\right)\approx\mathcal{N}\left(\mu_{i}^{S},\Sigma_ {i}^{S}\right), \tag{14}\]
which, by construction of the state \(Y\), also contains estimates for the ODE solution as \(y(t)=E_{0}Y(t)\). Since the EKF-based probabilistic solver does not compute only the marginals in Eq. (14), but a full posterior distribution for the continuous object \(y(t)\), it can be evaluated for times \(t\notin\mathbb{T}\) (also known as "dense output" in the context of ODE solvers); it can produce joint samples from this posterior; and it can be used as a Gauss-Markov prior for subsequent inference tasks [40; 2; 47].
### Practical considerations and implementation details
To improve numerical stability and preserve positive-semidefiniteness of the computed covariance matrices, probabilistic ODE solvers typically operate on square-roots of covariance matrices, defined by a matrix decomposition of the form \(M=\sqrt{M}\sqrt{M}^{\top}\)[26]. For example, the Cholesky factor is one possible square-root of a positive definite matrix. But in general, the algorithm does not require the square-roots to be upper- or lower-triangular, or even square. Additionally, we compute the exact initial state \(Y_{0}\) from the IVP using Taylor-mode automatic differentiation [9; 26], we compute smoothing estimates with preconditioning [26], and we calibrate uncertainties globally with a quasi-maximum likelihood approach [45; 2].
## 3 Probabilistic exponential integrators
In the remainder of the paper, unless otherwise stated, we focus on IVPs with a semi-linear vector-field
\[\dot{y}(t)=f(y(t),t)=Ly(t)+N(y(t),t). \tag{15}\]
Assuming \(N\) admits a Taylor series expansion around \(t\), the variation of constants formula provides a formal expression of the solution at time \(t+h\):
\[y(t+h)=\exp(Lh)y(t)+\sum_{k=0}^{\infty}h^{k+1}\Bigg{(}\int_{0}^{1}\exp(Lh(1- \tau))\frac{\tau^{k}}{k!}\,\mathrm{d}\tau\Bigg{)}\frac{\mathrm{d}^{k}}{ \mathrm{d}t^{k}}N(y(t),t). \tag{16}\]
This observation is the starting point for the development of _exponential integrators_[31, 15]. By further defining the so-called \(\varphi\)-functions
\[\varphi_{k}(z)=\int_{0}^{1}\exp(z(1-\tau))\frac{\tau^{k-1}}{(k-1)!}\,\mathrm{d }\tau, \tag{17}\]
the above identity of the ODE solution simplifies to
\[y(t+h)=\exp(Lh)y(t)+\sum_{k=0}^{\infty}h^{k+1}\varphi_{k+1}(Lh)\frac{\mathrm{d }^{k}}{\mathrm{d}t^{k}}N(y(t),t). \tag{18}\]
In this section we develop a class of _probabilistic exponential integrators_. This is achieved by defining an appropriate class of priors that absorbs the partial linearity, which leads to the integrated Ornstein-Uhlenbeck processes. Proposition 1 below directly relates this choice of prior to the classical exponential integrators. Proposition 2 demonstrates a direct equivalence between the predictor-corrector form exponential trapezoidal rule and the once integrated Ornstein-Uhlenbeck process. Furthermore, the favorable stability properties of classical exponential integrators is retained for the probabilistic counterparts as shown in Proposition 3.
### The integrated Ornstein-Uhlenbeck process
In Section 2.1 we highlighted the choice of the \(q\)-times integrated Wiener process prior, which essentially corresponds to modeling the \((q-1)\)-th derivative of the right-hand side \(f\) with a Wiener process. Here we follow a similar motivation, but only for the non-linear part \(N\). Differentiating both sides of Eq. (15) \(q-1\) times with respect to \(t\) yields
\[\frac{\mathrm{d}^{q-1}}{\mathrm{d}t^{q-1}}\dot{y}(t)=L\,\frac{\mathrm{d}^{q-1} }{\mathrm{d}t^{q-1}}y(t)+\frac{\mathrm{d}^{q-1}}{\mathrm{d}t^{q-1}}N(y(t),t). \tag{19}\]
Then, modeling \(\frac{\mathrm{d}^{q-1}}{\mathrm{d}t^{q-1}}N(y(t),t)\) as a Wiener process and relating the result to \(y(t)\) gives
\[\mathrm{d}y^{(i)}(t) =y^{(i+1)}(t)\,\mathrm{d}t, \tag{20a}\] \[\mathrm{d}y^{(q)}(t) =Ly^{(q)}(t)\,\mathrm{d}t+\kappa I_{d}\,\mathrm{d}W^{(q)}(t). \tag{20b}\]
This process is also known as the \(q\)-times integrated Ornstein-Uhlenbeck process (IOUP), with rate parameter \(L\) and diffusion parameter \(\kappa\). It can be equivalently stated with the previously introduced notation (Section 2.1), by defining a state \(Y(t)\), as the solution of a linear time-invariant (LTI) SDE as in Eq. (3), with system matrices
\[A_{\mathrm{IOUP}(d,q)}=\begin{bmatrix}0&I_{d}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&I_{d}\\ 0&0&\cdots&L\end{bmatrix},\qquad B_{\mathrm{IOUP}(d,q)}=\begin{bmatrix}0\\ \vdots\\ 0\\ I_{d}\end{bmatrix}. \tag{21}\]
_Remark 1_ (The mean of the IOUP process solves the linear part of the ODE exactly).: By taking the expectation of Eq. (20b) and by linearity of integration, we can see that the mean of the IOUP satisfies
\[\dot{\mu}^{(0)}(t)=L\mu^{(0)}(t),\qquad\mu^{(0)}(0)=y_{0}. \tag{22}\]
This is in line with the motivation of exponential integrators: the linear part of the ODE is solved exactly, and we only need to approximate the non-linear part. Figure 2 visualizes this idea.
### The transition parameters of the integrated Ornstein-Uhlenbeck process
Since the process \(Y(t)\) is defined as the solution of a linear time-invariant SDE, it satisfies discrete transition densities \(p(Y(t+h)\mid Y(t))=\mathcal{N}\left(\Phi(h)Y(t),\kappa^{2}Q(h)\right)\). The following result shows that the transition parameters are intimately connected with the \(\varphi\)-functions defined in Eq. (17).
**Proposition 1**.: _The transition matrix of a \(q\)-times integrated Ornstein-Uhlenbeck process satisfies_
\[\Phi(h)=\begin{bmatrix}\exp\bigl{(}A_{\mathrm{IWP}(d,q-1)}h\bigr{)}&\Phi_{12}( h)\\ 0&\exp(Lh)\end{bmatrix},\qquad\text{with}\qquad\Phi_{12}(h)\coloneqq\begin{bmatrix}h^ {q}\varphi_{q}(Lh)\\ h^{q-1}\varphi_{q-1}(Lh)\\ \vdots\\ h\varphi_{1}(Lh)\end{bmatrix}. \tag{23}\]
Proof in Appendix A. Although Proposition 1 indicates that \(\Phi(h)\) may be computed more efficiently than by calling a matrix-exponential on a \(d(q+1)\times d(q+1)\) matrix, this is known to be numerically less stable [41]. We therefore compute \(\Phi(h)\) with the standard matrix-exponential formulation.
Directly computing square-roots of the process noise covarianceNumerically stable probabilistic ODE solvers require a square-root, \(\sqrt{Q(h)}\), of the process noise covariance rather than the full matrix, \(Q(h)\). For IWP priors this can be computed from the closed-form representation of \(Q(h)\) via an appropriately preconditioned Cholesky factorization [26]. However, for IOUP priors we have not found an analogous method that works reliably. Therefore, we compute \(\sqrt{Q(h)}\) directly with numerical quadrature. More specifically, given a quadrature rule with nodes \(\tau_{i}\in[0,h]\) and positive weights \(w_{i}>0\), the integral for \(Q(h)\) given in Eq. (6) is approximated by
\[Q(h)\approx\sum_{i=1}^{m}w_{i}\exp(A(h-\tau_{i}))BB^{\top}\exp(A^{\top}(h-\tau _{i}))\eqqcolon\sum_{i=1}^{m}M_{i}, \tag{24}\]
with square-roots \(\sqrt{M_{i}}=\sqrt{w_{i}}\exp(A(h-\tau_{i}))B\) of the summands, which is well-defined since \(w_{i}>0\). We can thus compute a square-root representation of the sum with a QR-decomposition
\[X\cdot R=\mathrm{QR}\left(\begin{bmatrix}\sqrt{M_{1}}&\cdots&\sqrt{M_{m}} \end{bmatrix}^{\top}\right). \tag{25}\]
We obtain \(Q(h)\approx R^{\top}R\), and therefore an approximate square-root factor is given by \(\sqrt{Q(h)}\approx R^{\top}\). Similar ideas have previously been employed for time integration of Riccati equations [42, 43]. We use this quadrature-trick for all IOUP methods, with Gauss-Legendre quadrature on \(m=q\) nodes.
### Linearization and correction
The information operator of the probabilistic exponential integrator is defined exactly as in Section 2.2. But since we now assume a semi-linear vector-field \(f\), we have an additional option for the linearization: instead of choosing the exact \(F_{y}=\partial_{y}f\) (EK1) or the zero-matrix \(F_{y}=0\) (EK0), a cheap approximate Jacobian is given by the linear part \(F_{y}=L\). We denote this approach by EKL. This is chosen as the default for the probabilistic exponential integrator. Note that the EKL approach can also be combined with an IWP prior, which will be serve as an additional baseline in the Section 4.
Figure 2: _Damped oscillator dynamics and priors with different degrees of encoded information. Left:_ Once-integrated Wiener process, a popular prior for probabilistic ODE solvers. _Center_: Once-integrated Ornstein–Uhlenbeck process (IOUP) with rate parameter chosen to encode the known linearity of the ODE. Right: IOUP with both the ODE information and a specified initial value and derivative. This is the kind of prior used in the probabilistic exponential integrator.
### Equivalence to the classic exponential trapezoidal rule in predict-evaluate-correct mode
Now that the probabilistic exponential integrator has been defined, we can establish an equivalence result to a classic exponential integrator, similarly to the closely-related equivalence statement by Schober et al. [40, Proposition 1] for the non-exponential case.
**Proposition 2** (Equivalence to the PEC exponential trapezoidal rule).: _The mean estimate of the probabilistic exponential integrator with a once-integrated Ornstein-Uhlenbeck prior with rate parameter \(L\) is equivalent to the classic exponential trapezoidal rule in predict-evaluate-correct mode, with the predictor being the exponential Euler method. That is, it is equivalent to the scheme_
\[\tilde{y}_{n+1} =\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n}), \tag{26a}\] \[y_{n+1} =\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n})+h^{2} \varphi_{2}(Lh)\frac{N(\tilde{y}_{n+1})-N(\tilde{y}_{n})}{h}, \tag{26b}\]
_where Eq. (26a) corresponds to a prediction step with the exponential Euler method, and Eq. (26b) corresponds to a correction step with the exponential trapezoidal rule._
The proof is given in Appendix B. This equivalence result provides another theoretical justification for the proposed probabilistic exponential integrator. But note that the result only holds for the mean, while the probabilistic solver computes additional quantities in order to track the solution uncertainty, namely covariances. These are not provided by a classic exponential integrator.
### L-stability of the probabilistic exponential integrator
When solving stiff ODEs, the actual efficiency of a numerical method often depends on its stability. One such property is _A-stability_: It guarantees that the numerical solution of a decaying ODE will also decay, independently of the chosen step size. In contrast, explicit methods typically only decay for sufficiently small steps. In the context of probabilistic ODE solvers, the EK0 is considered to be explicit, but the EK1 with IWP prior has been shown to be A-stable [45]. Here, we show that the probabilistic exponential integrator satisfies the stronger _L-stability_: the numerical solution not only decays, but it decays _fast_, i.e. it goes to zero as the step size goes to infinity. Figure 1 visualizes the different probabilistic solver stabilities. For formal definitions, see for example [27, Section 8.6].
**Proposition 3** (L-stability).: _The probabilistic exponential integrator is L-stable._
The full proof is given in Appendix C. The property essentially follows from Remark 1 which stated that the IOUP solves linear ODEs exactly. This implies fast decay and gives L-stability.
### Probabilistic exponential Rosenbrock-type methods
We conclude with a short excursion into exponential Rosenbrock methods [14, 17, 28]: Given a non-linear ODE \(\dot{y}(t)=f(y(t),t)\), exponential Rosenbrock methods perform a continuous linearization of the right-hand side \(f\) around the numerical ODE solution and essentially solve a sequence of IVPs
\[\dot{y}(t) =J_{n}y(t)+\left(f(y(t),t)-J_{n}y(t)\right),\qquad t\in[t_{n},t_{n +1}], \tag{27a}\] \[y(t_{n}) =y_{n}, \tag{27b}\]
where \(J_{n}\) is the Jacobian of \(f\) at the numerical solution estimate \(\hat{y}(t_{n})\). This approach enables exponential integrators for problems where the right-hand side \(f\) is not semi-linear. Furthermore, by automatically linearizing along the numerical solution the linearization can be more accurate, the Lipschitz-constant of the non-linear remainder becomes smaller, and the resulting solvers can thus be more efficient than their globally linearized counterparts [17].
This can also be done in the probabilistic setting: By linearizing the ODE right-hand side \(f\) at each step of the solver around the filtering mean \(E_{0}\mu_{n}\), we (locally) obtain a semi-linear problem. Then, updating the rate parameter of the integrated Ornstein-Uhlenbeck process at each step of the numerical solver results in _probabilistic exponential Rosenbrock-type methods_. As before, the linearization of the information operator can be done with any of the EK0, EK1, or EKL. But since here the prediction relies on exact local linearization, we will by default also use an exact EK1 linearization. The resulting solver and its stability and efficiency will be evaluated in the following experiments.
## 4 Experiments
In this section we investigate the utility and performance of the proposed probabilistic exponential integrators and compare it to standard non-exponential probabilistic solvers on multiple ODEs. All methods are implemented in the Julia programming language [1], with special care being taken to implement the solvers in a numerically stable manner, that is, with exact state initialization, preconditioned state transitions, and a square-root implementation [26]. Reference solutions are computed with the DifferentialEquations.jl package [34]. All experiments run on a single, consumer-level CPU. Code for the implementation and experiments is publicly available on GitHub.1
Footnote 1: [https://github.com/nathanaelbosch/probabilistic-exponential-integrators](https://github.com/nathanaelbosch/probabilistic-exponential-integrators)
### Logistic equation with varying degrees of non-linearity
We start with a simple one-dimensional initial value problem: a logistic model with negative growth rate parameter \(r=-1\) and carrying capacity \(K\in\mathbb{R}_{+}\), of the form
\[\dot{y}(t) =-y(t)+\frac{1}{K}y(t)^{2},\qquad t\in[0,10], \tag{28a}\] \[y(0) =1. \tag{28b}\]
The non-linearity of this problem can be directly controlled through the parameter \(K\). Therefore, this test problem lets us investigate the IOUP's capability to leverage known linearity in the ODE.
We compare the proposed exponential integrator to all introduced IWP-based solvers, with different linearization strategies: EK0 approximates \(\partial_{y}f\approx 0\) (and is thus explicit), EKL approximates \(\partial_{y}f\approx-1\), and EKI linearizes with the correct Jacobian \(\partial_{y}f\). The results for four different values of \(K\) are shown in Fig. 3. The explicit solver shows the largest error of all compared solvers, likely due to its lacking stability. On the other hand, the proposed exponential integrator behaves as expected: the IOUP prior is most beneficial for larger values of \(K\), and as the non-linearity becomes more pronounced the performance of the IOUP approaches that of the IWP-based solver. Though for large step sizes, the IOUP outperforms the IWP prior even for the most non-linear case with \(K=10\).
### Burger's equation
Here, we consider Burger's equation, which is a semi-linear partial differential equation (PDE)
\[\partial_{t}u(x,t)=D\partial_{x}^{2}u(x,t)-u(x,t)\partial_{x}u(x,t),\qquad x \in[0,1],\quad t\in[0,1], \tag{29}\]
with diffusion coefficient \(D\in\mathbb{R}_{+}\). We transform the problem into a semi-linear ODE with the method of lines [29; 38], and discretize the spatial domain on \(250\) equidistant points and approximate the differential operators with finite differences. The full IVP specification, including all domains, initial and boundary conditions, and additional information on the discretization, is given in Appendix D.
The results shown in Fig. 4 demonstrate the different stability properties of the solvers: The explicit EK0 with IWP prior is unable to solve the IVP for any of the step sizes due to its insufficient stability, and even the A-stable EKI and the more approximate EKL require small enough steps \(\Delta t<10^{-1}\). On the other hand, both exponential integrators are able to compute meaningful solutions for a larger range of step sizes. They both achieve lower errors for most settings than their non-exponential
Figure 3: _The IOUP prior is more beneficial with increasing linearity of the ODE._ In all three examples, the IOUP-based exponential integrator achieves lower error while requiring fewer steps than the IWP-based solvers. This effect is more pronounced for the more linear ODEs.
counterparts. The second diagram in Fig. 4 compares the achieved error to the number of vector-field evaluations and points out a trade-off between both exponential methods: Since the Rosenbrock method additionally computes two Jacobians (with automatic differentiation) per step, it needs to evaluate the vector-field more often than the non-Rosenbrock method. Thus, for expensive-to-evaluate vector fields the standard probabilistic exponential integrator might be preferable.
### Reaction-diffusion model
Finally, we consider a discretized reaction-diffusion model given by a semi-linear PDE
\[\partial_{t}u(x,t)=D\partial_{x}^{2}u(x,t)+R(u(x,t)),\qquad x\in[0,1],\quad t \in[0,T], \tag{30}\]
where \(D\in\mathbb{R}_{+}\) is the diffusion coefficient and \(R(u)=u(1-u)\) is a logistic reaction term [22]. A finite-difference discretization of the spatial domain transforms this PDE into an IVP with semi-linear ODE. The full problem specification is provided in Appendix D.
Figure 5 shows the results. We again observe the improved stability of the exponential integrator variants by their lower error for large step sizes, and they outperform the IWP-based methods on all settings. The runtime-evaluation in Fig. 5 also visualizes another drawback of the Rosenbrock-type method: Since the problem is re-linearized at each step, the IOUP also needs to be re-discretized and thus a matrix exponential needs to be computed. In comparison, the non-Rosenbrock method only discretizes the IOUP prior once at the start of the solve. This advantage makes the non-Rosenbrock probabilistic exponential integrator the most performant solver in this experiment.
## 5 Limitations
The probabilistic exponential integrator shares many properties of both classic exponential integrators and of other filtering-based probabilistic solvers. This also brings some challenges.
Figure 4: _Benchmarking probabilistic ODE solvers on Burger’s equation._ Exponential and non-exponential probabilistic solvers are compared on Burger’s equation (a) in two work-precision diagrams (b). Both exponential integrators with IOUP prior achieve lower errors than the existing IWP-based solvers, in particular for large steps. This indicates their stronger stability properties.
Figure 5: _Benchmarking probabilistic ODE solvers on a reaction-diffusion model._ Exponential and non-exponential probabilistic solvers are compared on a reaction-diffusion model (a) in two work-precision diagrams (b). The proposed exponential integrators with IOUP prior achieve lower errors per step size than the existing IWP-based methods. The runtime comparison shows the increased cost of the Rosenbrock-type (RB) method, while the non-Rosenbrock probabilistic exponential integrator performs best in this comparison.
Cost of computing matrix exponentialsThe IOUP prior is more expensive to discretize than the IWP as it requires computing a matrix exponential. This trade-off is well-known also in the context of classic exponential integrators. One approach to reduce computational cost is to compute the matrix exponential only approximately [32], for example with Krylov-subspace methods [13; 17]. Extending these techniques to the probabilistic solver setting thus poses an interesting direction for future work.
Cubic scaling in the ODE dimensionThe probabilistic exponential integrator shares the complexity most (semi-)implicit ODE solvers: while being linear in the number of time steps, it scales cubically in the ODE dimension. By exploiting structure in the Jacobian and in the prior, some filtering-based ODE solvers have been formulated with linear scaling in the ODE dimension [24]. But this approach does not directly extend to the IOUP-prior. Nevertheless, exploiting known structure could be particularly relevant to construct solvers for specific ODEs, such as certain discretized PDEs.
## 6 Conclusion
We have presented probabilistic exponential integrators, a new class of probabilistic solvers for stiff semi-linear ODEs. By incorporating the fast, linear dynamics directly into the prior of the solver, the method essentially solves the linear part exactly, in a similar manner as classic exponential integrators. We also extended the proposed method to general non-linear systems via iterative re-linearization and presented probabilistic exponential Rosenbrock-type methods. Both methods have been shown both theoretically and empirically to be more stable than their non-exponential probabilistic counterparts. This work further expands the toolbox of probabilistic numerics and opens up new possibilities for accurate and efficient probabilistic simulation and inference in stiff dynamical systems.
## Acknowledgments and Disclosure of Funding
The authors gratefully acknowledge financial support by the German Federal Ministry of Education and Research (BMBF) through Project ADIMEM (FKZ 01IS18052B), and financial support by the European Research Council through ERC StG Action 757275 / PANAMA; the DFG Cluster of Excellence Machine Learning - New Perspectives for Science, EXC 2064/1, project number 390727645; the German Federal Ministry of Education and Research (BMBF) through the Tubingen AI Center (FKZ: 01IS18039A); and funds from the Ministry of Science, Research and Arts of the State of Baden-Wurttemberg. Filip Tronarp was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Nathanael Bosch. The authors also thank Jonathan Schmidt for many valuable discussions and for helpful feedback on the manuscript.
## References
* [1] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah. Julia: A fresh approach to numerical computing. _SIAM review_, 59(1):65-98, 2017.
* [2] N. Bosch, P. Hennig, and F. Tronarp. Calibrated adaptive probabilistic ODE solvers. In _International Conference on Artificial Intelligence and Statistics_. PMLR, 2021.
* [3] N. Bosch, F. Tronarp, and P. Hennig. Pick-and-mix information operators for probabilistic ODE solvers. In _International Conference on Artificial Intelligence and Statistics_. PMLR, 2022.
* [4] N. Bosch, A. Corenflos, F. Yaghoobi, F. Tronarp, P. Hennig, and S. Sarkka. Parallel-in-time probabilistic numerical ODE solvers, 2023.
* [5] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. In _Advances in Neural Information Processing Systems_. Curran Associates, Inc., 2018.
* [6] J. Cockayne, C. Oates, T. Sullivan, and M. Girolami. Bayesian probabilistic numerical methods. _SIAM Review_, 61:756-789, 2019.
* [7] S. M. Cox and P. C. Matthews. Exponential time differencing for stiff systems. _Journal of Computational Physics_, 176(2):430-455, 2002.
* [8] W. E. A proposal on machine learning via dynamical systems. _Communications in Mathematics and Statistics_, 5(1):111, Mar 2017.
* [9] A. Griewank and A. Walther. _Evaluating Derivatives_. Society for Industrial and Applied Mathematics, second edition, 2008.
* [10] E. Hairer and G. Wanner. _Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems_. Springer series in computational mathematics. Springer-Verlag, 1991.
* [11] P. Hennig, M. A. Osborne, and M. Girolami. Probabilistic numerics and uncertainty in computations. _Proceedings. Mathematical, physical, and engineering sciences_, 471(2179):20150142-20150142, Jul 2015.
* [12] P. Hennig, M. A. Osborne, and H. P. Kersting. _Probabilistic Numerics: Computation as Machine Learning_. Cambridge University Press, 2022.
* [13] M. Hochbruck and C. Lubich. On Krylov subspace approximations to the matrix exponential operator. _SIAM Journal on Numerical Analysis_, 34(5):1911-1925, Oct 1997.
* [14] M. Hochbruck and A. Ostermann. Explicit integrators of Rosenbrock-type. _Oberwolfach Reports_, 3(2):1107-1110, 2006.
* [15] M. Hochbruck and A. Ostermann. Exponential integrators. _Acta Numerica_, 19:209-286, 2010.
* [16] M. Hochbruck, C. Lubich, and H. Selhofer. Exponential integrators for large systems of differential equations. _SIAM Journal on Scientific Computing_, 19(5):1552-1574, 1998.
* [17] M. Hochbruck, A. Ostermann, and J. Schweitzer. Exponential Rosenbrock-type methods. _SIAM Journal on Numerical Analysis_, 47(1):786-803, 2009.
* [18] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang. Physics-informed machine learning. _Nature Reviews Physics_, 3(6):422-440, May 2021.
* [19] H. Kersting and P. Hennig. Active uncertainty calibration in Bayesian ode solvers. In _Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI)_, pages 309-318, June 2016.
* [20] H. Kersting, N. Kramer, M. Schiegg, C. Daniel, M. Tiemann, and P. Hennig. Differentiable likelihoods for fast inversion of Likelihood-free dynamical systems. In _International Conference on Machine Learning_. PMLR, 2020.
* [21] H. Kersting, T. J. Sullivan, and P. Hennig. Convergence rates of Gaussian ODE filters. _Statistics and computing_, 30(6):1791-1816, 2020.
* [22] A. N. Kolmogorov. A study of the equation of diffusion with increase in the quantity of matter, and its application to a biological problem. _Moscow University Bulletin of Mathematics_, 1:1-25, 1937.
* [23] N. Kramer and P. Hennig. Linear-time probabilistic solution of boundary value problems. In _Advances in Neural Information Processing Systems_. Curran Associates, Inc., 2021.
* [24] N. Kramer, N. Bosch, J. Schmidt, and P. Hennig. Probabilistic ODE solutions in millions of dimensions. In _International Conference on Machine Learning_. PMLR, 2022.
* [25] N. Kramer, J. Schmidt, and P. Hennig. Probabilistic numerical method of lines for time-dependent partial differential equations. In _International Conference on Artificial Intelligence and Statistics_. PMLR, 2022.
* [26] N. Kramer and P. Hennig. Stable implementation of probabilistic ode solvers, 2020.
* [27] J. D. Lambert. _Computational Methods in Ordinary Differential Equations_. Introductory Mathematics for Scientists And Engineers. Wiley, 1973.
* [28] V. T. Luan and A. Ostermann. Exponential Rosenbrock methods of order five construction, analysis and numerical comparisons. _Journal of Computational and Applied Mathematics_, 255:417-431, 2014.
* [29] N. K. Madsen. The method of lines for the numerical solution of partial differential equations. _Proceedings of the SIGNUM meeting on Software for partial differential equations_, 1975.
* [30] E. Magnani, H. Kersting, M. Schober, and P. Hennig. Bayesian filtering for ODEs with bounded derivatives, 2017.
* [31] B. V. Minchev and W. M. Wright. A review of exponential integrators for first order semi-linear problems. Technical report, Norges Teknisk-Naturvetenskaplige Universiteit, 2005.
* [32] C. Moler and C. Van Loan. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. _SIAM Review_, 45(1):349, Jan 2003.
* [33] C. J. Oates and T. J. Sullivan. A modern retrospective on probabilistic numerics. _Statistics and Computing_, 29(6):1335-1351, 2019.
* [34] C. Rackauckas and Q. Nie. DifferentialEquations.jl a performant and feature-rich ecosystem for solving differential equations in julia. _Journal of Open Research Software_, 5(1), 2017.
* [35] C. Rackauckas, Y. Ma, J. Martensen, C. Warner, K. Zubov, R. Supekar, D. Skinner, A. Ramadhan, and A. Edelman. Universal differential equations for scientific machine learning, 2021.
* [36] M. Raissi, P. Perdikaris, and G. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _Journal of Computational Physics_, 378:686707, Feb 2019.
* [37] S. Sarkka. _Bayesian Filtering and Smoothing_, volume 3 of _Institute of Mathematical Statistics textbooks_. Cambridge University Press, 2013.
* [38] W. E. Schiesser. _The numerical method of lines: integration of partial differential equations_. Elsevier, 2012.
* [39] J. Schmidt, N. Kramer, and P. Hennig. A probabilistic state space model for joint inference from differential equations and data. In _Advances in Neural Information Processing Systems_. Curran Associates, Inc., 2021.
* [40] M. Schober, S. Sarkka, and P. Hennig. A probabilistic model for the numerical solution of initial value problems. _Statistics and Computing_, 29(1):99-122, Jan 2019.
* [41] R. B. Sidje. Expokit: A software package for computing matrix exponentials. _ACM Transactions on Mathematical Software_, 24(1):130156, mar 1998.
* [42] T. Stillfjord. Low-rank second-order splitting of large-scale differential Riccati equations. _IEEE Transactions on Automatic Control_, 60(10):2791-2796, 2015.
* [43] T. Stillfjord. Adaptive high-order splitting schemes for large-scale differential Riccati equations. _Numerical Algorithms_, 78(4):1129-1151, Sep 2017.
* [44] S. Sarkka and A. Solin. _Applied Stochastic Differential Equations_. Institute of Mathematical Statistics Textbooks. Cambridge University Press, 2019.
* [45] F. Tronarp, H. Kersting, S. Sarkka, and P. Hennig. Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective. _Statistics and Computing_, 29(6):1297-1315, 2019.
* [46] F. Tronarp, S. Sarkka, and P. Hennig. Bayesian ODE solvers: The maximum a posteriori estimate. _Statistics and Computing_, 31(3):1-18, 2021.
* [47] F. Tronarp, N. Bosch, and P. Hennig. Fenrir: Physics-enhanced regression for initial value problems. In _International Conference on Machine Learning_. PMLR, 2022.
* [48] C. Van Loan. Computing integrals involving the matrix exponential. _IEEE Transactions on Automatic Control_, 23(3):395-404, 1978.
## Appendix A Proof of Proposition 1: Structure of the transition matrix
Proof of Proposition 1.: The drift-matrix \(A_{\text{I0UP}(d,q)}\) as given in Eq. (21) has block structure
\[A_{\text{I0UP}(d,q)}=\begin{bmatrix}A_{\text{IWP}(d,q-1)}&E_{q-1}\\ 0&L\end{bmatrix}, \tag{31}\]
where \(E_{q-1}\coloneqq[0\quad\dots\quad 0\quad I_{d}]^{\top}\in\mathbb{R}^{dq\times d}\). From Van Loan [48, Theorem 1], it follows
\[\Phi(h)=\begin{bmatrix}\exp\bigl{(}A_{\text{IWP}(d,q-1)}h\bigr{)}&\Phi_{12}(h) \\ 0&\exp(Lh)\end{bmatrix}, \tag{32}\]
which is precisely Eq. (23). The same theorem also gives \(\Phi_{12}(h)\) as
\[\Phi_{12}(h)=\int_{0}^{h}\exp(A_{\text{IWP}(d,q-1)}(h-\tau))E_{q-1}^{(d-1)} \exp(L\tau)\,\mathrm{d}\tau. \tag{33}\]
Its \(i\)th \(d\times d\) block is readily given by
\[(\Phi_{12}(h))_{i} =\int_{0}^{h}E_{i}^{\top}\exp(A_{\text{IWP}(d,q-1)}(h-\tau))E_{q- 1}\exp(L\tau)\,\mathrm{d}\tau \tag{34}\] \[=\int_{0}^{h}\frac{(h-\tau)^{q-1-i}}{(q-1-i)!}\exp(L\tau)\, \mathrm{d}\tau\] \[=h^{q-i}\int_{0}^{1}\frac{\tau^{q-1-i}}{(q-1-i)!}\exp(Lh(1-\tau)) \,\mathrm{d}\tau\] \[=h^{q-i}\varphi_{q-i}(Lh),\]
where the second last equality used the change of variables \(\tau=h(1-u)\), and the last line follows by definition.
## Appendix B Proof of Proposition 2: Equivalence to a classic exponential integrator
We first briefly recapitulate the probabilistic exponential integrator setup for the case of the once integrated Ornstein-Uhlenbeck process, and then provide some auxiliary results. Then, we prove Proposition 2 in Appendix B.3.
### The probabilistic exponential integrator with once-integrated Ornstein-Uhlenbeck prior
The integrated Ornstein-Uhlenbeck process prior with rate parameter \(L\) results in transition densities \(Y(t+h)\mid Y(t)\sim\mathcal{N}\left(Y(t+h);\Phi(h)Y(t),Q(h)\right)\), with transition matrices (from Proposition 1)
\[\Phi(h) =\exp(Ah)=\begin{bmatrix}I&h\varphi_{1}(Lh)\\ 0&\varphi_{0}(Lh)\end{bmatrix}, \tag{35}\] \[Q(h) =\int_{0}^{h}\exp(A\tau)BB^{\top}\exp(A^{\top}\tau)\,\mathrm{d}\tau\] (36) \[=\int_{0}^{h}\begin{bmatrix}I&\tau\varphi_{1}(L\tau)\\ 0&\varphi_{0}(L\tau)\end{bmatrix}\begin{bmatrix}0&0\\ 0&I\end{bmatrix}\begin{bmatrix}I&\tau\varphi_{1}(L\tau)\\ 0&\varphi_{0}(L\tau)\end{bmatrix}^{\top}\,\mathrm{d}\tau\] (37) \[=\int_{0}^{h}\begin{bmatrix}\tau^{2}\varphi_{1}(L\tau)\varphi_{1 }(L\tau)^{\top}&\tau\varphi_{1}(L\tau)\varphi_{0}(L\tau)^{\top}\\ \tau\varphi_{0}(L\tau)\varphi_{1}(L\tau)^{\top}&\varphi_{0}(L\tau)\varphi_{0} (L\tau)^{\top}\end{bmatrix}\,\mathrm{d}\tau, \tag{38}\]where we assume a unit diffusion \(\sigma^{2}=1\). To simplify notation, we assume an equidistant time grid \(\mathbb{T}=\{t_{n}\}_{n=0}^{N}\) with \(t_{n}=n\cdot h\) for some step size \(h\), and we denote the constant transition matrices simply by \(\Phi\) and \(Q\) and write \(Y_{n}=Y(t_{n})\).
Before getting to the actual proof, let us also briefly recapitulate the filtering formulas that are computed at each solver step. Given a Gaussian distribution \(Y_{n}\sim\mathcal{N}\left(Y_{n};\mu_{n},\Sigma_{n}\right)\), the prediction step computes
\[\mu_{n+1}^{-} =\Phi\mu_{n}, \tag{39}\] \[\Sigma_{n+1}^{-} =\Phi(h)\Sigma_{n}\Phi(h)^{\top}+Q(h). \tag{40}\]
Then, the combined linearization and correction step compute
\[\hat{z}_{n+1} =E_{1}\mu_{n+1}^{-}-f(E_{0}\mu_{n+1}^{-}), \tag{41}\] \[S_{n+1} =H\Sigma_{n+1}^{-}H^{\top},\] (42) \[K_{n+1} =\Sigma_{n+1}^{-}H^{\top}S_{n+1}^{-1},\] (43) \[\mu_{n+1} =\mu_{n+1}^{-}-K_{n+1}\hat{z}_{n+1},\] (44) \[\Sigma_{n+1} =\Sigma_{n+1}^{-}-K_{n+1}S_{n+1}K_{n+1}^{\top}, \tag{45}\]
with observation matrix \(H=E_{1}-LE_{0}=[-L\quad I]\), since we perform the proposed EKL linearization.
### Auxiliary results
In the following, we show some properties of the transition matrices and the covariances that will be needed in the proof of Proposition 2 later.
First, note that by defining \(\varphi_{0}(z)=\exp z\), the \(\varphi\)-functions satisfy the following recurrence formula:
\[z\varphi_{k}(z)=\varphi_{k-1}(z)-\frac{1}{(k-1)!}. \tag{46}\]
See e.g. Hochbruck and Ostermann [15]. This property will be used throughout the remainder of the section.
**Lemma B.1**.: _The transition matrices \(\Phi(h),Q(h)\) of the once integrated Ornstein-Uhlenbeck process with rate parameter \(L\) satisfy_
\[H\Phi(h) =\left[-L\quad I\right], \tag{47}\] \[Q(h)H^{\top} =\begin{bmatrix}h^{2}\varphi_{2}(Lh)\\ h\varphi_{1}(Lh)\end{bmatrix},\] (48) \[HQ(h)H^{\top} =hI, \tag{49}\]
Proof.: \[H\Phi(h)=(E_{1}-LE_{0})\begin{bmatrix}I&h\varphi_{1}(Lh)\\ 0&\varphi_{0}(Lh)\end{bmatrix}=\begin{bmatrix}0&\varphi_{0}(Lh)\end{bmatrix}- L\begin{bmatrix}I&h\varphi_{1}(Lh)\end{bmatrix}=\begin{bmatrix}-L&I\end{bmatrix}.\] (50)
\[Q(h)H^{\top} =\int_{0}^{h}\begin{bmatrix}\tau^{2}\varphi_{1}(L\tau)\varphi_{1 }(L\tau)^{\top}&\tau\varphi_{1}(L\tau)\varphi_{0}(L\tau)^{\top}\\ \tau\varphi_{0}(L\tau)\varphi_{1}(L\tau)^{\top}&\varphi_{0}(L\tau)\varphi_{0}( L\tau)^{\top}\end{bmatrix}H^{\top}\,\mathrm{d}\tau \tag{51}\] \[=\int_{0}^{h}\begin{bmatrix}\tau\varphi_{1}(L\tau)\varphi_{0}(L \tau)^{\top}-L\tau^{2}\varphi_{1}(L\tau)\varphi_{1}(L\tau)^{\top}\\ \varphi_{0}(L\tau)\varphi_{0}(L\tau)^{\top}-L\tau\varphi_{0}(L\tau)\varphi_{1 }(L\tau)^{\top}\end{bmatrix}\,\mathrm{d}\tau\] (52) \[=\int_{0}^{h}\begin{bmatrix}\tau\varphi_{1}(L\tau)\left(\varphi_ {0}(L\tau)^{\top}-L\tau\varphi_{1}(L\tau)^{\top}\right)\\ \varphi_{0}(L\tau)\left(\varphi_{0}(L\tau)^{\top}-L\tau\varphi_{1}(L\tau)^{ \top}\right)\end{bmatrix}\,\mathrm{d}\tau\] (53) \[=\int_{0}^{h}\begin{bmatrix}\tau\varphi_{1}(L\tau)\\ \varphi_{0}(L\tau)\end{bmatrix}\,\mathrm{d}\tau\] (54) \[=\begin{bmatrix}h^{2}\varphi_{2}(Lh)\\ h\varphi_{1}(Lh)\end{bmatrix} \tag{55}\]where we used \(L\tau\varphi_{1}(L\tau)=\varphi_{0}(L\tau)-I\), and \(\partial_{\tau}\left[\tau^{k}\varphi_{k}(L\tau)\right]=\tau^{k-1}\varphi_{k-1}(L \tau)\). It follows that
\[HQ(h)H^{\top}=H\begin{bmatrix}h^{2}\varphi_{2}(Lh)\\ h\varphi_{1}(Lh)\end{bmatrix}=h\left(\varphi_{1}(Lh)-Lh\varphi_{2}(Lh)\right)=hI, \tag{56}\]
where we used \(L\tau\varphi_{2}(L\tau)=\varphi_{1}(L\tau)-I\).
**Lemma B.2**.: _The prediction covariance \(\Sigma_{n+1}^{-}\) satisfies_
\[\Sigma_{n+1}^{-}H^{\top}=Q(h)H^{\top}. \tag{57}\]
Proof.: First, since the observation model is noiseless, the filtering covariance \(\Sigma_{n}\) satisfies
\[H\Sigma_{n}=\left[0\quad 0\right]. \tag{58}\]
This can be shown directly from the correction step formula:
\[H\Sigma_{n} =H\Sigma_{n}^{-}-HK_{n}S_{n}K_{n}^{\top} \tag{59}\] \[=H\Sigma_{n}^{-}-H\left(\Sigma_{n}^{-}H^{\top}S_{n}^{-1}\right)S_ {n}K_{n}^{\top}\] (60) \[=H\Sigma_{n}^{-}-H\Sigma_{n}^{-}H^{\top}\left(H\Sigma_{n}^{-}H^{ \top}\right)^{-1}S_{n}K_{n}^{\top}\] (61) \[=H\Sigma_{n}^{-}-IS_{n}K_{n}^{\top}\] (62) \[=H\Sigma_{n}^{-}-S_{n}\left(\Sigma_{n}^{-}H^{\top}S_{n}^{-1} \right)^{\top}\] (63) \[=H\Sigma_{n}^{-}-S_{n}S_{n}^{-1}H\Sigma_{n}^{-}\] (64) \[=\left[0\quad 0\right]. \tag{65}\]
Next, since the observation matrix is \(H=\left[-L\quad I\right]\), the filtering covariance \(\Sigma_{n}\) is structured as
\[\Sigma_{n}=\begin{bmatrix}I\\ L\end{bmatrix}\left[\Sigma_{n}\right]_{00}\begin{bmatrix}I\quad L^{\top}\end{bmatrix}. \tag{66}\]
This can be shown directly from Eq. (58):
\[\left[0\quad 0\right]=H\Sigma=\left[-L\quad I\right]\begin{bmatrix} \Sigma_{00}\quad\Sigma_{01}\\ \Sigma_{10}\quad\Sigma_{11}\end{bmatrix}=\left[\Sigma_{10}-L\Sigma_{00}\quad \Sigma_{11}-L\Sigma_{01}\right], \tag{67}\]
and thus
\[\Sigma_{10} =L\Sigma_{00}, \tag{68}\] \[\Sigma_{11} =L\Sigma_{01}=L\Sigma_{10}^{\top}=L\Sigma_{00}L^{\top}. \tag{69}\]
It follows
\[\Sigma=\begin{bmatrix}\Sigma_{00}\quad\quad L\Sigma_{00}\\ \Sigma_{00}L^{\top}\quad L\Sigma_{00}L^{\top}\end{bmatrix}=\begin{bmatrix}I\\ L\end{bmatrix}\Sigma_{00}\begin{bmatrix}I\quad L^{\top}\end{bmatrix}. \tag{70}\]
Finally, together with Lemma B.1 we can derive the result:
\[\Sigma_{n+1}^{-}H^{\top} =\Phi(h)\Sigma_{n}\Phi(h)^{\top}H^{\top}+Q(h)H^{\top} \tag{71}\] \[=\Phi(h)\begin{bmatrix}I\\ L\end{bmatrix}\bar{\Sigma}_{n}\left[I\quad L^{\top}\right]\begin{bmatrix}-L^{ \top}\\ I\end{bmatrix}+Q(h)H^{\top}\] (72) \[=\Phi(h)\begin{bmatrix}I\\ L\end{bmatrix}\bar{\Sigma}_{n}\cdot 0+Q(h)H^{\top}\] (73) \[=Q(h)H^{\top}. \tag{74}\]
### Proof of Proposition 2
With these results, we can now prove Proposition 2.
Proof of Proposition 2.: We prove the proposition by induction, showing that the filtering means are all of the form
\[\mu_{n}:=\begin{bmatrix}y_{n}\\ Ly_{n}+N(\tilde{y}_{n})\end{bmatrix}, \tag{75}\]
where \(y_{n},\tilde{y}_{n}\) are defined as
\[\tilde{y}_{0} :=y_{0}, \tag{76}\] \[\tilde{y}_{n+1} :=\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n}),\] (77) \[y_{n+1} :=\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n})-h\varphi_ {2}(Lh)\left(N(\tilde{y}_{n})-N(\tilde{y}_{n+1})\right). \tag{78}\]
This result includes the statement of Proposition 2.
Base case \(n=0\)The initial distribution of the probabilistic solver is chosen as
\[\mu_{0}=\begin{bmatrix}y_{0}\\ Ly_{0}+N(\tilde{y}_{0})\end{bmatrix},\Sigma_{0}=0. \tag{79}\]
This proves the base case \(n=0\).
Induction step \(n\to n+1\)Now, let
\[\mu_{n}=\begin{bmatrix}y_{n}\\ Ly_{n}+N(\tilde{y}_{n})\end{bmatrix} \tag{80}\]
be the filtering mean at step \(n\) and \(\Sigma_{n}\) be the filtering covariance. The prediction mean is of the form
\[\mu_{n+1}^{-}=\Phi(h)\mu_{n}=\begin{bmatrix}y_{n}+h\varphi_{1}(Lh)( Ly_{n}+N(\tilde{y}_{n}))\\ \varphi_{0}(Lh)(Ly_{n}+N(\tilde{y}_{n}))\end{bmatrix}=\begin{bmatrix}\varphi_{ 0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n})\\ \varphi_{0}(Lh)(Ly_{n}+N(\tilde{y}_{n}))\end{bmatrix}. \tag{81}\]
The residual \(\hat{z}_{n+1}\) is then of the form
\[\hat{z}_{n+1} =E_{1}\mu_{n+1}^{-}-f(E_{0}\mu_{n+1}^{-}) \tag{82}\] \[=\varphi_{0}(Lh)(Ly_{n}+N(\tilde{y}_{n}))-f\left(\varphi_{0}(Lh)y _{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n})\right)\] (83) \[=\varphi_{0}(Lh)(Ly_{n}+N(\tilde{y}_{n}))-L\left(\varphi_{0}(Lh)y _{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n})\right)-N\left(\tilde{y}_{n+1}\right)\] (84) \[=\varphi_{0}(Lh)Ly_{n}+\varphi_{0}(Lh)N(\tilde{y}_{n})-L\varphi_ {0}(Lh)y_{n}-Lh\varphi_{1}(Lh)N(\tilde{y}_{n})-N\left(\tilde{y}_{n+1}\right)\] (85) \[=\left(\varphi_{0}(Lh)-Lh\varphi_{1}(Lh)\right)N(\tilde{y}_{n})- N\left(\tilde{y}_{n+1}\right)\] (86) \[=N(\tilde{y}_{n})-N\left(\tilde{y}_{n+1}\right), \tag{87}\]
where we used properties of the \(\varphi\)-functions, namely \(Lh\varphi_{1}(Lh)=\varphi_{0}(Lh)\) and the commutativity \(\varphi_{0}(Lh)L=L\varphi_{0}(Lh)\). With Lemma B.2, the residual covariance \(S_{n+1}\) and Kalman gain \(K_{n+1}\) are then of the form
\[S_{n+1} =H\Sigma_{n+1}^{-}H^{\top}=HQ(h)H^{\top}=hI, \tag{89}\] \[K_{n+1} =\Sigma_{n+1}^{-}H^{\top}S_{n+1}^{-1}=Q(h)H^{\top}\left(hI\right) ^{-1}=\begin{bmatrix}h\varphi_{2}(Lh)\\ \varphi_{1}(Lh)\end{bmatrix}. \tag{90}\]
This gives the updated mean
\[\mu_{n+1} =\mu_{n+1}^{-}-K_{n+1}\hat{z}_{n+1} \tag{91}\] \[=\begin{bmatrix}\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y} _{n})\\ \varphi_{0}(Lh)(Ly_{n}+N(\tilde{y}_{n}))\end{bmatrix}-\begin{bmatrix}h\varphi _{2}(Lh)\\ \varphi_{1}(Lh)\end{bmatrix}\left(N(\tilde{y}_{n})-N(\tilde{y}_{n+1})\right)\] (92) \[=\begin{bmatrix}\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y} _{n})-h\varphi_{2}(Lh)\left(N(\tilde{y}_{n})-N(\tilde{y}_{n+1})\right)\\ \varphi_{0}(Lh)(Ly_{n}+N(\tilde{y}_{n}))-\varphi_{1}(Lh)\left(N(\tilde{y}_{n} )-N(\tilde{y}_{n+1})\right)\end{bmatrix}. \tag{93}\]This proves the first half of the mean recursion:
\[E_{0}\mu_{n+1}=\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n})-h \varphi_{2}(Lh)\left(N(\tilde{y}_{n})-N(\tilde{y}_{n+1})\right)=y_{n+1}. \tag{94}\]
It is left to show that
\[E_{1}\mu_{n+1}=Ly_{n+1}-N(\tilde{y}_{n+1}). \tag{95}\]
Starting from the right-hand side, we have
\[Ly_{n+1}+N(\tilde{y}_{n+1}) \tag{96}\] \[=L\left(\varphi_{0}(Lh)y_{n}+h\varphi_{1}(Lh)N(\tilde{y}_{n})-h \varphi_{2}(Lh)\left(N(\tilde{y}_{n})-N(\tilde{y}_{n+1})\right)\right)+N( \tilde{y}_{n+1})\] (97) \[=\varphi_{0}(Lh)Ly_{n}+Lh\varphi_{1}(Lh)N(\tilde{y}_{n})-Lh \varphi_{2}(Lh)\left(N(\tilde{y}_{n})-N(\tilde{y}_{n+1})\right)N(\tilde{y}_{n+ 1})\] (98) \[=\varphi_{0}(Lh)Ly_{n}+\left(\varphi_{0}(Lh)-I\right)N(\tilde{y}_ {n})-\left(\varphi_{1}(Lh)-I\right)\left(N(\tilde{y}_{n})-N(\tilde{y}_{n+1}) \right)N(\tilde{y}_{n+1})\] (99) \[=\varphi_{0}(Lh)(Ly_{n}+N(\tilde{y}_{n}))-\varphi_{1}(Lh)(N( \tilde{y}_{n})-N(\tilde{y}_{n+1}))\] (100) \[=E_{1}\mu_{n+1}. \tag{101}\]
This concludes the proof of the mean recursion and thus shows the equivalence of the two recursions.
## Appendix C Proof of Proposition 3: L-stability
We first provide definitions of L-stability and A-stability, following [27, Section 8.6].
**Definition 1** (L-stability).: _A one-step method is said to be L-stable if it is A-stable and, in addition, when applied to the scalar test-equation \(\dot{y}(t)=\lambda y(t)\), \(\lambda\in\mathbb{C}\) a complex constant with \(\operatorname{Re}(\lambda)<0\), it yields \(y_{n+1}=R(h\lambda)y_{n}\), and \(R(h\lambda)\to 0\) as \(\operatorname{Re}(h\lambda)\to-\infty\)._
**Definition 2** (A-stability).: _A one-step method is said to be A-stable if its region of absolute stability contains the whole of the left complex half-plane. That is, when applied to the scalar test-equation \(\dot{y}(t)=\lambda y(t)\) with \(\lambda\in\mathbb{C}\) a complex constant with \(\operatorname{Re}(\lambda)<0\), the method yields \(y_{n+1}=R(h\lambda)y_{n}\), and \(\{z\in\mathbb{C}:\operatorname{Re}(z)<0\}\subset\{z\in\mathbb{C}:R(z)<1\}\)._
Proof of Proposition 3.: Both L-stability and A-stability directly follow from Remark 1: Since the probabilistic exponential integrator solves linear ODEs exactly its stability function is the exponential function, i.e. \(R(z)=\exp(z)\). A-stability and L-stability then follow: Since \(\mathbb{C}^{-}\subset\{z:|R(z)|\leq 1\}\) holds the method is A-stable. And since \(|R(z)|\to 0\) as \(\operatorname{Re}(z)\to-\infty\) the method is L-stable.
## Appendix D Experiment details
### Burger's equation
Burger's equation is a semi-linear partial differential equation (PDE) of the form
\[\partial_{t}u(x,t)=-u(x,t)\partial_{x}u(x,t)+D\partial_{x}^{2}u(x,t),\qquad x \in\Omega,\quad t\in[0,T], \tag{102}\]
with diffusion coefficient \(D\in\mathbb{R}_{+}\). We discretize the spatial domain \(\Omega\) on a finite grid and approximate the spatial derivatives with finite differences to obtain a semi-linear ODE of the form
\[\dot{y}(t)=D\cdot L\cdot y(t)+F(y(t)),\qquad t\in[0,T], \tag{103}\]
with \(N\)-dimensional \(y(t)\in\mathbb{R}^{N}\), \(L\in\mathbb{R}^{N\times N}\) the finite difference approximation of the Laplace operator \(\partial_{x}^{2}\), and a non-linear part \(F\).
More specifically, we consider a domain \(\Omega=(0,1)\), which we discretize with a grid of \(N=250\) equidistant locations, thus we have \(\Delta x=1/N\). We consider zero-Dirichlet boundary conditions, that is, \(u(0,t)=u(1,t)=0\). The discrete Laplacian is then
\[[L]_{ij}=\frac{1}{\Delta x^{2}}\cdot\begin{cases}-2&\text{if $i=j$},\\ 1&\text{if $i=j\pm 1$},\\ 0&\text{otherwise}.\end{cases} \tag{104}\]The non-linear part of the discretized Burger's equation results from another finite-difference approximation of the term \(u\cdot\partial_{x}u\), and is chosen as
\[[F(y)]_{i}=\frac{1}{4\Delta x}\begin{cases}y_{2}^{2}&\text{if }i=1,\\ y_{d-1}^{2}&\text{if }i=d,\\ y_{i+1}^{2}-y_{i-1}^{2}&\text{else.}\end{cases} \tag{105}\]
The initial condition is chosen as
\[u(x,0)=\sin(3\pi x)^{3}(1-x)^{3/2}. \tag{106}\]
We consider an integration time-span \(t\in[0,1]\), and choose a diffusion coefficient \(D=0.075\).
### Reaction-diffusion model
The reaction-diffusion model presented in the paper, with logistic reaction term, has been used to describe the growth and spread of biological populations [22]. It is given by a semi-linear PDE
\[\partial_{t}u(x,t)=D\partial_{x}^{2}u(x,t)+R(u(x,t)),\qquad x\in\Omega,\quad t \in[0,T], \tag{107}\]
where \(D\in\mathbb{R}_{+}\) is the diffusion coefficient and \(R(u)=u(1-u)\) is a logistic reaction term. We discretize the spatial domain \(\Omega\) on a finite grid and approximate the spatial derivatives with finite differences, and obtain a semi-linear ODE of the form
\[\dot{y}(t)=D\cdot L\cdot y(t)+R(y(t)),\qquad t\in[0,T], \tag{108}\]
with \(N\)-dimensional \(y(t)\in\mathbb{R}^{N}\), \(L\in\mathbb{R}^{N\times N}\) the finite difference approximation of the Laplace operator, and the reaction term \(R\) is as before but applied element-wise.
We again consider a domain \(\Omega=(0,1)\), which we discretize on a grid of \(N=100\) points. This time we consider zero-Neumann conditions, that is, \(\partial_{x}u(0,t)=\partial_{x}u(1,t)=0\). Including these directly into the finite-difference discretization, the discrete Laplacian is then
\[[L]_{ij}=\frac{1}{\Delta x^{2}}\cdot\begin{cases}-1&\text{if }i=j=1\text{ or }i=j=d,\\ -2&\text{if }i=j,\\ 1&\text{if }i=j\pm 1,\\ 0&\text{otherwise.}\end{cases} \tag{109}\]
The initial condition is chosen as
\[u(x,0)=\frac{1}{1+e^{30x-10}}. \tag{110}\]
The discrete ODE is then solved on a time-span \(t\in[0,2]\), and we choose a diffusion coefficient \(D=0.25\). | ## Review
### Summary
The paper presents probabilistic exponential integrators as a novel approach to solving stiff semi-linear ordinary differential equations (ODEs), leveraging the integrated Ornstein-Uhlenbeck process to enhance stability and accuracy. It establishes theoretical properties such as L-stability and provides empirical comparisons with existing methods. The authors also extend their methodology to general non-linear systems through iterative re-linearization, demonstrating a solid understanding of the challenges posed by these types of equations. Overall, the work contributes to narrowing the performance gap between classical and probabilistic numerical methods for ODEs.
### Strengths
- The paper is well-written and organized.
- It effectively tackles the problem of solving stiff systems, a significant challenge in probabilistic numerics.
- The introduction of the integrated Ornstein-Uhlenbeck process is a neat concept.
- Theoretical results establish equivalences to classic methods and demonstrate L-stability.
- The results are presented honestly, and limitations are clearly identified.
### Weaknesses
- The contribution is primarily focused on a specific class of semi-linear ODEs, lacking broader applicability.
- The proposed methods can be computationally expensive compared to some existing alternatives.
- Sensitivity to nonlinearities may limit the method's efficacy, particularly when quadratic terms are significant.
- There could be a missed opportunity to leverage the Bayesian nature of the approach to convey uncertainty in solutions.
### Questions
- What is the overhead computational time for computing matrix exponentials, and is it included in the runtime?
- How might the proposed method combine with existing methods like the extended Kalman filter for improved performance?
- Can the method avoid explicit evaluations for expensive functions, as suggested by the authors?
- What challenges arise when considering more accurate approximations beyond local linearization?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical foundation is solid, but the execution could benefit from a deeper exploration of certain aspects.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is clearly written and organized, though some minor typographical errors and unclear notation exist.
### Contribution
**Score:** 3
**Description:** 3 = good; while the paper addresses an important problem and proposes a relevant solution, its novelty and applicability are somewhat limited.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid and impactful, with some areas for enhancement.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a well-motivated and clearly articulated approach to a relevant problem in numerical computations, offering both theoretical and empirical insights. Despite some limitations regarding applicability and missed opportunities to emphasize the Bayesian aspects, the strengths of the paper outweigh the weaknesses, warranting an acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Learning Invariant Representations with a
Nonparametric Nadaraya-Watson Head
Alan Q. Wang
Cornell University &Minh Nguyen
Cornell University &Mert R. Sabuncu
Cornell University
###### Abstract
Machine learning models will often fail when deployed in an environment with a data distribution that is different than the training distribution. When multiple environments are available during training, many methods exist that learn representations which are invariant across the different distributions, with the hope that these representations will be transportable to unseen domains. In this work, we present a nonparametric strategy for learning invariant representations based on the recently-proposed Nadaraya-Watson (NW) head. The NW head makes a prediction by comparing the learned representations of the query to the elements of a support set that consists of labeled data. We demonstrate that by manipulating the support set, one can encode different causal assumptions. In particular, restricting the support set to a single environment encourages the model to learn invariant features that do not depend on the environment. We present a causally-motivated setup for our modeling and training strategy and validate on three challenging real-world domain generalization tasks in computer vision.
## 1 Introduction
Machine learning models often fail when there is significant distribution shift. The goal of domain generalization is to be able to perform well with new distributions [21, 56, 71]. In this work, we are interested in settings where multiple domains/environments are available during training and we have access to environment indicators. A popular way to tackle domain generalization in this setting is to learn representations that are invariant across environments [20, 41, 60]. The hope is that such representations will work well in, or are transportable to, unseen environments. This invariance is often encoded via constraints on a learned predictor which aligns its behavior across environments; often, these conditions are derived using causal reasoning and/or by making assumptions about the data-generating process [39].
In a parametric setting, almost all existing methods enforce these constraints by training a single model and adding a regularizer on top of a standard predictive loss [1, 5, 18, 20, 27, 48, 54, 58]. Most notably, invariant risk minimization (IRM) enforces the representations to be such that the optimal classifier on top of those representations is the same across all environments. Other examples include enforcing the layer activations of the predictor to be aligned across environments [48], enforcing the predictor to be calibrated across environments [54], and enforcing gradients of the predictor to be aligned across environments [46]. Often, optimizing these constraints demand approximations or relaxations that undermine the efficacy of the approach [22].
In this work, we take a different approach using a nonparametric strategy based on a recently-proposed Nadaraya-Watson (NW) head [55]. Instead of computing the class probability directly from an input query, the NW head makes a prediction by comparing the learned representations of the query to the elements of a support set that consists of labeled data. Thus, the NW prediction is computed _relative to other real datapoints_ in the support set, with the support set providing a degree of flexibility notpossible with parametric models. In particular, one can manipulate it during training in a way which restricts the types of comparisons that the model can make.
In this work, we manipulate the support set during training to encode causal assumptions for the purposes of learning invariant representations. Specifically, restricting the support set to be drawn from a single environment precludes the possibility of using environment-specific features to make a prediction for a given query. We show that this setup is causally-motivated and relates to existing causal frameworks. Furthermore, we show that this training strategy leads to competitive to superior results compared to state-of-the-art parametric baselines.
Our contributions are as follows:
* We present causally-motivated assumptions for domain generalization which justify our modeling and training strategy.
* We present a novel approach to invariant representation learning using the nonparametric Nadaraya-Watson head, which can account for causal assumptions by manipulating a support set. In particular, we propose a training strategy which, unlike competing baselines, has _no invariance hyperparameter to tune_.
* We validate our approach on several datasets and demonstrate competitive results compared to state-of-the-art parametric baselines.
## 2 Related Works
### Domain Generalization and Invariant Representations
Domain generalization seeks to make models robust to unseen environments and is an active area of research [21; 56; 71]. One line of work augments or synthetically-generates additional training images to increase robustness of learned features to unseen environments [59; 63; 64; 65; 70]. In particular, LISA uses a mixup-style [67] interpolation technique to generate augmented images, which the authors demonstrate improves out-of-distribution robustness [64]. Another line of work broadly seeks to align features across distributions. Deep CORAL aligns correlations of layer activations in deep neural networks [48], and other works minimize the divergence of feature distributions with different distance metrics such as maximum mean discrepancy [51; 32], an adversarial loss [14; 30], and Wasserstein distance [69]. Still other works approach the problem from the perspective of the gradients and optimization [12; 29; 34; 46; 57]. For example, Fish aligns the gradients from different domains [46].
One can also achieve domain generalization via learning invariant representations, which often requires reasoning about the data-generating process from a causal perspective to arrive at appropriate constraints [39]. Invariant causal prediction (ICP) formulates the problem from a feature selection perspective, where the goal is to select the features which are direct causal parents of the label [40]. Invariant Risk Minimization (IRM) can be viewed as an extension of ICP designed for deep, nonlinear neural networks. The IRM objective can be summarized as finding the representation \(\varphi\) such that the optimal linear classifier's parameters \(w^{*}\) on top of this representation is the same across all environments [1]. This bi-level program is highly non-convex and difficult to solve. To find an approximate solution, the authors consider a Lagrangian form, whereby the sub-optimality with respect to the constraint is expressed as the squared norm of the gradients of each of the inner optimization problems. Follow-up works analyzing IRM have raised theoretical issues with this objective and presented some practical concerns [16; 22; 42]. Various flavors of IRM have also been proposed by introducing different regularization terms [27; 54; 58].
### Nonparametric Deep Learning
Nonparametric models in deep learning have received much attention in previous work. Deep Gaussian Processes [10], Deep Kernel Learning [62], and Neural Processes [24] build upon Gaussian Processes and extend them to representation learning. Other works have generalized \(k\)-nearest neighbors [37; 49], decision trees [68], density estimation [13], and more general kernel-based methods [15; 36; 66] to deep networks and have explored the interpretability that these frameworks provide. Closely-related but orthogonal to nonparametric models are attention-based models, most notably self-attention mechanisms popularized in Transformer-based architectures in natural language processing [53] and, more recently, computer vision [11, 19, 38]. Nonparametric transformers apply attention in a nonparametric setting [24].
Recently, Wang et al. proposed the NW head [55], an extension of the classical NW model [2, 35, 61] to deep learning. In the NW head, the prediction is a weighted average of labels from a support set. The weights are computed from distances between the query and support features. The NW head can yield better calibration and interpretability, with similar accuracy compared to the dominant approach of using a parametric classifier with fully-connected layers. In this work, we leverage the NW head to encode causal assumptions via the support set. The interpretability and explainability benefits of the NW head carry over in this work; while not of primary focus, we explore these properties in the Appendix.
## 3 Preliminaries
**Problem Setting.** Let \(X,Y\) denote a datapoint and its corresponding discrete class, and \(E\) denote the environment (or domain) where \(X,Y\) originates.1 That is, the elements of the training dataset \(\mathcal{D}_{tr}=\{x_{i},y_{i},e_{i}\}_{i=1}^{N}\) are drawn first by sampling the discrete random variable \(e_{i}\sim P(E)\), and then sampling \(x_{i},y_{i}\sim P(X,Y\mid E=e_{i}):=P_{e_{i}}(X,Y)\). Our goal is to learn classifiers that will generalize to new, unseen environments.
Footnote 1: We assume the support of \(E\), \(\mathrm{supp}(E)\), is finite.
**Assumptions.** We assume there exists a pair of latent causal parents of \(X\): an environment-independent ("content") factor \(Z_{C}\) and an environment-dependent ("style") factor \(Z_{S}\).2 We as
Figure 1: Illustration of proposed approach. Support set of labeled datapoints (square/triangle) from 3 environments lie in 3 regions in the feature space. Black circle denotes query datapoint with unknown label. a) The NW head models \(P(Y|X)\) by making predictions as a function of distances to labeled datapoints in the feature space (visualized as dotted arrows). b) Balancing comparisons across labels for all environments models \(P^{B}(Y|X)\). c) Conditioning on a single environment models \(P_{e}(Y|X)\).
Figure 2: a) Causal Directed Acyclic Graph (DAG) we consider in this work. Solid nodes are observed and dashed nodes are unobserved. We assume an anti-causal setting where label \(Y\) causes \(X\), and \(X\) has 2 causal parents: “style” features, \(Z_{S}\), which are influenced by the environment \(E\); and environment-independent “content” features of \(X\), \(Z_{C}\), which are causally influenced by the label \(Y\). \(E\) potentially influences \(Y\). Both \(E\) and \(Y\) have direct influence on style features \(Z_{S}\). b) Same DAG as a) with an intervention on \(Y\). We note that \(Y\mathchoice{\mathrel{\hbox{\hbox to 0.0pt{\kern 2.999954pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\perp$}\kern 2.499968pt}}}{\mathrel{\hbox{\hbox to 0.0pt{ \kern 2.999954pt\vrule height 6.299904pt width 1px\hss}\hbox{$\perp$} \kern 2.499968pt}}}{\mathrel{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\perp$}\kern 2.099968pt}}}{\mathrel{\hbox{\hbox to 0.0pt{ \kern 1.499977pt\vrule height 6.299904pt width 1px\hss}\hbox{$\perp$} \kern 2.099968pt}}}E\mid Z_{C}\) and \(Y\mathchoice{\mathrel{\hbox{\hbox to 0.0pt{\kern 2.999954pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\perp$}\kern 2.499968pt}}}{\mathrel{\hbox{\hbox to 0.0pt{ \kern 2.999954pt\vrule height 6.299904pt width 1px\hss}\hbox{$\perp$} \kern 2.499968pt}}}{\mathrel{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299 904pt width 1px\hss}\hbox{$\perp$}\kern 2.099968pt}}}{\mathrel{\hbox{ \hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \perp$}\kern 2.099968pt}}}E\mid Z_{S}\).
sume the causal mechanism that generates \(X\) from (\(Z_{C}\), \(Z_{S}\)) is injective, so that, in principle, it is possible to recover the latent features from the observations; i.e. there exists a function \(g\) such that \(g(X)=(Z_{C},Z_{S})\). We further assume that \(g\) can be disentangled into \(g_{C}\) and \(g_{S}\), such that \((Z_{C},Z_{S})=g(X)=(g_{C}(X),g_{S}(X))\). The causal graph is depicted in Fig. 2a. Finally, we assume that if any \(X=x\) has a non-zero probability in one environment, it has a non-zero probability in all environments.
**Motivation.** The motivating application in this work is image classification, where each realization of \(E\) might represent a different site, imaging device, or geographical region where \(X,Y\) are collected. For example, in medical imaging, different hospitals (\(E\)) may collect images (\(X\)) attempting to capture the presence of some disease (\(Y\)), but may differ in their imaging protocols which lead to differences in the image style features \(Z_{S}\) (e.g. staining, markings, orientation). In addition, we allow \(Z_{S}\) to be influenced by the label itself (for example, positive examples are more likely to be marked by a doctor or have specific staining than negative examples). Finally, the prevalence of \(Y\) may be influenced by \(E\) (for example, the prevalence of a disease may be higher in a certain hospital).
The goal is to find an estimator for \(Y\) which relies only on the direct causal links between \(Y\) and \(X\) and not on any spurious associations between \(E\) and \(X\), as these may change in a new, unseen environment. That is, we seek an estimator which relies only on \(Z_{C}\) and which is independent of \(E\) or \(Z_{S}\).
First, we note the direct causal dependence \(E\to Y\). For example, a model can exploit this association by learning to over-predict majority classes in a certain environment. One way to remove the direct dependence is by intervening on \(Y\), thus removing incoming edges to \(Y\). This essentially corresponds to matching the environment-specific prior on \(Y\) between environments, and results in the intervened graph in Fig. 2b.3 Let us refer to any distribution which follows the DAG in Fig. 2b as \(P^{B}_{e}(X,Y)\).
Footnote 3: This may be interpreted as making the model robust to label shift, see [31; 44].
Second, we observe that there is a potential non-causal association flow between \(E\) and \(Y\) through the colliders \(X\) and \(Z_{S}\), when either one of these are conditioned on (i.e. are observed). An estimator which relies on \(Z_{S}\) potentially leaks information from \(E\), and this is unstable in new environments. Reading d-separation on this intervened graph, we infer that \(Y\perp\!\!\!\perp E\mid Z_{C}=g_{C}(X)\) and \(Y\perp\!\!\!\perp E\mid Z_{S}=g_{S}(X)\), that is:
\[P^{B}_{e}(Y\mid g_{C}(X))=P^{B}_{e^{\prime}}(Y\mid g_{C}(X))\;\;\forall e,e^{ \prime}\in E. \tag{1}\]
In words, this assumption states that the probability of \(Y\) given the environment-invariant parts of \(X\) is the same across any environment \(e\in E\).
Thus, we seek an estimator that 1) enables interventions on \(Y\) such that the direct dependence \(E\to Y\) can be removed, and 2) can further encode the assumption in Eq. (1).
## 4 NW Head for Invariant Prediction
Given a datapoint \(x\), support set \(\mathcal{S}=\{x_{i},y_{i}\}_{i=1}^{N_{x}}\), and parameters \(\phi\), the NW head estimates \(P(Y=y\mid X=x)\) by outputting a prediction formulated as a weighted sum of support set labels, where the weights are some notion of similarity in the feature space [55]:
\[\hat{P}(Y=y\mid X=x;\phi):=f_{\phi}(x,\mathcal{S})=\frac{\sum_{i=1}^{N_{x}} \exp\left\{s(\phi(x),\phi(x_{i}))\right\}\vec{y_{i}}}{\sum_{j=1}^{N_{x}}\exp \left\{s(\phi(x),\phi(x_{j}))\right\}}. \tag{2}\]
Here, \(\vec{y}\) is the one-hot encoded version of \(y\) and \(s(\cdot,\cdot)\) is a similarity/kernel function that captures the similarity between pairs of features. In this work, we set \(s\) as the negative Euclidean distance. A graphical depiction is shown in Fig. 3.
Manipulating the support set can encode certain causal assumptions. We consider two types of manipulations:
1. Balancing classes in \(\mathcal{S}\), denoted \(\mathcal{S}^{B}\) (see Fig. 1b). This can be interpreted as an intervention on \(Y\), and removes the dependence on \(E\to Y\), i.e.: \[\hat{P}^{B}(Y=y\mid X=x;\phi):=f_{\phi}(x,\mathcal{S}^{B}).\] (3)2. Conditioning \(\mathcal{S}\) on a single environment, denoted \(\mathcal{S}_{e}\) (see Fig. 1c). This can be interpreted as conditioning the probability estimate on \(E=e\), i.e.: \[\hat{P}_{e}(Y=y\mid X=x;\phi):=f_{\phi}(x,\mathcal{S}_{e}).\] (4)
Note that both balancing and conditioning can be achieved simultaneously, which we denote \(\mathcal{S}_{e}^{B}\).
### Objective and Optimization Details
Given a dataset of samples \(\mathcal{D}_{tr}=\{x_{i},y_{i},e_{i}\}_{i=1}^{N}\), we wish to leverage the NW head as a conditional estimator for \(Y\) conditioned on \(Z_{C}=g_{C}(X)\), where \(g_{C}(X)\) is characterized by Eq. (1). This necessitates an optimization over both \(\phi\) and the space of functions \(g_{C}\). Thus, we solve the following constrained maximum likelihood over \(\phi\) and \(g_{C}\):
\[\operatorname*{argmax}_{\phi,g_{C}}\sum_{i=1}^{N}\log\hat{P}_{e_{ i}}^{B}(y_{i}\mid g_{C}(x_{i});\phi) \tag{5}\] \[\text{s.t. }\hat{P}_{e}^{B}(y_{i}\mid g_{C}(x_{i});\phi)=\hat{P}_{e ^{\prime}}^{B}(y_{i}\mid g_{C}(x_{i});\phi),\;\;\forall i\in\{1,...,N\},\; \forall e,e^{\prime}\in E.\]
Note that Eq. (1) implies that \(P_{e}^{B}(y_{i}\mid g_{C}(x_{i}))=P^{B}(y_{i}\mid g_{C}(x_{i}))\). Thus, the objective is equivalent to unconstrained maximum likelihood under the assumption in Eq. (1).
Instead of solving for \(g_{C}\) explicitly, we let both \(\phi\) and \(g_{C}\) be related by the composition \(\varphi=\phi\circ g_{C}\), and set \(\varphi\) to be the learnable mapping of the NW head, i.e. a neural network. Then, the objective becomes:
\[\operatorname*{argmin}_{\varphi}\sum_{i=1}^{N}L(f_{\varphi}(x_{i },\mathcal{S}_{e_{i}}^{B}),y_{i}) \tag{6}\] \[\text{s.t. }f_{\varphi}(x_{i},\mathcal{S}_{e}^{B})=f_{\varphi}(x_{i },\mathcal{S}_{e}^{B}),\;\;\forall i\in\{1,...,N\},\;\forall e,e^{\prime}\in E,\]
where \(L\) is the cross-entropy loss. To make the objective tractable, we consider two possible variants:
1. **Explicit.** Solve the optimization problem explicitly via a Lagrangian formulation: \[\operatorname*{argmin}_{\varphi}\sum_{i=1}^{N}L(f_{\varphi}(x_{i },\mathcal{S}_{e_{i}}^{B}),y_{i})+\lambda\sum_{e,e^{\prime}\in E}\sum_{i=1}^{N }\|f_{\varphi}(x_{i},\mathcal{S}_{e}^{B})-f_{\varphi}(x_{i},\mathcal{S}_{e^{ \prime}}^{B})\|_{2}^{2}.\] (7) where \(\lambda>0\) is a hyperparameter.
2. **Implicit.** Relax the optimization problem into the following unconstrained problem: \[\operatorname*{argmin}_{\varphi}\sum_{e\in E}\sum_{i=1}^{N}L(f_{ \varphi}(x_{i},\mathcal{S}_{e}^{B}),y_{i}).\] (8)
Figure 3: A depiction of the NW head on a tumor detection task. The NW head computes Euclidean distances \(s(\cdot,\cdot)\) between query and support features, and uses the distances to weight the support labels. Colored squares represent labels. Diagram displays two different support sets. Top is unconditional support, where support data is drawn from the training data without knowledge of environment information. Bottom is an example of a manipulated support where all support data is drawn from a fixed environment (note similarity in color). Such a support set precludes the possibility of using environment-specific features to make a prediction.
In this formulation, the constraint will be approximately satisfied in the sense that the model will be encouraged to predict the ground truth for a given image, which is identical across all environments. In practice, how well the solution satisfies the constraint will depend on model capacity, the data sample, and optimization procedure.
### Optimization Details
During training, the support set \(\mathcal{S}\) is drawn stochastically from the training set \(\mathcal{D}_{tr}\), and all queries and support datapoints are passed through the feature extractor \(\varphi\). For computational efficiency, instead of sampling a unique support mini-batch at the query-level, we sample a unique support at the mini-batch level. Thus, if \(N_{q}\) and \(N_{s}\) are the query and support mini-batch sizes respectively, the effective mini-batch size is \(N_{q}+N_{s}\), instead of \(N_{q}N_{s}\). For the implicit variant, we sample one support set for a given mini-batch of queries, forward pass through the NW head, and compute the loss in Eq. (8). For the explicit variant, we sample two support sets for a given mini-batch of queries, perform two independent forward passes through the NW head for each support set, and compute the loss in Eq. (7). As discussed in prior work [55], the support batch size is a hyperparameter analogous and orthogonal to the query batch size.
A technical point is that the set of labels in the support mini-batch must cover the set of labels in the query mini-batch. Thus, in our implementation, for \(\mathcal{S}^{B}\), we cycle through all classes and randomly draw \(N_{c}\) examples per class to include in the support. For tasks with a large number of classes, one can subsample from the total number of classes, so long as the sampled classes cover the set of query classes.
### Inference modes
Similar to how the support set can be manipulated during training, we can also design different inference strategies corresponding to different configurations of the support set at test-time. We explore several different inference modes which are possible under the NW framework:
1. **Random.** Sample uniformly at random over the dataset, such that each class is represented \(k\) times.
2. **Full.** Use the entire balanced training set.
3. **Ensemble.** Given the set of balanced features computed from Full mode, partition the computed features for all training datapoints by environment, compute the softmax predictions with respect to each environment balanced across labels, and average the predictions.
4. **Cluster.** Given the set of balanced features computed from Full mode, perform \(k\)-means clustering on the features of the training datapoints for each class. These \(k\) cluster centroids are then used as the support features for each class. This can be viewed as a distillation of the full training set for efficient inference, with the caveat that the support set no longer corresponds to observed datapoints.
While Full, Ensemble, and Cluster require computing features for the entire support set, in practice these features and centroids can be precomputed. In our experiments, we find that Cluster mode can be a sufficient replacement to Full mode, while being computationally cheaper. These inference modes can be used interchangeably and flexibly. As an example, consider a workflow which would involve using Cluster mode to perform efficient inference, and then using Full mode on a select few (potentially problematic) test queries to understand model behavior.
### Connections to Prior Work
Our assumptions in Eq. (1) are common across many works related to learning invariant predictors [40; 27; 54; 41; 26]. Representatively, under the binary classification setting, the IRM objective finds a representation function \(\varphi\) which elicits an invariant predictor across environments \(E\) such that for all \(h\) that has a non-zero probability for \(\varphi(X)\) in any (and all) environment(s):
\[\mathbb{E}_{e}[Y\mid\varphi(X)=h]=\mathbb{E}_{e^{\prime}}[Y\mid\varphi(X)=h], \;\forall e,e^{\prime}\in E.\]
Eq. (1) can be viewed as a generalization of this equality to multi-class settings.4.
Footnote 4: This equality has also been called “sufficiency invariance” [60].
Furthermore, note that given the feature extractor \(\varphi\), the NW mechanism \(f\) is a nonlearnable classifier, whereas \(w\) is learned in the IRM setting. Thus, our proposed objective can be interpreted as learning invariant features \(\varphi\), where the _fixed classifier constraint is satisfied by construction_. This avoids the need to approximate the complex bilevel optimization problem with a regularizer which assumes convexity and requires computing the Hessian. Essentially, \(f\) enforces invariance through the manipulation of the support set, providing a more intuitive and computationally simpler objective to optimize.
In the Experiments section, we compare IRM against a variant of our algorithm where we freeze the learned representations and finetune a linear classifier on top using the same training data. We find that our algorithm performs better than IRM on all datasets, suggesting that it captures invariant representations better than IRM.
## 5 Experiments and Results
### Baselines
We compare against several popular and competitive baseline algorithms: empirical risk minimization (ERM) [52], invariant risk minimization (IRM) [1], deep CORAL [48], Fish [46], LISA [64], and CLOvE [54]. When available, results on baselines are pulled from their respective papers. Details on baseline algorithms are provided in the Appendix.
### Datasets
We experiment on 3 real-world domain generalization tasks. Two are from the WILDS benchmark [23], and the third is a challenging melanoma detection task. Details on the datasets are summarized in Table 1, and further information is provided in the Appendix.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline _Dataset_ & _\# Classes_ & _Env_ & _\# Envs_ & _Architecture_ & _Metric_ \\ \hline Camelyon-17 & 2 & Hospital & 3 & DenseNet-121 & Average acc. \\ ISIC & 2 & Hospital & 3 & ResNet-50 & F1-score \\ FMoW & 62 & Region & 5 & DenseNet-121 & Worst-region acc. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Datasets.
\begin{table}
\begin{tabular}{l l l l} \hline \hline _Algorithm_ & _Camelyon-17_ & _ISIC_ & _FMoW_ \\ \hline ERM [52] & 70.3\(\pm\)6.4 & 58.2\(\pm\)2.9 & 32.6\(\pm\)1.6 \\ IRM [1] & 70.9\(\pm\)6.8 & 57.9\(\pm\)1.0 & 31.3\(\pm\)1.2 \\ CORAL [48] & 72.4\(\pm\)4.4 & 59.1\(\pm\)2.2 & 31.7\(\pm\)1.0 \\ Fish [46] & 74.7\(\pm\)7e-2 & 64.4\(\pm\)1.7 & 34.6\(\pm\)0.0 \\ LISA [64] & 77.1\(\pm\)6.5 & 64.8\(\pm\)2.3 & 35.5\(\pm\)1.8 \\ CLOvE [54] & 79.9\(\pm\)3.9 & 66.2\(\pm\)2.2 & **40.1\(\pm\)**0.6 \\ \hline NW\({}^{\text{B}}\), Random & 71.7\(\pm\)5.3 & 56.7\(\pm\)1.4 & 31.1\(\pm\)0.8 \\ NW\({}^{\text{B}}\), Full & 72.0\(\pm\)6.7 & 61.9\(\pm\)3.5 & 31.6\(\pm\)0.9 \\ NW\({}^{\text{B}}\), Cluster & 70.6\(\pm\)6.9 & 61.4\(\pm\)2.3 & 31.3\(\pm\)0.9 \\ NW\({}^{\text{B}}\), Ensemble & 71.9\(\pm\)6.0 & 63.9\(\pm\)3.8 & 32.2\(\pm\)1.0 \\ NW\({}^{\text{B}}\), Probe & 69.2\(\pm\)7.4 & 59.7\(\pm\)2.5 & 29.9\(\pm\)1.5 \\ \hline NW\({}^{\text{B}}_{\text{e}}\), Random & 74.8\(\pm\)8.4 / 75.3\(\pm\)3.2 & 57.5\(\pm\)1.9 / 55.0\(\pm\)0.9 & 31.2\(\pm\)0.7 / 30.9\(\pm\)0.5 \\ NW\({}^{\text{B}}_{\text{e}}\), Full & **80.0\(\pm\)**2.7 / 79.7\(\pm\)1.9 & 69.6\(\pm\)2.3 / 70.0\(\pm\)1.0 & 35.0\(\pm\)0.7 / 34.6\(\pm\)0.4 \\ NW\({}^{\text{B}}_{\text{e}}\), Cluster & 78.6\(\pm\)2.5 / 79.0\(\pm\)1.4 & **71.1\(\pm\)**1.7 / 71.0\(\pm\)1.0 & 33.9\(\pm\)0.6 / 34.0\(\pm\)0.3 \\ NW\({}^{\text{B}}_{\text{e}}\), Ensemble & 79.5\(\pm\)2.6 / 79.6\(\pm\)1.9 & 69.5\(\pm\)2.2 / 69.8\(\pm\)0.8 & 37.8\(\pm\)0.9 / 38.2\(\pm\)0.4 \\ NW\({}^{\text{B}}_{\text{e}}\), Probe & 75.3\(\pm\)7.3 / 75.8\(\pm\)8.3 & 61.4\(\pm\)3.1 / 63.4\(\pm\)2.8 & 33.9\(\pm\)1.5 / 32.7\(\pm\)1.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Metric average \(\pm\) standard deviation for all datasets (%). Higher is better. **Bold** is best and underline is second-best. Implicit / Explicit.
1. The Camelyon-17 dataset [4] comprises of microscopic images of stained tissue patches from different hospitals, where the label corresponds to presence of tumor tissue in patches and the environment is the hospital where the patch comes from.
2. The melanoma dataset is from the International Skin Imaging Collaboration (ISIC) archive5. The ISIC dataset comprises of dermoscopic images of skin lesions from different hospitals, where the label corresponds to whether or not the lesion is diagnosed as melanoma and the environment is the hospital where the image comes from. There is significant less positive examples and negative examples, with this label imbalance varying across environments (see Appendix). Footnote 5: [https://www.isic-archive.com](https://www.isic-archive.com)
3. The Functional Map of the World (FMoW) dataset [6] comprises of RGB satellite images, where the label is one of 62 building or land use categories, and the environment represents the year the image was taken and its geographical region.
### Experimental Setup
For each model variant, we train 5 separate models with different random seeds, and perform model selection on an out-of-distribution (OOD) validation set. For WILDS datasets, we follow all hyperparameters, use model selection techniques, and report metrics as specified by the benchmark. This includes using a DenseNet-121 backbone initialized with pretrained ImageNet weights as \(\varphi\) and no random augmentations for both datasets. Similarly for ISIC, we use a pretrained ResNet-50 backbone as \(\varphi\) with no augmentations, and perform model selection on an OOD validation set. Due to significant label imbalance, we report F1-score instead of average accuracy.
For NW algorithms, we refer to models which balance classes (i.e. modeling Eq. (3)) as \(\text{NW}^{\text{B}}\), and models which additionally condition on environment (i.e. modeling both Eq. (3) and Eq. (4)) as \(\text{NW}^{\text{B}}_{\text{e}}\). For \(\text{NW}^{\text{B}}_{\text{e}}\) models, we train explicit and implicit variants. For all NW algorithms, we perform evaluation on all inference modes. In addition, for completeness, we experiment on a variant where we freeze the feature extractor and finetune a linear probe on top of the learned representations on the same training data \(\mathcal{D}_{tr}\), which we refer to as "Probe". As an example, the implicit variant of \(\text{NW}^{\text{B}}_{\text{e}}\) is trained on Eq. (8), where the support set is balanced across classes (B) and conditioned on an environment (e).
We set \(N_{c}=8\) for Camelyon-17 and ISIC and \(N_{c}=1\) for FMoW. An analysis of this hyperparameter is provided in the Appendix. The query batch size \(N_{q}\) is set to 8 for all NW experiments. For Random and Cluster inference modes, we set \(k=3\). This was chosen based on prior work [55], where \(k=3\) was shown to balance good error rate performance with computational efficiency. For explicit variants, we tune \(\lambda\) for all datasets via grid search on a held-out validation set. Full hyperparameter details are provided in the Appendix.
All training and inference is done on an Nvidia A6000 GPU and all code is written in Pytorch.6.
Footnote 6: Our code is available at [https://github.com/alanqrwang/nwhead](https://github.com/alanqrwang/nwhead).
### Results
Table 2 shows the main results. We find that on Camelyon-17 and ISIC datasets, \(\text{NW}^{\text{B}}_{\text{e}}\) with Full mode outperforms all baselines and variants we consider. In addition, \(\text{NW}^{\text{B}}_{\text{e}}\) variants typically have lower variance across random seeds as compared to baselines. For FMoW, \(\text{NW}^{\text{B}}_{\text{e}}\) with Ensemble mode performs around \(2\%\) lower than the best performing baseline, CLoVE. We observe that the most computationally-efficient inference mode, Cluster, performs comparably to Full mode for \(\text{NW}^{\text{B}}_{\text{e}}\) models, and is in fact the highest-performing model for ISIC. Thus, we conclude that the support set can be an efficient replacement for Full.
For ISIC, we find that almost all \(\text{NW}^{\text{B}}\) modes (except Random) perform \(\sim 3\%\) better than ERM. This may be attributed to balancing classes across environments, which we suspect has added benefit for highly imbalanced tasks. In contrast, this boost is less apparent for Camelyon-17, which has relatively balanced classes. As an ablation, we compare \(\text{NW}^{\text{B}}\) against an NW variant without class-balancing in the Appendix. \(\text{NW}^{\text{B}}_{\text{e}}\) further improves over \(\text{NW}^{\text{B}}\) by \(\sim 7\%\). Exploring further, we compare \(\text{NW}^{\text{B}}\) against an ERM variant with balanced classes per environment, which we denote \(\text{ERM}^{\text{B}}\). Thisachieves \(63.0\pm 2.5\), which is on-par with \(\text{NW}^{\text{B}}\). This is expected as the theoretical assumptions are the same for both models.
Comparing implicit to explicit variants of \(\text{NW}^{\text{B}}_{\text{e}}\), we do not find much difference in explicitly enforcing Eq. (1), although we do observe significantly lower variances across model runs. Generally, we find the slight performance gain of explicit training to not be worth the computational overhead of doubling the number of support set forward passes per gradient step and tuning the hyperparameter \(\lambda\).
While not the highest-performing, we highlight the consistent \(1\)-\(5\%\) improvement of Probe models over IRM, indicating that \(\text{NW}^{\text{B}}_{\text{e}}\) may be better at capturing invariant features. However, other non-parametric inference modes still outperform Probe, possibly indicating that the learned features are more suitable for NW-style classifiers.
## 6 Discussion and Limitations
There are several advantages of the proposed NW approach over previous works. First, the implicit training strategy in Eq. (8) has no hyperparameter to tune, while remaining competitive with and often outperforming state-of-the-art baselines which all require tuning a hyperparameter coefficient in the regularized loss. Second, the NW head enables interpretability by interrogating nearest neighbors in the feature space. Since these neighbors directly contribute to the model's prediction (Eq. (2)), interrogation enables a user to see what is driving the model's decision-making. This not only allows for greater model transparency, but also enables interrogating the quality of the invariant features. We explore this capability in Section H in the Appendix. Note this this degree of transparency is not present in parametric baselines. Lastly, from an intuitive standpoint, we believe our non-parametric approach to enforcing invariance across environments is more natural than baseline methods, since an environment is encoded by manipulating the support set to contain real samples only from that environment. Other baseline methods resort to proxy methods to enforce invariance [54; 48; 27].
One important limitation of our method is computational (see Appendix for analysis of runtimes). The proposed approach requires pairwise comparisons, which scales quadratically with sample size. Practically, this means passing a separate support mini-batch in addition to a query mini-batch at every training iteration. This limitation is compounded for explicit variants, in which two sets of support sets must be drawn independently. Future work may explore more computationally-efficient training strategies. At inference time, Full, Cluster, and Ensemble modes are expensive procedures which require computing features for the entire support set, although precomputing features can mitigate this. However, we argue that in high-risk, safety-critical domains like medical imaging, high inference throughput may not be as important as performance, interpretability, and robustness.
We expect the proposed approach to work well with tasks that have several (and diverse) sets of examples per label class in each environment. If this is not the case, as in the FMoW dataset, the resulting model will be sub-optimal. In particular, in the extreme case where no example is present for a specific class in a given environment, constructing a support set with labels that cover the ground truth label of the query images will not always be possible. This will, in turn, impact performance.
## 7 Conclusion
We presented a nonparametric strategy for invariant representation learning based on the Nadaraya-Watson (NW) head. In the NW head, the prediction is made by comparing the learned representations of the query to the elements of a support set that consists of labeled data. We demonstrated two possible ways of manipulating the support set, and demonstrate how this corresponds to encoding different assumptions from a causal perspective. We validated our approach on three challenging and real-world datasets.
We believe there are many interesting directions of further research. First, our treatment is restricted to classification tasks. Future work may explore an extension to the regression setting. Second, it can be interesting to explore adaptation to the test domain, given additional information. For example, reweighting the occurrence of samples per label could provide improved results given knowledge about the edge \(E\to Y\) in the test distribution. One can further envision implementing the proposed method in settings where there are previously unseen test time labels/tasks. Finally, we are interested in replacing the fixed similarity function with a learnable kernel.
## Acknowledgements
Funding for this project was in part provided by the NIH grant R01AG053949, and the NSF CAREER 1748377 grant.
## References
* Arjovsky et al. [2019] Martin Arjovsky, Leon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization, 2019. URL [https://arxiv.org/abs/1907.02893](https://arxiv.org/abs/1907.02893).
* Bishop [2006] Christopher M. Bishop. _Pattern Recognition and Machine Learning (Information Science and Statistics)_. Springer-Verlag, Berlin, Heidelberg, 2006. ISBN 0387310738.
* Brabec et al. [2020] Jan Brabec, Tomas Komarek, Vojtech Franc, and Lukas Machlica. On model evaluation under non-constant class imbalance, 2020.
* Bandi et al. [2019] Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, Quanzheng Li, Farhad Ghazvinian Zanjani, Svitlana Zinger, Keisuke Fukuta, Daisuke Komura, Vlado Ovtcharov, Shenghua Cheng, Shaoqun Zeng, Jeppe Thagaard, Anders B. Dahl, Huangjing Lin, Hao Chen, Ludwig Jacobsson, Martin Hedlund, Melih Ceetin, Eren Halcu, Hunter Jackson, Richard Chen, Fabian Both, Jorg Franke, Heidi Kusters-Vandevelde, Willem Vreuls, Peter Bult, Bram van Ginneken, Jeroen van der Laak, and Geert Litjens. From detection of individual metastases to classification of lymph node status at the patient level: The camelyon17 challenge. _IEEE Transactions on Medical Imaging_, 38(2):550-560, 2019. doi: 10.1109/TMI.2018.2867350.
* Chevalley et al. [2022] Mathieu Chevalley, Charlotte Bunne, Andreas Krause, and Stefan Bauer. Invariant causal mechanisms through distribution matching, 2022.
* Christie et al. [2018] Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world, 2018.
* Codella et al. [2019] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (isic). Eprint arXiv:1902.03368, 2019.
* Codella et al. [2018] Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti, Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). In _Proceedings of ISBI_, pages 168-172. IEEE, 2018.
* Combalia et al. [2019] Marc Combalia, Noel CF Codella, Veronica Rotemberg, Brian Helba, Veronica Vilaplana, Ofer Reiter, Cristina Carrera, Alicia Barreiro, Allan C Halpern, Susana Puig, et al. Bcn20000: Dermoscopic lesions in the wild. Eprint arXiv:1908.02288, 2019.
* Damianou and Lawrence [2013] Andreas Damianou and Neil D. Lawrence. Deep gaussian processes. In Carlos M. Carvalho and Pradeep Ravikumar, editors, _Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics_, volume 31 of _Proceedings of Machine Learning Research_, pages 207-215, Scottsdale, Arizona, USA, 29 Apr-01 May 2013. PMLR. URL [https://proceedings.mlr.press/v31/damianou13a.html](https://proceedings.mlr.press/v31/damianou13a.html).
* Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. URL [https://arxiv.org/abs/2010.11929](https://arxiv.org/abs/2010.11929).
* Dou et al. [2020] Qi Dou, Daniel Coelho de Castro, Konstantinos Kamnitsas, and Ben Glocker. Domain generalization via model-agnostic learning of semantic features. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates,Inc., 2019. URL [https://proceedings.neurips.cc/paper_files/paper/2019/file/2974788b53f73e7950e8aa49f3a306db-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2019/file/2974788b53f73e7950e8aa49f3a306db-Paper.pdf).
* Fakoor et al. [2020] Rasool Fakoor, Pratik Chaudhari, Jonas Mueller, and Alexander J. Smola. Trade: Transformers for density estimation, 2020. URL [https://arxiv.org/abs/2004.02441](https://arxiv.org/abs/2004.02441).
* Ganin et al. [2016] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks, 2016.
* Ghosh and Nag [2001] J. Ghosh and A. Nag. _An Overview of Radial Basis Function Networks_, pages 1-36. Physica-Verlag HD, Heidelberg, 2001. ISBN 978-3-7908-1826-0. doi: 10.1007/978-3-7908-1826-0_1. URL [https://doi.org/10.1007/978-3-7908-1826-0_1](https://doi.org/10.1007/978-3-7908-1826-0_1).
* Gulrajani and Lopez-Paz [2020] Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization, 2020.
* Gutman et al. [2016] David Gutman, Noel CF Codella, Emre Celebi, Brian Helba, Michael Marchetti, Nabin Mishra, and Allan Halpern. Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (isbi) 2016, hosted by the international skin imaging collaboration (isic). Eprint arXiv:1605.01397, 2016.
* Heinze-Deml and Meinshausen [2019] Christina Heinze-Deml and Nicolai Meinshausen. Conditional variance penalties and domain shift robustness, 2019.
* Jaegle et al. [2021] Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. Perceiver: General perception with iterative attention, 2021. URL [https://arxiv.org/abs/2103.03206](https://arxiv.org/abs/2103.03206).
* Jiang and Veitch [2022] Yibo Jiang and Victor Veitch. Invariant and transportable representations for anti-causal domain shifts, 2022. URL [https://arxiv.org/abs/2207.01603](https://arxiv.org/abs/2207.01603).
* Kaddour et al. [2022] Jean Kaddour, Aengus Lynch, Qi Liu, Matt J. Kusner, and Ricardo Silva. Causal machine learning: A survey and open problems, 2022. URL [https://arxiv.org/abs/2206.15475](https://arxiv.org/abs/2206.15475).
* Kamath et al. [2021] Pritish Kamath, Akilesh Tangella, Danica J. Sutherland, and Nathan Srebro. Does invariant risk minimization capture invariance?, 2021.
* Koh et al. [2021] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts, 2021.
* Kossen et al. [2021] Jannik Kossen, Neil Band, Clare Lyle, Aidan N. Gomez, Tom Rainforth, and Yarin Gal. Self-attention between datapoints: Going beyond individual input-output pairs in deep learning, 2021. URL [https://arxiv.org/abs/2106.02584](https://arxiv.org/abs/2106.02584).
* Kotelevskii et al. [2022] Nikita Kotelevskii, Aleksandr Artemenkov, Kirill Fedyanin, Fedor Noskov, Alexander Fishkov, Artem Shelmanov, Artem Vazhentsev, Aleksandr Petiushko, and Maxim Panov. Nonparametric uncertainty quantification for single deterministic neural network, 2022.
* Koyama and Yamaguchi [2021] Masanori Koyama and Shoichiro Yamaguchi. When is invariance useful in an out-of-distribution generalization problem?, 2021.
* Krueger et al. [2021] David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex), 2021.
* Kumar et al. [2018] Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, volume 80 of _Proceedings of Machine Learning Research_, pages 2805-2814. PMLR, 10-15 Jul 2018. URL [https://proceedings.mlr.press/v80/kumar18a.html](https://proceedings.mlr.press/v80/kumar18a.html).
* Li et al. [2017] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. Learning to generalize: Meta-learning for domain generalization, 2017.
* Li et al. [2018] Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C. Kot. Domain generalization with adversarial feature learning. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5400-5409, 2018. doi: 10.1109/CVPR.2018.00566.
* Lipton et al. [2018] Zachary C. Lipton, Yu-Xiang Wang, and Alex Smola. Detecting and correcting for label shift with black box predictors, 2018.
* Long et al. [2015] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks, 2015.
* Malkov and Yashunin [2018] Yu. A. Malkov and D. A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs, 2018.
* Mansilla et al. [2021] Lucas Mansilla, Rodrigo Echeveste, Diego H. Milone, and Enzo Ferrante. Domain generalization via gradient surgery, 2021.
* Nadaraya [1964] E. A. Nadaraya. On estimating regression. _Theory of Probability & Its Applications_, 9(1):141-142, 1964. doi: 10.1137/1109020. URL [https://doi.org/10.1137/1109020](https://doi.org/10.1137/1109020).
* Nguyen et al. [2021] Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression, 2021.
* Papernot and McDaniel [2018] Nicolas Papernot and Patrick McDaniel. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning, 2018. URL [https://arxiv.org/abs/1803.04765](https://arxiv.org/abs/1803.04765).
* Parmar et al. [2018] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer, 2018. URL [https://arxiv.org/abs/1802.05751](https://arxiv.org/abs/1802.05751).
* Pearl [2009] Judea Pearl. _Causality_. Cambridge University Press, 2 edition, 2009. doi: 10.1017/CBO9780511803161.
* Peters et al. [2015] Jonas Peters, Peter Buhlmann, and Nicolai Meinshausen. Causal inference using invariant prediction: identification and confidence intervals, 2015.
* Rojas-Carulla et al. [2018] Mateo Rojas-Carulla, Bernhard Scholkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning, 2018.
* Rosenfeld et al. [2021] Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski. The risks of invariant risk minimization, 2021.
* Rotemberg et al. [2021] Veronica Rotemberg, Nicholas Kurtansky, Brigid Betz-Stablein, Liam Caffery, Emmanouil Chousakos, Noel Codella, Marc Combalia, Stephen Dusza, Pascale Guitera, David Gutman, et al. A patient-centric dataset of images and metadata for identifying melanomas using clinical context. _Scientific data_, 8(1):1-8, 2021.
* Schoelkopf et al. [2012] Bernhard Schoelkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On causal and anticausal learning, 2012.
* Scope et al. [2009] A Scope, AA Marghoob, CS Chen, JA Lieb, MA Weinstock, AC Halpern, and SONIC Study Group. Dermoscopic patterns and subclinical melanocytic nests in normal-appearing skin. _British Journal of Dermatology_, 160(6):1318-1321, 2009.
* Shi et al. [2021] Yuge Shi, Jeffrey Seely, Philip H. S. Torr, N. Siddharth, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization, 2021.
* Snell et al. [2017] Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning, 2017. URL [https://arxiv.org/abs/1703.05175](https://arxiv.org/abs/1703.05175).
* Sun and Saenko [2016] Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation, 2016.
* Teasiri et al. [2022] Mohammad Reza Teasiri, Giang Nguyen, and Anh Nguyen. Visual correspondence-based explanations improve AI robustness and human-AI team accuracy. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022. URL [https://openreview.net/forum?id=UavQ9HYye6n](https://openreview.net/forum?id=UavQ9HYye6n).
* Tschandl et al. [2018] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. _Scientific data_, 5(1):1-9, 2018.
* Tzeng et al. [2014] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance, 2014.
* Vapnik [1999] V.N. Vapnik. An overview of statistical learning theory. _IEEE Transactions on Neural Networks_, 10(5):988-999, 1999. doi: 10.1109/72.788640.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017. URL [https://arxiv.org/abs/1706.03762](https://arxiv.org/abs/1706.03762).
* Wald et al. [2021] Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. On calibration and out-of-domain generalization, 2021. URL [https://arxiv.org/abs/2102.10395](https://arxiv.org/abs/2102.10395).
* Wang and Sabuncu [2023] Alan Q. Wang and Mert R. Sabuncu. A flexible nadaraya-watson head can offer explainable and calibrated classification. _Transactions on Machine Learning Research_, 2023. ISSN 2835-8856. URL [https://openreview.net/forum?id=iEq61hG403](https://openreview.net/forum?id=iEq61hG403).
* Wang et al. [2022] Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip S. Yu. Generalizing to unseen domains: A survey on domain generalization, 2022.
* Wang et al. [2023] Pengfei Wang, Zhaoxiang Zhang, Zhen Lei, and Lei Zhang. Sharpness-aware gradient matching for domain generalization, 2023.
* Wang and Jordan [2022] Yixin Wang and Michael I. Jordan. Desiderata for representation learning: A causal perspective, 2022.
* 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 3622-3626, 2020. doi: 10.1109/ICASSP40776.2020.9053273.
* Wang and Veitch [2022] Zihao Wang and Victor Veitch. The causal structure of domain invariant supervised representation learning, 2022. URL [https://arxiv.org/abs/2208.06987](https://arxiv.org/abs/2208.06987).
* Watson [1964] G. S. Watson. Smooth regression analysis. _Sankhya: The Indian Journal of Statistics_, Series A (26):359-372, 1964.
* Wilson et al. [2015] Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep kernel learning, 2015. URL [https://arxiv.org/abs/1511.02222](https://arxiv.org/abs/1511.02222).
* Xu et al. [2019] Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. Adversarial domain adaptation with domain mixup, 2019.
* Yao et al. [2022] Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, and Chelsea Finn. Improving out-of-distribution robustness via selective augmentation, 2022.
* Yue et al. [2022] Xiangyu Yue, Yang Zhang, Sicheng Zhao, Alberto Sangiovanni-Vincentelli, Kurt Keutzer, and Boqing Gong. Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data, 2022.
* Zhang et al. [2021] Dequan Zhang, Ning Zhang, Nan Ye, Jianguang Fang, and Xu Han. Hybrid learning algorithm of radial basis function networks for reliability analysis. _IEEE Transactions on Reliability_, 70(3):887-900, 2021. doi: 10.1109/TR.2020.3001232.
* Zhang et al. [2018] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization, 2018.
* Zhang et al. [2018] Quanshi Zhang, Yu Yang, Haotian Ma, and Ying Nian Wu. Interpreting cnns via decision trees, 2018. URL [https://arxiv.org/abs/1802.00121](https://arxiv.org/abs/1802.00121).
* Zhou et al. [2021] Fan Zhou, Zhuqing Jiang, Changjian Shui, Boyu Wang, and Brahim Chaib-draa. Domain generalization via optimal transport with metric similarity learning. _Neurocomputing_, 456:469-480, oct 2021. doi: 10.1016/j.neucom.2020.09.091. URL [https://doi.org/10.1016%2Fj.neucom.2020.09.091](https://doi.org/10.1016%2Fj.neucom.2020.09.091).
* Zhou et al. [2020] Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Deep domain-adversarial image generation for domain generalisation, 2020.
* Zhou et al. [2022] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, pages 1-20, 2022. doi: 10.1109/tpami.2022.3195549. URL [https://doi.org/10.1109%2Ftpami.2022.3195549](https://doi.org/10.1109%2Ftpami.2022.3195549).
Description of Baselines
* Empirical risk minimization (ERM) [52] minimizes the sum of errors across domains and examples.
* Invariant risk minimization (IRM) [1] learns a feature representation such that the optimal linear classifier on top of that representation matches across domains. For WILDS datasets, we pull baseline performance from [54]. For ISIC, we use the implementation from [16].
* Deep CORAL [48] penalizes differences in the means and covariances of the feature distributions (i.e., the distribution of last layer activations in a neural network) for each domain. For WILDS datasets, we pull baseline performance from [54]. For ISIC, we use the implementation from [16].
* Fish targets domain generalization by maximizing the inner product between gradients from different domains. For WILDS datasets, we pull baseline performance from the original paper. For ISIC, we use the implementation from [16].
* LISA augments the set of training data by randomly performing two types of mixup-style [67] interpolations: intra-label (same label, different domain) and inter-label (same domain, different label). For WILDS datasets, we pull baseline performance from the original paper, and train their implementation on the ISIC dataset.
* CLOvE [54] finds an invariant classifier by enforcing the classifier to be calibrated across all training domains. While the original paper proposes several model variants leveraging this idea, we report their best-performing variant, which starts with a trained CORAL model and finetunes the weights using a regularized cross-entropy loss. The regularizer aggregates Maximum Mean Calibration Error (MMCE) [28] over all training domains. For WILDS datasets, we pull baseline performance from [54]. As their implementation is not publicly-available, we implement it for ISIC.
## Appendix B Description of Datasets
Representative examples of the 3 datasets are shown in Fig. 4.
* **Camelyon-17.** We use Camelyon-17 from the WILDS benchmark [4, 23], which provides 450,000 lymph-node scans sampled from 5 hospitals. Camelyon-17 is a medical image classification task where the input \(x\) is a 96 x 96 image and the label \(y\) is whether there exists tumor tissue in the image. The environment denotes the hospital that the patch was taken from. The training dataset is drawn from the first 3 hospitals, while out-of-distribution validation and out-of-distribution test datasets are sampled from the 4th hospital and 5th hospital, respectively.
* **ISIC.** The melanoma dataset is from the International Skin Imaging Collaboration (ISIC) archive7. Data from the archive are collected by different organizations at different points in time [7, 8, 9, 17, 43, 45, 50]. There are about 70k data samples in total. In particular, the resized input image \(x\) is a 224 x 224 image and a binary target label \(y\) denotes whether the image exhibit is melanoma or not. The environment is the hospital from which the image was collected 8. We follow a similar setup to Camelyon-17. The training dataset is drawn from the first 3 hospitals, while out-of-distribution validation and out-of-distribution test datasets are sampled from the 4th hospital and 5th hospital, respectively. For preprocessing, we filter out datapoints that are not specifically categorized as "benign" or "malignant" (e.g. "indeterminate"). The OOD validation dataset is from the "Barcelona1" site indicator and the OOD test dataset is the "Vienna1" site indicator. Footnote 7: [https://www.isic-archive.com](https://www.isic-archive.com)
* **FMoW.** The FMoW dataset is from the WILDS benchmark [6, 23], a satellite image classification task which includes 62 classes and 80 domains (16 years x 5 regions). Concretely, the input \(x\) is a 224 x 224 RGB satellite image, the label \(y\) is one of the 62 building or land use categories, and the environment represents the year that the image was taken as well as its corresponding geographical region - Africa, the Americas, Oceania, Asia, or Europe. The train/test/validation splits are based on the time when the images are taken. Specifically, images taken before 2013 are used as the training set. Images taken between 2013 and 2015 are used as the validation set. Images taken after 2015 are used for testing.
Figure 4: Representative images for datasets, separated by domain. Each row depicts a separate class. For FMoW, for simplicity, we show 2 classes out of 62 and only images before 2013.
Hyperparameter Details
Table 3 shows hyperparameter settings for all datasets, where NW-specific hyperparameters are below the midline. For all models, we use pretrained ImageNet weights. For \(\lambda\), we perform a grid-search over the values {0.01, 0.1, 1}. Fig. 5 depicts NW\({}_{\text{e}}^{\text{B}}\) performance vs \(N_{c}\) for Camelyon-17 and ISIC datasets. We find that performance is relativity insensitive to \(N_{c}\) above \(\sim 5\) examples per class.
In the original NW head paper [55], the authors experiment with a temperature (i.e. bandwidth) hyperparameter \(\tau\). In this work, we set \(\tau=1\) for all experiments. The reason for this is that we optimize both the feature extractor and classifier end-to-end, and the kernel used in the classifier is dependent on the features that the feature extractor learns (unlike [25], e.g.). Thus, we let the feature extractor to optimize the bandwidth on its own. Note this same approach is taken in prior works [47].
Figure 5: NW\({}_{\text{e}}^{\text{B}}\) performance vs \(N_{c}\) for Camelyon-17 and ISIC datasets. Full mode. Performance is relativity insensitive to \(N_{c}\) above \(\sim 5\) examples per class.
\begin{table}
\begin{tabular}{l l l l} \hline \hline _Hyperparameter_ & _Camelyon-17_ & _ISIC_ & _FMoW_ \\ \hline Learning rate & 1e-4 & 5e-5 & 1e-4 \\ Weight decay & 1e-4 & 0 & 1e-2 \\ Scheduler & None & None & StepLR \\ Batch size & 32 & 8 & 8 \\ Architecture & DenseNet-121 & ResNet-50 & DenseNet-121 \\ Optimizer & SGD & Adam & Adam \\ Maximum Epoch & 10 & 5 & 60 \\ \hline \(N_{q}\) & 8 & 8 & 8 \\ \(N_{c}\) & 8 & 8 & 1 \\ \(N_{s}\) & \(N_{c}\times 2=16\) & \(N_{c}\times 2=16\) & \(N_{c}\times 62=62\) \\ \(\lambda\) & 0.01 & 0.01 & 0.1 \\ \(k\) & 3 & 3 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Hyperparameter settings for various datasets.
Table of Runtimes
Table 4 shows approximate runtimes for various datasets during training and inference. All experiments are performed on a GPU.
## Appendix E ID vs. OOD Performance
With most invariant learning methods, prior works observe a tradeoff between in-distribution and out-of-distribution generalization. In Table 5, we show in-distribution (ID) and out-of-distribution (OOD) results for Camelyon-17, which provides an ID validation set. We observe that there is a tradeoff between ID and OOD performance, similar to prior work.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & _Algorithm_ & _Camelyon-17_ & _ISIC_ & _FMoW_ \\ \hline Training & ERM & 7 hr & 1 hr & 22 hr \\ & NW & 14 hr & 2 hr & 40 hr \\ \hline Inference & ERM & 10 min & 2 min & 10 min \\ & NW, Random & 15 min & 3 min & 20 min \\ & NW, Full & 2 hr & 15 min & 1 hr \\ & NW, Ensemble & 2 hr & 15 min & 1 hr \\ & NW, Cluster & 2.2 hr & 17 min & 1.1 hr \\ & NW, Probe & 10 min & 2 min & 10 min \\ \hline \hline \end{tabular}
\end{table}
Table 4: Approximate runtimes for various algorithms. Training time is time to complete maximum epochs as specified in Table 3, and does not include validation. Inference time is time to evaluate the entire test set. Averaged over all training runs.
\begin{table}
\begin{tabular}{l l l} \hline \hline & _In-distribution_ & _Out-of-distribution_ \\ \hline ERM & 93.2\(\pm\)5.2 & 70.3\(\pm\)6.4 \\ NW\({}^{\text{B}}\), Full & 96.1\(\pm\)1.0 & 72.0\(\pm\)6.7 \\ NW\({}^{\text{B}}_{\text{e}}\), Full & 92.8\(\pm\)2.0 & 80.0\(\pm\)2.7 \\ \hline \hline \end{tabular}
\end{table}
Table 5: In-distribution vs. out-of-distribution generalization performance on Camelyon-17.
Imbalanced ISIC Experiments
As ISIC exhibits significant label imbalance where the positive class is much less represented than the negative class (see Fig. 6), we experiment with an NW variant without support-set label balancing as an ablation. For both variants, we set \(N_{c}=8\). To train the imbalanced variant, we sample a mini-batch support set by first sampling one image per class (to guarantee both classes are represented in the support at least once), and then sampling the rest of the images randomly from the dataset.
To characterize the performance of both variants across class imbalances, we change the prevalance of \(y=0\) in the test set by removing negative class images until the desired prevalence is achieved. Note the default prevalence is \(\sim 0.85\). Then, we compute the accuracy over this manipulated test set (note, prior work has shown that F1-score is not a good metric for comparing classifiers with different label imbalances [3]).
Fig 7 shows results. We observe that at high prevalence where the proportion of negative classes matches in training and test, the imbalanced variant outperforms the balanced variant, whereas the opposite is true for low prevalence. This makes sense because at high prevalence, the class imbalance is similar for the training and test domains; thus, a model which overpredicts the negative class is usually right. On the other hand, this fails in test sets where the prevalence is flipped (i.e. low prevalence). These results suggest that \(\text{NW}^{\text{B}}\) is a more robust classifier in the presence of label shift.
Figure 6: Number of datapoints separated by class for Camelyon-17 and ISIC datasets. There is significant label imbalance for the ISIC dataset.
Figure 7: Accuracy of NW (imbalanced) and \(\text{NW}^{\text{B}}\) (balanced) models over varying prevalence of \(y=0\) for ISIC dataset. At low prevalence where the prevalence differs the most from training domains, we observe that model performance is higher for \(\text{NW}^{\text{B}}\). The default prevalence is 6705/7818 = 0.8576, which is the right-most value in the x-axis.
Nearest-neighbor Inference Modes
In addition to Random, Full, Cluster, and Ensemble inference modes, we additionally experiment with two inference modes based on nearest neighbors: k-NN and HNSW. HNSW (Hierarchical Navigable Small Worlds) is a fast approximate nearest neighbor algorithm [33]. We choose \(k=20\) based on prior work [25]. HNSW is about two times faster in total runtime on a GPU than full k-NN.
Overall, we observe that k-NN and HNSW perform nearly identically for all the datasets and variants. Additionally, both modes perform better in terms of mean performance on Camelyon-17, and perform on par with the best-performing modes for ISIC (Cluster) and FMoW (Ensemble). However, they generally have higher variances across model runs. We suspect that less total samples used in the support (20 as compared to more than 1000 for Full) and the fact that not all classes are guaranteed to be represented in the support may lead to less stability across model runs.
## Appendix H Interpretability of NW Head
In this section, we provide both a visual and quantitative exploration of the interpretability capabilities of the NW head. In Fig. 8, we show several query images and its 8 nearest neighbors in the feature space, for both \(\text{NW}^{\text{B}}\) and \(\text{NW}^{\text{B}}_{\text{e}}\) variants. The colored border around each neighbor indicates which training environment the image comes from. Interestingly, we notice that the neighbors for \(\text{NW}^{\text{B}}_{\text{e}}\) come from a variety of environments (note a variety of colored borders), while the neighbors for \(\text{NW}^{\text{B}}\) are less diverse.
Fig. 9 quantifies this phenomenon, depicting a normalized histogram of the environments from which the top 20 nearest neighbors originate in the training dataset for Camelyon-17, averaged over all queries in the test set. From these results, it is clear that \(\text{NW}^{\text{B}}_{\text{e}}\) leverages support images from a wider variety of environments to make its prediction, suggesting that it captures more invariant representations.
\begin{table}
\begin{tabular}{l l l l} _Algorithm_ & _Camelyon-17_ & _ISIC_ & _FMoW_ \\ \hline \(\text{NW}^{\text{B}}\), Full & 72.0\(\pm\)6.7 & 61.9\(\pm\)3.5 & 31.6\(\pm\)0.9 \\ \(\text{NW}^{\text{B}}\), Cluster & 70.6\(\pm\)6.9 & 61.4\(\pm\)2.3 & 31.3\(\pm\)0.9 \\ \(\text{NW}^{\text{B}}\), Ensemble & 71.9\(\pm\)6.0 & 63.9\(\pm\)3.8 & 32.2\(\pm\)1.0 \\ \(\text{NW}^{\text{B}}\), k-NN & 72.5\(\pm\)3.2 & 64.2\(\pm\)2.6 & 32.5\(\pm\)1.4 \\ \(\text{NW}^{\text{B}}\), HNSW & 72.5\(\pm\)3.2 & 64.2\(\pm\)2.6 & 32.5\(\pm\)1.4 \\ \hline \(\text{NW}^{\text{B}}_{\text{e}}\), Full & 80.0\(\pm\)2.7 / 79.7\(\pm\)1.9 & 69.6\(\pm\)2.3 / 70.0\(\pm\)1.0 & 35.0\(\pm\)0.7 / 34.6\(\pm\)0.4 \\ \(\text{NW}^{\text{B}}_{\text{e}}\), Cluster & 78.6\(\pm\)2.5 / 79.0\(\pm\)1.4 & 71.1\(\pm\)1.7 / 71.0\(\pm\)1.0 & 33.9\(\pm\)0.6 / 34.0\(\pm\)0.3 \\ \(\text{NW}^{\text{B}}_{\text{e}}\), Ensemble & 79.5\(\pm\)2.6 / 79.6\(\pm\)1.9 & 69.5\(\pm\)2.2 / 69.8\(\pm\)0.8 & 37.8\(\pm\)0.9 / 38.2\(\pm\)0.4 \\ \(\text{NW}^{\text{B}}_{\text{e}}\), k-NN & 80.9\(\pm\)10.3 / 80.2\(\pm\)4.9 & 70.7\(\pm\)7.4 / 71.0\(\pm\)7.2 & 37.9\(\pm\)2.3 / 38.3\(\pm\)1.1 \\ \(\text{NW}^{\text{B}}_{\text{e}}\), HNSW & 80.9\(\pm\)10.3 / 80.2\(\pm\)5.0 & 70.7\(\pm\)7.4 / 71.0\(\pm\)7.2 & 37.9\(\pm\)2.2 / 38.2\(\pm\)1.1 \\ \hline \end{tabular}
\end{table}
Table 6: Metric average \(\pm\) standard deviation for all datasets (%). Higher is better. Implicit / Explicit.
Figure 8: Visualization of 3 query images and their 8 nearest neighbors in the feature space, for both \(\text{NW}_{\text{e}}^{\text{B}}\) (top) and \(\text{NW}^{\text{B}}\) (bottom). Labels are shown in the bottom right corner, and colored borders of neighbors indicate which of the 3 training environments the image comes from. For both variants, we observe visual similarity between the query images and the nearest neighbors. However, we observe the neighbors for \(\text{NW}^{\text{B}}\) tend to lack diversity in the environments from which they originate. In contrast, neighbors for \(\text{NW}_{\text{e}}^{\text{B}}\) tend to come from a wider variety of environments, suggesting that \(\text{NW}_{\text{e}}^{\text{B}}\) captures more invariant representations.
Figure 9: Normalized histogram of the environments from which the top 20 nearest neighbors originate in the training dataset for Camelyon-17, averaged over all queries in the test set. We observe a more balanced proportion for \(\text{NW}_{\text{e}}^{\text{B}}\), indicating that the model relies more evenly across all 3 environments to make its prediction, and further suggesting that representations are more invariant than \(\text{NW}^{\text{B}}\). | ## Review
### Summary
This paper addresses the important problem of reliability in deep learning when dealing with data collected from diverse environments. The authors propose a novel method that separates style and content in input objects to ensure stable behavior across different settings. Building on the Nadaraya-Watson (NW) framework, the method employs a causality graph to embed causal assumptions into the model. Predictions are made using a nonparametric approach based on learned representations and a support set of labeled data. The authors conduct experiments across several benchmark datasets that demonstrate competitive performance, although some areas for improvement are identified in the experimental design and clarity of results.
### Strengths
- The work is original and potentially significant for the subfield.
- The paper is well-written and structured, with clean figures and clear method descriptions.
- The experimental protocol is well explained, and results are presented with appropriate error bars.
- The approach creatively addresses the challenging issue of domain generalization through non-parametric learning coupled with neural representations.
- The method shows promise on relevant datasets, particularly the probe variant of NW-training indicating a more invariant internal representation than IRM.
- The motivating application to medical image classification across different domains is strong and relevant.
### Weaknesses
- The pros and cons of the method are not clearly highlighted, leaving some claims unsubstantiated.
- Some experimental aspects need further clarification, particularly regarding the computational efficiency of the method.
- The overall novelty and contribution of the paper are not clearly articulated, particularly in relation to existing literature.
- There is a lack of reproducibility due to the absence of code, which raises concerns about the validity of experimental results.
- The introduction and related works section would benefit from a more detailed discussion of criticisms surrounding IRM and how the proposed method addresses these issues.
### Questions
- What role does the support set play in out-of-distribution (OOD) generalization, and how are the domains related?
- How is the kernel bandwidth (temperature) chosen, and what impact does it have?
- Why is it more effective to regularize a non-parametric predictor than a standard classification loss?
- What justifies the use of the NW head in achieving invariance, and could simpler alternatives suffice?
- Can you clarify the equivalence of the objective to unconstrained maximum likelihood under specific assumptions?
### Soundness
**Score:** 3
**Description:** Good. The foundational concepts and methods proposed are generally solid, but some claims require further validation and clarity.
### Presentation
**Score:** 3
**Description:** Good. While the paper is mostly well-structured and clear, there are areas where clarity could be improved, particularly regarding methodology and experimental results.
### Contribution
**Score:** 2
**Description:** Fair. Although the approach is interesting, the extent of its contribution relative to existing methods in the literature is not fully established.
### Rating
**Score:** 5
**Description:** Borderline accept: The paper is technically solid and presents a novel approach, but there are significant areas for improvement which should be addressed before final acceptance.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an original approach to a significant problem in deep learning, with promising experimental results. However, there are areas for improvement in clarity, detail, and the presentation of results that should be addressed in the final version. The strengths of the work outweigh the weaknesses, leading to a recommendation for acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Foundation Model is Efficient
Multimodal Multitask Model Selector
Fanqing Meng
OpenGVLab, Shanghai AI Laboratory
Shanghai Jiao Tong University
[email protected]
&Wenqi Shao
OpenGVLab, Shanghai AI Laboratory
[email protected]
&Zhanglin Peng
The University of Hong Kong
&Chonghe Jiang
The Chinese University of Hong Kong
&Kaipeng Zhang
OpenGVLab, Shanghai AI Laboratory
&Yu Qiao
OpenGVLab, Shanghai AI Laboratory
&Ping Luo
The University of Hong Kong
OpenGVLab, Shanghai AI Laboratory
[email protected]
Corresponding Authors: [email protected]; [email protected]
###### Abstract
This paper investigates an under-explored but important problem: given a collection of pre-trained neural networks, predicting their performance on each multi-modal task without fine-tuning them, such as image recognition, referring, captioning, visual question answering, and text question answering.A brute-force approach is to finetune all models on all target datasets, bringing high computational costs. Although recent-advanced approaches employed lightweight metrics to measure models' transferability, they often depend heavily on the prior knowledge of a single task, making them inapplicable in a multi-modal multi-task scenario. To tackle this issue, we propose an efficient multi-task model selector (EMMS), which employs large-scale foundation models to transform diverse label formats such as categories, texts, and bounding boxes of different downstream tasks into a unified noisy label embedding. EMMS can estimate a model's transferability through a simple weighted linear regression, which can be efficiently solved by an alternating minimization algorithm with a convergence guarantee. Extensive experiments on \(5\) downstream tasks with \(24\) datasets show that EMMS is fast, effective, and generic enough to assess the transferability of pre-trained models, making it the first model selection method in the multi-task scenario. For instance, compared with the state-of-the-art method LogME enhanced by our label embeddings, EMMS achieves 9.0%, 26.3%, 20.1%, 54.8%, 12.2% performance gain on image recognition, referring, captioning, visual question answering, and text question answering, while bringing 5.13\(\times\), 6.29\(\times\), 3.59\(\times\), 6.19\(\times\), and 5.66\(\times\) speedup in wall-clock time, respectively. The code is available at [https://github.com/OpenGVLab/Multitask-Model-Selector](https://github.com/OpenGVLab/Multitask-Model-Selector).
## 1 Introduction
Pre-trained models (such as neural network backbones) are crucial and are capable of being fine-tuned to solve many downstream tasks such as image classification [1], image captioning [2], question answering [3], and referring segmentation [4]. This "pre-training \(\rightarrow\) fine-tuning" paradigm shows that the models pre-trained on various datasets (_e.g.,_ ImageNet [1]and YFCC100M [5]) by many objectives (_e.g.,_ supervised and self-supervised) can provide generic-purpose representation, which is transferable to different tasks. A large number of pre-trained models have been produced with the rapid development of network architecture research, such as convolutional neural networks (CNNs) [6; 7; 8] and transformers [9; 10; 11]. When given a large collection of pre-trained models to solve multiple multi-modal tasks, an open question arises: _how to efficiently predict these models' performance on multiple tasks without fine-tuning them?_
Existing works [12; 13; 14; 15; 16; 17; 18; 19] answered the above question using model selection approaches, which are of great benefit to transfer learning. For example, when a neural network is properly initialized with a better pre-trained checkpoint, it will achieve faster convergence and better performance on a target task [20; 21]. However, it is challenging to quickly identify an optimal one from a large collection of pre-trained models when solving each multi-modal task. This is because of two reasons. Firstly, the ground truth of model ranking can only be obtained by brute-force fine-tuning and hyper-parameter grid search, which are computationally expensive [22]. Secondly, the recent methods [12; 13; 14; 23] that can estimate the transferability of pre-trained models are not generic enough for a variety of multi-modal tasks. For instance, an approach [14; 12] that relies on the prior knowledge of a single specific task would be ineffective in others.
To address the above challenges, we need a unified representation to represent diverse label formats in each multi-modal task _e.g.,_ categories, texts and bounding boxes. Existing methods cannot be employed in multi-task scenarios because they only receive labels with one-hot or real-valued vectors, as shown in Fig.1. For example, LEEP [12] and PACTran [24] are carefully designed for classification tasks. GBC [25] and TransRate [23] measure transferability using class separability, which relies on prior knowledge in classification task. Although LogME [13] can be used in both classification and regression tasks, it relies on real-valued labels, making it inapplicable in other label formats such as text descriptions.
In contrast to the above works, we propose an Efficient Multi-task Model Selector with an acronym, EMMS, which can select the most appropriate pre-trained model for solving each multi-modal task.
Figure 1: Comparison between prior pre-trained model selectors and our multi-task model selector. (a) denotes that a model selector measures transferability by modeling the compatibility between the model feature and task label. Previous model selectors can only receive labels with one-hot or real-valued vectors. Our multi-task model selector can be employed in various tasks with diverse label formats. (b) denotes that our proposed EMMS is applicable and effective in various downstream tasks while previous transferability metrics can be only used in classification or regression tasks.
This is achieved by employing foundation models, such as CLIP [26] and GPT-2 [27], to transform diverse label formats into a unified label embedding space. The estimated label embedding contains more rich information than the conventional one-hot and real-valued label encoding.
In this way, EMMS can measure the compatibility between the models' features and corresponding label embeddings on various tasks, as shown in Fig. 1 and Fig. 2. This results in a more generic assessment of the models' transferability than previous methods. Specifically, EMMS treats the estimated label embeddings as noisy oracles of the ground-truth labels, and it turns a log-likelihood maximization problem into a simple weighted linear square regression (WLSR). We propose an alternating minimization algorithm to solve WLSR, which can be solved with a theoretical convergence guarantee efficiently. Extensive experiments validate the effectiveness of EMMS on multiple tasks, including image classification [1], image captioning [2], question answering on both image [28] and text [29], referring comprehension [4], and landmark detection [30].
The **contributions** of this work are summarized as follows. (1) We propose a generic transferability estimation technique, namely Efficient Multi-task Model Selector (EMMS). Equipped with a unified label embedding provided by foundation models and a simple weighted linear square regression (WLSR), EMMS can be fast, effective, and generic enough to assess the transferability of pre-trained models in various tasks. (2) We propose a novel alternating minimization algorithm to solve WLSR efficiently with theoretical analysis. (3) Extensive experiments on \(5\) downstream tasks with \(24\) datasets demonstrate the effectiveness of EMMS. Specifically, EMMS achieves 9.0%, 26.3%, 20.1%, 54.8%, 12.2%, performance gain on image recognition, referring, captioning, visual question answering, and text question answering, while bringing 5.13\(\times\), 6.29\(\times\), 3.59\(\times\), 6.19\(\times\), and 5.66\(\times\) speedup in wall-clock time compared with the state-of-the-art method LogME enhanced by our label embeddings, respectively.
## 2 Related Work
**Transferability Estimation.** Model selection is an important task in transfer learning. To perform model selection efficiently, methods based on designing transferability metrics have been extensively investigated. LEEP [12] pioneers to evaluate the transferability of source models by empirically estimating the joint distribution of pseudo-source labels and the target labels. But it can only handle classification tasks with supervised pre-trained models because the modeling of LEEP relies on the classifier of source models. Recent works propose several improvements over LEEP to overcome the limitation. For example, NLEEP [31] replaces pseudo-source labels with clustering indexes. Moreover, LogME [13], TransRate [23], and PACTran [24] directly measure the compatibility between model features and task labels. Although fast, these metrics can only be used on limited tasks such as classification and regression. This work deals with model selection in multi-task scenarios. We propose EMMS to evaluate the transferability of pre-trained models on various tasks.
**Label Embedding.** Label embedding represents a feature vector of task labels, which can be generated in various ways. The classical approach is to use one-hot encoding to represent the labels as sparse vectors, which is widely used in image classification. Another way is to transform labels into vectors by embedding layers. For example, an RNN module is employed to generate label representation in [32], which is encouraged to be compatible with input data vectors in text classification tasks. In addition, it is also common to treat the labels as words and use techniques such as word2vec [33] or GloVe [34] to learn vector representations of the labels. The main obstacle in the multi-task scenario is how to deal with diverse label formats. In this work, we follow the idea of word embedding and treat task labels as texts, which are then transformed into embeddings by publicly available foundation models [26; 27].
**Foundation Models.** CLIP [26] is the first known foundation model which learns good semantic matching between image and text. The text encoder of CLIP can perform zero-shot label prediction because it encodes rich text concepts of various image objects. By tokenizing multi-modal inputs into homogeneous tokens, recent work on foundation models such as OFA [35] and Uni-Perceiver [36] use a single encoder to learn multi-modal representations. In this work, we utilize the great capacity of foundation models in representing image-text concepts to generate label embedding. It is noteworthy that although foundation models can achieve good performance in various downstream tasks, they may not achieve good zero-shot performance on many tasks[37; 38; 39] and it is still computationally expensive to transfer a large model to the target task [40; 41]. On the contrary, a multi-task model selector can quickly select an optimal moderate-size pre-trained model that can generalize well in target tasks. In this sense, a multi-task model selector is complementary to foundation models.
## 3 Preliminary of Model Selection
**Problem Setup.** A target dataset with \(N\) labeled samples denoted as \(\mathcal{T}=\{(x^{n},y^{n})\}_{n=1}^{N}\) and \(M\) pre-trained models \(\{\phi_{m}=(\theta_{m},h_{m})\}_{m=1}^{M}\) are given. Each model \(\phi_{m}\) consists of a feature extractor \(\theta_{m}\) producing a \(D\)-dimension feature (i.e. \(\hat{x}=\theta_{m}(x)\in\mathbb{R}^{D}\)) and a task head \(h_{m}\) outputting predicted label given input \(x\)[6; 9]. In multi-task scenarios, the ground-truth label comes in various forms, such as category, caption, and bounding box, as shown in 1. The task of pre-trained model selection is to generate a score for each pre-trained model thereby the best model can be identified to achieve good performance for various downstream tasks.
**Ground Truth.** The ground truth is obtained by fine-tuning all pre-trained models with hyper-parameters sweep on the target training dataset and recording the highest scores of evaluation metrics [31; 13] (e.g. test accuracy and BLEU4 [42]). We denote ine-tuning scores of different models as \(\{G_{m}\}_{m=1}^{M}\). Since fine-tuning all models on all target tasks requires massive computation cost, research approaches design lightweight transferability metrics which offer an accurate estimate of how well a pre-trained model will transfer to the target tasks.
**Transferability Metric.** For each pre-trained model \(\phi_{m}\), a transferability metric outputs a scalar score \(T_{m}\) based on the log-likelihood, as written by
\[T_{m}=\sum_{n=1}^{N}\log p(y_{n}|x_{n};\theta_{m},h_{m}) \tag{1}\]
where \((x_{n},y_{n})\) denotes the \(n\)-th data point in target dataset \(\mathcal{T}\). A higher log-likelihood value for \(T_{m}\) indicates that the model \(\phi_{m}\) is likely to achieve better performance on the intended task. Numerous transferability metrics have been proposed by modeling prediction probability \(p(y_{n}|x_{n};\theta_{m},h_{m})\) in various ways. Although being efficient, they can hardly be used in multi-task scenarios.
**Challenges in Multi-task Scenarios.** Existing transferability metrics fail to generalize to various tasks for two reasons. Firstly, existing methods such as LEEP and LogME can only deal with real-value label formats. But \(y_{n}\) can be a sentence of words in the task of image caption. Secondly, a large number of the previous metrics estimate transferability through the target task's prior information such as maximizing inter-class separability, which is inapplicable in multi-task scenarios except for the classification. To overcome these difficulties, we introduce a simple regression framework with unified label embeddings provided by several foundation models in Sec.4.
## 4 Our Method
In this section, we introduce our Efficient Multi-task Model Selector (EMMS). To overcome the difficulty of diverse label formats, EMMS employs foundation models to transform various labels into unified label embeddings in Sec.4.1. By treating label embeddings provided by multiple foundation models as noisy oracles of ground truth labels, EMMS can calculate transferability metric under a simple weighted linear square regression (WLSR) framework in Sec.4.2. We design an alternating minimization algorithm to solve WLSR efficiently in Sec. 4.3. The illustration of our EMMS is provided in Fig. 2.
### Foundation Models Unify Label Embedding
In general, label embeddings or label representations should encode the semantic information such that two labels with low semantic similarity have a low chance to be grouped. A common scheme is to represent label embedding as a one-hot vector. However, one-hot representation can not embed labels with text formats such as captions in the image caption task. Following the design in multi-modality foundation models [42], we treat labels with diverse formats as a text sequence, which can be encoded by pre-trained foundation models, as shown in Fig. 2.
**Label Embedding via Foundation Models (F-Label).** Thanks to the great representational capacity, the foundation model can construct label embedding (termed F-label) while preserving its rich semantic information of labels. Given a label \(y\) in the target task, the label embedding \(z\in\mathbb{R}^{L}\) is obtained by \(z=F(y)/\|F(y)\|_{2}\) where \(F\) can be instantiated by various foundation models to process diverse label formats. \(\ell_{2}\) normalization is utilized to normalize the representations extracted from different foundation models. Moreover \(F\) can be implemented as CLIP [26], BERT [43] and GPT-2 [27] when task label \(y\) is text. Note that label embedding extraction can be fast enough with GPU parallel computation. We provide the runtime analysis in Appendix Sec.C.
**Benefits of F-Label.** F-Label has several advantages over one-hot label representations. Firstly, it embeds richer semantic information than one-hot label, leading to accurate modeling of the semantic relationships between different labels. As shown in Fig.3, F-Label leads to a higher correlation between fine-grained classes than one-hot encoding. Secondly, compared with one-hot labels, F-label can be obtained in a variety of tasks as long as the task label can be transformed into a text sequence. With the assistance of F-Labels, model selection can be established in multi-task scenarios.
### Regression with Unified Noisy Label Embeddings
To estimate the transferability of pre-trained models, the relationship between model features \(\hat{x}\in\mathbb{R}^{D}\) and F-Label \(z\in\mathbb{R}^{L}\) should be modeled in order to calculate the transferability score \(T_{m}\) in Eqn. (1). On the other hand, since a semantic label \(y\) can be embedded by several foundation models, the label embedding set can be constructed as \(\mathcal{Z}=\{z_{k}=F_{k}(y)/\|F_{k}(y)\|_{2},k\in[K]\}\) where \(\{F_{k}\}_{k=1}^{K}\) denotes \(K\) foundation models. Now, we utilize data points \(\{(\hat{x}_{k}^{n},z_{1}^{n},\cdots,z_{K}^{n})\}_{n=1}^{N}\) to model the relationship between model features and F-Labels.
**Setup.** As shown in Fig.2, we assume that true label embedding \(z\) is a linear mapping of the model feature with additive Gaussian noise with a variance of \(\sigma_{0}^{2}\), as given by \(z=z_{0}+\epsilon=w^{T}\hat{x}+\epsilon\) and \(\epsilon\sim N(0,\sigma_{0}^{2}I_{L})\) where \(z_{0}=w^{T}\hat{x}\) is the regression prediction, \(w\in\mathbb{R}^{D\times L}\) and \(\epsilon\) are regression weights and regression error, respectively, and \(I_{L}\) is a L-by-L identity matrix.
We assume that F-labels \(\{z_{k}\}_{k=1}^{K}\) obtained from different foundation models are oracles that independently provide noisy estimates of the true label embedding \(z\). Formally, we have \(P(z_{k}|z)=N(z,\sigma_{k}^{2}I_{L})\). By the above setup, EMMS would be performed with noisy labels. Hence, EMMS tends to select pre-trained models robust to the label noise.
Figure 3: Label embedding has richer semantic information than one-hot labels. (a) indicates that in the classification task, F-Label can capture the correlation of labels with different granularity than one-hot encoding. (b) shows that in the image caption task, F-label can model the semantic relevance of two captions corresponding to the same image better than the one-hot label.
Figure 2: Overview of our EMMS. (a) shows that labels in various tasks can be expressed by texts. (b) presents the graph model of regression with multiple noisy labels. We use several foundation models to encode text labels as label embeddings which are deemed as noisy oracles of true label embedding \(z\). Moreover, \(z\) is a linear mapping of model feature \(\hat{x}\) with Gaussian noise \(\epsilon\sim N(0,\sigma_{0}^{2})\).
**Reasonableness of the Linear Assumption.** Specifically, EMMS assumes that the true label embedding \(z\) is a linear mapping of the model feature with Gaussian noise. The linear assumption is reasonable in image and text classification tasks because a linear classifier is usually used when the pre-trained model is transferred to a target task, which is commonly used in recent methods. For example, LogME [13] assumes that: \(z\gets N(w^{T}\hat{x},\beta^{-1})\), which implies that there is a linear mapping from the model feature space to the label space. PACTran [24] also has a similar setting. The difference is that LogME takes a one-hot label as the true label embedding, which limits its applicability. But our EMMS treat the true label embedding \(z\) as an implicit variable. And F-Labels \(\{z_{k}\}_{k=1}^{K}\) obtained from different foundation models are assumed to be noisy oracles of true label embedding \(z\). Since labels in many tasks can be easily encoded into F-Labels, our EMMS can be used as a multitask model selector. We verify effectiveness the linear assumption in various multi-model tasks with extensive experiments in Sec.5.
**Computation of Log-Likelihood.** To model the relationship between model features and F-Labels, we need to estimate regression weights \(w\), strengths of label noises \(\{\sigma_{k}\}_{k=0}^{K}\). For simplicity of notation, we consider the case \(L=1\), i.e. F-labels are scalars. Given \(N\) data points, the log-likelihood is given by
\[\mathcal{L}=N\log A_{1}-\frac{N}{2}\log A_{2}+\sum_{n=1}^{N}(\frac{(A_{3}^{n}) ^{2}}{4A_{2}}-A_{4}^{n})+\mathrm{const} \tag{2}\]
where \(A_{1}=\prod_{k=0}^{K}1/\sigma_{k},A_{2}=\sum_{k=0}^{K}1/2\sigma_{k}^{2},A_{3}^ {n}=\sum_{k=0}^{K}z_{k}^{n}/\sigma_{k}^{2}\), and \(A_{4}^{n}=\sum_{k=0}^{K}(z_{k}^{n})^{2}/\sigma_{k}^{2}\). The detailed derivation of Eqn.(2) is provided in the Appendix Sec.A.
**Maximizing Log-likelihood as Weighted Linear Square Regression (WLSR).** The remaining issue is to determine parameters \(w\) and \(\{\sigma_{k}\}_{k=0}^{K}\) by maximizing the log-likelihood in Eqn. (2). But it can be intractable because \(w\) and \(\{\sigma_{k}\}_{k=0}^{K}\) are heavily coupled. To mitigate this issue, we turn the log-likelihood maximization into a weighted linear square regression by rearranging Eqn. (2) as \(-\mathcal{L}=\frac{1}{2}\|Xw-Zt\|_{2}^{2}+R(\{\sigma_{k}\}_{k=0}^{K}\), where \(X\in\mathbb{R}^{N\times D}\) is the data matrix whose \(n\)-th row is model feature \((\hat{x}^{n})^{T}\), \(w\in\mathbb{R}^{D\times 1}\) are weight parameters, \(Z\in\mathbb{R}^{N\times K}\) is F-Label matrix whose \(k\)-th column is the label embedding \(z_{k}\), and \(t\in\mathbb{R}^{K\times 1}\) satisfies that \(1_{K}^{T}t=1,t\geq 0\) which is a \((K-1)\)-D simplex denoted as \(\triangle^{K-1}\). \(R(\cdot)\) is a regularization term parameterized with \(\{\sigma_{k}\}_{k=0}^{K}\). We provide the derivations in Appendix Sec.A.
We note that the computational intractability comes from the data-dependent regularizer \(R(\cdot)\). For efficient computation, we drop \(R(\cdot)\), turning the log-likelihood maximization into a problem of WLSR, as given by
\[\min_{w\in\mathbb{R}^{D\times 1},t\in\triangle^{K-1}}s(w,t)=\frac{1}{2}\|Xw-Zt \|_{2}^{2} \tag{3}\]
When considering the case \(L>1\), Eqn. (3) becomes \(\min_{w\in\mathbb{R}^{D\times L},t\in\triangle^{K-1}}\frac{1}{2}\|Xw-Zt\|_{F}^ {2}\) where \(Z\in\mathbb{R}^{N\times L\times K}\) and \(\|\cdot\|_{F}\) is Frobenius norm.
```
1:Input: Model feature \(X\in R^{N\times D}\); F-Label matrix \(Z\in R^{N\times K}\); Learning step-sizes \(\eta\) and \(\beta\) for \(w\) and \(t\), respectively;
2:Output: Score of WLSR;
3:Initialize \(t=\frac{1}{K}1_{K}\) and \(w=\frac{1}{D}1_{D}\);
4:while\(s\) not converge do
5:\(s=\frac{1}{2}\|Xw-Zt\|_{2}^{2}\);
6:\(w=w-\eta X^{T}(Xw-Zt)\);
7:while\(t\) not converge do
8:\(t\gets t-\beta Z^{T}(Zt-Xw)\);
9:\(t=\Pi_{\triangle^{K-1}}(t)\); // Projection
10:endwhile
11:endwhile
12:Return:\(s\)
```
**Algorithm 1** Alternating Minimization
```
1:Input: Model feature \(X\in R^{N\times D}\), F-Label matrix \(Z\in R^{N\times K}\);
2:Output: Score of WLSR;
3:Initialize \(t=\frac{1}{K}1_{K}\) and \(w=\frac{1}{D}1_{D}\);
4:while\(s\) not converge do
5:\(s=\frac{1}{2}\|Xw-Zt\|_{2}^{2}\);
6:\(w=(X^{T}X)^{-1}X^{T}Zt\); // LSR for \(w\)
7:\(t=(Z^{T}Z)^{-1}Z^{T}Xw\); // LSR for \(t\)
8:\(t=\mathrm{Sparsemax}(t)\) ; // Projection
9:endwhile
10:Return:\(s\)
```
**Algorithm 2** Fast Alternating Minimization
When considering the case \(L>1\), Eqn. (3) becomes \(\min_{w\in\mathbb{R}^{D\times L},t\in\triangle^{K-1}}\frac{1}{2}\|Xw-Zt\|_{F}^ {2}\) where \(Z\in\mathbb{R}^{N\times L\times K}\) and \(\|\cdot\|_{F}\) is Frobenius norm. From Eqn. (2) and Eqn. (3), \(s(w,t)\) is an approximation of negative log-likelihood. Hence, a smaller \(s(w,t)\) indicate the larger \(T_{m}\) in Eqn. (1) and better transferability. We design an efficient algorithm to solve WLSR.
### Fast Computation by Alternating Minimization
**Algorithm.** The optimization problem in Eqn. (3) can be formulated to a classical second-order conic program[42][44](simply called **SOCP**). However, the excessive data in our problem leads to a large dimension of the variable, making it inefficient for standard solvers. Therefore, we are motivated to find the smooth structure of the problem and design an alternating minimization algorithm to achieve fast computation. As shown in Algorithm 1, we separately fix \(w\) and \(t\) to optimize the other one until the function value in Eqn. (3) converges. Specifically, when we fix \(t\), the whole problem degenerates to a least square problem with respect to \(w\). When we fix \(w\), we also need to solve a least square problem concerning \(t\) under the simplex constraint.
**Convergence Analysis.** We will prove the convergence property of the function value. Indeed, we prove a stronger condition that the function value decreases after each round of iterations on \(w\) and \(t\). From the monotone convergence theorem, the convergence can thus be derived. We first present the decreasing result of inner loop of \(t\) by Theorem 1 and the same property holds for the update of \(s\). Then the convergence of the whole algorithm can be derived by Theorem 2. The detailed proofs are placed in the Appendix Sec.A.
**Theorem 1**.: _Suppose \(s(w,t)=\frac{1}{2}\|Xw-Zt\|_{F}^{2}\) where \(X\in\mathbb{R}^{N\times D}\), \(Z\in\mathbb{R}^{N\times K}\), \(w\in\mathbb{R}^{D\times 1}\) and \(t\in\triangle^{K-1}\), the inner loop of \(t\) in Algorithm 1 lines \(7\) - \(10\) decreases after each iteration. Specifically, denote \(\beta=1/\|2Z^{T}Z\|\) and \(t^{+}=\Pi_{\triangle^{K-1}}(t-\beta\nabla s(w,t))\). For any \(t\in\triangle^{K-1}\), \(s(w,t^{+})-s(w,t)\leq-\frac{1}{2\beta}\|t-t^{+}\|^{2}\leq 0\)._
**Theorem 2**.: _Suppose \(s(w,t)=\frac{1}{2}\|Xw-Zt\|_{2}^{2}\) where \(X\in\mathbb{R}^{N\times D}\), \(Z\in\mathbb{R}^{N\times K}\), \(w\in\mathbb{R}^{D\times 1}\) and \(t\in\triangle^{K-1}\), the function value in Algorithm 1 will converge. Specifically, denote \(w^{*},t^{*}\) as the result after one iteration of \(w,t\) respectively, we have \(0\leq s(w^{*},t^{*})\leq s(w^{*},t)\leq s(w,t)\)._
**Computational Speedup.** Although this algorithm 1 guarantees convergence, it is a bit time-consuming due to the two-level loop, we optimized this part and achieved similar results in very little time. Since the least squares solution is extremely fast, we performs least squares on \(w\) and \(t\), and then replace projection onto simplex with explicit Sparsemax transformation [45; 46], iteratively. The fast solver is illustrated in Algorithm 2. we experimentally verify its convergence and find that the approach achieves impressive speedup.
## 5 Experiment
This section evaluates our method EMMS on different downstream tasks, including image classification, image caption, visual question answering, text question answering and referring expression comprehension. We put more experiments details in Appendix Sec.B. Moreover, we conduct a detailed ablation study to analyze our EMMS in Appendix Sec.C
### Training Details
**Benchmark.** For **image classification**, We adopt 11 classification benchmarks, including FGVC Aircraft [47], Caltech-101 [48], Stanford Cars [49], CIFAR-10 [50], CIFAR-100 [50], DTD [51], Oxford 102 Flowers [52], Food-101 [53], Oxford-IIIT Pets [54], SUN397 [55], and VOC2007 [56]. For **image caption**, We use Flickr8k [57], Flickr30k [58], FlickrStyle10K-Humor [59], FlickrStyle10K-Romantic [59] and RSICD [60]. For **visual question answer**, We apply COCOA [61], DAQUAR [62] and CLEVR [63]. For **text question answer** and **referring expression comprehension**, we separately use SQuAD1.1 [64],SQuAD2.0 [65] and RefCOCO [66], RefCOCO+ [66], RefCOCOg [67].
**Ground truth.** In order to obtain the ground truth, we finetune all pre-trained models on all target datasets with a grid search of hyper-parameters. Details of target datasets and fine-tuning schemes are described in Appendix Sec.B.
**Evaluation protocol.** To assess how well a model selector predict the transferability of pre-trained models, we calculate the rank correlation between \(\{T_{m}\}_{m=1}^{M}\) and \(\{G_{m}\}_{m=1}^{M}\). Following the common practice [13; 31], we use _weighted Kendall's \(\tau_{w}\)_. The larger \(\tau_{w}\) indicates a better correlation and better transferability metric. For computation complexity, we record the runtime of executing algorithmover all models given the feature and label on a target task and analyzed the computational complexity of EMMS as well as LogME. (Details can be found in Appendix Sec.C)
**Baseline.** For the image classification task, we choose NLEEP [31], TransRate [23], and LogME [13]as the baseline; for other multimodal tasks, we choose LogME with F-Label as the baseline; in addition, for the VQA task, we additionally compare PACTran [24]. Details of baselines and why we choose them are described in Appendix Sec.B.
### Image Classification with ViT Models
Vision transformer [9] (ViT) models have been increasingly used for a variety of tasks and have achieved better results than CNN models. The architecture of ViT models are more complex than CNN models. Hence, how to do the model selection on ViT models is a more challenging and rewarding task. Details of pre-trained models are described in Appendix Sec.B.
**Performance and wall-clock time comparison.** As shown in Table.1, our EMMS achieve the best average \(\tau_{w}\) on 11 target datasets and the best \(\tau_{w}\) on 9 target datasets with relatively short time. For example, EMMS outperforms LogME by 0.182 and 0.139 rank correlation \(\tau_{w}\) on Aircraft, and VOC2007, respectively, showing the effectiveness of our EMMS in measuring the transfer-ability of pre-trained ViT models. On the other hand, for the remaining 2 target datasets (i.e. CF-10, DTD), our EMMS still has a marginal gap compared to the best-performing transferability metric. Besides, we find that the effect of model selection of EMMS in ViT models selection has an improvement compared to CNN models selection, we guess F-Label has spatial similarity with the model feature of ViT-base model because the foundation models are mostly transformer-based, which can model the relationship between model feature from ViT-base models and F-Labels more accurately.
### Image Captioning
Here we treat image caption as a vocab-based classification task. That is we use a vocabulary and classify the caption into the index of some words in the vocabulary. Afterward, training is done according to the classification task criteria.Here we calculate the average \(\tau_{w}\) and time of LogME with \(K\) single F-label from \(K\) foundation models we use respectively. We wants to select the best combination of image encoder and language encoder. Details of pre-trained models and the model architecture are described in Appendix Sec.B.
**Performance and wall-clock time comparison.** As shown in Table.2, EMMS is significantly ahead of baseline in both time and effect for each dataset. For example, EMMS outperforms LogME with the relative improvements of 39% and 37% in rank correlation \(\tau_{w}\) on Flickr8k and Flickr30k, respectively. In addition, the time of EMMS is reduced by 83.7% and 79.8% relative to LogME on these two datasets, which shows the efficiency of our algorithm. The average rank correlation \(\tau_{w}\) alone the five datasets is 0.64, which denotes EMMS has sufficient confidence.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. \\ \hline \multicolumn{11}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline LogME & 0.299 & 0.382 & 0.633 & **0.741** & 0.727 & 0.569 & 0.512 & 0.580 & 0.528 & 0.619 & 0.591 & 0.561 \\ NLEEP & -0.282 & 0.027 & 0.693 & 0.674 & 0.538 & 0.123 & -0.262 & 0.105 & 0.40 & 0.268 & 0.109 & 0.218 \\ TransRate & 0.244 & 0.412 & 0.487 & 0.260 & 0.702 & 0.533 & **0.655** & 0.542 & 0.707 & 0.612 & 0.651 & 0.527 \\ EMMS(One) & 0.412 & 0.444 & 0.565 & 0.740 & 0.736 & 0.621 & 0.562 & 0.579 & 0.740 & 0.592 & 0.730 & 0.611 \\ \hline EMMS & **0.481** & **0.444** & **0.706** & 0.718 & **0.745** & **0.621** & 0.562 & **0.673** & **0.740** & **0.619** & **0.730** & **0.639** \\ \hline \multicolumn{11}{c}{Wall-Clock Time (s)} \\ \hline LogME & 8.93 & 10.89 & 30.28 & 53.07 & 62.13 & 4.78 & 9.27 & 104.92 & 6.28 & 425.43 & 7.42 & 65.76 \\ NLEEP & 553.7 & 716.8 & 1.1e3 & 8.0e3 & 1.2e4 & 183.7 & 819.2 & 3.4e4 & 256.4 & 2.7e4 & 288.3 & 7719.8 \\ TransRate & 19.43 & 19.21 & 36.9 & 61.73 & 63.82 & 8.73 & 18.26 & 110.79 & 15.51 & 89.92 & 5.11 & 40.85 \\ EMMS(One) & **4.12** & **4.45** & **8.07** & **19.45** & **26.18** & **2.65** & **4.03** & **39.72** & **3.50** & **24.84** & **4.07** & **12.82** \\ \hline EMMS & 21.31 & 17.23 & 28.06 & 154.61 & 182.11 & 13.87 & 15.95 & 265.99 & 17.93 & 63.86 & 16.63 & 72.55 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different transferability metrics on ViT models regarding \(\tau_{w}\) and the wall-clock time where EMMS(One) denotes EMMS with the one-hot label. Our proposed EMMS achieves the best transfer-ability assessment over 11 target tasks and exhibits higher efficiency than NLEEP.
### Text Question Answering
For natural language understanding, we consider Text Question Answering (TQA) as a reading comprehension task, where the response to each question is a text segment extracted directly from the affiliated reading passage, or the question may indeed be deemed unanswerable. Details of pre-trained models and how to finetune are described in Appendix Sec.B.
Performance and wall-clock time comparison.In Table 3, the performance improvement of EMMS on the TQA is consistent with the enhancements observed in the earlier mentioned computer vision tasks. More specifically, our EMMS attains accuracies of 60.3% and 46.3% on the Stanford Question Answering Dataset (SQuAD) versions 1.1 and 2.0 respectively, using rank correlation \(\tau_{w}\) as an evaluation metric. This represents a significant relative increment of 11.2% and 13.2% compared to the performance of LogME.
### Referring Expression Comprehension
Referring expression comprehension (REC) is a widely challenging task because it requires precise alignment between linguistic concepts and image features. To address this, the objects in each image are represented as a sequence of discrete tokens, while their bounding box corner coordinates are turned into integer location tokens. This allows for a unified F-Label to be extracted using various language models. More details about the pre-trained models can be found in Appendix Sec.B.
Performance and wall-clock time comparison.As shown in Table 4, our EMMS continues to exhibit its superiority in the enhancement of performance on the REC task, an _instance-level cross-modal_ localization task. Specifically, the proposed EMMS produces accuracies of 45.8%, 54.9%, and 52.1% on the RefCOCO, RefCOCO+, and RefCOCOg datasets respectively. This significantly surpasses its counterpart, LogME, in terms of margins when evaluated with rank correlation \(\tau_{w}\).
### Ablation Analysis
Comparison with different number of F-Label Here we denote the number of F-Label is \(K\) and choose the image caption task to illustrate the impact of \(K\) on our solution. As shown in Table 6. We find that increasing \(K\) in a certain range brings a gain in effectiveness to our method, but when K becomes larger, the time also increases and we find that \(K=4\) is not as effective as \(K=3\). We believe that the increase in \(K\) brings difficulties in fitting the true Label, resulting in a loss of effectiveness. Therefore, we use \(K=3\) for the sake of effect and time.
Performance on F-Label using small model On the one hand, using foundation model can extract the joint embedding compared to the small model, which allows EMMS to be extended to tasks with multiple forms of labels. On the other hand, the foundation model can handle many types of tasks,
\begin{table}
\begin{tabular}{cso we can use the foundation model for different tasks for label embedding. As shown in Table 5, we experimentally demonstrate that the use of the foundation model leads to more accurate F-Label extraction and thus to an improvement in the performance of the method.
**The effect of using a single foundation model** We investigate how EMMS is influenced when only a single foundation model is provided. We conduct experiments on image classification and image captioning. We consider EMMS with the single foundation model including language foundation model (1) GPT-2 [27], (2) BERT [43], (3) RoBerta [68], and multimodal foundation model (4) CLIP [26], (5) FLAVA [69], and (6) AltCLIP [70]. For comparison, we include the result of our EMMS with default setting (K=3, i.e. CLIP, BERT, and GPT-2) and the result of previous state-of-the-art methods obtained from LogME, NLEEP and TransRate. The results are reported in Table 20 and Table 5.
We have several observations. (1) Different downstream tasks prefer F-Labels obtained from different foundation models. No single foundation model is dominant in all target tasks. In particular, CLIP is not the best model for extracting F-Labels. (2) For image captioning, multimodal foundation models are more appropriate for extracting F-Labels than language foundation models. (3) Our EMMS can achieve the best results by combining F-Labels obtained from multiple foundation models.
## 6 Conclusion
How to select a pre-trained model for different tasks quickly and effectively is an important issue in the field of transfer learning. This paper proposes an efficient multi-task model selector(EMMS) that can be applied to many types of tasks. EMMS uses foundation model for Label embedding in order to transform diverse label formats of different tasks into the same form and see them as noisy labels. To estimate a model's transferability, EMMS model this problem as a simple weighted linear regression, which can be solved use an alternating minimization algorithm. Compared with existing methods, EMMS achieves the first model selection in multi-task scenarios, including image caption, referring segmentation, etc., with high speed and great results. For the **limitations** of the method, if the foundation model generalize very poor on downstream tasks, it may lead to low-quality label embedding, which is a drawback of our method. Moreover, building a holistic benchmark of various label embeddings would be useful in many applications such as multi-modal adaptation [71]. We leave it as a future work.
## 7 Acknowledgments
This paper is partially supported by the National Key RD Program of China No.2022ZD0161000, National Key RD Program of China(NO.2022ZD0160100) and the General Research Fund of Hong
\begin{table}
\begin{tabular}{l r r r r|l r r r r} \hline \hline & F8k & F30k & RSD & F10k-H & F10k-R & & F8k & F30k & RSD & F10k-H & F10k-R \\ \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{Weighted Kendall’s tau \(\tau_{w}\)} & \multicolumn{4}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline K=1 & 0.490 & 0.386 & 0.527 & 0.772 & 0.668 & K=2 & 0.574 & 0.454 & 0.553 & 0.762 & 0.646 \\ K=3 & **0.660** & **0.504** & **0.704** & **0.802** & **0.678** & K=4 & 0.660 & 0.504 & 0.704 & 0.802 & 0.644 \\ \hline \hline \end{tabular}
\end{table}
Table 6: EMMS under different number of F-Label of transferability assessment on image caption task. The improvement of \(K\) in a certain range brought an increase in rank correlation \(\tau_{w}\).
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & \multicolumn{4}{c}{Weighted Kendall’s tau \(\tau_{w}\)} & \\ \hline LogME(Clip) & 0.530 & 0.393 & 0.618 & 0.764 & 0.634 & 0.588 & 0/5 \\ (1) Gpt2 & 0.566 & 0.393 & 0.431 & 0.715 & 0.618 & 0.545 & 0/5 \\ (2) Bert & 0.395 & 0.319 & 0.448 & **0.802** & **0.711** & 0.535 & 2/5 \\ (3) RoBerta & 0.346 & 0.111 & 0.587 & 0.571 & 0.566 & 0.436 & 0/5 \\ (4) CLIP\({}_{2}\) & 0.453 & 0.393 & **0.704** & **0.802** & 0.634 & 0.533 & 2/5 \\ (5) CLIP\({}_{2}\) & 0.510 & 0.448 & **0.704** & **0.802** & 0.678 & 0.628 & 2/5 \\ (6) FLAVA & 0.463 & 0.382 & 0.693 & 0.704 & 0.678 & 0.584 & 0/5 \\ (7) AltCLIP & 0.453 & 0.448 & 0.623 & **0.802** & 0.678 & 0.601 & 1/5 \\ EMMS & **0.660** & **0.504** & **0.704** & **0.802** & 0.678 & **0.670** & 4/5 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The effect of the single foundation model on EMMS. The results are obtained on image captioning regarding \(\tau_{w}\).
Kong No.17200622. Besides, thanks Ruimao Zhang from CUHK(SZ) for thoughtful discussion and Prof. Anthony Man-Cho So from CUHK for his valuable discusion about solving the WLSR in the paper. At last, this work was done during the internship at Shanghai Artificial Intelligence Laboratory.
## References
* [1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. _Communications of the ACM_, 60(6):84-90, 2017.
* [2] Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). _arXiv preprint arXiv:1412.6632_, 2014.
* [3] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In _International conference on machine learning_, pages 2397-2406. PMLR, 2016.
* [4] Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, and Philip HS Torr. Lavt: Language-aware vision transformer for referring image segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 18155-18165, 2022.
* [5] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. Yfcc100m: The new data in multimedia research. _Communications of the ACM_, 59(2):64-73, 2016.
* [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [7] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 4700-4708, 2017.
* [8] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1-9, 2015.
* [9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2020.
* [10] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 10012-10022, 2021.
* [11] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 568-578, 2021.
* [12] Cuong Nguyen, Tal Hassner, Matthias Seeger, and Cedric Archambeau. Leep: A new measure to evaluate transferability of learned representations. In _International Conference on Machine Learning_, pages 7294-7305. PMLR, 2020.
* [13] Kaichao You, Yong Liu, Jianmin Wang, and Mingsheng Long. Logme: Practical assessment of pre-trained models for transfer learning. In _International Conference on Machine Learning_, pages 12133-12143. PMLR, 2021.
* [14] Wenqi Shao, Xun Zhao, Yixiao Ge, Zhaoyang Zhang, Lei Yang, Xiaogang Wang, Ying Shan, and Ping Luo. Not all models are equal: Predicting model transferability in a self-challenging fisher space. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIV_, pages 286-302. Springer, 2022.
* [15] Zijian Wang, Yadan Luo, Liang Zheng, Zi Huang, and Mahsa Baktashmotlagh. How far pre-trained models are from neural collapse on the target dataset informs their transferability. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5549-5558, 2023.
* [16] Xiaotong Li, Zixuan Hu, Yixiao Ge, Ying Shan, and Ling-Yu Duan. Exploring model transferability through the lens of potential energy. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5429-5438, 2023.
* [17] Mohsen Gholami, Mohammad Akbari, Xinglu Wang, Behnam Kamranian, and Yong Zhang. Etran: Energy-based transferability estimation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 18613-18622, 2023.
* [18] Yi-Kai Zhang, Ting-Ji Huang, Yao-Xiang Ding, De-Chuan Zhan, and Han-Jia Ye. Model spider: Learning to rank pre-trained models efficiently. _arXiv preprint arXiv:2306.03900_, 2023.
* [19] Zhigang Hu, Yuhang Huang, Hao Zheng, Meiguang Zheng, and JianJun Liu. Graph-based fine-grained model selection for multi-source domain. _Pattern Analysis and Applications_, pages 1-12, 2023.
* [20] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? _Advances in neural information processing systems_, 27, 2014.
* [21] Kaiming He, Ross Girshick, and Piotr Dollar. Rethinking imagenet pre-training. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 4918-4927, 2019.
* [22] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3712-3722, 2018.
* [23] Long-Kai Huang, Junzhou Huang, Yu Rong, Qiang Yang, and Ying Wei. Frustratingly easy transferability estimation. In _International Conference on Machine Learning_, pages 9201-9225. PMLR, 2022.
* [24] Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, and Radu Soricut. Pactran: Pac-bayesian metrics for estimating the transferability of pretrained models to classification tasks. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIV_, pages 252-268. Springer, 2022.
* [25] Michal Pandy, Andrea Agostinelli, Jasper Uijlings, Vittorio Ferrari, and Thomas Mensink. Transferability estimation using bhattacharyya class separability. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 9172-9182, 2022.
* [26] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_, pages 8748-8763. PMLR, 2021.
* [27] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9, 2019.
* [28] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In _Proceedings of the IEEE international conference on computer vision_, pages 2425-2433, 2015.
* [29] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. Quac: Question answering in context. _arXiv preprint arXiv:1808.07036_, 2018.
* [30] Yue Wu and Qiang Ji. Facial landmark detection: A literature survey. _International Journal of Computer Vision_, 127:115-142, 2019.
* [31] Yandong Li, Xuhui Jia, Ruoxin Sang, Yukun Zhu, Bradley Green, Liqiang Wang, and Boqing Gong. Ranking neural checkpoints. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2663-2673, 2021.
* [32] Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In _Interspeech_, volume 2, pages 1045-1048. Makuhari, 2010.
* [33] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. _arXiv preprint arXiv:1301.3781_, 2013.
* [34] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_, pages 1532-1543, 2014.
* [35] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. _arXiv preprint arXiv:2202.03052_, 2022.
* [36] Xizhou Zhu, Jinguo Zhu, Hao Li, Xiaoshi Wu, Hongsheng Li, Xiaohua Wang, and Jifeng Dai. Uni-perceiver: Pre-training unified architecture for generic perception for zero-shot and few-shot tasks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 16804-16815, 2022.
* [37] Chengzhi Mao, Scott Geng, Junfeng Yang, Xin Wang, and Carl Vondrick. Understanding zero-shot adversarial robustness for large-scale models. _arXiv preprint arXiv:2212.07016_, 2022.
* [38] Wenqi Shao, Yutao Hu, Peng Gao, Meng Lei, Kaipeng Zhang, Fanqing Meng, Peng Xu, Siyuan Huang, Hongsheng Li, Yu Qiao, et al. Tiny lvlm-ehub: Early multimodal experiments with bard. _arXiv preprint arXiv:2308.03729_, 2023.
* [39] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. _arXiv preprint arXiv:2306.09265_, 2023.
* [40] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In _International Conference on Machine Learning_, pages 2790-2799. PMLR, 2019.
* [41] Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. Clip-adapter: Better vision-language models with feature adapters. _arXiv preprint arXiv:2110.04544_, 2021.
* [42] Farid Alizadeh and Donald Goldfarb. Second-order cone programming. _Mathematical programming_, 95(1):3-51, 2003.
* [43] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [44] Miguel Sousa Lobo, Lieven Vandenberghe, Stephen Boyd, and Herve Lebret. Applications of second-order cone programming. _Linear algebra and its applications_, 284(1-3):193-228, 1998.
* [45] Andre Martins and Ramon Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label classification. In _International conference on machine learning_, pages 1614-1623. PMLR, 2016.
* [46] Wenqi Shao, Tianjian Meng, Jingyu Li, Ruimao Zhang, Yudian Li, Xiaogang Wang, and Ping Luo. Ssn: Learning sparse switchable normalization via sparsestmax. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 443-451, 2019.
* [47] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. _arXiv preprint arXiv:1306.5151_, 2013.
* [48] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In _2004 conference on computer vision and pattern recognition workshop_, pages 178-178. IEEE, 2004.
* [49] Jonathan Krause, Jia Deng, Michael Stark, and Li Fei-Fei. Collecting a large-scale dataset of fine-grained cars. 2013.
* [50] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [51] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3606-3613, 2014.
* [52] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In _2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing_, pages 722-729. IEEE, 2008.
* [53] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101-mining discriminative components with random forests. In _Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VI 13_, pages 446-461. Springer, 2014.
* [54] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016 ieee conference on computer vision and pattern recognition (cvpr). _Las Vegas, NV, USA_, 1:770-78, 2016.
* [55] Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In _2010 IEEE computer society conference on computer vision and pattern recognition_, pages 3485-3492. IEEE, 2010.
* [56] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. _International journal of computer vision_, 88:303-338, 2010.
* [57] Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. Collecting image annotations using amazon's mechanical turk. In _Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon's Mechanical Turk_, pages 139-147, 2010.
* [58] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In _Proceedings of the IEEE international conference on computer vision_, pages 2641-2649, 2015.
* [59] Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. Stylenet: Generating attractive visual captions with styles. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3137-3146, 2017.
* [60] Xiaoqiang Lu, Binqiang Wang, Xiangtao Zheng, and Xuelong Li. Exploring models and data for remote sensing image caption generation. _IEEE Transactions on Geoscience and Remote Sensing_, 56(4):2183-2195, 2017.
* [61] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering. _Advances in neural information processing systems_, 28, 2015.
* [62] Mateusz Malinowski and Mario Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. _Advances in neural information processing systems_, 27, 2014.
* [63] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2901-2910, 2017.
* [64] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. _arXiv preprint arXiv:1606.05250_, 2016.
* [65] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. _arXiv preprint arXiv:1806.03822_, 2018.
* [66] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In _Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14_, pages 69-85. Springer, 2016.
* [67] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 11-20, 2016.
* [68] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_, 2019.
* [69] Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. Flava: A foundational language and vision alignment model. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15638-15650, 2022.
* [70] Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, and Ledell Wu. Altclip: Altering the language encoder in clip for extended language capabilities. _arXiv preprint arXiv:2211.06679_, 2022.
* [71] Zhiqiu Lin, Samuel Yu, Zhiyi Kuang, Deepak Pathak, and Deva Ramana. Multimodality helps unimodality: Cross-modal few-shot learning with multimodal models. _arXiv preprint arXiv:2301.06267_, 2023.
* [72] Charles L Byrne. Alternating minimization and alternating projection algorithms: A tutorial. _Sciences New York_, pages 1-41, 2011.
* [73] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_, 2019.
* [74] Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pre-training text encoders as discriminators rather than generators. _arXiv preprint arXiv:2003.10555_, 2020.
* [75] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011.
* [76] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 2820-2828, 2019.
* [77] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 4510-4520, 2018.
* [78] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2818-2826, 2016.
* [79] Mathilde Caron, Hugo Touvron, Ishan Misra, Herve Jegou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 9650-9660, 2021.
* [80] X Chen, S Xie, and K He. An empirical study of training self-supervised visual transformers. arxiv e-prints. _arXiv preprint arXiv:2104.02057_, 2021.
* [81] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. _arXiv preprint arXiv:1504.00325_, 2015.
* [82] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 12009-12019, 2022.
* [83] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 6904-6913, 2017.
* [84] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. _Advances in neural information processing systems_, 32, 2019.
* [85] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. Deberta: Decoding-enhanced bert with disentangled attention. _arXiv preprint arXiv:2006.03654_, 2020.
* [86] Pengcheng He, Jianfeng Gao, and Weizhu Chen. Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing. _arXiv preprint arXiv:2111.09543_, 2021.
* [87] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _International Conference on Machine Learning_, pages 12888-12900. PMLR, 2022.
* [88] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. _Advances in neural information processing systems_, 34:9694-9705, 2021.
* [89] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In _International Conference on Machine Learning_, pages 23318-23340. PMLR, 2022.
* [90] Hao Li, Pratik Chaudhari, Hao Yang, Michael Lam, Avinash Ravichandran, Rahul Bhotika, and Stefano Soatto. Rethinking the hyperparameters for fine-tuning. _arXiv preprint arXiv:2002.11770_, 2020.
Method
Here we derive in detail the regression with Unified Noisy Label Embeddings that appear in the method section of the text in Sec.A.1 and give complete proof of the convergence of the method in Sec.A.2.
### Regression with Unified Noisy Label Embeddings
**Setup.** we assume that label embedding \(z\) is a linear mapping of the model feature with additive Gaussian noise with a variance of \(\sigma_{0}^{2}\), as given by \(z=z_{0}+\epsilon=w^{T}\hat{x}+\epsilon\) and \(\epsilon\sim N(0,\sigma_{0}^{2}I_{L})\) where \(z_{0}=w^{T}\hat{x}\) is the regression prediction, \(w\in\mathbb{R}^{D\times L}\) and \(\epsilon\) are regression weights and regression error, respectively, and \(I_{L}\) is a L-by-L identity matrix.
We assume that F-labels \(\{z_{k}\}_{k=1}^{K}\) obtained from different foundation models are oracles that independently provide noisy estimates of the label embedding \(z\). Formally, we have \(P(z_{k}|z)=N(z,\sigma_{k}^{2}I_{L})\). Without loss of generality, we assume that \(L=1\)
Then the joint probability over noisy labels for a fixed \(n\), That is, for given \(x^{n}\), we have:
\[P(z_{1}^{n},\cdots,z_{K}^{n}|x^{n},w)=\int P(z_{1}^{n},\cdots,z_{K}^{n}|z,x^{n },w)P(z|x^{n},w)dz \tag{4}\]
Due to the independence between \(z_{k}\) and \(x\), using the real label \(z\), we can rewrite it as:
\[P(z_{1}^{n},\cdots,z_{K}^{n}|x^{n},w)=\int P(z_{1}^{n},\cdots,z_{K}^{n}|z,w)P( z|x^{n},w)dz \tag{5}\]
And using the independencies among \(z_{k}\), we have:
\[P(z_{1}^{n},\cdots,z_{K}^{n}|z,w)=\prod_{k=1}^{K}P(z_{K}^{n}|z,\sigma_{1}^{2}, \cdots,\sigma_{k}^{2})=\frac{1}{(2\pi)^{\frac{K}{2}}\prod_{k=0}^{K}\sigma_{k} }\exp^{-\sum_{k=1}^{K}\frac{(z_{k}^{n}-z)^{2}}{2\sigma_{k}^{2}}} \tag{6}\]
Due to \(P(z_{k}|z)=N(z,\sigma_{k}^{2}I_{L})\), we can rewrite it as :
\[P(z_{1}^{n},\ldots,z_{K}^{n}|x^{n},w)=\int\frac{1}{(2\pi)^{\frac{K+1}{2}}\prod _{k=0}^{K}\sigma_{k}}\exp^{-\sum_{k=1}^{K}\frac{(z_{k}^{n}-z)^{2}}{2\sigma_{k} ^{2}}-\frac{(z-z_{0})^{2}}{2\sigma_{0}^{2}}}dy \tag{7}\]
which can be calculated as :
\[P(z_{1}^{n},\ldots,z_{K}^{n}|x^{n},w)=A_{1}\int e^{-A_{2}y^{2}+A_{3}^{n}y-A_{4 }^{n}}dz=A_{1}\sqrt{\frac{\pi}{A_{2}}}e^{\frac{(A_{1}^{n})^{2}}{4A_{2}}-A_{4}^ {n}} \tag{8}\]
where \(A_{1}=\prod_{k=0}^{K}1/\sigma_{k},A_{2}=\sum_{k=0}^{K}1/2\sigma_{k}^{2},A_{3}^ {n}=\sum_{k=0}^{K}z_{k}^{n}/\sigma_{k}^{2}\), and \(A_{4}^{n}=\sum_{k=0}^{K}{(z_{k}^{n})^{2}}/{2\sigma_{k}^{2}}\)
Consider the joint probability over all \(N\) instances, we have:
\[P(z_{1}^{n},\ldots,z_{K}^{n}|X,w)=\prod_{i=1}^{N}A_{1}\sqrt{\frac{\pi}{A_{2}}} e^{\frac{(A_{1}^{n})^{2}}{4A_{2}}-A_{4}^{n}} \tag{9}\]
where \(X\in R^{N\times D}\) denotes the feature matrix, \(N\) is the number of data points and \(D\) is the number of features.
Then given \(N\) data points, the negative log-likelihood is given by
\[-\mathcal{L}=\underbrace{-N\log A_{1}+\frac{N}{2}\log A_{2}}_{\mathcal{L}_{1}} +\frac{1}{2}\underbrace{\sum_{n=1}^{N}(A_{4}^{n}-\frac{(A_{3}^{n})^{2}}{4A_{2} })}_{\mathcal{L}_{2}}+\mathrm{const} \tag{10}\]
where \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are given by
\[\mathcal{L}_{1}=\frac{N}{2}\log\sum_{k=0}^{K}\frac{1}{2\sigma_{k}^{2}}+N\sum_ {k=0}^{K}\log\sigma_{k},\quad\mathcal{L}_{2}=\sum_{n=1}^{N}\{\sum_{k=0}^{K} \frac{(z_{k}^{n})^{2}}{\sigma_{k}^{2}}-\frac{(\sum_{k=0}^{K}z_{k}^{n}/\sigma_{ k}^{2})^{2}}{\sum_{k=1}^{K}1/\sigma_{k}^{2}}\} \tag{11}\]Since \(\mathcal{L}_{1}\) is independent of input data, we focus on \(\mathcal{L}_{2}\). To simplify the notation, we re-denote \(\gamma_{k}=1/\sigma_{k}^{2}\) and \(\Gamma=\sum_{k=1}^{K}\gamma_{k}\). Using this notation, \(\mathcal{L}_{2}\) can be rearranged as:
\[\mathcal{L}_{2} =\sum_{n=1}^{N}\{\gamma_{0}z_{0}^{2}+\sum_{k=1}^{K}\gamma_{k}(z_{k }^{n})^{2}-\frac{(\sum_{k=1}^{K}\gamma_{k}z_{k}^{n}+\gamma_{0}z_{0})^{2}}{ \Gamma+\gamma_{0}}\} \tag{12}\] \[=\sum_{n=1}^{N}\{(\gamma_{0}-\frac{\gamma_{0}^{2}}{\Gamma+\gamma_ {0}})z_{0}^{2}-(\frac{2\Gamma\gamma_{0}}{\Gamma+\gamma_{0}}\sum_{k=1}^{K} \frac{\gamma_{0}}{\Gamma z_{k}^{n}})z_{0}+\sum_{k=1}^{K}\gamma_{k}(z_{k}^{n})^ {2}-(\sum_{k=1}^{K}\gamma_{k}z_{k}^{n})^{2}\}\] (13) \[=\sum_{n=1}^{N}\{\frac{\Gamma\gamma_{0}}{\Gamma+\gamma_{0}}(z_{0}- \sum_{k=1}^{K}\frac{\gamma_{k}}{\Gamma}z_{k}^{n})^{2}+\sum_{k=1}^{K}\gamma_{k}( z_{k}^{n})^{2}-(1+\frac{\gamma_{0}}{\Gamma(\Gamma+\gamma_{0})}(\sum_{k=1}^{K} \gamma_{k}z_{k}^{n})^{2}\}\] (14) \[=\sum_{n=1}^{N}\{\frac{\Gamma\gamma_{0}}{\Gamma+\gamma_{0}}(w^{T} \hat{x}^{n}-\sum_{k=1}^{K}\frac{\gamma_{k}}{\Gamma}z_{k}^{n})^{2}+\sum_{k=1}^{ K}\gamma_{k}(z_{k}^{n})^{2}-(1+\frac{\gamma_{0}}{\Gamma(\Gamma+\gamma_{0})}(\sum_{k=1 }^{K}\gamma_{k}z_{k}^{n})^{2}\} \tag{15}\]
Hence, the negative likelihood in Eqn.(10 can be written as
\[-\mathcal{L}=\frac{\Gamma\gamma_{0}}{\Gamma+\gamma_{0}}\{\underbrace{\frac{1} {2}\sum_{i=1}^{N}(w^{T}\hat{x}^{n}-\sum_{k=1}^{K}\frac{\gamma_{k}}{\Gamma}z_{k }^{n})^{2}}_{s(w,t)}\}+\mathcal{R}(\gamma_{k}) \tag{16}\]
where \(\mathcal{R}(\gamma_{k})=\mathcal{L}_{1}+\sum_{k=1}^{K}\gamma_{k}(z_{k}^{n})^{2 }-(1+\frac{\gamma_{0}}{\Gamma(\Gamma+\gamma_{0})}(\sum_{k=1}^{K}\gamma_{k}z_{k }^{n})^{2}\). The computational intractability of Eqn.(16) comes from the regularization term \(\mathcal{R}(\gamma_{k})\). Note that the coefficient \(\frac{\Gamma\gamma_{0}}{\Gamma+\gamma_{0}}>0\) and \(\sum_{k=1}^{K}\frac{\gamma_{k}}{\Gamma}=1\). By removing regularizer \(\mathcal{R}(\gamma_{k})\) and positive scale parameter \(\frac{\Gamma\gamma_{0}}{\Gamma+\gamma_{0}}\), the minimization of negative log-likelihood can be approximately treated as a weighted linear square regression, as given by
\[\min_{w\in\mathbb{R}^{D\times 1},t\in\triangle^{K-1}}s(w,t)=\frac{1}{2}\|Xw-Zt \|_{2}^{2} \tag{17}\]
In Eqn.(17), \(X\in\mathbb{R}^{N\times D}\) is the data matrix whose \(n\)-th row is model feature \((\hat{x}^{n})^{T}\), \(w\in\mathbb{R}^{D\times 1}\) are weight parameters, \(Z\in\mathbb{R}^{N\times K}\) is F-Label matrix whose \(k\)-th column is the label embedding \(z_{k}\), and \(t\in\mathbb{R}^{K\times 1}\) satisfies that \(1_{K}^{T}t=1,t\geq 0\) which is a \((K-1)\)-D simplex denoted as \(\triangle^{K-1}\).
### Convergence Analysis and Proof Outline
We will prove the convergence property of the function value. Indeed, we demonstrate a stronger condition that the function value decreases after each round of iterations on \(w\) and \(t\). From the monotone convergence theorem, the convergence can thus be derived. For other convergence properties of alternating minimization, readers can refer to the literature [72], which can be of independent interest.
In the proof, we exploit the smoothness of the function and design a projection gradient descent method with sufficient decrease for the constraint optimization problem. The sufficient decrease in the unconstrained problem is a direct corollary.
**Definition 1**.: _A function \(f(x):\mathbb{R}^{d}\rightarrow\mathbb{R}\) is said to be \(\beta\)-smooth with constant \(\beta\) if_
\[|\nabla f(x)-\nabla f(y)|\leq\beta\|x-y\|,\forall x,y\in\mathbb{R}^{d}.\]
**Lemma 1**.: _Suppose \(X\) is the simplex constraint, and \(y\in\mathbb{R}^{d}\), \(\Pi\) denotes the projection operator. Then the inequality holds_
\[(\Pi_{X}(y)-x)^{T}(\Pi_{X}(y)-y)\leq 0.\]
Proof.: For the projection \(\Pi_{X}(y)\), it is a convex optimization problem and can be formulated as
\[\min_{x}f(x)=\|x-y\|_{2}^{2},\]
where \(x^{T}1=1\) and \(x>0\). We denote \(x^{\star}\) as the optimal solution to the problem. For the convex optimization problem, it holds for all \(x\in\mathbb{R}^{d}\) that
\[\nabla f(x^{\star})^{T}(x^{\star}-x)\leq 0.\]Therefore we can derive
\[2(x^{\star}-y)^{T}(x^{\star}-x)\leq 0.\]
The lemma is proved.
**Lemma 2**.: _Let \(f\) be a \(\beta\)-smooth function. For any \(x,y\in\operatorname{dom}(f)\)_
\[\left|f(x)-f(y)-\nabla f(y)^{T}(x-y)\right|\leq\|x-y\|^{2}.\]
Proof.: \[\left|f(x)-f(y)-\nabla f(y)^{T}(x-y)\right| =\left|\int_{0}^{1}\nabla f(y+t(x-y))^{T}(x-y)dt-\nabla f(y)^{T}( x-y)\right|\] \[\leq\int_{0}^{1}\|\nabla f(y+t(x-y))-\nabla f(y)\|\|x-y\|dt\] \[\leq\int_{0}^{1}\beta t\|x-y\|^{2}dt=\frac{\beta}{2}\|x-y\|^{2}.\]
The last inequality holds because \(f\) is a \(\beta\)-smooth function.
**Lemma 3**.: _Suppose the function \(f\) is the \(\beta\)-smooth function, and \(X\) is the simplex constraint. For any \(x,y\in X\), let \(x^{+}=\Pi_{X}(x-\frac{1}{\beta}\nabla f(x))\) and \(g_{X}(x)=\beta(x-x^{+})\). Then the inequality holds_
\[f(x^{+})-f(y)\leq g_{X}(x)^{T}(x-y)-\frac{1}{2\beta}\|g_{X}(x)\|^{2}.\]
Proof.: Using Lemma. 1, we have
\[(x^{+}-(x-\frac{1}{\beta}\nabla f(x)))^{T}(x^{+}-y)\leq 0.\]
which is equivalent to
\[\nabla f(x)^{T}(x^{+}-y)\leq g_{X}(x)^{T}(x^{+}-y).\]
By using Lemma. 2 and the fact \(f(x^{+})-f(y)=f(x^{+})-f(x)+f(x)-f(y)\), we have
\[f(x^{+})-f(y) \leq\nabla f(x)^{T}(x^{+}-x)+\frac{\beta}{2}\|x^{+}-x\|^{2}+ \nabla f(x)^{T}(x-y)\] \[=\nabla f(x)^{T}(x^{+}-y)+\frac{1}{2\beta}\|g_{X}(x)\|^{2}\] \[\leq g_{X}(x)^{T}(x^{+}-y)+\frac{1}{2\beta}\|g_{X}(x)\|^{2}\] \[=g_{X}(x)^{T}(x^{+}-x+x-y)+\frac{1}{2\beta}\|g_{X}(x)\|^{2}\] \[=g_{X}(x)^{T}(x^{+}-x)+g_{X}(x)^{T}(x-y)+\frac{1}{2\beta}\|g_{X} (x)\|^{2}\] \[=g_{X}(x)^{T}(x-y)-\frac{1}{\beta}\|g_{X}(x)\|^{2}+\frac{1}{2 \beta}\|g_{X}(x)\|^{2}\] \[=g_{X}(x)^{T}(x-y)-\frac{1}{2\beta}\|g_{X}(x)\|^{2}.\]
**Theorem 3**.: _Suppose \(s(w,t)=\frac{1}{2}\|Xw-Zt\|_{F}^{2}\) where \(X\in\mathbb{R}^{N\times D}\), \(Z\in\mathbb{R}^{N\times K}\), \(w\in\mathbb{R}^{D\times 1}\) and \(t\in\triangle^{K-1}\), the inner loop of \(t\) in Algorithm lines 7 - \(10\) decreases after each iteration. Specifically, denote \(\beta=1/\|2Z^{T}Z\|\) and \(t^{+}=\Pi_{\triangle^{K-1}}(t-\beta\nabla s(w,t))\). For any \(t\in\triangle^{K-1}\), \(s(w,t^{+})-s(w,t)\leq-\frac{1}{2\beta}\|t-t^{+}\|^{2}\leq 0\)._Proof.: Since we fix parameter \(w\) and consider the optimization problem of variable \(t\), we denote \(s(t)=s(w,t)\), which leads to \(\nabla s(t)=-2Z^{T}(Xw^{*}-Zt)\). For any \(t_{1},t_{2}\in\operatorname{dom}(s)\)
\[\|\nabla s(t_{1})-\nabla s(t_{2})\|=\|2Z^{T}Zt_{1}-2Z^{T}Zt_{2}\|\leq\|2Z^{T}Z \|\|t_{1}-t_{2}\|.\]
According to the definition 1, it shows that the \(f(t)\) is \(\beta\)-smooth, where \(\beta=\|2Z^{T}Z\|\). We denote \(t\in\triangle^{K-1}\) to be the initial point and \(t^{+}\) to be the result of one iteration of \(t\), where \(t^{+}=\Pi_{\triangle^{K-1}}(t-\frac{1}{\beta}\nabla f(t))\). From Lemma 3, we can replace \(x^{+},y\) and \(x\) with \(t^{+},t\), and \(t\), repsectively. In this way, the inequality holds
\[0\leq s(t^{+})\leq s(t)-\frac{1}{2\beta}\|\beta(t-t^{+})\|^{2}\leq s(t)\]
Therefore, according to **Monotone convergence theorem**, the function value of the iterative algorithm will converge.
**Theorem 4**.: _Suppose \(s(w,t)=\frac{1}{2}\|Xw-Zt\|_{2}^{2}\) where \(X\in\mathbb{R}^{N\times D}\), \(Z\in\mathbb{R}^{N\times K}\), \(w\in\mathbb{R}^{D\times 1}\) and \(t\in\triangle^{K-1}\), the function value in Algorithm will be convergent. Specifically, denote \(w^{*},t^{*}\) as the result after one iteration of \(w,t\) respectively, we have \(0\leq s(w^{*},t^{*})\leq s(w^{*},t)\leq s(w,t)\)._
Proof.: In the first step, we denote \(t\in\triangle^{K-1}\) is the initial point, then use gradient descent algorithm to calculate \(w^{*}\). Since the optimization problem for \(w\) is a convex optimization problem and use lemma 2, the decreasing property for the gradient part can be derived. That is, for each \(w\in\mathbb{R}^{D\times 1}\), we have \(s(w^{*},t)\leq s(w,t)\). In the second step, we fix \(w\) as \(w^{*}\), from Theorem 3, we have \(s(w^{*},t^{*})\leq s(w^{*},t)\). Therefore, the value of \(s(w,t)\) satisfies: \(0\leq s(w^{*},t^{*})\leq s(w^{*},t)\leq s(w,t)\), from **Monotone convergence theorem**, \(s(w,t)\) converges to the limiting point. As shown above, the overall convergence of our algorithm is guaranteed.
## Appendix B Experiment
In this section, we present detailed descriptions of datasets in Sec. B.2, pre-trained models and baselines in Sec. B.3, and ground-truth scores in Sec. B.4 in various target tasks. More ablation studies can be found in Sec. C.
**Foundation Models.** On image classification, image captioning, referring expression comprehension, and visual question answering, we use foundation models CLIP [26], BERT [43] and GPT-2 [27]. On text question answering, we use foundation models GPT-2 [27], BART [73], and ELECTRA [74]. CLIP was trained on a large dataset of images and their corresponding captions, which can understand the relationship between images and text. BERT is a pre-trained language model that can understand and generate natural language. GPT-2 was trained on a large corpus of text and can be fine-tuned for specific tasks such as text completion and text summarization. Bart is a sequence-to-sequence model, which is both auto-regressive and bidirectional. Electra is a different type of language model that key idea is to pre-train a generator model to produce fake data and shows promising results in various NLP tasks.
**Interpretation of weighted Kendall's tau.** The Kendall's \(\tau\) represents the ratio of concordant pairs minus discordant pairs when enumerating all pairs of \(\{T_{m}\}_{m=1}^{M}\) and \(\{G_{m}\}_{m=1}^{M}\) as given by
\[\tau=\frac{2}{M(M-1)}\sum_{1\leq i<j\leq M}\operatorname{sgn}(G_{i}-G_{j}) \operatorname{sgn}(T_{i}-T_{j}) \tag{18}\]
where \(\operatorname{sgn}(x)\) returns \(-1\) if \(x<0\) and \(1\) otherwise. In this work, a weighted version of Kendall's \(\tau\), denoted as \(\tau_{w}\), is employed to assess transferability metrics considering that a top-performing model is always preferred for target tasks in transfer learning. In principle, a larger \(\tau_{w}\) implies the transferability metric can rank pre-trained models better. And if a metric can rank top-performing models better, \(\tau_{w}\) would be also larger. We also use other measurements to assess the performance of transferability metrics in Table 17 of Sec. C.
### More experimental results
#### b.1.1 Performance on Image Classification with CNN Models
**Performance and wall-clock time comparison.** We compare EMMS with previous LEEP, NLEEP, LogME, and TransRate. As shown in Table.7, our EMMS achieve the best average \(\tau_{w}\) on 11 target datasets and the best \(\tau_{w}\) on 6 target datasets. Compared to NLEEP, which is the most effective other than EMMS, we have almost 1/40 of the time of NLEEP.
#### b.1.2 Performance on Visual Question Answering
To further demonstrate the generality of EMMS in multi-model tasks, we show how EMMS can work for VQA. We follow previous practice ([24]) which treats VQA as a classification task (vocab-based VQA). That is, we construct a vocabulary based on the top answers in the training sets and classify them into some of those labels. The models to be selected and the architecture is the same as in the image captioning.
**Performance and wall-clock time comparison.** As shown in Table.8, EMMS is clearly ahead of PACTran in terms of results and time, proving that EMMS has the ability to handle multi-modal tasks very well. We can find that EMMS outperforms PACTran on all datasets. In particular, EMMS achieves 93.8% and 93.7% gain over PACTran on the COCO-QA and CLEVR datasets with rank correlation \(\tau_{w}\) while reducing time consumption by 75.1% and 34.3% respectively compared to Pactran. This indicates that EMMS performs well on both ordinary VQA datasets(DAQUAR, COCO-QA) as well as VQA datasets(CLEVR) that focus on inference capabilities.
#### b.1.3 Regression
In addition to image classification and a variety of multi-modal tasks, here we show that EMMS can also be used for regression tasks. The daatasets for regression task we use is CUB200 [75] and IIIT
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. \\ \hline \multicolumn{10}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline LEEP & -0.234 & 0.605 & 0.367 & 0.824 & 0.677 & 0.486 & -0.243 & 0.491 & 0.389 & 0.722 & 0.371 & 0.409 \\ LogME & 0.506 & 0.435 & **0.576** & 0.852 & 0.677 & 0.647 & 0.111 & 0.385 & 0.411 & 0.487 & 0.669 & 0.509 \\ NLEEP & -0.41 & **0.614** & 0.265 & 0.818 & 0.805 & **0.796** & 0.122 & 0.214 & **0.753** & **0.925** & 0.687 & 0.611 \\ TransRate & 0.172 & 0.269 & 0.172 & 0.513 & 0.197 & 0.336 & -0.176 & -0.071 & 0.173 & 0.612 & 0.651 & 0.236 \\ EMMS(One) & 0.481 & 0.546 & 0.304 & 0.963 & 0.804 & 0.701 & 0.498 & 0.588 & 0.574 & 0.638 & 0.707 & 0.618 \\ EMMS & **0.556** & 0.562 & 0.565 & **0.963** & **0.840** & 0.720 & **0.498** & **0.608** & 0.604 & 0.667 & **0.735** & **0.664** \\ \hline \multicolumn{10}{c}{Wall-Clock Time (s)} \\ \hline LEEP & 5.1 & 4.9 & 8.3 & 22.3 & 23.8 & 3.5 & 3.8 & 37.1 & 3.9 & 21.1 & 4.8 & 10.4 \\ LogME & 30.36 & 31.24 & 56.26 & 0.934 & 188.3 & 15.16 & 22.27 & 33.543 & 17.55 & 180.01 & 20.05 & 289.64 \\ NLEEP & 253.8 & 488.7 & 973.8 & 1.1e4 & 1.7e4 & 14.60 & 294.0 & 2.0e4 & 580.8 & 8.63 & 678.8 & 545.9 \\ TransRate & 147.90 & 163.41 & 300.29 & 65.25 & 193.64 & 75.48 & 166.24 & 195.92 & 60.53 & 430.33 & 18.72 & 165.24 \\ EMMS(One) & 17.43 & 20.53 & 35.22 & 70.01 & 78.24 & 12.75 & 18.04 & 116.23 & 15.04 & 70.98 & 18.42 & 42.99 \\ EMMS & 65.85 & 63.49 & 79.79 & 245.49 & 295.37 & 46.38 & 63.52 & 417.80 & 59.64 & 173.59 & 64.60 & 143.2 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison of different transferability metrics on CNN models regarding \(\tau_{w}\) and the wall-clock time where EMMS(One) denotes EMMS with the one-hot label. Our proposed EMMS achieves the best transfer-ability assessment over 11 target tasks and exhibits higher efficiency than NLEEP.
\begin{table}
\begin{tabular}{c c c c|c c c} \hline \hline & DAQUAR & COCO-QA & CLEVR & DAQUAR & COCO-QA & CLEVR \\ \hline \multicolumn{10}{c}{Weighted Kendall’s tau \(\tau_{w}\)} & \multicolumn{3}{c}{Wall-Clock Time (s)} \\ \hline LogME & 0.586 & 0.591 & 0.281 & 116.72 & 716.35 & 4665.06 \\ PACTran(Dir) & 0.671 & 0.296 & 0.347 & 633.16 & 1169.91 & 428.03 \\ PACTran(Gam) & 0.595 & 0.419 & 0.319 & 614.23 & 1061.72 & 428.49 \\ PACTran(Gau) & 0.478 & 0.378 & 0.415 & 637.39 & 1075.88 & 418.34 \\ EMMS & **0.712** & **0.812** & **0.804** & **50.54** & **263.72** & **274.56** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparison of different transferability metrics on VQA models in rank correlation \(\tau_{w}\) with the ground truth and the wall-clock time. The LogME denotes using LogME with F-Label. Our proposed EMMS performs better than PACTran head over 3 target tasks with much less time.
Pets [54]. The input is an image containing various birds and pets, respectively. We need to predict the coordinates of the bird's or pet's bounding box in the image and mean square error (MSE) on the test data is the ground-truth. The pre-trained models used are the same as the image classification task with CNN models and the only baseline is LogME. We extract F-Labels using Bert and RoBerta.
As shown in Table 9, EMMS significantly outperforms LogME, with 29.5% and 13.9% performance improvement on CUB and Pets respectively.
### Descriptions of Datasets
#### b.2.1 Image Classification
For image classification, we adopt 11 classification benchmarks, including FGVC Aircraft [47], Caltech-101 [48], Stanford Cars [49], CIFAR-10 [50], CIFAR-100 [50], DTD [51], Oxford 102 Flowers [52], Food-101 [53], Oxford-IIIT Pets [54], SUN397 [55], and VOC2007 [56]. These datasets cover a broad range of classification tasks, which include scene, texture, and coarse/fine-grained image classification, which are widely used in transfer learning. In particular, CF10 and VOC2007 are typical coarse-grained classification datasets, Aircraft, and Cars are typical fine-grained classification datasets, and CF100 contains both coarse- and fine-grained classifications.
#### b.2.2 Image Captioning
For image captioning, We use Flickr8k [57], Flickr30k [58], FlickrStyle10K-Humor [59], FlickrStyle10K-Romatic [59] and RSICD [60]. Among them, Flickr8k and Flickr30k have commonly used image captioning datasets for natural images and have no emotional color; RSICD is a commonly used image captioning dataset in remote sensing; Flickr10k-H and Flickr10k-R are also image captioning datasets for natural images, but their images are depicted with humorous and romantic emotional colors, respectively.
#### b.2.3 Visual Question Answering
For visual question answering, we apply COCOQA [61], DAQUAR [62] and CLEVR [63].Among them, DAQUAR is an early VQA dataset on real images; CLEVR is a synthetic dataset, which is a visual scene composed of some simple geometric shapes, focusing on evaluating the inference ability of VQA models; the questions and answers of COCO-QA are generated by NLP algorithms, and the images are from the COCO dataset, which is also a commonly used VQA dataset.
#### b.2.4 Text Question Answering
For text question answering, we separately use SQuAD1.1 [64],SQuAD2.0 [65], which are collections of question-answer pairs derived from Wikipedia articles and are widely used in text question answer.
#### b.2.5 Referring Expression Comprehension
For referring expression comprehension, we separately use RefCOCO [66], RefCOCO+ [66] and RefCOCOg [67].Specifically, RefCOCO includes instances where there is only one object of its kind in the image, while RefCOCO+ includes instances where multiple objects of the same kind exist in the image.
\begin{table}
\begin{tabular}{l l l} \hline \hline & CUB & Pets \\ \hline \hline \multicolumn{3}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline LogME & 0.464 & 0.437 \\ EMMS & **0.601** & **0.498** \\ \hline \hline \end{tabular}
\end{table}
Table 9: Comparison of different transferability metrics on regression models in rank correlation \(\tau_{w}\) with ground truth.
### Pre-trained Models and Baselines
#### b.3.1 Image Classification
**Pre-trained Models.** For **CNN-based** models, We select 11 widely-used CNN models including ResNet-34 [6], ResNet-50 [6], ResNet-101 [6], ResNet-152 [6], DenseNet-121 [7], DenseNet-169 [7], DenseNet-201 [7], MNet-A1 [76], MobileNetV2 [77], GoogleNet [8], and InceptionV3 [78]. All these models are trained on ImageNet dataset [1], which are widely used within the field of migration learning. For **ViT-based** models, we collect 10 ViT models including ViT-T [9], ViT-S [9], ViT-B [9], DINO-S [79], MoCov3-S [80], PVTV2-B2 [11], PVT-T [11], PVT-S [11], PVT-M [11], and Swin-T [10], which are widely used in various vision tasks. Besides, we append EMMS with one-hot label, which degenerates to a linear regression whose label is the one-hot vector. We fine-tune these models on the 11 target datasets to obtain the ground truth.
**Comparison Baselines.** Here we use some of the latest methods as baselines, including LEEP [12], NLEEP [31], LogME [13], and TransRate [23], which have been experimented with model selection on image classification tasks.
#### b.3.2 Image Captioning
**Pre-trained Models.** We use a classic and effective image captioning model architecture, which contains an image encoder and a language encoder to extract the features of the image and the corresponding caption, then fuses the image feature and the text feature and input it to the classifier. We aim to choose the best combination of image encoder and language encoder. Besides, We finetune each model in COCO Caption [81] and use these as the pre-trained models.
Specifically, We separately use ViT-B [9],Swin-B [10], Swinv2-B [82] as image encoder and Bert [43], Roberta [68], Bart [73] as language encoder, and use VisionEncoderDecoderModel from HuggingFace as the model architecture. Following the setting in PACTran [24], We finetune the model in COCO Caption [81] and use these as the pre-trained models. Following common practice( [83]), we treat image captioning as a vocab-based classification task. That is we use a vocabulary and classify the caption into the index of some words in the vocabulary. Afterward, training is done according to the classification task criteria.
**Comparison Baselines.** In this common setup, each caption is converted to a matrix \(Y\in R^{L\times N}\), where \(L\) denotes the length of the caption after padding or truncation and \(N\) denotes the size of the vocabulary, and each row in the matrix is a one-hot vector. Since \(N\) is generally very large, Existing model selection metrics do not scale to this case due to the huge amount of time spent. The only baseline we use is to model the fused feature with F-label using LogME since only LogME can handle the regression task. Here we calculate the average \(\tau_{w}\) and time of it with \(K\) single F-label from \(K\) foundation models we use respectively.
#### b.3.3 Visual Question Answering
**Pre-trained Models.** The model architecture and the model selection settings are the same as in the image captioning, Following the setting in PACTran [24], here we use the model after finetune on VQA-v2 [83] as the pre-trained model waiting for selection and treat VQA as a vocab-based classification task.
**Comparison Baselines.** Here we calculate the average \(\tau_{w}\) and time of it with \(K\) single F-label from \(K\) foundation models we use respectively. And in addition to that, the three methods proposed in PACTran [24] are added here, which are the only methods currently applied to VQA tasks.
#### b.3.4 Text Question Answering
**Pre-trained Models.** The selected models include BERT-Large [43], RoBERTa-Large [68], XLNet-Large [84], DeBERTa [85] (XLarge), DeBERTa-V2 [85] (XLarge and XXLarge), DeBERTa-V3 [86] (Base, Small, XSmall). More specifically, we simultaneously input the question and passage into the aforementioned models, utilizing the distinctive symbol [SEP] to demarcate them. By stacking the predicted head onto each model, we could further fine-tune the model such that it can predict the start and end positions of the answer within the passage. This is achieved by using two binary classifiers, where one is dedicated to identifying the start position and the other to pinpointing the end.
**Comparison Baselines.** Here we calculate the average \(\tau_{w}\) and time of it with F-labels from \(K\) foundation models respectively.
#### b.3.5 Referring Expression Comprehension
**Pre-trained Models.** The candidate multi-modal architectures considered for REC task incorporate Blip [87], ALBEF [88], CLIP [26] (ViT-B-32, ViT-B-16, ViT-L-14, ViT-L-14-336, RN50), OFA [89] (Base, Large, Huge). In practice, we respectively extract the visual and textual representations from each of these models and feed them into a multi-modal interaction module followed by a stacked detection head, and further fine-tune the model to generate the ground truth of model selection.
**Comparison Baselines.** Here we calculate the average \(\tau_{w}\) and time of LogME with \(K\) single F-label from \(K\) foundation models we use respectively.
### Fine-tuning Score on Various Target Tasks
#### b.4.1 Image Classification
**Fine-tuning Details.** The ground truth of the problem of pre-trained model ranking is to fine-tune all pre-trained models with a hyper-parameters sweep on target datasets. Given the model and the target dataset, two of the most important parameters would be learning rate and weight decay in optimizing the model [90]. Therefore, we carefully fine-tune pre-trained models with a grid search of learning rate in \(\{1e-1,1e-2,1e-3,1e-4\}\) and weight decay in \(\{1e-3,1e-4,1e-5,1e-6,0\}\). And using SGD optimizer. After determining the best hyper-parameters candidate, we fine-tune the pre-trained model on the target dataset with the candidate and then obtain the test accuracy as the ground truth. We use a Tesla V100 with a batch size of \(128\) to perform finetuning. All input images are resized to \(224\times 224\). To avoid random error, we repeat the above fine-tuning procedure three times and take an average to obtain the final fine-tuning accuracy. For reference, we list the fine-tuning accuracy of supervised CNN models in Table.10, and vision transformer models in Table 11, respectively.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC \\ \hline ResNet-34 & 84.06 & 91.15 & 88.63 & 96.12 & 81.94 & 72.96 & 95.2 & 81.99 & 93.5 & 61.02 & 84.6 \\ ResNet-50 & 84.64 & 91.98 & 89.09 & 96.28 & 82.8 & 74.72 & 96.26 & 84.45 & 93.88 & 63.54 & 85.8 \\ ResNet-101 & 85.53 & 92.38 & 89.47 & 93.97 & 84.88 & 74.8 & 96.53 & 85.89 & 93.22 & 63.76 & 85.68 \\ ResNet-152 & 86.29 & 93.1 & 89.88 & 97.53 & 85.66 & 76.44 & 96.86 & 86.28 & 94.42 & 64.82 & 86.32 \\ DenseNet-121 & 84.66 & 91.5 & 89.34 & 96.45 & 82.75 & 74.18 & 97.02 & 84.99 & 93.07 & 63.26 & 85.28 \\ DenseNet-169 & 84.19 & 92.51 & 89.02 & 96.77 & 84.26 & 74.72 & 97.32 & 85.84 & 93.62 & 64.1 & 85.77 \\ DenseNet-201 & 85.38 & 93.14 & 89.44 & 97.02 & 84.88 & 76.04 & 97.1 & 86.71 & 94.03 & 64.57 & 85.67 \\ MNet-A1 & 66.48 & 89.34 & 72.58 & 92.59 & 72.04 & 70.12 & 95.39 & 71.35 & 91.08 & 56.56 & 81.06 \\ MobileNetV2 & 79.68 & 88.64 & 86.44 & 94.74 & 78.11 & 71.72 & 96.2 & 81.12 & 91.28 & 60.29 & 82.8 \\ Googlenet & 80.32 & 90.85 & 87.76 & 95.54 & 79.84 & 72.53 & 95.76 & 79.3 & 91.38 & 59.89 & 82.58 \\ InceptionV3 & 80.15 & 92.75 & 87.74 & 96.18 & 81.49 & 72.85 & 95.73 & 81.76 & 92.14 & 59.98 & 83.84 \\ \hline \hline \end{tabular}
\end{table}
Table 10: The fine-tuning accuracy of supervised CNN models on \(11\) target tasks.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC \\ \hline ViT-T & 71.26 & 89.39 & 82.09 & 96.52 & 81.58 & 71.86 & 95.5 & 81.96 & 91.44 & 58.4 & 83.1 \\ ViT-S & 73.12 & 92.7 & 86.72 & 97.69 & 86.62 & 75.08 & 96.79 & 86.26 & 94.02 & 64.76 & 86.62 \\ ViT-B & 78.39 & 93.47 & 89.26 & 98.56 & 89.96 & 77.66 & 97.98 & 88.96 & 94.61 & 68.62 & 87.88 \\ PVT-B2 & 84.14 & 93.13 & 90.6 & 97.96 & 88.24 & 77.16 & 97.89 & 88.67 & 93.86 & 66.44 & 86.44 \\ PVT-T & 69.76 & 90.04 & 84.1 & 94.87 & 75.26 & 72.92 & 95.85 & 83.78 & 91.48 & 61.86 & 84.6 \\ PVT-S & 75.2 & 93.02 & 87.61 & 97.34 & 86.25 & 75.77 & 97.32 & 86.98 & 94.13 & 65.78 & 86.62 \\ PVT-M & 76.7 & 93.75 & 87.66 & 97.93 & 87.36 & 77.1 & 97.36 & 85.56 & 94.48 & 67.22 & 87.36 \\ Swin-T & 81.9 & 91.9 & 88.93 & 97.34 & 85.97 & 77.04 & 97.4 & 86.67 & 94.5 & 65.51 & 87.54 \\ MoCov3-S & 76.04 & 89.84 & 82.18 & 97.92 & 85.84 & 71.88 & 93.89 & 82.84 & 90.44 & 60.6 & 81.84 \\ DINO-S & 72.18 & 86.76 & 79.81 & 97.96 & 85.66 & 75.96 & 95.96 & 85.69 & 92.59 & 64.14 & 84.8 \\ \hline \hline \end{tabular}
\end{table}
Table 11: The fine-tuning accuracy of vision transformer models on \(11\) target tasks.
#### b.4.2 Image Captioning and Visual Question Answering
**Fine-tuning Details.** The setting of finetune here is approximately the same as in image classification. We carefully fine-tune pre-trained models with a grid search of learning rate in \(\{1e-4,1e-5,1e-6\}\) and weight decay in \(\{1e-4,1e-5,1e-6\}\). And using AdamW optimizer. After determining the best hyper-parameters candidate, we fine-tune the pre-trained model on the target dataset with the candidate and then obtain the test BLEU-4 and accuracy as the ground truth. However, since Flickr10k-H and Flickr10k-R do not provide a test set, we use a 6:1 ratio to divide the original training set of 7000 images into a training set and a test set. For visual question answering, Due to the lack of a test set for CLEVR dataset, we also assign its training set as training set and test set in the ratio of 6:1. We use an Nvidia A100 with a batch size of \(64\) to perform finetuning. All input images are resized to \(224\times 224\). To avoid random error, we repeat the above fine-tuning procedure three times and take an average to obtain the final fine-tuning accuracy. For inference, We use BLEU-4 as the score for the model with image captioning and accurracy as the score for the model with VQA. we list result of image captioning models in Table.12, and visual question answering models in Table 13, respectively.
#### b.4.3 Text Question Answering
**Fine-tuning Details.** The accuracy of most models in TQA is provided by DeBERTa [85, 86], except for DeBERTa-V3 [86](Base, Small, XSmall). Following the setting of Bert [43], we finetune these models with a batch size of \(24\) for \(2\) epochs. We use AdamW optimizer with an initial learning rate of \(3e-5\), polynomial decay. The Dev F1 score is used for pre-trained model ranking. All experiments are implemented on an NVIDIA Tesla A100 GPU.
#### b.4.4 Referring Expression Comprehension
**Fine-tuning Details.** For referring expression comprehension, the standard metric [email protected] on the validation set is used as the ground truth. For finetuning, we use a batch size of \(128\) with a resolution of \(512\times 512\) for each image. We finetune the models on each dataset for 12 epochs with a learning rate of \(\{3e-5,5e-5\}\) and weight decay in \(\{1e-3,1e-5\}\) using Adam optimizer. The best performance on the validation set for each task is reported among these hyper-parameters. Table 15 shows the performance of referring expression comprehension models.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & DAQUAR & COCO-QA & CLEVR \\ \hline Vi-Bert & 25.01 & 55.11 & 59.29 \\ Vit-Roberta & 26.38 & 57.30 & 62.80 \\ Vit-Bart & 26.30 & 59.60 & 64.98 \\ Swinvit-Bert & 28.05 & 61.72 & 68.25 \\ Swinvit-Roberta & 27.75 & 62.81 & 66.09 \\ Swinvit-Bart & 27.06 & 60.62 & 67.17 \\ Swinvit-Bert & 26.45 & 63.1 & 67.4 \\ Swinvit-Roberta & 26.33 & 66.54 & 65.91 \\ Swinvit-Bart & 26.25 & 64.4 & 70.34 \\ \hline \hline \end{tabular}
\end{table}
Table 13: The fine-tuning accuracy of visual question answering models on \(3\) target tasks.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & SQu1.1 & SQu2.0 \\ \hline BERT-Large & 90.9 & 81.8 \\ RoBERTa-Large & 94.6 & 89.4 \\ XLNet-Large & 95.1 & 90.6 \\ DeBERTa-Large & 95.5 & 90.7 \\ DeBERTa-V2-XLarge & 95.8 & 91.4 \\ DeBERTa-V2-XXLarge & 96.1 & 92.2 \\ DeBERTa-V3-Base & 93.9 & 88.4 \\ DeBERTa-V3-Small & 89.8 & 82.9 \\ DeBERTa-V3-XSmall & 91.5 & 84.8 \\ \hline \hline \end{tabular}
\end{table}
Table 14: The standard metric the Dev F1 score of text question answering models on \(2\) target tasks.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & F8k & F30k & RSD & F10k-H & F10k-R \\ \hline Vi-Bert & 18.51 & 26.65 & 31.39 & 5.31 & 5.18 \\ Vit-Roberta & 20.53 & 23.70 & 29.92 & 5.88 & 5.48 \\ Vit-Bart & 21.90 & 25.13 & 31.35 & 5.75 & 5.53 \\ Swinvit-Bert & 22.91 & 26.61 & 33.54 & 6.24 & 5.67 \\ Swinvit-Roberta & 23.99 & 28.84 & 33.07 & 7.11 & 5.49 \\ Swinvit-Bart & 24.68 & 28.03 & 32.99 & 6.10 & 5.95 \\ Swinvit-Bert & 25.69 & 31.33 & 35.45 & 5.86 & 5.49 \\ Swinvit-Roberta & 23.40 & 28.81 & 36.22 & 6.80 & 7.13 \\ Swinvit-Bart & 26.24 & 30.35 & 34.72 & 7.90 & 5.96 \\ \hline \hline \end{tabular}
\end{table}
Table 12: The fine-tuning BLEU-4 of image captioning models on \(5\) target tasks.
#### b.4.5 Regression
**Fine-tuning Details.** For regression, mean square error (MSE) on the test data is the ground truth. For finetuning, we use a batch size of \(64\) with resolution of \(224\times 224\) for each image. we carefully fine-tune pre-trained models with a grid search of learning rate in \(\{1e-1,1e-2,1e-3,1e-4\}\) and weight decay in \(\{1e-3,1e-4,1e-5,1e-6,0\}\) with SGD optimizer. The fine-tuning MSE on test set of models used in regression is in Table 16
## Appendix C More Ablation Analysis
**The Efftiveness of EMMS under Various Measurements.** In addition to weighted Kendall's tau, we employ various other measures to evaluate our EMMS. These include Kendall's tau (\(\tau\)), Pearson's correlation (\(r\)), weighted Pearson's correlation (\(r_{w}\)), and top-\(k\) relative accuracy, denoted as Rel@\(k\), which represents the ratio between the best fine-tuning accuracy achieved on the downstream task using the top-k ranked models and the best fine-tuning precision achieved with all models. We test the robustness of our transferability metrics to different measurements on the Flickr8k and RSICD datasets for image captioning tasks, as shown in Table 17. Our EMMS consistently outperforms the previous transferability metric, including LogME and TransRate. Under the aforementioned measurements, demonstrating the superiority of our EMMS.
\begin{table}
\begin{tabular}{l c c} \hline \hline & CUB & Pets \\ \hline ResNet-34 & \(4.114e-4\) & \(4.245e-5\) \\ ResNet-50 & \(3.521e-4\) & \(4.489e-5\) \\ ResNet-101 & \(2.746e-4\) & \(3.224e-5\) \\ ResNet-152 & \(2.539e-4\) & \(2.775e-5\) \\ DenseNet-121 & \(5.354e-4\) & \(1.096e-4\) \\ DenseNet-169 & \(4.787e-4\) & \(9.469e-5\) \\ DenseNet-201 & \(4.651e-4\) & \(1.058e-4\) \\ MNet-A1 & \(1.1475e-3\) & \(1.878e-4\) \\ MobileNetV2 & \(6.253e-4\) & \(9.510e-5\) \\ Googlenet & \(7.192e-4\) & \(1.197e-4\) \\ InceptionV3 & \(6.174e-4\) & \(9.633e-5\) \\ \hline \hline \end{tabular}
\end{table}
Table 16: The fine-tuning MSE on test set of models used in regression on \(2\) target tasks.
\begin{table}
\begin{tabular}{|c|c c c c c c c|c c c c c c|} \hline \hline Data & Method & Rel@1 & Rel@3 & \(r\) & \(r_{w}\) & \(\tau\) & \(\tau_{w}\) & Data & Method & Rel@1 & Rel@3 & \(r\) & \(r_{w}\) & \(\tau\) & \(\tau_{w}\) \\ \hline \multirow{2}{*}{2*} \({}^{\text{2*}}\)F8k} & LogME & 0.928 & **1.0** & 0.735 & 0.799 & 0.537 & 0.483 & 2*RSD & LogME & 0.957 & **1.0** & 0.727 & 0.708 & 0.518 & 0.501 \\ & EMMS & **1.0** & **1.0** & **0.741** & **0.823** & **0.667** & **0.660** & & & & & & & & \\ \hline \multirow{2}{*}{3} \({}^{\text{3}}\)Aircraft} & LogME & 0.852 & 0.993 & 0.407 & 0.060 & 0.378 & 0.299 & 3*DTD & LogME & **0.992** & **1.0** & 0.641 & 0.694 & 0.556 & 0.569 \\ & TransRate & **0.926** & **0.967** & 0.457 & 0.499 & 0.289 & 0.244 & & TransRate & **0.992** & **1.0** & 0.607 & 0.676 & 0.422 & 0.533 \\ & EMMS & **0.926** & **0.967** & **0.622** & **0.608** & **0.511** & **0.481** & & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 17: EMMS under different measurements of transferability assessment. The results are obtained on Flickr8k and RSICD datasets with image captioning task and Aircraft and DTD datasets with image classification task with ViT-based models. EMMS outperforms LogME and other baselines under various measures.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{Aircraft} & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. \\ \hline \multicolumn{10}{c}{Weighted Kendall’s tau \(r_{w}\)} \\ \hline (1) & 0.481 & 0.546 & 0.304 & 0.963 & 0.804 & 0.701 & 0.498 & 0.588 & 0.574 & 0.638 & 0.707 & 0.618 \\ (2) & 0.531 & 0.562 & 0.426 & 0.952 & 0.804 & 0.720 & 0.481 & 0.602 & 0.535 & 0.667 & 0.726 & 0.636 \\ (3) & **0.556** & **0.562** & **0.565** & **0.963** & **0.840** & **0.720** & **0.498** & **0.608** & **0.604** & **0.667** & **0.735** & **0.664** \\ \hline \hline \end{tabular}
\end{table}
Table 18: The effect of Label Embedding in EMMS. Three variants of EMMS are considered: (1) EMMS with one-hot label; (2) EMMS with single F-Label; (3) EMMS with multiple F-Labels which is the original. We see that label embedding brings some performance improvement to EMMS.
**The Effect of Label Embedding** In some multimodal tasks or text tasks, including image captioning or text question answering. Label emebdding directly affects the applicability of existing model selection metric to these tasks. In addition, even in classification tasks, the use of F-Label can also bring improvements in results. Here we focus on the comparison between label embedding and direct one-hot vectors for image classification tasks in CNN-based models. As shown in Table 18, the use of F-Label can bring performance improvement compared to One-Hot vector, the average \(\tau_{w}\) increase from 0.618 to 0.636; furthermore, the use of multiple F-Label also brings some improvement compared to the average of single F-Label with \(\tau_{w}\) increasing from 0.636 to 0.664.
**The Effect of Computational Speedup.** Here we experimentally demonstrate the effect of our accelerated algorithm. As shown in Table 21, the algorithm is similar to the in-accelerated version in terms of results, but much shorter in terms of the wall-clock time.
**Comparison with different number of iterations** The number of iterations affects the EMMS time, here we conduct experiments on the VQA task for the effect of the number of iterations on the results. As shown in Table.19, we find that the number of iterations does not have a large impact on the performance of our method, and even a small number of iterations can guarantee the final result(e.g. the number of iterations is 1). We believe that firstly our method converges very fast. And secondly, for the ranking problem of model ranking, even if the convergence is not sufficient, the original order can still be maintained to a large extent in EMMS, thus ensuring the effect.
**The effect of using a single foundation model** We investigate how EMMS is influenced when only a single foundation model is provided. We conduct experiments on image classification and image captioning. We consider EMMS with the single foundation model including language foundation model (1) GPT-2 [27], (2) BERT [43], (3) RoBerta [68], and multimodal foundation model (4) CLIP [26], (5) FLAVA [69], and (6) AltCLIP [70]. For comparison, we include the result of our EMMS with default setting (K=3, i.e. CLIP, BERT, and GPT-2) and the result of previous state-of-the-art methods obtained from LogME, NLEEP and TransRate. The results are reported in Table 20 and Table 5.
We have several observations. (1) Different downstream tasks prefer F-Labels obtained from different foundation models. No single foundation model is dominant in all target tasks. In particular, CLIP is not the best model for extracting F-Labels. (2) For image classification, both language and multimodal foundation models are competent for acquiring F-Labels. (3) Our EMMS can achieve the best results by combining F-Labels obtained from multiple foundation models.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. & SOTA/All \\ \hline \multicolumn{11}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline Previous SOTA & 0.299 & 0.412 & 0.693 & **0.741** & 0.736 & **0.621** & **0.655** & 0.580 & 0.707 & **0.619** & 0.651 & 0.610 & 4/11 \\ (1) Gpt2 & **0.481** & **0.463** & 0.448 & 0.652 & **0.745** & **0.621** & 0.562 & 0.652 & **0.740** & 0.616 & **0.730** & 0.610 & 6/11 \\ (2) Bert & **0.481** & 0.444 & 0.458 & 0.718 & **0.745** & **0.621** & 0.562 & 0.592 & **0.740** & 0.616 & **0.730** & 0.609 & 5/11 \\ (3) RoBerta & 0.448 & 0.444 & 0.507 & 0.701 & **0.745** & 0.608 & 0.562 & 0.580 & **0.740** & 0.574 & **0.730** & 0.604 & 3/11 \\ (4) CLIP & **0.481** & 0.444 & 0.496 & 0.608 & 0.720 & **0.621** & 0.562 & 0.558 & **0.740** & 0.616 & 0.706 & 0.595 & 3/11 \\ (5) FLAVA & **0.481** & 0.444 & 0.508 & **0.741** & **0.745** & **0.621** & 0.562 & 0.652 & **0.740** & 0.574 & 0.706 & 0.615 & 5/11 \\ (6) AltCLIP & **0.481** & 0.444 & 0.437 & **0.741** & **0.745** & **0.621** & 0.562 & 0.580 & **0.740** & 0.595 & **0.730** & 0.607 & 6/11 \\ EMMS & **0.481** & 0.444 & **0.706** & 0.718 & **0.745** & **0.621** & 0.562 & **0.673** & **0.740** & **0.619** & **0.730** & **0.639** & 8/11 \\ \hline \hline \end{tabular}
\end{table}
Table 20: The effect of the single foundation model on EMMS. The results are obtained on image classification regarding \(\tau_{w}\).
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline & DAQUAR & COCO & CLEVR & DAQUAR & COCO & CLEVR \\ \hline \multicolumn{11}{c}{Weighted Kendall’s tau \(\tau_{w}\)} & \multicolumn{3}{c}{Wall-Clock Time (s)} \\ \hline \(r=3\) & **0.743** & 0.812 & 0.804 & 111.05 & 735.21 & 745.11 \\ \(r=2\) & 0.712 & 0.812 & 0.804 & 78.01 & 536.45 & 573.22 \\ \(r=1\) & 0.712 & **0.812** & **0.804** & **50.54** & **263.72** & **274.56** \\ \hline \hline \end{tabular}
\end{table}
Table 19: The effect of the number of iterations \(r\) on VQA models in rank correlation \(\tau_{w}\). We find that even a small number of iterations allows the method to maintain its effect.
**The Wall-clock Time of Label Embedding.** For classification tasks, since the maximum number of categories is often only a few hundred, Label Embedding is very fast. Here we focus on documenting the time required for multimodal tasks, e.g. image captioning, text question answering, and referring expression comprehension, where label embedding is more time-consuming. For each task, we use 8 Nvidia A100 GPUs for label embedding, with a batch size of 512 for each GPU. The running time of label embedding for image captioning, text question answering, and referring expression comprehension is shown in Table 22. We measure the time for each dataset on the same CPU device (AMD EPYC 7H12 with 64-Core Processor) for three times and take the average as the final result.
**The computational complexity of EMMS.** We compare the computational complexity between LogME and EMMS in Table 23. We see that EMMS has lower computation complexity than LogME(F) because LogME(F) needs several iterations (T=3 on average) to converge. Moreover, EMMS allows for full vector computation and can be efficiently solved by existing scientific computation packages such as np.linalg.lstsq. Nevertheless, LogME(F) cannot be written in fully vectorized form because the model parameters in LogME(F) are highly coupled. Hence, LogME(F) can only be executed in a while loop.
In addition, in the classification task, we compare EMMS and LogME. EMMS usually has higher computation complexity because \(D_{2}\gg C\). In some cases, when the number of categories \(C\) and the iteration number \(T\) are large, EMMS could be faster than LogME with vector computation. For example, we find that \(C=397\) and \(T=4.46\) on average over all models when LogME is convergent on the Sun397 dataset. It results in higher time complexity than LogME, as indicated in Table 23. We further verify this by implementing LogME with \(T=1\). As shown in Table 24, EMMS spends more time in calculating the transferability than LogME (T=1) on all datasets. However, LogME performs much worse than EMMS because it does not converge when \(T=1\).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Task & \multicolumn{4}{c|}{Image Captioning} & \multicolumn{4}{c|}{Text QA} & \multicolumn{4}{c}{Referring EC} \\ \hline Dataset & F8k & F30k & RSD & F10k-H & F10k-R & SQuAD1.1 & SQuAD2.0 & RefCOCO & RefCOCO+ & RefCOCOg \\ \hline Time & 14.56 & 89.31 & 18.92 & 3.37 & 3.13 & 35.67 & 53.87 & 49.19 & 48.88 & 31.63 \\ \hline \hline \end{tabular}
\end{table}
Table 22: The wall-clock time (s) of label embedding in image captioning on \(5\) target tasks, text question answering on \(2\) target tasks, and referring expression comprehension on \(3\) target tasks,respectively.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. \\ \hline \multicolumn{12}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline (1) & **0.564** & **0.463** & 0.706 & 0.718 & 0.745 & 0.589 & **0.592** & 0.531 & **0.755** & 0.532 & 0.730 & 0.629 \\ (2) & 0.481 & 0.444 & **0.706** & **0.718** & **0.745** & **0.621** & 0.562 & **0.673** & 0.740 & **0.619** & **0.730** & **0.639** \\ \hline \multicolumn{12}{c}{Wall-Clock Time (s)} \\ \hline (1) & 102.06 & 114.72 & 177.25 & 718.34 & 724.5 & 50.24 & 87.28 & 944.57 & 83.37 & 336.92 & 104.9 & 313.10 \\ (2) & **21.31** & **17.23** & **28.06** & **154.61** & **182.11** & **13.87** & **15.95** & **265.99** & **17.93** & **63.86** & **16.63** & **72.55** \\ \hline \hline \end{tabular}
\end{table}
Table 21: The effect of computational speedup in image classification with ViT models. We can see that the accelerated version of the algorithm achieves a significant reduction in time while guaranteeing results. Two variants of EMMS are considered: (1) EMMS with normal algorithm; (2) EMMS with fast algorithm.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline Task & \multicolumn{4}{c|}{Image Captioning} & \multicolumn{4}{c|}{Text QA} & \multicolumn{4}{c}{Referring EC} \\ \hline Dataset & F8k & F30k & RSD & F10k-H & F10k-R & SQuAD1.1 & SQuAD2.0 & RefCOCO & RefCOCO+ & RefCOCOg \\ \hline Time & 14.56 & 89.31 & 18.92 & 3.37 & 3.13 & 35.67 & 53.87 & 49.19 & 48.88 & 31.63 \\ \hline \hline \end{tabular}
\end{table}
Table 23: The comparison of computational complexity between LogME, EMMS(one), and EMMS in image classification. We denote model feature \(X\in R^{N\times D_{1}}\) and F-labels \(Z\in R^{N\times D_{2}\times K}\) with \(N\approx 10^{4}\), \(D_{1}\approx 10^{3}\), \(D_{2}=1024\), \(K=3\), and \(C\approx 10^{2}\). Moreover, \(T\approx 3\) denotes the iteration number of LogME. Moreover, LogME(F) denotes LogME with F-Label.
**Comparison with variants of existing methods.** To further validate the efficacy of EMMS, we compare it with TransRate using F-Labels on image classification. To this end, we estimate the mutual information of the model feature and F-label following TransRate. Specifically, denote model feature \(X\in R^{N\times D_{1}}\), and the F-Label \(Z_{k}\in R^{N\times D_{2}}\), we estimate the mutual information of \(X\) and \(Z_{k}\) after the discretization operation for each dimension of \(D_{2}\) separately and then take average to obtain the final score.
Moreover, we implement two baselines based on TransRate. When \(K=1\), we instantiate the F-Label as the CLIP embedding. When \(K=3\), we instantiate the F-Labels as the embedding collection extracted from the CLIP, BERT, and GPT-2. In this case, the final score is averaged over three F-Labels. The results are shown in Table 25, where we can see that our EMMS consistently outperforms TransRate with F-Labels (both K=1 and K=3).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. \\ \hline \multicolumn{10}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline LongME (\(T=1\)) & 0.378 & 0.341 & -0.408 & 0.645 & 0.727 & 0.112 & -0.074 & 0.561 & 0.528 & 0.259 & -0.04 \\ LogME & 0.299 & 0.382 & 0.633 & **0.741** & 0.727 & 0.569 & 0.512 & 0.580 & 0.528 & 0.619 & 0.591 \\ EMMS(One) & 0.412 & 0.444 & 0.565 & 0.740 & 0.736 & **0.621** & **0.562** & 0.579 & **0.740** & 0.592 & **0.730** \\ EMMS & **0.481** & **0.444** & **0.706** & 0.718 & **0.745** & **0.621** & **0.562** & **0.673** & **0.740** & **0.619** & **0.730** \\ \hline \multicolumn{10}{c}{Wall-Clock Time (s)} \\ \hline LogME (\(T=1\)) & 4.45 & 4.72 & 8.18 & 34.81 & 40.15 & 3.65 & 5.13 & 53.7 & 4.59 & 31.66 & 6.03 \\ LogME & 8.93 & 10.89 & 30.28 & 53.07 & 62.13 & 4.78 & 9.27 & 104.92 & 6.28 & 425.43 & 7.42 \\ EMMS(One) & **4.12** & **4.45** & **8.07** & **19.45** & **26.18** & **2.65** & **4.03** & **39.72** & **3.50** & **24.84** & **4.07** \\ EMMS & 21.31 & 17.23 & 28.06 & 154.61 & 182.11 & 13.87 & 15.95 & 265.99 & 19.73 & 63.86 & 16.63 \\ \hline \hline \end{tabular}
\end{table}
Table 24: The comparison between LogME and EMMS. The results are obtained on image classification regarding \(\tau_{w}\). LogME (\(T=1\)) indicates that the inner loop of LogME only performs once.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline & Aircraft & Caltech & Cars & CF-10 & CF-100 & DTD & Flowers & Food & Pets & SUN & VOC & Avg. \\ \hline \multicolumn{10}{c}{Weighted Kendall’s tau \(\tau_{w}\)} \\ \hline TransRate(K=1) & 0.297 & 0.440 & 0.682 & 0.655 & 0.501 & 0.533 & 0.548 & 0.537 & 0.736 & 0.533 & 0.666 & 0.557 \\ TransRate(K=3) & 0.295 & 0.441 & 0.682 & 0.523 & 0.501 & 0.542 & 0.548 & 0.539 & 0.730 & 0.533 & 0.679 & 0.546 \\ EMMS & **0.481** & **0.444** & **0.706** & **0.718** & **0.745** & **0.621** & **0.562** & **0.673** & **0.740** & **0.619** & **0.730** & **0.639** \\ \hline \hline \end{tabular}
\end{table}
Table 25: The comparison between TransRate and EMMS. The results are obtained on image classification regarding \(\tau_{w}\). TransRate (\(K\)) indicates that the number of foundation models used. | ## Review
### Summary
This paper presents the Efficient Multi-task Model Selector (EMMS), which aims to evaluate the performance of pre-trained neural networks on multi-modal tasks without requiring fine-tuning. By leveraging large-scale foundation models, EMMS converts various label formats into a unified noisy label embedding, enabling the estimation of transferability through weighted linear regression and an alternating minimization algorithm. The paper reports improved performance and efficiency across multiple downstream tasks, supported by extensive experiments that validate the proposed method's effectiveness and speed.
### Strengths
- 1. The paper introduces a novel approach to model selection that utilizes unified label embeddings, capturing semantic nuances for enhanced task estimation.
- 2. The extensive experimental validation across various downstream tasks demonstrates the method's efficacy and robustness.
- 3. The theoretical foundations of the method are well-established, with clear derivations and comprehensive details provided.
- 4. The paper addresses a practically significant problem, providing a flexible solution applicable to diverse multi-modal tasks without fine-tuning.
### Weaknesses
- 1. The complexity of the method may hinder reproducibility and understanding, particularly regarding the weighted linear regression approach.
- 2. Some evaluation metrics, such as weighted Kendall's tau, focus on relative rankings rather than absolute performance, which may limit practical applicability.
- 3. The method's reliance on the capabilities of the chosen foundation models raises questions about its effectiveness compared to direct use of those models for specific tasks.
- 4. Certain figures and explanations could be improved for clarity, such as providing more informative visuals and descriptions.
### Questions
- 1. What justifications can the authors provide regarding the assumption of linear mapping from model features to label embeddings?
- 2. Are there alternative metrics that could enhance the evaluation of the proposed method beyond weighted Kendall's tau and wall-clock time?
- 3. Could the authors clarify the variability in EMMS's performance across different instances as noted in the results?
### Soundness
**Score:** 3
**Description:** 3 = good; the method is theoretically sound and well-supported by experiments, although some assumptions require more justification.
### Presentation
**Score:** 2
**Description:** 2 = fair; while the writing is generally clear, certain figures lack informativeness and some explanations could be more detailed.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper addresses an important problem with a novel solution, contributing valuable insights to the field of multi-task learning.
### Rating
**Score:** 5
**Description:** 5 = Borderline accept; the paper is technically solid and presents significant contributions, though it has some concerns that need addressing.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper is original and addresses a significant problem in the field. Its soundness is generally good, backed by solid experiments, although some complexities and assumptions need clarification. The contribution is noteworthy, providing a flexible approach to multi-modal tasks. Overall, the strengths outweigh the weaknesses sufficiently to warrant acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Effective Targeted Attacks for
Adversarial Self-Supervised Learning
Mineeon Kim\({}^{1}\), Hyeonjeong Ha\({}^{1}\), Sooel Son\({}^{1}\), Sung Ju Hwang\({}^{1,2}\)
\({}^{1}\)Korea Advanced Institute of Science and Technology (KAIST), \({}^{2}\)DeepAuto.ai
{mineeonkim, hyeonjeongha, sl.son, sjhwang82}@kaist.ac.kr
###### Abstract
Recently, unsupervised adversarial training (AT) has been highlighted as a means of achieving robustness in models without any label information. Previous studies in unsupervised AT have mostly focused on implementing self-supervised learning (SSL) frameworks, which maximize the instance-wise classification loss to generate adversarial examples. However, we observe that simply maximizing the self-supervised training loss with an untargeted adversarial attack often results in generating ineffective adversaries that may not help improve the robustness of the trained model, especially for non-contrastive SSL frameworks without negative examples. To tackle this problem, we propose a novel positive mining for targeted adversarial attack to generate effective adversaries for adversarial SSL frameworks. Specifically, we introduce an algorithm that selects the most confusing yet similar target example for a given instance based on entropy and similarity, and subsequently perturbs the given instance towards the selected target. Our method demonstrates significant enhancements in robustness when applied to non-contrastive SSL frameworks, and less but consistent robustness improvements with contrastive SSL frameworks, on the benchmark datasets.
## 1 Introduction
Enhancing the robustness of deep neural networks (DNN) remains a crucial challenge for their real-world safety-critical applications, such as autonomous driving. DNNs have been shown to be vulnerable to various forms of attacks, such as imperceptible perturbations [14], various types of image corruptions [20], and distribution shifts [25], which can lead DNNs to make incorrect predictions. Many prior studies have proposed using supervised adversarial training (AT) [29, 40, 38, 37] to mitigate susceptibility to imperceptible adversarial perturbation, exploiting class label information to generate adversarial examples. However, achieving robustness in the absence of labeled information has been relatively understudied, despite the recent successes of self-supervised learning across various domains and tasks.
Recently, self-supervised learning (SSL) frameworks have been proposed to obtain transferable visual representations by learning the similarity and differences between instances of augmented training data. Such prior approaches include those utilizing contrastive learning between positive and negative pairs (e.g., Chen et al. [6] (SimCLR), He et al. [19] (MoCo), Zbontar et al. [39] (Barlow-twins)), as well as those utilizing similarity loss solely between positive pairs (e.g., Grill et al. [17] (BYOL), Chen and He [7] (SimSiam)). To achieve robustness in these frameworks, Kim et al. [23] and Jiang et al. [22] have proposed adversarial SSL methods using contrastive learning [6], which generate adversarial examples that maximize the instance-wise classification loss.
Unfortunately, deploying this contrastive framework often becomes computationally expensive as it requires a large batch size for training in order to attain a high level of performance [6]. Specifically, when a given memory and computational budget is limited, such as with edge devices, performingcontrastive SSL becomes no longer viable or practical as an option, as it may not obtain sufficiently high performance using a small batch size.
Alternatively, non-contrastive, positive-only SSL frameworks have been proposed resort to maximizing consistency across two differently augmented samples of the same instance, i.e., positive pairs, [17, 7, 39], without the need of negative instances. These approaches improve the practicality of SSL for those limited computational budget scenarios. However, leveraging prior adversarial attacks that maximize the self-supervised learning loss in these frameworks results in extremely poor performance compared to those of adversarial contrastive SSL methods (Table 1). The suboptimality of the deployed attacks causes to learn limited robustness and leads to the generation of ineffective adversarial examples, which fail to improve robustness in the SSL frameworks trained using them. As shown in Figure 0(c), the attack in the inner loop of the adversarial training loss, designed to maximize the distance between two differently augmented samples, perturbs a given example to a random position in the latent space. Thus, the generated adversarial samples have little impact on the final robustness. The suboptimality of the attacks also can be occurred in contrastive adversarial SSL that also contains positive pairs, when simply maximizing the contrastive loss. As shown in Figure 0(b), contrastive learning treats all positive and negative pairs equally regardless of their varying importance in generating effective adversarial examples.
To address this issue, we propose **T**argeted **A**ttack for **RO**bust self-supervised learning (TARO). TARO is designed to guide the generation of effective adversarial examples by conducting **targeted attacks** that perturb a given instance toward a target instance to enhance the robustness of an SSL framework (Figure 1). The direction of attacks is assigned using our target selection algorithm that chooses the most confusing yet similar sample for a given instance based on the entropy and similarity. By targeting the attacks toward specific latent spaces that are more likely to improve robustness on positive-pairs, TARO improves the robustness of SSL, regardless of the underlying SSL frameworks. Notably, as the positive-pair only SSL has gained attention in recent times, our proposed method becomes crucial for the ongoing safe utilization of these frameworks in real-world applications.
The main contributions can be summarized as follows:
* We observe that simply maximizing the training loss of self-supervised learning (SSL) may lead to suboptimality of attacks as the main cause of the limited robustness in SSL frameworks, especially those that rely on maximizing the similarity between the single pair of augmented instances.
* To address this issue, we propose a novel approach, **T**argeted **A**ttack for **RO**bust self-supervised learning (TARO), which aims to improve the robustness of SSL by conducting targeted attacks on the positive-pair that perturb the given instance toward the most confusing yet similar latent space, based on entropy and similarity of the latent vectors.
* We experimentally show that TARO is able to obtain consistently improved robustness of SSL, regardless of underlying SSL frameworks, including contrastive- and positive-pair only SSL frameworks.
Figure 1: **Motivation. In supervised adversarial learning (a), perturbation is generated to maximize the cross-entropy loss, which pushes adversarial examples to the decision boundaries of other classes. In adversarial contrastive SSL (b), perturbation is generated to minimize the similarity (red line) between positive pairs while maximizing the similarity (blue lines) between negative pairs. In positive-only adversarial SSL (c), minimizing the similarity (red) between positive pairs. However, adversarial examples in adversarial SSL impose weaker constraints in generating effective adversarial examples than does supervised AT due to ineffective positive pairs. To overcome this limitation, we suggest a selectively targeted attack for SSL that maximizes the similarity (blue) to the most confusing target instance (yellow oval in (b) and (c)).**
Related Work
Adversarial trainingSzegedy et al. [34] showed that imperceptible perturbation to a given input image may lead a DNN model to misclassify the input into a false label, demonstrating the vulnerability of DNN models to adversarial attacks. Goodfellow et al. [14] proposed the fast gradient sign method (FGSM), which perturbs a given input to add imperceptible noise in the gradient direction of decreasing the loss of a target model. They also demonstrated that training a DNN model over perturbed as well as clean samples improves the robustness of the model against FGSM attacks. Follow-up works [27; 2] proposed diverse gradient-based strong attacks, and Madry et al. [29] proposed a projected gradient descent (PGD) attack and a robust training algorithm leveraging a minimax formulation; they find an adversarial example that achieves a high loss while minimizing the adversarial loss across given data points. TRADES [40] proposed minimizing the Kullback-Leibler divergence (KLD) over clean examples and their adversarial counterparts, thus enforcing consistency between their predictions. Recently, leveraging additional unlabeled data [3] and conducting additional attacks [38] have been proposed. Carmon et al. [3] proposed using Tiny ImageNet [28] images as pseudo labels, and Gowal et al. [16] proposed using generated images from generative models to learn richer representations with additional data.
Self-supervised learningDue to the high annotation cost of labeling data, SSL has gained a wide attention [11; 41; 35; 36]. Previously, SSL focused on solving a pre-task problem of collaterally obtaining visual representation, such as solving a jigsaw puzzle [30], predicting the relative position of two regions [11], or impainting a masked area [31]. However, more recently, SSL has shifted to utilizing inductive bias to learn the invariant visual representation of paired transformed images. This is accomplished through contrastive learning, which utilizes both positive pairs and negative pairs, that is differently transformed images and other images from the same batch, respectively [6; 19]. Additionally, some studies have proposed using only positive pairs in SSL and have employed techniques such as momentum networks [17] or stop-gradient [7]. In this paper, we annotate these approaches as contrastive SSL, and positive-pair only SSL, respectively.
Adversarial self-supervised learningThe early stage of adversarial SSL methods [23; 22] employed contrastive learning to achieve a high level of robustness without any class labels. Adversarial self-supervised contrastive learning [23; 22] generated an instance-wise adversarial example that maximizes the contrastive loss against its positive and negative samples by conducting untargeted attacks. Both methods achieved robustness, but at the cost of requiring high computation power due to the large batch size needed for contrastive learning. On the other hand, Gowal et al. [15] utilized only positive samples to obtain adversarial examples by maximizing the similarity loss between the latent vectors from the online and target networks, allowing this method greater freedom regarding the batch size. However, it exhibited relatively worse robustness than the adversarial self-supervised contrastive learning frameworks. Despite the advances in the SSL framework (i.e., positive-pair only SSL), a simple combination of untargeted adversarial learning and advanced SSL does not guarantee robustness. To overcome such a vulnerability in positive-pair only SSL, we propose a targeted attack leveraging a novel score function designed to improve robustness.
## 3 Positive-Pair Targeted Attack in Adversarial Self-Supervised Learning
Adversarial SSL and supervised adversarial learning utilize adversarial examples in a similar manner. Specifically, adversarial SSL generates instance-wise adversarial examples in the direction of maximizing the training loss for better robustness. However, this approach exhibits an insufficient level of robustness especially in the positive-pair only self-supervised learning framework due to generating highly suboptimal adversarial examples.
We argue that simply maximizing the training loss, dubbed as an untargeted attack, in positive-pair only SSL limits the diversity of adversarial examples which eventually leads to limited robustness. We theoretically show that range of perturbation is smaller when the positive-pair only SSL objective is employed in an untargeted attack than the contrastive objective in simple two-class tasks. Furthermore, we empirically demonstrate poorer robustness when we naively merge untargeted attack and positive-pair only SSL approaches [17; 7], compared to contrastive-based adversarial SSL [23; 22].
To remedy such a shortcoming, we propose a simple yet effective targeted adversarial attack to increase the diversity of the generated attack. Moreover, we empirically suggest novel positive mining the target for the targeted adversarial attack that contributes to generating more effective and stronger adversarial examples, thus improving the robustness beyond that of previous adversarial SSL approaches. In this section, we first recap supervised adversarial training, self-supervised learning, and previous adversarial SSL methods. We then demonstrate theoretical intuition on our motivation and describe our proposed targeted adversarial SSL framework, TARO, in detail.
### Preliminary
Supervised adversarial trainingWe first recap supervised adversarial training with our notations. We denote the dataset \(\mathcal{D}=\{(x_{i},y_{i})\}\), where \(x_{i}\in R^{D}\) is a input, and \(y_{i}\in R^{N}\) is its corresponding label from the \(N\) classes. In this supervised learning task, the model is \(f_{\theta}:X\to Y\), where \(\theta\) is a set of model parameters to train.
Given \(\mathcal{D}\) and \(f_{\theta}\), an _adversarial attack_ perturbs a given source image that maximizes the loss within a certain radius from it (e.g., \(\ell_{\infty}\) norm balls). For example, \(\ell_{\infty}\) attack is defined as follows:
\[\delta^{t+1}=\Pi_{B(0,\epsilon)}\Big{(}\delta^{t}+\alpha\mathtt{sign}\Big{(} \nabla_{\delta^{t}}\mathcal{L}_{\mathtt{CE}}\big{(}f(\theta,x+\delta^{t}),y \big{)}\Big{)}\Big{)}, \tag{1}\]
where \(B(0,\epsilon)\) is the \(\ell_{\infty}\) norm-ball of radius \(\epsilon\), \(\Pi\) is the projection function to the norm-ball, \(\alpha\) is the step size of the attacks, and \(\mathtt{sign}(\cdot)\) is the sign of the vector. Also, \(\delta\) represents the perturbations accumulated by \(\alpha\mathtt{sign}(\cdot)\) over multiple iterations \(t\), and \(\mathcal{L}_{\mathtt{CE}}\) is the cross-entropy loss. In the case of PGD [29], the attack starts from a random point within the epsilon ball and performs \(t\) gradient steps, to obtain a perturbed sample. _Adversarial training_ (AT) is a straightforward way to improve the robustness of a DNN model; it minimizes the training loss that embeds the adversarial perturbation (\(\delta\)) in the inner loop (Eq. 1).
Self-supervised learningRecent studies on self-supervised learning (SSL) have proposed methods to allow their models to learn invariant features from transformed images, thus learning semantic visual representations that are beneficial for diverse tasks [6; 19; 17; 7; 39]. In this paper, we aim at improving the robustness of the two most popular types of SSL frameworks: positive-pair only SSL (e.g., BYOL, SimSiam) and contrastive SSL (e.g., SimCLR) frameworks.
We start by briefly describing a representative contrastive SSL, SimCLR [6]. SimCLR is designed to maximize the agreement between different augmentations of the same instance in the learned latent space while minimizing the agreement between different instances. Differently augmented examples from the same instance are defined as positive pairs, and all other instances in the same batch are considered negative examples. Then, the training loss of SimCLR is defined as follows:
\[\mathcal{L}_{\mathtt{nt\text{-}zent}}(x,\{x_{\mathtt{pos}}\},\{x_{\mathtt{neg} }\})\coloneqq-\log\frac{\sum_{x_{p}\in\{x_{\mathtt{pos}}\}}\exp(\mathrm{sim}( \mathrm{z},x_{p})/\tau)}{\sum_{x_{p}\in\{x_{\mathtt{pos}}\}}\exp(\mathrm{sim}( \mathrm{z},x_{p})/\tau)+\sum_{x_{n}\in\{x_{\mathtt{neg}}\}}\exp(\mathrm{sim}( \mathrm{z},x_{n})/\tau)}, \tag{2}\]
where \(z\) is the latent vector of input \(x\), \(\mathtt{pos}\), \(\mathtt{neg}\) stands for positive pair and negative pairs of \(x\), respectively, and \(\mathrm{sim}\) denotes the cosine similarity function.
A representative positive-pair only SSL framework is SimSiam [7]. SimSiam consists of the encoder \(f\), followed by the projector \(g\), and then the predictor \(h\); both \(g\) and \(h\) are multi-layer perceptrons (MLPs). Given the dataset \(\bar{\mathcal{D}}=\{X\}\) and the transformation function \(\mathbf{t}\sim\mathbf{T}\) that augments the images \(x\in X\), it is designed to maximize the similarity between the differently transformed images and avoid representational collapse by applying the stop-gradient operation to one of the transformed images as follows:
\[\mathcal{L}_{\mathtt{ss}}(x,x_{\mathtt{pos}})=-\frac{1}{2}\frac{p}{||p||_{2}} \cdot\frac{z_{\mathtt{pos}}}{||z_{\mathtt{pos}}||_{2}}-\frac{1}{2}\frac{p_{ \mathtt{pos}}}{||p_{\mathtt{pos}}||_{2}}\cdot\frac{z}{||z||_{2}}, \tag{3}\]
where \(z=g\circ f(\mathfrak{t}_{1}(x))\), \(z_{\mathtt{pos}}=g\circ f(\mathfrak{t}_{2}(x))\), and \(p=h\circ z\), \(p=h\circ z_{\mathtt{pos}}\) are output vectors of the projector \(g\) and predictor \(h\), respectively. Before calculating the loss, SimSiam detaches the gradient on the \(z\), which is called the _stop-gradient_ operation. This stop-gradient operation helps the model prevent representational collapse without any momentum networks, by making an encoder to act as a momentum network.
Adversarial SSLTo achieve robustness in SSL frameworks, prior studies have proposed adversarial SSL methods [22, 23, 15]. They generate adversarial examples by maximizing the training loss, dubbed as an untargeted attack, of their base SSL frameworks. For example, the inner loop of an adversarial attack for Kim et al. [23] is structured as follows:
\[\delta^{t+1}=\Pi_{B(0,\epsilon)}\Big{(}\delta^{t}+\alpha\mathtt{sign}\Big{(} \nabla_{\delta^{t}}\mathcal{L}\big{(}\mathbf{t}_{1}(x)+\delta^{t},\mathbf{t}_{ 2}(x)\big{)}\Big{)}\Big{)}, \tag{4}\]
where the perturbation maximizes the \(\mathcal{L}\). For adversarial contrastive SSL approaches [22, 23], \(\mathcal{L}=\mathcal{L}_{\mathtt{nt-sent}}\) is the contrastive loss in Eq. 2, so that adversarial examples are generated to minimize the similarity between positive pairs and maximize the similarity between negative pairs. For the positive-pair only SSL, adversarial examples are generated to maximize the similarity loss, \(\mathcal{L}=\mathcal{L}_{\mathtt{ss}}\) (Eq. 3), between positive-pairs only. However, as shown in Table 1, positive-pair only SSL results in significantly poor robustness compared to the adversarial contrastive SSL approaches. This is because using the naive training loss function of positive-pair only SSL in the attack hinders the generation of effective attack images for robust representation, as we theoretically show the range of perturbations is smaller (Section 3.2). To address this issue, we propose a targeted adversarial attack that can select more effective examples to make more diverse perturbations.
### Theoretical Motivation: Adversarial Perturbations in Positive-only SSL
A model is considered to have a better generalization of adversarial robustness when the model can maintain its performance across a wide range of adversarial perturbations. Hence, the ability of the attack loss to generate a diverse range of perturbations during training is a crucial factor that influences the model's final robust generalization.
However, we found the theoretical motivation that positive-pair only SSL loss (\(\mathcal{L}_{\mathtt{ss}}\)) could not provide a wide range of adversarial perturbations as contrastive loss (\(\mathcal{L}_{\mathtt{nt-sent}}\)) does. We simplify the problem into simple binary classification with the linear layer model to demonstrate our theoretical motivation. Let us denote adversarial perturbations that are generated with both losses as follows,
\[x_{\mathtt{ns}}^{\mathtt{adv}}=x+\arg\max_{\delta}\left\{\frac{f(x+\delta)}{\| f(x+\delta)\|}\cdot\frac{f(x)}{\|f(x)\|}\right\}\quad\text{subject to}\quad\|\delta\|\leq\epsilon, \tag{5}\]
where we approximate the loss of \(\mathcal{L}_{\mathtt{ss}}\) in the \(\ell_{1}\) distance function between the positive pair and the loss of \(\mathcal{L}_{\mathtt{nt-sent}}\) into combination of two \(\ell_{1}\) distance functions of one positive- and one negative- pair. In both cases, a \(\delta\) maximizes the respective loss, subject to the constraint that the norm of \(\delta\) is less than or equal to \(\epsilon\). The objective in positive-only SSL is to make the perturbed and original samples dissimilar as follows,
\[\delta_{\mathtt{ss}}=\operatorname*{arg\,max}_{\delta}|f(x)-f(x+\delta)|. \tag{6}\]
while the objective of nt-sent is to make the perturbed sample dissimilar to the positive pair and similar to the negative pair as follows,
\[\delta_{\mathtt{nt-sent}}=\operatorname*{arg\,max}_{\delta}|f(x)-f(x+\delta)| -|f(x_{neg})-f(x+\delta)|. \tag{7}\]
**Theorem 3.1** (Perturbation range of self-supervised learning loss).: _Given a model trained under the positive-only distance loss, the adversarial perturbations \(\delta_{\mathtt{ss}}\) are likely to be smaller than those perturbations \(\delta_{\mathtt{nt-sent}}\), from a model trained under the positive-pair and negative-pair distance loss. Formally, \(\|\delta_{\mathtt{ss}}\|_{\infty}<\|\delta_{\mathtt{nt-sent}}\|_{\infty}\)._
These theoretical insights are also supported by the empirical experiments in Table 1 that a model trained with adversarial examples generated using positive- and negative- paired contrastive loss (\(\mathcal{L}_{\mathtt{nt-sent}}\)) have better adversarial robustness generalization because it is exposed to a wider range of perturbations during training than models that are trained with the positive-only similarity loss (\(\mathcal{L}_{\mathtt{ss}}\)). The detailed proof and the empirical analysis are in the Supplementary.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Attack loss & Method & Clean & PGD \\ \hline \multirow{2}{*}{Contrastive} & ACL [22] & 79.96 & 39.37 \\ & RoCL [23] & 78.14 & 42.89 \\ \hline Positive-only & BYORL [15] & 72.65 & 16.20 \\ & similarity & SimSiam* & 71.78 & 32.28 \\ \hline \hline \multicolumn{4}{l}{*naive adversarial training applied in SimSiam} \\ \multicolumn{4}{l}{} \\ \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} & \multirow{2}{*}{} \\ & & & \\ \end{tabular}
\end{table}
Table 1: Comparison of different attack losses on CIFAR-10 using PGD attack.
### Targeted Adversarial SSL
We propose a simple yet effective targeted adversarial attack to generate effective adversarial examples in a positive-only SSL scenario. In this section, we first show the theoretical intuition of our approach and describe our overall framework to further improve the robustness of the adversarial SSL method by performing targeted attacks wherein targets are selected according to the proposed score function.
Targeted adversarial attack to different sampleWe argue that leveraging untargeted adversarial attacks in positive pairs only SSL still leaves a large room for better robustness. To enlarge the diversity of the attacks, we propose simple targeted adversarial attacks for positive-pair only SSL. The loss for such adversarial attacks is as follows:
\[\delta^{t+1}=\Pi_{B(0,\epsilon)}\Big{(}\delta^{t}+\alpha\mathtt{sign}\Big{(} \nabla_{\delta^{t}}\mathcal{L}_{\mathtt{targeted-attack}}\big{(}x+\delta^{t}, x^{\prime})\big{)}\Big{)}\Big{)}, \tag{8}\]
where \(\mathcal{L}_{\mathtt{targeted-attack}}\) is \(\mathcal{L}_{\mathtt{ours-ss}}\)=\(-\mathcal{L}_{\mathtt{ss}}\), and \(x^{\prime}\) is a _selected target_ within the batch.
Therefore, in the previous simplified scenario described in Section 3.2, conducting the randomly selected targeted attack could increase the range of the perturbation that is generated with positive-pair only similarity loss as follow,
\[\delta_{\mathtt{targeted-attack}}=\operatorname*{arg\,max}_{\delta}|f(x+\delta) -f(x_{\mathtt{target}})|. \tag{9}\]
Through triangle inequality, the targeted attack may increase the range of the perturbation and eventually leverage the overall robustness.
**Theorem 3.2** (Perturbation range of targeted attack).: _Given a model trained under the \(\mathcal{L}_{\mathtt{targeted-attack}}\) loss, the adversarial perturbations \(\delta_{\mathtt{targeted-attack}}\) are larger than the adversarial perturbations \(\delta_{\mathtt{ss}}\) from a model trained under the \(\mathcal{L}_{\mathtt{ss}}\). Formally, \(\|\delta_{\mathtt{targeted-attack}}\|_{\infty}>\|\delta_{\mathtt{ss}}\|_{\infty}\)._
However, these are theoretical expectations in a simplistic scenario. To further substantiate this, we empirically observed that even a simple targeted attack, with a random target in the batch, significantly improves robustness in a positive-pair only SSL scenario, as shown in Table 2. Therefore, based on these theoretical and empirical insights, we propose to search more effective target for positive-pair targeted attack to boost the robustness of the self-supervised learning frameworks through experimental observations. The detailed proof of Theorem 3.2 is in the Supplementary.
Similarity and entropy-based target selection for targeted attackIn our theoretical analysis and empirical observations, we established that targeted attacks can significantly enhance overall robustness in SSL, except for the target itself. To this end, we propose a score function, denoted as \(\mathcal{S}(x,\cdot)\), which aims to identify the most suitable target that is distinct from the input while effectively contributing to improved robustness. Following the studies of Kim et al. [24], Ding et al. [10], Hitaj et al. [21], we prioritize high-entropy examples or those located near decision boundaries as crucial for generating effective adversarial examples in supervised adversarial training. Accordingly, we recommend selecting a target distinct from itself, yet induces confusion, creating adversarial examples that are located close to decision boundaries (Eq.11). The score function yields the most potent target (\(x^{\prime}\)) for a given base image (\(x\)). Subsequently, the targeted attack generates a perturbation, maximizing the similarity to the target \(x^{\prime}\) for the base image \(x\).
To this end, we design the score function based on the similarity and entropy values, without using any class information, as follows:
\[\mathcal{S}_{\mathtt{entropy}}(x,x^{\prime})=p^{\prime}/\tau\log\left(p^{ \prime}/\tau\right),\;\mathcal{S}_{\mathtt{similarity}}(x,x^{\prime})=\frac{e }{|e|_{2}}\cdot\frac{e^{\prime}}{|e^{\prime}|_{2}}, \tag{10}\]
\[\mathcal{S}_{\mathtt{TARO}}(x,x^{\prime})=\mathcal{S}_{\mathtt{entropy}}+ \mathcal{S}_{\mathtt{similarity}}. \tag{11}\]
where \(p=h\circ g\circ f(x)\) and \(e=f(x)\) are output vectors of predictor \(h\) and encoder \(f\), respectively. Overall, the score function \(\mathcal{S}\) incorporates both cosine similarity and entropy. The cosine similarity
\begin{table}
\begin{tabular}{c c c c} \hline \hline SSL & Attack Type & Clean & PGD \\ \hline \multirow{2}{*}{BYOL} & untargeted attack & 75.4 & 4.34 \\ & targeted attack & **83.50** & **31.62** \\ \hline \multirow{2}{*}{SimSiam} & untargeted attack* & 66.36 & 36.53 \\ & targeted attack & **77.08** & **47.58** \\ \hline \hline \end{tabular}
*adversarial training applied in SimSiam
\end{table}
Table 2: Effect of random targeted attack in positive-pair only SSL in CIFAR-5.
is calculated between features of base images and candidate images in the differently augmented batch. The entropy is calculated with the assumption that the vector \(p\) represents the logit of an instance as Caron et al. [4], Kim et al. [24]. Our score function is designed to select an instance (\(x^{\prime}\)) that is different but confused with the given image (\(x\)), thus facilitating the generation of effective adversarial examples for targeted attack (Figure 1). The experimental results in Figure 1(b) verify that the score function successfully selects such instances, as intended.
Robust self-supervised learning with targeted attacksThe TARO framework starts by selecting a target image based on the score function (\(\mathcal{S}\)). It then generates adversarial examples using the selected target and performs adversarial training with them.
For a positive pair, represented as differently transformed augmentations \(\mathbf{t}_{1}(x),\mathbf{t}_{2}(x)\), the target images \(\mathbf{t}_{2}(x^{\prime})\) and \(\mathbf{t}_{1}(x^{\prime})\) are selected respectively, as ones with the maximum score within the batch from the score function (\(\mathcal{S}\)) in Eq. 11. Then, we generate adversarial examples, i.e., \(\mathbf{t}_{1}(x)^{adv},\mathbf{t}_{2}(x)^{adv}\), for each transformed input with our proposed targeted attack (Eq. 8), where the targeted loss \(\mathcal{L}_{\texttt{targeted-attack}}=-\mathcal{L}_{\texttt{ss}}\) maximizes the similarity to the selected target \(\mathbf{t}_{2}(x^{\prime})\) and \(\mathbf{t}_{1}(x^{\prime})\), respectively. Finally, we maximize the agreement between the representations of adversarial images (\(\mathbf{t}_{1}(x)^{adv}\) and \(\mathbf{t}_{2}(x)^{adv}\)) and the clean image \(t_{1}(x)\) as follows:
\[\mathcal{L}_{\texttt{TARO}}=\mathcal{L}(\mathbf{t}_{1}(x),\mathbf{t}_{1}(x)^{ adv})+\mathcal{L}(\mathbf{t}_{1}(x)^{adv},\mathbf{t}_{2}(x)^{adv})+\mathcal{L}( \mathbf{t}_{2}(x)^{adv},\mathbf{t}_{1}(x)), \tag{12}\]
where \(\mathcal{L}\) is Eq. 3 for the SimSiam framework. Since all three instances have the same identity, we maximize the similarity between the clean and adversarial examples.
TARO could be also applied to positive pairs in contrastive adversarial SSL methods (e.g., RoCL [23], ACL [22]). Since contrastive SSL does not have a predictor, we use the output of the projector as \(p\) in Eq. 10 to select the target for positive-pair. Then, when we apply our targeted attack to their instance-wise attacks, as follows:
\[\mathcal{L}_{\texttt{ours-rocl}}=\mathcal{L}_{\texttt{nt-xent}}(\mathbf{t}_{1}( x),\{\emptyset\},\mathbf{t}_{1}(x)_{\texttt{neg}})+\mathcal{L}_{\texttt{similarity}}( \mathbf{t}_{1}(x),\mathbf{t}_{2}(x)), \tag{13}\]
where the adversarial loss is a sum of the modified nt-xent loss [6] and similarity loss. Since TARO alters the untargeted attack of the positive pair with a targeted attack between the base image (\(\mathbf{t}_{1}(x)\)) and target image (\(\mathbf{t}_{1}(x^{\prime})\)), we eliminate the positive pair term in nt-xent loss and add similarity loss instead. The similarity loss maximizes the cosine similarity between the \(\mathbf{t}_{1}(x)\) images and the \(\mathbf{t}_{1}(x^{\prime})\) images which are searched by the score function. Overall, we generate adversarial examples that maximize the \(\mathcal{L}_{\texttt{ours-rocl}}\) loss as shown in Algorithm 1.
## 4 Experiment
In this section, we extensively evaluate the efficacy of TARO with both contrastive and positive-pair only adversarial SSL frameworks. First, we compare the performance of our model to previous adversarial SSL methods that do not utilize any targeted attacks in Section 4.1. Moreover, we evaluate the robustness of the learned representations across different downstream domains in Section 4.2. Finally, we analyze the reason behind the effectiveness of targeted attacks in achieving better robust representations compared to models using untargeted attacks in Section 4.3.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Evaluation type & SSL & Attack Type & Clean & PGD & AutoAttack \\ \hline \multirow{3}{*}{Self-supervised linear evaluation} & BYOL & \(\mathcal{L}_{\texttt{hyd}}\) & 72.65 & 16.20 & 0.01 \\ & BYOL & \(\mathcal{L}_{\texttt{ours-hyd}}\) & **84.52** & **31.20** & **22.01** \\ & SimSiam & \(\mathcal{L}_{\texttt{ss}}\) & 71.78 & 32.28 & 24.41 \\ & SimSiam & \(\mathcal{L}_{\texttt{ours-ss}}\) & **74.87** & **44.71** & **36.39** \\ \hline \multirow{3}{*}{Self-supervised robust linear evaluation} & BYOL & \(\mathcal{L}_{\texttt{hyd}}\) & 54.01 & 27.24 & 4.49 \\ & BYOL & \(\mathcal{L}_{\texttt{ours-hyd}}\) & **74.33** & **40.84** & **29.91** \\ \cline{1-1} & SimSiam & \(\mathcal{L}_{\texttt{ss}}\) & 68.88 & 37.84 & 31.44 \\ \cline{1-1} & SimSiam & \(\mathcal{L}_{\texttt{ours-ss}}\) & **76.19** & **45.57** & **39.25** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental results against white-box attacks on CIFAR-10. To see the effectiveness, we test TARO on positive-pair only self-supervised learning approaches, i.e., SimSiam, and BYOL.
Experimental setupWe compare TARO against previous contrastive and positive-pair only adversarial SSL approaches. Specifically, we adapt TARO on top of two contrastive adversarial SSL frameworks, RoCL [23], ACL [22] and a positive-pair only SSL framework, SimSiam [7], to demonstrate its efficacy in enhancing their robustness. All models use the ResNet18 backbones that are trained on CIFAR-10 and CIFAR-100 with \(\ell_{\infty}\) PGD attacks with the attack step of \(10\) and epsilon \(8/255\). We evaluate the robustness of our method against two types of attack, AutoAttack* [8] and \(\ell_{\infty}\) PGD attacks, with the epsilon size of \(8/255\), using the attack step of \(20\) iterations. Clean denotes the classification accuracy of the ResNet18 backbone on the original images. We further describe the experimental details in Appendix B. Code is available in [https://github.com/Kim-Minseon/TARO.git](https://github.com/Kim-Minseon/TARO.git)
Footnote *: [https://github.com/fra31/auto-attack](https://github.com/fra31/auto-attack)
### Efficacy of Targeted Attacks in Adversarial SSL
We first validate whether the proposed targeted attacks in TARO contribute to improving the robustness of positive-pair adversarial SSL frameworks. To evaluate the quality of the learned representations with the SSL frameworks, we utilize linear and robust linear evaluation, as shown in Table 3. Then, we validate the generality of TARO to contrastive-based adversarial SSL frameworks (Table 5).
Robustness improvements in positive-pair only SSLWe evaluate the efficacy of TARO by comparing those to untargeted attacks on positive-pair only SSL frameworks, i.e., SimSiam and BYOL. As shown in Table 3, when replacing untargeted attacks with TARO in the positive-only SSL, TARO contributes to attaining significant gains in both robustness accuracy against PGD attacks and clean accuracy. This is due to the inherent limitations of untargeted attacks in positive-pair only SSL frameworks. In such frameworks, perturbations in any direction away from the other pair of samples will inevitably increase the SSL loss, making it challenging to generate effective adversarial examples. However, with the guidance provided by TARO, the model is able to generate stronger attack images, leading to meaningfully improved performance both on clean and adversarially perturbed images. Furthermore, we show that the untargeted attacks are not only ineffective for learning robust features, but also hinder the learning of good visual representation for clean images.
Switching from an untargeted to a targeted attack approach leads to a substantial increase in performance across both contrastive-based and positive-pair only approaches, as shown in Table 4. This advancement is particularly evident when addressing the challenge of selecting appropriate targets within positive pairs. As we have discussed in the Limitations section, our empirical score function may not be the absolute optimal algorithm for target selection. Nevertheless, it is clear that concentrating on targeted attacks in the context of positive pairs is crucial for enhancing robust representation, applicable to both clean and adversarial examples.
Robustness improvements in contrastive adversarial SSLThe robustness gains through TARO in contrastive adversarial SSL, specifically RoCL and ACL, are demonstrated in Table 5. Given that our TARO algorithm mines the positive-pair in contrastive loss, its effects on contrastive-based SSL might be more limited compared to positive-pair SSL. Despite this, TARO enhances RoCL's robustness against PGD attacks from 42.89% to 45.37% without compromising the clean accuracy. In the case of ACL, TARO fortifies the robustness against PGD attacks while maintaining performance comparable to AutoAttack.
### Evaluation on CIFAR-100
Robustness on larger benchmarks datasetsWe further validate our method on a larger dataset, CIFAR-100. In Table 6, TARO demonstrates consistent robust accuracy when compared with those of the adversarial SSL frameworks using untargeted attacks, with notably significant robustness improvements on the positive-pair only SSL. Although the clean and original robust accuracy of the positive-only SSL method is noticeably lower than that of the contrastive learning method on this particular dataset, it achieves significantly higher robust accuracy than the contrastive counterpart
\begin{table}
\begin{tabular}{l l c c} \hline \hline Method & Selection & Clean & PGD \\ \hline \multirow{3}{*}{RoCL} & None & 78.14 & 42.89 \\ & Random & 79.26 & 43.45 \\ & Ours & 80.06 & 45.37 \\ \hline \multirow{3}{*}{SimSiam} & None* & 71.78 & 32.28 \\ & Random & 73.25 & 42.85 \\ & Ours & 74.87 & 44.71 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation results on target selection.
when using our targeted attack. The results further suggest that the proposed targeted attack plays a crucial role in creating effective adversarial examples.
Transferable robustnessThe main objective of SSL is to learn transferable representations for diverse downstream tasks. Therefore, we further evaluate the transferable robustness of the pretrained representations trained using our targeted attack on novel tasks from a different dataset. We adopt the experimental setting from the previous works on supervised adversarial transfer learning [32] which freeze the encoder and train only the fully connected layer. We pretrained the model on CIFAR-100 and evaluate the robust transferability to CIFAR-10. In Table 7, our model also shows impressive transferable robustness both with contrastive and positive-pair only SSL, compared to those obtained by the representations learned with untargeted adversarial SSL.
### Effectiveness of TARO
In this section, we further analyze the effect of the targeted attacks in adversarial SSL to see how and why it works. 1) Analysis of the selected images by \(\mathcal{S}\), 2) Visual representation of adversarial examples that are generated with untargeted attack/targeted attack, and 3) ablation experiment on each component of the score function.
Analysis of the selected targetTo analyze which target images are selected by our score function (\(\mathcal{S}\)), we use a supervised adversarial training (AT) model. We select the target images of a single class (airplane) with the score function, and forward them to the supervised AT model to obtain their class distribution. To further examine which are the most confusing classes for the original images, we forward the base airplane images to the supervised AT model as well. As shown in Figure 1(a), airplane images are easily confused with the ship class and the bird class. Surprisingly, 1/3 of the target images are selected using our target selection function for airplane images belonging to either ship or the bird class, which are the most confusing classes for the images belonging to the airplane class (See Figure 1(b)). These results strongly support that our score function effectively selects targets that are similar yet confused, as intended, without using any label information.
Visualization of embedding spaceTo examine the differences between images that are generated with targeted and untargeted attacks, we visualize their embedding space. In Figure 3, black markers represent adversarial examples, and light blue markers represent clean examples, both belonging to the same class. As shown in Figure 2(a), untargeted adversarial examples are located near clean examples,
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Clean & PGD \\ \hline RoCL & **73.93** & 18.62 \\ +TARO & 65.21 & **19.13** \\ \hline SimSiam* & **53.34** & 11.24 \\ +TARO & 50.50 & **25.44** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Results of adversarial transfer learning to CIFAR-10 from CIFAR-100.
Figure 2: Analysis of target from score function (\(\mathcal{S}\))
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Evaluation type & Method & Attack Type & Clean & PGD & AutoAttack \\ \hline \multirow{3}{*}{Self-supervised linear evaluation} & RoCL [23] & \(\mathcal{L}_{\text{roc1}}\) & 78.14 & 42.89 & 27.19 \\ & +TARO & \(\mathcal{L}_{\text{ours-roc1}}\) & **80.06** & **45.37** & **27.95** \\ & ACL [22] & \(\mathcal{L}_{\text{acl}}\) & **79.96** & 39.37 & **35.97** \\ & +TARO & \(\mathcal{L}_{\text{ours-acl}}\) & 78.45 & **39.71** & 35.81 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Experimental results against white-box attacks on ResNet18 trained on the CIFAR-10 dataset. To see the effectiveness, we test TARO on contrastive adversarial SSL, i.e., RoCL, and ACL.
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & Clean & PGD \\ \hline RoCL & **73.93** & 18.62 \\ +TARO & 65.21 & **19.13** \\ \hline SimSiam* & **53.34** & 11.24 \\ +TARO & 50.50 & **25.44** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results of linear evaluation in a larger dataset, CIFAR-100.
and far from the class boundaries. On the other hand, targeted adversarial examples are located near the class boundaries (Figure 2(b)), although it is generated in an unsupervised manner without any access to class labels. This visualization shows that our targeted attack generates relatively more effective adversarial examples than untargeted attacks, which is likely to push the decision boundary to learn more discriminative representation space for instances belonging to different classes.
Ablation study of the score functionTo demonstrate the effect of each component in our score function, we conduct an ablation study of the score function \(\mathcal{S}\). The score function consists of two terms: the entropy term and the cosine similarity term (Eq. 10), which together contribute to finding an effective target that is different but confusing. We empirically validate each term by conducting an ablation experiment using only a single term in the score function during adversarial SSL training in Eq. 11. The experimental results in Table 8, suggest that the entropy term leads to good clean accuracy while the similarity term focuses on achieving better robust performance. Thus the combined score function enables our model to achieve good robustness while maintaining its accuracy on clean examples.
## 5 Conclusion
In this paper, we demonstrate that a simple combination of supervised adversarial training with self-supervised learning is highly suboptimal due to the ineffectiveness of adversarial examples generated by untargeted attacks in positive-pair only SSL, which perturb to random latent space without considering decision boundaries. To address this limitation, we proposed an instance-wise targeted attack scheme for adversarial self-supervised learning. This scheme selects the target instance based on similarity and entropy, such that the given instance is perturbed to be similar to the selected target. Our targeted adversarial self-supervised learning yields representations that achieve better robustness when applied to any type of adversarial self-supervised learning, including positive-pair only SSL and contrastive SSL. We believe that our work paves the way for future research in exploring more effective attacks for adversarial self-supervised learning.
## Limitations
Our method's main constraint is that our score function's design relies on empirical design based on the previous works. Establishing the most optimal score function theoretically for a high-dimensional, non-linear deep learning model is a complex task. Despite this, we've provided a theoretical basis for how a targeted attack can improve robustness in a simple scenario for positive pairs. Our experimental results also confirm our score function's effectiveness, suggesting we've made various efforts to counterbalance our limitations. Additionally, our method demands more computational time than a simple untargeted adversarial training, given the need to select a target instance. Yet, this extra computational time is less than 5% compared to original training time. Considering the significant boost in robustness, we believe it's a reasonable trade-off to implement our method. Despite these limitations, we've identified a significant vulnerability in the untargeted attack method--an essential discovery for adversarial self-supervised learning. Moreover, we suggest a simple yet effective way to address this vulnerability in adversarial self-supervised learning.
## Acknowledgement
This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-00153) and by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program(KAIST)). We thank Jin Myung Kwak, Eunji Ko, Jihoon Tack, and Yulmu Kim for providing helpful feedbacks and support in journey of this research. We also thank the anonymous reviewers for their insightful comments and suggestions.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Clean & PGD & AutoAttack \\ \hline \(\mathcal{S}_{\texttt{entropy}}\) & 78.43 & 40.35 & 32.51 \\ \(\mathcal{S}_{\texttt{similarity}}\) & 72.90 & 44.59 & 36.12 \\ \(\mathcal{S}_{\texttt{TABO}}\) & 74.06 & 44.71 & 36.39 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Results of ablation study on score function on CIFAR-10.
## References
* [1]M. Andriushchenko, F. Croce, N. Flammarion, and M. Hein (2020) Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision, pp. 484-501. Cited by: SS1.
* [2]N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy (sp), pp. 39-57. Cited by: SS1.
* [3]Y. Carmon, A. Raghunathan, L. Schmidt, P. Liang, and J. C. Duchi (2019) Unlabeled data improves adversarial robustness. Advances in Neural Information Processing Systems. Cited by: SS1.
* [4]M. Caron, H. Touvron, I. Misra, H. Jegou, J. Mairal, P. Bojanowski, and A. Joulin (2021) Emerging properties in self-supervised vision transformers. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 9650-9660. Cited by: SS1.
* [5]D. M. Chan, R. Rao, F. Huang, and J. F. Canny (2019) Gpu accelerated t-distributed stochastic neighbor embedding. Journal of Parallel and Distributed Computing131, pp. 1-13. Cited by: SS1.
* [6]T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, Cited by: SS1.
* [7]X. Chen and K. He (2021) Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15750-15758. Cited by: SS1.
* [8]F. Croce and M. Hein (2020) Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning, pp. 2206-2216. Cited by: SS1.
* [9]F. Croce and M. Hein (2020) Minimally distorted adversarial examples with a fast adaptive boundary attack. In International Conference on Machine Learning, pp. 2196-2205. Cited by: SS1.
* [10]G. Weiguang Ding, Y. Sharma, K. Y. Chau Lui, and R. Huang (2020) Mma training: direct input space margin maximization through adversarial training. In International Conference on Learning Representations, Cited by: SS1.
* [11]A. Dosovitskiy, P. Fischer, J. T. Springenberg, M. Riedmiller, and T. Brox (2015) Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence38 (9), pp. 1734-1747. Cited by: SS1.
* [12]L. Fan, S. Liu, P. Chen, G. Zhang, and C. Gan (2021) When does contrastive learning preserve adversarial robustness from pretraining to finetuning?. Advances in Neural Information Processing Systems34. Cited by: SS1.
* [13]L. Gao, Q. Zhang, J. Song, X. Liu, and H. T. Shen (2020) Patch-wise attack for fooling deep neural network. In European Conference on Computer Vision, pp. 307-322. Cited by: SS1.
* [14]I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations, Cited by: SS1.
* [15]S. Gowal, P. Huang, A. van den Oord, T. Mann, and P. Kohli (2021) Self-supervised adversarial robustness for the low-label, high-data regime. In International Conference on Learning Representations, Cited by: SS1.
* [16]S. Gowal, S. Rebuffi, O. Wiles, F. Stimberg, D. Andrei Calian, and T. A. Mann (2021) Improving robustness using generated data. Advances in Neural Information Processing Systems34, pp. 4218-4233. Cited by: SS1.
* [17] Jean-Bastien Grill, Florian Strub, Florent Altche, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. _Advances in Neural Information Processing Systems_, 2020.
* [18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _IEEE Conference on Computer Vision and Pattern Recognition_, pages 770-778, 2016.
* [19] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2020.
* [20] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In _International Conference on Learning Representations_, 2019.
* [21] Dorjan Hitaj, Giulio Pagnotta, Iacopo Masi, and Luigi V Mancini. Evaluating the robustness of geometry-aware instance-reweighted adversarial training. _International Conference on Learning Representations_, 2021.
* [22] Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. In _Advances in Neural Information Processing Systems_, 2020.
* [23] Minseon Kim, Jihoon Tack, and Sung Ju Hwang. Adversarial self-supervised contrastive learning. _Advances in Neural Information Processing Systems_, 2020.
* [24] Minseon Kim, Jihoon Tack, Jinwoo Shin, and Sung Ju Hwang. Rethinking the entropy of instance in adversarial training. In _First IEEE Conference on Secure and Trustworthy Machine Learning_, 2023.
* [25] Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_, volume 139 of _Proceedings of Machine Learning Research_, pages 5637-5664. PMLR, 18-24 Jul 2021.
* [26] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In _Advances in Neural Information Processing Systems_, pages 1097-1105, 2012.
* [27] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. _arXiv preprint arXiv:1607.02533_, 2016.
* [28] Ya Le and X. Yang. Tiny imagenet visual recognition challenge. In _TinyImageNet_, 2015.
* [29] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In _International Conference on Learning Representations_, 2018.
* [30] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In _European Conference on Computer Vision_, pages 69-84. Springer, 2016.
* [31] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 2536-2544, 2016.
* [32] Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, and Tom Goldstein. Adversarially robust transfer learning. _International Conference on Learning Representations_, 2020.
* [33] Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. _IEEE Transactions on Evolutionary Computation_, 23(5):828-841, 2019.
* [34] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. _arXiv preprint arXiv:1312.6199_, 2013.
* [35] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In _European Conference on Computer Vision_, 2020.
* [36] Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning. In _Advances in Neural Information Processing Systems_, 2020.
* [37] Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In _International Conference on Learning Representations_, 2019.
* [38] Dongxian Wu, Shu-Tao Xia, and Yisen Wang. Adversarial weight perturbation helps robust generalization. _Advances in Neural Information Processing Systems_, 33, 2020.
* [39] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stephane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In _International Conference on Machine Learning_, pages 12310-12320. PMLR, 2021.
* [40] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In _International Conference on Machine Learning_, 2019.
* [41] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In _European Conference on Computer Vision_, pages 649-666. Springer, 2016.
## Effective Targeted Attack for Adversarial Self-Supervised Learning
**Supplementary Material**
## Appendix A Baselines.
* **RoCL [23].** RoCL is SimCLR [6] based adversarial self-supervised learning methods. We experiment with the official code+. To make a fair comparison, we set the attack step to \(10\) as other baselines. We train the model with 1,000 epochs under the LARS optimizer with weight decay \(2e-6\) and momentum with \(0.9\). For the learning rate schedule, we also followed linear warmup with cosine decay scheduling. We set a batch size of 512 for all datasets (CIFAR-10, CIFAR-100, STL10). For data augmentation, we use a random crop with 0.08 to 1.0 size, horizontal flip with a probability of 0.5, color jitter with a probability of 0.8, and grayscale with a probability of 0.2 for RoCL training. Footnote †: [https://github.com/Kim-Minseon/RoCL](https://github.com/Kim-Minseon/RoCL)
* **ACL [22].** ACL is SimCLR [6] based adversarial self-supervised learning methods. We conduct the experiment with the official code++. To make a fair comparison, we set the attack step to \(10\) as other baselines. We train the model with 1,000 epochs. We set a batch size of 512 for STL10 dataset. For CIFAR-10, and CIFAR-100, we use the official pretrained checkpoints. For data augmentation, we use a random crop with 0.08 to 1.0 size, horizontal flip with a probability of 0.5, color jitter with a probability of 0.8, and grayscale with a probability of 0.2 for ACL training. We set PGD dual mode which calculates both clean and adversarial during the training. Footnote †: [https://github.com/Vitra-Group/Adversarial-Contrastive-Learning](https://github.com/Vitra-Group/Adversarial-Contrastive-Learning)
* **BYORL [15]** BYORL is BYOL [17] based adversarial self-supervised learning methods for low label regime. Since there is no official code for BYORL we implement the BYORL by ourselves. We implement based on BYOL from a self-supervised learning library ++. We use the same CIFAR-10 setting in the library except for normalization. We exclude normalization in the data augmentation. To make a fair comparison, we implement on the ResNet18 with attack step 10 of PGD. As shown in supplementary materials in [15], when the model is trained with 10 steps in ResNet34 it shows 37.88% of robustness. We conjecture that we have a different performance from the original paper because the original paper employs 40 steps of PGD in WideResNet34 to obtain the reported robustness which requires extraordinary computation power. Footnote ††: [https://github.com/Vitra-Group/Adversarial-Contrastive-Learning](https://github.com/Vitra-Group/Adversarial-Contrastive-Learning)
* **AdvCL [12].** AdvCL is SimCLR [6] based adversarial self-supervised learning which employ pseudo labels from the model that is pretrained on ImageNet [26] data. Even though the outstanding performance of AdvCL, we exclude this model as our baseline because the proposed methods require the model that is trained with the labels of ImageNet which we assume to have no label information for training.
## Appendix B Detailed description of experimental setups.
### Resource description.
All experiments are conducted with a two NVIDIA RTX 2080 Ti, except for the experiments with CIFAR-100 experiments. For CIFAR-100 experiments, two NVIDIA RTX 3080 are used. All experiments are processed in Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz.
### Training detail.
For all methods, we train on ResNet18 [18] with \(\ell_{\infty}\) attacks with attack strength of \(\epsilon=8/255\) and step size of \(\alpha=2/255\), with the number of inner maximization iterations set to \(K=10\). For the optimization, we train every model for \(800\) epochs using the SGD optimizer with the learning rate of0.05, weight decay of \(5\mathrm{e}{-4}\), and the momentum of 0.9. For data augmentation, we use a random crop with 0.08 to 1.0 size, horizontal flip with a probability of 0.5, color jitter with a probability of 0.8, and grayscale with a probability of 0.2. We exclude normalization for adversarial training. We set the weight of adversarial similarity loss \(w\) as 2.0. We use batch size 512 with two GPUs.
In the score function, we calculate the similarity score term and the entropy term as shown in Equation 11. First, to exclude the positive pairs' similarity score we set the similarity score between positive pairs to \(-1\). Then, to calculate the overall score, after obtaining the similarity score and entropy of each sample, we normalize each component with Euclidean normalization to balance each component to the score function. Further, the detailed algorithm of TARO for contrastive SSL is described in Algorithm 1 and Eq. 14.
```
Input: Dataset \(\mathcal{D}\), transformation function \(\mathbf{t}\), model \(f\), parameter of model \(\theta\), target score function \(\mathcal{S}\) for\(\mathtt{iter}\in\) number of iteration do for\(x_{i}\in\) miniBatch \(B=\{x_{1},\dots,x_{m}\}\)do for n in 2 do Transform input \(\mathbf{t}_{n}(x_{i})\) Find target images \(\mathbf{t}_{n}(x_{k})\) from \(\mathcal{S}(\mathbf{t}_{n}(x_{i}),\mathtt{batch})\) Generate targeted adversarial examples \(\mathcal{L}_{\mathtt{cost-attack}}=\mathcal{L}_{\mathtt{at-search}}(\mathbf{t} _{1}(x_{i}),\{\emptyset\},\{\mathbf{t}_{1}(x_{i})_{\{\mathtt{tape}\}}\})+ \mathcal{L}_{\mathtt{similarity}}(\mathbf{t}_{1}(x_{i}),\mathbf{t}_{2}(x_{k}))\) \(\delta^{t+1}=\Pi_{B(0,\alpha)}\Big{(}\delta^{t}+\alpha\mathtt{sign}\Big{(} \nabla_{\delta^{t}}[\mathcal{L}_{\mathtt{cost-attack}}]\Big{)}\Big{)}\) \(\mathbf{t}_{n}(x_{i})^{adv}=\mathbf{t}_{n}(x_{i})+\delta^{t}\) endfor Calculate training loss \(\mathcal{L}_{\mathtt{TARO}}=\mathcal{L}_{\mathtt{at-search}}(\mathbf{t}_{1}(x_ {i}),\{\mathbf{t}_{2}(x_{i}),\mathbf{t}_{1}(x_{i})^{adv}\},\{\mathbf{t}_{1}(x _{i})_{\{\mathtt{teng}\}}\})\) endfor \(\theta\leftarrow\theta-\beta\nabla_{\theta}\mathcal{L}_{\mathtt{TARO}}\) endfor
```
**Algorithm 1** Targeted Attack Robust Self-Supervised Learning (TARO) for contrastive-based SSL
### Evaluation details.
**PGD \(\ell_{\infty}\) attack.** For all PGD \(\ell_{\infty}\) attacks used in the test time, we use the projected gradient descent (PGD) attack with the strength of \(\epsilon=8/255\), with the step size of \(\alpha=8/2550\), and with the number of inner maximization iteration set to \(K=20\) with the random start.
**AutoAttack.** We further test against a strong gradient-based attack, i.e., AutoAttack (AA) [8]. AutoAttack is an ensemble attack of four different attacks (APGD-CE, APGD-T, FAB-T [9], and Square [1]). AGPD-CE is an untargeted attack, APGD-T and FAB-T are targeted attacks. The Square is a black-box attack. We use an official code to test models8.
Footnote 8: [https://github.com/fra31/auto-attack](https://github.com/fra31/auto-attack)
**Self-supervised learning.** For self-supervised learning, we denote linear evaluation when we use only clean images to train the fully connected (fc) layer after the pretraining phase. When we denote robust linear evaluation, we train the fc layer with adversarial examples. While ACL uses partial fine-tuning to obtain their reported accuracy and robustness, to make a fair comparison, we freeze the encoder and train only the fc layer. Robust fine-tuning is training all parameters including parameters of the encoder with adversarial examples. For linear evaluation, we followed the baseline hyperparameters for each model. We train the baseline models with 150 epochs, 25 epochs, and 50 epochs for RoCL, and ACL, respectively. We also followed their learning rate of 0.1, 0.1, and \(2\times 10^{-3}\) for RoCL, and ACL, respectively. On the other hand, we train our model with 100 epochs with a learning rate of 0.5 for linear evaluation. We use AT loss for robust linear evaluation except for ACL. For ACL, we use TRADES loss as the official code.
## Appendix C Experimental Details of Analysis.
Analysis the distribution of target class.To analyze the target from the score function (\(\mathcal{S}\)), we employ an adversarially supervised trained model. We calculate the score function that is trained with our TARO on SimSiam. We use a train set. For each class, we calculate the mean predict probability, which is the average of all softmax outputs of target images from the supervised trained model. Further, we also count the number of samples that are predicted for each class. In Figure 2, the results are target images of the airplane as a base image. There is a similar tendency even though we change the base class to other classes as shown in the following Figure 4.
Visualization of embedding space.To visualize the embedding of our targeted attack and untargeted attack, we use t-Distributed Stochastic Neighbor Embedding (t-SNE) [5] with the cosine similarity metric. Our TARO model is trained on CIFAR-10 as a feature extractor. We sample a few examples and conduct two types of attack, the untargeted attack and the targeted attack. To visualize more effectively we ignore the other seven classes in CIFAR-10. We visualize clean examples from three classes and then visualize adversaries that are generated with our targeted attack and untargeted attack, respectively, with dark blue.
## Appendix D Additional Experiment
Contrastive based adversarial self-supervised learning with TARO.Our TARO could be also applied to positive pairs in contrastive-based adversarial self-supervised learning (e.g., RoCL [23], ACL [22]). We applied our TARO in instance-wise attack of the contrastive-based approaches as follow,
\[\mathcal{L}_{\texttt{attack}}=\mathcal{L}_{\texttt{nt-xent}}(x,\{\emptyset\}, \{x_{\texttt{neg}}\})+\mathcal{L}_{\texttt{similarity}}(x,\{x_{j_{\texttt{ time}}}\}) \tag{14}\]
where attack loss is consists of original attack loss nt-xent loss [6] and similarity loss. The similarity loss additionally constrains the positive pairs as the TARO that maximize the similarity between the \(x\) with the \(j^{th}\) index images which is searched by our TARO score function. Overall, we generate adversarial examples that maximizes the \(\mathcal{L}_{\texttt{attack}}\) loss. Surprisingly, when we apply TARO on the contrastive learning based approach, previous work could achieve marginally better clean accuracy and robustness. This shows that our empirical assumption also holds on contrastive-based SSL but since there is (1/batch size) effects on the total loss the gain could be marginal.
Robustness against black box attackWe conduct black box attack to verify our model is robust to gradient free attacks. We generate black box adversaries with AT [29] model, RoCL [23] model and our models. Then, we test adversaries to each other. As show in the table, our model is able to defend the black box attack from AT model than the RoCL model. Moreover, our model generates stronger black box adversaries than RoCL since AT model shows more weak robustness.
Robustness against diverse attacksWe tested our approach against diverse types of adversarial attacks, including the Carlini-Wagner (CW) attack [2], black-box attack, i.e., Pixle [33], and Patchattack, i.e., PIFGSM [13], as shown in Table 10. Since our approach already showed improved performance against Autoattack, which includes black-box Square attacks, our approach is able to consistently demonstrates enhanced robustness against both the CW attack and black-box attacks.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Method & PGD & CW & Pixle & PIFGSM \\ \hline RoCL & 42.89 & 76.45 & 67.32 & 43.23 \\ +TARO & 45.37 & 72.75 & 68.40 & 44.56 \\ SimSiam & 32.28 & 68.14 & 54.56 & 28.31 \\ +TARO & 44.97 & 73.87 & 67.22 & 46.37 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Results against diverse attacks.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & AT & RoCL & Ours \\ \hline AT & - & 59.73 & **60.92** \\ RoCL & 70.40 & - & 57.98 \\ Ours & **69.97** & 54.99 & - \\ \hline \hline \end{tabular}
\end{table}
Table 9: Results of black box attack. Models on the row are the tested models. Models on the columns are the source models to generate black box adversaries.
Figure 4: Analysis of target distribution in different classes
Proof of the Theorem
Let us consider the problem as a simple binary task using a linear layer model to demonstrate our theoretical motivation. The dataset \(\mathcal{D}=X\), \(\cdot\) consists of training examples, where \(x\in X\) represents a training example without any class label. We assume there is a single positive pair and a single negative pair. The linear model is denoted as \(f(\cdot)\). The adversarial perturbations generated using both losses are as follows:
\[x_{\texttt{as}}^{\texttt{adv}}=x+\arg\max_{\delta}\left\{\frac{f(x+\delta)}{ \|f(x+\delta)\|}\cdot\frac{f(x)}{\|f(x)\|}\right\}\quad\text{subject to}\quad \|\delta\|\leq\epsilon, \tag{15}\]
where we approximate the cosine similarity distance loss into \(\ell_{1}\) distance function. In both cases, a \(\delta\) maximizes the respective loss, subject to the constraint that the norm of \(\delta\) is less than or equal to \(\epsilon\). The objective in positive-only SSL is to make the perturbed and original samples dissimilar as follows,
\[\delta_{\texttt{as}}=\arg\max_{\delta}|f(x)-f(x+\delta)|, \tag{16}\]
\[\delta_{\texttt{nt-xent}}=\arg\max_{\delta}|f(x)-f(x+\delta)|-|f(x_{neg})-f(x +\delta)|. \tag{17}\]
The range of adversarial attack of each loss is then calculated as follow,
\[\begin{split}\|\delta_{\texttt{as}}\|=&\|\arg\max_{ \delta}|f(x)-f(x+\delta)|\|\\ =&\arg\max_{\delta}(|f(x)-f(x+\delta)|)^{2}\end{split} \tag{18}\]
\[\begin{split}\|\delta_{\texttt{nt-xent}}\|&=\|\arg \max_{\delta}\left(|f(x)-f(x+\delta)|-|f(x_{\texttt{neg}})-f(x+\delta)|\right) \|\\ &=\arg\max_{\delta}\left(|f(x)-f(x+\delta)|^{2}-2|f(x)-f(x+\delta) |\cdot|f(x_{\texttt{neg}})-f(x+\delta)|\right.\\ &\quad\left.+|f(x_{\texttt{neg}})-f(x+\delta)|^{2}\right)\\ &\approx\arg\max_{\delta}\left(|f(x)-f(x+\delta)|^{2}-2|\delta| \cdot|f(x_{\texttt{neg}})-f(x+\delta)|+|f(x_{\texttt{neg}})-f(x+\delta)|^{2} \right)\\ &\approx\arg\max_{\delta}\left(|f(x)-f(x+\delta)|^{2}+|f(x_{ \texttt{neg}})-f(x+\delta)|^{2}\right)\quad\because\delta\leq\epsilon\\ &\geq\arg\max_{\delta}|f(x)-f(x+\delta)|^{2}.\end{split} \tag{19}\]
If there are more negative pairs, the difference in perturbation range between positive-pair-only attacks and contrastive attacks could become more pronounced.
**Theorem E.1** (Perturbation range of self-supervised learning loss).: _Given a model trained under the positive-only distance loss, the adversarial perturbations \(\delta_{\texttt{as}}\) are likely to be smaller than those perturbations \(\delta_{\texttt{nt-xent}}\) from a model trained under the positive-pair and negative-pair distance loss. Formally, \(\|\delta_{\texttt{as}}\|_{\infty}<\|\delta_{\texttt{nt-xent}}\|_{\infty}\)._
When applying a random targeted attack within the positive-pair-only self-supervised learning framework, we can effectively increase the range of perturbations. Let us assume that the target instance \(x_{\texttt{target}}\) is different from the original instance \(x\), and the distance between them is greater than the threshold \(\delta\). The perturbations generated through the targeted attack are as follows:
\[\delta_{\texttt{targeted-attack}}=\arg\max_{\delta}|f(x+\delta)-f(x_{\texttt{ target}})|. \tag{20}\]
Let us denote target instance \(x_{\texttt{target}}\) as \(x^{\prime}\) for simple equations,
\[\begin{split}|f(x)-f(x+\delta)|&<|f(x^{\prime})-f (x+\delta)|\\ &\therefore|x^{\prime}-x|>\delta\\ &=|f(x+\delta)-f(x^{\prime})|\\ &\therefore\|\arg\max_{\delta}|f(x)-f(x+\delta)|\|<\|\arg\max_{ \delta}|f(x+\delta)-f(x^{\prime})|\|\end{split} \tag{21}\]The random targeted attack, which targets instances that are at a greater distance than \(\delta\) from the original input, can potentially increase the perturbation range and ultimately enhance overall robustness.
**Theorem E.2** (Perturbation range of targeted attack).: _Given a model trained under the \(\mathcal{L}_{\texttt{targeted-attack}}\) loss, the adversarial perturbations \(\delta_{\texttt{targeted-attack}}\) are likely to be larger than those from a model trained under the \(\mathcal{L}_{ss}\). Formally, \(\|\delta_{\texttt{targeted-attack}}\|_{\infty}>\|\delta_{ss}\|_{\infty}\)._
## Appendix F Broader Impacts
The pursuit of adversarial robustness against malicious attacks within deep neural networks remains an unsolved, yet fundamental area of deep learning research. To date, several self-supervised adversarial training approaches have been proposed, primarily based on the contrastive learning framework. However, the attainment of robustness via a 'positive-pair only' self-supervised learning approach is still under-explored. Consequently, self-supervised frameworks have evolved from large batch contrastive learning to a focus on single 'positive-pair only' learning paradigms. The area of self-supervised learning that we are targeting aims to delve into the robustness of these new learning frameworks through our tailored attacks. Furthermore, we believe that achieving superior robustness in self-supervised learning is a crucial research path towards achieving authentic robustness in representation. We hope that our work will inspire more research aimed at achieving generalizable robustness in unseen domains and datasets by leveraging the potential of various self-supervised frameworks. | ## Review
### Summary
This paper investigates the use of targeted adversarial attacks in self-supervised learning (SSL), specifically addressing the limitations of existing positive-only SSL frameworks that suffer from poor downstream robustness. The proposed TARO paradigm selects the most confusing samples for adversarial training, significantly improving robustness compared to traditional untargeted attacks. Empirical results show that TARO enhances the performance of several SSL methods, including BYOL and SimSiam. While the paper is well-organized and clearly motivated, it faces scrutiny regarding its novelty and the rigor of its empirical evaluations.
### Strengths
- The motivation of the paper is clear and addresses a significant limitation in existing positive-only SSL frameworks.
- The experimental evaluation demonstrates that TARO can improve the adversarial robustness of positive-pair-only SSL methods.
- The paper is well-written, organized, and easy to follow, with useful background information provided.
### Weaknesses
- The novelty of the proposed method is incremental and the technical design appears straightforward without critical innovations.
- The paper lacks sufficient literature review, with references primarily before 2022, missing relevant recent works.
- The experiments are unconvincing due to the use of outdated baselines; comparison against a wider range of contemporary methods is needed.
- Mathematical notation and clarity in some sections can be improved, leading to potential confusion about the method's details.
- The individual contributions of TARO's components are not sufficiently analyzed, raising questions about which aspects drive performance improvements.
### Questions
- Can the proposed method be applied to transformer-based models, such as ViT?
- How does TARO's robustness transfer from known attacks to unknown attacks?
- What is the AT model used in the analysis, and how was it trained?
- Could the authors clarify the notation used in the similarity and entropy-based target selection section?
### Soundness
**Score:** 2
**Description:** The soundness of the paper is fair. While the proposed method shows some promise, the lack of comprehensive evaluation and clarity in the mathematical formulation raises concerns about its robustness.
### Presentation
**Score:** 3
**Description:** The presentation of the paper is good. It is generally well-organized and easy to understand, though improvements in clarity, especially in mathematical notation, would enhance comprehension.
### Contribution
**Score:** 3
**Description:** The contribution of the paper is good, but the incremental nature of the advancements and limited comparisons to state-of-the-art methods detract from its overall impact.
### Rating
**Score:** 6
**Description:** The rating is a weak accept. The paper is technically solid with moderate-to-high impact, but it requires further improvements, especially in evaluation and addressing the identified weaknesses.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The decision to accept is based on the paper's clear motivation, sound writing, and demonstrated empirical results. However, the concerns regarding novelty, the rigor of the experiments, and clarity in presentation suggest that while the paper is acceptable, it would benefit from revisions to enhance its contributions and address weaknesses.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
TopP&R: Robust Support Estimation Approach for Evaluating Fidelity and Diversity in Generative Models
Pum Jun Kim\({}^{1}\) Yoojin Jang\({}^{1}\) Jisu Kim\({}^{2,3,4}\) Jaejun Yoo\({}^{1}\)
\({}^{1}\)Ulsan National Institute of Science and Technology
\({}^{2}\)Seoul National University \({}^{3}\)Inria \({}^{4}\)Paris-Saclay University
{pumjun.kim,softjin,jaejun.yooj}@unist.ac.kr
[email protected]
###### Abstract
We propose a robust and reliable evaluation metric for generative models called Topological Precision and Recall (TopP&R, pronounced "topper"), which systematically estimates supports by retaining only topologically and statistically significant features with a certain level of confidence. Existing metrics, such as Inception Score (IS), Frechet Inception Distance (FID), and various Precision and Recall (P&R) variants, rely heavily on support estimates derived from sample features. However, the reliability of these estimates has been overlooked, even though the quality of the evaluation hinges entirely on their accuracy. In this paper, we demonstrate that current methods not only fail to accurately assess sample quality when support estimation is unreliable, but also yield inconsistent results. In contrast, TopP&R reliably evaluates the sample quality and ensures statistical consistency in its results. Our theoretical and experimental findings reveal that TopP&R provides a robust evaluation, accurately capturing the true trend of change in samples, even in the presence of outliers and non-independent and identically distributed (Non-IID) perturbations where other methods result in inaccurate support estimations. To our knowledge, TopP&R is the first evaluation metric specifically focused on the robust estimation of supports, offering statistical consistency under noise conditions.
## 1 Introduction
In keeping with the remarkable improvements of deep generative models [1, 2, 3, 4, 5, 6, 7, 8, 9], evaluation metrics that can well measure the performance of generative models have also been continuously developed [10, 11, 12, 13, 14]. For instance, Inception Score (IS) [10] measures the Kullback-Leibler divergence between the real and fake sample distributions. Frechet Inception Score (FID) [11] calculates the distance between the real and fake supports using the estimated mean and variance under the multi-Gaussian assumption. The original Precision and Recall [12] and its variants [13, 14] measure fidelity and diversity separately, where fidelity is about how closely the generated samples resemble real samples, while diversity is about whether a generative model can generate samples that are as diverse as real samples.
Considering the eminent progress of deep generative models based on these existing metrics, some may question why we need another evaluation study. In this paper, we argue that we need more reliable evaluation metrics now precisely, because deep generative models have reached sufficient maturity and evaluation metrics are saturated (Table 8 in [15]). Even more, it has been recently reported that even the most widely used evaluation metric, FID, sometimes doesn't match with the expected perceptual quality, fidelity, and diversity, which means the metrics are not always working properly [16].
In addition, existing metrics are vulnerable to the existence of noise because all of them rely on the assumption that real data is clean. However, in practice, real data often contain lots of artifacts, such as mislabeled data and adversarial examples [17; 18], which can cause the overestimation of the data distribution in the evaluation pipeline. This error seriously perturbs the scores, leading to a false impression of improvement when developing generative models. See Appendix G.2 for possible scenarios. Thus, to provide more comprehensive ideas for improvements and to illuminate a new direction in the generative field, we need a more robust and reliable evaluation metric.
An ideal evaluation metric should effectively capture significant patterns (signal) within the data, while being robust against insignificant or accidental patterns (noise) that may arise from factors such as imperfect embedding functions, mislabeled data in the real samples, and other sources of error. Note that there is an inherent tension in developing metrics that meet these goals. On one hand, the metric should be sensitive enough so that it can capture real signals lurking in data. On the other hand, it must ignore noises that hide the signal. However, sensitive metrics are inevitably susceptible to noise to some extent. To address this, one needs a systematic way to answer the following three questions: 1) What is signal and what is noise? 2) How do we draw a line between them? 3) How confident are we on the result?
One solution can be to use the idea of Topological Data Analysis (TDA) [19] and statistical inference. TDA is a recent and emerging field of data science that relies on topological tools to infer relevant features for possibly complex data. A key object in TDA is persistent homology, which observes how long each topological feature would survive over varying resolutions and provides a measure to quantify its significance; _i.e._, if some features persist longer than others over varying resolutions, we consider them as topological signals and vice versa as noise.
In this paper, we combine these ideas to estimate supports more robustly and overcome various issues of previous metrics such as unboundedness, inconsistency, etc. Our main contributions are as follows: we establish 1) a systematic approach to estimate supports via Kernel Density Estimator (KDE) derived under topological conditions; 2) a new metric that is robust to outliers while reliably detecting the change of distributions on various scenarios; 3) a consistency guarantee with robustness under very weak assumptions that are suitable for high dimensional data; 4) a combination of a noise framework and a statistical inference in TDA. Our code is available at TopP&R-NeurIPS 2023.
## 2 Background
To lay the foundation for our method and theoretical analysis, we first explain the previous evaluation method Precision and Recall (P&R). Then, we introduce the main idea of persistent homology and its confidence estimation techniques that bring the benefit of using topological and statistical tools for addressing uncertainty in samples. For the sake of space constraints and to streamline the discussion of our main idea, we only provide a brief overview of the concepts that are relevant to this work and refer the reader to Appendix A or [20; 21; 22; 23] for further details on TDA.
Figure 1: **Illustration of the proposed evaluation pipeline. (a) Confidence band estimation in Section 2, (b) Robust support estimation, and (c) Evaluation via TopP&R in Section 3.**
**Notation.** For any \(x\) and \(r>0\), we use the notation \(\mathcal{B}_{d}(x,r)=\{y:d(y,x)<r\}\) be the open ball in distance \(d\) of radius \(r\). We also write \(\mathcal{B}(x,r)\) when \(d\) is understood from the context. For a distribution \(P\) on \(\mathbb{R}^{d}\), we let \(\operatorname{supp}(P)\coloneqq\{x\in\mathbb{R}^{d}:P(\mathcal{B}(x,r))>0\text { for all }r>0\}\) be the support of \(P\). Throughout the paper, we refer to \(\operatorname{supp}(\mathrm{P})\) as support of \(P\), or simply support, or manifold, but we don't necessarily require the (geometrical) manifold structure on \(\operatorname{supp}(\mathrm{P})\). Note that, when the support is high dimensional, what we can recover at most through estimation is the partial support. For a kernel function \(K:\mathbb{R}^{d}\to\mathbb{R}\), a dataset \(\mathcal{X}=\{X_{1},\ldots,X_{n}\}\subset\mathbb{R}^{d}\) and bandwidth \(h>0\), we let the kernel density estimator (KDE) as \(\hat{p}_{h}(x)\coloneqq\frac{1}{nh^{d}}\sum_{i=1}^{n}K\left(\frac{x-X_{i}}{h}\right)\), and we let the average KDE as \(p_{h}\coloneqq\mathbb{E}\left[\hat{p}_{h}\right]\). We denote by \(P\), \(Q\) the probability distributions in \(\mathbb{R}^{d}\) of real data and generated samples, respectively. And we use \(\mathcal{X}=\{X_{1},\ldots,X_{n}\}\subset\mathbb{R}^{d}\) and \(\mathcal{Y}=\{Y_{1},\ldots,Y_{m}\}\subset\mathbb{R}^{d}\) for real data and generated samples possibly with noise, respectively.
**Precision and Recall.** There exist two aspects of generative samples' quality; fidelity and diversity. Fidelity refers to the degree to which the generated samples resemble the real ones. Diversity, on the other hand, measures whether the generated samples cover the full variability of the real samples. Sajjadi et al. [12] was the first to propose assessing these two aspects separately via Precision and Recall (P&R). In the ideal case where we have full access to the probability distributions \(P\) and \(Q\), \(\operatorname{precision}_{P}(Q)\coloneqq Q\left(\operatorname{supp}(P)\right)\), \(\operatorname{recall}_{Q}(P)\coloneqq P\left(\operatorname{supp}(Q)\right)\), which correspond to the max Precision and max Recall in [12], respectively.
**Persistent homology and diagram.** Persistent homology is a tool in computational topology that measures the topological invariants (homological features) of data that persist across multiple scales, and is represented in the persistence diagram. Formally, let _filtration_ be a collection of subspaces \(\mathcal{F}=\{\mathcal{F}_{\delta}\subset\mathbb{R}^{d}\}_{\delta\in\mathbb{R}}\) with \(\delta_{1}\leq\delta_{2}\) implying \(\mathcal{F}_{\delta_{1}}\subset\mathcal{F}_{\delta_{2}}\). Typically, filtration is defined through a function \(f\) related to data. Given a function \(f\colon\mathbb{R}^{d}\to\mathbb{R}\), we consider its sublevel filtration \(\{f^{-1}(-\infty,\delta]\}_{\delta\in\mathbb{R}}\) or superlevel filtration \(\{f^{-1}[\delta,\infty)\}_{\delta\in\mathbb{R}}\). For a filtration \(\mathcal{F}\) and for each nonnegative \(k\), we track when \(k\)-dimensional homological features (_e.g._, \(0\)-dimension: connected component, \(1\)-dimension: loop, \(2\)-dimension: cavity, \(\ldots\)) appear and disappear. As \(\delta\) increases or decreases in the filtration \(\{\mathcal{F}_{\delta}\}\), if a homological feature appears at \(\mathcal{F}_{b}\) and disappears at \(\mathcal{F}_{d}\), we say that it is born at \(b\) and dies at \(d\). By considering these pairs \(\{(b,d)\}\) as points in the plane \((\mathbb{R}\cup\{\pm\infty\})^{2}\), we obtain a _persistence diagram_. From this, a homological feature with a longer life length, \(d-b\), can be treated as a significant feature, and a homological feature with a shorter life length as a topological noise, which lies near the diagonal line \(\{(\delta,\delta):\delta\in\mathbb{R}\}\)(Figure 1 (b)).
**Confidence band estimation.** Statistical inference has recently been developed for TDA [24, 25, 26]. TDA consists of features reflecting topological characteristics of data, and it is of question to systematically distinguish features that are indeed from geometrical structures and features that are insignificant or due to noise. To _statistically_ separate topologically significant features from topological noise, we employ confidence band estimation. Given the significance level \(\alpha\), let confidence band \(c_{\alpha}\) be the bootstrap bandwidth of \(\left\lVert\hat{p}_{h}-p_{h}\right\rVert_{\infty}\), computed as in Algorithm 1 (see Appendix H.2). Then it satisfies \(\liminf_{n\to\infty}\mathbb{P}\left(\left\lVert\hat{p}_{h}-p_{h}\right\rVert_{ \infty}<c_{\alpha}\right)\geq 1-\alpha\), as in Proposition D.2 (see Appendix D). This confidence band can simultaneously determine significant topological features while filtering out noise features. We use \(c_{\mathcal{X}}\) and \(c_{\mathcal{Y}}\) to denote the confidence band defined under significance level \(\alpha\) according to the datasets \(\mathcal{X}\) and \(\mathcal{Y}\). In later sections, we use these tools to provide a more rigorous way of scoring samples based on the confidence level we set.
## 3 Robust support estimation for reliable evaluation
Current evaluation metrics for generative models typically rely on strong regularity conditions. For example, they assume samples are well-curated without outliers or adversarial perturbation, real or generative models have bounded densities, etc. However, practical scenarios are wild: both real and generated samples can be corrupted with noise from various sources, and the real data can be very sparsely distributed without density. In this work, we consider more general and practical situations, wherein both real and generated samples can have noises coming from the sampling procedure, remained uncertainty due to data or model, etc. See Appendix G.2 for more on practical scenarios.
Overview of our metricWe design our metric to evaluate the performance very conservatively. Our metric is based on topologically significant data structures with statistical confidence above a certain level. Toward this, we apply KDE as a function \(f\) to define a filtration, which allowsus to approximate the support with data through \(\{f^{-1}[\delta,\infty)\}_{\delta\in\mathbb{R}}\). Since the significance of data comprising the support is determined by the life length of homological features, we calculate \(c_{\alpha}\) that enables us to systematically separate short/long lifetimes of homological features. We then estimate the supports with topologically significant data structure via superlevel set \(f^{-1}[c_{\alpha},\infty)\) and finally, we evaluate fidelity and diversity with the estimated supports. We have collectively named this process TopP&R. By its nature, TopP&R is bounded and yields consistent performance under various conditions such as noisy data points, outliers, and even with long-tailed data distribution.
### Topological precision and recall
To facilitate our discussion, we rewrite the precision in Section 2 as \(\operatorname{precision}_{P}(\mathcal{Y})=Q\left(\operatorname{supp}(P)\cap \operatorname{supp}(Q)\right)/Q\left(\operatorname{supp}(Q)\right)\) and define the precision of data points as
\[\operatorname{precision}_{P}(\mathcal{Y})\coloneqq\frac{\sum_{j=1}^{m}1\left(Y _{j}\in\operatorname{supp}(P)\cap\operatorname{supp}(Q)\right)}{\sum_{j=1}^ {m}1\left(Y_{j}\in\operatorname{supp}(Q)\right)}, \tag{1}\]
which is just replacing the distribution \(Q\) with the empirical distribution \(\frac{1}{m}\sum_{j=1}^{m}\delta_{Y_{j}}\) of \(Y\) in the precision. Similarly,
\[\operatorname{recall}_{Q}(\mathcal{X})\coloneqq\frac{\sum_{i=1}^{n}1\left(X_{ i}\in\operatorname{supp}(Q)\cap\operatorname{supp}(P)\right)}{\sum_{i=1}^{n}1 \left(X_{i}\in\operatorname{supp}(P)\right)}. \tag{2}\]
In practice, \(\operatorname{supp}(P)\) and \(\operatorname{supp}(Q)\) are not known a priori and need to be estimated, and these estimates should be robust to noise since we allow it now. For this, we use the KDE \(\hat{p}_{h_{n}}(x)\coloneqq\frac{1}{nh_{n}^{d}}\sum_{i=1}^{n}K\left(\frac{x-X_ {i}}{h_{n}}\right)\) of \(\mathcal{X}\) and the bootstrap bandwidth \(c_{\mathcal{X}}\) of \(\left\lVert\hat{p}_{h_{n}}-p_{h_{n}}\right\rVert_{\infty}\), where \(h_{n}>0\) and a significance level \(\alpha\in(0,1)\) (Section 2). Then, we estimate the support of \(P\) by the superlevel set at \(c_{\mathcal{X}}\)1 as \(\operatorname{supp}(P)=\hat{p}_{h_{n}}^{-1}[c_{\mathcal{X}},\infty)\), which allows to filter out noise whose KDE values are likely to be small. Similarly, the support of \(Q\) is estimated: \(\operatorname{supp}(Q)=\hat{q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\), where \(\hat{q}_{h_{m}}(x)\coloneqq\frac{1}{nh_{m}^{d}}\sum_{j=1}^{m}K\left(\frac{x-Y _{j}}{h_{m}}\right)\) is the KDE of \(\mathcal{Y}\) and \(c_{\mathcal{Y}}\) is the bootstrap bandwidth of \(\left\lVert\hat{q}_{h_{m}}-q_{h_{m}}\right\rVert_{\infty}\)
Footnote 1: The computation of \(c_{\alpha}\) and its practical interpretation is described in Algorithm 1.
For the robust estimates of the precision, we apply the support estimates to \(\operatorname{precision}_{P}(\mathcal{Y})\) and \(\operatorname{recall}_{Q}(\mathcal{X})\) and define the topological precision and recall (TopP&R) as
\[\texttt{TopP}_{\mathcal{X}}(\mathcal{Y}) \coloneqq\frac{\sum_{j=1}^{m}1\left(Y_{j}\in\operatorname{supp}( P)\cap\operatorname{supp}(Q)\right)}{\sum_{j=1}^{m}1\left(Y_{j}\in \operatorname{supp}(Q)\right)}\] \[=\frac{\sum_{j=1}^{m}1\left(\hat{p}_{h_{n}}(Y_{j})>c_{\mathcal{X }},\ \hat{q}_{h_{m}}(Y_{j})>c_{\mathcal{Y}}\right)}{\sum_{j=1}^{m}1\left(\hat{q}_{h _{m}}(Y_{j})>c_{\mathcal{Y}}\right)}, \tag{3}\] \[\texttt{TopR}_{\mathcal{Y}}(\mathcal{X}) \coloneqq\frac{\sum_{i=1}^{n}1\left(\hat{q}_{h_{n}}(X_{i})>c_{ \mathcal{Y}},\ \hat{p}_{h_{n}}(X_{i})>c_{\mathcal{X}}\right)}{\sum_{i=1}^{n}1\left( \hat{p}_{h_{n}}(X_{i})>c_{\mathcal{X}}\right)}. \tag{4}\]
The kernel bandwidths \(h_{n}\) and \(h_{m}\) are hyperparameters, and we provide guidelines to select the optimal bandwidths \(h_{n}\) and \(h_{m}\) in practice (See Appendix H.4).
### Bandwidth estimation using bootstrapping
Using the bootstrap bandwidth \(c_{\mathcal{X}}\) as the threshold is the key part of our estimator (TopP&R) for robustly estimating \(\operatorname{supp}(P)\). As we have seen in Section 2, the bootstrap bandwidth \(c_{\mathcal{X}}\) filters out the topological noise in topological data analysis. Analogously, using \(c_{\mathcal{X}}\) allows to robustly estimate \(\operatorname{supp}(P)\). When \(X_{i}\) is an outlier, its KDE value \(\hat{p}_{h}(X_{i})\) is likely to be small as well as the values at the connected component generated by \(X_{i}\). So those components from outliers are likely to be removed in the estimated support \(\hat{p}_{h}^{-1}[c_{\mathcal{X}},\infty)\). Higher dimensional nonlogical noises are also removed. Hence, the estimated support denoises topological noise and robustly estimates \(\operatorname{supp}(P)\). See Appendix C for a more detailed explanation.
Now that we are only left with topological features of high confidence, this allows us to draw analogies to confidence intervals in statistical analysis, where the uncertainty of the samples is treatedby setting the level of confidence. In the next section, we show that TopP&R not only gives a more reliable evaluation score for generated samples but also has good theoretical properties.
### Addressing the curse of dimensionality
As discussed in Section 3.2, getting the bootstrap bandwidth \(c_{\mathcal{X}}\) with a theoretical guarantee plays a key role in our metric, and the choice of KDE as a filtration function is inevitable, as in Remark 4.4. However, computing the support of a high-dimensional feature with KDE demands significant computation, and the accuracy is low due to low density values. This hinders an efficient and correct evaluation in practice. To address this issue, we apply a random projection into a low-dimensional space by leveraging the Johnson-Lindenstrauss Lemma (Lemma B.1). This lemma posits that using a random projection effectively preserves information regarding distances and homological features, composed of high-dimensional features, in a low-dimensional representation. Furthermore, we have shown that random projection does not substantially reduce the influence of noise, nor does it affect the performance of TopP&R under various conditions with complex data (Section 5, I.7, and I.8).
## 4 Consistency with robustness of TopP&R
The key property of TopP&R is consistency with robustness. The consistency ensures that, the precision and the recall we compute from the _data_ approaches the precision and the recall from the _distribution_ as we have more samples. The consistency allows to investigate the precision and recall of full distributions only with access to finite sampled data. TopP&R achieves consistency with robustness, that is, the consistency holds with the data possibly corrupted by noise. This is due to the robust estimation of supports with KDE with confidence bands.
We demonstrate the statistical model for both data and noise. Let \(P\), \(Q\), \(\mathcal{X}\), \(\mathcal{Y}\) be as in Section 2, and let \(\mathcal{X}^{0},\mathcal{Y}^{0}\) be real data and generated data without noise. \(\mathcal{X}\), \(\mathcal{Y}\), \(\mathcal{X}^{0}\), \(\mathcal{Y}^{0}\) are understood as multisets, _i.e._, elements can be repeated. We first assume that the uncorrupted data are IID.
**Assumption 1**.: _The data \(\mathcal{X}^{0}=\{X^{0}_{1},\ldots,X^{0}_{n}\}\) and \(\mathcal{Y}^{0}=\{Y^{0}_{1},\ldots,Y^{0}_{m}\}\) are IID from \(P\) and \(Q\), respectively._
In practice, the data is often corrupted with noise. We consider the adversarial noise, where some fraction of data are replaced with arbitrary point cloud data.
**Assumption 2**.: _Let \(\{\rho_{k}\}_{k\in\mathbb{N}}\) be a sequence of nonnegative real numbers. Then the observed data \(\mathcal{X}\) and \(\mathcal{Y}\) satisfies \(\left|\mathcal{X}\backslash\mathcal{X}^{0}\right|=n\rho_{n}\) and \(\left|\mathcal{Y}\backslash\mathcal{Y}^{0}\right|=m\rho_{m}\)._
In the adversarial model, we control the level of noise by the fraction \(\rho\), but do not assume other conditions such as IID or boundedness, to make our noise model very general and challenging.
For distributions and kernel functions, we assume weak conditions, detailed in Assumption A1 and A2 in Appendix D. Under the data and the noise models, TopP&R achieves consistency with robustness. That is, the estimated precision and recall are asymptotically correct with high probability even if up to a portion of \(1/\sqrt{n}\) or \(1/\sqrt{m}\) are replaced by adversarial noise. This is due to the robust estimation of the support with the kernel density estimator with the confidence band of the persistent homology.
**Proposition 4.1**.: _Suppose Assumption 1, 2, A1, A2 hold. Suppose \(\alpha\to 0\), \(h_{n}\to 0\), \(nh_{n}^{d}\rightarrow\infty\), \(nh_{n}^{-d}\rho_{n}^{2}\to 0\), and similar relations hold for \(h_{m}\), \(\rho_{m}\). Then,_
\[\left|\mathrm{TopP}_{\mathcal{X}}(\mathcal{Y})-\mathrm{precision}_{P}( \mathcal{Y})\right| =O_{\mathbb{P}}\left(Q(B_{n,m})+\rho_{m}\right),\] \[\left|\mathrm{TopR}_{\mathcal{Y}}(\mathcal{X})-\mathrm{recall}_{ Q}(\mathcal{X})\right| =O_{\mathbb{P}}\left(P(A_{n,m})+\rho_{n}\right),\]
_for fixed sequences of sets \(\{A_{n,m}\}_{n,m\in\mathbb{N}},\{B_{n,m}\}_{n,m\in\mathbb{N}}\) with \(P(A_{n,m})\to 0\) and \(Q(B_{n,m})\to 0\) as \(n,m\rightarrow\infty\)._
**Theorem 4.2**.: _Under the same condition as in Proposition 4.1,_
\[\left|\mathrm{TopP}_{\mathcal{X}}(\mathcal{Y})-\mathrm{precision}_{P}(Q)\right| =O_{\mathbb{P}}\left(Q(B_{n,m})+\rho_{m}\right),\] \[\left|\mathrm{TopR}_{\mathcal{Y}}(\mathcal{X})-\mathrm{recall}_{ Q}(P)\right| =O_{\mathbb{P}}\left(P(A_{n,m})+\rho_{n}\right).\]
Since \(P(A_{n,m})\to 0\) and \(Q(B_{n,m})\to 0\), these imply consistencies of TopP&R. In fact, additionally under minor probabilistic and geometrical assumptions, \(P(A_{n,m})\) and \(Q(B_{n,m})\) are of order \(h_{m}+h_{n}\)
**Lemma 4.3**.: _Under the same condition as in Proposition 4.1 and additionally under Assumption A3, A4, \(P(A_{n,m})=O(h_{n}+h_{m})\) and \(Q(B_{n,m})=O(h_{n}+h_{m})\)._
_Remark 4.4_.: Consistency guarantees from Proposition 4.1 and Theorem 4.2 are in principle due to the uniform convergence of KDE over varying bandwidth \(h_{n}\) (Proposition D.2). Once we replace estimating the support with KDE by k-NN or something else, we wouldn't have consistency guarantees. Hence, using the KDE is an essential part for the theoretical guarantees of TopP&R.
Our theoretical results in Proposition 4.1 and Theorem 4.2 are novel and important in several perspectives. These results are among the first theoretical guarantees for evaluation metrics for generative models as far as we are aware of. Also, as in Remark D.1, assumptions are very weak and suitable for high dimensional data. Also, robustness to adversarial noise is provably guaranteed.
## 5 Experiments
A good evaluation metric should not only possess desirable theoretical properties but also effectively capture the changes in the underlying data distribution. To examine the performance of evaluation metrics, we carefully select a set of experiments for sanity checks. With toy and real image data, we check 1) how well the metric captures the true trend of underlying data distributions and 2) how well the metric resists perturbations applied to samples.
We compare TopP&R with several representative evaluation metrics that include Improved Precision and Recall (P&R) [13], Density and Coverage (D&C) [14], Geometric Component Analysis (GCA) [27], and Manifold Topology Divergence (MTD) [28] (Appendix F). Both GCA and MTD are the recent evaluation metrics that utilize topological features to some extent; GCA defines precision and recall based on connected components of \(P\) and \(Q\), and MTD measures the distance between two distributions using the sum of lifetimes of homological features. For all the experiments, linear random projection to 32 dimensions is additionally used for TopP&R, and the shaded area of all figures denotes the \(\pm 1\) standard deviation for ten trials. For a fair comparison with existing metrics, we have utilized 10k real and fake samples for all experiments. For more details, please refer to Appendix H.1.
### Sanity checks with toy data
Following [14], we first examine how well the metric reflects the trend of \(\mathcal{Y}\) moving away from \(\mathcal{X}\) and whether it is suitable for finding mode-drop phenomena. In addition to these, we newly design several experiments that can highlight TopP&R's favorable theoretical properties of consistency with robustness in various scenarios.
#### 5.1.1 Shifting the generated feature manifold
We generate samples from \(\mathcal{X}\sim\mathcal{N}(\mathbf{0},I)\) and \(\mathcal{Y}\sim\mathcal{N}(\mu\mathbf{1},I)\) in \(\mathbb{R}^{64}\), where \(\mathbf{1}\) is a vector of ones and \(I\) is an identity matrix. We examine how each metric responds to shifting \(\mathcal{Y}\) with \(\mu\in[-1,1]\) while there are outliers at \(\mathbf{3}\in\mathbb{R}^{64}\) for both \(\mathcal{X}\) and \(\mathcal{Y}\) (Figure 2). We discovered that GCA struggles to detect changes using its default hyperparameter configuration, and this issue persists even after performing an exhaustive hyperparameter sweep. Since empirical tuning of the hyperparameters is required for each dataset, utilizing GCA in practical applications proves to be challenging (Appendix F). Both improved P&R and D&C behave pathologically since these methods estimate the support via the k-nearest neighbor algorithm, which inevitably overestimate the underlying support when there are outliers. For example, when \(\mu<0.5\), Recall returns a high-diversity score, even though the true supports of \(\mathcal{X}\) and \(\mathcal{Y}\) are actually far apart. In addition, P&R does not reach 1 in high dimensions even when \(\mathcal{X}=\mathcal{Y}\). D&C [14] yields better results than P&R because it consistently uses \(\mathcal{X}\) (the real data distribution) as a reference point, which typically has fewer outliers than \(\mathcal{Y}\) (the fake data distribution). However, there is no guarantee that this will always be the case in practice [17; 18]. If an outlier is present in \(\mathcal{X}\), D&C also returns an incorrect high-fidelity score at \(\mu>0.5\). On the other hand, TopP&R shows a stable trend unaffected by the outlier, demonstrating its robustness.
#### 5.1.2 Dropping modes
We simulate mode-drop phenomena by gradually dropping all but one mode from the fake distribution \(\mathcal{Y}\) that is initially identical to \(\mathcal{X}\) (Figure 3). Here, we consider the mixture of Gaussians with sevenmodes in \(\mathbb{R}^{64}\). We keep the number of samples in \(\mathcal{X}\) constant so that the same amount of decreased samples are supplemented to the first mode which leads fidelity to be fixed to 1. We observe that the values of Precision fail to saturate, _i.e._, mainly smaller than 1, and the Density fluctuates to a value greater than 1, showing their instability and unboundedness. Recall and GCA do not respond to the simultaneous mode drop, and Coverage decays slowly compared to the reference line. In contrast, TopP performs well, being held at the upper bound of 1 in sequential mode drop, and TopR also decreases closest to the reference line in simultaneous mode drop.
#### 5.1.3 Tolerance to Non-IID perturbations
Robustness to perturbations is another important aspect we should consider when designing a metric. Here, we test whether TopRR behaves stably under two variants of noise cases (see Section G.3); 1) **scatter noise**: replacing \(X_{i}\) and \(Y_{j}\) with uniformly distributed noise and 2) **swap noise**: swapping the position between \(X_{i}\) and \(Y_{j}\). These two cases all correspond to the adversarial noise model of Assumption 2. We set \(\mathcal{X}\sim\mathcal{N}(\mu=0,I)\in\mathbb{R}^{64}\) and \(\mathcal{Y}\sim\mathcal{N}(\mu=1,I)\in\mathbb{R}^{64}\) where \(\mu=1\), and thus an ideal evaluation metric must return zero for both fidelity and diversity. In the result, while the GCA precision is relatively robust to the scatter noise, GCA recall tends to be sensitive to the swap noise. In both cases, we find that P&R and D&C are more sensitive while TopP&R remains relatively stable until the noise ratio reaches \(15\%\) of the total data, which is a clear example of the weakness of existing metrics to perturbation (Figure 4).
Figure 4: Behaviors of evaluation metrics on Non-IID perturbations. We replace a certain percentage of real and fake data (a) with random uniform noise or (b) by switching some of real and fake data.
Figure 3: Behaviors of evaluation metrics for (a) sequential and (b) simultaneous mode-drop scenarios. The horizontal axis shows the concentration ratio on the distribution centered at \(\mu=0\).
Figure 2: Behaviors of evaluation metrics for outliers on real and fake distribution. For both real and fake data, the outliers are fixed at \(3\in\mathbb{R}^{64}\), and the parameter \(\mu\) is shifted from -1 to 1.
### Sanity check with Real data
Now that we have verified the metrics on toy data, we move on to real data. Just like in the toy experiments, we concentrate on how the metrics behave in extreme situations, such as outliers, mode-drop phenomena, perceptual distortions, etc. We also test different image embedders, including pretrained VGG16 [29], InceptionV3 [30], and SwAV [31].
#### 5.2.1 Dropping modes in Baby ImageNet
We have conducted an additional experiment using Baby ImageNet [15] to investigate the sensitivity of TopP&R to mode-drop in real-world data. The performance of each metric (Figure A4) is measured with the identical data while simultaneously dropping the modes of nine classes of Baby ImageNet, in total of ten classes. Since our experiment involves gradually reducing the fixed number of fake samples until nine modes of the fake distribution vanish, the ground truth diversity should decrease linearly. From the experimental results, consistent to the toy result of Figure 3, both D&C and P&R still struggle to respond to simultaneous mode dropping. In contrast, TopP&R consistently exhibit a high level of sensitivity to subtle distribution changes. This notable capability of TopP&R can be attributed to its direct approximation of the underlying distribution, distinguishing it from other metrics. In addition, we perform the experiments on a dataset with long-tailed distribution and find that TopP&R captures the trend well even when there are minority sets (Appendix I.1). This again shows the reliability of TopP&R.
#### 5.2.2 Robustness to perturbations
To test the robustness of our metric against the adversarial noise model of Assumption 2, we test both scatter-noise and swap noise scenarios with real data (see Section G.3). In the experiment, following Kynkaanniemi et al. [13], we first classify inliers and outliers that are generated by StyleGAN [1]. For scatter noise we add the outliers to the inliers and for swap noise we swap the real FFHQ images with generated images. Under these specific noise conditions, Precision shows similar or even better robustness than Density (Figure 5). On the other hand, Coverage is more robust than Recall. In both cases, TopP&R shows the best performance, resistant to noise.
#### 5.2.3 Sensitiveness to the noise intensity
One of the advantages of FID [11] is that it is good at estimating the degrees of distortion applied to the images. Similarly, we check whether the F1-score based on TopP&R provides a reasonable evaluation according to different noise levels. As illustrated in Figure 6 and A5, \(\mathcal{X}\) and \(\mathcal{Y}\) are sets
Figure 5: Comparison of evaluation metrics on Non-IID perturbations using FFHQ dataset. We replaced certain ratio of \(\mathcal{X}\) and \(\mathcal{Y}\) (a) with outliers and (b) by switching some of real and fake features.
Figure 6: Verification of whether TopP&R can make an accurate quantitative assessment of noisy image features. Gaussian noise, gaussian blur, salt and pepper, and black rectangle noise are added to the FFHQ images and embedded with T4096.
of reference and noisy features, respectively. The experimental results show that TopP&R actually reflects well the different degrees of distortion added to the images while a similar topology-based method MTD shows inconsistent behavior to the distortions.
#### 5.2.4 Ranking between generative models
The alignment between FID (or KID) and perceptual evaluation has been well-established in prior research, and these scores are widely used as a primary metric in the development of generative models due to its close correspondence with human perception. Consequently, generative models have evolved to align with FID's macroscopic perspective. Therefore, we believed that the order determined by FID at a high level captures to some extent the true performance hierarchy among models, even if it may not perfectly reflect it. In other words, if the development of generative models based on FID leads to genuine improvements in generative performance and if there is a meaningful correlation, similar rankings should be maintained even when the representation or embedding model changes. From this standpoint, while other metrics exhibit fluctuating rankings, TopP&R consistently provides the most stable and consistent results similar to both FID and KID. To quantitatively compare the similarity of rankings across varying embedders by different metrics, we have computed mean Hamming Distance (MHD) (Appendix H.6) where lower value indicates more similarity. TopP&R, P&R, D&C, and MTD have MHDs of 1.33, 2.66, 3.0, and 3.33, respectively.
## 6 Conclusions
Many works have been proposed recently to assess the fidelity and diversity of generative models. However, none of them has focused on the accurate estimation of support even though it is one of the key components in the entire evaluation pipeline. In this paper, we proposed topological precision and recall (TopP&R) that provides a systematical fix by robustly estimating the support with both topological and statistical treatments. To the best of our knowledge, TopP&R is the first evaluation metric that offers statistical consistency under noisy conditions, which may arise in real practice. Our theoretical and experimental results showed that TopP&R serves as a robust and reliable evaluation metric under various embeddings and noise conditions, including mode drop, outliers, and Non-IID perturbations. Last but not least, TopP&R provides the most consistent ranking among different generative models across different embeddings via calculating its F1-score.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & Model & StyleGAN2 & ReACGAN & BigGAN & PDGAN & ACGAN & WGAN-GP \\ \cline{2-8} & **FID** (\(\downarrow\)) & 3.78 (1) & 3.87 (2) & 4.16 (3) & 31.54 (4) & 33.39 (5) & 107.68 (6) \\ & **KID** (\(\downarrow\)) & 0.002 (1) & 0.012 (3) & 0.011 (2) & 0.025 (4) & 0.029 (5) & 0.137 (6) \\ & **TopP&R** (\(\uparrow\)) & 0.9769 (1) & 0.8457 (2) & 0.7751 (3) & 0.7339 (4) & 0.6951 (5) & 0.0163 (6) \\ & **D**\&C (\(\uparrow\)) & 0.9626 (2) & 0.9409 (3) & 1.1562 (1) & 0.4383 (4) & 0.3883 (5) & 0.1913 (6) \\ & **P**\&R (\(\uparrow\)) & 0.6232 (1) & 0.3320 (2) & 0.3278 (3) & 0.1801 (4) & 0.0986 (5) & 0.0604 (6) \\ & MTD (\(\downarrow\)) & 2.3380 (3) & 2.2687 (2) & 1.4473 (1) & 7.0188 (4) & 8.0728 (5) & 11.498 (6) \\ \cline{2-8} & **TopP&R** (\(\uparrow\)) & 0.9754 (1) & 0.5727 (3) & 0.7556 (2) & 0.4021 (4) & 0.3463 (5) & 0.0011 (6) \\ & **D**\&C (\(\uparrow\)) & 0.9831 (3) & 1.0484 (1) & 0.9701 (4) & 0.9872 (2) & 0.8971 (5) & 0.6372 (6) \\ & **P**\&R (\(\uparrow\)) & 0.6861 (1) & 0.1915 (3) & 0.3526 (2) & 0.0379 (4) & 0.0195 (5) & 0.0001 (6) \\ & MTD (\(\downarrow\)) & 25.757 (4) & 25.826 (3) & 34.755 (5) & 24.586 (2) & 23.318 (1) & 41.346 (6) \\ \cline{2-8} & **TopP&R** (\(\uparrow\)) & 0.9093 (1) & 0.3568 (3) & 0.5578 (2) & 0.1592 (4) & 0.1065 (5) & 0.0003 (6) \\ & **D**\&C (\(\uparrow\)) & 1.0732 (1) & 0.9492 (3) & 1.0419 (2) & 0.6328 (4) & 0.4565 (5) & 0.0721 (6) \\ & **P**\&R (\(\uparrow\)) & 0.5623 (1) & 0.0901 (3) & 0.1459 (2) & 0.0025 (4) & 0.0000 (6) & 0.0002 (5) \\ & MTD (\(\downarrow\)) & 1.1098 (1) & 1.5512 (3) & 1.3280 (2) & 1.8302 (4) & 2.2982 (5) & 4.9378 (6) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Generative models trained on CIFAR-10 are ranked by FID, KID, MTD, and F1-scores based on TopP&R, D&C and P&R, respectively. The \(\mathcal{X}\) and \(\mathcal{Y}\) are embedded with InceptionV3, VGG16, and SwAV. The number inside the parenthesis denotes the rank based on each metric.
## Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.2.220574.01), Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-01336, Artificial Intelligence Graduate School Program (UNIST), No.2021-0-02068, Artificial Intelligence Innovation Hub, No.2022-0-00959, (Part 2) Few-Shot Learning of Causal Inference in Vision and Language for Decision Making, No.2022-0-00264, Comprehensive Video Understanding and Generation with Knowledge-based Deep Logic Neural Network).
## References
* [1]T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [2]T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [3]T. Karras, M. Aittala, S. Laine, E. Harkonen, J. Hellsten, J. Lehtinen, and T. Aila (2021) AIAS-free generative adversarial networks. Advances in Neural Information Processing Systems34. Cited by: SS1.
* [4]A. Brock, J. Donahue, and K. Simonyan (2018) Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. Cited by: SS1.
* [5]J. Ho, A. Jain, and P. Abbeel (2020) Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems33, pp. 6840-6851. Cited by: SS1.
* [6]D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: SS1.
* [7]A. Sauer, K. Schwarz, and A. Geiger (2022) Stylegan-xl: scaling stylegan to large diverse datasets. arXiv preprint arXiv:2202.00273. Cited by: SS1.
* [8]A. Sauer, K. Chitta, J. Muller, and A. Geiger (2021) Projected gans converge faster. Advances in Neural Information Processing Systems34. Cited by: SS1.
* [9]M. Kang and J. Park (2020) Contragan: contrastive learning for conditional image generation. Advances in Neural Information Processing Systems33, pp. 21357-21369. Cited by: SS1.
* [10]T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training gans. Advances in neural information processing systems29. Cited by: SS1.
* [11]M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems30. Cited by: SS1.
* [12]M. S. Sajjadi, O. Bachem, M. Lucic, O. Bousquet, and S. Gelly (2018) Assessing generative models via precision and recall. Advances in Neural Information Processing Systems31. Cited by: SS1.
* [13]T. Kynkaanniemi, T. Karras, S. Laine, J. Lehtinen, and T. Aila (2019) Improved precision and recall metric for assessing generative models. Advances in Neural Information Processing Systems32. Cited by: SS1.
* [14]M. Kern, S. J. Oh, Y. Uh, Y. Choi, and J. Yoo (2020) Reliable fidelity and diversity metrics for generative models. In International Conference on Machine Learning, pp. 7176-7185. Cited by: SS1.
* [15]M. Kang, J. Shin, and J. Park (2022) Studiogan: a taxonomy and benchmark of gans for image synthesis. arXiv preprint arXiv:2206.09479. Cited by: SS1.
* [16]T. Kynkaanniemi, T. Karras, M. Aittala, T. Aila, and J. Lehtinen (2022) The role of imagenet classes in frechet inception distance. arXiv preprint arXiv:2203.06026. Cited by: SS1.
* [17]T. Kynkaanniemi, T. Karras, S. Laine, J. Lehtinen, and T. Aila (2019) Improved precision and recall metric for assessing generative models. Advances in Neural Information Processing Systems32. Cited by: SS1.
* [18]T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [19]T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [20]T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [21]T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [22]T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [23]T. Karras, S. Laine, A. Hellsten, J. Lehtinen, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [24]T. Karras, S. Laine, A. Hellsten, and T. Aila (2021) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [25]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [26]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [27]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [28]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [29]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [30]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [31]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [32]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [33]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [34]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [35]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [36]T. Karras, S. Laine, A. Hellsten, and T. Aila (2021) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [37]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [38]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [39]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [40]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [41]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [42]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [43]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [44]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [45]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [46]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [47]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410. Cited by: SS1.
* [48]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [49]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [50]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [51]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110-8119. Cited by: SS1.
* [52]T. Karras, S. Laine, A. Hellsten, and T. Aila (2020) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on* [17] Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. Identifying mislabeled data using the area under the margin ranking. _Advances in Neural Information Processing Systems_, 33:17044-17056, 2020.
* [18] Zhengwen Li, Runmin Wu, and Tao Gan. Study on image data cleaning method of early esophageal cancer based on vgg_nin neural network. _Scientific Reports_, 12(1):1-10, 2022.
* [19] Gunnar Carlsson. Topology and data. _Bull. Amer. Math. Soc. (N.S.)_, 46(2):255-308, 2009. ISSN 0273-0979. doi: 10.1090/S0273-0979-09-01249-X. URL [https://doi.org/10.1090/S0273-0979-09-01249-X](https://doi.org/10.1090/S0273-0979-09-01249-X).
* [20] Herbert Edelsbrunner and John L. Harer. _Computational topology_. American Mathematical Society, Providence, RI, 2010. ISBN 978-0-8218-4925-5. doi: 10.1090/mbk/069. URL [https://doi.org/10.1090/mbk/069](https://doi.org/10.1090/mbk/069). An introduction.
* [21] Frederic Chazal and Bertrand Michel. An introduction to topological data analysis: Fundamental and practical aspects for data scientists. _Frontiers Artif. Intell._, 4:667-963, 2021. doi: 10.3389/frai.2021.667963. URL [https://doi.org/10.3389/frai.2021.667963](https://doi.org/10.3389/frai.2021.667963).
* [22] Larry Wasserman. Topological data analysis. _Annu. Rev. Stat. Appl._, 5:501-535, 2018. ISSN 2326-8298. doi: 10.1146/annurev-statistics-031017-100045. URL [https://doi.org/10.1146/annurev-statistics-031017-100045](https://doi.org/10.1146/annurev-statistics-031017-100045).
* [23] Allen Hatcher. _Algebraic topology_. Cambridge University Press, Cambridge, 2002. ISBN 0-521-79160-X; 0-521-79540-0.
* [24] Frederic Chazal, Brittany Fasy, Fabrizio Lecci, Alessandro Rinaldo, Aarti Singh, and Larry Wasserman. On the bootstrap for persistence diagrams and landscapes. _Modelinovanie i Analiz Informacionnyh Sistem_, 20, 11 2013. doi: 10.18255/1818-1015-2013-6-111-120.
* [25] Frederic Chazal, Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, and Larry Wasserman. Stochastic convergence of persistence landscapes and silhouettes. _J. Comput. Geom._, 6(2):140-161, 2015.
* [26] Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, Larry Wasserman, Sivaraman Balakrishnan, and Aarti Singh. Confidence sets for persistence diagrams. _Ann. Statist._, 42(6):2301-2339, 2014. ISSN 0090-5364. doi: 10.1214/14-AOS1252. URL [https://doi.org/10.1214/14-AOS1252](https://doi.org/10.1214/14-AOS1252).
* [27] Petra Poklukar, Anastasiia Varava, and Danica Kragic. Geomca: Geometric evaluation of data representations. In _International Conference on Machine Learning_, pages 8588-8598. PMLR, 2021.
* [28] Serguei Barannikov, Ilya Trofimov, Grigorii Sotnikov, Ekaterina Trimbach, Alexander Korotin, Alexander Filippov, and Evgeny Burnaev. Manifold topology divergence: a framework for comparing data manifolds. _Advances in Neural Information Processing Systems_, 34, 2021.
* [29] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_, 2014.
* [30] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2818-2826, 2016.
* [31] Stanislav Morozov, Andrey Voynov, and Artem Babenko. On self-supervised image representations for gan evaluation. In _International Conference on Learning Representations_, 2020.
* [32] A.W. van der Vaart. _Asymptotic Statistics_. Asymptotic Statistics. Cambridge University Press, 2000. ISBN 9780521784504. URL [https://books.google.fr/books?id=UEuQEM5R3jwgC](https://books.google.fr/books?id=UEuQEM5R3jwgC).
* [33] Michael R. Kosorok. _Introduction to empirical processes and semiparametric inference_. Springer Series in Statistics. Springer, New York, 2008. ISBN 978-0-387-74977-8. doi: 10.1007/978-0-387-74978-5. URL [https://doi.org/10.1007/978-0-387-74978-5](https://doi.org/10.1007/978-0-387-74978-5).
* [34] William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In _Conference in modern analysis and probability (New Haven, Conn., 1982)_, volume 26 of _Contemp. Math._, pages 189-206. Amer. Math. Soc., Providence, RI, 1984. doi: 10.1090/conm/026737400. URL [https://doi.org/10.1090/conm/026/737400](https://doi.org/10.1090/conm/026/737400).
* Larsen and Nelson [2016] Kasper Green Larsen and Jelani Nelson. The Johnson-Lindenstrauss lemma is optimal for linear dimensionality reduction. In _43rd International Colloquium on Automata, Languages, and Programming_, volume 55 of _LIPIcs. Leibniz Int. Proc. Inform._, pages Art. No. 82, 11. Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2016.
* Larsen and Nelson [2017] Kasper Green Larsen and Jelani Nelson. Optimality of the Johnson-Lindenstrauss lemma. In _58th Annual IEEE Symposium on Foundations of Computer Science--FOCS 2017_, pages 633-638. IEEE Computer Soc., Los Alamitos, CA, 2017.
* Kim et al. [2019] Jisu Kim, Jaehyeok Shin, Alessandro Rinaldo, and Larry Wasserman. Uniform convergence rate of the kernel density estimator adaptive to intrinsic volume dimension. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, _Proceedings of the 36th International Conference on Machine Learning_, volume 97 of _Proceedings of Machine Learning Research_, pages 3398-3407. PMLR, 09-15 Jun 2019. URL [https://proceedings.mlr.press/v97/kim19e.html](https://proceedings.mlr.press/v97/kim19e.html).
* Neumann [1998] Michael H. Neumann. Strong approximation of density estimators from weakly dependent observations by density estimators from independent observations. _Ann. Statist._, 26(5):2014-2048, 1998. ISSN 0090-5364. doi: 10.1214/aos/1024691367. URL [https://doi.org/10.1214/aos/1024691367](https://doi.org/10.1214/aos/1024691367).
* Federer [1959] Herbert Federer. Curvature measures. _Transactions of the American Mathematical Society_, 93:418-491, 1959. ISSN 0002-9947. doi: 10.2307/1993504. URL [https://doi.org/10.2307/1993504](https://doi.org/10.2307/1993504).
* Thale [2008] Christoph Thale. 50 years sets with positive reach--a survey. _Surv. Math. Appl._, 3:123-165, 2008. ISSN 1843-7265. doi: 10.1007/s11590-008-0097-2. URL [https://doi.org/10.1007/s11590-008-0097-2](https://doi.org/10.1007/s11590-008-0097-2).
* Carlsson et al. [2006] Erik Carlsson, Gunnar Carlsson, and Vin De Silva. An algebraic topological method for feature identification. _International Journal of Computational Geometry & Applications_, 16(04):291-314, 2006.
* Chazal et al. [2017] Frederic Chazal, Brittany Fasy, Fabrizio Lecci, Bertrand Michel, Alessandro Rinaldo, Alessandro Rinaldo, and Larry Wasserman. Robust topological inference: Distance to a measure and kernel distance. _The Journal of Machine Learning Research_, 18(1):5845-5884, 2017.
* Wagner et al. [2012] Hubert Wagner, Chao Chen, and Erald Vucini. Efficient computation of persistent homology for cubical data. In _Topological methods in data analysis and visualization II_, pages 91-106. Springer, 2012.
* Terrell and Scott [1992] George R Terrell and David W Scott. Variable kernel density estimation. _The Annals of Statistics_, pages 1236-1265, 1992.
* Hamming [1950] Richard W Hamming. Error detecting and error correcting codes. _The Bell system technical journal_, 29(2):147-160, 1950.
* Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* Breunig et al. [2000] Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and Jorg Sander. Lof: identifying density-based local outliers. In _Proceedings of the 2000 ACM SIGMOD international conference on Management of data_, pages 93-104, 2000.
* Liu et al. [2008] Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In _2008 eighth ieee international conference on data mining_, pages 413-422. IEEE, 2008.
* Han et al. [2022] Jiyeon Han, Hwanil Choi, Yunjey Choi, Junho Kim, Jung-Woo Ha, and Jaesik Choi. Rarity score: A new metric to evaluate the uncommonness of synthesized images. _arXiv preprint arXiv:2206.08549_, 2022.
**Appendix**
## Appendix A More Background on Topological Data Analysis
Topological data analysis (TDA) [19] is a recent and emerging field of data science that relies on topological tools to infer relevant features for possibly complex data. A key object in TDA is persistent homology, which quantifies salient topological features of data by observing them in multi-resolutions.
### Persistent Homology
**Persistent homology.**_Persistent homology_ is a multiscale approach to represent the topological features. For a filtration \(\mathcal{F}\) and for each \(k\in\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\), the associated \(k\)-th persistent homology \(PH_{k}\mathcal{F}\) is a collection of \(k\)-th dimensional homologies \(\{H_{k}\mathcal{F}_{\delta}\}_{\delta\in\mathbb{R}}\) equipped with homomorphisms \(\{\imath_{k}^{a,b}:H_{k}\mathcal{F}_{a}\to H_{k}\mathcal{F}_{b}\}_{a\leq b}\) induced by the inclusion \(\mathcal{F}_{a}\subset\mathcal{F}_{b}\).
**Persistence diagram.** For the \(k\)-th persistent homology \(PH_{k}\mathcal{F}\), the set of filtration levels at which a specific homology appears is always an interval \([b,d)\subset[-\infty,\infty]\). The corresponding \(k\)-th persistence diagram is a multiset of points \((\mathbb{R}\cup\{\infty\})^{2}\), consisting of all pairs \((b,d)\) where \([b,d)\) is the interval of filtration values for which a specific homology appears in \(PH_{k}\mathcal{F}\).
### Statistical Inference of Persistent Homology
As discussed above, a homological feature with a long life-length is important information in topology while the homology with a short life-length can be treated as non-significant information or noise. The confidence band estimator provides the confidence set from the features that only include topologically and statistically significant (statistically considered as elements in the population set) under a certain level of confidence. And to build a confidence set, we first need to endow a metric on the space of persistence diagrams.
**Bottleneck distance.** The most fundamental metric to measure the distance between two persistence diagrams is the _bottleneck distance_.
**Definition A.1**.: The _bottleneck_ distance between two persistence diagrams \(D_{1}\) and \(D_{2}\) is defined by
\[d_{B}(D_{1},D_{2})=\inf_{\gamma\in\Gamma}\sup_{p\in D_{1}}\|p-\gamma(p)\|_{ \infty},\]
where the set \(\Gamma\) consists of all the bijections \(\gamma:D_{1}\cup Diag\to D_{2}\cup Diag\), and \(Diag\) is the diagonal \(\{(x,x):\,x\in\mathbb{R}\}\subset\mathbb{R}^{2}\) with infinite multiplicity.
One way of constructing the confidence set uses the superlevel filtration of the kernel density estimator and the bootstrap confidence band. Let \(\mathcal{X}=\{X_{1},X_{2},...,X_{n}\}\) as given points cloud, then the probability for the distribution of points can be estimated via KDE defined as following: \(\hat{p}_{h}(x)\coloneqq\frac{1}{nh^{d}}\sum_{i=1}^{n}K\left(\frac{x-X_{i}}{h}\right)\) where \(h\) is the bandwidth and \(d\) as a dimension of the space. We compute \(\hat{p}_{h}\) and \(\hat{p}_{h}^{*}\), which are the KDE of \(\mathcal{X}\) and the KDE of bootstrapped samples \(\mathcal{X}^{*}\), respectively. Now, given the significance level \(\alpha\) and \(h>0\), let confidence band \(q_{\mathcal{X}}\) be bootstrap bandwidth of a Gaussian Empirical Process [32, 33], \(\sqrt{n}||\hat{p}_{h}-\hat{p}_{h}^{*}||_{\infty}\). Then it satisfies \(P(\sqrt{n}||\hat{p}_{h}-p_{h}||_{\infty}<q_{\mathcal{X}})\geq 1-\alpha\), as in Proposition D.2 in Section D. Then \(\mathcal{B}_{d_{B}}(\hat{\mathcal{P}}_{h},c_{\mathcal{X}})\), the ball of persistent homology centered at \(\hat{\mathcal{P}}_{h}\) and radius \(c_{\mathcal{X}}=q_{\mathcal{X}}/\sqrt{n}\) in the bottleneck distance \(d_{B}\), is a valid confidence set as \(\liminf_{n\to\infty}\mathbb{P}\left(\mathcal{P}\in\mathcal{B}_{d_{B}}(\hat{ \mathcal{P}}_{h},c_{\mathcal{X}})\right)\geq 1-\alpha\). This confidence set has further interpretation that in the persistence diagram, homological features that are above twice the radius \(2c_{\mathcal{X}}\) from the diagonal are simultaneously statistically significant.
## Appendix B Johnson-Lindenstrauss Lemma
Johnson-Lindenstrass Lemma is stated as follows:
**Lemma B.1** (Johnson-Lindenstrauss Lemma).: _[_34_, Lemma 1]__Let \(0<\epsilon<1\) and \(\mathcal{X}\subset\mathbb{R}^{D}\) be a set of points with size \(n\). Then for \(d=O\left(\min\left\{n,D,\epsilon^{-2}\log n\right\}\right)\), there exists a linear map \(f:\mathbb{R}^{D}\to\mathbb{R}^{d}\) such that for all \(x,y\in\mathcal{X}\),_
\[\left(1-\epsilon\right)\left\|x-y\right\|^{2}\leq\left\|f(x)-f(y)\right\|^{2} \leq\left(1+\epsilon\right)\left\|x-y\right\|^{2}. \tag{5}\]
_Remark B.2_.: This Johnson-Lindenstrass Lemma is optimal for linear maps, and nearly-optimal even if we allow non-linear maps: there exists a set of points \(\mathcal{X}\subset\mathbb{R}^{D}\) with size \(n\), such that \(d\) should satisfy: (a) \(d=\Omega\left(\epsilon^{-2}\log n\right)\) for a linear map \(f:\mathbb{R}^{D}\to\mathbb{R}^{d}\) satisfying (5) to exist [35, Theorem 3], and (b) \(d=\Omega\left(\epsilon^{-2}\log(\epsilon^{2}n)\right)\) for a map \(f:\mathbb{R}^{D}\to\mathbb{R}^{d}\) satisfying (5) to exist [36, Theorem 1].
## Appendix C Denoising topological features from outliers
Using the bootstrap bandwidth \(c_{\mathcal{X}}\) as the threshold is the key part of our estimators TopP&R for robustly estimating \(\mathrm{supp}(P)\). When the level set \(\hat{p}_{h}^{-1}[c_{\mathcal{X}},\infty)\) is used, the homology of \(\hat{p}_{h}^{-1}[c_{\mathcal{X}},\infty)\) consists of homological features whose \((\mathrm{birth})\geq c_{\mathcal{X}}\) and \((\mathrm{death})\leq c_{\mathcal{X}}\), which are the homological features in skyblue area in Figure 1. In this example, we consider three types of homological noise, though there can be many more corresponding to different homological dimensions.
* There can be a \(0\)-dimensional homological noise of \((\mathrm{birth})<c_{\mathcal{X}}\) and \((\mathrm{death})<c_{\mathcal{X}}\), which is the red point in the persistence diagram of Figure 1. This noise corresponds to the orange connected component on the left. As in the figure, this type of homological noise usually corresponds to outliers.
* There can be a \(0\)-dimensional homological noise of \((\mathrm{birth})>c_{\mathcal{X}}\) and \((\mathrm{death})>c_{\mathcal{X}}\), which is the green point in the persistence diagram of Figure 1. This noise corresponds to the connected component surrounded by the green line on the left. As in the figure, this type of homological noise lies within the estimated support, not like the other two.
* There can be a \(1\)-dimensional homological noise of \((\mathrm{birth})<c_{\mathcal{X}}\) and \((\mathrm{death})<c_{\mathcal{X}}\), which is the purple point in the persistence diagram of Figure 1. This noise corresponds to the purple loop on the left.
These homological noises satisfy either their \((\mathrm{birth})<c_{\mathcal{X}}\) and \((\mathrm{death})<c_{\mathcal{X}}\) or their \((\mathrm{birth})>c_{\mathcal{X}}\) and \((\mathrm{death})>c_{\mathcal{X}}\) simultaneously with high probability, so those homological noises are removed in the estimated support \(\hat{p}_{h}^{-1}[c_{\mathcal{X}},\infty)\), which is the blue area in the left and the skyblue area in the right in Figure 1.
We would like to further emphasize that homological noises are not restricted to \(0\)-dimension lying outside the estimated support (red point in the persistence diagram of Figure 1). \(0\)-dimensional homological noise inside the estimated support (green point in the persistence diagram of Figure 1), \(1\)-dimensional homological noise can also arise, and the bootstrap bandwidth \(c_{\mathcal{X}}\) allows to simultaneously filter them.
## Appendix D Assumptions on distributions and kernels
For distributions, we assume that the order of probability volume decay \(P(\mathcal{B}(x,r))\) is at least \(r^{d}\).
**Assumption A1**.: _For all \(x\in\mathrm{supp}(P)\) and \(y\in\mathrm{supp}(Q)\),_
\[\liminf_{r\to 0}\frac{P\left(\mathcal{B}(x,r)\right)}{r^{d}}>0,\qquad\liminf_{r \to 0}\frac{Q\left(\mathcal{B}(y,r)\right)}{r^{d}}>0.\]
_Remark D.1_.: Assumption A1 is analogous to Assumption 2 of Kim et al. [37], but is weaker since the condition is pointwise on each \(x\in\mathbb{R}^{d}\). And this condition is much weaker than assuming a density on \(\mathbb{R}^{d}\): for example, a distribution supported on a low-dimensional manifold satisfies Assumption A1. This provides a framework suitable for high dimensional data, since many times high dimensional data lies on a low dimensional structure hence its density on \(\mathbb{R}^{d}\) cannot exist. See Kim et al. [37] for a more detailed discussion.
For kernel functions, we assume the following regularity conditions:
**Assumption A2**.: _Let \(K:\mathbb{R}^{d}\to\mathbb{R}\) be a nonnegative function with \(\left\|K\right\|_{1}=1\), \(\left\|K\right\|_{\infty},\left\|K\right\|_{2}<\infty\), and satisfy the following:_1. \(K(0)>0\)_._
2. \(K\) _has a compact support._
3. \(K\) _is Lipschitz continuous and of second order._
Assumption A2 allows to build a valid bootstrap confidence band for kernel density estimator (KDE). See Theorem 12 of [26] or Theorem 3.4 of [38]
**Proposition D.2** (Theorem 3.4 of [38]).: _Let \(\mathcal{X}=\{X_{1},\ldots,X_{n}\}\) be IID from a distribution \(P\). For \(h>0\), let \(\hat{p}_{h},\hat{p}_{h}^{*}\) be kernel density estimator for \(\mathcal{X}\) and its bootstrap \(\mathcal{X}^{*}\), respectively, and for \(\alpha\in(0,1)\), let \(c_{\mathcal{X}}\) be the \(\alpha\) bootstrap quantile from \(\sqrt{nh^{d}}\left\|\hat{p}_{h}-\hat{p}_{h}^{*}\right\|_{\infty}\). For \(h_{n}\to 0\),_
\[\mathbb{P}\left(\sqrt{nh_{n}^{d}}\left\|\hat{p}_{h_{n}}-p_{h_{n}}\right\|_{ \infty}>c_{\mathcal{X}}\right)=\alpha+\left(\frac{\log n}{nh_{n}^{d}}\right)^ {\frac{4+d}{4+2d}}.\]
Assumption A1, A2 ensures that, when the bandwidth \(h_{n}\to 0\), average KDEs are bounded away from \(0\).
**Lemma D.3**.: _Let \(P\) be a distribution satisfying Assumption A1. Suppose \(K\) is a nonnegative function satisfying \(K(0)>0\) and continuous at \(0\). Suppose \(\{h_{n}\}_{n\in\mathbb{N}}\) is given with \(h_{n}\geq 0\) and \(h_{n}\to 0\). Then for all \(x\in\operatorname{supp}(P)\),_
\[\liminf_{n\to\infty}p_{h_{n}}(x)>0.\]
Proof.: Since \(K(0)>0\) and \(K\) is continuous at \(0\), there is \(r_{0}>0\) such that for all \(y\in\mathcal{B}(0,r_{0})\), \(K(y)\geq\frac{1}{2}K(0)>0\). And hence
\[p_{h}(x) =\int\frac{1}{h^{d}}K\left(\frac{x-y}{h}\right)dP(y)\geq\int\frac {K(0)}{2h^{d}}1\left(\frac{x-y}{h}\in\mathcal{B}(0,r_{0})\right)dP(y)\] \[\geq\frac{K(0)}{2h^{d}}P\left(\mathcal{B}(x,r_{0}h)\right).\]
Hence as \(h_{n}\to 0\),
\[\liminf_{n\to\infty}p_{h_{n}}(x)>0.\]Before specifying the rate of convergence, we introduce the concept of reach. First introduced by [39], the reach is a quantity expressing the degree of geometric regularity of a set. Given a closed subset \(A\subset\mathbb{R}^{d}\), the medial axis of \(A\), denoted by \(\mathrm{Med}(A)\), is the subset of \(\mathbb{R}^{d}\) consisting of all the points that have at least two nearest neighbors on \(A\).
\[\mathrm{Med}(A)=\left\{x\in\mathbb{R}^{d}\setminus A\colon\exists q_{1}\neq q_{ 2}\in A,||q_{1}-x||=||q_{2}-x||=d(x,A)\right\},\]
where \(d(x,A)=\inf_{q\in A}||q-x||\) denotes the distance from a generic point \(x\in\mathbb{R}^{d}\) to \(A\), The reach of \(A\) is then defined as the minimal distance from \(A\) to \(\mathrm{Med}(A)\).
**Definition D.4**.: The reach of a closed subset \(A\subset\mathbb{R}^{d}\) is defined as
\[\mathrm{reach}(A)=\inf_{q\in A}d\left(q,\mathrm{Med}(A)\right)=\inf_{q\in A,x \in\mathrm{Med}(A)}||q-x||.\]
Now, for specifying the rate of convergence, we first assume that distributions have densities away from \(0\) and \(\infty\).
**Assumption A3**.: \(P\) _and \(Q\) have Lebesgue densities \(p\) and \(q\) that, there exists \(0<p_{\min}\leq p_{\max}<\infty\), \(0<q_{\min}\leq q_{\max}<\infty\) with for all \(x\in\mathrm{supp}(P)\) and \(y\in\mathrm{supp}(Q)\),_
\[p_{\min}\leq p(x)\leq p_{\max},\qquad q_{\min}\leq q(x)\leq q_{\max}.\]
We also assume weak geometric assumptions on the support of the distributions \(\mathrm{supp}(P)\) and \(\mathrm{supp}(Q)\), being bounded and having positive reach.
**Assumption A4**.: _We assume \(\mathrm{supp}(P)\) and \(\mathrm{supp}(Q)\) are bounded. And the support of \(P\) and \(Q\) have positive reach, i.e. \(\mathrm{reach}(\mathrm{supp}(P))>0\) and \(\mathrm{reach}(\mathrm{supp}(Q))>0\)._
Sets with positive reach ensure that the volume of its tubular neighborhood grows in polynomial order, which is Theorem 26 from [40] and originally from [39].
**Proposition D.5**.: _Let \(A\) be a set with \(\mathrm{reach}(A)>0\). Let \(A_{r}\coloneqq\{x\in\mathbb{R}^{d}:d(x,A)\leq r\}\). Then for \(r<\mathrm{reach}(A)\), there exists \(a_{0},\ldots,a_{d}\in\mathbb{R}\cup\{\infty\}\) that satisfies_
\[\lambda_{d}(A_{r})=\sum_{k=0}^{d-1}a_{k}r^{k},\]
_where \(\lambda_{d}\) is the usual Lebesgue measure of \(\mathbb{R}^{d}\)._
## Appendix E Details and Proofs for Section 4
For a random variable \(X\) and \(\alpha\in(0,1)\), let \(q_{X,\alpha}\) be upper \(\alpha\)-quantile of \(X\), i.e., \(\mathbb{P}(X\geq q_{\alpha})=\alpha\), or equivalently, \(\mathbb{P}(X<q_{\alpha})=1-\alpha\). Let \(\tilde{p}_{h}\) be the KDE on \(\mathcal{X}^{0}\). For a finite set \(\mathcal{X}\), we use the notation \(c_{\mathcal{X},\alpha}\) for \(\alpha\)-bootstrap quantile satisfying \(\mathbb{P}\left(\left\|\hat{p}_{\mathcal{X},h_{n}}-\hat{p}_{\mathcal{X}^{0}, h_{n}}\right\|_{\infty}<c_{\mathcal{X},\alpha}|\mathcal{X}\right)=1-\alpha\), where \(\mathcal{X}^{b}\) is the bootstrap sample from \(\mathcal{X}\). For a distribution \(P\), we use the notation \(c_{P,\alpha}\) for \(\alpha\)-quantile satisfying \(\mathbb{P}\left(\left\|\hat{p}_{h_{n}}-p_{h_{n}}\right\|_{\infty}<c_{P,\alpha }\right)=1-\alpha\), where \(\hat{p}_{h_{n}}\)is kernel density estimator of IID samples from \(P\). Hence when \(\mathcal{X}\) is not IID samples from \(P\), the relation of Proposition D.2 may not hold.
_Claim E.1_.: Let \(Z\sim\mathcal{N}(0,1)\) be a sample from standard normal distribution, then for \(\alpha\in(0,1)\),
\[q_{Z,\alpha}=\Theta\left(\sqrt{\log(1/\alpha)}\right).\]
And for \(0\leq\delta<\alpha<1\),
\[q_{Z,\alpha}-q_{Z,\alpha+\delta}=\left(\delta\alpha\sqrt{\log(1/\alpha)}\right)\]
Proof.: Let \(Z\sim\mathcal{N}(0,1)\), then for all \(x>2\),
\[\frac{1}{2x}\exp\left(-\frac{x^{2}}{2}\right)\leq\mathbb{P}(Z>x)\leq\frac{1}{ x}\exp\left(-\frac{x^{2}}{2}\right).\]
Then \(x=C\sqrt{\log(1/\alpha)}\) gives
\[\frac{\alpha^{C/2}}{2C\sqrt{\log(1/\alpha)}}\leq\mathbb{P}(Z>x)\leq\frac{ \alpha^{C/2}}{C\sqrt{\log(1/\alpha)}}.\]And hence
\[q_{Z,\alpha}=\Theta\left(\sqrt{\log(1/\alpha)}\right),\]
and in particular,
\[\exp\left(-\frac{q_{Z,\alpha}^{2}}{2}\right)=\Theta\left(\alpha\sqrt{\log(1/ \alpha)}\right).\]
Now for \(0\leq\delta<\alpha<1\),
\[\left(q_{Z,\alpha}-q_{Z,\alpha+\delta}\right)\exp\left(-\frac{q_{ Z,\alpha}^{2}}{2}\right) \leq\int_{q_{Z,\alpha+\delta}}^{q_{Z,\alpha}}\exp\left(-\frac{t^{ 2}}{2}\right)dt=\delta\] \[\leq\left(q_{Z,\alpha}-q_{Z,\alpha+\delta}\right)\exp\left(-\frac {q_{Z,\alpha+\delta}^{2}}{2}\right).\]
And hence
\[q_{Z,\alpha}-q_{Z,\alpha+\delta}\leq\delta\exp\left(-\frac{q_{Z,\alpha+\delta}^ {2}}{2}\right)=\Theta\left(\delta(\alpha+\delta)\sqrt{\log(1/\alpha)}\right),\]
and
\[q_{Z,\alpha}-q_{Z,\alpha+\delta}\geq\delta\exp\left(-\frac{q_{Z,\alpha}^{2}}{2 }\right)=\Theta\left(\delta\alpha\sqrt{\log(1/(\alpha+\delta))}\right).\]
Then from \(\delta<\alpha\),
\[q_{Z,\alpha}-q_{Z,\alpha+\delta}=\Theta\left(\delta\alpha\sqrt{\log(1/\alpha)} \right).\]
_Claim E.2_.: Let \(X,Y\) be random variables, and \(0\leq\delta<\alpha<1\). Suppose there exists \(c>0\) satisfying
\[\mathbb{P}\left(|X-Y|>c\right)\leq\delta. \tag{6}\]
Then,
\[q_{X,\alpha+\delta}-c\leq q_{Y,\alpha}\leq q_{X,\alpha-\delta}+c.\]
Proof.: Note that \(q_{X,\alpha-\delta}\) satisfies \(\mathbb{P}(X>q_{X,\alpha-\delta})=\alpha-\delta\). Then from (6),
\[\mathbb{P}\left(Y>q_{X,\alpha-\delta}+c\right) \leq\mathbb{P}\left(Y>q_{X,\alpha-\delta}+c,|X-Y|\leq c\right)+ \mathbb{P}\left(|X-Y|>c\right)\] \[\leq\mathbb{P}\left(X>q_{X,\alpha-\delta}\right)+\delta=\alpha.\]
And hence
\[q_{Y,\alpha}\leq q_{X,\alpha-\delta}+c.\]
Then changing the role of \(X\) and \(Y\) gives
\[q_{X,\alpha+\delta}-c\leq q_{Y,\alpha}.\]
**Lemma E.3**.:
1. _Under Assumption_ 1_,_ 2 _and_ _A2__,_ \[\left\|\hat{p}_{h}-\tilde{p}_{h}\right\|_{\infty}\leq\frac{\rho_{n}\left\|K \right\|_{\infty}}{h^{d}}.\]
2. _Under Assumption_ 1_,_ 2 _and_ _A2__, with probability_ \(1-\delta\)_,_ \[c_{\mathcal{X}^{0},\alpha+\delta}-O\left(\frac{\rho_{n}}{h^{d}}+\sqrt{\frac{ \rho_{n}\log(1/\delta)}{nh^{2d}}}\right)\leq c_{\mathcal{X},\alpha}\leq c_{ \mathcal{X}^{0},\alpha-\delta}+O\left(\frac{\rho_{n}}{h^{d}}+\sqrt{\frac{\rho_ {n}\log(1/\delta)}{nh^{2d}}}\right).\]
3. _Suppose Assumption_ 1_,_ 2_,_ _A2 hold, and let_ \(\alpha,\delta_{n}\in(0,1)\)_. Suppose_ \(nh_{n}^{d}\to\infty\)_,_ \(\delta_{n}^{-1}=O\left(\left(\frac{\log n}{nh_{n}^{d}}\right)^{\frac{4+d}{4+2d}}\right)\) _and_ \(nh_{n}^{-d}\rho_{n}^{2}\delta_{n}^{-2}\to 0\)_. Then with probability_ \(1-\alpha-4\delta_{n}\)_,_ \[\left\|\hat{p}_{h}-p_{h}\right\|_{\infty}<c_{\mathcal{X},\alpha}\leq c_{P,\alpha -3\delta_{n}}.\]Proof.: (i)
First, note that
\[\hat{p}_{h}(x)-\tilde{p}_{h}(x)=\frac{1}{nh^{d}}\sum_{i=1}^{n}\left(K\left(\frac{x -X_{i}}{h}\right)-K\left(\frac{x-X_{i}^{0}}{h}\right)\right).\]
Then under Assumption A2,
\[\left\|\hat{p}_{h}-\tilde{p}_{h}\right\|_{\infty} \leq\frac{1}{nh^{d}}\sum_{i=1}^{n}\left\|K\left(\frac{\cdot-X_{i} }{h}\right)-K\left(\frac{\cdot-X_{i}^{0}}{h}\right)\right\|_{\infty}\] \[\leq\frac{1}{nh^{d}}\sum_{i=1}^{n}\left\|K\right\|_{\infty}I\left( X_{i}\neq X_{i}^{0}\right).\]
Then from Assumption 2, \(\sum_{i=1}^{n}I\left(X_{i}\neq X_{i}^{0}\right)\leq n\rho_{n}\), and hence
\[\left\|\hat{p}_{h}-\tilde{p}_{h}\right\|_{\infty}\leq\frac{\left\|K\right\|_{ \infty}\rho_{n}}{h^{d}}.\]
(ii)
Let \(\mathcal{X}_{b}\), \(\mathcal{X}_{b}^{0}\) be bootstrapped samples of \(\mathcal{X}\), \(\mathcal{X}^{0}\) with the same sampling with replacement process. Let \(\hat{p}_{h}^{b},\tilde{p}_{h}^{b}\) be KDE of \(\mathcal{X}_{b}\) and \(\mathcal{X}_{b}^{0}\), respectively. And, note that
\[\left|\left\|\hat{p}_{h}-\hat{p}_{h}^{b}\right\|_{\infty}-\left\|\tilde{p}_{h} -\tilde{p}_{h}^{b}\right\|_{\infty}\right|\leq\left\|\hat{p}_{h}-\tilde{p}_{h} \right\|_{\infty}+\left\|\hat{p}_{h}^{b}-\tilde{p}_{h}^{b}\right\|_{\infty}.\]
Let \(L_{b}\) be the number of elements where \(\mathcal{X}_{b}\) and \(\mathcal{X}_{b}^{0}\) differ, i.e., \(L_{b}=\left|\mathcal{X}_{b}^{\prime}\backslash\mathcal{X}_{b}^{0}\right|= \left|\mathcal{X}_{b}^{0}\backslash\mathcal{X}_{b}\right|\), then \(L_{b}\sim\mathrm{Binomial}(n,\rho_{n})\), and
\[\left\|\hat{p}_{h}^{b}-\tilde{p}_{h}^{b}\right\|_{\infty}\leq\frac{\left\|K \right\|_{\infty}L_{b}}{nh^{d}}.\]
And hence,
\[\left|\left\|\hat{p}_{h}-\hat{p}_{h}^{b}\right\|_{\infty}-\left\|\tilde{p}_{h} -\tilde{p}_{h}^{b}\right\|_{\infty}\right|\leq\frac{\left\|K\right\|_{\infty} \left(n\rho_{n}+L_{b}\right)}{nh^{d}}. \tag{7}\]
Then by Hoeffding's inequality, with probability \(1-\delta\),
\[L_{b}\leq n\rho_{n}+\sqrt{\frac{n\log(1/\delta)}{2}}.\]
by applying this to (7), with probability \(1-\delta\),
\[\left|\left\|\hat{p}_{h}-\tilde{p}_{h}^{b}\right\|_{\infty}-\left\|\tilde{p}_ {h}-\tilde{p}_{h}^{b}\right\|_{\infty}\right|\leq O\left(\frac{\rho_{n}}{h^{d }}+\sqrt{\frac{\rho_{n}\log(1/\delta)}{nh^{2d}}}\right).\]
Hence applying Claim E.2 'implies that, with probability \(1-\delta\),
\[c_{\mathcal{X}^{0},\alpha+\delta}-O\left(\frac{\rho_{n}}{h^{d}}+\sqrt{\frac{ \rho_{n}\log(1/\delta)}{nh^{2d}}}\right)\leq c_{\mathcal{X},\alpha}\leq c_{ \mathcal{X}^{0},\alpha-\delta}+O\left(\frac{\rho_{n}}{h^{d}}+\sqrt{\frac{\rho _{n}\log(1/\delta)}{nh^{2d}}}\right).\]
(iii)
Let \(\tilde{\delta}_{n}\coloneqq O\left(\left(\frac{\log n}{nh_{n}^{d}}\right)^{ \frac{4+d}{d+2d}}\right)\) be from RHS of Proposition D.2, then for large enough \(n\), \(\delta_{n}\geq\tilde{\delta}_{n}\). Note that Proposition D.2 implies that for all \(\alpha\in(0,1)\),
\[c_{P,\alpha+\delta_{n}}\leq c_{\mathcal{X}_{0},\alpha}\leq c_{P,\alpha-\delta_ {n}}. \tag{8}\]
Now, \(nh_{n}^{d}\rightarrow\infty\) implies that \(\sqrt{nh_{n}^{d}}(\tilde{p}_{h_{n}}-p_{h_{n}})\) converges to a Gaussian process, and then Claim E.2 implies that \(c_{P,\alpha}=\Theta\left(\sqrt{\frac{\log(1/\alpha)}{nh_{n}^{d}}}\right)\) and \[c_{P,\alpha}-c_{P,\alpha+\delta_{n}}=\Theta\left(\delta_{n}\alpha\sqrt{\frac{\log(1/ \alpha)}{nh_{n}^{d}}}\right). \tag{9}\]
Then under Assumption 2, since \(nh_{n}^{-d}\rho_{n}^{2}\delta_{n}^{-2}=o(1)\) and \(n\rho_{n}\geq 1\) implies \(h_{n}^{-d}\rho_{n}\delta_{n}^{-2}=o(1)\), and then
\[\frac{\rho_{n}}{h_{n}^{d}}+\sqrt{\frac{\rho_{n}\log(1/\delta)}{nh_{n}^{2d}}}=O \left(\delta_{n}\alpha\sqrt{\frac{\log(1/\alpha)}{nh_{n}^{d}}}\right). \tag{10}\]
Then (8), (9), (10) implies that
\[c_{P,\alpha+3\delta_{n}}\leq c_{P,\alpha+2\delta_{n}}-O\left(\delta_{n}\alpha \sqrt{\frac{\log(1/\alpha)}{nh_{n}^{d}}}\right)\leq c_{\mathcal{X}^{0},\alpha +\delta_{n}}-O\left(\frac{\rho_{n}}{h_{n}^{d}}+\sqrt{\frac{\rho_{n}\log(1/ \delta)}{nh_{n}^{2d}}}\right), \tag{11}\]
and
\[c_{\mathcal{X}^{0},\alpha-\delta_{n}}+O\left(\frac{\rho_{n}}{h_{n}^{d}}+\sqrt{ \frac{\rho_{n}\log(1/\delta)}{nh_{n}^{2d}}}\right)\leq c_{P,\alpha-2\delta_{n} }+O\left(\delta_{n}\alpha\sqrt{\frac{\log(1/\alpha)}{nh_{n}^{d}}}\right)\leq c _{P,\alpha-3\delta_{n}}. \tag{12}\]
Now, from the definition of \(c_{P,\alpha+3\delta_{n}}\), with probability \(1-\alpha-3\delta_{n}\),
\[\left\|\hat{p}_{n}-p_{h}\right\|_{\infty}\leq c_{P,\alpha+3\delta_{n}}. \tag{13}\]
And (ii) implies that, with probability \(1-\delta_{n}\),
\[c_{\mathcal{X}^{0},\alpha+\delta_{n}}-O\left(\frac{\rho_{n}}{h_{n}^{d}}+\sqrt{ \frac{\rho_{n}\log(1/\delta)}{nh_{n}^{2d}}}\right)\leq c_{\mathcal{X},\alpha} \leq c_{\mathcal{X}^{0},\alpha-\delta_{n}}+O\left(\frac{\rho_{n}}{h_{n}^{d}}+ \sqrt{\frac{\rho_{n}\log(1/\delta)}{nh_{n}^{2d}}}\right). \tag{14}\]
Hence by combining (11), (12), (13), (14), with probability \(1-\alpha-4\delta_{n}\),
\[\left\|\hat{p}_{n}-p_{h}\right\|_{\infty} \leq c_{P,\alpha+3\delta_{n}}\] \[\leq c_{\mathcal{X}^{0},\alpha+\delta_{n}}-O\left(\frac{\rho_{n}} {h_{n}^{d}}+\sqrt{\frac{\rho_{n}\log(1/\delta)}{nh_{n}^{2d}}}\right)\] \[\leq c_{\mathcal{X},\alpha}\] \[\leq c_{\mathcal{X}^{0},\alpha-\delta_{n}}+O\left(\frac{\rho_{n}} {h_{n}^{d}}+\sqrt{\frac{\rho_{n}\log(1/\delta)}{nh_{n}^{2d}}}\right)\] \[\leq c_{P,\alpha-3\delta_{n}}.\]
**Corollary E.4**.: _Suppose Assumption 1, 2, A2 hold, and let \(\alpha\in(0,1)\). Suppose \(\delta_{n}^{-1}=O\left(\left(\frac{\log n}{nh_{n}^{d}}\right)^{\frac{4+4}{4+2d}}\right)\) and \(nh_{n}^{-d}\rho_{n}^{2}\delta_{n}^{-2}\to 0\). Then with probability \(1-\alpha-4\delta_{n}\),_
1. _Let_ \(\delta_{n}\in(0,1)\) _and suppose_ \(nh_{n}^{d}\rightarrow\infty\)_,_ \(\delta_{n}^{-1}=O\left(\left(\frac{\log n}{nh_{n}^{d}}\right)^{\frac{4+d}{4+2d}}\right)\) _and_ \(nh_{n}^{-d}\rho_{n}^{2}\delta_{n}^{-2}\to 0\)_. Then with probability_ \(1-\alpha-4\delta_{n}\)_,_ \[p_{h_{n}}^{-1}[2c_{P,\alpha-3\delta_{n}},\infty)\subset\hat{p}_{h_{n}}^{-1}[c_ {\mathcal{X},\alpha},\infty)\subset\mathrm{supp}(P_{h_{n}}).\]
2. _Let_ \(\delta_{m}\in(0,1)\) _and suppose_ \(mh_{m}^{d}\rightarrow\infty\)_,_ \(\delta_{m}^{-1}=O\left(\left(\frac{\log m}{mh_{m}^{d}}\right)^{\frac{4+d}{4+2d}}\right)\) _and_ \(mh_{m}^{-d}\rho_{m}^{2}\delta_{m}^{-2}\to 0\)_. Then with probability_ \(1-\alpha-4\delta_{m}\)_,_ \[q_{h_{m}}^{-1}[2c_{Q,\alpha-3\delta_{m}},\infty)\subset\hat{q}_{h_{m}}^{-1}[c_ {\mathcal{Y},\alpha},\infty)\subset\mathrm{supp}(Q_{h_{m}}).\]Proof.: (i)
Lemma E.3 (iii) implies that with probability \(1-\alpha-4\delta_{n}\), \(\|\hat{p}_{h}-p_{h}\|<c_{\mathcal{X},\alpha}\leq c_{P,\alpha-3\delta_{n}}\). This implies
\[p_{h_{n}}^{-1}[2c_{P,\alpha-3\delta_{n}},\infty)\subset\hat{p}_{h_{n}}^{-1}[c_ {\mathcal{X},\alpha},\infty)\subset\mathrm{supp}(P_{h_{n}}).\]
(ii)
This can be proven similarly to (i).
_Claim E.5_.: For a nonnegative measure \(\mu\) and sets \(A,B,C,D\),
\[\mu(A\cap B)-\mu(C\cap D)\leq\mu(A\backslash C)+\mu(B\backslash D).\]
Proof.: \[\mu(A\cap B)-\mu(C\cap D) \leq\mu((A\cap B)\backslash(C\cap D))=\mu((A\cap B)\cap(C^{ \complement}\cup D^{\complement}))\] \[=\mu((A\cap B)\cap C^{\complement})\cup(A\cap B)\cap D^{\complement }))\] \[\leq\mu((A\cap B)\backslash C)+\mu(A\cap B)\backslash D)\] \[\leq\mu(A\backslash C)+\mu(B\backslash D).\]
From here, let \(P_{n}\) and \(Q_{m}\) be the empirical measures on \(\mathcal{X}\) and \(\mathcal{Y}\), respectively, i.e., \(P_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}}\) and \(Q_{m}=\frac{1}{m}\sum_{j=1}^{m}\delta_{Y_{j}}\).
**Lemma E.6**.: _Suppose Assumption 1, 2 hold. Let \(A\subset\mathbb{R}^{d}\). Then with probability \(1-\delta\),_
\[\left|P_{n}-P\right|(A)\leq\rho_{n}+\sqrt{\frac{\log(2/\delta)}{2n}},\]
_and in particular,_
\[P_{n}(A)\leq P(A)+\rho_{n}+\sqrt{\frac{\log(2/\delta)}{n}}.\]
Proof.: Let \(P_{n}^{0}\) be the empirical measure on \(\mathcal{X}^{0}\), i.e., \(P_{n}^{0}=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}^{0}}\). By using Hoeffding's inequality,
\[\mathbb{P}\left(\left|\left(P_{n}^{0}-P\right)(A)\right|\geq t\right)\leq 2 \exp\left(-2nt^{2}\right),\]
and hence with probability \(1-\delta\),
\[\left|P_{n}^{0}-P\right|(A)\leq\sqrt{\frac{\log(2/\delta)}{2n}}.\]
And \(\left|P_{n}-P_{n}^{0}\right|(A)\) is expanded as
\[\left|P_{n}-P_{n}^{0}\right|(A)=\frac{1}{n}\sum_{i=1}^{n}\left|I(X_{i}\in A)- I(X_{i}^{0}\in A)\right|.\]
Under Assumption 2, \(\sum_{i=1}^{n}I\left(X_{i}\neq X_{i}^{0}\right)\leq n\rho_{n}\), and hence
\[\left|P_{n}-P_{n}^{0}\right|(A) =\frac{1}{n}\sum_{i=1}^{n}\left|I(X_{i}\in A)-I(X_{i}^{0}\in A)\right|\] \[\leq\frac{1}{n}\sum_{i=1}^{n}I\left(X_{i}\neq X_{i}^{0}\right)= \rho_{n}.\]
Therefore, with probability \(1-\delta\),
\[\left|P_{n}-P\right|(A)\leq\rho_{n}+\sqrt{\frac{\log(2/\delta)}{2n}}.\]_Claim E.7_.: Suppose Assumption 1, 2, A1, A2 hold, and let \(\delta_{n},\delta_{m}\in(0,1)\). Suppose \(nh_{n}^{d}\to\infty\), \(mh_{m}^{d}\to\infty\), \(\delta_{n}^{-1}=O\left(\left(\frac{\log n}{nh_{n}^{d}}\right)^{\frac{4+d}{4+2 2}}\right)\), \(\delta_{m}^{-1}=O\left(\left(\frac{\log m}{mh_{m}^{d}}\right)^{\frac{4+d}{4+2}}\right)\), \(nh_{n}^{-d}\rho_{n}^{2}\delta_{n}^{-2}\to 0\), and \(nh_{m}^{-d}\rho_{m}^{2}\delta_{m}^{-2}\to 0\). Let
\[B_{n,m}\coloneqq\left(\operatorname{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P}, \infty)\right)\cup q_{h_{m}}^{-1}(0,2c_{Q})\cup\operatorname{supp}(P_{h_{n}}) \backslash\operatorname{supp}(P). \tag{15}\]
* With probability \(1-2\alpha-4\delta_{n}-8\delta_{m}\), \[\left|Q_{m}\left(\hat{q}_{h_{m}}^{-1}[c_{\mathcal{X}},\infty)\cap\hat{q}_{h_{m }}^{-1}[c_{\mathcal{Y}},\infty)\right)-Q_{m}\left(\operatorname{supp}(P)\cap \operatorname{supp}(Q)\right)\right|\] \[\leq C\left(Q(B_{n,m})+\rho_{m}+\sqrt{\frac{\log(1/\delta)}{m}} \right).\]
* With probability \(1-2\alpha-4\delta_{n}-9\delta_{m}\), \[\left|Q_{m}\left(\hat{p}_{h_{m}}^{-1}[c_{\mathcal{X}},\infty)\cap \hat{q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\right)-Q\left(\operatorname{supp }(P)\right)\right|\] \[\leq C\left(Q(B_{n,m})+\rho_{m}+\sqrt{\frac{\log(1/\delta)}{m}} \right).\]
* With probability \(1-2\alpha-9\delta_{m}\), \[\left|Q_{m}\left(\hat{q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\right)-1\right| \leq C\left(Q\left(q_{h_{m}}^{-1}(0,2c_{Q})\right)+\rho_{m}+\sqrt{\frac{\log( 1/\delta)}{m}}\right).\]
* As \(n,m\to\infty\), \(B_{n,m}\to\emptyset\). And in particular, \[Q(B_{n,m})\to 0.\]
Proof.: (i)
From Corollary E.4 (i) and (ii), with probability \(1-2\alpha-4(\delta_{n}+\delta_{m})\),
\[Q_{m}\left(p_{h_{n}}^{-1}[2c_{P},\infty)\cap q_{h_{m}}^{-1}[2c_{ Q},\infty)\right) \leq Q_{m}\left(\hat{p}_{h_{n}}^{-1}[c_{\mathcal{X}},\infty)\cap \hat{q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\right)\] \[\leq Q_{m}\left(\operatorname{supp}(P_{h_{n}})\cap\operatorname{ supp}(Q_{h_{m}})\right). \tag{16}\]
Then from the first inequality of (16), combining with Claim E.5 gives
\[Q_{m}\left(\hat{p}_{h_{n}}^{-1}[c_{\mathcal{X}},\infty)\cap\hat{ q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\right)-Q_{m}\left(\operatorname{supp}(P) \cap\operatorname{supp}(Q)\right)\] \[\geq Q_{m}\left(p_{h_{n}}^{-1}[2c_{P},\infty)\cap q_{h_{m}}^{-1}[ 2c_{Q},\infty)\right)-Q_{m}\left(\operatorname{supp}(P)\cap\operatorname{supp }(Q)\right)\] \[\geq-\left(Q_{m}\left(\operatorname{supp}(P)\backslash p_{h_{n}}^{ -1}[2c_{P},\infty)\right)+Q_{m}\left(\operatorname{supp}(Q)\backslash q_{h_{m }}^{-1}[2c_{Q},\infty)\right)\right). \tag{17}\]
And from the second inequality of (16), combining with Claim E.5 gives
\[Q_{m}\left(\hat{p}_{h_{n}}^{-1}[c_{\mathcal{X}},\infty)\cap \hat{q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\right)-Q_{m}\left(\operatorname{ supp}(P)\cap\operatorname{supp}(Q)\right)\] \[\leq Q_{m}\left(\operatorname{supp}(P_{h_{n}})\cap\operatorname{ supp}(Q_{h_{m}})\right)-Q_{m}\left(\operatorname{supp}(P)\cap\operatorname{supp}(Q)\right)\] \[\leq Q_{m}\left(\operatorname{supp}(P_{h_{n}})\backslash \operatorname{supp}(P)\right)+Q_{m}\left(\operatorname{supp}(Q_{h_{m}}) \backslash\operatorname{supp}(Q)\right). \tag{18}\]
And hence combining (17) and (18) gives that, with probability \(1-2\alpha-4(\delta_{n}+\delta_{m})\),
\[\left|Q_{m}\left(\hat{p}_{h_{n}}^{-1}[c_{\mathcal{X}},\infty) \cap\hat{q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\right)-Q_{m}\left( \operatorname{supp}(P)\cap\operatorname{supp}(Q)\right)\right|\] \[\leq\max\left\{Q_{m}\left(\operatorname{supp}(P)\backslash p_{h_{n }}^{-1}[2c_{P},\infty)\right)+Q_{m}\left(\operatorname{supp}(Q)\backslash q_{h _{m}}^{-1}[2c_{Q},\infty)\right)\right.\] \[\quad\quad\quad\left.,\,Q_{m}\left(\operatorname{supp}(P_{h_{n} })\backslash\operatorname{supp}(P)\right)+Q_{m}\left(\operatorname{supp}(Q_{h_{m }})\backslash\operatorname{supp}(Q)\right)\right\}. \tag{19}\]Now we further bound the upper bound of (19). From Lemma E.6, with probability \(1-\delta_{m}\),
\[Q_{m}\left(\mathrm{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P},\infty)\right)\leq Q \left(\mathrm{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P},\infty)\right)+\rho_{m}+ \sqrt{\frac{\log(2/\delta)}{2m}}. \tag{20}\]
And similarly, with probability \(1-\delta_{m}\),
\[Q_{m}\left(\mathrm{supp}(Q)\backslash q_{h_{m}}^{-1}[2c_{Q}, \infty)\right) \leq Q\left(\mathrm{supp}(Q)\backslash q_{h_{m}}^{-1}[2c_{Q}, \infty)\right)+\rho_{m}+\sqrt{\frac{\log(2/\delta)}{2m}}\] \[=Q\left(q_{h_{m}}^{-1}(0,\infty)\right)+\rho_{m}+\sqrt{\frac{ \log(2/\delta)}{2m}}. \tag{21}\]
And similarly, with probability \(1-\delta_{m}\),
\[Q_{m}\left(\mathrm{supp}(P_{h_{n}})\backslash\mathrm{supp}(P)\right) \leq Q\left(\mathrm{supp}(P_{h_{n}})\backslash\mathrm{supp}(P) \right)+\rho_{m}+\sqrt{\frac{\log(2/\delta)}{2m}}. \tag{22}\]
And similarly, with probability \(1-\delta_{m}\),
\[Q_{m}\left(\mathrm{supp}(Q_{h_{m}})\backslash\mathrm{supp}(Q)\right) \leq Q\left(\mathrm{supp}(Q_{h_{m}})\backslash\mathrm{supp}(Q) \right)+\rho_{m}+\sqrt{\frac{\log(2/\delta)}{2m}}\] \[=\rho_{m}+\sqrt{\frac{\log(2/\delta)}{2m}}. \tag{23}\]
Hence by applying (20), (21), (22), (23) to (19), with probability \(1-2\alpha-4\delta_{n}-8\delta_{m}\),
\[\left|Q_{m}\left(\hat{p}_{h_{n}}^{-1}[c_{\mathcal{X}},\infty)\cap \hat{q}_{h_{m}}^{-1}[c_{\mathcal{Y}},\infty)\right)-Q_{m}\left(\mathrm{supp}(P )\cap\mathrm{supp}(Q)\right)\right|\] \[\leq C\left(Q\left(\left(\mathrm{supp}(P)\backslash p_{h_{n}}^{- 1}[2c_{P},\infty)\right)\cup q_{h_{m}}^{-1}(0,2c_{Q})\cup\mathrm{supp}(P_{h_{ m}})\backslash\mathrm{supp}(P)\right)\right.\] \[\qquad\qquad\left.+\rho_{m}+\sqrt{\frac{\log(1/\delta)}{m}}\right)\] \[=C\left(Q(B_{n,m})+\rho_{m}+\sqrt{\frac{\log(1/\delta)}{m}}\right),\]
where \(B_{n,m}\) is from (15).
(ii)
This can be done similarly to (i).
(iii)
Lemma E.6 gives that with probability \(1-\delta_{m}\),
\[|Q_{m}\left(\mathrm{supp}(P)\cap\mathrm{supp}(Q)\right)-Q\left(\mathrm{supp}( P)\right)|\leq\rho_{m}+\sqrt{\frac{\log(2/\delta)}{2m}}.\]
Hence combining this with (i) gives the desired result.
(iv)
Note that Lemma D.3 implies that for all \(x\in\mathrm{supp}(P)\), \(\liminf_{n\to\infty}p_{h_{n}}(x)>0\), so \(p_{h_{n}}(x)>2c_{P}\) for large enough \(n\). And hence as \(n\to\infty\),
\[\mathrm{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P},\infty)\to\emptyset. \tag{24}\]And similar argument holds for \(\operatorname{supp}(Q)\backslash q_{h_{m}}^{-1}[2c_{Q},\infty)\), so as \(m\to\infty\),
\[\operatorname{supp}(Q)\backslash q_{h_{m}}^{-1}[2c_{Q},\infty)\to\emptyset. \tag{25}\]
Also, since \(K\) has compact support, for any \(x\notin\operatorname{supp}(P)\), \(x\notin\operatorname{supp}(P_{h_{n}})\) once \(h_{n}<d(x,\operatorname{supp}(P))\). Hence as \(n\to\infty\),
\[\operatorname{supp}(P_{h_{n}})\backslash\operatorname{supp}(P)\to\emptyset. \tag{26}\]
And similarly, as \(m\to\infty\),
\[\operatorname{supp}(Q_{h_{m}})\backslash\operatorname{supp}(Q)\to\emptyset. \tag{27}\]
Hence by applying (24), (25), (26), (27) to the definition of \(B_{n,m}\) in (15) gives that as \(n,m\to\infty\),
\[B_{n,m}\to\emptyset.\]
And in particular,
\[Q(B_{n,m})\to 0.\]
Below we state a more formal version of Proposition 4.1.
**Proposition E.8**.: _Suppose Assumption 1,2,A1,A2 hold. Suppose \(\alpha\to 0\), \(\delta_{n}\to 0\), \(\delta_{n}^{-1}=O\left(\left(\frac{\log n}{nh_{n}^{d}}\right)^{\frac{d+d}{d+2d}}\right)\), \(h_{n}\to 0\), \(nh_{n}^{d}\to\infty\), and \(nh_{n}^{-d}\rho_{n}^{2}\delta_{n}^{-2}\to 0\), and similar relations hold for \(h_{m}\), \(\rho_{m}\). Then there exists some constant \(C>0\) not depending on anything else such that_
\[\left|\operatorname{TopP}_{\mathcal{X}}(\mathcal{Y})-\operatorname{ precision}_{P}(\mathcal{Y})\right| \leq C\left(Q(B_{n,m})+\rho_{m}+\sqrt{\frac{\log(1/\delta)}{m}} \right),\] \[\left|\operatorname{TopR}_{\mathcal{Y}}(\mathcal{X})- \operatorname{recall}_{Q}(\mathcal{X})\right| \leq C\left(P(A_{n,m})+\rho_{n}+\sqrt{\frac{\log(1/\delta)}{n}} \right),\]
_where_
\[A_{n,m} \coloneqq\left(\operatorname{supp}(Q)\backslash q_{h_{m}}^{-1}[2 c_{Q},\infty)\right)\cup p_{h_{n}}^{-1}(0,2c_{P})\cup\operatorname{supp}(Q_{h_{m}}) \backslash\operatorname{supp}(Q),\] \[B_{n,m} \coloneqq\left(\operatorname{supp}(P)\backslash p_{h_{n}}^{-1}[2 c_{P},\infty)\right)\cup q_{h_{m}}^{-1}(0,2c_{Q})\cup\operatorname{supp}(P_{h_{n}}) \backslash\operatorname{supp}(P).\]
_And as \(n,m\to\infty\), \(P(A_{n,m})\to 0\) and \(Q(B_{n,m})\to 0\) hold._
Proof of Proposition e.8.: Now this is an application of Claim E.7 (i) (ii) (v) to the definitions of \(\operatorname{precision}_{P}(\mathcal{Y})\) and \(\operatorname{recall}_{Q}(\mathcal{X})\) in (1) and (2) and the definitions of \(\operatorname{TopP}_{\mathcal{X}}(\mathcal{Y})\) and \(\operatorname{TopR}_{\mathcal{Y}}(\mathcal{X})\) in (3) and (4).
Similarly, we state a more formal version of Theorem 4.2.
**Theorem E.9**.: _Suppose Assumption 1,2,A1,A2 hold. Suppose \(\alpha\to 0\), \(\delta_{n}\to 0\), \(\delta_{n}^{-1}=O\left(\left(\frac{\log n}{nh_{n}^{d}}\right)^{\frac{d+d}{d+2d}}\right)\), \(h_{n}\to 0\), \(nh_{n}^{d}\to\infty\), and \(nh_{n}^{-d}\rho_{n}^{2}\delta_{n}^{-2}\to 0\), and similar relations hold for \(h_{m}\), \(\rho_{m}\). Then there exists some constant \(C>0\) not depending on anything else such that_
\[\left|\operatorname{TopP}_{\mathcal{X}}(\mathcal{Y})-\operatorname{ precision}_{P}(Q)\right| \leq C\left(Q(B_{n,m})+\rho_{m}+\sqrt{\frac{\log(1/\delta)}{m}} \right),\] \[\left|\operatorname{TopR}_{\mathcal{Y}}(\mathcal{X})- \operatorname{recall}_{Q}(P)\right| \leq C\left(P(A_{n,m})+\rho_{n}+\sqrt{\frac{\log(1/\delta)}{n}} \right),\]
_where \(A_{n,m}\)and \(B_{n,m}\) are the same as in Proposition E.8. Again as \(n,m\to\infty\), \(P(A_{n,m})\to 0\) and \(Q(B_{n,m})\to 0\) hold._Proof of Theorem e.9.: Now this is an application of Claim E.7 (iii) (iv) (v) to the definitions of \(\mathrm{precision}_{P}(Q)\) and \(\mathrm{recall}_{Q}(P)\) and the definitions of \(\mathtt{TopP}_{\mathcal{X}}(\mathcal{Y})\) and \(\mathtt{TopR}_{\mathcal{Y}}(\mathcal{X})\) in (3) and (4).
Proof of Lemma 4.3.: Recall that \(B_{n,m}\) is defined in (15) as
\[B_{n,m}=\left(\mathrm{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P},\infty)\right) \cup q_{h_{m}}^{-1}(0,2c_{Q})\cup\mathrm{supp}(P_{h_{n}})\backslash\mathrm{ supp}(P),\]
and hence \(Q(B_{n,m})\) can be upper bounded as
\[Q(B_{n,m})\leq Q\left(\mathrm{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P},\infty) \right)+Q\left(q_{h_{m}}^{-1}(0,2c_{Q})\right)+Q\left(\mathrm{supp}(P_{h_{n}} )\backslash\mathrm{supp}(P)\right). \tag{28}\]
For the first term of RHS of (28), suppose \(\mathrm{supp}(K)\subset\mathcal{B}_{\mathbb{R}^{d}}(0,1)\) for convenience, and let \(A_{-h_{n}}:=\left\{x\in\mathrm{supp}(P):d(x,\mathbb{R}^{d}\backslash\mathrm{ supp}(P))\geq h_{n}\right\}\). Then from Assumption A3, for all \(x\in A_{-h_{n}}\), \(p_{h_{n}}(x)\geq p_{\min}\) holds. Hence for large enough \(N\) such that for \(n\geq N\), \(c_{P}<p_{\min}\), then \(p_{h_{n}}(x)\geq p_{\min}\) for all \(x\in A_{-h_{n}}\), and hence
\[Q\left(\mathrm{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P},\infty)\right)\leq Q \left(\mathrm{supp}(P)\backslash A_{-h_{n}}\right). \tag{29}\]
Also, \(\mathrm{supp}(\mathrm{P})\) being bounded implies that all the coefficients \(a_{0},\ldots,a_{d-1}\) in Proposition D.5 for \(\mathrm{supp}(P)\) are in fact finite. Then \(q\leq q_{\max}\) from Assumption A3 and Proposition D.5 implies
\[Q\left(\mathrm{supp}(P)\backslash A_{-h_{n}}\right)=O(h_{n}). \tag{30}\]
And hence combining (29) and (30) gives that
\[Q\left(\mathrm{supp}(P)\backslash p_{h_{n}}^{-1}[2c_{P},\infty)\right)=O(h_{n }). \tag{31}\]
For the second term of RHS of (28), with a similar argument,
\[Q\left(q_{h_{m}}^{-1}(0,2c_{Q})\right)=O(h_{m}). \tag{32}\]
Finally, for the third term of RHS of (28), Proposition D.5 implies
\[Q\left(\mathrm{supp}(P_{h_{n}})\backslash\mathrm{supp}(P)\right)=O(h_{n}). \tag{33}\]
Hence applying (31), (32), (33) to (28) gives that
\[Q(B_{n,m})=O(h_{n}+h_{m}).\]
## Appendix F Related Work
**Improved Precision & Recall (P&R).** Existing metrics such as IS and FID assess the performance of generative models with a single score while showing usefulness in determining performance rankings between models and are still widely used. These evaluation metrics have problems that they cannot provide detailed interpretations of the evaluation in terms of fidelity and diversity. Sajjadi et al. [12] tried to solve this problem by introducing the original Precision and Recall, which however has a limitation as it is inaccurate due to the simultaneous approximation of real support and fake support through "\(k\)-means clustering". For example, if we evaluate the fake images having high fidelity and large value of \(k\), many fake features can be classified in a small cluster with no real features, resulting in a low fidelity score (\(precision_{P}\coloneqq Q(supp(P))\)). Kyuakanniemi et al. [13] focuses on the limitation from the support estimation and presents an P&R that allows us to more accurately assess fidelity and diversity by approximating real support and fake support separately:
\[precision(\mathcal{X},\mathcal{Y})\coloneqq\frac{1}{M}\sum_{i}^{M}f(Y_{i}, \mathcal{X}),\ recall(\mathcal{X},\mathcal{Y})\coloneqq\frac{1}{N}\sum_{j}^{N }f(X_{j},\mathcal{Y}),\]
\[\text{where }f(Y_{i},\mathcal{X})=\begin{cases}1,&\text{if }Y_{i}\in supp(P),\\ 0,&\text{otherwise.}\end{cases}\]
Here, \(supp(P)\) is defined as the union of spheres centered on each real feature \(X_{i}\) and whose radius is the distance between \(k\)th nearest real features of \(X_{i}\)
**Density & Coverage** (D&C). Naeem et al. [14] has reported that P&R cannot stably provide accurate fidelity and diversity when real or fake features are present through experiment. This limitation of P&R is that it has a problem with approximating overestimated supports by outliers due to the use of "\(k\)-nearest neighborhood" algorithm. For a metric that is robust to outlying features, Naeem et al. [14] proposes a new evaluation metric that only relies on the real support, based on the fact that a generative model often generates artifacts which possibly results in outlying features in the embedding space:
\[density(\mathcal{X},\mathcal{Y})\coloneqq\frac{1}{M}\sum_{j}^{M}\sum_{i}^{N}1 _{Y_{j}\in\,f(X_{i})},\ coverage(\mathcal{X},\mathcal{Y})\coloneqq\frac{1}{N} \sum_{i}^{N}1_{\exists j\ s.t.\ Y_{j}\in\,f(X_{i})},\]
\[\text{where }f(X_{i})=\mathcal{B}(X_{i},NND_{k}(X_{i}))\]
In the equation, \(NND_{k}(X_{i})\) means the \(k\)th-nearest neighborhood distance of \(X_{i}\) and \(\mathcal{B}(X_{i},NND_{k}(X_{i}))\) denotes the hyper-sphere centered at \(X_{i}\) with radius \(NND_{k}(X_{i})\). However, D&C is only a partial solution because it still gives an inaccurate evaluation when a real outlying feature exists. Unlike coverage, density is not upper-bounded which makes it unclear exactly which ideal score a generative model should achieve.
**Geometric Evaluation of Data Representations** (GCA). Poklukar et al. [27] proposes a metric called GCM that assesses the fidelity and diversity of fake images. GCM uses the geometric and topological properties of connected components formed by vertex \(\mathcal{V}\) (feature) and edge \(\mathcal{E}\) (connection). Briefly, when the pairwise distance between vertices is less than a certain threshold \(\epsilon\), a connected component is formed by connecting two vertices with an edge. This set of connected components can be thought of as a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), and each connected component separated from each other in \(\mathcal{G}\) is defined as a subgraph \(\mathcal{G}_{i}\) (i.e., \(\mathcal{G}=\cup_{i}\mathcal{G}_{i}\)). GCA uses the consistency \(c(\mathcal{G}_{i})\) and the quality \(q(\mathcal{G}_{i})\) to select the important subgraph \(\mathcal{G}_{i}\) that is formed on both real vertices and fake vertices (\(\mathcal{V}=\mathcal{X}\cup\mathcal{Y}\)). \(c(\mathcal{G}_{i})\) evaluates the ratio of the number of real vertices and fake vertices and \(q(\mathcal{G}_{i})\) computes the ratio of the number of edges connecting real vertices and fake vertices to the total number of edges in \(\mathcal{G}_{i}\). Given the consistency threshold \(\eta_{c}\) and quality threshold \(\eta_{q}\), the set of important subgraphs is defined as \(\mathcal{S}(\eta_{c},\eta_{q})=\cup_{q(\mathcal{G}_{i})>\eta_{q},\,c(\mathcal{ G}_{i})>\eta_{c}}\mathcal{G}_{i}\). Based on this, GCA precision and recall are defined as follows:
\[\mathrm{GCA\ precision}=\frac{|\mathcal{S}^{\mathcal{Y}}|_{\mathcal{V}}}{| \mathcal{G}^{\mathcal{Y}}|_{\mathcal{V}}},\ \mathrm{GCA\ recall}=\frac{|\mathcal{S}^{ \mathcal{X}}|_{\mathcal{V}}}{|\mathcal{G}^{\mathcal{X}}|_{\mathcal{V}}},\]
In above, \(\mathcal{S}^{\mathcal{Y}}\) and \(\mathcal{G}^{\mathcal{Y}}\) represent a subset (\(\mathcal{V}=\mathcal{Y},\mathcal{E}\)) of each graph \(\mathcal{S}\) and \(\mathcal{G}\), respectively. One of the drawbacks is that the new hyper-parameters \(\epsilon\), \(\eta_{c}\), and \(\eta_{q}\) need to be set arbitrarily according to the image dataset in order to evaluate fake images well. In addition, even if we use the hyper-parameters that seem most appropriate, it is difficult to verify that the actual evaluation results are correct because they do not approximate the underlying feature distribution.
**Manifold Topology Divergence** (MTD). Barannikov et al. [28] proposes a new metric that uses the life-length of the \(k\)-dimensional homology of connected components between real features and fake features formed through Vietoris-Rips filtration [41]. MTD is simply defined as the sum of the life-length of homologies and is characterized by an evaluation trend consistent with the FID. Specifically, MTD constructs the graph \(\mathcal{G}\) by connecting edges between features with a distance smaller than the threshold \(\epsilon\). Then the birth and death of the \(k\)-dimensional homologies in \(\mathcal{G}\) are recorded by adjusting the threshold \(\epsilon\) from 0 to \(\infty\). The life-length \(l_{i}(h)\) is obtained through \(death_{i}-birth_{i}\) of any \(k\)-dimensional homology \(h\), and let life-length set of all homologies as \(L(h)\). MTD repeats the process of obtaining \(L(h)\) on randomly sampled subsets \(\mathcal{X}^{\prime}\in\mathcal{X}\) and \(\mathcal{Y}^{\prime}\in\mathcal{Y}\) at each iteration, and defined as the following:
\[MTD(\mathcal{X},\mathcal{Y})=\frac{1}{n}\sum_{i}^{n}L_{i}(h),\ \text{where }L_{i}(h)\text{ is the life-length set defined at }i\text{th iteration}.\]
However, MTD has a limitation of only considering a fixed dimensional homology for evaluation. In addition, MTD lacks interpretability as it evaluates the model with a single assessment score.
Philosophy of our metric & Practical Scenarios
### Philosophy of our metric
All evaluation metrics have different resolutions and properties. The philosophy of our metric is to propose a reliable evaluation metric based on statistically and topologically certain things. In a real situation, the data often contain outliers or noise, and these data points come from a variety of sources (e.g. human error, feature embedding network). These errors may play as outliers, which leads to an overestimation of the data distribution, which in turn leads to a false impression of improvement when developing generative models. As discussed in Section 2, P&R and its variants have different ways to estimate the supports, which overlook the possible presence and effect of noise. Naeem et al. [14] revealed that previous support estimation approaches may inaccurately estimate the true support under noisy conditions, and partially solved this problem by proposing to only use the real support. However, this change goes beyond the natural definition of precision and recall, and results in losing some beneficial properties like boundedness. Moreover, this is still a temporary solution since real data can also contain outliers. We propose a solution to the existing problem by minimizing the effects of noise, using topologically significant data points and retaining the natural definition of precision and recall.
### Practical Scenarios
From this perspective, we present two examples of realistic situations where outliers exist in the data and filtering out them can have a significant impact on proper model analysis and evaluation. With real data, there are many cases where outliers are introduced into the data due to human error [17; 18]. Taking the simplest MNIST as an example, suppose our task of interest is to generate 4. Since image number 7 is included in data set number 4 due to incorrect labeling (see Figure 1 of [17]), the support of the real data in the feature space can be overestimated by such outliers, leading to an unfair evaluation of generative models (as in Section 5.1.2 and 5.2.2); That is, the sample generated with weird noise may be in the overestimated support, and existing metrics without taking into account the reliability of the support could not penalize this, giving a good score to a poorly performing generator.
A similar but different example is when noise or distortions in the captured data (unfortunately) behave adversely on the feature embedding network used by the current evaluation metrics (as in Section 5.1.2 and 5.2.2); e.g., visually it is the number 7, but it is mismapped near the feature space where there are usually 4 and becomes an outlier. Then the same problem as above may occur. Note that in these simple cases, where the definition of outliers is obvious with enough data, one could easily examine the data and exclude outliers a priori to train a generative model. In the case of more complicated problems such as the medical field [18], however, it is often not clear how outliers are to be defined. Moreover, because data are often scarce, even outliers are very useful and valuable in practice for training models and extracting features, making it difficult to filter outliers in advance and decide not to use them.
On the other hand, we also provide an example where it is very important to filter out outliers in the generator sample and then evaluate them. To evaluate the generator, samples are generated by sampling from the present latent space (typically Gaussian). Even after training is complete and the generators' outputs are generally fine, there's a latent area where generators aren't fully trained. Note that latent space sampling may contain samples from regions that the generator does not cover well during training ("unfortunate outlier"). When unfortunate outliers are included, the existing evaluation metrics may underestimate or overestimate the generator's performance than its general performance. (To get around this, it is necessary to try this evaluation several times to statistically stabilize it, but this requires a lot of computation and becomes impractical, especially when the latent space dimension is high.)
Especially considering the evaluation scenario in the middle of training, the above situation is likely to occur due to frequent evaluation, which can interrupt training or lead to wrong conclusions. On the other hand, we can expect that our metric will be more robust against the above problem since it pays more attention to the core (samples that form topologically meaningful structures) generation performance of the model.
### Details of the noise framework in the experiments
We have assessed the robustness of TopP&R against two types of non-IID noise perturbation through toy data experiments (in Section 5.1.3) and real data experiments (in Section 5.2.2). The scatter noise we employed consists of randomly extracted noise from a uniform distribution. This noise, even when extensively added to our data, does not form a data structure with any significant signal (topological feature). In other words, from the perspective of TopP&R, the formation of topologically significant data structures implies that data samples should possess meaningful probability values in the feature space. However, noise following a uniform distribution across all feature dimensions holds very small probability values, and thus, it does not constitute a topologically significant data structure. Consequently, such noise is filtered out by the bootstrap bands \(c_{\mathcal{X}}\) or \(c_{\mathcal{Y}}\) that we approximate (see Section 2).
The alternate noise we utilized in our experiments, swap noise, possesses distinct characteristics from scatter noise. Initially, given the real and fake data that constitute important data structures, we introduce swap noise by randomly selecting real and fake samples and exchanging their positions. In this process, the swapped fake samples follow the distribution of real data, and conversely, the real samples adhere to the distribution of fake data. This implies the addition of noise that follows the actual data distribution, thereby generating significant data structures. When the number of samples undergoing such positional swaps remains small, meaningful probability values cannot be established within the distribution. Statistically, an extremely small number of samples cannot form a probability distribution itself. As a result, our metric operates robustly in the presence of noise. However, as the count of these samples increases and statistically significant data structures emerge, TopP&R estimates the precise support of such noise as part of their distribution. By examining Figure 4 and 5, as well as Table A12, it becomes evident that conventional metrics struggle with accurate support estimation due to vulnerability to noise. This limitation results in an inability to appropriately evaluate aspects involving minority sets or data forming long distributions.
### Limitations
Since the KDE filtration requires extensive computations in high dimensions, a random projection that preserves high-dimensional distance and topological properties is inevitable. Based on the topological structure of the features present in the embedded low dimensional space, we have shown that TopP&R theoretically and experimentally has several good properties such as robustness to the noise from various sources and sensitivity to small changes in distribution. However, matching the distortion from the random projection to the noise level allowed by the confidence band is practically infeasible: The bootstrap confidence band is of order \(\left(nh_{n}^{d}\right)^{-\frac{1}{2}}\), and hence for the distortion from the random projection to match this confidence band, the embedded dimension \(d\) should be of order \(\Omega\left(nh_{n}^{d}\log n\right)\) from Remark (B.2). However, computing the KDE filtration in dimension \(\Omega\left(nh_{n}^{d}\log n\right)\) is practically infeasible. Hence in practice, the topological distortion from the random projection is not guaranteed to be filtered out by the confidence band, and there is a possibility of obtaining a less accurate evaluation score compared to calculations in the original dimensions.
Exploring the avenue of localizing uncertainty at individual data points separately presents an intriguing direction of research. Such an approach could potentially be more sample-efficient and disregard less data points. However, considering our emphasis on preserving topological signals, achieving localized uncertainty estimation in this manner is challenging within the current state of the art. For instance, localizing uncertainty in the kernel density estimate \(\hat{p}\) is feasible, as functional variability is inherently local. To control uncertainty at a specific point \(x\), we primarily need to analyze the function value \(\hat{p}(x)\) at that point. In contrast, localizing uncertainty for homological features situated at point \(x\) requires the analysis of all points connected by the homological feature. Consequently, even if our intention is to confine uncertainty to a local point, it demands estimating the uncertainty at the global structural level. This makes localizing the uncertainly for topological features difficult given the tools in topology and statistics we currently have.
Experimental Details
### Implementation details of embedding
We summarize the detailed information of our embedding networks implemented for the experiments. In Figure 2, 3, 4, A10, A4, and 5, P&R and D&C are computed from the features of ImageNet pre-trained VGG16 (fc2 layer), and TopP&R is computed from features placed in \(\mathbb{R}^{32}\) with additional random linear projection. In the experiment in Figure A10, the SwAV embedder is additionally considered. We implement ImageNet pre-trained InceptionV3 (fc layer), VGG16 (fc2 layer), and SwAV as embedding networks with random linear projection to 32 dimensional feature space to compare the ranking of GANs in Table 1.
### Implementation details of confidence band estimator
```
# KDE: kernel density estimator # h: kernel bandwidth parameter # k: number of repeats # \(\hat{\theta}\): set of difference # \(\mathcal{X}=\{X_{1},X_{2},\ldots,X_{n}\}\) \(\hat{p}_{h}=\textit{KDE}(\mathcal{X})\) foriteration \(=1,2,\ldots,k\)do # Define \(\mathcal{X}^{*}\) with bootstrap sampling \(\mathcal{X}^{*}\)= random sample \(n\) times with repeat from \(\mathcal{X}\) # \(\hat{p}^{*}_{h}\) replaces population density \(\hat{p}^{*}_{h}=\textit{KDE}(\mathcal{X}^{*})\) # compute \(\hat{\theta}\) with bootstrap samples # define estimated confidence band endfor endfor
```
**Algorithm 1** Confidence Band Estimator
### Choice of confidence level
For the confidence level \(\alpha\), we would like to point out that \(\alpha\) is not the usual hyperparameter to be tuned: It has a statistical interpretation of the probability or the level of confidence to allow error, noise, etc. The most popular choices are \(\alpha=0.1,0.05,0.01\), leading to 90%, 95%, 99% confidence. We used \(\alpha=0.1\) throughout our experiments.
### Estimation of Bandwidth parameter
As we discussed in Section 2, since TopP&R estimates the manifold through KDE with kernel bandwidth parameter \(h\), we need to approximate it. The estimation techniques for \(h\) are as follows: (**a**) a method of selecting \(h\) that maximizes the survival time (\(S(h)\)) or the number of significant homological features (\(N(h)\)) based on information obtained about persistent homology using the filtration method, (**b**) a method using the median of the k-nearest neighboring distances between features obtained by the balloon estimator (for more details, please refer to [42], [43], and [44]). Note that, the bandwidths \(h\) for all the experiments in this paper are estimated via Balloon Bandwidth Estimator.
For (**a**), following the notation in Section A, let the \(i\)th homological feature of persistent diagram be \((b_{i},d_{i})\), then we define its life length as \(l_{i}(h)=d_{i}-b_{i}\) at kernel bandwidth \(h\). With confidence band \(c_{\alpha}(h)\), we select h that maximizes one of the following two quantities:
\[N(h)=\#\{i:\ l_{i}(h)>c_{\alpha}(h)\},\ S(h)=\sum_{i}[l_{i}(h)-c_{\alpha}(h)] _{+}.\]
Note that, we denote the confidence band \(c_{\alpha}\) as \(c_{\alpha}(h)\) considering the kernel bandwidth parameter \(h\) of KDE in Algorithm 1.
For (**b**), the balloon bandwidth estimator is defined as below:
### Computational complexity
TopP&R is computed using the confidence band \(c_{\alpha}\) and KDE bandwidth \(h\). The computational cost of the confidence band calculation (Algorithm 1) and the balloon bandwidth calculation (Algorithm 2) are \(O(k*n^{2}*d)\) and \(O(n^{2}*d)\), respectively, where \(n\) represents the data size, \(d\) denotes the data dimension, and \(k\) stands for the number of repeats (\(k=10\) in our implementation). The resulting computational cost of TopP&R is \(O(k*n^{2}*d)\). In our experiments based on real data, calculating TopP&R once using real and fake features, each consisting of 10k samples in a 4096-dimensional space, takes approximately 3-4 minutes. This computation speed is notably comparable to that of P&R and D&C, utilizing CPU-based computations. Note that, P&R and D&C that we primarily compare in our experiments are algorithms based on \(k\)-nearest neighborhood for estimating data distribution, and due to the computation of pairwise distances, the computational cost of these algorithm is approximately \(O(n^{2}*d)\).
### Mean hamming distance
Hamming distance (HD) [45] counts the number of items with different ranks between A and B, then measures how much proportion differs in the overall order, i.e. for \(A_{i}\in A\) and \(B_{i}\in B\), \(HD(A,B)\coloneqq\sum_{j=1}^{n}1\{k:A_{i}\neq B_{i}\}\) where k is the number of differently ordered elements. The **mean HD** is calculated as follows to measure the average distances of three ordered lists: Given three ordered lists \(A,B,\text{ and }C\), \(\tilde{HD}=(HD(A,B)+HD(A,C)+HD(B,C))/3\).
### Explicit values of bandwidth parameter
Since our metric adaptively reacts to the given samples of \(P\) and \(Q\), we have two \(h\)'s per experiment. For example, in the translation experiment (Figure 2), there are 13 steps in total, and each time we estimate \(h\) for \(P\) and \(Q\), resulting in a total of 26 \(h\)'s. To show them all at a glance, we have listed all values in one place. We will also provide the code that can reproduce the results in our experiments upon acceptance.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Time & PR & DC & **TopPR (Ours)** \\ \hline Wall Clock & 1min 43s & 1min 40s & 1min 58s \\ CPU time & 2min 44s & 2min 34s & 2min 21s \\ Time complexity & \(O(n^{2}*d)\) & \(O(n^{2}*d)\) & \(O(k*n^{2}*r)\) \\ \hline \hline \end{tabular}
\end{table}
Table A1: Wall clock, CPU time and Time complexity for evaluation metrics for 10k real and 10k fake features in 4096 dimension. For TopP&R, \(r\) is random projection dimension \((r\ll d)\). The results are measured by the time module in python.
that are minority sets have topological structures, but outliers exist far apart and lack a topological structure in general.
To test this, we experimented with CIFAR10 [46], which has 5,000 samples per class. We simulate a dataset with the majority set of six classes (2,000 samples per class, 12,000 total) and the minority set of four classes (500 samples per class, 2,000 total), and an ideal generator that exactly mimics the full data distribution. As shown in the Table A12, the samples in the minority set remained after the filtering process, meaning that the samples were sufficient to form a significant structure. Both D&C and TopP&R successfully evaluate the distribution for the ideal generator. To check whether our metric reacts to the change in the distribution even with this harsh setting, we also carried out the mode decay experiment. We dropped the samples of the minority set from 500 to 100 per class, which can be interpreted as an 11.3% decrease in diversity relative to the full distribution (Given (a) ratio of the number of samples between majority and minority sets = \(12,000:2,000\) = \(6:1\) and (b) 80% decrease in samples per minority class, the true decay in the diversity is calculated as \(\frac{1}{(1+6)}\times 0.8=11.3\%\) with respect to the entire samples). Here, recall and coverage react somewhat less sensitively with their reduced diversities as 3 p.p. and 2 p.p., respectively, while TopR reacted most similarly (9 p.p.) to the ideal value. In summary, TopP&R shows much more sensitiveness to the changes in data distribution like mode decay. Thus, once the minority set has survived the filtering process, our metric is likely to be much more responsive than existing methods.
### Experiment on Non-IID perturbation with outlier removal methods
We compared the performance of metrics on a denoised dataset using Local Outlier Factor (LOF) [47] and Isolation Forest (IF) [48]. The experimental setup is identical to that of Figure 4 (see Section G.3 for details of our noise framework). In this experiment, we applied the outlier removal method before calculating P&R and D&C. From the result, P&R and D&C still do not provide a stable evaluation, while TopP&R shows the most consistent evaluation for the two types of noise without changing its trend.
Since TopP&R can discriminate topologically significant signals by estimating the confidence band (using KDE filtration function), there is no need to arbitrarily set cutoff thresholds for each dataset. On the other hand, in order to distinguish inliers from specific datasets using LOF and IF methods, a threshold specific to the dataset must be arbitrarily set each time, so there is a clear constraint that different results are expected each time for each parameter setting (which is set by the user). Through this, it is confirmed that our evaluation metric guarantees the most consistent scoring based on the topologically significant signals. Since the removal of outliers in the TopP&R is part of the support estimation process, it is not appropriate to replace our unified process with the LOF or IF. In detail, TopP&R removes outliers through a clear threshold called confidence band which is defined by KDE, and this process simultaneously defines KDE's super-level set as the estimated support. If this process is separated, the topological properties and interpretations of estimated support also disappear. Therefore, it is not practical to remove outliers by other methods when calculating TopP&R.
### Sequential and simultaneous mode dropping with real dataset
### Sensitiveness to noise intensity
The purpose of this experiment (Figure A5) is to closely observe the noise sensitivity of metrics in Figure 6. As the noise intensity is incrementally increased until all the metrics converge to 0, the most ideal outcome for the metrics is to exhibit a linear trend to capture these changes most effectively. From the results, it is able to observe that both TopP&R and P&R exhibit the most linear trend in their evaluation outcomes, while D&C demonstrates that least capability to reflect differences based on noise intensity.
Figure A4: Comparison of evaluation metrics under sequential and simultaneous mode dropping scenario with Baby ImageNet[15].
Figure A2: Behaviors of evaluation metrics on Non-IID perturbations. The dotted line in the graph shows the performance of metrics after the removal of outliers using the IF. We use “Clean” as a prefix to denote the evaluation after IF.
### Evaluating state-of-the-art generative models on the ImageNet
We conducted our evaluation using the ImageNet dataset, which covers a wide range of classes. We employed commonly accepted metrics in the community, including FID and KID, to rank the considered generative models effectively. The purpose of this experiment is not only to establish the applicability of our ranking metric for generative models, akin to existing single-score metrics, but also to emphasize its interpretability. In Table A13, we calculated the HD between P&R variants and the more reliable KID metric. This comparison aimed to assess the alignment of our metric with established single-score metrics. The results consistenly showed that TopP&R's evaluation results are the most consistent with single-score metrics across various embedding networks. Conversely, P&R exhibited an inconsistent evaluation trend compared to KID, which struggled to effectively differentiate between ReACGAN and BigGAN. Note that, Kang et al. [2019] previously demonstrated that BigGAN performs worse than ReACGAN. However, surprisingly, P&R consistently rated BigGAN as better than ReACGAN across all embedding networks. These findings imply that the P&R's unreliable support estimation under noisy conditions might hinder its ability to accurately distinguish between generative models.
### Verification of random projection effect in generative model ranking
We also applied the same random projection used by TopP&R to P&R and D&C in Section I.6. From the results of the Table A14, it is evident that the application of random projection to P&R leads to different rankings than its original evaluation tendencies. Similarly, the rankings of D&C with random projection still exhibit significant variations based on the embedding. Thus, we have demonstrated that random projection does not provide consistent evaluation tendencies across different embeddings, while also highlighting that this characteristic is unique to TopP&R. To quantify these differences among metrics, we computed the mean Hamming Distance (MHD) between the results in the embedding spaces of each metric. The calculated MHD scores were 1.33 for TopP&R, 2.67 for P&R, and 3.33 for D&C, respectively.
### Verifying the effect of random projection to the noisy data
We experiment to ascertain whether utilizing a random projection address the drawbacks of existing metrics that are susceptible to noise, and we also conduct tests to validate whether TopP&R possesses noise robustness without using random projection. Through Figure A6 (a), it is evident that P&R and D&C still retain vulnerability to noisy features, and it is apparent that random projection itself does not diminish the impact of noise. Furthermore, Figure A6 (b) reveals that TopP&R, even without utilizing a random projection, exhibits robustness to noise through approximating the data support based on statistically and topologically significant features.
Building upon the insight from the toy data experiment in Figure A6 that random projection does not confer noise robustness to P&R and D&C, we aim to further demonstrate this fact through a real data experiment. In the experiment depicted in Figure A7, we follow the same setup as the previous experiment in Figure 5, applying the same random projection to P&R and D&C only. The results of
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & Model & ADM & StyleGAN-XL & ReACGAN & BigGAN \\ \cline{2-6} & **TopP\&R** (\(\uparrow\)) & 0.9168 (1) & 0.8919 (2) & 0.8886 (3) & 0.7757 (4) \\ & D\&C w/ rand proj (\(\uparrow\)) & 1.0221 (2) & 1.0249 (1) & 0.9218 (3) & 0.7099 (4) \\ & P\&R w/ rand proj (\(\uparrow\)) & 0.7371 (1) & 0.7346 (2) & 0.6117 (4) & 0.6305 (3) \\ \cline{2-6} & **TopP\&R** (\(\uparrow\)) & 0.9778 (1) & 0.8896 (2) & 0.8376 (3) & 0.5599 (4) \\ & D\&C w/ rand proj (\(\uparrow\)) & 1.0086 (3) & 0.9187 (4) & 1.0895 (1) & 1.0803 (2) \\ & P\&R w/ rand proj (\(\uparrow\)) & 0.7995 (2) & 0.8056 (1) & 0.5917 (4) & 0.6304 (3) \\ \cline{2-6} & **TopP\&R** (\(\uparrow\)) & 0.8851 (2) & 0.9102 (1) & 0.6457 (3) & 0.4698 (4) \\ & D\&C w/ rand proj (\(\uparrow\)) & 1.0526 (3) & 0.9479 (4) & 1.1037 (2) & 1.1127 (1) \\ & P\&R w/ rand proj (\(\uparrow\)) & 0.6873 (1) & 0.6818 (2) & 0.4179 (3) & 0.4104 (4) \\ \hline \hline \end{tabular}
\end{table}
Table A14: Ranking results based on TopP&R, P&R, and D&C for ImageNet (128 \(\times\) 128) trained generative models. SwAV, VGG16, and InceptionV3 represent the embedders, and the numbers in parentheses indicate the ranks assigned by each metric to the evaluated models. Note that, the random projection is applied to TopP&R, P&R, and D&C.
this experiment affirm that random projection does not provide additional robustness against noise for P&R and D&C, while TopP&R continues to exhibit the most robust performance.
### Robustness of TopP&R with respect to random projection dimension
To investigate whether the dimension of random projection influences the evaluation trend of TopP&R, we conduct mode dropping experiment using the Baby ImageNet dataset, similar to the experiment in Section I.3. Results in (a) and (b) of Figure A8 respectively compare the performance of TopP&R using random projection in dimensions of 64 and 128 with other metrics. From the experimental results, we observe that TopP&R is the most sensitive to the distributional changes, akin to the tendencies observed in the previous toy mode dropping experiment. Furthermore, upon examining the differences across random projection dimensions, TopP&R shows consistent evaluation trends regardless of dimensions.
### Truncation trick
\(\psi\) is a parameter for the truncation trick and is first introduced in [4] and [1]. We followed the approach in [4] and [1]. GANs generate images using the noise input \(z\), which follows the standard normal distribution \(\mathcal{N}(0,I)\) or uniform distribution \(\mathcal{U}(-1,1)\). Suppose GAN inadvertently samples noise outside of distribution, then it is less likely to sample the image from the high density area of the image distribution \(p(z)\) defined in the latent space of GAN, which leads to generate an image with artifacts. The truncation trick takes this into account and uses the following truncated distribution. Let \(f\) be the mapping from the input to the latent space. Let \(w=f(z)\), and \(\bar{w}=\mathbb{E}[f(z)]\), where \(z\) is either from \(\mathcal{N}(0,I)\) or \(\mathcal{U}(-1,1)\). Then we use \(w^{\prime}=\bar{w}+\psi(w-\bar{w})\) as a truncated latent vector. If the value of \(\psi\) increases, then the degree of truncation decreases which makes images have greater diversity but possibly lower fidelity.
### Toy experiment of trade-off between fidelity and diversity
We have designed a new toy experiment to mimic the truncation trick. With data distributions of 10k samples, \(\mathcal{X}\sim\mathcal{N}(\mu=0,I)\in\mathbb{R}^{32}\) and \(\mathcal{Y}\sim\mathcal{N}(\mu=0.6,\sigma^{2})\in\mathbb{R}^{32}\), we measure the fidelity and diversity while incrementally increasing \(\sigma\) from 0.7 to 1.3. For smaller \(\sigma\) values, the evaluation metric should exhibit higher fidelity and lower diversity, while increasing \(\sigma\) should demonstrate decreasing fidelity and higher diversity. From the results in Figure A9, both TopP&R and P&R exhibit a trade-off between fidelity and diversity.
### Resolving fidelity and diversity
To test whether TopP&R responds appropriately to the change in the underlying distributions in real scenarios, we test the metric on the generated images of StyleGAN2 [2] using the truncation trick [1]. StyleGAN2, trained on FFHQ as shown by Han et al. [49], tends to generate samples mainly belonging to the majority of the real data distribution, struggling to produce rare samples effectively. In other words, while StyleGAN2 excels in generating high-fidelity images, it lacks diversity. Therefore, the evaluation trend in this experiment should reflect a steady high fidelity score while gradually increasing diversity. Han et al. [49] demonstrates that the generated image quality of StyleGAN2 that lies within the support region exhibits sufficiently realistic visual quality. Therefore, particularly in the case of TopP&R, which evaluates fidelity excluding noise, the fidelity value should remain close to 1. As shown in Figure A10, every time the distribution is transformed by \(\psi\), TopP&R responds well and shows consistent behavior across different embedders with bounded scores in \([0,1]\), which are important virtues as an evaluation metric. On the other hand, Density gives unbounded scores (fidelity > 1) and shows inconsistent trend depending on the embedder. Because Density is not capped in value, it is difficult to interpret the score and know exactly which value denotes the best performance (_e.g._, in our case, the best performance is when TopP&R \(=1\)). | ## Review
### Summary
This paper introduces a novel evaluation metric called Topological Precision and Recall (TopP&R) for assessing generative models, addressing the unreliability of existing metrics in the presence of outliers and non-IID perturbations. The proposed metric combines concepts from Topological Data Analysis (TDA) and statistical inference to systematically estimate supports and provides theoretical guarantees for robustness. The authors demonstrate the effectiveness of TopP&R through extensive theoretical analysis and experimental validation on simulated and real data, highlighting its ability to capture changes in distributions amidst noise. Overall, the paper significantly contributes to the field by providing a robust evaluation framework for generative models.
### Strengths
- The paper presents a novel evaluation metric, TopP&R, that addresses the limitations of existing metrics by focusing on robust support estimation.
- TopP&R is resilient to outliers and non-IID perturbations, providing reliable evaluation results.
- The authors provide both theoretical and experimental evidence to validate the effectiveness of the proposed metric.
- The paper is well-written and clearly explains the motivation and approach.
- The implementation of TopP&R is made publicly available, promoting reproducibility.
### Weaknesses
- The paper may be challenging for readers unfamiliar with persistent homology and TDA; it lacks intuitive explanations and visual aids.
- There is limited experimentation on diverse datasets and models; broader evaluations would strengthen the claims.
- Details on the computational methodology, such as the bootstrap bandwidth calculation, are insufficient.
- The paper does not adequately discuss the dimensionality of the problem or justify the choices made regarding random projections.
- Some methodological statements lack clarity, such as the rationale behind the ideal metric returning zero.
### Questions
- How does TopP&R handle the trade-off between precision and recall in evaluating generative models?
- Can the proposed metric demonstrate effectiveness on large-scale datasets like ImageNet?
- What additional experiments could validate the effectiveness of TopP&R on real-world datasets?
- How does the choice of threshold for retaining topologically significant features affect TopP&R's performance?
- Could you clarify the implications of the bootstrap bandwidth choice in the context of the proposed metric?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is solid and the theoretical foundations are reasonably sound, although certain aspects require further clarification and justification.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is generally well-structured and written, yet it could benefit from clearer explanations and visual aids to enhance accessibility.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper introduces a significant advancement in the evaluation of generative models, though more extensive experimentation and discussion of limitations are needed for full impact.
### Rating
**Score:** 5
**Description:** 5 = Borderline accept; the paper is technically solid and presents valuable contributions, but it requires improvements in clarity and evaluation breadth.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper introduces a novel and promising metric for evaluating generative models, addressing critical issues with existing metrics. While there are areas for improvement, the strengths and contributions of the work outweigh the weaknesses, and it has the potential to advance the field significantly.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Mode Connectivity in Auction Design
Christoph Herrich
Department of Mathematics
London School of Economics and Political Science, UK
[email protected]
Mvain Tao
ITCS, Key Laboratory of Interdisciplinary Research of Computation and Economics
Shanghai University of Finance and Economics, China
[email protected]
Moved to Universite Libre de Bruxelles, Belgium, and Goethe-Universitat Frankfurt, Germany, after submission of this article.
Laszlo A. Vegh
Department of Mathematics
London School of Economics and Political Science, UK
[email protected]
###### Abstract
Optimal auction design is a fundamental problem in algorithmic game theory. This problem is notoriously difficult already in very simple settings. Recent work in differentiable economics showed that neural networks can efficiently learn known optimal auction mechanisms and discover interesting new ones. In an attempt to theoretically justify their empirical success, we focus on one of the first such networks, _RochetNet_, and a generalized version for _affine maximizer auctions_. We prove that they satisfy _mode connectivity_, i.e., locally optimal solutions are connected by a simple, piecewise linear path such that every solution on the path is almost as good as one of the two local optima. Mode connectivity has been recently investigated as an intriguing empirical and theoretically justifiable property of neural networks used for prediction problems. Our results give the first such analysis in the context of differentiable economics, where neural networks are used directly for solving non-convex optimization problems.
## 1 Introduction
Auction design is a core problem in mechanism design, with immense applications in electronic commerce (such as sponsored search auctions) as well as in the public sector (such as spectrum auctions). In a revenue maximizing auction, the auctioneer needs to design a mechanism to allocate resources to buyers, and set prices in order to maximize the expected revenue. The buyers' preferences are private and they may behave strategically by misreporting them. For this reason, it is often desirable to devise _dominant strategy incentive compatible (DSIC)_ and _individually rational (IR)_ mechanisms. By definition, in a DSIC mechanism, it is a dominant strategy for the buyers to report the true valuations; in an IR mechanism, each participating truthful buyer receives a nonnegative payoff.
We focus on DSIC and IR mechanisms that maximize the expected revenue, assuming that the buyers' preferences are drawn from a distribution known to the auctioneer. A classical result ofMyerson (1981) provides the optimal mechanism for the case of a single item and arbitrary number of buyers. Finding the optimal mechanisms for more general settings is a tantalizingly difficult problem. We refer the reader to the surveys by Rochet and Stole (2003); Manelli and Vincent (2007) and Daskalakis (2015) for partial results and references. In particular, no analytic solution is known even for two items and two buyers. Selling multiple items to a single buyer is computationally intractable (Daskalakis et al., 2014). Already for two items and a single buyer, the description of the optimal mechanism may be uncountable (Daskalakis et al., 2013). Recent work gives a number of important partial characterizations, e.g. Daskalakis et al. (2015); Giannakopoulos and Koutsoupias (2014), as well as results for weaker notions of Bayesian incentive compatibility, e.g. Cai et al. (2012, 2013, 2012); Balgat et al. (2013).
Conitzer and Sandholm (2002, 2004) proposed the approach of _automated mechanism design_ to use optimization and computational methods to obtain (near) optimal mechanisms for specific problems; see also Sandholm and Likhodedov (2015). An active recent area of research uses machine learning tools. In particular, Dutting et al. (2019) designed and trained neural networks to automatically find optimal auctions. They studied two network architectures, and showed that several theoretically optimal mechanisms can be recovered using this approach, as well as interesting new mechanisms can be obtained. The first network they studied is _RochetNet_. This is a simple two-layer neural network applicable to the single buyer case, leveraging Rochet's (1987) characterization of the optimal mechanism. The second network, _RegretNet_ does not require such a characterization and is applicable for multiple buyers; however, it only provides approximate incentive compatibility.
Dutting et al. (2019) coined the term _'differentiable economics'_ for this approach, and there has been significant further work in this direction. These include designing auctions for budget constrained buyers (Feng et al., 2018); multi-facility location Golowich et al. (2018); balancing fairness and revenue objectives (Kuo et al., 2020); incorporating non-linear utility functions and other networks trained from interaction data (Shen et al., 2019)2; designing revenue-maximizing auctions with differentiable matchings (Curry et al., 2022); contextual auction design (Duan et al., 2022); designing taxation policies (Zheng et al., 2022), and more.
Footnote 2: MenuNet, developed by (Shen et al., 2019), also encodes menu items as RochetNet. However, unlike RochetNet’s approach of repeatedly sampling valuations from the underlying distribution, MenuNet discretizes the buyer’s valuation space into discrete values.
The purpose of this work is to supply theoretical evidence behind the success of neural networks in differentiable economics. The revenue is a highly non-convex function of the parameters in the neural network. Curiously, gradient approaches seem to recover globally optimal auctions despite this non-convexity. Similar phenomena have been studied more generally in the context of deep networks, and theoretical explanations have been proposed, in particular, overparametrization (Allen-Zhu et al., 2019; Du et al., 2019).
Mode connectivityRecent work has focused on a striking property of the landscape of loss functions of deep neural networks: local optimal solutions (modes) found by gradient approaches are connected by simple paths in the parameter space. We provide an informal definition of _mode connectivity_ here.
**Definition 1** (\(\varepsilon\)-mode connected (informal)).: _We say that two solutions are \(\varepsilon\)-mode connected, if they are connected by a continuous path of solutions, such that the loss function does not worsen by more than \(\varepsilon\) compared to one of the end points on the entire path._
This phenomenon was identified by Garipov et al. (2018) and by Draxler et al. (2018). Mode connectivity can help to explain the empirical performance of stochastic gradient descent (sgd) (or ascent, in case of revenue maximization). To some extent, mode connectivity prevents a poor local minimal valley region on the function value, from which the sgd method cannot escape easily. Suppose such a bad local minimum exists. Then mode connectivity implies that there exists a path from this bad local minimum to a global minimum on which the loss function does not significantly increase. Therefore, the intuition is that from every bad local minimum, a (stochastic) gradient method would eventually be able to find a way to escape. However, we would like to emphasize that mode connectivity does not provide a formal proof of the success of sgd. It only provides a useful intuition of why sgd does not completely trapped in local optima.
Kuditipudi et al. (2019) gave strong theoretical arguments for mode connectivity. They introduce the notion of \(\varepsilon\)_-dropout stability:_ solutions to a neural network such that in each layer, one can remove at least half the neurons and rescale the remaining units such that the loss function increases by at most \(\varepsilon\). Solutions that are \(\varepsilon\)-dropout stable are then shown to be \(\varepsilon\)-mode connected. Moreover, they show that _noise stability_ (see e.g., Arora et al. (2018)) implies dropout stability, and hence, mode connectivity. Nguyen (2019) showed mode connectivity when there is a hidden layer larger than the training dataset. Shevchenko and Mondelli (2020) shows that stochastic gradient descent solutions to sufficiently overparametrized neural networks are dropout stable even if we only keep a small, randomly sampled set of neurons from each layer.
### Our contributions
#### RochetNet
In this paper, we first establish mode connectivity properties of _RochetNet_, the architecture in Dutting et al. (2019) for multiple items and a single buyer. These networks have a single hidden layer, corresponding to the menu. Within this hidden layer, each neuron directly corresponds to an _option_ in the menu: which contains an allocation and a price offered to the buyer (see Figure 1). The buyer is assigned the single option, including the allocation and the price, maximizing the buyer's utility. Such an option is called _active_ for the buyer. The loss function of _RochetNet_ is the revenue, the expected price paid by the buyer. Despite its simplicity, the experiments on _RochetNet_ in Dutting et al. (2019) gave impressive empirical results in different scenarios. For example, in the experiments with up to six items and uniform value distributions, _RocheNet_ achieves almost the same revenue (\(99.9\%\)) as the Straight-Jacket Auctions in Giannakopoulos and Koutsoupias (2018), which are known to be optimal in this case. This success is not limited to a single example, as _RochetNet_ also consistently performs well in other scenarios, including when infinite menu size is necessary. Furthermore, Dutting et al. (2019) demonstrated the usefulness of _RochetNet_ in discovering optimal auctions in situations that were previously unexplored from a theoretical perspective.
First, in Theorem 9, we show that for linear utilities, \(\varepsilon\)-mode connectivity holds between two solutions that are _\(\varepsilon\)-reducible_: out of the \(K+1\) menu options (neurons), there exists a subset of at most \(\sqrt{K+1}\) containing an active option for the buyer with probability at least \(1-\varepsilon\). Assuming that the valuations are normalized such that the maximum valuation of any buyer is at most one, it follows that if we remove all other options from the menu, at most \(\varepsilon\) of the expected revenue is lost. The assumption of being \(\varepsilon\)-reducible is stronger than \(\varepsilon\)-dropout stability that only drops a constant fraction of the neurons. At the same time, experimental results in Dutting et al. (2019) show evidence of this property being satisfied in practice. They experimented with different sized neural networks in a setting when the optimal auction requires infinite menu size. Even with \(10,000\) neurons available, only \(59\) options were active, i.e., used at least once when tested over a large sample size. We note that this property also highlights an advantage of _RochetNet_ over _RegretNet_ and other similar architectures: instead of a black-box neural network, it returns a compact, easy to understand representation of a mechanism.
Our second main result (Theorem 10) shows that for \(n\) items and linear utilities, if the number of menu options \(K\) is sufficiently large, namely, \((2/\varepsilon)^{4n}\), then \(\varepsilon\)-mode connectivity holds between _any_ two solutions for _any_ underlying distribution. The connectivity property holds pointwise: for any particular valuation profile, the revenue may decrease by at most \(\varepsilon\) along the path. A key tool in this \(\varepsilon\)-mode connectivity result is a discretization technique from Dughmi et al. (2014). We note that such a mode connectivity result can be expected to need a large menu size. In Appendix C, we present an example with two disconnected local maxima for \(K=1\).
Affine Maximizer AuctionsWe also extend our results and techniques to neural networks for affine maximizer auctions (AMA) studied in Curry et al. (2022). This is a generalization of _RochetNet_ for multi-buyer scenarios. It can also be seen as a weighted variant of the Vickrey-Clarke-Groves (VCG) mechanism (Vickrey, 1961; Clarke, 1971; Groves, 1973). AMA offers various allocation options. For a given valuation profile, the auctioneer chooses the allocation with the highest weighted sum of valuations, and computes individual prices for the buyers; the details are described in Section 2.2. AMA is DSIC and IR, however, is not rich enough to always represent the optimal auction.
For AMA networks, we show similar results (Theorem 13 and 14) as for _RochetNet_. We first prove \(\varepsilon\)-mode connectivity holds between two solutions that are _\(\varepsilon\)-reducible_ (see Definition 11). Curry et al. (2022) provides evidence of this property being satisfied in practice, observing (in Sec. 7.1) _"Moreover, we found that starting out with a large number of parameters improves performance, even though by the end of training only a tiny number of these parameters were actually used."_. Secondly, we also show that if the number of menu options \(K\) is sufficiently large, namely, \((16m^{3}/\varepsilon^{2})^{2nm}\)then \(\varepsilon\)-mode connectivity holds pointwise between _any_ two solutions. That is, it is valid for any underlying distribution of valuations, possibly correlated between different buyers.
Relation to previous results on mode connectivityOur results do not seem to be deducible from previous mode connectivity results, as we outline as follows. Previous literature on mode connectivity investigated neural networks used for prediction. The results in Kuditipudi et al. (2019), Nguyen (2019), Shevchenko and Mondelli (2020) and other papers crucially rely on the properties that the networks minimize a convex loss function between the predicted and actual values, and require linear transformations in the final layer. _RochetNet_ and AMA networks are fundamentally different. The training data does not come as labeled pairs and these network architectures are built directly for solving an optimization problem. For an input valuation profile, the loss function is the negative revenue of the auctioneer. In _RochetNet_, this is obtained as the negative price of the utility-maximizing bundle; for AMA it requires an even more intricate calculation. The objective is to find parameters of the neural network such that the expected revenue is as large as possible. The menu options define a piecewise linear surface of utilities, and the revenue in _RochetNet_ can be interpreted as the expected bias of the piece corresponding to a randomly chosen input.
Hence, the landscape of the loss function is fundamentally different from those analyzed in the above mentioned works. The weight interpolation argument that shows mode-connectivity from dropout stability is not applicable in this context. The main reason is that the loss function is not a simple function of the output of the network, but is defined by choosing the price of the argmax option. We thus need a more careful understanding of the piecewise linear surfaces corresponding to the menus.
Significance for PractitionersWe see the main contribution of our paper in _explaining_ the empirical success and providing theoretical foundations for already existent practical methods, and not in inventing new methods. Nevertheless, two insights a practitioner could use are as follows: (i) It is worth understanding the structure of the auction in question. If one can, e.g., understand whether \(\varepsilon\)-reducibility holds for a particular auction, this might indicate whether RochetNet or AMA are good methods to apply to this particular case. (ii) Size helps: If one encounters bad local optima, increasing the menu size and rerunning RochetNet or AMA might be a potential fix and will eventually lead to a network satisfying mode connectivity.
## 2 Auction Settings
We consider the case with \(m\) buyers and one seller with \(n\) divisible items each in unit supply. Each buyer has an additive valuation function \(v_{i}(S):=\sum_{j\in S}v_{ij}\), where \(v_{ij}\in V\) represents the valuation of the buyer \(i\) on item \(j\) and \(V\) is the set of possible valuations. Throughout the paper, we normalize the range to the unit simplex: we assume \(V=[0,1]\), and \(\|v_{i}\|_{1}=\sum_{j}v_{ij}\leq 1\) for every buyer \(i\). With slight abuse of notation, we let \(v=(v_{11},v_{12},\cdots,v_{ij},\cdots,v_{mn})^{\top}\) and \(v_{i}=(v_{i1},v_{i2},\cdots,v_{in})^{\top}\). The buyers' valuation profile \(v\) is drawn from a distribution \(F\in\mathcal{P}(V^{m\times n})\). Throughout, we assume that the buyers have _quasi-linear utilities_: if a buyer with valuation \(v_{i}\) receives an allocation \(x\in[0,1]^{n}\) at price \(p\), their utility is \(v_{i}^{\top}x-p\).
The seller has access to samples from the distribution \(F\), and wants to sell these items to the buyers through a DSIC3 and IR auction and maximize the expected revenue. In the auction mechanism, the \(i\)-th bidder reports a _bid_\(b_{i}\in[0,1]^{n}\). The entire bid vector \(b\in[0,1]^{m\times n}\) will be denoted as \(b=(b_{1},\ldots,b_{m})=(b_{i},b_{-i})\), where \(b_{-i}\) represents all the bids other than buyer \(i\). In a DSIC mechanism, it is a dominant strategy for the agents to report \(b_{i}=v_{i}\), i.e., reveal their true preferences.
Footnote 3: There exists a weaker notion of incentive compatibility: _Bayesian incentive compatible_ (BIC). In a BIC mechanism, it is a dominant strategy for the buyers to report the true valuations if all other buyers also report truthfully. This paper focuses on DISC mechanisms, as DSIC is considered to be more robust than BIC, which assumes a common knowledge of buyers’ distributions on valuations.
**Definition 2** (DSIC and IR auction).: _An auction mechanism requires the buyers to submit bids \(b_{i}\in[0,1]^{n}\), and let \(b=(b_{1},\ldots,b_{m})\). The output is a set of allocations \(x(b)=(x_{1}(b),\ldots,x_{m}(b))\), \(x_{i}(b)\in[0,1]^{n}\), and prices \(p(b)=(p_{1}(b),\ldots,p_{m}(b))\in\mathbb{R}^{m}\). Since there is unit supply of each item, we require \(\sum_{i}x_{ij}(b)\leq 1\), where \(x_{ij}(b)\) is the allocation of buyer \(i\) of item \(j\)._1. _An auction is_ dominant strategy incentive compatible (DSIC) _if_ \(v_{i}^{\top}x_{i}(v_{i},b_{-i})-p_{i}(v_{i},b_{-i})\geq v^{\top}x_{i}(b_{i},b_{-i} )-p_{i}(b_{i},b_{-i})\) _for any buyer_ \(i\) _and any bid_ \(b=(b_{i},b_{-i})\)_._
2. _An auction is_ individually rational (IR) _if_ \(v_{i}^{\top}x_{i}(v_{i},b_{-i})-p_{i}(v_{i},b_{-i})\geq 0\)_._
The revenue of a DSIC and IR auction is
\[\texttt{Rev}=\mathbb{E}_{v\sim F}\left[\sum_{i}p_{i}(v)\right].\]
### Single Buyer Auctions: RochetNet
Dutting et al. (2019) proposed RochetNet as a DISC and IR auction for the case of a single buyer. We omit the subscript \(i\) for buyers in this case. A (possibly infinite sized) _menu_\(M\) comprises a set of _options_ offered to the buyer: \(M=\{(x^{(k)},p^{(k)})\}_{k\in\mathcal{K}}\). In each option \((x^{(k)},p^{(k)})\), \(x^{(k)}\in[0,1]^{n}\) represents the amount of items, and \(p^{(k)}\in\mathbb{R}_{+}\) represents the price. We assume that \(0\in\mathcal{K}\), and \((x^{(0)},p^{(0)})=(\mathbf{0},0)\) to guarantee IR. We call this the _default option_, whereas all other options are called _regular options_. We will use \(K\) to denote the number of regular options; thus, \(|\mathcal{K}|=K+1\).
A buyer submits a bid \(b\in[0,1]^{n}\) representing their valuation, and is assigned to option \(k(b)\in\mathcal{K}\) that maximizes the utility4
Footnote 4: We assume that ties are broken in favor of higher prices, but it is not hard to see that our results transfer to other tie-breaking rules, too. See also the discussion in Section B.2 of Babaioff et al. (2022).
\[k(b)\in\arg\max_{k\in\mathcal{K}}b^{\top}x^{(k)}-p^{(k)}\,.\]
This is called the _active option_ for the buyer. Note that option 0 guarantees that the utility is nonnegative, implying the IR property. It is also easy to see that such an auction is DSIC. Therefore, one can assume that \(b=v\), i.e., the buyer submits their true valuation; or equivalently, the buyer is allowed to directly choose among the menu options one that maximizes their utility. Moreover, it follows from Rochet (1987) that every DSIC and IR auction for a single buyer can be implemented with a (possibly infinite size) menu using an appropriate tie-breaking rule.
Given a menu \(M\), the revenue is defined as
\[\texttt{Rev}(M)=\mathbb{E}_{v\sim F}\left[p^{(k(v))}\right]\,.\]
RochetNetRochetNet (see Figure 1) is a neural network with three layers: an input layer (\(n\) neurons), a middle layer (\(K\) neurons), and an output layer (\(1\) neuron):
1. the input layer takes an \(n\)-dimensional bid \(b\in V^{n}\), and sends this information to the middle layer;
2. the middle layer has \(K\) neurons. Each neuron represents a regular option in the menu \(M\), which has parameters \(x^{(k)}\in[0,1]^{n}\) and \(p^{(k)}\in\mathbb{R}_{+}\), where \(x^{(k)}\in[0,1]^{n}\) represents the allocation of option \(k\) and \(p^{(k)}\) represents the price of option \(k\). Neuron \(k\) maps from \(b\in V^{n}\) to \(b^{\top}x^{(k)}-p^{(k)}\), i.e., the utility of the buyer when choosing option \(k\);
3. the output layer receives all utilities from different options and maximizes over these options and \(0\): \(\max\{\max_{k}\{(x^{(k)})^{\top}b-p^{(k)}\},0\}\).
We will use \(\texttt{Rev}(M)\) to denote the revenue of the auction with menu options \(\mathcal{K}=\{0,1,2,\ldots,K\}\), where 0 represents the default option \((\mathbf{0},0)\).
The training objective for the RochetNet is to maximize the revenue \(\texttt{Rev}(M)\), which is done by stochastic gradient ascent. Note, however, that the revenue is the price of an _argmax_ option, which makes it a non-continuous function of the valuations. For this reason, Dutting et al. (2019) use a _softmax_-approximation of the _argmax_ as their loss function instead. However, _argmax_ is used for testing. In Appendix B, we bound the difference between the revenues computed with these two different activation functions, assuming that the probability density function of the distribution \(F\) admits a finite upper bound. Lemma 19 shows that the difference between the revenues for _softmax_ and _argmax_ is roughly inverse proportional to the parameter \(Y\) of the _softmax_ function. This allows the practitioner to interpolate between smoothness of the loss function and provable quality of the softmax approximation by tuning the parameter \(Y\).
### Affine Maximizer Auctions
_Affine Maximizer Auctions (AMA)_ also provide a menu \(M\) with a set of options \(\mathcal{K}\). Each option is of the form \((x^{(k)},\beta^{(k)})\in[0,1]^{n\times m}\times\mathbb{R}\), where \(x^{(k)}_{ij}\in[0,1]\) represents the allocation of item \(i\) to buyer \(j\), with the restriction that \(\sum_{i}x^{(k)}_{ij}\leq 1\) for each item \(j\), and \(\beta^{(k)}\) represents a '_boost_'. We again assume \(0\in\mathcal{K}\), and \((x^{(0)},\beta^{(0)})=(\mathbf{0},0)\), and call this the _default_ option; all other options are called the _regular options_.
Given the bids \(b_{i}\in[0,1]^{n}\) of the agents, the auctioneer computes a weighted welfare, using weights \(w_{i}\in\mathbb{R}_{+}\) for the valuations of each agent, and adds the boost \(\beta^{(k)}\). Then, the allocation maximizing the weighted boosted welfare is chosen, i.e., the option with
\[k(b)\in\arg\max_{k\in\mathcal{K}}\sum_{i}w_{i}b_{i}^{\top}x^{(k)}_{i}+\beta^{( k)}.\]
This will also be referred to as the _active option_. The prices collected from the buyers are computed according to the Vickrey-Clarke-Groves (VCG) scheme. Namely,
\[p_{i}(b)= \frac{1}{w_{i}}\left(\sum_{\ell\neq i}w_{\ell}b_{\ell}^{\top}x^{(k(b_{-i }))}_{\ell}+\beta^{(k(b_{-i}))}\right)-\frac{1}{w_{i}}\left(\sum_{\ell\neq i}w_ {\ell}b_{\ell}^{\top}x^{(k(b))}_{\ell}+\beta^{(k(b))}\right). \tag{1}\]
Here, \(k(b_{-i})\) represents the option maximizing the weighted boosted welfare when buyer \(i\) is omitted, i.e., \(k(b_{-i})\in\arg\max_{k\in\mathcal{K}}\sum_{\ell\neq i}w_{\ell}b_{\ell}^{\top}x ^{(k)}_{\ell}+\beta^{(k)}\). It is known that AMA is DSIC and IR. Hence, we can assume that the submitted bids \(b_{i}\) represent the true valuations \(v_{i}\). We also assume the ties are broken in favor of maximizing the total payment. In case of unit weights, this is equivalent to choosing the smallest \(\beta^{(k)}\) values, see (2) in Section 4. Given the menu \(M\), the revenue of the AMA is
\[\mathtt{Rev}(M)=\mathbb{E}_{v\sim F}\left[\sum_{i}p_{i}(v)\right]\,.\]
In this paper, we focus on the case when \(w_{i}=1\) for all buyers. This is also used in the experiments in Curry et al. (2022). For this case, AMA can be implemented by a three layer neural network similar to _RochetNet_, with \(m\times n\) input neurons. For the more general case when the weights \(w_{i}\) can also be adjusted, one can include an additional layer that combines the buyers' allocations.
Note that for a single buyer and \(w_{1}=1\), AMA corresponds to _RochetNet_, with price \(p^{(k)}=-\beta^{(k)}\) for each menu option. Indeed, in the formula defining the price \(p_{i}(b)\), the first term is 0, as well as the sum in the second term.
Similarly to _RochetNet_, the loss function, which is maximized via stochastic gradient ascent, is a _softmax_-approximation of the revenue \(\mathtt{Rev}(M)\), in order to avoid the discontinuities introduced by the _argmax_. We bound the difference in the revenue in Appendix D.3, concluding that it decreases with large parameter \(Y\) as in the RochetNet case.
Figure 1: RochetNet: this architecture maps the bid \(b\) to the utility of the buyer.
### Mode Connectivity
One can view the revenue as a function of the menus, i.e., the parameters in the mechanism: _(i)_ in _RochetNet_, \(\{(x^{(k)},p^{(k)})\}_{k\in\mathcal{K}}\); _(ii)_ in AMA, \(\{(x^{(k)},\beta^{(k)})\}_{k\in\mathcal{K}}\). We use \(\mathcal{M}\) to denote the set of all possible menus.
**Definition 3**.: _(Mode connectivity) Two menus \(M_{1},M_{2}\in\mathcal{M}\) are \(\varepsilon\)-mode-connected if there is a continuous curve \(\pi:[0,1]\rightarrow\mathcal{M}\) such that_ (i)_\(\pi(0)=M_{1}\);_ (ii)_\(\pi(1)=M_{2}\); and_ (iii) _for any \(t\in[0,1]\), \(\texttt{Rev}(\pi(t))\geq\min\{\texttt{Rev}(M_{1}),\texttt{Rev}(M_{2})\}-\varepsilon\)._
## 3 Mode Connectivity for the RochetNet
In this section we present and prove our main results for the RochetNet. For some statements, we only include proof sketches. The detailed proofs can be found in Appendix A in the supplementary material. The following definition plays an analogous role to \(\varepsilon\)-dropout stability in Kuditipudi et al. (2019).
**Definition 4**.: _A menu \(M\) with \(|\mathcal{K}|=K+1\) options is called \(\varepsilon\)-reducible if there is a subset \(\mathcal{K}^{\prime}\subseteq\mathcal{K}\) with \(0\in\mathcal{K}^{\prime}\), \(|\mathcal{K}^{\prime}|\leq\sqrt{K+1}\) such that, with probability at least \(1-\varepsilon\) over the distribution of the valuation of the buyer, the active option assigned to the buyer is contained in \(\mathcal{K}^{\prime}\)._
As noted in the Introduction, such a property can be observed in the experimental results in Dutting et al. (2019). The motivation behind this definition is that if a menu satisfies this property, then all but \(\sqrt{K+1}\) options are more or less redundant. In fact, if a menu is \(\varepsilon\)-reducible, then dropping all but the at most \(\sqrt{K+1}\) many options in \(\mathcal{K}^{\prime}\) results in a menu \(M^{\prime}\) with \(\texttt{Rev}(M^{\prime})\geq\texttt{Rev}(M)-\varepsilon\) because the price of any selected option is bounded by \(\|v\|_{1}\leq 1\).
As a first step towards showing the mode connectivity results, we show that \(0\)-reducibility implies \(0\)-mode-connectivity. We will then use this to derive our two main results, namely that two \(\varepsilon\)-reducible menus are always \(\varepsilon\)-mode-connected and that two large menus are always \(\varepsilon\)-mode-connected.
**Proposition 5**.: _If two menus \(M_{1}\) and \(M_{2}\) for the RochetNet are \(0\)-reducible, then they are \(0\)-mode-connected. Moreover, the curve transforming \(M_{1}\) into \(M_{2}\) is piecewise linear with only three pieces._
To prove Proposition 5, we introduce two intermediate menus \(\widehat{M_{1}}\) and \(\widehat{M_{2}}\), and show that every menu in the piecewise linear interpolation from \(M_{1}\) via \(\widehat{M_{1}}\) and \(\widehat{M_{2}}\) to \(M_{2}\) yields a revenue of at least \(\min\{\texttt{Rev}(M_{1}),\texttt{Rev}(M_{2})\}\). Using that menu \(M_{1}\) has only \(\sqrt{K+1}\) non-redundant options, menu \(\widehat{M_{1}}\) will be defined by repeating each of the \(\sqrt{K+1}\) options \(\sqrt{K+1}\) times. Menu \(\widehat{M_{2}}\) will be derived from \(M_{2}\) similarly. A technical lemma makes sure that this copying can be done in such a way that each pair of a non-redundant option of \(M_{1}\) and a non-redundant option of \(M_{2}\) occurs exactly for one index in \(\widehat{M_{1}}\) and \(\widehat{M_{2}}\).
To make this more formal, we first assume without loss of generality that \(K+1\) is a square, such that \(\sqrt{K+1}\) is an integer. It is straightforward to verify that the theorem is true for non-squares \(K+1\), too. Suppose the options in \(M_{1}\) and \(M_{2}\) are indexed with \(k\in\mathcal{K}=\{0,1,\ldots,K\}\). Since \(M_{1}\) is \(0\)-reducible, there is a subset \(\mathcal{K}_{1}\subseteq\mathcal{K}\) with \(0\in\mathcal{K}_{1}\), \(|\mathcal{K}_{1}|=\sqrt{K+1}\) such that an option with index in \(\mathcal{K}_{1}\) is selected with probability \(1\) over the distribution of the possible valuations. Similarly, such a set \(\mathcal{K}_{2}\) exists for \(M_{2}\). To define the curve that provides mode connectivity, we need the following technical lemma, which is proven in Appendix A.
**Lemma 6**.: _There exists a bijection \(\varphi\colon\mathcal{K}\rightarrow\mathcal{K}_{1}\times\mathcal{K}_{2}\) such that for all \(k\in\mathcal{K}_{1}\) we have that \(\varphi(k)\in\{k\}\times\mathcal{K}_{2}\), and for all \(k\in\mathcal{K}_{2}\) we have that \(\varphi(k)\in\mathcal{K}_{1}\times\{k\}\)._
With this lemma, we can define \(\widehat{M_{1}}\) and \(\widehat{M_{2}}\). Let \(\varphi\) the bijection from Lemma 6 and suppose \(M_{1}=\{(x^{(k)},p^{(k)})\}_{k\in\mathcal{K}}\). We then define \(\widehat{M_{1}}=\{(x^{(\varphi_{1}(k))},p^{(\varphi_{1}(k))})\}_{k\in\mathcal{ K}}\), where \(\varphi_{1}(k)\) is the first component of \(\varphi(k)\). Similarly, \(\widehat{M_{2}}\) is derived from \(M_{2}\) by using the second component \(\varphi_{2}(k)\) of \(\varphi(k)\) instead of \(\varphi_{1}(k)\). It remains to show that all menus on the three straight line segments from \(M_{1}\) via \(\widehat{M_{1}}\) and \(\widehat{M_{2}}\) to \(M_{2}\) yield a revenue of at least \(\min\{\texttt{Rev}(M_{1}),\texttt{Rev}(M_{2})\}\), which is established by the following two propositions; their proofs can be found in Appendix A.
**Proposition 7**.: _Let \(M=\lambda M_{1}+(1-\lambda)\widetilde{M}_{1}\) be a convex combination of the menus \(M_{1}\) and \(\widetilde{M}_{1}\). Then \(\texttt{Rev}(M)\geq\texttt{Rev}(M_{1})\). Similarly, every convex combination of the menus \(M_{2}\) and \(\widetilde{M}_{2}\) has revenue at least \(\texttt{Rev}(M_{2})\)._
The idea to prove Proposition 7 is that, on the whole line segment from \(M_{1}\) to \(\widehat{M_{1}}\), the only active options are those in \(\mathcal{K}^{\prime}\), implying that the revenue does not decrease.
**Proposition 8**.: _Let \(M=\lambda\widehat{M}_{1}+(1-\lambda)\widetilde{M}_{2}\) be a convex combination of the menus \(\widehat{M}_{1}\) and \(\widehat{M}_{2}\). Then, \(\texttt{Rev}(M)\geq\lambda\texttt{Rev}(\widehat{M}_{1})+(1-\lambda)\texttt{ Rev}(\widehat{M}_{2})\)._
The idea to prove Proposition 8 is that, due to the special structure provided by Lemma 6, a linear interpolation between the menus also provides a linear interpolation between the revenues. Note that without the construction of Lemma 6, such a linear relation would be false; such an example is shown in Appendix C.
Proposition 5 directly follows from Proposition 7 and Proposition 8. Based on Proposition 5, we can show our two main theorems for the RochetNet. The first result follows relatively easily from Proposition 5.
**Theorem 9**.: _If two menus \(M_{1}\) and \(M_{2}\) for the RochetNet are \(\varepsilon\)-reducible, then they are \(\varepsilon\)-mode-connected. Moreover, the curve transforming \(M_{1}\) into \(M_{2}\) is piecewise linear with only five pieces._
Proof.: We prove this result by showing that every \(\varepsilon\)-reducible menu \(M\) can be linearly transformed into a \(0\)-reducible menu \(\widetilde{M}\) such that each convex combination of \(M\) and \(\widetilde{M}\) achieves a revenue of at least \(\texttt{Rev}(M)-\varepsilon\). This transformation converting \(M_{1}\) and \(M_{2}\) to \(\widetilde{M}_{1}\) and \(\widetilde{M}_{2}\), respectively, yields the first and the fifth of the linear pieces transforming \(M_{1}\) to \(M_{2}\). Together with Proposition 5 applied to \(\widetilde{M}_{1}\) and \(\widetilde{M}_{2}\) serving as the second to fourth linear piece; the theorem then follows.
To this end, let \(M\) be an \(\varepsilon\)-reducible menu with options indexed by \(k\in\mathcal{K}\). By definition, there is a subset \(\mathcal{K}^{\prime}\subseteq\mathcal{K}\) of at most \(\sqrt{K+1}\) many options such that, with probability at least \(1-\varepsilon\), the assigned active option is contained in \(\mathcal{K}^{\prime}\). Let \(\widetilde{M}\) consist of the same allocations as \(M\), but with modified prices. For an option \(k\in\mathcal{K}^{\prime}\), the price \(\tilde{p}^{(k)}=p^{(k)}\) in \(\widetilde{M}\) is the same as in \(M\). However, for an option \(k\in\mathcal{K}\setminus\mathcal{K}^{\prime}\), we set the price \(\tilde{p}^{(k)}>1\) in \(\widetilde{M}\) to be larger than the largest possible valuation of any option \(\|v\|_{1}\leq 1\). It follows that such an option will never be selected and \(\widetilde{M}\) is \(0\)-reducible.
To complete the proof, let us look at the reward of a convex combination \(M^{\prime}=\lambda M+(1-\lambda)\widetilde{M}\). If for a particular valuation \(v\) the selected option in \(M\) was in \(\mathcal{K}^{\prime}\), then the same option will be selected in \(M^{\prime}\). This happens with probability at least \(1-\varepsilon\). In any other case, anything can happen, but the revenue cannot worsen by more than the maximum possible valuation, which is \(\|v\|_{1}\leq 1\). Therefore, \(\texttt{Rev}(M)-\texttt{Rev}(M^{\prime})\leq\varepsilon\cdot 1=\varepsilon\), completing the proof.
**Theorem 10**.: _If two menus \(M_{1}\) and \(M_{2}\) for the RochetNet have size at least \(\lceil\frac{4}{\varepsilon^{2}}\rceil^{2n}\), then they are \(\varepsilon\)-connected. Moreover, the curve transforming \(M_{1}\) into \(M_{2}\) is piecewise linear with only five pieces._
Proof Sketch.: The full proof can be found in Appendix A.2. The intuition behind this theorem is that if menus are large, then they should contain many redundant options. Indeed, as in the previous theorem, the strategy is as follows. We show that every menu \(M\) of size at least \(\lceil\frac{4}{\varepsilon^{2}}\rceil^{2n}\) can be linearly transformed into a \(0\)-reducible menu \(\widetilde{M}\) such that each convex combination of \(M\) and \(\widetilde{M}\) achieves a revenue of at least \(\texttt{Rev}(M)-\varepsilon\). This transformation converting \(M_{1}\) and \(M_{2}\) to \(\widetilde{M}_{1}\) and \(\widetilde{M}_{2}\), respectively, yields the first and the fifth of the linear pieces transforming \(M_{1}\) to \(M_{2}\). Together with Proposition 5 applied to \(\widetilde{M}_{1}\) and \(\widetilde{M}_{2}\) serving as the second to fourth linear piece, the theorem then follows.
However, this time, the linear transformation of \(M\) to \(\widetilde{M}\) is much more intricate than in the previous theorem. To do so, it is not sufficient to only adapt the prices. Instead, we also change the allocations of the menu options by rounding them to discretized values. This technique is inspired by Dughmi et al. (2014), but non-trivially adapted to our setting. Since the rounding may also modify the active option for each valuation, we have to carefully adapt the prices in order to make sure that for each valuation, the newly selected option is not significantly worse than the originally selected one. Finally, this property has to be proven not only for \(\widetilde{M}\), but for every convex combination of \(M\) and \(\widetilde{M}\).
After the above rounding, the number of possible allocations for any option is bounded by \(\lceil\frac{4}{\varepsilon^{2}}\rceil^{n}\). Out of several options with the same allocation, the buyer would always choose the cheapest one, implying that the resulting menu \(\widetilde{M}\) is \(0\)-reducible.
## 4 Mode Connectivity for the Affine Maximizer Auctions
Throughout this section, we focus on AMAs with fixed weights \(w_{i}=1\) for all buyers \(i\). Similarly to _RochetNet_, we have the following definition for AMAs.
**Definition 11**.: _A menu \(M\) with \(K+1\) options is \(\varepsilon\)-reducible if and only if there exists a subset \(\mathcal{K}^{\prime}\subseteq\mathcal{K}\), \(0\in\mathcal{K}^{\prime}\), \(|\mathcal{K}^{\prime}|\leq\sqrt{K+1}\) such that, with probability at least \(1-\frac{\varepsilon}{m}\) over the distribution of the valuation of the buyers, (i) \(k(v_{-i})\in\mathcal{K}^{\prime}\) for any buyer \(i\); and (ii) \(k(v)\in\mathcal{K}^{\prime}\)._
Such phenomena are observed in the experiments in [24, Section 6.3].
Our two main results, namely that two \(\varepsilon\)-reducible menus are always \(\varepsilon\)-connected and two large menus are always \(\varepsilon\)-connected, are based on the following proposition, in which we show that \(0\)-reducibility implies \(0\)-connectivity.
**Proposition 12**.: _If two menus \(M_{1}\) and \(M_{2}\) are \(0\)-reducible, then they are \(0\)-connected. Moreover, the curve transforming \(M_{1}\) into \(M_{2}\) is piecewise linear with only three pieces._
The proof idea is similar to the proof of Proposition 5 in _RochetNet_, but requires additional arguments due to the more intricate price structure (see Appendix D.1 for more details). Based on this proposition, now, we are able to show our two main results. First, we achieve \(\varepsilon\)-connectivity from \(\varepsilon\)-reducibility.
**Theorem 13**.: _If two AMAs \(M_{1}\) and \(M_{2}\) are \(\varepsilon\)-reducible, then they are \(\varepsilon\)-mode-connected. Moreover, the curve transforming \(M_{1}\) to \(M_{2}\) is piecewise linear with only five pieces._
Before the proof, we recall how the total payment is calculated for a valuation profile \(v\). We choose \(k(v)\) as the option which maximizes the boosted welfare, \(\sum_{i}v_{i}^{\top}x_{i}^{(k)}+\beta^{(k)}\). According to (1), the total revenue can be written as
\[\sum_{i}p_{i}(v)=\sum_{i}\underbrace{\left(\sum_{\ell\neq i}v_{\ell}^{\top}x_ {\ell}^{(k(v_{-i}))}+\beta^{(k(v_{-i}))}\right)}_{\text{boosted welfare of }v_{-i}}-(m-1) \underbrace{\left(\sum_{i}v_{i}^{\top}x_{i}^{(k(v))}+\beta^{(k(v))}\right)}_{ \text{boosted welfare of }v}-\beta^{(k(v))}. \tag{2}\]
Proof of Theorem 13.: Similar to the proof of Theorem 9, it is sufficient to show that every \(\varepsilon\)-reducible menu \(M\) can be linearly transformed into a \(0\)-reducible menu \(\widetilde{M}\) such that each convex combination of \(M\) and \(\widetilde{M}\) achieves a revenue of at least \(\mathtt{Rev}(M)-\varepsilon\). This can then be used as the first and fifth linear piece of the curve connecting \(M_{1}\) and \(M_{2}\), while the middle three pieces are provided by Proposition 12.
We construct \(\widetilde{M}\) by _(i)_ keeping all options in \(\mathcal{K}^{\prime}\) unchanged; _(ii)_ for the options \(k\in\mathcal{K}\setminus\mathcal{K}^{\prime}\), we decrease \(\beta^{(k)}\) to be smaller than \(-m\), which implies such an option will never be selected (recall that \(0\in\mathcal{K}^{\prime}\) is assumed, and the option \((\mathbf{0},0)\) is better than any such option). Consequently, \(\widetilde{M}\) is \(0\)-reducible.
To complete the proof, let us look at the revenue of \(M^{\prime}=\{({x^{\prime}}^{(k)},{\beta^{\prime}}^{(k)})\}_{k\in\mathcal{K}}\), which is a convex combination of \(M\) and \(\widetilde{M}\): \(M^{\prime}=\lambda M+(1-\lambda)\widetilde{M}\) for \(0\leq\lambda<1\). Let \(k^{\prime}(v)=\arg\max_{k}\sum_{i}v_{i}^{\top}x_{i}^{\prime(k)}+{\beta^{\prime }}^{(k)}\). As we decrease \(\beta^{(k)}\) for \(k\notin\mathcal{K}^{\prime}\), \(k(v)\in\mathcal{K}^{\prime}\) implies \(k^{\prime}(v)\in\mathcal{K}^{\prime}\) and, additionally, option \(k^{\prime}(v)\) and option \(k(v)\) achieve the same boosted welfare and same \(\beta\). Therefore, since \(M\) is \(\varepsilon\)-reducible, with probability at least \(1-\frac{\varepsilon}{m}\), the boosted welfare of \(v\) as well as the boosted welfare of \(v_{-i}\) for all buyers \(i\) is the same for \(M\) and for \(M^{\prime}\). According to the formula (2), the total payment for the profile \(v\) is the same for \(M\) and \(M^{\prime}\). Therefore, the loss on the revenue can only appear with probability at most \(\frac{\varepsilon}{m}\), and the maximum loss is at most \(m\), which implies an \(\varepsilon\) loss in total.
Second, we show that mode connectivity also holds for those AMAs with large menu sizes, namely for \(K+1\geq\lceil\frac{16m^{3}}{\epsilon^{2}}\rceil^{2nm}\).
**Theorem 14**.: _For any \(0<\epsilon\leq\frac{1}{4}\), if two AMAs \(M_{1}\) and \(M_{2}\) have at least \(K+1\geq\lceil\frac{16m^{3}}{\epsilon^{2}}\rceil^{2nm}\) options, then they are \(\varepsilon\)-mode-connected. Moreover, the curve transforming \(M_{1}\) to \(M_{2}\) is piecewise linear with only five pieces._
Proof Sketch.: The full proof can be found in Appendix D.2. Similar to _RochetNet_, the idea of proving Theorem 14 is to discretize the allocations in the menu, then one can use Proposition 12 to construct the low-loss transformation from \(M_{1}\) to \(M_{2}\) by five linear pieces. To do this, one wants the loss of revenue to be small during the discretization. Consider the formula (2) of the total payment. The first two terms do not change much by a small change of the discretization. However, the last term \(\beta^{(k(v))}\) might be significantly affected by discretization, which may cause a notable decrease in the total payment. To avoid this, we perform a proportional discount on \(\beta\), incentivizing the auctioneer to choose an allocation with a small \(\beta\). By this approach, the original revenue will be approximately maintained. Furthermore, we show a linear path, connecting the original menu and the menu after discretizing, which will suffer a small loss.
## 5 Conclusion
We have given theoretical evidence of mode-connectivity in neural networks designed to learn auction mechanisms. Our results show that, for a sufficiently wide hidden layer, \(\varepsilon\)-mode-connectivity holds in the strongest possible sense. Perhaps more practically, we have shown \(\varepsilon\)-mode-connectivity under \(\varepsilon\)-reducibility, i.e., the assumption that there is a sufficiently small subset of neurons that preserve most of the revenue. There is evidence for this assumption in previous work in differentiable economics. A systematic experimental study that verifies this assumption under various distributions and network sizes is left for future work.
Our results make a first step in providing theoretical arguments underlying the success of neural networks in mechanism design. Our focus was on some of the most basic architectures. A natural next step is to extend the arguments for AMA networks with variable weights \(w_{i}\). Such a result will need to analyze a four layer network, and thus could make headway into understanding the behaviour of deep networks. Besides _RochetNet_, Dutting et al. (2019) also proposed _RegretNet_, based on minimising a regret objective. This network is also applicable to multiple buyers, but only provides approximate incentive compatibility, and has been extended in subsequent work, e.g., Feng et al. (2018), Golowich et al. (2018), Duan et al. (2022). The architecture is however quite different from _RochetNet_: it involves two deep neural networks in conjunction, an allocation and a payment network, and uses expected ex post regret as the loss function. We therefore expect a mode-connectivity analysis for _RegretNet_ to require a considerable extension of the techniques used by us. We believe that such an analysis would be a significant next step in the theoretical analysis of neural networks in differentiable economics.
## Acknowledgments and Disclosure of Funding
All three authors gratefully acknowledge support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (for all three authors via grant agreement ScaleOpt-757481; for Christoph Hertrich additionally via grant agreement ForEFront-615640). Yixin Tao also acknowledges the Grant 2023110522 from SUFE.
## References
* Allen-Zhu et al. (2019) Z. Allen-Zhu, Y. Li, and Z. Song. A convergence theory for deep learning via over-parameterization. In _International Conference on Machine Learning_, pages 242-252. PMLR, 2019.
* Arora et al. (2018) S. Arora, R. Ge, B. Neyshabur, and Y. Zhang. Stronger generalization bounds for deep nets via a compression approach. In _International Conference on Machine Learning_, pages 254-263. PMLR, 2018.
* Arora et al. (2019)M. Babaioff, Y. A. Gonczarowski, and N. Nisan. The menu-size complexity of revenue approximation. _Games and Economic Behavior_, 134:281-307, 2022.
* Bhalgat et al. (2013) A. Bhalgat, S. Gollapudi, and K. Munagala. Optimal auctions via the multiplicative weight method. In _Proceedings of the fourteenth ACM Conference on Electronic Commerce_, pages 73-90, 2013.
* Cai et al. (2012a) Y. Cai, C. Daskalakis, and S. M. Weinberg. An algorithmic characterization of multi-dimensional mechanisms. In _Proceedings of the forty-fourth Annual ACM Symposium on Theory of Computing_, pages 459-478, 2012a.
* Cai et al. (2012b) Y. Cai, C. Daskalakis, and S. M. Weinberg. Optimal multi-dimensional mechanism design: Reducing revenue to welfare maximization. In _IEEE 53rd Annual Symposium on Foundations of Computer Science_, pages 130-139. IEEE, 2012b.
* Cai et al. (2013) Y. Cai, C. Daskalakis, and S. M. Weinberg. Understanding incentives: Mechanism design becomes algorithm design. In _IEEE 54th Annual Symposium on Foundations of Computer Science_, pages 618-627. IEEE, 2013.
* Clarke (1971) E. H. Clarke. Multipart pricing of public goods. _Public Choice_, pages 17-33, 1971.
* Conitzer and Sandholm (2002) V. Conitzer and T. Sandholm. Complexity of mechanism design. In _Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence_, pages 103-110, 2002.
* Conitzer and Sandholm (2004) V. Conitzer and T. Sandholm. Self-interested automated mechanism design and implications for optimal combinatorial auctions. In _Proceedings of the 5th ACM Conference on Electronic Commerce_, pages 132-141, 2004.
* Curry et al. (2022a) M. Curry, U. Lyi, T. Goldstein, and J. Dickerson. Learning revenue-maximizing auctions with differentiable matching. In _International Conference on Artificial Intelligence and Statistics_, pages 6062-6073. PMLR, 2022a.
* Curry et al. (2022b) M. Curry, T. Sandholm, and J. Dickerson. Differentiable economics for randomized affine maximizer auctions. _arXiv preprint arXiv:2202.02872_, 2022b.
* Daskalakis (2015) C. Daskalakis. Multi-item auctions defying intuition? _ACM SIGecom Exchanges_, 14(1):41-75, 2015.
* Daskalakis et al. (2013) C. Daskalakis, A. Deckelbaum, and C. Tzamos. Mechanism design via optimal transport. In _Proceedings of the fourteenth ACM Conference on Electronic Commerce_, pages 269-286, 2013.
* Daskalakis et al. (2014) C. Daskalakis, A. Deckelbaum, and C. Tzamos. The complexity of optimal mechanism design. In _Proceedings of the twenty-fifth annual ACM-SIAM Symposium on Discrete Algorithms_, pages 1302-1318. SIAM, 2014.
* Daskalakis et al. (2015) C. Daskalakis, A. Deckelbaum, and C. Tzamos. Strong duality for a multiple-good monopolist. In _Proceedings of the Sixteenth ACM Conference on Economics and Computation_, pages 449-450, 2015.
* Draxler et al. (2018) F. Draxler, K. Veschgini, M. Salmhofer, and F. Hamprecht. Essentially no barriers in neural network energy landscape. In _International Conference on Machine Learning_, pages 1309-1318. PMLR, 2018.
* Du et al. (2019) S. Du, J. Lee, H. Li, L. Wang, and X. Zhai. Gradient descent finds global minima of deep neural networks. In _International Conference on Machine Learning_, pages 1675-1685. PMLR, 2019.
* Duan et al. (2022) Z. Duan, J. Tang, Y. Yin, Z. Feng, X. Yan, M. Zaheer, and X. Deng. A context-integrated transformer-based neural network for auction design. In _International Conference on Machine Learning_, pages 5609-5626. PMLR, 2022.
* Dughmi et al. (2014) S. Dughmi, L. Han, and N. Nisan. Sampling and representation complexity of revenue maximization. In _Web and Internet Economics: 10th International Conference, WINE 2014_, pages 277-291. Springer, 2014.
* Dutting et al. (2019) P. Dutting, Z. Feng, H. Narasimhan, D. Parkes, and S. S. Ravindranath. Optimal auctions through deep learning. In _International Conference on Machine Learning_, pages 1706-1715. PMLR, 2019.
* Dashkina et al. (2015)Z. Feng, H. Narasimhan, and D. C. Parkes. Deep learning for revenue-optimal auctions with budgets. In _Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems_, pages 354-362, 2018.
* Garipov et al. (2018) T. Garipov, P. Izmailov, D. Podoprikhin, D. P. Vetrov, and A. G. Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. _Advances in Neural Information Processing Systems_, 31, 2018.
* Giannakopoulos and Koutsoupias (2014) Y. Giannakopoulos and E. Koutsoupias. Duality and optimality of auctions for uniform distributions. In _Proceedings of the fifteenth ACM Conference on Economics and Computation_, pages 259-276, 2014.
* Giannakopoulos and Koutsoupias (2018) Y. Giannakopoulos and E. Koutsoupias. Duality and optimality of auctions for uniform distributions. _SIAM Journal on Computing_, 47(1):121-165, 2018.
* Golowich et al. (2018) N. Golowich, H. Narasimhan, and D. C. Parkes. Deep learning for multi-facility location mechanism design. In _International Joint Conference on Artificial Intelligence_, pages 261-267, 2018.
* Groves (1973) T. Groves. Incentives in teams. _Econometrica: Journal of the Econometric Society_, pages 617-631, 1973.
* Kuditipudi et al. (2019) R. Kuditipudi, X. Wang, H. Lee, Y. Zhang, Z. Li, W. Hu, R. Ge, and S. Arora. Explaining landscape connectivity of low-cost solutions for multilayer nets. _Advances in Neural Information Processing Systems_, 32, 2019.
* Kuo et al. (2020) K. Kuo, A. Ostuni, E. Horishny, M. J. Curry, S. Dooley, P.-y. Chiang, T. Goldstein, and J. P. Dickerson. Proportionnet: Balancing fairness and revenue for auction design with deep learning. _arXiv preprint arXiv:2010.06398_, 2020.
* Manelli and Vincent (2007) A. M. Manelli and D. R. Vincent. Multidimensional mechanism design: Revenue maximization and the multiple-good monopoly. _Journal of Economic Theory_, 137(1):153-185, 2007.
* Myerson (1981) R. B. Myerson. Optimal auction design. _Mathematics of Operations Research_, 6(1):58-73, 1981.
* Nguyen (2019) Q. Nguyen. On connected sublevel sets in deep learning. In _International Conference on Machine Learning_, pages 4790-4799. PMLR, 2019.
* Rochet (1987) J.-C. Rochet. A necessary and sufficient condition for rationalizability in a quasi-linear context. _Journal of Mathematical Economics_, 16(2):191-200, 1987.
* Rochet and Stole (2003) J.-C. Rochet and L. A. Stole. The economics of multidimensional screening. _Econometric Society Monographs_, 35:150-197, 2003.
* Sandholm and Likhodedov (2015) T. Sandholm and A. Likhodedov. Automated design of revenue-maximizing combinatorial auctions. _Operations Research_, 63(5):1000-1025, 2015.
* Shen et al. (2019) W. Shen, P. Tang, and S. Zuo. Automated mechanism design via neural networks. In _Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems_, pages 215-223, 2019.
* Shevchenko and Mondelli (2020) A. Shevchenko and M. Mondelli. Landscape connectivity and dropout stability of sgd solutions for over-parameterized neural networks. In _International Conference on Machine Learning_, pages 8773-8784. PMLR, 2020.
* Vickrey (1961) W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. _The Journal of Finance_, 16(1):8-37, 1961.
* Zheng et al. (2022) S. Zheng, A. Trott, S. Srinivasa, D. C. Parkes, and R. Socher. The AI economist: Taxation policy design via two-level deep multiagent reinforcement learning. _Science Advances_, 8(18), 2022. | ## Review
### Summary
This paper aims to provide a theoretical foundation for the empirical success of neural network architectures in auction design, specifically focusing on the menu-based auction frameworks of RochetNet and Affine Maximizer Auctions (AMA). It introduces the concept of epsilon-mode connectivity, showing that locally optimal auction solutions can be connected by a path in parameter space, which helps explain why optimization techniques such as SGD perform well in these contexts. The authors demonstrate that two locally optimal menus are connected by a simple path, with minimal revenue loss, thereby offering insights into the optimization landscape of auction designs.
### Strengths
- Addresses a significant problem by combining machine learning with auction mechanism design.
- Novel contribution in establishing theoretical properties of mode connectivity in auction networks.
- Well-executed proofs and clear, structured presentation.
- Provides a theoretical explanation for observed empirical phenomena in auction design.
### Weaknesses
- Lacks clarity on practical implications of mode connectivity for practitioners.
- Insufficient discussion on how mode connectivity justifies empirical performance.
- Presentation requires major revisions for better readability, especially in defining key concepts earlier.
- Unclear novelty and significance of mode connectivity in the context of auction mechanisms compared to general neural networks.
### Questions
- What explicit advantages does mode connectivity confer during optimization?
- How do epsilon-reducibility and menu options relate to the theoretical results presented?
- Could further discussion on related work, such as Shen et al.'s architecture, enhance the paper's impact?
- What insights can practitioners derive from the findings regarding initialization and training of networks?
### Soundness
**Score:** 3
**Description:** Good: The theoretical foundations are solid, but there are concerns about the clarity and implications of the results.
### Presentation
**Score:** 3
**Description:** Good: While the paper is generally well-written, significant revisions are needed to enhance clarity and accessibility for readers unfamiliar with the topic.
### Contribution
**Score:** 3
**Description:** Good: The paper makes a notable contribution to the understanding of mode connectivity in auction mechanisms, though further clarity on its implications would strengthen its significance.
### Rating
**Score:** 6
**Description:** Weak Accept: The paper presents solid theoretical work with moderate-to-high impact potential, but requires improvements in clarity and practical relevance.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper offers a valuable theoretical contribution to the field of auction design through the lens of differentiable economics. While it has sound theoretical underpinnings and addresses a significant problem, the clarity of presentation needs improvement to better convey its significance and practical implications, especially to audiences unfamiliar with the domain.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# The Inductive Bias of Flatness Regularization
for Deep Matrix Factorization
Khashayar Gatmiry
MIT
[email protected]
Zhiyuan Li
Stanford University
[email protected]
Ching-Yao Chuang
MIT
[email protected]
Sashank Reddi
Google
[email protected]
Tengyu Ma
Stanford University
[email protected]
Stefanie Jegelka
TU Munich & MIT
[email protected]
###### Abstract
Recent works on over-parameterized neural networks have shown that the stochasticity in optimizers has the implicit regularization effect of minimizing the sharpness of the loss function (in particular, the trace of its Hessian) over the family zero-loss solutions. More explicit forms of flatness regularization also empirically improve the generalization performance. However, it remains unclear why and when flatness regularization leads to better generalization. This work takes the first step toward understanding the inductive bias of the minimum trace of the Hessian solutions in an important setting: learning deep linear networks from linear measurements, also known as _deep matrix factorization_. We show that for all depth greater than one, with the standard Restricted Isometry Property (RIP) on the measurements, minimizing the trace of Hessian is approximately equivalent to minimizing the Schatten 1-norm of the corresponding end-to-end matrix parameters (i.e., the product of all layer matrices), which in turn leads to better generalization. We empirically verify our theoretical findings on synthetic datasets.
## 1 Introduction
Modern deep neural networks are typically over-parametrized and equipped with huge model capacity, but surprisingly, they generalize well when trained using stochastic gradient descent (SGD) or its variants [51]. A recent line of research suggested the _implicit bias_ of SGD as a possible explanation to this mysterious ability. In particular, Damian et al. [10], Li et al. [29], Arora et al. [4], Lyu et al. [33], Wen et al. [48], Liu et al. [31] have shown that SGD can implicitly minimize the _sharpness_ of the training loss, in particular, the trace of the Hessian of the training loss, to obtain the final model. However, despite the strong empirical evidence on the correlation between various notions of sharpness and generalization [25, 23, 38, 24] and the effectiveness of using sharpness regularization on improving generalization [16, 49, 53, 39], the connection between penalization of the sharpness of training loss and better generalization still remains majorly unclear [13, 2] and has only been proved in the context of two-layer linear models [29, 37, 12]. To further understand this connection beyond the two layer case, we study the inductive bias of penalizing the _trace of the Hessian_ of training loss and its effect on the _generalization_ in an important theoretical deep learning setting: _deep linear networks_ (or equivalently, _deep matrix factorization_[3]). We start by briefly describing the problem setup.
Deep Matrix Factorization.Consider an \(L\)-layer deep network where \(L\in\mathbb{N}^{+},L\geq 2\) is the depth of the model. Let \(W_{i}\in\mathbb{R}^{d_{i}\times d_{i-1}}\) and \(d_{i}\) denote the layer weight matrix and width of the \(i^{\text{th}}\) (\(i\in[L]\)layer respectively. We use \(\mathbf{W}\) to denote the concatenation of all the parameters \((W_{1},\ldots,W_{L})\) and define the _end-to-end matrix_ of \(\mathbf{W}\) as
\[E(\mathbf{W})\triangleq W_{L}W_{L-1}\cdots W_{1}. \tag{1}\]
In this paper, we focus on models that are linear in the space of the end-to-end matrix \(E(W)\). Suppose \(M^{*}\in\mathbb{R}^{d_{L}\times d_{0}}\) is the target end-to-end matrix, and we observe \(n\) linear measurements (matrices) \(A_{i}\in\mathbb{R}^{d_{L}\times d_{0}}\) and the corresponding labels \(b_{i}=\langle A_{i},M^{*}\rangle\). The training loss of \(\mathbf{W}\) is the mean-squared error (MSE) between the prediction \(\langle A_{i},W_{L}W_{L-1}\cdots W_{1}\rangle\) and the observation \(b_{i}\):
\[\mathcal{L}(\mathbf{W})\triangleq\frac{1}{n}\sum_{i=1}^{n}\left(\langle A_{i},W_{L}W_{L-1}\cdots W_{1}\rangle-b_{i}\right)^{2}. \tag{2}\]
Throughout this paper, we assume that \(d_{i}\geq\min(d_{0},d_{L})\) for each \(i\in[L]\) and, thus, the image of the function \(E(\cdot)\) is the entire \(\mathbb{R}^{d_{L}\times d_{0}}\). In particular, this ensures that the deep models are sufficiently expressive in the sense that \(\min_{\boldsymbol{W}}\mathcal{L}(\boldsymbol{W})=0\). For this setting, we aim to understand the structure of the trace of the Hessian minimization, as described below. The trace of Hessian is the sum of the eigenvalues of Hessian, which is an indicator of sharpness and it is known that variants of SGD, such as label noise SGD or 1-SAM, are biased toward models with a smaller trace of Hessian [29, 48].
**Min Trace of Hessian Interpolating Solution.** Our primary object of study is the interpolating solution with the minimum trace of Hessian, defined as:
\[\boldsymbol{W}^{*}\in\operatorname*{arg\,min}_{\boldsymbol{W}:\mathcal{L}( \boldsymbol{W})=0}\operatorname{tr}[\nabla^{2}\mathcal{L}(\boldsymbol{W})]. \tag{3}\]
As we shall see shortly, the solution to the above optimization problem is not unique. We are interested in understanding the underlying structure of any minimizer \(\boldsymbol{W}^{*}\). This will, in turn, inform us about the generalization nature of these solutions.
### Main Results
Before delving into the technical details, we state our main results in this section. This also serves the purpose of highlighting the primary technical contributions of the paper. First, since the generalization of \(\boldsymbol{W}\) only depends on its end-to-end matrix \(E(\boldsymbol{W})\), it is informative to derive the properties of \(E(\boldsymbol{W}^{*})\) for any min trace of the Hessian interpolating solution \(\boldsymbol{W}^{*}\) defined in (3). Indeed, penalizing the trace of Hessian in the \(W\) space induces an equivalent penalization in the space of the end-to-end parameters. More concretely, given an end-to-end parameter \(M\), let the induced regularizer \(F(M)\) denote the minimum trace of Hessian of the training loss at \(\boldsymbol{W}\) among all \(\boldsymbol{W}\)'s that instantiate the end-to-end matrix \(M\) i.e., \(E(\boldsymbol{W})=M\).
**Definition 1** (Induced Regularizer).: _Suppose \(M\in\mathbb{R}^{d_{L}\times d_{0}}\) is an end-to-end parameter that fits the training data perfectly (that is, \(\langle A_{i},M\rangle=b_{i},\,\forall i\in[n]\)). We define the induced regularizer as_
\[F(M)\triangleq\min_{\boldsymbol{W}:E(\boldsymbol{W})=M}\operatorname{tr}[ \nabla^{2}\mathcal{L}(\boldsymbol{W})] \tag{4}\]
Since the image of \(E(\cdot)\) is the entire \(\mathbb{R}^{d_{L}\times d_{0}}\) by our assumption that \(d_{i}\geq\min(d_{0},d_{L})\), function \(F\) is well-defined for all \(M\in\mathbb{R}^{d_{L}\times d_{0}}\). It is easy to see that minimizing the trace of the Hessian in the original parameter space (see (3)) is equivalent to penalizing \(F(M)\) in the end-to-end parameter. Indeed, the minimizers of the implicit regularizer in the end-to-end space are related to the minimizers of the implicit regularizer in the \(\boldsymbol{W}\) space, i.e.,
\[\operatorname*{arg\,min}_{M:\mathcal{L}^{\prime}(M)=0}F(M)=\left\{E( \boldsymbol{W}^{*})\mid\boldsymbol{W}^{*}\in\operatorname*{arg\,min}_{ \boldsymbol{W}:\mathcal{L}(\boldsymbol{W})=0}\operatorname{tr}[\nabla^{2} \mathcal{L}(\boldsymbol{W})]\right\},\]
where for any \(M\in\mathbb{R}^{d_{L}\times d_{0}}\), we define \(\mathcal{L}^{\prime}(M)\triangleq\frac{1}{n}\sum_{i=1}\left(\langle A_{i},M \rangle-b_{i}\right)^{2}\) and thus \(\mathcal{L}(\boldsymbol{W})=\mathcal{L}^{\prime}(E(\boldsymbol{W}))\). This directly follows from the definition of \(F\) in (4). Our main result characterizes the induced regularizer \(F(M)\) when the data satisfies the RIP property.
**Theorem 1** (Induced regularizer under RIP).: _Suppose the linear measurements \(\{A_{i}\}_{i=1}^{n}\) satisfy the \((1,\delta)\)-RIP condition._1. _For any_ \(M\in\mathbb{R}^{d_{L}\times d_{0}}\) _such that_ \(\langle A_{i},M\rangle=b_{i},\;\forall i\in[n]\)_, it holds that_ \[(1-\delta)L(d_{0}d_{L})^{1/L}\|M\|_{*}^{2(L-1)/L}\leq F(M)\leq(1+\delta)L(d_{0} d_{L})^{1/L}\|M\|_{*}^{2(L-1)/L}.\] (5)
2. _Let_ \(\mathbf{W}^{*}\in\operatorname*{arg\,min}_{\mathbf{W}:\mathcal{L}(\mathbf{W})=0}\text{tr} [\nabla^{2}\mathcal{L}(\mathbf{W})]\) _be an interpolating solution with minimal trace of Hessian. Then_ \(E(\mathbf{W}^{*})\) _roughly minimizes the nuclear norm among all interpolating solutions of_ \(\mathcal{L}^{\prime}\)_. That is,_ \[\|E(\mathbf{W}^{*})\|_{*}\leq\frac{1+\delta}{1-\delta}\min_{\mathcal{L}^{\prime}( M)=0}\|M\|_{*}.\]
However, for more general cases, it is challenging to compute the closed-form expression of \(F\). In this work, we derive closed-form expressions for \(F\) in the following two cases: (1) depth \(L\) is equal to \(2\) and (2) there is only one measurement, _i.e._, \(n=1\) (see Table 1). Leveraging the above characterization of induced regularzier, we obtain the following result on the generalization bounds:
**Theorem 2** (Recovery of the ground truth under RIP).: _Suppose the linear measurements \(\{(A_{i})\}_{i=1}^{n}\) satisfy the \((2,\delta(n))\)-RIP (Definition 3). Then for any \(\mathbf{W}^{*}\in\operatorname*{arg\,min}_{\mathbf{W}:\mathcal{L}(\mathbf{W})=0}\text{tr} [\nabla^{2}\mathcal{L}(\mathbf{W})]\), we have_
\[\|E(\mathbf{W}^{*})-M^{*}\|_{F}^{2}\leq\frac{8\delta(n)}{(1-\delta(n))^{2}}\|M^{*} \|_{*}^{2}. \tag{6}\]
_where \(\delta(n)\) depends on the number of measurements \(n\) and the distribution of the measurements._
If we further suppose \(\{A_{i}\}_{i=1}^{n}\) are independently sampled from some distribution over \(\mathbb{R}^{d_{L}\times d_{0}}\) satisfying that \(\mathbb{E}_{A}\left\langle A,M\right\rangle^{2}=\|M\|_{F}^{2}\), _e.g._, the standard multivariate Gaussian distribution, denoted by \(\mathcal{G}_{d_{L}\times d_{0}}\), we know \(\delta(n)=O(\sqrt{\frac{d_{L}+d_{0}}{n}})\) from Candes and Plan [7] (see Section 5.1 for more examples).
**Theorem 3**.: _For \(n\geq\Omega(r(d_{0}+d_{L}))\), with probability at least \(1-\exp(\Omega(d_{0}+d_{L}))\) over the randomly sampled \(\{A_{i}\}_{i=1}^{n}\) from multivariate Gaussian distribution \(\mathcal{G}\), for any minimum trace of Hessian interpolating solution \(\mathbf{W}^{*}\in\operatorname*{arg\,min}_{\mathbf{W}:\mathcal{L}(\mathbf{W})=0}\text{tr} [\nabla^{2}\mathcal{L}(\mathbf{W})]\), the population loss \(\overline{\mathcal{L}}(\mathbf{W}^{*})\triangleq\mathbb{E}_{A\sim\mathcal{G}}( \langle A,E(\mathbf{W}^{*})\rangle-\langle A,M^{*}\rangle)^{2}\) satisfies that_
\[\overline{\mathcal{L}}(\mathbf{W}^{*})=\|E(\mathbf{W}^{*})-M^{*}\|_{F}^{2}\leq O\Big{(} \frac{d_{0}+d_{L}}{n}\|M^{*}\|_{*}^{2}\log^{3}n\Big{)}.\]
Next, we state a lower bound for the conventional estimator for overparameterized models that minimizes the norm. The lower bound states that, to achieve a small error, the number of samples should be as large as the product of the dimensions of the end-to-end matrix \(d_{0}d_{L}\) as opposed to \(d_{0}+d_{L}\) in case of the min trace of Hessian minimizer. It is proved in Appendix F.
**Theorem 4** (Lower bound for \(\ell_{2}\) regression).: _Suppose \(\{A_{i}\}_{i=1}^{n}\) are randomly sampled from multivariate Gaussian distribution \(\mathcal{G}\), let \(\tilde{\mathbf{W}}=\operatorname*{arg\,min}_{\mathbf{W}:\mathcal{L}(\mathbf{W})=0}\|E( \mathbf{W})\|_{F}\) to be the minimum Frobenius norm interpolating solution, then the expected population loss is_
\[\mathbb{E}\,\overline{\mathcal{L}}(\tilde{\mathbf{W}})=(1-\tfrac{\min\{n,d_{0}d_{L }\}}{d_{0}d_{L}})\left\|M^{*}\right\|_{F}^{2}.\]
\begin{table}
\begin{tabular}{l|l|l} Settings & Induced Regularizer \(F(M)/L\) & Theorem \\ \hline \((1,\delta)\)-RIP & \((1\pm O(\delta))(d_{0}d_{L})^{1/L}\|M\|_{*}^{2-2/L}\) & Theorem 1 \\ \(L=2\) & \(\left\|\left(\frac{1}{n}A_{i}A_{i}^{\top}\right)^{\nicefrac{{1}}{{2}}}M\left( \frac{1}{n}A_{i}^{\top}A_{i}\right)^{\nicefrac{{1}}{{2}}}\right\|_{*}\) & Theorem 5 ([12]) \\ \(n=1\) & \(\left\|\left(A^{T}M\right)^{L-1}A^{T}\right\|_{S_{2/L}}^{2/L}\) & Theorem 7 \\ \end{tabular}
\end{table}
Table 1: Summary of properties of the induced regularizer in the end-to-end matrix space. Here \(\left\|\cdot\right\|_{S_{p}}\) denotes the Schatten \(p\)-norm for \(p\in[1,\infty]\) and Schatten \(p\)-quasinorm for \(p\in(0,1)\) (see Definition 2). \(\left\|\cdot\right\|_{*}\) denotes the Schatten 1-norm, also known as the nuclear norm.
The lower bound in Theorem 4 shows in order to obtain an \(O(1)\)-relatively accurate estimates of the ground truth in expectation, namely to guarantee \(\mathbb{E}\,\overline{\mathcal{L}}(\tilde{\mathbf{W}})\leq O(1)\|M^{*}\|_{F}^{2}\), the minimum Frobenius norm interpolating solution needs at least \(\Omega(d_{0}d_{L})\) samples. In contrast, the minimizer of trace of Hessian in the same problem only requires \(O((d_{0}+d_{L})\|M^{*}\|_{s}^{2}/\|M^{*}\|_{F}^{2})\) samples, which is at least \(\tilde{O}(\frac{\min\{d_{0},d_{L}\}}{\tau})\) times smaller. We further illustrate experimentally the superior generalization ability of sharpness minimization algorithms like label noise SGD [6; 10; 29] compared to vanilla mini-batch SGD Figure 1. Due to the space limits, we defer the full setting for experiments into Appendix A.
## 2 Related Work
Connection Between Sharpness and Generalization.Research on the connection between generalization and sharpness dates back to Hochreiter and Schmidhuber [21]. Keskar et al. [25] famously observe that when increasing the batch size of SGD, the test error and the sharpness of the learned solution both increase. Jastrzebski et al. [23] extend this observation and found that there is a positive correlation between sharpness and the ratio between learning rate and batch size. Jiang et al. [24] perform a large-scale empirical study on various notions of generalization measures and show that sharpness-based measures correlate with generalization best. Liu et al. [31] find that among language models with the same validation pretraining loss, those that have smaller sharpness can have better downstream performance. On the other hand, Dinh et al. [13] argue that for networks with scaling invariance, there always exist models with good generalization but with arbitrarily large sharpness. We note this does not contradict our main result here, which only asserts the interpolation solution with a minimal trace of Hessian generalizes well, but not vice versa. Empirically, sharpness minimization is also a popular and effective regularization method for overparametrized models [39; 17; 53; 49; 26; 32; 54; 52; 1].
Implicit Bias of Sharpness Minimization.Recent theoretical works [6; 10; 29; 31] show that SGD with label noise is implicitly biased toward local minimizers with a smaller trace of Hessian under the assumption that the minimizers locally connect as a manifold. Such a manifold setting is empirically verified by Draxler et al. [14], Garipov et al. [18] in the sense that the set of minimizers of the training loss is path-connected. It is the same situation for the deep matrix factorization problem studied in this paper, although we do not study the optimization trajectory. Instead, we directly study properties of the minimum trace of Hessian interpolation solution.
Sharpness-reduction implicit bias can also happen for deterministic GD. Arora et al. [4] show that normalized GD implicitly penalizes the largest eigenvalue of the Hessian. Ma et al. [35] argues that such sharpness reduction phenomena can also be caused by a multi-scale loss landscape. Lyu et al. [33] show that GD with weight decay on a scale-invariant loss function implicitly decreases the spherical sharpness, _i.e._, the largest eigenvalue of the Hessian evaluated at the normalized parameter. Another line of work focuses on the sharpness minimization effect of a large learning rate in GD, assuming that it converges at the end of training. This has been studied mainly through linear stability analysis [50; 8; 34; 9]. Recent theoretical analysis [11; 30] showed that the sharpness minimization effect of a large learning rate in GD does not necessarily rely on convergence and linear stability, through a four-phase characterization of the dynamics at the Edge of Stability regime [8].
## 3 Preliminaries
**Notation.** We use \([n]\) to denote \(\{1,2,\ldots,n\}\) for every \(n\in\mathbb{N}\). We use \(\left\|M\right\|_{F}\), \(\left\|M\right\|_{s^{\star}}\), \(\left\|M\right\|_{2}\) and \(\text{tr}(M)\) to denote the Frobenius norm, nuclear norm, spectral norm and trace of matrix \(M\) respectively. For any function \(f\) defined over set \(S\) such that \(\min_{x\in S}f(x)\) exists, we use \(\arg\min_{S}f\) to denote the set \(\{y\in S\mid f(y)=\min_{x\in S}f(x)\}\). Given a matrix \(M\), we use \(h_{M}\) to denote the linear map \(A\mapsto\langle A,M\rangle\). We use \(\mathcal{H}_{r}\) to to denote the set \(\mathcal{H}_{r}\triangleq\left\{h_{M}\mid\left\|M\right\|_{*}\leq r\right\}\). \(M_{i:}\) and \(M_{:j}\) are used to denote the \(i\)th row and \(j\)th column of the matrix \(M\).
The following definitions will be important to the technical discussion in the paper.
Rademacher Complexity.Given \(n\) data points \(\{A_{i}\}_{i=1}^{n}\), the _empirical Rademacher complexity_ of function class \(\mathcal{H}\) is defined as
\[\mathcal{R}_{n}(\mathcal{H})=\frac{1}{n}\operatorname{\mathbb{E}}_{e\sim\{\pm 1 \}^{n}}\sup_{h\in\mathcal{H}}\sum_{i=1}^{n}\epsilon_{i}h(A_{i}).\]
Given a distribution \(P\), the _population Rademacher complexity_ is defined as follows: \(\overline{\mathcal{R}}_{n}(\mathcal{H})=\operatorname{\mathbb{E}}_{A_{i}\stackrel{{ iid}}{{\sim}}P}\mathcal{R}_{n}(\mathcal{H})\). This is mainly used to upper bound the generalization gap of SGD.
**Definition 2** (Schatten \(p\)-(quasi)norm).: _Given any \(d,d^{\prime}\in\mathbb{N}^{+}\), \(p\in(0,\infty)\) a matrix \(M\in\mathbb{R}^{d\times d^{\prime}}\) with singular values \(\sigma_{1}(M),\ldots,\sigma_{\min(d,d^{\prime})}(M)\), we define the Schatten \(p\)-(semi)norm as_
\[\left\|M\right\|_{S_{p}}=\left(\sum\nolimits_{i=1}^{\min(d,d^{\prime})}\sigma _{i}^{p}(M)\right)^{1/p}.\]
Note that in this definition \(\left\|\cdot\right\|_{S_{p}}\) is a norm only when \(p\geq 1\). When \(p\in(0,1)\), the triangle inequality does not hold. Note that when \(p\in(0,1)\), \(\left\|A+B\right\|_{S_{p}}\leq 2^{\nicefrac{{1}}{{p-1}}}(\left\|A\right\|_{S_{p}} +\left\|B\right\|_{S_{p}})\) for any matrices \(A\) and \(B\), however, \(2^{\nicefrac{{1}}{{p-1}}}>1\).
We use \(L\) to denote the depth of the linear model and \(\mathbf{W}=(W_{1},\ldots,W_{L})\) to denote the parameters, where \(W_{i}\in\mathbb{R}^{d_{i}\times d_{i-1}}\). We assume that \(d_{i}\geq\min(d_{0},d_{L})\) for each \(i\in[L-1]\) and, thus, the image of \(E(\mathbf{W})\) is the entire \(\mathbb{R}^{d_{L}\times d_{0}}\). Following is a simple relationship between nuclear norm and Frobenius norm that is used frequently in the paper.
**Lemma 1**.: _For any matrices \(A\) and \(B\), it holds that \(\|AB\|_{*}\leq\|A\|_{F}\|B\|_{F}\)._
## 4 Exact Formulation of Induced Regularizer by Trace of Hessian
In this section, we derive the exact formulation of trace of Hessian for \(\ell_{2}\) loss over deep matrix factorization models with linear measurements as a minimization problem over \(\mathbf{W}\). We shall later approximate this formula by a different function in Section 5, which allows us to calculate the implicit bias in closed-form in the space of end-to-end matrices.
We first introduce the following simple lemma showing that the trace of the Hessian of the loss is equal to the sum of squares of norms of the gradients of the neural network output.
**Lemma 2**.: _For any twice-differentiable function \(\{f_{i}(\mathbf{W})\}_{i=1}^{n}\), real-valued labels \(\{b_{i}\}_{i=1}^{n}\), loss function \(\mathcal{L}(\mathbf{W})=\frac{1}{n}\sum_{i=1}^{n}(f_{i}(\mathbf{W})-b_{i})^{2}\), and any \(\mathbf{W}\) satisfying \(\mathcal{L}(\mathbf{W})=0\), it holds that_
\[\text{tr}(\nabla^{2}\mathcal{L}(\mathbf{W}))=\frac{2}{n}\sum_{i=1}^{n}\|\nabla f _{i}(\mathbf{W})\|^{2}.\]
Using Lemma 2, we calculate the trace of Hessian for the particular loss defined in (2). To do this, we consider \(\mathbf{W}\) in Lemma 2 to be the concatenation of matrices \((W_{1},\ldots,W_{L})\) and we set \(f_{i}(\mathbf{W})\) to be the linear measurement \(\langle A_{i},E(\mathbf{W})\rangle\), where \(E(\mathbf{W})=W_{L}\cdots W_{1}\) (see (1)). To calculate the trace of Hessian, according to Lemma 2, we need to calculate the gradient of \(\mathcal{L}(\mathbf{W})\) in (2). To this end, for a fixed \(i\), we compute the gradient of \(\langle A_{i},E(\mathbf{W})\rangle\) with respect to one of the weight matrices \(W_{j}\).
\[\nabla_{W_{j}}\left\langle A_{i},E(\mathbf{W})\right\rangle =\nabla_{W_{j}}\text{tr}(A_{i}^{\top}W_{L}\ldots W_{1})\] \[=\nabla_{W_{j}}\text{tr}((W_{j-1}\ldots W_{1}A_{i}^{\top}W_{L} \ldots W_{j+1})W_{j})\] \[=(W_{j-1}\ldots W_{1}A_{i}^{\top}W_{L}\ldots W_{j+1})^{\top}.\]
According to Lemma 2, trace of Hessian is given by
\[\text{tr}(\nabla^{2}L)(\mathbf{W})=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{L}\| \nabla_{W_{j}}\left\langle A_{i},E(\mathbf{W})\right\rangle\|_{F}^{2}=\frac{1}{n }\sum_{i=1}^{n}\sum_{j=1}^{L}\|W_{j-1}\ldots W_{1}A_{i}^{\top}W_{L}\ldots W_{j +1}\|_{F}^{2}.\]
As mentioned earlier, our approach is to characterize the minimizer of the trace of Hessian among all interpolating solutions by its induced regularizer in the end-to-end matrix space. The above calculation provides the following more tractable characterization of induced regularizer \(F\) in (12):
\[F(M)=\min_{E(\mathbf{W})=M}\sum_{i=1}^{n}\sum_{j=1}^{L}\|W_{j-1}\dots W_{1}A_{i}^{\top }W_{L}\dots W_{j+1}\|_{F}^{2}. \tag{7}\]
In general, we cannot solve \(F\) in closed form for general linear measurements \(\{A_{i}\}_{i=1}^{n}\); however, interestingly, we show that it can be solved approximately under reasonable assumption on the measurements. In particular, we show that the induced regularizer, as defined in (7), will be approximately proportional to a power of the nuclear norm of \(E(\mathbf{W})\) given that the measurements \(\{A_{i}\}_{i=1}^{n}\) satisfy a natural norm-preserving property known as the Restricted Isometry Property (RIP) [7; 42].
Before diving into the proof of the general result for RIP, we first illustrate the connection between nuclear norm and the induced regularizer for the depth-two case. In this case, fortunately, we can compute the closed form of the induced regularizer. This result was first proved by Ding et al. [12]. For self-completeness, we also provide a short proof.
**Theorem 5** (Ding et al. [12]).: _For any \(M\in\mathbb{R}^{d_{L}\times d_{0}}\), it holds that_
\[F(M)\triangleq\min_{W_{2}W_{1}=M}\text{tr}[\nabla^{2}\mathcal{L}](\mathbf{W})=2 \left\|\left(\frac{1}{n}\sum_{i}A_{i}A_{i}^{\top}\right)^{\nicefrac{{1}}{{2}} }M\left(\frac{1}{n}\sum_{i}A_{i}^{\top}A_{i}\right)^{\nicefrac{{1}}{{2}}} \right\|_{*}. \tag{8}\]
Proof of Theorem 5.: We first define \(B_{1}=(\sum_{i=1}^{n}A_{i}{A_{i}}^{T})^{\frac{1}{2}}\) and \(B_{2}=(\sum_{i=1}^{n}{A_{i}}^{T}A_{i})^{\frac{1}{2}}\). Therefore we have that
\[\text{tr}[\nabla^{2}\mathcal{L}](\mathbf{W})=\sum_{i=1}^{n}\left(\|{A_{i}}^{T}W_{2} \|_{F}^{2}+\|W_{1}{A_{i}}^{T}\|_{F}^{2}\right)=\|B_{1}W_{2}\|_{F}^{2}+\|W_{1}B_ {2}\|_{F}^{2}.\]
Further applying Lemma 1, we have that
\[F(M) =\min_{W_{2}W_{1}=M}\text{tr}[\nabla^{2}\mathcal{L}](\mathbf{W})=\min_ {W_{2}W_{1}=M}\sum_{i=1}^{n}\left(\|{A_{i}}^{T}W_{2}\|_{F}^{2}+\|W_{1}{A_{i}}^ {T}\|_{F}^{2}\right)\] \[\geq\min_{W_{2}W_{1}=M}2\|B_{1}W_{2}W_{1}B_{2}\|_{*}^{2}=2\|B_{1} MB_{2}\|_{*}^{2}.\]
Next we show this lower bound of \(F(M)\) can be attained. Let \(U\Lambda V^{T}\) be the SVD of \(B_{1}MB_{2}\). The equality condition happens for \(W_{2}^{*}={B_{1}}^{\dagger}U\Lambda^{1/2},W_{1}^{*}=\Lambda^{1/2}V^{T}{B_{2}} ^{\dagger}\), where we have that \(\sum_{i=1}^{n}\|{A_{i}}^{T}W_{2}^{*}\|_{F}^{2}+\|W_{1}^{*}{A_{i}}^{T}\|_{F}^{2 }=2\|\Lambda\|_{F}^{2}=2\|B_{1}MB_{2}\|_{F}^{2}\). This completes the proof.
The right-hand side in (8) will be very close to the nuclear norm of \(M\) if the two extra multiplicative terms are close to the identity matrix. It turns out that \(\{A_{i}\}_{i=1}^{n}\) satisfying the \((1,\delta)\)-RIP exactly guarantees the two extra terms are \(O(\delta)\)-close to identity. However, the case for deep networks where depth is larger than two is fundamentally different from the two-layer case, where one can obtain a closed form for \(F\). To the best of our knowledge, it is open whether one obtain a closed form for the induced-regularizer for the trace of Hessian when \(L>2\). Nonetheless, in Section 5.1, we show that under RIP, we can still approximate it with the nuclear norm.
## 5 Results for Measurements with Restricted Isometry Property (RIP)
In this section, we present our main results for the generalization benefit of flatness regularization in deep linear networks. We structure the analysis as follows:
1. In Section 5.1, we first recap some preliminaries on the RIP property.
2. In Section 5.2, we prove that the induced regularizer by trace of Hessian is approximately the power of nuclear norm for \((1,\delta)\)-RIP measurements (Theorem 1).
3. In Section 5.3, we prove that the minimum trace of Hessian interpolating solution with \((2,\delta)\)-RIP measurements can recover the ground truth \(M^{*}\) up to error \(\delta\left\|{M^{*}}\right\|_{*}^{2}\). For \(\{A_{i}\}_{i=1}^{n}\) sampled from Gaussian distributions, we know \(\delta=O(\sqrt{\frac{d_{0}+d_{s}}{n}})\).
4. In Section 5.4, we prove a generalization bound with faster rate of \(\frac{d_{0}+d_{s}}{n}\left\|{M^{*}}\right\|_{*}^{2}\) using local Rademacher complexity based techniques from Srebro et al. [44].
Next, we discuss important distributions of measurements for which the RIP property holds.
### Preliminaries for RIP
**Definition 3** (Restricted Isometry Property (RIP)).: _A family of matrices \(\{A_{i}\}_{i=1}^{n}\) satisfies the \((r,\delta)\)-RIP iff for any matrix \(X\) with the same dimension and rank at most \(r\):_
\[(1-\delta)\|X\|_{F}^{2}\leq\frac{1}{n}\sum\nolimits_{i=1}^{n}\langle A_{i},X \rangle^{2}\leq(1+\delta)\|X\|_{F}^{2}. \tag{9}\]
Next, we give two examples of distributions where \(\Omega(r(d_{0}+d_{L}))\) samples guarantee \((r,O(1))\)-RIP. The proofs follow from Theorem 2.3 in [7].
**Example 1**.: _Suppose for every \(i\in\{1,\ldots,n\}\), each entry in the matrix \(A_{i}\) is an independent standard Gaussian random variable, i.e., \(A_{i}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{G}_{d_{L}\times d_{0}}\). For every constant \(\delta\in(0,1)\), if \(n\geq\Omega(r(d_{0}+d_{L}))\), then with probability \(1-e^{\Omega(n)}\), \(\{A_{i}\}_{i=1}^{n}\) satisfies \((r,\delta)\)-RIP._
**Example 2**.: _If each entry of \(A_{i}\) is from a symmetric Bernoulli random variable with variance \(1\), i.e. for all \(i,k,\ell\), entry \([A_{i}]_{k,\ell}\) is either equal to \(1\) or \(-1\) with equal probabilities, then for any \(r\) and \(\delta\), \((r,\delta)\)-RIP holds with same probability as in Example 1 if the same condition there is satisfied._
### Induced Regularizer of Trace of Hessian is Approximately Nuclear Norm
This section focuses primarily on the proof of Theorem 2. Our proof consists of two steps: (1) we show that the trace of Hessian of training loss at the minimizer \(\mathbf{W}\) is multiplicatively \(O(\delta)\)-close to the regularizer \(R(\mathbf{W})\) defined below (Lemma 3) and (2) we show that the induced regularizer of \(R\), \(F^{\prime}(M)\), is proportional to \(\|M\|_{*}^{2(L-1)/L}\) (Lemma 4).
\[R(\mathbf{W})\triangleq\|W_{L}\ldots W_{2}\|_{F}^{2}d_{0}+\sum_{j=2} ^{L-1}\|W_{L}\ldots W_{j+1}\|_{F}^{2}\|W_{j-1}\ldots W_{1}\|_{F}^{2}+\|W_{L-1} \ldots W_{1}\|_{F}^{2}d_{L}. \tag{10}\]
**Lemma 3**.: _Suppose the linear measurement \(\{A_{i}\}_{i=1}^{n}\) satisfy \((1,\delta)\)-RIP. Then, for any \(\mathbf{W}\) such that \(\mathcal{L}(\mathbf{W})=0\), it holds that_
\[(1-\delta)R(\mathbf{W})\leq\text{\rm tr}(\nabla^{2}L)(\mathbf{W})\leq(1+ \delta)R(\mathbf{W}).\]
Since \(\text{\rm tr}(\nabla^{2}\mathcal{L})(\mathbf{W})\) closely approximates \(R(\mathbf{W})\), we can study \(R\) instead of \(\text{\rm tr}[\nabla^{2}\mathcal{L}]\) to understand the implicit bias up to a multiplicative factor \((1+\delta)\). In particular, we want to solve the induced regularizer of \(R(\mathbf{W})\) on the space of end-to-end matrices, \(F^{\prime}(M)\):
\[F^{\prime}(M)\triangleq\min_{\mathbf{W}:\,W_{L}\cdots W_{1}=M}R(\mathbf{W}). \tag{11}\]
Surprisingly, we can solve this problem in closed form.
**Lemma 4**.: _For any \(M\in\mathbb{R}^{d_{L}\times d_{0}}\), it holds that_
\[F^{\prime}(M)\triangleq\min_{\mathbf{W}:\,\,W_{L}\ldots W_{1}=M}R(\mathbf{W})=L(d_{0 }d_{L})^{1/L}\|M\|_{*}^{2(L-1)/L}. \tag{12}\]
Proof of Lemma 4.: Applying the \(L\)-version of the AM-GM to Equation (10):
\[(R(\mathbf{W})/L)^{L}\geq d_{0}\|W_{L}\cdots W_{2}\|_{F}^{2}\cdot\|W_{1}\|_{F}^{2}\|W_{L} \cdots W_{3}\|_{F}^{2}\cdots\|W_{L-1}\cdots W_{1}\|_{F}^{2}d_{L}. \tag{13}\] \[= d_{0}d_{L}\prod_{j=1}^{L-1}\left(\|W_{L}\cdots W_{j+1}\|_{F}^{2 }\,\|W_{j}\cdots W_{1}\|_{F}^{2}\right)\]
Now using Lemma 1, we have for every \(1\leq j\leq L-1\):
\[\|W_{L}\ldots W_{j+1}\|_{F}^{2}\|W_{j}\ldots W_{1}\|_{F}^{2}\geq\|W_{L}\ldots W _{1}\|_{*}^{2}=\|M\|_{*}^{2}. \tag{14}\]
Multiplying Equation (14) for all \(1\leq j\leq L-1\) and combining with Equation (13) implies
\[\min_{\{W|\,\,W_{L}\ldots W_{1}=M\}}R(\mathbf{W})\geq L(d_{0}d_{L})^{1/L}\|M\|_{*}^ {2(L-1)/L}. \tag{15}\]Now we show that equality can indeed be attained. To construct an example in which the equality happens, consider the singular value decomposition of \(M\): \(M=U\Lambda V^{T}\), where \(\Lambda\) is a square matrix with dimension \(\operatorname{rank}(M)\).
For \(1\leq i\leq L\), we pick \(Q_{i}\in\mathbb{R}^{d_{i}\times\operatorname{rank}(M)}\) to be any matrix with orthonormal columns. Note that \(\operatorname{rank}(M)\) is not larger than \(d_{i}\) for all \(1\leq i\leq L\), hence such orthonormal matrices \(Q_{i}\) exist. Then we define the following with \(\alpha,\alpha^{\prime}>0\) being constants to be determined:
\[W_{L} =\alpha^{\prime}\alpha^{-(L-2)/2}U\Lambda^{1/2}{Q_{L-1}}^{T}\in \mathbb{R}^{d_{L}\times d_{L-1}},\] \[W_{i} =\alpha{Q_{i}}{Q_{i-1}}^{T}\in\mathbb{R}^{d_{i}\times d_{i-1}}, \quad\forall 2\leq i\leq L-1,\] \[W_{1} ={\alpha^{\prime}}^{-1}\alpha^{-(L-2)/2}{Q_{1}}\Lambda^{1/2}V^{T }\in\mathbb{R}^{d_{1}\times d_{0}}.\]
Note that \(\Lambda\) is a square matrix with dimension \(\operatorname{rank}(M)\). First of all, note that the defined matrices satisfy
\[W_{L}W_{L-1}\dots W_{1}=\alpha^{L-2}\alpha^{-(L-2)}U\Lambda^{1/2} \Lambda^{1/2}V^{T}=M.\]
To gain some intuition, we check that the equality case for all the inequalities that we applied above. We set the value of \(\alpha\) in a way that these equality cases can hold simultaneously. Note that for the matrix holder inequality that we applied in Equation (14):
\[\|W_{L}\dots W_{j+1}\|_{F}^{2}\|W_{j}\dots W_{1}\|_{F}^{2}=\|W_{L }\dots W_{1}\|_{*}^{2}=\|\Lambda^{1/2}\|_{F}^{2},\]
independent of the choice of \(\alpha\). It remains to check the equality case for the AM-GM inequality that we applied in Equation (13). We have for all \(2\leq j\leq L-1\):
\[\|W_{L}\dots W_{j+1}\|_{F}\|W_{j-1}\dots W_{1}\|_{F}\] \[=\alpha^{j-2}\alpha^{-(L-2)/2}\alpha^{L-j-1}\alpha^{-(L-2)/2}\|U \Lambda^{1/2}\|_{F}\|\Lambda^{1/2}V^{T}\|_{F}=\alpha^{-1}\|\Lambda^{1/2}\|_{F }^{2}, \tag{16}\]
Hence, equality happens for all of them. Moreover, for cases \(j=1\) and \(j=L\), we have
\[d_{0}\|W_{L}\dots W_{2}\|=\|\Lambda^{1/2}\|_{F}d_{0}\alpha^{ \prime}\alpha^{L-2}\alpha^{-(L-2)/2}=\|\Lambda^{1/2}\|_{F}d_{0}\alpha^{\prime }\alpha^{(L-2)/2}. \tag{17}\] \[d_{L}\|W_{L-1}\dots W_{1}\|=\|\Lambda^{1/2}\|_{F}d_{L}{\alpha^{ \prime}}^{-1}\alpha^{L-2}\alpha^{-(L-2)/2}=\|\Lambda^{1/2}\|_{F}d_{L}{\alpha^{ \prime}}^{-1}\alpha^{(L-2)/2}. \tag{18}\]
Thus it suffices to set \(\alpha^{\prime}=(\frac{d_{L}}{d_{0}})^{1/2}\) and \(\alpha=(\frac{\|\Lambda^{1/2}\|_{F}}{\sqrt{d_{0}d_{L}}})^{2/L}=(\frac{\|M\|_{ *}}{d_{0}d_{L}})^{1/L}\) so that the left-hand sides of (16), (17), and (18) are equal, which implies that the lower bound in Equation (15) is actually an equality. The proof is complete.
Now we can prove Theorem 1 as an implication of Lemma 4.
Proof of Theorem 1.: The first claim is a corollary of Lemma 3. We note that
\[F(M) =\min_{W_{L}\dots W_{1}=M}\text{tr}[\nabla^{2}\mathcal{L}](M)\leq (1+\delta)\min_{W_{L}\dots W_{1}=M}R(\mathbf{W})=(1+\delta)F^{\prime}(M)\] \[F(M) =\min_{W_{L}\dots W_{1}=M}\text{tr}[\nabla^{2}\mathcal{L}](M)\geq (1-\delta)\min_{W_{L}\dots W_{1}=M}R(\mathbf{W})=(1-\delta)F^{\prime}(M).\]
For the second claim, pick \(\tilde{\mathbf{W}}\) that minimizes \(R(\tilde{\mathbf{W}})\) over all \(\mathbf{W}\)'s that satisfy the linear measurements, thus we have that
\[R(\tilde{\mathbf{W}})=L(d_{0}d_{L})^{1/L}\|E(\tilde{\mathbf{W}}) {\|_{*}}^{2(L-1)/L}=L(d_{0}d_{L})^{1/L}\min_{\mathcal{L}^{\prime}(M)=0}\|M{ \|_{*}}^{2(L-1)/L}. \tag{19}\]
Now from the definition of \(E(\mathbf{W}^{*})\),
\[\text{tr}(\nabla^{2}L)(\mathbf{W}^{*})\leq\text{tr}(\nabla^{2}L) (\tilde{\mathbf{W}})\leq(1+\delta)R(\tilde{\mathbf{W}}), \tag{20}\]
where the last inequality follows from the definition of \(W\). On the other hand
\[\text{tr}(\nabla^{2}L)(\mathbf{W}^{*})\geq(1-\delta)R(\tilde{\mathbf{W}})\geq( 1-\delta)L(d_{0}d_{L})^{1/L}\|E(\mathbf{W}^{*}){\|_{*}}^{2(L-1)/L}. \tag{21}\]
Combining (19), (20) and (21),
\[\|E(\mathbf{W}^{*})\|_{*}\leq(\frac{1+\delta}{1-\delta})^{\frac{L}{2(L-1)}} \min_{\mathcal{L}^{\prime}(M)=0}\|M\|_{*}.\]
The proof is completed by noting that \(\frac{L}{2(L-1)}\leq 1\) for all \(L\geq 2\).
Thus combining Example 1 and Theorem 1 with \(\delta=1/2\), we have the following corollary.
**Corollary 1**.: _Let \(\{A_{i}\}_{i=1}^{n}\) be sampled independently from Gaussian distribution \(\mathcal{G}_{d_{L}\times d_{0}}\) where \(n\geq\Omega((d_{0}+d_{L}))\), with probability at least \(1-\exp(\Omega(n))\), we have_
\[\|E(\mathbf{W}^{*})\|_{*}\leq 3\min_{\mathcal{L}^{\prime}(M)=0}\|M\|_{*}\leq 3\left\| E(\mathbf{W}^{*})\right\|_{*}.\]
### Recovering the Ground truth
In this section, we prove Theorem 2. The idea is to show that under RIP, the empirical loss \(\mathcal{L}(\mathbf{W})\) is a good approximation for the Frobenius distance of \(E(\mathbf{W})\) to the ground truth \(M^{*}\). To this end, we first introduce a very useful Lemma 5 below, whose proof is deferred to Appendix E.
**Lemma 5**.: _Suppose the measurements \(\{A_{i}\}_{i=1}^{n}\) satisfy the \((2,\delta)\)-RIP condition. Then for any matrix \(M\in\mathbb{R}^{d_{L}\times d_{0}}\), we have that_
\[\Big{|}\frac{1}{n}\sum\nolimits_{i=1}^{n}\left\langle A_{i},M\right\rangle^{2} -\|M\|_{F}^{2}\Big{|}\leq 2\delta\|M\|_{*}^{2}.\]
We note that if \(\{A_{i}\}_{i=1}^{n}\) are i.i.d. random matrices with each coordinate being independent, zero mean, and unit variance (like standard Gaussian distribution), then \(\|W-M^{*}\|_{F}^{2}\) is the population squared loss corresponding to \(W\). Thus, Theorem 2 implies a generalization bound for this case. Now we are ready to prove Theorem 2.
Proof of Theorem 2.: Note that from Theorem 1,
\[\|E(\mathbf{W}^{*})\|_{*}\leq\frac{1+\delta}{1-\delta}\min_{\mathcal{L}^{\prime}(M) =0}\|M\|_{*}\leq\frac{1+\delta}{1-\delta}\|M^{*}\|_{*},\]
which implies the following by triangle inequality,
\[\|E(\mathbf{W}^{*})-M^{*}\|_{*}\leq\|\tilde{E}(\mathbf{W}^{*})\|_{*}+\|M^{*}\|_{*}\leq \frac{2}{1-\delta}\|M^{*}\|_{*}. \tag{22}\]
Combining (22) with Lemma 5 (with \(M=E(\mathbf{W}^{*})-M^{*}\)):
\[\Big{|}\frac{1}{n}\sum\nolimits_{i=1}^{n}\left\langle A_{i},E(\mathbf{W}^{*})-M^{* }\right\rangle^{2}-\|E(\mathbf{W}^{*})-M^{*}\|_{F}^{2}\Big{|}\leq\frac{8\delta}{(1 -\delta)^{2}}\|M^{*}\|_{*}^{2}.\]
Since \(W^{*}\) satisfies the linear constraints \(\text{tr}(A_{i}E(\mathbf{W}^{*}))=b_{i}\), \(\frac{1}{n}\sum\nolimits_{i=1}^{n}\left\langle A_{i},E(\mathbf{W}^{*})-M^{*} \right\rangle^{2}=\frac{1}{n}\sum\nolimits_{i=1}^{n}\left(\left\langle A_{i},E (\mathbf{W}^{*})\right\rangle-b_{i}\right)^{2}=0\), which completes the proof.
### Generalization Bound
In this section, we prove the generalization bound in Theorem 3, which yields a faster rate of \(O(\frac{d_{0}+d_{L}}{n}\left\|M^{*}\right\|_{*}^{2})\) compared to \(O(\sqrt{\frac{d_{0}+d_{L}}{n}}\left\|M^{*}\right\|_{*}^{2})\) in Theorem 2. The intuition for this is as follows: By Corollary 1, we know that with very high probability, the learned solution has a bounded nuclear norm for its end-to-end matrix, no larger than \(3\left\|M^{*}\right\|_{2}\), where \(M^{*}\) is the ground truth. The key mathematical tool is Theorem 6, which provides an upper bound on the population error of the learned interpolation solution that is proportional to the square of the Rademacher complexity of the function class \(\mathcal{H}_{3\|M^{*}\|_{*}}=\{h_{M}\mid\left\|M\right\|_{*}\leq 3\left\|M^{*} \right\|_{*}\}\).
**Theorem 6** (Theorem 1, Srebro et al. [44]).: _Let \(\mathcal{H}\) be a class of real-valued functions and \(\ell:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) be a differentiable non-negative loss function satisfying that (1) for any fixed \(y\in\mathbb{R}\), the partial derivative \(\ell(\cdot,y)\) with respect to its first coordinate is \(H\)-Lipschitz and (2) \(|\sup_{x,y}\ell(x,y)|\leq B\), where \(H,B\) are some positive constants. Then for any \(p>0\), we have that with probability at least \(1-p\) over a random sample of size \(n\), for any \(h\in\mathcal{H}\) with zero training loss,_
\[\tilde{\mathcal{L}}(h)\leq O\left(H\log^{3}n\mathcal{R}_{n}^{2}(\mathcal{H})+ \frac{B\log(1/p)}{n}\right). \tag{23}\]
One technical difficulty is that Theorem 6 only works for bounded loss functions, but the \(\ell_{2}\) loss on Gaussian data is unbounded. To circumvent this issue, we construct a smoothly truncated variant of \(\ell_{2}\) loss (41) and apply Theorem 6 on that. Finally, we show that with a carefully chosen threshold, this truncation happens very rarely and, thus, does not change the population loss significantly. The proof can be found in Appendix E.
## 6 Result for the Single Measurement Case
Quite surprisingly, even though in the general case we cannot compute the closed-form of the induced regularizer in (12), we can find its minimum as a quasinorm function of the \(E(\mathbf{W})\) which only depends on the singular values of \(E(\mathbf{W})\). This yields the following result for multiple layers \(L\) (possibly \(L>2\)) with a single measurement.
**Theorem 7**.: _Suppose there is only a single measurement matrix \(A\), i.e., \(n=1\). For any \(M\in\mathbb{R}^{d_{L}\times d_{0}}\), the following holds:_
\[F(M)=\min_{W_{L}\dots W_{1}=M}\text{tr}[\nabla^{2}\mathcal{L}](\mathbf{W})=L\left\| \left(A^{T}M\right)^{L-1}A^{T}\right\|_{S_{2/L}}^{2/L}. \tag{24}\]
To better illustrate the behavior of this induced regularizer, consider the case where the measurement matrix \(A\) is identity and \(M\) is symmetric with eigenvalues \(\{\sigma_{i}\}_{i=1}^{d}\). Then, it is easy to see that \(F(M)\) in (24) is equal to \(F(M)=\sum_{i}\sigma_{i}^{2(L-1)/L}\). Interestingly, we see that the value of \(F(M)\) converges to the Frobenius norm of \(M\) and not the nuclear norm as \(L\) becomes large, which behaves quite differently (e.g. in the context of sparse recovery). This means that beyond RIP, the induced regularizer can behave very differently, and perhaps the success of training deep networks with SGD is closely tied to the properties of the dataset.
## 7 Conclusion and Future Directions
In this paper, we study the inductive bias of the minimum trace of the Hessian solutions for learning deep linear networks from linear measurements. We show that trace of Hessian regularization of loss on the end-to-end matrix of deep linear networks roughly corresponds to nuclear norm regularization under restricted isometry property (RIP) and yields a way to recover the ground truth matrix. Furthermore, leveraging this connection with the nuclear norm regularization, we show a generalization bound which yields a faster rate than Frobenius (or \(\ell_{2}\) norm) regularizer for Gaussian distributions. Finally, going beyond RIP conditions, we obtain closed-form solutions for the case of a single measurement. Several avenues for future work remain open, e.g., more general characterization of trace of Hessian regularization beyond RIP settings and understanding it for neural networks with non-linear activations.
## Acknowledgement
TM and ZL would like to thank the support from NSF IIS 2045685. KG and SJ acknowledge support by NSF award CCF-2112665 (TILOS AI Institute) and NSF award 2134108.
Figure 1: **Train and test loss. Label noise SGD leads to better generalization results due to the sharpness-minimization implicit biases (as shown in Figure 2), while mini-batch SGD without label noise finds solutions with much larger test loss.**
## References
* [1]M. Andriushchenko and N. Flammarion (2022) Towards understanding sharpness-aware minimization. In International Conference on Machine Learning, pp. 639-668. Cited by: SS1.
* [2]M. Andriushchenko, F. Croce, M. Muller, M. Hein, and N. Flammarion (2023) A modern look at the relationship between sharpness and generalization. arXiv preprint arXiv:2302.07011. Cited by: SS1.
* [3]S. Arora, N. Cohen, W. Hu, and Y. Luo (2019) Implicit regularization in deep matrix factorization. In Advances in Neural Information Processing Systems, pp. 7411-7422. Cited by: SS1.
* [4]S. Arora, Z. Li, and A. Panigrahi (2022) Understanding gradient descent on edge of stability in deep learning. arXiv preprint arXiv:2205.09745. Cited by: SS1.
* [5]M. Ali Belabbas (2020) On implicit regularization: Morse functions and applications to matrix factorization. arXiv preprint arXiv:2001.04264. Cited by: SS1.
* [6]G. Blanc, N. Gupta, G. Valiant, and P. Valiant (2019) Implicit regularization for deep neural networks driven by an ornstein-uhlenbeck like process. arXiv preprint arXiv:1904.09080. Cited by: SS1.
* [7]E. J. Candes and Y. Plan (2011) Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory57 (4), pp. 2342-2359. Cited by: SS1.
* [8]J. M. Cohen, S. Kaur, Y. Li, J. Z. Kolter, and A. Talwalkar (2021) Gradient descent on neural networks typically occurs at the edge of stability. Cited by: SS1.
* [9]J. M. Cohen, B. Ghorbani, S. Krishnan, N. Agarwal, S. Medapati, M. Badura, D. Suo, D. Cardoze, Z. Nado, G. E. Dahl, et al. (2022) Adaptive gradient methods at the edge of stability. arXiv preprint arXiv:2207.14484. Cited by: SS1.
* [10]A. Damian, T. Ma, and J. Lee (2021) Label noise sgd provably prefers flat global minimizers. Cited by: SS1.
* [11]A. Damian, E. Nichani, and J. D. Lee (2022) Self-stabilization: the implicit bias of gradient descent at the edge of stability. arXiv preprint arXiv:2209.15594. Cited by: SS1.
* [12]L. Ding, D. Drusvyatskiy, and M. Fazel (2022) Flat minima generalize for low-rank matrix recovery. arXiv preprint arXiv:2203.03756. Cited by: SS1.
* [13]L. Dinh, R. Pascanu, S. Bengio, and Y. Bengio (2017) Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1019-1028. Cited by: SS1.
* [14]F. Draxler, K. Veschgini, M. Salmhofer, and F. Hamprecht (2018) Essentially no barriers in neural network energy landscape. In International conference on machine learning, pp. 1309-1318. Cited by: SS1.
* [15]G. Karolina Dziugaite and D. M. Roy (2017) Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008. Cited by: SS1.
* [16]P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur (2020) Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412. Cited by: SS1.
* [17]P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur (2021) Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, Cited by: SS1.
* [18]T. Garipov, P. Izmailov, D. Podoprikhin, D. P. Vetrov, and A. G. Wilson (2018) Loss surfaces, mode connectivity, and fast ensembling of dnns. Advances in neural information processing systems31. Cited by: SS1.
* [19] Daniel Gissin, Shai Shalev-Shwartz, and Amit Daniely. The implicit bias of depth: How incremental learning drives generalization. _arXiv preprint arXiv:1909.12051_, 2019.
* [20] Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. In _Advances in Neural Information Processing Systems_, pages 6151-6159, 2017.
* [21] Sepp Hochreiter and Jurgen Schmidhuber. Flat minima. _Neural Computation_, 9(1):1-42, 1997.
* [22] Arthur Jacot, Francois Ged, Franck Gabriel, Berlin Simsek, and Clement Hongler. Deep linear networks dynamics: Low-rank biases induced by initialization scale and l2 regularization. _arXiv preprint arXiv:2106.15933_, 3, 2021.
* [23] Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in sgd. _arXiv preprint arXiv:1711.04623_, 2017.
* [24] Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them. _arXiv preprint arXiv:1912.02178_, 2019.
* [25] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. _arXiv preprint arXiv:1609.04836_, 2016.
* [26] Jungmin Kwon, Jeongseop Kim, Hyunseo Park, and In Kwon Choi. Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In _International Conference on Machine Learning_, pages 5905-5914. PMLR, 2021.
* [27] Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. _arXiv preprint arXiv:1712.09203_, pages 2-47, 2017.
* [28] Zhiyuan Li, Yuping Luo, and Kaifeng Lyu. Towards resolving the implicit bias of gradient descent for matrix factorization: Greedy low-rank learning. _arXiv preprint arXiv:2012.09839_, 2020.
* [29] Zhiyuan Li, Tianhao Wang, and Sanjeev Arora. What happens after sgd reaches zero loss?-a mathematical framework. In _International Conference on Learning Representations_, 2021.
* [30] Zhouzi Li, Zixuan Wang, and Jian Li. Analyzing sharpness along gd trajectory: Progressive sharpening and edge of stability. _arXiv preprint arXiv:2207.12678_, 2022.
* [31] Hong Liu, Sang Michael Xie, Zhiyuan Li, and Tengyu Ma. Same pre-training loss, better downstream: Implicit bias matters for language models. 2022.
* [32] Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, and Yang You. Towards efficient and scalable sharpness-aware minimization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12360-12370, 2022.
* [33] Kaifeng Lyu, Zhiyuan Li, and Sanjeev Arora. Understanding the generalization benefit of normalization layers: Sharpness reduction. _arXiv preprint arXiv:2206.07085_, 2022.
* [34] Chao Ma and Lexing Ying. On linear stability of sgd and input-smoothness of neural networks. _Advances in Neural Information Processing Systems_, 34:16805-16817, 2021.
* [35] Chao Ma, Lei Wu, and Lexing Ying. The multiscale structure of neural network loss functions: The effect on optimization and origin. _arXiv preprint arXiv:2204.11326_, 2022.
* [36] Cong Ma, Kaizheng Wang, Yuejie Chi, and Yuxin Chen. Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval and matrix completion. In _International Conference on Machine Learning_, pages 3345-3354. PMLR, 2018.
* [37] Mor Shpigel Nacson, Kavya Ravichandran, Nathan Srebro, and Daniel Soudry. Implicit bias of the step size in linear diagonal neural networks. In _International Conference on Machine Learning_, pages 16270-16295. PMLR, 2022.
* [38] Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In _Advances in Neural Information Processing Systems_, pages 5947-5956, 2017.
* [39] Matthew D Norton and Johannes O Royset. Diametrical risk minimization: Theory and computations. _Machine Learning_, pages 1-19, 2021.
* [40] Noam Razin and Nadav Cohen. Implicit regularization in deep learning may not be explainable by norms. _arXiv preprint arXiv:2005.06398_, 2020.
* [41] Noam Razin, Asaf Maman, and Nadav Cohen. Implicit regularization in tensor factorization. In _International Conference on Machine Learning_, pages 8913-8924. PMLR, 2021.
* [42] Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. _SIAM review_, 52(3):471-501, 2010.
* [43] Mark Rudelson and Roman Vershynin. Non-asymptotic theory of random matrices: extreme singular values. In _Proceedings of the International Congress of Mathematicians 2010 (ICM 2010) (In 4 Volumes) Vol. 1: Plenary Lectures and Ceremonies Vols. II-IV: Invited Lectures_, pages 1576-1602. World Scientific, 2010.
* [44] Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. _Advances in neural information processing systems_, 23, 2010.
* [45] Dominik Stoger and Mahdi Soltanolkotabi. Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction. _Advances in Neural Information Processing Systems_, 34:23831-23843, 2021.
* [46] Colin Wei and Tengyu Ma. Data-dependent sample complexity of deep neural networks via lipschitz augmentation. In _Advances in Neural Information Processing Systems_, pages 9722-9733, 2019.
* [47] Colin Wei and Tengyu Ma. Improved sample complexities for deep networks and robust classification via an all-layer margin. _arXiv preprint arXiv:1910.04284_, 2019.
* [48] Kaiyue Wen, Tengyu Ma, and Zhiyuan Li. How does sharpness-aware minimization minimize sharpness? _arXiv preprint arXiv:2211.05729_, 2022.
* [49] Dongxian Wu, Shu-Tao Xia, and Yisen Wang. Adversarial weight perturbation helps robust generalization. _Advances in Neural Information Processing Systems_, 33:2958-2969, 2020.
* [50] Lei Wu, Chao Ma, and Weinan E. How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective. _Advances in Neural Information Processing Systems_, 31, 2018.
* [51] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In _International Conference on Learning Representations (ICLR)_, 2017.
* [52] Yang Zhao, Hao Zhang, and Xiuyuan Hu. Penalizing gradient norm for efficiently improving generalization in deep learning. _arXiv preprint arXiv:2202.03599_, 2022.
* [53] Yaowei Zheng, Richong Zhang, and Yongyi Mao. Regularizing neural networks via adversarial model perturbation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8156-8165, 2021.
* [54] Juntang Zhuang, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha Dvornek, Sekhar Tatikonda, James Duncan, and Ting Liu. Surrogate gap minimization improves sharpness-aware training. _arXiv preprint arXiv:2203.08065_, 2022. | ## Review
### Summary
This paper investigates the implicit regularization properties of deep linear networks by analyzing the trace of the Hessian during training. Specifically, it examines how minimizing the Hessian trace is approximately equivalent to minimizing the nuclear norm, thereby providing generalization guarantees under certain assumptions, such as the Restricted Isometry Property (RIP). The authors derive theoretical results related to matrix recovery and generalization performance, highlighting the impact of flatness on deep matrix factorizations. The study is presented as a significant contribution to understanding the behavior of optimizers in this constrained setting, particularly in relation to the effectiveness of flatness regularization.
### Strengths
- Well-written and clear presentation, making the content accessible even to readers less familiar with the literature.
- Helpful summaries of key results are provided early in the paper.
- Thorough notation and extensive related work sections enhance understanding.
- Interesting problem investigated regarding the role of Hessian trace in optimization and generalization.
- Theoretical results indicate that trace regularization can reduce the number of measurements needed for effective matrix recovery.
### Weaknesses
- The practical significance of the theoretical results is limited due to restrictive assumptions, such as linear activations and specific conditions like RIP.
- Presentation could improve by clarifying assumptions earlier and providing a broader context for the applicability of results.
- Lack of empirical validation of the theoretical findings, particularly under realistic conditions.
- Concerns regarding the relevance of the trace of the Hessian as a measure of flatness, especially compared to other measures like the largest eigenvalue.
### Questions
- Could the authors clarify the use of the term 'end-to-end parameters' as it may be misleading in the context of existing literature?
- What are the implications of using minimal Hessian trace versus L2 regularization for recovery of the minimal nuclear norm solution?
- Can other notions of sharpness, such as the maximal eigenvalue of the Hessian, provide alternative insights into generalization in this setting?
- Could the authors specify the (1, δ)-RIP condition more explicitly at the time of stating Theorem 1?
### Soundness
**Score:** 3
**Description:** The theoretical results are solid, but the reliance on restrictive assumptions raises questions about general applicability.
### Presentation
**Score:** 3
**Description:** The paper is generally well-presented, though some clarifications on assumptions and results could enhance comprehension.
### Contribution
**Score:** 3
**Description:** The paper makes a meaningful contribution to the understanding of flatness in deep learning, though further empirical work is needed to validate its claims.
### Rating
**Score:** 7
**Description:** Accept: The paper is technically solid with moderate-to-high impact, though it requires minor improvements in empirical validation and practical relevance.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper presents original theoretical insights into the relationship between Hessian trace and generalization in deep linear networks. Despite some limitations regarding the practical implications of its findings and the restrictive nature of its assumptions, the clarity of presentation and the significance of the results justify its acceptance. The decision reflects the paper's potential to stimulate further research in the area of implicit regularization and optimization in deep learning.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Estimating Propensity for Causality-based Recommendation without Exposure Data
Zhongzhou Liu
School of Computing and Information Systems
Singapore Management University
Singapore, 178902
[email protected]
Yuan Fang
School of Computing and Information Systems
Singapore Management University
Singapore, 178902
[email protected]
Min Wu
Institute for Infocomm Research
A*STAR
Singapore, 138632
[email protected]
Corresponding author
###### Abstract
Causality-based recommendation systems focus on the causal effects of user-item interactions resulting from item exposure (i.e., which items are recommended or exposed to the user), as opposed to conventional correlation-based recommendation. They are gaining popularity due to their multi-sided benefits to users, sellers and platforms alike. However, existing causality-based recommendation methods require additional input in the form of exposure data and/or propensity scores (i.e., the probability of exposure) for training. Such data, crucial for modeling causality in recommendation, are often not available in real-world situations due to technical or privacy constraints. In this paper, we bridge the gap by proposing a new framework, called Propensity Estimation for Causality-based Recommendation (PropCare). It can estimate the propensity and exposure from a more practical setup, where only interaction data are available _without_ any ground truth on exposure or propensity in training and inference. We demonstrate that, by relating the pairwise characteristics between propensity and item popularity, PropCare enables competitive causality-based recommendation given only the conventional interaction data. We further present a theoretical analysis on the bias of the causal effect under our model estimation. Finally, we empirically evaluate PropCare through both quantitative and qualitative experiments.
## 1 Introduction
Recommendation systems have been widely deployed in many real-world applications, such as streaming services [34, 5], online shopping [17] and job searching [19]. The primary aim of recommendation systems, such as boosting sales and user engagement [10], depends heavily on user interactions, such as clicking on or purchasing items. Hence, a classical paradigm is to predict user-item interactions, and accordingly, recommend items with the highest probability of being interacted (e.g., clicked or purchased) to users [8, 22, 33, 35]. This paradigm ignores the causal impact behind recommendation [31]: If an item already has a high probability of being interacted by a user without being recommended, _is there really a need to recommend the item to this user?_Recently, a few studies [38; 30; 28; 37] have shifted the focus to this question. They aim to recommend an item based on the uplift, also called the _causal effect_, in the user's behavior (e.g., clicks or purchases) caused by different treatments (i.e., recommending/exposing the item or not) [9]. Such causality-based recommendation systems posit that recommending items with a higher causal effect carries greater merit than those with a higher interaction probability. Typical approaches involve quantifying the causal effect in the user's behavior, based on the observed data and the counterfactual treatment [37]. Existing works assume that the exposure data (i.e., whether an item has been recommended to a user or not), or the propensity scores [25] (i.e., the probability of recommending/exposing an item to a user), are observable at least during the training stage. However, in real-world scenarios, those data are often unavailable. For instance, while it is feasible to log each user who purchased an item in a e-commerce platform, it may be difficult to distinguish between purchases made with or without exposure due to technical and privacy constraints in determining if a user has been exposed to the item a priori. Without the exposure data and/or propensity scores provided during training, existing causality-based recommenders cannot be deployed.
Toward practical causality-based recommendation, we consider a more relaxed and realistic setup where exposure and propensity scores are not observable. Although some previous works [38; 42; 1; 14; 21] have attempted to estimate propensity scores in a different context (e.g., addressing biases in recommendation), they further suffer two key limitations. First, most state-of-the-art methods still require exposure data to train the propensity estimator [38; 1; 14]. Second, they fail to integrate prior knowledge into the propensity estimator, resulting in less robust estimation. To address these challenges and bridge the data gap in many recommendation scenarios and benchmarks, we propose a novel framework of **Prop**ensity Estimation for **C**ausality-based **R**ecommendation (PropCare), to estimate the propensity score and exposure of each item for each user. Specifically, we observe a pairwise characteristic that relates propensity scores and item popularity when the probability of user-item interaction is well controlled. (The observation is formalized as Assumption 1 and empirically validated in Sect. 4.2.) Based on the observation, we incorporate item popularity as prior knowledge to guide our propensity estimation. Furthermore, we present a theoretical analysis on the bias of the estimated causal effect. The analysis enables us to investigate the factors that influence our estimation and subsequently guide our model and experiment design.
In summary, we compare previous propensity estimation and PropCare in Fig. 1, highlighting our key advantages: PropCare does not need propensity or exposure data at all, and incorporates prior information for robust estimation. The contributions of this paper include the following. (1) Our proposed framework bridges the gap in existing causality-based recommendation systems, where the propensity score and/or exposure data are often unavailable but required for model training or inference. (2) We incorporate the pairwise relationship between propensity and item popularity as prior knowledge for more robust propensity estimation. We present a further analysis on the factors that influence our model. (3) We conduct extensive experiments to validate the effectiveness of PropCare through both quantitative and qualitative results.
## 2 Related Work
Causal effect estimation in recommendationWhile typical recommendation systems consider positive feedback or interactions like clicks and purchases as successful, it may be more beneficial to optimize the uplift in interactions, also called the causal effect, solely caused by recommendations [18]. However, obtaining the causal effect in real-world scenarios is challenging because of its
Figure 1: Causal diagrams under different frameworks. \(\mathrm{pop}_{i}\) is the popularity (prior) of item \(i\). \(Y_{u,i}\) indicates if user \(u\) interacts with item \(i\). \(Z_{u,i}\) indicates if item \(i\) is exposed to user \(u\).
counterfactual nature [9]. Conducting online A/B tests to compare exposure strategies may be feasible but expensive and susceptible to selection bias [27]. To address these issues, several causal effect estimators have been proposed. The naive estimator [30] assumes random exposure assignment to all user-item pairs, which is inconsistent with most recommendation scenarios. The inverse propensity score (IPS) estimator [30] incorporates the propensity score, defined as the probability of exposure [25], to overcome this limitation. Direct model estimators like CausCF [38] directly predict outcomes using parametric models based on different exposure statuses. A recently proposed doubly robust estimator [37] integrates a parametric model with the non-parametric IPS estimator for reduced bias and variance. However, these estimators require access to input data containing propensity scores and/or exposure data, at least during the training stage, which are often unavailable due to technical and privacy limitations.
Propensity estimation in recommendationExisting causal effect estimation approaches require exposure data and/or propensity scores at least in training, which are frequently unavailable or subject to the missing-not-at-random (MNAR) issue [32]. Hence, we have to rely on their estimations. Some methods estimate propensity in a heuristic way, such as using item popularity [30] or other side information (e.g., items participating in promotional campaigns) [28]. However, these estimations lack personalization and may result in noisy results. Other approaches utilize interaction models (also called click models) [24; 3; 42; 21] to relate propensity scores, relevance and interactions. However, without additional constraints, the interaction model alone can be difficult to optimize as we will elaborate in Sect. 4.1. Besides, matrix factorization [16; 38], linear regression [28], dual learning [21] and doubly robust learning [14] can also learn propensity scores, but they assume exposure data as training labels or known variables, which is incompatible with our setup without any observable propensity or exposure data.
## 3 Preliminaries
Data notationsConsider a typical recommendation dataset that contains only interactions between users and items, such as purchases or clicks. Let \(Y_{u,i}\in\{0,1\}\) denote the observed interaction between user \(u\in\{1,2,\ldots,U\}\) and item \(i\in\{1,2,\ldots,I\}\). \(D=\{(Y_{u,i})\}\) denotes the collection of observed training user-item interaction data. Note that our framework does not assume the availability of any additional data except the interaction data. Moreover, let \(Z_{u,i}\in\{0,1\}\) denote an _unobservable_ indicator variable for exposure, i.e., \(Z_{u,i}=1\) iff item \(i\) is exposed/recommended to user \(u\). We use \(p_{u,i}\) to represent the propensity score, which is defined as the probability of exposure, i.e., \(p_{u,i}=P(Z_{u,i}=1)\).
Causal effect modellingLet \(Y^{0}_{u,i}\) and \(Y^{1}_{u,i}\in\{0,1\}\) be the potential outcomes for different exposure statuses. Specifically, \(Y^{1}_{u,i}\) is defined as the interaction between user \(u\) and item \(i\) when \(i\) has been exposed to \(u\). Accordingly, \(Y^{0}_{u,i}\) is the interaction when \(i\) has not been exposed to \(u\). This setup assumes a counterfactual model: In the real world only one of the scenarios can happen, but not both. Subsequently, the causal effect \(\tau_{u,i}\in\{-1,0,1\}\) is defined as the difference between the two potential outcomes [26], i.e., \(\tau_{u,i}=Y^{1}_{u,i}-Y^{0}_{u,i}\). In other words, \(\tau_{u,i}=1\) means recommending item \(i\) to user \(u\) will increase the interaction between \(u\) and \(i\) and \(\tau_{u,i}=-1\) means the opposite. \(\tau_{u,i}=0\) means recommending or not will not change the user's interaction behavior. Naturally, users, sellers and platforms could all benefit from recommendations that result in positive causal effects.
Causal effect estimationThe causal effect cannot be directly computed based on observed data due to its counterfactual nature. Among the various estimators introduced in Sect. 2, direct parametric models [38; 37] are sensitive to the prediction error of potential outcomes [37]. Hence, high-quality labeled exposure data are required in parametric models, which is not the setup of this work. To avoid this issue, we adopt a nonparametric approach, known as the inverse propensity score (IPS) estimator [30], for causal effect estimation as follows.
\[\hat{\tau}_{u,i}=\frac{Z_{u,i}Y_{u,i}}{p_{u,i}}-\frac{(1-Z_{u,i})Y_{u,i}}{1-p_ {u,i}}. \tag{1}\]Interaction modelIn line with prior works [21; 42; 39], we adopt an interaction model 1[24; 3] that assumes the following relationship between interactions, propensity and relevance:
Footnote 1: Also called the “click” model when the interaction refers to click in some literature.
\[y_{u,i}=p_{u,i}r_{u,i}, \tag{2}\]
where \(y_{u,i}=P(Y_{u,i}=1)\) is the probability of interaction between user \(u\) and item \(i\), and \(r_{u,i}\) represents the probability that item \(i\) is relevant to user \(u\).
## 4 Proposed Approach: PropCare
In this section, we introduce our propensity estimation approach PropCare. We start with a naive approach, followed by our observation on prior knowledge, before presenting the overall loss for propensity learning and how the learned propensity can be used for causality-based recommendation. We end the section by discussing a theoretical property of our estimation.
### Naive propensity estimator
The overall objective is to estimate propensity scores and exposure from a more practical setup where only interaction data are observable. Since the propensity score \(p_{u,i}\) is the probability of exposure \(P(Z_{u,i}=1)\), we focus on the estimation of propensity scores, whereas the corresponding exposure can be readily sampled based on the propensity. The interaction model in Eq. (2) intuitively leads us to the naive loss function below.
\[\mathcal{L}_{\text{naive}}=-Y_{u,i}\log f_{p}(\mathbf{x}_{u,i};\Theta_{p})f_{ r}(\mathbf{x}_{u,i};\Theta_{r})-(1-Y_{u,i})\log(1-f_{p}(\mathbf{x}_{u,i}; \Theta_{p})f_{r}(\mathbf{x}_{u,i};\Theta_{r})), \tag{3}\]
where \(\mathbf{x}_{u,i}=f_{e}(u,i;\Theta_{e})\) is a joint user-item embedding output by a learnable embedding function \(f_{e}\); \(f_{p}\) and \(f_{r}\) are learnable propensity and relevance functions to produce the estimated propensity score \(\hat{p}_{u,i}\) and relevance probability \(\hat{r}_{u,i}\), respectively. Note that each learnable function \(f_{*}\) is parameterized by \(\Theta_{*}\), and we implement each as a multi-layer perceptron (MLP).
However, through the naive loss we cannot learn meaningful propensity and relevance functions (\(f_{p}\) and \(f_{r}\)), since they are always coupled in a product and can be collapsed into one function. It is equivalent to learning a single interaction function, instead of learning each individual factor.
### Incorporating prior knowledge
To avoid the above issue, one solution is to introduce prior knowledge to further constrain the propensity or relevance function. In particular, it has been observed that _more popular items will have a higher chance to be exposed_[41]. The popularity of item \(i\), \(\operatorname{pop}_{i}\), is defined based on the total number of observed interactions in the dataset, i.e., \(\operatorname{pop}_{i}=\sum_{u=1}^{U}Y_{u,i}/\sum_{j=1}^{I}\sum_{u=1}^{U}Y_{u,j}\). However, this observation [41], while intuitive, is not adequate in explaining the relationship between popularity and exposure. In particular, items with a higher interaction probability also tend to have a higher chance to be exposed, especially when prior exposure was decided by recommenders in the classical paradigm. To incorporate popularity as a prior toward propensity/exposure estimation, we propose to introduce a control on the interaction probability, and formulate the following assumption.
**Assumption 1** (Pairwise Relationship on Popularity and Propensity): _Consider a user \(u\) and a pair of items \((i,j)\). Suppose the popularity of item \(i\) is greater than that of \(j\), and their interaction probabilities with user \(u\) are similar. Then it follows that item \(i\) is more likely to be exposed to user \(u\) than item \(j\) is. \(\square\)_
The intuition is that, when a user's interaction probabilities are similar toward two items \(i\) and \(j\), but item \(i\) is more likely to be exposed to the user, the reason could be item \(i\) is more popular than \(j\). Our assumption essentially places a control on the interaction probability to eliminate its influence on the exposure, and simultaneously isolate the effect of popularity on the exposure.
Empirical validation of Assumption 1In the following, we examine our assumption by calculating the fraction of item pairs that satisfy this assumption in three datasets, namely, DH_original,DH_personalized and ML (see Sect. 5.1 for dataset descriptions). Specifically, we first estimate the probability \(y_{u,i}\) of each interaction \(Y_{u,i}\) using logistic matrix factorization [11]. We also obtain the propensity score \(p_{u,i}\) from ground truth values provided by the datasets (note that we only use the ground truth for evaluation purposes, not in model training or inference). Then, for each user \(u\), we place an item pair \((i,j)\), where a randomly sampled \(i\) is paired with each of the remaining items, into several bins based on \(i\) and \(j\)'s similarity in their interaction probability with \(u\). More specifically, each bin \(b\) contains \((i,j)\) pairs such that \(|y_{u,j}-y_{u,i}|\) falls into \(b\)'s boundaries. Finally, we compute the ratio of \((i,j)\) pairs consistent with Assumption 1 to the total pairs in each bin \(b\), as follows.
\[\mathrm{ratio}_{b}=\frac{1}{U}\sum_{u=1}^{U}\frac{\text{\# item pairs $(i,j)$ for user $u$ in bin $b$\;s.\;t.\;$(p_{u,j}-p_{u,i})(\mathrm{pop}_{j}-\mathrm{pop}_{i})>0$}}{\text{\# item pairs $(i,j)$ sampled for user $u$ in bin $b$\;}}. \tag{4}\]
We report the ratios in Fig. 2. It can be observed that when \(|y_{u,j}-y_{u,i}|\) is smaller (i.e., \(i\) and \(j\)'s interaction probabilities with \(u\) are more similar), a higher fraction of items pairs in the bin satisfy our assumption. In contrast, when \(|y_{u,j}-y_{u,i}|\) grows larger (i.e., the interaction probabilities are not well controlled and become less similar), the validity of the original observation [41] becomes weaker. In summary, the results on the three datasets demonstrate the validity of Assumption 1.
Integrating prior knowledgeBased on Assumption 1, we utilize item popularity to inject prior knowledge on the probability of exposure (i.e., propensity score) through the following loss.
\[-\log\left[\sigma(f_{p}(\mathbf{x}_{u,i})-f_{p}(\mathbf{x}_{u,j}))\right]\; \mathrm{s.t.\;pop}_{i}>\mathrm{pop}_{j},\;y_{u,i}\approx y_{u,j}, \tag{5}\]
where \(\sigma\) is the sigmoid activation and \(y_{u,i}\) is computed as \(f_{p}(\mathbf{x}_{u,i})f_{r}(\mathbf{x}_{u,i})\). While Eq. (3) models propensity in a point-wise manner, Eq. (5) incorporates popularity as prior knowledge in a pairwise manner. The advantage is twofold. First, it decouples the propensity and relevance functions, using only item popularity which can be readily computed from the interaction data shown earlier without the need for external information. Second, by separating the estimated propensity of less popular items and more popular items, it prevents all predicted values from clustering in a narrow range near 1 or 0. This is beneficial in mitigating the issue of high variance caused by extreme values [37].
To materialize the control \(y_{u,i}\approx y_{u,j}\) on the interaction probabilities in Eq. (5), we adopt the following loss that involves a soft version of \(y_{u,i}\approx y_{u,j}\).
\[\mathcal{L}_{\mathrm{pop}}=-\kappa_{u,i,j}\log\left[\sigma(\mathrm{sgn}_{i,j} \cdot(f_{p}(\mathbf{x}_{u,i})-f_{p}(\mathbf{x}_{u,j})))+\sigma(\mathrm{sgn}_{ i,j}\cdot(f_{r}(\mathbf{x}_{u,j})-f_{r}(\mathbf{x}_{u,i})))\right], \tag{6}\]
where \(\mathrm{sgn}_{i,j}\in\{1,-1\}\) is the sign of \((\mathrm{pop}_{i}-\mathrm{pop}_{j})\) and \(\kappa_{u,i,j}\) is a weighting function such that it will assign a higher weight if \(y_{u,i}\) and \(y_{u,j}\) are closer. Specifically, we choose \(\kappa_{u,i,j}=e^{\eta(y_{u,i}-y_{u,j})^{2}}\), where \(\eta<0\) is a learnable parameter. Moreover, according to the interaction model in Eq. (2), for a fixed \(y_{u,i}\), a higher \(p_{u,i}\) implies a lower \(r_{u,i}\). This explains the additional constraint on the relevance function \(f_{r}\) in Eq. (6), which will further improve model training.
### Propensity learning
Based on the discussions in Sect. 4.1-4.2, the naive loss essentially optimizes the interaction model, whereas the pairwise loss utilizes popularity as prior information for propensity learning. For more robust learning, we further take a global view on the distribution of propensity scores, which usually follow a long-tailed distribution [40; 42]. In particular, we employ a beta distribution to regularize the
Figure 2: Histogram of item pairs \((i,j)\) that satisfy Assumption 1. The bins are based on the inverse similarity in interaction probabilities, \(|y_{u,j}-y_{u,i}|\), divided by \(\{0,0.01,\ldots,0.09,0.1,0.2,\ldots,0.5\}\). That is, the first 10 bins have an equal width of \(0.01\) and the last 4 bins have an equal width of \(0.1\).
propensity scores, as has been done in literature in modeling propensity or other long-tailed quantities [4; 15]. Overall, we minimize the following loss toward propensity learning:
\[\min_{\Theta}\mathcal{L}=\sum_{u,i,j}(\mathcal{L}_{\text{naive}}+\lambda\mathcal{ L}_{\text{pop}})+\mu\mathrm{KL}(Q\|\mathrm{Beta}(\alpha,\beta)). \tag{7}\]
Here \(Q\) is the empirical distribution of all estimated propensity scores \(\hat{p}_{u,i}\). \(\mathrm{Beta}(\alpha,\beta)\) is a reference beta distribution with parameters \(\alpha\) and \(\beta\) which are selected to simulate a long-tailed shape. \(\mathrm{KL}(\cdot\|\cdot)\) computes the Kullback-Leibler divergence between two distributions. \(\lambda\) and \(\mu\) are trade-off hyperparameters to balance different terms.
Finally, we use the estimated propensity score \(\hat{p}_{u,i}\) to predict the exposure variable \(Z_{u,i}\): \(\hat{Z}_{u,i}=1\) if \(\mathrm{Norm}(\hat{p}_{u,i})\geq\epsilon\), and 0 otherwise, where \(\epsilon\) is a threshold hyperparameter and \(\mathrm{Norm}\) is a normalization function such as \(Z\)-score normalization. The overall training steps are sketched in Algorithm 1 in Appendix A.
### Causality-based recommendation
We resort to DLCE [30], a state-of-the-art causality-based recommender equipped with an IPS estimator. It takes interaction \(Y_{u,i}\), exposure \(Z_{u,i}\) and propensity \(p_{u,i}\) as input, and outputs a ranking score \(\hat{s}_{u,i}\) for each user-item pair. Given a triplet \((u,i,j)\) such that \(u\) is a user and \(i\neq j\) are randomly sampled from the item set, the loss of DLCE is defined as follows [30].
\[\frac{Z_{u,i}Y_{u,i}}{\max(p_{u,i},\chi^{1})}\log\left(1+e^{-\omega(\hat{s}_{u,i}-\hat{s}_{u,j})}\right)+\frac{(1-Z_{u,i})Y_{u,i}}{\max(1-p_{u,i},\chi^{0})} \log\left(1+e^{\omega(\hat{s}_{u,i}-\hat{s}_{u,j})}\right), \tag{8}\]
where \(\chi^{1},\chi^{0}\) and \(\omega\) are hyperparameters. We follow the standard training procedure of DLCE, except that we substitute the ground-truth exposure and propensity score with our estimated values \(\hat{Z}_{u,i}\) and \(\hat{p}_{u,i}\), respectively, in the above loss. Hence, the entire training process for our propensity learning and DLCE do not require any ground-truth exposure or propensity data. After DLCE is trained, for each user \(u\), we generate a ranked list of all items based on the optimized \(\hat{s}_{u,i}\).
### Theoretical property
The performance of causality-based recommendation depends on how accurate we can model the causal effect in the user-item interactions. Although it has been established elsewhere [30] that the IPS estimator defined in Eq. (1) is unbiased as long as exposure \(Z_{u,i}\) and propensity score \(p_{u,i}\) are correctly assigned, in our setup only estimated propensity scores and exposure are available. Thus, we characterize the bias of the IPS estimator when estimations are used instead.
**Proposition 1**: _Suppose we replace the ground truth values of \(Z_{u,i}\) and \(p_{u,i}\) with the estimated \(\hat{Z}_{u,i}\) and \(\hat{p}_{u,i}\) in Eq. (1), respectively. Then, the bias of the estimated causal effect \(\hat{\tau}_{u,i}\) is_
\[\left(\frac{p_{u,i}+\mathbb{E}\left[\hat{Z}_{u,i}-Z_{u,i}\right]}{\hat{p}_{u,i }}-1\right)Y_{u,i}^{1}-\left(\frac{1-p_{u,i}-\mathbb{E}\left[\hat{Z}_{u,i}-Z_ {u,i}\right]}{1-\hat{p}_{u,i}}-1\right)Y_{u,i}^{0}. \tag{9}\]
\(\square\)
We defer the proof to Appendix B. From the bias stated in Proposition 1, we make two further remarks to guide the learning and evaluation of propensity scores and exposure.
**Remark 1**: _The bias is influenced by three major factors: \(p_{u,i}/\hat{p}_{u,i}\), \((1-p_{u,i})/(1-\hat{p}_{u,i})\) and \(\mathbb{E}\left[\hat{Z}_{u,i}-Z_{u,i}\right]\). Note that if \(\hat{p}_{u,i}=p_{u,i}\) and \(\hat{Z}_{u,i}=Z_{u,i}\), the bias would be zero which is consistent with earlier findings [30]. \(\square\)_
**Remark 2**: _If the estimated \(\hat{p}_{u,i}\) is extremely close to 0 or 1, the bias can be potentially very large. \(\square\)_
The above proposition and remarks shed some light on what we should focus on when estimating or evaluate exposure and propensity score. On the one hand, since exposure is a binary variable and the bias is influenced by \(\mathbb{E}\left[\hat{Z}_{u,i}-Z_{u,i}\right]\), we may evaluate it with binary classification metrics such as F1 score. On the other hand, since propensity is a continuous variable and estimations extremely close to zero or one should be avoided, regularizing the global distribution in Eq. (7) and ensuring a proper scale of the propensity scores would be useful.
## 5 Experiment
In this section, we comprehensively evaluate the effectiveness of the proposed PropCare through both quantitative and qualitative experiments.
### Experiment setup
DatasetsWe employ three standard causality-based recommendation benchmarks. Among them, **DH_original** and **DH_personalized** are two versions of the DunnHumby dataset [30], which includes purchase and promotion logs at a physical retailer over a 93-week period. The difference in the two versions mainly lies in the derivation of ground-truth propensity scores as stated by Sato et al. [30], which are based on items featured in the weekly mailer in DH_original, and with a simulated personalization factor in DH_personalized. The third dataset is MovieLens 100K (**ML**) [29], which includes users' ratings on movies and simulated propensity scores based on the ratings and user behaviors. Note that PropCare do not require any propensity or exposure data at all. The ground-truth values are only used to evaluate model output. On each dataset, we generate the training/validation/test sets following their original work [30; 29], respectively. We summarize each dataset in Tab. 1, listing the number of users (#users) and items (#items), as well as the average value of several key variables including the observed interaction (\(\bar{Y}_{u,i}\)), exposure (\(\bar{Z}_{u,i}\)), causal effect (\(\bar{\tau}_{u,i}\)) and propensity (\(\bar{p}_{u,i}\)). Further details can be found in Appendix C.1.
BaselinesWe compare PropCare with the following propensity estimators: (1) **Ground-truth**: Propensity score and exposure values are directly taken from the datasets. (2) **Random**: Propensity scores are assigned randomly between 0 and 1. (3) Item popularity **(POP)**: Propensity scores are assigned as item popularity normalized to \((0,1)\). (4) **CJBPR**[42]: An unbiased recommendation model that optimizes propensity and relevance alternately in a point-wise manner. (5) **EM**[21]: An recommendation model that learns propensity scores in a point-wise manner using an expectation-maximization algorithm.
Note that Ground-truth uses the ground-truth values of propensity \(p_{u,i}\) and exposure \(Z_{u,i}\) directly as input to train DLCE [30]. All other baselines do not need such ground-truth values in any stage just as PropCare. In these methods, the estimated propensity \(\hat{p}_{u,i}\) is used to further derive the exposure \(\hat{Z}_{u,i}\), in the same way as PropCare (see Sect. 4.3). Finally, we utilize the estimated values to train DLCE (see Sect. 4.4).
Parameter settingsWe tune the hyperparameters based on the validation data, following guidance in the literature. Specifically, in PropCare, the trade-off parameter \(\lambda\) and \(\mu\) are set to 10 and 0.4, respectively, on all datasets. For the downstream causal recommender DLCE, we follow the earlier settings [30]. For other settings and implementation details, refer to Appendix C.2.
Evaluation metricsWe evaluate the performance of causality-based recommendation with CP@10, CP@100 and CDCG, whose definitions can be found in Appendix C.3. Additionally, we measure the accuracy of estimated propensity scores w.r.t. the ground-truth values using Kullback-Leibler divergence (KLD) and Kendall's Tau (Tau) [12], and that of estimated exposure using F1 score. Note that all metrics, except KLD, indicate better performance with a larger value.
### Results and discussions
We first compare the performance of PropCare and the baselines, followed by analyses of model ablation, the regularization term, and various influencing factors. Additional experiments including
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Dataset & \#users & \#items & \(\bar{Y}_{u,i}\) & \(\bar{Z}_{u,i}\) & \(\bar{\tau}_{u,i}\) & \(\bar{p}_{u,i}\) \\ \hline DH\_original & 2,309 & 1,372 &.0438 &.6064 &.0175 &.2894 \\ DH\_personalized & 2,309 & 1,372 &.0503 &.6265 &.0178 &.4589 \\ ML & 943 & 1,682 &.0676 &.0593 &.0733 &.0594 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of datasets.
comparison to conventional recommendation methods, evaluation on an alternative backbone, and a scalability study are presented in Appendix D.
Performance comparisonWe evaluate PropCare against the baselines in two aspects: (1) The downstream causality-based recommendation using the estimated propensity and exposure; (2) The accuracy of the estimated propensity and exposure.
We first illustrate the performance of causality-based recommendation in Tab. 2. It is not surprising that Ground-truth achieves the best causal effect by incorporating actual propensity and exposure values in DLCE. However, since ground-truth values are often unavailable, we rely on estimations. Among all baselines, PropCare most closely approaches Ground-truth's performance. Notably, in the DH_personalized dataset, PropCare exhibits only a 6.6% average decrease from Ground-truth across three metrics, significantly outperforming the second-best EM which suffers a 56.6% drop. Furthermore, PropCare surpasses the point-wise CJBPR and EM, implying the advantage of our pairwise formulation based on Assumption 1.
Next, we analyze the accuracy of propensity and exposure estimation in Tab. 3. Among the baselines, POP performs the best in Kendall's Tau. However, the causality metrics of POP is poor (see Tab. 2) due to the ill-fit propensity distribution, reflected in its large KLD from the ground-truth distribution. The estimation of exposure is also challenging for POP in most cases. In contrast, PropCare demonstrates outstanding performance in F1 score and KLD, leading to effective causal metrics. Although its Tau scores lag behind some baselines, a robust distribution on propensity and accurate binary predictions of exposure still contribute to good causal performance. The results in Tab. 3 highlight that causality-based recommendation is influenced by multiple factors, rather than relying solely on a single aspect of estimation. We will discuss these influencing factors further toward the end of this part.
Ablation studyTo evaluate the impact of our key design motivated by Assumption 1, we derive five variants from Eq. (6): (1) **NO_P**- removing the constraint on estimated \(\hat{p}_{u,i}\) by deleting the term with \(f_{p}(\mathbf{x}_{u,i})-f_{p}(\mathbf{x}_{u,j})\); (2) **NO_R**: removing the constraint on estimated \(\hat{r}_{u,i}\) by deleting the term with \(f_{r}(\mathbf{x}_{u,j})-f_{r}(\mathbf{x}_{u,i})\); (3) **NO_P_R**: removing \(\mathcal{L}_{\text{pop}}\) entirely from the overall loss to eliminate Assumption 1 altogether; (4) **NEG**: reversing Assumption 1 by replacing \(\operatorname{sgn}_{i,j}\) with \(-\operatorname{sgn}_{i,j}\) to assume that more popular items have smaller propensity scores; (5) \(\mathbf{\kappa}=\mathbf{1}\): setting all \(\kappa_{u,i,j}\)'s to a constant 1, resulting in equal weighting of all training triplets. Their causal performances are illustrated in Fig. 3. Comparing to the full version of PropCare, NO_R and NO_P show a small drop in performance due to the absence of additional constraints on propensity or relevance, indicating that the pairwise loss is still partially effective. The drop in \(\kappa=1\) highlights the need for controlling the similarity between interaction probabilities. The further drop observed in NO_P_R
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{DH\_original} & \multicolumn{3}{c|}{DH\_personalized} & \multicolumn{3}{c}{ML} \\ \cline{2-10} & CP@10\(\uparrow\) & CP@100\(\uparrow\) & CDCG\(\uparrow\) & CP@10\(\uparrow\) & CP@100\(\uparrow\) & CDCG\(\uparrow\) & CP@10\(\uparrow\) & CP@100\(\uparrow\) & CDCG\(\uparrow\) \\ \hline Ground-truth & 0.665\(\pm\)0.001 &.0215\(\pm\)0.001 & 1.068\(\pm\)0.000 & 1.3304\(\pm\)0.001 & 0.045\(\pm\)0.001 & 1.469\(\pm\)0.003 & 2471\(\pm\)0.001 &.1887\(\pm\)0.000 & 16.29\(\pm\)0.006 \\ \hline Random & 0.154\(\pm\)0.001 &.0071\(\pm\)0.002 &.7390\(\pm\)0.004 &.0479\(\pm\)0.004 &.0107\(\pm\)0.005 &.8316\(\pm\)0.39 &.0124\(\pm\)0.002 &.0135\(\pm\)0.005 & 13.16\(\pm\)0.076 \\ POP & 0.200\(\pm\)0.000 &.0113\(\pm\)0.000 &.7877\(\pm\)0.001 &.0457\(\pm\)0.000 &.0096\(\pm\)0.001 &.8491\(\pm\)0.002 &.142\(\pm\)0.001 &.092\(\pm\)0.001 & 11.43\(\pm\)0.005 \\ CJBPR & 0.0263\(\pm\)0.001 &.0087\(\pm\)0.001 &.7769\(\pm\)0.002 &.0564\(\pm\)0.008 &.0106\(\pm\)0.005 &.852\(\pm\)0.032 &.410\(\pm\)0.002 &.187\(\pm\)0.001 & 9.953\(\pm\)0.006 \\ EM & 0.018\(\pm\)0.001 &.0067\(\pm\)0.001 &.7247\(\pm\)0.001 &.0507\(\pm\)0.002 &.0121\(\pm\)0.001 &.8779\(\pm\)0.003 &.437\(\pm\)0.002 &.194\(\pm\)0.002 & 10.21\(\pm\)0.01 \\ PropCare & **.0351\(\pm\)**0.002 & **.0156\(\pm\)**0.005 & **.1270\(\pm\)**0.001 & **.**0381\(\pm\)**0.000 & **1.**426\(\pm\)**0.001 & **.**0182\(\pm\)**0.002 & **.****0337\(\pm\)**0.002 & **13.**80\(\pm\)**0.011 \\ \hline \hline \end{tabular}
* Results are reported as the average of 5 runs (mean\(\pm\)std). Except Ground-truth, best results are bolded and runners-up are underlined.
\end{table}
Table 2: Performance comparison on downstream causality-based recommendation.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{DH\_original} & \multicolumn{2}{c|}{DH\_personalized} & \multicolumn{2}{c}{ML} \\ \cline{2-7} & KLD\(\downarrow\) & Tau\(\uparrow\) & F1 score\(\uparrow\) & KLD\(\downarrow\) & Tau\(\uparrow\) & F1 score\(\uparrow\) & KLD\(\downarrow\) & Tau\(\uparrow\) & F1 score\(\uparrow\) \\ \hline Random & 5141\(\pm\)0.001 &.0002\(\pm\)0.000 & 4524\(\pm\)013 & 3.008\(\pm\)0.002 &.0001\(\pm\)0.000 &.4463\(\pm\)0.021 &.0363\(\pm\)0.002 &.0002\(\pm\)0.000 & 4511\(\pm\)0.022 \\ POP & 5430\(\pm\)0.000 & **.4726\(\pm\)**0.000 & 2851\(\pm\)0.000 & 4.728\(\pm\)0.000 & **.6646\(\pm\)**0.000 &.2772\(\pm\)0.000 &.0615\(\pm\)0.000 & **.4979\(\pm\)**0.000 & 5050\(\pm\)0.000 \\ CJBPR & 3987\(\pm\)0.008 &.3279\(\pm\)0.011 &.2853\(\pm\)0.005 & 2.650\(\pm\)0.022 &.6477\(\pm\)0.013 &.2825\(\pm\)0.005 &.0230\(\pm\)0.006 &.4956\(\pm\)0.045 & **.5189\(\pm\)**0.020 \\ EM & 6380\(\pm\)0.008 &.034\(\pm\)0.000 &.4974\(\pm\)0.001 &.2385\(\pm\)0.001 &.0934\(\pm\)0.002 &.4954\(\pm\)0.009 &.0517\(\pm\)0.001 &.1321\(\pm\)0.002 & 3653\(\pm\)0.005 \\ PropCare & **.3851\(\pm\)**0.023 &.3331\(\pm\)0.065 & **.5846\(\pm\)**0.006 & **.**1732\(\pm\)**0.038 &.4706\(\pm\)0.072 & **.**6059\(\pm\)**0.017 & **.**0204\(\pm\)**0.005 &.3889\(\pm\)0.034 &.4847\(\pm\)0.020 \\ \hline \hline \end{tabular}
* Results are styled in the same way as in Tab. 2.
\end{table}
Table 3: Performance comparison on propensity score (KLD, Tau) and exposure (F1 score) estimation.
after removing the entire assumption further demonstrates the validity of our assumption and the effectiveness of \(\mathcal{L}\)pop. The worst performance of NEG indicates that contradicting Assumption 1 greatly affects the results of causality-based recommendation.
Effect of regularizationWe examine how effective the regularization term is by varying the trade-off hyperparameter \(\mu\), as used in Eq. (7), over the set \(\{0,0.01,0.1,0.2,0.4,0.8,1,2,4,8,10\}\). The results, shown in Fig. 4, indicate that when \(\mu\) is set to 0, which means the regularization term is completely removed, the causal performance is significantly compromised. This demonstrates the advantage of regularizing the propensity with a beta distribution during the training of PropCare. As \(\mu\) increases, the causal performance gradually improves and reaches its peak within the range of \([0.2,0.8]\). We suggest further exploration by fine-tuning the hyperparameter within this interval.
Factors influencing causality-based recommendationWe further investigate the factors crucial to causality-based recommendation, by injecting noises into ground-truth propensity or exposure values. In Fig. 5(a), we randomly flip a fraction of the exposure values while using the ground-truth propensity scores for DLCE training. The flipping ratios range from \(\{0,0.01,0.05,0.1,0.15,0.2,0.3,0.4,0.5\}\). In Fig. 5(b), we add Gaussian noises to the propensity scores, where the variances of the noise range from \(\{0,0.1,\ldots,0.5\}\) while using the ground-truth exposure. The results in Fig. 5 consistently show a decrease in performance due to misspecified exposure or propensity. To further examine the correlation between estimation accuracy and recommendation performance, we create scatter plots in Fig. 6 for the DH_original dataset. Each data point represents a baseline labeled with its name. The plots reveal a general trend where the CDCG is correlated with the accuracy of estimations.
Figure 4: Effect of regularization term.
Figure 5: Analysis of factors that influence causality-based recommendation performance.
Figure 3: Ablation study on PropCare.
### Case study
We conduct a case study to demonstrate the advantages of PropCare in a practical ranking-based recommendation scenario. In Tab. 4, we analyze the top-5 recommended items for an anonymous user with ID 2308 in the DH_personalized dataset. In the first column, by utilizing ground-truth propensity scores and exposure, DLCE effectively generates a ranking list where most items have a positive causal effect. All items with a positive causal effect were eventually purchased, achieving the goal of causality-based recommendation. Comparing the lists generated by CJBPR and PropCare, it is evident that the associated causal effects of the purchased items differ. For example, in the CJBPR list, recommending "strawberries" has zero causal effect, indicating that the user could have still purchased it even without recommendation. In contrast, PropCare recommends "infant soy", which has a positive causal effect, making it a more ideal choice. Overall, given the list recommended by CJBPR, the user would only purchase "strawberries" and "fluid milk". However, given the list from PropCare, in addition to "infant soy" and "fluid milk", the user may still purchase "strawberries" even without being recommended due to its zero causal effect. Besides, POP tends to recommend popular items but with a lower causal effect, even including an item with a negative causal effect. The results suggest that POP is not an appropriate tool for estimating propensity scores in the context of causality-based recommendation.
## 6 Conclusion
In this paper, we introduced PropCare, a propensity estimation model for causality-based recommendation systems without the need to access ground-truth propensity and exposure data. Leveraging our observation on the pairwise characteristics between propensity scores and item popularity, we formulated a key assumption and incorporated it as prior information to enhance our estimation, thereby improving causality-based recommendation. A theoretical analysis was presented to understand the factors influencing the bias in estimated causal effects, thereby informing model design and evaluation. Empirical studies demonstrated the superiority of PropCare over the baselines. Future research avenues include exploring direct exposure estimation without propensity scores, and investigating parametric causal effect estimators that are potentially more powerful.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{Ground-truth} & POP & CJBPR & PropCare \\ \hline \multirow{3}{*}{\(\ddagger\)**} & garlic bread (1176) & bananas (1310) & \(\ddagger\) **fluid milk** (1169) & \(\ddagger\) **infant soy** (1232) \\ & & toilet tissue (742) & bananas (1310) & \(\ddagger\) **fluid milk** (1169) \\ & \(\ddagger\) **fluid milk** (1169) & cerel (1090) & bananas (1310) \\ & \(\ddagger\) **primal** (807) & white bread (675) & **strawberries** (834) & pure juice (1277) \\ & \(\lx@paragraphsign\) & tortilla chips (634) & margarine tubs/bows (1245) & coffee creamers (1169) \\ \hline \hline \end{tabular}
* Each column represents the recommendation list output by DLCE trained with the estimated propensity and exposure by the corresponding baseline. The purchased items are highlighted in bold. Items with positive causal effect (\(\tau_{u,i}=1\)) and negative causal effect (\(\tau_{u,i}=-1\)) are marked by \(\ddagger\) and \(\lx@paragraphsign\), respectively, and unmarked items have zero causal effect (\(\tau_{u,i}=0\)). Numbers in brackets are the popularity ranks in the training set.
\end{table}
Table 4: Case study of an anonymous user.
Figure 6: Correlation analysis on factors influencing recommendation.
## Acknowledgments
This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funds (Grant No. A20H6b0151). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the A*STAR. Dr. Yuan Fang also acknowledges the Lee Kong Chian Fellowship awarded by Singapore Management University for the support of this work.
## References
* Ai et al. [2018] Qingyao Ai, Keping Bi, Cheng Luo, Jiafeng Guo, and W Bruce Croft. Unbiased learning to rank with unbiased propensity estimation. In _ACM SIGIR Conference on Research and Development in Information Retrieval_, pages 385-394, 2018.
* Bonner and Vasile [2018] Stephen Bonner and Flavian Vasile. Causal embeddings for recommendation. In _ACM Conference on Recommender Systems_, pages 104-112, 2018.
* Borisov et al. [2016] Alexey Borisov, Ilya Markov, Maarten De Rijke, and Pavel Serdyukov. A neural click model for web search. In _ACM International Conference on World Wide Web_, pages 531-541, 2016.
* Crump et al. [2009] Richard K Crump, V Joseph Hotz, Guido W Imbens, and Oscar A Mitnik. Dealing with limited overlap in estimation of average treatment effects. _Biometrika_, 96(1):187-199, 2009.
* Davidson et al. [2010] James Davidson, Benjamin Liebald, Junning Liu, Palash Nandy, Taylor Van Vleet, Ullas Gargi, Sujoy Gupta, Yu He, Mike Lambert, Blake Livingston, et al. The YouTube video recommendation system. In _ACM Conference on Recommender Systems_, pages 293-296, 2010.
* Du et al. [2019] Zhengxiao Du, Xiaowei Wang, Hongxia Yang, Jingren Zhou, and Jie Tang. Sequential scenario-specific meta learner for online recommendation. In _ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pages 2895-2904, 2019.
* He et al. [2020] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. Lightgcn: Simplifying and powering graph convolution network for recommendation. In _International ACM SIGIR Conference on Research and Development in Information Retrieval_, pages 639-648, 2020.
* He et al. [2017] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In _ACM International Conference on World Wide Web_, pages 173-182, 2017.
* Imbens and Rubin [2015] Guido W Imbens and Donald B Rubin. _Causal inference in statistics, social, and biomedical sciences_. Cambridge University Press, 2015.
* Jannach and Jugovac [2019] Dietmar Jannach and Michael Jugovac. Measuring the business value of recommender systems. _ACM Transactions on Management Information Systems_, 10(4):1-23, 2019.
* Johnson [2014] Christopher C Johnson. Logistic matrix factorization for implicit feedback data. _Advances in Neural Information Processing Systems_, 27(78):1-9, 2014.
* Kendall [1938] Maurice G Kendall. A new measure of rank correlation. _Biometrika_, 30(1/2):81-93, 1938.
* Koren et al. [2009] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. _Computer_, 42(8):30-37, 2009.
* Li et al. [2023] Haoxuan Li, Chunyuan Zheng, and Peng Wu. StableDR: Stabilized doubly robust learning for recommendation on data missing not at random. In _International Conference on Learning Representations_, 2023.
* Li et al. [2023] Xueqi Li, Guoqing Xiao, Yuedan Chen, Zhuo Tang, Wenjun Jiang, and Kenli Li. An explicitly weighted gcn aggregator based on temporal and popularity features for recommendation. _ACM Transactions on Recommender Systems_, 2023.
* [16] Dawen Liang, Laurent Charlin, James McInerney, and David M Blei. Modeling user exposure in recommendation. In _ACM International Conference on World Wide Web_, pages 951-961, 2016.
* [17] Greg Linden, Brent Smith, and Jeremy York. Amazon. com recommendations: Item-to-item collaborative filtering. _IEEE Internet computing_, 7(1):76-80, 2003.
* [18] Huishi Luo, Fuzhen Zhuang, Ruobing Xie, Hengshu Zhu, and Deqing Wang. A survey on causal inference for recommendation. _arXiv preprint arXiv:2303.11666_, 2023.
* [19] Ioannis Paparrizos, B Barla Cambazoglu, and Aristides Gionis. Machine learned job recommendation. In _ACM Conference on Recommender Systems_, pages 325-328, 2011.
* [20] Greg Pass, Abdur Chowdhury, and Cayley Torgeson. A picture of search. In _International Conference on Scalable Information Systems_, pages 1-es, 2006.
* [21] Zhen Qin, Suming J Chen, Donald Metzler, Yongwoo Noh, Jingzheng Qin, and Xuanhui Wang. Attribute-based propensity for unbiased learning in recommender systems: Algorithm and case studies. In _ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pages 2359-2367, 2020.
* [22] Steffen Rendle. Factorization machines. In _IEEE International conference on Data Mining_, pages 995-1000, 2010.
* [23] Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In _Conference on Uncertainty in Artificial Intelligence_, pages 452-461, 2009.
* [24] Matthew Richardson, Ewa Dominowska, and Robert Ragno. Predicting clicks: estimating the click-through rate for new ads. In _ACM International Conference on World Wide Web_, pages 521-530, 2007.
* [25] Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. _Biometrika_, 70(1):41-55, 1983.
* [26] Donald B Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. _Journal of Educational Psychology_, 66(5):688, 1974.
* [27] Masahiro Sato. Online evaluation methods for the causal effect of recommendations. In _ACM Conference on Recommender Systems_, pages 96-101, 2021.
* [28] Masahiro Sato, Janmajay Singh, Sho Takemori, Takashi Sonoda, Qian Zhang, and Tomoko Ohkuma. Uplift-based evaluation and optimization of recommenders. In _ACM Conference on Recommender Systems_, pages 296-304, 2019.
* [29] Masahiro Sato, Janmajay Singh, Sho Takemori, and Qian Zhang. Causality-aware neighborhood methods for recommender systems. In _Advances in Information Retrieval: European Conference on IR Research_, pages 603-618, 2021.
* [30] Masahiro Sato, Sho Takemori, Janmajay Singh, and Tomoko Ohkuma. Unbiased learning for the causal effect of recommendation. In _ACM Conference on Recommender Systems_, pages 378-387, 2020.
* [31] Amit Sharma, Jake M Hofman, and Duncan J Watts. Estimating the causal impact of recommendation systems from observational data. In _ACM Conference on Economics and Computation_, pages 453-470, 2015.
* [32] Harald Steck. Training and testing of recommender systems on data missing not at random. In _ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pages 713-722, 2010.
* [33] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In _ACM International Conference on Information and Knowledge Management_, pages 1441-1450, 2019.
* [34] Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. _Advances in Neural Information Processing Systems_, 26, 2013.
* [35] Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. KGAT: Knowledge graph attention network for recommendation. In _ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pages 950-958, 2019.
* [36] Fangzhao Wu, Ying Qiao, Jiun-Hung Chen, Chuhan Wu, Tao Qi, Jianxun Lian, Danyang Liu, Xing Xie, Jianfeng Gao, Winnie Wu, et al. MIND: A large-scale dataset for news recommendation. In _Annual Meeting of the Association for Computational Linguistics_, pages 3597-3606, 2020.
* [37] Teng Xiao and Suhang Wang. Towards unbiased and robust causal ranking for recommender systems. In _ACM International Conference on Web Search and Data Mining_, pages 1158-1167, 2022.
* [38] Xu Xie, Zhaoyang Liu, Shiwen Wu, Fei Sun, Cihang Liu, Jiawei Chen, Jinyang Gao, Bin Cui, and Bolin Ding. CausCF: Causal collaborative filtering for recommendation effect estimation. In _ACM International Conference on Information and Knowledge Management_, pages 4253-4263, 2021.
* [39] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Adversarial counterfactual learning and evaluation for recommender system. _Advances in Neural Information Processing Systems_, 33:13515-13526, 2020.
* [40] Longqi Yang, Yin Cui, Yuan Xuan, Chenyang Wang, Serge Belongie, and Deborah Estrin. Unbiased offline recommender evaluation for missing-not-at-random implicit feedback. In _ACM conference on Recommender Systems_, pages 279-287, 2018.
* [41] Yang Zhang, Fuli Feng, Xiangnan He, Tianxin Wei, Chonggang Song, Guohui Ling, and Yongdong Zhang. Causal intervention for leveraging popularity bias in recommendation. In _International ACM SIGIR Conference on Research and Development in Information Retrieval_, pages 11-20, 2021.
* [42] Ziwei Zhu, Yun He, Yin Zhang, and James Caverlee. Unbiased implicit recommendation and propensity estimation via combinational joint learning. In _ACM Conference on Recommender Systems_, pages 551-556, 2020.
## Appendix A Training procedure of PropCare
We present the pseudocode for training our proposed PropCare in Algorithm 1. The training steps involve calculating various loss terms including \(\mathcal{L}_{\text{naive}}\), \(\mathcal{L}_{\text{pop}}\) and the regularization term. We update all learnable parameters based on the total loss defined in Eq. (7) of the main text.
```
Input: Observed training interaction data \(D\). Output: Model parameters \(\Theta\). Initialize model parameters \(\Theta\); whilenot convergeddo foreachuser-item pair \((u,i)\) in \(D\)do Compute \(\mathcal{L}_{\text{naive}}\) by Eq. (3) of the main text; Sample an item \(j\) for each pair \((u,i)\) from \(\{1,2,\dots,I\}\backslash i\); Compute \(\mathcal{L}_{\text{pop}}\) by Eq. (6) of the main text; Compute the total loss in Eq. (7) of the main text; end while Update \(\Theta\) by backpropagation of the loss in Eq. (7) of the main text; end while return\(\Theta\).
```
**Algorithm 1**Training PropCare
We then analyze the time complexity of the training procedure. We first consider the computation of \(\mathcal{L}_{\text{naive}}\). This involves three MLP models: \(f_{e}\), \(f_{p}\), and \(f_{r}\). As we have to compute \(\mathbf{x}_{u,i}\), \(\hat{p}_{u,i}\) and \(\hat{r}_{u,i}\) for each user-item pair, each MLP incurs a time complexity of \(\mathcal{O}(n)\), where \(n=|D|\), the number of user-item pairs in the training data. To further find the interactions \(\{(y_{u,i})\}\), we need to compute the product between \(\hat{p}_{u,i}\) and \(\hat{r}_{u,i}\), which also has a linear time complexity of \(\mathcal{O}(n)\). Next, to compute \(\mathcal{L}_{\text{pop}}\), we need to sample another item \(j\) for each user-item pair in the training data, which has a time complexity of \(\mathcal{O}(n)\). For each tuple \((u,i,j)\), we need to further compute \(\hat{p}_{u,j}\) and \(\hat{r}_{u,j}\), each taking \(\mathcal{O}(n)\) time. Finally, the time complexity to compute the regularization term is also \(\mathcal{O}(n)\), as the empirical distribution is based on all estimated propensity scores for training pairs. In summary, a series of \(O(n)\) procedures are carried out for the user-item pairs in the training set \(D\), resulting in an overall linear time complexity of \(O(n)\). This demonstrates the scalability of our proposed PropCare, which we further analyze empirically in Appendix D.2.
## Appendix B Bias of the estimated causal effect
Here, we present the the proof of Proposition 1, which have appeared in the main text, Sect. 4.5.
**Proof 1**: _As defined earlier [30], we can model the potential outcomes as follows,_
\[\hat{Y}^{1}_{u,i} =\frac{Z_{u,i}Y_{u,i}}{p_{u,i}},\] (a.1) \[\hat{Y}^{0}_{u,i} =\frac{(1-Z_{u,i})Y_{u,i}}{1-p_{u,i}}.\] (a.2)
_Note that in Eqs. (a.1) and (a.2), the \(Y_{u,i}\)'s on the right-hand side can be replaced with \(Y^{1}_{u,i}\) and \(Y^{0}_{u,i}\), respectively. This substitution is valid because when \(Z_{u,i}=1\), the \(Y_{u,i}\) term in Eq. (a.1) is equivalent to \(Y^{1}_{u,i}\) by definition. In this scenario, \(\hat{Y}^{0}_{u,i}\) is always 0, regardless of the value of \(Y^{0}_{u,i}\). Similarly, when \(Z_{u,i}=0\), the \(Y_{u,i}\) term in Eq. (a.2) is equivalent to \(Y^{0}_{u,i}\). Therefore, the IPS estimator for the causal effect, in Eq. (1) of the main text, can be rewritten as_
\[\hat{\tau}_{u,i}=\frac{Z_{u,i}Y^{1}_{u,i}}{p_{u,i}}-\frac{(1-Z_{u,i})Y^{0}_{u,i }}{1-p_{u,i}}.\] (a.3)_As established earlier [30], if the propensity score \(p_{u,i}\) and exposure \(Z_{u,i}\) are correctly assigned, the expectation of the estimated causal effect from the IPS estimator is_
\[\mathbb{E}\left[\hat{\tau}_{u,i}\right]=Y^{1}_{u,i}-Y^{0}_{u,i}= \tau_{u,i}.\] (a.4)
_That is to say, if we have ground-truth propensity score as well as exposure, the estimated causal effect is unbiased. If we have only ground-truth exposure but have to estimate propensity scores by substituting \(p_{u,i}\) with \(\hat{p}_{u,i}\) in Eq. (a.3), denote the resulting estimator for the casual effect as \(\hat{\tau}^{\prime}_{u,i}\). Then, the expectation and bias of \(\hat{\tau}^{\prime}_{u,i}\) are_
\[\mathbb{E}\left[\hat{\tau}^{\prime}_{u,i}\right] =\frac{p_{u,i}Y^{1}_{u,i}}{\hat{p}_{u,i}}-\frac{(1-p_{u,i})Y^{0}_ {u,i}}{1-\hat{p}_{u,i}},\] (a.5) \[\mathrm{Bias}(\hat{\tau}^{\prime}_{u,i}) =\mathbb{E}\left[\hat{\tau}^{\prime}_{u,i}\right]-\tau_{u,i}= \left(\frac{p_{u,i}}{\hat{p}_{u,i}}-1\right)Y^{1}_{u,i}-\left(\frac{1-p_{u,i} }{1-\hat{p}_{u,i}}-1\right)Y^{0}_{u,i}.\] (a.6)
_Finally, in our setup, both \(Z_{u,i}\) and \(p_{u,i}\) are estimated. Hence, denote the resulting estimator based on \(\hat{Z}_{u,i}\) and \(\hat{p}_{u,i}\) as \(\hat{\tau}^{\prime\prime}_{u,i}\). We can obtain its bias relative to \(\hat{\tau}^{\prime}_{u,i}\) as follows._
\[\mathrm{Bias}(\hat{\tau}^{\prime\prime}_{u,i})-\mathrm{Bias}( \hat{\tau}^{\prime}_{u,i}) =\mathbb{E}\left[\hat{\tau}^{\prime\prime}_{u,i}\right]-\mathbb{E }\left[\hat{\tau}^{\prime}_{u,i}\right]\] \[=\mathbb{E}\left[\frac{\hat{Z}_{u,i}Y^{1}_{u,i}}{\hat{p}_{u,i}}- \frac{(1-\hat{Z}_{u,i})Y^{0}_{u,i}}{1-\hat{p}_{u,i}}-\frac{Z_{u,i}Y^{1}_{u,i}} {\hat{p}_{u,i}}+\frac{(1-Z_{u,i})Y^{0}_{u,i}}{1-\hat{p}_{u,i}}\right]\] \[=\mathbb{E}\left[\frac{(\hat{Z}_{u,i}-Z_{u,i})Y^{1}_{u,i}}{\hat{p }_{u,i}}-\frac{(Z_{u,i}-\hat{Z}_{u,i})Y^{0}_{u,i}}{1-\hat{p}_{u,i}}\right]\] \[=\frac{\mathbb{E}\left[\hat{Z}_{u,i}-Z_{u,i}\right]}{\hat{p}_{u,i }}Y^{1}_{u,i}+\frac{\mathbb{E}\left[\hat{Z}_{u,i}-Z_{u,i}\right]}{1-\hat{p}_{u,i}}Y^{0}_{u,i}.\] (a.7)
_By adding Eqs. (a.6) and (a.7), we are able to obtain the bias of \(\hat{\tau}^{\prime\prime}_{u,i}\)_
\[\mathrm{Bias}(\hat{\tau}^{\prime\prime}_{u,i}) =\mathrm{Bias}(\hat{\tau}^{\prime}_{u,i})+\mathrm{Bias}(\hat{ \tau}^{\prime\prime}_{u,i})-\mathrm{Bias}(\hat{\tau}^{\prime}_{u,i})\] \[=\left(\frac{p_{u,i}+\mathbb{E}\left[\hat{Z}_{u,i}-Z_{u,i}\right] }{\hat{p}_{u,i}}-1\right)Y^{1}_{u,i}-\left(\frac{1-p_{u,i}-\mathbb{E}\left[ \hat{Z}_{u,i}-Z_{u,i}\right]}{1-\hat{p}_{u,i}}-1\right)Y^{0}_{u,i}.\] (a.8)
_This concludes the proof of Proposition 1. \(\square\)_
## Appendix C Additional experimental settings
We describe more details on the datasets, implementation and evaluation metrics.
### Descriptions of datasets
We introduce additional details on data generation and splitting.
Data processing and generationWe perform data processing and generation steps per the earlier studies on the DunnHumby2 (DH) [30] and MovieLens3[29] (ML) datasets.
Footnote 2: The raw data are available at [https://www.dunnhumby.com/careers/engineering/sourcefiles](https://www.dunnhumby.com/careers/engineering/sourcefiles).
Footnote 3: The raw data are available at [https://grouplens.org/datasets/movielens](https://grouplens.org/datasets/movielens).
Specifically, for DH, the items that appear in the weekly mailer are deemed as recommended (i.e., exposed) items. The empirical distribution of \(Y^{1}_{u,i}\) can be found by tallying the weeks in which item \(i\)was both recommended to and purchased by user \(u\) (or purchased but _not_ recommended when dealing with \(Y^{0}_{u,i}\)). The ground-truth values of \(Y^{1}_{u,i}\) and \(Y^{0}_{u,i}\) are then sampled from their respective empirical distributions, which are used to calculate the ground-truth causal effect, as follows.
\[\tau_{u,i}=Y^{1}_{u,i}-Y^{0}_{u,i}.\] (a.9)
Subsequently, two different ways of simulating the ground-truth propensity scores have been attempted. In DH_original, the propensity score \(p_{u,i}\) is defined based on the number of weeks in which the item was recommended while the user visited the retailer during the same week. For DH_personalized, the propensity score \(p_{u,i}\) is established based on the position of item \(i\) in a simulated ranking based on user \(u\)'s probability of interaction with \(i\). The ground-truth exposure \(Z_{u,i}\) is then sampled from a Bernoulli distribution whose parameter is set to the propensity score \(p_{u,i}\). Finally, the ground-truth interaction can be computed as follows.
\[Y_{u,i}=Z_{u,i}Y^{1}_{u,i}+(1-Z_{u,i})Y^{0}_{u,i}.\] (a.10)
For ML, the empirical distributions of \(Y^{1}_{u,i}\) and \(Y^{0}_{u,i}\) are derived from the interaction log using matrix factorization-based techniques. The propensity score is determined by a simulated ranking for each user, similar to DH_personalized. Subsequently, the ground-truth values of the exposure, causal effect and interaction is sampled and established following the same process in DH.
Data splittingThe above data generation steps are applied to both DH and ML datasets to generate training, validation and testing sets. For the DH datasets, the data generation process is repeated 10 times to simulate the 10-week training data, once more to simulate the 1-week validation data, and 10 more times to simulate the 10-week testing data. For the ML dataset, the generation is repeated once each to generate the training, validation, and testing data, respectively.
It is worth noting that the data generation and splitting processes are dependent on some form of simulation. We utilize such "semi-simulated" data for a number of reasons. First, the true causal effects are not observable due to their counterfactual nature. Second, ground-truth propensity scores and exposure are often not available in public datasets, which nonetheless are essential for model evaluation (even though they are not required for our model training). Third, while some datasets [20, 36, 6] do include information on item impressions or exposure statuses, they often provide a one-sided view of the situation. Specifically, the vast majority, if not all, of the item interactions are preceded by prior impressions. This leaves limited scope for investigating item interactions that occur without prior impressions.
### Implementation details
Let us first present key implementation choices regarding propensity and exposure modeling in PropCare. On all datasets, we set \(\alpha=0.2\) and \(\beta=1.0\) for the Beta distribution used in the regularization term. To derive \(\hat{Z}_{u,i}\) based on \(\hat{p}_{u,i}\), we implement \(\mathrm{Norm}\) as a Z-score normalization function, such that \(\hat{Z}_{u,i}=1\) if \(\mathrm{Norm}(\hat{p}_{u,i})\geq\epsilon\), where the threshold \(\epsilon\) is set to 0.2 for DH_original and DH_personalized, and 0.15 for ML. To address the bias introduced by \(\hat{p}_{u,i}\), we employ a practical trick to scale the estimated propensity by a constant factor \(c\), i.e., \(\hat{p}_{u,i}\gets c\times\hat{p}_{u,i}\), prior to training DLCE. This technique aims to further minimize the disparity in the scale between \(p_{u,i}\) and \(\hat{p}_{u,i}\) in order to improve the accuracy of causal effect estimation, according to the theoretical analysis in Sect. 4.5. For DH_original and DH_personalized, the scaling factor \(c\) is set to 0.8, while for ML it is set to 0.2. The hyperparameters \(\epsilon\) and \(c\) are tuned using validation data. In particular, we perform a grid search where \(\epsilon\) is searched over the range (0,1) in steps of 0.05 and \(c\) is searched over the range (0,1) in steps of 0.1.
With regard to the model architecture for PropCare, we randomly initialize \(\mathbf{x}_{u}\) and \(\mathbf{x}_{i}\in\mathbb{R}^{128}\) as user and item ID features. The embedding model \(f_{e}\) takes \((\mathbf{x}_{u}||\mathbf{x}_{i})\) as input and is implemented as an MLP with 256, 128 and 64 neurons for its layers. \(f_{p}\) and \(f_{r}\) are both implemented as MLPs with 64, 32, 16, 8 neurons for the hidden layers and an output layer activated by the sigmoid function. Except the output layer, all layers of \(f_{e}\) and hidden layers of \(f_{p}\) and \(f_{r}\) are activated by the LeakyReLU function. Finally, PropCare is trained with a stochastic gradient descent optimizer using mini-batches, with a batch size set to 5096.
For the baseline CJBPR, we have used its authors' implementation4 and modified its mini-batch setting and optimizer to be identical to PropCare. For the baseline EM, we have implemented their proposed propensity model in Python. To conduct downstream causality-based recommendation, we have used the authors' implementation of DLCE5 following the settings in their work [30]. For a fair comparison, the normalization function \(\operatorname{Norm}\) and constant scaling factor \(c\) are identically applied to all baselines, which generally improve the performance across the board.
Footnote 4: Available at [https://github.com/Zziwei/Unbiased-Propensity-and-Recommendation](https://github.com/Zziwei/Unbiased-Propensity-and-Recommendation).
Footnote 5: Available in ancillary files at [https://arxiv.org/abs/2008.04563](https://arxiv.org/abs/2008.04563).
We implement PropCare using TensorFlow 2.11 in Python 3.10. All experiments were conducted on a Linux server with a AMD EPYC 7742 64-Core CPU, 512 GB DDR4 memory and four RTX 3090 GPUs.
### Evaluation metrics
Unlike commonly used metrics like precision and NDCG that reward all positive interactions regardless of whether the item is exposed or not, we use a variant of them that only reward positive interactions resulting from item exposure. Concretely, we use the Causal effect-based Precision (CP) and Discounted Cumulative Gain (CDCG) as defined in existing work [30].
\[\text{CP}@K =\frac{1}{U}\sum_{u=1}^{U}\sum_{i=1}^{I}\frac{\mathbf{1}(\operatorname {rank}_{u}(\hat{s}_{u,i})\leq K)\tau_{u,i}}{K},\] (a.11) \[\text{CDCG} =\frac{1}{U}\sum_{u=1}^{U}\sum_{i=1}^{I}\frac{\tau_{u,i}}{\log_{ 2}{(1+\operatorname{rank}_{u}(\hat{s}_{u,i}))}},\] (a.12)
where \(\mathbf{1}(\cdot)\) is an indicator function, and \(\operatorname{rank}_{u}(\hat{s}_{u,i})\) returns the position of item \(i\) in the ranking list for user \(u\) as determined by the ranking score \(\hat{s}_{u,i}\). In our paper, we report CP@10, CP@100 and CDCG. Note that since \(\tau_{u,i}\) can be \(-1\), these metrics can be negative.
## Appendix D Additional experiments
We present additional empirical results on the comparison to conventional recommender systems, as well as the robustness and scalability of PropCare.
### Comparison to interaction-based recommenders
To demonstrate the advantage of PropCare over conventional interaction-based recommender systems, we compare with three well-known models, namely Matrix Factorization6 (MF) [13], Bayesian Personalized Ranking6 (BPR) [23] and LightGCN7[7]. Note that we use only interaction data \(Y_{u,i}\) as training labels for these methods and rank the items for each user based on the predicted \(\hat{y}_{u,i}\), as in their classical paradigm. We evaluate the causal performance with CP@10, CP@100 and CDCG in Tab. a.1. From the results we can observe that the conventional interaction-based methods do not demonstrate strong causal performance.
Footnote 6: Implementation available at [https://github.com/kuandeng/LightGCN](https://github.com/kuandeng/LightGCN).
A fundamental reason is that the conventional models indiscriminately reward all positive interactions, regardless of whether the candidate items have been exposed or not. To delve deeper into the impact
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{DH\_original} & \multicolumn{3}{c|}{DH\_personalized} & \multicolumn{3}{c}{ML} \\ \cline{2-11} & CP@10\({}^{\dagger}\) & CP@100\({}^{\dagger}\) & CDCG\({}^{\dagger}\) & CP@10\({}^{\dagger}\) & CP@100\({}^{\dagger}\) & CDCG\({}^{\dagger}\) & CP@10\({}^{\dagger}\) & CP@100\({}^{\dagger}\) & CDCG\({}^{\dagger}\) \\ \hline MF &.0206\(\pm\).001 &.0118\(\pm\).000 &.8569\(\pm\).002 &.0433\(\pm\).001 &.0196\(\pm\).000 &.9699\(\pm\).002 &.460\(\pm\).004 &.-205\(\pm\).002 &.9421\(\pm\).023 \\ BPR &.0301\(\pm\).000 &.0138\(\pm\).000 &.8983\(\pm\).002 &.0476\(\pm\).001 &.0245\(\pm\).000 &.1018\(\pm\).001 &.-408\(\pm\).002 &.-197\(\pm\).001 &.9898\(\pm\).028 \\ LightGCN &.0309\(\pm\).001 &.0149\(\pm\).001 &.9113\(\pm\).004 &.0821\(\pm\).001 &.0263\(\pm\).001 &.1095\(\pm\).002 &.-342\(\pm\).006 &.-177\(\pm\).002 & 10.16\(\pm\).050 \\ PropCare & **.0351\(\pm\)**.002 & **.0156\(\pm\)**.001 & **.9268\(\pm\)**.005 & **.1270\(\pm\)**.001 & **.0381\(\pm\)**.000 & **.1426\(\pm\)**.001 & **.0182\(\pm\)**.002 & **.0337\(\pm\)**.002 & **13.80\(\pm\)**.011 \\ \hline \hline \end{tabular}
* Results are reported as the average of 5 runs (mean\(\pm\)std). Best results are bolded.
\end{table}
Table a.1: Performance comparison with conventional interaction-based recommenders.
of different exposure statuses on the causal effect of user-item pairs with positive interactions, we calculate \(\mathbb{E}[\tau_{u,i}|Y_{u,i}=1,Z_{u,i}=1]\) and \(\mathbb{E}[\tau_{u,i}|Y_{u,i}=1,Z_{u,i}=0]\) for each dataset, as presented in Tab. a.2. The divergent results given different exposure statuses imply that exposure plays a significant role in the causal effect involving positive interactions. Conventional interaction-based models ignore exposure and may rank items in a way that negatively impacts the causal effect. Between the two DH datasets, DH_personalized exhibits a lower value of \(\mathbb{E}[\tau_{u,i}|Y_{u,i}=1,Z_{u,i}=0]\), which explains the greater causal performance gap between the conventional models and PropCare than the gap on DH_original. This greater gap arises because recommending items with \(Y_{u,i}=1,Z_{u,i}=0\) lowers the causal effect more on DH_personalized than on DH_original.
### Robustness to alternative backbone and scalability analysis
To show the robustness of PropCare, we opt for CausE6[2], another causality-based recommender, as an alternative backbone to DLCE. Like DLCE, CausE makes causality-based recommendation given the estimated propensity and exposure data from PropCare or the baselines. The results in Tab. a.3 show a similar pattern as using DLCE in the main text. That is, our PropCare consistently outperforms other baselines even with a different causality-based model as the backbone.
Footnote 6: Available at [https://grouplens.org/datasets/movielens/](https://grouplens.org/datasets/movielens/).
We further examine the scalability of our proposed PropCare on increasingly larger datasets, namely, MovieLens 1M8, MovieLens 10M8 and MovieLens 20M8. In Tab a.4, we report the training times of both PropCare and DLCE, in hours. We observe that the training of PropCare generally follows a linear growth, in line with the time complexity anlysis in Appendix A. Moreover, PropCare only presents a marginal overhead on top of the backbone DLCE, showing its feasibility in working with existing backbones.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & MovieLens 1M & MovieLens 10M & MovieLens 20M \\ \hline PropCare &.0814 & 1.494 & 3.170 \\ DLCE & 2.1759 & 22.658 & 40.658 \\ \hline \hline \end{tabular}
\end{table}
Table a.4: Training times for PropCare and DLCE, in hours.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline & \multicolumn{3}{c|}{DH\_original} & \multicolumn{3}{c|}{DH\_personalized} & \multicolumn{3}{c}{ML} \\ \cline{2-10} Methods & CP@10! & CP@100! & CDC@10! & CP@10! & CP@100! & CDC@1 & CP@10! & CP@100! & CDC@1 \\ \hline Ground-truth &.0495\(\pm\).002 &.0216\(\pm\).002 &.1020\(\pm\).004 &.0663\(\pm\).003 &.0241\(\pm\).002 &.105\(\pm\).003 &.1849\(\pm\).002 &.1438\(\pm\).004 & 15.25\(\pm\).007 \\ \hline POP &.0059\(\pm\).001 &.0097\(\pm\).002 &.7759\(\pm\).002 &.0427\(\pm\).002 &.0159\(\pm\).001 &.9616\(\pm\).002 &.191\(\pm\).001 &.036\(\pm\).002 & 12.27\(\pm\).006 \\ CDBP &.0073\(\pm\).001 &.0100\(\pm\).005 &.7809\(\pm\).003 &.0451\(\pm\).002 &.0165\(\pm\).003 &.0621\(\pm\).005 &.217\(\pm\).001 &.044\(\pm\).001 & 12.05\(\pm\).006 \\ EM &.0065\(\pm\).001 &.0013\(\pm\).004 &.7802\(\pm\).002 &.0478\(\pm\).001 &.0166\(\pm\).001 &.9819\(\pm\).006 &.197\(\pm\).002 &.041\(\pm\).002 & 12.24\(\pm\).009 \\ PropCare & **.0123\(\pm\).001** & **.0114\(\pm\).001** & **.**0084\(\pm\).001** & **.**0580\(\pm\).005** & **.**0201\(\pm\).001** & **.**052\(\pm\).002** & **.**138\(\pm\).001** & **.**038\(\pm\).003** & **12.40\(\pm\).009** \\ \hline \hline \end{tabular} Results are reported as the average of 5 runs (mean\(\pm\)std). Except Ground-truth, best results are bolded and runners-up are underlined.
\end{table}
Table a.2: Influence of exposure status on causal effect. | ## Review
### Summary
This paper proposes a novel propensity estimation model for causality-based recommendation systems that operates without requiring exposure data. The authors utilize the correlation between item popularity and propensity to derive effective recommendations, effectively addressing a significant challenge in the field. The research presents a theoretical framework supported by a series of experiments, demonstrating superior performance over baseline methods. The motivation for the work is well articulated, and while the results are promising, the paper also raises important questions regarding the limitations of the selected datasets and the empirical validation of its theoretical assumptions.
### Strengths
- The paper is well-written and easy to follow, with a clear motivation and thorough literature review.
- The proposed model leverages a strong assumption about the relationship between popularity and propensity, leading to novel insights in causal recommendations.
- Experiments are conducted on multiple benchmark datasets, showing the proposed method outperforms baseline models.
- The theoretical analysis is robust, and the ablation studies provide valuable insights into the model's performance.
### Weaknesses
- The choice of KL-divergence-based regularization is not adequately justified and lacks empirical validation.
- The evaluation framework relies heavily on a single baseline model (DLCE) and does not compare with a wider range of state-of-the-art causal recommendation approaches.
- The datasets used are relatively small, raising questions about the scalability and robustness of the proposed method.
- Certain metrics used for evaluation are not explained within the main text, which may hinder understanding for readers unfamiliar with them.
### Questions
- How can the authors clarify whether the relationship between popularity and propensity is strictly causal or correlational?
- What specific contributions does the proposed model make compared to existing methods that also utilize popularity for estimating propensity?
- Can additional experiments on larger datasets be performed to validate the scalability of the proposed approach?
- What is the effect of the regularization term in propensity training, and could an ablation study provide further clarity on this?
### Soundness
**Score:** 3
**Description:** Good: The proposed model is theoretically sound but requires more empirical validation and stronger baseline comparisons.
### Presentation
**Score:** 3
**Description:** Good: The paper is clearly written but could benefit from better explanations of certain evaluation metrics and the regularization methodology.
### Contribution
**Score:** 3
**Description:** Good: The work presents a novel approach to propensity estimation in causal recommendations, but the technical contributions could be further distinguished from existing literature.
### Rating
**Score:** 6
**Description:** Weak Accept: The paper is technically solid with moderate-to-high impact potential, but it requires additional evaluation and clearer explanations to strengthen the claims.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper addresses an important and under-explored problem in recommendation systems. While there are some weaknesses regarding the empirical validation and the choice of baselines, the overall contributions, clarity of presentation, and theoretical insights outweigh these concerns. The innovative approach to estimating propensity without exposure data provides significant value to the field.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Neuro-symbolic Learning Yielding Logical Constraints
Zenan Li\({}^{1}\) Yunpeng Huang\({}^{1}\) Zhaoyu Li\({}^{2}\)
Yuan Yao\({}^{1}\) Jingwei Xu\({}^{1}\) Taolue Chen\({}^{3}\) Xiaoxing Ma\({}^{1}\) Jian Lu\({}^{1}\)
\({}^{1}\)State Key Lab of Novel Software Technology, Nanjing University, China
\({}^{2}\)Department of Computer Science, University of Toronto, Canada
\({}^{3}\)School of Computing and Mathematical Sciences, Birkbeck, University of London, UK
{lizn, hyp}@smail.nju.edu.cn, [email protected],
[email protected], {y.yao,jingweix,xxm,lj}@nju.edu.cn
###### Abstract
Neuro-symbolic systems combine neural perception and logical reasoning, representing one of the priorities of AI research. End-to-end learning of neuro-symbolic systems is highly desirable, but remains to be challenging. Resembling the distinction and cooperation between System 1 and System 2 of human thought (a la Kahneman), this paper proposes a framework that fuses neural network training, symbol grounding, and logical constraint synthesis to support learning in a weakly supervised setting. Technically, it is cast as a game with two optimization problems which correspond to neural network learning and symbolic constraint learning respectively. Such a formulation naturally embeds symbol grounding and enables the interaction between the neural and the symbolic part in both training and inference. The logical constraints are represented as cardinality constraints, and we use the trust region method to avoid degeneracy in learning. A distinguished feature of the optimization lies in the Boolean constraints for which we introduce a difference-of-convex programming approach. Both theoretical analysis and empirical evaluations substantiate the effectiveness of the proposed framework.
## 1 Introduction
Perception and reasoning serve as fundamental human abilities that are intrinsically linked within the realm of human intelligence [14, 1]. The objective of our study is to develop a learning framework for neuro-symbolic systems (e.g., the one illustrated in Figure 1), enabling simultaneous learning of neural perception and symbolic reasoning.
The merit of neuro-symbolic learning lies in resembling the integration of System 1 and System 2 of human minds [14, 15, 16]. First, it eliminates unnatural and sometimes costly human labeling of the latent variables and conducts learning in an end-to-end fashion. Second, it generates not only a neural network for perception, but also a set of explicit (symbolic) constraints enabling exact and interpretable logical reasoning. Last but not least, the mutually beneficial interaction between the neural and the symbolic parts during both training and inference stages potentially achieves better performance than separated learning approaches.
However, existing approaches do not provide an adequate solution to the problem. They either (1) are not end-to-end, i.e., human intervention is employed to label the latent \(\mathbf{z}\) so that the task can be divided into a purely neural subtask of image classification and a purely symbolic subtask of constraint solving, or (2) are not interpretable, i.e., can only approximate symbolic reasoning with neural network but without explicit logical constraints generated (e.g., [20], [21]), which inevitably sacrifices the exactness and interpretability of symbolic reasoning, resulting in an inaccurate black-box predictor but not a genuine neuro-symbolic system.
We argue that end-to-end and interpretable neuro-symbolic learning is extremely challenging due to the _semantic and representation gaps_ between the neural part and the symbolic part. The semantic gap caused by the latency of the intermediate symbol (i.e., \(\mathbf{z}\) in Figure 1) makes the neural network training lack effective supervision and the logic constraint synthesis lack definite inputs. The representation gap between the differentiable neural network and the discrete symbol logic makes it difficult to yield explicit symbolic constraints given the continuous neural network parameters. Despite existing proposals mitigating some of these obstacles such as visual symbol grounding (Topan et al., 2021; Li et al., 2023), softened logic loss (Kimmig et al., 2012; Xu et al., 2018), semidefinite relaxation (Wang et al., 2019), etc., none of them realize the full merit of neuro-symbolic learning.
In this paper, we propose a new neuro-symbolic learning framework directly meeting the challenges. It bridges the semantic gap with an efficient symbol grounding mechanism that models the cooperative learning of both the neural network and the logical constraint as a bilevel optimization problem. It bridges the representation gap by employing difference-of-convex (DC) programming as a relaxation technique for Boolean constraints in the optimization. DC programming ensures the convergence to explicit logical constraints, which enables exact symbolic reasoning with powerful off-the-shell tools such as SAT/SMT solvers (Een and Sorensson, 2006; Bailleux and Boufkhad, 2003; Bailleux et al., 2006) during the inference stage. In addition, to address degeneracy in logical constraint learning, i.e., the tendency to learn only trivial logical constraints (e.g., resulting in simple rules insufficient to solve SudoKu), we introduce an additional trust region term (Boyd et al., 2004; Conn et al., 2000), and then employ the proximal point algorithm in the learning of logical constraints.
We provide a theoretical analysis of the convergence of our algorithm, as well as the efficacy of the DC relaxation in preserving the exactness and the trust region in preventing degeneracy. Empirical evaluations with four tasks, viz. Visual Sudoku Solving, Self-Driving Path Planning, Chained XOR, and Nonograms, demonstrate the new learning capability and the significant performance superiority of the proposed framework.
_Organization._ Section 2 formulates our neuro-symbolic learning framework. Section 3 details the algorithm and theoretical analysis. Section 4 presents empirical evaluations. Section 5 covers related work. Section 6 discusses the limitations. Section 7 concludes the paper.
## 2 Neuro-symbolic Learning Framework
In this paper, we focus on end-to-end neuro-symbolic systems comprising two components: (1) neural network \(f_{\mathbf{\theta}}\colon\mathcal{X}\to\mathcal{Z}\), which transforms the raw input \(\mathbf{x}\in\mathcal{X}\) into a latent state \(\mathbf{z}\in\mathcal{Z}\); and (2) symbolic reasoning \(g_{\mathbf{\phi}}\colon\mathcal{Z}\to\mathcal{Y}\), which deduces the final output \(\mathbf{y}\in\mathcal{Y}\) from state \(\mathbf{z}\in\mathcal{Z}\). Both components are built simultaneously, taking only the input \(\mathbf{x}\) and output \(\mathbf{y}\) as supervision. We assume that \(\mathcal{Z}\) and \(\mathcal{Y}\) are represented by finite sets, and thus we can encode \(\mathbf{z}\) and \(\mathbf{y}\) by binary vectors using
Figure 1: An example of neuro-symbolic learning for visual SudoKu solving. In this task, the neural network is employed to transform the puzzle image (strawberry etc.) into its corresponding symbols, while symbolic reasoning is utilized to produce the puzzle’s solution. Importantly, the neuro-symbolic learning task is framed in a _weakly supervised_ setting, where only the raw input (the puzzle image \(\mathbf{x}\)) and the final output (the puzzle solution \(\mathbf{y}\), but without numbers in \(\mathbf{z}\)) is observed.
one-hot encoding. For ease of discussion, we directly define \(\mathcal{Z}\) and \(\mathcal{Y}\) to be spaces \(\mathcal{B}^{u}\) and \(\mathcal{B}^{v}\) of Boolean vectors, respectively, where \(\mathcal{B}=\{0,1\}\).
Unlike existing work (e.g., SATNet (Wang et al., 2019)) that simulates logical reasoning in an _implicit_ and _approximate_ way via a network layer, our key insight is to learn _explicit_ logical constraints \(h_{\mathbf{\phi}}\colon\mathcal{B}^{u+v}\to\mathcal{B}\) on \((\mathbf{z},\mathbf{y})\) in the training phrase, which allow to perform _exact_ reasoning by off-the-shelf constraint solvers (e.g., SMT solvers) in the inference phrase. Fig. 1 illustrates our neuro-symbolic learning framework.
The latent state \(\mathbf{z}\) enables the interaction between neural perception and logical reasoning. If \(\mathbf{z}\) were observable, we could perform the learning of neural network and logical constraint by solving the following two separate optimization problems:
\[\mathbf{\theta}=\arg\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x},\mathbf{z})\sim \mathcal{D}_{1}}[\ell_{1}(f_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z})],\qquad\mathbf{ \phi}=\arg\min_{\mathbf{\phi}}\mathbb{E}_{(\mathbf{z},\mathbf{y})\sim\mathcal{D}_{ 2}}[\ell_{2}(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y}),1)],\]
where \(\ell_{1}(f_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z})\) refers to the error between network prediction \(f_{\mathbf{\theta}}(\mathbf{x})\) and the actual symbol \(\mathbf{z}\), and \(\ell_{2}(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y}),1)\) refers to the (un)satisfaction degree of the learned logical constraints \(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y})=1\).
Nevertheless, with \(\mathbf{z}\) being latent, these two problems become tightly coupled:
\[\mathbf{\theta}=\arg\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim \mathcal{D}}[\ell_{1}(f_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z})],\quad\mathbf{\phi} =\arg\min_{\mathbf{\phi}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}[\ell_ {2}(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y}),1)],\] (1) s.t. \[h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y})=1,\mathbf{z}\in\mathcal{Z}; \text{s.t.}\qquad\quad\mathbf{z}=f_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z}\in \mathcal{Z}.\]
We further surrogate the constraint satisfaction by loss functions, and obtain essentially a game formulation as follows:
\[\mathbf{\theta}=\arg\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim \mathcal{D}}[\ell_{1}(f_{\mathbf{\theta}}(\mathbf{x}),\bar{\mathbf{z}})], \mathbf{\phi}=\arg\min_{\mathbf{\phi}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim \mathcal{D}}[\ell_{2}(h_{\mathbf{\phi}}(\bar{\mathbf{z}},\mathbf{y}),1)],\] \[\text{s.t.}\quad\bar{\mathbf{z}}=\arg\min_{\mathbf{z}\in \mathcal{Z}}\ell_{2}(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y}),1); \text{s.t.}\quad\bar{\mathbf{z}}=\arg\min_{\mathbf{z}\in\mathcal{Z}}\ell_{1} (f_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z}).\]
Three players are involved in this game: neural network \(f_{\mathbf{\theta}}\) and logical constraint \(h_{\mathbf{\phi}}\) pursue optimal (prediction/satisfaction) accuracy, while \(\mathbf{z}\) strives for the grounding that integrates the network prediction and logical reasoning.
### Efficient and Effective Logical Constraint Learning
For _efficient_ learning of logical constraints, we adopt cardinality constraint (Syrjanen, 2009, 2004; Fiorini et al., 2021) to represent logical constraint.1Cardinality constraints can be easily arithmetized, enabling the conventional optimization method and avoiding the computationally expensive model counting or sampling (Manhave et al., 2018; Xu et al., 2018; Li et al., 2020, 2023), which significantly boosts the learning efficiency. In addition, the conjunctive normal form and cardinality constraints can be easily converted from each other, ensuring not only the expressiveness of the learned constraints, but also the seamless compatibility with existing reasoning engines.
Footnote 1: Cardinality constraints express propositional logic formulae by constraining the number of variables being true. For example, \(x\lor y\vee\neg z\) is encoded as \(x+y+\bar{z}\geq 1\).
Formally, we denote the column concatenation of \(\mathbf{z}\) and \(\mathbf{y}\) by \((\mathbf{z};\mathbf{y})\), and define a cardinality constraint \(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y}):=\mathbf{w}^{\mathsf{T}}(\mathbf{z};\mathbf{y })\in[b_{\min},b_{\max}]\), where \(\mathbf{\phi}=(\mathbf{w},b_{\min},b_{\max})\), \(\mathbf{w}\in\mathcal{B}^{u+v}\) is a Boolean vector, and \(b_{\min},b_{\max}\in\mathcal{N}_{+}\) are two positive integers. Moreover, we can directly extend \(h_{\mathbf{\phi}}\) to the matrix form representing \(m\) logical constraints, i.e.,
\[h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y}):=\mathbf{W}(\mathbf{z};\mathbf{y})=(\mathbf{w}_{1 }^{\mathsf{T}}(\mathbf{z};\mathbf{y}),\ldots,\mathbf{w}_{m}^{\mathsf{T}}(\mathbf{z };\mathbf{y}))\in[\mathbf{b}_{\min},\mathbf{b}_{\max}],\]
where \(\mathbf{\phi}=(\mathbf{W},\mathbf{b}_{\min},\mathbf{b}_{\max})\), \(\mathbf{W}:=(\mathbf{w}_{1}^{\mathsf{T}};\ldots;\mathbf{w}_{m}^{\mathsf{T}})\in\mathcal{B}^{ m\times(u+v)}\), and \(\mathbf{b}_{\min},\mathbf{b}_{\max}\in\mathcal{N}_{+}^{m}\).
For _effective_ learning of logical constraints, one must tackle the frequent problem of _degeneracy_ that causes incomplete or trivial constraints2. First, for the bounding box \([\mathbf{b}_{\min},\mathbf{b}_{\max}]\), any of its superset is a feasible result of logical constraint learning, but we actually expect the tightest one. To overcome this difficulty, we propose to use the classic mean squared loss, i.e., defining \(\ell_{2}(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y}),1)=\|\mathbf{W}(\mathbf{z};\mathbf{y})- \mathbf{b}\|^{2}\), where \(\mathbf{b}\) can be computed in optimization or pre-defined. Then, our logical constraint learning problem can be formulated as a Boolean least squares problem (Vandenberghe and Boyd, 2004). The unbiased property of least squares indicates that \(\mathbf{b}\approx(\mathbf{b}_{\min}+\mathbf{b}_{\max})/2\), and the minimum variance property of least squares ensures that we can achieve a tight bounding box \([\mathbf{b}_{\min},\mathbf{b}_{\max}]\)(Henderson, 1975; Bjorck, 1990; Hastie et al., 2009).
Second, it is highly possible that some of the \(m\) distinct logical constraints as indicated by matrix \(\mathbf{W}\) eventually degenerate to the same one during training, because the Boolean constraints and stochastic gradient descent often introduce some implicit bias (Gunasekar et al., 2017; Smith et al., 2021; Ali et al., 2020). To mitigate this problem, we adopt the trust region method (Boyd et al., 2004; Conn et al., 2000), i.e., adding constraints \(\|\mathbf{w}_{i}-\mathbf{w}_{i}^{(0)}\|\leq\lambda,i=1,\ldots,m\), where \(\mathbf{w}_{i}^{(0)}\) is a pre-defined centre point of the trust region. The trust region method enforces each \(\mathbf{w}_{i}\) to search in their own local region. We give an illustrative figure in Appendix D to further explain the trust region method.
To summarize, using the penalty instead of the trust region constraint, we can formulate the optimization problem of logical constraint learning in (1) as
\[\begin{split}\min_{(\mathbf{W},\mathbf{b})}&\mathbb{E}_{( \mathbf{x},\mathbf{y})\sim\mathcal{D}}[\|\mathbf{W}(\bar{\mathbf{z}};\mathbf{y})- \mathbf{b}\|^{2}]+\lambda\|\mathbf{W}-\mathbf{W}^{(0)}\|^{2},\\ \text{s.t.}&\bar{\mathbf{z}}=\arg\min_{\mathbf{z}\in \mathcal{Z}}\ell_{1}(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z}),\quad\mathbf{W} \in\mathcal{B}^{m\times(u+v)},\quad\mathbf{b}\in\mathcal{N}_{+}^{m}.\end{split} \tag{2}\]
### Neural Network Learning in Tandem with Constraint Learning
A key challenge underlines end-to-end neuro-symbolic learning is _symbol grounding_, which is to tackle the chicken-and-egg situation between network training and logical constraint learning: training the network requires the supervision of symbol \(\mathbf{z}\) that comes from solving the learned logical constraints, but the constraint learning needs \(\mathbf{z}\) as input recognized by the trained network. Specifically, since matrix \(\mathbf{W}\) is often underdetermined in high-dimensional cases, the constraint \(\bar{\mathbf{z}}=\arg\min_{\mathbf{z}\in\mathcal{Z}}\ell_{2}(h_{\mathbf{\phi}}( \mathbf{z},\mathbf{y}),1):=\|\mathbf{W}(\mathbf{z};\mathbf{y})-\mathbf{b}\|^{2}\) often has multiple minimizers (i.e., multiple feasible groundings of \(\mathbf{z}\)), all of which satisfy the logical constraints \(h_{\mathbf{\phi}}(\mathbf{z},\mathbf{y})=1\). Moreover, matrix \(\mathbf{W}\) is also a to-be-trained parameter, meaning that it is highly risky to determine the symbol grounding solely on the logical constraints.
To address these issues, instead of (approximately) enumerating all the feasible solutions via model counting or sampling (Manhaeve et al., 2018; Xu et al., 2018; Li et al., 2020; van Krieken et al., 2022; Li et al., 2023), we directly combine network prediction and logical constraint satisfaction to establish symbol grounding, owing to the flexibility provided by the cardinality constraints. Specifically, for given \(\alpha\in[0,+\infty)\), the constraint in network learning can be rewritten as follows,
\[\bar{\mathbf{z}}=\arg\min_{\mathbf{z}\in\mathcal{Z}}\|\mathbf{W}(\mathbf{z};\mathbf{y} )-\mathbf{b}\|^{2}+\alpha\|\mathbf{z}-\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\|^{2}.\]
The coefficient \(\alpha\) can be interpreted as the preference of symbolic grounding for network predictions or logical constraints. For \(\alpha\to 0\), the symbol grounding process can be interpreted as distinguishing the final symbol \(\mathbf{z}\) from all feasible solutions based on network predictions. For \(\alpha\to+\infty\), the symbol grounding process can be viewed as a "correction" step, where we revise the symbol grounding from network's prediction towards logical constraints. Furthermore, as we will show in Theorem 1 later, both symbol grounding strategies can finally converge to the expected results.
The optimization problem of network training in (1) can be written as
\[\begin{split}\min_{\mathbf{\theta}}&\mathbb{E}_{( \mathbf{x},\mathbf{y})\sim\mathcal{D}}[\|\bar{\mathbf{z}}-\mathbf{f}_{\mathbf{\theta }}(\mathbf{x})\|^{2}],\\ \text{s.t.}&\bar{\mathbf{z}}=\arg\min_{\bar{\mathbf{ z}}\in\mathcal{Z}}\|\mathbf{W}(\bar{\mathbf{z}};\mathbf{y})-\mathbf{b}\|^{2}+\alpha\| \bar{\mathbf{z}}-\mathbf{f}_{\mathbf{\theta}}(\mathbf{x})\|^{2}.\end{split} \tag{3}\]
where we also use the mean squared loss, i.e., \(\ell_{1}(\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}),\mathbf{z})=\|\mathbf{f}_{\mathbf{\theta}}( \mathbf{x})-\mathbf{z}\|^{2}\), for compatibility.
## 3 Algorithms and Analysis
Our general framework is given by (1), instantiated by (2) and (3). Both optimizations contain Boolean constraints of the form \(\|\mathbf{Q}\mathbf{u}-\mathbf{q}_{1}\|^{2}+\tau\|\mathbf{u}-\mathbf{q}_{2}\|^{2}\) where \(\mathbf{u}\) are Boolean variables.3 We propose to relax these Boolean constraints by _difference of convex_ (DC) programming (Tao and Hoai An, 1997; Yuille and Rangarajan, 2003; Lipp and Boyd, 2016; Hoai An and Tao, 2018). Specifically, a Boolean constraint \(u\in\{0,1\}\) can be rewritten into two constraints of \(u-u^{2}\geq 0\) and \(u-u^{2}\leq 0\). The first constraint is essentially a box constraint, i.e., \(u\in[0,1]\), which is kept in the optimization. The second one is concave, and we can _equivalently_ add it as a penalty term, as indicated by the following proposition (Bertsekas, 2015; Hansen et al., 1993; Le Thi and Ding Tao, 2001).
**Proposition 1**.: _Let \(\mathbf{e}\) denote the all-one vector. There exists \(t_{0}\geq 0\) such that for every \(t>t_{0}\), the following two problems are equivalent, i.e., they have the same optimum._
\[(P)\quad\min_{\mathbf{u}\in\{0,1\}^{n}}q(\mathbf{u}):=\|\mathbf{Q}\mathbf{u}-\mathbf{q }_{1}\|^{2}+\tau\|\mathbf{u}-\mathbf{q}_{2}\|^{2},\] \[(P_{t})\quad\min_{\mathbf{u}\in[0,1]^{n}}q^{t}(\mathbf{u}):=\|\mathbf{Q}\mathbf{u }-\mathbf{q}_{1}\|^{2}+\tau\|\mathbf{u}-\mathbf{q}_{2}\|^{2}+t(\mathbf{e}^{\mathsf{T}}\mathbf{u}- \mathbf{u}^{\mathsf{T}}\mathbf{u}).\]
_Remarks._ We provide more details, including the setting of \(t_{0}\), in Appendix A.
However, adding this penalty term causes non-convexity. Thus, DC programming further linearizes the penalty \(u-u^{2}\approx\tilde{u}-\tilde{u}^{2}+(u-\tilde{u})(1-2\tilde{u})\) at the given point \(\tilde{u}\), and formulates the problem in Proposition 1 as
\[\min_{\mathbf{u}\in[0,1]^{n}}\|\mathbf{Q}\mathbf{u}-\mathbf{q}_{1}\|^{2}+\tau\|\mathbf{u}-\mathbf{q}_{ 2}\|^{2}+t(\mathbf{e}-2\tilde{\mathbf{u}})^{\mathsf{T}}\mathbf{u}.\]
By applying this linearization, we achieve a successive convex approximation to the Boolean constraint (Razaviyayn, 2014), ensuring that the training is more stable and globally convergent (Lipp and Boyd, 2016). Furthermore, instead of fixing the coefficient \(t\), we propose to gradually increase it until the Boolean constraint is fully satisfied, forming an "annealing" procedure. We illustrate the necessity of this strategy in the following proposition (Beck and Teboulle, 2000; Xia, 2009).
**Proposition 2**.: _A solution \(\mathbf{u}\in\{0,1\}^{n}\) is a stationary point of (\(P_{t}\)) if and only if_
\[[\nabla q(\mathbf{u})]_{i}(1-2\mathbf{u}_{i})+t\geq 0,\quad i=1,\ldots,n.\]
_Then, if \(\mathbf{u}\in\{0,1\}^{n}\) is a global optimum of (P) (as well as (\(P_{t}\))), it holds that_
\[[\nabla q(\mathbf{u})]_{i}(1-2\mathbf{u}_{i})+\rho_{i}\geq 0,\quad i=1,\ldots,n,\]
_where \(\rho_{i}\) is the i-th diagonal element of \((\mathbf{Q}^{\mathsf{T}}\mathbf{Q}+\tau\mathbf{I})\)._
_Remarks._ The proposition reveals a trade-off of \(t\): a larger \(t\) encourages exploration of more stationary points satisfying the Boolean constraints, but a too large \(t\) may cause the converged point to deviate from the optimality of (P). Therefore, a gradual increase of \(t\) is sensible for obtaining the desired solution. Moreover, the initial minimization under small \(t\) results in a small gradient value (i.e., \(|[\nabla q(\mathbf{u})]_{i}|\)), thus a Boolean stationary point can be quickly achieved with a few steps of increasing \(t\).
### Algorithms
For a given dataset \(\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\), \(\mathbf{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{N})\) and \(\mathbf{Y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{N})\) represent the data matrix and label matrix respectively, and \(f_{\mathbf{\theta}^{(k)}}(\mathbf{X})\) denotes the network prediction at the \(k\)-th iteration.
**Logical constraint learning.** Eliminating the constraint in (2) by letting \(\mathbf{Z}=f_{\mathbf{\theta}^{(k)}}(\mathbf{X})\), the empirical version of the logical constraint learning problem at the (\(k\)+1)-th iteration is
\[\min_{(\mathbf{W},\mathbf{b})}\sum_{i=1}^{m}\|(f_{\mathbf{\theta}^{(k)}}( \mathbf{X});\mathbf{Y})\mathbf{w}_{i}-\mathbf{b}_{i}\|^{2}+\lambda\|\mathbf{w}_{i}-\mathbf{w}_ {i}^{(0)}\|^{2}+t_{1}(\mathbf{e}-2\mathbf{w}_{i}^{(k)})^{\mathsf{T}}\mathbf{w}_{i},\]
where \(\mathbf{w}_{1}^{(k)},\ldots,\mathbf{w}_{m}^{(k)}\) are parameters of logical constraints at the \(k\)-th iteration. In this objective function, the first term is the training loss of logical constraint learning, the second term is the trust region penalty to avoid degeneracy, and the last term is the DC penalty of the Boolean constraint.
To solve this problem, we adopt the proximal point algorithm (PPA) (Rockafellar, 1976; Parikh et al., 2014; Rockafellar, 2021), as it overcomes two challenges posed by stochastic gradient descent. First, stochastic gradient descent has an implicit inductive bias (Gunasekar et al., 2017; Ali et al., 2020; Zhang et al., 2021; Smith et al., 2021), causing different \(\mathbf{w}_{i},i=1,\ldots,m\), to converge to a singleton. Second, the data matrix \((f_{\mathbf{\theta}^{(k)}}(\mathbf{X});\mathbf{Y})\) is a \(0\)-\(1\) matrix and often ill-conditioned, leading to diverse or slow convergence rates of stochastic gradient descent.
Given \((\mathbf{W}^{(k)},\mathbf{b}^{(k)})\) at the \(k\)-th iteration, the update of PPA can be computed by
\[\mathbf{W}^{(k+1)}=(\mathbf{M}+(\lambda+\frac{1}{\gamma})\mathbf{I})^{-1}\Big{(}(f_{\mathbf{ \theta}^{(k)}}(\mathbf{X});\mathbf{Y})^{\mathsf{T}}\mathbf{b}^{(k)}+\lambda\mathbf{W}^ {(0)}+(\frac{1}{\gamma}+t_{1})\mathbf{W}^{(k-1)}-\frac{t_{1}}{2}\mathbf{E}\Big{)},\] \[\mathbf{b}^{(k+1)}=(1+\frac{1}{\gamma})^{-1}(\mathbf{W}^{(k)}(f_{\mathbf{ \theta}^{(k)}}(\mathbf{X});\mathbf{Y})^{\mathsf{T}}\mathbf{e}+\frac{1}{\gamma}\mathbf{ b}^{(k)}), \tag{4}\]
where \(\gamma>0\) is the step size of PPA, \(\mathbf{E}\) is an all-ones matrix, and \(\mathbf{M}=(f_{\mathbf{\theta}^{(k)}}(\mathbf{X});\mathbf{Y})^{\mathsf{T}}(f_{\mathbf{ \theta}^{(k)}}(\mathbf{X});\mathbf{Y})\). Note that the computation of matrix inverse is required, but it is not an issue because \(\mathbf{M}\) is positive semidefinite, and so is the involved matrix, which allows the use of Cholesky decomposition to compute the inverse. Moreover, exploiting the low rank and the sparsity of \(\mathbf{M}\) can significantly enhance the efficiency of the computation.
**Neural network training.** By adding the DC penalty, the constraint (i.e., symbol grounding) in (3) is
\[\bar{\mathbf{z}}=\arg\min\|\mathbf{W}(\bar{\mathbf{z}};\mathbf{y})-\mathbf{b}\|^{2}+ \alpha\|\bar{\mathbf{z}}-f_{\mathbf{\theta}^{(k)}}(\mathbf{x})\|^{2}+t_{2}(\mathbf{e}- 2(\bar{\mathbf{z}}^{(k)})^{\mathsf{T}})\bar{\mathbf{z}}.\]
We can also compute the closed-form solution, i.e.,
\[\bar{\mathbf{Z}}=\left(\mathbf{b}\mathbf{W}+(\alpha+t_{2})\bar{\mathbf{Z}}^{(k)}+ \alpha f_{\mathbf{\theta}^{(k)}}(\mathbf{X})-\frac{t_{2}}{2}\mathbf{E}\right)\left(\bm {W}^{\mathsf{T}}\mathbf{W}+\alpha\mathbf{I}\right)^{-1}. \tag{5}\]
Similarly, the low-rank and sparsity properties of \(\mathbf{W}^{\mathsf{T}}\mathbf{W}\) ensure an efficient computation of matrix inverse. Finally, the parameter \(\mathbf{\theta}\) of network is updated by stochastic gradient descent,
\[\mathbf{\theta}^{(k+1)}=\mathbf{\theta}^{(k)}-\eta\nabla_{\mathbf{\theta}}f_{\mathbf{\theta}^{ (k)}}(\mathbf{x}_{i})\sum_{i=1}^{N}(\bar{\mathbf{z}}_{i}-f_{\mathbf{\theta}^{(k)} }(\mathbf{x}_{i})). \tag{6}\]
The overall algorithm is summarized in Algorithm 1, which mainly involves three iterative steps: (1) update the logical constraints by combining the network prediction and observed output; (2) correct the symbol grounding by revising the prediction to satisfy logical constraints; (3) update the neural network by back-propagating the corrected symbol grounding with observed input.
```
Set step sizes \((\gamma,\eta)\), and penalty coefficients \((\lambda,t_{1},t_{2})\). Randomly generate an initial matrix \(\mathbf{W}^{(0)}\) under \([0,1]\) uniform distribution. for\(k=0,1,\ldots,K\)do Randomly draw a batch \(\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\) from training data. Compute the predicted symbol \(\mathbf{z}_{i}=f_{\mathbf{\theta}^{(k)}}(\mathbf{x}_{i}),i=1,\ldots,N\). \(\triangleright\)Network prediction Update \((\mathbf{W},\mathbf{b})\) from \((\mathbf{z}_{i},\mathbf{y}_{i}),i=1,\ldots,m\), by PPA update (Eq. 4). \(\triangleright\)Constraint learning Correct the symbol grounding \(\bar{\mathbf{z}}\) from \(\mathbf{z}_{i}\) to logical constraints \(h_{\phi}\) (Eq. 5). \(\triangleright\)Symbol grounding Update \(\mathbf{\theta}\) from \((\mathbf{x}_{i},\bar{\mathbf{z}}_{i}),i=1,\ldots,m\), by SGD update (Eq. 6). \(\triangleright\)Network training if\(\mathbf{W}\notin\mathcal{B}^{m\times(u+v)}/\bar{\mathbf{Z}}\notin\mathcal{B}^{N \times(u+v)}\)then Increasing \(t_{1}/t_{2}\). \(\triangleright\)Enforcing DC penalty endif endfor Estimate \((\mathbf{b}_{\min},\mathbf{b}_{\max})\) based on network \(f_{\mathbf{\theta}}\) and logical constraints \(h_{\phi}\) from training data.
```
**Algorithm 1** Neuro-symbolic Learning Procedure
### Theoretical Analysis
**Theorem 1**.: _With an increasing (or decreasing) \(\alpha\), the constraint learning and network training performed by Algorithm 1 converge to the stationary point of (2) and (3), respectively. Specifically, it satisfies_
\[\mathbb{E}[\|\nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k})\|^{2}]=\mathcal{ O}(\frac{1}{\sqrt{K+1}}),\quad and\quad\mathbb{E}[\|\nabla_{\mathbf{\phi}}\ell_{2}( \mathbf{\phi}^{k})\|^{2}]=\mathcal{O}(\frac{1}{\sqrt{K+1}}).\]
_Remarks._ The proof and additional results can be found in Appendix B. In summary, Theorem 1 confirms the convergence of our neuro-symbolic learning framework and illustrates its theoretical complexity. Furthermore, note that an increase (or decrease) in \(\alpha\) indicates a preference for correcting the symbol grounding over network prediction (or logical constraint learning). In practice, we can directly set a small (or large) enough \(\alpha\) instead.
Next, we analyze the setting of centre points \(\mathbf{W}^{(0)}\) in the trust region penalty as follows.
**Theorem 2**.: _Let \(\mathbf{w}_{1}^{(0)}\in[0,1]^{n}\) and \(\mathbf{w}_{2}^{(0)}\in[0,1]^{n}\) be two initial points sampled from the uniform distribution. For given \(t\geq 0\), the probability that the corresponding logical constraints \(\mathbf{\phi}_{1}\) and \(\mathbf{\phi}_{2}\) converge to the same (binary) stationary point \(\mathbf{\phi}\) satisfies_
\[\Pr(\mathbf{\phi}_{1}=\mathbf{\phi},\mathbf{\phi}_{2}=\mathbf{\phi})\leq\prod_{i=1}^{n}\min \big{\{}\frac{1}{2\lambda}([\nabla q(\mathbf{u})]_{i}(1-2\mathbf{u}_{i})+t),1\big{\}} ^{2}.\]
_Remarks._ The proof is in Appendix C. In a nutshell, Theorem 2 shows that the probability of \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) degenerating to the same logical constraint can be very small provided suitably chosen \(\lambda\) and \(t\). Note that \(\lambda\) and \(t\) play different roles. As shown by Proposition 2, the coefficient \(t\) in DC penalty ensures the logical constraint learning can successfully converge to a sensible result. The coefficient \(\lambda\) in trust region penalty enlarges the divergence of convergence conditions between two distinct logical constraints, thereby preventing the degeneracy effectively.
## 4 Experiments
We carry out experiments on four tasks, viz., chained XOR, Nonogram, visual Sudoku solving, and self-driving path planning. We use Z3 SMT (MaxSAT) solver [Moura and Bjorner, 2008] for symbolic reasoning. Other implementation details can be found in Appendix E. The experimental results of chained XOR and Nonogram tasks are detailed in Appendix F due to the space limit. The code is available at [https://github.com/Lizn-zm/Nesy-Programming](https://github.com/Lizn-zm/Nesy-Programming).
### Visual Sudoku Solving
**Datasets.** We consider two \(9\times 9\) visual SudoKu solving datasets, i.e., the SATNet dataset [Wang et al., 2019, Topan et al., 2021]4 and the RRN dataset [Yang et al., 2023], where the latter is more challenging (17 - 34 versus 31 - 42 given digits in each puzzle). Both datasets contain 9K/1K training/test examples, and their images are all sampled from the MNIST dataset. We typically involve two additional transfer tasks, i.e., training the neuro-symbolic system on SATNet dataset (resp. RRN dataset), and then evaluating the system on RRN dataset (resp. SATNet dataset).
Footnote 4: The SATNet dataset is originally created by Wang et al. [2019]. However, Topan et al. [2021] point out that the original dataset has the label leakage problem, which was fixed by removing the labels of given digits.
**Baselines.** We compare our method with four state-of-the-art methods, i.e., RRN [Palm et al., 2018], SATNet [Wang et al., 2019], SATNet* [Topan et al., 2021], and L1R32H4 [Yang et al., 2023]. RRN is modified to match visual SudoKu as done by Yang et al. [2023]. SATNet* is an improved version of SATNet that addresses the symbol grounding problem by introducing an additional pre-clustering step. As part of our ablation study, we introduce two variants of our method (NTR and NDC) where NTR removes the trust region penalty (i.e., setting \(\lambda=0\)), and NDC removes the DC penalty (i.e., fixing \(t_{1}=t_{2}=0\)) and directly binarizes \((\mathbf{W},\mathbf{b})\) as the finally learned logical constraints.
**Results.** We report the accuracy results (i.e., the percentage of correctly recognized boards, correctly solved boards, and both) in Table 1.5 A more detailed version of our experimental results is given in Appendix F. The results show that our method significantly outperforms the existing methods in all cases, and both trust region penalty and DC penalty are critical design choices. The solving accuracy is slightly higher than the perception accuracy, as the MaxSAT solver may still solve the problem correctly even when the perception result is wrong. Notably, our method precisely learns all logical constraints,6 resulting in a logical reasoning component that (1) achieves full accuracy when the neural perception is correct; (2) ensures robust results on transfer tasks, in comparison to the highly sensitive existing methods.
Footnote 5: We successfully reproduce the baseline methods, achieving consistent results with Yang et al. [2023].
Footnote 6: We need \(324\) (\(=9\times 9\times 4\): each row/column/block should fill 1 - 9, and each cell should be in 1 - 9) ground-truth cardinality constraints for the Sudoku task, whose rank is \(249\). After removing redundant constraints, our method learns exact \(324\) logical constraints that are one-to-one corresponding to the ground-truth.
Furthermore, we plot some training curves in Figure 2. The left two figures depict the training curves of neural perception accuracy on the SATNet dataset and RRN dataset, which demonstrate the extremely higher efficiency of our method in symbol grounding compared with the best competitor L1R32H4. We also compute the rank of matrix \(\mathbf{W}\) to evaluate the degeneracy of logical constraint learning. The results are presented in the right two figures, illustrating that logical constraints learned by our method are complete and precise. In contrast, the ablation methods either fail to converge to the correct logical constraints or result in a degenerate outcome.
### Self-driving Path Planning
**Motivation.** Self-driving systems are fundamentally neuro-symbolic, where the primary functions are delineated into two components: object detection empowered by neural perception and path planning driven by symbolic reasoning. Neuro-symbolic learning has great potential in self-driving, e.g., for learning from demonstrations (Schaal, 1996) and to foster more human-friendly driving patterns (Sun et al., 2021; Huang et al., 2021).
**Datasets.** We simulate the self-driving path planning task based on two datasets, i.e., Kitti (Geiger et al., 2013) and nuScenes (Caesar et al., 2020). Rather than provide the label of object detection, we only use planning paths as supervision. To compute planning paths, we construct obstacle maps with \(10\times 10\) grids, and apply the \(A^{*}\) algorithm with fixed start points and random end points. Note that Kitti and nuScenes contain 6160/500 and 7063/600 training/test examples, respectively, where nuScenes is more difficult (7.4 versus 4.6 obstacles per image on average).
**Baselines.** We include the best competitor L1R32H4 (Yang et al., 2023) in the previous experiment as comparison. Alongside this, we also build an end-to-end ResNet model (denoted by ResNet) (He et al., 2016) and an end-to-end recurrent transformer model (denoted by RTNet) (Hao et al., 2019). These models take the scene image, as well as the start point and the end point, as the input, and directly output the predicted path. Finally, as a reference, we train a ResNet model with direct supervision (denoted by SUP) by using labels of object detection, and the logical reasoning is also done by the \(A^{*}\) algorithm.
**Results.** We include the F\({}_{1}\) score of predicted path grids, the collision rate of the planning path, and the distance error between the shortest path and the planning path (only computed for safe paths) in Table 2. The results show that our method achieves the best performance on both datasets, compared with the alternatives. Particularly, the existing state-of-the-art method L1R32H4 fails on this task,
\begin{table}
\begin{tabular}{c c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{SATNet dataset} & \multicolumn{3}{c}{RRN dataset} & \multicolumn{3}{c}{SATNet \(\rightarrow\) RRN} \\ \cline{2-11} & Percep. & Solving & Total & Percep. & Solving & Total & Percep. & Solving & Total \\ \hline RRN & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet* & 72.7 & 75.9 & 67.3 & 75.7 & 0.1 & 0.1 & 80.8 & 1.4 & 1.4 \\ L1R32H4 & 94.1 & 91.0 & 90.5 & 87.7 & 65.8 & 65.7 & 84.8 & 21.3 & 21.3 \\ \hline NTR & 87.4 & 0.0 & 0.0 & 91.4 & 3.9 & 3.9 & 90.2 & 0.0 & 0.0 \\ NDC & 79.9 & 0.0 & 0.0 & 88.0 & 0.0 & 0.0 & 86.1 & 0.0 & 0.0 \\ \hline Ours & **95.5** & **95.9** & **95.5** & **93.1** & **94.4** & **93.1** & **93.9** & **95.2** & **93.9** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy results (%) of visual Sudoku task. Our method performs the best.
Figure 2: Training curves of accuracy (left) and rank (right). Our method significantly boosts the efficiency of symbol grounding, and accurately converges to ground-truth constraints.
resulting in a high collision rate. Our method is nearly comparable to the supervised reference model SUP on the Kitti dataset. On the nuScenes dataset, our method even produces slightly less distance error of safe paths than the SUP method.
## 5 Related Work
**Neuro-symbolic learning.** Neuro-symbolic learning has received great attention recently. For instance, Dai et al. (2019) and Corapi et al. (2010) suggest bridging neural perception and logical reasoning via an abductive approach, where a logic program is abstracted from a given knowledge base. To reduce reliance on knowledge bases, Ciravegna et al. (2020) and Dong et al. (2019) directly represent and learn constraints using neural networks. However, the learned constraints are still uninterpretable. To improve interpretability, Wang et al. (2019) introduce SATNet, a method that relaxes the MaxSAT problem with semidefinite programming and incorporates it as a layer into neural networks. SATNet is further followed up by several works (Topan et al., 2021; Lim et al., 2022; Yang et al., 2023). However, how to explicitly extract and use the learned constraints is still unclear for these works. In contrast to the existing neuro-symbolic learning methods, our method can synthesize explicit logical constraints supporting exact reasoning by off-the-shelf reasoning engines.
**Constraint learning.** Our work is also related to constraint learning, which can be traced back to _Valiant's algorithm_(Valiant, 1984) and, more generally, _inductive logic programming_(Muggleton and De Raedt, 1994; Bratko and Muggleton, 1995; Yang et al., 2017; Evans and Grefenstette, 2018). However, Cropper and Dumancic (2022) highlight that inductive logic programming is limited when learning from raw data, such as images and speech, as opposed to perfect symbolic data. To this end, our method goes a step further by properly tackling the symbol grounding problem.
**Boolean quadratic programming and its relaxation.** Many constraint learning and logical reasoning tasks, e.g., learning Pseudo-Boolean function (Marichal and Mathonet, 2010), MaxSAT learning (Wang et al., 2019) and solving (Gomes et al., 2006), and SAT solving (Lipp and Boyd, 2016), can be formulated as Boolean quadratic programming (i.e., quadratic programming with binary variables) (Hammer and Rubin, 1970). However, commonly used techniques, such as branch and bound (Buchheim et al., 2012) and cutting plane (Kelley, 1960), cannot be applied in neuro-symbolic learning tasks. In literature, semidefinite relaxation (SDR) (d'Aspremont and Boyd, 2003; Gomes et al., 2006; Wang and Kolter, 2019) and difference-of-convex (DC) programming (Tao and Hoai An, 1997; Yuille and Rangarajan, 2003; Lipp and Boyd, 2016; Hoai An and Tao, 2018) are two typical methods to relax Boolean constraints. Although SDR is generally more efficient, the tightness and recovering binary results from relaxation are still an open problem (Burer and Ye, 2020; Wang and Klinp-Karzan, 2022), compromising the exactness of logical reasoning. In this work, we choose DC programming and translate DC constraints to a penalty term with gradually increasing weight, so as to ensure that the Boolean constraints can be finally guaranteed.
## 6 Limitations
In this section, we discuss the limitations of our framework and outline some potential solutions.
**Expressiveness.** The theoretical capability of cardinality constraints to represent any propositional logic formula does not necessarily imply the practical ability to learn any such formula in our frame
\begin{table}
\begin{tabular}{c c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Kitti dataset} & \multicolumn{3}{c}{nuScenes} \\ \cline{2-7} & F\({}_{1}\) score \(\uparrow\) & Coll. rate \(\downarrow\) & Dist. Err. \(\downarrow\) & F\({}_{1}\) score \(\uparrow\) & Coll. Rate \(\downarrow\) & Dist. Err. \(\downarrow\) \\ \hline ResNet & 68.5\% & 54.0\% & 2.91 & 51.8\% & 68.1\% & 3.60 \\ RTNet & 77.3\% & 36.8\% & 2.89 & 55.9\% & 63.8\% & 2.94 \\ L1R32H4 & 11.9\% & 100.0\% & NA. & 12.0\% & 91.5\% & 100.0 \\ \hline Ours & **80.2\%** & **32.8\%** & **2.84** & **58.8\%** & **57.8\%** & **2.81** \\ \hline SUP & 84.9\% & 28.3\% & 2.75 & 74.6\% & 52.9\% & 2.90 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of self-driving path planning task. Our method performs the best.
work; this remains a challenge. Fundamentally, logical constraint learning is an inductive method, and thus different learning methods would have different inductive biases. Cardinality constraint-based learning is more suitable for tasks where the logical constraints can be straightforwardly translated into the cardinality form. A typical example of such a task is Sudoku, where the target CNF formula consists of at least 8,829 clauses (Lynce and Ouaknine, 2006), while the total number of target cardinality constraints stands at a mere 324.
Technically, our logical constraint learning prefers equality constraints (e.g., \(x+y=2\)), which actually induce logical conjunction (e.g., \(x\wedge y=\mathrm{T}\)) and may ignore potential logical disjunction which is represented by inequality constraints (e.g., \(x\lor y=\mathrm{T}\) is expressed by \(x+y\geq 1\)). To overcome this issue, a practical trick is to introduce some auxiliary variables, which is commonly used in linear programming (Fang and Puthenpura, 1993). Consider the disjunction \(x\lor y=\mathrm{T}\); here, the auxiliary variables \(z_{1},z_{2}\) help form two equalities, namely, \(x+y+z_{1}=2\) for \((x,y)=(\mathrm{T},\mathrm{T})\) and \(x+y+z_{2}=1\) for \((x,y)=(\mathrm{T},\mathrm{F})\) or \((x,y)=(\mathrm{F},\mathrm{T})\). One can refer to the Chain-XOR task (cf. Section F.1) for a concrete application of auxiliary variables.
**Reasoning efficiency.** The reasoning efficiency, particularly that of SMT solvers, during the inference phase can be a primary bottleneck in our framework. For instance, in the self-driving path planning task, when we scale the map size up to a \(20\times 20\) grid involving \(800\) Boolean variables (\(400\) variables for grid obstacles and \(400\) for path designation), the Z3 MaxSAT solver takes more than two hours for some inputs.
To boost reasoning efficiency, there are several practical methods that could be applied. One straightforward method is to use an integer linear program (ILP) solver (e.g., Gurobi) as an alternative to the Z3 MaxSAT solver. In addition, some learning-based methods (e.g., Balunovic et al. (2018)) may enhance SMT solvers in our framework. Nonetheless, we do not expect that merely using a more efficient solver can resolve the problem. The improve the scalability, a more promising way is to combine System 1 and System 2 also in the inference stage (e.g., Cornelio et al. (2023)). Generally speaking, in the inference stage, neural perception should first deliver a partial solution, which is then completed by the reasoning engine. Such a paradigm ensures fast reasoning via neural perception, drastically reducing the logical variables that need to be solved by the exact reasoning engine, thereby also improving its efficiency.
## 7 Conclusion
This paper presents a neuro-symbolic learning approach that conducts neural network training and logical constraint synthesis simultaneously, fueled by symbol grounding. The gap between neural networks and symbol logic is suitably bridged by cardinality constraint-based learning and difference-of-convex programming. Moreover, we introduce the trust region method to effectively prevent the degeneracy of logical constraint learning. Both theoretical analysis and empirical evaluations have confirmed the effectiveness of the proposed approach. Future work could explore constraint learning using large language models to trim the search space of the involved logical variables, and augment reasoning efficiency by further combining logical reasoning with neural perception.
## Acknowledgment
We are thankful to the anonymous reviewers for their helpful comments. This work is supported by the National Natural Science Foundation of China (Grants #62025202, #62172199). T. Chen is also partially supported by Birkbeck BEI School Project (EFFECT) and an overseas grant of the State Key Laboratory of Novel Software Technology under Grant #KFKT2022A03. Yuan Yao ([email protected]) and Xiaoxing Ma ([email protected]) are the corresponding authors.
## References
* Kahneman (2011) Daniel Kahneman. _Thinking, fast and slow_. macmillan, 2011.
* Booch et al. (2021) Grady Booch, Francesco Fabiano, Lior Horesh, Kiran Kate, Jonathan Lenchner, Nick Linck, Andreas Loreggia, Keerthiram Murgesan, Nicholas Mattei, Francesca Rossi, et al. Thinking fast and slow in ai. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 15042-15046, 2021.
* Yoshua and Gary (2020) Bengio Yoshua and Marcus Gary. Ai debate: The best way forward for ai, 2020. URL [https://montrealartificialintelligence.com/aidebate/](https://montrealartificialintelligence.com/aidebate/).
* LeCun (2022) Yann LeCun. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. _Open Review_, 62, 2022.
* Wang et al. (2019) Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In _International Conference on Machine Learning_, pages 6545-6554. PMLR, 2019.
* Yang et al. (2023) Zhun Yang, Adam Ishay, and Joohyung Lee. Learning to solve constraint satisfaction problems with recurrent transformer. In _The Eleventh International Conference on Learning Representations_, 2023.
* Topan et al. (2021) Sever Topan, David Rolnick, and Xujie Si. Techniques for symbol grounding with satnet. _Advances in Neural Information Processing Systems_, 34:20733-20744, 2021.
* Li et al. (2023) Zenan Li, Yuan Yao, Taolue Chen, Jingwei Xu, Chun Cao, Xiaoxing Ma, and Lu Jian. Softened symbol grounding for neuro-symbolic systems. In _The Eleventh International Conference on Learning Representations_, 2023.
* Kimmig et al. (2012) Angelika Kimmig, Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. A short introduction to probabilistic soft logic. In _Proceedings of the NIPS workshop on probabilistic programming: foundations and applications_, pages 1-4, 2012.
* Xu et al. (2018) Jingyi Xu, Zilu Zhang, Tal Friedman, Yitao Liang, and Guy Broeck. A semantic loss function for deep learning with symbolic knowledge. In _International conference on machine learning_, pages 5502-5511. PMLR, 2018.
* Een and Sorensson (2006) Niklas Een and Niklas Sorensson. Translating pseudo-boolean constraints into sat. _Journal on Satisfiability, Boolean Modeling and Computation_, 2(1-4):1-26, 2006.
* Bailleux and Boufkhad (2003) Olivier Bailleux and Yacine Boufkhad. Efficient cnf encoding of boolean cardinality constraints. In _Principles and Practice of Constraint Programming-CP 2003: 9th International Conference, CP 2003, Kinsale, Ireland, September 29-October 3, 2003. Proceedings 9_, pages 108-122. Springer, 2003.
* Bailleux et al. (2006) Olivier Bailleux, Yacine Boufkhad, and Olivier Roussel. A translation of pseudo-boolean constraints to sat. _Journal on Satisfiability, Boolean Modeling and Computation_, 2(1-4):191-200, 2006.
* Boyd et al. (2004) Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. _Convex optimization_. Cambridge university press, 2004.
* Conn et al. (2000) Andrew R Conn, Nicholas IM Gould, and Philippe L Toint. _Trust region methods_. SIAM, 2000.
* Syrjanen (2009) Tommi Syrjanen. _Logic programs and cardinality constraints: Theory and practice_. PhD thesis, PhD thesis, Helsinki University of Technology, 2009.
* Syrjanen (2004) Tommi Syrjanen. Cardinality constraint programs. In _Logics in Artificial Intelligence: 9th European Conference, JELIA 2004, Lisbon, Portugal, September 27-30, 2004. Proceedings 9_, pages 187-199. Springer, 2004.
* Fiorini et al. (2021) Samuel Fiorini, Tony Huynh, and Stefan Weltge. Strengthening convex relaxations of 0/1-sets using boolean formulas. _Mathematical programming_, 190:467-482, 2021.
* Fiorini et al. (2021)Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Deepproblog: Neural probabilistic logic programming. _advances in neural information processing systems_, 31, 2018.
* Li et al. (2020) Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu, and Song-Chun Zhu. Closed loop neural-symbolic learning via integrating neural perception, grammar parsing, and symbolic reasoning. In _International Conference on Machine Learning_, pages 5884-5894. PMLR, 2020.
* Vandenberghe & Boyd (2004) Lieven Vandenberghe and Stephen Boyd. _Convex optimization (Tutorial)_, volume 1. Cambridge university press Cambridge, 2004.
* Henderson (1975) Charles R Henderson. Best linear unbiased estimation and prediction under a selection model. _Biometrics_, pages 423-447, 1975.
* Bjorck (1990) Ake Bjorck. Least squares methods. _Handbook of numerical analysis_, 1:465-652, 1990.
* Hastie et al. (2009) Trevor Hastie, Robert Tibshirani, Jerome H Friedman, and Jerome H Friedman. _The elements of statistical learning: data mining, inference, and prediction_, volume 2. Springer, 2009.
* Gunasekar et al. (2017) Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. _Advances in Neural Information Processing Systems_, 30, 2017.
* Smith et al. (2021) Samuel L Smith, Benoit Dherin, David GT Barrett, and Soham De. On the origin of implicit regularization in stochastic gradient descent. _arXiv preprint arXiv:2101.12176_, 2021.
* Ali et al. (2020) Alnur Ali, Edgar Dobriban, and Ryan Tibshirani. The implicit regularization of stochastic gradient flow for least squares. In _International conference on machine learning_, pages 233-244. PMLR, 2020.
* van Krieken et al. (2022) Emile van Krieken, Thiviyan Thanapalasingam, Jakub M Tomczak, Frank van Harmelen, and Annette ten Teije. A-nesi: A scalable approximate method for probabilistic neurosymbolic inference. _arXiv preprint arXiv:2212.12393_, 2022.
* Tao & An (1997) Pham Dinh Tao and Le Thi Hoai An. Convex analysis approach to dc programming: theory, algorithms and applications. _Acta mathematica vietnamica_, 22(1):289-355, 1997.
* Yuille & Rangarajan (2003) Alan L Yuille and Anand Rangarajan. The concave-convex procedure. _Neural computation_, 15(4):915-936, 2003.
* Lipp & Boyd (2016) Thomas Lipp and Stephen Boyd. Variations and extension of the convex-concave procedure. _Optimization and Engineering_, 17:263-287, 2016.
* An & Tao (2018) Le Thi Hoai An and Pham Dinh Tao. Dc programming and dca: thirty years of developments. _Mathematical Programming_, 169(1):5-68, 2018.
* Bertsekas (2015) Dimitri Bertsekas. _Convex optimization algorithms_. Athena Scientific, 2015.
* Hansen et al. (1993) Pierre Hansen, Brigitte Jaumard, MicheLe Ruiz, and Junjie Xiong. Global minimization of indefinite quadratic functions subject to box constraints. _Naval Research Logistics (NRL)_, 40(3):373-392, 1993.
* Li & Tao (2001) Hoai An Le Thi and Pham Ding Tao. A continuous approch for globally solving linearly constrained quadratic. _Optimization_, 50(1-2):93-120, 2001.
* Razaviyayn (2014) Meisam Razaviyayn. _Successive convex approximation: Analysis and applications_. PhD thesis, University of Minnesota, 2014.
* Beck & Teboulle (2000) Amir Beck and Marc Teboulle. Global optimality conditions for quadratic optimization problems with binary constraints. _SIAM journal on optimization_, 11(1):179-188, 2000.
* Xia (2009) Yong Xia. New optimality conditions for quadratic optimization problems with binary constraints. _Optimization letters_, 3:253-263, 2009.
* Xu et al. (2018)R Tyrrell Rockafellar. Monotone operators and the proximal point algorithm. _SIAM journal on control and optimization_, 14(5):877-898, 1976.
* Parikh et al. (2014) Neal Parikh, Stephen Boyd, et al. Proximal algorithms. _Foundations and trends(r) in Optimization_, 1(3):127-239, 2014.
* Rockafellar (2021) R Tyrrell Rockafellar. Advances in convergence and scope of the proximal point algorithm. _J. Nonlinear and Convex Analysis_, 22:2347-2375, 2021.
* Zhang et al. (2021) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. _Communications of the ACM_, 64(3):107-115, 2021.
* de Moura and Bjorner (2008) Leonardo de Moura and Nikolaj Bjorner. Z3: An efficient smt solver. In _International conference on Tools and Algorithms for the Construction and Analysis of Systems_, pages 337-340. Springer, 2008.
* Palm et al. (2018) Rasmus Palm, Ulrich Paquet, and Ole Winther. Recurrent relational networks. _Advances in neural information processing systems_, 31, 2018.
* Schaal (1996) Stefan Schaal. Learning from demonstration. _Advances in neural information processing systems_, 9, 1996.
* Sun et al. (2021) Jiankai Sun, Hao Sun, Tian Han, and Bolei Zhou. Neuro-symbolic program search for autonomous driving decision module design. In _Conference on Robot Learning_, pages 21-30. PMLR, 2021.
* Huang et al. (2021) Junning Huang, Sirui Xie, Jiankai Sun, Qiurui Ma, Chunxiao Liu, Dahua Lin, and Bolei Zhou. Learning a decision module by imitating driver's control behaviors. In _Conference on Robot Learning_, pages 1-10. PMLR, 2021.
* Geiger et al. (2013) Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. _The International Journal of Robotics Research_, 32(11):1231-1237, 2013.
* Caesar et al. (2020) Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 11621-11631, 2020.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* Hao et al. (2019) Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. Modeling recurrence for transformer. _arXiv preprint arXiv:1904.03092_, 2019.
* Dai et al. (2019) Wang-Zhou Dai, Qiuling Xu, Yang Yu, and Zhi-Hua Zhou. Bridging machine learning and logical reasoning by abductive learning. _Advances in Neural Information Processing Systems_, 32, 2019.
* Corapi et al. (2010) Domenico Corapi, Alessandra Russo, and Emil Lupu. Inductive logic programming as abductive search. In _Technical communications of the 26th international conference on logic programming_. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2010.
* Ciravegna et al. (2020) Gabriele Ciravegna, Francesco Giannini, Stefano Melacci, Marco Maggini, and Marco Gori. A constraint-based approach to learning and explanation. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 3658-3665, 2020.
* Dong et al. (2019) Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. Neural logic machines. In _International Conference on Learning Representations_, 2019.
* Lim et al. (2022) Sangho Lim, Eun-Gyeol Oh, and Hongseok Yang. Learning symmetric rules with satnet. In _The 36th Conference on Neural Information Processing Systems (NeurIPS 2022)_. Neural information processing systems foundation, 2022.
* Valiant (1984) Leslie G Valiant. A theory of the learnable. _Communications of the ACM_, 27(11):1134-1142, 1984.
* Valiant (1986)Stephen Muggleton and Luc De Raedt. Inductive logic programming: Theory and methods. _The Journal of Logic Programming_, 19:629-679, 1994.
* Bratko and Muggleton (1995) Ivan Bratko and Stephen Muggleton. Applications of inductive logic programming. _Communications of the ACM_, 38(11):65-70, 1995.
* Yang et al. (2017) Fan Yang, Zhilin Yang, and William W Cohen. Differentiable learning of logical rules for knowledge base reasoning. _Advances in neural information processing systems_, 30, 2017.
* Evans and Grefenstette (2018) Richard Evans and Edward Grefenstette. Learning explanatory rules from noisy data. _Journal of Artificial Intelligence Research_, 61:1-64, 2018.
* Cropper and Dumancic (2022) Andrew Cropper and Sebastijan Dumancic. Inductive logic programming at 30: a new introduction. _Journal of Artificial Intelligence Research_, 74:765-850, 2022.
* Marichal and Mathonet (2010) Jean-Luc Marichal and Pierre Mathonet. Symmetric approximations of pseudo-boolean functions. _arXiv preprint arXiv:1004.2593_, 2010.
* Gomes et al. (2006) Carla P Gomes, Willem-Jan Van Hoeve, and Lucian Leahu. The power of semidefinite programming relaxations for max-sat. In _Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems: Third International Conference, CPAIOR 2006, Cork, Ireland, May 31-June 2, 2006. Proceedings 3_, pages 104-118. Springer, 2006.
* Hammer and Rubin (1970) Peter L Hammer and Abraham A Rubin. Some remarks on quadratic programming with 0-1 variables. _Revue francaise d'informatique et de recherche operationnelle. Serie verte_, 4(V3):67-79, 1970.
* Buchheim et al. (2012) Christoph Buchheim, Alberto Caprara, and Andrea Lodi. An effective branch-and-bound algorithm for convex quadratic integer programming. _Mathematical programming_, 135:369-395, 2012.
* Kelley (1960) James E Kelley, Jr. The cutting-plane method for solving convex programs. _Journal of the society for Industrial and Applied Mathematics_, 8(4):703-712, 1960.
* d'Aspremont and Boyd (2003) Alexandre d'Aspremont and Stephen Boyd. Relaxations and randomized methods for nonconvex qcqps. _EE392o Class Notes, Stanford University_, 1:1-16, 2003.
* Wang and Kolter (2019) Po-Wei Wang and J Zico Kolter. Low-rank semidefinite programming for the max2sat problem. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 33, pages 1641-1649, 2019.
* Burer and Ye (2020) Samuel Burer and Yinyu Ye. Exact semidefinite formulations for a class of (random and non-random) nonconvex quadratic programs. _Mathematical Programming_, 181(1):1-17, 2020.
* Wang and Kilinc-Karzan (2022) Alex L Wang and Fatma Kilinc-Karzan. On the tightness of sdp relaxations of qcqps. _Mathematical Programming_, 193(1):33-73, 2022.
* Lynce and Ouaknine (2006) Ines Lynce and Joel Ouaknine. Sudoku as a sat problem. In _AI&M_, 2006.
* Fang and Puthenpura (1993) Shu-Cherng Fang and Sarat Puthenpura. _Linear optimization and extensions: theory and algorithms_. Prentice-Hall, Inc., 1993.
* Balunovic et al. (2018) Mislav Balunovic, Pavol Bielik, and Martin Vechev. Learning to solve smt formulas. _Advances in Neural Information Processing Systems_, 31, 2018.
* Cornelio et al. (2023) Cristina Cornelio, Jan Stuehmer, Shell Xu Hu, and Timothy Hospedales. Learning where and when to reason in neuro-symbolic inference. In _The Eleventh International Conference on Learning Representations_, 2023.
* Bottou et al. (2018) Leon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. _SIAM review_, 60(2):223-311, 2018.
* Shalev-Shwartz et al. (2017) Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of gradient-based deep learning. In _International Conference on Machine Learning_, pages 3067-3075. PMLR, 2017.
* Shalev-Shwartz et al. (2018)
## Appendix A Proofs of DC technique
**Notations.** We define \(\mathbf{S}:=(\mathbf{Q}^{\mathsf{T}}\mathbf{Q}+\tau\mathbf{I})\), \(\mathbf{s}:=(\mathbf{Q}^{\mathsf{T}}\mathbf{q}_{1}+\tau\mathbf{q}_{2})\), and denote the largest eigenvalues and largest diagonal element of \(\mathbf{S}\) by \(\sigma_{\max}\) and \(\delta_{\max}\), respectively. Hence, the two problems can be equivalently rewritten as
\[\text{(P)}\ \ \min_{\mathbf{u}\in\{0,1\}^{n}}\mathbf{u}^{\mathsf{T}}\mathbf{S}\mathbf{u}-2 \mathbf{s}^{\mathsf{T}}\mathbf{u},\qquad\text{(P_{t})}\ \ \min_{\mathbf{u}\in[0,1]^{n}}\mathbf{u}^{ \mathsf{T}}(\mathbf{S}-t\mathbf{I})\mathbf{u}-(2\mathbf{s}-te)^{\mathsf{T}}\mathbf{u}.\]
### Proof of Proposition 1
Proof.: The results are primarily based on Bertsekas (2015, Proposition 1.3.4): the minima of a strictly concave function cannot be in the relative interior of the feasible set.
We first show that if \(t_{0}\geq\sigma_{\max}\), then the two problems are equivalent (Le Thi and Ding Tao, 2001, Theorem 1). Specifically, since \(\mathbf{S}-t\mathbf{I}\) is negative definite, problem (Pt) is strictly concave. Therefore, the minima should be in the vertex set of the feasible domain, which is consistent with problem (P).
We can further generalize this result to the case \(t_{0}\geq\delta_{\max}\)(Hansen et al., 1993, Proposition 1). In this case, considering the \(i\)-th component of \(\mathbf{u}\), its second-order derivative in problem (Pt) is \(2(\mathbf{S}_{ii}-t)\). Similarly, the strict concavity of \(\mathbf{u}_{i}\) ensures a binary solution, indicating the equivalence of problems (P) and (Pt).
### Proof of Proposition 2
Proof.: The Karush-Kuhn-Tucker (KKT) conditions of the problem (Pt) are as follows.
\[[2\mathbf{S}\mathbf{u}-2t\mathbf{u}-2\mathbf{s}+t\mathbf{e}]_{i}-\mathbf{\alpha}_{i}+ \mathbf{\beta}_{i}=\mathbf{0};\] \[\mathbf{u}_{i}\in[0,1]^{n};\quad\mathbf{\alpha}_{i}\geq\mathbf{0},\mathbf{\beta}_ {i}\geq\mathbf{0};\] \[\mathbf{\alpha}_{i}\mathbf{u}_{i}=0,\quad\mathbf{\beta}_{i}(\mathbf{u}_{i}-1)=0; \quad i=1,\ldots,n.\]
where \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) are multiplier vector. For \(\mathbf{u}\in\{0,1\}^{n}\), the KKT condition is equivalent to
\[\mathbf{\alpha}_{i}=[2\mathbf{S}\mathbf{u}-2t\mathbf{u}-2\mathbf{s}+t\mathbf{e}]_{i}(1- \mathbf{u}_{i})\geq 0,\quad\mathbf{\beta}_{i}=[2\mathbf{S}\mathbf{u}-2t\mathbf{u}-2\mathbf{s}+t\mathbf{e}]_{i }\mathbf{u}_{i}\leq 0.\]
By using \((1-2\mathbf{u}_{i})\in\{-1,1\}\), we can further combine the above two inequalities, and obtain
\[2[\mathbf{S}\mathbf{u}-\mathbf{s}]_{i}(1-2\mathbf{u}_{i})+t\geq 0,\quad i=1,\ldots,n.\]
On the other hand, if \(2[\mathbf{S}\mathbf{u}-\mathbf{s}]_{i}(1-2\mathbf{u}_{i})+t\geq 0\) holds for each \(i=1,\ldots,n\), it is easy to check that \(\mathbf{\alpha}\geq 0\) and \(\mathbf{\beta}\geq 0\), which proves the first part of the proposition.
The proof of the second part is a direct result of Beck and Teboulle (2000, Theorem 2.4). To be specific, if \(\mathbf{u}\) achieves a global minimum of (P), then \(q(\mathbf{u})\leq q(\mathbf{u}^{\prime})\) for any \(\mathbf{u}^{\prime}\in\{0,1\}^{n}\). Hence, we only flip the \(i\)-th value of \(\mathbf{u}\), i.e., considering \(\mathbf{u}_{i}\) and \(\mathbf{u}^{\prime}_{i}=1-\mathbf{u}_{i}\), and it holds that
\[\mathbf{u}^{\mathsf{T}}\mathbf{S}\mathbf{u}-2\mathbf{s}^{\mathsf{T}}\mathbf{u} \leq(\mathbf{u}^{\prime})^{\mathsf{T}}\mathbf{S}\mathbf{u}^{\prime}-2\mathbf{s}^{ \mathsf{T}}\mathbf{u}^{\prime}\] \[=(\mathbf{u}^{\mathsf{T}}\mathbf{S}\mathbf{u}-2\mathbf{s}^{\mathsf{T}}\mathbf{u})+2[ \mathbf{S}\mathbf{u}-\mathbf{s}]_{i}(1-2\mathbf{u}_{i})+\mathbf{S}_{ii}.\]
Rearranging the inequality, we obtain
\[2[\mathbf{S}\mathbf{u}-\mathbf{s}]_{i}(1-2\mathbf{u}_{i})\geq-\mathbf{S}_{ii},\quad i=1,\ldots,n,\]
which completes the proof.
## Appendix B Proof of Theorem 1
Proof.: **Notations.** We use \(\|\cdot\|\) to denote the \(\ell_{2}\) norm for vectors and Frobenius norm for matrices. We define
\[\varphi(\mathbf{\phi},\mathbf{\theta},\mathbf{\mathsf{Z}},\mathbf{\mathsf{Y}}):=\| \mathbf{\mathsf{Z}}\mathbf{w}_{u}+\mathbf{\mathsf{Y}}\mathbf{w}_{v}-\mathbf{b}\|^{2}+\alpha\|( \mathbf{\mathsf{Z}},\mathbf{\mathsf{Y}})-(f_{\mathbf{\theta}}(\mathbf{\mathsf{X}}),\mathbf{\mathsf{ Y}})\|^{2}+\lambda\|\mathbf{w}-\mathbf{w}^{0}\|^{2}.\]
For the loss functions of logic programming and network training, we assume \(\ell_{1}(\mathbf{\theta})\) and \(\ell_{2}(\mathbf{\phi})\) to be \(\mu_{\mathbf{\theta}}\) and \(\mu_{\mathbf{\phi}}\) smooth, respectively. For ease of presentation, we define \(\Delta^{k}=f_{\mathbf{\theta}^{k}}(\mathbf{\mathsf{X}})\mathbf{w}_{u}^{k}+\mathbf{\mathsf{Y}} \mathbf{w}_{v}^{k}-\mathbf{b}\)and let \(c_{\max}\) be the upper bound of \(\|\Delta^{k}\|\). Furthermore, by using the Woodbury identity formula, we can compute
\[(\mathbf{Z}^{k};\mathbf{Y}^{k}) =\arg\min_{(\mathbf{Z},\mathbf{Y})}\|\mathbf{Z}\mathbf{w}_{u}^{k}+ \mathbf{Y}\mathbf{w}_{v}^{k}-\mathbf{b}\|^{2}+\alpha\|(\mathbf{Z},\mathbf{Y})-(f_{\mathbf{ \theta}^{k}}(\mathbf{X}),\mathbf{Y})\|^{2}+\lambda\|\mathbf{w}-\mathbf{w}^{0}\|^{2}\] \[=(f_{\mathbf{\theta}^{k}}(\mathbf{X});\mathbf{Y})-\beta^{k}\Delta^{k}( \mathbf{w}^{k})^{\mathsf{T}},\quad\text{where}\quad\beta^{k}=\frac{1}{\alpha+\|\mathbf{ w}^{k}\|^{2}}.\]
Let \(\rho^{k}:=(\alpha\beta^{k})\), we have
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},f_{\mathbf{\theta}^{k}}(\mathbf{ X}),\mathbf{Y})-\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k}, \mathbf{Y}^{k}) =(1-((\alpha\beta^{k})^{2}+(1-\alpha\beta^{k})^{2})\|\Delta^{k} \|^{2}\] \[=2\rho^{k}(1-\rho^{k})\|\Delta^{k}\|^{2}.\]
**Update of \(\mathbf{\phi}\)**. We consider the single rule case (multiple rules can be directly decomposed), i.e., \(\mathbf{\phi}=(\mathbf{w},\mathbf{b})\) and \(\mathbf{b}=(b,\ldots;b)\). The update of \(\mathbf{\phi}\) is conducted on the loss function
\[\ell_{2}^{k}(\mathbf{w},\mathbf{b})=\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},f_{\mathbf{ \theta}^{k}}(\mathbf{X}),\mathbf{Y})=\|f_{\mathbf{\theta}^{k}}(\mathbf{X})\mathbf{w}_ {u}+\mathbf{Y}\mathbf{w}_{v}-\mathbf{b}\|^{2}+\lambda\|\mathbf{w}-\mathbf{w}^{0}\|^{2}.\]
The smallest and the largest eigenvalues of \((f_{\mathbf{\theta}^{k}}(\mathbf{X}),\mathbf{Y})^{\mathsf{T}}(f_{\mathbf{\theta}^{k}}( \mathbf{X}),\mathbf{Y})+\lambda\mathbf{I}\) are denoted by \(\sigma_{\min}\) and \(\sigma_{\max}\), respectively.
The PPA method updates \(\mathbf{w}\) by
\[\mathbf{w}^{k+1}=\arg\min_{\mathbf{w}}\ell_{2}^{k}(\mathbf{w},\mathbf{b})+\frac{1}{\gamma}\| \mathbf{w}-\mathbf{w}^{k}\|^{2},\]
which can be reduced to
\[\mathbf{w}^{k+1}-\mathbf{w}^{k}=-\mathbf{M}^{k}\mathbf{\delta}^{k},\quad\mathbf{\delta}^{k}=(f_{ \mathbf{\theta}^{k}}(\mathbf{X}),\mathbf{Y})^{\mathsf{T}}\Delta=\nabla_{\mathbf{w}} \ell_{2}^{k}(\mathbf{w},\mathbf{b}),\]
where
\[\mathbf{M}^{k}=\big{(}(f_{\mathbf{\theta}^{k}}(\mathbf{X}),\mathbf{Y})^{\mathsf{T}}(f_ {\mathbf{\theta}^{k}}(\mathbf{X}),\mathbf{Y})+\lambda\mathbf{I}+\frac{1}{\gamma}\mathbf{I }\big{)}^{-1}.\]
The \((2/\gamma)\)-strongly convexity of the proximal term implies the Polyak-Lojasiewicz (PL) inequality, which derives that
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k},\mathbf{Y}^{ k})=\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},f_{\mathbf{\theta}^{k}}(\mathbf{X}), \mathbf{Y})-2\rho^{k}(1-\rho^{k})\|\Delta\|^{2}\] \[\quad=\ell_{2}^{k}(\mathbf{w}^{k},\mathbf{b})-2\rho^{k}(1-\rho^{k})\| \Delta^{k}\|^{2}\geq\ell_{2}^{k}(\mathbf{w}^{k+1},\mathbf{b})-2\rho^{k}(1-\rho^{k})\| \Delta\|^{2}+\frac{2}{\gamma}\|\mathbf{w}^{k+1}-\mathbf{w}^{k}\|^{2}.\]
Plugging \(\mathbf{w}^{k+1}-\mathbf{w}^{k}=-\mathbf{M}^{k}\mathbf{\delta}^{k}\) into the inequality, we have
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k},\mathbf{Y}^{ k})\geq\ell_{2}^{k}(\mathbf{w}^{k+1},\mathbf{b})+\frac{2}{\gamma}(\mathbf{\delta}^{k})^{ \mathsf{T}}(\mathbf{M}^{k})^{2}\mathbf{\delta}^{k}-2\rho^{k}(1-\rho^{k})\|\Delta^{k}\| ^{2}\] \[\qquad\qquad\qquad\geq\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k}, \mathbf{Z}^{k+\frac{1}{2}},\mathbf{Y}^{k+\frac{1}{2}})+\frac{2}{\gamma}(\mathbf{ \delta}^{k})^{\mathsf{T}}(\mathbf{M}^{k})^{2}\mathbf{\delta}^{k}-2\rho^{k}(1-\rho^{k}) \|\Delta^{k}\|^{2},\]
where
\[(\mathbf{Z}^{k+\frac{1}{2}};\mathbf{Y}^{k+\frac{1}{2}})=\arg\min _{(\bar{\mathbf{Z}},\bar{\mathbf{Y}})}\|\bar{\mathbf{Z}}\mathbf{w}_{u}^{k+1}+\bar{ \mathbf{Y}}\mathbf{w}_{v}^{k+1}-\mathbf{b}\|^{2}+\alpha\|(\bar{\mathbf{Z}},\bar{ \mathbf{Y}})-(f_{\mathbf{\theta}^{k}}(\mathbf{X}),\mathbf{Y})\|^{2}\] \[\qquad\qquad=(f_{\mathbf{\theta}^{k}}(\mathbf{X});\mathbf{Y})-\beta^{ k+\frac{1}{2}}\Delta^{k+\frac{1}{2}}(\mathbf{w}^{k+1})^{\mathsf{T}},\quad\text{where} \quad\beta^{k+\frac{1}{2}}=\frac{1}{\alpha+\|\mathbf{w}^{k+1}\|^{2}}.\]
Note that \((\mathbf{M}^{k})^{2}\) has the smallest eigenvalue \(\gamma^{2}/(1+\gamma\sigma_{\max})^{2}\), and thus we have
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k},\mathbf{Y}^{ k})\geq\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k},\mathbf{Z}^{k+\frac{1}{2}}, \mathbf{Y}^{k+\frac{1}{2}})+\frac{2\gamma}{(1+\gamma\sigma_{\max})^{2}}\|\nabla_{ \mathbf{\phi}}\ell_{2}(\mathbf{w},\mathbf{b})\|^{2}-2\rho^{k}(1-\rho^{k})c_{\max}.\]
**Update of \(\mathbf{\theta}\)**. The update of \(\mathbf{\theta}\) is conducted on the loss function
\[\ell_{1}^{k}(\mathbf{\theta})=\|\mathbf{Z}^{k+\frac{1}{2}}-f_{\mathbf{\theta}}(\mathbf{X}) \|^{2}.\]By using \(\mu_{\mathbf{\theta}}\)-smooth of \(\ell_{1}^{k}\), we obtain that
\[\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k},\mathbf{Z}^{k+\frac{1}{2}}, \mathbf{Y}^{k+\frac{1}{2}})-\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k+1},\mathbf{Z }^{k+\frac{1}{2}},\mathbf{Y}^{k+\frac{1}{2}})=\ell_{1}^{k}(\mathbf{\theta}^{k+1})- \ell_{1}^{k}(\mathbf{\theta}^{k})\\ \geq-\langle\nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k}),\bm {\theta}^{k+1}-\mathbf{\theta}^{k}\rangle-\frac{\mu_{\mathbf{\theta}}}{2}\|\mathbf{\theta}^ {k+1}-\mathbf{\theta}^{k}\|^{2}\geq\frac{1}{2}\eta\|\nabla_{\mathbf{\theta}}\ell_{1}^{ k}(\mathbf{\theta}^{k})\|^{2}.\]
Letting \(\mathbf{Z}^{k+1}=\arg\min_{\mathbf{Z}}\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k+1 },\mathbf{Z})\), we conclude
\[\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k},\mathbf{Z}^{k+\frac{1}{2}},\mathbf{Y}^ {k+\frac{1}{2}})\geq\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k+1},\mathbf{Z}^{k+1 },\mathbf{Y}^{k+1})+\frac{1}{2}\eta\|\nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{ \theta}^{k})\|^{2}.\]
**Convergent result.** By combining the update of \(\mathbf{\phi}\) and \(\mathbf{\theta}\), we have
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k},\mathbf{Y}^{ k})-\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k+1},\mathbf{Z}^{k+1},\mathbf{Y}^{k+1}) \\ \geq\frac{1}{2}\eta\|\nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^ {k})\|^{2}+\frac{2\gamma}{(1+\gamma\sigma_{\max})^{2}}\|\nabla_{\mathbf{\phi}}\ell _{2}(\mathbf{\phi}^{k})\|^{2}-2\rho^{k}(1-\rho^{k})c_{\max}.\]
Taking a telescopic sum over \(k\), we obtain
\[\varphi(\mathbf{\phi}^{0},\mathbf{\theta}^{0},\mathbf{Z}^{0},\mathbf{Y}^{ 0})-\varphi(\mathbf{\phi}^{K},\mathbf{\mathbf{\theta}}^{K},\mathbf{Z}^{K},\mathbf{Y}^{ K})\\ \geq\sum_{i=1}^{K}\frac{1}{2}\eta\|\nabla_{\mathbf{\theta}}\ell_{1}^{ k}(\mathbf{\theta}^{k})\|^{2}+\frac{2\gamma}{(1+\gamma\sigma_{\max})^{2}}\|\nabla_{ \mathbf{\phi}}\ell_{2}(\mathbf{\phi}^{k})\|^{2}-2\rho^{k}(1-\rho^{k})c_{\max}.\]
Since \(\rho^{k}(1-\rho^{k})\leq\kappa_{\rho}/(K+1)^{2}\), we have
\[\mathbb{E}[\|\nabla_{\mathbf{\theta}}\ell_{2}^{k}(\mathbf{\theta}^{k})\|^{2}]\leq\frac {2}{(K+1)\eta}\big{(}(\varphi(\mathbf{\phi}^{0},\mathbf{\theta}^{0},\mathbf{Z}^{0}, \mathbf{Y}^{0})-\min\varphi)+2\kappa c_{\max}\big{)},\]
and
\[\mathbb{E}[\|\nabla_{\mathbf{\phi}}\ell_{2}(\mathbf{\phi}^{k})\|^{2}]\leq\frac{(1+ \gamma\sigma_{\max})^{2}}{2(K+1)}\big{(}(\varphi(\mathbf{\phi}^{0},\mathbf{\theta}^{0},\mathbf{Z}^{0},\mathbf{Y}^{0})-\min\varphi)+2\kappa c_{\max}\big{)}.\]
**Stochastic version.** We first introduce an additional assumption: the gradient estimate is unbiased and has bounded variance (Bottou et al., 2018, Sec. 4), i.e.,
\[\mathbb{E}_{\xi}[\tilde{\nabla}_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k})]= \nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k}),\quad\mathbb{E}_{\xi}[\tilde {\nabla}_{\mathbf{\theta}}\ell_{2}^{k}(\mathbf{\phi}^{k})]=\nabla_{\mathbf{\theta}}\ell_{2 }^{k}(\mathbf{\phi}^{k}),\]
and
\[\mathbb{V}_{\xi}[\tilde{\nabla}_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k})] \leq\zeta+\zeta_{v}\|\nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k})\|^{2}, \quad\mathbb{V}_{\xi}[\tilde{\nabla}_{\mathbf{\phi}}\ell_{1}^{k}(\mathbf{\theta}^{k}) ]\leq\zeta+\zeta_{v}\|\nabla_{\mathbf{\phi}}\ell_{2}^{k}(\mathbf{\phi}^{k})\|^{2}.\]
This assumption derives the following inequalities hold for \(\zeta_{g}=\zeta_{v}+1\):
\[\mathbb{E}_{\xi}[\|\tilde{\nabla}_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k})\| ^{2}]\leq\zeta+\zeta_{g}\|\nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k})\|^{2 },\quad\mathbb{E}_{\xi}[\|\tilde{\nabla}_{\mathbf{\theta}}\ell_{2}^{k}(\mathbf{\phi}^{ k})\|^{2}]\leq\zeta+\zeta_{g}\|\nabla_{\mathbf{\phi}}\ell_{2}^{k}(\mathbf{\phi}^{k})\|^{2},\]
For the update of \(\mathbf{\theta}\), we have
\[\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k},\mathbf{Z}^{k+\frac{1}{2}},\mathbf{Y}^ {k+\frac{1}{2}})-\mathbb{E}_{\xi}[\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k+1}, \mathbf{Z}^{k+1},\mathbf{Y}^{k+1})]\geq\frac{\eta_{k}}{2}\|\nabla_{\mathbf{\theta}} \ell_{1}^{k}(\mathbf{\theta}^{k})\|^{2}-\frac{\eta_{k}^{2}\mu_{\mathbf{\theta}}}{2}\zeta.\]
For the update of \(\mathbf{\phi}\), using the \(\mu_{\mathbf{\theta}}\)-smooth, and taking the total expectation:
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k},\mathbf{Y}^{ k})-\mathbb{E}_{\xi}[\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k},\mathbf{Z}^{k+ \frac{1}{2}},\mathbf{Y}^{k+\frac{1}{2}})]+2\rho^{k}(1-\rho^{k})\|\Delta^{k}\|^{2}\\ \geq(\nabla_{\mathbf{\phi}}\ell_{2}^{k}(\mathbf{\phi}^{k}))^{\mathbf{T}} \mathbf{M}^{k}(\nabla_{\mathbf{\phi}}\ell_{2}^{k}(\mathbf{\phi}^{k}))-\frac{\mu_{\mathbf{\phi}}}{ 2}\mathbb{E}_{\xi}[\|\mathbf{\hat{M}}^{k}\tilde{\nabla}_{\mathbf{\phi}}\ell_{2}^{k}(\mathbf{ \phi}^{k})\|^{2}]\\ \geq\frac{1}{\epsilon^{k}+\sigma_{\max}}\|\nabla_{\mathbf{\phi}}\ell_ {2}^{k}(\mathbf{\phi}^{k})\|^{2}-\frac{\mu_{\mathbf{\phi}}}{2(\epsilon^{k}+\sigma_{\min})^ {2}}(\zeta+\zeta_{g}\|\nabla_{\mathbf{\phi}}\ell_{2}^{k}(\mathbf{\phi}^{k})\|^{2}),\]
where we define \(\epsilon^{k}=1/\gamma^{k}\) for simplicity. Now, let \(\gamma\) be sufficiently small (that is, satisfying \((\epsilon^{k}+\sigma_{\min})^{2}\geq\mu_{\mathbf{\phi}}(\epsilon^{k}+\sigma_{\max})\)), we obtain
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k},\mathbf{Y}^ {k})-\mathbb{E}_{\xi}[\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k},\mathbf{Z}^{k+ \frac{1}{2}},\mathbf{Y}^{k+\frac{1}{2}})]+2\rho^{k}(1-\rho^{k})\|\Delta^{k}\|^{2} \\ \geq\frac{1}{2(\epsilon^{k}+\sigma_{\max})}\|\nabla_{\mathbf{\phi}} \ell_{2}^{k}(\mathbf{\phi}^{k})\|^{2}-\frac{\mu_{\mathbf{\phi}}}{2(\epsilon^{k}+\sigma_{ \min})^{2}}\zeta.\]Putting the updates of \(\mathbf{\theta}\) and \(\mathbf{\phi}\) together, we have
\[\varphi(\mathbf{\phi}^{k},\mathbf{\theta}^{k},\mathbf{Z}^{k},\mathbf{Y}^{k})- \mathbb{E}_{\xi}[\varphi(\mathbf{\phi}^{k+1},\mathbf{\theta}^{k+1},\mathbf{Z}^{k+1}, \mathbf{Y}^{k+1})]+2\rho^{k}(1-\rho^{k})\|\Delta^{k}\|^{2}\] \[\qquad\qquad\geq\frac{1}{2}\eta_{k}\|\nabla_{\mathbf{\theta}}\ell_{1} ^{k}(\mathbf{\theta}^{k})\|^{2}-\frac{1}{2}\eta_{k}^{2}\mu_{\mathbf{\theta}}\zeta+\frac {1}{2(\epsilon^{k}+\sigma_{\max})}\|\nabla_{\mathbf{\phi}}\ell_{2}^{k}(\mathbf{\phi}^{ k})\|^{2}-\frac{\mu_{\mathbf{\phi}}}{2(\epsilon^{k}+\sigma_{\min})^{2}}\zeta.\]
Now, setting \(\eta^{k}\leq\kappa_{\mathbf{\theta}}/\sqrt{K+1}\) and \(\gamma^{k}\leq\kappa_{\mathbf{\phi}}/\sqrt{K+1}\), we can conclude
\[\mathbb{E}[\|\nabla_{\mathbf{\theta}}\ell_{1}^{k}(\mathbf{\theta}^{k})\|^{2}]=\mathcal{ O}(\frac{1}{\sqrt{K+1}}),\quad\mathbb{E}[\|\nabla_{\mathbf{\phi}}\ell_{2}(\mathbf{ \phi}^{k})\|^{2}]=\mathcal{O}(\frac{1}{\sqrt{K+1}}).\qed\]
## Appendix C Proof of Theorem 2
Proof.: We consider the following problem,
\[(\text{P}_{\xi})\quad\min_{\mathbf{u}\in\{0,1\}^{n}}q_{\xi}(\mathbf{u}):=\mathbf{u}^{ \mathsf{T}}(\mathbf{S}+\lambda\mathbf{I})\mathbf{u}-2(\mathbf{s}+\lambda\mathbf{\xi})^{\mathsf{T}} \mathbf{u}.\]
For given \(t\geq 0\), the corresponding stationary points of (P\({}_{\xi}\)) satisfy
\[2[\mathbf{S}\mathbf{u}-\mathbf{s}]_{i}(1-2\mathbf{u}_{i})+2\lambda(\mathbf{u}_{i}-\mathbf{\xi}_{i})(1- 2\mathbf{u}_{i})+t\geq 0,i=1,\ldots,n.\]
Note that
\[(\mathbf{u}_{i}-\mathbf{\xi}_{i})(1-2\mathbf{u}_{i})=\left\{\begin{array}{cc}-\mathbf{\xi}_{ i}&\text{if}\quad\mathbf{u}_{i}=0;\\ \mathbf{\xi}_{i}-1&\text{if}\quad\mathbf{u}_{i}=1.\end{array}\right.\]
For given \(\mathbf{u}\in\{0,1\}^{n}\), we denote \(\varrho_{i}=2[\mathbf{S}\mathbf{u}-\mathbf{s}]_{i}(1-2\mathbf{u}_{i})\). Then, the probability that (P\({}_{\xi}\)) has the stationary point \(\mathbf{u}\) can be computed as
\[\Pr(\mathbf{u})=\prod_{i=1}^{n}\Pr(\varrho_{i}+2\lambda(\mathbf{u}_{i}-\mathbf{\xi}_{i})(1 -2\mathbf{u}_{i})+t\geq 0),\]
where
\[\Pr(2\lambda(\mathbf{u}_{i}-\mathbf{\xi}_{i})(1-2\mathbf{u}_{i})+\varrho_{i}+t\geq 0)= \min(\frac{1}{2\lambda}(t+\varrho_{i}),1).\]
Hence, for given two different \(\mathbf{u}_{1}^{(0)}\) and \(\mathbf{u}_{2}^{(0)}\), the probability that the corresponding rules can converge to the same result \(\mathbf{u}\) satisfying
\[\Pr(\mathbf{u}_{1}=\mathbf{u},\mathbf{u}_{2}=\mathbf{u})\leq\Pr(\mathbf{u})^{2}=\prod_{i=1}^{n} \min(\frac{1}{2\lambda}(t+\varrho_{i}),1)^{2}.\qed\]
## Appendix D Trust Region Method
Figure 3 illustrates the key concept of the trust region method. For simplicity, centre points \(\mathbf{w}_{1}(\mathbf{0}),\ldots,\mathbf{w}_{4}(\mathbf{0})\) of the trust region are also set as the initial points of stochastic gradient descent. Stochastic gradient descent is implicitly biased to least norm solutions and finally converges to point \((0,1)\) by enforcing the Boolean constraints. The trust region penalty encourages the stochastic gradient descent to converge to different optimal solutions in different trust regions.
## Appendix E Experiment Details
**Computing configuration.** We implemented our approach via the PyTorch DL framework. The experiments were conducted on a GPU server with two Intel Xeon Gold 5118 [email protected], 400GB RAM, and 9 GeForce RTX 2080 Ti GPUs. The server ran Ubuntu 16.04 with GNU/Linux kernel 4.4.0.
**Hyperparameter tuning.** Some hyperparameters are introduced in our framework. In Table 3 we summarize the (hyper-)parameters, together with their corresponding initialization or update strategies. Most of these hyperparameters are quite stable and thus only need to be fixed to a constant or set by standard strategies. We only discuss the selection of \(m\), and the setting of \(\mathbf{b}_{\min},\mathbf{b}_{\max}\) and \(\mathbf{b}\).
(1) To ensure the sufficiency of learned constraints, we suggest initially setting a large \(m\) to estimate the actual number of logical constraints needed, and then adjusting it for more efficient training. We also observe that a large \(m\) does not ruins the performance of our method. For example, we set \(m=2000\) in visual SudoKu solving task, while only \(324\) constraints are learned. (2) For the bias term \(\mathbf{b}\), we recommend \(\mathbf{b}\) to be tuned manually rather than set by PPA update, and one can gradually increase \(\mathbf{b}\) from 1 to \(n-1\) (\(n\) is the number of involved logical variables), and collect all logical constraints as candidate constraints. For \(\mathbf{b}_{\min}\) and \(\mathbf{b}_{\max}\), due to the prediction error, it is unreasonable to set \(\mathbf{b}_{\min}\) and \(\mathbf{b}_{\max}\) that ensure all examples to satisfy the logical constraint. An alternative method is to set a threshold (e.g. _k_%) on the training (or validation) set, and the constraint is only required to be satisfied by at least _k_% examples.
## Appendix F Additional Experiment Results
### Chained XOR
The chained XOR, also known as the parity function, is a basic logical function, yet it has proven challenging for neural networks to learn it explicitly (Shalev-Shwartz et al., 2017; Wang et al., 2019) To be specific, given a sequence of length \(L\), the parity function outputs \(1\) if there are an odd number of 1's in the sequence, and \(0\) otherwise. The goal of the Chained XOR task is to learn this parity function with fixed \(L\). Note that this task does not involve any perception task.
We compare our method with SATNet and L1R32H4. In this task, SATNet uses an implicit but strong background knowledge that the task can be decomposed into \(L\) single XOR tasks. Neither L1R32H4
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Param.** & **Description** & **Setting** \\ \hline \(\mathbf{\theta}\) & Neural network parameters & Updated by stochastic gradient descent \\ \hline \(\mathbf{W}\) & Matrix of logical constraints & Updated by stochastic PPA \\ \hline \(\mathbf{b}\) & Bias term of logical constraints & Pre-set or Updated by stochastic PPA \\ \hline \(\mathbf{b}_{\min}/\mathbf{b}_{\max}\) & Lower/Upper bound of logical constraints & Estimated by training set \\ \hline \(m\) & Pre-set number of constraints & Adaptively tuned \\ \hline \(\alpha\) & Trade-off weight in symbol grounding & Fixed to \(\alpha=0.5\) \\ \hline \(\lambda\) & Weight of trust region penalty & Fixed to \(\lambda=0.1\) \\ \hline \(t_{1}/t_{2}\) & Weight of DC penalty & Increased per epoch \\ \hline \(\eta\) & Learning rate of network training & Adam schedule \\ \hline \(\gamma\) & Step size of constraint learning & Adaptively set (\(\gamma=0.001\) by default) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The list of (hyper-)parameters and their initialization or update strategies.
Figure 3: Avoid degeneracy by trust region method. In logical constraint learning, the imposition of the Boolean constraints and the implicit bias of the stochastic gradient descent cause \(\mathbf{w}_{1},\ldots,\mathbf{w}_{4}\) to converge to the same result (left figure), while the trust region constraints guarantee that they can sufficiently indicate different rules (right figure).
nor our method uses such knowledge. For L1R32H4, we adapt the embedding layer to this task and fix any other configuration. Regarding our method, we introduce \(L-1\) auxiliary variables.7
Footnote 7: Note that the number of auxiliary variables should not exceed the number of logical variables. If so, the logical constraints trivially converge to any result.
It is worth noting that these auxiliary variables essentially serve as a form of symbol grounding. Elaborately, the learned logical constraints by our method can be formulated as follows,
\[\mathbf{w}_{1}\mathbf{x}_{1}+\cdots+\mathbf{w}_{L}\mathbf{x}_{L}+\mathbf{w}_{L+1}\mathbf{z} _{1}+\cdots+\mathbf{w}_{2L-1}\mathbf{z}_{L-1}=b,\]
where \(\mathbf{w}_{i}\in\mathcal{B},i=1,\ldots,2L-1\), \(\mathbf{x}_{i}\in\mathcal{B},i=1,\ldots,L\) and \(\mathbf{z}_{i}\in\mathcal{B},i=1,\ldots,L\). The auxiliary variables \(\mathbf{z}_{i},i=1,\ldots,L\) have different truth assignments for different examples, indicting _how_ the logical constraint is satisfied by the given input. Now, combining the symbol grounding of auxiliary variables, we revise the optimization problem (1) of our framework as
\[\min_{(\mathbf{W},\mathbf{b})}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim \mathcal{D}}[\|\mathbf{W}(\mathbf{x};\bar{\mathbf{z}};\mathbf{y})-\mathbf{b}\|^{2}]+ \lambda\|\mathbf{W}-\mathbf{W}^{(0)}\|^{2},\] \[\text{s.t.}\quad\bar{\mathbf{z}}=\arg\min_{\mathbf{z}\in\mathcal{ Z}}\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}[\|\mathbf{W}(\mathbf{x}; \mathbf{z};\mathbf{y})-\mathbf{b}\|^{2}],\quad\mathbf{W}\in\mathcal{B}^{m\times(u+v)}, \quad\mathbf{b}\in\mathcal{N}_{+}^{m}.\]
The symbol grounding is solely guided by logical constraints, as neural perception is not involved.
The experimental results are plotted in Figure 4. The results show that L1R32H4 is unable to learn such a simple reasoning pattern, while SATNet often fails to converge even with sufficient iterations, leading to unstable results. Our method consistently delivers full accuracy across all settings, thereby demonstrating superior performance and enhanced scalability in comparison to existing state-of-the-art methods. To further exemplify the efficacy of our method, we formulate the learned constraints in the task of \(L=20\). Eliminating redundant constraints and replacing the auxiliary variables with logical disjunctions, the final learned constraint can be expressed as
\[(\mathbf{x}_{1}+\cdots+\mathbf{x}_{20}+y=0)\vee(\mathbf{x}_{1}+\cdots+\mathbf{x }_{20}+y=2)\vee\cdots\vee(\mathbf{x}_{1}+\cdots+\mathbf{x}_{20}+y=20),\]
which shows that our method concludes with complete and precise logical constraints.
### Nonograms
Nonograms is a logic puzzle with simple rules but challenging solutions. Given a grid of squares, the task of nonograms is to plot a binary image, i.e., filling each grid in black or marking it by \(\mathsf{X}\). The required numbers of black squares on that row (resp. column) are given beside (resp. above) each row (resp. column) of the grid. Figure 5 gives a simple example.
In contrast to the supervised setting used in Yang et al. [2023], we evaluate our method on a weakly supervised learning setting. Elaborately, instead of the fully solved board, only partial solutions (i.e., only one row or one column) are observed. Note that this supervision is enough to solve the nonograms, because the only logical rule to be learned is that the different black squares (in each row or column) should not be connected.
For our method, we do not introduce a neural network in this task, and only aim to learn the logical constraints. We carry out the experiments on \(7\times 7\) nonograms, with training data sizes ranging
Figure 4: Results (%) of chained XOR task, including accuracy and F\({}_{1}\) score (of class \(0\)). The sequence length ranges from \(20\) to \(200\), showing that our method stably outperforms competitors.
from 1,000 to 9,000. The results are given in Table 4, showing the efficacy of our logical constraint learning. Compared to the L1R32H4 method, whose effectiveness highly depends on the training data size, our method works well even with extremely limited data.
### Visual SudoKu Solving
In the visual SudoKu task, it is worth noting that the computation of \(\mathbf{z}\) cannot be conducted by batch processing. This is because the index of \(\mathbf{y}\) varies for each data point. For instance, in different SudoKu games, the cells to be filled are different, and thus the symbol \(\mathbf{z}\) has to be computed in a point-wise way. To solve this issue, we introduce an auxiliary \(\tilde{\mathbf{y}}\) to approximate the output symbol \(\mathbf{y}\):
\[(\mathbf{\bar{z}},\mathbf{\bar{y}})=\arg\min_{\mathbf{\bar{z}}\in\mathcal{Z}, \mathbf{\bar{y}}\in\mathcal{Y}}\|\mathbf{W}(\mathbf{\bar{z}};\mathbf{\bar{y}})-\bm {b}\|^{2}+\alpha\|(\mathbf{\bar{z}};\mathbf{\bar{y}})-(f_{\mathbf{\theta}}(\mathbf{ x});\mathbf{y})\|^{2}.\]
On the SATNet dataset, we use the recurrent transformer as the perception model [Yang et al., 2023], because we observe that the recurrent transformer can significantly improve the perception accuracy, and even outperforms the state-of-the-art of MNIST digit recognition model. However, we find that its performance degrades on the more difficult dataset RRN, and thus we still use a standard convolutional neural network model as the perception model for this dataset.
We include detailed results of board and cell accuracy in Table 5. It can be observed that our method is consistently superior to the existing methods, and significantly outperforms the current state-of-the-art method L1R32H4 on the RRN dataset (total board accuracy improvement exceeds 20%). Also note that the solving accuracy of our method always performs the best, illustrating the efficacy of our logical constraint learning.
Next, we exchange the evaluation dataset, namely, using the RRN dataset to evaluate the model trained on the SATNet dataset, and vice versa. The results are presented in Table 6. The accurate logical constraints and exact logical reasoning engine guarantee the best performance of our method on transfer tasks. Notably, the performance of L1R32H4 drops significantly when transferring the model (trained SATNet dataset) to RNN dataset, our method remains unaffected by such shift.
### Self-driving Path Planning
The goal of the self-driving path planning task is to train the neural network for object detection and to learn the logical constraints for path planning in an end-to-end way. As shown in Figure 6, we construct two maps and each contains \(10\times 10\) grids (binary variables). The neural perception detects the obstacles from the image \(\mathbf{x}\) and locates it in the first map, which is essentially the symbol \(\mathbf{z}\). Next,
Figure 5: An example of nonograms.
the logical reasoning computes the final path from the symbol \(\mathbf{z}\) and tags it on the second map as the output \(\mathbf{y}\).
As a detailed reference, we select some results of path planning generated by different methods and plot them in Figure 7. We find that some correct properties are learned by our method. For example, given the point \(\mathbf{y}_{34}\) in the path, we have the following connectivity:
\[(\mathbf{y}_{34}=s)+(\mathbf{y}_{34}=e)+\mathrm{Adj}(\mathbf{y}_{34})=2,\]
which means that the path point \(\mathbf{y}_{34}\) should be connected by its adjacent points. In addition, some distinct constraints are also learned, for example,
\[\mathbf{y}_{32}+\mathbf{z}_{32}+\mathbf{z}_{11}+\mathbf{z}_{01}=1.\]
In this constraint, \(\mathbf{z}_{11}\) and \(\mathbf{z}_{01}\) are two noise points, and they always take the value of \(0\). Therefore, it actually ensures that if \(\mathbf{z}_{32}\) is an obstacle, then \(\mathbf{y}_{32}\) should not be selected as a path point. However, it is still unknown whether our neuro-symbolic framework derives all the results as expected, because some of the learned constraints are too complex to be understood.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{SATNet dataset} & \multicolumn{4}{c}{RRN dataset} \\ \cline{2-7} & Perception & Solving & Total & Perception & Solving & Total \\ & board acc. & board acc. & board acc. & board acc. & board acc. & board acc. \\ \hline RRN & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet* & 72.7 & 75.9 & 67.3 & 75.7 & 0.1 & 0.1 \\ L1R32H4 & 94.1 & 91.0 & 90.5 & 87.7 & 65.8 & 65.7 \\ \hline NTR & 87.4 & 0.0 & 0.0 & 91.4 & 3.9 & 3.9 \\ NDC & 79.9 & 0.0 & 0.0 & 88.0 & 0.0 & 0.01 \\ \hline Ours & **95.5** & **95.9** & **95.5** & **93.1** & **94.4** & **93.1** \\ \hline \hline & Perception & Solving & Total & Perception & Solving & Total \\ & cell acc. & cell acc. & cell acc. & cell acc. & cell acc. \\ \hline RRN & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet* & 99.1 & 98.6 & 98.8 & 75.7 & 59.7 & 72.0 \\ L1R32H4 & 99.8 & 99.1 & 99.4 & 99.3 & 89.5 & 92.6 \\ \hline NTR & 99.7 & 60.1 & 77.8 & 99.7 & 38.5 & 57.3 \\ NDC & 99.4 & 10.8 & 50.4 & 99.5 & 10.9 & 38.7 \\ \hline Ours & **99.9** & **99.6** & **99.7** & **99.7** & **98.3** & **98.7** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Detailed cell and board accuracy (%) of **original** visual Sudoku task.
Figure 7: Some results of neuro-symbolic learning methods in self-driving path planning task.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{SATNet \(\rightarrow\) RRN} & \multicolumn{4}{c}{RRN \(\rightarrow\) SATNet} \\ \cline{2-7} & Perception & Solving & Total & Perception & Solving & Total \\ & board acc. & board acc. & board acc. & board acc. & board acc. \\ \hline RRN & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet* & 80.8 & 1.4 & 1.4 & 0.0 & 0.0 & 0.0 \\ L1R32H4 & 84.8 & 21.3 & 21.3 & 94.9 & 95.0 & 94.5 \\ \hline NTR & 90.2 & 0.0 & 0.0 & 86.9 & 0.0 & 0.0 \\ NDC & 86.1 & 0.0 & 0.0 & 82.4 & 0.0 & 0.0 \\ \hline Ours & **93.9** & **95.2** & **93.9** & **95.2** & **95.3** & **95.2** \\ \hline \hline \multirow{2}{*}{} & Perception & Solving & Total & Perception & Solving & Total \\ & cell acc. & cell acc. & cell acc. & cell acc. & cell acc. & cell acc. \\ \hline RRN & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ SATNet* & 99.1 & 66.2 & 76.5 & 65.8 & 53.8 & 59.2 \\ L1R32H4 & 99.3 & 89.5 & 92.6 & 99.7 & 99.6 & 99.7 \\ \hline NTR & 99.6 & 37.1 & 56.3 & 99.6 & 62.4 & 79.0 \\ NDC & 99.4 & 11.0 & 38.7 & 99.5 & 11.3 & 50.7 \\ \hline Ours & **99.8** & **98.4** & **98.8** & **99.8** & **99.7** & 99.7 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Detailed cell and board accuracy (%) of **transfer** visual Sudoku task. | ## Review
### Summary
The paper presents a novel neurosymbolic approach that simultaneously learns symbolic representations and logical constraints. It introduces a new penalty term based on Difference of Convex (DC) programming, enhancing the optimization process. The method is evaluated on benchmarks such as the visual sudoku challenge and path planning environments, demonstrating superior performance compared to existing approaches. While the results are compelling, the focus on relatively simple environments and the lack of exploration into the adaptability of the method in new contexts are noted as areas needing further investigation.
### Strengths
- The paper employs a principled approach to learning symbolic representations and logical rules simultaneously.
- The DC programming method allows for theoretical analysis and proves effective in achieving desired optimization results.
- The approach achieves significant improvements over existing neurosymbolic methods in the evaluated environments.
- The work tackles the complex problem of learning rules and perception concurrently, achieving strong performance in challenging tasks.
- The methodology includes multiple innovative components such as DC relaxation loss and cardinality constraints.
- The paper provides thorough theoretical proof and robust experimental results, demonstrating superior performance over state-of-the-art models.
### Weaknesses
- Experiments are limited to two simple and static environments, raising questions about generalizability.
- Lack of discussion on the limitations of the proposed methodology, which is critical for future research directions.
- The paper could benefit from clearer high-level figures to illustrate the logical constraints being enforced.
- Understanding the overall approach is complicated by numerous technical details and hyperparameters.
- The introduction of hyperparameter alpha adds unnecessary complexity without clear justification.
- Absence of error bars in experimental results makes it difficult to assess the reliability of findings.
### Questions
- What does the notation (z;y) mean in w^T(z;y)? Is it a concatenation?
- Why do the solvers manage to find a solution despite incorrect perception?
- What are the specific tasks chosen for this study, and how do they relate to prior work?
- What are the limitations of using cardinality constraints, and how are the 324 constraints calculated?
- How should users handle unknown numbers of constraints in new tasks, and how does this impact performance?
- Is the SMT solver only used during inference and not training?
- What values are set for m (number of rules)?
### Soundness
**Score:** 12
**Description:** 3 = good - The theoretical framework is solid, and the proposed methods are well-justified, although some minor issues in clarity exist.
### Presentation
**Score:** 10
**Description:** 2 = fair - The paper is decently written but could benefit from improved clarity and organization to enhance understanding.
### Contribution
**Score:** 13
**Description:** 4 = excellent - The paper presents significant advancements in the neurosymbolic learning field, introducing novel components with high potential impact.
### Rating
**Score:** 26
**Description:** 7 = accept, but needs minor improvements - The paper is technically solid with high impact potential, though it requires refinement in presentation and evaluation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel approach to a significant problem in neurosymbolic AI with strong theoretical backing and experimental results. While there are some weaknesses in terms of generalizability and clarity, the strengths and contributions outweigh these concerns, justifying an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# A Combinatorial Algorithm for Approximating the Optimal Transport in the Parallel and MPC Settings
Nathaniel Lahn
Radford University
[email protected]
&Sharath Raghvendra
Virginia Tech
[email protected]
&Kaiyi Zhang
Virginia Tech
[email protected]
Following convention from Theoretical Computer Science, all authors are ordered in alphabetical order.
The code used for the experiments reported in this paper is available at: [https://github.com/kaiyiz/Combinatorial-Parallel-OT](https://github.com/kaiyiz/Combinatorial-Parallel-OT)
###### Abstract
Optimal Transport is a popular distance metric for measuring similarity between distributions. Exact and approximate combinatorial algorithms for computing the optimal transport distance are hard to parallelize. This has motivated the development of numerical solvers (e.g. Sinkhorn method) that can exploit GPU parallelism and produce approximate solutions.
We introduce the first parallel combinatorial algorithm to find an additive \(\varepsilon\)-approximation of the OT distance. The parallel complexity of our algorithm is \(O(\log(n)/\varepsilon^{2})\) where \(n\) is the total support size for the input distributions. In Massive Parallel Computation (MPC) frameworks such as Hadoop and MapReduce, our algorithm computes an \(\varepsilon\)-approximate transport plan in \(O(\log(\log(n/\varepsilon))/\varepsilon^{2})\) rounds with \(O(n/\varepsilon)\) space per machine; all prior algorithms in the MPC framework take \(\Omega(\log n)\) rounds. We also provide a GPU-friendly matrix-based interpretation of our algorithm where each step of the algorithm is row or column manipulation of the matrix. Experiments suggest that our combinatorial algorithm is faster than the state-of-the-art approximate solvers in the GPU, especially for higher values of \(n\).
## 1 Introduction
Optimal transport (OT) is a useful metric for measuring similarity between distributions and has numerous applications [1][2][3], including image retrieval [3], GAN training [3], and interpolation between distributions [1]. Given two distributions \(\mu\) and \(\nu\), this metric captures the minimum-cost plan for transporting mass from \(\mu\) to \(\nu\).
More formally, in the _optimal transport_ problem, we are given two discrete distributions \(\mu\) and \(\nu\) whose supports are the point sets \(A\) and \(B\), respectively. For each point \(a\in A\) (resp. \(b\in B\)), we associate a probability of \(\mu_{a}\) (resp. \(\nu_{b}\)) with it such that \(\sum_{a\in A}\mu_{a}=\sum_{b\in B}\nu_{b}=1\). We refer to each point of \(A\) as a demand point and each point in \(B\) as a supply point. For any edge \((a,b)\in A\times B\), we are given a cost \(c(a,b)\); we assume that the costs are scaled so that the largest cost edge is \(1\). Let \(\beta c(a,b)\) be the cost of transporting a supply amount of \(\beta\) from \(b\) to \(a\). A transport plan is a function \(\sigma:A\times B\rightarrow\mathbb{R}_{\geq 0}\) that assigns a non-negative value to each edge of \(G\), indicating the amount of supply transported along the edge. The transport plan \(\sigma\) is such that the total supplies transported into (resp. from) any demand (resp. supply) node \(a\in A\) (resp. \(b\in B\)) is bounded by the demand (resp. supply) at \(a\) (resp. \(b\)). The cost of the transport plan, denoted by \(c(\sigma)\), is given by \(\sum_{(a,b)\in A\times B}\sigma(a,b)c(a,b)\). In this optimal transport problem, we are interested in finding a minimum-cost transport plan that transports all of the supply, denoted by \(\sigma^{*}\). We also define an\(\varepsilon\)-approximate transport plan to be any transport plan \(\sigma\) with a cost \(c(\sigma)\leq c(\sigma^{*})+\varepsilon\) that transports all of the supply.
The special case where \(A\) and \(B\) each contain \(n\) points each and where every point in \(A\) (resp. \(B\)) has a demand of \(1/n\) (resp. supply of \(1/n\)) is called the _assignment problem_. In this special case, there is an optimal transport plan with a special structure; specifically, when there are \(n\) vertex-disjoint edges \((a,b)\) with \(\sigma(a,b)=1/n\), they form a _perfect matching_. Let the cost of any matching \(M\), denoted by \(c(M)\) be the total cost of all of its edges, i.e.,
\[c(M)=\sum_{(a,b)\in M}c(a,b).\]
Given a perfect matching \(M\), the cost of the corresponding transport plan is simply \(1/n\sum_{(a,b)\in M}c(a,b)=(1/n)c(M)\). For simplicity in exposition, in the context of the assignment problem, we will uniformly scale all the demands and supplies from \(1/n\) to \(1\). This does not change the optimal transport plan. It, however, increases the cost of the optimal transport plan to \(c(M)\), an increase by a factor of \(n\). Thus, for the assignment problem, finding the optimal transport plan is equivalent to finding a minimum-cost perfect matching \(M^{*}\).
Similarly, for an \(\varepsilon>0\), after scaling the demands and supplies from \(1/n\) to \(1\), an \(\varepsilon\)-approximate transport plan corresponds to a perfect matching \(M\) with cost \(c(M)\leq c(M^{*})+\varepsilon n\). We refer to such a perfect matching as _\(\varepsilon\)-approximate matching_. Thus, for the assignment problem, finding an \(\varepsilon\)-approximate transport plan corresponds to finding an \(\varepsilon\)-approximate matching.
**Related Work:** For discrete distributions, the optimal transport problem can be formulated as a minimum-cost flow problem and solved using any LP-solver. The best-known exact and approximate solver for optimal transport, as well as the assignment problem, are graph-based combinatorial algorithms [23, 12, 13, 14, 22]. These solvers, however, are known to be difficult to parallelize. For instance, the best \(\varepsilon\)-approximate OT solver, in terms of sequential running time, was given by Lahn _et al._[19]. This combinatorial algorithm runs in \(O(n^{2}/\varepsilon+n/\varepsilon^{2})\) time, and is a non-trivial adaptation of the classical combinatorial exact algorithm by Gabow and Tarjan (GT-algorithm) for the transportation problem [13]. The Lahn _et al._ algorithm runs no more than \(\lfloor 2/\varepsilon\rfloor+1\) iterations, where each iteration executes a Dijkstra's shortest path search to find and augment along a set of "augmenting paths". This algorithm is the state-of-the-art in the sequential setting; see [23] for a discussion on the various algorithms. Unfortunately, however, the flow augmentations have to be done in a sequential manner, making this algorithm hard to parallelize.
Motivated by the need for efficient scalable solutions, machine learning researchers have designed highly parallelizable approximate solvers that generate an \(\varepsilon\)-approximate transport plan in \(\tilde{O}(n^{2}/\varepsilon^{O(1)})\) sequential time and \(\tilde{O}(1/\varepsilon^{O(1)})\) parallel time. Perhaps the most successful among these is an entropy regularized version of the optimal transport, which can be solved using the Sinkhorn-Knopp method [18] and produces an \(\varepsilon\)-approximation to the optimal transport in \(\tilde{O}(n^{2}/\varepsilon^{2})\) sequential time and \(\tilde{O}(1/\varepsilon^{2})\) parallel time [10]. The simplicity of this algorithm has lead to an efficient GPU implementation. From a theoretical standpoint, Jambulapati _et al._[16] designed a dual extrapolation algorithm using an area convex mapping [21] to achieve an improved parallel complexity of \(O((\log(n)\log\log n)/\varepsilon)\). However, as noted by Lin _et al._[22], despite its sound theoretical guarantees, the lack of simplicity and the difficulties of implementation make this algorithm by Jambulapati _et al._ less competitive, and the Sinkhorn algorithm remains the state-of-the-art for approximating the Optimal Transport on GPUs.
Despite being the state-of-the-art in sequential settings, combinatorial algorithms have remained difficult to parallelize and all known exact or approximate combinatorial algorithms run in only slightly sub-linear parallel time [14]. In this paper, we design the first parallel combinatorial algorithm that takes only \(O(\log(n)/\varepsilon^{2})\) parallel time and finds an \(\varepsilon\)-approximation transport plan.
Our algorithm also improves upon existing algorithms in the massive parallel computation frameworks such as Hadoop, MapReduce, Dryad and Spark. In the Massively Parallel Computing (MPC) model, we are given a set of machines, where each machine has a bounded amount of memory. For this model, the base assumption is that communication between machines is the performance bottleneck, and the goal is to minimize the number of synchronized _communication rounds_, where each round consists of a period of local computation on each machine, followed by a period of message communication between machines. It is well known that any standard parallel algorithm that takes \(O(f(n))\) time can be directly translated to an algorithm under the MPC model that runs in \(O(f(n))\) communication rounds. However, algorithms that are more specialized for the MPC model can acheive drastically faster computation times, often having a sub logarithmic number of rounds. For example, it has long been known how to compute a maximal matching in \(O(\log n)\) parallel time [13], but only recently was a breakthrough made that shows how to compute a maximal matching in \(O(\log\log n)\) rounds under the MPC model [3]. Maximal matching is a substantially simpler problem than both the assignment and OT problems. For these OT problem, known parallel \(\varepsilon\)-approximation algorithms immediately yield an \(\tilde{O}(\log(n)/\varepsilon)\) round algorithm for the MPC model [16]. To our knowledge, no specialized MPC algorithms are known for the either problem. Thus, we provide the first sub-logarithmic round \(\varepsilon\)-approximation algorithm for both the assignment and OT problems. We obtain this bound by leveraging the recent breakthrough MPC algorithm for maximal matching by [3].
Our Results:In this paper, we present a very simple combinatorial algorithm to compute an \(\varepsilon\)-approximate transport plan in \(O(n^{2}/\varepsilon^{2})\) sequential time and \(O(\log(n)/\varepsilon^{2})\) parallel time. For the special case of the assignment problem, the sequential execution time of our algorithm improves to \(O(n^{2}/\varepsilon)\). We also provide a GPU implementation of our algorithm that outperforms the implementation of the Sinkhorn algorithm provided by Python Optimal Transport library [11].
Our algorithm also extends to the well-known Massive Parallel Computation (MPC) frameworks such as MapReduce, Hadoop, Dryad and Spark. In the MPC model, our algorithm computes a \(\varepsilon\)-approximate transport plan in \(O(\log(\log n)/\varepsilon^{2})\) rounds with \(O(n)\) memory per machine. Our algorithm is based on the popular push-relabel framework [14] for computing minimum-cost flow.
**Theorem 1.1**.: _Given an \(\varepsilon>0\), there is an algorithm that computes an \(\varepsilon\)-approximate matching in \(O(n^{2}/\varepsilon)\) time. Furthermore, one can execute this algorithm in expected \(O(\log(n)/\varepsilon^{2})\) parallel time or in expected \(O(\log(\log n)/\varepsilon^{2})\) rounds, with \(O(n)\) memory per machine, in the MPC model._
_Extension to the Optimal Transport problem:_ For any \(\varepsilon>0\), Lahn _et al._[10] showed that computing an \(\varepsilon\)-approximation of the optimal transport between two discrete distributions containing \(n\) points in their support reduces to an instance of an unbalanced assignment problem with \(n/\varepsilon\) points. We apply this reduction and slightly adapt our algorithm from Theorem [11] to obtain our result for the optimal transport problem (Theorem [12]). In this paper, we present details of our algorithm for the assignment problem. Details of the adaptation of our algorithm to the optimal transport by using the reduction of [11] is presented in the Section [2] of the appendix.
**Theorem 1.2**.: _Given an \(\varepsilon>0\), there is an algorithm that computes an \(\varepsilon\)-approximate transport plan in \(O(n^{2}/\varepsilon^{2})\) time. Furthermore, one can execute this algorithm in expected \(O(\log(n)/\varepsilon^{2})\) parallel time or in \(O(\log(\log(n/\varepsilon))/\varepsilon^{2})\) rounds with \(O(n/\varepsilon)\) memory per machine in the MPC model._
From a theoretical stand-point, we provide the first parallel combinatorial algorithm for approximating the optimal transport with a expected parallel execution time of \(O(\log(n)/\varepsilon^{2})\). The sequential execution time of \(O(n^{2}/\varepsilon)\) for our algorithm for the assignment problem matches with the current state-of-the-art for the problem [16][19]. We also provide the first sub-logarithmic round algorithm that approximates the optimal transport plan in the MPC model.
From a practical stand-point, for both the assignment problem and the OT problem, we provide an implementation that exploits GPU parallelism. Experiments suggest that both of our GPU implementations outperform the GPU implementation of the state-of-the-art Sinkhorn algorithm provided by the Python Optimal Transport library [11] in terms of running time, while achieving the same level of accuracy.
Our Approach:Our algorithmic approach is based on the popular push-relabel framework for computing network flows. For the assignment problem, our algorithm maintains a matching \(M\) and a set of dual weights \(y(\cdot)\) on vertices of \(A\cup B\). The algorithm runs in \(O(1/\varepsilon^{2})\) iterations and in each iteration, it executes three steps. First, it greedily computes a maximal matching \(M^{\prime}\). In the second step, it uses \(M^{\prime}\) to update the matching \(M\) (the push step). Finally, it updates the dual weights (relabel step). Our proof of correctness is based on the standard dual feasibility conditions used to compute minimum-cost maximum cardinality matchings, with some modifications made to better accommodate our additive-approximate setting. Our main technical difference from standard push-relabel techniques is the novel running time analysis for the additive approximate setting. In particular, we show that the number of iterations required by our algorithm is just \(O(1/\varepsilon^{2})\). Within each iteration, the push and relabel steps take only \(O(n)\) sequential time and \(O(1)\) parallel time. The only non-trivial step, therefore, is the computation of a maximal matching which can be done in \(O(n^{2})\) sequential time and \(O(\log n)\) parallel time [13]. Maximal matchings can also be computed in \(O(\log\log n)\) rounds in the massively parallel computation (MPC) model [3]. As a result, our algorithm can also be executed in \(O(\log(\log n)/\varepsilon^{2})\) rounds in the MPC model, for the assignment problem. We extend our algorithm to also approximate the optimal transport plan by using the reduction of Lahn _et al._[19] (see Section 2.1 of the appendix for details).
Organization:In Section 2.1 we present the definitions required to describe our algorithm. In Section 2.2 we present our algorithm for the assignment problem. For simplicity of exposition, we present an algorithm that computes a \(3\varepsilon\)-approximation of the optimal solution to the assignment problem. To obtain an \(\varepsilon\)-approximation, one can simply choose the error factor in the algorithm to be \(\varepsilon/3\). In Section 2.3 we prove the sequential complexity of our algorithm for the assignment problem. In Section 2.3 we analyze the complexity of our algorithm in the parallel and MPC settings and also describe a GPU-friendly implementation of our matching algorithm. Finally, we present the experimental results in Section 2.3 In the appendix, Section 2.3 we extend our algorithm to the optimal transport problem. All missing proofs can also be found in the appendix, Section 2.3.
## 2 Algorithm
In this section, given an input to the assignment problem and a value \(0<\varepsilon<1\), we present an algorithm that computes a \(3\varepsilon\)-approximate matching.
### Preliminaries
We begin by introducing the terminologies required to understand our algorithm for the assignment problem. For any matching \(M\), we say that any vertex \(v\in A\cup B\) is _free_ if \(v\) is not matched in \(M\) and _matched_ otherwise. Our algorithm critically uses the notion of a maximal matching which we introduce next. For any bipartite graph that is not necessarily complete, any matching \(M\) is _maximal_ if and only if at least one end point of every edge in the graph is matched in \(M\). Thus, if a matching is not maximal, there is at least one edge between two free vertices. One can, therefore, compute a maximal matching in a greedy fashion by iteratively picking such an edge and adding it to the matching.
For every edge \((u,v)\in A\times B\), we transform its cost so that it becomes an integer multiple of \(\varepsilon\) as follows:
\[\overline{c}(u,v)=\varepsilon\lfloor c(u,v)/\varepsilon\rfloor \tag{1}\]
The rounding of edge costs may introduce an error that is bounded by \(\varepsilon\) for each edge and by at most \(\varepsilon n\) for any matching. Our algorithm assigns a dual weight \(y(v)\) for every \(v\in A\cup B\) such that a set of relaxed dual feasibility conditions are satisfied. A matching \(M\) along with dual weights \(y(\cdot)\) is \(\varepsilon\)-feasible if, for every edge \((a,b)\in A\times B\),
\[y(a)+y(b)\leq\overline{c}(a,b)+\varepsilon \text{if }(a,b)\notin M \tag{2}\] \[y(a)+y(b)=\overline{c}(a,b) \text{if }(a,b)\in M \tag{3}\]
In Lemma 3.1 we show that any \(\varepsilon\)-feasible matching produced by our algorithm has a cost within an additive error of \(\varepsilon\) from the optimal solution with respect to the costs \(\overline{c}(\cdot,\cdot)\). For any edge \((u,v)\), we define its _slack_\(s(u,v)\) to be \(0\) if \((u,v)\in M\). Otherwise, if \((u,v)\not\in M\), we set its slack to be \(s(u,v)=\overline{c}(u,v)+\varepsilon-y(u)-y(v)\). We say that \((u,v)\) is admissible if the slack on the edge is \(0\).
We observe that any matching \(M\) whose cardinality is at least \((1-\varepsilon)n\) can be converted into a perfect matching simply by arbitrarily matching the remaining \(\varepsilon n\) free vertices. The cost of any edge is at most \(1\), and so, this increases the cost of the matching \(M\) by at most \(\varepsilon n\). In addition to this, the rounding of costs from \(c(\cdot,\cdot)\) to \(\overline{c}(\cdot,\cdot)\) also introduces an increase of cost by \(\varepsilon n\). Finally, the \(\varepsilon\)-feasibility conditions introduced an additional additive error of \(\varepsilon n\), for a total error of \(3\varepsilon n\), as desired. Thus, in the rest of this section, we present an algorithm that computes an \(\varepsilon\)-feasible matching of cardinality at least \((1-\varepsilon)n\), which has a cost no more than \(\varepsilon n\) above the optimal matching's cost with respect to \(\overline{c}(\cdot,\cdot)\).
### Algorithm Details
Initially, we set the dual weight of every vertex \(b\in B\) to be \(\varepsilon\) and every vertex \(a\in A\) to be \(0\). We initialize \(M\) to \(\emptyset\). Our initial choice of \(M\) and the dual weights satisfies 2 and 3. Our algorithm executes iterations, which we will call _phases_. Within each phase, the algorithm constructs the set \(B^{\prime}\), which consists of all free vertices of \(B\). If \(|B^{\prime}|\leq\varepsilon n\), then \(M\) is an \(\varepsilon\)-feasible matching of cardinality at least \((1-\varepsilon)n\), and the algorithm will arbitrarily match the remaining free vertices and return the resulting matching. Otherwise, the algorithm computes the subset \(E^{\prime}\subseteq E\) of admissible edges with at least one end point in \(B^{\prime}\). Let \(A^{\prime}=\{a\mid a\in A\text{ and }(a,b)\in E^{\prime}\}\), i.e., the set of points of \(A\) that participate in at least one edge in \(E^{\prime}\). For each phase, the algorithm executes the following steps:
1. _Greedy step:_ Computes a maximal (i.e., greedy) matching \(M^{\prime}\) in the graph \(G^{\prime}(A^{\prime}\cup B^{\prime},E^{\prime})\).
2. _Matching Update:_ Let \(A^{\prime\prime}\) be the set of points of \(A^{\prime}\) that are matched in both \(M\) and \(M^{\prime}\) and let \(M^{\prime\prime}\) be the edges of \(M\) that are incident on some vertex of \(A^{\prime\prime}\). The algorithm adds the edges of \(M^{\prime}\) to \(M\) and deletes the edges of \(M^{\prime\prime}\) from \(M\).
3. _Dual Update:_ 1. For every edge \((a,b)\in M^{\prime}\), the algorithm sets \(y(a)\gets y(a)-\varepsilon\), and 2. For every vertex \(b\in B^{\prime}\) that is free with respect to \(M^{\prime}\), the algorithm sets \(y(b)\gets y(b)+\varepsilon\).
In each phase, the matching update step will add edges of \(M^{\prime}\) to \(M\) and remove edges of \(M^{\prime\prime}\) from \(M\). By construction, the updated set \(M\) is a matching. Furthermore, every vertex of \(A\) that was matched prior to the update continues to be matched after the update.
**Lemma 2.1**.: _The new set \(M\) of edges obtained after Step (II) is a matching. Furthermore, any vertex of \(A\) that was matched prior to Step (II) will continue to be matched after the execution of Step (II)._
The dual update step increases or reduces dual weights by \(\varepsilon\). Therefore, the dual weights always remain an integer multiple of \(\varepsilon\).
The algorithm maintains the following invariants:
1. The dual weight of every vertex in \(B\) (resp. \(A\)) is non-negative (resp. non-positive). Furthermore, every free vertex of \(A\) has a dual weight of \(0\).
2. The matching \(M\) and a set of dual weights \(y(\cdot)\) is \(\varepsilon\)-feasible.
Proofs of these invariants can be found in the appendix Section A.1
## 3 Analysis
Next, in Section E.1, we use invariants (I1) and (I2) to show that the algorithm produces a matching with the desired accuracy. In Section E.2, we use the invariants to bound the sequential and parallel execution times of our algorithm.
### Accuracy
As stated in Section E.1 the rounding of costs from \(c(\cdot,\cdot)\) to \(\overline{c}(\cdot,\cdot)\) introduces an error of \(\varepsilon n\). Furthermore, after obtaining a matching of size at least \((1-\varepsilon)n\), the cost of arbitrarily matching the last \(\varepsilon n\) vertices is no more than \(\varepsilon n\). From the following lemma, we can conclude that the total error in the matching computed by our algorithm is no more than \(+3\varepsilon n\). Proof of this Lemma can be found in the appendix Section A.2
**Lemma 3.1**.: _The \(\varepsilon\)-feasible matching of size at least \((1-\varepsilon)n\) that is produced by the main routine of our algorithm is within an additive error of \(\varepsilon n\) from the optimal matching with respect to the rounded costs \(\overline{c}(\cdot,\cdot)\)_
### Efficiency
Suppose there \(t\) phases executed by the algorithm. We use \(n_{i}\) to denote the size of \(B^{\prime}\) in phase \(i\). By the termination condition, each phase is executed only if \(B^{\prime}\) has more than \(\varepsilon n\) vertices, i.e.,\(n_{i}>\varepsilon n\). First, in Lemma 3.2 (appendix A.3), we show that the magnitude of the dual weight of any vertex cannot exceed \((1+2\varepsilon)\). This means the total dual weight magnitude over all vertices is upper bounded by \(n(1+2\varepsilon)\). Furthermore, in Lemma 3.3 (appendix A.4) we show that, during phase \(i\), the total dual weight magnitude increases by at least \(\varepsilon n_{i}\). From this, we can conclude that
\[\sum_{i=1}^{t}n_{i}\leq n(1+2\varepsilon)/\varepsilon=O(n/\varepsilon). \tag{4}\]
Note that, since each \(n_{i}\geq\varepsilon n\), we immediately get \(t\varepsilon n\leq n(1+2\varepsilon)/\varepsilon\), or \(t\leq(1+2\varepsilon)/\varepsilon^{2}=O(1/\varepsilon^{2})\). In order to get the total sequential execution time, we show, in Lemma 3.4 (appendix A.5) that each phase can be efficiently executed in \(O(n\times n_{i})\) time. Combining this with equation 4 gives an overall sequential execution time of \(O(n(\sum_{i=1}^{t}n_{i}))=O(n^{2}/\varepsilon)\).
**Lemma 3.2**.: _For any vertex \(v\in A\cup B\), the magnitude of its dual weight cannot exceed \(1+2\varepsilon\), i.e., \(|y(v)|\leq(1+2\varepsilon)\)._
**Lemma 3.3**.: _The sum of the magnitude of the dual weights increases by at least \(\varepsilon n_{i}\) in each iteration._
**Lemma 3.4**.: _The execution time of each phase is \(O(n\times n_{i})\) time._
### Analysis for the Unbalanced Case
In this section, we describe how the analysis of our matching algorithm can be extended to work for the unbalanced case, where \(|A|\neq|B|\). This analysis is critical for proving the correctness of our optimal transport version of the algorithm. Without loss of generality, assume \(|B|\leq|A|=n\). The overall description of the algorithm remains the same, except for the main routine of our algorithm produces an \(\varepsilon\)-feasible matching of size at least \((1-\varepsilon)|B|\). The asymptotic running time of both the parallel and sequential algorithms remains unchanged. In the following lemma, we bound the additive error of our algorithm for the unbalanced case; the argument is very similar to Lemma 3.1.
**Lemma 3.5**.: _Given an unbalanced input to the assignment problem with \(|B|\leq|A|\), the \(\varepsilon\)-feasible matching of cardinality at least \((1-\varepsilon)|B|\) that is returned by our algorithm is within an additive error of \(\varepsilon|B|\) from the optimal matching with respect to the cost function \(\overline{c}(\cdot,\cdot)\)_
## 4 Parallel Algorithm
In this section, we describe how to parallelize the matching algorithm of Section 2.2 leading to the result of Theorem 1.1 Recall that each phase of this algorithm has three steps : (I) Greedy step, (II) Matching update, and (III) Dual update. Steps (II) and (III) are easily parallelizable using \(O(1)\) time. However, step (I) is nontrivial to parallelize. Fortunately, Israeli and Itai gave a \(O(\log n)\) randomized parallel algorithm for computing a maximal matching on an arbitrary graph [13]. Therefore, we can complete step (I) by applying their algorithm as a black box. However their algorithm is very generic, applying even for non-bipartite graphs. In Section 4.1 we use the Israeli Itai algorithm to bound the parallel complexity of our algorithm. In Section 4.2 we further parallelize the phases of our algorithm leading to a simplified variation of the Israeli Itai algorithm that is more suited for a practical implementation. Finally, in Section 4.3 we provide a matrix-based interpretation of our simplified algorithm, allowing the algorithm to be easily implemented for GPUs.
### Analysis of our Algorithm for Parallel and MPC Models
The Israeli Itai algorithm is designed to work on an arbitrary graph \(G(V,E)\), which may not be bipartite. It computes a maximal matching on \(G\) in \(O(\log n)\) iterations. By directly using their \(O(\log n)\) algorithm for step (I) of our algorithm, we obtain an \(O(\log n)\) parallel running time for each phase. Since our algorithm executes \(O(1/\varepsilon^{2})\) phases, we obtain a worst-case theoretical bound of \(O(\log n/\varepsilon^{2})\) for our algorithm.
We would also like to note that specialized algorithms for maximal matching exist for the MPC model as well. For example, it is possible to compute a maximal matching under the MPC model using just \(O(\log(\log(\Delta)))\) rounds and \(O(n)\) space per machine, where \(\Delta\) is the maximum degree of the graph. [3]). As a result, we are able to achieve an algorithm in the MPC model that requires only \(O(\log(\log n)/\varepsilon^{2})\) rounds with \(O(n)\) space per machine.
### Simplifying the Parallel Implementation of our Algorithm
Instead of using Israeli Itai algorithm (which works for any arbitrary graph) as a black-box, we use a simpler adaptation of their algorithm for bipartite graphs. The Israeli Itai algorithm executes \(O(\log n)\) iterations, where each iteration executes the following steps, using \(O(1)\) parallel time:
1. Each vertex \(u\in V\) selects an incident edge \((u,v)\) at random, and directs it from \(u\) to \(v\), yielding a directed subgraph \(R\subseteq E\).
2. Each vertex \(u\in V\) selects, at random, one incoming edge from \(R\). Let \(S\) be the set of edges chosen by this step, with all directions removed. The graph \(S\) has a maximum vertex degree of \(2\).
3. Each vertex selects an edge of \(S\) at random. Any edge that is selected by both of its endpoints is added to \(M\), and any vertex matched by this step is removed for the next phase.
The Israeli Itai algorithm is designed to work for any graph that is not necessarily bipartite. In this situation, there is no way to partition the vertices into sets \(A\) and \(B\), and so it is necessary to consider edges as having two directions; a vertex \(u\) could 'propose' an outgoing edge \((u,v)\) (see step (i)) and also'receive' multiple proposals as incoming edges (see step (ii)). In the non-bipartite case, it is necessary for _every_ vertex to both send and receive proposals; all vertices must be handled using a symmetric process. This results in a subgraph \(S\), where each vertex could have a degree of \(2\), and an additional step is required to eliminate some of these edges in order to form a matching (see step (iii)).
However, in our situation, we are solely working with bipartite graphs. In this situation, the vertices are divided into two sets \(A\) and \(B\), and, as a result, we do not need to process each vertex in a symmetric fashion. Instead, we can allow one side to make proposals and the other side to receive proposals. As a result, for each iteration of the maximal matching algorithm, we can execute the following steps, which correspond to steps (i) and (ii) in the Israeli Itai algorithm.
1. Each vertex of \(B\) selects, at random, an incident edge, yielding a subgraph \(S\).
2. Each vertex of \(A\), with degree at least one in \(S\), arbitrarily selects an edge from \(S\) and adds it to the matching.
Note that, after step (b) in our approach, each vertex has at most one incident edge selected. This alleviates the need for step (iii), since steps (a) and (b) alone immediately result in a matching.
In addition to our simplified approach to each iteration of the Israeli Itai algorithm, we make a second optimization: Within our algorithm, instead of waiting for the Israeli Itai algorithm to complete, which could require \(O(\log n)\) iterations, our implementation _immediately_ updates the matching and dual weights after each iteration before moving to the next iteration of the Israeli Itai algorithm. While this increases the number of phases, each phase becomes very simple, taking only \(O(1)\) time, resulting in practical improvements. We believe that this additional source of parallelization could lead to asymptotic improvements to the parallel complexity of our algorithm. However, the proofs of Israeli Itai do not readily extend to this modified algorithm. Obtaining a tight bound on the parallel complexity of our modified algorithm is an important open question.
### A GPU-Friendly Implementation
Thus far, we have described our algorithm using graph theoretic notations. However, in practice, it is important for our algorithm to have an efficient GPU-based implementation. In this section, we provide a matrix-based implementation of our algorithm, using the simplified maximal matching approach described in Section 4.2.
In Algorithm 1 we provide a pseudocode of our simplified algorithm. The algorithm resembles the one described in Section 2.2 except for the differences described in Section 4.2. It assumes that the costs were already rounded to each be an even multiple of \(\varepsilon\). The matching returned by the algorithm has cardinality at least \((1-\varepsilon)n\) and a cost at most \(\varepsilon n\) above the optimal cost. The algorithm, as written, does not maintain dual weights explicitly, but if dual weights are required as part of the output, then, as discussed below, the algorithm can be modified to keep track of them. Note that all operations in this psuedocode are based on relatively simple matrix operations, and can be implemented easily on a GPU.
The algorithm takes as input an \(n\times n\) cost matrix \(W\) and a value for the error parameter \(\delta\), and returns an \(n\times n\) bit matrix that describes the matching, where an edge \((i,j)\) is in the matching if and only if the value at row \(i\) and column \(j\) is a \(1\). We use \(\mathbf{1}_{n}\) to represent a row vector containing all \(1\)'s, and \(\mathbf{1}_{m\times n}\) to represent an \(m\) by \(n\) matrix of all \(1\)'s. We use similar notation for vectors and matrices of all \(0\)'s. When considering an \(n\times n\) matrix, we follow a convention that each row corresponds to a vertex of \(A\), and each column corresponds to a vertex of \(B\).
Next, we explain further each part of the algorithm. Line 3 initializes the slack matrix \(S\) to reflect the initial slacks of all edges, which are initially non-matching. Throughout the algorithm, the matrix \(S\) will reflect the slack with respect to the edge 'as if it were a non-matching edge', i.e., \(S_{i,j}=W_{i,j}+\varepsilon-y(a)-y(b)\), regardless of the matching status of the edge. This makes the slacks easier to track without the need to explicitly maintain dual weights.
The main loop of the algorithm, beginning at line 4, specifies the stopping condition of the algorithm. The algorithm terminates once at most \(\varepsilon n\) free vertices of \(B\) remain. Each iteration of this main loop is a _phase_. Lines 5-7 compute a matrix \(P\), which represents a set of edges that will be added to \(M\) during the current phase. This edge set will be a matching on the admissible edges that are incident on free vertices of \(B\). This corresponds to Step (I) of the algorithm from Section2 except for \(P\) is not necessarily maximal. Lines 8-10 update the matching \(M\) by adding edges of \(P\) to \(M\), and removing any preexisting edges of \(M\) that are matched in \(P\). This corresponds to Step (2) of the algorithm from Section2.2 Finally, lines 11-15 update the slacks to reflect dual weight adjustments, corresponding to Step (3) of the algorithm from Section2.2 However, instead of tracking the dual weights explicitly, we simply update the slacks directly. Note that, when updating the slacks on edges incident on free vertices of \(B\), we include a slight change to Step (3). Instead of increasing the dual weight of free vertices of \(B\) by _exactly_\(\varepsilon\), we increase it as much as possible. For some free vertices of \(B\), the increase will be \(0\) (since \(P\) is not maximal), but it is also possible, in practice, for the increase to be larger than \(\varepsilon\).
```
1:Input: \(W\in\mathbb{R}_{20}^{n\times n}\), \(\delta\in\mathbb{R}_{>0}\)
2:\(M\leftarrow\mathbf{0}_{n\times n}\), \(\delta\gets W\)
3:while\(M\) has more than \(\varepsilon n\) columns with all 0's do
4:\(P\leftarrow\mathbf{0}_{n\times n}\)
5:for all columns \(b\) with all zero entries in \(M\)do
6: Randomly select a row \(a\) such that \(S_{a,b}=0\)
7:\(P_{a,b}\gets 1\)
8:for all\(a,b\in[1..n]\)do
9:\(M_{a,b}\gets 1\)if\(M_{a,b}=1\)and row \(a\) of \(P\) has all 0's
10:otherwise\(M_{a,b}\gets P_{a,b}\)
11:for all rows \(a\) in \(P\) with at least one \(1\)do
12: Add \(\varepsilon\) to every entry in row \(a\) of \(S\)
13:for all columns \(b\) in \(M\) with all \(0\) entries do
14:\(\rho\leftarrow\) the minimum entry in column \(b\) of \(S\)
15: Decrease every entry in column \(b\) of \(S\) by \(\rho\)
16:return\(M\)
```
**Algorithm 1** Approximate Bipartite Matching
## 5 Experiments
In this section, we present our experimental results. We implemented the parallel version of our algorithm for both the assignment problem as well as the OT problem. Both implementations are written in Python using the PyTorch library, which supports GPU operations. We compare these implementations of our algorithm to the Sinkhorn algorithm implementation in the Python Optimal Transport (POT) library [11]. This Sinkhorn implementation also uses PyTorch. Additionally, we compare our algorithm to a CUDA based GPU implementation and present the results of this comparison in appendix sectionC.3
Our experiments are run with an Intel Xeon E5-2680v4 2.4GHz chip and an Nvidia Tesla V100 GPU. We ran our algorithms using both real and synthetic data, including four different settings:an assignment problem between randomly generated synthetic point sets, an OT problem between randomly generated point sets, each having randomly assigned demands and supplies, an assignment problem formed from two sets of MNIST images, and an OT problem between two text word embeddings. For each setup, we generated input data and computed the assignment or OT cost using our algorithm with different values of \(\varepsilon\). Then, we determined the appropriate regularization parameter of the Sinkhorn algorithm, ensuring the Sinkhorn distance is close but no lower than the cost of the solution generated by our algorithm. We recorded the running time and the number of parallel rounds for both Sinkhorn and our algorithm. We also repeated each experiment using a reversed process by fixing the regularization parameter of Sinkhorn and searching for the \(\varepsilon\) value for our algorithm, which guarantees the our cost is similar to, but no more than, the Sinkhorn distance; the results for these reversed experimental setups can be found in the technical appendix. Note that, we also recorded the execution time of solving for the exact solution using POT's EMD function, which runs on the CPU. We only present for values of \(\varepsilon\) that are large enough such that either our algorithm or Sinkhorn algorithms run faster than the exact algorithm. We also record the additive error, relative to the optimal solution, of both Sinkhorn and our algorithm in appendix section C.1. Additionally, we conducted experiments comparing the sequential performances of our algorithm and Sinkhorn on the CPU, which can be found in the appendix section C.4.
Synthetic Data:For synthetic data generation, for both the assignment and OT experiments, we randomly sampled the location of two groups of \(n=10,000\) vertices, \(A\) and \(B\), in a \(2\)-dimensional unit square. For the assignment problem, the demand or supply of every vertex is \(1/n\). For the OT problem, the capacity of each vertex is initially chosen by selecting, uniformly at random from the interval \([0,1]\). Then, the capacities are normalized such that the total supply and demand are both equal to \(1\). For any pair of points \((a,b)\in A\times B\), the cost \(c(a,b)\) was set as the squared Euclidean distance between them. For each value of \(\varepsilon\), we executed \(10\) runs. For each combination of \(\varepsilon\), and algorithm choice, we averaged both the running times as well as the number of parallel rounds over all \(10\) runs and recorded the results. The results of running times and parallel rounds can be seen in Figure 1(a)(b) and Figure 1(a)(b) respectively.
Mnist:Next, we ran a similar experiment using real-world image data. We generated our inputs using the MNIST dataset of hand-written digit images [21]. Each image consists of a \(28\times 28\) pixel gray-scale image. The sets \(A\) and \(B\) each consist of \(n=10,000\) images from the MNIST dataset, selected at random. The cost \(c(a,b)\) between two images \(a\in A\) and \(b\in B\) is computed as follows: Let \(a(i,j)\) (resp. \(b(i,j)\)) be the value of the pixel in row \(i\) and column \(j\) of image \(a\) (resp. \(b\)). First, the two images are normalized so that the sum of all pixel values is equal to \(1\) for each image, i.e., \(\sum_{i,j\in[1,28]}a(i,j)=1\) and \(\sum_{i,j\in[1,28]}b(i,j)=1\). Then, the cost \(c(a,b)\) is given by the \(L_{1}\) distance between the resulting normalized images: \(c(a,b)=\sum_{i,j\in[1,28]}|a(i,j)-b(i,j)|\). Note that an upper
Figure 1: Plots of running times on GPU for the synthetic inputs (a)(b) and the real data inputs (c)(d)(e)(f).
bound on the largest cost is \(2\). For each algorithm and for each value of \(\varepsilon\), we averaged both the running times as well as the number of parallel rounds over \(10\) runs. The results for these experiments can be found in the plot in Figure 11c) and the plot in Figure 11c).
Nlp:In our final experiment, we considered OT with natural language data. We calculate the document distances based on OT following the procedure of previous work [13]. Each OT instance was generated by selecting two discrete sections of text with fixed lengths. Each unique word in the first (resp. second) section of text corresponds to a vertex of \(A\) (resp. \(B\)) in the OT problem. To acquire the supply and demand for each vertex, we tokenized each text section with NLTK [6], and counted the number of appearances of each unique token. Then the counts were normalized so that the total supply and demand were each equal to \(1\). Next, to generate the costs between vertices, we represent each unique token using a \(100\)-dimensional GloVe word embedding [20]. The cost of any edge \((a,b)\in A\times B\) is then given by the Euclidean distance between the corresponding points in this embedding.
In the last three plots in Figure 11 and Figure 11 we show the results of applying this experimental setup, using sections of the text from _The Count of Monte Cristo_, _20News_, _IMDB_. For each dataset, five different OT instances are created, using different sections of the text, and the results are averaged over all \(5\) runs. These experiments can be found in Figure 11d)(e)(f) and Figure 11d)(e)(f).
In all our experiments, our new parallel combinatorial algorithm almost always runs significantly faster than the Sinkhorn algorithm for the OT problem often with significantly fewer parallel rounds. Unlike the POT's highly optimized implementation of the Sinkhorn method, the implementation of our algorithm is new and may benefit significantly from further optimizations.
## 6 Conclusion
In this work, we provided a fast, highly parallelizable combinatorial algorithm for computing an \(\varepsilon\)-approximate solution to the assignment problem and the OT problem. We also provided a practical implementation of a slight variation of our algorithm, which outperforms the Sinkhorn algorithm, in terms of running times, in our experimental comparison. In light of this work, we would like to propose the following open question : Is it possible to improve the \(O(\log(n)/\varepsilon^{2})\) parallel running time of our algorithm, possibly by introducing an appropriate modification? In particular, our simplified practical algorithm presented in Section 4.2 seems to execute fewer parallel rounds than the our worst-case theoretical analysis might suggest. Can the simplifications used in our practical implementation be used to improve our worst-case theoretical running times?
Figure 2: Plots of parallel rounds on GPU for the synthetic inputs (a)(b) and the real data inputs (c)(d)(e)(f).
## Acknowledgement
We would like to acknowledge Advanced Research Computing (ARC) at Virginia Tech, which provided us with the computational resources used to run the experiments. Research presented in this paper was funded by NSF CCF-1909171 and NSF CCF-2223871. We would like to thank the anonymous reviewers for their useful feedback.
## References
* [1] Jason Altschuler, Jonathan Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via sinkhorn iteration. In _NIPS_, pages 1961-1971, 2017.
* [2] Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein GAN. _arXiv:1701.07875v3 [stat.ML]_, 2017.
* [3] Soheil Behnezhad, Mohammad Taghi Hajiaghayi, and David G Harris. Exponentially faster massively parallel maximal matching. In _2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS)_, pages 1637-1649. IEEE, 2019.
* [4] Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyre. Iterative bregman projections for regularized transportation problems. _SIAM Journal on Scientific Computing_, 37(2):A1111-A1138, 2015.
* [5] Jeremie Bigot, Raul Gouet, Thierry Klein, Alfredo Lopez, et al. Geodesic PCA in the wasserstein space by convex PCA. In _Annales de l'Institut Henri Poincare, Probabilites et Statistiques_, volume 53, pages 1-26. Institut Henri Poincare, 2017.
* [6] Steven Bird, Ewan Klein, and Edward Loper. _Natural language processing with Python: analyzing text with the natural language toolkit_. " O'Reilly Media, Inc.", 2009.
* [7] Nicolas Bonneel, Michiel Van De Panne, Sylvain Paris, and Wolfgang Heidrich. Displacement interpolation using lagrangian mass transport. In _Proceedings of the 2011 SIGGRAPH Asia Conference_, pages 1-12, 2011.
* [8] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In _NIPS_, pages 2292-2300, 2013.
* [9] Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In _International Conference on Machine Learning_, pages 685-693, 2014.
* [10] Pavel Dvurechensky, Alexander Gasnikov, and Alexey Kroshnin. Computational optimal transport: Complexity by accelerated gradient descent is better than by sinkhorn's algorithm. In _International Conference on Machine Learning_, pages 1367-1376. PMLR, 2018.
* [11] Remi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurelie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Leo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. Pot: Python optimal transport. _Journal of Machine Learning Research_, 22(78):1-8, 2021.
* [12] Remi Flamary, Marco Cuturi, Nicolas Courty, and Alain Rakotomamonjy. Wasserstein discriminant analysis. _Machine Learning_, 107(12):1923-1945, 2018.
* [13] Harold N Gabow and Robert E Tarjan. Faster scaling algorithms for network problems. _SIAM Journal on Computing_, 18(5):1013-1036, 1989.
* [14] Andrew V Goldberg, Serge A Plotkin, and Pravin M Vaidya. Sublinear-time parallel algorithms for matching and related problems. _Journal of Algorithms_, 14(2):180-213, 1993.
* [15] Amos Israeli and Alon Itai. A fast and simple randomized parallel algorithm for maximal matching. _Information Processing Letters_, 22(2):77-80, 1986.
* [16] Arun Jambulapati, Aaron Sidford, and Kevin Tian. A direct tilde \(\tilde{O}(1/\varepsilon)\) iteration parallel algorithm for optimal transport. _Advances in Neural Information Processing Systems_, 32, 2019.
* [17] Harold Kuhn. Variants of the hungarian method for assignment problems. _Naval Research Logistics_, 3(4):253-258, 1956.
* [18] Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. In _International conference on machine learning_, pages 957-966. PMLR, 2015.
* [19] Nathaniel Lahn, Deepika Mulchandani, and Sharath Raghvendra. A graph theoretic additive approximation of optimal transport. In _Advances in Neural Information Processing Systems 32_, pages 13813-13823, 2019.
* [20] Nathaniel Lahn and Sharath Raghvendra. A weighted approach to the maximum cardinality bipartite matching problem with applications in geometric settings. _Journal of Computational Geometry_, 11(2), 2021. Special Issue of Selected Papers from SoCG 2019.
* [21] Yann LeCun. The MNIST database of handwritten digits. _http://yann. lecun. com/exdb/mnist/_, 1998. License: Creative Commons Attribution-Share Alike 3.0.
* [22] Tianyi Lin, Nhat Ho, and Michael I Jordan. On the efficiency of entropic regularized algorithms for optimal transport. _Journal of Machine Learning Research_, 23(137):1-42, 2022.
* [23] Huidong Liu, GU Xianfeng, and Dimitris Samaras. A two-step computation of the exact gan wasserstein distance. In _International Conference on Machine Learning_, pages 3159-3168. PMLR, 2018.
* [24] Vien V Mai, Jacob Lindback, and Mikael Johansson. A fast and accurate splitting method for optimal transport: Analysis and implementation. _arXiv preprint arXiv:2110.11738_, 2021.
* [25] James B Orlin. A polynomial time primal network simplex algorithm for minimum cost flows. _Mathematical Programming_, 78(2):109-129, 1997.
* [26] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In _Empirical Methods in Natural Language Processing (EMNLP)_, pages 1532-1543, 2014.
* [27] Abhijeet Phatak, Sharath Raghvendra, Chittaranjan Tripathy, and Kaiyi Zhang. Computing all optimal partial transports. In _The Eleventh International Conference on Learning Representations_, 2022.
* [28] Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover's distance as a metric for image retrieval. _International journal of computer vision_, 40(2):99-121, 2000.
* [29] Roman Sandler and Michael Lindenbaum. Nonnegative matrix factorization with earth mover's distance metric for image analysis. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 33(8):1590-1602, 2011.
* [30] R. Sharathkumar and P. K. Agarwal. Algorithms for transportation problem in geometric settings. In _Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms_, pages 306-317, 2012.
* [31] Jonah Sherman. Area-convexity, l-infinity regularization, and undirected multicommodity flow. In _Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing_, pages 452-460, 2017. | ## Review
### Summary
The paper presents a novel combinatorial algorithm for solving the Optimal Transport (OT) problem, which provides an additive epsilon approximation and runs efficiently in both sequential and parallel settings. It demonstrates a significant improvement over the Sinkhorn algorithm, especially in the context of parallel computation, with an expected runtime of O(log log n / epsilon^2) in the Massively Parallel Computation (MPC) model. The authors support their claims with comprehensive theoretical analysis and experimental results showcasing speedups in various datasets. This work highlights the practical merits of the algorithm, making it a valuable contribution to the field.
### Strengths
- The paper introduces a practical and elegant combinatorial algorithm for the Optimal Transport problem.
- It outperforms the Sinkhorn algorithm in empirical evaluations, with significant speedups in various datasets.
- The algorithm is easily parallelizable and has a clear theoretical foundation.
- The paper provides a comprehensive analysis of the algorithm's performance across different systems.
- The simplicity and GPU-friendly interpretation of the algorithm enhance its applicability in real-world scenarios.
### Weaknesses
- The experimental evaluation is limited, primarily relying on toy datasets without a broader analysis of practical scenarios.
- The code used for experiments was not provided, hindering reproducibility.
- The paper does not compare against the best asymptotic algorithm by Jambulapati et al.
- The dependence on epsilon could be reduced for better performance.
- Some sections of the paper could benefit from clearer presentation and organization.
### Questions
- Can the authors share the code to enable reproducibility?
- Could a more extensive experimental comparison be provided, including a comparison with the Jambulapati algorithm?
- Is it possible to achieve better performance in parallel time with similar combinatorial techniques?
- What is the work and depth of the algorithm described in section 4.2?
- Can the authors address the absolute/relative error as a function of epsilon in their results?
### Soundness
**Score:** 3
**Description:** 3 = good: The methodology is sound, with a solid theoretical basis and supporting empirical evidence, though some aspects could be improved.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is generally well-written but may benefit from clearer organization and some sections could be more concise.
### Contribution
**Score:** 3
**Description:** 3 = good: The paper offers a useful contribution to the field of optimal transport with practical implications, although it has some limitations in depth and breadth of evaluation.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid and has moderate-to-high impact, but it requires minor improvements in evaluation and reproducibility.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel and practical solution to the Optimal Transport problem, backed by theoretical analysis and experimental results. While there are some weaknesses in empirical evaluation and clarity, the contributions to the field and the potential for real-world applications justify an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
Jeonghoon Kim
NAVER Cloud
[email protected]
Equal contribution
Jung Hyun Lee
NAVER Cloud
[email protected]
Sungdong Kim
NAVER Cloud, KAIST AI
[email protected]
&Sungdong Kim
NAVER Cloud, KAIST AI
University of Richmond
[email protected]
&Joonsuk Park
NAVER Cloud, NAVER AI Lab,
University of Richmond
[email protected]
Kang Min Yoo
NAVER Cloud, SNU AI Center
[email protected]
&Se Jung Kwon
NAVER Cloud
[email protected]
&Dongsoo Lee
NAVER Cloud
[email protected]
###### Abstract
Large language models (LLMs) face the challenges in fine-tuning and deployment due to their high memory demands and computational costs. While parameter-efficient fine-tuning (PEFT) methods aim to reduce the memory usage of the optimizer state during fine-tuning, the inherent size of pre-trained LLM weights continues to be a pressing concern. Even though quantization techniques are widely proposed to ease memory demands and accelerate LLM inference, most of these techniques are geared towards the deployment phase. To bridge this gap, this paper presents Parameter-Efficient and Quantization-aware Adaptation (PEQA) - a simple yet effective method that combines the advantages of PEFT with quantized LLMs. By updating solely the quantization scales, PEQA can be directly applied to quantized LLMs, ensuring seamless task transitions. Parallel to existing PEFT methods, PEQA significantly reduces the memory overhead associated with the optimizer state. Furthermore, it leverages the advantages of quantization to substantially reduce model sizes. Even after fine-tuning, the quantization structure of a PEQA-tuned LLM remains intact, allowing for accelerated inference on the deployment stage. We employ PEQA-tuning for task-specific adaptation on LLMs with up to \(65\) billion parameters. To assess the logical reasoning and language comprehension of PEQA-tuned LLMs, we fine-tune low-bit quantized LLMs using a instruction dataset. Our results show that even when LLMs are quantized to below 4-bit precision, their capabilities in language modeling, few-shot in-context learning, and comprehension can be resiliently restored to (or even improved over) their full-precision original performances with PEQA.
## 1 Introduction
Large language models (LLMs) such as PaLM, LLaMA, and the GPT-series [1; 2; 3; 4; 5; 6; 7] have demonstrated unprecedented levels of task-generalization ability in various applications, including dialogue systems, question answering, summarization, and translation [8; 9]. While they can follow instructions and learn to solve tasks via in-context task descriptions or few-shot examples [10], fine-tuning allows LLMs to align their behavior with desirable traits, such as following instructions more precisely [11] or adhering to certain principles [12]. Additionally, fine-tuning can improve the scaling curve by exposing the model to large collections of task-specific instruction datasets, leading to significant performance enhancements in various unseen downstream tasks [13; 14; 15; 16; 17]. However, the immense computational cost of fully fine-tuning large-scale models presents challenges for researchers and developers, especially given that LLMs have billions or even trillions of parameters [18].
In response, several parameter-efficient fine-tuning (PEFT) methods have been introduced [19; 20; 21], which only update a small number of parameters compared to the pre-trained weights of LLMs. PEFT notably reduces the number of learnable parameters, making the fine-tuning of pre-trained LLMs viable by ensuring that the optimizer states' memory usage becomes negligible. These strategies lead to decreased memory usage during training and more efficient storage and seamless transitions of task-specifically fine-tuned parameters during deployment. Nonetheless, LLMs as a whole still demand significant memory, and further reductions are attainable through model compression. As outlined in Hu et al. [21], for instance, LoRA can cut the memory usage during the fine-tuning of GPT-3 17SB from 1.2TB to 350GB. However, the model still requires approximately 350GB of memory for parameters in half-precision floating-point format.
Quantization is a favorable method for both compressing and accelerating neural networks by discretizing parameters into low-bit integers while maintaining a shared high-precision scale within each parameter group (e.g., channel or layer). However, during training phases, quantization-aware training (QAT) [22; 23; 24; 25] mandates updates for all parameters, rendering it not parameter-efficient. Since post-training quantization (PTQ) [26; 27; 28; 29] is executed after training, most existing quantization schemes primarily target the deployment phases. Although PTQ can be integrated with PEFT, when PTQ follows PEFT, the model remains intact during fine-tuning, not decreasing the memory usage. Conversely, if PTQ precedes PEFT, while there's a reduction in memory usage during fine-tuning, no inference acceleration can be achieved due to the PEFT parameters during deployment.
To bridge the gap between PEFT and quantization, we introduce the Parameter-Efficient and Quantization-aware Adaptation (PEQA), a simple yet effective quantization-aware PEFT method. As illustrated in Figure 1, PEQA encompasses two steps: (a) Decomposition (Quantization) where the parameter matrix of each fully-connected layer is decomposed into a matrix of low-bit integers and quantization scales; and (b) Fine-tuning wherein, for each downstream task, the quantization scale is fine-tuned while the integer matrix remains unchanged. For the quantized LLMs, merely updating the quantization scale leverages the advantages of PEQA. As a result, PEQA maintains the merits of PEFT, such as fewer trainable parameters, along with efficient storage and swift switching of task-specific parameters. Concurrently, it provides the benefits of quantization, including reduced
Figure 1: Illustration of our proposed PEQA scheme where \(A\cdot B\) indicates the element-wise product of \(A\) and \(B\). PEQA is memory-efficient fine-tuning method for quantized large language models that updates only the quantization scale while keeping the integer matrix frozen. Notice a significant reduction in memory footprint when full-precision weights are converted into sub-\(4\)-bit integers.
DRAM usage during both training and deployment, and inference acceleration due to fewer memory accesses at deployment.
Through this, we highlight the following:
* We introduce PEQA, a method that fine-tunes only the quantization scales of quantized LLMs, keeping the integer matrix frozen. It bridges the gap between PEFT and quantization, offering advantages such as reduced memory consumption during both training and deployment phases, seamless task transitions, and faster inference.
* To empirically validate the approach of solely fine-tuning the quantization scale while freezing the integer matrix, we compare the perplexity of LLMs fine-tuned with QAT, PEFT (+PTQ), and PEQA. The results indicate that PEQA delivers competitive performance in comparison to QAT and PEFT+PTQ, even at sub-\(4\)-bit precision.
* To assess the scalability and comprehension performance of PEQA, we apply PEQA to task-specific adaptation and instruction-tuning. Despite the reduction in model size by a factor of \(4\) to \(5\), PEQA demonstrates competitive performance up to a \(65\)B LLM when compared to full-precision baselines. The results suggest that even when LLMs are quantized into low-bit precision, the overall comprehension capability of quantized LLMs can be effectively restored to their original performance using PEQA.
## 2 Related Work
Large Language Models and Alignment Learning.Although LLMs have demonstrated great generalization capabilities through their sheer scales [2; 6; 30] and the emergent mechanism known as _in-context learning_[1], they still require significant alignment to follow natural language instructions [13], adhere to ethical guidelines or steer towards harmlessness [12], utilize external tools [31], and to be grounded in knowledge to generate truthful answers [16; 32]. In particular, instruction-tuning has been pivotal in enabling LLMs to generalize instruction-following abilities, enabling them to solve seemingly any NLP task with only the description of the task in natural text [13; 33; 34], allowing the models to be accessed in an interactive manner.
Parameter-Efficient Fine-Tuning.Fine-tuning leverages the generalization capabilities elicited from the general pretraining to specialize in specific domains and tasks [35; 36] or align the LLM with target behaviors [13]. However, updating parameters in LLMs comes with a high computation cost and minimal compute environment required for gradient computation. As the hyper-scale era makes fine-tuning for LLMs prohibitively expensive, both efficient and effective alternatives to fine-tuning have received considerable attention. Specifically, inspired by the sensitivity of LLMs to prompts [37], a line of works has proposed introducing trainable prompt embeddings prepended to the input text while freezing the original LLM parameters [19; 38; 39]. As another approach, adapter modules [20] introduce task-specific parameters, which are inserted between the pre-existing layers of the model Extending on this adapter-based approach, LoRA [21] employs the concept of low-rank bottleneck modules while demonstrating comparable performance to full fine-tuning. Subsequent works have unified the various versions and diverging approaches to PEFT [40; 41] by formulating them in a single mathematical framework. These parameter-efficient methods have shown comparable performance to full model fine-tuning, presenting a cost-effective and efficient avenue for tailoring LLMs to specific tasks.
However, even with the adoption of PEFT, the inherent model size of the LLM remains a challenge to handle. One immediate solution is to apply post-training quantization (PTQ), but its interaction with task-specific parameters is still an area of active research.There have been attempts to integrate PEFT and neural network quantization, including methods like Quadapter [42] and AlphaTuning [43]. Yet, these methods have primarily been explored in smaller models of 1.3B or fewer parameters. Appendix J delineates the distinctions between our method and AlphaTuning.
Neural Network Quantization.Neural network quantization consists largely of quantization-aware training (QAT) and PTQ. QAT methods [22; 23; 24; 25] basically train not only quantization scales but all the parameters of a full-precision neural network to narrow the performance gap between the full-precision model and its quantized counterpart. Unfortunately, since QAT involves training all the weights of a full-precision network, it is not feasible to apply QAT to LLMs. To quantize LLMs, PTQtechniques tailored to LLMs [26; 27; 28; 29; 44; 45] have been presented. Although such PTQ approaches do not require learning all the parameters of an LLM at all, as PTQ occurs after training/fine-tuning LLMs, PTQ cannot allow for compressing the model size during training/fine-tuning LLMs. To reduce the model size even during training/fine-tuning LLMs, researchers have recently focused on combining PEFT with quantization.
## 3 Methodology
### Problem Setup
**Memory Demands in Fine-tuning.** Given the significant computational demands associated with fully fine-tuning large language models, parameter-efficient fine-tuning (PEFT) methods have been introduced [19; 20; 21; 46]. One of the primary goals of PEFT methods is to reduce memory usage of the optimizer state during training, specifically by reducing the number of learnable parameters. While existing PEFT techniques do decrease memory consumption of the optimizer state and also narrow the accuracy gap between full fine-tuning and PEFT, the pre-trained weights of large language models still demand substantial memory space. When applying LoRA [21] to LLaMA-65B, for instance, even though storing optimizer states for trainable parameters consumes only \(52\)MB (only query and value matrices are adapted with a LoRA rank of \(4\)), the model size still occupies a huge portion of DRAM usage due to the frozen FP16 weights of a pre-trained model. To make PEFT more efficient, reducing the model size is an indispensable requisite. Given that LLMs mostly consist of fully-connected layers, compressing the weights of fully-connected layers is a key factor in compressing the model size and thus leading to more efficient PEFT.
**Inference Latency of Large Language Models.** During text generation inference, an autoregressive LLM generates tokens sequentially. A significant portion of the inference latency arises from matrix-vector multiplications, as opposed to matrix-matrix multiplications. Given that the batch size during inference is typically small [27; 28], matrix-vector multiplications tend to be memory-bound. Specifically, accessing global memory, such as DRAM, is expensive on contemporary high-end GPUs. Thus, the number of weights loaded into registers profoundly impacts the speed of multiplication between a matrix and a vector. To decrease the number of weights (subsequently increasing the weights loaded into registers), quantization is a widely researched method for both compressing and speeding up neural networks [29; 44; 47]. While quantization-aware training (QAT) imposes a significant load on both computation and memory, post-training quantization (PTQ) is often viewed as a fallback strategy among traditional quantization techniques to enhance the generation latency of LLMs. To both accelerate LLM inference and retain all advantages of PEFT, an innovative alternative approach should be pursued.
Figure 2: (a) DRAM usage comparison of LLaMA-\(65\)B on various tuning methods and (b) perplexity over model size when tuning LLaMA models with LoRA and PEQA on Wikitext2 dataset. The size of a circle indicates the number of trainable parameters. For instance, the LLaMA-65B model with LoRA has a size of 131GB and 10.49M trainable parameters. Otherwise, LLaMA-65B with 4-bit PEQA has a model size of 33GB and 6.8M trainable parameters.
### Parameter-Efficient and Quantization-aware Adaptation (PEQA)
Quantization reduces bit-precision for inference acceleration, less storage and increasing throughput. INT8 quantization, which lower the bit-precision for both activations and weights, utilize dedicated engine to effectively accelerate arithmetic computation [48]. This is effective for large batches where computing speed matters but less so for smaller batches constrained by memory. To tackle this memory issue, weight-only quantization keeps high precision for activations (e.g., FP16) but compresses weights to \(4\)-bit or less, targeting memory I/O enhancement in modern GPUs [28, 47]. For simplicity, we mainly focus on low-bit weight-only quantization in a linear asymmetric per-channel context in this paper.
For pre-trained weights of a fully-connected layer \(\mathbf{W}_{0}\in\mathbb{R}^{n\times m}\), while PEQA can be applied to quantized LLMs, we first quantize \(\mathbf{W}_{0}\). In other words, for a given bit-width \(b\), quantized pre-trained weights \(\widehat{\mathbf{W}}_{0}\) can be written as
\[\widehat{\mathbf{W}}_{0}=\mathbf{s}_{0}\cdot\overline{\mathbf{W}}_{0}=\mathbf{s}_{0}\cdot \Big{(}\text{clamp}\Big{(}\Big{\lfloor}\frac{\mathbf{W}_{0}}{\mathbf{s}_{0}}\Big{\rfloor} +\mathbf{z}_{0},0,2^{b}-1\Big{)}-\mathbf{z}_{0}\Big{)}, \tag{1}\]
where \(A\cdot B\), \(\lfloor\cdot\rceil\), and \(\text{clamp}(\cdot,a,b)\) indicate the element-wise product of \(A\) and \(B\), the rounding function, and the clamping function into the range \([a,b]\), respectively, while per-channel scales and zero-points (namely, \(\mathbf{s}_{0},\mathbf{z}_{0}\in\mathbb{R}^{n\times 1}\)) are initialized to minimize \(\|\mathbf{W}_{0}-\widehat{\mathbf{W}}_{0}\|_{F}^{2}\). Notice that \(\mathbf{s}_{0}\) and \(\mathbf{z}_{0}\) are not related to any downstream task. Here, we freeze \(\overline{\mathbf{W}}_{0}=\text{clamp}\Big{(}\Big{\lfloor}\frac{\mathbf{W}_{0}}{\mathbf{s} _{0}}\Big{\rfloor}+\mathbf{z}_{0},0,2^{b}-1\Big{)}-\mathbf{z}_{0}\Big{)}\), which is the integer quantization indices of \(\mathbf{W}_{0}\), for every full-connected layer in a pre-trained LLM. And then we fine-tune only \(\mathbf{s}_{0}\) (residing outside the clamp function in Eq. 1) while sharing \(\overline{\mathbf{W}}_{0}\) across all downstream tasks. Consequently, quantized pre-trained weights \(\widehat{\mathbf{W}}_{0}\) are adapted to a downstream task as follows:
\[\widehat{\mathbf{W}}=(\mathbf{s}_{0}+\Delta\mathbf{s})\cdot\overline{\mathbf{W}}_{0}=(\mathbf{s}_ {0}+\Delta\mathbf{s})\cdot\Big{(}\text{clamp}\Big{(}\Big{\lfloor}\frac{\mathbf{W}_{0} }{\mathbf{s}_{0}}\Big{\rfloor}+\mathbf{z}_{0},0,2^{b}-1\Big{)}-\mathbf{z}_{0}\Big{)}, \tag{2}\]
where \(\Delta\mathbf{s}\in\mathbb{R}^{n\times 1}\) represents the gradient update of \(\mathbf{s}_{0}\) obtained by adaptation to a downstream task. We dub Eq. 2 as Parameter-Efficient and Quantization-aware Adaptation (PEQA). PEQA is a memory-efficient fine-tuning method dedicated to quantized LLMs by solely updating quantization scales \(\mathbf{s}_{0}\). With \(\overline{\mathbf{W}}_{0}\) being frozen and shared for all downstream tasks, \(\mathbf{s}_{0}+\Delta\mathbf{s}\) are task-specific parameters in PEQA, which can be quickly and easily swapped when it is needed to switch to a different downstream task. Note that, PEQA can be seamlessly applied not only to weight-only quantized LLMs but also to weight-activation quantized ones. The overall procedure of PEQA is described in Figure 1 in detail.
### Benefits of PEQA Inherited from Bridging the Gap between PEFT and Quantization
PEQA is designed to have the advantages of both existing PEFT methods [19, 21, 46] and quantized LLM [28, 44, 47, 49]. We summarize the benefits of PEQA in this subsection.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & DRAM & DRAM & Inference & Task- \\ Method & (Fine-Tuning) & (Deployment) & Speed & Switching \\ \hline Full Fine-Tuning & \(457\)GB & \(131\)GB & Slow & Slow \\ PEFT & \(131\)GB & \(131\)GB & Slow & Fast \\ PEFT+PTQ & \(131\)GB & \(33\)GB & Fast & Slow \\ PTQ+PEFT & \(33\)GB & \(33\)GB & Slow & Fast \\
**PEQA (Ours)** & **33GB** & **33GB** & **Fast** & **Fast** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of PEQA with other methods using LLaMA \(65\)B on the DRAM usage and training time during fine-tuning, the DRAM storage for deployment, the inference acceleration, and task-switching efficiency. The DRAM usage estimation for PEFT is based on LoRA. PEFT+PTQ denotes PTQ after PEFT and PTQ+PEFT denotes PTQ before PEFT.
**Benefits of PEFT.** By solely updating the quantization scales, PEQA substantially reduces the memory overhead associated with optimizer state, a feature consistent with other PEFT approaches. Notably, the utilization of quantization scales \(\mathbf{s}_{0}+\Delta\mathbf{s}\) allows PEQA to swiftly and effortlessly switch between task-specific parameters. This capability positions PEQA as ideally suited for deployment of quantized LLMs as a service, mirroring another key advantage of earlier PEFT methods. Note, however, that such a capability is not present in PEFT+PTQ (i.e., the case where PTQ is applied after PEFT) due to the non-reversible quantizers, such as the rounding function.
**Benefits of Quantization.** Previous PEFT methods freeze the pre-trained weights \(\mathbf{W}_{0}\) and utilize additional learnable parameters to reduce the memory usage of optimizer state. Similarly, PEQA freezes quantized pre-trained weights \(\mathbf{\overline{W}}_{0}\) (which are the integer quantization values of \(\mathbf{W}_{0}\)) and fine-tunes quantization scales \(\mathbf{s}_{0}\). Since \(\mathbf{\overline{W}}_{0}\) is a \(b\)-bit integer matrix, not only can PEQA reduce the optimizer states' size but also the model size, leading to even greater efficiency in the PEFT scheme, as illustrated in Figure 1(a). In addition, since \(\mathbf{\widehat{W}}\) is a \(b\)-bit quantized matrix, PEQA can speed up token generation process at inference through dedicated kernels that accelerate the multiplication between a quantized weight matrix and a half-precision activation vector, as described by Frantar et al. [28], Lin et al. [47], and Park et al. [49]. It is worth noticing that employing PTQ before fine-tuning (PTQ+PEFT) [50] allows for memory-efficient fine-tuning and seamless task transition for the quantized LLM; however, PTQ+PEFT is not able to inherit from inference acceleration of quantization. As a result, PEQA can achieve both model compression in the process of fine-tuning and inference acceleration for fine-tuned models with marginal performance degradation compared to LoRA, one of the state-of-the-art PEFT techniques, as shown in Figure 1(b).
The comparison of PEQA with other methods using the LLaMA \(65\)B is summarized in Table 1.
## 4 Experiments
In this section, we empirically validate the effectiveness of our proposed PEQA method by examining its performance in both parameter-efficient fine-tuning (PEFT) and as a quantization method. We achieve this goal by using a series of benchmarks [52, 53, 54, 55, 56, 57], datasets [51, 58, 59], and LLMs [6, 4, 60, 61] that have been publicly introduced. In Section 4.1, to empirically confirm the validity of fine-tuning only the quantization scale while freezing the integer matrix, we compare the perplexity of fine-tuned LLMs through quantization-aware training (QAT), PEFT (+PTQ), and PEQA. In Section 4.2, to evaluate PEQA's scalability and task-specific adaptation performance, we fine-tune and assess LLMs on the Wikitext2 [51] and PennTreeBank [58] datasets using PEQA and LoRA [21]. Section 4.3 is dedicated to showcasing PEQA's performance-restoring capability through instruction-tuning on the Alpaca [59] dataset after round-to-nearest (RTN) quantization over the full-precision original model.
To assess PEQA's performance as a PEFT method, we contrast PEQA with LoRA, which is currently recognized as one of the leading PEFT methods. As discussed in 3.1, we employ a baseline case that merges OPTQ [28], the state-of-the-art weight-only post-training quantization (PTQ) method for LLMs, with LoRA in order to evaluate PEQA's quantization capabilities. In the context of LoRA, QV\(4\) signifies the application of query and value layer weights with a LoRA rank of \(4\), while QKVO\(16\)
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & W Bits & GPT-Neo \(2.7\)B & GPT-J \(6\)B & LLaMA \(7\)B & LLaMA \(13\)B \\ \hline QAT & 4 & \(11.07\) & \(8.81\) & \(5.76\) & \(5.26\) \\ LoRA + OPTQ & 4 & \(12.09\) & \(8.91\) & \(7.13\) & \(5.31\) \\ PEQA (Ours) & 4 & \(11.38\) & \(8.84\) & \(5.84\) & \(5.30\) \\ \hline QAT & 3 & \(12.37\) & \(9.60\) & \(6.14\) & \(5.59\) \\ LoRA + OPTQ & 3 & \(21.93\) & \(11.22\) & \(19.47\) & \(7.33\) \\ PEQA (Ours) & 3 & \(12.54\) & \(9.36\) & \(6.19\) & \(5.54\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: To empirically confirm the validity of PEQA’s approach, we compare the perplexity (PPL) of fine-tuned LLMs through QAT, PEFT+PTQ, and PEQA on Wikitext2 [51] for GPT-Neo \(2.7\)B, GPT-J \(6\)B, LLaMA \(7\)B, and LLaMA \(13\)B. Weights are quantized into either 3-bit or 4-bit per channel, without a group size [28, 49]. LoRA configuration is set to QV\(4\). The lower PPL, the better.
indicates the application of query, key, value, and output projection layer weights with a LoRA rank of \(16\). For PEQA, we utilize round-to-nearest (RTN) for the initialization method of quantized LLM.
### Comparing Quantization Capabilities: PEQA vs. QAT vs. PEFT+PTQ
Table 2 presents the perplexity when various quantized LLMs are fine-tuned using QAT, PEFT (+PTQ), and PEQA. To validate our approach, PEQA, which solely fine-tunes the quantization scale as outlined in Eq. 2 while simultaneously maintaining the integer matrix in a frozen state, we use QAT as an upper bound and PEFT+PTQ as a lower bound. Note that QAT, unlike PEQA, updates all parameters including pre-trained weights as well as quantization scales. Table 2 reveals the competitive performance of PEQA compared to QAT. Furthermore, our observations indicate that PEQA consistently outperforms the combination of LoRA and OPTQ for any selected model, regardless of whether a 3-bit or 4-bit setting is employed. Such superior performance can be attributed to PEQA's method of fine-tuning quantized LLMs, which minimizes the final task loss on the full training data, a capability that OPTQ lacks. Detailed settings are in Appendix B.
Diving deeper into the comparison between QAT and PEQA, it is important to note that QAT minimizes the final task loss computed from a weight-only quantized model, as described in Eq. 1, with respect to both \(\mathbf{W}_{0}\) and \(\mathbf{s}_{0}\). Note that QAT includes all pre-trained weights for training, resulting in the practical model size limitation of LLMs under investigation being capped at \(13\)B in our experiments. Despite the fact that QAT also updates \(\mathbf{W}_{0}\) in Eq. 1, which is one of the most simple and straightforward approach though, we observe that the performance gap between QAT and PEQA narrows when the \(4\)-bit association is introduced, especially as the size of LLMs increases. Impressively, PEQA can even outperform QAT in a \(3\)-bit setting, a notably low-bit setting that challenges OPTQ in terms of quantizing LLMs. These findings suggest that the approach of PEQA, solely updating quantization scales while freezing the integer quantization values of pre-trained weights, can achieve performance comparable to that of QAT.
### Task-specific Adaptation and Scalability.
We evaluate the task-specific adaptation performance and scalability of PEQA by employing GPT-Neo, GPT-J, and LLaMA models (up to \(65\)B) on the Wikitext2 [51] and PennTreeBank (PTB) [58] datasets. The adaptation performance of PEQA is compared with LoRA, with its configuration set to QV4. As depicted in Table 3, we note a gradual convergence of PEQA's perplexity to that of full-precision LoRA, with only marginal PPL degradation as the model size expands. Thus, Table 3 demonstrates that PEQA, compared to a prominent PEFT technique that utilizes full-precision pre-trained language model (PLM), can maintain a competitive perplexity level in LLMs while concurrently reducing DRAM usage through low-bit quantized
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{W Bits} & GPT-Neo & GPT-J & LLaMA & LLaMA & LLaMA & LLaMA \\ & & \(2.7\)B & \(6\)B & \(7\)B & \(13\)B & \(30\)B & \(65\)B \\ \hline \multicolumn{8}{c}{Wikitext2} \\ \hline LoRA & \(16\) & \(10.63\) & \(8.50\) & \(5.53\) & \(5.06\) & \(4.06\) & \(3.82\) \\ \hline LoRA+OPTQ & \(4\) & \(12.09\) & \(8.91\) & \(7.13\) & \(5.31\) & \(4.39\) & \(4.10\) \\ PEGA (Ours) & \(4\) & \(\mathbf{11.38}\) & \(\mathbf{8.84}\) & \(\mathbf{5.84}\) & \(\mathbf{5.30}\) & \(\mathbf{4.36}\) & \(\mathbf{4.02}\) \\ \hline LoRA+OPTQ & \(3\) & \(21.93\) & \(11.22\) & \(19.47\) & \(7.33\) & \(5.94\) & \(5.32\) \\ PEGA (Ours) & \(3\) & \(\mathbf{12.54}\) & \(\mathbf{9.36}\) & \(\mathbf{6.19}\) & \(\mathbf{5.54}\) & \(\mathbf{4.58}\) & \(\mathbf{4.27}\) \\ \hline \multicolumn{8}{c}{PTB} \\ \hline LoRA & \(16\) & \(15.92\) & \(12.92\) & \(9.14\) & \(8.52\) & \(7.21\) & \(7.11\) \\ \hline LoRA+OPTQ & \(4\) & \(18.83\) & \(13.46\) & \(11.22\) & \(8.83\) & \(\mathbf{7.55}\) & \(7.46\) \\ PEGA (Ours) & \(4\) & \(\mathbf{16.55}\) & \(\mathbf{13.30}\) & \(\mathbf{9.69}\) & \(\mathbf{8.64}\) & \(7.68\) & \(\mathbf{7.36}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: To show scalability of PEQA, the perplexity (PPL) on Wikitext2 and PennTreeBank (PTB) was compared with LoRA and PEQA. In this comparison, only the weights were quantized into \(3\)-bit and \(4\)-bit per-channel without group size. LoRA configuration is set to QV4. A lower PPL value indicates better performance.
weights. Notably, for a \(3\)-bit quantization, PEQA experiences less performance degradation as the model size decreases due to extreme low-bit quantization compared to the combined LoRA and OPTQ. To further elucidate our findings, we have provided figures illustrating the results of \(3\)-bit and \(4\)-bit PEQA in the Appendix D. The comprehensive results indicate that for the deployment stage, PEQA allows models with larger parameters to operate under DRAM usage constraints, outperforming full-precision PEFT methods. For instance, under a restricted DRAM footprint, large LLaMA models can be explored using PEQA, while full-precision LoRA permits only smaller LLaMA models. Additional results with OPT [4], ranging from \(1.3\)B to \(66\)B models are included in the Appendix E. The detailed experimental settings are also included in the Appendix C.
Model Size and Number of Learnable Parameters.In an effort to estimate the DRAM usage necessitated by PEQA and LoRA during training and deployment, we outline the number of learnable parameters (for training) and the model size (expressed in gigabytes, GB, for deployment) in Table 4. As demonstrated in Table 4, PEQA involves fewer learnable parameters than LoRA when a quantization scale is assigned to each channel of pre-trained weights. For instance, PEQA has approximately \(1.54\) times fewer learnable parameters for LLaMA models than LoRA (QV4). In addition to having fewer learnable parameters, PEQA, through low-bit weight quantization, can also reduce the model size, which captures a huge amount of the DRAM footprint in fine-tuning LLMs. Remarkably, when fine-tuning LLaMA \(30\)B using PEQA with \(4\)-bit precision, the resulting model size is significantly smaller than that obtained by adapting \(13\)B through LoRA, and slightly larger than the model adapted from LLaMA 7B using LoRA. Additionally, a comparison of the memory peak during training between PEQA and LoRA is provided in Appendix L.
Group-wise Quantization.Group-wise per-channel quantization [49], where weight groups in channels share quantization parameters, maintains accuracy at lower bits. In Table 5, we present that the performance incrementally improves as more learnable parameters are incorporated into PEQA. In particular, for Table 5, we examine various group sizes (denoted by \(g\)) when quantizing the weights [49, 62]. Through relatively straightforward grouping (employed to regulate the number of learnable parameters for PEQA), the perplexity incrementally decreases as more learnable parameters are utilized. Detailed settings are in Appendix G.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & W Bits & Channel-Wise & \(g256\) & \(g128\) & \(g64\) \\ \hline LLaMA 7B & \(4\) & \(5.84\) & \(5.69\) & \(5.66\) & \(5.64\) \\ & \(3\) & \(6.19\) & \(5.96\) & \(5.91\) & \(5.89\) \\ \hline LLaMA \(13\)B & \(4\) & \(5.30\) & \(5.18\) & \(5.16\) & \(5.16\) \\ & \(3\) & \(5.54\) & \(5.40\) & \(5.37\) & \(5.34\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Multi-scale (grouping) performance with PEQA-tuned LLaMA 7B and \(13\)B on Wikitext2 where \(g\) indicates the group size [49]. The perplexity consistently increases as PEQA take on more learnable parameters.
### Instruction-tuning with the Alpaca Dataset
Although inference cost and training efficiency are crucial, the methods of PEFT and quantization have not been extensively explored. While LoRA and PEQA have been evaluated on adaptation performance for task-specific purposes as discussed in Section 4.2, it has not been widely corroborated that fine-tuning via PEFT can retain the performance for unseen tasks. Thus, given that RTN quantization results in a non-negligible performance degradation in LLMs, it is important to assess how much the performance of low-bit quantized LLMs is degraded on comprehensive tasks. To address these concerns, we conduct comprehensive experiments, benchmarking our techniques on prevalent instruction-following datasets and assessing the response quality of PEQA-tuned LLMs. Furthermore, to determine if PEQA can regain the performance of full-precision LLMs, we employ RTN quantization in conjunction with PEQA instruction-tuning across LLaMAs.
Experimental Settings.We train LLaMAs [6; 7] in various sizes on the Alpaca dataset [59], which is one of the popular instruction-following datasets generated from outputs of InstructGPT [11]. Then, we test the models on other downstream tasks such as common-sense reasoning tasks [52; 53; 54; 55] and massive multitask language understanding (MMLU) [56]. Due to limited time and resources, we could not conduct an exhaustive search over hyper-parameters such as the learning rate or epoch. Instead, we followed the training recipe from Taori et al. [59]. The LoRA configuration is set to QKVO16. Detailed settings can be found in Appendix H.
Common-Sense Reasoning.We conducted an experiment on five tasks [52; 53; 54; 55] to assess whether the performance of common-sense reasoning and in-context learning can be sustained even after instruction-tuning LLMs on the Alpaca dataset via LoRA or PEQA. As depicted in Table 6, the results show that LLMs fine-tuned with LoRA or PEQA maintain a consistent trend in common-sense reasoning tasks. Furthermore, since PEQA's performance aligns closely with that of full-precision
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \# & \multicolumn{2}{c}{Model} & \multirow{2}{*}{PIQA} & \multirow{2}{*}{HellaSwag} & \multirow{2}{*}{ARC-C} & \multirow{2}{*}{ARC-E} & \multirow{2}{*}{OBQA} & \multirow{2}{*}{Average} \\ & & \multicolumn{2}{c}{Params} & & & & & & & \\ \hline \multicolumn{10}{c}{Zero-Shot} \\ \hline \multirow{4}{*}{LLaMA} & 7B & \(13.5\)GB & \(77.3\) & \(73.0\) & \(41.4\) & \(52.5\) & \(42.4\) & \(57.3\) \\ & \(13\)B & \(26.1\)GB & \(79.1\) & \(76.2\) & \(44.5\) & \(59.9\) & \(42.2\) & \(60.4\) \\ & \(30\)B & \(65.1\)GB & \(80.1\) & \(79.2\) & \(45.5\) & \(58.9\) & \(42.0\) & \(61.1\) \\ \hline \multirow{4}{*}{+LoRA} & 7B & \(13.5\)GB & \(78.6\) & \(73.3\) & \(43.7\) & \(55.8\) & \(43.0\) & \(58.9(+1.6)\) \\ & \(13\)B & \(26.1\)GB & \(79.6\) & \(76.7\) & \(46.3\) & \(62.0\) & \(43.2\) & \(61.5(+1.1)\) \\ & \(30\)B & \(65.1\)GB & \(81.8\) & \(80.3\) & \(48.2\) & \(61.6\) & \(42.8\) & \(62.9(+1.8)\) \\ \hline \multirow{4}{*}{+PEQA} & 7B & **3.8GB** & 77.9 & \(71.4\) & \(42.4\) & \(57.2\) & \(42.0\) & \(58.2(+0.9)\) \\ & \(13\)B & **7.0GB** & \(78.9\) & \(74.0\) & \(46.4\) & \(62.5\) & \(42.8\) & \(60.9(+0.5)\) \\ & \(30\)B & **16.9GB** & \(80.3\) & \(78.4\) & \(49.8\) & \(63.3\) & \(42.8\) & \(62.9(+1.8)\) \\ \hline \multicolumn{10}{c}{Five-Shot} \\ \hline \multirow{4}{*}{LLaMA} & 7B & \(13.5\)GB & \(79.4\) & \(75.3\) & \(45.6\) & \(65.8\) & \(44.0\) & \(62.0\) \\ & \(13\)B & \(26.1\)GB & \(80.0\) & \(78.4\) & \(50.4\) & \(70.8\) & \(47.2\) & \(65.4\) \\ & \(30\)B & \(65.1\)GB & \(82.5\) & \(82.2\) & \(56.2\) & \(74.9\) & \(47.0\) & \(68.6\) \\ \hline \multirow{4}{*}{+LoRA} & 7B & \(13.5\)GB & \(79.9\) & \(75.2\) & \(46.4\) & \(66.5\) & \(47.2\) & \(63.0(+1.0)\) \\ & \(13\)B & \(26.1\)GB & \(81.1\) & \(78.8\) & \(53.5\) & \(72.4\) & \(47.0\) & \(66.6(+1.1)\) \\ & \(30\)B & \(65.1\)GB & \(84.1\) & \(83.3\) & \(59.5\) & \(79.2\) & \(50.6\) & \(71.4(+2.8)\) \\ \hline \multirow{4}{*}{+PEQA} & 7B & **3.8GB** & \(78.9\) & \(73.2\) & \(45.1\) & \(65.4\) & \(44.0\) & \(61.3(-0.7)\) \\ & \(13\)B & **7.0GB** & \(80.7\) & \(76.0\) & \(50.9\) & \(71.6\) & \(48.0\) & \(65.5(+0.1)\) \\ \cline{1-1} & \(30\)B & **16.9GB** & \(82.7\) & \(80.2\) & \(56.8\) & \(75.5\) & \(47.6\) & \(68.6(+0.0)\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Common-sense reasoning and in-context learning performance of parameter-efficient instruction-tuned LLaMAs [6] using Alpaca datasets. LoRA configuration is set to QKVO16. Quantization precision of PEQA is set to \(4\)-bit per-channel without group size. Note that ARC-C, ARC-E and OBQA stands for ARC-Challenge, ARC-Easy, and OpenBookQA respectively.
adaptation, this consistency is observed even when the model size has been reduced through low-bit weight quantization. We utilized the evaluation code from Eleuther AI's _lm-evaluation-harness_[63].
Massive Multitask Language Understanding.To assess whether the performance of PEQA-tuned models can be restored to the levels of full-precision model's performance, starting from RTN performance, we test our models on the MMLU benchmark containing \(57\) multiple-choice problems across various domains and levels of knowledge [56]. In this experiment, we utilize the RTN results as a baseline to determine the extent of degradation on quantized LLM. As shown in Table 7, instruction-tuning with PEQA boosts the performance of RTN quantized models. This observation supports our claim that our approach enables LLMs to regain their few-shot in-context learning and understanding capabilities, even though they are significantly smaller than their original model size through quantization. Unfortunately, it seems that the PEQA-tuning does not achieve the best performance in fine-tuning larger models. This might be because PEQA-tuning did not been sufficiently explored different epochs or learning rates. Nonetheless, the observation that the performance of the quantized LLaMAs is restored through PEQA-tuning using an instruction-following dataset highlights the potential to further enhance the accuracy of PTQ methods.
## 5 Conclusion
Fine-tuning aligns large language models (LLMs) with specific purposes. To maintain the comprehensive capabilities of LLMs while effectively aligning them, we introduce PEQA, a method that seamlessly combines the advantages of parameter-efficient fine-tuning (PEFT) and quantization in LLMs. PEQA not only reduces DRAM consumption during fine-tuning but also accelerates inference latency for deployment by retaining weights in a low-bit quantized format. Through rigorous testing across various datasets and LLMs, we have found that PEQA can match the performance of full-precision baselines in task-specific adaptations, even with a significant reduction in model size. When combined with instruction-tuning, PEQA's performance demonstrates its ability to both preserve and enhance comprehensive knowledge after the inherent compromises of quantization, recovering the performance of original model by simply updating the quantization scales of the quantized LLM.
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline & \# Params & Model Size & Humanities & STEM & Social Sciences & Other & Average \\ \hline LLaMA [6] & 7B & \(13.5\)GB & \(32.6\) & \(29.6\) & \(38.0\) & \(37.9\) & \(34.4\) \\ & \(13\)B & \(26.1\)GB & \(42.8\) & \(36.1\) & \(53.3\) & \(53.2\) & \(46.1\) \\ & \(30\)B & \(65.1\)GB & \(54.6\) & \(46.5\) & \(66.1\) & \(63.4\) & \(57.4\) \\ \hline \begin{tabular}{l} + RTN \\ (w/o group size) \\ \end{tabular} & 7B & **3.8GB** & \(28.4\) & \(25.6\) & \(26.9\) & \(31.8\) & \(28.3\) \\ \begin{tabular}{l} 13B \\ 30B \\ \end{tabular} & **7.0GB** & \(30.5\) & \(27.2\) & \(35.5\) & \(38.8\) & \(32.8\) \\ \begin{tabular}{l} 30B \\ \end{tabular} & **16.9GB** & \(39.6\) & \(34.0\) & \(46.1\) & \(49.7\) & \(42.1\) \\ \hline \begin{tabular}{l} + PEQA \\ \end{tabular} & 7B & **3.8GB** & \(35.7\) & \(30.9\) & \(38.2\) & \(40.0\) & \(35.8\) \\ \begin{tabular}{l} 13B \\ 30B \\ \end{tabular} & **7.0GB** & \(42.8\) & \(37.7\) & \(53.6\) & \(49.0\) & \(45.0\) \\ \begin{tabular}{l} 30B \\ \end{tabular} & **16.9GB** & \(51.1\) & \(44.1\) & \(62.4\) & \(60.7\) & \(54.3\) \\ \hline LLaMA2 [7] & 7B & \(13.5\)GB & \(43.3\) & \(37.0\) & \(51.8\) & \(52.4\) & \(45.9\) \\ \begin{tabular}{l} 13B \\ 70B \\ \end{tabular} & \(26.0\)GB & \(54.4\) & \(44.2\) & \(63.4\) & \(60.8\) & \(55.7\) \\ \begin{tabular}{l} 70B \\ \end{tabular} & \(138.0\)GB & \(65.2\) & \(57.9\) & \(80.3\) & \(74.7\) & \(69.1\) \\ \hline \begin{tabular}{l} + RTN \\ (g\(256\)) \\ \end{tabular} & 7B & **3.8GB** & \(39.5\) & \(35.5\) & \(49.3\) & \(49.9\) & \(43.2\) \\ \begin{tabular}{l} 13B \\ \end{tabular} & **7.0GB** & \(50.2\) & \(42.6\) & \(61.3\) & \(59.7\) & \(53.2\) \\ \begin{tabular}{l} 70B \\ \end{tabular} & **35.3GB** & \(63.7\) & \(55.9\) & \(78.4\) & \(71.6\) & \(67.0\) \\ \hline \begin{tabular}{l} + PEQA \\ \end{tabular} & 7B & **3.8GB** & \(52.0\) & \(38.4\) & \(54.1\) & \(52.0\) & \(48.1\) \\ \begin{tabular}{l} 13B \\ \end{tabular} & **7.0GB** & \(60.5\) & \(45.0\) & \(63.3\) & \(57.0\) & \(55.3\) \\
\begin{tabular}{l} 70B \\ \end{tabular} & **35.3GB** & \(73.9\) & \(55.3\) & \(77.8\) & \(68.2\) & \(67.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Massive Multitask Language Understanding (MMLU) benchmark performance of PEQA-tuned LLaMAs using Alpaca datasets. Five-shot accuracy is reported for the MMLU. Quantization precision of PEQA is set to \(4\)-bit. When we quantize LLaMA [6] into \(4\)-bit precision using the RTN method, no group size is applied. For LLaMA2 [7], a group size of \(256\) is used with the RTN method. Note that RTN stands for round-to-nearest in the table.
## References
* Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. volume 33, pages 1877-1901, 2020.
* Chowdhery et al. [2020] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022.
* Andonian et al. [2021] Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Shivanshu Purohit, Tri Songz, Wang Phil, and Samuel Weinbach. GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch, 8 2021. URL [https://www.github.com/eleutherai/gpt-neox](https://www.github.com/eleutherai/gpt-neox).
* Zhang et al. [2022] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. _arXiv preprint arXiv:2205.01068_, 2022.
* OpenAI [2023] OpenAI. Gpt-4 technical report, 2023.
* Touvron et al. [2023] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothe Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
* Touvron et al. [2023] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_, 2023.
* Stiennon et al. [2020] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 3008-3021. Curran Associates, Inc., 2020. URL [https://proceedings.neurips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf).
* Floridi and Chiriatti [2020] Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. _Minds Mach._, 30(4):681-694, dec 2020. ISSN 0924-6495. doi: 10.1007/s11023-020-09548-1. URL [https://doi.org/10.1007/s11023-020-09548-1](https://doi.org/10.1007/s11023-020-09548-1).
* Min et al. [2022] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_, pages 11048-11064, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL [https://aclanthology.org/2022.emnlp-main.759](https://aclanthology.org/2022.emnlp-main.759).
* Ouyang et al. [2022] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_, 35:27730-27744, 2022.
* Bai et al. [2022] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. _arXiv preprint arXiv:2212.08073_, 2022.
* Wei et al. [2022] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=gEZrGCozdqR](https://openreview.net/forum?id=gEZrGCozdqR).
* Sanh et al. [2022] Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjian Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. Multi-task prompted training enables zero-shot task generalization. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=9Vrb9DOWI4](https://openreview.net/forum?id=9Vrb9DOWI4).
* Scialom et al. [2022] Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. Fine-tuned language models are continual learners. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_, pages 6107-6122, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL [https://aclanthology.org/2022.emnlp-main.410](https://aclanthology.org/2022.emnlp-main.410).
* Xie et al. [2022] Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. _arXiv preprint arXiv:2201.05966_, 2022.
* Tay et al. [2022] Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q Tran, David R So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, et al. Transcending scaling laws with 0.1% extra compute. _arXiv preprint arXiv:2210.11399_, 2022.
* Fedus et al. [2022] William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. _J. Mach. Learn. Res._, 23(1), jan 2022. ISSN 1532-4435.
* Liu et al. [2021] Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. Gpt understands, too, 2021.
* Houlsby et al. [2019] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, _Proceedings of the 36th International Conference on Machine Learning_, volume 97 of _Proceedings of Machine Learning Research_, pages 2790-2799. PMLR, 09-15 Jun 2019. URL [https://proceedings.mlr.press/v97/houlsby19a.html](https://proceedings.mlr.press/v97/houlsby19a.html).
* Hu et al. [2022] Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=mZeVKeeFYf9](https://openreview.net/forum?id=mZeVKeeFYf9).
* Jung et al. [2019] Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Jae-Joon Han, Youngjun Kwak, Sung Ju Hwang, and Changkyu Choi. Learning to quantize deep networks by optimizing quantization intervals with task loss. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 4350-4359, 2019.
* Esser et al. [2020] Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S. Modha. Learned step size quantization. In _International Conference on Learning Representations_, 2020. URL [https://openreview.net/forum?id=rkgO66VKDS](https://openreview.net/forum?id=rkgO66VKDS).
* Zhao et al. [2020] Xiandong Zhao, Ying Wang, Xuyi Cai, Cheng Liu, and Lei Zhang. Linear symmetric quantization of neural networks for low-precision integer hardware. In _International Conference on Learning Representations_, 2020. URL [https://openreview.net/forum?id=H1Ib2VFPS](https://openreview.net/forum?id=H1Ib2VFPS).
* Lee et al. [2021] Jung Hyun Lee, Jihun Yun, Sung Ju Hwang, and Eunho Yang. Cluster-promoting quantization with bit-drop for minimizing network quantization loss, 2021.
* Dettmers et al. [2022] Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. _arXiv preprint arXiv:2208.07339_, 2022.
* Yao et al. [2022] Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. Zeroquant: Efficient and affordable post-training quantization for large-scale transformers. _arXiv preprint arXiv:2206.01861_, 2022.
* Frantar et al. [2023] Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. OPTQ: Accurate quantization for generative pre-trained transformers. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=tcbBPnfwxS](https://openreview.net/forum?id=tcbBPnfwxS).
* Xiao et al. [2022] Guangxuan Xiao, Ji Lin, Mickael Seznec, Julien Demouth, and Song Han. Smoothquant: Accurate and efficient post-training quantization for large language models. _arXiv preprint arXiv:2211.10438_, 2022.
* Hoffmann et al. [2022] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. _arXiv preprint arXiv:2203.15556_, 2022.
* Schick et al. [2023] Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. _arXiv preprint arXiv:2302.04761_, 2023.
* Nakano et al. [2021] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. Webgpt: Browser-assisted question-answering with human feedback. _CoRR_, abs/2112.09332, 2021. URL [https://arxiv.org/abs/2112.09332](https://arxiv.org/abs/2112.09332).
* Wang et al. [2022] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. _CoRR_, abs/2212.10560, 2022. doi: 10.48550/arXiv.2212.10560. URL [https://doi.org/10.48550/arXiv.2212.10560](https://doi.org/10.48550/arXiv.2212.10560).
* Chiang et al. [2023] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL [https://lmsys.org/blog/2023-03-30-vicuna/](https://lmsys.org/blog/2023-03-30-vicuna/).
* Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* Radford et al. [2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9, 2019.
* Schick and Schutze [2020] Timo Schick and Hinrich Schutze. Exploiting cloze questions for few shot text classification and natural language inference. _arXiv preprint arXiv:2001.07676_, 2020.
* Li and Liang [2021] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 4582-4597, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.353. URL [https://aclanthology.org/2021.acl-long.353](https://aclanthology.org/2021.acl-long.353).
* Lester et al. [2021] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 3045-3059, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.243. URL [https://aclanthology.org/2021.emnlp-main.243](https://aclanthology.org/2021.emnlp-main.243).
* He et al. [2022] Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, and Graham Neubig. Towards a unified view of parameter-efficient transfer learning. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=ORDcd5Axok](https://openreview.net/forum?id=ORDcd5Axok).
* Mao et al. [2022] Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Scott Yih, and Madian Khabsa. UniPELT: A unified framework for parameter-efficient language model tuning. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 6253-6264, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.433. URL [https://aclanthology.org/2022.acl-long.433](https://aclanthology.org/2022.acl-long.433).
* Park et al. [2022] Minseop Park, Jaeesong You, Markus Nagel, and Simyung Chang. Quadapter: Adapter for GPT-2 quantization. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, _Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022_, pages 2510-2517. Association for Computational Linguistics, 2022. URL [https://aclanthology.org/2022.findings-emnlp.185](https://aclanthology.org/2022.findings-emnlp.185).
* Kwon et al. [2022] Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, and Dongsoo Lee. Alphatuning: Quantization-aware parameter-efficient adaptation of large-scale pre-trained language models. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, _Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022_, pages 3288-3305. Association for Computational Linguistics, 2022. URL [https://aclanthology.org/2022.findings-emnlp.240](https://aclanthology.org/2022.findings-emnlp.240).
* Lee et al. [2023] Jung Hyun Lee, Jeonghoon Kim, Se Jung Kwon, and Dongsoo Lee. FlexRound: Learnable rounding based on element-wise division for post-training quantization. In _Proceedings of the 40th International Conference on Machine Learning_, volume 202 of _Proceedings of Machine Learning Research_, pages 18913-18939. PMLR, 23-29 Jul 2023. URL [https://proceedings.mlr.press/v202/lee23h.html](https://proceedings.mlr.press/v202/lee23h.html).
* Heo et al. [2023] Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, and Dongsoo Lee. Rethinking channel dimensions to isolate outliers for low-bit weight quantization of large language models. _arXiv preprint arXiv:2309.15531_, 2023.
* Mangrulkar et al. [2022] Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, and Sayak Paul. Perf: State-of-the-art parameter-efficient fine-tuning methods. [https://github.com/huggingface/peft](https://github.com/huggingface/peft), 2022.
* Lin et al. [2023] Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. Awq: Activation-aware weight quantization for llm compression and acceleration. _arXiv preprint arXiv:2306.00978_, 2023.
* Xiao et al. [2023] Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. SmoothQuant: Accurate and efficient post-training quantization for large language models. In _Proceedings of the 40th International Conference on Machine Learning_, 2023.
* Park et al. [2023] Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, and Dongsoo Lee. Lut-gemm: Quantized matrix multiplication based on luts for efficient inference in large-scale generative language models, 2023.
* Dettmers et al. [2023] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. _arXiv preprint arXiv:2305.14314_, 2023.
* Merity et al. [2016] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models, 2016.
* Clark et al. [2018] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. _arXiv preprint arXiv:1803.05457_, 2018.
* Bisk et al. [2020] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In _Proceedings of the AAAI conference on artificial intelligence_, volume 34, pages 7432-7439, 2020.
* Zellers et al. [2019] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, 2019.
* Mihaylov et al. [2018] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In _EMNLP_, 2018.
* Hendrycks et al. [2020] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. _arXiv preprint arXiv:2009.03300_, 2020.
* Mishra et al. [2022] Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In _ACL_, 2022.
* Marcus et al. [1993] Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. _Computational Linguistics_, 19(2):313-330, 1993. URL [https://www.aclweb.org/anthology/J93-2004](https://www.aclweb.org/anthology/J93-2004).
* Taori et al. [2023] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca), 2023.
* Black et al. [2021] Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, March 2021. URL [https://doi.org/10.5281/zenodo.5297715](https://doi.org/10.5281/zenodo.5297715). If you use this software, please cite it using these metadata.
* Zhang et al. [2023] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. _arXiv preprint arXiv:2303.16199_, 2023.
* Alistarh et al. [2017] Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: communication-efficient SGD via gradient quantization and encoding. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_, pages 1709-1720, 2017. URL [https://proceedings.neurips.cc/paper/2017/hash/6c340f25839e6acdc73414517203f5f0-Abstract.html](https://proceedings.neurips.cc/paper/2017/hash/6c340f25839e6acdc73414517203f5f0-Abstract.html).
* Gao et al. [2021] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021. URL [https://doi.org/10.5281/zenodo.5371628](https://doi.org/10.5281/zenodo.5371628).
* Loshchilov and Hutter [2019] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net, 2019. URL [https://openreview.net/forum?id=Bkg6RiCq77](https://openreview.net/forum?id=Bkg6RiCq77).
* Rasley et al. [2020] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Rajesh Gupta, Yan Liu, Jiliang Tang, and B. Aditya Prakash, editors, _KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020_, pages 3505-3506. ACM, 2020. doi: 10.1145/3394486.3406703. URL [https://doi.org/10.1145/3394486.3406703](https://doi.org/10.1145/3394486.3406703).
* [66] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_, pages 38-45, Online, October 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.6. URL [https://aclanthology.org/2020.emnlp-demos.6](https://aclanthology.org/2020.emnlp-demos.6).
Common Experimental Settings
For the common experimental settings, AdamW [64] optimizer and linear-decaying learning rate scheduler were used. We use Deepspeed repository [65]2 for FP16 and BF16 training. Additionally, we utilize Huggingface repository[66]3 for training, evaluation code and dataset.
Footnote 2: [https://github.com/microsoft/DeepSpeed](https://github.com/microsoft/DeepSpeed)
Footnote 3: [https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling)
## Appendix B Experimental Settings of Section 4.1
We compare the perplexity when weights are quantized and adapted by quantization-aware training (QAT), LoRA with post-training quantization (PTQ), and PEQA, using the Wikitext2 dataset in Section 4.1. The LoRA configuration is set to QV4. For PTQ method, we utilize OPTQ [28]4 which is state-of-the-art low-bit weight-only PTQ method. We set the model's maximum sequence length to \(1024\). Batch size and epoch for all experiments are set to \(128\) and \(15\) respectively. The learning rates for the experiments of Table 2 are displayed in Table 8. Learning rates for LoRA and PEQA are shown in Appendix C.
Footnote 4: [https://github.com/IST-DASLab/gptq](https://github.com/IST-DASLab/gptq)
## Appendix C Experimental settings of Table 3
In Section 4.2, Table 3 show the scalability and task-specific adaptation performance of PEQA by comparing with LoRA and LoRA+OPTQ on Wikitext2 [51] and PennTreeBank (PTB) [58] datasets. Detailed experimental settings are as follows. LoRA configuration is set to QV4. For PTQ method, we utilize OPTQ [28] which is state-of-the-art low-bit weight-only PTQ method. We set input sequence length after tokenization (block size) to \(1024\) for under \(65\)B models. For LLaMA \(65\)B, input sequence length after tokenization is set to \(768\) due to memory issue. Batch size and epoch for all experiments are set to \(128\) and \(15\) respectively. Learning rates for Table 3 experiments are shown in Table 9.
## Appendix D
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & W Bits & GPT-Neo \(2.7\)B & GPT-J \(6\)B & LLaMA \(7\)B & LLaMA \(13\)B \\ \hline QAT & 4 & \(4\)e-\(5\) & \(5\)e-\(6\) & \(1\)e-\(5\) & \(3\)e-\(5\) \\ QAT & 3 & \(6\)e-\(5\) & \(1\)e-\(5\) & \(2\)e-\(5\) & \(1\)e-\(5\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Learning rates of QAT in Table 2.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{W Bits} & GPT-Neo & GPT-J & LLaMA & LLaMA & LLaMA & LLaMA \\ & & \(2.7\)B & \(6\)B & \(7\)B & \(13\)B & \(30\)B & \(65\)B \\ \hline \multicolumn{8}{c}{Wikitext2} \\ \hline LoRA & \(16\) & \(5\)e-\(4\) & \(6\)e-\(4\) & \(1\)e-\(4\) & \(1\)e-\(4\) & \(2\)e-\(4\) & \(4\)e-\(5\) \\ \hline PEQA (Ours) & \(4\) & \(5\)e-\(5\) & \(6\)e-\(6\) & \(6\)e-\(6\) & \(1\)e-\(5\) & \(1\)e-\(5\) & \(1\)e-\(5\) \\ PEQA (Ours) & \(3\) & \(6\)e-\(5\) & \(5\)e-\(5\) & \(2\)e-\(5\) & \(6\)e-\(5\) & \(3\)e-\(5\) & \(3\)e-\(5\) \\ \hline \multicolumn{8}{c}{PTB} \\ \hline LoRA & \(16\) & \(2\)e-\(3\) & \(1\)e-\(3\) & \(8\)e-\(4\) & \(5\)e-\(4\) & \(4\)e-\(4\) & \(6\)e-\(4\) \\ \hline PEQA (Ours) & \(4\) & \(3\)e-\(4\) & \(5\)e-\(5\) & \(5\)e-\(5\) & \(5\)e-\(5\) & \(3\)e-\(5\) & \(6\)e-\(5\) \\ \hline \hline \end{tabular}
\end{table}
Table 9: Learning rate of LoRA and PEQA in Table 3 on Wikitext2 and PTB datasets.
The Perplexity of \(3\)-bit and \(4\)-bit PEQA on Wikitext2 Dataset
Figure 3 illustrates the results of \(3\)-bit and \(4\)-bit PEQA's next token prediction performance on the Wikitext2 dataset. As shown in Figure 3, \(3\)-bit performance of PEQA shows lower perplexity than \(3\)-bit post-training quantized LoRA. The results from the \(3\)-bit PEQA show that PEQA allows for continuity in model size options under DRAM usage constraints.
## Appendix E OPT Models Adapted with PEQA and LoRA on Wikitext2 Dataset
Table 10 shows the perplexity of OPT[4] models adapted with PEQA and LoRA on the Wikitext2 dataset. The perplexity gap between LoRA and PEQA becomes smaller as the model size increases.
## Appendix F LoRA Configuration Comparison on Wikitext2 Dataset
As shown in Table 11, the LoRA target module configuration of QV4 and QKVO\(16\) has not much effect on perplexity on Wikitext2 experimental results. Table 11 shows equal tendency as mentioned in [21]. We utilize QV\(4\) configuration for Section 4.2 and QKVO\(16\) configuration for Section 4.3 respectively.
Experimental Settings of Multi-scale Performance
In Section 4.2, Table 5 shows the perplexity of PEQA with grouping learnable parameters. We set model maximum sequence length to \(1024\). Batch size and epoch for all experiments are set to \(128\) and \(15\) respectively. Learning rates for experiments are shown in Table 12.
## Appendix H Experimental Settings of Section 4.3
In Section 4.3, we use the Alpaca dataset [59] for instruction-tuning. We set learning rate, epoch, and quantization group size as in Table 13. The batch size is set to \(128\) for all experiments in this subsection. As mentioned in Section 4.3, due to limited time and resources, we couldn't conduct an exhaustive search over hyper-parameters such as learning rate or epoch. We believe that there are hyper-parameters that can perform better. For LLaMA \(1\) series (LLaMA \(7\), \(13\), and \(30\)B), we truncate the prompt to the length of \(2024\) since their maximum sequence length is \(2024\) when evaluating the massive multitask language understanding (MMLU) benchmark. Thus, for LLaMA\(2\)-\(70\)B, we set the tokenizer max length to \(1024\) on fine-tuning due to the resource limit. Otherwise, we use default max length of tokenizer5 on training. For the evaluation, we use default tokenizer setting. For every experiment in this section, the configuration of PEQA is set to \(4\)-bit RTN quantization.
Footnote 5: e.g. ’hf-internal-testing/llama-tokenizer’ for LLaMA-1 series, ’meta-llama/Llama-2-70b-hf’ for LLaMA-2 series.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Hyper-parameter & LLaMA 7B & LLaMA 13B & LLaMA 30B & LLaMA2 7B & LLaMA2 13B & LLaMA2 70B \\ \hline Epoch & \(3\) & \(3\) & \(5\) & \(3\) & \(3\) & \(5\) \\ Learning rate & \(2\)e-\(5\) & \(2\)e-\(5\) & \(5\)e-\(6\) & \(5\)e-\(6\) & \(5\)e-\(6\) & \(5\)e-\(6\) \\ Group size & Per-channel & Per-channel & Per-channel & \(256\) & \(256\) & \(256\) \\ \hline \hline \end{tabular}
\end{table}
Table 13: The learning rate, epoch, quantization group size [49] for experiments on Section 4.3. the weights were quantized into \(4\)-bit.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & W Bits & g\(-1\) & g\(256\) & g\(128\) & g\(64\) \\ \hline LLaMA 13B & 4 & \(1\)e-\(5\) & 4e-\(5\) & 4e-\(5\) & 3e-\(5\) \\ & 3 & 6e-\(5\) & 9e-\(5\) & 9e-\(5\) & 5e-\(5\) \\ \hline LLaMA 7B & 4 & 6e-\(6\) & 2e-\(5\) & 2e-\(5\) & 1e-\(5\) \\ & 3 & 2e-\(5\) & 6e-\(5\) & 4e-\(5\) & 7e-\(5\) \\ \hline \hline \end{tabular}
\end{table}
Table 12: Learning rate for Table 5.
## Appendix I LLaMA \(7\)B and \(13\)B on Natural Instruction
To evaluate the instruction-following ability of instruct-tuned models, we test them on another instruction-following dataset, Natural Instruction (NI) [57]. Different from the Alpaca dataset, instructions of NI were collected by humans for existing 61 NLP tasks. For simplicity, we utilize evaluation splits consisting of \(12\) subtasks and restrict the maximum number of instances for each task to \(200\). At test time, the model should generate proper output for the given input with instruction for the target unseen task. As shown in Table 14, we find LLaMAs trained with PEQA show consistently better zero-shot task generalization performance (ROUGE-L) in NI for all parameter sizes compared to those from LoRA.
## Appendix J Comparison with AlphaTuning
When diving deeper into quantization scales, learnable parameters for both PEQA and AlphaTuning, it's worth noting that PEQA's adherence to uniform quantization means there's only one shared quantization scale for integer weight. Conversely, AlphaTuning's non-uniform approach means that for a \(b\)-bit quantization, there are \(b\) individual quantization scales for each weight matrix. Despite having multiple scales, AlphaTuning only fine-tunes one, leaving the rest static. As such, the number of trainable parameters are identical and AlphaTuning seems to offer a larger potential for a well-fitted model, but it can be easily seen that \(b-1\) rest static scales introduced in AlphaTuning have limited usability, and thus the method may be prone to overfitting as evident through empirical results.
In Table 15, we conducted training on GPT-Neo and OPT 1.3B using the Wikitext2 dataset. Interestingly, PEQA, drawing from its methodological advantages, consistently demonstrates superior performance to AlphaTuning by at least \(0.7\) ppl on the Wikitext2 dataset. Both AlphaTuning and PEQA used channel-wise trainable parameters. Batch size of AlphaTuning is set to \(32\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & W Bits & OPT 1.3B & GPT-Neo 1.3B \\ \hline AlphaTuning & 4 & 1e-4 & 5e-4 \\ AlphaTuning & 3 & 1e-4 & 1e-3 \\ \hline \hline \end{tabular}
\end{table}
Table 16: Learning rate of AlphaTuning in Table 15.
\begin{table}
\begin{tabular}{r c c c c} \hline \hline \# Params & LLaMA & \(+\)LoRA & \(+\)LoRA w/OPTQ & \(+\)PEQA \\ \hline \(7\)B & \(9.4\) & \(24.4\) & \(25.0\) & **27.1** \\ \(13\)B & \(8.9\) & \(31.3\) & \(29.2\) & **34.1** \\ \hline \hline \end{tabular}
\end{table}
Table 14: Natural Instruction benchmark performance of parameter-efficient instruction-tuned LLaMAs using Alpaca datasets. Zero-shot performance (ROUGE-L) is reported for the NI. LoRA configuration is set to QKVO16. Quantization precisions of LoRA w/ OPTQ and PEQA are set to \(4\)-bit.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & \# Bits & OPT 1.3B & GPT-Neo 1.3B \\ \hline AlphaTuning & 4 & \(13.15\) & \(15.03\) \\ PEQA (Ours) & 4 & \(12.40\) & \(14.22\) \\ \hline AlphaTuning & 3 & \(14.00\) & \(17.25\) \\ PEQA (Ours) & 3 & \(13.40\) & \(15.16\) \\ \hline \hline \end{tabular}
\end{table}
Table 15: The perplexity (PPL) of AlphaTuning and PEQA on Wikitext2 with OPT and GPT-Neo 1.3B. The lower PPL, the better.
Choice of Updating Quantization Scales or Zero-Points
Uniform quantization can represent both asymmetric and symmetric quantizations, hence it's not always necessary to mandate the use of zero-points. This is why adopting a strategy of only learning the scale factor serves as a fundamental and scalable baseline. We opted for this approach to clearly establish its advantages. To determine the efficacy of learning only the scaling factors, we have incorporated additional experiments. By referring to the table below, it's evident that merely optimizing zero-points does not yield effective learning outcomes. Moreover, simultaneously optimizing both zero-points and quantization scales does not present any significant improvement in accuracy either.
## Appendix L Memory Peak on Training
The memory consumption is not solely dictated by the model size but is also influenced by various other factors6. Our approach with PEQA inherently offers memory advantages during fine-tuning by striving to minimize both the model size and the number of training parameters. To provide a clear understanding of these benefits, we conducted tests using a single NVIDIA A100-80GB GPU and the causal language modeling code from the HuggingFace repository7. Both LoRA and PEQA fine-tuned the LLaMA-7B on the Wikitext2 dataset with a batch size of 2 without gradient accumulation. Our findings indicated that while LoRA peaked at a memory usage of 59GB during optimization, PEQA used just 43GB. Remarkably, this disparity (16GB, 7B) escalates as the model size increases; for instance, a 65B full-precision model under LoRA occupies 130GB, whereas PEQA remarkably uses just 33GB. Additionally, LoRA encountered Out-Of-Memory (OOM) issues at a batch size of 4, whereas PEQA, due to its efficiency, continued training seamlessly.
Footnote 6: [https://huggingface.co/docs/transformers/perf_train_gpu_one](https://huggingface.co/docs/transformers/perf_train_gpu_one)
Footnote 7: [https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py)
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Zero-points only & Quantization scales only (PEQA) & Both zero-points and quantization scales \\ \hline LLaMA 7B & \(11.56\) & \(5.84\) & \(5.86\) \\ LLaMA 13B & \(9.83\) & \(5.30\) & \(5.34\) \\ \hline \hline \end{tabular}
\end{table}
Table 17: Perplexity (PPL) of PEQA on the Wikitext2 dataset for LLaMA 7B and LLaMA 13B with weights quantized into \(4\)-bit. | ## Review
### Summary
This paper presents a novel approach called Parameter-Efficient and Quantization-aware Adaptation (PEQA) for efficiently fine-tuning large language models (LLMs). PEQA combines quantization and parameter-efficient methods to reduce memory usage while maintaining performance. The technique quantizes fully-connected layers into low-bit integers, updating only the scalar scaling factors during fine-tuning, which significantly decreases memory costs. Extensive experiments demonstrate PEQA's effectiveness across various tasks, showcasing its advantages over existing methods such as Post Training Quantization (PTQ) and LoRA. However, concerns arise regarding the novelty of the approach and its performance consistency across different settings.
### Strengths
- The paper provides a clear explanation of existing research trends and maintains good readability.
- PEQA effectively reduces memory costs by quantizing pre-trained weights to sub-4 bits, making fine-tuning feasible for larger models up to 65B.
- The comprehensive evaluation across various benchmark tasks and model sizes highlights the method's performance.
- The paper is well-written and easy to follow, with a clean idea and method.
### Weaknesses
- The contribution of fine-tuning only the quantization scales appears similar to existing methods like AlphaTuning, and this similarity is not adequately discussed.
- Unusual baseline settings in the evaluation may mislead readers regarding PEQA's true performance.
- Inconsistencies in generalization capabilities of PEQA are noted, raising doubts about its effectiveness.
- The explanation of memory usage during fine-tuning and the advantages of only updating scaling factors is insufficient.
- The overall presentation could be improved for clarity, particularly around technical terms and methodology.
### Questions
- Why does PEQA outperform fine-tuning with LoRA as shown in Table 5?
- What could be the reason for PEQA's performance drop observed in the Five-Shot results?
- How are the gradient calculations for the scaling factors managed in the proposed model?
- What is the intuition behind the decomposition used in PEQA?
- How does PEQA compare to the combination of LoRA and quantization directly?
### Soundness
**Score:** 3
**Description:** 3 = good - The paper demonstrates a solid methodology, although questions about the novelty and thoroughness of the evaluations remain.
### Presentation
**Score:** 3
**Description:** 3 = good - The paper is generally well-structured and easy to follow, but certain sections could benefit from improved clarity and explanation.
### Contribution
**Score:** 2
**Description:** 2 = fair - While the proposed method shows promise, it largely combines existing techniques without offering a significantly novel contribution.
### Rating
**Score:** 5
**Description:** 5 = Borderline accept - The paper is technically solid and the reasons to accept outweigh the reasons to reject, though it requires further evaluation and clarity.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The decision to accept is based on the originality of the proposed method, its soundness in methodology, and its potential significance in the field of efficient fine-tuning for large language models. While there are some weaknesses regarding novelty and clarity, the overall contributions are deemed valuable.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Density of States Prediction of Crystalline Materials
via Prompt-guided Multi-Modal Transformer
Namkeyeong Lee\({}^{1\dagger}\), Heewoong Noh\({}^{1\dagger}\), Sungwon Kim\({}^{1}\),
**Dongmin Hyun\({}^{2}\)**, **Gyoung S. Na\({}^{3}\)**, **Chanyoung Park\({}^{1}\)**
\({}^{1}\) KAIST \({}^{2}\) Yahoo Research \({}^{3}\) KRICT
{namkyeong96,heewoongnoh,swkim,cy.park}@kaist.ac.kr
[email protected], [email protected]
Corresponding author. \({}^{\dagger}\) These authors contributed equally.
###### Abstract
The density of states (DOS) is a spectral property of crystalline materials, which provides fundamental insights into various characteristics of the materials. While previous works mainly focus on obtaining high-quality representations of crystalline materials for DOS prediction, we focus on predicting the DOS from the obtained representations by reflecting the nature of DOS: _DOS determines the general distribution of states as a function of energy_. That is, DOS is not solely determined by the crystalline material but also by the energy levels, which has been neglected in previous works. In this paper, we propose to integrate heterogeneous information obtained from the crystalline materials and the energies via a multi-modal transformer, thereby modeling the complex relationships between the atoms in the crystalline materials and various energy levels for DOS prediction. Moreover, we propose to utilize prompts to guide the model to learn the crystal structural system-specific interactions between crystalline materials and energies. Extensive experiments on two types of DOS, i.e., Phonon DOS and Electron DOS, with various real-world scenarios demonstrate the superiority of DOSTransformer. The source code for DOSTransformer is available at [https://github.com/HeewoongNoh/DOSTransformer](https://github.com/HeewoongNoh/DOSTransformer).
## 1 Introduction
Along with recent advances in machine learning (ML) for scientific discovery, ML models have rapidly been adopted in computational materials science thanks to the abundant experimental and computational data in the field. Most ML models developed in computational materials science to date have been focused on crystalline materials property consisting of single-valued quantities [29], e.g., band gap energy [31, 69], formation energy [3, 56], and Fermi energy [61].
On the other hand, spectral properties are ubiquitous in materials science, characterizing various properties of crystalline materials, e.g., X-ray absorption, dielectric function, and electronic density of states [29] (Figure 1).
The density of states (DOS), which is the main focus of this paper, is a spectral property that provides fundamental insights on various characteristics of crystalline materials, even enabling direct computation of single-valued properties [17]. For example, DOS is utilized as a feature of materials for analyzing the underlying reasons for changes
Figure 1: A crystal structure and various types of its properties.
in electrical conductivity [15]. Moreover, band gaps and edge positions, which can be directly derived from DOS, are utilized to discover new photoanodes for solar fuel generation [47, 64]. As a result, the exploration of ML capabilities for predicting DOS takes us one step further toward the core principles of materials science, thereby expediting the process of materials discovery. However, credible computation of DOS with traditional density functional theory (DFT) requires expensive time/financial costs of exhaustively conducting experiments with expertise knowledge [7, 14]. Therefore, alternative approaches for DOS calculation are necessary, whereas demonstrating ML capabilities for learning such spectral properties of the crystalline materials is relatively under-explored.
In this paper, we focus on predicting DOS by following the nature of DOS calculation: _DOS determines the general distribution of states as a function of energy_. That is, while single-valued properties can be described solely by the crystalline material, DOS is determined by not only the material itself but also the _energy levels_ at which DOS is calculated. Therefore, integrating the heterogeneous signals from both the crystalline material and the various energy levels is crucial for DOS prediction. However, previous works for DOS prediction focus on obtaining a qualified representation of material [7, 14, 17, 9], thereby overlooking the unique characteristics of DOS.
On the other hand, learning from multiple input types, i.e., crystalline material and various energy levels, is not trivial due to their heterogeneity. To this end, we formulate the DOS prediction problem into a multi-modal learning problem, which has recently gained significant attention from ML researchers in various domains thanks to its capability of extracting and relating information from heterogeneous data types [6, 38, 54, 5, 2]. For example, VisualBERT proposes to align elements of input modalities, i.e., images and text, thereby achieving SOTA performance in various vision-and-language tasks [36]. Moreover, Dall-E shows its power in generating images based on text data by encoding images and texts together [43]. Among various multi-modal learners, the multi-modal transformer demonstrates its intrinsic advantages and scalability in modeling multiple modalities [62], by associating heterogeneous modalities with the cross-attention mechanism [65, 50].
Inspired by the recent success of the multi-modal transformer, we propose a multi-modal transformer model for DOS prediction, named DOSTransformer, which incorporates crystalline material and energies as heterogeneous input modalities. Unlike previous works that overlook the energy levels for predicting DOS [7, 14, 17, 9], DOSTransformer learns embeddings of energy levels which are used for modeling complex relationships between the energy levels and crystalline material through a cross-attention mechanism. By doing so, DOSTransformer obtains multiple representations for a single crystalline material according to various energy levels, enabling the prediction of a single DOS value on each energy level. However, simply combining a crystalline material with energy levels may fail to consider the significant impact of the crystal's structure, which sometimes results in distinct material properties even when it is composed of identical elements (e.g., carbon). For example, although graphite and diamond are made entirely out of carbon, their arrangement (or structure) of carbon atoms differs significantly, resulting in distinct material properties. Therefore, we utilize learnable prompts to inform DOSTransformer of the input crystal system (among the seven crystal systems2), which navigates the model to capture structure-specific interactions between a crystalline material and energy levels. By doing so, DOSTransformer learns to extract not only the relational information that is shared with a crystal system, but also across the crystal systems. In this work, we make the following contributions:
Footnote 2: [https://en.wikipedia.org/wiki/Crystal_system](https://en.wikipedia.org/wiki/Crystal_system)
* Inspired by the fact that DOS is determined by not only the materials themselves but also the energy levels at which DOS is calculated, we propose a multi-modal transformer model for DOS prediction, called DOSTransformer, which incorporates crystalline materials and energy as heterogeneous input modalities.
* To capture structure-specific interactions between a crystalline material and energy levels, thereby being able to predict DOS that significantly differs according to the crystal's structure, we utilize learnable prompts to inform DOSTransformer of the input crystal system.
* Our extensive experiments on two types of DOS, i.e., Phonon DOS and Electron DOS, demonstrate that DOSTransformer outperforms a wide range of state-of-the-art methods in various real-world scenarios, i.e., in-distribution scenarios and out-of-distribution scenarios.
To the best of our knowledge, this is the first work that considers various energy levels during DOS prediction and introduces prompts for crystal structural systems.
Related Works
### Machine Learning for Materials and Density of States
Traditional materials science research heavily relies on theory, experimentation, and computer simulations, but these approaches are time-consuming, costly, and inefficient, hindering their ability to keep up with the field's development [57]. As an alternative research tool, ML, which requires neither expensive trial and error experiments nor domain expert knowledge, has been attracting a surge of interest from materials scientists [67]. Consequently, various ML methods have been established to learn the relationship between materials and their properties which are usually obtained from ab-initio calculations [11]. Inspired by the recent success of GNNs in biochemistry [19; 48; 25; 33; 34; 32], CGCNN proposes a message-passing framework for crystal structure providing high-accuracy prediction for 8 different materials properties [61]. However, most studies have focused on single-valued properties [29], whereas various spectral properties, such as DOS, are also crucial.
The DOS describes the number of different states at a particular energy level that particles are allowed to occupy, determining various properties of crystalline materials. For example, phonon DOS has a crucial influence in determining the specific heat and vibrational entropy of crystalline materials as well as the interfacial thermal resistance [60], while electron DOS is used to derive the electronic contribution to heat capacity in metals [21] and the effective mass of electrons in charge carriers [18].
Despite the importance of DOS, how to incorporate ML capabilities for _decoding_ such spectral properties is under-explored since the vast majority of methods concentrate only on _encoding_ crystalline materials. Specifically, Chandrasekaran et al. [7] proposes to predict electron DOS based on the hand-crafted fingerprint of each grid point in the crystalline materials, while Del Rio et al. [14] leverages that of each atom in the crystalline materials. Chen et al. [9] predicts phonon DOS by learning the representations of the crystalline materials that are equivariant to 3D rotations, translations, and inversion with Euclidean neural networks [49; 28; 58], achieving high-quality prediction with a small number of training materials with over 64 atom types. In this work, we concentrate on decoding spectral properties regarding the complex relationship between the various energy levels and crystalline materials.
### Multi-modal Transformer, Prompt Tuning, and Positional Encoding
**Multi-modal Transformer.** In recent years, much progress in multi-modal learning has grown rapidly by extracting and aligning the rich information from heterogeneous modalities [2; 6]. Among the various multi-modal learning methods, the multi-modal transformer demonstrates its superior capability in learning multiple modalities [62], by associating the heterogeneous modalities with a cross-attention mechanism. Specifically, MuIT [50] learns the representations directly from unaligned multi-modal streams with multi-modal transformer, while Yao & Wan [65] improves machine translation quality by learning the image-aware text representations. In this paper, we investigate how to decode DOS of a crystalline material by learning the energy-aware crystal representations via a multi-modal transformer.
**Prompt Tuning.** On the other hand, prompt designing has gained significant attention from NLP researchers as an efficient method for fine-tuning large language models (LLMs) for various tasks [39]. There are two main approaches to prompt designing: discrete prompt designing [44] and continuous prompt tuning [37]. While discrete prompt designing involves manually creating task description tokens for LLMs through trial and error, continuous prompt tuning trains learnable prompts in a continuous latent space without requiring expert knowledge. Inspired by the success of continuous prompt tuning in NLP domain, it has been widely adopted in various domains such as computer vision [24] and vision-language [68]. While previous works utilize prompts for efficient fine-tuning of the models, we employ prompts to capture the heterogeneity of various crystal structures.
**Positional Encoding.** In the original proposal of the Transformer architecture, a sinusoidal function was suggested as a method for positional encoding. However, the sinusoidal function may have limitations in terms of learnability and flexibility, which can affect its effectiveness [53]. To address this issue, most pre-trained language models [16; 40] employ learnable vector embeddings as positional representations. By conceptualizing each energy value as the position of a word in a sentence, various energy embeddings in Section 4.2 can be analogous to the positional encoding used in traditional transformers.
## 3 Preliminaries
**Notations.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{A})\) denote a crystalline material, where \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\) represents the set of atoms, and \(\mathcal{A}\subseteq\mathcal{V}\times\mathcal{V}\) represents the set of edges connecting the atoms in the crystalline material \(\mathcal{G}\). Moreover, \(\mathcal{G}\) is associated with a feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times F}\) and an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) where \(\mathbf{A}_{ij}=1\) if and only if \((v_{i},v_{j})\in\mathcal{A}\) and \(\mathbf{A}_{ij}=0\) otherwise.
**Task: Density of States Prediction.** Given a set of crystalline materials \(\mathcal{D}_{\mathcal{G}}=\{\mathcal{G}_{1},\mathcal{G}_{2},\ldots,\mathcal{G }_{N}\}\) and a set of energies \(\mathcal{D}_{\mathcal{E}}=\{\mathcal{E}_{1},\mathcal{E}_{2},\ldots,\mathcal{E }_{M}\}\), our goal is to train a model \(\mathcal{M}\) that predicts the DOS of a crystalline material given a set of energies, i.e., \(\mathbf{Y}^{i}=\mathcal{M}(\mathcal{G}_{i},\mathcal{D}_{\mathcal{E}})\), where \(\mathbf{Y}^{i}\in\mathbb{R}^{M}\) is an \(M\) dimensional vector containing the DOS values of a crystalline material \(\mathcal{G}_{i}\) at each energy \(\mathcal{E}_{1},\ldots,\mathcal{E}_{M}\), and \(\mathbf{Y}_{j}^{i}\in\mathbb{R}\) is the DOS value of \(\mathcal{G}_{i}\) at energy level \(\mathcal{E}_{j}\).
## 4 Methodology: DOSTransformer
In this section, we introduce our proposed method, named DOSTransformer, a novel DOS prediction framework that learns the complex relationship between the atoms in the crystalline material and various energy levels by utilizing a cross-attention mechanism of the multi-modal transformer.
### Crystalline Material Encoder
Before modeling the pairwise interaction between the crystalline material and the energies, we first encode the crystalline material with GNNs to learn the representation of each atom, which contains not only the feature information but also the structural information. Formally, given a crystalline material \(\mathcal{G}=(\mathbf{X},\mathbf{A})\), we generate an atom embedding matrix for the crystalline material as follows:
\[\mathbf{H}=\text{GNN}(\mathbf{X},\mathbf{A}), \tag{1}\]
where \(\mathbf{H}\in\mathbb{R}^{n\times d}\) is an atom embedding matrix for \(\mathcal{G}\), whose \(i\)-th row indicates the representation of atom \(v_{i}\), and we stack \(L^{\prime}\) layers of GNNs. Among various GNNs, we adopt graph networks [4] as our crystalline material encoder, which is a generalized and extended version of various GNNs. We provide further details on the GNNs in Appendix C.
### Prompt-based Multi-modal Transformer
After obtaining the atom embedding matrix \(\mathbf{H}\), we aim to capture the relationship between a crystalline material and various energy levels by utilizing the cross-attention layers of a multi-modal transformer [62] with self-attention layers [52]. In a nutshell, we expect the cross-attention layers to integrate the heterogeneous signals from a crystalline material and various energy levels, while self-attention layers aim to integrate the information of material-specific energy representations.
**Cross-Attention Layers.** Specifically, we expect the cross-attention layers to generate the material-specific representation of the energies by repeatedly reinforcing the energy representation with the crystalline materials. To do so, we first introduce a learnable energy embedding matrix \(\mathbf{E}^{0}\in\mathbb{R}^{M\times d}\), whose \(j\)-th row, i.e., \(\mathbf{E}^{0}_{j}\), indicates the embedding of energy \(\mathcal{E}_{j}\in\mathcal{D}_{\mathcal{E}}\). Then, we present cross-modal attention for fusing the information from the crystal structure into various energy levels as follows:
\[\mathbf{E}^{l} =\text{Cross-Attention}(\mathbf{Q}_{\mathbf{E}^{l-1}},\mathbf{K} _{\mathbf{H}},\mathbf{V}_{\mathbf{H}})\in\mathbb{R}^{M\times d} \tag{2}\] \[=\text{Softmax}(\frac{\mathbf{E}^{l-1}\mathbf{H}^{\top}}{\sqrt{d} })\mathbf{H},\]
where \(l=1,\ldots,L_{1}\) indicates the index of the cross-attention layers. In contrast to the conventional Transformer, which introduces learnable weight matrices for query \(\mathbf{Q}\), key \(\mathbf{K}\), and value \(\mathbf{V}\), we directly employ the previously obtained energy embedding matrix \(\mathbf{E}^{l-1}\) as the query matrix, and the atom embedding matrix \(\mathbf{H}\) as the key and value matrices. Based on the above cross-attention mechanism, we obtain the material-specific energy embedding \(\mathbf{E}^{l}\in\mathbb{R}^{M\times d}\) by aggregating the information regarding the atoms in the crystalline material that were important at each energy level. The final material-specific energy embedding matrix \(\mathbf{E}^{L_{1}}\in\mathbb{R}^{M\times d}\) generated by the cross-attention layer reflects the relationship between the atoms in the crystalline material and various energy levels.
**Global Self-Attention.** In addition to cross-attention layers, we propose to enhance the material-specific energy embedding matrix \(\mathbf{E}^{L_{1}}\) by aggregating the information from other energy levels. To do so, we employ variants of self-attention layers in conventional Transformer [52] as follows:
\[\begin{split}\tilde{\mathbf{E}}^{p}&=\text{Self- Attention}(\mathbf{Q}_{\tilde{\mathbf{E}}^{p-1}},\mathbf{K}_{\tilde{\mathbf{E}}^{p-1}}, \mathbf{V}_{\tilde{\mathbf{E}}^{p-1}})\in\mathbb{R}^{M\times d}\\ &=\text{Softmax}(\frac{\tilde{\mathbf{E}}^{p-1}\tilde{\mathbf{E}}^{p -1\top}}{\sqrt{d}})\tilde{\mathbf{E}}^{p-1},\end{split} \tag{3}\]
where \(p=1,\dots,L_{2}\) indicates the index of the self-attention layer. We use an enhanced material-specific energy embedding \(\tilde{\mathbf{E}}^{0}\) as an input of the self-attention, which is obtained by concatenating the material-specific energy embedding \(\mathbf{E}^{L_{1}}\) with the material representation \(\mathbf{g}_{i}\in\mathbb{R}^{d}\), i.e., \(\tilde{\mathbf{E}}^{0}_{j}=\phi_{1}(\mathbf{E}^{glob}_{j})\), where \(\mathbf{E}^{glob}_{j}=(\mathbf{E}^{L_{1}}_{j}||\mathbf{g}_{i})\), \(\phi_{1}:\mathbb{R}^{2d}\rightarrow\mathbb{R}^{d}\), and \(||\) indicates the concatenation operation. Note that \(\mathbf{g}_{i}\) is a sum pooled representation of the material \(\mathcal{G}_{i}\), and \(\mathbf{E}^{L_{1}}_{j}\) indicates the \(j\)-th row of the energy embedding matrix \(\mathbf{E}^{L_{1}}\) computed by Equation 2 where \(j=0,\dots,M\). Although the output of cross-attention layers, i.e., \(\mathbf{E}^{L_{1}}\in\mathbb{R}^{M\times d}\), captures the local atom-level information regarding the material, we also recognize the importance of incorporating the global material-level information \(\mathbf{g}_{i}\) that may not have been fully encoded during the cross-attention phase. Note that the importance of considering the global contextual information for enhancing the learning of dependencies among neural representations has been widely studied in various domains such as computer vision and natural language processing [51, 55, 22]. By additionally considering the global material-level information, we aim to enhance the material-specific representation of the energies more comprehensively.
**System Self-Attention with Crystal System Prompts.** However, solely aggregating the information of material-specific energy embeddings may neglect the crucial influence of the structural properties in crystalline materials, whereas the structural properties play a significant role in determining the unique characteristics and properties of these materials [45]. To illustrate this point, let's examine the case of graphite and diamond, both composed entirely of carbon atoms. In graphite, carbon atoms are arranged in stacked layers, where each carbon atom is bonded to three neighboring carbon atoms in a hexagonal lattice. In contrast, the carbon atom in diamond forms strong covalent bonds with four neighboring carbon atoms, resulting in a tetrahedral arrangement. Even though graphite and diamond are made entirely out of carbon, due to the structural difference, graphite exhibits properties such as electrical conductivity and lubricity due to its layered structure [59, 26], while diamond is renowned for its hardness and thermal conductivity owing to its tightly bonded tetrahedral network [8, 27].
Despite the importance of structural information, effectively incorporating it into the model is not trivial. Naively concatenating the structural information as an input feature of materials is straightforward but may lead to interference, hindering the model's ability to learn and generalize knowledge across different crystal structures. To this end, we propose to provide additional information to the self-attention layers with learnable prompts, which indicate the structural information of the crystalline materials. Specifically, we adopt learnable prompts \(\mathbf{P}\in\mathbb{R}^{7\times d_{p}}\), whose \(k\)-th row \(\mathbf{P}_{k}\) represents one of the seven widely known crystal systems: Cubic, Hexagonal, Tetragonal, Trigonal, Orthorhombic, Monoclinic, and Triclinic. Then, given a crystalline material \(\mathcal{G}_{i}\), whose crystal system is given as \(k\), we incorporate the learnable prompts into the material-specific energy embedding \(\tilde{\mathbf{E}}^{0}\) as an input
Figure 2: Overall model architecture and attention layers in prompt-based multi-modal Transformer.
of the self-attention, which is obtained as follows: \(\widetilde{\mathbf{E}}_{j}^{0}=\phi_{2}(\mathbf{E}_{j}^{sys})\), where \(\mathbf{E}_{j}^{sys}=(\mathbf{E}_{j}^{L_{1}}||\mathbf{g}_{i}||\mathbf{P}_{k})\) and \(\phi_{2}:\mathbb{R}^{2d+d_{p}}\rightarrow\mathbb{R}^{d}\). By doing so, we obtain high-level energy features that contain the crystal structural system information (i.e., \(\widetilde{\mathbf{E}}^{L_{2}}\)), while keeping the low-level features to share the knowledge across all the materials (i.e., \(\mathbf{E}^{L_{1}}\)). In Section 5.4, we demonstrate the effectiveness of our prompt-based approach compared to other variants.
Besides the self-attention layers, we use additional cross-attention layers on top of the self-attention layers, whose input energy embedding \(\mathbf{E}^{0}\) in Equation 2 is given as \(\widetilde{\mathbf{E}}^{L_{2}}\), and output the final material-specific energy embeddings \(\mathbf{E}^{L_{3}}\), where \(L_{3}\) is the number of additional cross-attention layers. By doing so, the model extracts the enhanced relationships between the crystalline materials and energy levels regarding the crystal structural system information.
### Energy Decoder
After obtaining the final material-specific energy embedding matrix \(\mathbf{E}^{L_{3},i}\) of a crystalline material \(\mathcal{G}_{i}\), the DOS value at each energy level \(\mathcal{E}_{j}\), i.e., \(\hat{\mathbf{Y}}_{j}^{i}\), is given as follows: \(\hat{\mathbf{Y}}_{j}^{i}=\phi_{pred}(\mathbf{E}_{j}^{L_{3},i})\), where \(\phi_{pred}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{1}\) is a parameterized MLP for predicting DOS. While previous approaches directly predict DOS from the material representation \(\mathbf{g}_{i}\) using a function \(\phi_{pred}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{M}\), \(\mathsf{DOSTransformer}\) takes a different approach by making pointwise predictions that align with the nature of DOS calculation, i.e., DOS determines the general distribution of states as a function of energy To the best of our knowledge, \(\mathsf{DOSTransformer}\) is the first work that predicts DOS values in a pointwise manner at each energy level, and we later show in Section 5 that such an approach further enhances the performance of not only \(\mathsf{DOSTransformer}\) but also existing models.
### Model Training and Inference
\(\mathsf{DOSTransformer}\) is trained to minimize the root mean squared error (RMSE) loss \(\mathcal{L}\) between the predicted DOS value \(\hat{\mathbf{Y}}_{j}^{i}\) and the ground truth DOS value \(\mathbf{Y}_{j}^{i}\), i.e., \(\mathcal{L}=\frac{1}{N\cdot M}\sum_{i=1}^{N}\sum_{j=1}^{M}\sqrt{(\hat{\mathbf{ Y}}_{j}^{i}-\mathbf{Y}_{j}^{i})^{2}}\). More specifically, \(\mathsf{DOSTransformer}\) is trained by combining two distinct RMSE losses with balancing term \(\beta\), i.e., \(\mathcal{L}^{total}=\mathcal{L}^{glob}+\beta\cdot\mathcal{L}^{sys}\), where \(\mathcal{L}^{glob}\) and \(\mathcal{L}^{sys}\) are obtained by utilizing \(\mathbf{E}^{L_{3},glob}\) and \(\mathbf{E}^{L_{3},sys}\), respectively. By doing so, \(\mathsf{DOSTransformer}\) extracts the relationship information that is shared among the distinct crystal systems and within a crystal system, respectively. For inference, we utilize the model prediction obtained based on \(\mathbf{E}^{L_{3},sys}\). The overall model architecture is depicted in Figure 2.
## 5 Experiments
### Experimental Setup
**Datasets.** We use two datasets to comprehensively evaluate the performance of \(\mathsf{DOSTransformer}\), i.e., Phonon DOS and Electron DOS. We use the **Phonon DOS** dataset following the instructions of the official Github repository 3 of a previous work [9]. For **Electron DOS** dataset, we collect crystalline materials and their electron DOS data from Materials Project (MP) website 4. We provide further detailed preprocessing procedures and statistics on each dataset in Appendix A.
Footnote 3: [https://github.com/zhantaochen/phonondos_e3nn](https://github.com/zhantaochen/phonondos_e3nn)
Footnote 4: [https://materialsproject.org/](https://materialsproject.org/)
**Methods Compared.** We mainly compare \(\mathsf{DOSTransformer}\) to recently proposed state-of-the-art method, i.e., E3NN [9], which utilizes the Euclidean network for encoding material representation. We also compare \(\mathsf{DOSTransformer}\) to simple baseline methods, i.e., MLP and Graph Network [4], which predicts the entire DOS sequence directly from the learned representation of the materials without regarding the energies during training. Moreover, to evaluate the effectiveness of the transformer layer that considers the relationship between the atoms and various energy levels, we integrate energy embeddings into baseline methods for DOS prediction by concatenating the energy embeddings to the material representation. We provide more details on the implementation of \(\mathsf{DOSTransformer}\) and compared methods in Appendix C and D, respectively.
### In-Distribution Evaluation
**Evaluation Protocol.** In this section, we evaluate the model performances under in-distribution scenarios. While we evaluate \(\mathsf{DOSTransformer}\) with given data splits in a previous work [9] for **Phonon DOS** dataset, we randomly split the **Electron DOS** dataset into train/valid/test of 80/10/10%. Note that while all measures are reported in the original scale, we report MSE values for DOS prediction multiplied by a factor of 10 for clear interpretation during all experiments.
**Experimental Results.** In Table 1, we have the following observations: **1)** Comparing the baseline methods that overlook the energy levels for DOS prediction (i.e., Energy \(\mathbf{\mathcal{X}}\)) with their counterparts that incorporate both the energy levels and the crystal structure as heterogeneous input modalities through the energy embeddings (i.e., Energy \(\mathbf{\mathcal{missing}}\)), we find out that using the energy embeddings consistently enhances the model performance. This indicates that making pointwise predictions on each energy level is crucial for DOS prediction as discussed in Section 4.3, which also aligns with the domain knowledge of materials science, i.e., DOS determines the general distribution of states as a function of energy. **2)** However, we observe that E3NN shows a relatively small performance gain compared with other methods after incorporating the energy information. This is because the integration of the energy embedding may confound the model to learn the proper equivariance of the materials. **3)** On the other hand, \(\mathsf{DOSTransformer}\) outperforms previous methods that do not consider the complex relationships between the atoms in materials and various energy levels via the various attention mechanisms. This again implies that a naive integration of the energy information into previous models cannot fully benefit from the energy information. We also provide qualitative analysis on the obtained DOS in Section 5.5.
**Physical Validity of DOS.** In addition to the accuracy of DOS prediction, we assess the physical validity of the model-predicted DOS by deriving various physical properties, i.e., bulk modulus, band gap, and Fermi energy, of materials, based on the model-predicted DOS. To do so, given the DFT-calculated DOS, which is considered the ground-truth DOS based on which accurate derivation of various physical properties is possible, we train an MLP with a non-linearity in each layer to predict the properties of a crystal structure. Then, based on the obtained MLP weights, we predict the material properties given the model-predicted DOS as the input and calculate the MSE for each material property. We provide further details on the experimental setting in Appendix B. In Table 1, we observe that DOS predicted by \(\mathsf{DOSTransformer}\) shows the superiority of predicting physical properties of materials, indicating that \(\mathsf{DOSTransformer}\) not only achieves high accuracy but also produces physically valid predictions.
### Out-of-Distribution Evaluation
In this section, we evaluate the model performances under out-of-distribution (OOD) scenarios. It is well recognized that existing DFT calculation-based databases have limitations in terms of
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Phonon DOS**} & \multicolumn{3}{c}{**Electron DOS**} & \multicolumn{3}{c}{**Physical Properties (MSE)**} \\ \cline{2-11} & MSE & MAE & \(R^{2}\) & MSE & MAE & \(R^{2}\) & Bulk M. & Band G. & Ferm. E. \\ \hline \multicolumn{11}{l}{**Energy \(\mathbf{\mathcal{missing}}\)**} \\ \hline MLP & 0.309 & 0.106 & 0.576 & 0.347 & 0.130 & 0.487 & 0.695 & 1.597 & 4.650 \\ & (0.016) & (0.003) & (0.016) & (0.015) & (0.003) & (0.029) & (0.031) & (0.171) & (0.202) \\ Graph Network & 0.259 & 0.099 & 0.638 & 0.264 & 0.103 & 0.613 & 0.688 & 1.457 & 3.362 \\ & (0.009) & (0.001) & (0.013) & (0.005) & (0.000) & (0.008) & (0.060) & (0.051) & (0.189) \\ E3NN & 0.210 & 0.077 & 0.705 & 0.296 & 0.109 & 0.552 & 0.504 & 0.896 & 2.925 \\ & (0.004) & (0.001) & (0.007) & (0.005) & (0.001) & (0.013) & (0.033) & (0.093) & (0.111) \\ \hline \multicolumn{11}{l}{**Energy \(\mathbf{\mathcal{missing}}\)**} \\ \hline MLP & 0.251 & 0.099 & 0.652 & 0.341 & 0.128 & 0.499 & 0.521 & 1.409 & 4.372 \\ & (0.004) & (0.001) & (0.006) & (0.011) & (0.002) & (0.012) & (0.011) & (0.182) & (0.067) \\ Graph Network & 0.226 & 0.092 & 0.685 & 0.240 & 0.099 & 0.650 & 0.543 & 1.263 & 3.080 \\ & (0.007) & (0.002) & (0.011) & (0.005) & (0.001) & (0.005) & (0.085) & (0.107) & (0.100) \\ E3NN & 0.200 & 0.074 & 0.724 & 0.291 & 0.114 & 0.564 & 0.451 & 1.605 & 3.777 \\ & (0.001) & (0.001) & (0.002) & (0.000) & (0.000) & (0.008) & (0.023) & (0.231) & (0.175) \\ \hline \multirow{2}{*}{\(\mathsf{DOSTransformer}\)} & **0.191** & **0.071** & **0.733** & **0.221** & **0.089** & **0.679** & **0.427** & **0.461** & **2.337** \\ & (0.003) & (0.002) & (0.004) & (0.006) & (0.001) & (0.006) & (0.024) & (0.019) & (0.094) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overall model performance under in-distribution scenarios (Bulk M. : Bulk Modulus / Band G. : Band Gap / Ferm. E. : Fermi Energy).
coverage, as they often focus on specific types of materials or structural archetypes, resulting in biased distributions [13, 30, 20]. Therefore, it is essential to evaluate the model's performance in OOD scenarios to assess its real-world applicability and generalizability beyond the limitations of the available databases [35].
**Evaluation Protocol.** To do so, we evaluate the model performance on the crystal structures that 1) contain a different number of atom species with the training set (i.e., # Atom species), and 2) belong to different crystal systems that were not included in the training set (i.e., Crystal System). In both scenarios, training data primarily consist of relatively simple structures compared to those in the test data. We provide further details on data split and evaluation in Appendix B.
**Experimental Results.** In Table 2, we again observe that utilizing energy embeddings consistently brings performance gain to all baseline models. Especially, Graph Network significantly benefits from energy embeddings compared to in-distribution scenarios (See Table 1). We attribute this to the inherent restrictive inductive biases in Graph Network [4], i.e., Graph Network heavily relies on the structural information of materials. Specifically, as the structure of the materials totally varies in the OOD scenarios, which makes the prediction of the whole DOS sequence challenging, incorporating the energy embeddings becomes especially helpful in the OOD scenarios for Graph Network. Moreover, DOSTransformer also alleviates the restrictive inductive bias of Graph Networks by elaborately modeling the complex relationships between energies and materials. It is worth noting that similar observations have been made by comparing Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) in the computer vision domain [1, 66]. In conclusion, we argue that incorporating energy information during model training is crucial not only for in-distribution scenarios but also for OOD scenarios, demonstrating the applicability of DOSTransformer in real-world applications. For a more detailed analysis, please refer to Appendix E.1.
**Fine-tuning on Few Materials from Complex Crystal Systems.** In addition to evaluating the performance under the OOD scenarios, we verify how well the models can make predictions for materials from complex crystal systems with only few training samples, which is a practical situation in reality. For this, we sampled a small subset of of materials (i.e., 10%) from the test set used in OOD scenarios of crystal systems, and fine-tune the models that are already trained on the training set. As for DOSTransformer, we fine-tune the model using two different approaches: i) fine-tuning all model parameters (referred to as "All"), and ii) tuning only the parameters of the prompts and the energy decoder while keeping all other model parameters frozen (referred to as "Only Prompt"). In Table 3, we have the following observations: **1)** As we expected, the additional fine-tuning step achieves performance gain for all models, while it was marginal due to a limited number of materials used for fine-tuning. **2)** Only fine-tuning the prompts of DOSTransformer achieved more performance gain compared to fine-tuning the whole model. This is because while fine-tuning the whole model on a small subset of materials may easily incur overfitting, fine-tuning only prompts enables the model to additionally learn from few new samples while maintaining the knowledge obtained previously. In conclusion, we believe that the proposed prompt tuning approach can have a significant impact in the
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**Out-of-Distribution**} \\ \cline{2-6} & \multicolumn{2}{c}{**\# Atom Species**} & \multicolumn{2}{c}{**Crystal System**} \\ \cline{2-6} & MSE & MAE & \(R^{2}\) & MSE & MAE & \(R^{2}\) \\ \hline \hline
**Energy ✗** & & & & & \\ \hline MLP & 0.545 & 0.161 & 0.250 & 0.455 & 0.149 & 0.437 \\ & (0.005) & (0.001) & (0.003) & (0.006) & (0.002) & (0.007) \\ Graph Network & 0.460 & 0.144 & 0.381 & 0.407 & 0.132 & 0.505 \\ & (0.015) & (0.003) & (0.002) & (0.004) & (0.001) & (0.006) \\ E3NN & 0.541 & 0.154 & 0.231 & 0.421 & 0.134 & 0.483 \\ & (0.007) & (0.001) & (0.005) & (0.004) & (0.001) & (0.007) \\ \hline
**Energy ✓** & & & & & \\ \hline MLP & 0.545 & 0.159 & 0.260 & 0.455 & 0.149 & 0.445 \\ & (0.005) & (0.001) & (0.016) & (0.004) & (0.001) & (0.007) \\ Graph Network & 0.455 & 0.144 & 0.388 & 0.384 & 0.129 & 0.534 \\ & (0.002) & (0.002) & (0.004) & (0.012) & (0.004) & (0.014) \\ & 0.534 & 0.152 & 0.237 & 0.420 & 0.133 & 0.483 \\ & (0.013) & (0.001) & (0.017) & (0.007) & (0.001) & (0.008) \\ \hline \multirow{2}{*}{DOSTransformer} & **0.454** & **0.136** & **0.399** & **0.373** & **0.122** & **0.552** \\ & (0.008) & (0.001) & (0.017) & (0.006) & (0.001) & (0.006) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overall model performance in OOD scenarios.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & MSE & MAE & \(R^{2}\) \\ \hline \hline
**Energy ✗** & & & \\ \hline MLP & 0.401 & 0.137 & 0.510 \\ & (0.017) & (0.003) & (0.021) \\ Graph Network & 0.394 & 0.131 & 0.519 \\ & (0.007) & (0.002) & (0.010) \\ E3NN & 0.414 & 0.133 & 0.490 \\ & (0.006) & (0.001) & (0.007) \\ \hline
**Energy ✓** & & & \\ \hline MLP & 0.394 & 0.136 & 0.519 \\ & (0.008) & (0.002) & (0.009) \\ Graph Network & 0.382 & 0.130 & 0.533 \\ & (0.003) & (0.000) & (0.002) \\ E3NN & 0.417 & 0.133 & 0.487 \\ & (0.004) & (0.001) & (0.007) \\ \hline \hline \multicolumn{3}{l}{**DOSTransformer**} \\ \hline All & 0.365 & **0.122** & 0.558 \\ & (0.005) & (0.001) & (0.008) \\ Only Prompt & **0.355** & **0.122** & **0.570** \\ & (0.008) & (0.001) & (0.010) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Fine-tuning on OOD systems.
field of materials science beyond simple DOS prediction, where the existing databases exhibit a bias towards dominant types of materials [13; 30; 20]. We further explore different proportions beyond the 10% of the materials used for fine-tuning, as detailed in the Appendix E.3.
### Model Analysis
**Ablation Studies.** To verify the benefit of each component of DOS-Transformer, we conduct ablation studies under in-distribution scenarios in Figure 3. We observe that utilizing only either one of the global and system losses deteriorates the model performance. This is because jointly optimizing the losses incentivizes the model to extract the relational information that is shared across the crystal systems and within a crystal system. On the other hand, it is worth noting that DOSTransformer with only one of the losses still outperforms baseline models shown in Table 1, indicating that modeling complex relationships between materials and various energies is crucial in DOS prediction.
**Sensitivity Analysis.** To verify the robustness of DOSTransformer, we measure the model performance by varying \(\beta\), which is introduced to control the effect of the crystal system loss, i.e. \(\mathcal{L}^{sys}\), in Section 4.4. In Figure 4, DOSTransformer shows robustness over various levels of \(\beta\), consistently outperforming baseline models in Table 1. This verifies that DOSTransformer can be easily trained without an expensive hyperparameter tuning, further demonstrating the practicality of DOSTransformer.
**Crystal System Prompts.** As discussed in Section 4.2, it is not trivial to effectively incorporate the crystal system information into the model. To evaluate the effectiveness of our proposed prompt-based approach, we explore different approaches for incorporating the crystal system information into the model. We make modifications to determine where and how to inject this information. In terms of the location, we consider two options: injecting the information into the input atoms (i.e., Atom Feat.) and injecting it before the self-attention layers (i.e., Before SA). Regarding the method of injection, we explore two strategies: one-hot encoding of crystal systems and the use of learnable crystal system prompts. In Table 4, we have following observations: **1)** Naively incorporating prompts into atom feature even perform worse than the model without using crystal system (see _"w/o \(\mathcal{L}^{sys}\)"_ in Figure 3), demonstrating the importance of an elaborate design choice for injecting crystal system information. **2)** We observe that our approach, i.e. Before SA and prompt, outperforms all other possible choices, indicating that DOSTransformer successfully incorporates crystal system information during training. We also examine the benefits of injecting crystal system information into baseline methods in Appendix E.2.
### Qualitative Analysis
In this section, we provide a qualitative analysis of the predicted DOS by mainly comparing it to the DFT-calculated (i.e., Ground Truth) DOS and our main baseline (i.e., E3NN). In Figure 5 (a), which represents the predicted DOS of materials not containing transitional metals, both E3NN and DOSTransformer successfully capture the overall trend of the DOS for several materials (e.g., mp-982366, mp-1009129, and mp-16378). However, DOSTransformer shows a much more precise prediction that closely aligns with the ground truth DOS, providing even more useful information beyond the shape of DOS. For example, peak points represent regions of high density and are likely to be strongly influenced when materials undergo changes in property, and thus represents the probabilistically important energy regions of the materials in the process of material discovery. Notably, our model better captures the peak points in the ground truth DOS compared to E3NN, demonstrating the applicability of DOSTransformer-predicted DOS for material discovery process.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline & \multicolumn{2}{c}{**Method**} & \multicolumn{2}{c}{**Electron DOS**} \\ \cline{3-5} & \multicolumn{2}{c}{One-hot} & Prompt & MSE & MAE \\ \hline \multirow{4}{*}{**Dataset**} & \multirow{2}{*}{Atom Feat.} & \multirow{2}{*}{✓} & \multirow{2}{*}{✓} & 0.222 & 0.089 \\ & & & & (0.004) & 0.001 \\ & & & & 0.227 & 0.090 \\ & & & & (0.005) & 0.001 \\ \cline{2-5} & \multirow{2}{*}{Before SA} & \multirow{2}{*}{✓} & \multirow{2}{*}{✓} & 0.226 & 0.090 \\ & & & & (0.005) & 0.001 \\ & & & & **0.221** & **0.089** \\ & & & & (0.006) & 0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparing various crystal system information injection approaches.
Figure 4: Sensitivity analysis.
Figure 3: Ablation studies.
On the other hand, Figure 5 (b) shows the DOS prediction for materials containing transition materials. Although \(\mathsf{DOSTransformer}\) provides more reliable prediction, we observe that the prediction errors of both models get larger compared to the materials that do not contain transition metals shown in Figure 5 (a). This can be attributed to the inherent complexity of physical properties in materials containing transition metals, as discussed in Section 6. Therefore, for our future work, we plan to design expert models in which each expert is responsible for materials with and without transition metals, to achieve more refined and accurate predictions of the DOS. This approach would enable a more comprehensive and elaborate analysis of the DOS in different material compositions.
## 6 Limitations & Future Work
It is widely known that materials containing transitional metals inherently have complex physical properties compared to other materials. As a result, the density functionals that yield optimal performance in DFT calculations differ between materials with and without transitional metals [12, 46]. However, despite the distinction, most ML-based DOS prediction research, including \(\mathsf{DOSTransformer}\), have attempted to encode the properties of both material types into a single model [9, 17], which may provide conflicting signals to the model. Therefore, as a future work, we plan to address this issue by developing specialized approaches that work well on both material types, e.g., utilizing expert models [41] in which an expert is assigned to each material type and later combined to perform well on both material types. We expect this approach to result in the robust and reliable prediction of DOS for both types of materials, enhancing the potential impact in the materials science field and broadening our understanding of material properties.
## 7 Conclusion
In this paper, we propose \(\mathsf{DOSTransformer}\), which predicts DOS of crystalline materials with various energy levels by following the nature of DOS calculation. Specifically, \(\mathsf{DOSTransformer}\) associates the heterogeneous information from materials and energy levels with cross-attention and self-attention layers. Moreover, the crystal system prompts are introduced to effectively train the model to learn the relational information that is shared across all the crystal systems and within a crystal system. By doing so, \(\mathsf{DOSTransformer}\) outperforms previous works in predicting two types of DOS, i.e., phonon DOS and electron DOS, in various scenarios, i.e., in-distribution scenarios and out-of-distribution scenarios. Extensive experiments verify that incorporating energy information is crucial in predicting the DOS of a crystal structure for real-world application and \(\mathsf{DOSTransformer}\) effectively utilizes the crystal structural information with crystal system prompts.
Figure 5: Qualitative Analysis.
## Acknowledgements
This work was supported by the core KRICT project from the Korea Research Institute of Chemical Technology (SI2051-10).
## References
* [1]Bai, Y., Mei, J., Yuille, A. L., and Xie, C. Are transformers more robust than cnns? _Advances in Neural Information Processing Systems_, 34:26831-26843, 2021.
* [2] Baltrusaitis, T., Ahuja, C., and Morency, L.-P. Multimodal machine learning: A survey and taxonomy. _IEEE transactions on pattern analysis and machine intelligence_, 41(2):423-443, 2018.
* [3] Banjade, H. R., Hauri, S., Zhang, S., Ricci, F., Gong, W., Hautier, G., Vucetic, S., and Yan, Q. Structure motif-centric learning framework for inorganic crystalline systems. _Science advances_, 7(17):eabf1754, 2021.
* [4] Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., Faulkner, R., et al. Relational inductive biases, deep learning, and graph networks. _arXiv preprint arXiv:1806.01261_, 2018.
* [5] Bayoudh, K., Hamdaoui, F., and Mtibaa, A. Hybrid-covid: a novel hybrid 2d/3d cnn based on cross-domain adaptation approach for covid-19 screening from chest x-ray images. _Physical and engineering sciences in medicine_, 43(4):1415-1431, 2020.
* [6] Bayoudh, K., Knani, R., Hamdaoui, F., and Mtibaa, A. A survey on deep multimodal learning for computer vision: advances, trends, applications, and datasets. _The Visual Computer_, 38(8):2939-2970, 2022.
* [7] Chandrasekaran, A., Kamal, D., Batra, R., Kim, C., Chen, L., and Ramprasad, R. Solving the electronic structure problem with machine learning. _npj Computational Materials_, 5(1):1-7, 2019.
* [8] Che, J., Cagin, T., Deng, W., and Goddard III, W. A. Thermal conductivity of diamond and related materials from molecular dynamics simulations. _The Journal of Chemical Physics_, 113(16):6888-6900, 2000.
* [9] Chen, Z., Andrejevic, N., Smidt, T., Ding, Z., Xu, Q., Chi, Y.-T., Nguyen, Q. T., Alatas, A., Kong, J., and Li, M. Direct prediction of phonon density of states with euclidean neural networks. _Advanced Science_, 8(12):2004214, 2021.
* [10] Choudhary, K. and DeCost, B. Atomistic line graph neural network for improved materials property predictions. _npj Computational Materials_, 7(1):185, 2021.
* [11] Choudhary, K., DeCost, B., Chen, C., Jain, A., Tavazza, F., Cohn, R., Park, C. W., Choudhary, A., Agrawal, A., Billinge, S. J., et al. Recent advances and applications of deep learning methods in materials science. _npj Computational Materials_, 8(1):1-26, 2022.
* [12] Cramer, C. J. and Truhlar, D. G. Density functional theory for transition metals and transition metal chemistry. _Physical Chemistry Chemical Physics_, 11(46):10757-10816, 2009.
* [13] De Breuck, P.-P., Evans, M. L., and Rignanese, G.-M. Robust model benchmarking and bias-imbalance in data-driven materials science: a case study on modnet. _Journal of Physics: Condensed Matter_, 33(40):404002, 2021.
* [14] Del Rio, B. G., Kuenneth, C., Tran, H. D., and Ramprasad, R. An efficient deep learning scheme to predict the electronic structure of materials and molecules: The example of graphene-derived allotropes. _The Journal of Physical Chemistry A_, 124(45):9496-9502, 2020.
* [15] Deringer, V. L., Bernstein, N., Csanyi, G., Ben Mahmoud, C., Ceriotti, M., Wilson, M., Drabold, D. A., and Elliott, S. R. Origins of structural and electronic transitions in disordered silicon. _Nature_, 589(7840):59-64, 2021.
* [16] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [17] Fung, V., Ganesh, P., and Sumpter, B. G. Physically informed machine learning prediction of electronic density of states. _Chemistry of Materials_, 2022.
* [18] Garrity, K. F. First-principles search for n-type oxide, nitride, and sulfide thermoelectrics. _Physical Review B_, 94(4):045122, 2016.
* [19] Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In _International conference on machine learning_, pp. 1263-1272. PMLR, 2017.
* [20] Griffiths, R.-R., Schwaller, P., and Lee, A. A. Dataset bias in the natural sciences: a case study in chemical reaction prediction and synthesis design. _arXiv preprint arXiv:2105.02637_, 2021.
* [21] Hamaoui, G., Horny, N., Hua, Z., Zhu, T., Robillard, J.-F., Fleming, A., Ban, H., and Chirtoc, M. Electronic contribution in heat transfer at metal-semiconductor and metal silicide-semiconductor interfaces. _Scientific reports_, 8(1):1-9, 2018.
* [22] Hatamizadeh, A., Yin, H., Kautz, J., and Molchanov, P. Global context vision transformers. _arXiv preprint arXiv:2206.09959_, 2022.
* [23] Illas, F., de PR Moreira, I., Bofill, J., and Filatov, M. Extent and limitations of density-functional theory in describing magnetic systems. _Physical Review B_, 70(13):132414, 2004.
* [24] Jia, M., Tang, L., Chen, B.-C., Cardie, C., Belongie, S., Hariharan, B., and Lim, S.-N. Visual prompt tuning. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIII_, pp. 709-727. Springer, 2022.
* [25] Jiang, D., Wu, Z., Hsieh, C.-Y., Chen, G., Liao, B., Wang, Z., Shen, C., Cao, D., Wu, J., and Hou, T. Could graph neural networks learn better molecular representation for drug discovery? a comparison study of descriptor-based and graph-based models. _Journal of cheminformatics_, 13(1):1-23, 2021.
* [26] Jorio, A., Dresselhaus, G., and Dresselhaus, M. S. _Carbon nanotubes: advanced topics in the synthesis, structure, properties and applications_, volume 111. Springer, 2008.
* [27] Kidalov, S. V. and Shakhov, F. M. Thermal conductivity of diamond composites. _Materials_, 2 (4):2467-2495, 2009.
* [28] Kondor, R., Lin, Z., and Trivedi, S. Clebsch-gordan nets: a fully fourier space spherical convolutional neural network. _Advances in Neural Information Processing Systems_, 31, 2018.
* [29] Kong, S., Ricci, F., Guevarra, D., Neaton, J. B., Gomes, C. P., and Gregoire, J. M. Density of states prediction for materials discovery via contrastive learning from probabilistic embeddings. _Nature communications_, 13(1):1-12, 2022.
* [30] Kumagai, M., Ando, Y., Tanaka, A., Tsuda, K., Katsura, Y., and Kurosaki, K. Effects of data bias on machine-learning-based material discovery using experimental property data. _Science and Technology of Advanced Materials: Methods_, 2(1):302-309, 2022.
* [31] Lee, J., Seko, A., Shitara, K., Nakayama, K., and Tanaka, I. Prediction model of band gap for inorganic compounds by combination of density functional theory calculations and machine learning techniques. _Physical Review B_, 93(11):115104, 2016.
* [32] Lee, J., Kim, S., Hyun, D., Lee, N., Kim, Y., and Park, C. Deep single-cell rna-seq data clustering with graph prototypical contrastive learning. _Bioinformatics_, 39(6):btd342, 2023.
* [33] Lee, N., Hyun, D., Na, G. S., Kim, S., Lee, J., and Park, C. Conditional graph information bottleneck for molecular relational learning. _arXiv preprint arXiv:2305.01520_, 2023.
* [34] Lee, N., Yoon, K., Na, G. S., Kim, S., and Park, C. Shift-robust molecular relational learning with causal substructure. _arXiv preprint arXiv:2305.18451_, 2023.
* [35] Li, K., DeCost, B., Choudhary, K., Greenwood, M., and Hattrick-Simpers, J. A critical examination of robustness and generalizability of machine learning prediction of materials properties. _npj Computational Materials_, 9(1):55, 2023.
* [36] Li, L. H., Yatskar, M., Yin, D., Hsieh, C.-J., and Chang, K.-W. Visualbert: A simple and performant baseline for vision and language. _arXiv preprint arXiv:1908.03557_, 2019.
* [37] Li, X. L. and Liang, P. Prefix-tuning: Optimizing continuous prompts for generation. _arXiv preprint arXiv:2101.00190_, 2021.
* [38] Lin, T.-Y., Cui, Y., Belongie, S., and Hays, J. Learning deep representations for ground-to-aerial geolocalization. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pp. 5007-5015, 2015.
* [39] Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. _ACM Computing Surveys_, 55(9):1-35, 2023.
* [40] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_, 2019.
* [41] Masoudnia, S. and Ebrahimpour, R. Mixture of experts: a literature survey. _The Artificial Intelligence Review_, 42(2):275, 2014.
* [42] Petretto, G., Dwaraknath, S., PC Miranda, H., Winston, D., Giantomassi, M., Van Setten, M. J., Gonze, X., Persson, K. A., Hautier, G., and Rignanese, G.-M. High-throughput density-functional perturbation theory phonons for inorganic materials. _Scientific data_, 5(1):1-12, 2018.
* [43] Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. Zero-shot text-to-image generation. In _International Conference on Machine Learning_, pp. 8821-8831. PMLR, 2021.
* [44] Schick, T. and Schutze, H. Exploiting cloze questions for few shot text classification and natural language inference. _arXiv preprint arXiv:2001.07676_, 2020.
* [45] Shandiz, M. A. and Gauvin, R. Application of machine learning methods for the prediction of crystal system of cathode materials in lithium-ion batteries. _Computational Materials Science_, 117:270-278, 2016.
* [46] Sim, E., Song, S., and Burke, K. Quantifying density errors in dft. _The journal of physical chemistry letters_, 9(22):6385-6392, 2018.
* [47] Singh, A. K., Montoya, J. H., Gregoire, J. M., and Persson, K. A. Robust and synthesizable photocatalysts for co2 reduction: a data-driven materials discovery. _Nature communications_, 10 (1):1-9, 2019.
* [48] Stokes, J. M., Yang, K., Swanson, K., Jin, W., Cubillos-Ruiz, A., Donghia, N. M., MacNair, C. R., French, S., Carfrae, L. A., Bloom-Ackermann, Z., et al. A deep learning approach to antibiotic discovery. _Cell_, 180(4):688-702, 2020.
* [49] Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K., and Riley, P. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. _arXiv preprint arXiv:1802.08219_, 2018.
* [50] Tsai, Y.-H. H., Bai, S., Liang, P. P., Kolter, J. Z., Morency, L.-P., and Salakhutdinov, R. Multimodal transformer for unaligned multimodal language sequences. In _Proceedings of the conference. Association for Computational Linguistics. Meeting_, volume 2019, pp. 6558. NIH Public Access, 2019.
* [51] Tu, Z., Liu, Y., Lu, Z., Liu, X., and Li, H. Context gates for neural machine translation. _Transactions of the Association for Computational Linguistics_, 5:87-99, 2017.
* [52] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [53] Wang, G., Lu, Y., Cui, L., Lv, T., Florencio, D., and Zhang, C. A simple yet effective learnable positional encoding method for improving document transformer model. In _Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022_, pp. 453-463, 2022.
* [54] Wang, L., Li, Y., and Lazebnik, S. Learning deep structure-preserving image-text embeddings. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pp. 5005-5013, 2016.
* [55] Wang, L., Tu, Z., Way, A., and Liu, Q. Exploiting cross-sentence context for neural machine translation. _arXiv preprint arXiv:1704.04347_, 2017.
* [56] Ward, L., Agrawal, A., Choudhary, A., and Wolverton, C. A general-purpose machine learning framework for predicting properties of inorganic materials. _npj Computational Materials_, 2(1):1-7, 2016.
* [57] Wei, J., Chu, X., Sun, X.-Y., Xu, K., Deng, H.-X., Chen, J., Wei, Z., and Lei, M. Machine learning in materials science. _InfoMat_, 1(3):338-358, 2019.
* [58] Weiler, M., Geiger, M., Welling, M., Boomsma, W., and Cohen, T. S. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. _Advances in Neural Information Processing Systems_, 31, 2018.
* [59] Wissler, M. Graphite and carbon powders for electrochemical applications. _Journal of power sources_, 156(2):142-150, 2006.
* [60] Wu, Y.-J., Fang, L., and Xu, Y. Predicting interfacial thermal resistance by machine learning. _npj Computational Materials_, 5(1):1-8, 2019.
* [61] Xie, T. and Grossman, J. C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. _Physical review letters_, 120(14):145301, 2018.
* [62] Xu, P., Zhu, X., and Clifton, D. A. Multimodal learning with transformers: a survey. _arXiv preprint arXiv:2206.06488_, 2022.
* [63] Yan, K., Liu, Y., Lin, Y., and Ji, S. Periodic graph transformers for crystal material property prediction. _Advances in Neural Information Processing Systems_, 35:15066-15080, 2022.
* [64] Yan, Q., Yu, J., Suram, S. K., Zhou, L., Shinde, A., Newhouse, P. F., Chen, W., Li, G., Persson, K. A., Gregoire, J. M., et al. Solar fuels photoanode materials discovery by integrating high-throughput theory and experiment. _Proceedings of the National Academy of Sciences_, 114(12):3040-3043, 2017.
* [65] Yao, S. and Wan, X. Multimodal transformer for multimodal machine translation. In _Proceedings of the 58th annual meeting of the association for computational linguistics_, pp. 4346-4350, 2020.
* [66] Zhang, C., Zhang, M., Zhang, S., Jin, D., Zhou, Q., Cai, Z., Zhao, H., Liu, X., and Liu, Z. Delving deep into the generalization of vision transformers under distribution shifts. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 7277-7286, 2022.
* [67] Zhong, X., Gallagher, B., Liu, S., Kailkhura, B., Hiszpanski, A., and Han, T. Explainable machine learning in materials science. _npj Computational Materials_, 8(1):1-19, 2022.
* [68] Zhou, K., Yang, J., Loy, C. C., and Liu, Z. Learning to prompt for vision-language models. _International Journal of Computer Vision_, 130(9):2337-2348, 2022.
* [69] Zhuo, Y., Mansouri Tehrani, A., and Brgoch, J. Predicting the band gaps of inorganic solids by machine learning. _The journal of physical chemistry letters_, 9(7):1668-1673, 2018.
**Supplementary Material for**
**Density of States Prediction of Crystalline Materials**
**via Prompt-guided Multi-Modal Transformer**
## Appendix A Datasets
In this section, we provide further details on the dataset used for experiments.
### Phonon DOS
We use the **Phonon DOS** dataset following the instructions of the official Github repository5 of a previous work [9]. This dataset contains 1,522 crystalline materials whose phonon DOS is calculated from density functional perturbation theory (DFPT) by a previous work [42]. Since the provided dataset does not contain crystal system information, we additionally collect the information based on the Materials Project (MP) website 4 on the given each material's unique ID (MP-id).
Footnote 5: [https://github.com/zhantaochen/phonondos_e3nn](https://github.com/zhantaochen/phonondos_e3nn)
### Electron DOS
We also use **Electron DOS** dataset that contains 38,889 crystalline materials. The Electron DOS dataset consists of the materials and their electron DOS information that is collected from the MP website 4. Among the collected data, we exclude the materials that are tagged to include magnetism because the DOS of magnetism materials is not accurate to be directly used for training machine learning models [23]. We consider an energy grid of 201 points ranging from \(-5\) to \(5\) eV with respect to the band edges with \(50\) meV intervals and the Fermi energy is all set to \(0\) eV on this energy grid. Moreover, we normalize the DOS of each material to be in the range between 0 and 1. That is, the maximum and minimum value for each DOS is 1 and 0, respectively, for all materials. Moreover, we smooth the DOS values with the Savitzky-Golay filter with the window size of 17 and polyorder of 1 using scipy library following a previous work [9].
Footnote 4: [https://github.com/zhantaochen/phonondos_e3nn](https://github.com/zhantaochen/phonondos_e3nn)
### Data Statistics of Electron DOS dataset in OOD scenarios
As described in the main manuscript, we further evaluate the model performance in two out-of-distribution scenarios: **Scenario 1**: regarding the number of atom species, and **Scenario 2**: regarding the crystal systems. We provide detailed statistics of the number of crystalline materials for each scenario in Table 5 and Table 6.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & Cubic & Hexagonal & Tetragonal & Trigonal & Orthorhombic & Monoclinic & Triclinic & Total \\ \hline \# Materials & 8,385 & 3,983 & 5,772 & 3,964 & 8,108 & 6,576 & 2,101 & 38,889 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The number of crystals according to different crystal systems (Scenario 2).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & Unary (1) & Binary (2) & Ternary (3) & Quaternary (4) & Quinary (5) & Senary (6) & Setpenary (7) & Total \\ \hline \# Materials & 386 & 9,034 & 21,794 & 5,612 & 1,750 & 279 & 34 & 38,889 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The number of crystals according to the number of atom species (Scenario 1).
Evaluation Protocol
**Phonon DOS.** As described in the main manuscript, we evaluate the model performance based on the data splits given in a previous work [9].
**Electron DOS.** On the other hand, for the Electron DOS dataset, we use different dataset split strategies for each scenario. For the in-distribution setting, we randomly split the dataset into train/valid/test of 80/10/10%. On the other hand, for the out-of-distribution setting, we split the dataset regarding the structure of the crystals. For both scenarios, we generate training sets with simple crystal structures and a valid/test set with more complex crystal structures, because it is crucial to transfer the knowledge obtained from simple crystal structures to that from complex structures in real-world materials science. More specifically, in the scenario 1 (different number of atom species, i.e., # Atom species in Table 2), we use Binary and Ternary materials as training data and Unary, Quaternary, and Quinary materials as valid and test data. In the case of Unary, we exclude it from training data despite its simplicity due to the observed difficulty of the structure, as will be discussed in Section E.1. In the scenario 2 (different crystal systems, i.e., Crystal System in Table 2), we use Cubic, Hexagonal, Tetragonal, Trigonal, and Orthorhombic crystal systems as training set and Monoclinic and Triclinic as valid and test set. In this scenario, where no prompt is available for unseen crystal systems, we employ the mean-pooled representations of the trained prompts during testing, i.e., for the Monoclinic and Triclinic crystal systems. Please refer to Table 5 and Table 6 for detailed statistics of crystals in each scenario.
**Physical Properties.** In addition to evaluating the accuracy of the model's predictions of the DOS, it is crucial to assess the physical meaningfulness of the predicted DOS for real-world applications. To assess the physical meaningfulness of the predicted DOS, we utilize the predicted DOS to estimate a range of important material properties. Specifically, we evaluate three materials' properties: the bulk modulus for phonon DOS, and the band gap and Fermi energy for electron DOS (Table 1).
Bulk Modulus 6 is a thermodynamic quantity measuring the resistance of a substance to compression. It provides a measure of the material's ability to withstand changes in volume under applied pressure. In the context of elastic properties, the bulk modulus serves as a descriptor, as it indicates how well a material can recover its original volume after being subjected to compression.
Footnote 6: [https://en.wikipedia.org/wiki/Bulk_modulus](https://en.wikipedia.org/wiki/Bulk_modulus)
Another property we focus on is the Band Gap 7, which refers to the energy range in a material where no electronic states exist. It represents the energy difference between the top of the valence band and the bottom of the conduction band in insulators and semiconductors. Functional inorganic materials, such as those used in applications like LEDs, transistors, photovoltaics, or scintillators, require a comprehensive understanding of their band gap [69]. By accurately predicting the band gap based on DOS, we can accelerate the development of new materials for a wide range of applications.
Footnote 7: [https://en.wikipedia.org/wiki/Band_gap](https://en.wikipedia.org/wiki/Band_gap)
Additionally, we predict the Fermi Energy 8, which represents the highest energy level occupied by electrons at absolute zero temperature (0K). It can be used to determine the electrical and thermal characteristics of materials.
Footnote 8: [https://en.wikipedia.org/wiki/Fermi_energy](https://en.wikipedia.org/wiki/Fermi_energy)
## Appendix C Implementation Details
In this section, we provide implementation details of DOSTransformer.
**Graph Neural Networks.** Our graph neural networks consist of two parts, i.e., encoder and processor. Encoder learns the initial representation of atoms and bonds, while the processor learns to pass the messages across the crystal structure. More formally, given an atom \(v_{i}\) and the bond \(e_{ij}\) between atom \(v_{i}\) and \(v_{j}\), node encoder \(\phi_{node}\) and edge encoder \(\phi_{edge}\) outputs initial representations of atom \(v_{i}\) and bond \(e_{ij}\) as follows:
\[\mathbf{h}_{i}^{0}=\phi_{node}(\mathbf{X}_{i}),\ \ \mathbf{b}_{ij}^{0}=\phi_{ edge}(\mathbf{B}_{ij}), \tag{4}\]
where \(\mathbf{X}\) is the atom feature matrix whose \(i\)-th row indicates the input feature of atom \(v_{i}\), \(\mathbf{B}\in\mathbb{R}^{n\times n\times F_{e}}\) is the bond feature tensor with \(F_{e}\) features for each bond. With the initial representationsof atoms and bonds, the processor learns to pass messages across the crystal structure and update atoms and bonds representations as follows:
\[\mathbf{b}_{ij}^{l+1}=\psi_{edge}^{l}(\mathbf{h}_{i}^{l},\mathbf{h}_{j}^{l}, \mathbf{b}_{ij}^{l}),\ \ \ \mathbf{h}_{i}^{l+1}=\psi_{node}^{l}(\mathbf{h}_{i}^{l}, \sum_{j\in\mathcal{N}(i)}\mathbf{b}_{ij}^{l+1}), \tag{5}\]
where \(\mathcal{N}(i)\) is the neighboring atoms of atom \(v_{i}\), \(\psi\) is a two-layer MLP with non-linearity, and \(l=0,\ldots,L^{\prime}\). Note that \(\mathbf{h}_{i}^{L^{\prime}}\) is equivalent to the \(i\)-th row of the atom embedding matrix \(\mathbf{H}\) in Equation 1.
**Model Training.** In all our experiments, we use the AdamW optimizer for model optimization. For all the tasks, we train the model for 1,000 epochs with early stopping applied if the best validation loss does not change for 50 consecutive epochs.
**Hyperparameter Tuning.** Detailed hyperparameter specifications are given in Table 7. For the hyperparameters in DOSTransformer, we tune them in certain ranges as follows: number of message passing layers in GNN \(L^{\prime}\) in {2, 3, 4}, number of cross-attention layers \(L_{1}\), \(L_{3}\) in {2, 3, 4}, number of self-attention layers \(L_{2}\) in {2, 3, 4}, hidden dimension \(d\) in [64, 128, 256], learning rate \(\eta\) in {0.0001, 0.0005, 0.001}, and batch size \(B\) in {1, 4, 8}. We use the sum pooling to obtain the crystalline material \(i\)'s representation, i.e., \(\mathbf{g}_{i}\). We report the test performance when the performance on the validation set gives the best result.
## Appendix D Methods Compared
In this section, we provide further details on the methods that are compared with DOSTransformer in our experiments.
**MLP.** We first encode the atoms in a crystalline material with an MLP. Then, we obtain the representation of material \(i\), i.e., \(\mathbf{g}_{i}\), by sum pooling the representations of its constituent atoms. With the material representation, we predict DOS with an MLP predictor \(\phi^{\prime}\), i.e., \(\hat{\mathbf{Y}}^{i}=\phi^{\prime}(\mathbf{g}_{i})\), where \(\phi^{\prime}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{201}\).
On the other hand, when we incorporate energy embeddings into the MLP, we predict DOS for each energy \(j\) with a learnable energy embedding \(\mathbf{E}_{j}^{0}\) and obtained material representation \(\mathbf{g}_{i}\), i.e., \(\hat{\mathbf{Y}}_{j}^{i}=\phi(\mathbf{E}_{j}^{0}||\mathbf{g}_{i})\), where \(\phi:\mathbb{R}^{2d}\rightarrow\mathbb{R}^{1}\) is a parameterized MLP.
**Graph Network.** We first encode the atoms in a crystalline material with a graph network [4]. As done for MLP, we obtain the representation of material \(i\), i.e., \(\mathbf{g}_{i}\), by sum pooling the representations of its constituent atoms. With the material representation, we predict the DOS with an MLP predictor,
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Hyperparameters**} & \multicolumn{2}{c}{**In-Distribution**} & \multicolumn{2}{c}{**Out-of-Distribution**} \\ \cline{2-5} & **Phonon DOS** & **Electron DOS** & **\# Atom Species** & **Crystal Systems** \\ \hline \# Message Passing & & & & \\ Layers (\(L^{\prime}\)) & 3 & 3 & 3 & 3 \\ \# Cross-Attention & & & & \\ Layers (\(L_{1}\)) & 2 & 2 & 2 & 2 \\ \# Self-Attention & & & & \\ Layers (\(L_{2}\)) & 2 & 2 & 2 & 2 \\ \# Cross-Attention & & & & \\ Layers (\(L_{3}\)) & 2 & 2 & 2 & 2 \\ Hidden Dim. (\(d\)) & 256 & 256 & 256 & 256 \\ Learning Rate (\(\eta\)) & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ Batch Size (\(B\)) & 1 & 8 & 8 & 8 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Hyperparameter specifications of DOSTransformer.
i.e., \(\hat{\mathbf{Y}}^{i}=\phi^{\prime}(\mathbf{g}_{i})\), where \(\phi^{\prime}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{201}\). Note that the only difference with MLP is that the atom representations are obtained through the message passing scheme. We also compare the vanilla graph network that incorporates the energy information as we have done in MLP.
**E3NN.** For E3NN [9], we use the official code published by the authors9, which implements equivariant neural networks with E3NN python library10. By learning the equivariance, the model can generate high-quality representations with a small number of training materials. After obtaining the crystalline material representation \(\mathbf{g}_{i}\), all other procedures have been done in the same manner with other baseline models, i.e., MLP and Graph Network.
Footnote 9: [https://github.com/ninarina12/phononDoS_tutorial](https://github.com/ninarina12/phononDoS_tutorial)
Footnote 10: [https://docs.e3nn.org/en/latest/index.html](https://docs.e3nn.org/en/latest/index.html)
## Appendix E Additional Experiments
### Model Performance Analysis on Out-of-Distribution Scenarios
In this section, we conduct a comprehensive analysis of the model's predictions in the out-of-distribution scenarios presented in Table 2. In Table 8, we evaluate the performance of the model for each type of material, providing detailed insights into its predictive capabilities. We have following observations: **1)** We observe that DOSTransformer consistently outperforms in both out-of-distribution scenarios, which demonstrates the superiority of DOSTransformer. **2)** The performance of all the compared models generally degrades as the crystal structure gets more complex. That is, models perform worse in Quanrary crystals than in Quanrary crystals, and worse in Triclinic crystals than in Monoclinic crystals. **3)** On the other hand, it is not the case in Unary crystal. This is because in Unary crystal only one type of atom repeatedly appears in the crystal structure, which cannot give enough information to the model. However, DOSTransformer also makes comparably accurate predictions in the Unary materials by modeling the complex relationship between the atoms and various energy levels.
### Injecting Crystal System Information to Baseline Methods
In this section, we adopt our prompt-based crystal system information injection procedure to the baseline methods. We examine two approaches for injecting the information: 1) injecting the information into the input atoms (i.e., Position 1), and 2) injecting it before the DOS prediction layer (i.e., Position 2). In Table 9, we have the following observations: **1)** Compared to Table 1, all baseline models benefit from using crystal system information. This demonstrates the importance of
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{6}{c}{**\# Atom Species**} & \multicolumn{6}{c}{**Crystal System**} \\ \cline{2-11} & \multicolumn{2}{c}{**Unary**} & \multicolumn{2}{c}{**Quarternary**} & \multicolumn{2}{c}{**Quinary**} & \multicolumn{2}{c}{**Monoclinic**} & \multicolumn{2}{c}{**Triclinic**} \\ \cline{2-11} & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline \multicolumn{11}{l}{**Energy ✗**} \\ \hline MLP & 0.578 & 0.180 & 0.500 & 0.153 & 0.673 & 0.182 & 0.370 & 0.132 & 0.470 & 0.146 \\ & (0.005) & (0.001) & (0.003) & (0.001) & (0.004) & (0.002) & (0.017) & (0.003) & (0.021) & (0.003) \\ Graph Network & 0.485 & 0.165 & 0.443 & 0.141 & 0.620 & 0.170 & 0.376 & 0.127 & 0.504 & 0.147 \\ & (0.013) & (0.003) & (0.003) & (0.001) & (0.007) & (0.003) & (0.003) & (0.001) & (0.007) & (0.001) \\ E3NN & 0.565 & 0.167 & 0.486 & 0.145 & 0.708 & 0.179 & 0.393 & 0.129 & 0.510 & 0.148 \\ & (0.025) & (0.002) & (0.001) & (0.000) & (0.013) & (0.002) & (0.003) & (0.001) & (0.006) & (0.002) \\ \hline \multicolumn{11}{l}{**Energy ✓**} \\ \hline MLP & 0.597 & 0.183 & 0.498 & 0.151 & 0.684 & 0.178 & 0.367 & 0.130 & 0.468 & 0.144 \\ & (0.034) & (0.005) & (0.003) & (0.002) & (0.015) & (0.002) & (0.012) & (0.002) & (0.002) & (0.003) \\ Graph Network & 0.471 & 0.160 & 0.416 & 0.137 & 0.571 & 0.165 & 0.359 & 0.125 & 0.476 & 0.144 \\ & (0.028) & (0.005) & (0.002) & (0.002) & (0.005) & (0.002) & (0.010) & (0.004) & (0.008) & (0.004) \\ E3NN & 0.567 & 0.166 & 0.481 & 0.143 & 0.689 & 0.175 & 0.393 & 0.128 & 0.516 & 0.147 \\ & (0.021) & (0.003) & (0.008) & (0.000) & (0.020) & (0.002) & (0.006) & (0.001) & (0.006) & (0.001) \\ \hline \multirow{2}{*}{**DOSTransformer**} & **0.417** & **0.145** & **0.413** & **0.128** & **0.570** & **0.157** & **0.343** & **0.117** & **0.466** & **0.137** \\ & (0.012) & (0.003) & (0.010) & (0.002) & (0.010) & (0.001) & (0.003) & (0.001) & (0.007) & (0.001) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Model performance in Out-of-Distribution scenarios.
utilizing crystal structural systems information, which has been overlooked in previous works. **2)** However, DQSTransformer still outperforms all baseline methods with crystal system information (See DQSTransformer in Table 1), verifying the importance of an elaborate design of crystal system injection procedure. To be more specific, we notice a relatively significant performance gap between DQSTransformer and the best baseline model in Electron DOS, which comprises a broader range of crystalline materials than Phonon DOS. This finding highlights the importance of an intricate crystal system injection procedure when striving to learn the DOS of diverse crystalline materials.
### Various Training Data Ratio for Fine-Tuning
In this section, we additionally provide experimental results on various ratios of training data for fine-tuning in Table 3. That is, instead of sampling 10% of training data from the test set used in OOD scenarios in Section 5.3, we try various sampling ratios, i.e., 5%, 10%, 15%, and 20%, from the test set. We have the following observations: **1)** We notice a significant performance disparity between the "Only Prompt" and "All" approaches, particularly when the training dataset is limited. This phenomenon can be attributed to the challenge of overfitting when fine-tuning the entire model on a small subset of materials, as discussed in Section 5.3. **2)** Conversely, as the training data increases, we observe that the difference in performance between the "Only Prompt" and "All" methods diminishes. This is due to the enough amount of training data allowing for the adjustment of all model parameters, consistent with the analysis discussed in Section 5.3. However, it is important to note that the existing DFT calculation-based databases suffer from a highly biased distribution, which limits their coverage of different materials. This limitation emphasizes the significance of achieving good performance even with a small subset of training data. Therefore, we argue that the application of prompt tuning enhances the real-world applicability of DQSTransformer.
### Comparison with Crystal Neural Network
We conducted additional experiments on the state-of-the-art crystal neural networks, i.e., ALIGNN [10] and Matformer [63], by adjusting the output dimensions of these models (51 for Phonon DOS and 201 for Electron DOS) in Table 10. Although both ALIGNN and Matformer exhibited commendable performance, we observe that DQSTransformer consistently surpassed them significantly. These observations again demonstrate the importance of considering energy levels for accurate DOS prediction.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**Phonon DOS**} & \multicolumn{4}{c}{**Electron DOS**} \\ \cline{2-9} & \multicolumn{2}{c}{**Position 1**} & \multicolumn{2}{c}{**Position 2**} & \multicolumn{2}{c}{**Position 1**} & \multicolumn{2}{c}{**Position 2**} \\ \cline{2-9} & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE \\ \hline \hline \multicolumn{9}{l}{**Energy ✗**} \\ \hline MLP & 0.308 & 0.105 & 0.324 & 0.109 & 0.343 & 0.128 & 0.340 & 0.127 \\ & (0.013) & (0.001) & (0.012) & (0.002) & (0.008) & (0.001) & (0.009) & (0.0031 \\ Graph Network & 0.260 & 0.097 & 0.244 & 0.095 & 0.260 & 0.102 & 0.260 & 0.103 \\ & (0.008) & (0.000) & (0.023) & (0.001) & (0.011) & (0.001) & (0.006) & (0.002) \\ E3NN & 0.210 & 0.077 & 0.209 & 0.079 & 0.292 & 0.108 & 0.299 & 0.110 \\ & (0.007) & (0.002) & (0.010) & (0.001) & (0.007) & (0.001) & (0.002) & (0.001) \\ \hline \multicolumn{9}{l}{**Energy ✓**} \\ \hline MLP & 0.251 & 0.100 & 0.245 & 0.098 & 0.332 & 0.125 & 0.336 & 0.127 \\ & (0.000) & (0.000) & (0.004) & (0.001) & (0.003) & (0.001) & (0.007) & (0.002) \\ Graph Network & 0.230 & 0.094 & 0.224 & 0.091 & 0.234 & 0.097 & 0.230 & 0.093 \\ & (0.009) & (0.002) & (0.009) & (0.002) & (0.001) & (0.004) & (0.009) & (0.002) \\ E3NN & 0.194 & 0.073 & 0.190 & 0.073 & 0.286 & 0.108 & 0.290 & 0.109 \\ & (0.004) & (0.000) & (0.000) & (0.001) & (0.002) & (0.000) & (0.003) & (0.000) \\ \hline \hline \end{tabular}
\end{table}
Table 9: Baseline model performance with crystal structural system prompts.
Figure 6: Various training data ratios for fine-tuning.
### Model Training and Inference Time
In this section, to verify the efficiency of DOSTransformer, we compare the training and inference time of the baseline methods in Table 11. We observe that DOSTransformer requires a longer training time per epoch on the Phonon DOS dataset compared to E3NN, which can be attributed to the two forward passes (i.e., system and global energy embeddings) during the training procedure. However, when it comes to the Electron DOS dataset, DOSTransformer demonstrates a shorter training time per epoch compared to E3NN. This is because the Electron DOS dataset has complex crystal structures, requiring more time for E3NN to learn equivariant representations. Furthermore, in terms of inference time, DOSTransformer demonstrates significantly faster computation per epoch compared to E3NN, particularly on the Electron DOS dataset. This is because we only utilize system prediction without global prediction during inference. As many predictive ML models are used for high-throughput screening in material discovery, inference time is a critical factor for ML models in materials science, demonstrating the practicality of DOSTransformer in real-world applications.
## Appendix F Broader Impacts
**Potential Positive Scientific Impacts.** In this work, we propose DOSTransformer, which is the first work that considers various energy levels during DOS prediction and introduces prompts for crystal structural system, demonstrating its applicability in real-world scenarios. For example, transferring the knowledge obtained from simple structured materials to complex structured materials is crucial because DFT calculation-based databases cover limited types of materials or structural archetypes. Therefore, we believe DOSTransformer has broad impacts on various fields of materials science.
**Potential Negative Societal Impacts.** This work explores the automation process for materials science without wet lab experiments. However, it is important to acknowledge that in the industry, there are skilled professionals dedicated to conducting such experiments for materials science.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Phonon DOS**} & \multicolumn{3}{c}{**Electron DOS**} \\ \cline{2-7} & MSE & MAE & \(R^{2}\) & MSE & MAE & \(R^{2}\) \\ \hline \multirow{2}{*}{ALIGNN} & 0.204 & 0.085 & 0.717 & 0.270 & 0.108 & 0.600 \\ & (0.005) & (0.001) & (0.007) & (0.000) & (0.001) & (0.001) \\ \multirow{2}{*}{Matformer} & 0.198 & 0.084 & 0.724 & 0.268 & 0.104 & **0.601** \\ & (0.008) & (0.002) & (0.013) & (0.004) & (0.001) & (0.002) \\ \hline \multirow{2}{*}{DOSTransformer} & **0.191** & **0.071** & **0.733** & **0.221** & **0.089** & **0.679** \\ & (0.003) & (0.002) & (0.004) & (0.006) & (0.001) & (0.006) \\ \hline \hline \end{tabular}
\end{table}
Table 10: Comparison to crystal neural networks.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Training**} & \multicolumn{2}{c}{**Inference**} \\ \cline{2-5} & **Phonon DOS** & **Electron DOS** & **Phonon DOS** & **Electron DOS** \\ \hline \hline \multirow{2}{*}{MLP} & 4.10 & 23.52 & 1.51 & 3.00 \\ Graph Network & 16.17 & 59.74 & 1.95 & 3.88 \\ E3NN & 21.21 & 141.02 & 3.72 & 9.49 \\ \hline \hline \multirow{2}{*}{**Energy ✓**} & & & & \\ \hline MLP & 4.67 & 27.88 & 1.66 & 3.10 \\ Graph Network & 17.45 & 66.83 & 2.16 & 4.28 \\ E3NN & 24.12 & 152.80 & 3.92 & 10.21 \\ \hline \hline \multirow{2}{*}{DOSTransformer} & 39.17 & 145.85 & 2.98 & 5.99 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Training and inference time per epoch for each dataset (sec/epoch).
Therefore, it is important to proactively address these concerns by encouraging collaboration between automated methods and human experts.
## Appendix G Pseudo Code
Algorithm 1 shows the pseudocode of DOSTransformer.
```
Input : An input crystalline material \(\mathcal{G}=(\mathbf{X},\mathbf{A})\), Ground truth DOS \(\mathbf{Y}\), Number of attention layers \(L_{1},L_{2},L_{3}\), Initialized energy embeddings \(\mathbf{E}\), Initialized crystal system prompts \(\mathbf{P}\).
1\(\mathbf{H}\leftarrow\text{GNN}(\mathbf{X},\mathbf{A})\)
2\(\mathbf{E}^{L_{1}}\leftarrow\text{Cross-Attention}(\mathbf{H},\mathbf{E},L_{1})\)
3\(\mathbf{g}\leftarrow\text{Sum Pooling}(\mathbf{H})\)
4\(\mathbf{E}^{glob}\leftarrow(\mathbf{E}^{L_{1}}||\mathbf{g})\)
5\(\tilde{\mathbf{E}}^{0,glob}\leftarrow\phi_{1}(\mathbf{E}^{glob})\)
6\(\tilde{\mathbf{E}}^{L_{2},glob}\leftarrow\text{Self-Attention}(\tilde{ \mathbf{E}}^{0,glob},L_{2})\) // Global Self-Attention
7\(\mathbf{E}^{L_{3},glob}\leftarrow\text{Cross-Attention}(\mathbf{H},\tilde{ \mathbf{E}}^{L_{2},glob},L_{3})\)
8\(\hat{\mathbf{Y}}\leftarrow\phi_{pred}(\mathbf{E}^{L_{3},glob})\)
9\(\mathcal{L}^{glob}\leftarrow\text{RMSE}(\hat{\mathbf{Y}},\mathbf{Y})\)
10\(\mathbf{E}^{sys}\leftarrow(\mathbf{E}^{L_{1}}||\mathbf{g}||\mathbf{P})\)
11\(\tilde{\mathbf{E}}^{0,sys}\leftarrow\phi_{2}(\mathbf{E}^{sys})\)
12\(\tilde{\mathbf{E}}^{L_{2},sys}\leftarrow\text{Self-Attention}(\tilde{ \mathbf{E}}^{0,sys},L_{2})\) // System Self-Attention
13\(\mathbf{E}^{L_{3},sys}\leftarrow\text{Cross-Attention}(\mathbf{H},\tilde{ \mathbf{E}}^{L_{2},sys},L_{3})\)
14\(\hat{\mathbf{Y}}\leftarrow\phi_{pred}(\mathbf{E}^{L_{3},sys})\)
15\(\mathcal{L}^{sys}\leftarrow\text{RMSE}(\hat{\mathbf{Y}},\mathbf{Y})\)
16\(\mathcal{L}^{total}\leftarrow\mathcal{L}^{glob}+\beta\cdot\mathcal{L}^{sys}\) // Calculate total loss
17Function Cross-Attention\((\mathbf{H},\mathbf{E}^{0},L)\):
18for\(l=1,2,\ldots,L\)do
19\(\mathbf{E}^{l}\leftarrow\text{Softmax}(\frac{\mathbf{E}^{l-1}\mathbf{H}^{\top }}{\mathbf{H}})\)
20 end for return\(\mathbf{E}^{L}\)
21Function Self-Attention\((\tilde{\mathbf{E}}^{0},L)\):
22for\(p=1,2,\ldots,L\)do
23\(\tilde{\mathbf{E}}^{p}\leftarrow\text{Softmax}(\frac{\tilde{\mathbf{E}}^{p-1} \tilde{\mathbf{E}}^{p-1}}{\mathbf{E}^{p-1}})\)
24 end for return\(\tilde{\mathbf{E}}^{L}\)
```
**Algorithm 1**Pseudocode of DOSTransformer. | ## Review
### Summary
This paper presents DOSTransformer, a novel transformer-based architecture designed for predicting the density of states (DOS) of crystalline materials. The model uniquely incorporates energy levels as an input modality alongside material configurations, enhancing its predictive capabilities compared to prior approaches. It employs a multi-modal transformer architecture with cross-attention and self-attention layers, demonstrating improved performance over existing methods such as graph networks and E3NN in various experimental settings. Comprehensive evaluations, including ablation studies and sensitivity analyses, validate the effectiveness of the proposed model. Overall, the work addresses an important problem in materials science with promising results.
### Strengths
- The DOSTransformer is evaluated on both in-domain and out-of-domain structures against relevant baseline models.
- The model design effectively utilizes domain-specific knowledge, such as crystal structure type and energy levels.
- The task of predicting DOS is enhanced by providing energy levels for targeted predictions, supported by thorough ablation studies.
- The approach is well-defined and uses state-of-the-art libraries for implementation.
- The results include comprehensive evaluations on two datasets, with empirical validation of predictions.
- The paper is well-written, making it easy to follow and understand.
### Weaknesses
- More detail is needed on the implications of DOS predictions for downstream applications, particularly for a non-materials science audience.
- Clarification on the necessary mean squared error (MSE) values for real-world applicability is required.
- The datasets used in the experiments are somewhat limited in diversity, potentially restricting the evaluation of the proposed method.
- Core ablation studies are missing, particularly regarding the novelty of taking energy levels as input.
- There is insufficient discussion on the relationship between learned prompts and the physical characteristics of materials.
### Questions
- What MSE values are considered acceptable for real-world applications?
- Are there other instances in the literature where transformers have been used to predict outputs of mathematical functions with fixed grid inputs?
- Could the model handle structural variations without relying solely on the input prompts during testing?
- What specific implementation details were used for E3NN, particularly regarding the irreducible representations?
### Soundness
**Score:** 3
**Description:** 3 = good. The foundational concepts and methodologies are sound, but there are areas that require further clarification and detail.
### Presentation
**Score:** 3
**Description:** 3 = good. The overall presentation of the paper is clear and coherent, though some sections could benefit from additional detail.
### Contribution
**Score:** 3
**Description:** 3 = good. The contribution of the research is significant, addressing a relevant problem in the field, although more comparisons with existing methods would enhance its impact.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements. The paper is technically solid, with moderate-to-high impact potential, and requires minor clarifications and enhancements.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel approach to an important problem in predicting the density of states in crystalline materials. It is grounded in sound methodology and provides valuable empirical results. While there are areas for improvement, particularly in clarifying the implications of its findings and the diversity of datasets, the strengths and overall contributions significantly outweigh the weaknesses, justifying an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# PromptCoT: Align Prompt Distribution via Adapted Chain of Thought
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
Diffusion-based generative models have exhibited remarkable capability in the production of high-fidelity visual content such as images and videos. However, their performance is significantly contingent upon the quality of textual inputs, commonly referred to as "prompts". The process of traditional prompt engineering, while effective, necessitates empirical expertise and poses challenges for inexperienced users. In this paper, we introduce PromptCoT, an innovative enhancer that autonomously refines prompts for users. The design of PromptCoT is based on the observation that, prompts resembling textual information corresponding to high-quality images within the training set tend to yield superior generation performance. As such, we fine-tune the pre-trained Large Language Models (LLM) using a curated text dataset comprising solely of high-quality visual content descriptions. By doing so, the LLM becomes capable of capturing the distribution of high-quality training texts, enabling it to generate aligned continuations and revisions to boost the original texts. Nonetheless, one drawback of pre-trained LLMs is their tendency to generate extraneous or irrelevant information. To enhance the alignment between the original text prompts and the refined counterparts, we leverage the Chain-of-Thought (CoT) mechanism. CoT can extract and amalgamate crucial information from the aligned continuation and revision, enabling reasonable inferences based on the contextual cues to produce a more comprehensive and nuanced final output. Considering computational efficiency, instead of allocating a dedicated LLM for prompt enhancement to each individual model or dataset, we integrate adapters that facilitate dataset-specific adaptation, leveraging a shared pre-trained LLM as the foundation for this process. By fine-tuning these adapters independently, we can adapt PromptCoT to new datasets with minimal increase in training cost and memory usage. We assess the performance of PromptCoT on widely-used latent diffusion models for image and video generation to validate the effectiveness. The results demonstrate significant improvements in key performance metrics.
## 1 Introduction
In recent years, deep generative models have made notable advancements, specifically with the introduction of diffusion probabilistic models (DPMs). These models have exhibited exceptional capabilities in generating a wide range of visually compelling and high-fidelity visual contents, such as images and videos, as evidenced by notable contributions in the literature [37; 12; 38; 36; 7; 28; 32; 30].
By harnessing textual inputs as conditional guidance, diffusion models have the ability to generate visual outputs that align with the corresponding input text, utilizing an iterative denoising procedure. This technological advancement has paved the way for revolutionary applications, including notable examples such as DALL-E 2 [28], Stable Diffusion [30], MagicVideo [50], among others.
Nevertheless, the quality of the generated content is intricately tied to the caliber of the textual prompts provided to the generative model. Human inputs tend to be informal and straightforward, which may impede the expression of the desired scene with the desired level of depth. Additionally, the text encoder within the generative model may not fully comprehend the semantic nuances present in the human-generated text, resulting in notable disparities between the encoded textual guidance and the user's intended meaning. Diffusion probabilistic models (DPMs) are commonly trained on extensive text-vision pairs acquired through web-scraping techniques [35]. Our observation reveals that the distribution of the text dataset might not be congruent with the linguistic style employed by layman users. Furthermore, even in cases where the training text data aligns with the desired style, the quality can exhibit substantial variations due to the presence of meaningless words or extraneous information within the text data. This intricacy further complicates the establishment of a clear and unambiguous mapping between the text and the corresponding image.
As a result, there is an immediate imperative to develop a methodology that can effectively align prompts, consequently augmenting the image generation performance in generative models. Although data cleaning and model fine-tuning have been considered potential solutions, these methods often entail drawbacks such as high costs, instability, and time intensiveness. Another alternative is manual prompt engineering, which involves refining prompts to optimize generation performance. However, this empirical task traditionally demands the expertise of experienced professionals, thereby posing a significant challenge for individuals lacking relevant experience.
In our study, we observe a noticeable trend that prompts, which resemble those found in the training set, usually lead to superior generative performance. Stemming from this observation, we propose PromptCoT, a novel prompt booster that leverages the power of pre-trained Large Language Models (LLMs) and incorporates the Chain-of-Thought (CoT) mechanism to learn high-quality prompt expressions from the training texts of generative models. Specifically, we carry out the fine-tuning of LLaMA [40], a widely-used pre-trained Large Language Model, on two distinct datasets we've prepared. With a text-continuation dataset that appends aligned details to original prompts, and a text-revision dataset that rewrites original prompts to aligned prompts, we enable LLaMA to refine prompts that better match the distribution of the text data used for training the diffusion models. To further enhance the performance of LLMs by combining the advantages of both text-continuation and text-revision, we construct a dataset using the CoT mechanism assisted by ChatGPT. This CoT dataset is designed to enable LLMs to reason and generate text that follows a logical and coherent flow. By fine-tuning LLMs on this CoT dataset, we can enhance their reasoning ability and augments their capacity to generate high-quality text that is both contextually relevant and logically coherent.
To accommodate the varying training sets of different generative models, we incorporate a parameter-efficient adaptation design into the training pipeline of PromptCoT, augmenting a pre-trained base
Figure 1: **Impacts of PromptCoT. (a) and (c) shows the images generated with the original text prompts, and (b) and (d) show the images generated with the text prompts refined by PromptCoT. The text prompt for (a), (b), (c) and (d) are: 1) ”highly detailed portrait of a hopeful pretty astronaut lady with a wavy blonde hair, by Jamini Roy, 4k resolution, nier:automata inspired, bravely default inspired, vibrant but dureary but uplifting red, black and white color scheme!!! ((Space nebula background))” ; 2) ”Astronaut portrait of Silica from the game Bravely Default II by Jamini Roy”, and 3) ”highly detailed portrait of a hopeful pretty astronaut lady with a wavy blonde hair, by Pablo Picasso, 4k resolution, nier:automata inspired, bravely default inspired, vibrant but dureary but uplifting red, black and white color scheme!!! ((Space nebula background))”,and 4)”Portrait Of A Beautiful Astronaut Girl Canvas Art Print” respectively.**
booster with specific lightweight adapters that are capable of aligning text distributions for various generative models across multiple tasks. We demonstrate the effectiveness of PromptCoT through extensive experiments on widely-used latent diffusion models for image and video generation, showing significant improvements in key performance metrics such as Frechet Inception Distance, aesthetic score, and CLIP-similarity.
Our main contributions are:
\(\bullet\) We propose PromptCoT, an innovative prompt refiner that aligns input prompts with the text distribution employed during the training of diffusion models. By accomplishing this alignment, PromptCoT effectively activates generative models and enhances their performance.
\(\bullet\) We explore a new optimization scheme for improving prompt quality by leveraging the power of pre-trained LLMs and CoT mechanisms. And we construct datasets to facilitate the learning of high-quality prompt distribution from the training texts of generative models.
\(\bullet\) We demonstrate that allocating a dedicated Large Language Model (LLM) for each diffusion model is not a requirement. Instead, we propose an innovative scheme where a set of lightweight adapter weights suffices for each dedicated diffusion model. These adapters can share a shared base pre-trained LLM, resulting in a considerable reduction in memory footprint.
\(\bullet\) We show the effectiveness of PromptCoT through extensive experiments on widely-used latent diffusion models for image and video generation, showing significant improvements in key performance metrics.
## 2 Related Work
### Text-to-Image Generative Models
Text-to-Image Generative Models operate by taking natural language descriptions as input and generating corresponding images as output. One of the recent popular model is DALL-E 2 [29]. It utilize CLIP [26] to align the text and image embeddings. By conditioning the diffusion probabilistic generator on the textual embedding, DALL-E 2 is able to produce photorealistic images that correspond to the given textual description. Later, Google's Imagen [32] and Parti [46] were proposed by gradually simulating the spread of noise into the original image to reveal the desired image. Specifically, both Parti and Imagen combine autoregressive and diffusion. The application of diffusion probabilistic models has also been extended to the domain of video generation. The Video Diffusion Model [13], built upon the foundations of diffusion models, enables the sequential generation of high-quality video frames. To address the substantial computational requirements associated with video generation, MagicVideo [51] was introduced, combining latent diffusion and attention models. MagicVideo utilizes a frame-wise lightweight adapter and an attention module to effectively adjust the image-to-video distribution and capture temporal dependencies across frames.
### Large Language Models
Large Language Models (LLMs) are powerful deep learning models for various natural language processing tasks. The most popular LLMs are the GPT [27, 5] series models developed by OpenAI, which are based on the decoder component of the transformer architecture. Another LLM is Meta's OPT [49], which is open-sourced and performs similarly in performance to GPT-3. However, GPT-3's massive size of 175B parameters requires significant computing power and resources, which makes it challenging for researchers to explore. In contrast, LLaMA [40, 41], StableLM [2], as well as the instruction-following Alpaca model [39] are smaller and more performant, achieve comparable results to ChatGPT with far fewer parameters (7B). For specific tasks like conversational applications, ChatGLM [47, 9] can generate coherent and contextually relevant responses in dialogue systems.
### Parameter-Efficient Fine-Tuning
The goal of parameter-efficient fine-tuning is to attain comparable performance to fine-tuning on a specific downstream task while using the fewest trainable parameters possible. According to [1], common pre-trained models generally have a very low intrinsic dimension, and LoRA [15] learns low-rank parameterizations to enhance tuning efficiency based on that. Except reducing the number of parameters needed for fine-tuning, other approaches try to attach pre-trained parameters to reduce training time. Adapter training [14, 24] utilizes dynamic pre-trained adapters for different tasks and languages to reduce adaptation time. Compacter [21] combines both concepts and builds on top of adapters, low-rank optimization, and parameterized hypercomplex multiplication layers.
### Prompt Engineering
Prompt Engineering is to optimize the outputs of language models with specific input prompts [4; 33; 20; 8]. Discrete text prompts [16] serve as starting points for the model's language generation, and are used to generate responses in dialogue systems. Beyond discrete prompts, [17; 43] explores prompt tuning to learn soft prompts to perform specific downstream tasks, which provide more context-aware guidance to the model. [25] extends the idea of learning soft prompts and demonstrates that the implicit factual knowledge in language models was underestimated. Given that manually designing prompts can be cumbersome, automatically generating prompts gives a chance avoid intensive labor and enhance efficiency [33; 34]. [10] proposes to generate all prompt candidates and selectively incorporate them into each context using a refined strategy. [11] introduces a more efficient method to construct prompts with several sub-prompts that employs prompt tuning with rules without searching. Overall, prompt engineering is an efficient approach that helps bridge the gap between pre-training and fine-tuning.
### Chain-of-Thought
Chain-of-Thought is a specialized tool designed for the task of multi-step reasoning and decision-making [44]. The traditional prompting method [4] performs poorly when it comes to tasks that require reasoning abilities. Inspired by the concept of using intermediate steps to solve reasoning problems [19; 6], the chain of thought method mimics a step-by-step thinking process and breaks down multi-step problems into intermediate steps, enabling the model to deduce more accurate results [23]. Additionally, [52] address the challenge of dealing with tasks that are more complex than example prompts, and proposes the least-to-most prompting approach which breaks down complex problems into smaller and easier subproblems. Moreover, [42] introduces self-consistency as a replacement for the greedy decoding algorithm, which samples and selects the most consistent reasoning paths to replace the greedy set.
## 3 Method
### Overview
Text-to-image diffusion models serve as an illustrative example for showcasing the functionality of PromptCoT. However, it is important to note that the same methodology can be extended and applied to other diffusion-based generative models, including text-to-video and various other domains. In
Figure 2: Pipeline of PromptCoT. (Left) We build three types of instruction patterns for training. (Middle) We utilize adapters for multi-task adaptation. (Right) Results of t-continue, t2t booster and PromptCoT.
the context of training text-to-image diffusion-based models, which involve image-text pairs and employ an iterative denoising process to reconstruct images based on corresponding prompts, our hypothesis posits that prompts aligned with high-quality images within the training set are more inclined to yield visually superior outputs. We randomly select 5 sets of 50 prompts corresponding to images with varying levels of quality from the Stable Diffusion training set, LAION [35], for image generation. The aesthetic score, an image quality metric introduced by [31], is used to represent the quality of individual images. As shown in Table 1, the generation performance is highly related to the prompts corresponding to the original image quality. For convenience, we refer to them as "**high-quality prompts**". In the following sections, we explain the key components of PromptCoT, which is a prompt booster that can align input prompts with high-quality prompts in the training set, and in turn, improve generation performance.
### Aligning Prompt Distribution with LLM
LLMs are extremely powerful tools that are capable of generating human-like language and completing tasks such as translation, summarization, question answering, etc. They are trained on massive amounts of text data and can learn from unstructured data to generalize to new tasks and domains. LLMs can also be fine-tuned on specific tasks with relatively small amounts of task-specific data, making them highly versatile. In this paper, we leverage this ability to align the distribution of high-quality prompts via fine-tuning a popular LLM LLaMA [40], on text continuation and revision tasks. To fine-tune LLaMA on text continuation, we use an instruction tuning template that includes incomplete text descriptions and a goal to provide a compelling continuation. The instruction tuning template is shown in Figure 3. We feed truncated text prompts placed in the _input_ field to the LLM, supervised by the complete prompts. This enables the LLM to generate continuations containing more details.
For text revision, we train the LLM to map human-like input texts to high-quality prompts. However, acquiring a large amount of human-written input text can be costly. Therefore, we leverage image captions from BLIP as a low-cost source of "human-like" input texts. The details of collecting and filtering data pairs are described in the later section. For training, we construct the instruction tuning template in Figure 4. The training pipeline is similar to continuation, but with the input being human-like prompts. As a result, we obtain a booster capable of performing revision tasks.
### Enhancement with CoT
Instruction tuning enables the LLM to add details and align text distribution, however, it tends to generate extraneous information that degrades performance. As such, we introduce the Chain-of-Thought (CoT) mechanism in the pipeline to address this issue. We set up five steps to make the
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & \multicolumn{3}{c}{Aesthetic Score} \\ \hline Training images & 4-5 & 5-6 & 6-7 & 7-8 \\ \hline Generated images & 5.2 & 5.5 & 6.1 & 6.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of Aesthetic Scores between Generated Images and Corresponding Training Images.
Figure 3: Template of text-continuation dataset (Up) and corresponding output (Bottom).
LLM yield the expected production: (i) Extract key information from the original prompt, such as visual medium and main elements, (ii) Leverage the text-continuation model to append reasonable details, (iii) Extract additional concepts (for example, the color scheme) from the extended prompt and emphasize crucial concepts, (iv) With improved key information and crucial concepts, the LLM can generate a fluent prompt, remaining to be aligned, (v) Leverage the text-revision model to align prompts to the specific distribution. This mechanism extracts and amalgamates crucial information from the aligned continuation and revision, enabling reasonable inferences based on the contextual cues. As a result, a more comprehensive and nuanced final output is produced.
### Multi-task Adaptation
As the training set of different generative models can vary greatly, one approach to adapt to these new datasets is to fine-tune the entire LLM on the task-specific dataset. However, LLMs are typically models with billions of parameters, and allocating a dedicated LLM to each individual model proves impractical due to computational constraints. Moreover, there are plenty of text-to-image generative models trained on different datasets, and a single LLM cannot cover a diverse distribution of these datasets. As an alternative, we integrate adapters that facilitate dataset-specific adaptation, leveraging a shared pre-trained LLM as the foundation for this process. Adapters are lightweight modules that can be independently fine-tuned and subsequently added to the base model. Keeping adapters instead of the whole model significantly reduces memory usage, while enabling the adaptation of the LLM to different datasets.
Figure 4: Template of text-revision dataset (Up) and corresponding output (Bottom).
Figure 5: Composition of fine-tuning tasks including text-continuation, text-revision, text-CoT, and self-instruction of Alpaca.
### Dataset Preparation
We build three types of datasets: text-continuation, text-revision, and text-CoT.
**Text-continuation dataset.** To create this dataset, we filter high-quality prompts from the training data of existing generative models, using criteria such as high CLIP similarity and proper length. In the case of the LAION dataset, we also consider aesthetic scores to ensure a higher quality of prompts. Once high-quality prompts are identified, we truncate a portion of the text, with the remaining front part assigned as input data. The LLM is then trained to generate the missing information and complete the text. This process enables the LLM to learn how to effectively continue text prompts in a manner that is consistent with the style and context of the original text.
**Text-revision dataset.** The dataset consists of human-like texts and corresponding high-quality prompts which are described in the text-continuation dataset. To acquire human-like prompts, we leverage BLIP and CLIP-interrogator for image captioning. Furthermore, we calculate the text distance with the text encoder of CLIP, ensuring a score greater than 0.4 to guarantee semantic relevance between the two prompts.
**Text-CoT dataset.** We use GPT-3.5-Turbo to build a task-specific dataset. Initially, we design a step-by-step interaction with GPT-3.5-Turbo to extract and guide the prompt booster to finish the alignment task, due to the fact that CoT is still difficult for alpaca with a simple finetuning on datasets above. Following the alpaca's thought, 52k pairs are all generated from gpt-3.5-turbo.
## 4 Experimental Results
In this section, we first introduce the details on the datasets, pre-trained models, and the training hyperparameters used for all our experiments in Section 4.1. Then we demonstrate the results of applying PromptCoT to text-to-image and text-to-video pre-trained generative models in Section 4.2 and Section 4.3 respectively.
### Setup
**Dataset.** For training, we build Text-revision and Text-continuation dataset from LAION-aes6plus [35], and Text-CoT dataset with the help of GPT-3.5-turbo. LAION-aes6plus is the subset of LAION, containing 12M image-text pairs with predicted aesthetics scores of 6 or higher. As a supplement, we also train with Text-revision, Text-continuation, and Text-CoT datasets from the WebVid-10M dataset [3] for video generation. For evaluation, we conduct experiments on COCO [18] validation set and MSR-VTT [45] for FID, FVD, aesthetic score, CLIP score, and PickScore.
**Models.** The pre-trained LLaMA-7B is used as the base model and we employ the adapter design outlined in [48] to facilitate multi-task adaptation. Two versions of Stable Diffusion [31], v1.4 and v2.1, are used for image generation. MagicVideo [50] is used for video generation.
**Implementation Details.** We finetune the LLaMA following alpaca's [39] strategy and instruction pattern, which has been verified powerful for text generation tasks. We validate the viability of
Figure 6: Generated images from prompts refined by different aligners. (a) and (h) show the images generated with the original text prompts. (b-g) and (i-n) denote the images generated with text prompts refine by ‘t-continue’, ‘t2t-blip’, ‘t2t-inter’,‘davinci’,‘CoT_d’, and ‘CoT’ respectively.
our two initial ideas by finetuning three task-specific LLaMA for prompt refining works shown in experiments 2. One is trained on the self-constructed text-continuation dataset while the other two are trained on two types of text-revision dataset. While combining such basic methods by CoT, we include a dataset from alpaca, a subset of the text-continuation dataset, and the text-revision dataset with higher text similarity and the CoT dataset as a whole. We evaluate our alignment work on three diffusion models and on different parameters. Furthermore, we evaluate the portability of promptCoT through an adapter by comparing its performance with the fully-finetuned model.
### Text-to-image Evaluation
The COCO [18] validation set is the standard benchmark for evaluating text-to-image models. The key automated performance metrics used are FID to measure image fidelity, CLIP score, PickScore to measure image-text alignment, aesthetic score [22] to predict the aesthetic quality, and Inception Score (IS) to evaluate the diversity. We utilize two versions of Stable Diffusion for image generation with prompts from COCO and our PromptCoT. Table 2 presents the evaluation results for each metric with different single-function boosters including t-continue, t2t-blip, and t2t-inter, as well as a baseline. The results show that incorporating the alignment method proposed in our paper consistently improved the generated image quality across all metrics compared to the baseline. Among the single-function boosters, the t2t-blip booster demonstrates the best performance, as it is able to achieve alignment to a greater extent. For example, it transfers "Boxes of fruit displayed at an open-air market" to "A view of stalls selling fruit at the Harare International Market in Harare, Zimbabwe" by rephrasing
\begin{table}
\begin{tabular}{c c|c c c c c} \hline \hline \begin{tabular}{c} **Generation** \\ **Model** \\ \end{tabular} & \begin{tabular}{c} **Booster** \\ \end{tabular} & \begin{tabular}{c} **Aesthetic** \\ **Score** \\ \end{tabular} & **FID** & **IS** & \begin{tabular}{c} **CLIP** \\ **Score** \\ \end{tabular} &
\begin{tabular}{c} **PickScore** \\ **(avg/recall)** \\ \end{tabular} \\ \hline \multirow{4}{*}{SD v1.4} & baseline & 5.40 & 59.15 & 39.13 \(\pm\) 0.84 & 0.268 & 27.3\%/35.7\% \\ & t-continue & 5.54 & 44.66 & 35.81 \(\pm\) 0.96 & 0.290 & 39.5\%/61.5\% \\ & t2t-blip & 5.62 & 40.77 & 38.56 \(\pm\) 0.77 & 0.293 & 51.4\%/77.5\% \\ scale=7.0 & t2t-inter & 5.44 & 55.76 & 41.00 \(\pm\) 1.17 & 0.271 & 34.3\%/49.0\% \\ & cot\_d & 5.64 & 49.58 & 37.43 \(\pm\) 0.94 & 0.289 & 40.6\%/62.2\% \\ \hline \multirow{4}{*}{SD v2.1} & baseline & 5.60 & 58.02 & 37.51 \(\pm\) 1.00 & 0.266 & 29.4\%/41.7\% \\ & t-continue & 5.70 & 45.62 & 34.44 \(\pm\) 0.71 & 0.287 & 44.3\%/69.9\% \\ & t2t-blip & 5.79 & 40.59 & 37.38 \(\pm\) 1.08 & 0.292 & 56.3\%/82.5\% \\ scale=7.0 & t2t-inter & 5.64 & 54.93 & 38.60 \(\pm\) 0.85 & 0.269 & 37.1\%/55.6\% \\ & cot\_d & 5.78 & 50.41 & 34.88 \(\pm\) 0.95 & 0.290 & 42.9\%/66.2\% \\ \hline \multirow{4}{*}{SD v2.1} & baseline & 5.60 & 58.17 & 36.37 \(\pm\) 0.81 & 0.267 & - \\ & t-continue & 5.64 & 46.59 & 33.29 \(\pm\) 0.68 & 0.287 & - \\ ddim step=250 & t2t-blip & 5.76 & 40.89 & 36.16 \(\pm\) 0.84 & 0.292 & - \\ scale=12.0 & t2t-inter & 5.64 & 55.37 & 38.10 \(\pm\) 1.16 & 0.269 & - \\ & cot\_d & 5.75 & 50.41 & 34.88 \(\pm\) 0.94 & 0.290 & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Text-to-image generation performance.** We evaluate the generation performance on Stable Diffusion v1.4 and v2.1 on key metrics including aesthetic score, FID, IS, CLIP score and PickScore.
\begin{table}
\begin{tabular}{c c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Booster**} & \begin{tabular}{c} **Aesthetic** \\ **Score** \\ \end{tabular} & **FID** & **IS** & \begin{tabular}{c} **CLIP** \\ **Score** \\ \end{tabular} & **PickScore** \\ \hline \multirow{4}{*}{\begin{tabular}{c} Alpaca \\ epochs = 3 \\ \end{tabular} } & t-continue & 5.70 & 45.62 & 34.44 \(\pm\) 0.71 & 0.287 & 44.3\%/69.9\% \\ & t2t-blip & 5.79 & 40.59 & 37.38 \(\pm\) 1.08 & 0.292 & 56.3\%/82.5\% \\ & t2t-inter & 5.64 & 54.93 & 38.60 \(\pm\) 0.852 & 0.269 & 37.1\%/55.6\% \\ & cot\_d & 5.78 & 50.41 & 34.88 \(\pm\) 0.95 & 0.290 & 42.9\%/66.2\% \\ \hline \multirow{4}{*}{
\begin{tabular}{c} Adapter \\ epochs = 5 \\ \end{tabular} } & t-continue & 5.69 & 48.00 & 35.8 \(\pm\) 0.57 & 0.283 & - \\ & t2t-blip & 5.70 & 46.86 & 38.0 \(\pm\) 0.66 & 0.289 & - \\ & t2t-inter & 5.64 & 56.28 & 39.0 \(\pm\) 0.64 & 0.269 & - \\ & cot\_d & 5.85 & 51.06 & 31.8 \(\pm\) 0.65 & 0.251 & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Text-to-image generation performance with adapters.** We fine-tune adapters by 5 epochs and compare them with fully fine-tuned Alpaca. Model with adapters achieves comparable results.
the expression and adding reasonable details. In contrast, the t2t-inter booster, which has a similar function to t2t-blip, shows inferior performance, although it still outperforms the baseline. This could be due to the CLIP-interrogator used to create the text-revision dataset introducing irrelevant entities. Furthermore, we test with different factors of classifier-free guidance to prove the generality of our PromptCoT. Varying the scale of classifier-free guidance results in consistent performance.
### Text-to-video Evaluation
In addition, we experiment with the text-to-video evaluation task to demonstrate the effectiveness of our approach. We employ two single-function boosters, t-continue, and t2t-blip on the WebVid-10M dataset [3]. For t2t-blip, we uniformly sample the video and randomly select five frames, which serve as input for the blip model and be used to generate the revision result. Then, we finetune the LLaMA model following alpaca's [39] strategy and build prompts from MSR-VTT with the fine-tuned model. We use MagicVideo [50] as the base model to test the effectiveness of our prompts. The results are shown in Table 5. The results indicate that the boosters are effective in enhancing the quality of the generated videos compared to the baseline, at least they "do no harm". Among the boosters, the booster better aligns the prompts and achieves the best performance overall. For cot_d, we generate 21k data with the help of GPT-3.5-turbo. Similar to text, we utilize a chain of five questions to generate the expected production, but with subtle differences to encourage GPT-3.5-turbo to generate more video-related features, e.g., movement. Similar to text generation, we adopt a chain of five questions to generate the expected production for video prompts. However, there are subtle differences in the question prompts to encourage GPT-3.5-turbo to incorporate more video-related features, such as movement, into its generated content. For example, "a large passenger jet flying in the sky at sunset" can be refined to "Boeing 747 flying across a vibrant sunset backdrop in a captivating, cinematic 4K video. Slowly gaining altitude with wings tilting slightly, this footage captures the plane's majesty". The scores of cot_d will be included in the supplementary material.
## 5 Conclusion
In this paper, we present PromptCoT, an innovative system designed to autonomously enhance the quality of prompts used in diffusion-based generative models, which are critical for high-fidelity visual content generation. PromptCoT leverages pre-trained Large Language Models (LLMs) and a unique Chain-of-Thought (CoT) mechanism to refine prompts, thereby improving the alignment between the original and refined prompts. To balance computational efficiency, we employ adapters to allow for efficient adaptation to new datasets or models. Our evaluations demonstrate that PromptCoT can achieve superior performance compared to the baselines.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Booster**} & **Aesthetic** & **CLIP** & \multirow{2}{*}{**PickScore**} \\ & **Score** & & \\ \hline baseline & 5.62 & 0.231 & 16.8\%/26.1\% \\ tcontinue & 5.72 & 0.285 & 37.8\%/66.2\% \\ t2t\_blip & 5.80 & 0.293 & 50.6\%/81.5\% \\ t2t\_inter & 5.66 & 0.269 & 30.7\%/52.5\% \\ cot\_d & 5.79 & 0.291 & 34.9\%/59.5\% \\ cot & 5.80 & 0.293 & 36.4\%/59.0\% \\ davinci & 5.69 & 0.277 & 26.0\%/47.5\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Text-to-image generation performance.** We compare finetuned CoT aligner and davinci-003 model from OpenAI. All metrics are evaluated on a subset of the COCO validation dataset which contains 1k images.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Model** & **Dataset** & **Booster** & **FID** & **FVD** & **CLIP Score** \\ \hline \multirow{2}{*}{MagicVideo} & \multirow{2}{*}{MSR-VTT} & baseline & 36.5 & 998 & 0.284 \\ & & t-continue & 33.2 & 951 & 0.296 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Text-to-video generation performance.** We evaluate the generation performance on MagicVideo on key metrics including FID, FVD, and CLIP score.
## References
* [1] Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. Intrinsic dimensionality explains the effectiveness of language model fine-tuning, 2020.
* [2] Alex Andonian, Quentin Anthony, Stella Biderman, Sid Black, Preetham Gali, Leo Gao, Eric Hallahan, Josh Levy-Kramer, Connor Leahy, Lucas Nestler, Kip Parker, Michael Pieler, Shivanshu Purohit, Tri Songz, Wang Phil, and Samuel Weinbach. GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch, 8 2021.
* [3] Max Bain, Arsha Nagrani, Gul Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval, 2022.
* [4] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. _ArXiv_, abs/2005.14165, 2020.
* [5] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. 2020.
* [6] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021.
* [7] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. _Advances in Neural Information Processing Systems_, 34, 2021.
* [8] Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu, Haitao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juan Li, and Maosong Sun. Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models. _ArXiv_, abs/2203.06904, 2022.
* [9] Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. GIm: General language model pretraining with autoregressive blank infilling. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 320-335, 2022.
* [10] Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. _ArXiv_, abs/2012.15723, 2021.
* [11] Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. Ptr: Prompt tuning with rules for text classification. _ArXiv_, abs/2105.11259, 2021.
* [12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
* [13] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models, 2022.
* [14] Neil Houlsby, Andrei Giurgiu, Stanislav Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, _Proceedings of the 36th International Conference on Machine Learning_, volume 97 of _Proceedings of Machine Learning Research_, pages 2790-2799. PMLR, 09-15 Jun 2019.
* [15] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models, 2021.
* [16] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. Toward controlled generation of text. In _International Conference on Machine Learning_, 2017.
* [17] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning, 2021.
* [18]* Lin et al. [2014] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In _European Conference on Computer Vision_, 2014.
* Ling et al. [2017] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. _arXiv preprint arXiv:1705.04146_, 2017.
* 35, 2021.
* Mahabadi et al. [2021] Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. Compacter: Efficient low-rank hypercomplex adapter layers, 2021.
* Murray et al. [2012] Naila Murray, Luca Marchesotti, and Florent Perronnin. Ava: A large-scale database for aesthetic visual analysis. In _2012 IEEE conference on computer vision and pattern recognition_, pages 2408-2415. IEEE, 2012.
* Narang et al. [2020] Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. Wt5?! training text-to-text models to explain their predictions, 2020.
* Pfeiffer et al. [2020] Jonas Pfeiffer, Andreas Ruckle, Clifton Poth, Aishwarya Kamath, Ivan Vulic, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. AdapterHub: A framework for adapting transformers, 2020.
* Qin and Eisner [2021] Guanghui Qin and Jas' Eisner. Learning how to ask: Querying lms with mixtures of soft prompts. _ArXiv_, abs/2104.06599, 2021.
* Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. 2021.
* Radford et al. [2019] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
* Ramesh et al. [2022] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_, 2022.
* Ramesh et al. [2021] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Dall'e 2: Exploring cross-modal transformers for image generation. _OpenAI Blog_, 2021.
* Rombach et al. [2021] Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 10674-10685, 2021.
* Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models, 2022.
* Saharia et al. [2022] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022.
* Schick et al. [2020] Timo Schick, Helmut Schmid, and Hinrich Schutze. Automatically identifying words that can serve as labels for few-shot text classification. In _International Conference on Computational Linguistics_, 2020.
* Schick and Schutze [2020] Timo Schick and Hinrich Schutze. Exploiting cloze-questions for few-shot text classification and natural language inference. In _Conference of the European Chapter of the Association for Computational Linguistics_, 2020.
* Schuhmann et al. [2022] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. _arXiv preprint arXiv:2210.08402_, 2022.
* Song et al. [2020] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. _arXiv preprint arXiv:2010.02502_, 2020.
* Song and Ermon [2019] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. _Advances in Neural Information Processing Systems_, 32, 2019.
* [38] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. _arXiv preprint arXiv:2011.13456_, 2020.
* [39] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca), 2023.
* [40] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
* [41] Leandro von Werra, Alex Havrilla, Max reciprocated, Jonathan Tow, Aman cat state, Duy V. Phung, Louis Castricato, Shahbuland Matiana, Alan, Ayush Thakur, Alexey Bukhtiyarov, aaronrmm, Fabrizio Milo, Daniel, Daniel King, Dong Shin, Ethan Kim, Justin Wei, Manuel Romero, Nicky Pochinkov, Omar Sanseviero, Reshinth Adithyan, Sherman Siu, Thomas Simonini, Vladimir Blagojevic, Xu Song, Zack Witten, alexandremuzio, and crumb. CarperA/lrlx: v0.6.0: LLaMa (Alpaca), Benchmark Util, T5 ILQL, Tests, March 2023.
* [42] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. _ArXiv_, abs/2203.11171, 2022.
* [43] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. _ArXiv_, abs/2109.01652, 2021.
* [44] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.
* [45] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 5288-5296, 2016.
* [46] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, and Yonghui Wu. Scaling autoregressive models for content-rich text-to-image generation, 2022.
* [47] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. _arXiv preprint arXiv:2210.02414_, 2022.
* [48] Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. _arXiv preprint arXiv:2303.16199_, 2023.
* [49] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. Opt: Open pre-trained transformer language models, 2022.
* [50] Daquan Zhou, Weimin Wang, Hanshu Yan, Weiwei Lv, Yizhe Zhu, and Jiashi Feng. Magicvideo: Efficient video generation with latent diffusion models. _arXiv preprint arXiv:2211.11018_, 2022.
* [51] Daquan Zhou, Weimin Wang, Hanshu Yan, Weiwei Lv, Yizhe Zhu, and Jiashi Feng. Magicvideo: Efficient video generation with latent diffusion models, 2023.
* [52] Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Huai hsin Chi. Least-to-most prompting enables complex reasoning in large language models. _ArXiv_, abs/2205.10625, 2022. | ## Review
### Summary
This paper introduces PromptCoT, a method aimed at enhancing the generation of high-quality images from diffusion models by refining text prompts. The authors propose a framework utilizing large language models (LLMs) to create better-aligned prompts based on high-quality image descriptions, employing techniques like text continuation, revision, and chain-of-thought (CoT). The experimental results suggest improvements in performance, but reviewers express concerns regarding the novelty of the contributions as many techniques utilized are already established in the field. The paper's clarity and organization are noted as strengths, though the technical significance and contribution are questioned.
### Strengths
- Application of LLMs to improve image generation through better text alignment is reasonable.
- Overall presentation is clear and easy to follow, with effective visualizations.
- The proposed PromptCoT framework and its three training methods are well-defined.
- Datasets created for fine-tuning can benefit future research.
### Weaknesses
- Lack of technical novelty as many techniques used (prompt engineering, CoT, adapters) are already well-explored.
- The observation regarding high-quality prompts yielding better results needs more empirical support across various datasets.
- The experiments lack clarity: terminology like t2t-inter, cot_d, and cot need better explanation.
- The paper may generate uncontrollable information due to inherent biases in the LLMs and should address this more thoroughly.
### Questions
- What empirical evidence supports the claim that prompts aligned with high-quality images yield superior outputs across different datasets?
- Can the authors clarify the novelty and technical contributions of the prompting techniques used?
- Is there a theoretical reason why the t2t-blip booster performs better than others?
- How do the proposed methods compare against existing techniques in terms of effectiveness and efficiency?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is sound overall, but the concerns regarding novelty and clarity of experiments affect the overall assessment.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper is well-structured and clear, some explanations in the experimental section could be improved for better understanding.
### Contribution
**Score:** 2
**Description:** 2 = fair; the paper lacks significant technical contributions as it mainly applies existing techniques without introducing novel concepts.
### Rating
**Score:** 5
**Description:** 5 = borderline accept; while there are solid aspects, the lack of novelty and clarity in the contribution raises concerns for full acceptance.
### Paper Decision
**Decision:** Reject
**Reasons:** The decision to reject is based on the paper's limited technical contribution and originality, with concerns regarding the significance of its findings. Despite a clear presentation, the use of established techniques without substantial innovation makes it less competitive compared to other submissions.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# What Do Deep Saliency Models Learn about Visual Attention?
Shi Chen Ming Jiang Qi Zhao
Department of Computer Science and Engineering,
University of Minnesota
{chen4595, mjiang, qzhao}@umn.edu
###### Abstract
In recent years, deep saliency models have made significant progress in predicting human visual attention. However, the mechanisms behind their success remain largely unexplained due to the opaque nature of deep neural networks. In this paper, we present a novel analytic framework that sheds light on the implicit features learned by saliency models and provides principled interpretation and quantification of their contributions to saliency prediction. Our approach decomposes these implicit features into interpretable bases that are explicitly aligned with semantic attributes and reformulates saliency prediction as a weighted combination of probability maps connecting the bases and saliency. By applying our framework, we conduct extensive analyses from various perspectives, including the positive and negative weights of semantics, the impact of training data and architectural designs, the progressive influences of fine-tuning, and common failure patterns of state-of-the-art deep saliency models. Additionally, we demonstrate the effectiveness of our framework by exploring visual attention characteristics in various application scenarios, such as the atypical attention of people with autism spectrum disorder, attention to emotion-eliciting stimuli, and attention evolution over time. Our code is publicly available at [https://github.com/szzexpoi/saliency_analysis](https://github.com/szzexpoi/saliency_analysis).
## 1 Introduction
Attention deployment is a complex and fundamental process that enables humans to selectively attend to important sensory data in the visual environment. Scientists have been fascinated by the question of what drives human visual attention for decades. Understanding the mechanisms of visual attention not only sheds light on the human visual system but also helps computational methods to localize critical sensory inputs more efficiently.
One approach to predicting human attention is through saliency models, which have received considerable research attention. A saliency model predicts the most visually important regions in an image that are likely to capture attention. Earlier models follow a feature integration approach [1, 2, 3, 4] and extract low-level features (_e.g.,_ colors, intensity, orientations) or higher-level features (_e.g.,_ objects, semantics) from the input image to infer saliency [5, 6, 7]. While these models show initial success, their performance is limited by the difficulty of engineering relevant visual features. In contrast, recent saliency models [8, 9, 10, 11] follow a data-driven approach, leveraging large datasets [12] and deep neural networks [13, 14, 15] to learn discriminative features. These models achieve human-level performance on several saliency benchmarks [12, 16, 17], thanks to their accurate detection of important objects and high-level semantics [18, 19]. However, due to the lack of transparency, it is still unclear what semantic features these models have captured to predict visual saliency.
To understand how deep neural networks predict visual saliency, in this paper, we develop a principled analytic framework and address several key questions through comprehensive analyses:* How do deep saliency models differentiate salient and non-salient semantics?
* How do data and model designs affect semantic weights in saliency prediction?
* How does fine-tuning saliency models affect semantic weights?
* Can deep saliency models capture characteristics of human attention?
* What is missing to close the gap between saliency models and human attention?
Our method connects implicit features learned by deep saliency models to interpretable semantic attributes and quantifies their impact with a probabilistic method. It factorizes the features into trainable bases, and reformulates the saliency inference as a weighted combination of probability maps, with each map indicating the presence of a basis. By measuring the alignment between the bases and fine-grained semantic attributes (_e.g.,_ concepts in Visual Genome dataset [20]), it quantifies the relationships between diverse semantics and saliency. This unique capability enables us to identify the impact of various key factors on saliency prediction, including training datasets, model designs, and fine-tuning. It can also identify common failure patterns with state-of-the-art saliency models, such as SALICON [9], DINet [11], and TranSalNet [10]. Beyond general saliency prediction, the framework also shows promise in analyzing fine-grained attention preferences in specific application contexts, such as attention in people with autism spectrum disorder, attention to emotional-eliciting stimuli, and attention evolution over time. In sum, our method offers an interpretable interface that enables researchers to better understand the relationships between visual semantics and saliency prediction, as well as a tool for analyzing the performance of deep saliency models in various applications.
## 2 Related Works
Our work is most related to previous studies on visual saliency prediction, which make progress in both data collection and computational modeling.
**Saliency datasets.** With the overarching goal of understanding human visual perception, considerable efforts have been placed on constructing saliency datasets with diverse stimuli. The pioneering study [16] proposes an eye-tracking dataset with naturalistic images, which later becomes a popular online benchmark [21]. Several subsequent datasets characterize visual saliency into finer categories based on visual scenes [22], visual semantics [17], sentiments [23; 24], or temporal dynamics [25], to study the impact of different experimental factors on attention. To overcome the difficulties of tracking eye movements, Jiang _et al._[12] leverage crowd-sourcing techniques and use mouse-tracking as an approximation for eye gaze, which results in currently the largest saliency dataset. Recent works also consider broader ranges of visual stimuli, including videos [26; 27], graphical designs [28; 29], web pages [30; 31], crowds [32], driving scenes [33] and immersive environments [34; 35]. In this study, we focus on visual saliency for naturalistic images, which serves as the foundation of saliency prediction studies.
**Saliency models.** Prior works have developed saliency prediction models to quantitatively study human attention. Inspired by seminal works on saliency modeling [5; 6; 36], early saliency models [1; 2; 3; 4; 37] typically adopt a bottom-up approach, integrating handcrafted features (_e.g.,_ colors, intensity, and orientations). On the other hand, recent approaches take a different route and leverage deep neural networks [13; 14; 15; 38] to automatically learn features and predict saliency. Vig _et al._[39] is one of the first attempts to utilize convolutional neural networks (CNNs) for saliency prediction. Huang _et al._[9] consider features learned from multi-scale inputs to model the coarse-to-fine semantics. Kruthiventi _et al._[40] leverage convolutional layers with diverse kernel sizes to capture multi-scale features and incorporate positional biases with location-dependent convolution. Kummerer _et al._[41] demonstrate the usefulness of features of visual objects in saliency prediction. Cornia _et al._[8] develop a recurrent neural network to iteratively refine features for saliency prediction. Yang _et al._[11] improve saliency prediction with dilated convolution to capture information from broader regions. Lou _et al._[10] study the usefulness of self-attention [15] for saliency prediction. Instead of building new models, several works [18; 42; 43; 44; 45; 46] study the behaviors of models. By analyzing predictions on different categories of stimuli, they identify key factors behind the successes and failures of existing saliency models (_e.g.,_ accurate detection of semantic objects, incorporation of low- and high-level features, contrast between diverse visual cues, detection of Odd-One-Out target, and etc.), and propose directions for improvements. A recent work [19] also analyzes the features learned by saliency models by aligning activation maps with segmentation for a selection of objects (_e.g.,_ body parts, food, and vehicles in [17]).
Our research contributes to the field of attention research by introducing a rigorous methodology for analyzing deep saliency models. In contrast to previous studies [18; 19; 42; 43], our study has three key distinctions: First, while previous analyses of deep saliency models were restricted to a predefined set of salient objects (_e.g.,_ object segmentation used in [19]), our method automatically identifies both salient and non-salient semantics from a vocabulary of objects, parts, actions, and attributes. Second, different from previous qualitative analyses, our method quantifies the weights of these semantics in saliency prediction. It allows us to investigate the impacts of various factors (_e.g.,_ the contributions of positive/negative semantics, the characteristics of datasets and model designs, and the process of fine-tuning) on saliency prediction, offering insights into the development of future deep saliency models. Third, our approach goes beyond analyzing the general visual saliency and demonstrates its strength in characterizing human visual attention under specific conditions such as the attention of people with autism spectrum disorder, the saliency of emotion-eliciting images, and time-course attention evolution.
## 3 Methodology
Human visual attention is influenced by a spectrum of visual cues, from low-level contrasts to high-level semantic attributes [5]. However, current deep learning-based saliency models [8; 9; 11] remain opaque in terms of the semantic attributes they have learned and how these attributes contribute to saliency prediction. To address this gap, we propose a method that decomposes neural network features into discriminative bases aligned with a wide range of salient or non-salient semantic attributes, and quantifies their weights in saliency prediction.
As illustrated in Figure 1, our method decomposes visual features by projecting them onto a collection of trainable bases, and uses the probabilistic distribution of bases to infer visual saliency. The overall idea is to identify both salient and non-salient semantics and quantify their impact on saliency prediction. To achieve this, we start with a deep saliency model and compare the features learned at the penultimate layer with different bases. This is done using a dot product between features \(V\in\mathbb{R}^{M\times C}\) (\(M\) and \(C\) are the spatial resolution and dimension of features) and bases \(B\in\mathbb{R}^{N\times C}\) (\(N=1000\) is the number of bases defined based on the number of units in the final layers of deep saliency models [9; 10]), which corresponds to their cosine similarity:
\[\alpha=\sigma(V\otimes B^{T}) \tag{1}\]
Figure 1: Illustration of our method. It factorizes implicit features into trainable bases, and interprets the meanings of bases by aligning them with diverse semantics. Each basis can be interpreted as a weighted combination of semantics (_e.g.,_ face, female, and happy). By reformulating the saliency inference with a probabilistic method, the relationships between semantics and saliency can be quantified by integrating the model weights \(W_{sal}\) (_i.e.,_ the weight of each basis) and the semantic alignment \(O\) (_i.e.,_ the composition of semantics for each basis).
where \(\otimes\) denotes the dot product. \(\sigma\) is the Sigmoid activation for normalization. \(\alpha_{i,j}\in[0,1]\) represents the probability of \(j^{th}\) basis \(b_{j}\) detected at the \(i^{th}\) region \(P(b_{j}=1|V_{i})\). Inspired by [47] for decomposing model weights, we factorize the features as a weighted combination of matched bases:
\[V_{i}^{f}=\sum_{j=1}^{N}\alpha_{i,j}B_{j} \tag{2}\]
where \(V^{f}\in\mathbb{R}^{M\times C}\) are factorized features used to predict the saliency map \(S=W^{f}V^{f}\) (\(S\in\mathbb{R}^{M}\), \(W^{f}\) are weights of the last layer).
Upon building the connections between visual features and discriminative bases, we then re-route the final saliency prediction by (1) freezing all model weights including the bases, and (2) adjusting the last layer for saliency inference based on the probabilistic distribution \(\alpha\). We train a new layer (with weights \(W^{sal}\)) for predicting the saliency map:
\[S=\sum_{j=1}^{N}W_{j}^{sal}\alpha_{:,j} \tag{3}\]
Intuitively, the method formulates the problem of saliency prediction as learning the linear correlation between the detected bases and visual saliency, which can be denoted as learning \(P(S|b_{1},b_{2},...,b_{N})\). With the intrinsic interpretability of the design, _i.e.,_\(\alpha\) as the probabilistic distribution of bases and \(W^{sal}\) encoding the positive/negative importance of bases, we are able to investigate the weights of different bases to visual saliency.
The final step is to understand the semantic meanings of each basis. Unlike previous studies [18, 19, 42, 43] that focus on predefined salient objects, we take into account a comprehensive range of semantics without assumptions on their saliency. Specifically, our method leverages the factorization paradigm to measure the alignment between each basis and the semantics. Given an image and the regions of interest for different semantics (_e.g.,_ bounding box annotations in Visual Genome [20]), we (1) compute the probabilistic map for each \(j^{th}\) basis \(\alpha_{:,j}\in\mathbb{R}^{M}\), and (2) measure its alignment \(O_{j,p}\) with the regions of interest \(R_{p}\) for each semantic \(p\). Following [48], we binarize the probabilistic map with a threshold \(t_{j}\) and measure the alignment with Intersection over Union (IoU):
\[O_{j,p}=\frac{\left|\mathbb{I}[\alpha_{:,j}>t_{j}]\,\cap\,R_{p}\right|}{\left| \mathbb{I}[\alpha_{:,j}>t_{j}]\,\cup\,R_{p}\right|} \tag{4}\]
We use an adaptive threshold \(t_{j}\) for each individual basis, which is defined to cover the top \(20\%\) of regions of probabilistic maps. Through iterating the measuring process for all images within the dataset, we are able to link bases learned from saliency prediction to a variety of visual semantics. We consider the top-5 semantics matched with each basis, and incorporate their average alignment scores \(\hat{O}_{j,p}\) for determining the weight \(I\) to saliency:
\[I_{p}=\frac{\sum\limits_{j=1}^{N}W_{j}^{sal}\hat{O}_{j,p}}{Z} \tag{5}\]
where \(Z\) is the normalization factor that normalizes the contribution of the semantics to the range of [-1, 1] (for semantics with positive/negative contributions, Z denotes the maximal/minimum contribution among all semantics). This approach enables us to capture a broader range of contributing semantics, while avoiding an overemphasis on dominant salient/non-salient semantics (e.g., face and cloudness).
Overall, our method establishes the foundation for bridging implicit features learned by deep saliency models with interpretable semantics. It goes beyond existing studies that analyze models' predictive behaviors on a selection of object categories, as it considers a comprehensive range of semantics without assuming their relevance to saliency. It provides insights into how well the models capture both salient and non-salient semantics, and how they quantitatively contribute to saliency prediction.
## 4 Experiment
### Implementation
**Semantic annotations.** To correlate implicit features with interpretable visual semantics, we leverage the Visual Genome [20] dataset. It has (1) multiple objects in the same scene to derive relative importance; and (2) a broad coverage of semantics in the naturalistic context, including objects, parts, attributes, and actions. We use the bounding box annotations of semantics (_i.e.,_\(R\) in Equation 4) to measure the alignment between bases and semantics.
**Model configuration.** We experiment with three state-of-the-art saliency prediction models, including SALICON [9], DINet [11] and TranSalNet [10]. All models are optimized with a combination of saliency evaluation metrics (_i.e.,_ Normalized Scanpath Saliency (NSS) [49], Correlation Coefficient (CC) [50], and KL-Divergence (KLD) [51]) as proposed in [8], and use ResNet-50 [13] as the backbone. Model training follows a two-step paradigm: (1) The model is optimized to factorize features with trainable bases, where the weighted combination of bases (_i.e.,_\(V^{f}\) in Equation 2) is used for predicting the saliency map. Note that we do not use pretrained and fixed deep saliency models, but optimize the corresponding model architecture with the proposed factorization modules. (2) We freeze the model weights learned in the previous step and reroute the saliency inference to derive the saliency map from the probabilistic distribution \(\alpha\) (see Equation 3) so that the interpretation is on features learned by the same saliency model. Only the last layer \(W^{sal}\) is fine-tuned to learn the correlation between the distribution of bases and visual saliency.
### How Do Deep Saliency Models Differentiate Salient and Non-Salient Semantics?
Deep saliency models are powerful tools for predicting visual saliency, and their ability to incorporate semantic information is crucial for closing the gap between computational modeling and human behaviors [17]. To gain insight into the semantics that deep saliency models learn and their contributions to saliency prediction, we apply our framework to the state-of-the-art DINet [11] trained on the SALICON [12] dataset (see Section 4.3 for results on other models and datasets), and explicitly measure the weights of diverse semantics during the inference process (_i.e.,_\(I_{p}\) in Equation 5).
Figure 2 shows that the DINet model effectively captures a variety of semantics that are closely related to visual saliency. These include social cues such as faces, noses, and beards, actions like having a meeting, snowboarding, and jumping, clothing such as goggles, and salient object categories like animals, vehicles, and text. These findings resonate with previous research [18; 19; 43] that saliency models learn to recognize salient cues. More importantly, they showcase the versatility of our approach in automatically identifying key contributing factors of saliency models without any preconceived assumptions [43; 19], enabling attention analyses across a diverse array of scenarios (see Section 4.5).
Figure 2: Important semantics learned by the deep saliency model and their weights. We visualize the top-60 semantics with significantly positive or negative weights.
Another unique advantage of our approach is to simultaneously derive semantics that both positively and negatively contribute to saliency, while previous studies commonly focus on the positive side. Our results reveal a clear separation between the semantics that contribute positively and negatively to saliency. Specifically, the model considers social and action cues to have a positive contribution to saliency, while semantics related to backgrounds such as sky (_e.g.,_ hazy and overcast), ground (_e.g.,_ pavement and carpet), and scene (_e.g.,_ bathroom and park) have negative weights. The observation demonstrates that deep saliency models' success is not only due to the accurate detection of salient objects [18; 19; 43], but also strongly related to the ability to distinguish salient and non-salient semantics.
### How Do Data and Model Designs Affect Saliency Prediction?
Training data and model designs play crucial roles in determining how well deep saliency models perform [18]. To understand these roles better, we conduct a comparative analysis of the effects of different training datasets and model architectures on saliency prediction.
Figure 3 compares the results of three DINet models trained on different datasets: SALICON [12], OSIE [17], and MIT [16]. It reveals that the shifts in semantics weights are tightly coupled with the characteristics of the training data. For instance, the model trained on OSIE pays more attention to social cues and non-salient semantics, because the OSIE dataset collects attention on semantic-rich stimuli with diverse social cues. Differently, the model trained on MIT assigns a less positive weight to social cues and a more negative weight to the vehicle category, which is likely due to the dataset's less emphasis on social semantics and the co-occurrence of vehicles and more salient objects. Therefore, differences in the semantic weights across models trained on different datasets reflect the variations in the semantics presented in the respective datasets.
Figure 4 compares the semantic weights of three different saliency models trained on the same SALICON [12] dataset: _i.e.,_ DINet [11], SALICON [9], and TranSalNet [10]. While all models place the highest weights on actions and the lowest weights on ground, scene, and sky, the differences in model designs are reflected in their semantic weights. For instance, compared to the other models, TranSalNet considers clothing and text to be strongly salient but sky to be less non-salient. The results
Figure 4: Semantic weights for different saliency models.
Figure 3: Semantic weights for DINet trained on different datasets.
shed light on the relationships between the designs and behaviors of saliency models, and show the usefulness of our framework in helping researchers tailor their models for specific applications.
Despite the differences in model behaviors across datasets and models, all models excel at automatically identifying salient semantics, _e.g.,_ action and social cues, and differentiating foreground from background semantics. The observation is consistent with our findings in Section 4.2, and indicates that the ability to correlate semantics with saliency and quantify their positive/negative contributions is a systematic advantage of deep saliency models.
### How Does Fine-tuning Saliency Models Affect Semantic Weights?
The process of fine-tuning is crucial for training deep saliency models, as highlighted in previous studies [18, 19]. To better understand the evolution of feature weights during fine-tuning, we conduct experiments on models trained in three scenarios of fine-tuning. These scenarios include models with fixed ImageNet [52] features (W/o fine-tuning), models that are fine-tuned for a single epoch (Single-epoch fine-tuning), and fine-tuned models with the best validation performance (Complete fine-tuning). We experiment with three different models, namely SALICON [9], DINet [11], and TranSalNet [10], and report the average results obtained from our experiments.
As shown on the left side of Figure 5, models with fixed ImageNet features already have the ability to identify salient semantics, such as action and social cues, highlighted with strong positive weights. This is consistent with previous findings [19], which validates the effectiveness of our approach.
When comparing the results of models with single-epoch fine-tuning (shown in the middle of Figure 5) to those without fine-tuning (shown on the left), we notice a significant shift in the negative weights. Specifically, while models with fixed ImageNet features identify scene-related semantics to contribute most negatively to saliency, after being fine-tuned for a single epoch, the models now focus on sky-related semantics and have a weaker emphasis on scene-related semantics. This can be attributed to the fact that ImageNet features are learned from iconic images, which are inherently insensitive to the sky background. However, since the sky is a strong indicator of low saliency values, it is necessary to learn this semantic to achieve accurate saliency prediction (note that the largest performance gain also occurs during the first epoch).
Finally, when examining the fully-tuned models (shown on the right side of Figure 5), we find that semantic weights continue to evolve during the later epochs of fine-tuning. Unlike the first epoch which imposes larger changes on a few negative semantics, the subsequent fine-tuning mostly plays a role in refining the weights of a broader range of semantics, enabling the models to become more sensitive to salient cues (_e.g._ action and text) and continuously adjusts the weights of negative semantics (_e.g._ ground and other).
These observations offer a comprehensive view of how deep saliency models progressively adapt ImageNet features through fine-tuning. They show that fine-tuning first concentrates on semantics with negative weights to saliency, which are not well captured in the pretrained features, and then gradually adjusts the relative weights of diverse semantics. Understanding the evolution of model behaviors during training can provide insights into optimizing the learning recipes for saliency models and enabling them to progressively encode the knowledge of diverse semantics.
Figure 5: Evolution of semantic weights throughout the fine-tuning process.
### Can Deep Saliency Models Capture Characteristics of Human Attention?
Human attention is influenced by several factors, such as the visual preferences of viewers, characteristics of visual stimuli, and temporal dynamics. We investigate the ability of DINet to capture the impact of these factors by training it on different attention data for each factor. We visualize results for the most discriminative semantics, and include the complete analysis in supplementary materials.
Firstly, we study the impact of visual preferences on attention with two subject groups, _i.e.,_, people with autism and those without, using a dataset with attention data of subjects from the two groups [53]. We train a DINet model on each set to compare their corresponding semantic weights. As shown in Figure 5(a), both models assign significant weights to social cues, but the model for the autism group has a considerably weaker emphasis on action cues. This can be linked to the deficits in joint attention for people with autism [54, 55]. Additionally, the model for the autism group also assigns a strong positive weight to the "other" category. The specific objects highlighted in the category are related to gadgets, _e.g.,_ digital, illuminated, and metal (see supplementary materials for details), which is consistent with the findings about the special interests of people with autism [53].
Next, we explore the impact of stimuli with diverse characteristics with images exhibiting positive, negative, and neutral sentiments, using the EMOd dataset [23]. Figure 5(b) shows that models trained on stimuli eliciting strong emotions (positive and negative) exhibit a higher emphasis on social and action cues (_e.g.,_ face and nose, see our supplementary materials for details) than the one for neutral sentiment. Their behaviors align with previous findings that human attention generally prioritizes emotion-eliciting content over non-emotion-eliciting content (_e.g.,_ clothing) [56, 57]. We also identify that models for positive/negative sentiments place more focus on human-related objects (_e.g.,_ social cues and clothing) than objects less related to humans (_e.g.,_ animals), which is a driving factor for characterizing the emotion prioritization effect on attention deployment.
Finally, we investigate the effects of temporal dynamics by conducting experiments on the CodeCharts1K [25] dataset that provides attention annotations collected at 0.5, 3, and 5 seconds. As depicted in Figure 5(c), a shift of focus from dominantly salient cues (_e.g.,_ action and social cues) to more diverse semantics (_e.g.,_ vehicle, animal, and color) is depicted by the gradually increasing weights for the latter group. It is because viewers usually engage their attention to the most salient cues at the beginning of visual exposure (_i.e.,_ within 3 seconds) before broadening their focus.
Overall, our findings suggest that deep saliency models can encode the fine-grained characteristics of diverse attention. They also validate the usefulness of our approach in revealing the discriminative patterns of attention and shed light on how visual attention is influenced by various factors.
### What Is Missing to Close the Gap Between Saliency Models and Human Attention?
Previous studies [18, 43] have identified a collection of common mistakes for saliency models by probing the predicted saliency maps. In this paper, we aim to complement these studies by analyzing the failure patterns within the intermediate inference process using the proposed factorization framework.
Figure 6: Semantic weights for characterizing the attention on three different settings. From left to right are results for the attention of people with and without autism, attention to stimuli eliciting different emotions, and the attention of different time periods. Figures share the same y-axis. Note that the results in the three figures are derived from different datasets selected based on the context of analyses.
To conduct our analysis, we first select the common success and failure examples where three tested models (_i.e.,_ SALICON [9], DINet [11], and TranSalNet [10]) consistently have high/low NSS scores [49]. Then we perform a qualitative analysis by visualizing the spatial probabilistic distribution (\(\alpha\) in Equation 3) of the bases for semantics with positive (red) and negative (blue) weights (see our supplementary materials for details), which is used to derive the final saliency maps.
In the successful examples (see the left panel of Figure 7), we find that accurate saliency prediction correlates with the differentiation of diverse semantics. Specifically, stimuli for these examples typically have salient and non-salient regions belonging to different semantics. Therefore, with the ability to distinguish positive and negative semantics (_i.e.,_ with discriminative distributions of the corresponding bases), models can readily determine their saliency distribution.
However, in the failure examples (see the right panel of Figure 7), models commonly struggle to determine the saliency within objects (_e.g.,_ bicycle in the \(1^{st}\) failure example) or among objects with similar semantics (_e.g.,_ elephants in the \(2^{nd}\) failure example). Investigation of the probabilistic distribution of bases shows that models typically have a uniform-like distribution of bases on object parts or among objects of the same category, thus inherently incapable of determining the importance of the bases to construct an accurate saliency map. We also note that existing models have difficulty with scenes without salient objects, which is illustrated in the \(3^{rd}\) failure examples with a relatively empty scene.
As shown in Figure 7) (second column of the right panel), the ground truth human attention of the failure patterns is scattered. We further look into the inter-subject variability of these cases and found that their inter-subject variability is high, suggesting that human viewers may not agree on where to look, and therefore the ground truth maps are less valid. In this case, one assumption about saliency modeling (i.e., certain commonality about human attention patterns) may not be true, and the validity of using the ground truth human map for training and evaluation (i.e., the standard leave-one-subject-out approach), and the expected behavior of targeted models are interesting and open questions.
Overall, while high-level semantics learned in existing deep saliency models are powerful, we hypothesize that leveraging more structured representations to encode the contextual relationships between semantics, and integrating mid- and low-level cues will be helpful for accommodating challenging scenarios in saliency prediction.
### Quantitative Evaluation of the Interpretable Model
The fundamental objective of our study is to develop a principled framework for understanding the underlying mechanism behind deep saliency models without altering their inherent behaviors. For this, we only introduce minimal architectural modifications, limited to the last two layers of the saliency models, thereby ensuring that their performance aligns seamlessly with the original models across all datasets. To complement our aforementioned analyses and further substantiate the efficacy
Figure 7: Successful (left) and failure (right) examples of deep saliency models. We visualize the predictions from DINet, as those for the other two models are similar. For the distribution of bases, from red to blue, the probabilities of positive bases decrease while those for negative bases increase.
of our methodology, we quantitatively evaluate the saliency prediction performance of our method (using a DINet backbone trained the SALICON training split as an example) on three commonly used saliency datasets, including OSIE [17], MIT [16], and SALICON [12]. Comparative results reported in Table 1 demonstrate the competitive nature of our approach with respect to state-of-the-art methods, and validate its effectiveness in achieving the balance between interpretability and model performance.
## 5 Conclusion, Limitations, and Broader Impacts
As deep saliency models excel in performance, it is important to understand the factors contributing to their successes. Our study introduces a novel analytic framework to explore how deep saliency models prioritize visual cues to predict saliency, which is crucial for interpreting model behavior and gaining insights into visual elements most influential in performance. We discover that the models' success can be attributed to their accurate feature detection and their ability to differentiate semantics with positive and negative weights. These semantic weights are influenced by various factors, such as the training data and model designs. Furthermore, fine-tuning the model is advantageous, particularly in allocating suitable weights to non-salient semantics for optimal performance. Our framework also serves as a valuable tool for characterizing human visual attention in diverse scenarios. Additionally, our study identifies common failure patterns in saliency models by examining inference processes, discusses challenges from both human and model attention, and suggests modeling with holistic incorporation of structures and lower-level information.
Despite advancing attention research from different perspectives, our work still has room for improvement. Specifically, the current study focuses on visual attention deployment in real-world scenarios and employs the commonly used natural scenes as visual stimuli. We are aware that certain applications (_e.g.,_ graphical design and software development) may also involve artificial stimuli, such as advertisements, diagrams, and webpages, and extension of the proposed framework to broader domains can be straightforward and interesting.
We anticipate several positive impacts stemming from this research. Firstly, by advancing the understanding of deep saliency models, valuable insights can be applied to optimize interfaces for human-computer interactions. This, in turn, will enhance the efficiency and reliability of the next generation of computer-aided systems, leading to improved user experiences and increased productivity. Secondly, the accurate capture of visual importance can have significant implications for individuals with visual impairments. Saliency models can effectively assist visually impaired individuals in navigating and interacting with both people and environments. This could empower them to engage more fully in various aspects of daily life, fostering independence and inclusivity. In sum, this comprehensive exploration of computational saliency modeling holds great potential for broader societal benefits.
## Acknowledgements
This work is supported by NSF Grants 2143197 and 2227450.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{OSIE [17]} & \multicolumn{2}{c}{MIT [16]} & \multicolumn{2}{c}{SALICON [12]} \\ \cline{2-7} & NSS & CC & NSS & CC & NSS & CC \\ \hline SALICON [9] & 2.75 & 0.63 & 2.56 & 0.70 & 1.89 & 0.86 \\ \hline SAM [8] & 2.70 & 0.65 & 2.47 & 0.69 & 1.84 & 0.86 \\ \hline DINet [11] & 2.88 & 0.63 & 2.54 & 0.70 & 1.92 & 0.87 \\ \hline Ours & 2.91 & 0.64 & 2.53 & 0.70 & 1.89 & 0.86 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparative results of saliency prediction on three popular datasets. Models are evaluated with two common metrics, including Normalized Scanpath Saliency (NSS) [49] and Correlation Coefficient (CC) [50].
## References
* [1] Neil Bruce and John Tsotsos. Saliency based on information maximization. In _NeurIPS_, volume 18, 2005.
* [2] Jonathan Harel, Christof Koch, and Pietro Perona. Graph-based visual saliency. In _Advances in Neural Information Processing Systems_, volume 19. MIT Press, 2006.
* [3] Laurent Itti and Christof Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. _Vision Research_, 40(10):1489-1506, 2000.
* [4] Eleonora Vig, Michael Dorr, and David Cox. Large-scale optimization of hierarchical features for saliency prediction in natural images. In _CVPR_, pages 2798-2805, 2014.
* [5] Christof Koch and Shimon Ullman. Shifts in selective visual attention: Towards the underlying neural circuitry. _Matters of Intelligence_, 188:115-141, 1987.
* [6] Anne M. Treisman and Garry Gelade. A feature-integration theory of attention. _Cognitive Psychology_, 12(1):97-136, 1980.
* [7] Jeremy M Wolfe. Guided search 2.0: The upgrade. _Proceedings of the Human Factors and Ergonomics Society Annual Meeting_, 37(19):1295-1299, 1993.
* [8] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita Cucchiara. Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model. _TIP_, 27(10):5142-5154, 2018.
* [9] Xun Huang, Chengyao Shen, Xavier Boix, and Qi Zhao. Salicon: Reducing the semantic gap in saliency prediction by adapting deep neural networks. In _ICCV_, pages 262-270, 2015.
* [10] Jianxun Lou, Hanhe Lin, David Marshall, Dietmar Saupe, and Hantao Liu. Translanet: Towards perceptually relevant visual saliency prediction. _Neurocomputing_, 494:455-467, 2022.
* [11] Sheng Yang, Guosheng Lin, Qiuping Jiang, and Weisi Lin. A dilated inception network for visual saliency prediction. _TMM_, 22(8):2163-2176, 2020.
* [12] Ming Jiang, Shengsheng Huang, Juanyong Duan, and Qi Zhao. Salicon: Saliency in context. In _CVPR_, pages 1072-1080, 2015.
* [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _CVPR_, pages 770-778, 2016.
* [14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In _NeurIPS_, volume 25, 2012.
* [15] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NeurIPS_, volume 30, 2017.
* [16] Tilke Judd, Krista Ehinger, Fredo Durand, and Antonio Torralba. Learning to predict where humans look. In _ICCV_, pages 2106-2113, 2009.
* [17] Juan Xu, Ming Jiang, Shuo Wang, Mohan S. Kankanhalli, and Qi Zhao. Predicting human gaze beyond pixels. _Journal of Vision_, 14(1):1-20, 2014.
* [18] Ali Borji. Saliency prediction in the deep learning era: Successes and limitations. _TPAMI_, 43(2):679-700, 2021.
* [19] Sen He, Hamed R. Tavakoli, Ali Borji, Yang Mi, and Nicolas Pugeault. Understanding and visualizing deep visual saliency models. In _CVPR_, pages 10198-10207, 2019.
* [20] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalanditis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. In _IJCV_, volume 123, pages 32-73, 2017.
* [21] Tilke Judd, Fredo Durand, and Antonio Torralba. A benchmark of computational models of saliency to predict human fixations. In _MIT Technical Report_, 2012.
* [22] Ali Borji and Laurent Itti. Cat2000: A large scale fixation dataset for boosting saliency research. _CVPR workshop_, 2015.
* [23] Shaojing Fan, Zhiqi Shen, Ming Jiang, Bryan L Koenig, Juan Xu, Mohan S Kankanhalli, and Qi Zhao. Emotional attention: A study of image sentiment and visual attention. In _CVPR_, pages 7521-7531, 2018.
* [24] Subramanian Ramanathan, Harish Katti, Nicu Sebe, Mohan Kankanhalli, and Tat-Seng Chua. An eye fixation database for saliency detection in images. In _ECCV_, pages 30-43. Springer, 2010.
* [25] Camilo Fosco, Anelise Newman, Pat Sukhum, Yun Bin Zhang, Nankuan Zhao, Aude Oliva, and Zoya Bylinskii. How much time do you have? Modeling multi-duration saliency. In _CVPR_, pages 4472-4481, 2020.
* [26] Lai Jiang, Mai Xu, Tie Liu, Minglang Qiao, and Zulin Wang. Deepvs: A deep learning based video saliency prediction approach. In _ECCV_, pages 602-617, 2018.
* [27] Wenguan Wang, Jianbing Shen, Fang Guo, Ming-Ming Cheng, and Ali Borji. Revisiting video saliency: A large-scale benchmark and a new model. In _CVPR_, pages 4894-4903, 2018.
* [28] Zoya Bylinskii, Nam Wook Kim, Peter O'Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan C. Russell, and Aaron Hertzmann. Learning visual importance for graphic designs and data visualizations. _Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology_, 2017.
* [29] Camilo Fosco, Vincent Casser, Amish Kumar Bedi, Peter O'Donovan, Aaron Hertzmann, and Zoya Bylinskii. Predicting visual importance across graphic design types. In _Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology_, page 249-260, 2020.
* [30] Souradeep Chakraborty, Zijun Wei, Conor Kelton, Seoyoung Ahn, Aruna Balasubramanian, Gregory J. Zelinsky, and Dimitris Samaras. Predicting visual attention in graphic design documents. _TMM_, pages 1-1, 2022.
* [31] Chengyao Shen and Qi Zhao. Webpage saliency. In _ECCV_, pages 33-46, 2014.
* [32] Ming Jiang, Juan Xu, and Qi Zhao. Saliency in crowd. In _ECCV_, pages 17-32. Springer, 2014.
* [33] Andrea Palazzi, Davide Abati, Francesco Solera, Rita Cucchiara, et al. Predicting the driver's focus of attention: the dr (eye) ve project. _TPAMI_, 41(7):1720-1733, 2018.
* [34] Vincent Sitzmann, Ana Serrano, Amy Pavel, Maneesh Agrawala, Diego Gutierrez, Belen Masia, and Gordon Wetzstein. Saliency in vr: How do people explore virtual environments? _IEEE Transactions on Visualization and Computer Graphics_, 24(4):1633-1642, 2018.
* [35] Ziheng Zhang, Yanyu Xu, Jingyi Yu, and Shenghua Gao. Saliency detection in \(360^{\circ}\) videos. In _ECCV_, pages 504-520, 2018.
* [36] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. _TPAMI_, 20(11):1254-1259, 1998.
* [37] Jianming Zhang and Stan Sclaroff. Saliency detection: A boolean map approach. In _ICCV_, pages 153-160, 2013.
* [38] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _ICLR_, 2015.
* [39] Eleonora Vig, Michael Dorr, and David Cox. Large-scale optimization of hierarchical features for saliency prediction in natural images. In _CVPR_, pages 2798-2805, 2014.
* [40] Srinivas S. S. Kruthiventi, Kumar Ayush, and R. Venkatesh Babu. Deepfix: A fully convolutional neural network for predicting human eye fixations. _ITIP_, 26(9):4446-4456, 2017.
* [41] Matthias Kummerer, Thomas S. A. Wallis, and Matthias Bethge. Deepgaze II: reading fixations from deep features trained on object recognition. _Arxiv_, 2016.
* [42] Ali Borji, Dicky N. Sihite, and Laurent Itti. Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. _TIP_, 22(1):55-69, 2013.
* [43] Zoya Bylinskii, Adria Recasens, Ali Borji, Aude Oliva, Antonio Torralba, and Fredo Durand. Where should saliency models look next? In _ECCV_, pages 809-824, 2016.
* [44] Neil D. B. Bruce, Christopher Catton, and Sasa Janjic. A deeper look at saliency: Feature contrast, semantics, and beyond. In _CVPR_, pages 516-524, 2016.
* [45] Iuliia Kotseruba, Calden Wloka, Amir Rasouli, and John K. Tsotsos. Do saliency models detect odd-one-out targets? new datasets and evaluations. In _BMVC_, 2019.
* [46] Matthias K\(\ddot{u}\)mmerer, Thomas S.A. Wallis, Leon A. Gatys, and Matthias Bethge. Understanding low- and high-level contributions to fixation prediction. In _2017 IEEE International Conference on Computer Vision (ICCV)_, pages 4799-4808, 2017.
* [47] Bolei Zhou, Yiyou Sun, David Bau, and Antonio Torralba. Interpretable basis decomposition for visual explanation. In _ECCV_, pages 119-134, 2018.
* [48] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In _CVPR_, pages 3319-3327, 2017.
* 2416, 2005.
* 2498, 2007.
* [51] Solomon Kullback. _Information Theory and Statistics_. Wiley, 1959.
* [52] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _CVPR_, pages 248-255, 2009.
* 616, 2015.
* [54] Julie Osterling and Geraldine Dawson. Early recognition of children with autism: A study of first birthday home videotapes. _Journal of Autism and Developmental Disorders_, 24(3):247-257, 1994.
* [55] Morton Ann Gernsbacher, Jennifer L Stevenson, Suraiya Khandakar, and H Hill Goldsmith. Why does joint attention look atypical in autism? _Child Development Perspectives_, 2(1):38-45, 2008.
* [56] Rebecca J. Compton. The interface between emotion and attention: A review of evidence from psychology and neuroscience. _Behavioral and Cognitive Neuroscience Reviews_, 2(2):115-129, 2003.
* [57] Patrik Vuilleumier. How brains beware: neural mechanisms of emotional attention. _Trends in Cognitive Sciences_, 9:585-594, 2005. | ## Review
### Summary
The paper presents a novel framework for analyzing deep saliency models, providing insights into the implicit features learned by these models for predicting human visual attention. The framework decomposes learned representations into interpretable bases that align with semantic attributes, allowing for a weighted combination of probability maps in saliency prediction. The authors conduct comprehensive analyses across various datasets and application scenarios, including autism spectrum disorder, to understand the factors influencing saliency model performance. While the study identifies the importance of semantic differentiation and fine-tuning in saliency prediction, it also acknowledges the challenges posed by dataset biases and the need for further exploration of the model's effectiveness against standard metrics. Overall, the findings have potential implications for optimizing human-computer interfaces and understanding visual attention dynamics.
### Strengths
- The paper introduces a novel analytic framework for interpreting the features of deep saliency models, enhancing understanding of human visual attention.
- It effectively identifies positive and negative contributions of semantics to saliency and showcases the versatility of the framework across diverse scenarios.
- The comprehensive analysis highlights the importance of training data and model design on saliency predictions.
- The research addresses a significant challenge in saliency modeling, making strides toward explainability in deep learning.
- The paper is well-written and presents a clear methodology, contributing to both theoretical and practical aspects of saliency research.
### Weaknesses
- The paper lacks a comparative analysis of the proposed model's performance against standard saliency metrics and other existing models.
- Certain critical references regarding saliency model failure modes and the context of human attention evolution are not adequately discussed.
- The framework's reliance on specific probing datasets raises questions about the generalizability and validation of the matched concepts.
- The authors do not sufficiently address the potential biases in their datasets and how these might affect the outcomes of their analyses.
- The paper could benefit from a more detailed exploration of the implications of varying hyperparameters, such as the number of bases.
### Questions
- Could the authors provide results comparing their model's performance with existing state-of-the-art saliency models?
- What is the rationale behind the choice of the number of bases (N) used in the framework, and how stable are the results with respect to variations in N?
- How does the proposed framework relate to model performance and human behavior in different contexts?
- Can insights from this study be extended to other domains, and if so, how?
- What specific methodologies could validate the semantic bases identified in the proposed framework?
### Soundness
**Score:** 3
**Description:** 3 = good; the paper presents a solid methodology and results but requires further validation against existing models and metrics.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is clearly written and organized, though it could benefit from addressing certain missing details and references.
### Contribution
**Score:** 3
**Description:** 3 = good; the study contributes to the understanding of saliency models and their interpretability, though some aspects need further exploration.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid and has a moderate-to-high impact potential, with some concerns that need to be addressed.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents original and valuable findings that advance the understanding of deep saliency models. While there are valid concerns regarding comparison with existing methods and dataset biases, the overall contribution to the field of visual attention prediction is significant. The authors are encouraged to revise the manuscript to address these concerns and incorporate additional discussions as suggested.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Training on Foveated Images Improves Robustness to Adversarial Attacks
Muhammad A. Shah
Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
&Aqsa Kashaf
ByteDance
San Jose, CA 95110
[email protected]
&Bhiksha Raj
Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
work done while at Carnegie Mellon University
###### Abstract
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks - subtle, perceptually indistinguishable perturbations of inputs that change the response of the model. We hypothesize that an important contributor to the robustness of human visual perception is constant exposure to low-fidelity visual stimuli in our peripheral vision. To investigate this hypothesis, we develop _R-Blur_, an image transform that simulates the loss in fidelity of peripheral vision by blurring the image and reducing its color saturation based on the distance from a given fixation point. We show that compared to DNNs trained on the original images, DNNs trained on images transformed by _R-Blur_ are substantially more robust to adversarial attacks, as well as other, non-adversarial, corruptions, achieving up to 25% higher accuracy on perturbed data2.
Footnote 2: The code for _R-Blur_ is available at [https://github.com/ahmedshah1494/RB](https://github.com/ahmedshah1494/RB) blur
## 1 Introduction
Deep Neural Networks (DNNs) are exceptionally adept at many computer vision tasks and have emerged as one of the best models of the biological neurons involved in visual object recognition [1, 2]. However, their lack of robustness to subtle image perturbations that humans are largely invariant [3, 4, 5] to has raised questions about their reliability in real-world scenarios. Of these perturbations, perhaps the most alarming are _adversarial attacks_, which are specially crafted distortions that can change the response of DNNs when added to their inputs [3, 6] but are either imperceptible to humans or perceptually irrelevant enough to be ignored by them.
While several defenses have been proposed over the years to defend DNNs against adversarial attacks, only a few of them have sought inspiration from biological perception, which, perhaps axiomatically, is one of the most robust perceptual systems in existence. Instead, most methods seek to _teach_ DNNs to be robust to adversarial attacks by exposing them to adversarially perturbed images [7, 8, 9] or random noise [10, 11, 12] during training. While this approach is highly effective in making DNNs robust to the types of perturbations used during training, the robustness often does not generalize to other types of perturbations [13, 14, 15]. In contrast, biologically-inspired defenses seek to make DNNs robust by integrating into them biological mechanisms that would bring their behavior more in line with human/animal vision [16, 17, 18, 19, 20, 21, 22]. As these defenses do not require DNNs to be trained on any particular type of perturbation, they yield models that, like humans, are robust to a variety of perturbations [18] in addition to adversarial attacks. For this reason, and in light of the evidence indicating a positive correlation between biological alignment and adversarial robustness [18, 23], we believe biologically inspired defenses are more promising in the long run.
Following this line of inquiry, we investigate the contribution of low-fidelity visual sensing that occurs in peripheral vision to the robustness of human/animal vision. Unlike DNNs, which sense visual stimuli at maximum fidelity at every point in their visual field, humans sense most of their visual field in low fidelity, i.e without fine-grained contrast [24] and color information [25]. In adults with fully developed vision, only a small region (less than 1% by area) of the visual field around the point of fixation [26] can be sensed with high fidelity. In the remainder of the visual field (the periphery), the fidelity of the sensed stimuli decreases exponentially with distance from the fixation point [27]. This phenomenon is called "foveation". Despite this limitation, humans can accurately categorize objects that appear in the visual periphery into high-level classes [28]. Meanwhile, the presence of a small amount of noise or blurring can decimate the accuracy of an otherwise accurate DNN. Therefore, we hypothesize that the experience of viewing the world at multiple levels of fidelity, perhaps even at the same instant, causes human vision to be invariant to low-level features, such as textures, and high-frequency patterns, that can be exploited by adversarial attacks.
In this paper, we propose _R-Blur_ (short for Retina Blur), which simulates foveation by blurring the image and reducing its color saturation adaptively based on the distance from a given fixation point. This causes regions further away from the fixation point to appear more blurry and less vividly colored than those closer to it. Although adaptive blurring methods have been proposed as computational approximations of foveation [29, 30, 31], their impact on robustness has not been evaluated to the best of our knowledge. Furthermore, color sensitivity is known to decrease in the periphery of the visual field [25, 32, 33], yet most of the existing techniques do not account for this phenomenon.
Similar to how the retina preprocesses the visual stimuli before it reaches the visual cortex, we use _R-Blur_ to preprocess the input before it reaches the DNN. To measure the impact of _R-Blur_, we evaluate the object recognition capability of ResNets [34] trained with and without _R-Blur_ on three image datasets: CIFAR-10 [35], Ecoset [36] and Imagenet [37], under different levels of adversarial attacks and common image corruptions [38]. We find that _R-Blur_ models retain most of the high classification accuracy of the base ResNet while being more robust. Compared to the base ResNet, _R-Blur_ models achieve 12-25 percentage points (pp) higher accuracy on perturbed images. Furthermore, the robustness achieved by _R-Blur_ is certifiable using the approach from [10]. We also compare _R-Blur_ with two biologically inspired preprocessing defenses, namely _VOneBlock_[18], a fixed parameter module that simulates the primate V1, and a non-uniform sampling-based foveation technique [22], which we refer to as _R-Warp_. We observe that _R-Blur_ induces a higher level of robustness, achieving accuracy up to 33 pp higher than _R-Warp_ and up to 15 pp higher than _VOneBlock_ against adversarial attacks. Compared to adversarial training (_AT_) [7, 8] - the state-of-the-art non-biological defense, _R-Blur_ achieves up to 7 pp higher accuracy on average against non-adversarial corruptions of various types and strengths thus indicating that the robustness of _R-Blur_ generalizes better to non-adversarial perturbations than _AT_. Finally, an ablation study showed that both adaptive blurring and desaturation contribute to the improved robustness of _R-Blur_.
## 2 Retinal Blur: An Approximation for Peripheral Vision
To simulate the loss in contrast and color sensitivity of human perception with increasing eccentricity, we propose _R-Blur_, an adaptive Gaussian blurring, and color desaturation technique. The operations
Figure 1: _R-Blur_ adds Gaussian noise to image (a) with the fixation point (red dot) to obtain (b). It then creates a colored and a grayscaled copy of the image and applies adaptive Gaussian blurring to them to obtain the low-fidelity images (c) and (d), where the numbers indicate the standard deviation of the Gaussian kernel applied in the region bounded by the boxes. The blurred color and gray images are combined in a pixel-wise weighted combination to obtain the final image (e), where the weights of the colored and gray pixels are a function of their respective estimated acuity values (see 2.2).
performed by _R-Blur_, given an image and fixation point, are shown in Figure 1. First, _R-Blur_ adds Gaussian noise to the image to simulate stochastic firing rates of biological photoreceptors [39]. It then creates color and grayscale copies of the image and estimates the acuity of color and grayscale vision at each pixel location, using distributions that approximate the relationship between distance from the fixation point (eccentricity) and visual acuity levels in humans. _R-Blur_ then applies _adaptive_ Gaussian blurring to both image copies such that the standard deviation of the Gaussian kernel at each pixel in the color and the grayscale image is a function of the estimated color and grayscale acuity at that pixel. Finally, _R-Blur_ combines the two blurred images in a pixel-wise weighted combination in which the weights of the colored and gray pixels are a function of their respective estimated acuity values. Below we describe some of the more involved operations in detail.
### Eccentricity Computation
The distance of a pixel location from the fixation point, i.e. its eccentricity, determines the standard deviation of the Gaussian kernel applied to it and the combination weight of the color and gray images at this location. While eccentricity is typically measured radially, in this paper we use a different distance metric that produces un-rotated square level sets. This property allows us to efficiently extract regions having the same eccentricity by simply slicing the image tensor. Concretely, we compute the eccentricity of the pixel at location \((x_{p},y_{p})\) as
\[e_{x_{p},y_{p}}=\frac{\max(|x_{p}-x_{f}|,|y_{p}-y_{f}|)}{W_{V}}, \tag{1}\]
where \((x_{f},y_{f})\) and \(W_{V}\) represent the fixation point and the width of the visual field, i.e. the rectangular region over which _R-Blur_ operates and defines the maximum image size that is expected by _R-Blur_. We normalize by \(W_{V}\) to make the \(e_{x_{p},y_{p}}\) invariant to the size of the visual field.
### Visual Acuity Estimation
We compute the visual acuity at each pixel location based on its eccentricity. The biological retina contains two types of photoreceptors. The first type, called cones, are color sensitive and give rise to high-fidelity visual perception at the fovea, while the second type, called rods, are sensitive to only illumination but not color and give rise to low-fidelity vision in the periphery. We devise the following two sampling distributions, \(D_{R}(e_{x,y})\) and \(D_{C}(e_{x,y})\), to model the acuity of color and grayscale vision, arising from the cones and rods at each pixel location, \((x,y)\).
\[\mathcal{D}(e;\sigma,\alpha) =\max\left[\lambda(e;0,\sigma),\gamma(e;0,\alpha\sigma)\right] \tag{2}\] \[D_{C}(e;\sigma_{C},\alpha) =\mathcal{D}(e;\sigma_{C},\alpha)\] (3) \[D_{R}(e;\sigma_{R},\alpha,p_{max}) =p_{max}(1-\mathcal{D}(e;\sigma_{R},\alpha)), \tag{4}\]
where \(\lambda(.;\mu,\sigma)\) and \(\gamma(.;\mu,\sigma)\) are the PDFs of the Laplace and Cauchy distribution with location and scale parameters \(\mu\) and \(\sigma\), and \(\alpha\) is a parameter used to control the width of the distribution. We set \(\sigma_{C}=0.12,\sigma_{R}=0.09,\alpha=2.5\) and \(p_{max}=0.12\). We choose the above equations and their parameters to approximate the curves of photopic and scotopic visual acuity from [27]. The resulting acuity estimates are shown in Figure 2b. Unfortunately, the measured photopic and scotopic acuity curves from [27] cannot be reproduced here due to copyright reasons, however, they can be viewed at [https://nba.uth.tmc.edu/neuroscience/m/s2/chapter14.html](https://nba.uth.tmc.edu/neuroscience/m/s2/chapter14.html) (see Figure 3).
### Quantizing the Visual Acuity Estimate
In the form stated above, we would need to create and apply as many Gaussian kernels as the distance between the fixation point and the farthest vertex of the visual field. This number can be quite large as the size of the image increases and will drastically increase the per-image computation time. To mitigate this issue we quantize the estimated acuity values. As a result, the locations to which the same kernel is applied no longer constitute a single pixel perimeter but become a much wider region (see Figure 1 (c) and (d)), which allows us to apply the Gaussian kernel in these regions very efficiently using optimized implementations of the convolution operator.
To create a quantized eccentricity-acuity mapping, we do the following. We first list all the color and gray acuity values possible in the visual field by assuming a fixation point at \((0,0)\), computing eccentricity values \(e_{0,y}\) for \(y\in[0,W_{V}]\) and the corresponding values of \([0,W_{V}]\)} and \(\mathcal{D}_{C}=\{D_{C}(e_{0,y})|y\in[0,W_{V}]\}\). We then compute and store the histograms, \(H_{R}\) and \(H_{C}\), from \(\mathcal{D}_{R}\) and \(\mathcal{D}_{C}\), respectively. To further reduce the number of kernels we need to apply and increase the size of the region each of them is applied to, we merge the bins containing less than \(\tau\) elements in each histogram with the adjacent bin to their left. After that, given an image to process, we will compute the color and gray visual acuity for each pixel, determine in which bin it falls in \(H_{R}\) and \(H_{C}\), and assign it the average value of that bin.
### Changing the Viewing Distance
Increasing the viewing distance can be beneficial as it allows the viewer to gather a more global view of the visual scene and facilitates object recognition. To increase the viewing distance we drop the \(k\) lowest acuity bins and shift the pixels assigned to them \(k\) bins ahead such that the pixels that were in bins 1 through \(k-1\) are now assigned to bin 1. Figure 3 shows the change in the viewing distance as the value of \(k\) increases from 0 to 5. Formally, given the quantized \(D_{C}(e_{x,y})\) and \(D_{R}(e_{x,y})\), let \(D=[d_{1},...,d_{n}]\) represent the value assigned to each bin and \(P_{i}\) be the pixel locations assigned to the \(i^{th}\) bin, with \(P_{1}\) and \(P_{n}\) corresponding to points with the lowest and highest eccentricity, respectively. To increase the viewing distance, we merge bins 1 through \(k\) such that \(D^{\prime}=[d_{1},...,d_{n-k}]\) and the corresponding pixels are \(P^{\prime}_{1}=[P_{1},...,P_{k}]\) and \(P_{i>1}=P_{k+1}\).
### Blurring and Color Desaturation
We map the estimated visual acuity at each pixel location, \((x_{p},y_{p})\), to the standard deviation of the Gaussian kernel that will be applied at that location as \(\sigma_{(x_{p},y_{p})}=\beta W_{V}(1-D(e_{x,y}))\), where \(\beta\) is constant to control the standard deviation and is set to \(\beta=0.05\) in this paper, and \(D=D_{C}\) for pixels in the colored image and \(D=D_{R}\) for pixels in the grayscaled image. We then apply Gaussian kernels of the corresponding standard deviation to each pixel in the colored and grayscale image to obtain an adaptively blurred copy of each, which we combine in a pixel-wise weighted combination to obtain the final image. The weight of each colored and gray pixel is given by the normalized color and gray acuity, respectively, at that pixel. Formally, the pixel at \((x_{p},y_{p})\) in the final image has value
\[v^{f}_{(x_{p},y_{p})}=\frac{v^{c}_{(x_{p},y_{p})}D_{C}(e_{x,y};\sigma_{C}, \alpha)+v^{g}_{(x_{p},y_{p})}D_{R}(e_{x,y};\sigma_{C},\alpha)}{D_{C}(e_{x,y}; \sigma_{C},\alpha)+D_{R}(e_{x,y};\sigma_{C},\alpha)}, \tag{5}\]
\(v^{c}_{(x_{p},y_{p})}\) and \(v^{g}_{(x_{p},y_{p})}\) are the pixel value at \((x_{p},y_{p})\) in the blurred color and gray images respectively.
## 3 Evaluation
In this section, we determine the accuracy and robustness of _R-Blur_ by evaluating it on clean data and data that has been perturbed by either adversarial attacks or common - non-adversarial - corruptions. We compare the performance of _R-Blur_ with an unmodified ResNet, two existing biologically-inspired defenses, _R-Warp_[22] and _VOneBlock_[18], and a non-biological adversarial defenses: Adversarial Training (_AT_) [7]. We show that _R-Blur_ is significantly more robust to adversarial attacks and common corruptions than the unmodified ResNet and prior biologically inspired methods. Moreover, we use Randomize Smoothing [10] to show that _R-Blur_ is _provably_ robust. While _AT_ is more robust
Figure 3: Illustration of increasing the viewing distance (left to right). As the viewing distance is increased, more of the image is brought into focus. We used \(vd=3\) during inference.
Figure 2: Estimated visual acuity of sharp and colorful, photopic, and gray and blurry, scotopic, vision using equations 3 and 4
than _R-Blur_ against adversarial attacks, _R-Blur_ is more robust than _AT_ against common corruptions, thus indicating that the robustness of _R-Blur_ generalizes better to different types of perturbation than _AT_. We also analyze the contribution of the various components of _R-Blur_ in improving robustness.
### Experimental Setup
**Datasets:** We use natural image datasets, namely CIFAR-10 [35], Imagenet ILSVRC 2012 [37], Ecoset [36] and a 10-class subset of Ecoset (Ecoset-10). Ecoset contains around 1.4M images, mostly obtained from ImageNet database [40] (not the ILSVRC dataset), that are organized into 565 basic object classes. The classes in Ecoset correspond to commonly used nouns that refer to concrete objects. To create Ecoset-10, we select 10 classes from Ecoset that have the highest number of images. The training/validation/test splits of Ecoset-10 and Ecoset are 48K/859/1K, and 1.4M/28K/28K respectively. For most experiments with Ecoset and Imagenet, we use 1130, and 2000 test images, with an equal number of images per class. During training, we use random horizontal flipping and padding + random cropping, as well as AutoAugment [41] for CIFAR-10 and RandAugment for Ecoset and Imagenet. All Ecoset and Imagenet images were resized and cropped to \(224\times 224\). We applied these augmentations to _all_ the models we trained - those with biological and non-biological defenses, as well as the baseline models.
**Model Architectures:** For CIFAR-10 we use a Wide-Resnet [42] model with 22 convolutional layers and a widening factor of 4, and for Ecoset and Imagenet we use XResNet-18 from fastai [43] with a widening factor of 2. Moving forward, we will refer to both these models as ResNet and indicate only the training/evaluation datasets from which the exact architecture may be inferred. Results for additional architectures are presented in Appendix C.
**Baselines and Existing Methods:** We compare the performance of _R-Blur_ to two baselines: (1) an unmodified ResNet trained on clean data (ResNet), and (2) a ResNet which applies five affine transformations 3 to the input image and averages the logits (_RandAffine_). We also compare _R-Blur_ with two biologically inspired defenses: _VOneBlock_ pre-processing proposed in [18], which simulates the receptive fields and activations of the primate V1 4, and _R-Warp_ preprocessing proposed in [22], which simulates foveation by resampling input images such that the sampling density of pixels is maximal at the point of fixation and decays progressively in regions further away from it. Finally, we compare _R-Blur_ with two non-biological adversarial defenses: fast adversarial training [8] with \(\|\delta\|_{\infty}=0.008\) (_AT_), and Randomized Smoothing (_RS_) [10].
Footnote 3: We apply rotation, translation, and shearing, with their parameters sampled from \([-8.6^{\circ},8.6^{\circ}]\), \([-49,49]\) and \([-8.6^{\circ},8.6^{\circ}]\) respectively. The ranges are chosen to match the ranges used in RandAugment. The random seed is fixed during evaluation to prevent interference with adversarial attack generation.
Footnote 4: As in [18], we remove the first conv, batch norm, ReLU, and MaxPool from the ResNet with _VOneBlock_.
**Fixation Selection for _R-Blur_ and _R-Warp_**: While training models with _R-Blur_ and _R-Warp_, we split each batch into sub-batches of 32 images, and for each sub-batch, we randomly sample a single fixation point that we use to apply _R-Blur_ or _R-Warp_ to all the images in that sub-batch. While training the _R-Blur_ model, we also set the viewing distance uniformly at random using the procedure
Figure 4: Illustration of fixation selection. The initial fixation point is set to top-left (0,0) and the image at \(t_{0}\) is processed with _R-Blur_/_R-Warp_ to get the image at \(t_{1}\). DeepGaze-III is used to generate a fixation heatmap from this image. The next fixation point is sampled from the heat map, and _R-Blur_/_R-Warp_ is applied to get the image at \(t_{2}\). The region in the heatmap around the chosen fixation point is masked with an inverted Gaussian kernel to prevent spatial clustering of fixation points. This process is repeated to get a sequence of fixation points.
described in 2.4. During inference, we determine a sequence of five fixation points (a scanpath) using DeepGaze-III [44]. DeepGaze-III passes the input image through a pretrained CNN backbone (DenseNet-201 in [44]) and extracts the activations from several intermediate layers of the CNN. It then applies a sequence of pointwise convolution and normalization layers to the activations to obtain a heatmap indicating where a human is likely to fixate. We found that it was more efficient to not use the scanpath prediction module in DeepGaze-III, and instead obtain scanpaths by keeping track of the past fixation points, and masking the predicted heatmap at these locations prior to sampling the next fixation point from it. This process is illustrated in Figure 4.
We trained two instances of DeepGaze-III using the ResNets we trained with _R-Blur_ and _R-Warp_ as the CNN backbone. We use the corresponding DeepGaze-III models to predict the scanpaths for _R-Blur_ and _R-Warp_ models. To train deepgaze-iii we used the code from the official github repository [41]. The only significant modification we made was to replace the pretrained DenseNet-201 with the pretrained R-Warp/R-Blur augmented XResNet-18 we trained on ImageNet. This improves performance, while keeping the total number of parameters low. Following [41] we train DeepGaze on the SALICON dataset [45]. This corresponds to phase 1 of training mentioned in Table 1 of [41]. We did not notice any benefits in our use case of phases 2-4, so we skipped them.
### Results
_R-Blur_ improves robustness to white-box attacks.We evaluate robustness by measuring the accuracy of models under Auto-PGD (APGD)[46] attack, which is a state-of-the-art white-box adversarial attack. We run APGD for 25 steps on each image. We find that increasing the number of steps beyond 25 only minimally reduces accuracy (Appendix A). We take a number of measures to avoid the pitfalls of gradient obfuscation [47, 48] so that our results reflect the true robustness of _R-Blur_. These steps and detailed settings used for adversarial attacks are mentioned in Appendix A.
To determine if _R-Blur_ improves robustness, we compare _R-Blur_ with the unmodified ResNet and _RandAffine_ under the APGD attack. We observe that _R-Blur_ is significantly more robust than the unmodified ResNet and _RandAffine_ models, consistently achieving higher accuracy than the two on all datasets and against all perturbation types and sizes, while largely retaining accuracy on clean data (Figure 5). While _RandAffine_ does induce some level of robustness, it significantly underperforms _R-Blur_. On smaller datasets, _R-Blur_ suffers relatively little loss in accuracy at small to moderate levels (\(\|\delta\|_{\infty}\leq 0.004\), \(\|\delta\|_{2}\leq 1\)) of adversarial perturbations, while the accuracy of baseline methods quickly deteriorates to chance or worse. On larger datasets - Ecoset and Imagenet, even the smallest amount of adversarial perturbation (\(\|\delta\|_{\infty}=0.002\), \(\|\delta\|_{2}=0.5\)) is enough to drive the accuracy of the baselines to \(\sim\)10%, while _R-Blur_ still is able to achieve 35-44% accuracy. As the perturbation is increased to \(\|\delta\|_{\infty}=0.004\) and \(\|\delta\|_{2}=1.0\), the accuracy of the baselines goes to 0%, while _R-Blur_ achieves 18-22%. We do observe that the accuracy of _R-Blur_ on clean data from Ecoset and Imagenet is noticeably lower than that of the baseline methods.
We also compare _R-Blur_ to two existing biologically motivated adversarial defenses: _VOneBlock_ and _R-Warp_, and find that _R-Blur_ achieves higher accuracy than both of them at all perturbation sizes and
Figure 5: Comparison of accuracy on various datasets (a-d) under adversarial attacks of several \(\ell_{2}\) (top) and \(\ell_{\infty}\) (bottom) norms between _R-Blur_ (green) and two baseline methods: _RandAffine_ (orange) and ResNet (blue). The dashed lines indicate accuracy on clean images. _R-Blur_ models consistently achieve higher accuracy than baseline methods on all datasets, and adversarial perturbation sizes.
types. From Figure 6 we see that _R-Blur_ achieves up to 33pp higher accuracy than _R-Warp_, and up to 15 pp higher accuracy than _VOneBlock_ on adversarially perturbed data.
_R-Blur_ is certifiably robust.To verify that the gains in robustness observed above are indeed reliable, we use the certification method (Certify) from [10] to provide formal robustness guarantees for _R-Blur_. This entails obtaining predictions for an input under a _very_ large number (\(10^{5}\)) of noise samples drawn from \(\mathcal{N}(0,\sigma_{c})\), and using a hypothesis test to determine the _certified radius_ around the input in which the model's prediction is stable _with high probability_ (\(\geq 99.9\%\)). Given a dataset, we can compute the _certified accuracy_ at a radius \(r\) as the proportion of data points for which the certified radius is \(\geq r\) and the model's prediction is correct. It was shown in [10] that a model trained on data perturbed with Gaussian noise achieves high certified accuracy. We call this model _G-Noise_. We compare the certified accuracy of _G-Noise_ and _R-Blur_ on 200 images from Imagenet and Ecoset.
We expose both _R-Blur_ and _G-Noise_ to Gaussian noise of scale \(\sigma_{t}=0.125\) during training and compute their certified accuracy at radii \(r\in\{0.5,1.0\}\). According to [10], if the scale of the noise used in Certify is \(\sigma_{c}\), then the maximum radius for which certified accuracy can be computed (with \(10^{5}\) noise samples) is \(r=4\sigma_{c}\). Therefore, when computing certified accuracy at \(r\leq 0.5\)Certify adds noise of the same scale as was used during training (\(\sigma_{c}=0.125=\sigma_{t}\)), thus we call this the _matched_ setting. However, to compute certified accuracy at \(r\leq 1.0\)Certify adds noise of a larger scale than was used during training (\(\sigma_{c}=0.25>\sigma_{t}\)), and thus in order to achieve high certified accuracy at \(r\leq 1.0\) the model must be able to generalize to a change in noise distribution. We call this the _unmatched_ setting.
Figure 6(a) and 6(b) show the certified accuracy of _R-Blur_ and the _G-Noise_ on Ecoset and Imagenet at several \(\ell_{2}\) norm radii under matched and unmatched settings. In both settings, we see that _R-Blur_ achieves a high certified accuracy on both Ecoset and Imagenet, with the certified accuracy at \(r\approx 0.5\) and \(r\approx 1.0\) being close to the ones observed in Figure 5, indicating that our earlier results are a faithful representation of _R-Blur_'s robustness. Furthermore, we see that even if _R-Blur_ was trained without any noise, it can still achieve more than 50% of the certified accuracy achieved by _R-Blur_ trained with noise. This indicates that adaptive blurring and desaturation do in fact endow the model with a significant level of robustness. Finally, we note that while _G-Noise_ has (slightly) higher certified accuracy than _R-Blur_ in the matched setting, _R-Blur_ achieves significantly higher certified accuracy in the unmatched setting, outstripping _G-Noise_ by more than 10 pp at \(r\approx 1.0\) on Imagenet. This shows that the robustness of _R-Blur_ generalizes beyond the training conditions, while _G-Noise_
Figure 6: The difference in accuracy under adversarial attacks of several \(\ell_{2}\) and \(\ell_{\infty}\) norms between _R-Blur_ and two biologically inspired defenses: _R-Warp_ (blue) and _VOneBlock_ (orange). _R-Blur_ consistently achieves higher accuracy on all adversarial perturbation sizes than _R-Warp_ and _VOneBlock_.
Figure 7: The certified accuracy at various \(\ell_{2}\)-norm radii of _R-Blur_ and _G-Noise_ models. _R-Blur_-CFI uses 1 fixation at the center of the image, and _R-Blur_-5FI, averages logits from 5 fixation (corners + center). \(\sigma_{t}\) denotes the scale of noise added during training and is 0.125 unless specified, whereas \(\sigma_{c}\) is the scale of the noise used to compute the certified accuracy. _G-Noise_ outperforms _R-Blur_ in the matched scenario, while _R-Blur_ is superior in the unmatched scenario indicating that the robustness of _R-Blur_ is more generalizable.
overfits to them. This makes _R-Blur_ particularly suited for settings in which the exact adversarial attack budget is not known, and the model must be able to generalize.
_R-Blur Improves accuracy on common (non-adversarial) corruptions._ Adversarial perturbations constitute only a small subset of perturbations that human vision is invariant to, therefore we evaluate _R-Blur_ on a set of common image corruptions [38] that humans are largely invariant to but DNNs are not. We sample 2 images/class from Imagenet and 5 images/class from Ecoset. Then we apply 17 5 common corruptions proposed in [38] at 5 different severity levels to generate 85 corrupted versions of each image. This yields corrupted versions of Imagenet and Ecoset containing 170K and 240K images, respectively.
Footnote 5: We exclude Gaussian blur and Gaussian noise since they are similar to the transformations done by _R-Blur_.
Figure 8 shows the accuracy of the models on corrupted Ecoset and Imagenet. Here we also compare against an adversarially trained model (_AT_) trained with \(\|\delta\|_{\infty}=0.008\) using the method of [8]. We see that at severity greater than 1 _R-Blur_ consistently achieves the highest accuracy. Furthermore, we also note that _R-Blur_, and _VOneBlock_ consistently achieve higher accuracy than _AT_, which supports our hypothesis that the robustness of biologically motivated methods, and particularly _R-Blur_, is more general than non-biological defenses, like _AT_. In fact, the accuracy of _AT_ on common corruptions is generally lesser than or at par with the accuracy of the unmodified ResNet, indicating that the robustness of _AT_ does not generalize well.
**Summary of Results:** Table 1 summarizes the results of our paper and reiterates two key observations from earlier sections. Firstly, _R-Blur_ makes models more significantly robust to adversarial perturbations than the unmodified ResNet, and other biologically inspired defenses. _R-Blur_, however, achieves lower accuracy against white-box attacks than _AT_. This is to be expected because _AT_ is trained on adversarially perturbed data. Secondly, _R-Blur_ augmented models are significantly more robust to common corruptions than all other models, including _AT_. In contrast, the accuracy of _AT_ on common corruptions is almost the same as that of the unmodified ResNet, indicating that the robustness of _AT_ does not generalize.
\begin{table}
\begin{tabular}{l c c c c|c c c c} \hline \hline Method & Mean & CC & WB & Clean & Mean & CC & Wb & Clean \\ \hline \multicolumn{8}{c}{Ecoset} & \multicolumn{8}{c}{Imagenet} \\ \hline ResNet & 37.1 & 39.4 & 0.8 & 71.2 & 34.7 & 33.6 & 0.1 & **70.3** \\ _RandAffine_ & 35.7 & 35.8 & 3.6 & 67.6 & 33.7 & 30.8 & 2.0 & 68.3 \\ \hline _AT_ & **49.0** & 38.5 & **47.5** & 61.1 & **46.3** & 34.2 & **43.5** & 61.3 \\ \hline _R-Warp_ & 38.5 & 40.0 & 4.5 & 71.1 & 34.1 & 32.5 & 2.2 & 67.7 \\ _VOneBlock_ & 42.9 & 40.7 & 16.1 & **72.0** & 38.8 & 35.8 & 11.9 & 68.7 \\ \hline _R-Blur_ & 44.2 & **45.6** & 23.8 & 63.3 & 38.9 & **39.0** & 17.2 & 60.5 \\ \hline \multicolumn{8}{c}{**best**, second best} \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy of the evaluated models on clean, and perturbed data from Imagenet. “WB” refers to the accuracy under APGD attacks, while “CC” refers to the accuracy under common non-adversarial corruption [38]. _R-Blur_ significantly improves the robustness of ResNet, and outperforms prior biologically motivated defenses, while approaching the performance of _AT_.
Figure 8: The accuracy of the models on Imagenet and Ecoset under the common corruptions from [38] at various severity levels. We see that _R-Blur_ generally achieves the highest accuracy.
### Ablation Study
We examine, by way of ablation, how much each component of _R-Blur_ contributes towards its overall robustness as shown in Figure 9. The most significant contributor to robustness is the addition of noise. This echoes the findings from [18], which showed that neural stochasticity contributes significantly to the robustness of the visual system. Nevertheless, even without noise _R-Blur_ achieves an 11 point improvement over the vanilla ResNet which archives 0% accuracy under the attack, which indicates that other components of _R-Blur_ also contribute towards robustness. Furthermore, experimental results reveal that robustness induced by noise diminishes as the complexity of the dataset increases and the size of the perturbations increases. As observed in Figure 9(b), Gaussian noise augmentation achieves 45-58% (8-10 points) lower accuracy than _R-Blur_, and in Figure 6(b), which shows that at larger perturbation sizes _R-Blur_ achieves higher certified accuracy.
The second most significant contributor to robustness is the blurring performed by _R-Blur_. Importantly, we note that Gaussian blurring in and of itself does not greatly improve robustness. Figure 9 shows that non-adaptive blurring with a single Gaussian kernel having \(\sigma=10.5\) (\(\sigma=10.9\) is the maximum used in _R-Blur_) improves robustness by only 5 points. Furthermore, Figure 9(a) shows that increasing the strength of non-adaptive blurring trades off clean accuracy for robustness. However, after \(\sigma=8\) the gains in robustness hit (a rather low) ceiling, and increasing \(\sigma\) further reduces both clean accuracy and robustness. On the other hand, _R-Blur_, _without any additive noise_, achieves similar clean accuracy as non-adaptive blurring (\(\sigma=10.5\)) but achieves significantly better adversarial robustness, thereby demonstrating that Gaussian blurring alone accounts for only a fraction of _R-Blur_'s robustness. Furthermore, Figure 9(b) shows that the contribution of non-adaptive blurring declines on the more complex Imagenet, where it achieves only 1% accuracy on moderate-sized perturbations.
The next most significant factor, after noise and adaptive blurring, is evaluating multiple fixation points which improved robustness significantly compared to a single fixation point in the center of the image, which suggests that, multiple fixations and saccades are important when the image is hard to recognize and presents a promising direction for future work. Furthermore, not adaptively desaturating the colors reduces the robustness slightly. Finally, we note that dynamic fixation does not improve performance compared to 5 predefined fixation points. To summarize, most of the biologically-motivated components of _R-Blur_ contribute towards improving the adversarial robustness of object recognition DNNs from close to 0% to 45% (\(\ell_{\infty}=0.008\) for Ecoset-10).
Figure 10: Comparison of _R-Blur_ with models trained with non-adaptive Gaussian blur or Gaussian noise augmentations. (a) compares the accuracy under adversarial attack on Ecoset-10 of _R-Blur_ and models augmented with non-adaptive Gaussian blur of various standard deviations (\(\sigma\)). While non-adaptive Gaussian blur does increase robustness, the adaptive blurring in _R-Blur_ outperforms it by a margin. (b) compares the accuracy under adversarial attack on Imagenet of _R-Blur_ and models augmented with either non-adaptive Gaussian blur or Gaussian noise. We see that _R-Blur_ achieves better robustness than either of these methods.
Figure 9: Accuracy on clean and APGD (\(\|\delta\|_{\infty}=0.008\)) perturbed Ecoset-10 images when individual components of _R-Blur_ are removed (left), and when non-adaptive transforms are used (right). Removing the biologically-motivated components of _R-Blur_ harms the robustnessRelated Work
**Non-biological defenses:** Perhaps the most successful class of adversarial defenses are adversarial training algorithms [7; 9; 49; 17; 8], which train models on adversarially perturbed data generated by backpropagating gradients from the loss to the input during each training step. Another popular class of defenses is certified defenses [10; 11; 50; 51] which are accompanied by provable guarantees of the form: with probability \(1-\delta\), the model's output will not change if a given image is perturbed at most \(\epsilon\). Perhaps, most closely related to our work are preprocessing defenses [52; 53] that apply a large number of transforms to the input during inference. Usually, these defenses rely on non-differentiable transformations, and a high degree of randomization in the number, sequence, and parameters of the transforms they apply to each image. Therefore, these defenses tend to obfuscate gradients [47], and have been shown to be compromised by attacks with a higher step budget. We would like to point out that R-Blur does not have these aforementioned pitfalls - the transforms that R-Blur applies (Gaussian blur and desaturation) are fully differentiable and totally deterministic. In general, it is our opinion that by not being cognizant of the biological basis of robust vision, current approaches are excluding a large set of potentially effective approaches for defending against adversarial attacks.
**Biologically inspired defenses:** Several biological defenses have been proposed over the years. These defenses involve integrating computational analogues of biological processes that are absent from common DNNs, such as predictive/sparse coding [16; 17], biologically constrained visual filters, nonlinearities, and stochasticity [18], foveation [19; 20; 21; 22], into DNNs. The resulting models are made more robust to adversarially perturbed data, and have been shown to better approximate the responses of biological neurons [18].
Most relevant to our work are defenses that have integrated foveation with DNNs. One of the earliest works [20] implements foveation by cropping the salient region of the image at inference time. This work has several shortcomings. Firstly, the biological plausibility of this method is questionable because it does not simulate the degradation of visual acuity in the periphery of the visual field, rather it discards the periphery entirely. Secondly, it crops the image after applying the adversarial attack, which means that the attack does not take into account the cropping, which is akin to obfuscating the gradients, and hence any reported improvements in robustness are suspect. A later work [22] (_R-Burp_) avoids the aforementioned pitfalls and simulates foveation via non-uniform sampling (regions further away from the fixation points are sampled less densely). Since this method is fully differentiable and highly biologically plausible, we compare against it in this paper. Some recent works [19; 21] apply foveation in the latent feature space (the intermediate feature maps generated by a CNN). These works implement foveation by changing the receptive field sizes of the convolutional kernels based on the distance to the fixation. Since they operate on the latent feature space, rather than image pixels, their methods not directly comparable to ours.
## 5 Limitations
Adding _R-Blur_ reduces accuracy on clean data, however, it is possible to significantly improve the accuracy of _R-Blur_ by developing better methods for selecting the fixation point. Further experimental results presented in Appendix B show that if the optimal fixation point was chosen by an oracle the clean accuracy of _R-Blur_ can be improved to within 2% of the accuracy of the unmodified ResNet.
## 6 Conclusion
Since the existence of adversarial attacks presents a divergence between DNNs and humans, we ask if some aspect of human vision is fundamental to its robustness that is not modeled by DNNs. To this end, we propose _R-Blur_, a foveation technique that blurs the input image and reduces its color saturation adaptively based on the distance from a given fixation point. We evaluate _R-Blur_ and other baseline models against APGD attacks on two datasets containing real-world images. _R-Blur_ outperforms other biologically inspired defenses. Furthermore, _R-Blur_ also significantly improves robustness to common, non-adversarial corruptions and achieves accuracy greater than that of adversarial training. The robustness achieved by _R-Blur_ is certifiable using the approach from [10] and the certified accuracy achieved by _R-Blur_ is at par or better than that achieved by randomized smoothing [10]. Our work provides further evidence that biologically inspired techniques can improve the accuracy and robustness of AI models.
## References
* [1] Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. _Proceedings of the national academy of sciences_, 111(23):8619-8624, 2014.
* [2] Charles F Cadieu, Ha Hong, Daniel LK Yamins, Nicolas Pinto, Diego Ardila, Ethan A Solomon, Najib J Majaj, and James J DiCarlo. Deep neural networks rival the representation of primate it cortex for core visual object recognition. _PLoS computational biology_, 10(12):e1003963, 2014.
* [3] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In _ICLR_, 2014.
* [4] Robert Geirhos, Carlos RM Temme, Jonas Rauber, Heiko H Schutt, Matthias Bethge, and Felix A Wichmann. Generalisation in humans and deep neural networks. _Advances in neural information processing systems_, 31, 2018.
* [5] Samuel Dodge and Lina Karam. A study and comparison of human and deep learning recognition performance under visual distortions. In _2017 26th international conference on computer communication and networks (ICCCN)_, pages 1-7. IEEE, 2017.
* [6] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. _Advances in neural information processing systems_, 32, 2019.
* [7] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In _International Conference on Learning Representations_, 2018.
* [8] Eric Wong, Leslie Rice, and J Zico Kolter. Fast is better than free: Revisiting adversarial training. In _International Conference on Learning Representations_, 2019.
* [9] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In _International conference on machine learning_, pages 7472-7482. PMLR, 2019.
* [10] Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In _International Conference on Machine Learning_, pages 1310-1320. PMLR, 2019.
* [11] Marc Fischer, Maximilian Baader, and Martin Vechev. Certified defense to image transformations via randomized smoothing. _Advances in Neural Information Processing Systems_, 33:8404-8417, 2020.
* [12] Nicholas Carlini, Florian Tramer, J Zico Kolter, et al. (certified!!) adversarial robustness for free! _arXiv preprint arXiv:2206.10550_, 2022.
* [13] Sander Joos, Tim Van hamme, Davy Preuveneers, and Wouter Joosen. Adversarial robustness is not enough: Practical limitations for securing facial authentication. In _Proceedings of the 2022 ACM on International Workshop on Security and Privacy Analytics_, pages 2-12, 2022.
* [14] Yash Sharma and Pin-Yu Chen. Attacking the madry defense model with \(l\_1\)-based adversarial examples. _arXiv preprint arXiv:1710.10733_, 2017.
* [15] Lukas Schott, Jonas Rauber, Matthias Bethge, and Wieland Brendel. Towards the first adversarially robust neural network model on mnist. _arXiv preprint arXiv:1805.09190_, 2018.
* [16] Dylan M Paiton, Charles G Frye, Sheng Y Lundquist, Joel D Bowen, Ryan Zarcone, and Bruno A Olshausen. Selectivity and robustness of sparse coding networks. _Journal of vision_, 20(12):10-10, 2020.
* [17] Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. Recent advances in adversarial training for adversarial robustness. _arXiv preprint arXiv:2102.01356_, 2021.
* [18] Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David Cox, and James J DiCarlo. Simulating a primary visual cortex at the front of cnns improves robustness to image perturbations. _Advances in Neural Information Processing Systems_, 33:13073-13087, 2020.
* [19] Aditya Jonnalagadda, William Yang Wang, B.S. Manjunath, and Miguel Eckstein. Foveater: Foveated transformer for image classification, 2022.
* [20] Yan Luo, Xavier Boix, Gemma Roig, Tomaso Poggio, and Qi Zhao. Foveation-based mechanisms alleviate adversarial examples. _arXiv preprint arXiv:1511.06292_, 2015.
* [21] Jonathan M Gant, Andrzej Banburski, and Arturo Deza. Evaluating the adversarial robustness of a foveated texture transform module in a cnn. In _SVRHM 2021 Workshop@ NeurIPS_, 2021.
* [22] Manish Reddy Vuyyuru, Andrzej Banburski, Nishka Pant, and Tomaso Poggio. Biologically inspired mechanisms for adversarial robustness. _Advances in Neural Information Processing Systems_, 33:2135-2146, 2020.
* [23] Anne Harrington and Arturo Deza. Finding biological plausibility for adversarially robust features via metameric tasks. In _International Conference on Learning Representations_, 2021.
* [24] Emma E. M. Stewart, Matteo Valsecchi, and Alexander C. Schutz. A review of interactions between peripheral and foveal vision. _Journal of Vision_, 20(12):2-2, 11 2020.
* [25] Thorsten Hansen, Lars Pracejus, and Karl R Gegenfurtner. Color perception in the intermediate periphery of the visual field. _Journal of vision_, 9(4):26-26, 2009.
* ncbi bookshelf, May 2005.
* the university of texas medical school at houston, 2020.
* [28] Farzad Ramezani, Saeed Reza Kheradpisheh, Simon J Thorpe, and Masoud Ghodrati. Object categorization in visual periphery is modulated by delayed foveal noise. _Journal of Vision_, 19(9):1-1, 2019.
* [29] Arturo Deza and Talia Konkle. Emergent properties of foveated perceptual systems, 2021.
* [30] RT Pramod, Harish Katti, and SP Arun. Human peripheral blur is optimal for object recognition. _arXiv preprint arXiv:1807.08476_, 2018.
* [31] Panqu Wang and Garrison W Cottrell. Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. _Journal of vision_, 17(4):9-9, 2017.
* [32] MARY A Johnson. Color vision in the peripheral retina. _American journal of optometry and physiological optics_, 63(2):97-103, 1986.
* [33] Michael A Cohen, Caroline Ostrand, Nicole Frontero, and Phuong-Nghi Pham. Characterizing a snapshot of perceptual experience. _Journal of Experimental Psychology: General_, 150(9):1695, 2021.
* [34] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [35] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research).
* [36] Johannes Mehrer, Courtney J Spoerer, Emer C Jones, Nikolaus Kriegeskorte, and Tim C Kietzmann. An ecologically motivated image dataset for deep learning yields better models of human vision. _Proceedings of the National Academy of Sciences_, 118(8):e2011417118, 2021.
* [37] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. _International journal of computer vision_, 115:211-252, 2015.
* [38] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. _arXiv preprint arXiv:1903.12261_, 2019.
* [39] Lisa J Croner, Keith Purpura, and Ehud Kaplan. Response variability in retinal ganglion cells of primates. _Proceedings of the National Academy of Sciences_, 90(17):8128-8130, 1993.
* [40] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pages 248-255. Ieee, 2009.
* [41] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. _arXiv preprint arXiv:1805.09501_, 2018.
* [42] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. _British Machine Vision Conference_, 2016.
* [43] Jeremy Howard and Sylvain Gugger. Fastai: A layered api for deep learning. _Information_, 11(2):108, 2020.
* [44] Matthias Kummerer, Matthias Bethge, and Thomas SA Wallis. Deepgaze iii: Modeling free-viewing human scanpaths with deep learning. _Journal of Vision_, 22(5):7-7, 2022.
* [45] Ming Jiang, Shengsheng Huang, Juanyong Duan, and Qi Zhao. Salicon: Saliency in context. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1072-1080, 2015.
* [46] Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In _International conference on machine learning_, pages 2206-2216. PMLR, 2020.
* [47] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In _International conference on machine learning_, pages 274-283. PMLR, 2018.
* [48] Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. _arXiv preprint arXiv:1902.06705_, 2019.
* [49] Sylvestre-Alvise Rebuffi, Sven Gowal, Dan Andrei Calian, Florian Stimberg, Olivia Wiles, and Timothy A Mann. Data augmentation can improve robustness. _Advances in Neural Information Processing Systems_, 34:29935-29948, 2021.
* [50] Aounon Kumar and Tom Goldstein. Center smoothing: Certified robustness for networks with structured outputs. _Advances in Neural Information Processing Systems_, 34:5560-5575, 2021.
* [51] Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Certified adversarial robustness with additive noise. _Advances in neural information processing systems_, 32, 2019.
* [52] Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens Van Der Maaten. Countering adversarial images using input transformations. _arXiv preprint arXiv:1711.00117_, 2017.
* [53] Edward Raff, Jared Sylvester, Steven Forsyth, and Mark McLean. Barrage of random transforms for adversarially robust defense. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 6528-6537, 2019.
* [54] Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. In _International conference on machine learning_, pages 284-293. PMLR, 2018.
* [55] Ilya O Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. Mlp-mixer: An all-mlp architecture for vision. _Advances in Neural Information Processing Systems_, 34:24261-24272, 2021.
* [56] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2020.
## Appendix A Preventing Gradient Obfuscation
We take a number of measures to ensure that our results correspond to the true robustness of our method, and we avoid the pitfalls of gradient obfuscation [47, 48].
Firstly, we remove inference time stochasticity from all the models we test. We do this by sampling the Gaussian noise used in _R-Blur_ and _VOneBlock_ once and applying the same noise to all test images. Similarly, we sample the affine transform parameters for _RandAffine_ once and use them for all test images. We also compute the fixation point sequences for _R-Blur_ and _R-Warp_ on unattacked images and do not update them during or after running APGD.
Secondly, we ran APGD for 1 to 100 iterations and observed that as the number of iterations increases the success rate of the attack increases (Figure 10(a)). The success rate plateaus at 50 iterations. Since the attack success rate with 25 steps is only 0.1% lower than the success rate with 50 steps, we run APGD with 25 steps in most of our experiments.
Thirdly, we evaluate _R-Blur_ against AutoAttack [46], an ensemble of 4 state-of-the-art white and black box adversarial attacks. Figure 12 compares the accuracy of _R-Blur_ on Imagenet under APGD and AutoAttack. We see that the accuracy under AutoAttack is only slightly lower than the accuracy under APGD, with the maximum difference being 3%, which would not change any of the trends observed in the paper. Since computing AutoAttack requires a lot of time and compute, and given that it does not decrease accuracy significantly compared to 25-step APGD, we chose to use the latter for most of the results presented in the paper.
Finally, we applied expectation over transformation [54] by computing 10 gradient samples at each APGD iteration and averaging them to obtain the final update. We found this did not change the attack success rate so we take only 1 gradient sample in most of our experiments (Figure 10(b)). Finally, we also used a straight-though-estimator to pass gradients through _R-Blur_ in case it may be obfuscating them and found that doing so reduces the attack success rate, thus indicating that gradients that pass through _R-Blur_ retain valuable information that can be used by the adversarial attack (Figure 10(b)).
## Appendix B Fixation Point Selection
In this study, we did not attempt to develop an optimal fixation point selection algorithm, and instead, we operate under the assumption that points at which humans tend to fixate are sufficiently informative to perform accurate object classification. Therefore, we used DeepGaze-III [44], which is a neural
Figure 11: Accuracy of a _R-Blur_ model trained on Imagenet under APGD attack with different settings. (a) shows the accuracy when APGD attack is applied with different numbers of update steps. (b) shows the accuracy when 10 step of expectation-over-transformation (EOT-10) [54] is used and _R-Blur_ is converted into a straight-through-estimator (STE) in the backward pass. The dashed line in (b) shows the accuracy of a 25-step APGD attack without EOT and normal gradient computation for _R-Blur_. Together these results strongly indicate that _R-Blur_ does not obfuscate gradients and legitimately improves the adversarial robustness of the model.
network model trained to model the human gaze. DeepGaze-III uses a deep CNN backbone to extract features from the image, and based on these features another DNN predicts a heatmap that indicates, for each spatial coordinate, the probability that a human will fixate on it. However, it is possible that this algorithm is sub-optimal, and with further study, a better one could be developed. Though developing such an algorithm is out of the scope of this paper, we conduct a preliminary study to determine if it is possible to select better fixation points than the ones predicted by DeepGaze-III.
To this end, we run the following experiment to pick an optimal fixation point for each image during inference. For each testing image, we select 49 fixation points, spaced uniformly in a grid. Using the models we trained in earlier (see section 3) we obtain predictions for each image and each of the 49 fixation points. If there was at least one fixation point at which the model was able to correctly classify the image, we consider it to be correctly classified for the purpose of computing accuracy. We repeat this experiment for Ecoset-10, Ecoset, and Imagenet, using clean and adversarially perturbed data. We obtain the adversarially perturbed images for each of the 49 fixation points by fixing the fixation point at one location running the APGD attack with \(\ell_{\infty}\)-norm bounded to 0.004. Figure 14 illustrates this experiment with some example images.
The results are presented in Figure 13. We see that when the optimal fixation point is chosen accuracy on both clean and adversarially perturbed data improves, with the improvement in clean accuracy being the most marked. The clean accuracy on Ecoset-10, Ecoset, and Imagenet improved by 5%, 11%, and 10% respectively, which makes the clean accuracy of the _R-Blur_ model on par or better than the clean accuracy achieved by the unmodified ResNet. Furthermore, when the optimal fixation point, is chosen _R-Blur_ obtains higher clean accuracy than _AT_ on all the datasets.
These results are meant to lay the groundwork for future work toward developing methods for determining the optimal fixation point based on the input image. However, they also illustrate that models trained with _R-Blur_ learn features that are not only more adversarially robust features than ResNet but also allow the model to make highly accurate predictions on clean data.
## Appendix C Evaluations With Different Architectures
To demonstrate that the benefits of _R-Blur_ are not limited to CNNs, we trained MLP-Mixer [55] and ViT [56] models with _R-Blur_ preprocessing and evaluated their robustness. We use the configuration of MLP-Mixer referred to as S16 in [55]. Our ViT has a similar configuration, with 8 layers each having a hidden size of 512, an intermediate size of 2048, and 8 self-attention heads. We train both models with a batch size of 128 for 60 epochs on Ecoset-10 using the Adam optimizer. The learning
Figure 12: Accuracy of _R-Blur_ on Imagenet under APGD (blue) and AutoAttack (orange). The accuracy under APGD and AutoAttack is very similar, which shows that evaluating with only APGD provides a reliable measure of _R-Blur_’s robustness
Figure 13: The accuracy obtained on clean and adversarial data when (a) the optimal fixation point was selected, (b) when the five fixation approach from Section 3 was used, and (c) an adversarially trained model was used.
rate of the optimizer is linearly increased to 0.001 over 12 epochs and is decayed linearly to almost zero over the remaining epochs. The results are shown in Figure 15.
We observe that _R-Blur_ significantly improves the robustness of MLP-Mixer models, and achieves greater accuracy than _R-Warp_ at higher levels of perturbations. These results show that the robustness endowed to ResNets by _R-Blur_ was not dependent on the model architecture, and they further strengthen our claim that loss in fidelity due to foveation contributes to the robustness of human and computer vision.
## Appendix D Breakdown of Accuracy Against Common Corruption by Corruption Type
In Figure 16 we break down the performance of the models on common corruptions by higher-level corruption categories. The individual members of each category are listed in Table 2. We see that in most of the categories, _R-Blur_ achieves the highest median accuracy against the most severe corruptions. We also note that _R-Blur_ exhibits a remarkable degree of robustness to noise, which is substantially greater than all the other models we evaluated. It is pertinent to note here that Gaussian noise was just 1 of the 4 types of noise included in the noise category, and thus the performance of _R-Blur_ can not be attributed to overfitting on Gaussian noise during training. Furthermore, robustness to
Figure 14: This figure indicates the locations of the optimal fixation points for some sample images. Each square in the grid corresponds to one of 49 fixation locations and represents the highest resolution region of the image if the model fixates at the center of the square. Squares that are shaded green indicate that the model’s prediction at the corresponding fixation point was correct, while squares shaded red indicate that the model’s prediction at the corresponding fixation point was incorrect. We see that there are certain images in which there are only a few optimal fixation points and they may not be in the center or in the corners of the image.
one type of random noise does not typically generalize to other types of random noise [4]. Therefore, the fact that _R-Blur_ exhibits improved robustness to multiple types of noise indicates that it is not just training on Gaussian noise, but rather the synergy of all the components of _R-Blur_ that is likely the source of its superior robustness.
## Appendix E Sensitivity Analysis of Hyperparameters in _R-Blur_
To measure the influence of the various Hyperparameters of _R-Blur_ we conduct a sensitivity analysis. First, we vary the scale of the Gaussian noise added to the image, the viewing distance during inference, and the value of \(\beta\) from Section 2.5, which is the scaling factor that maps eccentricity (see equation 1 to standard deviation, and measure the impact on accuracy on clean as well as adversarially perturbed data. The results of this analysis are presented in Figure 17. We see that, as expected, increasing the scale of the noise improves accuracy on adversarially perturbed data, however, this improvement does not significantly degrade clean accuracy. It appears that the adaptive blurring is mitigating the deleterious impact of Gaussian noise on clean accuracy. On the other hand, increasing \(\beta\) beyond 0.01 surprisingly does not have a significant impact on accuracy and robustness. We also measured the accuracy on clean and perturbed data after varying the viewing distance (see 2.4) and the number of fixation points over which the logits are aggregated. These results are plotted in Figure 18, and they show that accuracy on clean and perturbed data is maximized when the width of the in-focus region is 48 (this corresponds to \(vd=3\)) and aggregating over more fixation points improves accuracy on clean and perturbed data.
## Appendix F Training Configuration
Table 3 presents the configurations used to train the models used in our evaluation. For all the models the SGD optimizer was used with Nesterov momentum=0.9.
## Appendix G Implementation Details
We used Pytorch v1.11 and Python 3.9.12 to for our implementation. We used the implementation of Auto-PGD from the Torchattacks library ([https://github.com/Harry24k/adversarial-attacks-pytorch](https://github.com/Harry24k/adversarial-attacks-pytorch)). For _R-Warp_ we used the code from the official repo [https://github.com/mvuyyuru/adversary.git](https://github.com/mvuyyuru/adversary.git). Likewise, for _VOneBlock_ we used the code from [https://github.com/dicarlolab/vonenet](https://github.com/dicarlolab/vonenet), and
\begin{table}
\begin{tabular}{c c c c}
**Noise** & **Blur** & **Weather** & **Digital** \\ \hline gaussian noise & defocus blur & snow & contrast \\ shot noise & glass blur & frost & elastic transform \\ impulse noise & motion blur & fog & pixelate \\ speckle noise & zoom blur & brightness & jpeg compression \\ gaussian blur & spatter & saturate \\ \end{tabular}
\end{table}
Table 2: Categories of corruptions used to evaluate robustness to common corruptions. This categorization follows the one from [38]
Figure 15: The accuracy obtained on Ecoset-10 against adversarial perturbations of various \(\ell_{\infty}\) norms when _R-Blur_ is used with ResNet, MLP-Mixer and ViT backbones.
for DeepGaze-III models we used the code from [https://github.com/mathias-k/DeepGaze](https://github.com/mathias-k/DeepGaze). The training code for DeepGaze-III with _R-Blur_ and _R-Warp_ backbones is based on [https://github.com/mathias-k/DeepGaze/blob/main/train_deepgaze3.ipynb](https://github.com/mathias-k/DeepGaze/blob/main/train_deepgaze3.ipynb), and can be found in adversarialML/biologically_inspired_models/src/fixation_prediction/train_deepgaze.py. Our clones of these repositories are included in the supplementary material. For multi-gpu training, we used Pytorch Lightning v1.7.6. We used 16-bit mixed precision training to train most of our models. The code for _R-Blur_ can be found in adversarialML/biologically_inspired_models/src/retina_preproc.py which is part of the supplemental material.
Figure 16: The accuracy achieved by _R-Blur_ and baselines on various classes of common corruptions, proposed in [38]. The boxplot shows the distribution of accuracy values on 4-5 different corruptions in each class applied at different severity levels (x-axis) with 1 referring to least severe and 5 being the most severe corruption. _R-Blur_ generally achieves the highest median accuracy on the highest severity levels.
Figure 17: The impact of the hyperparameters of _R-Blur_ on the accuracy and robustness of models trained on Ecoset-10. (left) the standard deviation of Gaussian noise, and (right) \(\beta\) from Section 2.5.
\begin{table}
\begin{tabular}{c c c c c c c c} Dataset & Method & Batch Size & nEpochs & LR & LR-Schedule & Weight Decay & nGPUs \\ \hline CIFAR-10 & ResNet & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-5 & 1 \\ & _AT_ & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-5 & 1 \\ & _R-Warp_ & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-5 & 1 \\ & _R-Blur_ & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-5 & 1 \\ & _G-Noise_ & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-5 & 1 \\ Ecoset-10 & ResNet & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-4 & 1 \\ & _AT_ & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-4 & 1 \\ & _R-Warp_ & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-4 & 1 \\ & _R-Blur_ & 128 & 0.1 & 60 & L-Warmup-Decay(0.1) & 5e-4 & 1 \\ & _VOneBlock_ & 128 & 0.1 & 60 & L-Warmup-Decay(0.1) & 5e-4 & 1 \\ & _G-Noise_ & 128 & 0.4 & 60 & L-Warmup-Decay(0.2) & 5e-4 & 1 \\ Ecoset & ResNet & 256 & 0.2 & 25 & L-Warmup-Decay(0.2) & 5e-4 & 2 \\ & _AT_ & 256 & 0.2 & 25 & L-Warmup-Decay(0.2) & 5e-4 & 4 \\ & _R-Warp_ & 256 & 0.1 & 25 & L-Warmup-Decay(0.2) & 5e-4 & 4 \\ & _R-Blur_ & 256 & 0.1 & 25 & C-Warmup-2xDecay(0.1) & 5e-4 & 4 \\ & _VOneBlock_ & 256 & 0.1 & 25 & C-Warmup-2xDecay(0.1) & 5e-4 & 4 \\ & _G-Noise_ & 256 & 0.1 & 25 & C-Warmup-2xDecay(0.1) & 5e-4 & 4 \\ Imagenet & ResNet & 256 & 0.2 & 25 & L-Warmup-Decay(0.2) & 5e-4 & 2 \\ & _AT_ & 256 & 0.2 & 25 & L-Warmup-Decay(0.2) & 5e-4 & 4 \\ & _R-Warp_ & 256 & 0.1 & 25 & C-Warmup-2xDecay(0.1) & 5e-4 & 4 \\ & _VOneBlock_ & 256 & 0.1 & 25 & C-Warmup-2xDecay(0.1) & 5e-4 & 4 \\ & _G-Noise_ & 256 & 0.1 & 25 & C-Warmup-2xDecay(0.1) & 5e-4 & 4 \\ \end{tabular}
\end{table}
Table 3: The configurations used to train the models used in our evaluation. L-Warmup-Decay(\(f\)) represents a schedule that linearly warms up and decays the learning rate and \(f\) represents the fraction of iterations devoted to warmup. C-Warmup-2xDecay(0.1) is similar except that the warmup and decay follow a cosine function, and there are two decay phases. Both the schedulers are implemented using torch.optim.lr_scheduler.OneCycleLR from Pytorch.
Figure 18: The impact of the size of the in-focus region by varying the viewing distance (left) and the number of fixation points over which the logits are aggregated (right) on accuracy. The plots are computed from a _R-Blur_ model trained on Imagenet, and the perturbed data is obtained by conducting a 25-step APGD attack with \(\|\delta\|_{\infty}=0.004\). We see that accuracy on clean and perturbed data is maximized when the width of the in-focus region is 48 (this corresponds to \(vd=3\)) and aggregating over more fixation points improves accuracy on clean and perturbed data.
## Appendix H Hardware Details and Computation Cost
We trained our models on compute clusters with Nvidia GeForce 2080 Ti and V100 GPUs. Most of the Imagenet and Ecoset models were trained and evaluated on the V100s, while the CIFAR-10 and Ecoset-10 models were trained and evaluated on the 2080 Ti's.
### Analysis of Computation Cost
Table 4 presents this comparison and shows that R-Blur causes minimal slowdown (1.1x compared to the vanilla ResNet) during both training and testing. Also, increasing the number of fixations slows R-Blur only sub-linearly (5 predefined fixations \(\Rightarrow\) 3x slowdown). Introducing dynamic fixation prediction has a greater impact on speed because each image is assigned different fixation points and so R-Blur/R-Warp can not be applied to them as a single batch. This shortcoming is likely common to most fixation transforms, and is not unique to R-Blur. In fact, under dynamic fixation prediction, R-Blur is faster than R-Warp.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Method & Dynamic Fixation & \# Fixations & Train Speed (img/s) & Test Speed (img/s) \\ \hline ResNet & ✗ & 1 & 410 & 370 \\ \hline \(AT\) & ✗ & 1 & 232 (1.8\(\times\)) & - \\ _VOneBlock_ & ✗ & 1 & 277 (1.5\(\times\)) & 289 (1.3\(\times\)) \\ _R-Warp_ & ✗ & 1 & 377 (1.1\(\times\)) & 314 (1.2\(\times\)) \\ _R-Blur_ & ✗ & 1 & 369 (1.1\(\times\)) & 334 (1.1\(\times\)) \\ _R-Blur_ & ✗ & 5 & - & 111 (3.3\(\times\)) \\ \hline _R-Warp_ & ✓ & 5 & - & 26 (14.2\(\times\)) \\ _R-Blur_ & ✓ & 1 & - & 115 (3.2\(\times\)) \\ _R-Blur_ & ✓ & 5 & - & 29 (12.9\(\times\)) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Training and inference speed on 1 Nvidia 2080Ti, measure in images per second, for each model. The value in the parentheses indicates the slowdown relative to the unmodified ResNet computed as the (train/test) speed of the ResNet divided by the speed of the other method. We see that _R-Blur_, in and of itself, causes a very minimal slowdown (only \(1.1\times\)) during training and testing. Increasing the number of fixations slows R-Blur only sub-linearly (5 predefined fixations lead to 3x slowdown). Introducing dynamic fixation prediction has a greater impact on speed because each image is assigned different fixation points and so R-Blur/R-Warp can not be applied to them as a single batch. This shortcoming is likely common to most fixation transforms, and is not unique to R-Blur. | ## Review
### Summary
This paper introduces R-Blur, a biologically inspired data augmentation technique aimed at enhancing the robustness of deep neural networks against adversarial perturbations and common image corruptions. R-Blur simulates the human visual field by applying adaptive Gaussian blurring and color modulation based on the distance from a fixation point in the image. The effectiveness of R-Blur is evaluated on CIFAR-10, Ecoset, and ImageNet datasets, demonstrating improved robustness compared to standard training methods. The paper provides a well-motivated framework that bridges model robustness with insights from human perception, although some concerns regarding baselines and the role of noise remain.
### Strengths
- The originality of the approach is notable, as it mimics human peripheral vision and applies a unique adaptive Gaussian filtering technique.
- The paper is well-written and presents a clear structure with coherent figures and tables.
- Experimental results validate the robustness of the proposed method against adversarial and common corruptions.
- The motivation behind the method is grounded in biological principles, which adds to its significance.
### Weaknesses
- The choice of baseline methods in the evaluation is limited, lacking comparisons with other relevant techniques like Gaussian data augmentation.
- The dependency on an external fixation model may introduce computational bottlenecks and affect performance.
- Details on experimental setups, such as architecture and training parameters, are insufficiently described.
- The improvement in adversarial robustness is not as substantial as claimed, particularly when compared to adversarial training methods.
- The role of noise in the R-Blur approach is not adequately discussed.
### Questions
- Is the proposed R-Blur frame differentiable?
- How does the fixation model impact the performance, especially in white-box evaluations?
- Can the authors clarify the significance of the parameters used for visual acuity estimation?
- Will the authors provide results for the complete AutoAttack rather than just APGD?
- How does R-Blur handle geometric attacks such as translation and rotation?
- What are the specific contributions of noise, adaptive Gaussian blurring, and fixation points to the overall effectiveness of the model?
### Soundness
**Score:** 3
**Description:** Good - The approach is generally solid, but there are some concerns regarding the choice of baselines and the extent of improvements over existing methods.
### Presentation
**Score:** 3
**Description:** Good - The paper is well-structured and easy to follow, though some figures and captions could be improved for clarity.
### Contribution
**Score:** 3
**Description:** Good - While the paper presents a novel augmentation technique rooted in biological principles, the overall impact remains to be fully established due to some limitations.
### Rating
**Score:** 7
**Description:** Accept: The paper is technically sound with moderate-to-high impact potential, though it requires addressing several weaknesses before final acceptance.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a unique approach to enhancing model robustness by leveraging biological insights. While it has strong merits in originality and clarity, it must address certain weaknesses concerning baseline evaluations and the role of noise. The overall contributions are significant enough to warrant acceptance, provided that the authors consider the raised questions and suggestions for improvement.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Compressed Video Prompt Tuning
Bing Li\({}^{1,2}\) Jiaxin Chen\({}^{2}\) Xiuguo Bao\({}^{3}\) Di Huang\({}^{1,2}\)
\({}^{1}\)SKLSDE, Beihang University, Beijing, China
\({}^{2}\)IRIP Lab, SCSE, Beihang University, Beijing, China
\({}^{3}\)CNCERT/CC, Beijing, China
{libingsy, jiaxinchen, dhuang}@buaa.edu.cn, [email protected]
Corresponding author.
###### Abstract
Compressed videos offer a compelling alternative to raw videos, showing the possibility to significantly reduce the on-line computational and storage cost. However, current approaches to compressed video processing generally follow the resource-consuming pre-training and fine-tuning paradigm, which does not fully take advantage of such properties, making them not favorable enough for widespread applications. Inspired by recent successes of prompt tuning techniques in computer vision, this paper presents the first attempt to build a prompt based representation learning framework, which enables effective and efficient adaptation of pre-trained raw video models to compressed video understanding tasks. To this end, we propose a novel prompt tuning approach, namely Compressed Video Prompt Tuning (CVPT), emphatically dealing with the challenging issue caused by the inconsistency between pre-training and downstream data modalities. Specifically, CVPT replaces the learnable prompts with compressed modalities (_e.g._ Motion Vectors and Residuals) by re-parameterizing them into conditional prompts followed by layer-wise refinement. The conditional prompts exhibit improved adaptability and generalizability to instances compared to conventional individual learnable ones, and the Residual prompts enhance the noisy motion cues in the Motion Vector prompts for further fusion with the visual cues from I-frames. Additionally, we design Selective Cross-modal Complementary Prompt (SCCP) blocks. After inserting them into the backbone, SCCP blocks leverage semantic relations across diverse levels and modalities to improve cross-modal interactions between prompts and input flows. Extensive evaluations on HMDB-51, UCF-101 and Something-Something v2 demonstrate that CVPT remarkably outperforms the state-of-the-art counterparts, delivering a much better balance between accuracy and efficiency.
## 1 Introduction
In recent years, there has been a surge of interest in compressed video understanding [51, 36, 8, 46, 11, 19] due to its great potential in saving the computational and storage cost for on-line processing, which directly performs inference on compressed data and thus bypasses the resource-consuming decoding phase.
Raw videos consist of dense RGB frames conveying coherent and consistent content, while compressed videos are composed of sparsely decoded RGB frames with entire spatial information, _i.e._ Intra-frames (I-frames), and incompletely decoded RGB frames with free but coarse motion clues, _i.e._ Predicted frames (P-frames) or Bidirectionally predicted frames (B-frames). In this case, existing approaches to compressed video understanding generally follow the pre-training and fine-tuning paradigm, which initially pre-trains models on compressed videos from a large database (_e.g._ Kinetics-400 [3]) and fully fine-tunes them on downstream ones. According to different network structures,they can be broadly classified into two categories. (i) [20; 1; 21; 19; 11] apply multi-branch CNNs for individual modalities, which are then fused to deliver final results, as depicted in Figure 1 (a). (ii) [5] employs a modified Vision Transformer (ViT) for all modalities, motivated by the advantage of ViT in reasoning tasks and the ability to handle multi-modalities in text-image [15; 37; 50] and multi-task domains [31; 18], as shown in Figure 1 (b). Despite their effectiveness, such a paradigm suffers two major drawbacks. Firstly, it requires pre-training networks on compressed videos, resulting in a high computational complexity in GPU-day in off-line mode. Several attempts are made to speed up pre-training by loading or partially loading off-the-shelf models built based on raw videos; however, it is still not favorable enough for wide popularization. Secondly, it relies on fully fine-tuning backbones, which incurs a large parameter storage burden, making it challenging for deployment in real-world applications. Furthermore, full fine-tuning tends to disrupt the knowledge learned by the pre-trained model, degrading the generalization performance [54]. The limitations aforementioned highlight the need for a more effective and efficient alternative to address these issues.
Inspired by the advancements of prompting techniques in Natural Language Processing (NLP), researchers from the field of Computer Vision (CV) have been discussing this representation learning paradigm. By freezing the pre-trained model and introducing a small number of learnable parameters on input images or videos, they create decent visual prompts [4; 23]. This paradigm enables efficient information mining from pre-trained models and eliminates the necessity for fully fine-tuning networks. On the other side, raw videos contain richer spatial and temporal clues than compressed ones, and if sufficient and important knowledge can be mined from the models pre-trained on raw videos, it should strengthen representation learning of compressed videos. Both the facts trigger us to investigate a prompt tuning paradigm to compressed video understanding with pre-trained raw video models as a more effective and efficient alternative to the pre-training and fine-tuning one. Although current prompt tuning approaches [32; 16; 24; 4; 9] have achieved successes in various tasks, they generally work in the case that pre-training and downstream data share the same modality and it remains problematic when they have different modalities, _i.e._ the pre-training data are in the raw video (dense RGB frames) modality and the downstream data are in the compressed video modality (I-frames, Motion Vectors and Residuals), as shown in Figure 1 (c).
In this paper, we present a novel paradigm to video understanding in the compressed domain, namely Compressed Video Prompt-Tuning (CVPT). As in Figure 1 (d), our approach freezes the whole pre-trained raw video-based model and learns a few modal-specific visual prompts. It allows us to leverage the common prior knowledge in the feature and semantic spaces shared by the two types of videos, to facilitate representation learning of compressed videos. Specifically, VPT employs a conditional prompt framework with refinement, in which Motion Vectors and Residuals are re-parameterized into conditional prompts. These prompts are more adaptable to individual instances and alleviate the impact of freezing these modalities which can lead to performance degradation. Additionally, CVPT incorporates simple and lightweight modality-complementary prompt blocks, namely Selective Cross-modal Complementary Prompt (SCCP), into the frozen pre-trained model, capturing cross-modal interactions for performance improvement. Concretely, SCCP firstly embeds the I-frame
Figure 1: Different types of approaches to representation learning in compressed videos. (a) and (b) are the two main variants of the existing pre-training and fine-tuning paradigm, where the former is based on multi-stream CNNs and the latter is based on ViT, both of which require pre-training models on compressed videos. (c) is a plain prompt tuning scheme which can be considered as a straightforward attempt to leverage frozen off-the-shelf models pre-trained on raw videos, but it does not well adapt to the given task with degraded performance. (d) CVPT advances (c) by designing conditional prompts for Motion Vectors and Residuals, delivering a novel paradigm which successfully learns powerful representations for compressed videos in a more effective and efficient manner.
tokens as well as the conditional Motion Vector and Residual prompts into a low-dimensional feature space, in pursuit of computational efficiency. Based on attention maps induced by the Residual prompts, the Motion Vector prompts are subsequently refined via suppressing the noisy motions, which are further integrated into the I-frame tokens as complementary motion cues. As a consequence, the cross-modal interaction is significantly strengthened, remarkably facilitating model tuning for efficient representation learning of compressed videos.
The main contributions of this paper are summarized in four-fold:
* We propose CVPT, a novel visual prompt tuning framework, which enables pre-trained raw video models to adapt to compressed video understanding tasks. To the best of our knowledge, this is the first work to address this issue.
* We introduce modal-related conditional prompts to replace conventional learnable ones, proving more adaptable to individual instances and alleviates the inconsistency between pre-training and downstream data modalities.
* We design selective cross-modal complementary prompt blocks, and these blocks substantially strengthen cross-modal interactions and selectively complement dynamic cues in Motion Vectors to input flows.
* We conduct extensive experiments on three widely used benchmarks and achieve newly state-of-the-art results. Notably, our approach maintains the parameter-efficiency with less than 1% parameters being trainable.
## 2 Related Work
### Compressed Video Representation Learning
Early studies commonly adopt multi-stream CNNs and focus on dedicatedly designing the model structures for P/B-frames and fusion modules. The pioneering work by [49] introduces the use of CNNs to process compressed videos. They replace optical flows in the two-stream network [14] with Motion Vectors, resulting in a higher efficiency. CoViAR [45] extends a third stream to leverage all the modalities, including I-frames, Motion Vectors, and Residuals, completely bypassing video decoding. To improve the quality of dynamics, [2] applies image denoising techniques, while [40] employs GANs to generate Motion Vectors that approximate optical flows. [22] and [1] adopt a similar approach by feeding compressed modalities into a CNN model to mimic a raw video-based teacher. [22] presents Temporal Trilinear Pooling for fusing multiple modalities in lightweight models suitable for portable devices. Inspired by SlowFast [12, 30] propose the Slow-I-Fast-P model, where pseudo optical flows are estimated by a specific loss function. MM-ViT [5] is the first work using ViT on compressed videos, factorizing the self-attention across the space, time and modality dimensions.
The advent of self-supervised learning, particularly contrastive learning and Masked AutoEncoder (MAE), has led to the exploration of self-supervised pretext tasks for compressed videos. In this context, IMRNet [48] stands out as the first work to delve into self-supervised learning for compressed videos. It employs a three-stream network to individually encode I-frames, Motion Vectors, and Residuals, and introduces two pretext tasks that make use of motion statistics and internal GOP (Group of Pictures) structures in the compressed domain. Another notable approach is MVCGC [21], which delivers a cross-guidance contrastive learning framework with multi-views. MVCGC utilizes Motion Vectors as supervision signals for RGB frames and vice versa.
To sum up, the existing approaches in compressed video understanding involve the design of specialized modules or pretext tasks, which makes the pre-training and fine-tuning paradigm resource-consuming, not favorable enough to various downstream applications. In contrast, our work presents a prompt tuning based alternative, where necessary knowledge are mined from the off-the-shelf model pre-trained on raw videos through only a few modal-specific visual prompts, and is more effective and efficient.
### Visual Prompt Tuning
Prompt tuning has emerged as a promising way to efficient representation learning in the field of NLP. By leveraging textual prompts, researchers have successfully reformulated downstream tasksto resemble pre-training tasks that have already been solved [28; 33]. This approach offers several advantages over full fine-tuning, such as the high parameter efficiency and the reduced storage cost. It has also shown promising results in various CV applications. For instance, VPT [23] introduces learnable parameters to Transformer encoders and achieves superior performance compared to full fine-tuning on multiple recognition tasks. AdaptFormer [7] enhances ViT models with lightweight modules and surpasses the performance of fully fine-tuned models in action recognition benchmarks. Convpass [25] proposes convolutional bypasses for prompt learning in pre-trained ViT models. Some prompting attempts have also been made to deal with multi-modal tasks, such as visual and language. CoOp [53] fine-tunes CLIP using a continuous set of prompt vectors, while Co-CoOp [52] improves its generalization by conditioning prompts on image instances. [26] strengthens CLIP for video understanding tasks by adding lightweight Transformers on visual features and text prompts. MaPLe [27] optimizes the prompting mechanism in both the vision and language branches to improve the adaptability of CLIP.
Different from current prompt-learning approaches in NLP and CV that primarily apply in the case that pre-training and downstream data share the same modality, our work further addresses the challenge arising from the inconsistency between pre-training and downstream data modalities, _i.e._ the pre-training data are dense RGB frames while the downstream data are sparse I-frames and coarse Motion Vectors and Residuals.
## 3 The Proposed Approach
In this section, we describe the details of the proposed Compressed Video Prompt Tuning (CVPT) approach. Briefly, CVPT employs a conditional prompt framework with refinement as shown in Figure 2, by proposing a Selective Cross-modal Complementary Prompt (SCCP), which are depicted in the rest part.
### Conditional Prompt with Refinement
Prompt tuning [28; 33] is commonly utilized to efficiently fine-tune a model pre-trained on the upstream tasks to align with the downstream tasks. Existing prompt tuning approaches often assume
Figure 2: Framework of the proposed CVPT. It firstly embed the input compressed videos, consisting of I-frames, Motion Vectors and Residuals, into the I-frame tokens \(\mathbf{E}_{\rm I}^{0}\), the conditional Motion Vector prompts \(\mathbf{P}_{\rm MV}^{0}\) and the Residual prompts \(\mathbf{P}_{\rm R}^{0}\), respectively. A frozen pre-trained Transformer with \(L\) layers is adopt for feature extraction. The Selective Cross-modal Complementary Prompt (SCCP) blocks are employed to perform conditional prompt learning as well as facilitate the interactions between the prompts and the I-frame tokens for efficient fine-tuning.
that the pre-trained task and the downstream task share the same data modality. However, when fine-tuning for compressed video-based applications, the data modality differs as the downstream tasks adopt the compressed videos, and the pre-trained tasks utilize the raw videos. A straightforward fine-tuning without considering the modality gap often severely deteriorates the performance, since the Motion Vectors and Residuals are unique to compressed videos and have distinct data distributions from the raw videos. To address this issue, existing works employ an extra feature extraction network that may negatively abate the inter-modal interactions; or modify the Transformer structure to process the multi-modal compressed video data, which however still requires pre-training and full fine-tuning.
In this paper, we propose an efficient prompt tuning approach for compassed video-based tasks. Motivated by [54], we re-parameterize the Motion Vectors and Residuals into prompts and enhance the cross-modal interactions with I-frames for prompt learning. Different from [54], we embed the Motion Vectors and Residuals into distinct types of prompts, and refine the Motion Vector prompts by leveraging the Residual counterparts, which are further integrated into the I-frame Motion Vectors as strengthened complements of motion cues.
Concretely, the input I-frames \(\mathbf{X}_{\rm I}\in\mathbb{R}^{3\times T_{\rm T}\times H\times W}\) are firstly encoded into the I-frame tokens \(\mathbf{E}_{\rm I}^{0}\in\mathbb{R}^{N\times d}\) by a frozen patch embedding layer, where \(H\), \(W\), \(d\) represent the frame height, frame width and the encoding dimension, respectively. The Motion Vectors \(\mathbf{X}_{\rm MV}\in\mathbb{R}^{2\times T_{\rm P}\times H\times W}\) and the Residuals \(\mathbf{X}_{\rm R}\in\mathbb{R}^{3\times T_{P}\times H\times W}\) are encoded by learnable patch embedding layers, generating the conditional Motion Vector prompts \(\mathbf{P}_{\rm MV}^{0}\in\mathbb{R}^{N\times d}\) and the Residual prompts \(\mathbf{P}_{\rm R}^{0}\in\mathbb{R}^{N\times d}\), respectively. \(T_{\rm I}/T_{\rm P}\) denote the number of I/P-frames.
The I-frame tokens \(\mathbf{E}_{\rm I}^{0}\) together with the conditional prompts \(\mathbf{X}_{\rm MV}\) and \(\mathbf{X}_{\rm R}\) are comprehensively integrated by the proposed Selective Cross-modal Complementary Prompt (SCCP) at the first layer denoted by \(\mathcal{M}_{1}(\cdot)\), where the output are fed into the Transformer block as complementary motion cues to the I-frame tokens, as well as inputs to the SCCP block from the subsequent layer. The SCCP blocks are successively stacked by aligning with the original ViT layers. The overall forward propagation process for prompt tuning can be briefly formulated as follows:
\[[\mathbf{P}_{\rm MV}^{l},\mathbf{P}_{\rm R}^{l},\mathbf{E}_{\rm I}^{l-1}] =T_{l}\left(\left[\mathbf{\hat{P}}_{\rm MV^{\prime}}^{l-1},\mathbf{\hat{P }}_{\rm R^{\prime}}^{l-1},\mathbf{E}_{\rm I}^{l-1}+\mathbf{\hat{E}}_{\rm I^{\prime}}^ {l-1}\right]\right),\qquad l=1,2,...L; \tag{1}\] \[\mathbf{y} =\text{CLS\_Head}\left(\left[\mathbf{P}_{\rm MV}^{l},\mathbf{P}_{\rm R}^{ l},\mathbf{E}_{\rm I}^{l}\right]\right). \tag{2}\]
where \(L\) is the number of Transformer layers, \(T_{l}(\cdot)\) and \(\text{CLS\_Head}(\cdot)\) refer to the Transformer block at the \(l\)-th layer and the classification head, respectively. Here, \([\mathbf{\hat{P}}_{\rm MV^{\prime}}^{l-1},\mathbf{\hat{P}}_{\rm R^{\prime}}^{l-1},\mathbf{ \hat{E}}_{\rm I^{\prime}}^{l-1}]\) represents the output of the SCCP block at the \(l\)-th layer, which will be elaborated in Section 3.2.
### Selective Cross-modal Complementary Prompt
Given the I-frame tokens \(\mathbf{E}_{\rm I}^{0}\), the conditional Motion Vector prompts \(\mathbf{P}_{\rm MV}^{0}\), the Residual prompts \(\mathbf{P}_{\rm R}^{0}\), and a frozen pre-trained encoder \(\mathcal{G}_{RGB}\) containing \(L\) Transformer layers, SCCP aims to complement the motion cues from the conditional prompts to the I-frame branch. Generally, SCCP aggregates the tokens and prompts from multiple modalities of the previous layer to generate the new ones at the current layer, which is formulated as below:
\[\left[\mathbf{P}_{\rm MV}^{l-1},\mathbf{P}_{\rm R^{\prime}}^{l-1},\mathbf{E}_{\rm I^{\prime }}^{l-1}\right]=\mathcal{M}_{l}\left(\left[\mathbf{P}_{\rm MV}^{l-1},\mathbf{P}_{\rm R} ^{l-1},\mathbf{E}_{\rm I}^{l-1}\right]\right),\qquad l=1,2,...L. \tag{3}\]
It is worth noting that the Motion Vectors, which are block-wise motion cues estimated by matching, tend to be coarse and noisy compared to the optical flow counterparts displayed in Figure 3(a). As depicted in [39], motion information near object boundaries plays a crucial role in representation learning. Inspired by this insight, we generate an attention map to select refined motion cues from the Motion Vectors by leveraging the Residuals, based on the observation that the Residuals are generally well-aligned with the boundaries of moving objects and strongly correlated with Motion Vectors as observed in [40]. The refined motion cues are successively adopted as a informative complements to the RGB modality, thus boosting learning discriminative representations.
The detailed design of SCCP is illustrated in Figure 3(b). Specifically, at the \(l\)-th layer, the input of SCCP consists of the I-frame tokens \(\mathbf{E}_{\rm I}^{l-1}\), the conditional prompts \(\mathbf{P}_{\rm MV}^{l-1}\) and \(\mathbf{P}_{\rm R}^{l-1}\). In order to obtain compact representations and reduce the overall size of parameters, we firstly embed the \(C\)-dimensional input into a \((C/\alpha)\)-dimensional feature space as follows:
\[\mathbf{\hat{E}}_{\rm I}^{l-1}=g_{1}(\mathbf{E}_{\rm I}^{l-1}),\quad\mathbf{\hat{P}}_{\rm MV }^{l-1}=g_{2}(\mathbf{P}_{\rm MV}^{l-1}),\quad\mathbf{\hat{P}}_{\rm R}^{l-1}=g_{3}(\mathbf{ P}_{\rm R}^{l-1}), \tag{4}\]where \(\alpha\) is the reduction factor, the embedding functions \(g_{1}(\cdot)\), \(g_{2}(\cdot)\), and \(g_{3}(\cdot)\) are simple \(1\times 1\) convolutions. In the ViT-B backbone, the input feature dimension \(C\) is 768, and the reduction factor \(\alpha\) is set to 96 in all prompt blocks. As the scale of backbone grows, \(\alpha\) usually increases proportionally. In the case of Swin Transformer, as the feature dimension \(C\) becomes larger in deeper layers, we gradually increase it to maintain the same embedding feature size across different layers.
After feature embedding, we generate a spatial-temporal attention map \(\mathcal{A}\) by performing \(\mathrm{Softmax}(\cdot)\) on the embedded Residual prompts \(\mathbf{\hat{P}}_{\mathrm{R}}^{l-1}\). To strengthen the smoothness of the attention map, we introduce a learnable parameter. The attention map is subsequently applied to select representative motion cues from the embedded Motion Vector prompts \(\mathbf{\hat{P}}_{\mathrm{MV}}^{l-1}\) as the following:
\[\mathbf{\hat{P}}_{\mathrm{MV}}^{l-1}=\mathcal{A}\odot\mathbf{\hat{P}}_{\mathrm{MV}}^{ l-1}=\mathrm{Softmax}(\omega\times\mathbf{\hat{P}}_{\mathrm{R}}^{l-1})\odot\mathbf{ \hat{P}}_{\mathrm{MV}}^{l-1}. \tag{5}\]
The refined prompts \(\mathbf{\hat{P}}_{\mathrm{MV}}^{l-1}\) is further integrated into the intermediate embedding \(\mathbf{\hat{E}}_{\mathrm{I}}^{l-1}\) by addition, in order to complement the motion cues as below
\[\mathbf{\hat{E}}_{\mathrm{I}}^{l-1}:=\mathbf{\hat{E}}_{\mathrm{I}}^{l-1}+\mathbf{\hat{P}}_ {\mathrm{MV}}^{l-1}. \tag{6}\]
After performing the above cross-modal interactions, the embedded tokens \(\mathbf{\hat{E}}_{\mathrm{I}}^{l-1}\), the embedded prompts \(\mathbf{\hat{P}}_{\mathrm{MV}}^{l-1}\) and \(\mathbf{\hat{P}}_{\mathrm{R}}^{l-1}\) are finally project back to the high-dimensional ones by using the following formulation:
\[\mathbf{\hat{E}}_{\mathrm{I}^{\prime}}^{l-1}=g_{4}(\mathbf{\hat{E}}_{\mathrm{I}}^{l-1 }),\quad\mathbf{\hat{P}}_{\mathrm{MV}^{\prime}}^{l-1}=g_{5}(\mathbf{\hat{P}}_{\mathrm{ MV}}^{l-1}),\quad\mathbf{\hat{P}}_{\mathrm{R}^{\prime}}^{l-1}=g_{6}(\mathbf{\hat{P}}_{ \mathrm{R}}^{l-1}), \tag{7}\]
where \(g_{4}(\cdot)\), \(g_{5}(\cdot)\), and \(g_{6}(\cdot)\) are \(1\times 1\) convolutions.
In order to comprehensively enhance the representation learning, the SCCP block is applied to each Transformer block, and is successively stacked, _i.e._ the conditional prompts \(\mathbf{P}_{\mathrm{MV}}^{l-1}\) and \(\mathbf{P}_{\mathrm{R}}^{l-1}\) are updated in each SCCP block and is fed into the next SCCP block as input.
Figure 3: (a) Illustration on the optical flows representing accurate motion cues, the coarse Motion Vectors and the Residuals. The red rectangles highlight the areas that Motion Vectors contain noises, which however can be refined by leveraging the Residuals. (b) Detailed architecture of the proposed Selective Cross-modal Complementary Prompt.
Experimental Results and Analysis
### Datasets and Evaluation Metric
**HMDB-51** and **UCF-101** are two relatively small datasets, which contain 6766 videos from 51 action categories and 13,320 videos from 101 categories, respectively. **Something-Something v2** (SSv2) is a large-scale motion-centric video dataset, including 168,913 videos for training and 24,777 videos for validation from 174 categories. As a typical video understanding task, we choose action recognition and report top-1 accuracy as the evaluation metric in all experiments.
### Implementation Details
By following [45; 29], we convert the raw videos into the MP4 compressed format with a GOP size of 12, where each GOP consists of 1 I-frame and 11 P-frames. We collect I-frames, Motion Vectors and Residuals from 4 consecutive GOPs, totally containing 4 I-frames and 12 P-frames. Within each GOP, we uniformly sample 3 P-frames. In regards of the backbone network, we utilize two representative video Transformer architectures based on ViT [10] and Swin Transformer [34] pre-trained on Kinetics-400 with raw videos, respectively. We adopt the original model configurations and train the prompt parameters using the AdamW optimizer [35] on 12 NVIDIA V100 GPUs. We apply random scaling, corner cropping, and horizontal flipping as data augmentation. The base learning rate, weight decay and batch size are set to \(1\times 10^{-3}\), \(1\times 10^{-4}\) and 240, respectively. Additionally, we adopt a warm-up strategy within the first 5 training epochs.
### Comparison with the State-of-the-art Approaches
**Compressed video-based methods.** We firstly compare our method with the following full fine-tuning compressed video-based models: 1) self-supervised pre-training based ones including CoViAR [45], IMRNet [48], MVCGC [21] and the Full Fine-tuning baseline; and 2) supervised pre-training based ones including Refined-MV [2], IPTSN [20], SIFP-Net [30], MEACI-Net [29] and MM-ViT [5]. Table 1 and Table 2 summarize the comparison results on the HMDB-51, UCF-101 and SSv2 datasets. Generally, the proposed CVPT method achieves comparable or even better performance than the full fine-tuning ones, by tuning less than 1% of the overall parameters. Notably, when adopting the same ViT-B backbone, CVPT improves the top-1 accuracy of MM-ViT/Full Fine-tuning by 2.1%/0.9% on UCF-101, and 2.9%/0.8% on SSv2, via tuning only 0.6% of the parameters.
Besides the full fine-tuning methods, we also compare with representative efficient fine-tuning models, including the Linear Probe baseline, VPT [23] and AdaptFormer [7]. Since VPT and AdaptFormer only report results on raw videos, we re-implement and evaluate them on compressed videos, denoted by VPT\({}^{*}\) and Adaptformer\({}^{*}\), respectively. As displayed, Linear Probe exhibits the lowest performance as it only optimizes the classification head. VPT\({}^{*}\) slightly outperforms Adaptformer\({}^{*}\), but both of their performance are clearly inferior to the fine-tuning counterparts. This suggests that these methods struggle to effectively align the upstream data for pre-training the downstream data for fine-tuning. In contrast, our approach leverages inserted prompt blocks to facilitate the interaction of multimodal information, boosting the accuracy of VPT\({}^{*}\) and Adaptformer\({}^{*}\) by 10.8% and 12.0% on HMDB-51, 5.4% and 7.1% on UCF-101, and 4.8% and 5.6% on SSv2, respectively, when pre-training with the self-supervised learning.
In addition to the accuracy, our approach also has advantages in its training efficiency and facilitation. Concretely, unlike IPTSN that adopts a complex knowledge distillation model based on a raw video-based teacher during fine-tuning, or MVCGC that mines cross-modality information from pre-trained RGB models through self-supervised learning, our method explores the efficient tuning and reaches a promising accuracy. Moreover, as shown in Table 1 and Table 2, our approach consistently promotes the performance by different pre-training strategies and network architectures (_e.g._ ViT-L), highlighting its scalability.
**Raw video-based methods.** As summarized in Table 1 and Table 2, raw video-based fine-tuning methods generally reach higher performance than compressed video-based counterparts, since the raw videos convey more coherent and consistent information. Nevertheless, CVPT significantly reduces this gap, and even surpasses both VPT and Adaptformer that are fine-tuned on raw videos. The reason lies in that CVPT explores the Motion Vectors and Residuals as prompts to complement the missing motion cues caused by sparse sampling of RGB frames in a more comprehensive way.
Additionally, CVPT is capable of directly fine-tuning an off-the-shelf model pre-trained on raw-videos to the domain of compressed videos, thus simultaneously saving extra computational cost in alternatively pre-training on the specific compressed data.
### Ablation Study
To evaluate the effectiveness of components in CVPT, we conduct extensive ablation study. Without loss of generality, we utilize ViT-B pre-trained by self-supervised learning as the backbone.
**On the Main Components.** We firstly evaluate the effect of the main components of the proposed method. Specifically, we employ linear probe as the baseline method. As shown in Table 3, the proposed CPR module without refinement, which only updates the conditional prompts in the first layer, clearly improve the baseline. By introducing the SCCP module, which enriches the cross-modal information between I-frames and conditional prompts of Motion Vectors and Residuals, the
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{2}{c|}{Method} & \multicolumn{1}{c|}{Modality} & Backbone & Input [M] & GFLOPs & PT & Tunable Params [M] & HMDB-51 & UCF-101 \\ \hline \multirow{8}{*}{**Voxel**} & CoCLR [17] & RGB+OF & S3D & 26.2 & 182.0 & SSL & 17.5 (100\%) & 62.9 & 90.6 \\ & RSPNet [6] & RGB & S3D-G & 6.0 & 91.0 & SSL & 9.6 (100\%) & 64.7 & 93.7 \\ & CVPR [38] & RGB & R3D-50 & 48.2 & 402.0 & SSL & 46.9 (100\%) & 65.4 & 92.1 \\ & \(\mu\)BYOL [13] & RGB & R3D-50 & 12.1 & 402.0 & SSL & 46.9 (100\%) & 73.6 & 95.5 \\ & STS [43] & RGB & S3D-G & 96.3 & 689.0 & SSL & 9.6 (100\%) & 62.0 & 89.0 \\ & M\({}^{\dagger}\)Video [41] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & 75.4 & 96.1 \\ & MVD-B [44] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & 76.4 & 97.0 \\ & MotionMAE [47] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & - & 96.3 \\ & VideosMAE [42] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & 73.3 & 96.1 \\ & VPT [23] & RGB & ViT-B & 12.1 & 1089.6 & SSL & 0.1 (0.09\%) & 52.7 & - \\ & AdaptFormer [7] & RGB & ViT-B & 12.1 & 1093.8 & SSL & 1.3 (1.46\%) & 55.7 & - \\ \hline \hline \multirow{8}{*}{**Voxel**} & CoV1AR [45] & I-Frame+MV+Res & ResNet152 & 3.8 & 1222.0 & SSL & 142.5 (100\%) & 37.1 & 63.7 \\ & IMRNet [48] & I-Frame+MV+Res & R3D-18 & 3.8 & - & SSL & 100.8 (100\%) & 45.0 & 76.8 \\ & MVVGC [21] & RGB+MV & S3D & 15.7 & - & SSL & 17.5 (100\%) & 63.4 & 90.8 \\ & Refined-MV [2] & I-Frame+MV+Res & ResNet152 & - & - & SL & 142.5 (100\%) & 59.7 & 89.9 \\ & IFFSN [20] & I-Frame+MV+Res & ResNet152 & 6.8 & - & SL & 130.8 (100\%) & 69.1 & 93.4 \\ & SIFP-Net [30] & I-Frame+MV+Res & I3D & 8.1 & - & SL & 92.8 (100\%) & 72.3 & 94.0 \\ & MEACI-Net [29] & I-Frame+MV+Res & I3D & 0.7 & 269.4 & SL & 50.7 (100\%) & 74.4 & 96.4 \\ & MM-ViT [5] & I-Frame+MV+Res & ViT-B & 8.1 & - & SL & 86.4 (100\%) & - & 93.4 \\ & Full Fine-tuning & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SSL & 86.4 (100\%) & 60.3 & 88.1 \\ \cline{2-8} & Linear Probe & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SSL & 0.1 (0.08\%) & 47.8 & 76.8 \\ & VPT [23] & I-Frame+MV+Res & ViT-B & 6.1 & 778.2 & SSL & 0.1 (0.09\%) & 52.1 & 83.6 \\ & AdaptFormer [7] & I-Frame+MV+Res & ViT-B & 6.1 & 780.6 & SSL & 1.3 (1.46\%) & 50.9 & 81.9 \\ \cline{2-8} & **CVPT (Ours)** & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SSL & 0.5 (0.6\%) & 62.9 & 89.0 \\ & **CVPT (Ours)** & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SL & 0.5 (0.6\%) & 69.7 & 95.5 \\ & **CVPT (Ours)** & I-Frame+MV+Res & ViT-L & 6.1 & 2569.6 & SL & 0.1 (0.03\%) & 81.5 & 98.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with the state-of-the-art approaches w.r.t. the top-1 accuracy (%) on HMDB-51 and UCF-101 pre-trained on Kinetics-400: \({}^{*}*\) indicate the results by our re-implementation using the same configuration. \({}^{*}\)PT\({}^{*}\) stands for pre-training. \({}^{*}\)SL\({}^{*}\) and \({}^{*}\)SSL\({}^{*}\) refer to supervised learning and self-supervised learning, respectively.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \multicolumn{2}{c|}{Method} & \multicolumn{1}{c|}{Modality} & Backbone & Input [M] & GFLOPs & PT & Tunable Params [M] & SSV2 \\ \hline \multirow{8}{*}{**Voxel**} & RSPNet [6] & RGB & S3D-G & 6.0 & 91.0 & SSL & 9.6 (100\%) & 55.0 \\ & \(\rho\)BYOL [13] & RGB & R3D-50 & 12.1 & 402.0 & SSL & 46.9 (100\%) & 55.8 \\ & MVD-B [44] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & 72.5 \\ & M\({}^{\dagger}\)Video [41] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & 70.5 \\ & MotionMAE [47] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & 71.8 \\ & VideoMAE [42] & RGB & ViT-B & 24.1 & 1080.0 & SSL & 86.4 (100\%) & 70.3 \\ & VPT [23] & RGB & ViT-B & 12.1 & 1089.6 & SSL & 0.1 (0.09\%) & 43.7 \\ & AdaptFormer [7] & RGB & ViT-B & 12.1 & 1093.8 & SSL & 1.3 (1.46\%) & 59.0 \\ \hline \hline \multirow{8}{*}{**Voxel**} & MM-ViT [5] & I-Frame+MV+Res & ViT-B & 8.1 & - & SL & 86.4 (100\%) & 62.6 \\ & Full Fine-tuning & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SSL & 86.4 (100\%) & 57.6 \\ \cline{1-1} & Linear Probe & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SSL & 0.1 (0.08\%) & 33.8 \\ \cline{1-1} & VPT [23] & I-Frame+MV+Res & ViT-B & 6.1 & 778.2 & SSL & 0.1 (0.09\%) & 53.6 \\ \cline{1-1} & AdaptFormer [7] & I-Frame+MV+Res & ViT-B & 6.1 & 780.6 & SSL & 1.3 (1.46\%) & 52.8 \\ \hline \hline \multirow{8}{*}{**Voxel**} & Linear Probe & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SSL & 0.5 (0.6\%) & 62.9 & 89.0 \\ & VPT [23] & I-Frame+MV+Res & ViT-B & 6.1 & 772.2 & SSL & 0.5 (0.6\%) & 69.7 & 95.5 \\ \cline{1-1} & **CVPT (Ours)** & I-Frame+MV+Res & ViT-L & 6.1 & 2569.6 & SL & 0.1 (0.03\%) & 81.5 & 98.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison with the state-of-the-art approaches w.r.t. the top-1 accuracy (%) on SSV2 pre-trained on Kinetics-400. \({}^{*}*\) indicate the results by our re-implementation using the same configuration. ’PT’ stands for pre-training. ‘SL’ and ‘SSL’ refer to supervised learning and self-supervised learning, respectively.
performance is remarkably boosted. Finally, applying the refinement module on CPR further obtains gain in performance, leading to the highest accuracy.
**On the Selective Cross-modal Complementary Prompt (SCCP).** To demonstrate the effect of the proposed SCCP module, we compare it to the following two alternative architecture designs as shown in Figure 5: 1) removing the \(\mathbf{P}_{\mathrm{R}}\) prompt and separately embedding the Motion Vectors and Residuals into distinct two types of prompts as displayed in Figure 5(a); 2) abandoning the Residual-guided attention and directly adding to the branch of Motion Vectors as displayed in Figure 5(b). As summarized in Table 4, both of the two above alternative designs yield worse performance than the proposed SCCP module, thus demonstrating the effectiveness of employing the Residual prompt to facilitate extracting more informative dynamic cues for the downstream task.
Besides, we investigate the impact of incorporating different numbers of SCCP blocks into the backbone. The insertion process begins from the first layer and increments by three layers in each iteration. As illustrated in Figure 4, we observe that the overall performance on both datasets is consistently improved as the number of inserted SCCP blocks increases.
**On Distinct Backbones.** We evaluate the performance of our method by using different backbones, compared to the Full Fine-tuning baseline. As summarized in Table 5, our approach steadily outperforms Full Fine-tuning for various backbones by distinct pre-training strategies.
### On the Efficiency of CVPT
We present a comprehensive comparison of inference time between the proposed CVPT method and the representative raw video-based approach VideoMAE. As shown in Table 6, our approach
\begin{table}
\begin{tabular}{c|c|c|c} \hline Method & Total Params. [K] & HMDB-51 & UCF-101 \\ \hline SCCP w/o attention & 39.2 & 59.6 & 86.9 \\ SCCP w/o \(\mathbf{P}_{\mathrm{R}}\) prompt & 19.2 & 57.6 & 85.5 \\ \hline SCCP & 39.2 & 61.4 & 87.6 \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study (%) on different structures of the proposed SCCP module.
Figure 4: The influence of the number of SCCP blocks on the proposed method.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline CPR w/o refinement & SCCP & CPR & HMDB-51 & UCF-101 \\ \hline & & & 47.8 & 76.8 \\ ✓ & & & 55.7 & 83.9 \\ ✓ & ✓ & & 61.4 & 87.6 \\ & ✓ & ✓ & 62.9 & 89.0 \\ \hline \end{tabular}
\end{table}
Table 3: Ablation results (%) on the main components of CVPT.
significantly reduces the pre-processing time without the need for video decoding, while maintaining a comparable model inference time, thus remarkably decreasing the overall time cost.
### Qualitative Results
We visualize the classification maps generated by CVPT, compared to several state-of-the-art compressed video-based approaches including Linear Probe, Adaptformer*, VPT* and Full Fine-tuning. As shown in Figure 6, our method exhibits a broader focus on motion regions than those compared methods.
## 5 Conclusion and Discussion
We propose a novel prompt tuning method, namely CVPT, for compressed videos. Our approach leverages Motion Vectors and Residuals to generate conditional prompts. Furthermore, we employ the SCCP blocks to fully explore the discriminative dynamic cues in Motion Vectors as complements to I-frames. Extensive experimental results on public benchmarks demonstrate the efficiency and effectiveness of CVPT, compared to the current state-of-the-art approaches for efficient fine-tuning.
Despite the promising performance, the scalability of the proposed CVPT method to various vision tasks such as compressed video-based segmentation and tracking has not been investigated. We will further study this problem in our future research.
## Acknowledgment
This work is partly supported by the National Key R&D Program of China (2021ZD0110503), the National Natural Science Foundation of China (62022011 and 62202034), the Research Program of State Key Laboratory of Software Development Environment, and the Fundamental Research Funds for the Central Universities.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Method & Pre-Process & Model Inference & Full Pipeline \\ \hline VideoMAE [42] & 2496.9 & 26.9 & 2523.8 \\ \hline CVPT (Ours) & 238.3 & 25.0 & 263.3 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of inference time (ms) per video.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Backbone} & \multicolumn{2}{c|}{ViT-B (SSL)} & \multicolumn{2}{c|}{ViT-B (SL)} & \multicolumn{2}{c}{Swin-B (SL)} \\ \cline{2-7} & HMDB-51 & UCF-101 & HMDB-51 & UCF-101 & HMDB-51 & UCF-101 \\ \hline Full Fine-tuning & 60.3 & 88.1 & 66.5 & 94.3 & 62.9 & 92.0 \\ \hline
**Ours** & 62.9 (+2.6) & 89.0 (+0.9) & 69.7 (+3.2) & 95.5 (+1.2) & 64.9 (+2.0) & 94.0 (+2.0) \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study (%) on different backbones.
Figure 6: Visualization of the last layer feature map generated by various approaches on UCF-101.
## References
* [1] Barak Battash, Haim Barad, Hanlin Tang, and Amit Bleiweiss. Mimic the raw domain: Accelerating action recognition in the compressed domain. In _CVPR Workshop_, 2020.
* [2] Haoyuan Cao, Shining Yu, and Jiashi Feng. Compressed video action recognition with refined motion vector. _arXiv preprint arXiv:1910.02533_, 2019.
* [3] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In _CVPR_, 2017.
* [4] Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, and Sijia Liu. Understanding and improving visual prompting: A label-mapping perspective. In _CVPR_, 2023.
* [5] Jiawei Chen and Chiu Man Ho. Mm-vit: Multi-modal video transformer for compressed video action recognition. In _WACV_, 2022.
* [6] Peihao Chen, Deng Huang, Dongliang He, Xiang Long, Runhao Zeng, Shilei Wen, Mingkui Tan, and Chuang Gan. Rspnet: Relative speed perception for unsupervised video representation learning. In _AAAI_, 2021.
* [7] Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, and Ping Luo. Adaptformer: Adapting vision transformers for scalable visual recognition. In _NeurIPS_, 2022.
* [8] Weidong Chen, Dexiang Hong, Yuankai Qi, Zhenjun Han, Shuhui Wang, Laiyun Qing, Qingming Huang, and Guorong Li. Multi-attention network for compressed video referring object segmentation. In _ACM MM_, 2022.
* [9] Rajshekhar Das, Yonatan Dukler, Avinash Ravichandran, and Ashwin Swaminathan. Learning expressive prompting with residuals for vision transformers. In _CVPR_, 2023.
* [10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2020.
* [11] Xiang Fang, Daizong Liu, Pan Zhou, and Guoshun Nan. You can ground earlier than see: An effective and efficient pipeline for temporal sentence grounding in compressed videos. In _CVPR_, 2023.
* [12] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In _ICCV_, 2019.
* [13] Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross Girshick, and Kaiming He. A large-scale study on unsupervised spatiotemporal representation learning. In _CVPR_, 2021.
* [14] Christoph Feichtenhofer, Axel Pinz, and Andrew Zisserman. Convolutional two-stream network fusion for video action recognition. In _CVPR_, 2016.
* [15] Valentin Gabeur, Chen Sun, Karteek Alahari, and Cordelia Schmid. Multi-modal transformer for video retrieval. In _ECCV_, 2020.
* [16] Yuxian Gu, Xu Han, Zhiyuan Liu, and Minlie Huang. Ppt: Pre-trained prompt tuning for few-shot learning. In _ACL_, 2022.
* [17] Tengda Han, Weidi Xie, and Andrew Zisserman. Self-supervised co-training for video representation learning. In _NeurIPS_, 2020.
* [18] Ronghang Hu and Amanpreet Singh. Unit: Multimodal multitask learning with a unified transformer. In _ICCV_, 2021.
* [19] Yubin Hu, Yuze He, Yanghao Li, Jisheng Li, Yuxing Han, Jiangtao Wen, and Yong-Jin Liu. Efficient semantic segmentation by altering resolutions for compressed videos. In _CVPR_, 2023.
* [20] Shiyuan Huang, Xudong Lin, Svebor Karaman, and Shih-Fu Chang. Flow-distilled ip two-stream networks for compressed video action recognition. _arXiv preprint arXiv:1912.04462_, 2019.
* [21] Yuqi Huo, Mingyu Ding, Haoyu Lu, Nanyi Fei, Zhiwu Lu, Ji-Rong Wen, and Ping Luo. Compressed video contrastive learning. In _NeurIPS_, 2021.
* [22] Yuqi Huo, Xiaoli Xu, Yao Lu, Yulei Niu, Zhiwu Lu, and Ji-Rong Wen. Mobile video action recognition. _arXiv preprint arXiv:1908.10155_, 2019.
* [23] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In _ECCV_, 2022.
* [24] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In _ECCV_, 2022.
* [25] Shibo Jie and Zhi-Hong Deng. Convolutional bypasses are better vision transformer adapters. _arXiv preprint arXiv:2207.07039_, 2022.
* [26] Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, and Weidi Xie. Prompting visual-language models for efficient video understanding. In _ECCV_, 2022.
* [27] Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. Maple: Multi-modal prompt learning. In _CVPR_, 2023.
* [28] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In _EMNLP_, 2021.
* [29] Bing Li, Jiaxin Chen, Dongming Zhang, Xiuguo Bao, and Di Huang. Representation learning for compressed video action recognition via attentive cross-modal interaction with motion enhancement. In _IJCAI_, 2022.
* [30] Jiapeng Li, Ping Wei, Yongchi Zhang, and Nanning Zheng. A slow-i-fast-p architecture for compressed video action recognition. In _ACM MM_, 2020.
* [31] Muchen Li and Leonid Sigal. Referring transformer: A one-step approach to multi-task visual grounding. In _NeurIPS_, 2021.
* [32] Xiang Lisa Li and Percy Liang. Prefix-tuning: optimizing continuous prompts for generation. In _ACL&IJCNLP_, 2021.
* [33] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. _ACM Computing Surveys_, 55(9), 2023.
* [34] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In _CVPR_, 2022.
* [35] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_, 2017.
* [36] Wenyang Luo, Yufan Liu, Bing Li, Weiming Hu, Yanan Miao, and Yangxi Li. Long-short term cross-transformer in compressed domain for few-shot video classification. _arXiv preprint arXiv:2205.03569_, 2022.
* [37] Aditya Prakash, Kashyap Chitta, and Andreas Geiger. Multi-modal fusion transformer for end-to-end autonomous driving. In _CVPR_, 2021.
* [38] Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, and Yin Cui. Spatiotemporal contrastive video representation learning. In _CVPR_, 2021.
* [39] Laura Sevilla-Lara, Yiyi Liao, Fatma Guney, Varun Jampani, Andreas Geiger, and Michael J Black. On the integration of optical flow and action recognition. In _GCPR_, 2019.
* [40] Zheng Shou, Xudong Lin, Yannis Kalantidis, Laura Sevilla-Lara, Marcus Rohrbach, Shih-Fu Chang, and Zhicheng Yan. Dmc-net: Generating discriminative motion cues for fast compressed video action recognition. In _CVPR_, 2019.
* [41] Xinyu Sun, Peihao Chen, Liangwei Chen, Thomas H Li, Mingkui Tan, and Chuang Gan. M\({}^{3}\) video: Masked motion modeling for self-supervised video representation learning. _arXiv preprint arXiv:2210.06096_, 2022.
* [42] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. _arXiv preprint arXiv:2203.12602_, 2022.
* [43] Jiangliu Wang, Jianbo Jiao, Linchao Bao, Shengfeng He, Wei Liu, and Yun-Hui Liu. Self-supervised video representation learning by uncovering spatio-temporal statistics. _IEEE TPAMI_, 44:3791-3806, 2021.
* [44] Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Lu Yuan, and Yu-Gang Jiang. Masked video distillation: Rethinking masked feature modeling for self-supervised video representation learning. In _CVPR_, 2023.
* [45] Chao-Yuan Wu, Manzil Zaheer, Hexiang Hu, R. Manmatha, Alexander J. Smola, and Philipp Krahenbuhl. Compressed video action recognition. In _CVPR_, 2018.
* [46] Kai Xu and Angela Yao. Accelerating video object segmentation with compressed video. In _CVPR_, 2022.
* [47] Haosen Yang, Deng Huang, Bin Wen, Jiannan Wu, Hongxun Yao, Yi Jiang, Xiatian Zhu, and Zehuan Yuan. Self-supervised video representation learning with motion-aware masked autoencoders. _arXiv preprint arXiv:2210.04154_, 2022.
* [48] Youngjae Yu, Sangho Lee, Gunhee Kim, and Yale Song. Self-supervised learning of compressed video representations. In _ICLR_, 2020.
* [49] Bowen Zhang, Limin Wang, Zhe Wang, Yu Qiao, and Hanli Wang. Real-time action recognition with enhanced motion vector cnns. In _CVPR_, 2016.
* [50] Yanan Zhang, Jiaxin Chen, and Di Huang. Cat-det: Contrastively augmented transformer for multi-modal 3d object detection. In _CVPR_, 2022.
* [51] Yue Zhao and Philipp Krahenbuhl. Real-time online video detection with temporal smoothing transformers. _arXiv preprint arXiv:2209.09236_, 2022.
* [52] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In _CVPR_, 2022.
* [53] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. _IJCV_, 130(9):2337-2348, 2022.
* [54] Jiawen Zhu, Simiao Lai, Xin Chen, Dong Wang, and Huchuan Lu. Visual prompt multi-modal tracking. In _CVPR_, 2023. | ## Review
### Summary
The paper investigates an innovative method for transferring pretrained RGB models to compressed videos through a parameter-efficient approach. It introduces a prompt-tuning framework called Compressed Video Prompt Tuning (CVPT), which integrates learnable prompts with encoded compressed modalities, refining them across layers to enhance cross-modal interactions. The proposed Selective Cross-modal Complementary Prompter (SCCP) block further optimizes the processing of motion cues and complements other modalities to the RGB input flow. Experimental results demonstrate that CVPT outperforms traditional fine-tuning and other prompt-tuning techniques on datasets such as SSv2, UCF101, and HMDB51, marking a significant advancement in video understanding within the compressed video domain.
### Strengths
- The proposed CVPT method efficiently leverages pretrained RGB models with minimal trainable parameters.
- The empirical results show that CVPT outperforms full fine-tuning and other existing prompt-tuning methods on several datasets.
- The paper presents a clear, innovative approach to handling multiple modalities in compressed videos.
- The method of using motion vectors and residuals as prompts is novel and computationally advantageous over traditional methods.
### Weaknesses
- The computational cost (GLOPs) of the proposed method may exceed that of previous ViT-based models.
- No efficiency comparison is provided between the proposed method and previous works using raw videos.
- Some state-of-the-art methods on UCF101, HMDB51, and SSv2 datasets are not included in the comparisons.
- The rationale behind certain operations and the overall architecture lacks clarity.
- The evaluation benchmarks used are relatively small, raising concerns about the scalability of the method.
### Questions
- What is the physical interpretation of adding attended motion vectors to image embeddings?
- Can the authors provide comparisons of inference times and GLOPs against previous methods?
- Was a LoRA update option considered instead of prompt fine-tuning?
- What are the challenges mentioned in the paper regarding inconsistencies between upstream and downstream data?
### Soundness
**Score:** 3
**Description:** 3 = good. The methodology is sound and well-supported by experiments, but some aspects require further clarification.
### Presentation
**Score:** 3
**Description:** 3 = good. The paper is mostly well written, though some sections and figures could benefit from improved clarity.
### Contribution
**Score:** 3
**Description:** 3 = good. The work introduces a novel approach and demonstrates significant results, but the impact is somewhat limited due to the small scale of datasets.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact, but some weaknesses need to be addressed, particularly regarding clarity and comparisons.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an interesting and relevant approach to video understanding in compressed formats, demonstrating strong empirical results. While there are some weaknesses in computational efficiency comparisons and clarity in methodology, the overall contributions and innovative aspects make it suitable for acceptance. The simplicity of the proposed method is particularly appealing to the NeurIPS video understanding community.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# A Diffusion-Model of Joint Interactive Navigation
Matthew Niedoba\({}^{1,2}\) J. Wilder Lavington\({}^{1,2}\) Yunpeng Liu\({}^{1,2}\) Vasileios Lioutas\({}^{1,2}\)
Justice Sefas\({}^{1,2}\) Xiaoxuan Liang\({}^{1,2}\) Dylan Green\({}^{1,2}\) Setareh Dabiri\({}^{2}\)
Berend Zwartsenberg\({}^{2}\) Adam Scibior\({}^{1,2}\) Frank Wood\({}^{1,2}\)
\({}^{1}\) University of British Columbia, \({}^{2}\) Inverted AI
[email protected]
###### Abstract
Simulation of autonomous vehicle systems requires that simulated traffic participants exhibit diverse and realistic behaviors. The use of prerecorded real-world traffic scenarios in simulation ensures realism but the rarity of safety critical events makes large scale collection of driving scenarios expensive. In this paper, we present DJINN - a diffusion based method of generating traffic scenarios. Our approach jointly diffuses the trajectories of all agents, conditioned on a flexible set of state observations from the past, present, or future. On popular trajectory forecasting datasets, we report state of the art performance on joint trajectory metrics. In addition, we demonstrate how DJINN flexibly enables direct test-time sampling from a variety of valuable conditional distributions including goal-based sampling, behavior-class sampling, and scenario editing.
## 1 Introduction
Accurate simulations are critical to the development of autonomous vehicles (AVs) because they facilitate the safe testing of complex driving systems [15]. One of the most popular methods of simulation is virtual replay [46], in which the performance of autonomous systems are evaluated by replaying previously recorded traffic scenarios. Although virtual replay is a valuable tool for AV testing, recording diverse scenarios is expensive and time consuming, as safety-critical traffic behaviors are rare [17]. Methods for producing synthetic traffic scenarios of specific driving behaviors are therefore essential to accelerate AV development and simulation quality.
Producing these synthetic traffic scenarios involves generating the joint future motion of all the agents in a scene, a task which is closely related to the problem of trajectory forecasting. Due to the complexity of learning a fully autonomous end-to-end vehicle controller, researchers often opt to split the problem into three main tasks [52]: perception, trajectory forecasting, and planning. In trajectory forecasting, the future positions of all agents are predicted up to a specified future time based on the agent histories and the road information. Due to the utility of trajectory forecasting models in autonomous vehicle systems along with the availability of standard datasets and benchmarks to measure progress [4, 53], a variety of effective trajectory forecasting methods are now available. Unfortunately, most methods produce _deterministic_ sets of trajectory forecasts _per-agent_[47, 9] which are difficult to combine to produce realistic joint traffic scenes [30].
Generative models of driving behavior have been proposed as an alternative to deterministic trajectory forecasting methods for traffic scene generation [40, 46]. These models re-frame trajectory forecasting as modeling the joint distribution of future agent state conditioned on past observations and map context. However, given that the distribution of traffic scenes in motion forecasting datasets are similar to real-world driving, modelling the data distribution does not ensure that models will generate rare, safety critical events.
To alleviate these issues we propose DJINN, a model which generatively produces joint traffic scenarios with _flexible conditioning_. DJINN is a diffusion model over the joint states of all agents in the scene. Similar to [30], our model is conditioned on a flexible set of agent states. By modifying the conditioning set at test-time, DJINN is able to draw traffic scenarios from a variety of conditional distributions of interest. These distributions include sampling scenes conditioned on specific goal states or upsampling trajectories from sparse waypoints. Additionally, the joint diffusion structure of DJINN enables test-time diffusion guidance. Utilizing these methods enables further control over the conditioning of traffic scenes based on behavior modes, agent states, or scene editing.
We evaluate the quality of sampled trajectories with both joint and ego-only motion forecasting on the Argoverse [4] and INTERACTION [53] datasets. We report excellent ego-only motion forecasting and outperform Scene Transformer on joint motion forecasting metrics. We further demonstrate both DJINN's flexibility and compatibility with various forms of test-time diffusion guidance by generating goal-directed samples, examples of cut-in driving behaviors, and editing replay logs.
## 2 Related Work
**Trajectory Forecasting:** A wide variety of methods have been proposed to address the problem of trajectory forecasting. Two attributes which divide this area of work are the output representation type and the agents for which predictions are made. The most common class of models deterministically predict the distribution of ego agent trajectories using a weighted trajectory set either with or without uncertainties. Due to the applicability of this representation as the input for real-time self-driving planners, there are numerous prior methods of this type. Some approaches rasterize the scene into a birdview image and use CNNs to predict a discrete set of future trajectories for the ego agent [5, 3, 32]. The convolutional architecture of these methods captures local information around the agent well, but the birdview image size and resolution limit the ability to capture high speed and long-range interactions. To address these challenges, other prior approaches encode agent states directly either by using RNNs [47, 39, 27], polyline encoders [7, 9] or 1D convolutions [24]. Agent features can be combined with roadgraph information in a variety of ways including graph convolutional networks [24, 1] or attention [29, 27].
To control the distribution of predicted trajectories, several methods have utilized mode or goal conditioning. One approach is to directly predict several goal targets before regressing trajectories to those targets [54, 9, 51, 8]. An alternate approach is to condition on trajectory prototypes [3] or latent embeddings [47].
Predicting joint traffic scenes using per-agent marginal trajectory sets is challenging due to the exponential growth of trajectory combinations. Recent approaches aim to rectify this by producing joint weighted sets of trajectories for all agents in a scene. M2I [45] generates joint trajectory sets by producing "reactor" trajectories which are conditioned on marginal "influencer" trajectories. Scene Transformer [30], which uses a similar backbone architecture to our method, uses a transformer [48] network to jointly produce trajectory sets for all agents in the scene.
As an alternative to deterministic predictions, multiple methods propose generative models of agent trajectories. A variety of generative model classes have been employed including Normalizing Flows [36], GANs [10, 38] or CVRNNs [40, 46]. Joint generative behavior models can either produce entire scenarios in one shot [10, 38, 36], or produce scenarios by autoregressively "rolling-out" agent trajectories [40, 46].
**Diffusion Models:** Diffusion models, proposed by Sohl-Dickstein et al. [41] and improved by Ho et al. [12] are a class of generative models which approximate the data distribution by reversing a forward process which gradually adds noise to the data. The schedule of noise addition can be discrete process or represented by a continuous time differential equation [44, 21].We utilize the diffusion parameterization introduced in EDM [21] in our work for its excellent performance and separation of training and sampling procedures.
This class of models have shown excellent sample quality in a variety of domains including images [12, 6], video [11, 14] and audio [22]. In addition, diffusion models can be adapted at test-time through various conditioning mechanisms. Classifier [6] and classifier-free guidance [13] have enabled powerful conditional generative models such as text conditional image models [34, 37] while editing techniques [26, 31] have enabled iterative refinement of generated samples.
One recent application of diffusion models is planning. Diffuser [20] uses diffusion models to generate trajectories for offline reinforcement learning tasks. They condition their samples using classifier guidance to achieve high rewards and satisfy constraints. Trace and Pace [35] utilizes diffusion planning for guided pedestrian motion planning. In the vehicle planning domain, Controllable Traffic Generation (CTG) [55] builds on Diffuser, using diffusion models to generate trajectories which satisfy road rule constraints. Like CTG, our method also models the future trajectories of road users using diffusion models. However, our approach differs from CTG in terms of output and our methods of conditioning. In CTG, marginal per-agent trajectory samples are combined into a joint scene representation by "rolling-out" a portion of each agent's trajectory before drawing new samples per-agent. By contrast, DJINN models the full joint distribution of agent trajectories in one shot, with no re-planning or roll-outs required. The authors of CTG condition their model exclusively on the past states of other agents and the map, and use classifier-guidance to condition their samples to follow road rules. In our method, we demonstrate conditioning on scene semantics via classifier guidance _as well as_ conditioning on arbitrary state observations, including the past or future states of each agent, and control the strength of conditioning using classifier-free guidance as demonstrated in Fig. 1.
## 3 Background
### Problem Formulation
Our work considers traffic scenarios consisting of \(A\) agents across \(T\) discrete timesteps driving on a roadway described by a set of roadgraph features \(\mathbf{M}\). Each agent \(a\in\{1,\ldots,A\}\) in the scene at time \(t\in\{1,\ldots T\}\) is represented by a state \(\mathbf{s}_{t}^{a}=\{x_{t}^{a},y_{t}^{a},\theta_{t}^{a}\}\) consisting of its 2D position \((x_{t},y_{t})\) and heading \(\theta_{t}\). The joint representation of the scene \(\mathbf{x}\) is the combination of all agents across all timesteps \(\mathbf{x}=\{s_{t}^{a}|a\in\{1,\ldots,A\}\,,t\in\{1,\ldots,T\}\}\in\mathbb{R}^{ A\times T\times 3}\). We assume scenes are distributed according to an unknown distribution \(p_{data}(\mathbf{x})\).
We introduce a model which is conditioned on the map features \(\mathbf{M}\) and can moreover be flexibly conditioned on arbitrary set of observed agent states. For the latter purpose, we consider a boolean variable \(\mathcal{O}\in\{0,1\}^{A\times T}\). We denote that a state in the scene is observed if \(\mathcal{O}_{t}^{a}=1\). Using \(\mathcal{O}\), we partition the scene into two components. The observed portion of the scene is defined as \(\mathbf{x}_{obs}=\{\mathbf{s}_{t}^{a}|\mathbf{s}_{t}^{a}\in\mathbf{x},\mathcal{ O}_{t}^{a}=1\}\) while the unobserved, latent portion is \(\mathbf{x}_{lat}=\mathbf{x}\setminus\mathbf{x}_{obs}\). Figure 1 shows five choices for \(\mathcal{O}\) and their corresponding tasks. Our ultimate goal is to learn a conditional distribution over the set of all latent agent states \(\mathbf{x}_{lat}\) given the observed states \(\mathbf{x}_{obs}\) and the map \(\mathbf{M}\), by modelling \(p(\mathbf{x}_{lat}|\mathbf{x}_{obs},\mathbf{M})\). Using this probabilistic framework, we can represent conditional
Figure 1: **Top: Five example observation masks \(\mathcal{O}\) demonstrating potential conditioning inputs to DJINN. Each element of each mask corresponds to the boolean value of \(\mathcal{O}\) for that agent state. Individual agents are shown in rows, with timesteps as columns. Bottom: Generated traffic scenes corresponding to the type of observation masks above.**
distributions corresponding to various trajectory forecasting tasks by modifying the observation mask \(\mathcal{O}\) and the corresponding conditioning set \(\mathbf{x}_{obs}\).
### Diffusion Models
Diffusion models [41; 12] are a powerful class of generative models built upon a diffusion process which iteratively adds noise to the data. In the continuous time formulation of this process [44; 21], this iterative addition is described by a stochastic differential equation (SDE)
\[d\mathbf{x}_{\tau}=\mu(\mathbf{x}_{\tau},\tau)d\tau-\sigma(\tau)d\mathbf{w}. \tag{1}\]
Here, \(\tau\in[0,\tau_{max}]\) where \(\tau_{max}\) is a fixed, large constant, \(\mu(\mathbf{x}_{\tau},\tau)\) is the drift function and \(\sigma(\tau)\) is the diffusion coefficient which scales standard Brownian motion \(\mathbf{w}\). Note that our work has two notions of time. Throughout we will use \(t\) to denote the "scenario time" and \(\tau\) to represent "diffusion time". We express the marginal distribution of \(\mathbf{x}_{\tau}\) at diffusion time \(\tau\) as \(p(\mathbf{x}_{\tau})\), with \(p(\mathbf{x}_{0})\) corresponding to the data distribution \(p_{data}(\mathbf{x})\). Typically, \(\mu(\mathbf{x}_{\tau},\tau)\), \(\sigma(\tau)\), and \(\tau_{max}\) are chosen such the conditional density \(p(\mathbf{x}_{\tau}|\mathbf{x}_{0})\) is available in closed form and that \(p(\mathbf{x}_{\tau_{max}})\) approximates a tractable Gaussian distribution \(\pi(\mathbf{x})\). Notably, for every diffusion SDE, there exists a corresponding probability flow (PF) ordinary differential equation (ODE) [44] whose marginal probability densities \(p(\mathbf{x}_{\tau})\) match the densities of Eq. (1)
\[d\mathbf{x}_{\tau}=\left[\mu(\mathbf{x}_{\tau},\tau)-\frac{1}{2}\sigma(\tau)^{ 2}\nabla_{x}\log p(\mathbf{x}_{\tau})\right]d\tau. \tag{2}\]
Using the PF ODE, samples are generated from a diffusion model by integrating Eq. (2) from \(\tau=\tau_{max}\) to \(\tau=0\) with initial condition \(\mathbf{x}_{\tau_{max}}\sim\pi(\mathbf{x}_{\tau_{max}})\) using an ODE solver. Typically integration is stopped at some small value \(\epsilon\) for numerical stability. Solving this initial value problem requires evaluation of the _score function_\(\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau})\). Since \(p(\mathbf{x}_{\tau})\) is not known in closed form, diffusion models learn an approximation of the score function \(\mathbf{s}_{\theta}(\mathbf{x}_{\tau},\tau)\approx\nabla_{\mathbf{x}_{\tau}} \log p(\mathbf{x}_{\tau})\) via score matching [16; 43; 44].
A useful property of diffusion models is the ability to model conditional distributions \(p(\mathbf{x}_{0}|y)\) at test-time using guidance. Given some conditional information \(y\), the key idea of guidance is to replace the score function in the PF ODE with an approximate _conditional_ score function \(\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau}|y)\).
By using the gradient of a pretrained classifier \(p_{\phi}(y|\mathbf{x}_{\tau})\), glassifier guidance [6] approximates the conditional score function through the a linear combination of the unconditional score function and the classifier gradient. The parameter \(\alpha\) controls the strength of the guidance
\[\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau}|y)\approx\mathbf{s}_{\theta }(\mathbf{x}_{\tau},\tau)+\alpha\nabla_{\mathbf{x}_{\tau}}\log p_{\phi}(y| \mathbf{x}_{\tau}). \tag{3}\]
One major drawback of classifier guidance is the need to train an external classifier. Instead, classifier-free guidance [13], utilizes a conditional score network \(\mathbf{s}_{\theta}(\mathbf{x}_{\tau},\tau,y)\). Then, a weighted average of the conditional and unconditional scores is used to estimate the conditional score function.
\[\nabla_{\mathbf{x}_{\tau}}\log p(\mathbf{x}_{\tau}|y)\approx\lambda\mathbf{s}_ {\theta}(\mathbf{x}_{\tau},\tau,y)+(1-\lambda)\mathbf{s}_{\theta}(\mathbf{x}_ {\tau},\tau). \tag{4}\]
Here \(\lambda\) is a scalar parameter which controls the strength of the guidance. In both cases, the approximate conditional score can be substituted into Eq. (2) to draw conditional samples from \(p(\mathbf{x}_{0}|y)\).
## 4 Djinn
Our approach models the joint distribution agent states \(p(\mathbf{x}_{lat}|\mathbf{x}_{obs},\mathbf{M})\) conditioned on a set of observed states and the map context. For this purpose, we employ a diffusion model which diffuses directly over \(\mathbf{x}_{lat}\) - the unobserved states of each agent in the scene for \(t=\{1,\ldots T\}\). An important aspect of our method is the choice of observation mask \(\mathcal{O}\) and observation set \(\mathbf{x}_{obs}\) on which we condition. For this purpose we introduce a distribution over observation masks \(p(\mathcal{O})\) which controls the tasks on which we train our model.
In the design of our diffusion process, we follow the choices from EDM [21], setting \(\mu(\mathbf{x}_{lat,\tau},\tau)=\mathbf{0}\) and \(\sigma(\tau)=\sqrt{2\tau}\) from Eq. (2). We also utilize their score function parameterization\[\nabla_{\mathbf{x}_{lat,\tau}}\log p(\mathbf{x}_{lat,\tau}|\mathbf{x}_{obs},\mathbf{ M},\mathbf{c})=\frac{D_{\theta}\left(\mathbf{x}_{lat,\tau},\mathbf{x}_{obs},\mathbf{ M},\mathbf{c},\tau\right)-\mathbf{x}_{lat,\tau}}{\tau^{2}}. \tag{5}\]
Here \(D_{\theta}\) is a neural network which approximates the latent portion of the noise free data \(\mathbf{x}_{lat,0}\). In addition to \(\mathbf{x}_{lat,\tau}\) and \(\tau\), in our work \(D_{\theta}\) also receives the map context \(\mathbf{M}\), the clean observed states \(\mathbf{x}_{obs}\) and \(c\), a collection of unmodelled agent features per observed agent timestep such as velocity, vehicle size, or agent type. We train our network on a modification of the objective from EDM [21]
\[\mathbb{E}_{\mathbf{x}_{0},\tau,\mathcal{O},\mathbf{x}_{lat,\tau}}\|D_{\theta} \left(\mathbf{x}_{lat,\tau},\mathbf{x}_{obs},\mathbf{M},\mathbf{c},\tau\right) -\mathbf{x}_{lat,0}\|_{2}^{2}. \tag{6}\]
Here, \(\mathbf{x}_{0}\sim p_{data}(\mathbf{x})\), \(\mathbf{x}_{\tau}\sim p(\mathbf{x}_{\tau}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{ x},\tau^{2}\mathbf{I})\) and \(\mathcal{O}\sim p(\mathcal{O})\). We compute our loss over \(\tau\sim p_{train}\) - a log normal distribution which controls the variance of the noise added to the data. We set the mean and variance of \(p_{train}\) according to [21].
We use the Heun \(2^{\text{nd}}\) order sampler from [21] to sample traffic scenarios with no changes to the reported hyperparameters. Empirically, we found that deterministic sampling, corresponding to integrating the PF ODE, leads to higher quality samples than using an SDE solver. Unless otherwise noted all samples are produced using \(50\) iterations of the ODE solver, which produces the highest quality samples as measured by ego and joint minADE and minFDE.
**Input Representation** An important choice for trajectory forecasting models is the reference frame for the agent states. In our work, the diffused agent states and observations \(\mathbf{x}_{obs}\) are centered around an "ego agent," which is often specified in trajectory forecasting datasets as the primary agent of interest. We transform \(\mathbf{x}_{0}\) such that the scene is centered on the last observed position of this arbitrary "ego agent" and rotated so the last observed heading of the ego agent is zero. We scale the positions and headings of all agents in each ego-transformed scene to a standard deviation of \(0.5\).
We represent the map \(\mathbf{M}\) as an unordered collection of polylines representing the center of each lane. Polylines are comprised of a fixed number of 2D points. We split longer polylines split into multiple segments and pad shorter polylines padded to the fixed length. Each point has a boolean variable indicating whether the element is padding. Polyline points are represented in the same reference frame as the agent states and are scaled by the same amount as the agent position features.
**Model Architecture** Our score estimator network \(D_{\theta}\) is parameterized by a transformer-based architecture similar to [30]. The network operates on a fixed \([A,T,F]\) shaped feature tensor composed of one \(F\) dimensional feature vector per agent timestep. We use sinusoidal positional embeddings [48] to produce initial feature tensors. Noisy and observed agent states \(\mathbf{x}_{\tau}\), \(\mathbf{x}_{obs}\), the time indices \(t=\{1,\dots,T\}\), and diffusion step \(\tau\) are all embedded into \(F\) dimensional embeddings. \(\mathbf{x}_{lat,\tau}\) and \(\mathbf{x}_{obs}\) are padded with zeros for observed and latent states respectively prior to embedding. A shared MLP projects the concatenated positional embeddings into a \(F\) dimensional vector for each agent.
The main trunk of the network is comprised of a series of transformer layers [48]. Attention between all pairs of feature vectors is factorized into alternating time and agent transformer layers. In time transformer layers, self-attention is performed per-agent across each timestep of that agent's trajectory, allowing for temporal consistency along a trajectory. In agent transformer layers, self-attention is computed across all agents at a given time, updating each agent's features with information about the other agents at that time. We encode the map information \(\mathbf{M}\) with a shared MLP that consumes flattened per-point and per-lane features to produce a fixed size embedding per lane. Cross attention between the collection of lane embeddings and agent states incorporates map information into the agent state features. Our network is comprised of 15 total transformer layers with a fixed feature dimension of 256. We use an MLP decoder after the final transformer layer to produce our estimate of \(\mathbf{x}_{lat,0}\). A full representation of our architecture is available in Appendix A.
## 5 Guidance for Conditional Scene Generation
So far, we have outlined our method for generating joint traffic scenes using DJINN. Next, we describe how the diffusion nature of DJINN enables fine-grained control over the generation and modification of driving scenarios.
### Classifier-free Guidance
In Scene Transformer [30], a masked sequence modelling framework is introduced for goal-directed and agent-reactive scene predictions. One limitation of this approach is that conditioning is performed on precise agent states while future agent states or goals are usually uncertain. We mitigate this limitation through the use of classifier-free guidance.
We assume access to a set of precise observations \(\mathbf{x}_{obs}\), and some set of additional agent states \(\mathbf{x}_{cond}\) on which we wish to condition our sample. For instance, \(\mathbf{x}_{cond}\) may include agent goals upon which we wish to condition. Let \(\mathbf{x}^{\prime}_{obs}=\{\mathbf{x}_{obs}\cup\mathbf{x}_{cond}\}\). Based on Eq. (4), the conditional score is through a weighted average of the score estimate conditioned on \(\mathbf{x}_{obs}\) and the estimated conditioned on \(\mathbf{x}^{\prime}_{obs}\)
\[\nabla_{\mathbf{x}_{lat,\tau}}\log p(\mathbf{x}_{lat,\tau}|\mathbf{x}^{ \prime}_{obs})\approx \lambda\frac{D_{\theta}\left(\mathbf{x}_{lat,\tau},\mathbf{x}^{ \prime}_{obs},\mathbf{M},\mathbf{c},\tau\right)-\mathbf{x}_{lat,\tau}}{\tau^{2}} \tag{7}\] \[+(1-\lambda)\frac{D_{\theta}\left(\mathbf{x}_{lat,\tau},\mathbf{x }_{obs},\mathbf{M},\mathbf{c},\tau\right)-\mathbf{x}_{lat,\tau}}{\tau^{2}}.\]
To facilitate classifier-free conditioning, we train DJINN on a \(p(\mathcal{O})\) representing varied conditioning tasks. These tasks include conditioning on agent history, agent goals, windows of agent states, and random agent states. A full overview of our task distribution is given in Appendix B.
### Classifier Guidance
Many driving behaviors of individual or multiple agents can be categorized by a class \(y\) based on their geometry, inter-agent interactions or map context. Examples of classes include driving maneuvers such as left turns, multi agent behaviors such as yielding to another agent, or constraints such as trajectories which follow the speed limit. DJINN uses classifier guidance to conditioned scenes on these behavior classes. Given a set of example scenes corresponding to a behavior class \(y\), we train a classifier to model \(p_{\phi}(y|\mathbf{x})\). Using Eq. (3) we approximate the conditional score for conditional sampling. Importantly, due to the joint nature of our representation, classifiers for per-agent, multi-agent or whole-scene behaviors can be all used to condition sampled traffic scenes.
### Scenario Editing
One benefit of sampling traffic scenes at once instead of autoregressively is the ability to edit generated or recorded traffic scenarios through stochastic differential editing [26]. Given a traffic scene \(\mathbf{x}\), a user can manually modify the trajectories in the scene to produce a "guide" scene \(\mathbf{x}^{\prime}\) which approximates
\begin{table}
\end{table}
Table 1: Ego-only motion forecasting performance on Argoverse and INTERACTION datasets. minADE and minFDE metrics on both datasets indicate that DJINN produces ego samples which closely match the distribution of ego agent trajectories.
\begin{table}
\end{table}
Table 2: Ego-only and joint metrics comparing DJINN to a jointly trained Scene Transformer model on the Argoverse validation set. DJINN produces better joint samples than SceneTransformer when measured by minSceneADE and minSceneFDE.
the desired trajectories in the scene. The guide scene is used to condition the start of a truncated reverse diffusion process by sampling \(\mathbf{x}_{\tau_{edit}}\sim\mathcal{N}(\mathbf{x}^{\prime},\tau_{edit}\mathbf{I})\) where \(\tau_{edit}\) is an intermediate time in the diffusion process between \(0\) and \(\tau_{max}\). Then, the edited scene is produced by integrating the PF ODE using the same ODE solver, starting from initial condition \(\tau_{edit}\). Through the stochastic differential editing, the guide scene is translated into a realistic traffic scene with agent trajectories which approximate the guide trajectories. We empirically find \(\tau_{edit}=0.8\) to be a good trade-off between generating realistic trajectory scenes and maintaining the information of the guide scene.
## 6 Experiments
### Motion Forecasting Performance
To measure the quality of the samples from DJINN, we evaluate our method on two popular motion prediction datasets, matching \(\mathcal{O}\) during training to match each dataset. For the INTERACTION dataset [53] scenes, we observe the state of all agents over the first second of the scene and generate the next three seconds. On the Argoverse dataset [4] our model observes agent states over the first two seconds of the scene and generates the next three seconds. Training hyperparameters for both models are found in Appendix A.
We note that both INTERACTION and Argoverse metrics measure an ego-only trajectory-set using minADE and minFDE over 6 trajectories. Since DJINN produces stochastic samples of entire traffic scenes, a set of 6 random trajectories may not cover all future trajectory modes. To alleviate this, we draw a collection of 60 samples for each scenario and fit a 6 component Gaussian mixture model with diagonal covariances using EM in a method similar to [47]. We use the means of the mixture components as the final DJINN prediction for motion forecasting benchmarks.
We present DJINN's performance on motion forecasting in Table 1 with Argoverse results in Table 1a and INTERACTION results in Table 1b. On INTERACTION, DJINN generates excellent ego vehicle trajectories, with similar minFDE and minADE to state of the art methods on this dataset. On the Argoverse test set we produce competitive metrics, although our results lag slightly behind top motion forecasting methods. We hypothesize that our lower performance on Argoverse is due to the lower quality agent tracks in this dataset when compared to INTERACTION.
We further analyze the _joint_ motion forecasting performance of DJINN. To this end, we measure the Scene minADE and minFDE proposed by [2] which measures joint motion forecasting performance over a collection of traffic scenes. We compare DJINN against a reproduction of Scene Transformer trained for joint motion forecasting, using their reported hyperparameters. Ego-only and Scene motion forecasting performance is shown in Table 2. Although Scene Transformer predicts slightly better ego vehicle trajectories, we demonstrate DJINN has superior joint motion forecasting capabilities.
### State-conditioned Traffic Scene Generation
While DJINN is able to draw samples for motion forecasting benchmarks by conditioning on past observations of the scene, a key benefit of our approach is the ability to flexibly condition at test-time
Figure 2: The effect of classifier-free guidance weight on the spread of trajectories for goal conditioned sampling. Samples drawn from the INTERACTION validation set conditioned using classifier-free guidance on a goal state (star). As the guidance weight increases, deviation from the goals decreases.
based on arbitrary agent states. We illustrate this test-time conditioning in Fig. 1 by generating samples from five conditional distributions which correspond to use-cases for our model.
Specifying exact agent states on which to condition can be challenging. One approach is to utilize the states of a prerecorded trajectory to produce conditioning inputs. However, if one wishes to generate a trajectory which deviates from a recorded trajectory, there is uncertainty about the exact states on which to condition. In Fig. 2, we demonstrate how classifier-free guidance can be utilized to handle user uncertainty in conditioning agent states. In this example, we set the observation set \(\mathbf{x}_{obs}\) to the first ten states of each agent's recorded trajectory. Further, we create a conditional observation set \(\mathbf{x}^{\prime}_{obs}\) by augmenting \(\mathbf{x}_{obs}\) with a goal state for each agent drawn from a normal distribution centered on the ground-truth final position of each agent, with 1m variance. We sample traffic scenes with varying levels of classifier-free guidance strength, drawing two conclusions. First, DJINN is robust to goals which do not match the recorded final agent states. Secondly, the strength of the classifier guidance weight controls the emphasis of the goal conditioning, resulting in trajectory samples which cluster more tightly around the specified goal as the guidance strength is increased. With low guidance weight, the samples are diverse, and do not closely match the specified goal position. As the weight increases, the spread of the trajectory distribution tightens, especially for fast, longer trajectories. These properties give users finer control over the distribution of traffic scenes when there is uncertainty over the conditioning states.
### Conditional Generation from Behavior Classes
We now continue to demonstrate the flexibility of our approach by considering test-time conditioning of our model on specific driving behaviors through classifier guidance. Specifically, we highlight the ability to condition DJINN on the behavior class of cut-in trajectories by conditioning our INTERACTION trained model with a cut-in classifier.
A "cut-in" occurs when one vehicle merges into the path of another, often requiring intervention by the cut-off driver. We selected this behavior to demonstrate how classifier guidance can be used with our joint representation to sample scenes conditioned on the behavior of multiple agents. We condition DJINN trained on INTERACTION using a simple cut-in classifier. To train the classifier, we first mined a dataset of cut-in behaviors trajectory pairs from the "DR_CHN_Merging_ZS" location - a highway driving scene with some cut-in examples. Each trajectory pair is comprised of an "ego" and an "other" agent. We define a positive cut-in as a case where the future state of the other agent at time \(t_{other}\) overlaps with a future state of the ego agent at time \(t_{ego}\) such that \(t_{ego}-3s<t_{other}<t_{ego}\). Further, we filter cases where the initial state of the other agent overlaps with any part of the ego
Figure 3: Examples of synthetic cut-in behaviors generated using classifier guidance. Samples are generated from the INTERACTION validation set conditioned on the first 10 agent states. Applying classifier guidance causes the other agent (green) to cut in front of the ego agent (purple). We generate trajectories for all agents in the scene, but other agent trajectories have been omitted for clarity
trajectory to eliminate lane following cases. We label a negative cut-in case as any other pair of trajectories in which the minimum distance between any pair of ego and other states is less than 5m.
Using these heuristics, we collect a dataset of 2013 positive and 296751 negative examples. We trained a two layer MLP classifier with 128 dimensions per hidden layer. The classifier takes as input the diffused trajectories of each agent, the validity of each timestep and the diffusion time \(\tau\). Using this classifier, we generate synthetic cut-in scenarios via Eq. (3). Examples of our synthetic cut-in scenarios are found in Fig. 3. The generated scenarios clearly demonstrate our model can be conditioned to create synthetic cut-in behaviors. These synthetic examples provide evidence that given a collection of trajectories exemplifying a behavior mode, or a heuristic which can be used to generate example trajectories, DJINN can be conditioned to generate synthetic examples representing that behavior mode. This finding further expands the flexibility of our model to generate trajectory samples from valuable conditional distributions.
### Scenario Fine-Tuning
We exhibit another method of controlling the traffic scenarios generated with DJINN through fine-tuning. Since DJINN diffuses entire traffic scenes without iterative replanning, we are able to use stochastic differential editing to modify the sampled scenes. Given a recorded or sampled traffic scene, differential stochastic editing can be used to fine-tune the scene through the use of a manually specified guide. In Fig. 4, we demonstrate how DJINN can fine-tune existing scenarios to produce new scenarios with realistic trajectories but complex interactions. Using two recorded validation set scenes from Argoverse, we aim to edit the scenes to generate more interactive trajectories between the agents. For this purpose, we generate an guide scene \(\mathbf{x}_{guide}\) by manually adjusting the trajectories in each scene so that the future paths of two of the agents will intersect. Through stochastic differential editing, we show that DJINN is able to produce realistic driving scenes which shift the guide scene trajectories to maintain their interactivity but avoid collisions between agents.
Figure 4: Two scenario fine-tuning examples (one per row) based on Argoverse validation set scenarios. **Left**: original scene with ground-truth trajectories shown for two interacting vehicles, vehicle positions at the same time index for all agents. **Middle**: a manual edit of one agent’s trajectory in each scene. One (top) replaces a right turn with a forward continuation, the other (bottom) shifts a trajectory back in space to cause a complex interaction to occur near the end of the trajectory. **Right**: the resulting stochastic differential edit of the original scenario. Both rows of the last column illustrate joint reactivity to the new trajectories arising from the edit; in the top row the left-turning vehicle yields and in the bottom row both trajectories shift to avoid collision.
Conclusions
In this work, we present DJINN - a diffusion model of joint traffic scenes. By diffusing in a joint agent state representation, DJINN can be adapted at test time to a variety of modeling tasks through guidance methods and scenario editing. The power of this scenario generation model opens exciting possibilities. Future research may expand the variety of guidance classifiers such as utilizing the classifiers proposed in [55] for traffic-rule constraint satisfaction. Another promising avenue of research is scaling DJINN for faster scenario generation. Although flexible, the diffusion structure of DJINN makes scenario generation relatively slow due to the iterative estimation of the score function. Distillation techniques such as consistency models [42] may be helpful in this regard to improve the number of score estimates required per sample. Future work may also consider scaling the length and agent count in generated scenarios to improve the complexity of behaviors which can be generated. Other areas of future work include using DJINN in a model predictive control setting (hinted at in the predictive mask of Fig. 1) in which an ego action is scored using statistics of ego-action conditioned joint trajectories from DJINN.
## Acknowledgements
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canada CIFAR AI Chairs Program, Inverted AI, MITACS, the Department of Energy through Lawrence Berkeley National Laboratory, and Google. This research was enabled in part by technical support and computational resources provided by the Digital Research Alliance of Canada Compute Canada (alliancecan.ca), the Advanced Research Computing at the University of British Columbia (arc.ubc.ca), Amazon, and Oracle.
## References
* [1] Sergio Casas, Cole Gulino, Renjie Liao, and Raquel Urtasun. Spagnn: Spatially-aware graph neural networks for relational behavior forecasting from sensor data. In _2020 IEEE International Conference on Robotics and Automation (ICRA)_, pages 9491-9497. IEEE, 2020.
* [2] Sergio Casas, Cole Gulino, Simon Suo, Katie Luo, Renjie Liao, and Raquel Urtasun. Implicit latent variable model for scene-consistent motion forecasting. In _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIII 16_, pages 624-641. Springer, 2020.
* [3] Yuning Chai, Benjamin Sapp, Mayank Bansal, and Dragomir Anguelov. Multipath: Multiple probabilistic anchor trajectory hypotheses for behavior prediction. In _Conference on Robot Learning_, pages 86-99. PMLR, 2020.
* [4] Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, et al. Argoverse: 3d tracking and forecasting with rich maps. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8748-8757, 2019.
* [5] Henggang Cui, Vladan Radosavljevic, Fang-Chieh Chou, Tsung-Han Lin, Thi Nguyen, Tzu-Kuo Huang, Jeff Schneider, and Nemanja Djuric. Multimodal trajectory predictions for autonomous driving using deep convolutional networks. In _2019 International Conference on Robotics and Automation (ICRA)_, pages 2090-2096. IEEE, 2019.
* [6] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. _Advances in Neural Information Processing Systems_, 34:8780-8794, 2021.
* [7] Jiyang Gao, Chen Sun, Hang Zhao, Yi Shen, Dragomir Anguelov, Congcong Li, and Cordelia Schmid. Vectornet: Encoding hd maps and agent dynamics from vectorized representation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11525-11533, 2020.
* [8] Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, and Fabien Moutarde. Gohome: Graph-oriented heatmap output for future motion estimation. In _2022 International Conference on Robotics and Automation (ICRA)_, pages 9107-9114. IEEE, 2022.
* [9] Junru Gu, Chen Sun, and Hang Zhao. Densetnt: End-to-end trajectory prediction from dense goal sets. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 15303-15312, 2021.
* [10] Agrim Gupta, Justin Johnson, Li Fei-Fei, Silvio Savarese, and Alexandre Alahi. Social gan: Socially acceptable trajectories with generative adversarial networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2255-2264, 2018.
* [11] William Harvey, Saeid Naderiparizzi, Vaden Masrani, Christian Dietrich Weilbach, and Frank Wood. Flexible diffusion modeling of long videos. In _Advances in Neural Information Processing Systems_, 2022.
* [12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
* [13] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In _NeurlIPS 2021 Workshop on Deep Generative Models and Downstream Applications_, 2021.
* [14] Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022.
* [15] WuLing Huang, Kunfeng Wang, Yisheng Lv, and FengHua Zhu. Autonomous vehicles testing methods review. In _2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC)_, pages 163-168. IEEE, 2016.
* [16] Aapo Hyvarinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. _Journal of Machine Learning Research_, 6(4), 2005.
* [17] Ashesh Jain, Luca Del Pero, Hugo Grimmett, and Peter Ondruska. Autonomy 2.0: Why is self-driving always 5 years away? _arXiv preprint arXiv:2107.08142_, 2021.
* [18] Faris Janjos, Maxim Dolgov, Muhamed Kuric, Yinzhe Shen, and J Marius Zollner. San: Scene anchor networks for joint action-space prediction. In _2022 IEEE Intelligent Vehicles Symposium (IV)_, pages 1751-1756. IEEE, 2022.
* [19] Faris Janjos, Maxim Dolgov, and J Marius Zollner. Starnet: Joint action-space prediction with star graphs and implicit global frame self-attention. _arXiv preprint arXiv:2111.13566_, 2021.
* [20] Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In _International Conference on Machine Learning_, 2022.
* [21] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In _Advances in Neural Information Processing Systems_, 2022.
* [22] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. In _International Conference on Learning Representations_, 2021.
* [23] Namhoon Lee, Wongun Choi, Paul Vernaza, Christopher B Choy, Philip HS Torr, and Manmohan Chandraker. Desire: Distant future prediction in dynamic scenes with interacting agents. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 336-345, 2017.
* [24] Ming Liang, Bin Yang, Rui Hu, Yun Chen, Renjie Liao, Song Feng, and Raquel Urtasun. Learning lane graph representations for motion forecasting. In _European Conference on Computer Vision_, pages 541-556. Springer, 2020.
* [25] Yicheng Liu, Jinghuai Zhang, Liangji Fang, Qinhong Jiang, and Bolei Zhou. Multimodal motion prediction with stacked transformers. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 7577-7586, 2021.
* [26] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. In _International Conference on Learning Representations_, 2021.
* [27] Jean Mercat, Thomas Gilles, Nicole El Zoghby, Guillaume Sandou, Dominique Beauvois, and Guillermo Pita Gil. Multi-head attention for multi-modal joint vehicle motion forecasting. In _2020 IEEE International Conference on Robotics and Automation (ICRA)_, pages 9638-9644. IEEE, 2020.
* [28] Xiaoyu Mo, Yang Xing, and Chen Lv. Recog: A deep learning framework with heterogeneous graph for interaction-aware trajectory prediction. _arXiv preprint arXiv:2012.05032_, 2020.
* [29] Nigamaa Nayakanti, Rami Al-Rfou, Aurick Zhou, Kratarth Goel, Khaled S Refaat, and Benjamin Sapp. Wayformer: Motion forecasting via simple & efficient attention networks. _arXiv preprint arXiv:2207.05844_, 2022.
* [30] Jiquan Ngiam, Vijay Vasudevan, Benjamin Caine, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David J Weiss, Ben Sapp, Zhifeng Chen, and Jonathon Shlens. Scene transformer: A unified architecture for predicting future trajectories of multiple agents. In _International Conference on Learning Representations_, 2022.
* [31] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. _arXiv preprint arXiv:2302.03027_, 2023.
* [32] Tung Phan-Minh, Elena Corina Grigore, Freddy A Boulton, Oscar Beijbom, and Eric M Wolff. Covernet: Multimodal behavior prediction using trajectory sets. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 14074-14083, 2020.
* [33] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 652-660, 2017.
* [34] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_, 2022.
* [35] Davis Rempe, Zhengyi Luo, Xue Bin Peng, Ye Yuan, Kris Kitani, Karsten Kreis, Sanja Fidler, and Or Ltiany. Trace and pace: Controllable pedestrian animation via guided trajectory diffusion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 13756-13766, 2023.
* [36] Nicholas Rhinehart, Rowan McAllister, Kris Kitani, and Sergey Levine. Precog: Prediction conditioned on goals in visual multi-agent settings. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 2821-2830, 2019.
* [37] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022.
* [38] Amir Sadeghian, Vineet Kosaraju, Ali Sadeghian, Noriaki Hirose, Hamid Rezatofighi, and Silvio Savarese. Sophie: An attentive gan for predicting paths compliant to social and physical constraints. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 1349-1358, 2019.
* [39] Tim Salzmann, Boris Ivanovic, Punarjay Chakravarty, and Marco Pavone. Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data. In _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVIII 16_, pages 683-700. Springer, 2020.
* [40] Adam Scibior, Vasileios Lioutas, Daniele Reda, Peyman Bateni, and Frank Wood. Imagining the road ahead: Multi-agent trajectory prediction via differentiable simulation. In _2021 IEEE International Intelligent Transportation Systems Conference (ITSC)_, pages 720-725. IEEE, 2021.
* [41] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _International Conference on Machine Learning_, pages 2256-2265. PMLR, 2015.
* [42] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. _arXiv preprint arXiv:2303.01469_, 2023.
* [43] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. _Advances in neural information processing systems_, 32, 2019.
* [44] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In _International Conference on Learning Representations_, 2021.
* [45] Qiao Sun, Xin Huang, Junru Gu, Brian C Williams, and Hang Zhao. M2i: From factored marginal trajectory prediction to interactive prediction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 6543-6552, 2022.
* [46] Simon Suo, Sebastian Regalado, Sergio Casas, and Raquel Urtasun. Trafficsim: Learning to simulate realistic multi-agent behaviors. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10400-10409, 2021.
* [47] Balakrishnan Varadarajan, Ahmed Hefny, Avikalp Srivastava, Khaled S Refaat, Nigamaa Nayakanti, Andre Corman, Kan Chen, Bertrand Douillard, Chi Pang Lam, Dragomir Anguelov, et al. Multipath++: Efficient information fusion and trajectory aggregation for behavior prediction. In _2022 International Conference on Robotics and Automation (ICRA)_, pages 7814-7821. IEEE, 2022.
* [48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [49] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In _International Conference on Machine Learning_, pages 10524-10533. PMLR, 2020.
* [50] Maosheng Ye, Jiamiao Xu, Xunong Xu, Tongyi Cao, and Qifeng Chen. Dcms: Motion forecasting with dual consistency and multi-pseudo-target supervision. _arXiv preprint arXiv:2204.05859_, 2022.
* [51] Wenyuan Zeng, Ming Liang, Renjie Liao, and Raquel Urtasun. Lanercnn: Distributed representations for graph-centric motion forecasting. In _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 532-539. IEEE, 2021.
* [52] Wenyuan Zeng, Wenjie Luo, Simon Suo, Abbas Sadat, Bin Yang, Sergio Casas, and Raquel Urtasun. End-to-end interpretable neural motion planner. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8660-8669, 2019.
* [53] Wei Zhan, Liting Sun, Di Wang, Haojie Shi, Aubrey Clausse, Maximilian Naumann, Julius Kummerle, Hendrik Konigshof, Christoph Stiller, Arnaud de La Fortelle, et al. Interaction dataset: An international, adversarial and cooperative motion dataset in interactive driving scenarios with semantic maps. _arXiv preprint arXiv:1910.03088_, 2019.
* [54] Hang Zhao, Jiyang Gao, Tian Lan, Chen Sun, Ben Sapp, Balakrishnan Varadarajan, Yue Shen, Yi Shen, Yuning Chai, Cordelia Schmid, et al. Tnt: Target-driven trajectory prediction. In _Conference on Robot Learning_, pages 895-904. PMLR, 2021.
* [55] Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, and Marco Pavone. Guided conditional diffusion for controllable traffic simulation. _arXiv preprint arXiv:2210.17366_, 2022.
## Appendix A Model Details
### Preconditioning
Following EDM [21], we precondition \(D_{\theta}\) by combining \(\mathbf{x}_{lat,\tau}\) and the output of our network \(F_{\theta}\) using scaling factors
\[D_{\theta}(\mathbf{x}_{lat,\tau},\mathbf{x}_{obs},\mathbf{M},\mathbf{c},\tau)=c _{skip}(\tau)\mathbf{x}_{lat,\tau}+c_{out}(\tau)F_{\theta}(c_{in}\mathbf{x}_{ lat,\tau},\mathbf{x}_{obs},\mathbf{M},\mathbf{c},c_{noise}(\tau)). \tag{8}\]
We use the scaling values reported in [21] without modification but report them in Table 3 for convenience.
Here, \(\sigma_{data}\) is the standard deviation of the diffusion features. We scale the positions and headings of the agent so that \(\sigma_{data}\) is \(0.5\) for all diffusion features.
### Architecture
DJINN utilizes a transformer based architecture for \(F_{\theta}\). An overview of the model structure is shown in Fig. 5.
#### Feature Encoding
We encode the observed agent states \(\mathbf{x}_{obs}\), the noisy latent states \(\mathbf{x}_{lat,\tau}\), the scenario time \(t\) and the diffusion time \(\tau\) using sinusoidal positional encoding [48]. We represent the scenario time as an integer index increasing from \(0\) to \(T\) with 0 corresponding to the earliest agent states. For each of the encoded features, we produce a 256-dimensional encoding vector. An important hyperparameter for sinusoidal positional embeddings are the maximum and minimum encoding periods which we report in Table 4.
The concatenation of the positional encodings with additional agent state features \(\mathbf{c}\) are fed through an MLP to form the input to the main transformer network. The additional agent state features consist of the agent velocity, the observed mask, and the agent size which is available for INTERACTION only. The input MLP is shared across all agent states and contains two linear layers with hidden dimension 256 and ReLU non-linearities.
#### Roadgraph Encoding
DJINN is conditioned on the geometry of the roadgraph through a collection of lane center polylines. Each polyline is comprised of an ordered series of 2D points which represent the approximate center of each driving lane. We fix the length of each polyline to 10 points. We split polylines longer than this threshold into approximately equal segments, and pad shorter polylines with zeros. We utilize a boolean feature to indicate which polyline points are padded. Unlike [30], we do not use a PointNet [33] to encode the roadgraph polylines. Instead, we encode the polylines into a 256-dimensional vector per polyline using a simple MLP. To generate the input to this MLP, we concatenate the position
\begin{table}
\begin{tabular}{l r r} \hline \hline
**Feature** & **Minimum Period** & **Maximum Period** \\ \hline \(\mathbf{x}_{obs},\mathbf{x}_{lat,\tau}\) & 0.01 & 10 \\ \(t\) & 1 & 100 \\ \(\tau\) & 0.1 & 10,000 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Maximum and minimum positional encoding periods for DJINN input features
\begin{table}
\begin{tabular}{l r} \hline \hline
**Scaling Factor** & **Function** \\ \hline \(c_{skip}\) & \(\sigma_{data}^{2}/(\tau^{2}+\sigma_{data}^{2})\) \\ \(c_{out}\) & \(\tau\cdot\sigma_{data}/\sqrt{\tau^{2}+\sigma_{data}^{2}}\) \\ \(c_{in}\) & \(1/\sqrt{\tau^{2}+\sigma_{data}^{2}}\) \\ \(c_{noise}\) & \(\frac{1}{4}\ln(\tau)\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Scaling Functions for Preconditioningand padding mask for each polyline, along with any additional per-polyline features present in the dataset. For both datasets, the MLP is comprised of four linear layers, with a hidden dimensionality of \(256\) and ReLU non-linearities.
#### Transformer Network
The main transformer backbone of DJINN is comprised of 15 transformer layers which perform self-attention over the time and agent dimension, and cross attention with the roadgraph encodings. We utilize the same transformer layers as those proposed in [30], but modify the number of layers and their ordering. Specifically, include more time transformer layers as we found this produced smoother trajectories. All attention layers consume and produce 256 dimensional features per-agent state. We use four heads for each attention operation, and a 1024 dimensional hidden state for in the feed forward network. In the transformer layers, we use the pre-layernorm structure described in [49].
Due to batching and agents which are not tracked for the duration of the traffic scene, there is padding present in the agent feature tensor. The transformer layers account for padding in the scene by modifying the attention masks so that padded agent states are not attended to.
#### Output MLP
We use a two layer MLP with hidden dimension 256 to produce the final output for \(F_{\theta}\). We produce a three dimensional vector per agent state for INTERACTION and two-dimensional vector for Argoverse since headings are not provided in the dataset.
Figure 5: An overview of the DJINN architecture. The main network structure is comprised of time, agent and roadgraph attention layers. Features are encoded using positional encodings and MLPs. The output of the network is an estimate of the de-noised agent states.
### Training details
We train DJINN on two A100 GPUs for 150 epochs. We utilize the Adam optimizer with learning rate of 3E-4 and default values for \(\beta_{1}\) and \(\beta_{2}\). We use a linear learning rate ramp up, scaling from 0 to 3E-4 over 0.1 epochs. We set the batch size to 32. We clip gradients to a maximum norm of 5. Training takes approximately 6 days to complete from scratch.
## Appendix B Observation Distribution
We train DJINN over a variety of observation masks \(\mathcal{O}\) by randomly drawing masks from a training distribution \(p(\mathcal{O})\). Table 5 outlines this training task distribution. We refer to the length of agent state history observation for each dataset as \(t_{obs}\) and the total number of timesteps as \(T\). \(\mathcal{U}\) indicates a uniform distribution over integers.
## Appendix C Additional Qualitative Results
Fig. 6 shows additional qualitative samples from DJINN on the INTERACTION dataset for a subset of the observation masks outlined in Fig. 1. Each row in the figure corresponds to a different tasks, abd each element of the row is a sampled traffic scene.
## Appendix D Additional Quantitative Results
### Effect of Observation Distribution
To enable test-time conditioning through classifier-free guidance as outlined in section 6.2, we train DJINN on the observation distribution described in Appendix B. To quantify the effect of training on this distribution, we compare the sample quality of a DJINN model trained trained on the full observation distribution to one which is trained exclusively on the "Predictive Task." Table 6 shows the impact of the observation distribution as measured by trajectory forecasting metrics on samples drawn from INTERACTION dataset scenes.
Table 6 demonstrates that training on the full mixture of observation masks somewhat reduces the predictive performance of DJINN when compared to the model trained exclusively on the predictive task. However, the diversity of trajectories measured using MFD [40] increases when training on the more diverse distribution.
\begin{table}
\begin{tabular}{l l r} \hline \hline
**Task** & **Description** & **Probability** \\ \hline Predictive & Observe states where \(t\in[0,t_{obs}]\). & \(50\%\) \\ Goal-Conditioned & Observe states where \(t\in[0,t_{obs}]\) and the final state of & \(25\%\) \\ & 3 random agents. & \(10\%\) \\ Agent-Conditioned & Observe states where \(t\in[0,t_{obs}]\) and the entire trajectory of 3 random agents. & \(10\%\) \\ Ego-Conditioned & Observe states where \(t\in[0,t_{obs}]\) and the entire ego-agent trajectory. & \(10\%\) \\ Windowed & Observe states where \(t\in[0,t_{start}]\) and \(t\in(t_{start}+t_{obs},T]\) where \(t_{start}\sim\mathcal{U}(0,t_{obs})\). & 5\% \\ Upsampling & Observe every \(t_{obs}/T\) states, starting from & \(5\%\) \\ & \(t_{start}\sim\mathcal{U}(0,t_{obs}/T)\). & \(5\%\) \\ Imputation & Randomly sample observing each state with & \(5\%\) \\ & probability \(t_{obs}/T\). & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Task distribution for training DJINN. The training observation mask \(\mathcal{O}\) is sampled from this distribution with probabilities given in the rightmost column.
### Effect of Reduced Sampling Steps
The continuous time training procedure of DJINN enables test-time variation in the number of sampling steps. Table 7 outlines the effect of reducing the number of sampling steps from the 50 steps which are used in all other experiments.
Table 7 shows that reducing the number of sampling steps results in modest trajectory forecasting performance reductions up to 20 sampling steps across all metrics. Using 10 steps severely impacts the quality of sampled scenes across all metrics. As sampling time scales linearly with the number of sampling steps, reducing the number of sampling steps allows for a performance runtime tradeoff.
\begin{table}
\begin{tabular}{l r r r r r} \hline
**Observations** & **minADE** & **minFDE** & **Scene minADE** & **Scene minFDE** & **MFD** \\ \hline Predictive & 0.21 & 0.49 & 0.35 & 0.91 & 2.33 \\ Mixture & 0.26 & 0.63 & 0.45 & 1.17 & 3.11 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of trajectory forecasting performance for models trained with varying observation distributions. Trajectory metrics are measured using 6 samples per scene on the INTERACTION validation set.
Figure 6: Additional generated traffic scenes from the INTERACTION validation set. Each row demonstrates samples generated using a different observation mask.
### Model Runtime
We compare DJINN's runtime to SceneTransformer [30], varying the input size as measured by the number of agents in the scene. Runtimes are measured across 1000 samples on a GeForce RTX 2070 Mobile GPU.
\begin{table}
\begin{tabular}{l r r r r} \hline
**Diffusion Steps** & **minADE** & **minFDE** & **Scene minADE** & **Scene minFDE** \\ \hline
10 & 0.28 & 0.64 & 0.45 & 1.135 \\
20 & 0.22 & 0.51 & 0.37 & 0.95 \\
30 & 0.22 & 0.50 & 0.36 & 0.92 \\
40 & 0.21 & 0.50 & 0.35 & 0.92 \\
50 & 0.21 & 0.49 & 0.35 & 0.92 \\ \hline \end{tabular}
\end{table}
Table 7: Trajectory forecasting performance versus the number of timesteps used in the diffusion sampling procedure. Trajectory forecasting performance is measured using 6 samples per scene on the INTERACTION validation set.
\begin{table}
\begin{tabular}{l r r r} \hline
**Agent Count** & **Scene Transformer** & **DJINN - 50 Steps** & **DJINN - 25 Steps** \\ \hline
8 & 0.0126s & 0.574s & 1.15s \\
16 & 0.0140s & 0.611s & 1.24s \\
32 & 0.017s & 0.844s & 1.69s \\
64 & 0.026s & 1.40s & 2.89s \\ \hline \end{tabular}
\end{table}
Table 8: Average scenario generation time for DJINN and Scene Transformer across varying scene sizes. | ## Review
### Summary
The paper introduces DJINN (Diffusion-based Joint Interaction Network), a novel generative model designed for generating, editing, and predicting multi-vehicle traffic scenarios. By leveraging diffusion models, DJINN effectively addresses the challenges associated with generating traffic scenes conditioned on various observation configurations. This flexibility makes it particularly relevant for simulating safety-critical and out-of-distribution events, which are crucial for the development and testing of autonomous vehicles. The authors validate DJINN's efficacy through comprehensive experiments on popular datasets, demonstrating its superior performance compared to existing trajectory forecasting methods. Although the novelty lies in the application of existing techniques, the paper offers valuable insights into the potential of diffusion models for traffic scenario generation.
### Strengths
- High originality in applying diffusion models to traffic scenario generation.
- Well-structured and comprehensive exploration of the methodology's flexibility.
- Strong experimental validation against state-of-the-art methods, showcasing credible results.
- Clear and coherent presentation that effectively communicates the research's motivation and significance.
- Potential high impact on autonomous vehicle research and development.
### Weaknesses
- Limited novelty, primarily combining existing methods without substantial improvement.
- Lack of detailed analysis on the limitations and failure cases of DJINN.
- Absence of quantitative comparison demonstrating increased variability of generated scenarios.
- Slow inference time, restricting practical usability.
- Need for more qualitative results and concrete examples to validate claims.
### Questions
- How can DJINN be adapted for more complex scenarios or different robotic domains?
- What are the trade-offs between computational performance and network variations in DJINN?
- Could the authors provide further analysis on why the proposed method doesn't show clear improvements?
- What is the impact of conditioning on the model's consistency with observations?
### Soundness
**Score:** 3
**Description:** Good. The methodology is sound, but the novelty and significant improvements over existing methods are limited.
### Presentation
**Score:** 3
**Description:** Good. The paper is well-written and clear, though some technical aspects could be more thoroughly explained.
### Contribution
**Score:** 3
**Description:** Good. The paper contributes to the field by demonstrating the application of diffusion models in traffic scenario generation, although the novelty is not outstanding.
### Rating
**Score:** 7
**Description:** Accept: The paper is technically solid with moderate-to-high impact potential, addressing relevant issues in traffic scenario modeling.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a significant contribution to the field of autonomous vehicle simulation by employing a novel application of diffusion models to generate and predict complex traffic scenarios. While it has some limitations in terms of novelty and the need for further analysis, its clarity, sound methodology, and potential impact justify acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Mutual Information Regularized Offline Reinforcement Learning
Xiao Ma Bingyi Kang Zhongwen Xu Min Lin Shuicheng Yan
Sea AI Lab
{yusufma555, bingykang}@gmail.com
equal contribution, \({}^{\dagger}\) corresponding author.
###### Abstract
The major challenge of offline RL is the distribution shift that appears when out-of-distribution actions are queried, which makes the policy improvement direction biased by extrapolation errors. Most existing methods address this problem by penalizing the policy or value for deviating from the behavior policy during policy improvement or evaluation. In this work, we propose a novel MISA framework to approach offline RL from the perspective of **M**utual **I**nformation between **S**tates and **A**ctions in the dataset by directly constraining the policy improvement direction. MISA constructs lower bounds of mutual information parameterized by the policy and Q-values. We show that optimizing this lower bound is equivalent to maximizing the likelihood of a one-step improved policy on the offline dataset. Hence, we constrain the policy improvement direction to lie in the data manifold. The resulting algorithm simultaneously augments the policy evaluation and improvement by adding mutual information regularizations. MISA is a general framework that unifies conservative Q-learning (CQL) and behavior regularization methods (_e.g._, TD3+BC) as special cases. We introduce 3 different variants of MISA, and empirically demonstrate that tighter mutual information lower bound gives better offline RL performance. In addition, our extensive experiments show MISA significantly outperforms a wide range of baselines on various tasks of the D4RL benchmark, _e.g._, achieving 742.9 total points on gym-locomotion tasks. Our code is attached and will be released upon publication.
## 1 Introduction
Reinforcement learning (RL) has made remarkable achievements in solving sequential decision-making problems, ranging from game playing [28, 39, 5] to robot control [26, 20, 36]. However, its success heavily relies on 1) an environment to interact with for data collection and 2) an online algorithm to improve the agent based only on its own trial-and-error experiences. These make RL algorithms incapable in real-world safety-sensitive scenarios where interactions with the environment are dangerous or prohibitively expensive, such as in autonomous driving and robot manipulation with human autonomy [27, 25]. Therefore, offline RL is proposed to study the problem of learning decision-making agents from experiences that are previously collected from other agents when interacting with the environment is costly or not allowed.
Though much demanded, extending RL algorithms to offline datasets is challenged by the distributional shift between the data-collecting policy and the learning policy. Specifically, a typical RL algorithm alternates between evaluating the Q values of a policy and improving the policy to have better cumulative returns under the current value estimation. When it comes to the offline setting, policy improvement often involves querying out-of-distribution (OOD) state-action pairs thathave never appeared in the dataset, for which the Q values are over-estimated due to extrapolation error of neural networks. As a result, the policy improvement direction is erroneously affected, eventually leading to a catastrophic explosion of value estimations as well as policy collapse after error accumulation. Existing methods [25; 42; 14; 44] tackle this problem by either forcing the learned policy to stay close to the behavior policy [16; 43; 14] or generating low value estimations for OOD actions [29; 25; 44]. Though these methods are effective at alleviating the distributional shift problem of the learning policy, the improved policy is unconstrained and might still deviate from the data distribution. A natural question thus arises: can we directly constrain the policy improvement direction to lie in the data manifold?
In this paper, we step back and consider the offline dataset from a new perspective, _i.e._, the **M**utual **I**nformation between **S**tates and **A**ctions (MISA). By viewing state and action as two random variables, the mutual information represents the reduction of uncertainty of actions given certain states, _a.k.a._, information gain in information theory [31]. Therefore, mutual information is an appealing metric to sufficiently acquire knowledge from a dataset and characterize a behavior policy. We for the first time introduce it into offline RL as a regularization that directly constrains the policy improvement direction. Specifically, to allow practical optimizations of state-action mutual information estimation, we introduce the MISA lower bound of state-action pairs, which connects mutual information with RL by treating a parameterized policy as a variational distribution and the Q-values as the energy functions. We show that this lower bound can be interpreted as the likelihood of a non-parametric policy on the offline dataset, which represents the one-step improvement of the current policy based on the current value estimation. Maximizing MISA lower bound is equivalent to directly regularizing the policy improvement within the dataset manifold. However, the constructed lower bound involves integration over a self-normalized energy-based distribution, whose gradient estimation is intractable. To alleviate this dilemma, Markov Chain Monte Carlo (MCMC) estimation is adopted to produce an unbiased gradient estimation for MISA lower bound.
Theoretically, MISA is a general framework for offline RL that unifies several existing offline RL paradigms including behavior regularization and conservative learning. As examples, we show that TD3+BC [14] and CQL [25] are degenerated cases of MISA. Empirically, we verify that the tighter the mutual information lower bound, the better the final performance. We also demonstrate that MISA achieves significantly better performance on various environments of the D4RL [13] benchmark than a wide range of baselines, including CQL and IQL [24]. Additional visualizations are discussed to better understand the proposed method. Our code will be released upon publication.
## 2 Related Works
Offline Reinforcement LearningThe most critical challenge for extending an off-policy RL algorithm to an offline setup is the distribution shift between the behavior policy, _i.e._, the policy for data collection, and the learning policy [22]. To tackle this challenge, most of the offline RL algorithms consider a conservative learning framework. They either regularize the learning policy to stay close to the behavior policy [16; 43; 14; 38; 42; 45; 46], or force Q values to be low for OOD state-action pairs [29; 25; 44]. For example, TD3+BC [14] adds an additional behavior cloning (BC) signal along with the TD3 [15], which encourages the policy to stay in the data manifold; CQL [25], from the Q-value perspective, penalizes the OOD state-action pairs for generating high Q-value estimations and learns a lower bound of the true value function. However, their policy improvement direction is unconstrained and might deviate from the data distribution. On the other hand, SARSA-style updates [40] are considered to only query in-distribution state-action pairs [34; 24]. Nevertheless, without explicitly querying Bellman's optimality equation, they limit the policy from producing unseen actions. Our proposed MISA follows the conservative framework and directly regularizes the policy improvement direction to lie within the data manifold with mutual information, which more fully exploits the dataset information while learning a conservative policy. Different from the above discussed methods, a separate line works consider learning lower-bounded Q values by taking the minimum of multiple Q networks, e.g., SAC-n [2] uses 50 Q networks for hopper tasks, or using more powerful policy representations [41; 21]. Such methods achieve strong performance on existing benchmarks. However, it greatly increases the sample complexity and we do not directly compare with them in this work.
Mutual Information Estimation.Mutual information is a fundamental quantity in information theory, statistics, and machine learning. However, direct computation of mutual information is intractable as it involves computing a log partition function of a high dimensional variable. Thus, how to estimate the mutual information \(I(x,z)\) between random variables \(\mathcal{X}\) and \(\mathcal{Z}\), accurately and efficiently, is a critical issue. One straightforward lower bound for mutual information estimation is Barber-Agakov bound [3], which introduces an additional variational distribution \(q(z\mid x)\) to approximate the unknown posterior \(p(z\mid x)\). Instead of using an explicit "decoder" \(q(z\mid x)\), we can use _unnormalized_ distributions for the variational family \(q(z\mid x)\)[12; 4; 33], i.e., approximate the distribution as \(q(z\mid x)=\frac{p(z)e^{f(x,z)}}{\mathbb{E}_{p(x)}[e^{f(x,z)}]}\), where \(f(x,z)\) is an arbitrary critic function. As an example, InfoNCE [33] has been widely used in representation learning literature [33; 18; 9]. To further improve the mutual information estimation, a combination of normalized and unnormalized variational distribution can be considered [8; 35]. Our MISA connects mutual information estimation with RL by parameterizing a tractable lower bound with a policy network as a variational distribution and the Q values as critics. In this way, MISA explicitly regularizes the policy improvement direction to lie in the data manifold and produces strong empirical performance.
## 3 Preliminaries
Reinforcement LearningWe consider a Markov Decision Process (MDP) denoted as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},p_{0}(s),p(s^{\prime}\mid s,a),r(s,a),\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(p_{0}(s)\) is the initial state distribution, \(p(s^{\prime}\mid s,a)\) is the transition function, \(r(s,a)\) is the reward function, and \(\gamma\) is the discount factor. The target of a learning agent is to find a policy \(\pi^{*}(a\mid s)\) that maximizes the accumulative reward by interacting with the environment
\[\arg\max_{\pi}\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t} )\mid s_{0}\sim p_{0}(s),a_{t}\sim\pi(a\mid s_{t})\right]. \tag{1}\]
Q-learning is a set of off-policy RL algorithms that utilize the optimal Bellman's optimality operator \(\mathcal{B}^{*}Q(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p(s^{\prime}|s,a )}[\max_{a^{\prime}}Q(s^{\prime},a^{\prime})]\) to learn a Q function. Differently, Bellman's expectation operator \(\mathcal{B}^{*}\mathcal{O}(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p(s^{ \prime}|s,a),a^{\prime}\sim r(\mid s^{\prime})}[Q(s^{\prime},a^{\prime})]\) gives an actor-critic framework that alternates between policy evaluation and policy improvement. Consider a value network \(Q_{\phi}(s,a)\) parameterized by \(\phi\) and a policy network \(\pi_{\theta}(a|s)\) parameterized by \(\theta\). Let \(\mu_{\pi}(s)\) denote the stationary distribution induced with policy \(\pi\), which is also called occupancy measure [37]. Given the current policy, the policy evaluation aims to learn a Q network that can accurately predict its values minimizing \(\mathbb{E}_{\mu_{\pi_{\theta}}(s)\pi_{\theta}(a|s)}[(Q_{\phi}(s,a)-\mathcal{B} ^{\pi_{\theta}}Q_{\phi}(s,a))^{2}]\). Policy improvement focuses on learning the optimal policy that maximizes \(\mathbb{E}_{\mu_{\pi}(s)\pi(a|s)}[Q_{\phi}(s,a)]\). In practical implementations, the Bellman operator is replaced with its sample-based version \(\hat{\mathcal{B}}\), and the expectation over \(\mu_{\pi}(s)\pi(a|s)\) is approximated by an online replay buffer or an offline dataset \(\mathcal{D}\).
Nevertheless, as it is unavoidable to query the OOD actions when performing the maximization over actions, an inaccurate over-estimation of the Q value will be selected and the error will accumulate during the Bellman's update. Conservative RL methods, in turn, aim to perform "conservative" updates of the value / policy function during optimization by constraining the updates on only the in-distribution samples, which minimizes the negative impact of OOD actions.
KL DivergenceGiven two probability distributions \(p(x)\) and \(q(x)\) on the same probability space, the KL divergence (_i.e._, relative entropy) from \(q\) to \(p\) is given by \(D_{\text{KL}}(p||q)=\mathbb{E}_{p(x)}\left[\log\frac{p(x)}{q(x)}\right]\geq 0\). The minimum value is achieved when the two densities are identical. We consider two dual representations that result in tractable estimators for the KL divergence.
**Lemma 3.1** (\(f\)**-divergence representation [32]): _The KL divergence admits the following lower bound:_
\[D_{\text{KL}}(p||q)\geq\sup_{T\in\mathcal{F}}\mathbb{E}_{p(x)}\left[T(x) \right]-\mathbb{E}_{q(x)}[e^{T(x)-1}], \tag{2}\]
_where the supremum is taken over a function family \(\mathcal{F}\) satisfying the integrability constraints._
**Lemma 3.2** (Donsker-Varadhan representation [30]): _The KL divergence has the lower bound:_
\[D_{\text{KL}}(p||q)\geq\sup_{T\in\mathcal{F}}\mathbb{E}_{p(x)}\left[T(x) \right]-\log(\mathbb{E}_{q(x)}[e^{T(x)}]), \tag{3}\]
_where the supremum is taken over a function family \(\mathcal{F}\) satisfying the integrability constraints._The above two bounds are tight for large families \(\mathcal{F}\).
## 4 Mutual Information Regularized Offline RL
We propose to tackle the offline RL problem from the perspective of mutual information estimation and develop a novel framework (MISA) by estimating the **M**utual **I**nformation between **S**tates and **A**ctions of a given offline dataset. MISA is a general framework that unifies multiple existing offline RL algorithms as special cases, including behavior cloning, TD3+BC [14], and CQL [25].
### Mutual Information Regularization
Consider the state \(S\) and action \(A\) as two random variables. Let \(p_{(S,A)}(s,a)\) denote the joint distribution of state-action pairs, and \(p_{S}(s)\), \(p_{A}(a)\) be the marginal distributions. The subscripts are omitted in the following for simplicity. The mutual information between \(S\) and \(A\) is defined with:
\[I(S;A)=\mathbb{E}_{p(s,a)}\left[\log\frac{p(s,a)}{p(s)p(a)}\right]=\mathbb{E}_ {p(s,a)}\left[\log\frac{p(a\mid s)}{p(a)}\right]=H(A)-H(A\mid S), \tag{4}\]
where \(H\) is Shannon entropy, and \(H(A|S)\) is conditional entropy of \(A\) given \(S\). The higher mutual information between \(S\) and \(A\) means the lower uncertainty in \(A\) given the state \(S\). This coincides with the observation that the actions selected by a well-performing agent are usually coupled with certain states. Therefore, given a joint distribution of state-action pairs induced by a (sub-optimal) behavior agent, it is natural to learn a policy that can recover the dependence between states and actions produced by the behavior agent. By regularizing the agent with \(I(S;A)\) estimation, we encourage the agent to 1) perform policy update within the dataset distribution and 2) avoid being over-conservative and make sufficient use of the dataset information.
Let \(\pi_{\beta}(a|s)\) represent a behavior policy and \(p_{\beta}(s,a)\) be the joint distribution of state-action pairs induced by \(\pi_{\beta}\). Calculating the mutual information is often intractable as accessing \(p_{\beta}(s,a)\) is infeasible. Fortunately, in the problem of offline reinforcement learning, a dataset \(\mathcal{D}=\{(s_{t},a_{t},r_{t},s_{t+1})\}\) of transitions is given by drawing samples independently from \(p_{\beta}(s,a)\). This dataset can thus be seen as a sample-based empirical joint distribution \(p_{\mathcal{D}}(s,a)\) for \(p_{\beta}\). Let \(\mathcal{I}(\theta,\phi)\) denote a mutual information lower bound that relies on parameterized functions with parameters \(\theta\) and \(\phi\)2, which are usually the policy network and Q network in the context of RL. We defer the derivation of such bounds in Sec. 4.2. Based on the above motivation, we aim at learning a policy that can approximate the mutual information of the dataset while being optimized to get the best possible cumulative return. We focus on the actor-critic framework, and formulate the offline RL problem with mutual information regularization as follows:
Footnote 2: Note some lower bounds might only have one parameterized function.
\[\min_{\phi} \mathbb{E}_{s,a,s^{\prime}\sim\mathcal{D}}\left[\frac{1}{2}\left( Q_{\phi}(s,a)-\mathcal{B}^{\pi_{\theta}}Q_{\phi}(s,a)\right)^{2}\right]-\alpha_{1} \hat{\mathcal{I}}_{\mathcal{D}}(\theta,\phi),\] (Policy Evaluation) (5) \[\max_{\theta} \mathbb{E}_{s\sim\mathcal{D},a\sim\pi_{\theta}(a|s)}\left[Q_{\phi }(s,a)\right]+\alpha_{2}\hat{\mathcal{I}}_{\mathcal{D}}(\theta,\phi),\] (Policy Improvement) (6)
where \(\alpha_{1}\) and \(\alpha_{2}\) are the coefficients to balance RL objective and mutual information objective, and \(\hat{\mathcal{I}}_{\mathcal{D}}(\theta,\phi)\) denotes the sample-based version of \(\mathcal{I}(\theta,\phi)\) estimated from dataset \(\mathcal{D}\).
### State-Action Mutual Information Estimation
In this section, we develop practical solutions to approximate the mutual information \(I(S;A)\) from samples of the joint distribution. We use the learning policy \(\pi_{\theta}(a|s)\) as a variational variable and Eqn. 4 can be rewritten as:
\[I(S;A)=\mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{\theta}(a|s)p(a|s)}{p(a)\pi_{ \theta}(a|s)}\right]=\mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{\theta}(a|s)}{p( a)}\right]+D_{\text{KL}}\left(p(s,a)||p(s)\pi_{\theta}(a|s)\right), \tag{7}\]
where \(p(s)\pi_{\theta}(a|s)\) is an induced joint distribution. Let \(\mathcal{I}_{\text{BA}}\triangleq\mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{ \theta}(a|s)}{p(a)}\right]\). We have \(I(S;A)\geq\mathcal{I}_{\text{BA}}\) as the KL divergence is always non-negative. This is exactly the Barber-Agakov (BA) lower bound developed by [3].
To obtain tighter bounds, we turn to KL dual representations of \(D_{\text{KL}}\left(p(s,a)||p(s)\pi_{\theta}(a|s)\right)\) in Eqn. 7. To this end, we choose \(\mathcal{F}\) to be a set of parameterized functions \(T_{\phi}:S\times A\rightarrow\mathbb{R},\phi\in\Phi\), which can be seen as an energy function.
With the \(f\)-divergence dual representation, we derive MISA-\(f\) as
\[\mathcal{I}_{\text{MISA-}f}\triangleq\mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{ \theta}(a|s)}{p(a)}\right]+\mathbb{E}_{p(s,a)}\left[T_{\phi}(s,a)\right]- \mathbb{E}_{p(s)\pi_{\theta}(a|s)}\left[e^{T_{\phi}(s,a)-1}\right]. \tag{8}\]
The \(\mathcal{I}_{\text{MISA-}f}\) bound is tight when \(p(a|s)\propto\pi_{\theta}(a|s)e^{T_{\phi}(s,a)-1}\). Similarly, using the DV representation in Theorem 3.2, we can have another bound \(\mathcal{I}_{\text{MISA-DV}}\leq I(S;A)\), as shown below:
\[\mathcal{I}_{\text{MISA-DV}}\triangleq\mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{ \theta}(a|s)}{p(a)}\right]+\mathbb{E}_{p(s,a)}\left[T_{\phi}(s,a)\right]-\log \mathbb{E}_{p(s)\pi_{\theta}(a|s)}\left[e^{T_{\phi}(s,a)}\right], \tag{9}\]
which is tight when \(p(a|s)=\frac{1}{Z}p(s)\pi_{\theta}(a|s)e^{T_{\phi}(s,a)},\) where \(\mathcal{Z}=\mathbb{E}_{p(s)\pi_{\theta}(a|s)}\left[e^{T_{\phi}(s,a)}\right]\).
We observe that the KL term in Eqn. 7 can be rewritten as:
\[D_{\text{KL}}\left(p(s,a)||p(s)\pi_{\theta}(a|s)\right)=\mathbb{E}_{p(s)} \left[\mathbb{E}_{p(a|s)}\left[\log\frac{p(a|s)}{\pi_{\theta}(a|s)}\right] \right]=\mathbb{E}_{p(s)}\left[D_{\text{KL}}(p(a|s)||\pi_{\theta}(a|s))\right].\]
Applying the DV representation of \(D_{\text{KL}}(p(a|s)||\pi_{\theta}(a|s))\), we can have a new lower bound \(\mathcal{I}_{\text{MISA}}\):
\[\mathcal{I}_{\text{MISA}}\triangleq\mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{ \theta}(a|s)}{p(a)}\right]+\mathbb{E}_{p(s,a)}\left[T_{\phi}(s,a)\right]- \mathbb{E}_{p(s)}\log\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{T_{\phi}(s,a)} \right]. \tag{10}\]
The bound is tight when \(p(a|s)=\frac{1}{Z(s)}\pi_{\theta}(a|s)e^{T_{\phi}(s,a)},\) where \(\mathcal{Z}(s)=\mathbb{E}_{\pi_{\theta}(a|s)}[e^{T_{\phi}(s,a)}]\).
**Theorem 4.1**: _Given the joint distribution of state \(s\) and action \(a\), the lower bounds of mutual information \(I(S;A)\) defined in Eqn. 8-10 have the following relations:_
\[I(S;A)\geq\mathcal{I}_{\text{MISA}}\geq\mathcal{I}_{\text{MISA-DV}}\geq \mathcal{I}_{\text{MISA-}f}. \tag{11}\]
The proof is deferred to the appendix due to space limit.
### Integration with Offline Reinforcement Learning
We now describe how our MISA lower bound is integrated into the above framework (Eqn. 12-13) to give a practical offline RL algorithm. Theoretically, \(T_{\psi}(s,a)\) should be an arbitrary function optimized to accurately estimate the mutual information. As regularizing Q-functions is critical to offline RL, we propose to use a Q network \(Q_{\phi}(s,a)\) as the energy function \(T_{\phi}(s,a)\), and use \(p_{\mathcal{D}}(s,a)\) as the joint distribution in Eqn. 10. Then we have the following objective to learn a Q-network during policy evaluation:
\[J_{Q}(\phi)=J_{Q}^{\mathcal{B}}(\phi)-\gamma_{1}\mathbb{E}_{s,a\sim\mathcal{D }}\left[Q_{\phi}(s,a)\right]-\gamma_{1}\mathbb{E}_{s\sim\mathcal{D}}\left[ \log\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{Q_{\phi}(s,a)}\right]\right], \tag{12}\]
where \(J_{Q}^{\mathcal{B}}(\phi)=\mathbb{E}_{s,a,s^{\prime}\sim\mathcal{D}}\left[ \frac{1}{2}\left(Q_{\phi}(s,a)-\mathcal{B}^{\pi_{\theta}}Q_{\phi}(s,a)\right) ^{2}\right]\) represents the TD error. For policy improvement, note that the entropy term \(H(a)\) in Eqn. 10 can be omitted as it is a constant given dataset \(\mathcal{D}\). Thus, we have the below objective to maximize:
\[J_{\pi}(\theta)=\mathbb{E}_{s\sim\mathcal{D},a\sim\pi_{\theta}(a|s)}\left[Q_{ \phi}(s,a)\right]+\gamma_{2}\mathbb{E}_{s,a\sim\mathcal{D}}[\log\pi_{\theta}(a |s)]-\gamma_{2}\mathbb{E}_{s\sim\mathcal{D}}\left[\log\mathbb{E}_{\pi_{\theta}( a|s)}\left[e^{Q_{\phi}(s,a)}\right]\right]. \tag{13}\]
The formulations for other regularizers (_e.g._, \(\mathcal{I}_{\text{MISA-DV}}\) and \(\mathcal{I}_{\text{MISA-}f}\)) can be derived similarly. A detailed description of the MISA algorithm for offline RL can be found in Algo. 1.
Intuitive Explanation on the Mutual Information Regularizer.By rearranging the terms in Eqn. 10, MISA can be written as:
\[\mathcal{I}_{\text{MISA}}=\mathbb{E}_{s,a\sim\mathcal{D}}\left[\log\frac{\pi_{ \theta}(a\mid s)e^{Q_{\phi}(s,a)}}{\mathbb{E}_{\pi_{\theta}(a^{\prime}|s)} \left[e^{Q_{\phi}(s,a^{\prime})}\right]}\right], \tag{14}\]where the log term can be seen as the log probability of a one-step improved policy. More specifically, for policy improvement with KL divergence regularization: \(\max_{\pi}\mathbb{E}_{s\sim\mathcal{D},a\sim\pi}[Q_{\phi}(s,a)]+D_{\text{KL}}( \pi||\pi_{\theta})\), the optimal solution is given by \(\pi_{\theta,\phi}^{*}\propto\pi_{\theta}(a|s)e^{Q_{\phi}(s,a)}\)[1, 34]. Therefore, \(\mathcal{I}_{\text{MISA}}\) is rewritten with \(\pi_{\theta,\phi}^{*}\) as \(\mathcal{I}_{\text{MISA}}=\mathbb{E}_{s,a\sim\mathcal{D}}[\log\pi_{\theta,\phi }^{*}(a|s)]\), and maximizing it means maximizing the log-likelihood of the dataset using the improved policy. In other words, instead of directly fitting the policy on the dataset, which is short-sighted, this objective considers the optimization direction of the policy improvement step. Given the current policy and policy evaluation results, it first computes the analytic improved policy, and then forces the dataset likelihood to be maximized using the improved policy. In this way, even if an out-of-distribution state-action pair get an overestimated q value, \(\mathcal{I}_{\text{MISA}}\) is going to suppress this value and make sure in-distribution data have relatively higher value estimation.
Unbiased Gradient EstimationFor policy improvement with Eqn. 13, differentiating through a sampling distribution \(\pi_{\theta}(a\mid s)\) is required for \(\mathbb{E}_{s\sim D}\log\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{Q_{\phi}(s,a)} \right]\). For a Gaussian policy \(\pi_{\theta}(a\mid s)=\mathcal{N}(\mu_{\theta},\sigma_{\theta})\), one could consider the reparameterization trick [23] and convert the objective as \(\mathbb{E}_{s\sim D}\log\mathbb{E}_{\epsilon\sim\mathcal{N}(0,1)}\left[e^{Q_ {\phi}(s,\mu_{\theta}+\epsilon\epsilon\sigma_{\theta})}\right]\). However, this introduces high variance in offline reinforcement learning setups because we condition the policy improvement directly on the Q values of the out-of-distribution actions, which eventually gives a noisy policy. Hence, we aim to minimize the influence of Q values for policy improvement.
Differentiating Eqn. 10 with respect to policy parameters \(\theta\), we have
\[\frac{\partial\mathcal{I}_{\text{MISA}}}{\partial\theta}=\mathbb{E}_{s,a\sim D }\left[\frac{\log\pi_{\theta}(a\mid s)}{\partial\theta}\right]-\mathbb{E}_{s \sim D,a\sim p_{\theta,\phi}(a|s)}\left[\frac{\log\pi_{\theta}(a\mid s)}{ \partial\theta}\right] \tag{15}\]
where \(p_{\theta,\phi}(a\mid s)=\frac{\pi_{\theta}(a|s)e^{Q_{\phi}(s,a)}}{\mathbb{E}_{ s_{\theta}(a|s)}[e^{Q_{\phi}(s,a)})]}\) is a self-normalized distribution. See appendix A.2 for a derivation. By optimizing Eqn. 15, we obtain an unbiased gradient estimation of the MISA objective with respect to the policy parameters, while minimizing the negative effects of the Q values of OOD actions. To sample from \(p_{\theta,\phi}(a\mid s)\), one can consider Markov-Chain Monte-Carlo (MCMC) methods, e.g., Hamiltonian Monte Carlo [6].
### Connections to Existing Offline RL Methods
We show that some existing offline RL methods can be viewed as special cases of MISA framework.
Behavior Cloning and BC Regularized RLWe first show that behavior cloning is a form of mutual information regularizer. As shown by Eqn. 7, \(\mathcal{I}_{\text{BA}}\triangleq\mathbb{E}_{s,a\sim\mathcal{D}}\left[\log \frac{\pi_{\theta}(a|s)}{p(a)}\right]\) gives a lower bound of mutual information. Since \(H(a)\) is a consistent given datasets, maximizing \(\mathcal{I}_{\text{BA}}\) is equivalent to maximizing \(\mathbb{E}_{s,a\sim\mathcal{D}}\left[\log\pi_{\theta}(a|s)\right]\), which is exactly the objective for behavior cloning.
As for TD3+BC [14], the policy evaluation is unchanged, while the policy improvement objective is augmented by an MSE regularization term, _i.e._, \(\mathbb{E}_{s\sim\mathcal{D}}[Q(s,\pi_{\theta}(s))]-\gamma\mathbb{E}_{s,a\sim \mathcal{D}}\left[(\pi_{\theta}(s)-a)^{2}\right]\), where \(\lambda\) is a hyperparameter. Maximizing the negative MSE term is equivalent to maximizing \(\mathbb{E}_{s,a\sim\mathcal{D}}\left[\log p_{\pi_{\theta}}(a|s)\right]\), where \(p_{\pi_{\theta}}=Ce^{-\frac{1}{2}(\pi_{\theta}(s)-a)^{2}}\) is a Gaussian distribution, and \(C\) is a constant. This is a special case of Eqn. 13 when we remove the last log-mean-exp term.
Conservation Q LearningCQL [25] was proposed to alleviate the over-estimation issue of Q learning by making conservative updates to the Q values during policy evaluation. The policy improvement is kept unchanged compared to standard Q learning. We focus on the entropy-regularized policy evaluation of CQL as below:
\[\min_{\phi}-\gamma_{1}\mathbb{E}_{s\sim\mathcal{D}}\left[\mathbb{E}_{a\sim_{ \mathcal{TD}}(a|s)}[Q_{\phi}(s,a)]-\log\sum_{a}e^{Q_{\phi}(s,a)}\right]+J_{Q}^{ \mathsf{B}}(\phi) \tag{16}\]
where we highlight the main difference between it and our MISA policy evaluation (Eqn. 12) in blue. Let \(\pi_{\mathsf{U}}(a|s)\) denote a uniform distribution of actions and \(|A|\) is the number of actions. The log-sum-exp term can be written as \(\log\mathbb{E}_{a\sim\pi_{\mathsf{U}}(a|s)}[Q_{\phi}(s,a)]+\log|A|\). Substituting it into Eqn. 16 and discarding the constant \(\log|A|\), we recover the formulation in Eqn. 12. Therefore, CQL is actually doing mutual information regularization during policy evaluation. The key difference is that it is not using the current policy network as the variational distribution. Instead, a manually designed distribution is used in CQL. However, a uniform policy is usually suboptimal in environments with continuous actions. CQL thus constructs a mixed variational policy by drawing samples drawn from the current policy network, a uniform distribution, and the dataset. In our formulation, the variational distribution will be optimized to give a better mutual information estimation. This explains why MISA gives better performance than CQL.
## 5 Experiments
We first conduct ablation studies on MISA to better understand the influences of mutual information estimation on offline RL. Next, we compare MISA with a wide range of baseline algorithms to demonstrate its effectiveness on the D4RL dataset [13].
### Experiment Setups
We follow the network architectures of CQL [25] and IQL [24], where a neural network of 3 encoding layers of size 256 is used for antmaze-v0 environments, and 2 encoding layers for other tasks, followed by an output layer. When approximating \(\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{T_{\psi}(s,a)}\right]\), we use 50 Monte-Carlo samples. To sample from the non-parametric distribution \(p_{\theta,\phi}(a\mid s)=\frac{\pi_{\theta}(a|s)e^{Q_{\phi}(s,a)}}{\mathbb{E}_ {\pi_{\theta}(a|s)}\left[e^{Q_{\phi}(s,a)}\right]}\), we use the Hamiltonian Monte Carlo algorithm. In addition, for unbiased gradient estimation with MCMC samples, we use a burn-in step of 5, set the number of leapfrog steps to 2, and set the MCMC step size to 1. All these setups enable MISA to run at a relatively fast speed under careful implementation and do not suffer from the slow MCMC speed. For all tasks, we average the mean returns over 10 evaluation trajectories and 5 random seeds. In particular, following [24], we evaluate the antmaze-v0 environments for 100 episodes instead. To stabilize the training of our agents in antmaze-v0 environments, we follow [25] and normalize the reward by \(r^{\prime}=(r-0.5)*4\).
For practical implementations, we follow the CQL-Lagrange [25] implementation by constraining the Q-value update by a "budget" variable \(\tau\) and rewrite Eqn. 12 as
\[\min_{Q}\max_{\gamma_{1}\geq 0}\gamma_{1}(\mathbb{E}_{s\sim\mathcal{D}} \left[\log\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{Q_{\phi}(s,a)}\right]\right] -\mathbb{E}_{s,a\sim\mathcal{D}}\left[Q_{\phi}(s,a)\right]-\tau)-J_{Q}^{ \mathsf{B}}(\phi). \tag{17}\]
Eqn. 26 implies that if the expected value of the Q-value difference is less than the threshold \(\tau\), \(\gamma_{1}\) will adjust to close to 0; if the Q-value difference is higher than the threshold \(\tau\), \(\gamma_{1}\) will be larger and penalize Q-values harder. More implementation details are in the appendix.
### Influences of Mutual Information Estimation on Offline RL
To begin with, we perform a thorough ablation study on MISA and its variants to better understand the influences of mutual information estimation on offline RL. Specifically, we aim to understand: 1) if tighter mutual information lower bound regularization helps to improve the performance, and 2) how accurately estimating the mutual information affects the final performance of offline RL.
_Tighter mutual information lower bound helps to improve the performance of offline RL._ In Table 1, BA stands for the Barber-Adakov Bound, and as discussed in Sect. 4.2, we have such inequality for mutual information estimation: BA \(\leq\) MISA-\(f\)\(\leq\) MISA-DV \(\leq\) MISA. As shown in Table 1, we observe a performance trend consistent with our theoretical analysis, where MISA significantly improves over the three variants with lower mutual information estimation bound.
_Accurately estimating the mutual information is critical to the performance of offline RL._ In Table 1, no-BA stands for removing the Barber-Agakov term in Eqn. 10, which gives an inaccurate estimation to mutual information, neither an upper bound nor a lower bound. We can observe that the no-BA variant suffers from a performance drop compared with MISA. Similarly, we vary the number (\(k\)) of Monte-Carlo samples for approximating \(\mathbb{E}_{\pi_{a}(a|s)}\left[e^{T_{\psi}(s,a)}\right]\) and reduce the burn-in steps of MCMC sampling process. Both operations increase the Monte-Carlo approximation errors to MISA. Comparing \(k=5\), \(k=20\), and MISA (\(k=50\)), the performance increases monotonically; comparing MISA (BI=5) with BI=1, we can observe a sharp performance drop. We then conclude that MISA requires careful Monte-Carlo approximation for good performance.
_Unbiased gradient estimations improve the performance of MISA._ MISA-biased ignores the bias correction term in Eqn. 15. Although MISA-biased outperforms the baselines, it still performs worse than MISA. This suggests that by correcting the gradient estimation with additional MCMC samples, MISA achieves better regularized policy learning in offline RL. Note that by setting a minimal 5 MCMC burn-in steps, MISA only slightly sacrifices the training speed, but during inference, MISA directly takes the learned model-free policy which does not affect the inference speed.
_Estimating mutual information with Q functions helps to stabilize MISA._ MISA-T is a variant that learns to estimate the mutual information lower bound by learning an independent function \(T_{\psi}(s,a)\), rather than directly using \(Q_{\phi}(s,a)\). Theoretically, MISA-T would give a more accurate mutual information estimation than MISA. Although practically we observe MISA-T indeed achieves superior performance on tasks like halfcheetah-medium-v2 (60.8) and halfcheetah-medium-replay-v2 (53.1), it mostly fails on others. MISA, instead, balances the mutual information estimation with Q-value regularization, and achieves a better overall performance.
### Offline Reinforcement Learning on D4RL Benchmarks
**Gym Locomotion Tasks.** We first evaluate MISA on the MuJoCo-style continuous control tasks, reported as gym-locomotion-v2 in Table 2. We observe that MISA improves the performance of baselines by a large margin. Specifically, MISA is less sensitive to the characteristics of data distributions. The medium datasets include trajectories collected by a SAC agent trained to reach 1/3 of the performance of an expert; the medium-replay datasets contain all data samples of the replay buffer during the training of the medium SAC agent, which covers the noisy exploration process of the medium agent. We can observe that prior methods are generally sensitive to noisy sub-optimal data in medium and medium-replay environments, while MISA outperforms them by a large margin. In particular, MISA achieves near-expert performance on walker2d-medium-replay with only sub-optimal trajectories. This indicates that by regularizing the policy and Q-values within the mutual information of the dataset, we can fully exploit the data and perform safe and accurate policy improvement during RL. Moreover, on medium-expert environments, where the datasets are mixtures of medium agents and experts, MISA successfully captures the multi-modality of the datasets and allows further improvements of the policy over baselines.
**Advoit Tasks.** According to [14], adroit tasks require strong policy regularization to overcome the extrapolation error, because the datasets are either generated by humans (adroit-human-v0), which would show a narrow policy distribution, or a mixture of human demonstrations and a behavior
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Dataset & \(k\)=5 & \(k\)=20 & BI=1 & no BA & BA & MISA-\(f\) & MISA-DV & MISA-biased & MISA-T & MISA \\ \hline halfcheetah medium-v2 & 47.1 & 46.9 & 47.2 & 49.1 & 56.3 & 43.5 & 45.5 & 48.4 & **60.8** & 47.4 \\ hopper-medium-v2 & 62.2 & 65.3 & 61.8 & 64.4 & 1.2 & 60.5 & 61.6 & 65.7 & 1.9 & **67.1** \\ walker2d-medium-v2 & 83.3 & 83.9 & 81.8 & 83.8 & 7.5 & 73.2 & 82.8 & **84.2** & 2.8 & 84.1 \\ halfcheetah-medium-replay-v2 & 45.4 & 45.3 & 45.2 & 46.5 & 52.4 & 39.8 & 43.8 & 46.9 & **58.1** & 45.6 \\ hopper-medium-replay-v2 & 79.9 & 88.4 & 72.9 & **100.3** & 56.4 & 34.8 & 45.9 & 98.1 & 44.2 & 98.6 \\ walker2d-medium-replay-v2 & 83.7 & **86.9** & 82.8 & 86.1 & 51.1 & 34.9 & 81.4 & 80.6 & 79.8 & 86.2 \\ halfcheetah-medium-replay-v2 & 94.5 & 92.8 & 92.8 & 87.1 & 26.8 & 57.6 & 92.4 & 84.6 & 24.9 & **94.7** \\ hopper-medium-expert-v2 & 105.7 & 102.7 & 93.4 & 89.6 & 1.3 & 57.7 & **111.5** & 103.2 & 0.7 & 109.8 \\ walker2d-medium-expert-v2 & 109.2 & **109.4** & 109.3 & 108.1 & 1.4 & 102.7 & 108.8 & 109.2 & 2.8 & **109.4** \\ \hline gym-locomotion-v2 (total) & 711 & 721.6 & 687.2 & 715 & 254.4 & 504.7 & 673.7 & 720.9 & 271.0 & **742.9** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation studies on gym-locomotion-v2. \(k\) denotes the number of Monte-Carlo samples for estimating \(\mathbb{E}_{\pi_{a}(a|s)}\left[e^{T_{\psi}(s,a)}\right]\), BI represents the burnin-steps for MCMC simulation, and BA denotes the use of Barber-Agakov Bound. In addition, MISA-\(x\) denotes different variants of MISA.
cloning policy (adroit-cloned-v0). We observe that MISA provides a stronger regularization and significantly outperforms the baselines on adroit.
**Kitchen Tasks.** An episode of Kitchen environment consists of multiple sub-tasks that can be mixed in an arbitrary order. We observe that MISA outperforms baselines on both kitchen-complete-v0 and kitchen-mixed-v0, while achieving slightly worse performance on kitchen-partial-v0. Specifically, on kitchen-mixed, the result is consistent with our assumption that by better regularizing the policy, MISA guarantees a safer and in-distribution policy improvement step in offline RL.
**Antmaze Tasks.** On the challenging AntMaze domain with sparse delayed reward, we observe that MISA generally outperforms CQL and achieves the best performance on umaze environments. However, MISA performs worse than IQL on challenging large environments. Multi-step value update is often necessary for learning a robust value estimation in these scenarios [24] while MISA adopts a single-step SAC for the base RL algorithm.
### Visualization of Embedding
In Fig. 1, we visualize the embeddings before the output layer of Q-value networks, given different mutual information bounds (BA and MISA). We select a subset from walker2d-medium-v2 dataset to study the division of low reward (blue) and high reward (red) \((s,a)\) pairs. We color each point by the reward \(r(s,a)\). As discussed in Sect. 4.2, BA gives a lowest bound for mutual information estimation and MISA produces the tightest bound. In Fig. 1, we observe a consistent result. The embeddings of BA converge to a set of regular curves and fail to cluster the high \(r(s,a)\), because Q-values have
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Dataset & BC & 10\%BC & DT & AWAC & OneStep RL & TD3+BC & CQL & IQL & MISA \\ \hline halfcheetah-medium-v2 & 42.6 & 42.5 & 42.6 & 43.5 & **48.4** & 48.3 & 44 & 47.4 & 47.4\(\pm\)0.2 \\ hopper-medium-v2 & 52.9 & 56.9 & **67.6** & 57 & 59.6 & 59.3 & 58.5 & 66.3 & 67.1\(\pm\)1.7 \\ walker2d-medium-v2 & 75.3 & 75 & 74 & 72.4 & 81.8 & 83.7 & 72.5 & 78.3 & **84.1\(\pm\)**0.3 \\ halfcheetah-medium-replay-v2 & 36.6 & 40.6 & 36.6 & 40.5 & 38.1 & 44.6 & 45.5 & 44.2 & **45.6\(\pm\)**0.2 \\ hopper-medium-replay-v2 & 18.1 & 75.9 & 82.7 & 37.2 & **97.5** & 60.9 & 95 & 94.7 & **98.6\(\pm\)**2.5 \\ walker2d-medium-replay-v2 & 26.2 & 66.5 & 66.2 & 27.4 & 95.8 & 81.8 & 77.2 & 73.9 & **86.2\(\pm\)**0.5 \\ halfcheetah-medium-replay-v0 & 55.2 & 92.9 & 86.8 & 42.8 & 93.4 & 90.7 & 91.6 & 86.7 & **94.7\(\pm\)**1.9 \\ hopper-medium-expert-v2 & 52.5 & **110.9** & 107.6 & 55.8 & 103.3 & 98 & 105.4 & 91.5 & 109.8\(\pm\)1.8 \\ walker2d-medium-expert-v2 & 107.5 & 109 & 108.1 & 74.5 & **113** & 110.1 & 108.8 & 109.6 & 109.4\(\pm\)0.3 \\ gym-locomotion-v2 (total) & 466.7 & 666.2 & 672.6 & 450.7 & 684.6 & 677.4 & 698.5 & 692.6 & **742.9\(\pm\)**4.6 \\ \hline \hline kitchen-complete-v0 & **65** & 7.2 & - & 39.3 & 57 & 16.8 & 43.8 & 62.5 & **70.2\(\pm\)**6.8 \\ kitchen-partial-v0 & 38 & **66.8** & - & 36.6 & 53.1 & 22.4 & 49.8 & 46.3 & 45.7\(\pm\)6.2 \\ kitchen-mixed-v0 & 51.5 & 50.9 & - & 22 & 47.6 & 46.2 & 51 & 51 & **56.6\(\pm\)**4.3 \\ \hline kitchen-v0 (total) & 154.5 & 124.9 & - & 97.9 & 157.7 & 85.4 & 144.6 & 159.8 & **172.5\(\pm\)**10.2 \\ \hline \hline pen-human-v0 & 63.9 & -2 & - & 15.6 & 71.8 & 64.8 & 37.5 & 71.5 & **88.1\(\pm\)**9.7 \\ hammer-human-v0 & 1.2 & 0 & - & 0.1 & 1.2 & 1.8 & 4.4 & 1.4 & **8.1\(\pm\)**1.3 \\ door-human-v0 & 2 & 0 & - & 0.1 & 5.4 & 0 & **0.9** & 4.3 & 5.2\(\pm\)2.4 \\ relocate-human-v0 & 0.1 & - & - & 0.1 & 1.9 & 0.1 & 0.2 & 0.1 & 0.1\(\pm\)0.1 \\ pen-cloned-v0 & 37 & 0 & - & 24.7 & **60** & 49 & 39.2 & 37.3 & 58.6\(\pm\)4.4 \\ hammer-cloned-v0 & 0.6 & 0 & - & 0.3 & 2.1 & 0.2 & 2.1 & 2.2 & 2.2\(\pm\)0.4 \\ door-cloned-v0 & 0 & 0 & - & 0.1 & 0.4 & 0 & 0.4 & **1.6** & 0.5\(\pm\)0.2 \\ relocate-cloned-v0 & -0.3 & 0 & - & -0.1 & -0.1 & -0.2 & -0.1 & -0.2 & -0.1\(\pm\)0.1 \\ \hline adroit-v0 (human-cloned) & 104.5 & -2 & 0 & 40.9 & 142.7 & 115.7 & 93.6 & 118.1 & **162.7\(\pm\)**11.0 \\ \hline \hline antmaze-umaze-v0 & 54.6 & 62.8 & 59.2 & 56.7 & 64.3 & 78.6 & 74 & 87.5 & **92.3\(\pm\)**5.6 \\ antmaze-umaze-diverse-v0 & 45.6 & 50.2 & 53 & 49.3 & 60.7 & 71.4 & 84 & 62.2 & **89.1\(\pm\)**4.7 \\ antmaze-medium-play-v0 & 0 & 5.4 & 0 & 0 & 0.3 & 10.6 & 61.2 & **71.2** & 63.6\(\pm\)6.9 \\ antmaze-medium-diverse-v0 & 0 & 9.8 & 0 & 0.7 & 0 & 3 & 53.7 & **70** & 62.8\(\pm\)7.2 \\ antmaze-large-play-v0 & 0 & 0 & 0 & 0 & **0** & 0.2 & 15.8 & **39.6** & 17.5\(\pm\)7.8 \\ antmaze-large-diverse-v0 & 0 & 6 & 0 & 1 & 0 & 0 & 14.9 & **47.5** & 23.4\(\pm\)8.1 \\ \hline antmaze-v0 (total) & 100.2 & 134.2 & 112.2 & 107.7 & 125.3 & 163.8 & 303.6 & **378** & 348.1\(\pm\)16.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average normalized score on the D4RL benchmark. Results of baselines are taken directly from [24]. For the missing results of 10%, AWAC, and OneStep RL, we re-implement the baselines and report the results.
Figure 1: tSNE of the Q-value network embeddings of walker2d-medium-v2 dataset, where red color denote high reward and blue color denote low reward.
converged to indistinguishably high values (\(3\times 10^{12}\)) for all \((s,a)\) pairs. In contrast, MISA successfully learns to cluster the \((s,a)\) pairs with a high reward into a cluster. From this perspective, we claim that regularizing the mutual information encourages learning a robust representation in offline RL scenarios.
## 6 Conclusions
We present the MISA framework for offline reinforcement learning by directly regularizing policy improvement and policy evaluation with the mutual information between state-action pairs of the dataset. MISA connects mutual information estimation with RL by constructing tractable lower bounds, treating the learning policy as a variational distribution and Q values as energy functions. The resulting tractable lower bound resembles a non-parametric energy-based distribution, which can be interpreted as the likelihood of a one-step improved policy given the current value estimation. In our experiments, we show how accurate mutual information estimation affects offline RL and the proposed method, MISA, has outperformed a wide range of baselines on D4RL benchmark by large margins, reaching a total point of 742.9 on gym-locomotion tasks of D4RL datasets. However, as a preliminary exploration in this direction, MISA does not sufficiently exploit the most advanced mutual information estimation methods. Future works could consider further optimizing the mutual information estimation to achieve higher performances.
## References
* Abdolmaleki et al. [2018] Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin A. Riedmiller. Maximum a posteriori policy optimisation. In _International Conference on Learning Representations_, 2018.
* An et al. [2021] Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offline reinforcement learning with diversified q-ensemble. _Advances in neural information processing systems_, 34:7436-7447, 2021.
* Barber and Agakov [2004] David Barber and Felix Agakov. The IM algorithm: a variational approach to information maximization. _Advances in neural information processing systems_, 16(320):201, 2004.
* Belghazi et al. [2018] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, R. Devon Hjelm, and Aaron C. Courville. Mutual information neural estimation. In Jennifer G. Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, 2018.
* Berner et al. [2019] Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. _arXiv preprint arXiv:1912.06680_, 2019.
* Betancourt [2017] Michael Betancourt. A conceptual introduction to hamiltonian monte carlo. _arXiv preprint arXiv:1701.02434_, 2017.
* Bradbury et al. [2018] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
* Brekelmans et al. [2022] Rob Brekelmans, Sicong Huang, Marzyeh Ghassemi, Greg Ver Steeg, Roger Baker Grosse, and Alireza Makhzani. Improving mutual information estimation with annealed and energy-based bounds. In _International Conference on Learning Representations_, 2022.
* Chen et al. [2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In _International Conference on Machine Learning_, 2020.
* Cichocki and Amari [2010] Andrzej Cichocki and Shun-ichi Amari. Families of alpha-beta-and gamma-divergences: Flexible and robust measures of similarities. _Entropy_, 12(6):1532-1568, 2010.
* Clevert et al. [2015] Djork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). _arXiv preprint arXiv:1511.07289_, 2015.
* [12] Monroe D Donsker and SR Srinivasa Varadhan. Asymptotic evaluation of certain markov process expectations for large time, i. _Communications on Pure and Applied Mathematics_, 28(1):1-47, 1975.
* [13] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning. _arXiv preprint arXiv:2004.07219_, 2020.
* [14] Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. _Advances in Neural Information Processing Systems_, 2021.
* [15] Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In _International conference on machine learning_, 2018.
* [16] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In _International conference on machine learning_, 2019.
* [17] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In _International conference on machine learning_, 2018.
* [18] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In _Conference on Computer Vision and Pattern Recognition_, 2020.
* [19] Jonathan Heek, Anselm Levskaya, Avital Oliver, Marvin Ritter, Bertrand Rondepierre, Andreas Steiner, and Marc van Zee. Flax: A neural network library and ecosystem for JAX, 2020.
* [20] Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, and Sergey Levine. Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation. In _International Conference on Robotics and Automation_, 2018.
* [21] Bingyi Kang, Xiao Ma, Chao Du, Tianyu Pang, and Shuicheng Yan. Efficient diffusion policies for offline reinforcement learning. _arXiv preprint arXiv:2305.20081_, 2023.
* [22] Bingyi Kang, Xiao Ma, Yirui Wang, Yang Yue, and Shuicheng Yan. Improving and benchmarking offline reinforcement learning algorithms. _arXiv preprint arXiv:2306.00972_, 2023.
* [23] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun, editors, _International Conference on Learning Representations_, 2014.
* [24] Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. In _International Conference on Learning Representations_, 2022.
* [25] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. _Advances in Neural Information Processing Systems_, 2020.
* [26] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. _The Journal of Machine Learning Research_, 2016.
* [27] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. _arXiv preprint arXiv:2005.01643_, 2020.
* [28] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. _arXiv preprint arXiv:1312.5602_, 2013.
* [29] Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. _Advances in neural information processing systems_, 2017.
* [30] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. _IEEE Transactions on Information Theory_, 2010.
* [31] Sebastian Nowozin. Improved information gain estimates for decision tree induction. _arXiv preprint arXiv:1206.4620_, 2012.
* [32] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. \(f\)-gan: Training generative neural samplers using variational divergence minimization. _Advances in neural information processing systems_, 29, 2016.
* [33] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_, 2018.
* [34] Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. _arXiv preprint arXiv:1910.00177_, 2019.
* [35] Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In _International Conference on Machine Learning_, pages 5171-5180. PMLR, 2019.
* [36] Manolis Savva, Jitendra Malik, Devi Parikh, Dhruv Batra, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, and Vladlen Koltun. Habitat: A platform for embodied ai research. In _International Conference on Computer Vision, ICCV_, 2019.
* [37] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In _International conference on machine learning_, 2015.
* [38] Noah Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. In _International Conference on Learning Representations_, 2020.
* [39] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. _Nature_, 2017.
* [40] Richard S Sutton and Andrew G Barto. _Reinforcement learning: An introduction_. MIT press, 2018.
* [41] Zhendong Wang, Jonathan J Hunt, and Mingyuan Zhou. Diffusion policies as an expressive policy class for offline reinforcement learning. In _The Eleventh International Conference on Learning Representations_, 2023.
* [42] Ziyu Wang, Alexander Novikov, Konrad Zolna, Josh S Merel, Jost Tobias Springenberg, Scott E Reed, Bobak Shahriari, Noah Siegel, Caglar Gulcehre, Nicolas Heess, et al. Critic regularized regression. _Advances in Neural Information Processing Systems_, 2020.
* [43] Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. _arXiv preprint arXiv:1911.11361_, 2019.
* [44] Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. Combo: Conservative offline model-based policy optimization. _Advances in Neural Information Processing Systems_, 2021.
* [45] Yang Yue, Bingyi Kang, Xiao Ma, Gao Huang, Shiji Song, and Shuicheng Yan. Offline prioritized experience replay, 2023.
* [46] Yang Yue, Bingyi Kang, Xiao Ma, Zhongwen Xu, Gao Huang, and Shuicheng Yan. Boosting offline reinforcement learning via data rebalancing, 2022.
## Broader Impact
MISA framework provides a simple yet effective approach to policy learning from offline datasets. Although the results presented in this paper only consider simulated environments, given the generality of MISA, it could be potentially effective on learning real-robot policies in more complex environments. We should be cautious about the misuse of the method proposed. Depending on the specific application scenarios, it might be harmful to democratic privacy and safety.
## Appendix A Proofs and Derivations
### Proof for Theorem 4.1
We first show \(\mathcal{I}_{\text{MISA}}\), \(\mathcal{I}_{\text{MISA-DV}}\) and \(\mathcal{I}_{\text{MISA-}f}\) are lower bounds for mutual information \(I(S,A)\).
Let \(\mu_{\theta,\phi}(a|s)\triangleq\frac{1}{\mathcal{Z}(s)}\pi_{\theta}(a|s)e^{T_ {\phi}(s,a)}\), where \(\mathcal{Z}(s)=\mathbb{E}_{\pi_{\theta}(a|s)}[e^{T_{\phi}(s,a)}]\), \(\mathcal{I}_{\text{MISA}}\) can be written as:
\[\begin{split}\mathcal{I}_{\text{MISA}}&\triangleq \mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{\theta}(a|s)}{p(a)}\right]+\mathbb{E}_{ p(s,a)}\left[T_{\phi}(s,a)\right]-\mathbb{E}_{p(s)}\log\mathbb{E}_{\pi_{ \theta}(a|s)}\left[e^{T_{\phi}(s,a)}\right]\\ &=\mathbb{E}_{p(s,a)}\left[\log\frac{p(a|s)}{p(a)}\right]-\mathbb{ E}_{p(s,a)}[\log p(a|s)]\\ &\quad+\mathbb{E}_{p(s,a)}[\log\pi_{\theta}(a|s)]+\mathbb{E}_{p(s,a)}\left[T_{\phi}(s,a)\right]-\mathbb{E}_{p(s)}[\log\mathcal{Z}(s)]\\ &=I(S,A)-\mathbb{E}_{p(s)}\left[D_{\text{KL}}(p(a|s)||\mu_{\theta,\phi}(a|s))\right]\leq I(S,A).\end{split} \tag{18}\]
The above inequality holds as the KL divergence is always non-negative.
Similarly, let \(\mu_{\theta,\phi}(s,a)\triangleq\frac{1}{\mathcal{Z}}p(s)\pi_{\theta}(a|s)e^{T _{\phi}(s,a)}\), where \(\mathcal{Z}(s)=\mathbb{E}_{p(s)\pi_{\theta}(a|s)}[e^{T_{\phi}(s,a)}]\), \(\mathcal{I}_{\text{MISA-DV}}\) can be written as:
\[\begin{split}\mathcal{I}_{\text{MISA-DV}}&\triangleq \mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{\theta}(a|s)}{p(a)}\right]+\mathbb{E}_{ p(s,a)}\left[T_{\phi}(s,a)\right]-\log\mathbb{E}_{p(s)\pi_{\theta}(a|s)} \left[e^{T_{\phi}(s,a)}\right]\\ &=\mathbb{E}_{p(s,a)}\left[\log\frac{p(a|s)}{p(a)}\right]-\mathbb{ E}_{p(s,a)}[\log p(a|s)]\\ &\quad+\mathbb{E}_{p(s,a)}[\log\pi_{\theta}(a|s)]+\mathbb{E}_{p(s,a)}\left[T_{\phi}(s,a)\right]-\log\mathcal{Z}\\ &=I(S,A)-D_{\text{KL}}(p(s,a)||\mu_{\theta,\phi}(s,a))\leq I(S, A).\end{split} \tag{19}\]
The above inequality holds as the KL divergence is always non-negative.
Consider the generalized KL-divergence [10; 8] between two un-normalized distributions \(\tilde{p}(x)\) and \(\tilde{q}(x)\) defined by
\[D_{\text{GKL}}(\tilde{p}(x)||\tilde{q}(x))=\int\tilde{p}(x)\log\frac{\tilde{p}( x)}{\tilde{q}(x)}-\tilde{p}(x)+\tilde{q}(x)dx, \tag{20}\]
which is always non-negative and reduces to KL divergence when \(\tilde{p}\) and \(\tilde{q}\) are normalized. Let \(\tilde{\mu}_{\theta,\phi}(a|s)\triangleq\pi_{\theta}(a|s)e^{T_{\phi}(s,a)-1}\) denote an un-normalized policy. We can rewrite \(\mathcal{I}_{\text{MISA-}f}\) as
\[\begin{split}\mathcal{I}_{\text{MISA-}f}&\triangleq \mathbb{E}_{p(s,a)}\left[\log\frac{\pi_{\theta}(a|s)}{p(a)}\right]+\mathbb{E}_{ p(s,a)}\left[T_{\phi}(s,a)\right]-\mathbb{E}_{p(s)\pi_{\theta}(a|s)}\left[e^{T_{ \phi}(s,a)-1}\right]\\ &=\mathbb{E}_{p(s,a)}\left[\log\frac{p(a|s)}{p(a)}\right]-\mathbb{ E}_{p(s,a)}[\log p(a|s)]\\ &\quad+\mathbb{E}_{p(s,a)}[\log\pi_{\theta}(a|s)]+\mathbb{E}_{p(s,a)}\left[T_{\phi}(s,a)-1\right]+1-\mathbb{E}_{p(s)\pi_{\theta}(a|s)}\left[e^{T _{\phi}(s,a)-1}\right]\\ &=I(S,A)-\mathbb{E}_{p(s)}\left[D_{\text{GKL}}(p(a|s)||\tilde{ \mu}_{\theta,\phi}(a|s))\right]\leq I(S,A).\end{split} \tag{21}\]So far, we have proven that \(\mathcal{I}_{\text{MISA}}\), \(\mathcal{I}_{\text{MISA-DV}}\) and \(\mathcal{I}_{\text{MISA-}f}\) mutual information lower bounds. Then we are going to prove their relations by starting from the relation between \(\mathcal{I}_{\text{MISA}}\) and \(\mathcal{I}_{\text{MISA-DV}}\).
\[\begin{split}\mathcal{I}_{\text{MISA}}-\mathcal{I}_{\text{MISA- DV}}&=D_{\text{KL}}(p(s,a)||\mu_{\theta,\phi}(s,a))-\mathbb{E}_{p(s)}\left[D_{ \text{KL}}(p(a|s)||\mu_{\theta,\phi}(a|s))\right]\\ &=\mathbb{E}_{p(s)}\mathbb{E}_{p(a|s)}\left[\log\frac{p(s,a)}{p(a|s )}-\log\frac{\mu_{\theta,\phi}(s,a)}{\mu_{\theta,\phi}(a|s)}\right]\\ &=\mathbb{E}_{p(s)}\mathbb{E}_{p(a|s)}\left[\log p(s)-\log\frac{1 }{\mathcal{Z}}p(s)\mathcal{Z}(s)\right]\\ &=\mathbb{E}_{p(s)}\left[\log p(s)-\log\frac{1}{\mathcal{Z}}p(s) \mathcal{Z}(s)\right]\\ &=D_{\text{KL}}\left(p(s)||\frac{1}{\mathcal{Z}}p(s)\mathcal{Z}(s )\right)\geq 0,\end{split} \tag{22}\]
where \(\frac{1}{\mathcal{Z}}p(s)\mathcal{Z}(s)\) is a self-normalized distribution as \(\mathcal{Z}=\mathbb{E}_{p(s)[\mathcal{Z}(s)]}\). Therefore, we have \(\mathcal{I}_{\text{MISA}}\geq\mathcal{I}_{\text{MISA-DV}}\).
Similarly, the relation between \(\mathcal{I}_{\text{MISA-DV}}\) and \(\mathcal{I}_{\text{MISA-}f}\) is given by:
\[\begin{split}\mathcal{I}_{\text{MISA-DV}}-\mathcal{I}_{\text{MISA -}f}&=\mathbb{E}_{p(s)}\left[D_{\text{GKL}}(p(a|s)||\tilde{\mu}_{ \theta,\phi}(a|s))\right]-D_{\text{KL}}(p(s,a)||\mu_{\theta,\phi}(s,a))\\ &=\mathbb{E}_{p(s)}\mathbb{E}_{p(a|s)}\left[\log\frac{p(a|s)}{p(s, a)}-\log\frac{\tilde{\mu}_{\theta,\phi}(a|s)}{\mu_{\theta,\phi}(s,a)}\right]-1+ \mathbb{E}_{p(s)}\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{T_{\phi}(s,a)-1}\right] \\ &=\mathbb{E}_{p(s)}\mathbb{E}_{p(a|s)}\left[-\log p(s)-\log\frac{ \tilde{\mu}_{\theta,\phi}(a|s)}{\mu_{\theta,\phi}(s,a)}\right]-1+\mathbb{E}_{p( s)}\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{T_{\phi}(s,a)-1}\right]\\ &=\mathbb{E}_{p(s)}\mathbb{E}_{p(a|s)}\left[\log\frac{\mu_{\theta, \phi}(s,a)}{p(s)\tilde{\mu}_{\theta,\phi}(a|s)}\right]-\mathbb{E}_{\mu_{\theta,\phi}(s,a)}[1]+\mathbb{E}_{p(s)}\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{T_{ \phi}(s,a)-1}\right]\\ &=\mathbb{E}_{p(s,a)}\left[\log\frac{e}{\mathcal{Z}}\right]- \mathbb{E}_{\mu_{\theta,\phi}(s,a)}[1]+\mathbb{E}_{p(s)}\mathbb{E}_{\pi_{ \theta}(a|s)}\left[e^{T_{\phi}(s,a)-1}\right]\\ &=\mathbb{E}_{\mu_{\theta,\phi}(s,a)}\left[\log\frac{e}{\mathcal{ Z}}\right]-\mathbb{E}_{\mu_{\theta,\phi}(s,a)}[1]+\mathbb{E}_{p(s)}\mathbb{E}_{\pi_{ \theta}(a|s)}\left[e^{T_{\phi}(s,a)-1}\right]\\ &=\mathbb{E}_{\mu_{\theta,\phi}(s,a)}\left[\log\frac{\mu_{\theta,\phi}(s,a)}{p(s)\tilde{\mu}_{\theta,\phi}(a|s)}\right]-\mathbb{E}_{\mu_{\theta,\phi}(s,a)}[1]+\mathbb{E}_{p(s)}\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{T_{ \phi}(s,a)-1}\right]\\ &=D_{\text{GKL}}\left(\mu_{\theta,\phi}(s,a)||p(s)\tilde{\mu}_{ \theta,\phi}(a|s)\right)\geq 0,\end{split} \tag{23}\]
where \(p(s)\tilde{\mu}_{\theta,\phi}(a|s)\) is an unnormalized joint distribution. Therefore, we have \(I(S,A)\geq\mathcal{I}_{\text{MISA}}\geq\mathcal{I}_{\text{MISA-DV}}\geq \mathcal{I}_{\text{MISA-}f}\).
### Derivation of MISA Gradients
We detail how the unbiased gradient is derived in Sec.4.3.
\[\frac{\partial\mathcal{I}_{\text{MISA}}}{\partial\theta} =\mathbb{E}_{s,a\sim D}\left[\frac{\log\pi_{\theta}(a\mid s)}{ \partial\theta}\right]-\mathbb{E}_{s\sim D}\left[\frac{\partial\log\mathbb{E }_{\pi_{\theta}(a|s)}[e^{Q_{\phi}(s,a)}]}{\partial\theta}\right]\] \[=\mathbb{E}_{s,a\sim D}\left[\frac{\log\pi_{\theta}(a\mid s)}{ \partial\theta}\right]-\mathbb{E}_{s\sim D}\left[\mathbb{E}_{\pi_{\theta}(a |s)}\left[\frac{e^{Q_{\phi}(s,a)}}{\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{Q_{ \phi}(s,a)}\right]}\frac{\log\pi_{\theta}(a\mid s)}{\partial\theta}\right]\right] \tag{24}\] \[=\mathbb{E}_{s,a\sim D}\left[\frac{\log\pi_{\theta}(a\mid s)}{ \partial\theta}\right]-\mathbb{E}_{s\sim D,a\sim p_{\theta,\phi}(a|s)}\left[ \frac{\log\pi_{\theta}(a\mid s)}{\partial\theta}\right] \tag{25}\]
for Eqn. 24, we use the log-derivative trick.
## Appendix B Implementation Details
We follow the network architectures of CQL [25] and IQL [24], where a neural network of 3 encoding layers of size 256 is used for antmaze-v0 environments, and 2 encoding layers for other tasks, followedby an output layer. We use ELU activation function [11] and SAC [17] as the base RL algorithm. Besides, we use a learning rate of \(1\times 10^{-4}\) for both the policy network and Q-value network with a cosine learning rate scheduler. When approximating \(\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{T_{\psi}(s,a)}\right]\), we use 50 Monte-Carlo samples. To sample from the non-parametric distribution \(p_{\theta,\phi}(a\mid s)=\frac{\pi_{\theta}(a|s)e^{Q_{\phi}(s,a)}}{\mathbb{E}_{ \pi_{\theta}(a|s)}\left[e^{Q_{\phi}(s,a)}\right]}\), we use Hamiltonian Monte Carlo algorithm. In addition, for unbiased gradient estimation with MCMC samples, we use a burn-in steps of 5. For all tasks, we average the mean returns over 10 evaluation trajectories and 5 random seeds. In particular, following [24], we evaluate the antmaze-v0 environments for 100 episodes instead. To stabilize the training of our agents in antmaze-v0 environments, we follow [25] and normalize the reward by \(r^{\prime}=(r-0.5)*4\). As MCMC sampling is slow, we trade-off its accuracy with efficiency by choosing moderately small iteration configurations. We set the MCMC burn-in steps to 5, number of leapfrog steps to 2, and MCMC step size to 1.
For practical implementations, we follow the CQL-Lagrange [25] implementation by constraining the Q-value update by a "budget" variable \(\tau\) and rewrite Eqn. 12 as
\[\min_{Q}\max_{\gamma_{1}\geq 0}\gamma_{1}(\mathbb{E}_{\approx\sim\mathcal{D}} \left[\log\mathbb{E}_{\pi_{\theta}(a|s)}\left[e^{Q_{\phi}(s,a)}\right]\right]- \mathbb{E}_{s,a\sim\mathcal{D}}\left[Q_{\phi}(s,a)\right]-\tau)-J_{Q}^{\text{B }}(\phi). \tag{26}\]
Eqn. 26 implies that if the expected value of Q-value difference is less than the threshold \(\tau\), \(\gamma_{1}\) will adjust to close to 0; if the Q-value difference is higher than the threshold \(\tau\), \(\gamma_{1}\) will be larger and penalize Q-values harder. We set \(\tau=10\) for antmaze-v0 environments and \(\tau=3\) for adroit-v0 and kitchen-v0 environments. For gym-locomotion-v2 tasks, we disable this function and direction optimize Eqn. 12, because these tasks have a relatively short horizon and dense reward, and further constraining the Q values is less necessary. Our code is implemented in JAX [7] with Flax [19]. All experiments are conducted on NVIDIA 3090 GPUs. | ## Review
### Summary
This paper introduces a novel offline reinforcement learning (RL) method called MISA, which combines the KL regularized and conservative Q-learning methods by incorporating mutual information regularization in both the value and policy loss functions. The authors provide a theoretical basis for MISA's superior performance and empirically demonstrate its effectiveness across a variety of environments, achieving state-of-the-art results on the D4RL benchmark. The paper is well-structured, with extensive experimental evaluations that highlight the advantages of the MISA framework in addressing the distribution shift problem in offline RL.
### Strengths
- The integration of KL regularized method and conservative Q-learning represents a significant novelty in offline RL.
- The paper provides a clear theoretical explanation and extensive empirical results, demonstrating MISA's state-of-the-art performance.
- The experiments are thorough, covering a wide range of environments and providing informative ablation studies.
- Practical implementation details and hyperparameter choices are clearly discussed.
### Weaknesses
- There are some confusions in the theoretical derivation regarding the true Q function and its estimated counterpart.
- The paper lacks error bars in experimental results, making it difficult to assess the significance of improvements.
- Some discussions on the advantages of MISA over prior methods are vague and lack clarity.
- The novelty of the proposed method may not be as pronounced due to its close relation to established methods.
### Questions
- Can the authors clarify the differences in performance between BA, MISA-f, and MISA-DV in the MuJoCo medium-replay environments?
- Could the authors elaborate on what is meant by 'unconstrained' in the context of policy improvements?
- Are the hyperparameters in Equations (12) and (13) specified or need tuning for each environment?
- Have the authors considered online fine-tuning after applying MISA, and could they provide results for that?
### Soundness
**Score:** 3
**Description:** 3 = good. The theoretical framework is generally sound but contains some errors and confusions that need addressing.
### Presentation
**Score:** 3
**Description:** 3 = good. The paper is clear and well-organized, but some sections could benefit from more clarity and precision.
### Contribution
**Score:** 3
**Description:** 3 = good. The paper introduces a novel approach with significant experimental results, though some aspects of its novelty are less pronounced.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements. The paper is technically solid with high impact potential, though there are some areas that require clarification and refinement.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a significant advancement in offline RL with clear contributions, thorough evaluations, and a solid theoretical foundation. While there are some issues regarding clarity and certain theoretical aspects, the overall quality and potential impact of the work justify acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Evolving Connectivity for Recurrent Spiking Neural Networks
Guan Wang\({}^{1,2}\), Yuhao Sun\({}^{2,3}\), Sijie Cheng\({}^{1,4}\), Sen Song\({}^{2,3}\)
\({}^{1}\)Department of Computer Science and Technology, Tsinghua University
\({}^{2}\)Laboratory of Brain and Intelligence, Tsinghua University
\({}^{3}\)Department of Biomedical Engineering, Tsinghua University
\({}^{4}\)Institute for AI Industry Research (AIR), Tsinghua University
[email protected], {syh18,csj23}@mails.tsinghua.edu.cn, [email protected]
Equal contributionCorresponding author
###### Abstract
Recurrent spiking neural networks (RSNNs) hold great potential for advancing artificial general intelligence, as they draw inspiration from the biological nervous system and show promise in modeling complex dynamics. However, the widely-used surrogate gradient-based training methods for RSNNs are inherently inaccurate and unfriendly to neuromorphic hardware. To address these limitations, we propose the evolving connectivity (EC) framework, an inference-only method for training RSNNs. The EC framework reformulates weight-tuning as a search into parameterized connection probability distributions, and employs Natural Evolution Strategies (NES) for optimizing these distributions. Our EC framework circumvents the need for gradients and features hardware-friendly characteristics, including sparse boolean connections and high scalability. We evaluate EC on a series of standard robotic locomotion tasks, where it achieves comparable performance with deep neural networks and outperforms gradient-trained RSNNs, even solving the complex 17-DoF humanoid task. Additionally, the EC framework demonstrates a two to three fold speedup in efficiency compared to directly evolving parameters. By providing a performant and hardware-friendly alternative, the EC framework lays the groundwork for further energy-efficient applications of RSNNs and advances the development of neuromorphic devices. Our code is publicly available at [https://github.com/imoneoi/EvolvingConnectivity](https://github.com/imoneoi/EvolvingConnectivity).
## 1 Introduction
Inspired by the remarkable information processing power of the human brain, neuromorphic computing aims to design and implement computational systems that mimic the architecture and functionality of biological neural networks, paving the way for a new era of intelligent machines. Specifically, the highly recurrently connected and spike-emitting brain network inspired recurrent spiking neural networks (RSNNs), which employ discrete, spike-based signals for transmitting information through recurrent connections in an event-driven manner. RSNNs can serve as realistic models of the brain incorporating the latest anatomical and neuro-physiological data (Billeh et al., 2020), leading to the discovery of general principles of computing, e.g., noise robustness (Chen et al., 2022). Furthermore, RSNNs can exhibit self-organized critical properties (Poil et al., 2012) and serve as a reservoir for complex temporal dynamics (Jaeger, 2001; Maass et al., 2002), making them suitable for sequential tasks like robotics (Lee et al., 2020), path-finding (Rueckert et al., 2016), and others.
Despite the importance of RSNNs, developing end-to-end training algorithms remains a challenge. Inspired by the success of deep learning, a line of research [Wu et al., 2018, Shrestha and Orchard, 2018, Bauer et al., 2022] uses error-backpropagation [Rumelhart et al., 1985] with carefully chosen surrogate gradients to address the non-differentiability problem and achieve performance comparable to deep learning. However, incorporating surrogate gradients into RSNNs introduces two concerns. Algorithmically, the surrogate gradient leads to inherent inaccuracy in the descent direction [Li et al., 2021] and sensitivity to function scale selection [Zenke and Vogels, 2021]. At the implementation level, gradient-based training is incompatible with prominent neuromorphic devices [Pei et al., 2019, Davies et al., 2018, Mayr et al., 2019] due to the requirement of accessing the full network state over every timestep [Werbos, 1990]. Consequently, this raises a critical research question: _can we design a training method for RSNNs that bypasses the need for gradients without compromising performance?_
In response to this challenge, we observe that connection probability distribution information from brain connection maps across various species exhibit similarities [Haber et al., 2023] and can be used to construct large-scale neural computing models [Billeh et al., 2020, Schmidt et al., 2018]. Existing studies demonstrate that networks with binary connections alone can achieve high performance, rivaling that of weighted networks [Gaier and Ha, 2019, Frankle and Carbin, 2018, Malach et al., 2020]. Moreover, computing boolean connections relies primarily on integer arithmetic rather than resource-intensive floating-point operations, enabling simpler on-chip implementation and enhanced energy efficiency. Considering these features, we reformulate the architecture of RSNNs, where connections are sampled independently from parametric Bernoulli distributions, and adopt Natural Evolution Strategies (NES) [Wierstra et al., 2014] to optimize this parametric probability distribution, providing a scalable, inference-only, and effective algorithm for tuning parameters.
In this paper, we introduce the evolving connectivity (EC) framework for training RSNNs. The EC framework reformulates RSNNs as boolean connections with homogeneous weights and employs NES to evolve the connection probabilities of the network. To evaluate effectiveness and efficiency, we conduct extensive experiments on locomotion tasks, a set of widely-used sequential decision-making tasks [Brockman et al., 2016, Freeman et al., 2021]. Experimental results show that our proposed EC method achieves performance comparable to deep recurrent neural networks, surpassing the asymptotic performance of surrogate gradients. Despite using a GPU, which is not specifically optimized for our framework, the EC method yields a speed improvement of \(2\sim 3\times\) compared to directly evolving parameters using Evolution Strategies [Salimans et al., 2017] and exhibits better efficiency than surrogate gradients. Additionally, our EC method can effectively train deep recurrent neural networks (RNNs), demonstrating versatility for different architectures and potential for quantized deep neural networks.
Our main contributions are summarized as follows:
* **Novel framework**: We propose a novel inference-only training framework for RSNNs by reformulating weight-tuning as connection probability searching through evolution-based algorithms.
* **High performance**: Our method can solve the complex 17-DoF humanoid locomotion task with RSNN, achieving performance on par with recurrent neural networks and outperforming gradient-trained RSNNs.
* **Hardware-friendly**: By producing RSNNs with sparse 1-bit boolean connections, as well as enabling inference-only training and high scalability, our method is highly compatible with neuromorphic devices. This compatibility holds promising potential for further energy-efficient applications of RSNNs.
## 2 Related Works
### Training Recurrent Spiking Neural Networks
The study of training algorithms for RSNNs endeavors to unravel the mechanism that enables the brain to learn. For decades, biological plasticity rules, especially spike-timing-dependent plasticity (STDP) [Bi and ming Poo, 1998], have been considered as a foundational of RSNNs training [Diehl and Cook, 2015, Kheradpisheh et al., 2018, Mozafari et al., 2019]. Recently, with the success of deep learning, surrogate gradient-based approaches [Wu et al., 2018, Shrestha and Orchard, 2018, Bauer et al., 2022] have been predominately utilized in training RSNNs. Surrogate gradients share conceptual similarities with the straight-through estimator [Bengio et al., 2013] used in deep learning.
Both techniques employ continuous surrogate functions to compute gradients of non-differentiable functions during the backward pass, while preserving the original non-differentiable functions in the forward pass. Slayer (Shrestha and Orchard, 2018) adopted an exponential decaying surrogate function with a spike response model (SRM), while STBP (Wu et al., 2018) incorporated BPTT in multi-layer SNN training and proposed several surrogate function choices. Chen et al. (2022) trained an anatomical and neurophysiological data constrained RSNN and demonstrated its robustness and versatility. However, gradient-based approaches face the new problem of being difficult to implement on neuromorphic devices. To address the limitation, E-prop (Bellec et al., 2020) utilize eligibility trace to compute truncated gradient, becoming an exemplary training algorithm on neuromorphic devices like Loihi2 (Davies et al., 2018) and SpiNNaker2 (Mayr et al., 2019), but have a strict restriction on the temporal relationship of SNN models and still lags behind surrogate gradient methods in performance. Our approach tackles the gradient-effectiveness dilemma from the beginning, by introducing the inference-only training framework.
### Weight-agnostic Neural Networks
Training a neural network typically involves assigning appropriate values to the network's weights. However, the weight-agnostic neural network (WANN) (Gaier and Ha, 2019) has demonstrated that a network's topology can be highly informative, similar to that of a weighted neural network. Additionally, research on the lottery ticket hypothesis (LTH) (Frankle and Carbin, 2018; Zhou et al., 2019; Ramanujan et al., 2020) suggests that an over-parameterized network contains an effective subnetwork, even when using its initial parameter weights. Several theoretical studies have also shown that a sufficiently large, randomly-weighted deep neural network contains a subnetwork capable of approximating any function (Malach et al., 2020; Fischer et al., 2022). Our approach, furthering the existing research on connection-encoded networks, proposes a framework that utilizes connection probability to parameterize the network.
### Deep Neuroevolution
Deep neuroevolution utilizes evolutionary algorithms for training deep neural networks. Natural Evolution Strategies (NES) (Wierstra et al., 2014) have laid the foundation for gradient estimation in this domain. A well-known variant, Evolution Strategies (ES) (Salimans et al., 2017), has been employed for training deep neural networks in sequential decision-making tasks. ES optimizes a single set of continuous network weights by applying Gaussian perturbations and updating the weights using the NES gradient estimator. Following the success of ES, numerous evolutionary algorithms have been proposed for training deep neural networks. For example, Such et al. (2018) showed that deep neural networks could be effectively trained using simple genetic algorithms that mutate weights. Furthermore, Conti et al. (2018) integrated exploration and novelty-seeking into ES to overcome local minima. While the majority of prior research has primarily focused on continuous parameters, our proposed framework presents a novel approach by concentrating on the search for connection probability distributions. This shift in perspective offers new possibilities for evolving recurrent spiking neural networks in a hardware-friendly manner.
## 3 Preliminaries: Recurrent Spiking Neural Networks
RSNN is a class of spiking neural networks which incorporate feedback connections. In this paper, we adopt a typical RSNN architecture from the reservoir network (Taeger, 2001; Maass et al., 2002) for sequential tasks. It is worth noting that, despite that we adopt a specific RSNN model as an example, our framework can be broadly applied to search for connectivity distributions in any type of RSNN, as it does not depend on network-specific assumptions and only relies on evaluating the network with a set of parameters.
According to Dale's law, our network consists of an excitatory neuron group and an inhibitory neuron group. Each neuron is modeled as a leaky integrate and fire (LIF) neuron. A neuron will fire a spike when its membrane potential \(u\) exceeds the threshold, and the membrane potential will hard-reset to \(0\). More specifically, our model defines the dynamics of membrane potential \(u\) and synaptic current \(c\) as follows:
\[\tau_{m}\frac{\mathrm{d}\mathbf{u}^{(g)}}{\mathrm{d}t}=-\mathbf{u}^{(g)}+R \mathbf{c}^{(g)} \tag{1}\]where \(g=\{Exc,Inh\}\) denotes the excitatory and inhibitory group, respectively, and \(R\) denotes the resistance.
The current input was modeled using exponential synapses to retain information for a short duration, commonly adopted in robotic tasks (Tang et al., 2020; Naya et al., 2021).
\[\frac{\text{d}\textbf{e}^{(g)}}{\text{d}t}=-\frac{\textbf{c}^{(g)}}{\tau_{syn} }+\sum_{g_{j}}I_{g_{j}}\sum_{j}\textbf{W}_{ij}^{(g_{i}g_{j})}\delta(t-t_{j}^{ s(g_{j})})+\textbf{I}_{ext} \tag{2}\]
where \(t_{j}^{s(g_{j})}\) denotes the spike time of neuron \(j\) in neuron group \(g_{j}\), \(\delta\) denotes Dirac delta function. \(I_{g}\) was set to define the connection strength of excitatory and inhibitory synapse respectively, where \(I_{Exc}>0\) and \(I_{Inh}<0\). Weight \(\textbf{W}^{(g_{i}g_{j})}\) was defined as a matrix with non-negative elements, connecting group \(g_{j}\) to group \(g_{i}\). Besides, the external input signal \(\textbf{I}_{ext}\) is extracted using linear projection of observation **x**.
Discretizing the LIF differential equations (Eq. 1 and 2) with \(\Delta t\) as a time-step, we could obtain the following difference equation for our RSNN model:
\[\textbf{c}^{(t,g)} =d_{c}\textbf{c}^{(t-1,g)}+\sum_{g_{j}}I_{g_{j}}\textbf{W}^{(g_{i }g_{j})}\textbf{s}^{(t-1,g_{j})}+\textbf{I}_{ext}^{(t,g)} \tag{3}\] \[\textbf{v}^{(t,g)} =d_{v}\textbf{u}^{(t-1,g)}+R\textbf{c}^{(t,g)}\] (4) \[\textbf{s}^{(t,g)} =\textbf{v}^{(t,g)}>\textbf{1}\] (5) \[\textbf{u}^{(t,g)} =\textbf{v}^{(t,g)}(\textbf{1}-\textbf{s}^{(t,g)}) \tag{6}\]
where \(d_{c}=e^{-\frac{\Delta t}{\tau_{syn}}}\) and \(d_{v}=e^{-\frac{\Delta t}{\tau_{m}}}\) are two constant parameters.
The output vector **o** is extracted using linear projection of neuron firing rates in a short time period \(\tau\), i.e.,
\[\textbf{o}^{(t)}=\sum_{\tau}k(\tau)\sum_{g}\textbf{W}_{out}^{(g)}\textbf{s}^{ (t-\tau,g)} \tag{7}\]
where \(\textbf{W}_{out}^{(g)}\) denotes the output weight, and \(k\) is a function averaging the time period \(\tau\).
## 4 Framework
In this section, we present our proposed Evolving Connectivity (EC) framework, which is depicted in Fig. 1. Our approach consists of three main steps: (1) reformulating the neural network architecture from weight-based parameterization to connection probability distribution, (2) employing the Natural Evolution Strategies (NES) method to optimize the reformulated parameter space, and (3) deterministically extracting the final parameters from the distribution.
Reformulation.A sparse connected weight matrix can be described into a weight matrix **w** and a connection mask \(\mathbf{\theta}\). Traditionally, we can use Erdos-Renyi random matrix to describe the connection
Figure 1: Architecture of evolving connectivity (EC). The connectivity \(\mathbf{\theta}_{k}\) of the population is sampled from the global distribution \(B(\mathbf{\rho})\) and then evaluated in parallel. The RSNN consists of excitatory and inhibitory neurons, simulated using 1-bit firing calculations and the LIF model.
mask, in which connections are independently drawn from a Bernoulli distribution, i.e.,
\[\mathbf{W}_{ij}=\mathbf{w}_{ij}\cdot\mathbf{\theta}_{ij},\mathrm{where}\ \mathbf{\theta}_{ ij}\sim B(\mathbf{\rho}) \tag{8}\]
Inspired by the success of subnetworks in deep neural networks (Ramanujan et al., 2020), we can reformulate this in a connection framework. In this view, we aim to find a connection probability matrix, \(\mathbf{\rho}=(\rho_{ij})\), where each element represents the connection probability between two neurons, and set all weights \(\mathbf{w}_{ij}\) to be unit size, i.e.,
\[\mathbf{W}_{ij}=\mathbf{\theta}_{ij},\mathrm{where}\ \mathbf{\theta}_{ij}\sim B(\mathbf{ \rho}_{ij}). \tag{9}\]
Optimization.We optimize \(\mathbf{\rho}\) to maximize the expected performance metric function \(R(\cdot)\) across individual network samples drawn from the distribution, which can be expressed as the following objective function:
\[\mathbf{\rho}^{*}=\operatorname*{arg\,max}_{\mathbf{\rho}}J(\mathbf{\rho})=\operatorname* {arg\,max}_{\mathbf{\rho}}\mathbb{E}_{\mathbf{\theta}\sim B(\mathbf{\rho})}[R(\mathbf{\theta})] \tag{10}\]
To optimize this objective, we employ Natural Evolution Strategies (NES) (Wierstra et al., 2014). NES provides an unbiased Monte Carlo gradient estimation of the objective \(J(\mathbf{\rho})\), by evaluating the performance metric \(R_{k}\) on multiple individual samples \(\mathbf{\theta}_{k}\) drawn from \(B(\mathbf{\rho})\). Specifically, NES estimates the gradient of \(J(\mathbf{\rho})\) with respect to the parameters of the distribution \(\mathbf{\rho}\) by computing the expectation over samples from \(B(\mathbf{\rho})\):
\[\nabla_{\mathbf{\rho}}J(\mathbf{\rho}) =\mathbb{E}_{\mathbf{\theta}\sim B(\mathbf{\rho})}[\nabla_{\mathbf{\rho}} \log P(\mathbf{\theta}|\mathbf{\rho})R(\mathbf{\theta})] \tag{11}\] \[=\mathbb{E}_{\mathbf{\theta}\sim B(\mathbf{\rho})}[\frac{\mathbf{\theta}-\mathbf{ \rho}}{\mathbf{\rho}(\mathbf{1}-\mathbf{\rho})}R(\mathbf{\theta})]\] (12) \[\approx\frac{1}{N}\sum_{k=1}^{N}\frac{\mathbf{\theta}_{k}-\mathbf{\rho}}{ \mathbf{\rho}(\mathbf{1}-\mathbf{\rho})}R_{k} \tag{13}\]
Thus, we obtained an inference-only estimation of the gradient by simply sampling and evaluating the metric. Then gradient descent can be carried over the estimated gradients, following the NES approach (Wierstra et al., 2014). Furthermore, as in Williams (1992), we scale the step size proportionally to the variance, i.e. \(\alpha=\eta\cdot\mathrm{Var}[B(\mathbf{\rho})]\), and obtain the update rule:
\[\mathbf{\rho}_{t}=\mathbf{\rho}_{t-1}+\alpha\nabla_{\mathbf{\rho}}J(\mathbf{\rho})\approx\mathbf{ \rho}_{t-1}+\frac{\eta}{N}\sum_{k=1}^{N}\left(\mathbf{\theta}_{k}-\mathbf{\rho}\right) R_{k} \tag{14}\]
In addition, all elements of \(\mathbf{\rho}\) clipped in the interval \([\epsilon,1-\epsilon]\), where \(\epsilon\to 0^{+}\), to guarantee a minimal level of exploration within the search space. In our experiment, we set \(\epsilon=0.001\) and initialize \(\mathbf{\rho}=0.5\). Additionally, we adopted a similar fitness shaping trick, center rank transform, as discussed in Salimans et al. (2017), which sort the return of a population, then linearly rescale the sorting indexes into a fixed interval \([-0.5,0.5]\).
Once a sufficient number of updates have been performed, the final parameter \(\mathbf{\theta}\) can be deterministically obtained as the parameter with maximum probability, which is subsequently used for deployment. In the context of a Bernoulli distribution, this process is equivalent to thresholding \(\mathbf{\rho}\) at \(0.5\) to produce a boolean matrix containing only \(\{0,1\}\) values.
## 5 Properties of EC Framework
In this section, we further discuss the important properties of our proposed EC framework in implementation, thanks to leveraging the connection probability distribution.
Inference only.The most significant challenge in neuromorphic devices is the absence of an effective hardware-friendly learning algorithm (Li et al., 2023). Most neuromorphic chips, such as Loihi (Davies et al., 2018), Tianjic (Pei et al., 2019), TrueNorth (Akopyan et al., 2015), SpiNNaker2 (Mayr et al., 2019), lack support for directly calculating error backpropagation, which is crucial for surrogate gradient methods. The inference-only EC framework enables an alternative method for training on these inference chips.
1-bit connections.EC employs 1-bit sparse connections throughout training and deployment, replacing the traditional floating-point weight matrix. Therefore, the 1-bit connections permit the use of more economical integer arithmetic instead of costly floating-point cores. This approach not only accelerates computations on devices like GPUs, but also holds a promise of driving the creation of novel 1-bit connection neuromorphic computing hardware.
Scalability.The inference-only property of the EC framework implies no data dependence between evaluations, which allows for the distribution of sampled parameters \(\mathbf{\theta}_{k}\) across independent workers and the collection of their reported performance measures, making EC highly scalable. Moreover, the random seed for sampling the population can be transmitted across nodes instead of the connections \(\mathbf{\theta}_{k}\) to minimize communication overhead, facilitating scalar-only communication.
## 6 Experiments
### Experimental Setups
Tasks.We focus on three robotic locomotion tasks in our experiments, Humanoid, Walker2d, and Hopper, as they are commonly used for sequential decision-making problems in the reinforcement learning domain (Brockman et al., 2016; Freeman et al., 2021). As illustrated in Fig. 2, these tasks involve controlling robots with varying degrees of freedom (DoF), to perform certain actions that maximize the return within a fixed-length episode \(T\). In the EC framework, the performance metric function can be defined as the expected return over episodes \(R(\theta)=\mathbb{E}_{\tau\sim\pi_{\theta}}[\sum_{t=0}^{T}r_{t}]\), and a single evaluation \(R_{i}\) corresponds to the return of one episode using network parameter \(\theta_{k}\).
RSNN architecture.In our study, we employed a RSNN consisting of an input layer, a hidden recurrent layer with LIF neurons, and an output layer, as delineated in the Preliminaries section. To achieve an equilibrium within the excitatory inhibitory balance, input and hidden layers were evenly divided into excitatory and inhibitory group.
Baselines.To thoroughly evaluate the effectiveness of our framework and its advantages over prior methods, we compare our EC framework with deep RNNs, as well as RSNN trained with Surrogate Gradients (SG) and Evolution Strategies (ES). Detailed network architecture and precision is illustrated in Table 1.
For deep RNNs, we employ widely-used recurrent deep neural networks, specifically long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (GRU) (Cho et al., 2014), trained using Evolution Strategies (ES) (Salimans et al., 2017) as our baselines.
For RSNN, we employ the same structured but excitatory/inhibitory not separated RSNN trained with ES and Surrogate Gradient (SG). ES is directly applied to search the weight matrix \(\mathbf{W}\) of RSNNs, optimizing the performance metric in a gradient-free manner. In contrast, the Surrogate Gradient is combined with Proximal Policy Optimization (PPO) (Schulman et al., 2017), a prominent reinforcement learning method, to optimize the weight \(\mathbf{W}\) for sequential decision-making tasks where returns cannot be differentiated.
Implementation Details.Our EC framework and all baselines are implemented using the JAX library (Bradbury et al., 2018) and just-in-time compiled with the Brax physics simulator (Freeman et al., 2021) for efficient GPU execution. The population is vectorized automatically, and 1-bit firing
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Hidden size** & **\# Params** & **Precision** & **Size (MB)** \\ \hline RSNN & 256 & 193K & 1-bit(EC) / FP32(Others) & 24KB(EC) / 768KB(Others) \\ GRU & 256 & 386K & FP32 & 1544KB \\ LSTM & 128 & 191K & FP32 & 764KB \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of model architectures.
Figure 2: Locomotion tasks illustrated from left to right: Humanoid (17-DoF), Walker2d (6-DoF), and Hopper (3-DoF).
calculations take advantage of INT8 arithmetic for performance optimization. As a result, the training process achieves over \(180,000\) frames per second on a single NVIDIA TITAN RTX GPU. Each experiment's result is averaged over 3 independent seeds, with the standard deviation displayed as a shaded area. For detailed information on hyperparameters and hardware specifications, please refer to Appendix G.
### Performance Evaluation
Firstly, we compare RSNN trained with our EC framework (EC-RSNN) to deep RNNs. To ensure a fair comparison, we utilized the evolution-based training approach ES [Salimans et al., 2017] for deep RNNs, including ES-GRU and ES-LSTM. Typically, SNNs face challenges in surpassing the performance of their DNN counterparts. However, as shown in the experimental results in Fig. 3, our EC-RSNN achieves competitive performance and training efficiency compared to deep RNNs, even surpassing ES-GRU and ES-LSTM on the Walker2d and Hopper tasks. In the complex 17-DoF Humanoid task, our EC-RSNN could also outperform ES-GRU. Moreover, EC-RSNN demonstrates comparable performance to GRU and LSTM trained with PPO, as detailed in Appendix E. It is worth noting that, while GRU and LSTM use densely connected, full precision (32-bit floating-point) weights and hard-coded gating mechanisms, EC-RSNN is constrained on 1-bit sparse weights and adheres to Dale's law for weight signs.
Then, as for the same structured RSNNs, our EC-RSNN outperforms ES-RSNN, especially significantly surpassing ES-RSNN on complex Humanoid and Walker2d tasks. We highlight two aspects that potentially account for the superior performance of our EC framework. On the one hand, schema theory [Whitley, 1994] suggests that selecting over a population of binary strings could combine partial binary string patterns (schemata) to implicitly work on many solutions without evaluating them explicitly, leading to massive parallelism. EC leverages the discrete \(\{0,1\}\)-string space, which may enable more efficient optimization due to implicit parallelism. On the other hand, ES uses fixed-scale noise to perturb a set of deterministic parameters, which may often fall in wide areas in the landscape of the objective function and fail to proceed on sharp maxima [Lehman et al., 2018]. In contrast, EC operates over a probabilistic distribution, providing the flexibility to adjust variance on sensitive parameters and achieving more fine-grained optimization.
Finally, gradient updates cannot be directly converted into evolution generations, thus we only compare the final converged return for surrogate gradient trained RSNNs. Considering that the surrogate gradient approach is sensitive to the selection of the surrogate function and its parameters [Zenke and Vogels, 2021], we choose three commonly-used surrogate functions with different \(\beta\) and \(\gamma\) parameters to thoroughly validate the performance: SuperSpike [Zenke and Ganguli, 2018], the piecewise linear function proposed by Esser et al. [2016], and the derivative of the Sigmoid function [Zenke and Ganguli, 2018]. As shown in Fig. 4, the performance of our EC-RSNNs is consistently higher than SG-RSNNs across all surrogate functions and hyper-parameters on the complex 17-DoF Humanoid task. We adopt the best parameter set of the piecewise linear surrogate function, with \(\gamma=0.8\), \(\beta=2\) as the baseline in Fig. 3.
Figure 3: Performance evaluation on locomotion tasks. SG-RSNN is plotted based on the final return, as gradient-based updates are not directly comparable to generations in evolution-based methods. The proposed Evolving Connectivity (EC) framework effectively solves the 17-DoF Humanoid locomotion task, demonstrating competitive performance with deep RNNs and outperforming RSNNs trained using both Surrogate Gradient and Evolution Strategies across all tasks.
One possible reason to explain the performance of SG-RSNN is that due to RSNNs having both explicit and implicit recurrence, the choice of surrogate function may determine whether gradients vanish or explode (Zenke and Vogels, 2021). Moreover, the gradient approximation of SG is inaccurate and may deviate from the steepest descent direction (Li et al., 2021). In contrast, our EC framework leverages unbiased Monte-Carlo gradient estimation, which approaches the optimal direction given that the population is sufficiently large (Zhang et al., 2017).
### Efficiency Comparison
In this section, we compare the computational efficiency of our proposed EC framework with other baselines, as shown in Fig. 5. To ensure a fair comparison, we evaluate the EC and baseline methods using the same wall-clock computation time and implement them on identical hardware.
As expected, EC-RSNN exhibits slower training than deep RNNs due to the higher complexity of RSNNs, which necessitates simulating multiple timesteps to compute firing and integrate differential equations. Additionally, the deep RNN models employed in this study, specifically LSTM and GRU, are well-established and highly optimized for GPU implementation. Nevertheless, within the same computation time, the performance achieved by EC-RSNN remains competitive with deep RNNs.
Furthermore, we execute both ES and EC for 1,000 generations using the same population size and RSNN architecture. Experimental results in Fig. 5 have shown that our 1-bit EC-RSNN achieves a speedup of approximately \(2\sim 3\times\) over 32-bit floating-point ES-RSNN. This finding emphasizes the value of 1-bit connections within the proposed framework. Utilizing integer arithmetic typically results in higher throughput than floating-point arithmetic across most accelerators, and incorporating smaller data types reduces both memory requirements and memory access time. It is also worth mentioning that the 1-bit connections are implemented using INT8 on GPU, which is not fully optimized. By implementing the framework on hardware supporting smaller data types, connection size could be further reduced by up to \(8\times\), enabling more significant acceleration and cost reduction.
Additionally, our EC-RSNN demonstrates faster convergence compared to SG-RSNN. This can be attributed to the fact that EC-RSNN is an inference-only method with 1-bit connections, requiring
Figure 4: Surrogate Gradient on the Humanoid task, with \(\gamma=0.8\) and several surrogate functions, \(\beta\) parameters. For \(\gamma=1.0\) results, please refer to Appendix F. Performance is sensitive to the chosen surrogate function and its parameters. In this context, EC surpass all considered parameters of SG.
Figure 5: Efficiency comparison in locomotion tasks. Evolution-based approaches (EC, ES) are executed for 1,000 generations, while surrogate gradient (SG) runs for 250,000 gradient steps to attain a comparable run-time to EC, providing a fair comparison. When training RSNNs, EC attains a \(2\sim 3\times\) speedup over ES and demonstrates faster convergence than SG.
only a single forward pass using integer arithmetic, while SG-RSNN demands both a forward and a backward pass employing computationally-intensive floating-point operations.
Finally, we calculated the estimated energy consumption of the EC-RSNN on both GPU and neuromorphic devices, aiming to assess the power efficiency of the EC to train RSNN. In our study, all experiments were consistently conducted on the same NVIDIA Titan RTX GPU, operating at a stable 100% GPU power (280W). Consequently, the energy consumption of the EC training process on GPU is approximated to be directly proportional to the computation wall time, resulting in a 9 MJ. For neuromorphic device, we have conducted computations pertaining to the estimated energy consumption of EC-RSNN when implemented on the Lohii chip (Davies et al., 2018). As detailed in Appendix D, the estimated energy consumption of EC is 28 kJ, which indicate a noticeable reduction in power consumption by more than two orders of magnitude.
### General 1-bit Framework
The task of discretizing a neural network into a 1-bit representation presents a considerable challenge, particularly when employing continuous training techniques such as ES and SGD. To illustrate this difficulty, we attempted to discretize both ES and SG for training 1-bit connection RSNNs utilizing the straight-through estimator (Bengio et al., 2013). Fig. 7 (a) showed that while both ES-RSNN (1bit) and SG-RSNN (1bit) exhibited learning progress, they were surpassed by EC-RSNN (1bit). Fig. 7 (b) and (c) suggest that ES and SG demonstrated superior performance on continuous FP32 weights as opposed to discrete 1-bit representations. These findings imply that continuous optimization methods excel when applied to continuous parameters; conversely, for 1-bit discrete connections, EC is recommended to its specific design for discrete 1-bit optimization and the provision of unbiased gradients. A comprehensive description of the discretization for ES and SG methods is provided in Appendix C.
Moreover, Despite EC have demonstrated the efficacy on RSNNs, it serves as a versatile 1-bit training framework to non-spiking networks as well. To substantiate EC's potential in training deep RNN, we conducted a series of experiments employing a standard RNN model trained with EC, while using ES and PPO as baselines. The RNN has 256 tanh units in the hidden layer. For 1-bit EC, the weight magnitudes are 0-1 connection matrix, and the weight signs are separated to excitatory or inhibitory. The results in Fig. 6 demonstrate that EC can effectively train 1-bit deep recurrent neural networks and has the potential for different architectures and quantized neural networks.
## 7 Conclusion
In this study, we present the innovative Evolving Connectivity (EC) framework, a unique inference-only approach for training RSNN with 1-bit connections. The key attributes of the EC framework,
Figure 6: Performance comparison of vanilla RNN trained with EC, ES, PPO. EC surpasses baselines despite using 1-bit connections.
Figure 7: Optimization of the 1-bit RSNN proposed in this study using ES and SG. (a) In the discrete 1-bit RSNN settings, EC significantly outperforms ES and SG. (b) (c) Comparison of FP32 and 1-bit precision for ES and SG respectively, indicating that these continuous optimization methods achieve optimal results with FP32 continuous weights.
such as its inference-only nature and scalability, render it particularly suitable for training RSNNs on neuromorphic devices. The use of 1-bit connections significantly reduces memory requirements and computational cost, facilitating faster training on GPUs and paving the way for additional cost reductions in neuromorphic chip production.
We performed extensive experiments on a variety of intricate locomotion tasks, showcasing the competitive performance of our proposed EC framework in comparison to deep RNNs like LSTM and GRU. Moreover, the EC framework outperforms other RSNN training methods such as Evolution Strategies and Surrogate Gradient, both in terms of performance and computational efficiency.
## 8 Limitations
Evolutionary algorithms demand the storage of \(N\) distinct parameter sets for population evaluation, leading to a space complexity of O(\(N|\theta|\)). In contrast, gradient-based approaches utilize a single parameter set but necessitate the storage of intermediate results at each timestep, resulting in a space complexity of O(\(NHS+|\theta|\)), where \(H\) corresponds to the number of BPTT timesteps, and \(S\) represents the size of the intermediate results. This situation gives rise to a trade-off: evolutionary methods offer greater memory efficiency for tasks featuring long time horizons, whereas gradient-based techniques with larger parameter sizes and shorter time horizons require less memory. Recognizing this trade-off is crucial in practical applications. Although our proposed EC framework, as an evolutionary algorithm, retains the same space complexity and trade-off, it can achieve a constant term reduction by storing 1-bit connections.
## 9 Discussions
**Neuromorphic hardware.** One primary bottleneck in the development of neuromorphic hardware is the lack of effective on-chip learning algorithms to build applications compared to deep learning approaches. This paper proposes a novel EC framework that circumvents the requirement for gradients and demonstrates efficacy on locomotion tasks, providing a potential solution for solving the on-chip learning challenge. Typically, the demand for computing devices falls into two categories, including cloud and edge. In the cloud, our proposed EC framework supports large-scale learning, while at the edge, it enables energy-efficient applications. Therefore, our framework offers a potential approach to further building neuromorphic applications.
Another practical challenge lies in the trade-off between numeric precision and cost. We suggest that it is possible to drastically reduce connections from floating point to 1-bit, providing a novel design principle for the next generation of neuromorphic hardware. Therefore, our method holds the potential to significantly decrease the manufacturing and energy costs of neuromorphic hardware.
**Neuroscience.** Our EC framework introduces a novel method for neuroscience research. First, as opposed to toy tasks or simulated signals, which are often studied in previous neuromodeling work, we employed RSNN in a complex, real-world like locomotion task. Additionally, controlling signals in neuroscience like 'GO' and 'NOGO' signals in the basal ganglia can be easily integrated into the task by concatenating them to the environment observation vector, enabling the creation of novel neuroscientific tasks. Our work lays the foundation for further investigation of decision-making processes and motor control.
Moreover, our framework provides a novel type of data to analyze: neuron-to-neuron level connection probability. Connection probability is one of the most fundamental properties in brain-wide connectomes. However, obtaining neuron-to-neuron connection probability from the whole mammalian brain is experimentally implausible due to limitations in nowadays neuroconnectomic technology. Our work provides in silico connection data for further analysis, including covariances, motifs, clustered engrams, and dimensional properties.
Finally, our framework is capable of incorporating neuroanatomical and neurophysiological data to construct a novel neurosimulation model, since our framework is able to train arbitrary models in principle. Neuroscientists have discovered that connection probability between two neurons is determined by various factors, including spatial distance, receptive field, and neuron types. Our framework can leverage these findings and conduct in silico experiments with data-driven models.
## References
* Akopyan et al. (2015) F. Akopyan, J. Sawada, A. S. Cassidy, R. Alvarez-Icaza, J. V. Arthur, P. Merolla, N. Imam, Y. Nakamura, P. Datta, G.-J. Nam, B. Taba, M. P. Beakes, B. Brezzo, J. B. Kuang, R. Manohar, W. P. Risk, B. L. Jackson, and D. S. Modha. Truenorth: Design and tool flow of a 65 mw 1 million neuron programmable neurosynaptic chip. _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_, 34:1537-1557, 2015.
* Bauer et al. (2022) F. C. Bauer, G. Lenz, S. Haghighatshoar, and S. Sheik. Exodus: Stable and efficient training of spiking neural networks. _arXiv preprint arXiv:2205.10242_, 2022.
* Bellec et al. (2020) G. Bellec, F. Scherr, A. Subramoney, E. Hajek, D. Salaj, R. Legenstein, and W. Maass. A solution to the learning dilemma for recurrent networks of spiking neurons. _Nature communications_, 11(1):3625, 2020.
* Bengio et al. (2013) Y. Bengio, N. Leonard, and A. C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. _CoRR_, abs/1308.3432, 2013. URL [http://arxiv.org/abs/1308.3432](http://arxiv.org/abs/1308.3432).
* 10472, 1998.
* Billeh et al. (2020) Y. N. Billeh, B. Cai, S. L. Gratiy, K. Dai, R. Iyer, N. W. Gouwens, R. Abbasi-Asl, X. Jia, J. H. Siegle, S. R. Olsen, et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. _Neuron_, 106(3):388-403, 2020.
* Bradbury et al. (2018) J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL [http://github.com/google/jax](http://github.com/google/jax).
* Brockman et al. (2016) G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym, 2016.
* Chen et al. (2022) G. Chen, F. Scherr, and W. Maass. A data-based large-scale model for primary visual cortex enables brain-like robust and versatile visual processing. _Science Advances_, 8, 2022.
* Cho et al. (2014) K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation, 2014.
* Conti et al. (2018) E. Conti, V. Madhavan, F. P. Such, J. Lehman, K. O. Stanley, and J. Clune. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents, 2018.
* Davies et al. (2018) M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, et al. Loihi: A neuromorphic manycore processor with on-chip learning. _Ieee Micro_, 38(1):82-99, 2018.
* Diehl and Cook (2015) P. U. Diehl and M. Cook. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. _Frontiers in computational neuroscience_, 9:99, 2015.
* Ding et al. (2022) J. Ding, J. Zhang, Z. Yu, and T. Huang. Accelerating training of deep spiking neural networks with parameter initialization, 2022. URL [https://openreview.net/forum?id=T8BnDXDTcFZ](https://openreview.net/forum?id=T8BnDXDTcFZ).
* 11446, 2016.
* Fischer et al. (2022) J. Fischer, A. Gadhikar, and R. Burkholz. Lottery tickets with nonzero biases, 2022.
* Frankle and Carbin (2018) J. Frankle and M. Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. _arXiv preprint arXiv:1803.03635_, 2018.
* Friesen et al. (2018)C. D. Freeman, E. Frey, A. Raichuk, S. Girgin, I. Mordatch, and O. Bachem. Brax - a differentiable physics engine for large scale rigid body simulation, 2021. URL [http://github.com/google/brax](http://github.com/google/brax).
* Gaier and Ha (2019) A. Gaier and D. Ha. Weight agnostic neural networks. _Advances in neural information processing systems_, 32, 2019.
* Haber et al. (2023) A. Haber, A. Wanner, R. W. Friedrich, and E. Schneidman. The structure and function of neural connectomes are shaped by a small number of design principles. _bioRxiv_, pages 2023-03, 2023.
* Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber. Long short-term memory. _Neural computation_, 9:1735-80, 12 1997. doi: 10.1162/neco.1997.9.8.1735.
* Jaeger (2001) H. Jaeger. The"echo state"approach to analysing and training recurrent neural networks. 2001.
* Kheradpisheh et al. (2018) S. R. Kheradpisheh, M. Ganjtabesh, S. J. Thorpe, and T. Masquelier. Stdp-based spiking deep convolutional neural networks for object recognition. _Neural Networks_, 99:56-67, 2018.
* Lehman et al. (2018) J. Lehman, J. Chen, J. Clune, and K. O. Stanley. Es is more than just a traditional finite-difference approximator, 2018.
* Lele et al. (2020) A. S. Lele, Y. Fang, J. Ting, and A. Raychowdhury. Learning to walk: Spike based reinforcement learning for hexapod robot central pattern generation. _2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)_, pages 208-212, 2020.
* Li et al. (2023) G. Li, L. Deng, H. Tang, G. Pan, Y. Tian, K. Roy, and W. Maass. Brain Inspired Computing: A Systematic Survey and Future Trends. 1 2023. doi: 10.36227/techrxiv.21837027.v1. URL [https://www.techrxiv.org/articles/preprint/Brain_Inspired_Computing_A_Systematic_Survey_and_Future_Trends/21837027](https://www.techrxiv.org/articles/preprint/Brain_Inspired_Computing_A_Systematic_Survey_and_Future_Trends/21837027).
* Li et al. (2021) Y. Li, Y. Guo, S. Zhang, S. Deng, Y. Hai, and S. Gu. Differentiable spike: Rethinking gradient-descent for training spiking neural networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, _Advances in Neural Information Processing Systems_, volume 34, pages 23426-23439. Curran Associates, Inc., 2021. URL [https://proceedings.neurips.cc/paper_files/paper/2021/file/c4ca4238a0b923820dcc509a6f75849b-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2021/file/c4ca4238a0b923820dcc509a6f75849b-Paper.pdf).
* Maass et al. (2002) W. Maass, T. Natschlager, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. _Neural Computation_, 14:2531-2560, 2002.
* Malach et al. (2020) E. Malach, G. Yehudai, S. Shalev-Schwartz, and O. Shamir. Proving the lottery ticket hypothesis: Pruning is all you need. In H. D. III and A. Singh, editors, _Proceedings of the 37th International Conference on Machine Learning_, volume 119 of _Proceedings of Machine Learning Research_, pages 6682-6691. PMLR, 13-18 Jul 2020. URL [https://proceedings.mlr.press/v119/malach20a.html](https://proceedings.mlr.press/v119/malach20a.html).
* Mayr et al. (2019) C. Mayr, S. Hoeppner, and S. B. Furber. Spinnaker 2: A 10 million core processor system for brain simulation and machine learning. _ArXiv_, abs/1911.02385, 2019.
* Mozafari et al. (2019) M. Mozafari, M. Ganjtabesh, A. Nowzari-Dalini, S. J. Thorpe, and T. Masquelier. Bio-inspired digit recognition using reward-modulated spike-timing-dependent plasticity in deep convolutional networks. _Pattern recognition_, 94:87-95, 2019.
* Naya et al. (2021) K. Naya, K. Kutsuzawa, D. Owaki, and M. Hayashibe. Spiking neural network discovers energy-efficient hexapod motion in deep reinforcement learning. _IEEE Access_, 9:150345-150354, 2021.
* Pei et al. (2019) J. Pei, L. Deng, S. Song, M. Zhao, Y. Zhang, S. Wu, G. Wang, Z. Zou, Z. Wu, W. He, et al. Towards artificial general intelligence with hybrid tianjic chip architecture. _Nature_, 572(7767):106-111, 2019.
* 9823, 2012.
* Poil et al. (2019)V. Ramanujan, M. Wortsman, A. Kembhavi, A. Farhadi, and M. Rastegari. What's hidden in a randomly weighted neural network?, 2020.
* Rueckert et al. (2016) E. Rueckert, D. Kappel, D. Tanneberg, D. Pecevski, and J. Peters. Recurrent spiking networks solve planning tasks. _Scientific reports_, 6(1):1-10, 2016.
* Rumelhart et al. (1985) D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.
* Salimans et al. (2017) T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever. Evolution strategies as a scalable alternative to reinforcement learning, 2017.
* Schmidt et al. (2018) M. Schmidt, R. Bakker, K. Shen, G. Bezgin, M. Diesmann, and S. J. van Albada. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. _PLOS Computational Biology_, 14(10):e1006359, 2018.
* Schulman et al. (2017) J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms, 2017.
* Shrestha and Orchard (2018) S. B. Shrestha and G. Orchard. Slayer: Spike layer error reassignment in time. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31. Curran Associates, Inc., 2018. URL [https://proceedings.neurips.cc/paper_files/paper/2018/file/82f2b308c3b01637c607ce05f52a2fed-Paper.pdf](https://proceedings.neurips.cc/paper_files/paper/2018/file/82f2b308c3b01637c607ce05f52a2fed-Paper.pdf).
* Such et al. (2018) F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning, 2018.
* Tang et al. (2020) G. Tang, N. Kumar, and K. P. Michmizos. Reinforcement co-learning of deep and spiking neural networks for energy-efficient mapless navigation with neuromorphic hardware. In _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 6090-6097, 2020. doi: 10.1109/IROS45743.2020.9340948.
* Werbos (1990) P. J. Werbos. Backpropagation through time: what it does and how to do it. _Proceedings of the IEEE_, 78(10):1550-1560, 1990.
* Whitley (1994) D. Whitley. A genetic algorithm tutorial. _Statistics and computing_, 4:65-85, 1994.
* Wierstra et al. (2014) D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters, and J. Schmidhuber. Natural evolution strategies. _Journal of Machine Learning Research_, 15(27):949-980, 2014. URL [http://jmlr.org/papers/v15/wierstra14a.html](http://jmlr.org/papers/v15/wierstra14a.html).
* Williams (1992) R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Reinforcement learning_, pages 5-32, 1992.
* Wu et al. (2018) Y. Wu, L. Deng, G. Li, J. Zhu, and L. Shi. Spatio-temporal backpropagation for training high-performance spiking neural networks. _Frontiers in neuroscience_, 12:331, 2018.
* Zenke and Ganguli (2018) F. Zenke and S. Ganguli. Superspike: Supervised learning in multilayer spiking neural networks. _Neural computation_, 30(6):1514-1541, 2018.
* Zenke and Vogels (2021) F. Zenke and T. P. Vogels. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. _Neural Computation_, 33(4):899-925, 03 2021. ISSN 0899-7667. doi: 10.1162/neco_a_01367. URL [https://doi.org/10.1162/neco_a_01367](https://doi.org/10.1162/neco_a_01367).
* Zhang et al. (2017) X. Zhang, J. Clune, and K. O. Stanley. On the relationship between the openai evolution strategy and stochastic gradient descent, 2017.
* Zhou et al. (2019) H. Zhou, J. Lan, R. Liu, and J. Yosinski. Deconstructing lottery tickets: Zeros, signs, and the supermask. _CoRR_, abs/1905.01067, 2019. URL [http://arxiv.org/abs/1905.01067](http://arxiv.org/abs/1905.01067).
Acknowledgement
This work was supported by the National Key Research and Development Program of China (grant 2021ZD0200301).
## Appendix B Comparing EC and NES
For readers familiar with evolutionary algorithms, we present the similarities and main differences between our Evolving Connectivity (EC) method and commonly used evolutionary algorithms, namely Natural Evolution Strategies (NES, Wierstra et al. (2014)) and Evolution Strategies (ES, Salimans et al. (2017)). This comparison will facilitate better understanding and implementation of these algorithms. EC, ES, and NES share the same unbiased Monte-Carlo gradient estimator, gradient update approach, and fitness shaping trick, as described in Section 4.
The primary distinction between these algorithms lies in their search spaces, as illustrated in Figure S1. Both ES and NES employ real-valued parameters, akin to conventional deep learning techniques, and sample the search space using parameterized normal distributions. In contrast, EC utilizes a 1-bit discrete search space parameterized by Bernoulli distributions. This 1-bit formulation reduces the parameter size, endowing EC with unique advantages such as faster computation, improved memory efficiency, and reduced energy and cost requirements.
Additionally, there is a minor difference in how EC enhances exploration. As described in Section 4, EC clips the elements of connection probability matrix to boost exploration.
## Appendix C Optimizing 1-bit Networks Using ES and SG
ES and SG are continuous optimization techniques designed for continuous parameters. To apply these methods to the training of 1-bit connection RSNNs proposed in this paper, we utilize discretization methods as described by Zhou et al. (2019). Specifically, we employ ES and SG to optimize a continuous parameter \(\mathbf{\theta}\), which is then discretized into 1-bit weights \(\mathbf{W}\) by applying a threshold at 0 as \(\mathbf{W}=H(\mathbf{\theta})\), where \(H\) denotes the Heaviside step function. For SG, the straight-through estimator is utilized as a standard approach (Bengio et al., 2013). It is crucial to acknowledge that employing continuous optimization methods in conjunction with discretization may result in biased gradient estimation, while EC naturally provides unbiased gradient estimation for discrete 1-bit connections.
## Appendix D On Chip Power Consumption Estimation
We estimate the power consumption of EC by considering the number of spiking neuron operations and the energy per operation provided by Loihi (Davies et al., 2018), as a representative neuromorphicdevice. The data employed for this estimation is presented in Table 2. Initially, we computed the estimated energy consumption for a single network inference as follows:
\[E_{one}=P_{u}*N*I*S+(P_{s}+C*P_{w})*N*R*S=2.8mJ\]
Subsequently, we determined the total energy consumption during the training process:
\[E_{tot}=E_{one}*G*P=28kJ\]
It is important to note that these calculations are approximations and serve as an initial assessment. In future research, we plan to conduct experiments using neuromorphic chips to empirically validate the energy efficiency of RSNNs.
## Appendix E Performance Comparison with PPO
In order to assess the efficacy of the EC-RSNN approach in relation to contemporary deep reinforcement learning (RL) techniques, we conducted additional training with GRU and LSTM models using Proximal Policy Optimization (PPO) [Schulman et al., 2017] on the most complex Humanoid task. The results of these experiments are presented in Figure S3. The data clearly indicates that EC-RSNN surpasses the performance of PPO-LSTM, illustrating a level of performance that is on par with state-of-the-art deep RL methods.
## Appendix F Full SG Results
We present a comprehensive analysis of the Surrogate Gradient (SG) results, considering various damping factors (\(\gamma\)) and the parameter (\(\beta\)). The outcomes are illustrated in Figure S2.
## Appendix G Detailed Experiment Settings
### Hardware
All experiments were conducted on a single GPU server with the following specifications. Additionally, all efficiency experiments, including those that measure wall-clock time, were executed on a single GPU within this server.
* 8x NVIDIA Titan RTX GPU (24GB VRAM)
* 2x Intel(R) Xeon(R) Silver 4110 CPU
* 252GB Memory
### Hyperparameters
\begin{table}
\begin{tabular}{c c} \hline \hline
**Parameter** & **Value** \\ \hline Energy per synaptic spike op \(P_{s}\) & 23.6 (pJ) \\ Within-tile spike energy \(P_{w}\) & 1.7 (pJ) \\ Energy per neuron update \(P_{u}\) & 81 (pJ) \\ \hline \# Generations \(G\) & 1000 \\ \# Population \(P\) & 10240 \\ \# Time steps \(S\) & 33200 \\ \# Neurons \(N\) & 256 \\ \# Spikes per neuron per step \(R\) & 0.025 \\ \# Connection per neuron \(C\) & 128 \\ \# Update operations per neuron \(I\) & 4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Energy consumption and network specificsIn this section, we provide the hyperparameters for our framework, Evolving Connectivity (EC), and the baselines, including Evolution Strategies (ES) and Surrogate Gradient (SG). All methods utilize the same hyperparameter set for all three locomotion tasks. We also list the settings of neural networks below.
## Appendix H Source Code and Licensing
The source code associated with this paper is licensed under the Apache 2.0 License and is publicly available at [https://github.com/imoneoi/EvolvingConnectivity](https://github.com/imoneoi/EvolvingConnectivity). In our research, we incorporated locomotion tasks and the Brax physics simulator from Freeman et al. (2021), as well as the JAX framework (Bradbury et al., 2018). Both of these resources are also released under the Apache 2.0 License.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Hyperparameter** & **Value** \\ \hline \multicolumn{2}{c}{_Recurrent Spiking Neural Network (RSNN)_} \\ \hline Number of neurons \(d_{h}\) & 256 \\ Excitatory ratio & \(50\%\) \\ Simulation time per environment step & 16.6 ms \\ Simulation timestep \(\Delta t\) & 0.5 ms \\ Synaptic time constant \(\tau_{syn}\) & 5.0 ms \\ Membrane time constant \(\tau_{m}\) & 10.0 ms \\ Output time constant \(\tau_{out}\) & 10.0 ms \\ Input membrane resistance 1\(R_{in}\) & \(0.1\cdot\tau_{m}\sqrt{\frac{2}{d_{h}}}\) \\ Hidden membrane resistance 1\(R_{h}\) & \(1.0\cdot\frac{\tau_{m}}{\tau_{syn}}\sqrt{\frac{2}{d_{h}}}\) \\ Output membrane resistance 1\(R_{out}\) & \(5.0\cdot\tau_{out}\sqrt{\frac{2}{d_{h}}}\) \\ \hline \multicolumn{2}{c}{_Gated Recurrent Unit (GRU)_} \\ \hline Hidden size & 256 \\ \hline \multicolumn{2}{c}{_Long-short term memory (LSTM)_} \\ \hline Hidden size & 128 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Settings of neural networks
\begin{table}
\begin{tabular}{l c} \hline \hline
**Hyperparameter** & **Value** \\ \hline \multicolumn{2}{c}{_Proximal Policy Optimization (PPO)_} \\ \hline Batch size & 2048 \\ BPTT length & 16 \\ Learning rate \(\eta\) & \(3\times 10^{-4}\) \\ Clip gradient norm & 0.5 \\ Discount \(\gamma\) & 0.99 \\ GAE \(\lambda\) & 0.95 \\ PPO clip & 0.2 \\ Value loss coefficient & 1.0 \\ Entropy coefficient & \(10^{-3}\) \\ \hline \multicolumn{2}{c}{_Surrogate Gradient_} \\ \hline Surrogate function & SuperSpike \\ Surrogate function parameter \(\beta\) & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Hyperparameters for Surrogate Gradient | ## Review
### Summary
The authors propose an algorithm called evolving connectivity (EC) to optimize recurrent spiking neural networks (RSNNs) using natural evolution strategies (NES). This approach addresses the limitations of surrogate gradient-based training methods by focusing on connectivity probabilities rather than weight magnitudes, resulting in binary weight values that make the algorithm more compatible with neuromorphic hardware. The performance of EC is evaluated on standard robotic locomotion tasks, demonstrating comparable or superior results to traditional methods like LSTM and GRU, while also significantly outperforming existing spiking network training techniques. The work presents a notable advancement in the training of RSNNs, although some concerns regarding the novelty and thoroughness of experiments remain.
### Strengths
- The paper presents a strong argument for the significance of connectivity probability in training RSNNs, allowing for implicit fitting without overfitting.
- The EC algorithm is explained clearly, demonstrating its performance superiority over existing algorithms for robotic benchmarks.
- The removal of gradient estimation requirements and the reduction of the weight matrix to 1-bit are significant optimizations for hardware implementation.
- The algorithm's performance is competitive with deep learning solutions, which enhances its applicability to unconventional devices.
### Weaknesses
- The RSNN architecture used in experiments is not clearly described, making it difficult to compare with baseline Deep RNNs in terms of neuron and connection counts.
- There is a lack of exploration regarding the performance of other algorithms (ES/SG) when constrained to binary weights, leaving questions about their competitive accuracy.
- Concerns exist regarding the novelty of the weight-based parameterization method and NES, as they appear to be based on prior works.
- The manuscript lacks code availability, raising reproducibility concerns, and omits relevant citations that may impact the perceived contribution.
### Questions
- Can the authors clarify the RSNN architecture and its comparison to the RNNs used in experiments regarding layers, parameters, and precision?
- Would it be feasible to conduct experiments with 1-bit RSNNs utilizing ES/SG algorithms to assess their performance?
- How do the authors plan to address the lack of theoretical contributions and the impact of varying sample sizes on their results?
- Could the authors elaborate on the potential for their method to replace gradient descent in broader contexts beyond the current tasks?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is generally sound, but some weaknesses and lack of clarity in the architecture and novelty require further attention.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is well-structured but could benefit from clearer descriptions of key concepts and results.
### Contribution
**Score:** 3
**Description:** 3 = good; while the paper presents valuable contributions to RSNN training, questions regarding novelty and thoroughness in experimentation limit its impact.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid and impactful, yet it requires clarifications and additional validation of certain aspects.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel approach to training RSNNs that addresses significant limitations in current methods, with promising results and practical applications. However, it requires minor improvements in clarity and additional validation regarding its contributions, particularly concerning novelty and thorough experimentation.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Enemy is Inside: Alleviating VAE's Overestimation in Unsupervised OOD Detection
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
Deep generative models (DGMs) aim at characterizing the distribution of the training set by maximizing the marginal likelihood of inputs in an unsupervised manner, making them a promising option for unsupervised out-of-distribution (OOD) detection. However, recent works have reported that DGMs often assign higher likelihoods to OOD data than in-distribution (ID) data, _i.e._, _overestimation_, leading to their failures in OOD detection. Although several pioneer works have tried to analyze this phenomenon, and some VAE-based methods have also attempted to alleviate this issue by modifying their score functions for OOD detection, the root cause of the _overestimation_ in VAE has never been revealed to our best knowledge. To fill this gap, this paper will provide a thorough theoretical analysis on the _overestimation_ issue of VAE, and reveal that this phenomenon arises from two Inside-Enemy aspects: 1) the improper design of prior distribution; 2) the gap of dataset entropies between ID and OOD datasets. Based on these findings, we propose a novel score function to Alleviate VAE's **O**verestimation **I**n unsupervised OOD **D**etection, named **"AVOID"**, which contains two novel techniques, specifically post-hoc prior and dataset entropy calibration. Experimental results verify our analysis, demonstrating that the proposed method is effective in alleviating _overestimation_ and improving unsupervised OOD detection performance.
## 1 Introduction
The detection of out-of-distribution (OOD) data, _i.e._, identifying data that differ from the in-distribution (ID) training set, is crucial for ensuring the reliability and safety of real-world applications [1; 2; 3; 4]. While the most commonly used OOD detection methods rely on supervised classifiers [5; 6; 7; 8; 9; 10; 11], which require labeled data, the focus of this paper is on designing an unsupervised OOD detector. **Unsupervised OOD detection** refers to the task of designing a detector, based solely on the unlabeled training data, that can determine whether an input is ID or OOD [12; 13; 14; 15; 16; 17; 18]. This unsupervised approach is more practical for real-world scenarios where the data lack labels.
Deep generative models (DGMs) are a highly attractive option for unsupervised OOD detection. DGMs, mainly including the auto-regressive model [19; 20], flow model [21; 22], diffusion model [23], generative adversarial network [24], and variational autoencoder (VAE) [25], are designed to model the distribution of the training set by explicitly or implicitly maximizing the likelihood estimation of \(p(\mathbf{x})\) for its input \(\mathbf{x}\) without category label supervision or additional OOD auxiliary data. They have achieved great successes in a wide range of applications, such as image and text generation. Since generative models are promising at modeling the distribution of the training set, they could be seen as an ideal unsupervised OOD detector, where the likelihood of the unseen OOD data output by the model should be lower than that of the in-distribution data.
Unfortunately, developing a flawless unsupervised OOD detector using DGMs is not as easy as it seems to be. Recent experiments have revealed a counterfactual phenomenon that directly applying the likelihood of generative models as an OOD detector can result in _overestimation_, _i.e._, **DGMs assign higher likelihoods to OOD data than ID data**[12; 13; 17; 18]. For instance, a generative model trained on the FashionMNIST dataset could assign higher likelihoods to data from the MNIST dataset (OOD) than data from the FashionMNIST dataset (ID), as shown in Figure 6(a). Since OOD detection can be viewed as a verification of whether a generative model has learned to model the distribution of the training set accurately, the counterfactual phenomenon of _overestimation_ not only poses challenges to unsupervised OOD detection but also raises doubts about the generative model's fundamental ability in modeling the data distribution. Therefore, it highlights the need for developing more effective methods for unsupervised OOD detection and, more importantly, a more thorough understanding of the reasons behind the _overestimation_ in deep generative models.
To develop more effective methods for unsupervised OOD detection, some approaches have modified the likelihood to new score functions based on empirical assumptions, such as low- and high-level features' consistency [17; 18] and ensemble approaches [26]. While these methods, particularly the VAE-based methods [18], have achieved state-of-the-art (SOTA) performance in unsupervised OOD detection, none of them provides a clear explanation for the _overestimation_ issue. To gain insight into the _overestimation_ issue in generative models, pioneering works have shown that the _overestimation_ issue could arise from the intrinsic model curvature brought by the invertible architecture in flow models [27]. However, in contrast to the exact marginal likelihood estimation used in flow and auto-regressive models, VAE utilizes a lower bound of the likelihood, making it difficult to analyze. Overall, the reasons behind the _overestimation_ issue of VAE are still not fully understood.
In this paper, we try to address the research gap by providing a theoretical analysis of VAE's _overestimation_ in unsupervised OOD detection. Our contributions can be summarized as follows:
1. Through theoretical analyses, we are the first to identify two factors that cause the _overestimation_ issue of VAE: 1) the improper design of prior distribution; 2) the intrinsic gap of dataset entropies between ID and OOD datasets;
2. Focused on these two discovered factors, we propose a new score function, named **"AVOID"**, to alleviate the _overestimation_ issue from two aspects: 1) post-hoc prior for the improper design of prior distribution; 2) dataset entropy calibration for the gap of dataset entropies;
3. Extensive experiments demonstrate that our method can effectively improve the performance of VAE-based methods on unsupervised OOD detection, with theoretical guarantee.
## 2 Preliminaries
### Unsupervised Out-of-distribution Detection
In this part, we will first give a problem statement of OOD detection and then we will introduce the detailed setup for applying unsupervised OOD detection.
**Problem statement.** While deploying a machine learning system, it is possible to encounter inputs from unknown distributions that are semantically and/or statistically different from the training data, and such inputs are referred to as OOD data. Processing OOD data could potentially introduce critical errors that compromise the safety of the system [1]. Thus, the OOD detection task is to identify these OOD data, which could be seen as a binary classification task: determining whether an input \(\mathbf{x}\) is more likely ID or OOD. It could be formalized as a level-set estimation:
\[\mathbf{x}=\begin{cases}\text{ID},&\text{if}\quad\mathcal{S}(\mathbf{x})>\lambda,\\ \text{OOD},&\text{if}\quad\mathcal{S}(\mathbf{x})\leq\lambda,\end{cases} \tag{1}\]
where \(\mathcal{S}(\mathbf{x})\) denotes the score function, _i.e._, **OOD detector**, and the threshold \(\lambda\) is commonly chosen to make a high fraction (_e.g._, 95%) of ID data is correctly classified [9]. In conclusion, OOD detection aims at designing the \(\mathcal{S}(\mathbf{x})\) that could assign higher scores to ID data samples than OOD ones.
**Setup.** Denoting the input space with \(\mathcal{X}\), an _unlabeled_ training dataset \(\mathcal{D}_{\text{train}}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) containing of \(N\) data points can be obtained by sampling _i.i.d._ from a data distribution \(\mathcal{P}_{\mathcal{X}}\). Typically, we treat the \(\mathcal{P}_{\mathcal{X}}\) as \(p_{\text{id}}\), which represents the in-distribution (ID) [17; 27]. With this _unlabeled_ training set, unsupervised OOD detection is to design a score function \(\mathcal{S}(\mathbf{x})\) that can determine whether an input is ID or OOD. This is different from supervised OOD detection, which typically leverages a classifier that is trained on labeled data [4; 7; 9]. We provide a detailed discussion in Appendix A.
### VAE-based Unsupervised OOD Detection
DGMs could be an ideal choice for unsupervised OOD detection because the estimated marginal likelihood \(p_{\theta}(\mathbf{x})\) can be naturally used as the score function \(\mathcal{S}(\mathbf{x})\). Among DGMs, VAE can offer great flexibility and strong representation ability [28], leading to a series of unsupervised OOD detection methods based on VAE that have achieved SOTA performance [17; 18]. Specifically, VAE estimates the marginal likelihood by training with the variational evidence lower bound (ELBO), _i.e._,
\[\text{ELBO}(\mathbf{x})=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}( \mathbf{x}|\mathbf{z})\right]-D_{\text{KL}}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})), \tag{2}\]
where the posterior \(q_{\phi}(\mathbf{z}|\mathbf{x})\) is modeled by an encoder, the reconstruction likelihood \(p_{\theta}(\mathbf{x}|\mathbf{z})\) is modeled by a decoder, and the prior \(p(\mathbf{z})\) is set as a Gaussian distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\). After well training the VAE, \(\text{ELBO}(\mathbf{x})\) is an estimation of the \(p(\mathbf{x})\), which could be directly seen as the score function \(\mathcal{S}(\mathbf{x})\) to do OOD detection. But the VAE would suffer from the _overestimation_ issue, which will be introduced in the next section. More details and **Related Work** can be seen in Appendix B.
## 3 Analysis of VAE's _overestimation_ in Unsupervised OOD Detection
We will first conduct an analysis to identify the factors contributing to VAE's _overestimation_, _i.e._, the improper design of prior distribution and the gap between ID and OOD datasets' entropies. Subsequently, we will give a deeper analysis of the first factor to have a better understanding.
### Identifying Factors of VAE's _Overestimation_ Issue
Following the common analysis procedure [27], an ideal score function \(\mathcal{S}(\mathbf{x})\) that could achieve good OOD detection performance is expected to have the following property for any OOD dataset:
\[\mathcal{G}=\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[\mathcal{S}(\mathbf{x})] -\mathbb{E}_{\mathbf{x}\sim p_{\text{goal}}(\mathbf{x})}[\mathcal{S}(\mathbf{x})]>0, \tag{3}\]
where \(p_{\text{id}}(\mathbf{x})\) and \(p_{\text{goal}}(\mathbf{x})\) denote the true distribution of the ID and OOD dataset, respectively. A larger gap between these two expectation terms can usually lead to better OOD detection performance.
Using the \(\text{ELBO}(\mathbf{x})\) as the score function \(\mathcal{S}(\mathbf{x})\), we could give a formal definition of the repeatedly reported VAE's _overestimation_ issue in the context of unsupervised OOD detection [12; 13; 17; 18].
**Definition 1** (VAE's _overestimation_ in unsupervised OOD Detection).: Assume we have a VAE trained on a training set and we use the \(\text{ELBO}(\mathbf{x})\) as the score function to distinguish data points sampled _i.i.d._ from the in-distribution testing set (\(p_{\text{id}}\)) and an OOD dataset (\(p_{\text{pool}}\)). When
\[\mathcal{G}=\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[\text{ELBO}(\mathbf{x})] -\mathbb{E}_{\mathbf{x}\sim p_{\text{goal}}(\mathbf{x})}[\text{ELBO}(\mathbf{x})]\leq 0, \tag{4}\]
it is called VAE's _overestimation_ in unsupervised OOD detection.
With a clear definition of _overestimation_, we could now investigate the underlying factors causing the _overestimation_ in VAE. After well training a VAE, we could reformulate the expectation term of \(\text{ELBO}(\mathbf{x})\) from the perspective of information theory [29] as:
\[\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\text{ELBO}(\mathbf{x})] =\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\mathbb{E}_{\mathbf{z}\sim q_{\phi} (\mathbf{z}|\mathbf{x})}\log p_{\theta}(\mathbf{x}|\mathbf{z})]-\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x })}[D_{\text{KL}}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))]\] \[=-\mathcal{H}_{p}(\mathbf{x})-D_{\text{KL}}(q(\mathbf{z})||p(\mathbf{z})), \tag{5}\]
because we have
\[\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[\mathbb{E}_{\mathbf{z}\sim q_{\phi} (\mathbf{z}|\mathbf{x})}\log p_{\theta}(\mathbf{x}|\mathbf{z})] =\mathcal{I}_{q}(\mathbf{x},\mathbf{z})+\mathbb{E}_{p(\mathbf{x})}\log p(\mathbf{x})= \mathcal{I}_{q}(\mathbf{x},\mathbf{z})-\mathcal{H}_{p}(\mathbf{x}), \tag{6}\] \[\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}[D_{\text{KL}}(q_{\phi}(\mathbf{z}| \mathbf{x})||p(\mathbf{z}))] =\mathcal{I}_{q}(\mathbf{x},\mathbf{z})+D_{\text{KL}}(q(\mathbf{z})||p(\mathbf{z})), \tag{7}\]
where the \(\mathcal{I}_{q}(\mathbf{x},\mathbf{z})\) is mutual information between \(\mathbf{x}\) and \(\mathbf{z}\) and the \(q(\mathbf{z})\) is the aggregated posterior distribution of the latent variables \(\mathbf{z}\), which is defined by \(q(\mathbf{z})=\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}q_{\phi}(\mathbf{z}|\mathbf{x})\). We leave the detailed definition and derivation in Appendix C.1. Thus, the gap \(\mathcal{G}\) in Eq. (4) could be rewritten as
\[\mathcal{G}=[-\mathcal{H}_{p_{\text{id}}}(\mathbf{x})+\mathcal{H}_{p_{\text{pool}}} (\mathbf{x})]+[-D_{\text{KL}}(q_{\text{id}}(\mathbf{z})||p(\mathbf{z}))+D_{\text{KL}}(q_{ \text{ood}}(\mathbf{z})||p(\mathbf{z}))], \tag{8}\]
where the dataset entropy \(\mathcal{H}_{p_{\text{id}}}(\mathbf{x})/\mathcal{H}_{p_{\text{pool}}}(\mathbf{x})\) is a constant that only depends on the true distribution of ID/OOD dataset; the prior \(p(\mathbf{z})\) is typically set as a standard (multivariate) Gaussian distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\) to enable reparameterization for efficient gradient descent optimization [25].
Through analyzing the most widely used criterion, specifically the expectation of ELBO reformulated in Eq. (8), for VAE-based unsupervised OOD detection, we find that there will be two potential factors that lead to the _overestimation_ issue of VAE, _i.e._, \(\mathcal{G}\leq 0\):
**Factor I: The improper design of prior distribution \(p(\mathbf{z})\).** Several studies have argued that the aggregated posterior distribution of latent variables \(q(\mathbf{z})\) cannot always equal \(\mathcal{N}(\mathbf{0},\mathbf{I})\), particularly when the dataset exhibits intrinsic multimodality [28; 30; 31; 32]. In fact, when \(q(\mathbf{z})\) is extremely close to \(p(\mathbf{z})\), it is more likely to become trapped in a bad local optimum known as posterior collapse [33; 34; 35], _i.e._, \(q_{\phi}(\mathbf{z}|\mathbf{x})\approx p(\mathbf{z})\), resulting in \(q(\mathbf{z})=\int_{\mathbf{x}}q_{\phi}(\mathbf{z}|\mathbf{x})p(\mathbf{x})\approx\int_{\mathbf{x}}p( \mathbf{z})p(\mathbf{x})=p(\mathbf{z})\). In this situation, the posterior \(q_{\phi}(\mathbf{z}|\mathbf{x})\) becomes uninformative about the inputs. Thus, the value of \(D_{\text{KL}}(q_{\text{id}}(\mathbf{z})||p(\mathbf{z}))\) could be overestimated, potentially contributing to \(\mathcal{G}\leq 0\).
**Factor II: The gap between \(\mathcal{H}_{p\mathbf{x}}(\mathbf{x})\) and \(\mathcal{H}_{p\mathbf{x}}(\mathbf{x})\).** Considering the dataset's statistics, such as the variance of pixel values, different datasets exhibit various levels of entropy. It is reasonable that a dataset containing images with richer low-level features and more diverse content is expected to have a higher entropy. As an example, the FashionMNIST dataset should possess higher entropy compared to the MNIST dataset. Therefore, when the entropy of the ID dataset is higher than that of an OOD dataset, the value of \(-\mathcal{H}_{p\mathbf{x}}(\mathbf{x})+\mathcal{H}_{p\mathbf{x}}(\mathbf{x})\) is less than 0, potentially leading to _overestimation_.
### More Analysis on Factor I
In this part, we will focus on addressing the following question: _when is the common design of the prior distribution proper, and when is it not?_
**When the design of prior is proper?** Assuming that we have a dataset consisting of \(N\) data points \(\{\mathbf{x}_{i}\}_{i=1}^{N},\) each of which is sampled from a given \(d\)-dimensional data distribution \(p(\mathbf{x})=\mathcal{N}(\mathbf{x}|\mathbf{0},\mathbf{\Sigma}_{\mathbf{x}})\) as shown in Figure 1(a). Then we construct a linear VAE to estimate \(p(\mathbf{x})\), formulated as:
\[p(\mathbf{z}) =\mathcal{N}(\mathbf{z}|\mathbf{0},\mathbf{I}) \tag{9}\] \[q_{\phi}(\mathbf{z}|\mathbf{x}) =\mathcal{N}(\mathbf{z}|\mathbf{A}\mathbf{x}+\mathbf{B},\mathbf{C})\] \[p_{\theta}(\mathbf{x}|\mathbf{z}) =\mathcal{N}(\mathbf{x}|\mathbf{E}\mathbf{z}+\mathbf{F},\sigma^{2}\mathbf{ I}),\]
where \(\mathbf{A}\),\(\mathbf{B}\),\(\mathbf{C}\),\(\mathbf{D}\),\(\mathbf{E}\),\(\mathbf{F}\), and \(\sigma\) are all learnable parameters and their optimal values can be obtained by the derivation in Appendix C.3. As the estimated distribution \(p_{\theta}(\mathbf{x})\) depicted in Figure 1(c), we can find that the linear VAE with the optimal parameter values can accurately estimate the \(p(\mathbf{x})\) through maximizing ELBO, _i.e._, the _overestimation_ issue is not present. In this case, Figures 1(b) and 1(d) indicate that the design of the prior distribution is proper, where the posterior \(q(\mathbf{z})\) equals prior \(p(\mathbf{z})\).
**When the design of prior is NOT proper?** Consider a more complex data distribution, _e.g._, a mixture of Gaussians, \(p(\mathbf{x})=\sum_{k=1}^{K}\pi_{k}\mathcal{N}(\mathbf{x}|\mathbf{\mu}_{k},\mathbf{\Sigma}_{k} ),K=2\) as shown in Figure 2(a), where \(\pi_{k}=1/K\) and \(\sum_{k=1}^{K}\mathbf{\mu}_{k}=\mathbf{0}\). We construct a dataset consisting of \(K\times N\) data points, obtained by sampling \(N\) data samples \(\{\mathbf{x}_{i}^{(k)}\}_{i=1,k=1}^{N,K}\) from each component Gaussian \(\mathcal{N}(\mathbf{x}|\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})\). The formulation of \(p(\mathbf{z})\), \(q_{\phi}(\mathbf{z}|\mathbf{x})\), and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) is consistent with those in Eq. (9). More details are in Appendix C.2.
In what follows, we will provide a basic derivation outline for the linear VAE under the multi-modal case. We can first obtain the marginal likelihood
Figure 1: Visualization of modeling a single-modal data distribution with a linear VAE.
Figure 2: Visualization of modeling a multi-modal data distribution with a linear VAE.
\(\sigma^{2}\mathbf{I}\)) with the strictly tighter importance sampling on ELBO [36], _i.e._, learning the optimal generative process. Then, the joint log-likelihood of the observed dataset \(\{\mathbf{x}_{i}^{(k)}\}_{i=1,k=1}^{N,K}\) can be formulated as:
\[\mathcal{L}=\sum_{k=1}^{K}\sum_{i=1}^{N}\log\hat{p}_{\theta}(\mathbf{x}_{i}^{(k)})= -\frac{KNd}{2}\log(2\pi)-\frac{KN}{2}\log det(\mathbf{M})-\frac{KN}{2}tr[ \mathbf{M}^{-1}\mathbf{S}], \tag{10}\]
where \(\mathbf{M}=\mathbf{EE}^{\top}+\sigma^{2}\mathbf{I}\) and \(\mathbf{S}=\frac{1}{KN}\sum_{k=1}^{K}\sum_{i=1}^{N}(\mathbf{x}_{i}^{(k)}-\mathbf{F} )(\mathbf{x}_{i}^{(k)}-\mathbf{F})^{\top}\). After that, we could explore the stationary points of parameters through the ELBO, which can be analytically written as:
\[\text{ELBO}(\mathbf{x})=\overbrace{\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x} )}[\log p_{\theta}(\mathbf{x}|\mathbf{z})]}^{L_{1}}-\overbrace{D_{\text{KL}}[q_{\phi}( \mathbf{z}|\mathbf{x})||p(\mathbf{z})]}^{L_{2}}, \tag{11}\] \[L_{1}=\frac{1}{2\sigma^{2}}[-tr(\mathbf{ECE}^{\top})-(\mathbf{ EA}\mathbf{x}+\mathbf{EB})^{\top}(\mathbf{EA}\mathbf{x}+\mathbf{EB})+2\mathbf{x}^{\top}( \mathbf{EA}\mathbf{x}+\mathbf{EB})-\mathbf{x}^{\top}\mathbf{x}]-\frac{d}{2}\log(2\pi\sigma^ {2}),\] \[L_{2}=\frac{1}{2}[-\log det(\mathbf{C})+(\mathbf{A}\mathbf{x}+\mathbf{ B})^{\top}(\mathbf{A}\mathbf{x}+\mathbf{B})+tr(\mathbf{C})-1].\]
The detailed derivation of parameter solutions in Eq. (10) and (11) can be found in Appendix C.4.
In conclusion of this case, Figure 2(b) illustrates that \(q(\mathbf{z})\) is a multi-modal distribution instead of \(p(\mathbf{z})=\mathcal{N}(\mathbf{z}|\mathbf{0},\mathbf{I})\), _i.e._, the design of the prior is not proper, which leads to _overestimation_ as seen in Figure 2(c). However, as analyzed in Factor I, we found that the _overestimation_ issue is mitigated when replacing \(p(\mathbf{z})\) in the KL term of the ELBO with \(q(\mathbf{z})\), which is shown in Figure 2(d).
**More empirical studies on the improper design of prior.** To extend to a more practical and representative case, we used a 3-layer MLP to model \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) with \(p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I})\) on the same dataset of the above multi-modal case. Implementation details are provided in Appendix C.5. After training, we observed that \(q(\mathbf{z})\) still differs from \(p(\mathbf{z})\), as shown in Figure 3(a). The ELBO still suffers from _overestimation_, especially in the region near \((0,0)\), as shown in Figure 3(b).
Finally, we extend the analysis directly to high-dimensional image data. Since VAE trained on image data needs to be equipped with a higher dimensional latent variable space, it is hard to visualize directly. But please note that, if \(q_{\text{id}}(\mathbf{z})\) is closer to \(p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I})\), \(\mathbf{z}_{\text{id}}\sim q_{\text{id}}(\mathbf{z})\) should occupy the center of latent space \(\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\mathbf{z}_{\text{ood}}\sim q_{\text{ood}}(\mathbf{z})\) should be pushed far from the center, leading to \(p(\mathbf{z}_{\text{id}})\) to be larger than \(p(\mathbf{z}_{\text{ood}})\). However, surprisingly, we found this expected phenomenon does not exist, as shown in Figure 3(c) and 3(d), where the experiments are on two dataset pairs, Fashion-MNIST(ID)/MNIST(OOD) and CIFAR10(ID)/SVHN(OOD). This still suggests that the prior \(p(\mathbf{z})\) is improper, even \(q_{\text{ood}}(\mathbf{z})\) for OOD data may be closer to \(p(\mathbf{z})\) than \(q_{\text{id}}(\mathbf{z})\).
**Brief summary.** Through analyzing _overestimation_ scenarios from simple to complex, the answer to the question at the beginning of this part could be: _the prior distribution \(p(\mathbf{z})=\mathcal{N}(\mathbf{0},\mathbf{I})\) is an improper choice for VAE when modeling a complex data distribution \(p(\mathbf{x})\)_, leading to an overestimated \(D_{\text{KL}}(q_{\text{id}}(\mathbf{z})||p(\mathbf{z}))\) and further raising the _overestimation_ issue in unsupervised OOD detection.
## 4 Alleviating VAE's _overestimation_ in Unsupervised OOD Detection
In this section, we develop the **"AVOID"** method to alleviate the influence of two aforementioned factors in Section 3, including **i)** post-hoc prior and **ii)** dataset entropy calibration, both of which are implemented in a simple way to inspire related work and can be further investigated for improvement.
### Post-hoc Prior Method for Factor I
Figure 3: **(a) and (b):** visualization of \(q_{\text{id}}(\mathbf{z})\) and estimated \(p(\mathbf{x})\) by ELBO on the multi-modal data distribution with a non-linear deep VAE; **(c)** and **(d)**: the density plot of the log-probability of posterior \(\mathbf{z}\), _i.e._, \(\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})\), in prior \(\mathcal{N}(\mathbf{0},\mathbf{I})\) on two dataset pairs.
To provide a more insightful view to investigate the relationship between \(q_{\text{id}}(\mathbf{z})\), \(q_{\text{ood}}(\mathbf{z})\), and \(p(\mathbf{z})\), we use t-SNE [37] to visualize them in Figure 4. The visualization reveals that \(p(\mathbf{z})\) cannot distinguish between the latent variables sampled from \(q_{\text{id}}(\mathbf{z})\) and \(q_{\text{ood}}(\mathbf{z})\), while \(q_{\text{id}}(\mathbf{z})\) is clearly distinguishable from \(q_{\text{ood}}(\mathbf{z})\). Therefore, to alleviate _overestimation_, we can explicitly modify the prior distribution \(p(\mathbf{z})\) in Eq. (8) to force it to be closer to \(q_{\text{id}}(\mathbf{z})\) and far from \(q_{\text{ood}}(\mathbf{z})\), _i.e._, decreasing \(D_{\text{KL}}(q_{\text{id}}(\mathbf{z})||p(\mathbf{z}))\) and increasing \(D_{\text{KL}}(q_{\text{ood}}(\mathbf{z})||p(\mathbf{z}))\).
A straightforward modifying approach is to replace \(p(\mathbf{z})\) in ELBO with an additional distribution \(\hat{q}_{\text{id}}(\mathbf{z})\) that can fit \(q_{\text{id}}(\mathbf{z})\) well after training, where the target value of \(q_{\text{id}}(\mathbf{z})\) can be acquired by marginalizing \(q_{\phi}(\mathbf{z}|\mathbf{x})\) over the training set, _i.e._, \(q_{\text{id}}(\mathbf{z})=\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[q_{\phi}( \mathbf{z}|\mathbf{x})]\). Previous study on distribution matching [30] has developed an LSTM-based method to efficiently fit \(q_{\text{id}}(\mathbf{z})\) in the latent space, _i.e._,
\[\hat{q}_{\text{id}}(\mathbf{z})=\prod_{t=1}^{T}q(\mathbf{z}_{t}|\mathbf{z}_{<t}),\text{ where }q(\mathbf{z}_{t}|\mathbf{z}_{<t})=\mathcal{N}(\mu_{i},\sigma_{i}^{2}). \tag{12}\]
Thus, we could propose a "post-hoc prior" (PHP) method for Factor I, formulated as
\[\text{PHP}(\mathbf{x}):=\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\log p_{ \theta}(\mathbf{x}|\mathbf{z})-D_{\text{KL}}(q_{\phi}(\mathbf{z}|\mathbf{x})||\hat{q}_{\text{ id}}(\mathbf{z})), \tag{13}\]
which could lead to better OOD detection performance since it could enlarge the gap \(\mathcal{G}\), _i.e._,
\[\mathcal{G}_{\text{PHP}}=[-\mathcal{H}_{p_{\text{id}}}(\mathbf{x})+\mathcal{H}_{p _{\text{ood}}}(\mathbf{x})]+[-D_{\text{KL}}(q_{\text{id}}(\mathbf{z})||\hat{q}_{\text {id}}(\mathbf{z})]+D_{\text{KL}}(q_{\text{ood}}(\mathbf{z})||\hat{q}_{\text{id}}(\mathbf{ z}))]>\mathcal{G}. \tag{14}\]
Please note that PHP can be directly integrated into a trained VAE in a "plug-and-play" manner.
### Dataset Entropy Calibration Method for Factor II
While the entropy of a dataset is a constant that remains unaffected by different model settings, it is still an essential factor that leads to _overestimation_. To address this, a straightforward approach is to design a calibration method that ensures the value added to the ELBO of ID data will be larger than that of OOD data. Specifically, we denote the calibration term as \(\mathcal{C}(\mathbf{x})\), and its expected property could be formulated as
\[\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[\mathcal{C}(\mathbf{x})]>\mathbb{E} _{\mathbf{x}\sim p_{\text{ood}}(\mathbf{x})}[\mathcal{C}(\mathbf{x})]. \tag{15}\]
After adding the calibration \(\mathcal{C}(\mathbf{x})\) to the ELBO(\(\mathbf{x})\), we could obtain the "dataset entropy calibration" (DEC) method for Factor II, formulated as
\[\text{DEC}(\mathbf{x}):=\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\log p_{ \theta}(\mathbf{x}|\mathbf{z})-D_{\text{KL}}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))+ \mathcal{C}(\mathbf{x}). \tag{16}\]
With the property in Eq. (15), we could find that the new gap \(\mathcal{G}_{\text{DEC}}\) becomes larger than the original gap \(\mathcal{G}\) based solely on ELBO, as \(\mathcal{G}_{\text{DEC}}=\mathcal{G}+\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\bm {x})}[\mathcal{C}(\mathbf{x})]-\mathbb{E}_{\mathbf{x}\sim p_{\text{ood}}(\mathbf{x})}[ \mathcal{C}(\mathbf{x})]>\mathcal{G}\), which should alleviate the _overestimation_ and lead to better unsupervised OOD detection performance.
**How to design the calibration \(\mathcal{C}(\mathbf{x})\)?** For the choice of the function \(\mathcal{C}(\mathbf{x})\), inspired by the previous work [13], we could use image compression methods like Singular Value Decomposition (SVD) [38] to roughly measure the complexity of an image, where the images from the same dataset should have similar complexity. An intuitive insight into this could be shown in Figure 5, where the ID dataset's statistical feature, _i.e._, the curve, is distinguishable to other datasets. Based on this empirical study, we could first propose a **non-scaled** calibration function, denoted as \(\mathcal{C}_{\text{non}}(\mathbf{x})\). First, we could set the number of singular values as \(n_{\text{id}}\), which can achieve the reconstruction error \(|\mathbf{x}_{\text{recon}}-\mathbf{x}|=\epsilon\) in the ID training set; then for a test input \(\mathbf{x}_{i}\), we use SVD to calculate the smallest \(n_{i}\) that could also achieve a smaller reconstruction error \(\epsilon\), then \(\mathcal{C}_{\text{non}}(\mathbf{x})\) could be formulated as:
\[\mathcal{C}_{\text{non}}(\mathbf{x})=\begin{cases}(n_{i}/n_{\text{id}}),&\text{ if}\quad n_{i}<n_{\text{id}},\\ ((n_{\text{id}}-(n_{i}-n_{\text{id}}))/n_{\text{id}}],&\text{if}\quad n_{i} \geq n_{\text{id}},\end{cases} \tag{17}\]
Figure 4: The t-SNE visualization of the latent representations on FashionMNIST(ID)/MNIST(OOD) dataset pair.
Figure 5: Visualization of the relationship between the number of singular values and the reconstruction error.
which can give the ID dataset a higher expectation \(\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[\mathcal{C}_{\text{non}}(\mathbf{x})]\) than that of other statistically different OOD datasets. More details to obtain \(\mathcal{C}_{\text{non}}(\mathbf{x})\) can be found in Appendix D.
### Putting Them Together to Get "AVOID"
By combining the post-hoc prior (PHP) method and the dataset entropy calibration (DEC) method, we could develop a new score function, denoted as \(\mathcal{S}_{\text{AVOID}}(\mathbf{x})\):
\[\mathcal{S}_{\text{AVOID}}(\mathbf{x}):=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[ \log p_{\theta}(\mathbf{x}|\mathbf{z})\right]-D_{\text{KL}}(q_{\phi}(\mathbf{z}|\mathbf{x})|| \hat{q}_{\text{id}}(\mathbf{z}))+\mathcal{C}(\mathbf{x}). \tag{18}\]
To balance the importance of PHP and DEC terms in Eq. (18), we consider to set an appropriate scale for \(\mathcal{C}(\mathbf{x})\). For the scale of \(\mathcal{C}(\mathbf{x})\), if it is too small, its effectiveness in alleviating _overestimation_ could be limited. Otherwise, it may hurt the effectiveness of the PHP method since DEC will dominate the value of "AVOID". Additionally, for statistically similar datasets, _i.e._, \(\mathcal{H}_{\text{pa}}(\mathbf{x})\approx\mathcal{H}_{\text{pa}}(\mathbf{x})\), the property in Eq. (15) cannot be guaranteed and we may only have \(\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[\mathcal{C}_{\text{non}}(\mathbf{x} )]\approx\mathbb{E}_{\mathbf{x}\sim p_{\text{ood}}(\mathbf{x})}[\mathcal{C}_{\text{ non}}(\mathbf{x})]\), in which case we could only rely on the PHP method. Thus, an appropriate scale of \(\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[\mathcal{C}(\mathbf{x})]\), named "\(\mathcal{C}_{\text{scale}}\)", could be derived by \(\mathcal{C}_{\text{scale}}=\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[ \text{PHP}(\mathbf{x})]\approx\mathcal{H}_{\text{pa}}(\mathbf{x})\), which leads to
\[\mathbb{E}_{\mathbf{x}\sim p_{\text{id}}(\mathbf{x})}[\text{DEC}(\mathbf{x})]=-\mathcal{H} _{\text{pa}}(\mathbf{x})-D_{\text{KL}}(q_{\text{id}}(\mathbf{z})||p(\mathbf{z}))+\mathcal{ C}_{\text{scale}}\approx-D_{\text{KL}}(q_{\text{id}}(\mathbf{z})||p(\mathbf{z})). \tag{19}\]
Thus, when \(\mathcal{H}_{\text{pa}}(\mathbf{x})\approx\mathcal{H}_{\text{pa}}(\mathbf{x})\) and \(\mathbb{E}_{\mathbf{x}\sim p_{\text{ad}}(\mathbf{x})}[\mathcal{C}(\mathbf{x})]\approx \mathbb{E}_{\mathbf{x}\sim p_{\text{ood}}(\mathbf{x})}[\mathcal{C}(\mathbf{x})]\), the PHP part of "AVOID" could still be helpful to alleviate _overestimation_.
Motivated by the above analysis, we could implement the **scaled** calibration function, formulated as
\[\mathcal{C}(\mathbf{x})=\mathcal{C}_{\text{non}}(\mathbf{x})\times\mathcal{C}_{\text{ scale}}=\begin{cases}(n_{i}/n_{\text{id}})\times\mathcal{C}_{\text{scale}},&\text{if} \quad n_{i}<n_{\text{id}},\\ \left[((n_{\text{id}}-(n_{i}-n_{\text{id}}))/n_{\text{id}}\right]\times \mathcal{C}_{\text{scale}},&\text{if}\quad n_{i}\geq n_{\text{id}}.\end{cases} \tag{20}\]
## 5 Experiments
### Experimental Setup
**Datasets.** In accordance with existing literature [17; 18; 39], we evaluate our method against previous works using two standard dataset pairs: FashionMNIST [40] (ID) / MNIST [41] (OOD) and CIFAR10 [42] (ID) / SVHN [43] (OOD). The suffixes "ID" and "OOD" represent in-distribution and out-of-distribution datasets, respectively. To more comprehensively assess the generalization capabilities of these methods, we incorporate additional OOD datasets, the details of which are available in Appendix E.1. Notably, datasets featuring the suffix "-G" (e.g., "CIFAR10-G") have been converted to grayscale, resulting in a single-channel format.
**Evaluation and Metrics.** We adhere to the previous evaluation procedure [17; 18], where all methods are trained using the training split of the in-distribution dataset, and their OOD detection performance is assessed on both the testing split of the in-distribution dataset and the OOD dataset. In line with previous works [1; 5; 44], we employ evaluation metrics including the area under the receiver operating characteristic curve (AUROC \(\uparrow\)), the area under the precision-recall curve (AUPRC \(\uparrow\)), and the false positive rate at 80% true positive rate (FPR80 \(\downarrow\)). The arrows indicate the direction of improvement for each metric.
**Baselines.** Our experiments primarily encompass two comparison aspects: **i)** evaluating our novel score function "AVOID" against previous unsupervised OOD detection methods to determine whether it can achieve competitive performance; and **ii)** comparing "AVOID" with VAE's ELBO to assess whether our method can mitigate _overestimation_ and yield improved performance. For comparisons in **i,** we can categorize the baselines into three groups, as outlined in [18]: "**Supervised**" includes supervised OOD detection methods that utilize in-distribution data labels [1; 5; 9; 45; 46; 47; 48; 49]; "**Auxiliary**" refers to methods that employ auxiliary knowledge gathered from OOD data [13; 39; 44]; and "**Unsupervised**" encompasses methods without reliance on labels or OOD-specific assumptions [14; 17; 18; 26]. For comparisons in **ii**, we compare our method with a standard VAE [25], which also serves as the foundation of our method. Further details regarding these baselines and their respective categories can be found in Appendix E.2.
**Implementation Details.** The VAE's latent variable \(\mathbf{z}\)'s dimension is set as 200 for all experiments with the encoder and decoder parameterized by a 3-layer convolutional neural network, respectively.
* [276] The reconstruction likelihood distribution is modeled by a discretized mixture of logistics [20]. For method achieves competitive performance compared to "Supervised" and "Auxiliary" methods and outperforms "Unsupervised" OOD detection methods. Next, we provide a more detailed comparison with some unsupervised methods, particularly the ELBO of VAE, as shown in Table 2. These results indicate that our method effectively mitigates _overestimation_ and enhances OOD detection performance when using VAE as the backbone. Lastly, to assess our method's generalization capabilities, we test it on a broader range of datasets, as displayed in Table 3. Experimental results strongly verify our analysis of the VAE's _overestimation_ issue and demonstrate that our method consistently mitigates _overestimation_, regardless of the type of OOD datasets.
### Ablation Study on Verifying the Post-hoc Prior Method
To evaluate the effectiveness of the Post-hoc Prior (PHP), we compare it with other unsupervised methods in Table 2. Moreover, we test the PHP method on additional datasets and present the results in Table 4 of Appendix F. The experimental results demonstrate that the PHP method can alleviate the _overestimation_. To provide a better understanding, we also visualize the density plot of ELBO and PHP for the "FashionMNIST(ID)/MNIST(OOD)" dataset pair in Figures 6(a) and 6(b), respectively.
The Log-likelihood Ratio (\(\mathcal{LLR}\)) methods [17, 18] are the current SOTA unsupervised OOD detection methods that also focus on latent variables. These methods are based on an empirical assumption that the bottom layer latent variables of a hierarchical VAE could learn low-level features and top layers learn semantic features. However, we discovered that while ELBO could already perform well in detecting some OOD data, the \(\mathcal{LLR}\) method [18] could negatively impact OOD detection performance to some extent, as demonstrated in Figure 6(c), where the model is trained on MNIST and detects FashionMNIST as OOD. On the other hand, our method can still maintain comparable performance since the PHP method can explicitly alleviate _overestimation_, which is one of the strengths of our method compared to the SOTA methods.
(ID) in Figure 7(a) and other OOD datasets in Figure 7(b) when \(n_{\text{id}}=20\). Our results show that the \(\mathcal{C}(\mathbf{x})\) of CIFAR10 (ID) achieves generally higher values than that of other datasets, which is the underlying reason for its effectiveness in alleviating _overestimation_. Additionally, we investigate the impact of different \(n_{\text{id}}\) on OOD detection performance in Figure 7(c), where our results show that the performance is consistently better than ELBO.
## 6 Conclusion
In conclusion, we have identified the underlying factors that lead to VAE's _overestimation_ in unsupervised OOD detection: the improper design of the prior and the gap of the dataset entropies between the ID and OOD datasets. With this analysis, we have developed a novel score function called "AVOID", which is effective in alleviating _overestimation_ and improving unsupervised OOD detection. This work may lead a research stream for improving unsupervised OOD detection by developing more efficient and sophisticated methods aimed at optimizing these revealed factors.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline ID & \multicolumn{3}{c|}{FashionMNIST} & ID & \multicolumn{3}{c}{CIFAR10} \\ \hline OOD & AUROC\(\uparrow\) & AUPRC\(\uparrow\) & FPR80 \(\downarrow\) & OOD & AUROC\(\uparrow\) & AUPRC\(\uparrow\) & FPR80 \(\downarrow\) \\ \hline \multicolumn{6}{c}{ELBO / AVOID **(ours)**} \\ \hline KMNIST & 60.03 / **78.71** & 54.60 / **68.91** & 61.6 / **48.4** & CIFAR100 & 52.91 / **55.36** & 51.15 / **72.13** & 77.42 / **73.93** \\ Omniglot & 98.96 / **100.0** & 99.89 / **100.0** & 0.00 / **0.00** & CelebA & 57.27 / **71.23** & 54.51 / **72.13** & 69.03 / **54.45** \\ notMNIST & 94.12 / **97.72** & 94.09 / **97.70** & 8.29 / **2.20** & Places365 & 57.24 / **68.37** & 56.96 / **69.05** & 73.13 / **62.64** \\ CIFAR10-G & 98.01 / **99.01** & 98.24 / **99.04** & 1.20 / **0.40** & LFWPeople & 64.15 / **67.72** & 59.71 / **68.81** & 59.44 / **54.45** \\ CIFAR100-G & 98.49 / **98.59** & 97.49 / **97.87** & 1.00 / **1.00** & SUN & 53.14 / **63.09** & 54.48 / **63.32** & 79.52 / **68.63** \\ SVHN-G & 95.61 / **96.20** & 96.20 / **97.41** & 3.00 / **0.40** & STL10 & 49.37 / **64.61** & 47.79 / **65.50** & 78.02 / **67.23** \\ CelebA-G & 97.33 / **97.87** & 94.17 / **95.82** & 3.00 / **0.40** & Flowers102 & 67.68 / **76.86** & 64.68 / **78.01** & 57.94 / **46.65** \\ SUN-G & 99.16 / **99.32** & 99.39 / **99.47** & 0.00 / **0.00** & GTSRB & 39.50 / **53.06** & 41.73 / **49.84** & 86.61 / **73.63** \\ Places365-G & 98.92 / **98.89** & 98.05 / **98.61** & 0.80 / **0.80** & DTD & 37.86 / **81.82** & 40.93 / **62.42** & 82.22 / **64.24** \\ Const & 94.94 / **95.20** & 97.27 / **97.32** & 1.80 / **1.70** & Const & 0.001 / **80.12** & 30.71 / **89.42** & 100.0 / **22.38** \\ Random & 99.80 / **100.0** & 99.90 / **100.0** & 0.00 / **0.00** & Random & 71.81 / **99.31** & 82.89 / **99.59** & 85.71 / **0.000** \\ \hline \end{tabular}
\end{table}
Table 3: The comparisons of our method “AVOID” and baseline “ELBO” on more datasets. Bold numbers are superior performance.
Figure 6: Density plots and ROC curves. **(a):** directly using ELBO(\(\mathbf{x}\)), an estimation of the \(p(\mathbf{x})\), of a VAE trained on FashionMNIST leads to _overestimation_ in detecting MNIST as OOD data; **(b):** using PHP method could alleviate the _overestimation_; **(c):** SOTA method \(\mathcal{LCR}\) hurts the performance when ELBO could already work well; **(d):** PHP method would not hurt the performance.
Figure 7: **(a)** and **(b)** are respectively the visualizations of the calculated entropy calibration \(\mathcal{C}(\mathbf{x})\) of CIFAR10 (ID) and other OOD datasets, where the \(\mathcal{C}(\mathbf{x})\) of CIFAR10 (ID) could achieve generally higher values. **(c)** is the OOD detection performance of dataset entropy calibration with different \(n_{\text{id}}\) settings, which consistently outperforms ELBO.
## References
* [1] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In _ICLR_, 2017.
* [2] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In _ICLR_, 2015.
* [3] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In _CVPR_, 2015.
* [4] Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, and Bo An. Open-sampling: Exploring out-of-distribution data for re-balancing long-tailed datasets. In _ICML_, 2022.
* [5] Alexander A. Alemi, Ian Fischer, and Joshua V. Dillon. Uncertainty in the variational information bottleneck. _CoRR_, abs/1807.00906, 2018.
* [6] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In _NeurIPS_, 2020.
* [7] Hongxin Wei, Lue Tao, Renchunzi Xie, and Bo An. Open-set label noise can improve robustness against inherent label noise. In _NeurIPS_, 2022.
* [8] Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, and Tongliang Liu. Harnessing out-of-distribution examples via augmenting content and style. _arXiv preprint arXiv:2207.03162_, 2022.
* [9] Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, and Yixuan Li. Mitigating neural network overconfidence with logit normalization. In _ICML_, 2022.
* [10] Shuyang Yu, Junyuan Hong, Haotao Wang, Zhangyang Wang, and Jiayu Zhou. Turning the curse of heterogeneity in federated learning into a blessing for out-of-distribution detection. In _ICLR_, 2023.
* [11] Ido Galil, Mohammed Dabbah, and Ran El-Yaniv. A framework for benchmarking class-out-of-distribution detection and its application to imagenet. In _ICLR_, 2023.
* [12] Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In _NeurIPS_, 2019.
* [13] Joan Serra, David Alvarez, Vicenc Gomez, Olga Slizovskaia, Jose F. Nunez, and Jordi Luque. Input complexity and out-of-distribution detection with likelihood-based generative models. In _ICLR_, 2020.
* [14] Zhisheng Xiao, Qing Yan, and Yali Amit. Likelihood regret: An out-of-distribution detection score for variational auto-encoder. In _NeurIPS_, 2020.
* [15] Lars Maaloe, Marco Fraccaro, Valentin Lievin, and Ole Winther. BIVA: A very deep hierarchy of latent variables for generative modeling. In _NeurIPS_, 2019.
* [16] Griffin Floto, Stefan Kremer, and Mihai Nica. The tilted variational autoencoder: Improving out-of-distribution detection. In _ICLR_, 2023.
* [17] Jakob D Drachmann Havtorn, Jes Frellsen, Soren Hauberg, and Lars Maaloe. Hierarchical vaes know what they don't know. In _ICML_, 2021.
* [18] Yewen Li, Chaojie Wang, Xiaobo Xia, Tongliang Liu, and Bo An. Out-of-distribution detection with an adaptive likelihood ratio on informative hierarchical vae. In _NeurIPS_, 2022.
* [19] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Koray Kavukcuoglu, Oriol Vinyals, and Alex Graves. Conditional image generation with pixelcnn decoders. In _NeurIPS_, 2016.
* [20] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. In _ICLR_, 2017.
* Dinh et al. [2017] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In _ICLR_, 2017.
* Kingma and Dhariwal [2018] Diederik P. Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In _NeurIPS_, 2018.
* Ho et al. [2020] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In _NeurIPS_, 2020.
* Goodfellow et al. [2020] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. _Communications of the ACM_, 63(11):139-144, 2020.
* Kingma and Welling [2014] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In _ICLR_, 2014.
* Choi et al. [2018] Hyunsun Choi, Eric Jang, and Alexander A Alemi. Waic, but why? generative ensembles for robust anomaly detection. _arXiv preprint arXiv:1810.01392_, 2018.
* Nalisnick et al. [2019] Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? In _ICLR_, 2019.
* Xiao et al. [2022] Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. Tackling the generative learning trilemma with denoising diffusion gans. In _ICLR_, 2022.
* Cover [1999] Thomas M Cover. _Elements of Information Theory_. John Wiley & Sons, 1999.
* Rosca et al. [2018] Mihaela Rosca, Balaji Lakshminarayanan, and Shakir Mohamed. Distribution matching in variational inference. _CoRR_, abs/1802.06847, 2018.
* Sohl-Dickstein et al. [2015] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _ICML_, 2015.
* Feller [2015] William Feller. On the theory of stochastic processes, with particular reference to applications. In _Selected Papers I_, pages 769-798. Springer, 2015.
* Wang et al. [2021] Yixin Wang, David M. Blei, and John P. Cunningham. Posterior collapse and latent variable non-identifiability. In _NeurIPS_, 2021.
* Dieng et al. [2019] Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. Avoiding latent variable collapse with generative skip models. In _AISTATS_, 2019.
* Li et al. [2022] Yewen Li, Chaojie Wang, Zhibin Duan, Dongsheng Wang, Bo Chen, Bo An, and Mingyuan Zhou. Alleviating "posterior collapse" in deep topic models via policy gradient. In _NeurIPS_, 2022.
* Burda et al. [2016] Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In _ICLR_, 2016.
* Van der Maaten and Hinton [2008] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. _Journal of machine learning research_, 9(11), 2008.
* Stewart [1993] Gilbert W Stewart. On the early history of the singular value decomposition. _SIAM Review_, 35(4):551-566, 1993.
* Ren et al. [2019] Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In _NeurIPS_, 2019.
* Xiao et al. [2017] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. _CoRR_, abs/1708.07747, 2017.
* LeCun et al. [1998] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_, 86(11):2278-2324, 1998.
* Krizhevsky and Hinton [2009] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. _Master's thesis, University of Tront_, 2009.
* [43] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
* [44] Dan Hendrycks, Mantas Mazeika, and Thomas G. Dietterich. Deep anomaly detection with outlier exposure. In _ICLR_, 2019.
* [45] Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In _ICLR_, 2018.
* [46] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In _NeurIPS_, 2018.
* [47] Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K Varshney, and Dawn Song. Anomalous example detection in deep learning: A survey. _IEEE Access_, 8:132330-132347, 2020.
* [48] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In _NeurIPS_, 2017.
* [49] Rui Huang, Andrew Geng, and Yixuan Li. On the importance of gradients for detecting distributional shifts in the wild. In _NeurIPS_, 2021.
* [50] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _ICLR_, 2015.
* [51] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In _NeurIPS_, 2019.
* [52] Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban, Oleg Sokolsky, and Insup Lee. idecode: In-distribution equivariance for conformal out-of-distribution detection. In _AAAI_, 2022. | ## Review
### Summary
The paper investigates overestimation issues in Variational Autoencoders (VAEs) for out-of-distribution (OOD) detection by analyzing the expected Evidence Lower Bound (ELBO). It identifies two primary factors contributing to this problem: the inherent entropy of datasets and the design of the prior distribution. The authors propose a method called AVOID, which incorporates a post-hoc prior and entropy calibration to address these challenges. The experimental results demonstrate that the AVOID method improves OOD detection performance compared to existing approaches. Despite the interesting contributions, the paper faces criticisms regarding its theoretical assumptions, experimental design, and clarity of presentation.
### Strengths
- Decomposing the ELBO carefully is interesting, especially the new prior design targeting the overestimation issue.
- The theory of the paper is simple but inspiring, aligning well with the designed algorithm.
- The experimental setup is extensive, with numerous dataset combinations considered.
- The scoring method improves upon the standard ELBO, partially validating the analysis.
- Ablation studies are well-executed.
### Weaknesses
- The derivation assumes model distribution can converge exactly to the true one, which is impractical.
- The evaluation is outdated on easier benchmarks and lacks performance comparisons with standard VAEs.
- The paper does not adequately discuss relevant prior work on OOD detection.
- Some theoretical questions posed in the paper remain unanswered, limiting its practical relevance.
- Notation is inconsistent, and the organization of the paper could be improved.
### Questions
- What is the architecture used for the supervised OOD detection methods in Table 1?
- Why do supervised methods not outperform unsupervised ones in Table 1?
- What justifies the high maximum epochs set to 1000?
- How do the reasons noted by prior work on normalizing flows relate to VAEs?
- What is the right definition for entropy in Eq. (8)?
- Why does AVOID perform differently in various experimental setups?
### Soundness
**Score:** 2
**Description:** 2 = fair; the theoretical foundations are questioned, and empirical results are difficult to interpret.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper is generally understandable, the language is occasionally unclear and could be improved.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper presents a relevant and timely contribution to the field of OOD detection but is limited to VAEs.
### Rating
**Score:** 5
**Description:** 5 = borderline accept; the paper has solid technical content, but significant improvements are needed in evaluation and clarity.
### Paper Decision
**Decision:** Reject
**Reasons:** The paper presents an interesting approach to addressing the overestimation problem in VAEs for OOD detection, but it suffers from theoretical assumptions that may not hold in practice, outdated evaluations, and unclear presentation. The contributions, while relevant, are limited in scope and require substantial improvements before acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Idempotent Learned Image Compression with Right-Inverse
Yanghao Li, Tongda Xu, Yan Wang, Jingjing Liu, Ya-Qin Zhang
Institute for AI Industry Research (AIR), Tsinghua University
[email protected], [email protected]
Corresponding author.
###### Abstract
We consider the problem of idempotent learned image compression (LIC). The idempotence of codec refers to the stability of codec to re-compression. To achieve idempotence, previous codecs adopt invertible transforms such as DCT [23] and normalizing flow [22]. In this paper, we first identify that invertibility of transform is sufficient but not necessary for idempotence. Instead, it can be relaxed into right-invertibility. And such relaxation allows wider family of transforms. Based on this identification, we then implement an idempotent codec using our proposed blocked convolution and null-space enhancement. Empirical results show that we achieve state-of-the-art rate-distortion performance among idempotent codecs. Furthermore, our idempotent codec can be extended into near-idempotent codec by relaxing the right-invertibility. And this near-idempotent codec has significantly less quality decay after \(50\) rounds of re-compression compared with other near-idempotent codecs.
## 1 Introduction
Learned Image Compression (LIC) has been widely studied in recent years [1, 2, 23, 24, 25, 13] and has shown promising rate-distortion (RD) performance. However, the loss caused by re-compression is much more severe in LIC compared with traditional codec, which seriously limits the practical application of LIC [23]. In this paper, we study the idempotence of LIC, which refers to the stability of codec to re-compression. More specifically, denote the original image as \(\mathbf{x}\), the encode-then-decode procedure as \(f(\cdot)\), and the reconstructed image as \(f(\mathbf{x})\), we say a codec is idempotent if:
\[f(\mathbf{x})=f(f(\mathbf{x})). \tag{1}\]
For traditional codecs such as JPEG [23] and JPEG2000 [24], idempotence is easily achieved. This is because those codecs adopt invertible transforms such as Discrete Cosine transform (DCT) and Discrete Wavelet transform (DWT). And the only non-invertible operation is the scalar quantization. As scalar quantization using rounding is naturally idempotent, the idempotence of the whole codec can be easily assured. For LIC, however, neural-network-based transform is introduced for expressiveness. And as most neural networks are non-invertible, the idempotence of LIC is not trivial.
A natural solution to this problem is replacing the non-invertible encoding transform with invertible ones. [1] construct the encoding transform with only invertible normalizing flow [22]. However, due to the limited expressiveness of invertible operations, a dramatic RD performance drop is observed. Another line of works are targeted at near-idempotence, which means that they achieve a small \(|f(x)-f(f(x))|\), but not \(f(x)=f(f(x))\). These worksadopt partially invertible encoding transform [14] or use additional regularization loss [15] to constrain re-compression loss, but none of them is able to achieve idempotence like JPEG and JPEG2000.
In this work, we first identify that invertibility is sufficient but not necessary for idempotent codecs. Instead, it can be relaxed into right-invertibility, and such relaxation allows for more flexible and expressive encoding transforms. Based on this observation, we investigate practical implementation of right-inverse, and propose a highly efficient blocked convolution to overcome the forbidding time complexity of right-inverse. Additionally, we leverage the null space decomposition [22, 23, 24] to further boost the expressiveness of the right-invertible encoding transform. Empirically, we achieve state-of-the-art RD performance among existing idempotent codecs. Further, our idempotent codec can be easily extended into a near-idempotent codec, which also achieves state-of-the-art re-compression performance among near-idempotent codecs.
## 2 Sufficiency of Right-Invertibility for Idempotence
Most modern LIC [1, 16, 17] are composed of four components: the encoding transform \(E\), the decoding transform \(D\), the quantization \(Q\), and the entropy model. On the encoder side, the encoding transform \(E\) transform the input image \(x\) to some latent representation, which is then quantized by \(Q\) to give the code \(y\). The entropy model models the entropy of code \(y\), and is then used losslessly compress \(y\) to a bitstream, On the decoder side, the received bitstream is losslessly decompressed to code \(y\) with the help of the entropy model. After that, the decoding transform \(D\) transforms the \(y\) to the decompressed \(\hat{x}\). This compress-decompress procedure can be represented as
\[y=Q\circ E(x),\;\hat{x}=D(y). \tag{2}\]
Note that the code \(y\) is losslessly encoded and decoded, regardless of how good the entropy model is. Therefore entropy model does not influence the distortion, and is thus omitted for simplicity.
Using similar notations, the re-compression cycle can be represented as
\[y_{n}=Q\circ E(x_{n-1}),\;x_{n}=D(y_{n})=D\circ Q\circ E(x_{n-1}), \tag{3}\]
where \(x_{0}\) is the original image, \(y_{n}\) is the code after \(n\) times' re-compression, and \(x_{n}\) is the corresponding decompressed image. We say a codec is idempotent if and only if
\[x_{n}=x_{1}=D\circ Q\circ E(x_{0}),\forall n\geq 1. \tag{4}\]
An obvious sufficient condition for idempotence (or for Eq. 4) is
\[D=E^{-1}. \tag{5}\]
In other words, \(D\) and \(E\) are inverse of each other. Then, the only difference between \(x_{1}\) and \(x_{n}\) is the number of applications of quantization operation \(Q\):
\[x_{n}=D\circ Q\circ E\text{-}\mathcal{D}\circ Q\circ E(x_{n-2})=...=D\circ Q^{ n}\circ E(x_{0}). \tag{6}\]
This re-compression procedure is idempotent as long as \(Q\) is idempotent. Usually \(Q\) is a scalar quantization implemented by rounding, then this procedure is naturally idempotent. The invertibility of encoding transform is satisfied in traditional image codecs like JPEG and JPEG2000, and can also be achieved using normalizing flow [10], thus these codecs are idempotent. However, invertibility of \(E\) (Eq. 5) is sufficient but not necessary. We notice that for \(E\circ D\) in Eq. 6 to be canceled out, it is sufficient to have
\[E\circ D(y_{i})=y_{i}, \tag{7}\]
which is exactly the definition of right-invertibility.
Relaxing the requirement for \(E\) from invertibility to right-invertibility brings two advantages. On the one hand, the family of right-invertible functions is much less constrained than the family of invertible functions, which means we have wider choices for \(E\) and its easier to improve the expressiveness of \(E\). On the other hand, the sufficient condition for \(E\) to be invertible is that \(E\) is bijective, and the sufficient condition for \(E\) to be right-invertible is that \(E\) is surjective [16]. A surjective \(E\) can transform different input images \(x\) into the same code to save the bits for distinguishing them, while bijective \(E\) always transforms different input images into different codes. As later shown in the experiment section 4.3, this bit saving property benefits the RD performance of lossy compression.
## 3 Practical Design of Right-Invertible Codec
In the previous section, we have shown that if an encoding transform \(E\) and quantization \(Q\) are right-invertible, we only need the decoding transform \(D\) to be the right-inverse of \(E\) to formulate an idempotent codec. Since the success of LIC is, to a large extent, owing to the expressive power of the learned encoding and decoding transforms, our task becomes how to design an expressive yet right-invertible encoding transform. As composition of surjections is still surjective [Mac Lane, 2013], this task can be further decomposed into designing small components of right-invertible transforms and combining them altogether. Specifically, we construct the encoding transform \(E\) in a composition form as \(E=E_{n}\circ E_{n-1}\circ...\circ E_{1}\), and make each \(E_{i}\) a right-invertible sub-transform.
This section is organised as follows: In Sec. 3.1-Sec. 3.3, we discuss how to design expressive yet right-invertible atom transforms used in LIC, such as convolution, normalization and quantization. In Sec. 3.4, we discuss how to organize those atom transforms into an idempotent codec. And In Sec. 3.5, we discuss how to relax this idempotent codec into near-idempotent codec.
### Efficient & Expressive Right-Invertible Convolution
Convolution is of great importance to LIC and makes up the majority of computation cost. Here, we discuss how to implement right-invertible convolution with efficiency and expressiveness. The overall design of right-invertible convolution is illustrated in Fig.1(c).
#### 3.1.1 Blocked Convolution for Efficiency
The right-inverse of a convolution can be calculated in serial (if it exists), but the time complexity is forbiddingly high. To see why, consider a 1-d convolution with kernel size 5, padding 2, stride 2, and channel 1. The input and output are \(\mathbf{x}=(x_{1},x_{2},...,x_{12})\) and \(\mathbf{y}=(y_{1},y_{2},...,y_{6})\) respectively.
The serial solution of right-inverse goes as follows: first solve \((x_{1},x_{2},x_{3})\) given \(y_{1}\), then solve \((x_{4},x_{5})\) given \(y_{2}\) and _already solved_\((x_{1},x_{2},x_{3})\), and so on till the whole \(\mathbf{x}\) is solved. This serial solution cannot be made parallel because solving \((x_{4},x_{5})\) needs \((x_{1},x_{2},x_{3})\) to be _already solved_, and is thus extremely time-consuming. The fundamental reason for the dependency in solving \(\mathbf{x}\) is that, some _same_\(x_{i}\) is involved in the forward calculation of _different_\(y_{i}\), i.e., the overlapping receptive field. Therefore, if we make the receptive field non-overlapping, then parallel solution of right-inverse becomes possible.
Inspired by the non-overlapping \(1\times 1\) convolution for inverse [Kingma and Dhariwal, 2018], we propose blocked convolution for right-inverse. Blocked convolution is also non-overlapping, but extends invertibility to right-invertibility. As shown in Sec. 4.3, this extension boots the the R-D performance of idempotent LIC by a large margin.
Figure 1: Right-Invertible Convolution. (a) Naive convolution matrix (upper) and blocked convolution matrix (lower). (b) Receptive field pattern for the proposed blocked convolution. (c) Null-space enhancement (NE) and coupling enhancement (CE) to improve expressiveness.
With this non-overlapping blocked convolution, we can make solution of right-inverse parallel. Following the previous example, using the same input and output with a \(4\times 2\) blocked-convolution kernel, for now, solving \((x_{1},...,x_{4})\) only needs to know \((y_{1},y_{2})\), and solving \((x_{5},...,x_{8})\) only needs to know \((y_{3},y_{4})\), and these procedures can be made parallel. Matrix multiplication equivalents of normal convolution and blocked convolution are depicted in the upper and lower parts of Fig.1(a) respectively, using the well-known GEMM [GEM] formation. And an analysis of time complexity of 2D convolution can be found in appendix A.1.
#### 3.1.2 Null-Space Enhancement and Coupling Enhancement for Expressiveness
**Null-Space Enhancement**
To solve the right-inverse of the proposed blocked convolution in parallel, we adopt the widely-used Moore-Penrose pseudo-inverse [Moo] as
\[X=YK^{+}, \tag{8}\]
where \(Y\in\mathbb{R}^{b\times d}\) is the output of blocked convolution, \(X\in\mathbb{R}^{b\times D}\) is the solved right-inverse, and \(K^{+}\in\mathbb{R}^{d\times D}\) is the Moore-Penrose pseudo-inverse of the kernel \(K\in\mathbb{R}^{D\times d}\). \(b\) is the batch size, \(D\) and \(d\) are the input and output dimension of convolution in GEMM formation [GEM].
However, simple Moore-Penrose pseudo-inverse does not have enough expressiveness. This is because right-inverse of a surjection might not be unique [Mac Lane, 2013], and the Moore-Penrose pseudo-inverse might not be the optimal choice to calculate the right-inverse for blocked convolution. Therefore, we leverage the null-space decomposition [Schwab et al., 2019, Wang et al., 2022a,b] to identify a right-inverse that is most suitable. More specifically, Eq. 8 can be extended to
\[X=YK^{+}+F(I-KK^{+}). \tag{9}\]
Here \(F\in\mathbb{R}^{b\times D}\) can be an arbitrarily chosen variable. Utilizing this property, we propose to learn a function \(f(Y)\) to identify an \(F\) that is the most suitable as:
\[X=YK^{+}+f(Y)(I-KK^{+}). \tag{10}\]
We call this approach null-space enhancement (NE). In this way, right-inverse \(X\) can be made more than just linear transformation of \(Y\) and thus become more expressive. Ablation study of the proposed null-space enhancement can be found in Sec.4.3. Derivation of Eq.9 and parameterization of kernel \(K\) can be found in appendix A.2.
**Coupling Enhancement**
The proposed blocked convolution makes the calculation of right-inverse parallel-friendly, but it also restricts the receptive field to a blocked pattern (Fig. 1(b)). This restriction limits the exchange of information across different spatial locations. To overcome this drawback, we propose to introduce a coupling enhancement (CE) after the blocked convolution.
Specifically, we implement a coupling structure [Dinh et al., 2016], and utilize normal convolutions without blocked limitation as its scale and translation functions. This structure is formulated as
\[\mathbf{x}=\left[\mathbf{x}_{1}\;\mathbf{x}_{2}\right],\;\mathbf{y}_{1}=\mathbf{x}_{1},\;\mathbf{y}_{2 }=\mathbf{x}_{2}\odot\exp\left(s(\mathbf{x}_{1})\right)+t(\mathbf{x}_{1}),\;\mathbf{y}=\left[ \mathbf{y}_{1}\;\mathbf{y}_{2}\right] \tag{11}\]
Here, \(\mathbf{x}\) and \(\mathbf{y}\) are the input and output, respectively. \(\left[\cdot\right]\) is the split/concatenate operation along the channel dimension, and \(\odot\) is the element-wise multiplication. \(s(\cdot)\) and \(t(\cdot)\) are the scale and translation functions, respectively. This structure is fully invertible as
\[\mathbf{y}=\left[\mathbf{y}_{1}\;\mathbf{y}_{2}\right],\;\mathbf{x}_{1}=\mathbf{y}_{1},\;\mathbf{x}_{2 }=\left(\mathbf{y}_{2}-t(\mathbf{x}_{1})\right)/\exp\left(s(\mathbf{x}_{1})\right),\;\mathbf{ x}=\left[\mathbf{x}_{1}\;\mathbf{x}_{2}\right] \tag{12}\]
Note that the invertibility of this coupling structure does not require the invertibility of the scale and translation functions \(s(\cdot)\) and \(t(\cdot)\), thus \(s(\cdot)\) and \(t(\cdot)\) can be arbitrary learned transforms. Since the proposed coupling enhancement utilize normal convolutions as its \(s(\cdot)\) and \(t(\cdot)\), its receptive field is not restricted to be blocked, and can thus serve as an enhancement of expressiveness.
### Right-Invertible Generalized Divisive Normalization
The widely-used generalized normalization (GDN)[Balle et al., 2015] in LIC is invertible in theory, and is thus qualified as a surjection. However, the inverse of GDN has to be solved for every input image in an iterative manner, and is even not guaranteed to converge in finite steps. Therefore, the original GDN is not suitable for idempotent compression.
We propose a coupling GDN layer (c-GDN) that combines the coupling structure (Dinh et al., 2016) and GDN. The inverse of c-GDN layer can be solved in a much simpler analytical manner. Specifically, we implement a coupling structure (Dinh et al., 2016) with normal GDN as its scale and translation functions (\(s(\cdot)\) and \(t(\cdot)\)). And just like the aforementioned coupling enhancement, the forward and inverse of this c-GDN layer can be calculated according to Eq. 11 and Eq. 12, respectively. We demonstrate empirically in Sec. 4.3 that the proposed c-GDN layer can achieve comparable RD performance with the original GDN.
### Right-Invertible Quantization
As previously discussed, the vanilla scalar quantization using rounding is naturally idempotent. However, the widely adopted mean-shift trick quantization for mean-scale Gaussian entropy model (Minnen et al., 2018) is not guaranteed to be idempotent. As proposed by (Minnen et al., 2020), the mean-shifted quantization can be formulated as
\[Q(\mathbf{y})=\lfloor\mathbf{y}-\mathbf{\mu}\rceil+\mathbf{\mu}, \tag{13}\]
where \(\lfloor\cdot\rceil\) is scalar quantization and \(\mathbf{\mu}\) is the predicted mean of \(\mathbf{y}\). Let \(\mathbf{y}_{1}\) denote the result after the first application of \(Q\), then idempotence requires that \(Q(\mathbf{y}_{1})=\mathbf{y}_{1}\), which in turn requires \(\mathbf{y}_{1}-\mathbf{\mu}\) to be integer. However, no existing method can meet this requirement, thus \(Q\) is not ensured to be idempotent.
We propose two types of circumvention to solve this issue. For the first-type circumvention, we change the quantization into
\[Q_{i}(\mathbf{y})=\lfloor\mathbf{y}-\lfloor\mathbf{\mu}\rceil\rceil+\lfloor\mathbf{\mu}\rceil \tag{14}\]
By adding \(\lfloor\cdot\rceil\) around the predicted mean \(\mathbf{\mu}\), we force the quantization result to be integer, thus \(\mathbf{y}-\lfloor\mathbf{\mu}\rceil\) is guaranteed to be integer from the second application of \(Q_{i}\), and then \(Q_{i}\) is idempotent.
For the second-type circumvention, we resort to the original definition of mean-scale Gaussian entropy model (Minnen et al., 2018), and calculate the quantized CDF (cumulative distribution function) regarding \(Q_{i}(\mathbf{y})=\lfloor\mathbf{y}\rceil\) on the fly during inference.
### Overall Framework of Right-Invertible Codec
The overall framework is depicted in Fig. 2. Following prior works (Balle et al., 2017, 2018; Cheng et al., 2020; He et al., 2022) in LIC, We use the mainstream four-stage framework in our work. Specifically, the encoding transform is divided into four stages, and each stage decreases the resolution by a factor of 2. For the first 3 stages, each stage starts with a right-invertible convolution layer (described in Sec. 3.1) and ends with a c-GDN normalization layer (described in Sec. 3.2). The last stage only consists of one right-invertible convolution layer.
Figure 2: Comparison between (a) baseline framework (Balle et al., 2018) and (b) proposed framework. To make the framework idempotent, we replace conv/deconv with right-invertible convolution (described in Sec. 3.1), GDN/iGDN with c-GDN (described in Sec. 3.2), and quantization \(Q\) with idempotent quantization \(Q_{i}\) (described in Sec. 3.3) Both frameworks use the same mean-scale Gaussian entropy model (Minnen et al., 2018). AE/AD are arithmetic encoder/decoder, respectively.
The decoding transform is built to be a right-inverse of the encoding transform. Note that not all layer in the decoding transform has its own weights. Specifically, for c-GDN normalization layers, there is no additional weights since they are the inverse of their counterparts in the encoding transform. For right-invertible convolution layers, the only additional weights appear in the null-space enhancement (Sec. 3.1).
Entropy model does not influence idempotence in the proposed framework, thus we use the off-the-shelf mean-scale Gaussian entropy model (Minnen et al., 2018) for its availability and efficiency. Additionally, we use the right-invertible quantization (described in Sec. 3.3) for quantizing the code.
### Extension to Near-Idempotent Learned Image Codec
Idempotent codec can make sure that the RD performance keeps unchanged during any times of re-compression. However, this strict idempotence comes with a price that the decoding transform must be the right-inverse of encoding transform. This limitation reduces the expressiveness of transforms and is empirically harmful to the first-time RD performance.
To adapt to the cases where first-time RD performance also matters, we propose to extend our idempotent codec near-idempotent codec by relaxing the right-invertibility. The relaxation of right-invertibility is simple yet effective: we change the first right-invertible convolution layer (described in Sec.1) to be non-surjective, and keep all the rest layers surjective. This is done by allowing \(D\) to be smaller than \(d\) for the kernel \(K\) in the first right-invertible convolution layer. By keeping the right-invertibility of most layers, our near-idempotent codec is more stable to re-compression than existing near-idempotent codecs (Kim et al., 2020; Cai et al., 2022), while achieving comparable or better first-time RD performance.
## 4 Experiments
### Experiment Setup
All the models are trained on the training split of open-images dataset (Kuznetsova et al., 2020), and all the evaluations are conducted on the Kodak dataset (Franzen, 1999).
We sketch the training schedule accordingly from existing literature (Balle et al., 2017, 2018; Minnen et al., 2018, 2020; Cheng et al., 2020). Images are randomly cropped to \(256\times 256\) for training, and a batch size of 16 is used. All the models are trained using an Adam optimizer. The learning rate is initially set to \(10^{-1}\), and decays by a factor of 10 when plateaued.
Figure 3: Performance of different codecs on Kodak. Idempotent codecs are marked as idemp, near-idempotent codec are marked as near id and non-idempotent codecs are marked as non id. (a) First-time and re-compression (upto 50 times) RD performance of different codecs. Idempotent codecs only report first-time RD performance (re-compression RD performance is the same). (b) PSNR drop during re-compression (upto 50 re-compression) of different codecs. Note that idempotent codecs are straight lines and cover each other in the figure.
We choose four bitrate level according to the benchmark setting in [Kim et al., 2020]. Specifically, we set \(\lambda=\{18,67,250,932\}\times 10^{-4}\), and models trained with these \(\lambda\) reaches average bitrates from 0.2-1.5 on Kodak dataset. Following prior works [Balle et al., 2017, 2018], we use a smaller code channels (192) for lower-bpp points, and use a bigger code channels (320) for higher-bpp points. The learned function \(f(\cdot)\) in Eq.10 is implemented with a residual block.
All the experiments are conducted on a computer with AMD EPYC 7742 64-Core Processor and 8 Nivida A30 GPU. All the code is implemented based Python 3.9, Pytorch 1.12 and CompressAI [Begaint et al., 2020].
### Overall Performance
#### 4.2.1 Results of Idempotent Codec
We compare with [Helminger et al., 2021], which is the only prior LIC that achieves idempotent lossy compression to the best of our knowledge. We also compare with traditional idempotent codecs such as JPEG2000.
We report first-time compression RD performance of the above idempotent codecs in Fig. 3(a), and detailed BD-BR and BD-PSNR are listed in Tab. 1. Multi-time re-compression RD performance does not change for idempotent codecs. From the result we see that, our proposed framework exceeds prior art [Helminger et al., 2021] by a large margin, which clearly validates the superiority of right-inverse over strict inverse on the idempotent lossy compression task.
We also compare the FLOPs and encode-decode time in Tab. 1. The results clearly shows that the proposed idempotent framework is also more efficient than [Helminger et al., 2021].
#### 4.2.2 Results of Near-Idempotent Codec
For near-idempotent codecs, we compare against prior work [Kim et al., 2020], as well as [Cai et al., 2022] which utilizes a partially invertible structure. To demonstrate the advantages over non-idempotent codecs, we also compare with non-idempotent learned codecs [Balle et al., 2018, Cheng et al., 2020, He et al., 2022] as well as traditional codecs BPG and VTM.
Following the benchmark protocol in [Kim et al., 2020], we report the PSNR drop of the above codecs upto 50 re-compression. RD trade-off points whose first-time bpp is closest to but not greater than 0.8 bpp are chosen for each codec. The results is shown in Fig. 3(b) and listed in detail in Tab. 2. From the result we see that, in the near-idempotent setting, the PSNR drop of proposed framework is 0.87dB, whereas [Kim et al., 2020] and [Cai et al., 2022] has more than 2dB PSNR drop. Additionally, the PSNR drop of the proposed framework almost converges within 10 re-compression, while other near-idempotent frameworks still experience evident PSNR drop after 30 or even 50 re-compression. Codecs that do not consider idempotence suffers a much more severe drop of PSNR during re-compression.
We also provide the RD performance of the proposed idempotent and near-idempotent frameworks, as is shown in Fig.3(a). It is clear that near-idempotent framework has much better first-time compression RD performance than idempotent framework. In terms of re-compression RD performance, however, near-idempotent can only reach similar RD performance with much higher computation cost (8.40 GFLOPs v.s. 48.78 GFLOPs in Tab.1 and Tab.2 ).
These results clearly demonstrate that, even if we break the right-invertibility of the first layer in order to get higher first-time RD performance, the performance drop during re-compression is still acceptable and highly controllable, as opposed to prior works [Kim et al., 2020, Cai et al., 2022].
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Methods & BD-BR (\%) \(\downarrow\) & BD-PSNR (dB) \(\uparrow\) & GFLOPs \(\downarrow\) & time (ms) \(\downarrow\) \\ \hline _Idempotent Codec_ & & & & \\ JPEG2000 & 0.00 & 0.00 & - & - \\
[Helminger et al., 2021] & 4.83 & -0.21 & 15.89 & 185 \\ Proposed Idempotent & -28.75 & 1.63 & 8.40 & 110 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The BD-BR, BD-PSNR, FLOPs and encode-decode time of different methods on Kodak dataset. FLOPs and enc-dec time are calcualted on an input of shape \(256\times 256\times 3\).
### Ablation Studies
**Inverse v.s. Right-Inverse** The importance of right-inverse for idempotent lossy compression can already be seen from the difference between the proposed framework and (Helminger et al., 2021) in Fig. 3(a) and Tab. 1. To further demonstrate this importance, we test what happens if the proposed framework is changed from right-invertible to invertible. Specifically, we increase the output dimension of the proposed blocked convolution (Sec. 3.1) to its input dimension, so that it becomes fully invertible. We also test what happens if the proposed blocked convolution is replaced with invertible \(1\times 1\) convolution (Kingma and Dhariwal, 2018). For both cases, the encoding transform is fully invertible.
Fig. 4 (a) shows that both invertible blocked convolution (inv.) and invertible \(1\times 1\) convolution (w.conv1x1) suffer dramatic RD performance degradation compared with the proposed right-invertible framework (proposed). This result further demonstrated that it is important for idempotent LIC to have right-inverse rather than inverse.
**Impact of Null-Space Enhancement** Null-space enhancement is introduced in our framework to enable an adaptive right-inverse for linear layers rather than a fixed Moore-Penrose psudo-inverse. As is shown in Fig. 4, for the two lower-bpp points, the framework with null-space enhancement
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{4}{c}{PSNR Drop (dB) \(\downarrow\)} & \multicolumn{4}{c}{GFLOPs \(\downarrow\)} & time (ms) \(\downarrow\) \\ \cline{2-7} & round = & 5 & 10 & 25 & 50 & & \\ \hline _Non-Idempotent Codec_ & & & & & & \\ BPG & 1.16 & 1.93 & 2.10 & 2.19 & - & - \\ VTM & 1.19 & 2.09 & 4.50 & 7.18 & - & - \\ (Ballé et al., 2018) & 2.18 & 3.17 & 5.65 & 8.46 & 6.23 & 80 \\ (Cheng et al., 2020) & 2.44 & 4.76 & 8.59 & 12.40 & 51.99 & \textgreater{}1000 \\ \hline _Near-Idempotent Codec_ & & & & & & \\ (Kim et al., 2020) & 0.18 & 0.61 & 3.18 & 8.26 & 6.23 & 80 \\ (Cai et al., 2022) & 1.36 & 2.01 & 2.75 & - & 131.46 & 240 \\ Proposed Near-Idempotent & 0.74 & 0.83 & 0.87 & 0.87 & 48.78 & 115 \\ \hline \hline \end{tabular}
\end{table}
Table 2: PSNR drop during 50 re-compression of different non-idempotent and near idempotent codecs. FLOPs and encode-decode time tested under the same condition as Tab. 1
Figure 4: Ablation studies for different architectures. (a) Re-compression RD performance of different idempotent architectures on Kodak dataset (except for w.GDN, for which we report first-time compression as it is non-idempotent). (b) qualitative results for w. (right) or w.o. (left) coupling enhancement.
(proposed) has a 1 dB advantage over the framework without null-space enhancement (w.o.NE), whereas for the two higher-bpp points this advantage shrinks to be smaller than 0.5 dB. The reason is that the two lower-bpp points have a smaller code channels (192) than the two higher-bpp points (320). A smaller code channel means a larger null space, and thus null-space enhancement can make larger improvement.
**Impact of Coupling Enhancement** Coupling enhancement improves the receptive field of convolution. The proposed blocked convolution is parallel friendly, but also restricts the receptive field. Qualitatively, this constraint results in obvious deterioration in the flat area, such as the wall, the sky and the face, as is shown in the left column of Fig. 4(b). By introducing coupling enhancement after blocked convolution, the restriction is lifted and the block artifact on the reconstructed images are removed, as is shown in the right column of Fig. 4(b). Quantitatively, the RD performance is also improved by coupling enhancement, which is shown by comparing proposed and w.o.c-en in Fig. 4(a).
**Impact of Right-Invertible Quantization** As is pointed out in Sec. 3.3, the mean-shifted quantization is not guaranteed to be idempotent, and we propose two alternatives to circumvent this issue. In the proposed framework, we use the second-type circumvention. Here we compare these two choices. As is shown in Fig. 4(a), first-type circumvention (w.int mean) performs slightly worse than second-type circumvention (proposed). This is because first-type circumvention forces the mean to be integer, whereas second-type does not has this constraint. However, the second-type circumvention requires calculating the quantized CDF on the fly during inference.
**Impact of Right-Invertible GDN** To make the inverse of GDN layer actually computable in the proposed framework, we combine the coupling structure (Papamakarios et al., 2021) with GDN, as is described in Sec. 3.2. Here we demonstrate that such workaround does not affect the RD performance. Specifically, we change the c-GDN and c-GDN\({}^{-1}\) layer in our framework (Fig. 2(b)) back to GDN and iGDN layer (Balle et al., 2015). Note that this change makes the framework non-idempotent, and is only used to test the RD performance. From the results in Fig. 4(a) we see that, Whether to use coupling GDN (proposed) or original GDN (w.GDN) has negligible influence on RD performance. Thus coupling GDN is an efficient and effective replacement for GDN in idempotent compression framework.
## 5 Related Work
The idempotence has been a crucial consideration for lossy image codecs. For image codecs without prediction encoding like JPEG (Wallace, 1991) and JPEG2000 (Taubman et al., 2002), idempotence is naturally assured as invertible encoding transforms like DCT or DWT is adopted. As long as the quantization is idempotent (which is true for scalar quantization), the whole codec becomes idempotent (Joshi et al., 2000; Richter et al., 2017).
The idempotence of learned image compression is firstly studied by (Kim et al., 2020), which proposes a near-idempotence solution that alleviates re-compression loss but does not eliminate it. (Cai et al., 2022) further improves over (Kim et al., 2020) while it is not able to achieve strict idempotence. (Helminger et al., 2021) is the first LIC that achieves idempotence, using fully invertible normalizing flow as encoding transform. However, it's RD performance dramatically falls behind modern LIC.
Our work is also related to SurVAE flow (Nielsen et al., 2020). On the one hand, the idempotence of LIC requires the encoding transform to be surjective, which is similar to the goal of SurVAE flow. The difference is, we consider deterministic right inverse, whereas SurVAE flow considers stochastic right inverse. On the other hand, the techniques proposed in this paper, such as blocked convolution and null-space enhancement, can be used to improve SurVAE flow.
## 6 Discussion & Conclusion
To conclude, we first identify that invertibility is sufficient but not necessary for idempotent codec, and it can be instead relaxed to right-invertibility. Based on this identification, we investigate the practical implementation of right-inverse with efficiency and expressiveness. Empirically, We show that the proposed method achieves state-of-the-art RD performance among idempotent codecs. Furthermore,our codec can be easily relaxed into a near-idempotent codec, which also achieves state-of-the-art re-compression performance among near-idempotent codecs.
For future work, one possible direction is constructing right-invertible transform without function composition. Currently, the right-invertible transform is constructed using composition of surjections. Such construction strictly restricts the latent dimension to be non-increasing throughout the transform. This restriction is conflict with the prevalent design logic of neural network and is detrimental to expressiveness. It would be interesting to see if this restriction could be removed and how much the performance could be improved.
## Acknowledgements
Funded by Baidu Inc. through Apollo-AIR Joint Research Center.
## References
* M. J. Balle, V. Laparra, and E. P. Simoncelli (2017)End-to-end optimized image compression. In 5th International Conference on Learning Representations, ICLR 2017, Cited by: SS1.
* J. Balle, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston (2018)Variational image compression with a scale hyperprior. In International Conference on Learning Representations, Cited by: SS1.
* J. Begaint, F. Racape, S. Feltman, and A. Pushparaja (2020)Compressai: a pytorch library and evaluation platform for end-to-end compression research. arXiv preprint arXiv:2011.03029. Cited by: SS1.
* B. Bross, Y. Wang, Y. Ye, S. Liu, J. Chen, G. J. Sullivan, and J. Ohm (2021)Overview of the versatile video coding (vvc) standard and its applications. IEEE Transactions on Circuits and Systems for Video Technology31 (10), pp. 3736-3764. Cited by: SS1.
* S. Cai, Z. Zhang, L. Chen, L. Yan, S. Zhong, and X. Zou (2022)High-fidelity variable-rate image compression via invertible activation transformation. In Proceedings of the 30th ACM International Conference on Multimedia, pp. 2021-2031. Cited by: SS1.
* Z. Cheng, H. Sun, M. Takeuchi, and J. Katto (2020)Learned image compression with discretized gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7939-7948. Cited by: SS1.
* L. Dinh, J. Sohl-Dickstein, and S. Bengio (2016)Density estimation using real nvp. arXiv preprint arXiv:1605.08803. Cited by: SS1.
* R. Franzen (1999)Kodak lossless true color image suite. source: http://r0k. us/graphics/kodak4 (2). Cited by: SS1.
* D. He, Y. Zheng, B. Sun, Y. Wang, and H. Qin (2021)Checkerboard context model for efficient learned image compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14771-14780. Cited by: SS1.
* D. He, Z. Yang, W. Peng, R. Ma, H. Qin, and Y. Wang (2022)Elic: efficient learned image compression with unevenly grouped space-channel contextual adaptive coding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5718-5727. Cited by: SS1.
* L. Helminger, A. Djelouah, M. Gross, and C. Schroers (2021)Lossy image compression with normalizing flows. In Neural Compression: From Information Theory to Applications-Workshop@ ICLR 2021, Cited by: SS1.
R. L. Joshi, M. Rabbani, and M. A. Lepley. Comparison of multiple compression cycle performance for jpeg and jpeg 2000. In _Applications of Digital Image Processing XXIII_, volume 4115, pages 492-501. SPIE, 2000.
* Kim et al. (2020) J.-H. Kim, S. Jang, J.-H. Choi, and J.-S. Lee. Instability of successive deep image compression. In _Proceedings of the 28th ACM International Conference on Multimedia_, pages 247-255, 2020.
* Kingma and Dhariwal (2018) D. P. Kingma and P. Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. _Advances in neural information processing systems_, 31, 2018.
* Kuznetsova et al. (2020) A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, A. Kolesnikov, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. _International Journal of Computer Vision_, 128(7):1956-1981, 2020.
* Lezcano-Casado (2019) M. Lezcano-Casado. Trivializations for gradient-based optimization on manifolds. In _Advances in Neural Information Processing Systems, NeurIPS_, pages 9154-9164, 2019.
* Lane (2013) S. Mac Lane. _Categories for the working mathematician_, volume 5. Springer Science & Business Media, 2013.
* Minnen et al. (2018) D. Minnen, J. Balle, and G. D. Toderici. Joint autoregressive and hierarchical priors for learned image compression. _Advances in neural information processing systems_, 31, 2018.
* Minnen et al. (2020) D. Minnen, S. Singh, and J. Balle. Channel-wise autoregressive entropy models for learned image compression. In _2020 IEEE International Conference on Image Processing (ICIP)_, pages 3339-3343. IEEE, 2020.
* Nielsen et al. (2020) D. Nielsen, P. Jaini, E. Hoogeboom, O. Winther, and M. Welling. Survae flows: Surjections to bridge the gap between vaes and flows. _Advances in Neural Information Processing Systems_, 33:12685-12696, 2020.
* Papamakarios et al. (2021) G. Papamakarios, E. Nalisnick, D. J. Rezende, S. Mohamed, and B. Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. _The Journal of Machine Learning Research_, 22(1):2617-2680, 2021.
* Randall (1998) K. H. Randall. _Cilk: Efficient multithreaded computing_. PhD thesis, Massachusetts Institute of Technology, 1998.
* Richter et al. (2017) T. Richter, J. Keinert, A. Descampe, G. Rouvroy, and A. Willeme. Multi-generation-robust coding with jpeg xs. In _2017 IEEE International Symposium on Multimedia (ISM)_, pages 6-13. IEEE, 2017.
* Schwab et al. (2019) J. Schwab, S. Antholzer, and M. Haltmeier. Deep null space learning for inverse problems: convergence analysis and rates. _Inverse Problems_, 35(2):025008, 2019.
* Sullivan et al. (2012) G. J. Sullivan, J.-R. Ohm, W.-J. Han, and T. Wiegand. Overview of the high efficiency video coding (hevc) standard. _IEEE Transactions on circuits and systems for video technology_, 22(12):1649-1668, 2012.
* Taubman et al. (2002) D. S. Taubman, M. W. Marcellin, and M. Rabbani. Jpeg2000: Image compression fundamentals, standards and practice. _Journal of Electronic Imaging_, 11(2):286-287, 2002.
* Wallace (1991) G. K. Wallace. The jpeg still picture compression standard. _Communications of the ACM_, 34(4):30-44, 1991.
* Wang et al. (2022a) Y. Wang, Y. Hu, J. Yu, and J. Zhang. Gan prior based null-space learning for consistent super-resolution. _arXiv preprint arXiv:2211.13524_, 2022a.
* Wang et al. (2022b) Y. Wang, J. Yu, and J. Zhang. Zero-shot image restoration using denoising diffusion null-space model. _arXiv preprint arXiv:2212.00490_, 2022b.
**Appendices**
## Appendix A Additional Explanation for the Methods
### Complexity Analysis of Block Convolution
To discuss this problem, consider a 2-dimension convolution with kernel size \(K\times K\), stride \(S\times S\), input spatial size \(H\times W\), input channel \(C_{i}\) and output channel \(C_{o}\). Assuming the in-place parallel matrix multiplication[Randall, 1998] is used.
For serial method, as is described in Line 104-109, we first subtract the influence of the already-solved pixels, which takes \(O(C_{i}K^{2})\) time. Then we solve for the rest pixels, which takes \(O(C_{o})\) time. The point is, this subtract-then-solve procedure needs to be done \(O(HW/S^{2})\) times, in serial, so the overall complexity is \(O(\frac{HW}{S^{2}}(C_{i}K^{2}+C_{o}))\).
For parallel method implemented with the proposed blocked convolution, we can solve for all pixels with \(O(C_{o})\) time complexity.
We can see that, compared with the parallel method, time complexity of serial method is forbiddingly higher and not viable for practical usages. In terms of re-compression performance, both serial method and parallel method are right-inverse, so idempotence can be achieved for both.
### More on Null-space Enhancement
#### Parameterization of Surjective Linear Transform
For a linear transform with kernel \(K\in\mathbb{R}^{D\times d}\) to be surjective, \(K\) must have full column rank, i.e., \(d\) must be less or equal to \(D\) and the rank of \(K\) is \(d\). The vanilla parameterization of \(K\) as a matrix cannot ensure this property. Thus, we use singular value decomposition (SVD) parameterization to ensure that \(K\) is surjective. Specifically, \(K\) is decomposed as \(K=USV^{T}\), where \(U\) is a \(D\times d\) orthonormal matrix, \(S\) is a \(d\times d\) diagonal matrix with non-zero diagonal elements, and \(V\) is a \(d\times d\) orthonormal matrix. With this decomposition, \(K\) is guaranteed to have full column rank, and is thus surjective. Accordingly, Eq. 10 is parameterized as
\[X=YVS^{-1}U^{T}+f(Y)(I-UU^{T}) \tag{15}\]
where \(S^{-1}\) is the inverse of non-zero diagonal matrix \(S\). For orthonormal matrix \(U\) and \(V\), we adopt the parameterization in [Lezcano-Casado, 2019]. The matrix \(S\) is diagonal and trivial to parameterize. To avoid arithmetic overflow, we restrict the diagonal elements of \(S\) to be within \([0.1,10]\).
#### Derivation of Null-space equality
\(\forall F\in\mathbb{R}^{b\times D}\), \(X=YK^{+}K+F(I-KK^{+})\) is a solution to \(Y=XK\).
Proof.: From the property of Moore-Penrose pseudo-inverse [Moo] we know that
\[KK^{+}K=K. \tag{16}\]
Additionally, for a orthonormal \(K\) (with linearly independent columns), we have
\[K^{+}K=I. \tag{17}\]
Then, right-multiply \(X(Y;K)\) by \(K\), we get
\[\begin{split} XK&=YK^{+}K+F(I-KK^{+})K\\ &=Y+FK-FK=Y.\end{split} \tag{18}\]
## Appendix B More Experimental Results
### More Experiment Setup
The detailed encoding and decoding transform is illustrated in Fig. 5. To extend the idempotent framework to near-idempotent framework, we change the first blocked convolution in the encodingFigure 5: Detailed encoding and decoding transform for the proposed idempotent and near-idempotent framework. **blocked** refers to the proposed blocked convolution. The (input, output) channels are annotated in the brackets, and stride is annotated using down-arrow. **NE** refers to the proposed null-space enhancement, and \(f\) is learned parametric function. Here RBU/RBD are the residual block used for upsampling/downsampling (Fig. 6(b)), respectively. **c-enhance** refers to the proposed coupling enhancement using coupling structure (Fig. 6(a)). The kernel size of the convolution is annotated in the brackets. **c-GDN** refers to the proposed right-invertible normalization using coupling structure (Fig. 6(a)). Blanked rectangular refer to the right-inverse/inverse of the corresponding layer, and has no additional weights.
Figure 6: Submodules used in the proposed framework: (a) coupling structure used in c-enhance and c-GDN; (b) residual block downsampling (RBD) and residual block upsampling (RBU) used in the \(f(\cdot)\) of null-space enhancement.
transform to non-surjective by increasing its output channel number from 10 to 128. Since this blocked convolution is no longer surjective, it is no longer right-invertible. However, its corresponding layer in the decoding transform can be make surjective and right-invertible. Thus, we make its corresponding layer in the decoding transform surjective, and use the null-space enhancement on the encoding side.
Coupling structure [11] used in coupling enhancement (c-enhance) and coupling GDN (c-GDN) is illustrated in Fig. 6(a). For c-enhance, the scale \(s(\cdot)\) and translation \(t(\cdot)\) are convolution. For c-GDN, the scale \(s(\cdot)\) and translation \(t(\cdot)\) are GDN [10]. Following the usage guideline in [11], we concatenate two coupling structure with the opposite way of splitting in one c-enhance/c-GDN.
We provide additional experiments on (a) extendability of the proposed idempotent framework (b) functionality of different components of the proposed near-idempotent framework. Specifically, to demonstrate the extendability of the proposed idempotent framework, we replace GDN with residual blocks as suggested by [1] (RB\(\times\)1 for one residual block, or RB\(\times\)3 for three residual blocks), and report the RD performance in Fig.7(a). It is clear that, replacing GDN with the more recent residual blocks would also improve our framework to a similar degree. Thus our proposed framework is compatible with the recent advances in LIC and has good extendability.
To analysis the functionality of different components of the proposed near-idempotent framework, we test what happens if the modification to each component is not applied, and report the PSNR drop during re-compression in Fig.7(b). Specifically, we test keeping the GDN layers unchanged (w. gdn ) or keeping the convolution layers unchanged (w. conv). Keeping both the GDN layers and the convolution layers would reduce to baseline Balle2018. The results shows that keeping more layers unchanged may slightly improve first-time compression performance, but is evidently harmful to re-compression performance. The proposed near-idempotent framework has the best re-compression performance among these settings.
We include the implement of other codecs as follows. For codecs that have open-source implementations, we use that implementation. For codecs that do not have open-source implementations, we either use the data provided in the paper, or re-implement by ourselves if the detailed architecture is provided.
* Implementations from CompressAI [1]: Balle2017[1], Balle2018[1], Cheng2020[2], JPEG2000[12], BPG444[14], VTM444[15]
* Data from the original papers: Helminger2021[1], Cai2022[16]
* Our re-implementation: Kim2020[14]. Specifically, we re-implement the FI loss proposed in this work on Balle2018 [1].
Figure 7: Additional experiments on: (a) extendability of the proposed idempotent framework (b) functionality of different components of the proposed near-idempotent framework.
### More Quantitative & Qualitative Results
See Fig. 8-11 for more quantitative results.
See Fig. 12-15 for more qualitative results.
## Appendix C More Discussion
### Limitation
In this work, the surjective encoding transform is constructed using function composition of simple surjections. This construction strategy limits the latent dimension to be non-increasing throughout the encoding transform. This limitation contradicts the mainstream design logic of neural network, and is harmful to expressiveness.
A function composition of surjections is always a surjection, but a surjection needs not to be a function composition of surjections (Mac Lane, 2013). Thus this restriction could be lifted by more advanced construction strategy of surjection.
### Broader Impact
Improve the rate-distortion of re-compression has positive social impact. Re-compression constantly happens in the transmission and redistribution of image data. Reducing the bitrate can save the resources, energy and the carbon emission during these processes.
### Reproducibility Statement
All theoretical results are proven in Appendix. A. For experimental results, all the datasets used are publicly available, and the implementation details are provided in Appendix. B. Furthermore, the source code for reproducing experimental results are provided in supplementary materials.
Figure 8: **PSNR-BPP curve on Kodak.** Idempotent codecs are marked as idemp, near-idempotent codec are marked as near id and non-idempotent codecs are marked as non id. First-time compression performance is plotted in dotted line, and re-compression performance (upto 50 times) is plotted in solid line.
Figure 9: **MSSSIM-BPP curve on Kodak.** Idempotent codecs are marked as idemp, near-idempotent codec are marked as near id and non-idempotent codecs are marked as non id. First-time compression performance is plotted in dotted line, and re-compression performance (upto 50 times) is plotted in solid line. All models are optimized for minimizing MSE.
Figure 11: **MS-SSIM** drop upto 50 re-compression on **Kodak**. Idempotent codecs are marked as idemp, near-idempotent codec are marked as near id and non-idempotent codecs are marked as non id. First-time MS-SSIM is annotated in (\(\cdot\)). All models are optimized for minimizing MSE.
Figure 10: **PSNR** drop upto 50 re-compression on **Kodak**. Idempotent codecs are marked as idemp, near-idempotent codec are marked as near id and non-idempotent codecs are marked as non id. First-time PSNR is annotated in (\(\cdot\)).
Figure 12: Qualitative comparison on reconstructed kodim06 image after 50 times re-compression.
Figure 13: Qualitative comparison on reconstructed kodim11 image after 50 times re-compression.
Figure 14: Qualitative comparison on reconstructed kodim13 image after 50 times re-compression.
Figure 15: Qualitative comparison on reconstructed kodim24 image after 50 times re-compression. | ## Review
### Summary
This paper proposes a novel method for achieving idempotent image compression by utilizing right-inverse transforms instead of relying solely on traditional invertible models. It highlights the challenges faced in learned image compression and demonstrates that right-invertibility suffices for idempotent compression. The framework introduces improvements such as blocked convolutions and addresses several key aspects of the compression process, including GDN layers and quantization. Overall, the work presents state-of-the-art results in the context of idempotent image codecs, while also exploring near-idempotent compression strategies. The experimental results support the proposed methods, showcasing their effectiveness in re-compression scenarios.
### Strengths
- Novel approach using right-inverse transforms for idempotent compression.
- State-of-the-art results for idempotent image codecs.
- Well-organized paper with clear motivation and detailed descriptions.
- Inclusion of an ablation study demonstrating the importance of individual contributions.
- Code provided for reproducibility.
### Weaknesses
- Transition from function to matrix notation is abrupt and may confuse readers.
- Lack of clarity in the explanation of the blocked convolution and its necessity.
- Insufficient detail on the right-inverse and the learning process for certain equations.
- Limited discussion on computational complexity and efficiency comparisons with existing methods.
- The near-idempotent idea is introduced but not adequately analyzed for performance comparisons.
- Old baselines used for comparison may not reflect current state-of-the-art.
- Lack of comprehensive analysis on the limitations and impact of assumptions in the framework.
### Questions
- How does the blocked rearrangement improve time complexity compared to serial methods?
- Can the proposed modules be integrated into existing learned image compression frameworks without losing performance?
- What insights can be provided regarding the near-idempotent codec's first-time compression performance?
- How does the preprocessing assumption of zero mean and unit variance affect performance?
### Soundness
**Score:** 3
**Description:** 3 = good; the foundational ideas are sound, but some arguments lack sufficient evidence and clarity.
### Presentation
**Score:** 2
**Description:** 2 = fair; while the paper is generally well-organized, some sections are difficult to follow due to abrupt transitions and unclear explanations.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper presents a novel approach with potential significance, although certain contributions require better clarification and support.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact potential, but it requires some improvements regarding clarity and comparison to current methods.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel and technically sound approach to idempotent image compression, demonstrating significant contributions to the field. Despite some weaknesses in presentation and clarity, the overall impact and state-of-the-art results justify acceptance. The authors should address the highlighted weaknesses in the final version to enhance the manuscript.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics
Liming Wu\({}^{1,2}\), Zhichao Hou\({}^{3}\), Jirui Yuan\({}^{4}\), Yu Rong\({}^{5}\), Wenbing Huang\({}^{1,2}\)
\({}^{1}\)Gaoling School of Artificial Intelligence, Renmin University of China
\({}^{2}\)Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China
\({}^{3}\)Department of Computer Science, North Carolina State University
\({}^{4}\)Institute for AI Industry Research (AIR), Tsinghua University \({}^{5}\)Tencent AI Lab
{maniliowu,hwenbing}@ruc.edu.cn [email protected]
[email protected] [email protected]
Equal contribution.Corresponding author.
###### Abstract
Learning to represent and simulate the dynamics of physical systems is a crucial yet challenging task. Existing equivariant Graph Neural Network (GNN) based methods have encapsulated the symmetry of physics, _e.g._, translations, rotations, etc, leading to better generalization ability. Nevertheless, their frame-to-frame formulation of the task overlooks the non-Markov property mainly incurred by unobserved dynamics in the environment. In this paper, we reformulate dynamics simulation as a spatio-temporal prediction task, by employing the trajectory in the past period to recover the Non-Markovian interactions. We propose Equivariant Spatio-Temporal Attentive Graph Networks (ESTAG), an equivariant version of spatio-temporal GNNs, to fulfill our purpose. At its core, we design a novel Equivariant Discrete Fourier Transform (EDFT) to extract periodic patterns from the history frames, and then construct an Equivariant Spatial Module (ESM) to accomplish spatial message passing, and an Equivariant Temporal Module (ETM) with the forward attention and equivariant pooling mechanisms to aggregate temporal message. We evaluate our model on three real datasets corresponding to the molecular-, protein- and macro-level. Experimental results verify the effectiveness of ESTAG compared to typical spatio-temporal GNNs and equivariant GNNs.
## 1 Introduction
It has been a goal to represent and simulate the dynamics of physical systems by making use of machine learning techniques [37, 8, 12]. The related studies, once pushed forward, have great potential to facilitate a variety of downstream scientific tasks including Molecular Dynamic (MD) simulation [19], protein structure prediction [1], virtual screening of drugs and materials [30], model-based robot planning/control [35], and many others.
Plenty of solutions have been proposed, amongst which the usage of Graph Neural Networks (GNNs) [41] becomes one of the most desirable directions. GNNs naturally model particles or unit elements as nodes, physical relations as edges, and the latent interactions as the message passing thereon. More recently, a line of researches [38, 11, 21, 9, 32, 20] have been concerned with generalizing GNNs to fit the symmetry of our physical world. These works, also known as equivariant GNNs [18], ensure that translating/rotating/reflecting the geometric input of GNNs results in the output transformed in the same way. By handcrafting such Euclidean equivariance, themodels are well predictable to scenarios under arbitrary coordinate systems, giving rise to enhanced generalization ability.
In spite of the fruitful progress, existing methods overlook a vital point: _the observed physical dynamics are almost non-Markovian_. In previous methods, they usually take as input the conformation of a system at a single temporal frame and predict as output the future conformation after a fixed time interval, forming a frame-to-frame forecasting problem. Under the Markovian assumption, this setting is pardonable as future frames are independent of all other past frames given the input one. Nevertheless, the Markovian assumption is rather unrealistic when there are other unobserved objects interacting with the system we are simulating [40]. For instance, when we consider simulating the dynamics of a protein that is interacting with an unobserved solvent (such as water), the Markovian property no longer holds; in other words, even conditional on the current frame, the future dynamics of the protein depends on the current state of the solvent which, however, is influenced by the past states of the protein itself, owing to the interaction between the protein and the solvent. The improper Markovian assumption makes current works immature in dynamics modeling.
To relieve from the Markovian assumption, this paper proposes to employ the states in the past period to reflect the latent and unobserved dynamics. In principle, we can recover the non-Markovian behavior (_e.g._ interacting with a solvent) if the past period is sufficiently long. We collect a period of past system states as spatio-temporal graphs, and utilize them as the input to formulate a spatio-temporal prediction task, other than the frame-to-frame problem as usual (see Figure 1). This motivates us to leverage existing Spatio-Temporal GNNs (STGNNs) [44] to fulfill our purpose, which, unfortunately, are unable to conform to the aforementioned Euclidean symmetry and the underlying physical laws. Hence, the equivariant version of STGNNs is no doubt in demand. Another point is that periodic motions are frequently observed in typical physical systems [3]. For example, the Aspirin molecular exhibits clear periodic thermal vibration when binding to a target protein. Under the spatio-temporal setting, we are able to model periodicity of dynamics, which, however, is less investigated before.
Our contributions are summarized as follows:
* We reveal the non-Markov behavior in physical dynamics simulation by developing equivariant spatio-temporal graph models. The proposed model dubbed Equivariant Spatio-Temporal Attentive Graph network (ESTAG) conforms to Euclidean symmetry and alleviates the limitation of the Markovian assumption.
* We design a novel Equivariant Discrete Fourier Transform (EDFT) to extract periodic features from the dynamics, and then construct an Equivariant Spatial Module (ESM), and an Equivariant Temporal Module (ETM) with forward attention and equivariant pooling, to process spatial and temporal message passing, respectively.
* The effectiveness of ESTAG is verified on three real datasets corresponding to the molecular-, protein- and macro-level. We prove that, involving both temporal memory and equivariance are advantageous compared to typical STGNNs and equivariant GNNs with adding trivial spatio-temporal aggregation.
## 2 Related Work
**GNNs for Physical Dynamics Modeling** Graph Neural Networks (GNNs) have shown great potential in physical dynamics modeling. IN [2], NRI [23], and HRN [29] are a series of works to learn physical
Figure 1: Comparison of the problem setting between previous methods and our paper. Here, we choose the dynamics of the Aspirin molecular with time lag as 1 for illustration.
system interaction and evolution. Considering the energy conservation and incorporating physical prior knowledge into GNNs, HNN [15], HOGN [31] leverage ODE and Hamiltonian mechanics to capture the interactions in the systems. However, all of the above-mentioned models don't take into account the underlying symmetries of a system. In order to introduce Euclidean equivariance, Tensor-Field networks (TFN) [38] and SE(3)-Transformer [11] equip filters with rotation equivariance by irreducible representation of the SO(3) group. LieTransformer [21]and LieConv [9] leverage the Lie group to enforce equivariance. Besides, EGNN [32] utilized a simpler E(n)-equivariant framework which can achieve competitive results without computationally expensive information. Based on EGNN, GMN [20] further proposes an equivariant and constraint-aware architecture by making use of forward kinematics information in physical system. Nevertheless, all of these methods ignore the natural spatiotemporal patterns of physical dynamics and model it as a frame-to-frame forecasting problem.
**Spatio-temporal graph neural networks** STGNNs [22] aim to capture the spatial and temporal dependency simultaneously and are widely investigated in various applications like traffic forecasting and human recognition. DCRNN [26] and GaAN [45] are RNN-based methods which filter inputs and hidden states passed to recurrent unit using graph convolutions. However, RNN-based approaches are time-consuming and may suffer from gradient explosion/vanishing problem. CNN-based approaches, such as STGCN [44] and ST-GCN [43], interleave 1D-CNN layers with graph convolutional layers to tackle spatial temporal graphs in a non-recursive manner. Besides, attention mechanism, an important technique STGCNs, is employed by GaAN [45], AGL-STAN [36] and ASTGCN [16] to learn dynamic dependencies in both space and time domain. The aforementioned approaches are targeted to the applications in 2D graph scenarios such as traffic networks and human skeleton graphs, and may not be well applicable to 3D physical systems in which geometric equivariance is a really important property.
**Equivariant spatio-temporal graph neural networks** There are few previous works designing spatio-temporal GNNs while maintaining equivariance. Particularly, by using GRU [6] to record the memory of past frames, LoCS [24] additionally incorporates rotation-invariance to improve the model's generalization ability. Different from the recurrent update mechanism used in LoCs, EqMotion [42] distills the history trajectories of each node into a multi-dimension vector, by which the spatio-temporal graph is compressed as a spatial graph, then it designs an equivariant module and an interaction reasoning module to predict future frames. However, both of LoCS and EqMotion are still defective in exploring the interactions among history trajectories, while in this paper we propose a transformer-alike architecture to fully leverage the spatio-temporal interactions based on equivariant attentions.
## 3 Notations and Task Definition
The dynamics of physical objects (such as molecules) can be formulated with the notion of spatiotemporal graphs, as shown in Figure 1 (left). In particular, a spatiotemporal graph of node number \(N\) and
Figure 2: Schematic overview of ESTAG. After inputting historical graph trajectories \(\mathcal{G}_{0},...,\mathcal{G}_{T-1}\), Equivariant Discrete Fourier Transform (EDFT) extracts equivariant frequency features \(\tilde{\mathbf{f}}\) from the trajectory. We process them into the invariant node-wise feature \(\mathbf{c}\) and adjacency matrix \(\mathbf{A}\) to be adopted for the next stage. Then we stack Equivariant Spatial Module (ESM) and Equivariant Temporal Module (ETM) alternatively for \(L\) times to explore spatial and temporal dependencies. After the equivariant temporal pooling layer, we obtain the estimated position \(\mathbf{\tilde{x}}^{*}(T)\).
temporal length \(T\) is denoted as \(\{\mathcal{G}_{t}=(\mathcal{V}_{t},\mathcal{E})\}_{t=0}^{T-1}\). Here, the nodes \(\mathcal{V}_{t}\) are shared across time, but each node \(i\) is assigned with different scalar feature \(\mathbf{h}_{i}(t)\in\mathbb{R}^{c}\), position vector \(\mathbf{\vec{x}}_{i}(t)\in\mathbb{R}^{3}\) at different temporal frame \(t\); the edges \(\mathcal{E}\) are associated with an identical adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N\times m}\) whose element \(\mathbf{A}_{ij}\in\mathbb{R}^{m}\) defines the edge feature between node \(i\) and \(j\). We henceforth denote by the matrices \(\mathbf{H}(t)\in\mathbb{R}^{c\times N}\) and \(\mathbf{\vec{X}}(t)\in\mathbb{R}^{3\times N}\) the collection of all nodes in \(\mathcal{G}_{t}\). Here for simplicity, we specify the time lag as \(\Delta t=1\) which could be valued more than 1 in practice. In general, the \(t\)-th frame corresponds to time \(T-t\Delta t\).
**Task Definition** This paper is interested in predicting the physical state, particularly the position of each node at frame \(T\) given the historical graph series \(\{\mathcal{G}_{t}\}_{t=0}^{T-1}\). In form, we learn the function \(\phi\):
\[\{(\mathbf{H}(t),\mathbf{\vec{X}}(t),\mathbf{A})\}_{t=0}^{T-1}\overset{\phi}{ \rightarrow}\mathbf{\vec{X}}(T). \tag{1}\]
Although previous approaches such as EGNN [32] and GMN [20] also claim to tackle physical dynamics modeling, they neglect the application of spatiotemporal patterns and only accomplish frame-to-frame prediction, where the input of \(\phi\) is reduced to a single frame, _e.g._, \(\mathcal{G}_{T-1}\) in Eq. 1.
**Equivariance** A crucial constraint on physical dynamics is that the function \(\phi\) should meet the symmetry of our 3D world. In other words, for any translation/rotation/reflection \(g\) in the group E(3), \(\phi\) satisfies:
\[\phi(\{(\mathbf{H}(t),g\cdot\mathbf{\vec{X}}(t),\mathbf{A})\}_{t=0}^{T-1})=g \cdot\mathbf{\vec{X}}(T), \tag{2}\]
where the group action \(\cdot\) is instantiated as \(g\cdot\mathbf{\vec{X}}(t):=\mathbf{\vec{X}}(t)+\mathbf{b}\) for translation \(\mathbf{b}\in\mathbb{R}^{3}\) and \(g\cdot\mathbf{\vec{X}}(t):=\mathbf{O}\mathbf{\vec{X}}(t)\) for rotation/reflection \(\mathbf{O}\in\mathbb{R}^{3\times 3}\).
## 4 Our approach: ESTAG
Discovering informative spatiotemporal patterns from the input graph series is vital to physical dynamics modeling. In this section, we introduce ESTAG that pursues this goal in a considerate way: We first extract the frequency of each node's trajectory by EDFT (SS 4.1), which captures the node-wise temporal dynamics in a global sense and returns important features for the next stage; We then separately characterize the spatial dependency among the nodes of each input graph \(\mathcal{G}_{t}\) via ESM (SS 4.2); We finally unveil the temporal dynamics of each node through the attention-based mechanism ETM and output the estimated position of each node in \(\mathcal{G}_{T}\) after an equivariant temporal pooling layer (SS 4.3). The overall architecture is shown in Figure 2.
### Equivariant Discrete Fourier Transform (EDFT)
The Fourier Transform (FT) gives us insight into the wave frequencies contained in the input signal that is usually periodic. With the extracted frequencies, we are able to view the global behavior of each node \(i\) in different frequency domains. Conventional multidimensional FT employs distinct Fourier bases for different input dimensions of the original signals. Here, to ensure equivariance, we first translate the signals by the mean position and then adopt the same basis over the spatial dimension. To be specific, we compute equivariant DFT as follows:
\[\mathbf{\vec{f}}_{i}(k)=\sum_{t=0}^{T-1}e^{-i^{\prime}\frac{2\pi}{T} kt}\ \left(\mathbf{\vec{x}}_{i}(t)-\mathbf{\vec{\mathbf{x}}(t)}\right), \tag{3}\]
where, \(i^{\prime}\) is the imaginary unit, \(k=0,1,\cdots,T-1\) is the frequency index, \(\mathbf{\vec{\mathbf{x}}}(t)\) is the average position of all nodes in the \(t\)-th frame \(\mathcal{G}_{t}\), and the output \(\mathbf{\vec{f}}_{i}(k)\in\mathbb{C}^{3}\) is complex. The frequencies calculated by Eq. 3 are then utilized to formulate two crucial quantities: the frequency cross-correlation \(\mathbf{A}_{ij}\in\mathbb{R}^{T}\) between node \(i\) and \(j\), and the frequency amplitude \(\mathbf{c}_{i}\in\mathbb{R}^{T}\) of node \(i\).
In signal processing, cross-correlation measures the similarity of two functions \(f_{1}\) and \(f_{2}\). It satisfies \(\mathcal{F}\{f_{1}\star f_{2}\}=\overline{\mathcal{F}\{f_{1}\}}\cdot\mathcal{ F}\{f_{2}\}\), where \(\mathcal{F}\) and \(\star\) denote the FT and the cross-correlation operator, respectively, and \(\overline{\mathcal{F}}\) indicates the complex conjugate of \(\mathcal{F}\). Borrowing this idea, we compute the cross-correlation in the frequency domain by Eq. 3 as:
\[\mathbf{A}_{ij}(k)=w_{k}(\mathbf{h}_{i})w_{k}(\mathbf{h}_{j})|\langle\mathbf{\vec{ f}}_{i}(k),\mathbf{\vec{f}}_{j}(k)\rangle|, \tag{4}\]where \(\langle\cdot,\cdot\rangle\) defines the complex inner product. Notably, we have added two learnable parameters \(w_{k}(\mathbf{h}_{i})\) and \(w_{k}(\mathbf{h}_{j})\) dependent on node features, which act like spectral filters of the \(k\)-th frequency and enable us to select related frequency for the prediction. In the next subsection, we will apply \(\mathbf{A}_{ij}\) as the edge feature to capture the relationship between different nodes. We use Aspirin as an example and visualize the \(\mathbf{A}\) in Figure 3.
We further compute for node \(i\) the amplitude of the frequency \(\mathbf{\tilde{f}}_{i}(k)\) along with the parameter \(w_{k}(\mathbf{h}_{i})\):
\[\mathbf{c}_{i}(k)=w_{k}(\mathbf{h}_{i})\|\mathbf{\tilde{f}}_{i}(k)\|^{2}. \tag{5}\]
This term will be used in the update of the hidden features in the next subsection.
A promising property of Eq. 3 is that it is translation invariant and rotation/reflection equivariant. Therefore, both \(\mathbf{A}_{ij}\) and \(\mathbf{c}_{i}\) are E(3)-invariant, which will facilitate the design of following modules.
### Equivariant Spatial Module (ESM)
For each graph \(\mathcal{G}_{t}\), our ESM is proposed to encode its spatial geometry through equivariant message passing. ESM is built upon EGNN [33] which is a prevailing kind of equivariant GNNs, but it has subtly involved the FT features from the last subsection for enhanced performance beyond EGNN.
The \(l\)-th layer message passing in ESM is as below:
\[\mathbf{m}_{ij} =\phi_{m}\left(\mathbf{h}_{i}^{(l)}(t),\mathbf{h}_{j}^{(l)}(t),\|\mathbf{ \tilde{x}}_{ij}^{(l)}(t)\|^{2},\mathbf{A}_{ij}\right), \tag{6}\] \[\mathbf{h}_{i}^{(l+1)}(t) =\mathbf{h}_{i}^{(l)}(t)+\phi_{h}\left(\mathbf{h}_{i}^{(l)}(t),\mathbf{c}_{i },\sum_{j\neq i}\mathbf{m}_{ij}\right),\] (7) \[\mathbf{\tilde{a}}_{i}(t) =\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}\mathbf{\tilde{x }}_{ij}^{(l)}(t)\phi_{x}(\mathbf{m}_{ij}),\] (8) \[\mathbf{\tilde{x}}_{i}^{(l+1)}(t) =\mathbf{\tilde{x}}_{i}^{(l)}(t)+\mathbf{\tilde{a}}_{i}(t), \tag{9}\]
where, \(\phi_{m}\) computes the message \(\mathbf{m}_{ij}\) from node \(j\) to \(i\), \(\phi_{h}\) updates the hidden representation \(\mathbf{h}_{i}\), \(\phi_{x}\) returns a one-dimensional scalar for the update of \(\mathbf{\tilde{a}}_{i}(t)\), and all the above functions are Multi-Layer Perceptrons (MLPs); \(\mathbf{\tilde{x}}_{ij}(t)=\mathbf{\tilde{x}}_{i}(t)-\mathbf{\tilde{x}}_{j}(t)\) is the relative position and \(\mathcal{N}(i)\) denotes the neighborhoods of node \(i\).
Notably, we leverage the cross-correlation \(\mathbf{A}_{ij}\) as the edge feature in Eq. 6 to evaluate the connection between node \(i\) and \(j\) over the global temporal window, since it is computed from the entire trajectory. We also make use of \(\mathbf{c}_{i}\) as the input of the update in Eq. 7. The benefit of considering these two terms will be ablated in our experiments.
Figure 3: Visualization of cross-correlation \(\mathbf{A}\) on Aspirin. EDFT can not only identify strongly-connected nodes (e.g. Node 8 and Node 11), but also discover latent relationship between two nodes which are disconnected yet may have similar structures or functions (e.g. Node 8 and Node 10).
### Equivariant Temporal Module (ETM)
**Forward Temporal Attention** Inspired by the great success of Transformer [39] in sequence modeling, we develop ETM that describes the self-correspondence of each node's trajectory based on the forward attention mechanism, and more importantly, in an E(3)-equivariant way.
In detail, each layer of ETM conducts the following process:
\[\alpha_{i}^{(l)}(ts) =\frac{\exp(\mathbf{q}_{i}^{(l)}(t)^{\top}\mathbf{k}_{i}^{(l)}(s))}{\sum_ {s=0}^{t}\exp(\mathbf{q}_{i}^{(l)}(t)^{\top}\mathbf{k}_{i}^{(l)}(s))}, \tag{10}\] \[\mathbf{h}_{i}^{(l+1)}(t) =\mathbf{h}_{i}^{(l)}(t)+\sum_{s=0}^{t}\alpha_{i}^{(l)}(ts)\mathbf{v}_{i} ^{(l)}(s),\] (11) \[\mathbf{\bar{x}}_{i}^{(l+1)}(t) =\mathbf{\bar{x}}_{i}^{(l)}(t)+\sum_{s=0}^{t}\alpha_{i}^{(l)}(ts)\mathbf{ \bar{x}}_{i}^{(l)}(ts)\phi_{x}(\mathbf{v}_{i}^{(l)}(s)), \tag{12}\]
where, \(\alpha_{ts}\) is the attention weight between time \(t\) and \(s\), computed by the query \(\mathbf{q}_{t}\) and key \(\mathbf{k}_{s}\); the hidden feature \(\mathbf{h}_{i}(t)\) is updated as a weighted combination of the value \(\mathbf{v}_{s}\); the position vector \(\mathbf{\bar{x}}_{i}(t)\) is derived from a weighted combination of a one-dimensional scalar \(\phi_{x}(\mathbf{v}_{s})\) multiplied with the temporal displace vector \(\mathbf{\bar{x}}_{i}^{(l)}(ts)=\mathbf{\bar{x}}_{i}^{(l)}(t)-\mathbf{\bar{x}}_{i}^{(l)}(s)\). Specifically, \(\mathbf{q}_{i}^{(l)}(t)=\phi_{q}\left(\mathbf{h}_{i}^{(l)}(t)\right)\), \(\mathbf{k}_{i}^{(l)}(t)=\phi_{k}\left(\mathbf{h}_{i}^{(l)}(t)\right)\) and \(\mathbf{v}_{i}^{(l)}(t)=\phi_{v}\left(\mathbf{h}_{i}^{(l)}(t)\right)\) are all E(3)-invariant functions. Notably, we derive a particle's next position in a forward-looking way, to keep physical rationality as the derivation of current state should not be dependent on future positions.
**Equivariant Temporal Pooling** We alternate one-layer ESM and one-layer ETM over \(L\) layers, and finally attain the updated coordinates \(\mathbf{\bar{x}}_{i}^{(L)}\in\mathbb{R}^{T\times 3}\) for each node \(i\). Then the predicted coordinates at time \(T\) is given by the following equivariant linear pooling:
\[\mathbf{\bar{x}}_{i}^{*}(T)=\mathbf{\hat{X}}_{i}\mathbf{w}+\mathbf{\bar{x}}_{i}^{(L)}(T-1), \tag{13}\]
where the parameter \(\mathbf{w}\in\mathbb{R}^{(T-1)}\) consists of learnable weights, and \(\mathbf{\hat{X}}_{i}=[\mathbf{\bar{x}}_{i}^{(L)}(0)-\mathbf{\bar{x}}_{i}^{(L)}(T-1),\mathbf{ \bar{x}}_{i}^{(L)}(1)-\mathbf{\bar{x}}_{i}^{(L)}(T-1),\cdots,\mathbf{\bar{x}}_{i}^{(L) }(T-2)-\mathbf{\bar{x}}_{i}^{(L)}(T-1)]\) is translated by \(\mathbf{\bar{x}}_{i}^{(L)}(T-1)\) to allow translation invariance.
We train ESTAG end-to-end via the mean squared error (MSE) loss:
\[\mathcal{L}=\sum_{i=1}^{N}\|\mathbf{\bar{x}}_{i}(T)-\mathbf{\bar{x}}_{i}^{*}(T)\|_{2} ^{2}. \tag{14}\]
By the design of EDFT, ESM and ETM, we have the following property of our model ESTAG.
**Theorem 4.1**.: _We denote ESTAG as \(\mathbf{\bar{X}}(T)=\phi\left(\{(\mathbf{H}(t),g\cdot\mathbf{\bar{X}}(t),\mathbf{A})\}_{t=0}^{ T-1}\right)\), then \(\phi\) is E(3)-equivariant._
Proof.: _See Appendix A._
Although we mainly exploit EGNN as the backbone (particularly in ESM), our framework is general and can be easily extended to other equivariant GNNs, such as GMN [20], the multi-channel version of EGNN. In general, the extended models deal with multi-channel coordinate \(\mathbf{\bar{Z}}\in\mathbb{R}^{3\times m}\) instead of \(\mathbf{\bar{x}}\in\mathbb{R}^{3}\). The most significant feature of these models is to replace the invariant scalar \(\|\mathbf{\bar{x}}\|^{2}\) in the formulations of the message \(\mathbf{m}_{ij}\) (Eq. 6) with the term \(\mathbf{\bar{Z}}^{\top}\mathbf{\bar{Z}}\). It is easily to prove that this term is equivariant to any orthogonal matrix \(\mathbf{O}\), \(i.e.\), \((\mathbf{O}\mathbf{\bar{Z}})^{\top}(\mathbf{O}\mathbf{\bar{Z}})=\mathbf{\bar{Z}}^{\top}\mathbf{\bar{Z} },\forall\mathbf{O}\in\mathbb{R}^{3\times 3},\mathbf{O}^{\top}\mathbf{O}=\mathbf{I}\). Besides, it can be reduced to invariant scalar \(\|\mathbf{\bar{x}}\|^{2}\) when \(m=1\). Empirically, we add the normalization term in order to achieve more stable performance: \(\frac{\mathbf{\bar{Z}}^{\top}\mathbf{\bar{Z}}}{\|\mathbf{\bar{Z}}^{\top}\mathbf{\bar{Z}}\|_{F}}\), where \(\|\cdot\|_{F}\) is the Frobenius norm. In the above derivation, we only display the formulation on EGNN, the details of multi-channel ESTAG are shown in Appendix C.1.
## 5 Experiments
**Datasets.** To verify the superiority of the proposed model, we evaluate our model on three real world datasets: **1)** molecular-level: MD17 [5], **2)** protein-level: AdK equilibrium trajectory dataset [34] and **3)** macro-level: CMU Motion Capture Databse [7]. These datasets involve several continuous long trajectories. Note that all the three datasets contain unobserved dynamics or factors and thus conform to the non-Markovian setting. In particular, the external temperature and pressure are unknown on MD17, the dynamics of water and ions is unobserved on AdK, and the states of the environment are not provided on Motion Capture. The original datasets are composed of long trajectories. We randomize the start point and extract the following \(T+1\) points with the interval \(\Delta t\). We take the first \(T\) timestamps as previous observations and last timestamp as the future position label.
**Baselines.** We compare the performance of ESTAG with several baselines: **1)** We regard the previous observation at start/mid/terminal (\(s/m/t\)) timepoint as the estimated position at timestamp \(T\) directly. **2) EGNN**[33] utilizes a simple yet efficient framework which transforms the 3D vectors into invariant scalars. We provide EGNN with only one previous position at \(s/m/t\) timepoint to predict the future position in a frame-to-frame manner. **3) STGCN**[44] is a spatio-temporal GNN that adopts a "sandwich" structure with two gated sequential convolution layers and on spatial graph convolution layer in between. We modify its default settings by predicting the residual coordinate between time \(T-1\) and \(T\), since directly predicting the exact coordinate at time \(T\) yields much worse performance.**4) AGL-STAN**[36] leverages adaptive graph learning and self-attention for a comprehensive representation of intra-temporal dependencies and inter-spatial interactions. We modify AGL-STAN's setting in the same way as we do with STGCN. **5)** Typical GNNs with trivial spatio-temporal aggregation: we implement GNN [13], and other equivariant models EGNN, TFN [38], and SE(3)-Transformer [11] for each temporal frame in the historical trajectory and then estimate the future position as the weighted sum of all past frames, where the weights are learnable. All models are denoted with a prefix "ST" and we initialize their node features along with temporal positional encoding. **6) EqMotion**[42] is one of equivariant spatio-temporal GNNs, which leverages the temporal information by fusing them for the model's initialization.
### Molecular-level: MD17
**Implementation details.** MD17 dataset includes the trajectories of 8 small molecules generated by MD simulation. We use the atomic number as the time-independent input node feature \(h^{(0)}\). The two atoms are 1-hop neighbor if their distance is less than the threshold \(\lambda\) and we consider two types of neighbors (i.e. 1-hop neighbor and 2-hop neighbor). Other settings including the hyper-parameters are introduced in Appendix D.1.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & Aspirin & Benzene & Ethanol & Malonaldehyde & Naphthalene & Salicylic & Toluene & Uracil \\ \hline Pt-\(s\) & 15.579 & 4.457 & 4.332 & 13.206 & 8.958 & 12.256 & 6.818 & 10.269 \\ Pt-\(m\) & 9.058 & 2.536 & 2.688 & 6.749 & 6.918 & 8.122 & 5.622 & 7.257 \\ Pt-\(t\) & 0.715 & 0.114 & 0.456 & 0.596 & 0.737 & 0.688 & 0.688 & 0.674 \\ \hline EGNN-\(s\) & 12.056 & 3.290 & 2.354 & 10.635 & 4.871 & 8.733 & 3.154 & 6.815 \\ EGNN-\(m\) & 6.237 & 1.882 & 1.532 & 4.842 & 3.791 & 4.623 & 2.516 & 3.606 \\ EGNN-\(t\) & 0.625 & 0.112 & 0.416 & 0.513 & 0.614 & 0.598 & 0.577 & 0.568 \\ \hline ST\_TFN & 0.719 & 0.122 & 0.432 & 0.569 & 0.688 & 0.684 & 0.628 & 0.669 \\ ST\_GNN & 1.014 & 0.210 & 0.487 & 0.664 & 0.769 & 0.789 & 0.713 & 0.680 \\ ST\_SE(3)TR & 0.669 & 0.119 & 0.428 & 0.550 & 0.625 & 0.630 & 0.591 & 0.597 \\ ST\_EGNN & 0.735 & 0.163 & 0.245 & 0.427 & 0.745 & 0.687 & 0.553 & 0.445 \\ EqMotion & 0.721 & 0.156 & 0.476 & 0.600 & 0.747 & 0.697 & 0.691 & 0.681 \\ STGCN & 0.715 & 0.106 & 0.456 & 0.596 & 0.736 & 0.682 & 0.687 & 0.673 \\ AGL-STAN & 0.719 & 0.106 & 0.459 & 0.596 & 0.601 & 0.452 & 0.683 & 0.515 \\ \hline ESTAG & **0.063** & **0.003** & **0.099** & **0.101** & **0.068** & **0.047** & **0.079** & **0.066** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Prediction error (\(\times 10^{-3}\)) on MD17 dataset. Results averaged across 3 runs. We do not display the standard deviation due to its small value.
**Results.** Table 1 shows the average MSE of all models on 8 molecules. We have some interesting observations as follows: **1)** ESTAG exceeds other models in all cases by a significant margin, supporting the general effectiveness of the proposed ideas. **2)** From the Pt-\(s/m/t\), we observe that the point closer to the future point has less prediction error. EGNN-\(s/m/t\) only takes one frame (\(s/m/t\)) as input, and attains slight improvement relative to Pt-\(s/m/t\). **3)** Compared with EGNN-\(t\), ST_EGNN, although equipped with trivial spatio-temporal aggregation, is unable to obtain consistent improvement, which indicates that how to unveil temporal dynamics appropriately is beyond triviality on this dataset. **4.** The non-equivariant methods particularly ST_GNN perform unsatisfactorily in most cases, implying that equivariance is an important property when modeling 3D structures.
**Visualization.** We have provided some visualizations of the predicted molecules by using the PyMol toolkit. Figure 4 shows that the prdicted MSEs of our model are much lower than ST_EGNN and our predicted molecules are closer to the ground-truth molecules. We have also displayed the learned attentions (Eq. 10) and the temporal weight (Eq. 13), where we find meaningful patterns of the temporal correlation. For clearer comparison, we display the molecule URACIL in 3D coordinate system, which can be found in Appendix D.5.
### Protein-level: Protein Dynamics
**Implementation details.** We evaluate our model on the AdK equilibrium trajectory dataset [34] via MDAnalysis toolkit [14]. In order to reduce the data scale, we utilize MDAnalysis to locate the backbone atoms (\(C_{\alpha},C,N,O\)) of the residues and then regard the residues other than atoms in protein as the nodes with \(4\)-channel geometric features. We use the atomic number of four backbone atoms as the time-independent input node feature \(h^{(0)}\). We connect two atoms via an edge if their distance is less than a threshold \(\lambda\). Other settings including the hyperparameters are introduced in Appendix D.2. Slightly different from the model on MD17 dataset, we generalize the single-channel ESTAG into multi-channel version which is presented in detail in Appendix C. We do not conduct EqMotion, TFN and SE(3)-TR used in the last experiment since it is non-trivial to modify them into multi-channel modeling. The state-of-the-art method GMN [20] is implemented by further adding the weighed temporal pooling similar to ST_EGNN.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & MSE & Time(s) \\ \hline Pt-\(s\) & 3.260 & - \\ Pt-\(m\) & 3.302 & - \\ Pt-\(t\) & 2.022 & - \\ \hline EGNN-\(s\) & 3.254 & 1.062 \\ EGNN-\(m\) & 3.278 & 1.088 \\ EGNN-\(t\) & 1.983 & 1.069 \\ \hline ST\_GNN & 1.871 & 2.769 \\ ST\_GMN & 1.526 & 4.705 \\ ST\_EGNN & 1.543 & 4.705 \\ STGCN & 1.578 & 1.840 \\ AGL-STAN & 1.671 & 1.478 \\ \hline ESTAG & **1.471** & 6.876 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Prediction error and training time on Protein dataset. Results averaged across 3 runs.
Figure 4: PyMol visualization of the predicted molecules by our ESTAG and ST_EGNN, where the MSE (\(\times 10^{-3}\)) with respect to the ground truth is also shown. As expected, the predicted instances by ESTAG exhibit much smaller MSE than ST_EGNN, although the difference is not easy to visualize in some cases. For those obviously mispredicted regions of ST_EGNN, we highlight them with red rectangles. It is observed that ST_EGNN occasionally outputs isolated atoms, which could be caused by violation of the bond length tolerance in PyMol.
**Results.** The predicted MSEs are displayed in Table 2. Generally, the spatio-temporal models are better than Pt-\(s/m/t\) and EGNN-\(s/m/t\), and it suggests that applying spatio-temporal clues on this dataset is crucial, particularly given that the protein trajectories are generated under the interactions with external molecules such as water and ions. The equivariant models always outperform the non-equivariant counterparts (for example, ST_EGNN vs. ST_GNN). Overall, our model ESTAG achieves the best performance owing to its elaboration of equivariant spatio-temporal modeling. Additionally, we report the training time averaged over epochs in Table 2. It shows that the computation overhead of ESTAG over its backbone EGNN is acceptable given its remarkable performance enhancement. It is expected that the superiority of ESTAG on protein dataset is not as obvious as that on MD17, owing to various kinds of physical interactions between different amino acids, let along each amino acid is composed of a certain number of atoms, which makes the dynamics of a protein much more complicated than small molecules.
### Macro-level: Motion Capture
**Implementation details.** We finally adopt CMU Motion Capture Database [7] to evaluate our model. CMU Motion Capture Database involves the trajectories of human motion under several scenarios and we focus on walking motion (subject #35) and basketball motion (subject #102, only take trajectories whose length is greater than 170). The input feature of all the joints (nodes) \(h_{i}^{(0)}\) are all \(1\)s. The two joints are 1-hop neighbor if they are connected naturally and we consider two types of neighbors (i.e. 1-hop neighbor and 2-hop neighbor). Other settings including the hyper-parameters are introduced in Appendix D.3. Notably, the input of EqMotion only contain node coordinates, as the same as our method and other baselines. We find that EqMotion performs much worse by directly predicting the absolute coordinates. We then modify EqMotion to predict the relative coordinates across two adjacent frames and perform zero-mean normalization of node coordinates, for further improvement.
**Results.** Table 3 summarizes the results of all models on the Motion dataset. The spatio-temporal models are better than Pt-\(s/m/t\) and EGNN-\(s/m/t\), which again implies the necessity of taking the spatio-temporal history into account. Unexpectedly, the non-equivariant models are even superior to the equivariant baselines in walking motion, by, for instance, comparing AGL-STAN with ESTAG. We conjure that the samples of this dataset are usually collected in the same orientation, which potentially subdues the effect of rotation equivariance. It is thus not surprising that GNN even outperforms EGNN since GNN involves more flexible form of message passing. But for basketball motion which is more complicated to simulate, ESTAG yields a much lower MSE. We also notice that the attention-based models including our ESTAG, ST_SE(3)-Tr, and AGL-STAN perform promisingly, which probably due to the advantage of using attention to discovery temporal interactions within the trajectories.
### Ablation Studies
Here we conduct several ablation experiments on MD17 to inspect how our proposed components contribute to the overall performance and the results are shown in Table 4.
**1)** Without EDFT. We replace the FT-based edge features with the predefined edge features based on the connecting atom types and the distance between them. The simplified model encounters an average of increase in MSE, showcasing the effectiveness of the FT-based feature in modeling the spatial relation in graphs.
**2)** Without attention. We remove the ETM of ESTAG and observe slight detriment in the model performance, which demonstrates that attention mechanism can well capture the temporal dynamics.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Walk & Basketball \\ \hline Pt-\(s\) & 329.474 & 886.023 \\ Pt-\(m\) & 127.152 & 413.306 \\ Pt-\(t\) & 3.831 & 15.878 \\ \hline EGNN-\(s\) & 63.540 & 749.486 \\ EGNN-\(m\) & 32.016 & 335.002 \\ EGNN-\(t\) & 0.786 & 12.492 \\ \hline ST\_GNN & 0.441 & 15.336 \\ ST\_TFN & 0.597 & 13.709 \\ ST\_SE(3)TR & 0.236 & 13.851 \\ ST\_EGNN & 0.538 & 13.199 \\ EqMotion & 1.011 & 4.893 \\ STGCN & 0.062 & 4.919 \\ AGL-STAN & **0.037** & 5.734 \\ \hline ESTAG & 0.040 & **0.746** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prediction error (\(\times 10^{-1}\)) on Motion dataset. Results averaged across 3 runs.
**3)** Without equivariance. We construct a non-equivariant spatio-temporal attentive framework based on vanilla GNN. ESTAG performs much better than the non-equivariant STAG, indicating that Euclidean equivariance is a crucial property when designing model on geometric graph.
**3)** Without equivariance. We construct a non-equivariant spatio-temporal attentive framework based on vanilla GNN. ESTAG performs much better than the non-equivariant STAG, indicating that Euclidean equivariance is a crucial property when designing model on geometric graph.
**4)** The impact of the number of layers \(L\). We investigate effect of the number of layers \(L\) on Ethanol dataset. We vary \(T\) from 1, 2, 3, 4, 5, 6 and present the results in Table 5. Considering the accuracy and efficiency simultaneously, we choose \(L=2\) for the ESTAG.
More ablation studies will be shown in Appendix E.
## 6 Conclusion
In this paper, we propose ESTAG, an end-to-end equivariant architecture for physical dynamics modeling. ESTAG first extracts frequency features via a novel Equivariant Discrete Fourier Transform (EDFT), and then leverages Equivariant Spatial Module (ESM) and an attentive Equivariant Temporal Module (ETM) to refine the coordinate in space and time domain alternatively. Comprehensive experiments over multiple tasks verify the superiority of ESTAG from molecular-level, protein-level, and to macro-level. Necessary ablations, visualizations, and analyses are also provided to support the validity of our design as well as the generalization of our method. One potential limitation of our model is that we only enforce the E(3) symmetry while other inductive bias like the energy conservation law is also required in physical scenarios.
In the future, we will continue extending our benchmark with more tasks and datasets and evaluate more baselines to validate the effectiveness of our model. It is also promising to extend our model to multi-scale GNN (like SGNN [17], REMuS-GNN [27], BSMS-GNN [4] and MS-MGN [10]), which is useful particularly for industrial-level applications involving huge graphs. Besides, it is valuable to employ our simulation method as a basic block for other applications such as drug discovery, material design, robotic control, etc.
## 7 Acknowledgement
This work was jointly supported by the following projects: the Scientific Innovation 2030 Major Project for New Generation of AI under Grant NO. 2020AAA0107300, Ministry of Science and Technology of the People's Republic of China; the National Natural Science Foundation of China (62006137); Beijing Nova Program (20230484278); the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (23NNKJ19); Tencent AI Lab Rhino-Bird Focused Research Program (RBFR2023005); Ant Group through CCF-Ant Research Fund (CCF-AFSG RF20220204); Public Computing Cloud, Renmin University of China.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & Aspirin & Benzene & Ethanol & Malonaldehyde & Naphthalene & Salicylic & Toluene & Uracil \\ \hline ESTAG & **0.063** & **0.003** & **0.099** & **0.101** & **0.068** & **0.047** & **0.079** & 0.066 \\ \hline w/o EDFT & 0.079 & **0.003** & 0.108 & 0.148 & 0.104 & 0.145 & 0.102 & **0.063** \\ w/o Attention & 0.087 & 0.004 & 0.104 & 0.112 & 0.129 & 0.095 & 0.097 & 0.078 \\ w/o Equivariance & 0.762 & 0.114 & 0.458 & 0.604 & 0.738 & 0.698 & 0.690 & 0.680 \\ w/o Temporal & 0.084 & **0.003** & 0.111 & 0.139 & 0.141 & 0.098 & 0.153 & 0.071 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies (\(\times 10^{-3}\)) on MD17 dataset. Results averaged across 3 runs.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline \(L\) & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline MSE (\(\times 10^{-4}\)) & 1.25 & 0.990 & 1.096 & 1.022 & 1.042 & 1.028 \\ \hline \hline \end{tabular}
\end{table}
Table 5: MSE on Ethanol \(w.r.t.\) the number of layers \(L\).
## References
* Al-Lazikani et al. [2001] Bissan Al-Lazikani, Joon Jung, Zhexin Xiang, and Barry Honig. Protein structure prediction. _Current opinion in chemical biology_, 5(1):51-56, 2001.
* Battaglia et al. [2016] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. _Advances in neural information processing systems_, 29, 2016.
* Broer et al. [2009] Hendrik W Broer, George B Huitema, and Mikhail B Sevryuk. _Quasi-periodic motions in families of dynamical systems: order amidst chaos_. Springer, 2009.
* Cao et al. [2023] Yadi Cao, Menglei Chai, Minchen Li, and Chenfanfu Jiang. Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network. 2023.
* Chmiela et al. [2017] Stefan Chmiela, Alexandre Tkatchenko, Huziel E. Sauceda, Igor Poltavsky, Kristof T. Schutt, and Klaus-Robert Muller. Machine learning of accurate energy-conserving molecular force fields. _Science Advances_, 3(5):e1603015, 2017.
* Chung et al. [2014] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. _arXiv preprint arXiv:1412.3555_, 2014.
* CMU [2003] CMU. Carnegie-mellon motion capture database, 2003. URL L[http://mocap.cs.cmu.edu](http://mocap.cs.cmu.edu).
* Ding et al. [2021] Mingyu Ding, Zhenfang Chen, Tao Du, Ping Luo, Josh Tenenbaum, and Chuang Gan. Dynamic visual reasoning by learning differentiable physics models from video and language. _Advances In Neural Information Processing Systems_, 34:887-899, 2021.
* Finzi et al. [2020] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In _International Conference on Machine Learning_, pages 3165-3176. PMLR, 2020.
* Fortunato et al. [2022] Meire Fortunato, Tobias Pfaff, Peter Wirnsberger, Alexander Pritzel, and Peter Battaglia. Multiscale meshgraphnets. _arXiv preprint arXiv:2210.00612_, 2022.
* Fuchs et al. [2020] Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d roto-translation equivariant attention networks. _Advances in Neural Information Processing Systems_, 33:1970-1981, 2020.
* Gan et al. [2020] Chuang Gan, Jeremy Schwartz, Seth Alter, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, et al. Threedworld: A platform for interactive multi-modal physical simulation. _arXiv preprint arXiv:2007.04954_, 2020.
* Gilmer et al. [2017] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In _International conference on machine learning_, pages 1263-1272. PMLR, 2017.
* Gowers et al. [2016] Richard J Gowers, Max Linke, Jonathan Barnoud, Tyler JE Reddy, Manuel N Melo, Sean L Seyler, Jan Domanski, David L Dotson, Sebastien Buchoux, Ian M Kenney, et al. Mdanalysis: a python package for the rapid analysis of molecular dynamics simulations. In _Proceedings of the 15th python in science conference_, volume 98, page 105. SciPy Austin, TX, 2016.
* Greydanus et al. [2019] Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. _Advances in neural information processing systems_, 32, 2019.
* Guo et al. [2019] Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In _Proceedings of the AAAI conference on artificial intelligence_, volume 33, pages 922-929, 2019.
* Han et al. [2022] Jiaqi Han, Wenbing Huang, Hengbo Ma, Jiachen Li, Josh Tenenbaum, and Chuang Gan. Learning physical dynamics with subequivariant graph neural networks. _Advances in Neural Information Processing Systems_, 35:26256-26268, 2022.
* [18] Jiaqi Han, Yu Rong, Tingyang Xu, and Wenbing Huang. Geometrically equivariant graph neural networks: A survey. _arXiv preprint arXiv:2202.07230_, 2022.
* [19] Tomas Hansson, Chris Oostenbrink, and WilfredF van Gunsteren. Molecular dynamics simulations. _Current opinion in structural biology_, 12(2):190-196, 2002.
* [20] Wenbing Huang, Jiaqi Han, Yu Rong, Tingyang Xu, Fuchun Sun, and Junzhou Huang. Equivariant graph mechanics networks with constraints. In _International Conference on Learning Representations_.
* [21] Michael J Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, and Hyunjik Kim. Lietransformer: Equivariant self-attention for lie groups. In _International Conference on Machine Learning_, pages 4533-4543. PMLR, 2021.
* [22] Guangyin Jin, Yuxuan Liang, Yuchen Fang, Jincai Huang, Junbo Zhang, and Yu Zheng. Spatiotemporal graph neural networks for predictive learning in urban computing: A survey. _arXiv preprint arXiv:2303.14483_, 2023.
* [23] Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In _International Conference on Machine Learning_, pages 2688-2697. PMLR, 2018.
* [24] Miltiadis Kofinas, Naveen Nagaraja, and Efstratios Gavves. Roto-translated local coordinate frames for interacting dynamical systems. _Advances in Neural Information Processing Systems_, 34:6417-6429, 2021.
* [25] P. Langley. Crafting papers on machine learning. In Pat Langley, editor, _Proceedings of the 17th International Conference on Machine Learning (ICML 2000)_, pages 1207-1216, Stanford, CA, 2000. Morgan Kaufmann.
* [26] Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. _arXiv preprint arXiv:1707.01926_, 2017.
* [27] Mario Lino, Stathi Fotiadis, Anil A Bharath, and Chris D Cantwell. Multi-scale rotation-equivariant graph neural networks for unsteady eulerian fluid dynamics. _Physics of Fluids_, 34(8), 2022.
* [28] Elaine C Meng, Thomas D Goddard, Eric F Pettersen, Greg S Couch, Zach J Pearson, John H Morris, and Thomas E Ferrin. Ucsf chimerax: Tools for structure building and analysis. _Protein Science_, page e4792, 2023.
* [29] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li F Fei-Fei, Josh Tenenbaum, and Daniel L Yamins. Flexible neural representation for physics prediction. _Advances in neural information processing systems_, 31, 2018.
* [30] A Srinivas Reddy, S Priyadarshini Pati, P Praveen Kumar, HN Pradeep, and G Narahari Sastry. Virtual screening in drug discovery-a computational perspective. _Current Protein and Peptide Science_, 8(4):329-351, 2007.
* [31] Alvaro Sanchez-Gonzalez, Victor Bapst, Kyle Cranmer, and Peter Battaglia. Hamiltonian graph networks with ode integrators. _arXiv preprint arXiv:1909.12790_, 2019.
* [32] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In _International conference on machine learning_, pages 9323-9332. PMLR, 2021.
* [33] Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_, volume 139 of _Proceedings of Machine Learning Research_, pages 9323-9332. PMLR, 18-24 Jul 2021.
* [34] Sean Seyler and Oliver Beckstein. Molecular dynamics trajectory for benchmarking MDAnalysis. 6 2017.
* [35] Mark W Spong, Seth Hutchinson, Mathukumalli Vidyasagar, et al. _Robot modeling and control_, volume 3. Wiley New York, 2006.
* Sun et al. [2022] Mingjie Sun, Pengyuan Zhou, Hui Tian, Yong Liao, and Haiyong Xie. Spatial-temporal attention network for crime prediction with adaptive graph learning. In _International Conference on Artificial Neural Networks_, pages 656-669. Springer, 2022.
* Tenenbaum et al. [2011] Joshua B Tenenbaum, Charles Kemp, Thomas L Griffiths, and Noah D Goodman. How to grow a mind: Statistics, structure, and abstraction. _science_, 331(6022):1279-1285, 2011.
* Thomas et al. [2018] Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. _arXiv preprint arXiv:1802.08219_, 2018.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* Vlachas et al. [2021] Pantelis R Vlachas, Julija Zavadlav, Matej Praprotnik, and Petros Koumoutsakos. Accelerated simulations of molecular systems through learning of effective dynamics. _Journal of Chemical Theory and Computation_, 18(1):538-549, 2021.
* Wu et al. [2020] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. _IEEE transactions on neural networks and learning systems_, 32(1):4-24, 2020.
* Xu et al. [2023] Chenxin Xu, Robby T Tan, Yuhong Tan, Siheng Chen, Yu Guang Wang, Xinchao Wang, and Yanfeng Wang. Eqmotion: Equivariant multi-agent motion prediction with invariant interaction reasoning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1410-1420, 2023.
* Yan et al. [2018] Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In _Thirty-second AAAI conference on artificial intelligence_, 2018.
* Yu et al. [2018] Bing Yu, Haoteng Yin, and Zhansing Zhu. Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting. In _Proceedings of the 27th International Joint Conference on Artificial Intelligence_, pages 3634-3640, 2018.
* Zhang et al. [2018] Jiani Zhang, Xingjian Shi, Junyuan Xie, Hao Ma, Irwin King, and Dit-Yan Yeung. Gaan: Gated attention networks for learning on large and spatiotemporal graphs. _arXiv preprint arXiv:1803.07294_, 2018.
## Appendix A Proof of ESTAG's Equivariance
**Theorem A.1**.: _We denote ESTAG as \(\vec{\mathbf{X}}(T)=\phi\left(\{(\mathbf{H}(t),g\cdot\vec{\mathbf{X}}(t),\mathbf{A})\}_{t=0}^{T-1 }\right)\), then \(\phi\) is E(3)-equivariant._
Proof.: **1.** We firstly prove that EDFT is E(3)-equivariant.
\[\mathbf{O}\vec{\mathbf{f}}_{i}(k) =\sum_{t=0}^{T-1}e^{-i^{\prime}\frac{2}{T^{2}}kt}\ \left(\mathbf{O} \vec{\mathbf{x}}_{i}(t)+\mathbf{b}-\overline{\mathbf{O}\vec{\mathbf{x}}(t)+\mathbf{b}}\right),\] \[\mathbf{A}_{ij}(k) =w_{k}(\mathbf{h}_{i})w_{k}(\mathbf{h}_{j})|(\mathbf{O}\vec{\mathbf{f}}_{i}(k), \mathbf{O}\vec{\mathbf{f}}_{j}(k))|,\] \[\mathbf{c}_{i}(k) =w_{k}(\mathbf{h}_{i})\|\mathbf{O}\vec{\mathbf{f}}_{i}(k)\|^{2}.\]
**2.** We secondly prove the E(3)-equivariance of ESM.
\[\mathbf{m}_{ij} =\phi_{m}\left(\mathbf{h}_{i}^{(l)}(t),\mathbf{h}_{j}^{(l)}(t),\|\mathbf{O} \vec{\mathbf{x}}_{ij}^{(l)}(t)\|^{2},\mathbf{A}_{ij}\right),\] \[\mathbf{h}_{i}^{(l+1)}(t) =\phi_{h}\left(\mathbf{h}_{i}^{(l)}(t),\mathbf{c}_{i}(k),\sum_{j\neq i} \mathbf{m}_{ij}\right),\] \[\mathbf{O}\vec{\mathbf{a}}_{i}(t) =\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}\mathbf{O}\vec{ \mathbf{x}}_{ij}^{(l)}(t)\phi_{x}(\mathbf{m}_{ij}),\] \[\mathbf{O}\vec{\mathbf{x}}_{i}^{(l+1)}(t)+\mathbf{b} =\mathbf{O}\vec{\mathbf{x}}_{i}^{(l)}(t)+\mathbf{b}+\mathbf{O}\vec{\mathbf{a}}_{i}^{( l+1)}(t).\]
**3.** We then prove that ETM is E(3)-equivariant.
\[\mathbf{q}_{i}^{(l)}(t) =\phi_{q}\left(\mathbf{h}_{i}^{(l)}(t)\right),\] \[\mathbf{k}_{i}^{(l)}(t) =\phi_{k}\left(\mathbf{h}_{i}^{(l)}(t)\right),\] \[\mathbf{v}_{i}^{(l)}(t) =\phi_{v}\left(\mathbf{h}_{i}^{(l)}(t)\right),\] \[\alpha_{i}^{(l)}(ts) =\frac{\exp(\mathbf{q}_{i}^{(l)}(t)^{\top}\mathbf{k}_{i}^{(l)}(s))}{\sum _{s=0}^{t}\exp(\mathbf{q}_{i}^{(l)}(t)^{\top}\mathbf{k}_{i}^{(l)}(s))},\] \[\mathbf{h}_{i}^{(l+1)}(t) =\mathbf{h}_{i}^{(l)}(t)+\sum_{s=0}^{t}\alpha_{i}^{(l)}(ts)\mathbf{v}_{i }^{(l)}(s),,\] \[\mathbf{O}\vec{\mathbf{x}}_{i}^{(l+1)}(t)+\mathbf{b} =\mathbf{O}\vec{\mathbf{x}}_{i}^{(l)}(t)+\mathbf{b}+\sum_{s=0}^{t}\alpha_{i}^{ (l)}(ts)\ \mathbf{O}\vec{\mathbf{x}}_{i}^{(l)}(ts)\phi_{x}(\mathbf{v}_{i}^{(l)}(s)).\]
**4.** We finally prove that the linear pooling is equivariant:
\[\mathbf{O}\vec{\mathbf{x}}_{i}^{*}(T)+\mathbf{b}=\mathbf{O}\hat{\mathbf{X}}_{i}\mathbf{w}+\mathbf{O}\vec{ \mathbf{x}}_{i}^{(L)}(T-1)+\mathbf{b}.\]
## Appendix B Full Algorithm Details
In the main body of the paper, for better readability, we present the implementation details of ESTAG. Here, we combine them into one singe algorithmic flowchart in Algorithm 1.
Extended Models
### Multi-channel ESTAG
For proteins, there are four backbone atoms ( N, C\({}_{\alpha}\), C, O) in residue \(i\), hence the above mentioned node position vector \(\mathbf{x}_{i}(t)\in\mathbb{R}^{3}\) is extended to a 4-channel position matrix \(\mathbf{X}_{i}(t)\in\mathbb{R}^{3\times 4}\). Particularly, we denote \(\mathbf{\vec{x}}_{i}^{\alpha}(t)\), a certain column from \(\mathbf{X}_{i}(t)\) as the position of C\({}_{\alpha}\) at time \(t\).
**EDFT:**
\[\mathbf{\vec{f}}_{i}(k) =\sum_{t=0}^{T-1}e^{-i^{\prime}\frac{2\pi}{T}kt}\ \left(\mathbf{\vec{x}}_ {i}^{\alpha}(t)-\mathbf{\vec{\mathbf{x}}^{\alpha}(t)}\right),\] \[\mathbf{A}_{ij}(k) =w_{k}(\mathbf{h}_{i})w_{k}(\mathbf{h}_{j})|\langle\mathbf{\vec{f}}_{i}(k), \mathbf{\vec{f}}_{j}(k)\rangle|,\] \[\mathbf{c}_{i}(k) =w_{k}(\mathbf{h}_{i})\|\mathbf{\vec{f}}_{i}(k)\|^{2}.\]
**ESM:**
\[\mathbf{m}_{ij} =\phi_{m}\left(\mathbf{h}_{i}^{(l)}(t),\mathbf{h}_{j}^{(l)}(t),\frac{( \mathbf{\vec{X}}_{ij}^{(l)}(t))^{\top}\mathbf{\vec{X}}_{ij}^{(l)}(t)}{\|(\mathbf{\vec{X}}_ {ij}^{(l)}(t))^{\top}\mathbf{\vec{X}}_{ij}^{(l)}(t)\|_{F}},\mathbf{A}_{ij}\right),\] \[\mathbf{h}_{i}^{(l+1)}(t) =\phi_{h}\left(\mathbf{h}_{i}^{(l)}(t),\mathbf{c}_{i}(k),\sum_{j\neq i}\bm {m}_{ij}\right),\] \[\mathbf{\vec{A}}_{i}^{(l)}(t) =\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}\mathbf{\vec{X}}_ {ij}^{(l)}(t)\phi_{\mathbf{X}}(\mathbf{m}_{ij}),\] \[\mathbf{\vec{X}}_{i}^{(l+1)}(t) =\mathbf{\vec{X}}_{i}^{(l)}(t)+\mathbf{\vec{A}}_{i}^{(l)}(t).\]
**ETM:**
\[\alpha_{i}^{(l)}(ts) =\frac{\exp(\mathbf{q}_{i}^{(l)}(t)^{\top}\mathbf{k}_{i}^{(l)}(s))}{\sum_ {s=0}^{t}\exp(\mathbf{q}_{i}^{(l)}(t)^{\top}\mathbf{k}_{i}^{(l)}(s))},\] \[\mathbf{h}_{i}^{(l+1)}(t) =\mathbf{h}_{i}^{(l)}(t)+\sum_{s=0}^{t}\alpha_{i}^{(l)}(ts)\mathbf{v}_{i} ^{(l)}(s),\] \[\mathbf{\vec{X}}_{i}^{(l+1)}(t) =\mathbf{\vec{X}}_{i}^{(l)}(t)+\sum_{s=0}^{t}\alpha_{i}^{(l)}(ts)\ \mathbf{ \vec{X}}_{i}^{(l)}(ts)\phi_{\mathbf{X}}(\mathbf{v}_{i}^{(l)}(s)),\]
where
\[\mathbf{q}_{i}^{(l)}(t) =\phi_{q}\left(\mathbf{h}_{i}^{(l)}(t)\right),\] \[\mathbf{k}_{i}^{(l)}(t) =\phi_{k}\left(\mathbf{h}_{i}^{(l)}(t)\right),\] \[\mathbf{v}_{i}^{(l)}(t) =\phi_{v}\left(\mathbf{h}_{i}^{(l)}(t)\right).\]
## Appendix D More Experimental Details and Results
### More Details on MD17
**Experiment setup and hyper-parameters.** We use the following hyper-parameters across all experimental evaluations: batch size 100, the number of epochs 500, weight decay \(1\times 10^{-12}\), the number of layers 4 (we consider one ESTAG includes two layers, i.e. ESM and ETM), hidden dim 16, Adam optimizer with learning rate \(5\times 10^{-3}\). We set the length of previous time series \(L=10\) and the interval between two timestamps \(\Delta t=10\). The number of training, validation and testing sets are 500, 2000 and 2000, respectively.
### More Details on Protein
**Experiment setup and hyper-parameters.** We use the following hyper-parameters across all experimental evaluations: batch size 100, the number of epochs 500, weight decay \(1\times 10^{-12}\), the number of layers 4 (we consider one ESTAG includes two layers, i.e. ESM and ETM), hidden dim 16, Adam optimizer with learning rate \(5\times 10^{-5}\). We divide the whole dataset into training, validation and testing sets by a ratio of 6:2:2, resulting in the numbers of the three sets are 2482, 827 and 827 respectively. The number of previous timestamps is \(T=10\) and the interval between timestamps is \(\Delta t=5\) frames.
### More Details on Motion
**Experiment setup and hyper-parameters.** We use the following hyper-parameters across all experimental evaluations: batch size 100, the number of epochs 500, weight decay \(1\times 10^{-12}\), the number of layers 4 (we consider one ESTAG includes two layers, i.e. ESM and ETM), hidden dim 16, Adam optimizer with learning rate \(5\times 10^{-3}\). We set the length of previous time series \(L=10\) and the interval between two timestamps \(\Delta t=5\). The training/validation/testing sets sizes are 3000/800/800 respectively.
### Long-term recurrent forecasting
We additionally explore the performance of the proposed method in long-term recurrent forecasting. The setting in our current experiment predicts only one frame at a time. Here we recurrently predict the future frames at time \(T,T+\Delta T,T+2\Delta T,\cdots,T+10\Delta T\) (the value of \(\Delta T\) follows the setting in the Section 5) in a rollout manner, where the currently-predicted frame will be used as the input for the next frame prediction, within a sliding window of length \(T\). Note that the recurrent forecasting task is more challenging than the original scenario, and we need to make some extra improvements to prevent accumulated errors over time. Particularly for our method, we change the forward attention mechanism to be full attention mechanism (namely replacing \(t\) with \(T-1\) in the superscript of the summation of Eq.10 and Eq.12 10), as we observe that under the recurrent setting, forward attention tends to lead to biased predictions. The results are reported in Figure 5, where we verify that the rollout version of ESTAG delivers generally smaller MSE than all compared methods for all time steps.
### More visualization
**Visualization of data.** To validate that the movement of MD17 molecules is periodical, in Figure 6 we depict the trajectory of one randomly selected atom in each molecule, from timestep 127947 to timestep 327947. It is obvious that almost all molecules move with period.
**Visualization of model parameters.** Moreover, we display the attention map in ETM of MAL-ONALDEHYDE's atom O and temporal pooling weights of all molecules, as shown in Figure 7.
Figure 5: The rollout-MSE curves on 8 molecules in MD17. Our model generally achieves the lowest MSE.
**Visualization of the prediction results on MD17 dataset.** For those non-obvious cases in Figure 4, the unclarity is mainly caused by 2D printing that is hard to depict 3D conformations such as angle deviation. For better visualization, we choose the molecule URACIL which is not clearly visualized, and render its 3D atom coordinates in Figure 8. ESTAG yields more accurate 3D prediction than ST_EGNN, which is consistent with the MSE difference.
**Visualization of the prediction results on Protein dataset.** We further evaluate the predicted protein by visualizing it alongside the ground truth in Figure 9, using the ChimeraX software [28]. It is obvious that protein predicted by ESTAG aligns closer to the ground truth and ST_EGNN fails to predict the alpha helix which we highlight in arrow.
**Visualization of the prediction results on Motion dataset.** Following the visualization manner used for molecules, we represent the entities from both walking and basketball motion scenarios within a 3D coordinate system, in Figure 10 and Figure 11 respectively. It confirms the superiority of ESTAG, particularly in the significantly more challenging task of simulating basketball motion.
### The impact of the learnable parameter \(w_{k}\) in EDFT module
In section 4.1, We have noticed that \(w_{k}\) acts like spectral filters of the \(k\)-th frequency and enables us to select related frequency for the prediction. Here, we conduct ablation study on MD17 dataset to analyse how significant the parameter is. From the results in Table 7, \(w_{k}\) is indeed capable of improving the prediction of physical dynamics.
### Ablation studies on Protein and Motion datasets
Here we conduct several ablations on Protein and Motion dataset similar to MD17 and show the results in Table 8 and Table 9, respectively.
Figure 11: Comparison on Motion basketball subject between ESTAG (left, MSE=0.0749) and ST_EGNN (right, MSE=2.6380). The ground truths are in red while the predicted states are in blue.
Figure 12: MSE on Ethanol \(w.r.t.\) the number of previous timestamps \(T\)
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & Aspirin & Benzene & Ethanol & Malonaldehyde & Naphthalene & Salicylic & Toluene & Uracil \\ \hline ESTAG & **0.063** & **0.003** & **0.099** & **0.101** & **0.068** & **0.047** & **0.079** & **0.066** \\ \hline w/o \(w_{k}\) & 0.071 & **0.003** & 0.102 & 0.104 & 0.081 & 0.081 & **0.079** & 0.069 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Ablation studies (\(\times 10^{-3}\)) of learnable parameter \(w_{k}\) on MD17 dataset. Results averaged across 3 runs.
Figure 10: Comparison on Motion walk subject between ESTAG (left, MSE=0.0048) and ST_EGNN (right, MSE=0.0811). The ground truths are in red while the predicted states are in blue.
## Appendix F Codes
The codes of ESTAG are available at: [https://github.com/ManlioWu/ESTAG](https://github.com/ManlioWu/ESTAG).
\begin{table}
\begin{tabular}{l c} \hline \hline Model & MSE \\ \hline ESTAG & **1.471** \\ w/o EDFT & 1.490 \\ w/o Attention & 1.493 \\ w/o Equivariance & 2.011 \\ w/o Temporal & 1.472 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Ablation study on Protein dataset.
\begin{table}
\begin{tabular}{l c} \hline \hline Model & MSE (\(\times 10^{-1}\)) \\ \hline ESTAG & **0.040** \\ w/o EDFT & 0.053 \\ w/o Attention & 0.047 \\ w/o Equivariance & 0.702 \\ w/o Temporal & 0.044 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Ablation study on Motion dataset.
**Algorithm 1** Equivariant Spatio-Temporal Attentive Graph Networks (ESTAG)
```
Input: Initial historical graph series \(\{(\mathbf{H}^{(0)}(t),\mathbf{\bar{X}}^{(0)}(t),\mathbf{A}\}_{t=0}^{T-1}\). for\(i=1\)to\(N\)do Equivariant Discrete Fourier Transform (EDFT): \[\mathbf{\bar{f}}_{i}(k) =\sum_{t=0}^{T-1}e^{-i\frac{2\pi}{T}kt}\ \left(\mathbf{\bar{x}}_{i}(t)- \overline{\mathbf{\bar{x}}(t)}\right),\] (15) \[\mathbf{A}_{ij}(k) =w_{k}(\mathbf{h}_{i})w_{k}(\mathbf{h}_{j})|\langle\mathbf{\bar{f}}_{i}(k), \mathbf{\bar{f}}_{j}(k)\rangle|,\] (16) \[\mathbf{c}_{i}(k) =w_{k}(\mathbf{h}_{i})\|\mathbf{\bar{f}}_{i}(k)\|^{2}.\] (17) endfor for\(l=1\)to\(L\)do for\(t=0\)to\(T-1\)do Equivariant Spatial Module (ESM): \[\mathbf{m}_{ij} =\phi_{m}\left(\mathbf{h}_{i}^{(l-1)}(t),\mathbf{h}_{j}^{(l-1)}(t),\|\bm {\bar{x}}_{ij}^{(l-1)}(t)\|^{2},\mathbf{A}_{ij}\right),\] (18) \[\mathbf{h}_{i}^{(l-0.5)}(t) =\phi_{h}\left(\mathbf{h}_{i}^{(l-1)}(t),\mathbf{c}_{i}(k),\sum_{j\neq i} \mathbf{m}_{ij}\right),\] (19) \[\mathbf{\bar{a}}_{i}^{(l-0.5)}(t) =\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}\mathbf{\bar{x}}_ {ij}^{(l-0.5)}(t)\phi_{x}(\mathbf{m}_{ij}),\] (20) \[\mathbf{\bar{x}}_{i}^{(l-0.5)}(t) =\mathbf{\bar{x}}_{i}^{(l-1)}(t)+\mathbf{\bar{a}}_{i}^{(l-0.5)}(t).\] (21) endfor for\(i=1\)to\(N\)do Equivariant Temporal Module (ETM) \[\alpha_{i}^{(l-0.5)}(ts) =\frac{\exp(\mathbf{q}_{i}^{(l-0.5)}(t)^{\top}\mathbf{k}_{i}^{(l-0.5)}(s) )}{\sum_{s=0}^{t}\exp(\mathbf{q}_{i}^{(l-0.5)}(t)^{\top}\mathbf{k}_{i}^{(l-0.5)}(s))},\] (22) \[\mathbf{h}_{i}^{(l)}(t) =\mathbf{h}_{i}^{(l-0.5)}(t)+\sum_{s=0}^{t}\alpha_{i}^{(l-0.5)}(ts)\bm {v}_{i}^{(l-0.5)}(s),\] (23) \[\mathbf{\bar{x}}_{i}^{(l)}(t) =\mathbf{\bar{x}}_{i}^{(l-0.5)}(t)+\sum_{s=0}^{t}\alpha_{i}^{(l-0.5)}( ts)\mathbf{\bar{x}}_{i}^{(l-0.5)}(ts)\phi_{x}(\mathbf{v}_{i}^{(l-0.5)}(s)),\] (24) where \[\mathbf{q}_{i}^{(l-0.5)}(t) =\phi_{q}\left(\mathbf{h}_{i}^{(l-0.5)}(t)\right),\] (25) \[\mathbf{k}_{i}^{(l-0.5)}(t) =\phi_{k}\left(\mathbf{h}_{i}^{(l-0.5)}(t)\right),\] (26) \[\mathbf{v}_{i}^{(l-0.5)}(t) =\phi_{v}\left(\mathbf{h}_{i}^{(l-0.5)}(t)\right).\] (27) endfor endfor Equivariant linear pooling: \[\mathbf{\bar{x}}_{i}^{*}(T)=\mathbf{\hat{X}}_{i}\mathbf{w}+\mathbf{\bar{x}}_{i}^{(L)}(T-1),\] (28) where \(\mathbf{\hat{X}}_{i}=[\mathbf{\bar{x}}_{i}^{(L)}(0)-\mathbf{\bar{x}}_{i}^{(L)}(T-1),\mathbf{ \bar{x}}_{i}^{(L)}(1)-\mathbf{\bar{x}}_{i}^{(L)}(T-1),\cdots,\mathbf{\bar{x}}_{i}^{(L)} (T-2)-\mathbf{\bar{x}}_{i}^{(L)}(T-1)]\). Output:\(\mathbf{\bar{X}}^{*}(T)\)
```
**Algorithm 2** Equivariant Temporal Module (ETM) | ## Review
### Summary
This paper presents a novel approach to modeling non-Markovian dynamics in physical systems using a spatio-temporal E(3)-equivariant graph neural network (GNN). The authors integrate advanced techniques, including a unique feature extraction method based on the discrete Fourier transform, to develop an architecture that captures spatio-temporal dependencies effectively. The model, evaluated on diverse datasets, demonstrates substantial performance improvements compared to existing methods. The paper also includes ablation studies to highlight the significance of the proposed components. Overall, the work addresses important challenges in physical dynamics simulation and showcases both theoretical and empirical contributions.
### Strengths
- The proposed model is technically solid and fills a significant gap in the literature regarding non-Markovian dynamics.
- The paper is well-written, easy to follow, and the illustrations are clear.
- Empirical results show that the proposed architecture significantly outperforms existing models across multiple datasets, particularly in the molecular domain.
- Ablation studies effectively demonstrate the importance of the new frequency computation technique and the novel components of the model.
### Weaknesses
- Concerns about the technical novelty of the paper, as it adapts existing methods without significant new contributions.
- Insufficient discussion of related work, particularly in the context of advanced spatio-temporal graph neural networks.
- Some evaluation details are unclear, especially regarding the significant MSE differences observed in the MD-17 dataset compared to other datasets.
- Lack of clarity surrounding the importance of specific model components, such as the learnable parameters used in the architecture.
- The paper contains typos and requires thorough proofreading.
- Additional experiments exploring long-term forecasting capabilities would enhance the understanding of the model's performance.
### Questions
- How do the authors justify the claimed novelty in their approach regarding existing spatio-temporal GNNs?
- Can the authors clarify why the performance of the ESTAG model varied significantly across different datasets?
- What is the rationale behind not including comparisons with state-of-the-art methods in the evaluation?
- Could the authors provide a more detailed explanation of the significance of the learnable parameters in the model?
### Soundness
**Score:** 3
**Description:** 3 = good: The methodology is sound, but some areas lack clarity and thorough justification.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-structured and easy to read, but contains some typos that need correction.
### Contribution
**Score:** 3
**Description:** 3 = good: The work contributes to the field by addressing an important problem, although it requires a more comprehensive literature review.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically sound with moderate-to-high impact, but requires further improvements in clarity and evaluation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel approach to modeling non-Markovian dynamics and demonstrates significant performance improvements through rigorous experimentation. While there are some weaknesses regarding novelty and clarity, the overall contributions are valuable, making it a solid candidate for acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Representational Strengths and Limitations of Transformers
Clayton Sanford, Daniel Hsu
Department of Computer Science
Columbia University
New York, NY 10027
{clayton,djhsu}@cs.columbia.edu
&Matus Telgarsky
Courant Institute
New York University
New York, NY 10012
[email protected]
###### Abstract
Attention layers, as commonly used in transformers, form the backbone of modern deep learning, yet there is no mathematical description of their benefits and deficiencies as compared with other architectures. In this work we establish both positive and negative results on the representation power of attention layers, with a focus on intrinsic complexity parameters such as width, depth, and embedding dimension. On the positive side, we present a _sparse averaging task_, where recurrent networks and feedforward networks all have complexity scaling polynomially in the input size, whereas transformers scale merely _logarithmically_ in the input size; furthermore, we use the same construction to show the necessity and role of a large embedding dimension in a transformer. On the negative side, we present a _triple detection_ task, where attention layers in turn have complexity scaling linearly in the input size; as this scenario seems rare in practice, we also present natural variants that can be efficiently solved by attention layers. The proof techniques emphasize the value of communication complexity in the analysis of transformers and related models, and the role of sparse averaging as a prototypical attention task, which even finds use in the analysis of triple detection.
## 1 Introduction
In recent years, transformer networks [24] have been established as a fundamental neural architecture powering state-of-the-art results in many applications, including language modeling [13], computer vision [15], and protein folding [16]. The key building block of transformer models is the _self-attention unit_, a primitive that represents interactions among input elements as inner-products between low-dimensional embeddings of these elements.
The success of transformer models is linked to their ability to scale their training and generalization performance to larger datasets and sequence lengths. Their representational capacity, however, underlies this scaling power, and is tied to the inductive biases of their learning algorithms. Empirically, transformer models trained with gradient-based learning algorithms exhibit biases towards certain algorithmic primitives [1, 17] and learn representations that may encode domain-specific information in the self-attention units [18, 19, 20, 16]. These examples indicate that transformer architectures not only provide computational benefits, but also have representational capabilities that are particularly well-matched to practical tasks.
In this paper, we investigate these inductive biases by identifying "natural" computational tasks for which transformers are well-suited, especially compared to other neural network architectures, as well as tasks that highlight the limitations of transformers. The tasks--sparse averaging, pair-matching,and triples-matching--represent primitive operations that aggregate structural information encoded in embeddings. We use these tasks to elucidate the relationship between the embedding dimension \(m\) of a self-attention unit and its expressivity, and to showcase the fundamental representational limitations of self-attention layers.
In our model, the primary computational bottleneck faced by a transformer in computing a "sequence-to-sequence"1 function \(f\colon\mathcal{X}^{N}\to\mathcal{Y}^{N}\) is the constrained processing of pairs of input elements \(\{x_{i},x_{j}\}\in\binom{X}{2}\); we allow transformers unbounded computational power when processing the individual elements \(x_{i}\in\mathcal{X}\). This is motivated by modern scaling regimes where the context length \(N\) has rapidly increased, the self-attention embedding dimension \(m\) remains much smaller than \(N\), and the parameterization of multi-layer perceptrons (MLPs) that operate on individual elements is much larger than \(m\). Indeed, the largest GPT-3 model (Brown et al., 2020) features a context length \(N=2048\), an embedding dimension \(m=128\), and MLPs with a 12288-dimensional parameterization; the context length of GPT-4 is as large as \(N=32000\). As such, we are interested in the capabilities of transformers with \(N^{o(1)}\) total "size", as opposed to \(N^{\Omega(1)}\). The nature of the bottleneck in our model makes the tools of communication complexity indispensable for formalizing computational limits.
Footnote 1: Note, however, that attention units are permutation equivariant, so the order of elements in the input “sequence” \(X\in\mathcal{X}^{N}\) is irrelevant. In practice, _positional encodings_ are used when the sequence order is relevant.
### Our contributions
Sparse averaging separations among atomic self-attention units.The _\(q\)-sparse averaging task_\(q\mathrm{SA}\) aims to capture the essential approximation-theoretic properties of self-attention units. In \(q\mathrm{SA}\), the \(i\)th input \(x_{i}\) is a pair \((y_{i},z_{i})\), where \(z_{i}\in\mathbb{R}^{d^{\prime}}\) is the _data_ part of \(x_{i}\), simply a vector in \(\mathbb{R}^{d^{\prime}}\), whereas and \(y_{i}\in\binom{[N]}{q}\) is the _indexing_ part, which specifies \(q\) locations in the input sequence; the \(i\)th output element in \(q\mathrm{SA}\) is obtained by averaging the \(q\)_data_ parts \(z_{j}\) given by \(j\in y_{i}\), meaning
\[q\mathrm{SA}\left((y_{1},z_{1}),\ldots,(y_{N},z_{N})\right)=\left(\frac{1}{q} \sum_{j\in y_{1}}z_{j},\ldots,\frac{1}{q}\sum_{j\in y_{N}}z_{j}\right).\]
(See also Definition 4.) As summarized in the following informal theorem, our analysis of \(q\mathrm{SA}\) in Section 3 and Appendix A illustrates the ability of the self-attention primitive to associate arbitrary subsets of input elements (as opposed to just "local" subsets, as specified by some sequential/topological structure), measures the expressive power accrued by increasing the embedding dimension \(m\) of a self-attention unit, and indicates the representational limitations of "traditional" neural architectures on basic computational tasks.
**Informal Theorem 1**.: _The task \(q\mathrm{SA}\) for \(q\in\mathbb{Z}_{+}\) satisfies the following properties (see Definition 4 for a formal definition and approximation metric)._
1. _There exists a unit of self-attention_ \(f\) _with an_ \(m\)_-dimensional embedding that approximates_ \(q\mathrm{SA}\) _if and only if_ \(m\gtrsim q\) _(Theorems 2 and 4)._
2. _Any fully-connected neural network whose output approximates_ \(q\mathrm{SA}\) _requires its first hidden layer to have width at least_ \(\Omega(Nd)\) _(Theorem_ 10_)._
3. _Any recurrent neural network whose iterates approximate_ \(q\mathrm{SA}\) _requires a hidden state of at least_ \(\Omega(N)\) _bits (Theorem_ 11_)._
We consider the \(q\mathrm{SA}\) implementation in Item 1_efficient_ since the dimension of the model parameters grows with \(\mathrm{poly}(q,d,\log N)\), whereas the latter two are _inefficient_ since their parameter (or state) dimension grows as \(\mathrm{poly}(N)\). The proofs of the positive results employ embeddings for each index \(j\) and each subset \(y_{i}\) that have large inner products if and only if \(j\in y_{i}\). The negative results involve communication complexity reductions and geometric arguments. These arguments naturally introduce a dependence on bits of precision, which we suppress above within the notation "\(\gtrsim\)"; we note that these bounded-precision results are arguably more relevant to modern networks, which uses as few as \(4\) or even \(2\) bits of numerical precision.
Contrast between pairwise and triple-wise matching with self-attention layers.We frame standard transformer architectures as being able to efficiently represent functions that are decomposable into sparse pairwise interactions between inputs. To do so, we introduce two sequential tasks and prove a collection of constructions and hardness results that characterize the abilities of transformers to solve these tasks.
Given an input sequence \(X=(x_{1},\ldots,x_{N})\in[M]^{N}\) (for some \(M=\mathrm{poly}(N)\)), we formalize the problems of _similar pair detection_ (\(\mathrm{Match2}\)) and _similar triple detection_ (\(\mathrm{Match3}\)) as
\[\mathrm{Match2}(X)_{i\in[N]} =1\left\{\exists j\;\mathrm{s.t.}\;x_{i}+x_{j}=0\left(\mathrm{mod} \;M\right)\right\}, \tag{1}\] \[\mathrm{Match3}(X)_{i\in[N]} =1\left\{\exists j_{1},j_{2}\;\mathrm{s.t.}\;x_{i}+x_{j_{1}}+x_{ j_{2}}=0\left(\mathrm{mod}\;M\right)\right\}. \tag{2}\]
For both tasks, note that the output is an \(N\)-dimensional vector whose \(i\)th element is 1 if and only if the sequence \(X\) includes a pair or triple _containing_\(x_{i}\). In this sense, the problems differ from 2SUM and 3SUM, which are not sequence-to-sequence tasks.
We believe these two tasks are intrinsically "pairwise" and "triple-wise", respectively; moreover, since we also believe self-attention performs a fundamentally "pairwise" operation, we will use \(\mathrm{Match2}\) and \(\mathrm{Match3}\) to show a sharp gap in the representation power of self-attention.
**Informal Theorem 2**.:
1. _A single unit of standard self-attention with input and output MLPs and an_ \(O(d)\)_-dimensional embedding can compute_ \(\mathrm{Match2}\) _(Theorem_ 6_)._
2. _A single layer of standard multi-headed self-attention cannot compute_ \(\mathrm{Match3}\) _unless its number of heads_ \(H\) _or embedding dimension_ \(m\) _grows polynomially in_ \(N\) _(Theorem_ 7_)._
3. _A standard transformer model_ can _efficiently compute a modified version of_ \(\mathrm{Match3}\) _that makes assumptions about embedding structure or locality (Theorems_ 8 _and_ 9_)._
4. _Under a generalized notion of "third-order tensor self-attention" introduced in Appendix_ C.3_,_ \(\mathrm{Match3}\) _is efficiently computable with a single unit of third-order attention (Theorem_ 18_)._
While the above result demonstrates the limitations of multi-headed self-attention and illustrates the importance of learning embeddings with contextual clues, we believe that a stronger result exists. Specifically, we conjecture that even multi-layer transformers are unable to efficiently compute \(\mathrm{Match3}\) without hints or augmentation.
**Informal Conjecture 1**.: _Every multi-layer transformer that computes \(\mathrm{Match3}\) must have width, depth, embedding dimension, or bit complexity at least \(N^{\Omega(1)}\)._
In Appendices C.5 and C.6, we give a heuristic information-theoretic argument to support this conjecture, prove a matching upper-bound, and finally prove analogous results for graph-augmented transformers with respect to the problem of cycle detection in directed and undirected graphs.
### Related work
Several computational and learning-theoretic aspects of transformers, distinct from but related to the specific aims of the present paper, have been mathematically studied in previous works.
Universality and Turing-completeness.To demonstrate the power of transformers, universal approximation results for transformers (Yun et al., 2020; Wei et al., 2022)--analogous to results for feedforward networks (Hornik et al., 1989)--establish the capability for sufficiently large networks to accurately approximate general classes of functions. Note, however, that the precise minimal dependence of the required size (e.g., number of attention units, depth of the network) as a function of the input size \(N\) does not directly follow from such results, and it is complicated by the interleaving of other neural network elements between attention layers. (Approximate) Turing-completeness of transformers demonstrates their power in a different manner, and such results have been established, first assuming infinite precision weights (Perez et al., 2019) and later also with finite-precision (Wei et al., 2022). Such results are more closely aligned with our aims, because Turing machines represent a uniform model of computation on inputs of arbitrary size. Wei et al. (2022) showed that Turing machines that run for \(T\) steps can be approximated by "encoder-decoder" transformers of depth \(\log(T)\) and size polynomial in \(\log(T)\) and the number of states of the Turing machine (but the decoder runs for \(T\) steps).
Formal language recognition.The ubiquity of transformers in natural language understanding has motivated the theoretical study of their ability to recognize formal languages. On the positive side, Bhatamishra et al. (2020) constructed transformers that recognize counter languages, and Yao et al. (2021) showed that transformers of bounded size and depth can recognize Dyck languages that have bounded stack depth. Liu et al. (2022) showed that the computations of finite-state automata on sequences of length \(N\) can be performed by transformers of depth \(\log(N)\) and size polynomial in the number of states. On the negative side, Hahn (2020) showed limitations of modeling distributions over formal languages (including Dyck) with fixed-size transformers (though this result does not imply quantitative lower bounds on the size of the transformer). Hahn (2020), as well as Hao et al. (2022), also establish the inability of "hard attention" Transformers to recognize various formal languages and circuit classes by leveraging depth reduction techniques from circuit complexity (Furst et al., 1984).
Learnability.The sample complexity of learning with low-weight transformers can be obtained using techniques from statistical learning theory and, in turn, establish learnability of certain boolean concept classes (e.g., sparse parity) (Edelman et al., 2022; Bhatamishra et al., 2022) using transformer-based hypothesis classes. Our \(q\mathrm{SA}\) function is inspired by these classes, and we establish concrete size lower bounds for approximation (and hence also learnability) by transformers. We note that our constructions use bounded-size weights, and hence, in principle, the aforementioned sample complexity results can be combined with our results to analyze empirical risk minimization for learning transformers. Prior work of Likhoshenstov et al. (2021) also shows how sparse attention patterns can be achieved by self-attention units (via random projection arguments); however, when specialized to \(q\mathrm{SA}\), their construction is suboptimal in terms of the sparsity level \(q\).
Related models.Graph neural networks (GNNs), like transformers, process very large inputs (graphs) using neural networks that act only on small collections of the input parts (vertex neighborhoods). Many classes of GNNs are universal approximators for classes of invariant and equivariant functions (Maron et al., 2019; Keriven and Peyre, 2019). At the same time, they are restricted by the distinguishing power of certain graph isomorphism tests (Xu et al., 2018; Morris et al., 2019; Chen et al., 2019), and lower bounds have been established on the network size to approximate such tests (Aamand et al., 2022). Loukas (2019) established a connection between GNNs and the Local(Angluin, 1980) and Congest(Peleg, 2000) models for distributed computation, and hence directly translates lower bounds for Congest--notably cycle detection problems--into size lower bounds for GNNs. Our lower bounds for cycle detection using transformers also leverage a connection to the Congest model. However, transformers do not have the same limitations as GNNs, since the computational substrate of a transformer does not depend on the input graph in the way it is with GNNs. Thus, we cannot directly import lower bounds for Congest to obtain lower bounds for transformers.
Transformers are also related to other families of invariant and equivariant networks. Our focus on \(\mathrm{Match2}\) and \(\mathrm{Match3}\) (and related problems) was inspired by the separation results of Zweig and Bruna (2022) between models for processing sets: Deep Sets (Qi et al., 2017; Zaheer et al., 2017), which are "singleton symmetric", and the more expressive Relational Pooling networks (Santoro et al., 2017), which are only "pairwise symmetric".
### Conclusion and future work
Our primary contributions are to present a multi-faceted story about transformer approximation: firstly, \(q\mathrm{SA}\) separates transformer models approximation-theoretically from RNNs and MLPs, and moreover the attention embedding dimension both necessary and sufficient for \(q\mathrm{SA}\) scale directly with \(q\), meaning \(q\mathrm{SA}\) also functions to characterize representation power amongst different transformers. Secondly, while single units of self-attention can solve the \(\mathrm{Match2}\) task, even wide layers of self-attention with high-dimensional embeddings cannot solve \(\mathrm{Match3}\), and we believe that deeper models cannot as well. This question of deeper models is stated as a formal conjecture and addressed heuristically in Appendix C.6, using both information- and communication-theoretic proof techniques, both of which we feel are significant steps towards a complete proof.
While our investigation is purely approximation-theoretic, we also include in Appendix D a preliminary empirical study, showing that attention can learn \(q\mathrm{SA}\) with vastly fewer samples than recurrent networks and MLPs; we feel this further emphasizes the fundamental value of \(q\mathrm{SA}\), and constitutes an exciting direction for future work.
Beyond the explicit open question in Informal Conjecture 1, we anticipate that future research could connect the separation results proved in this work to formal linguistic theory and empirical work on attention matrix interpretation. This work examines \(\mathrm{Match2}\) and \(\mathrm{Match3}\) because we believe that the former could represent a key primitive for language processing tasks such as co-referencing, while the latter represents a natural extension of the former that likely is _not_ necessary for language modeling. Rather, it may be possible that language modeling performs triple-wise modeling for tasks such as the identification of subject, verb, and object components by relying on pairwise matching constructions and "clues" learned within an embedding, such as those encoded in the toy problems \(\mathrm{Match3Bigram}\) and \(\mathrm{Match3Local}\). That is, transformers serve as a useful foundational model for language modeling because of their abilities to integrate contextual clues and pairwise communication, and while they are not extensible to "purely triple-wise problems," most practical sequential problems have some efficient decomposition to pairwise structures that can be easily exploited by these architectures. Future work by linguists, theoretical computer scientists, and empirical NLP practitioners could assess how foundational our primitives are and study whether there are any practical triple-wise problems that transformer models fail to solve.
## 2 Preliminaries
Let \(\mathbb{B}^{d}=\{x\in\mathbb{R}^{d}:\left\lVert x\right\rVert_{2}\leq 1\}\) denote the unit ball in \(\mathbb{R}^{d}\), and let \([n]=\{1,2,\ldots,n\}\) denote the first \(n\) positive integers. The expression \(\mathbb{1}\left\{P\right\}\) equals \(1\) if predicate \(P\) is true and \(0\) otherwise. The row-wise softmax operator applied to matrix \(A\in\mathbb{R}^{N\times M}\) returns
\[\mathrm{softmax}(A)_{i,j}=\frac{\exp(A_{i,j})}{\sum_{j^{\prime}=1}^{M}\exp(A_{ i,j^{\prime}})}.\]
### Attention units and transformer architectures
We first introduce the concept of self-attention, which is used as the building block of all transformer architectures included in this paper.
**Definition 1**.: For input dimension \(d\), output dimension \(d^{\prime}\), embedding dimension \(m\), precision \(p\), and matrices \(Q,K\in\mathbb{R}^{d\times m}\) and \(V\in\mathbb{R}^{d\times d^{\prime}}\) (encoded using \(p\)-bit fixed-point numbers), a _self-attention unit_ is a function \(f_{Q,K,V}:\mathbb{R}^{N\times d}\rightarrow\mathbb{R}^{N\times d}\) with
\[f_{Q,K,V}(X)=\mathrm{softmax}(XQK^{\intercal}X^{\intercal})XV.\]
Let \(\mathcal{A}_{d,m,d^{\prime},p}=\{f_{Q,K,V}:Q,K,V\}\) denote all such self-attention units.
Self-attention units can be computed in parallel to create multi-headed attention.
**Definition 2**.: For head-count \(H\) and self-attention units \(f_{1},\ldots,f_{H}\in\mathcal{A}_{d,m,d^{\prime},p}\), \(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\)\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\(\)\nexists\)\(\)\(\)\(\nexists\)\(\)\(\)\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(\)\(\)\nexists\)\(\nexists\)\(\)\(\nexists\)\(\)\(\nexists\)\(represent a full transformer model comprising \(D\) layers of \(H\)-headed self-attention with interspersed MLPs.
While two key features of transformer architectures--the residual connection and the positional embedding--are conspicuously missing from this formalism, the two can be implemented easily under the framework. We can include a positional embedding by encoding the index as a coordinate of the input, i.e. \(x_{i,1}=i\). Then, the subsequent MLP transformation \(\phi(X)\) can incorporate \(i\) suitably into the embedding. A residual connection can be included additively as input to a multi-layer perceptron layer (as is standard) by implementing an "approximate identity" attention head \(f\) with \(Q,K\) and \(V=I_{m}\) set to ensure that \(f(X)\approx X\).2
Footnote 2: A simple construction involves letting \(XQ=XK\) with \(\operatorname{iid}\) Gaussian columns fixed for every index \(i\). Then, the diagonals of \(XQK^{\top}X^{\top}\) are far larger than all other entries and its softmax is approximately \(I_{N}\).
We periodically consider transformers implemented with real-valued arithmetic with infinite bit complexity; in those cases, we omit the bit complexity \(p\) from the notation.
Finally, we assume for the proof of Theorem 3 that the model is permitted to implement a single <END> token at the end of a sequence. That is, we say that a model \(f\in\mathcal{T}^{D,D}_{d,m,d^{\prime},p}\) represents a target \(h:\mathbb{R}^{N\times d}\rightarrow\mathbb{R}^{N\times d^{\prime}}\) if \(f(X^{\prime})_{1:N}=g(X)\) when \(X^{\prime}=(x_{1},\ldots,x_{N},x^{\prime})\) for constant-valued \(x^{\prime}\in\mathbb{R}^{d}\).
## 3 Sparse averaging with attention units
We present the sparse averaging task to highlight the ability of transformer architectures to simulate a wide range of meaningful interactions between input elements. This task demonstrates how the embedding dimension of a self-attention unit modulates the expressive capabilities of the architecture, while showcasing the inabilities of fully-connected and recurrent neural networks to capture similar interactions (see Appendix A).
**Definition 4**.: For sparsity \(q\), problem dimension \(d^{\prime}\), and input dimension \(d=d^{\prime}+q+1\), consider an input \(X=(x_{1},\ldots,x_{N})\in\mathbb{R}^{N\times d}\) with \(x_{i}=(z_{i};y_{i};i)\) for \(z_{i}\in\mathbb{B}^{d^{\prime}}\) and \(y_{i}\in\binom{[N]}{q}\).3 Let the \(q\)-_sparse average_ be
Footnote 3: We may encode a \(q\) element subset of \([N]\) as a vector in \([N]^{q}\) constrained to have distinct components.
\[q\mathrm{SA}(X)=\left(\frac{1}{q}\sum_{j=1}^{q}z_{y_{i,j}}\right)_{i\in[N]}.\]
For accuracy \(\epsilon>0\), a function \(f:\mathbb{R}^{N\times d}\rightarrow\mathbb{R}^{N\times d^{\prime}}\)\(\epsilon\)-_approximates_\(q\mathrm{SA}\) if for all \(X\),
\[\max_{i\in[N]}\left\lVert f(X)_{i}-q\mathrm{SA}(X)_{i}\right\rVert_{2}\leq\epsilon.\]
Figure 0(a) visualizes the sparse averaging task as a bipartite graph between subsets \(y_{i}\) and elements \(z_{i}\) with corresponding averages. Theorems 2 and 4 jointly show that the minimum embedding dimension \(m\) of single self-attention units \(\mathcal{A}^{\prime}_{d,m,d^{\prime},p}\) that \(O(\frac{1}{q})\)-approximate \(q\mathrm{SA}\) scales linearly with \(q\). We believe that the sparse averaging problem is thus a canonical problem establishing the representational capabilities and inductive biases of self-attention units.
### Self-attention can approximate \(q\mathrm{SA}\) when \(m\gtrsim q\)
Our principle positive result shows that the sparse averaging task \(q\mathrm{SA}\) can be approximately solved using fixed-precision arithmetic self-attention units with embedding dimension \(m\) growing with \(q\log N\).
**Theorem 2** (Fixed-precision).: _For any \(N\), any \(m\geq\Omega(d^{\prime}+q\log N)\), any \(\epsilon\in(0,1)\), and \(p=\Omega(\log(\frac{q}{\epsilon}\log N))\), there exists some \(f\in\mathcal{A}^{\prime}_{d,m,d^{\prime},p}\) that \(\epsilon\)-approximates \(q\mathrm{SA}\)._
While the full proof appears in Appendix B.1, we briefly sketch the argument here. Because the output of a self-attention unit is a convex combination of rows of the value matrix \(\phi(X)V\in\mathbb{R}^{N\times d^{\prime}}\), a natural way to approximate \(q\mathrm{SA}\) with a unit of self-attention is to let each value be the corresponding vector in the average (i.e. \(V^{\mathrm{ T}}\phi(x_{i})=z_{i}\)) and choose the key and query functions in order to ensure that the attention matrix satisfies
\[\mathrm{softmax}(\phi(X)QK^{\mathrm{ T}}\phi(X)^{\mathrm{ T}})_{i,j}\approx\begin{cases}\frac{1}{q}&\text{if $j\in y_{i}$,}\\ 0&\text{otherwise.}\end{cases}\]
To do so, let each key \(K^{\mathrm{ T}}\phi(x_{i})\) represent a fixed vertex on a convex polytope, which depends only on index \(i\) and is constructed from random binary vectors. We select each query \(Q^{\mathrm{ T}}\phi(x_{i})\) to ensure that \(\phi(x_{i})^{\mathrm{ T}}QK^{\mathrm{ T}}\phi(x_{j})\) is a fixed large value if \(j\in y_{i}\) and a slightly smaller value otherwise. We obtain the precise query, key, and value embeddings by employing tools from dual certificate analysis from the theory of compressed sensing.
We visualize this construction in Figure 0(b) and 0(c) for \(q=3\) and \(d^{\prime}=4\), which presents the associated attention and value matrices necessary for the construction, and plots a polytope of keys (red dots) with each face corresponding to each subset \(y_{i}\) (green dots). The construction is empirically relevant; Figure 2 shows that a unit of self-attention trained on data generated by the \(q\mathrm{SA}\) task recovers a similar attention matrix to the one stipulated in our construction and visualized in Figure 0(b).
The logarithmic dependence of the embedding dimension \(m\) on the sequence length \(N\) can be eliminated by considering self-attention units with real-valued arithmetic with infinite bit complexity.
**Theorem 3** (Infinite-precision).: _For fixed \(N\), \(m\geq\Omega(d^{\prime}+q)\) and \(\epsilon>0\), there exists some \(f\in\mathcal{A}^{\prime}_{d,m,d^{\prime}}\) that \(\epsilon\)-approximates \(q\mathrm{SA}\)._
Figure 1: A visualization of the \(q\mathrm{SA}\) function outputs given a sequence of inputs \((z_{i};y_{i};i)_{i\in[N]}\) as a bipartite graph between subsets \(y_{i}\) and vectors \(z_{i}\) (a), and of the attention matrix (b) and underlying embeddings (c) that produce the self-attention construction in Theorem 2.
Figure 2: Attention matrix \(\mathrm{softmax}(\phi(X)QK^{\mathrm{ T}}\phi(X)^{\mathrm{ T}})\in\mathbb{R}^{20\times 20}\) for a fixed example after \(T\) epochs of training a self-attention unit to solve \(q\mathrm{SA}\) for \(q=3\). Each row \(i\) corresponds to subset \(y_{i}\), and each cell \(j\in y_{i}\) is outlined in red. See Appendix D for experimental details.
The proof of Theorem 3 employs a similar polytope-based construction in Appendix B.2, relying on a cyclic polytope rather than one drawn from discrete boolean vectors. Theorem 16 proves the near-optimality of _that_ bound by employing a geometric argument to show that a variant of \(q\mathrm{SA}\) can only be approximated by a restricted family of self-attention units with a sufficiently high-dimensional embedding.
### Self-attention cannot approximate \(q\mathrm{SA}\) when \(m\lesssim q\)
We show that the construction used to prove Theorem 2 is nearly optimal.
**Theorem 4**.: _For any sufficiently large \(q\), any \(N\geq 2q+1\), and any \(d^{\prime}\geq 1\), there exists a universal constant \(c\) such that if \(mp\leq cq\), then no \(f\in\mathcal{T}^{1,1}_{d,m,d^{\prime},p}\) exists that \(\frac{1}{2q}\)-approximates \(q\mathrm{SA}\)._
(By choosing \(p=O(\log(q\log N))\), Theorem 2 is shown to be optimal up to logarithmic factors of \(q\) and doubly-logarithmic factors of \(N\).)
The proof of Theorem 4 employs a standard communication complexity argument based on a reduction from the following _set disjointness_ problem in the two-party communication model, in which each party possesses a subset of an \(n\) element domain (encoded as \(n\)-bit strings), and they wish to jointly determine whether their subsets are disjoint. We note that communication complexity is commonly-used technique for proving lower bounds on the representational power of circuits and feedforward neural networks (see, e.g., Karchmer and Wigderson, 1988; Ben-David et al., 2002; Martens et al., 2013; Vardi et al., 2021).
**Fact 5** (Set disjointness communication lower bound (Yao, 1979)).: _Suppose Alice and Bob are given inputs \(a,b\in\{0,1\}^{n}\), respectively, with the goal of jointly computing \(\mathrm{DISJ}(a,b)=\max_{i}a_{i}b_{i}\) by alternately sending a single bit message to the other party over a sequence of communication rounds. Any deterministic protocol for computing \(\mathrm{DISJ}(a,b)\) requires at least \(n\) rounds of communication._
Our proof designs a communication protocol that Alice and Bob use to jointly compute \(\mathrm{DISJ}(a,b)\) when \(n=q\) in \(O(mp)\) rounds of communication, under the assumption that such an \(f\) exists that closely approximates \(q\mathrm{SA}\).
* Alice encodes her input \(a\) in a single subset by letting \(y_{2q+1}=\{2i+a_{i}-1:i\in[q]\}\).
* Bob uses his input \(b\) to assign \(z_{2i-1}\) to \(2b_{i}-1\) and \(z_{2i}=-1\) for all \(i\in[q]\).
* All other input components are set to constant values known by both parties.
Alice sends her \(mp\)-bit query embedding \(Q^{\mathrm{r}}\phi(x_{2q+1})\) bit-by-bit to Bob, who approximately computes \(q\mathrm{SA}\) by determining the outcome of \(f\). The crux of the reduction shows that \(q\mathrm{SA}(X)_{2q+1}=-1\) if and only if \(a_{i}b_{i}=0\) for all \(i\in[q]\), which allows Bob to determine \(\mathrm{DISJ}(a,b)\).
We visualize the protocol in Figure 3 and give the proof in Appendix B.3. The proofs of Theorems 7, 11, 21, and 23 employ similar communication complexity reductions to \(\mathrm{DISJ}\).
Figure 3: The \(mp\)-bit communication protocol used to reduce the hardness of computing \(q\mathrm{SA}\) with a single unit of self-attention to the hardness of solving the \(\mathrm{DISJ}\) communication problem for the proof of Theorem 4 for \(q=4\).
Standard transformer models can only efficiently represent intrinsically pairwise functions
In this section, we argue that the standard transformer architecture is unable to efficiently represent functions that do not decompose into a small number of pairwise-symmetric functions. We do this by contrasting the (in)approximability of intrinsically pairwise and triple-wise functions, respectively \(\mathrm{Match2}\) and \(\mathrm{Match3}\) (defined in (1) and (2)), and their variants.
### Efficient computation of \(\mathrm{Match2}\) with standard self-attention
We first show that \(\mathrm{Match2}\) can be efficiently approximated by a single standard (pairwise) self-attention unit.
**Theorem 6**.: _For any input size \(N\), input range \(M=N^{O(1)}\), and fixed-precision bit complexity \(p=O(\log M)\), there exists a transformer architecture \(f\in\mathcal{T}^{1,1}_{1,m,1,p}\) with a single self-attention unit with embedding dimension \(m=3\) such that for all \(X\in[M]^{N}\), \(f(X)=\mathrm{Match2}(X)\)._
The proof, given in Appendix C.1 uses both a "blank token" and a trigonometric positional embedding, which ensures that
\[\phi(x_{i})^{\intercal}QK^{\intercal}\phi(x_{j})=c\sum_{k=1}^{d}\cos\left( \frac{2\pi(x_{i,k}+x_{j,k})}{M}\right)\]
for some sufficiently large constant \(c\). This embedding ensures that a cell of the attention matrix \(\mathrm{softmax}(\phi(X)QK^{\intercal}\phi(X)^{\intercal})_{i,j}\) is extremely close to zero, unless \(x_{i}=-x_{j}\pmod{M}\).
### Hardness of computing \(\mathrm{Match3}\) with a multi-headed self-attention layer
Although \(\mathrm{Match2}\) can be efficiently represented using a single unit of standard self-attention, representing \(\mathrm{Match3}\) using an entire layer of multi-headed attention units is impossible unless either the number of heads \(H\), the embedding dimension \(m\), or the precision \(p\) grows as \(N^{\Omega(1)}\).
**Theorem 7**.: _There is universal constant \(c>0\) such that for sufficiently large \(N\), and any \(M\geq N+1\), if \(mpH\leq cN/\log\log N\), then there is no \(f\in\mathcal{T}^{1,H}_{1,m,1,p}\) satisfying \(f(X)=\mathrm{Match3}(X)\) for all \(X\in[M]^{N}\)._
We give the proof in Appendix C.2. Like that of Theorem 4, the proof relies on a reduction from set disjointness in two-party communication. The proof of the lower bound applies a domain-restricted variant of \(\mathrm{Match3}\), which actually makes the problem substantially simpler to solve. In Remark 1, we show how this variant of \(\mathrm{Match3}\) introduces a _depth separation_ between the representational powers of single-layer and two-layer transformer models.
As mentioned in the introduction, we also conjecture that multiple layers of multi-headed attention are subject to the same impossibility (Conjecture 19). The impossibility is specific to standard (pairwise) attention; in Appendix C.4, we show that \(\mathrm{Match3}\)_can_ be efficiently computed with a single unit of _third-order_ self-attention.
### More efficient constructions for simplified \(\mathrm{Match3}\) computations
While the previous sections suggests that no efficient construction exists to compute \(\mathrm{Match3}\) with standard transformer models, practical examples of triple detection bound. For example, a transformer-based language model will likely succeed in linking a subject/verb/object triple because all three tokens likely inhabit the same local region and because the model could agglomerate the triple by first identifying a pair and then adding the third. Here, we introduce two variants on the \(\mathrm{Match3}\) problem that have additional structure to serve as hints. The first variant specifies triple sums comprising the input element and a neighboring pair elsewhere in the sequence: for each \(i\in[N]\),
\[\mathrm{Match3Bigram}(X)_{i}=1\left\{\exists j\;\mathrm{s.t.}\;x_{i}+x_{j}+x_{ j+1}=0\left(\mathrm{mod}\;M\right)\right\}.\]
The second focuses on localized sums, where are all components of a triple must be within a fixed range of constant width \(K\ll N\): for each \(i\in[N]\),
\[\mathrm{Match3Local}(X)_{i}=1\left\{\exists j_{1},j_{2}\;\mathrm{s.t.}\;x_{i }+x_{j_{1}}+x_{j_{2}}=0\left(\mathrm{mod}\;M\right),\left|i-j_{1}\right|, \left|i-j_{2}\right|\leq K\right\}.\]
We show that the two can be efficiently represented using compact standard transformer models.
**Theorem 8**.: _For any \(N\), \(M=N^{O(1)}\), and \(p=O(\log M)\), there exists a transformer architecture \(f\in\mathcal{T}^{1,1}_{1,m,1,p}\) with embedding dimension \(m=3\) and depth \(D=2\) such that for all \(X\in[M]^{N\times d}\), \(f(X)=\operatorname{Match3Big}(X)\)._
Informally, the first layer of the construction uses a sinusoidal positional encoding to compute each bigram sum \(x_{j}+x_{j+1}\) in the \(j\)th element of the sequence. The second layer applies the \(\operatorname{Match2}\) construction provided by Theorem 6 to determine whether there exists a \(j\) for each \(i\) such that \(x_{i}+x_{j}+x_{j+1}=0\pmod{M}\).
**Theorem 9**.: _For any \(d\), \(N\), \(M=N^{O(1)}\), \(p=O(\log M)\), and \(K\leq N\), there exists a transformer architecture \(f\in\mathcal{T}^{1,1}_{1,m,1,p}\) with embedding dimension \(m=O(K\log N)\) and bit-complexity \(p=O(\log(K\log N))\) such that for all \(X\in[M]^{N\times d}\), \(f(X)=\operatorname{Match3Local}(X)\)._
Proof.: We implement the localized construction by using Theorem 2 to construct a specific sparse simultaneous average of the inputs with \(q:=2K+1\) and \(d^{\prime}:=2K+1\). To do so, we use the input MLP to convert \(x_{i}\) to the embedding \((z_{i};y_{i};i)\), for zero-padded input
\[z_{i}=x_{i}e_{\bar{i}}\in\mathbb{R}^{2K+1}\]
for \(\bar{i}=i\pmod{2K+1}\) and subset
\[y_{i}=\{i-K,i-K+1,\ldots,i+K\}\in\binom{[N]}{2K+1}.\]
This construction ensures that the \(i\)th element of self-attention output computes (a rotation of) \((x_{i-K},x_{i-K+1},\ldots,x_{i+K})\). An output MLP can then verify whether any matching triples involving \(x_{i}\) exist among those vectors.
## Acknowledgments and Disclosure of Funding
We are grateful for many discussions with and feedback from Navid Ardeshir, Peter Bartlett, Alberto Bietti, Yuval Efron, Christos Papadimitriou, Shivam Nadimpalli, Rocco Servedio, Yusu Wang, and Cyril Zhang. This work was supported in part by NSF grants CCF-1740833 and IIS-1563785, a JP Morgan Faculty Award, and an NSF Graduate Research Fellowship.
## References
* Aamand et al. [2022] Anders Aamand, Justin Chen, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Nicholas Schiefer, Sandeep Silwal, and Tal Wagner. Exponentially improving the complexity of simulating the Weisfeiler-Lehman test with graph neural networks. In _Advances in Neural Information Processing Systems 35_, 2022.
* Angluin [1980] Dana Angluin. Local and global properties in networks of processors. In _Proceedings of the Twelfth Annual ACM Symposium on Theory of Computing_, 1980.
* Ben-David et al. [2002] Shai Ben-David, Nadav Eiron, and Hans Ulrich Simon. Limitations of learning via embeddings in euclidean half spaces. _Journal of Machine Learning Research_, 3(Nov):441-461, 2002.
* Bhattacharya et al. [2020] Satwik Bhattacharya, Kabir Ahuja, and Navin Goyal. On the ability and limitations of transformers to recognize formal languages. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing_, 2020.
* Bhattacharya et al. [2022] Satwik Bhattacharya, Arkil Patel, Varun Kanade, and Phil Blunsom. Simplicity bias in transformers and their ability to learn sparse boolean functions. _arXiv preprint arXiv:2211.12316_, 2022.
* Brown et al. [2020] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. _arXiv preprint arXiv:2005.14165_, 2020.
* Chen et al. [2020]Emmanuel J Candes and Terence Tao. Decoding by linear programming. _IEEE transactions on information theory_, 51(12):4203-4215, 2005.
* Chen et al. (2022) Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, and Ming Gao. Cat-probing: A metric-based approach to interpret how pre-trained models for programming language attend code structure. _arXiv preprint arXiv:2210.04633_, 2022.
* Chen et al. (2019) Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. On the equivalence between graph isomorphism testing and function approximation with GNNs. In _Advances in Neural Information Processing Systems 32_, 2019.
* Clark et al. (2019) Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. What does bert look at? an analysis of bert's attention. _arXiv preprint arXiv:1906.04341_, 2019.
* Daniely (2017) Amit Daniely. Depth separation for neural networks. In Satyen Kale and Ohad Shamir, editors, _Proceedings of the 2017 Conference on Learning Theory_, volume 65 of _Proceedings of Machine Learning Research_, pages 690-696. PMLR, 07-10 Jul 2017. URL [https://proceedings.mlr.press/v65/daniely17a.html](https://proceedings.mlr.press/v65/daniely17a.html).
* Dosovitskiy et al. (2021) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_, 2021.
* Edelman et al. (2022) Benjamin L. Edelman, Surbhi Goel, Sham M. Kakade, and Cyril Zhang. Inductive biases and variable creation in self-attention mechanisms. In _International Conference on Machine Learning_, 2022.
* Eldan and Shamir (2016) Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In Vitaly Feldman, Alexander Rakhlin, and Ohad Shamir, editors, _29th Annual Conference on Learning Theory_, volume 49 of _Proceedings of Machine Learning Research_, pages 907-940, Columbia University, New York, New York, USA, 23-26 Jun 2016. PMLR. URL [https://proceedings.mlr.press/v49/eldan16.html](https://proceedings.mlr.press/v49/eldan16.html).
* Furst et al. (1984) Merrick Furst, James B Saxe, and Michael Sipser. Parity, circuits, and the polynomial-time hierarchy. _Mathematical systems theory_, 17(1):13-27, 1984.
* Gale (1963) David Gale. Neighborly and cyclic polytopes. In _Proc. Sympos. Pure Math_, volume 7, pages 225-232, 1963.
* Hahn (2020) Michael Hahn. Theoretical limitations of self-attention in neural sequence models. _Trans. Assoc. Comput. Linguistics_, 8:156-171, 2020. doi: 10.1162/tacl_{a}{_0}{0}{3}}{0}6. URL [https://doi.org/10.1162/tacl_a_00306](https://doi.org/10.1162/tacl_a_00306).
* Hao et al. (2022) Yiding Hao, Dana Angluin, and Robert Frank. Formal language recognition by hard attention transformers: Perspectives from circuit complexity. _Trans. Assoc. Comput. Linguistics_, 10:800-810, 2022. URL [https://transacl.org/ojs/index.php/tacl/article/view/3765](https://transacl.org/ojs/index.php/tacl/article/view/3765).
* Hewitt and Manning (2019) John Hewitt and Christopher D Manning. A structural probe for finding syntax in word representations. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, 2019.
* Hornik et al. (1989) Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. _Neural Netw._, 2(5):359-366, July 1989.
* Jumper et al. (2021) John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. _Nature_, 596(7873):583-589, 2021.
* Karchmer and Wigderson (1988) Mauricio Karchmer and Avi Wigderson. Monotone circuits for connectivity require super-logarithmic depth. In _Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing_, 1988.
* Keriven and Peyre (2019) Nicolas Keriven and Gabriel Peyre. Universal invariant and equivariant graph neural networks. In _Advances in Neural Information Processing Systems 32_, 2019.
* Karchmer et al. (2019)Valerii Likhosherstov, Krzysztof Choromanski, and Adrian Weller. On the expressive power of self-attention matrices. _arXiv preprint arXiv:2106.03764_, 2021.
* Liu et al. (2022) Bingbin Liu, Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, and Cyril Zhang. Transformers learn shortcuts to automata. _CoRR_, abs/2210.10749, 2022. doi: 10.48550/arXiv.2210.10749.
* Loukas (2019) Andreas Loukas. What graph neural networks cannot learn: depth vs width. _arXiv preprint arXiv:1907.03199_, 2019.
* Maron et al. (2019) Haggai Maron, Ethan Fetaya, Nimrod Segol, and Yaron Lipman. On the universality of invariant networks. In _International Conference on Machine Learning_, 2019.
* Martens et al. (2013) James Martens, Arkadev Chattopadhya, Toni Pitassi, and Richard Zemel. On the representational efficiency of restricted boltzmann machines. In _Advances in Neural Information Processing Systems 26_, 2013.
* Mendelson et al. (2007) Shahar Mendelson, Alain Pajor, and Nicole Tomczak-Jaegermann. Reconstruction and subgaussian operators in asymptotic geometric analysis. _Geometric and Functional Analysis_, 17(4):1248-1282, 2007.
* Morris et al. (2019) Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In _AAAI Conference on Artificial Intelligence_, 2019.
* OpenAI (2023) OpenAI. Gpt-4 technical report, 2023.
* Peleg (2000) David Peleg. _Distributed computing: a locality-sensitive approach_. SIAM, 2000.
* Perez et al. (2019) Jorge Perez, Javier Marinkovic, and Pablo Barcelo. On the turing completeness of modern neural network architectures. _arXiv preprint arXiv:1901.03429_, 2019.
* Qi et al. (2017) Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2017.
* Rogers et al. (2020) Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A primer in bertology: What we know about how bert works. _Transactions of the Association for Computational Linguistics_, 8:842-866, Dec 2020. ISSN 2307-387X. doi: 10.1162/tacl_a_00349. URL [http://dx.doi.org/10.1162/tacl_a_00349](http://dx.doi.org/10.1162/tacl_a_00349).
* Santoro et al. (2017) Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In _Advances in Neural Information Processing Systems 30_, 2017.
* Sauer (1972) Norbert Sauer. On the density of families of sets. _Journal of Combinatorial Theory, Series A_, 13(1):145-147, 1972.
* Shelah (1972) Saharon Shelah. A combinatorial problem; stability and order for models and theories in infinitary languages. _Pacific Journal of Mathematics_, 41(1):247-261, 1972.
* Telgarsky (2016) Matus Telgarsky. Benefits of depth in neural networks. In Vitaly Feldman, Alexander Rakhlin, and Ohad Shamir, editors, _29th Annual Conference on Learning Theory_, volume 49 of _Proceedings of Machine Learning Research_, pages 1517-1539, Columbia University, New York, New York, USA, 23-26 Jun 2016. PMLR. URL [https://proceedings.mlr.press/v49/telgarsky16.html](https://proceedings.mlr.press/v49/telgarsky16.html).
* Vapnik and Chervonenkis (1968) Vladimir Naumovich Vapnik and Aleksei Yakovlevich Chervonenkis. The uniform convergence of frequencies of the appearance of events to their probabilities. _Doklady Akademii Nauk_, 181(4):781-783, 1968.
* Vardi et al. (2021) Gal Vardi, Daniel Reichman, Toniann Pitassi, and Ohad Shamir. Size and depth separation in approximating benign functions with neural networks. In _Conference on Learning Theory_, 2021.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in Neural Information Processing Systems 30_, 2017.
* Wang et al. (2018)Colin Wei, Yining Chen, and Tengyu Ma. Statistically meaningful approximation: a case study on approximating turing machines with transformers. _Advances in Neural Information Processing Systems_, 35:12071-12083, 2022.
* Xu et al. (2018) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? _arXiv preprint arXiv:1810.00826_, 2018.
* Yao (1979) Andrew Chi-Chih Yao. Some complexity questions related to distributive computing (preliminary report). In _Proceedings of the Eleventh Annual ACM Symposium on Theory of Computing_, 1979.
* Yao et al. (2021) Shunyu Yao, Binghui Peng, Christos H. Papadimitriou, and Karthik Narasimhan. Self-attention networks can process bounded hierarchical languages. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing_, 2021.
* Yun et al. (2020) Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv Kumar. Are transformers universal approximators of sequence-to-sequence functions? In _International Conference on Learning Representations_, 2020.
* Zaheer et al. (2017) Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In _Advances in Neural Information Processing Systems 30_, 2017.
* Ziegler (2006) Gunter M Ziegler. Lectures on polytopes. _Graduate Texts in Mathematics_, 152, 2006.
* Zweig and Bruna (2022) Aaron Zweig and Joan Bruna. Exponential separations in symmetric neural networks. _CoRR_, abs/2206.01266, 2022. doi: 10.48550/arXiv.2206.01266.
Fully-connected neural networks and recurrent neural networks cannot efficiently approximate \(q\mathrm{SA}\)
### Only wide fully-connected neural networks can approximate \(q\mathrm{SA}\)
In this section, we show that any fully-connected neural network that approximates \(q\mathrm{SA}:\mathbb{R}^{Nd}\rightarrow\mathbb{R}^{Nd^{\prime}}\) must have width \(m=\Omega(N)\).4 We consider networks of the form \(f(x)=g(Wx)\) for some weight matrix \(W\in\mathbb{R}^{m\times Nd}\) (the first layer weights) and arbitrary function \(g:\mathbb{R}^{m}\rightarrow\mathbb{R}^{Nd^{\prime}}\) (computed by subsequent layers of a neural network).
Footnote 4: We regard inputs as \(Nd\)-dimensional vectors rather than \(N\times d\) matrices.
**Theorem 10**.: _Suppose \(q\leq\frac{N}{2}\). Any fully-connected neural network \(f\) defined as above that \(\frac{1}{2q}\) approximates \(q\mathrm{SA}\) satisfies \(m\geq\mathrm{rank}(W)\geq\frac{Nd^{\prime}}{2}\)._
Proof.: For simplicity, we arrange the input as
\[x=(1;\ldots;N;y_{1};\ldots;y_{N};z_{1};\ldots;z_{N})\]
and \(W=[\tilde{W};V_{1};\ldots;V_{N}]\) with \(z_{1},\ldots,z_{N}\in\mathbb{B}^{d^{\prime}}\), \(\tilde{W}\in\mathbb{R}^{m\times N(d-d^{\prime})}\), and \(V_{1},\ldots,V_{N}\in\mathbb{R}^{m\times d^{\prime}}\). If \(\mathrm{rank}(W)\leq\frac{Nd^{\prime}}{2}-1\), then so too is \(\mathrm{rank}([V_{q};\ldots;V_{N}])\leq\frac{Nd^{\prime}}{2}-1\), and \([V_{q};\ldots;V_{N}]\) has a nontrivial null space containing a nonzero vector \(u=(u_{q};\ldots;u_{N})\in\mathbb{R}^{(N-q)d^{\prime}}\). Let
\[\xi=\frac{1}{\max_{j\in\{q,\ldots,N\}}\left\|u_{j}\right\|_{2}}(u_{q};\ldots;u_ {N}),\]
\(z=(\vec{0};\ldots;\vec{0};\xi_{q};\ldots;\xi_{N})\), and \(z^{\prime}=(\vec{0};\ldots;\vec{0};-\xi_{q};\ldots;-\xi_{N})\). Then,
1. \(z_{j},z^{\prime}_{j}\in\mathbb{B}^{d^{\prime}}\) for all \(j\in[N]\);
2. \(V_{j}z_{j}=V_{j}z^{\prime}_{j}=0\) for all \(j\in[N]\); and
3. \(\|z_{j^{*}}-z^{\prime}_{j^{*}}\|_{2}=2\) for some \(j^{*}\in\{q,\ldots,N\}\).
Therefore, for any \(y_{1},\ldots,y_{N}\in\binom{[N]}{q}\), respective \(x=(1;\ldots;N;y_{1};\ldots;y_{N};z_{1};\ldots;z_{N})\) and \(x^{\prime}=(1;\ldots;N;y_{1};\ldots;y_{N};z^{\prime}_{1};\ldots;z^{\prime}_{N})\) satisfy \(f(x)=f(x^{\prime})\). Consider \(y\) with \(y_{j}=(1,\ldots,q-1,j)\) for each \(j\in\{q,\ldots,N\}\). Then,
\[q\mathrm{SA}(x)_{j}=\frac{1}{q}\xi_{j}\text{ and }q\mathrm{SA}(x^{\prime})_{j}= -\frac{1}{q}\xi_{j}.\]
Hence, \(\left\|q\mathrm{SA}(x)_{j^{*}}-q\mathrm{SA}(x^{\prime})_{j^{*}}\right\|_{2} \geq\frac{2}{q}\). Because \(f(x)=f(x^{\prime})\),
\[\max\left(\left\|f(x)-q\mathrm{SA}(x)_{j^{*}}\right\|_{2},\left\|f(x^{\prime}) -q\mathrm{SA}(x^{\prime})_{j^{*}}\right\|_{2}\right)\geq\frac{1}{q},\]
so \(f\) can approximate \(q\mathrm{SA}\) to accuracy no better than \(\frac{1}{q}\).
### Only high-memory recurrent neural networks can approximate \(q\mathrm{SA}\)
In this section, we show that any memory-bounded algorithm that approximates \(q\mathrm{SA}:\mathbb{R}^{N\times d}\rightarrow\mathbb{R}^{N\times d^{\prime}}\) must use a large "hidden state" (memory) as it processes the input elements. This lower bound applies to various recurrent neural network (RNN) architectures.
A memory-bounded algorithm with an \(m\)-bit memory processes input \(X\in\mathbb{R}^{N\times d}\) sequentially as follows. There is an initial memory state \(h_{0}\in\{0,1\}^{m}\). For \(i=1,2,\ldots,N\), the algorithm computes the \(i\)-th output \(f(X)_{i}\in\mathbb{R}^{d^{\prime}}\) and the updated memory state \(h_{i}\) as a function of the input \(x_{i}\in\mathbb{R}^{d}\) and previous memory state \(h_{i-1}\):
\[(f(X)_{i},h_{i})=g_{i}(x_{i},h_{i-1}),\]where \(g_{i}\colon\mathbb{R}^{d}\times\{0,1\}^{m}\to\mathbb{R}^{d^{\prime}}\times\{0,1 \}^{m}\) is permitted to be an arbitrary function, and \(f\colon\mathbb{R}^{N\times d}\to\mathbb{R}^{N\times d^{\prime}}\) is the function computed by the algorithm.
Our lower bound applies to algorithms that only need to solve the subclass of "causal" instances of \(q\mathrm{SA}\) in which the input \(X=((z_{i},y_{i},i))_{i\in[N]}\in\mathbb{R}^{N\times d}\) is promised to satisfy \(y_{i}=\emptyset\) for all \(i\leq N/2+1\), and \(y_{i}\subseteq\{1,\ldots,N/2+1\}\) for all \(i>N/2+1\).
**Theorem 11**.: _For any \(\varepsilon\in(0,1)\), any memory-bounded algorithm that \(\varepsilon\)-approximates \(q\mathrm{SA}\) (for \(q=1\) and \(d^{\prime}=1\)) on the subclass of "causal" instances must have memory \(m\geq(N-1)/2\)._
Proof.: Consider an \(m\)-bit memory-bounded algorithm computing a function \(f\colon\mathbb{R}^{N\times d}\to\mathbb{R}^{N}\) that \(\varepsilon\)-approximates \(q\mathrm{SA}\) (for \(q=1\) and \(d^{\prime}=1\)). We construct, from this algorithm, a communication protocol for DISJ (with \(N=2n+1\)) that uses \(m\) bits of communication.
Let \(a,b\in\{0,1\}^{n}\) be the input for DISJ provided to Alice and Bob, respectively. The protocol is as follows.
1. Alice constructs inputs \(x_{i}=(z_{i},\emptyset,i)\) for \(i=1,\ldots,n+1\), where for each \(i=1,\ldots,n\), \[z_{i}=\begin{cases}+1&\text{if }a_{i}=0,\\ -1&\text{if }a_{i}=1,\end{cases}\] and \[z_{n+1}=+1.\] Bob constructs inputs \(x_{n+1+i}=(0,y_{n+1+i},n+1+i)\) for \(i=1,\ldots,n\), where \[y_{n+1+i}=\begin{cases}\{n+1\}&\text{if }b_{i}=0,\\ \{i\}&\text{if }b_{i}=1.\end{cases}\] Observe that, for this input \(X=(x_{1},\ldots,x_{2n+1})\), we have \[q\mathrm{SA}(X)_{n+1+i}=\begin{cases}+1&\text{if }a_{i}b_{i}=0,\\ -1&\text{if }a_{i}b_{i}=1.\end{cases}\]
2. Alice simulates the memory-bounded algorithm on the first \(n+1\) inputs \(x_{1},\ldots,x_{n+1}\), and sends Bob the \(m\)-bit memory state \(h_{n+1}\). This requires \(m\) bits of communication.
3. Starting with \(h_{n+1}\), Bob continues the simulation of the memory-bounded algorithm on these \(n\) additional inputs \(x_{n+2},\ldots,x_{2n+1}\).
4. If any output \(f(X)_{n+1+i}\) for \(i=1,\ldots,n\) satisfies \[f(X)_{n+1+i}<0,\] then Bob outputs \(1\) (not disjoint); otherwise Bob outputs \(0\) (disjoint).
The approximation guarantee of \(f\) implies that \(\mathrm{sign}(f(X)_{n+1+i})=q\mathrm{SA}(X)_{n+1+i}\) for all \(i=1,\ldots,n\), so Bob outputs \(1\) if and only if \(a\) and \(b\) are not disjoint. Because this protocol for DISJ uses \(m\) bits of communication, by Fact 5, it must be that \(m\geq n=(N-1)/2\).
We note that the proof of Theorem 11 can be simplified by reducing from the INDEX problem, which has a 1-way communication lower bound of \(n\) bits. This suffices for "single pass" algorithms, such as standard RNNs. However, the advantage of the above argument (and reducing from DISJ) is that it easily extends to algorithms that make multiple passes over the input. Such algorithms are able to capture bidirectional recurrent neural net and related models. A straightforward modification of the protocol in the proof of Theorem 11 shows that \(\Omega(N)\) memory is required for any algorithm that makes \(O(1)\) passes over the input (and computes the outputs in a final pass).
Supplementary results for Section 3
### Proof of Theorem 2
**Theorem 2** (Fixed-precision).: _For any \(N\), any \(m\geq\Omega(d^{\prime}+q\log N)\), any \(\epsilon\in(0,1)\), and \(p=\Omega(\log(\frac{q}{\epsilon}\log N))\), there exists some \(f\in\mathcal{A}^{\prime}_{d,m,d^{\prime},p}\) that \(\epsilon\)-approximates \(q\)SA._
Proof.: Before explaining how they are produced by the input MLP, we introduce the corresponding key, value, and query inputs. The values will simply be \(\phi(X)V=(z_{1},\ldots,z_{N})\). For some \(m^{\prime}=\frac{m-d}{2}\), let \(\phi(X)K=(u_{1},\ldots,u_{N})\in\mathbb{R}^{N\times m^{\prime}}\) be embedded key vectors, where \(u_{1},\ldots,u_{N}\in\{\pm 1/\sqrt{m^{\prime}}\}^{m^{\prime}}\) are the columns of a \(m^{\prime}\times N\) matrix satisfying the \((q,1/4)\)-restricted isometry and orthogonality property (Definition 5), as guaranteed to exist by Lemma 12 and the assumption on \(m^{\prime}\). Let \(\alpha:=\lceil 2\log(4N/\epsilon)\rceil\). By Lemma 13, for each \(y\in\binom{[N]}{q}\), there exists \(w_{y}\in\mathbb{R}^{m^{\prime}}\) with \(\|w_{y}\|_{2}\leq 2\sqrt{q}\) satisfying
\[\langle u_{i^{\prime}},w_{y}\rangle =1\quad\text{ for all }i^{\prime}\in y,\] \[|\langle u_{i^{\prime}},w_{y}\rangle| \leq\frac{1}{2}\quad\text{ for all }i^{\prime}\notin y.\]
Given the bounded precision of the model, we are not free to represent the vectors \(w_{y}\) exactly. Under \(p\)-bit precision for \(p\) sufficiently large, we there exists a vector of \(p\)-bit floating point numbers \(\widetilde{w_{y}}\in\mathbb{R}^{m^{\prime}}\) for every \(w_{y}\) with \(\left\|w_{y}\right\|_{2}\leq 2\sqrt{q}\) satisfying \(\left\|\widetilde{w_{y}}-w_{y}\right\|_{2}\leq\frac{\epsilon}{4\alpha}\). As an immediate consequence, \(|\langle u_{i^{\prime}},\widetilde{w_{y}}\rangle-\langle u_{i^{\prime}},w_{y} \rangle|\leq\frac{\epsilon}{4\alpha}\) for all \(i^{\prime}\) and \(y\) (by Cauchy-Schwarz). The remainder of the proof demonstrates that the necessary properties of the argument hold even with this approximation.
We now describe how to structure the neural network. We define an MLP \(\phi:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) as \(\phi(x_{i})=\phi(z_{i};y_{i};i)=(z_{i};\alpha\widetilde{w_{y_{i}}};u_{i})\), which works simply by using a look-up table on the values of \(u_{i}\) and \(\widetilde{w_{y_{i}}}\) from keys \(i\) and \(y_{i}\) respectively. Then, we define \(Q,K,V\) as sparse boolean-valued matrices that simply copy their respective elements from \(\phi(X)\).
We analyze the output of the softmax. If \(i^{\prime}\in y_{i}\), then
\[\operatorname{softmax}(\phi(X)QK^{\mathsf{T}}\phi(X)^{\mathsf{T} })_{i,i^{\prime}} =\frac{\exp(\alpha\left\langle u_{i},\widetilde{w_{i^{\prime}}} \right\rangle)}{\sum_{i^{\prime\prime}\in y_{i}}\exp(\alpha\left\langle u_{i},\widetilde{w_{i^{\prime\prime}}}\right\rangle)+\sum_{i^{\prime\prime}\not\in y _{i}}\exp(\alpha\left\langle u_{i},\widetilde{w_{i^{\prime\prime}}}\right\rangle)}\] \[\geq\frac{\exp(\alpha-\frac{\epsilon}{4})}{q\exp(\alpha+\frac{ \epsilon}{4})+N\exp(\frac{\alpha}{2}+\frac{\epsilon}{4})}=\frac{e^{\alpha}}{ qe^{\alpha}+Ne^{\alpha/2}}\cdot\exp\left(-\frac{\epsilon}{2}\right)\] \[\geq\left(\frac{1}{q}-\frac{Ne^{\alpha/2}}{qe^{\alpha}}\right) \left(1-\frac{\epsilon}{2}\right)\geq\frac{\left(1-\frac{\epsilon}{4}\right) \left(1-\frac{\epsilon}{4}\right)}{q}\geq\frac{1}{q}\left(1-\frac{\epsilon}{2 }\right).\]
An analogous argument shows that
\[\operatorname{softmax}(\phi(X)QK^{\mathsf{T}}\phi(X)^{\mathsf{T} })_{i,i^{\prime}}\leq\frac{1}{q}\left(1+\frac{\epsilon}{2}\right).\]
Likewise, if \(i^{\prime}\not\in y_{i}\), then
\[\operatorname{softmax}(\phi(X)QK^{\mathsf{T}}\phi(X)^{\mathsf{T} })_{i,i^{\prime}}\leq\frac{\exp(\frac{\alpha}{2}+\frac{\epsilon}{4})}{q\exp( \alpha-\frac{\epsilon}{4})}\leq\exp\left(-\frac{\alpha}{2}+\frac{\epsilon}{2} \right)\leq\frac{\epsilon}{2N}.\]
We thus conclude that that we meet the desired degree of approximation for such \(m\):
\[\left\|f(X)_{i}-q\mathrm{SA}(X)_{i}\right\|_{2} =\left\|\sum_{i^{\prime}\in y_{i}}\left(\frac{1}{q}- \operatorname{softmax}(\phi(X)QK^{\mathsf{T}}\phi(X)^{\mathsf{T}})_{i,i^{ \prime}}\right)z_{i^{\prime}}\right\|_{2}\] \[\quad+\left\|\sum_{i^{\prime}\not\in y_{i}}\left(\operatorname{ softmax}(\phi(X)QK^{\mathsf{T}}\phi(X)^{\mathsf{T}})_{i,i^{\prime}}\right)z_{i^{\prime}} \right\|_{2}\] \[\leq q\cdot\frac{\epsilon}{2q}+(N-q)\cdot\frac{\epsilon}{2N}\leq\epsilon.\qed\]
#### b.1.1 Restricted isometry and orthogonality property
The proof relies on the restricted isometry and orthogonality property from the compressed sensing literature. For \(v\in\mathbb{R}^{N}\), let \(\operatorname{supp}(v)=\{i\in[N]:v_{i}\neq 0\}\).
**Definition 5**.: We say a matrix \(U\in\mathbb{R}^{m\times N}\) satisfies the _\((q,\delta)\)-restricted isometry and orthogonality property_ if
\[\|Uv\|_{2}^{2}\in[(1-\delta)\|v\|_{2}^{2},(1+\delta)\|v\|_{2}^{2}]\quad\text{ and}\quad|\langle Uv,Uv^{\prime}\rangle|\leq\delta\|v\|_{2}\|v^{\prime}\|_{2}\]
for all vectors \(v,v^{\prime}\in\mathbb{R}^{N}\) with \(|\operatorname{supp}(v)|\leq q\), \(|\operatorname{supp}(v^{\prime})|\leq 2q\), and \(\operatorname{supp}(v)\cap\operatorname{supp}(v^{\prime})=\emptyset\).
The first result shows the existence of a sign-valued matrix \(U\) that satisfies the desired distance-preserving property.
**Lemma 12** (Consequence of Theorem 2.3 of Mendelson et al. [2007] and Lemma 1.2 of Candes and Tao [2005]).: _There is an absolute constant \(C>0\) such that the following holds. Fix \(\delta\in(0,1/2)\) and \(q\in\mathbb{N}\). Let \(U\) denote a random \(m\times N\) matrix of independent Rademacher random variables scaled by \(1/\sqrt{m}\). If \(m\geq C(q\log N)/\delta^{2}\), then with positive probability, \(U\) satisfies the \((q,\delta)\)-restricted isometry and orthogonality property._
Sparse subsets of the columns of such a \(U\) can then be linearly separated from all other columns.
**Lemma 13** (Consequence of Lemma 2.2 in Candes and Tao [2005]).: _Fix \(\delta\in(0,1/2)\) and \(q\in\mathbb{N}\). Let matrix \(U=[u_{1},\ldots,u_{N}]\in\mathbb{R}^{m\times N}\) satisfy the \((q,\delta)\)-restricted isometry and orthogonality property. For every vector \(v\in\{0,1\}^{N}\) with \(\operatorname{supp}(v)\leq q\), there exists \(w\in\mathbb{R}^{m}\) satisfying_
\[\|w\|_{2} \leq\sqrt{q}/(1-2\delta),\] \[\langle u_{i},w\rangle =1 \text{if }v_{i}=1,\] \[|\langle u_{i},w\rangle| \leq\delta/(1-2\delta) \text{if }v_{i}=0.\]
### Proof of Theorem 3
**Theorem 3** (Infinite-precision).: _For fixed \(N\), \(m\geq\Omega(d^{\prime}+q)\) and \(\epsilon>0\), there exists some \(f\in\mathcal{A}^{\prime}_{d,m,d^{\prime}}\) that \(\epsilon\)-approximates \(q\mathrm{SA}\)._
The proof relies on the properties of _neighborly polytopes_, which we define.
**Definition 6** (Ziegler [2006]).: A polytope \(P\) is _\(q\)-neighborly_ if every subset of \(q^{\prime}\leq q\) vertices forms a \((q^{\prime}-1)\)-face.
We give a \(q\)-neighborly polytope below that we use for the construction. For vectors \(v_{1},\ldots,v_{N}\in\mathbb{R}^{m^{\prime}}\), let \(\operatorname{Conv}(v_{1},\ldots,v_{N})=\{\sum_{i=1}^{N}\alpha_{i}v_{i}:\alpha \in[0,1]^{N},\sum_{i}\alpha_{i}=1\}\) denote their convex hull.
**Fact 14** (Theorem 1 of Gale [1963]).: _For \(t\in\mathbb{R}\), let \(\theta(t)=(t,\ldots,t^{m^{\prime}})\in\mathbb{R}^{m^{\prime}}\). Then, for all distinct \(t_{1},\ldots,t_{N}\in\mathbb{R}\), the cyclic polytope \(\operatorname{Conv}(\theta(t_{1}),\ldots,\theta(t_{N}))\) is \(\frac{m^{\prime}}{2}\)-neighborly._
The proof of Theorem 3 is immediate from the aforementioned fact and the following lemma.
**Lemma 15**.: _Suppose there exists \(u_{1},\ldots,u_{N}\in\mathbb{R}^{m^{\prime}}\) such that \(\operatorname{Conv}(u_{1},\ldots,u_{N})\) is \(q\)-neighborly. Then, for any \(\epsilon>0\), there exists some \(f\in\mathcal{A}^{\prime}_{d,m,d^{\prime}}\) with fixed key vectors \(\phi(X)K=(u_{1},\ldots,u_{N})\) that \(\epsilon\)-approximates \(q\mathrm{SA}\)._
Proof.: The construction employs a similar look-up table MLP \(\phi\) to the one used in the proof of Theorem 2. We let the key and value embeddings be
\[\phi(X)K=((u_{1},1),\ldots,(u_{N},1))\in\mathbb{R}^{N\times(m^{\prime}+1)}\text {, and }\phi(X)V=(z_{1},\ldots,z_{N})\in\mathbb{R}^{N\times d}.\]
To set the query vectors, observe that for any face \(F\) of a polytope \(P\), there exists a hyperplane \(H_{F}\) such that \(F\subset H_{F}\) and \(P\setminus F\) lies entirely on one side of \(H_{F}\). Thus, for every \(y\in\binom{[N]}{q}\), there exists \(w^{\prime}_{y}\in\mathbb{R}^{m^{\prime}}\) and \(b_{y}\in\mathbb{R}\) such that
\[w^{\prime\prime}_{y}u_{i}+b_{y}\begin{cases}=1&\text{if }i\in y\text{,}\\ <1&\text{otherwise.}\end{cases}\]For \(\alpha>0\), let \(\phi(x_{i})^{\mathsf{T}}Q=\alpha w_{y}=\alpha(w_{y}^{\prime},b_{y})\).
We construct the MLP to satisfy \(\phi(x_{i})=(z_{k};w_{y_{i}};u_{i};1)\in\mathbb{R}^{m}\) for \(m=2m^{\prime}+2\) and set parameter weights accordingly. Following the softmax analysis of Theorem 3, a sufficiently large choice of \(\alpha\) ensures that \(\max_{i\in[N]}\left\lVert f(X)_{i}-q\mathrm{SA}(X)_{i}\right\rVert_{2}\leq\epsilon\).
### Proof of Theorem 4
**Theorem 4**.: _For any sufficiently large \(q\), any \(N\geq 2q+1\), and any \(d^{\prime}\geq 1\), there exists a universal constant \(c\) such that if \(mp\leq cq\), then no \(f\in\mathcal{T}^{1,1}_{d,m,d^{\prime},p}\) exists that \(\frac{1}{2q}\)-approximates \(q\mathrm{SA}\)._
Proof.: We first embed every instance of \(\mathrm{DISJ}\) with \(n=q\) into an instance of \(q\mathrm{SA}\) and prove that they correspond. We assume the existence of the a transformer \(f\in\mathcal{T}^{1,1}_{d,m,d^{\prime},p}\) that \(\frac{1}{2q}\)-approximates \(q\mathrm{SA}\) and implies the existence of an \(O(mp)\)-bit communication protocol that computes \(\mathrm{DISJ}\). An application of Fact 5 concludes the proof.
Consider an instance of \(\mathrm{DISJ}\) with \(a\in\{0,1\}^{q}\) and \(b\in\{0,1\}^{q}\) known by Alice and Bob respectively. We design an instance \(X=(z_{i};y_{i};i)_{i\in[N]}\) of \(q\mathrm{SA}\). For each \(j\in[2q]\), let \(y_{2q+1}=\{2i+a_{i}-1:i\in[q]\}\). Additionally, let
\[z_{j}=\begin{cases}e_{1}&\text{if $j$ is odd and $b_{(j-1)/2}=1$},\\ -e_{1}&\text{otherwise}.\end{cases}\]
All other inputs are set arbitrarily. Then,
\[q\mathrm{SA}(X)_{2q+1} =\frac{1}{q}\left\lvert\left\{j\in[2q]:\;j\in y_{2q+1},\;j\text{ is odd, and }a_{(j-1)/2}=1\right\}\right\rvert e_{1}\] \[\quad-\frac{1}{q}\left\lvert\left\{j\in[2q]:\;j\in y_{2q+1}\text { and }(j\text{ is even or }a_{(j-1)/2}=0)\right\}\right\rvert e_{1}\] \[=\frac{|\{i\in[q]:a_{i}b_{i}=1\}|-|\{i\in[q]:a_{i}b_{i}=0\}|}{q}e_ {1}.\]
Hence, \(q\mathrm{SA}(X)_{2q+1}=-e_{1}\) if and only if \(\mathrm{DISJ}(a,b)=0\).
It remains to show that this implies the existence of an efficient communication protocol that computes \(\mathrm{DISJ}(a,b)\). By the existence of \(f\), there exist \(Q,K,V:\mathbb{R}^{d}\to\mathbb{R}^{m}\) and \(\psi:\mathbb{R}^{m}\to\mathbb{R}^{d^{\prime}}\) such that
\[f(X)_{2q+1}=\psi\left(\frac{\sum_{i=1}^{N}\exp\left(Q(x_{2q+1})^{\mathsf{T}}K( x_{i})\right)V(x_{i})}{\sum_{i=1}^{N}\exp\left(Q(x_{2q+1})^{\mathsf{T}}K(x_{i}) \right)}\right).\]
The protocol is as follows:
1. From \(a\), Alice determines \(y_{2q+1}\) and then computes \(Q(x_{2q+1})\in\mathbb{R}^{m}\), which she sends to Bob. This transmission uses \(O(mp)\) bits.
2. Bob determines \(z_{1},\ldots,z_{2q}\) from \(b\). Using those and the information from Alice, he computes \(f(X)_{2q+1}\). He returns \(1\) if and only if \(f(X)_{2q+1}^{\mathsf{T}}e_{1}\geq-1+\frac{1}{q}\).
The protocol computes \(\mathrm{DISJ}(a,b)\) because \(f\) is a \(\frac{1}{2q}\)-approximation of \(q\mathrm{SA}\). Because any such protocol requires sharing \(\Omega(q)\) bits of information, we conclude that \(mp\leq cq\) for some \(c\).
### Optimality of Theorem 3 under restricted architectures
While the near-optimality of the bounded-precision self-attention construction in Theorem 2 is assured by the communication complexity argument of Theorem 4, it is not immediately apparent whether Theorem 3 is similarly optimal among infinite-precision self-attention models. Theorem 16 proves that this is indeed the case for a restricted family of architectures that resembles _cross-attention_ rather than self-attention.
**Theorem 16**.: _For input \(x_{1},\ldots,x_{N}\) satisfying \(x_{i}=(z_{i};y_{i};i)\), suppose \(\phi(x_{i})^{\mathsf{T}}Q=w(y_{i},i)\), \(\phi(x_{i})^{\mathsf{T}}K=u(i)\), and \(\phi(x_{i})^{\mathsf{T}}V=z_{i}\). Then, for any \(q<N\) and \(m\leq q(1-C\log_{N}q)\) for some universal \(C\), there do not exist \(w:\mathbb{R}^{d}\times[N]\rightarrow\mathbb{R}^{m}\) and \(u:[N]\rightarrow\mathbb{R}^{m}\) such that the resulting self-attention unit \(\frac{1}{2q}\)-approximates \(q\mathrm{SA}\)._
The architectural assumptions of this statement are strong. For each element \(x_{i}=(z_{i};y_{i};i)\), its value embedding must reproduce its target \(z_{i}\); its key embedding depends exclusively on the index \(i\); and its query embedding only on the indices \(y_{i}\) and \(i\). Indeed this attention unit more closely resembles _cross-attention_ rather than self-attention, in which the problem is formulated as two sequences \(((z_{1},1),\ldots,(z_{N},N))\) and \((y_{1};1),\ldots,(y_{N};N)\) that are passed to the key and value inputs and the query inputs respectively. We leave open the problem of generalizing this result to include all infinite-precision cross-attention or self-attention architectures, but we note that the constructions in Theorems 2 and 3 can be implemented under such architectural assumptions.
The proof relies on a geometric argument about how the convex hull of fixed key embeddings \(U=(u(1),\ldots,u(N))\) lacks neighborliness and hence cannot separate every size-\(q\) subsets of values embeddings \(z_{1},\ldots,z_{N}\) from the other values.
Proof.: It suffices to show that for any fixed key embedding \(U\), there exists some \(y_{i}\) and setting of \(z_{1},\ldots,z_{N}\) such that
\[\left\|(\mathrm{softmax}(w(X)U^{\mathsf{T}})Z)_{i}-\frac{1}{q}\sum_{i^{\prime} \in y_{i}}z_{i^{\prime}}\right\|_{2}\geq\frac{1}{2q},\]
where \(w(X)=(w(y_{1},1),\ldots,w(y_{N},N))\in\mathbb{R}^{N\times m}\) and \(U=(u(1),\ldots,u(N))\in\mathbb{R}^{N\times m}\).
By Fact 17, for some \(y_{1}\in\binom{[N]}{q}\), there are no \(w\) and \(\tau\in\mathbb{R}\) satisfying \(w(y_{1},1)^{\mathsf{T}}u_{i^{\prime}}\geq\tau\) if and only if \(i^{\prime}\in y_{1}\). Hence, for any fixed \(w\), there exists \(i_{1}\in y_{1}\) and \(i_{2}\in[N]\setminus y_{1}\) such that \(w(y_{1},1)^{\mathsf{T}}u_{i_{2}}>w(y_{1},1)^{\mathsf{T}}u_{i_{1}}\). Given the value embeddings \(z_{i_{1}}=e_{1},z_{i_{2}}=e_{2}\) and \(z_{i}=e_{3}\) for all \(i\not\in\{i_{1},i_{2}\}\), we have
\[\left\|(\mathrm{softmax}(w(X)U^{\mathsf{T}})Z)_{1}-\frac{1}{q}\sum _{i^{\prime}\in y_{1}}z_{i^{\prime}}\right\|_{2}^{2} \geq\left(\mathrm{softmax}(w(X)U^{\mathsf{T}})Z)_{1,i_{1}}-\frac{1 }{q}\right)^{2}+(\mathrm{softmax}(w(X)U^{\mathsf{T}})Z)_{1,i_{2}})^{2}\] \[\geq\max\left(\left(\mathrm{softmax}(w(X)U^{\mathsf{T}})Z)_{1,i_ {1}}-\frac{1}{q}\right)^{2},\mathrm{softmax}(w(X)U^{\mathsf{T}})Z)_{1,i_{1}}^{2}\right)\] \[\geq\frac{1}{4q^{2}}.\qed\]
**Fact 17**.: _If \(m^{\prime}<q(1-\log_{N}Cq)\), then the columns of any \(U=(u_{1},\ldots,u_{N})\in\mathbb{R}^{N\times m^{\prime}}\) can be partitioned into sets \(U_{1}\) and \(U_{2}\) with \(|U_{1}|=q\) that are not linearly separable. Hence, \(\mathrm{Conv}(u_{1},\ldots,u_{N})\) is not \(q\)-neighborly._
Proof.: By the Sauer-Shelah Lemma [22, 23, 24] and the fact that the VC dimension of \(m^{\prime}\)-dimensional linear thresholds is \(m^{\prime}+1\), the maximum number of partitions of the columns of \(U\) that can be linearly separated is at most
\[\sum_{k=0}^{m^{\prime}+1}\binom{N}{i}\leq C^{\prime}N^{m^{\prime}+1}<C^{\prime }\cdot\frac{N^{q}}{(Cq)^{q}}\leq\binom{N}{q},\]
for a sufficiently large choice of \(C\) given universal constant \(C^{\prime}\). If the fact were to be false, then at least \(\binom{N}{q}\geq(\frac{N}{q})^{q}\) such partitions must exist, which contradicts the above bound.
Supplementary results for Section 4
### Proof of Theorem 6
**Theorem 6**.: _For any input size \(N\), input range \(M=N^{O(1)}\), and fixed-precision bit complexity \(p=O(\log M)\), there exists a transformer architecture \(f\in\mathcal{T}^{1,1}_{1}\) with a single self-attention unit with embedding dimension \(m=3\) such that for all \(X\in[M]^{N}\), \(f(X)=\mathrm{Match2}(X)\)._
Proof.: As discussed in Section 2.1, we allow a single blank token to be appended to the end of the sequence \(X\) and assume the existence of a positional encoding. That is, we consider input \(X^{\prime}=(x_{1},\ldots,x_{N},x^{\prime})\) with \(x_{i,0}=i\) and \(x^{\prime}=\vec{0}\) to be the input to the target attention model. We define input \(\text{MLP}\;\phi:\mathbb{R}\to\mathbb{R}^{3}\) and parameterizations \(Q,K,V\in\mathbb{R}^{3\times 3}\) such that
\[Q^{\mathrm{ T}}\phi(x_{i})=c\left(\cos\left(\frac{2\pi x_{i}}{M}\right),\sin \left(\frac{2\pi x_{i}}{M}\right),1\right),\] \[K^{\mathrm{ T}}\phi(x_{i})=\left(\cos\left(\frac{2\pi x_{i}}{M}\right),-\sin \left(\frac{2\pi x_{i}}{M}\right),0\right),\]
\(V^{\mathrm{ T}}\phi(x_{i})=\vec{1}\), \(Q^{\mathrm{ T}}\phi(x^{\prime})=\vec{0}\), \(K^{\mathrm{ T}}\phi(x^{\prime})=e_{3}\), and \(V^{\mathrm{ T}}\phi(x^{\prime})=\vec{0}\). By elementary trigonometric identities, the following is true about the corresponding inner products:
\[(Q^{\mathrm{ T}}\phi(x_{i}))^{\mathrm{ T}}K^{\mathrm{ T}}\phi(x_{j})=c\cos\left(\frac{2\pi(x_{i}+x_{j})}{M}\right)\] \[(Q^{\mathrm{ T}}\phi(x_{i}))^{\mathrm{ T}}K^{\mathrm{ T}}\phi(x^{\prime})=cd.\]
As a result, \((Q^{\mathrm{ T}}\phi(x_{i}))^{\mathrm{ T}}K^{\mathrm{ T}}\phi(x_{j})=cd\) if and only if \(x_{i}+x_{j}=(\!\!\!\pmod{M}\). Otherwise, \((Q^{\mathrm{ T}}\phi(x_{i}))^{\mathrm{ T}}K^{\mathrm{ T}}\phi(x_{j})\leq c(1-\frac{1}{M^{2}})\). (Here, the \(O(\log M)\)-bit fixed-precision arithmetic is sufficient to numerically distinguish the two cases.) For each \(i\in[N]\) let
\[\beta_{i}=|\{j\in[N]:x_{i}+x_{j}=0\,(\!\!\!\pmod{M})\}|\]
represent the total number of matches the input belongs to. If we take \(c=M^{2}\log(6N)\), then
\[(\mathrm{softmax}(\phi(X)QK^{\mathrm{ T}}\phi(X)^{\mathrm{ T}}))_{i,j}\in\begin{cases}[0,\frac{1}{6N}]&\text{if }x_{i}+x_{j}\neq 0\,(\!\!\!\pmod{M})\text{ and }\,i,j\in[N];\\ \lfloor\frac{1}{\beta_{i}+1}\pm\frac{1}{6N}\rfloor&\text{if }x_{i}+x_{j}=0\,(\!\!\!\pmod{M})\text{ and }\,i,j\in[N];\\ \lfloor\frac{1}{\beta_{i}+1}\pm\frac{1}{6N}\rfloor&\text{if }i\in[N],j=N+1.\end{cases}\]
We conclude that for any \(i\in[N]\),
\[(\mathrm{softmax}(\phi(X)QK^{\mathrm{ T}}\phi(X)^{\mathrm{ T}})V\phi(X))_{i}\begin{cases}\leq\frac{1}{6}\cdot\vec{1}&\text{if }
Note that \(\mathrm{Match3}(X)_{1}=1\) if and only if there exists \(i\in\{2,\ldots,\frac{N+1}{2}\}\) such that \(x_{i}=i\) and \(x_{i+\frac{N-1}{2}}=(M-i)\). Given input \((a,b)\in\{0,1\}^{n}\times\{0,1\}^{n}\) to \(\mathrm{DISJ}\), let \(x_{i+1}=1\) if and only if \(a_{i}=0\), and let \(x_{i+\frac{N+1}{2}}=1\) if and only if \(b_{i}=0\). Then, \(\mathrm{Match3}(X)_{1}=1\) iff \(\mathrm{DISJ}(a,b)=1\).
Suppose \(f(X)=\mathrm{Match3}(X)\) for all \(X\in[M]^{N}\) for some \(f\in\mathcal{T}^{1,H}_{1,m,1,p}\). We show that \(f\) simulates an \(O(mpH)\)-bit communication protocol for testing \(\mathrm{DISJ}\). By definition of the standard self-attention unit with multi-layer perceptrons, note that \(f(X)_{1}=\psi(\sum_{h=1}^{H}f_{h}(\phi(X)))\) for \(\phi:\mathbb{R}\to\mathbb{R}^{m}\), \(\psi:\mathbb{R}^{m}\to\{0,1\}\), and
\[f_{h}(X)=\frac{\sum_{i=1}^{N}\exp(Q_{h}(x_{1})^{\mbox{\tiny\sf T}}K_{h}(x_{i} ))V_{h}(x_{i})}{\sum_{i=1}^{N}\exp(Q_{h}(x_{1})^{\mbox{\tiny\sf T}}K_{h}(x_{i} ))},\]
for \(Q_{h},K_{h},V_{h}:\mathbb{R}^{m\times m}\).
If we assume that this construction exists and is known explicitly by both Alice and Bob, we design a communication protocol for Alice and Bob to solve \(\mathrm{DISJ}\) by sharing \(O(mpH)\) bits with one another. Let Alice possess \(a\in\{0,1\}^{n}\) and Bob \(b\in\{0,1\}^{n}\), with \(n=\frac{N-1}{2}\).
1. Alice and Bob compute \((x_{2},\ldots,x_{\frac{N+1}{2}})\) and \((x_{\frac{N+2}{2}},\ldots,x_{N})\) from \(a\) and \(b\) respectively.
2. Alice computes an \(O(p\log\log N)\)-bit approximation of the logarithm of the first half of the softmax normalization term for each attention head and sends the result to Bob. That is, she sends Bob \[L_{h,a}=\log\left(\sum_{i=1}^{\frac{N+1}{2}}\exp(Q_{h}(\phi(x_{1}))^{\mbox{ \tiny\sf T}}K_{h}(\phi(x_{i})))\right)\] for each \(h\in[H]\). This requires transmitting \(O(pH\log\log N)\) bits.
3. Bob finishes the computation of normalization terms \[L_{h}=\log\left(\exp(L_{h,a})+\sum_{i=\frac{N+3}{2}}^{N}\exp(Q_{h}(\phi(x_{1} ))^{\mbox{\tiny\sf T}}K_{h}(\phi(x_{i})))\right)\] for each \(h\) and sends the result back to Alice (up to \(O(p\log\log N)\)-bits of precision). This again requires transmitting \(O(pH\log\log N)\) bits.
4. Alice computes the partial convex combination of the first \(\frac{N+1}{2}\) value vectors stipulated by the attention matrix \[S_{h,a}=\frac{\sum_{i=1}^{\frac{N+1}{2}}\exp(Q_{h}(\phi(x_{1}))^{\mbox{\tiny \sf T}}K_{h}(\phi(x_{i})))V_{h}(\phi(x_{i}))}{\exp(L_{h})}\in\mathbb{R}^{m}\] for each \(h\) and sends the partial combinations to Bob. This requires transmitting \(O(mpH\log\log N)\) bits (using the same precision as above).
5. Bob finishes the computation of the convex combinations \[f_{h}(X)=S_{h,a}+\frac{\sum_{i=\frac{N+3}{2}}^{N}\exp(Q_{h}(\phi(x_{1}))^{ \mbox{\tiny\sf T}}K_{h}(\phi(x_{i})))V_{h}(\phi(x_{i}))}{\exp(L_{h})}\in \mathbb{R}^{m}.\] Bob concludes the protocol by computing and outputting \(f(X)_{1}\), using his knowledge of each \(f_{h}(X)\) and of \(\psi\).
By the equivalences previously established, Bob returns 1 if and only if \(\mathrm{DISJ}(a,b)=1\). Because the protocol requires \(O(mpH\log\log N)\) bits of communication, we can only avoid contradicting Fact 5 if \(mpH\geq\Omega(n/\log\log N)=\Omega(N/\log\log N)\).
**Remark 1**.: _The domain restrictions to \(\mathrm{Match3}\) stipulated in Equation (3) make the \(\mathrm{Match3}\) problem substantially easier to solve than the full-domain case. Indeed, under the domain restrictions,_
\[\mathrm{Match3}(X)_{1}=\max_{i\in\{2,\ldots,\frac{N+1}{2}\}}\mathrm{Match2}(X) _{i},\]which is computable by a two-layer single-headed transformer network with constant embedding dimension. The first layer computes each \(\mathrm{Match2}(X)_{i}\) with the construction in the proof of Theorem 6, and the second computes the maximum of the previous outputs by using those outputs as key vectors. While Informal Conjecture 1 suggests that two layers are insufficient to compute the full-domain version of \(\mathrm{Match3}\), this restricted variant introduces a concise depth separation (see Eldan and Shamir [2016], Telgarsky [2016], Daniely [2017]) between one- and two-layer transformer models.
### Higher-order tensor attention
We introduce a novel category of higher-order tensor-based transformer models in order to show that problems like \(\mathrm{Match3}\) that are hard to compute with standard transformer models can be made solvable. An \(s\)-order transformer is designed to efficiently compute dense \(s\)-wise interactions among input elements in an analogous manner to how standard transformers compute pairwise interactions. (We think of a standard transformer as second-order.) Before defining the new type of attention, we introduce notation to express the needed tensor products.
For vectors \(v^{1}\in\mathbb{R}^{N_{1}}\) and \(v^{2}\in\mathbb{R}^{N_{2}}\), let \(v^{1}\otimes v^{2}\in\mathbb{R}^{N_{1}N_{2}}\) denote their _Kronecker product_ by \((v^{1}\otimes v^{2})_{(i_{1}-1)N_{2}+i_{2}}=v^{1}_{i_{1}}v^{2}_{i_{2}}\). The _column-wise Kronecker product_ of matrices \(A^{1}\in\mathbb{R}^{N_{1}\times m}\) and \(A^{2}\in\mathbb{R}^{N_{2}\times m}\) is
\[A^{1}\star A^{2}=[A^{1}_{1}\mid\dots\mid A^{1}_{m}]\star[A^{2}_{1}\mid\dots \mid A^{2}_{m}]=[A^{1}_{1}\otimes A^{2}_{1}\mid\dots\mid A^{1}_{m}\otimes A^{2 }_{m}]\in\mathbb{R}^{N_{1}N_{2}\times m}.\]
The following generalizes the definition of self-attention.
**Definition 7**.: For order \(s\geq 2\), input dimension \(d\), output dimension \(d^{\prime}\), embedding dimension \(m\), bit complexity \(p\), and matrices \(Q,K^{1},\dots,K^{s-1}\in\mathbb{R}^{d\times m}\) and \(V^{1},\dots,V^{s-1}\in\mathbb{R}^{d\times d^{\prime}}\) (encoded with \(p\)-bit fixed-point numbers), an \(s\)-_order self-attention unit_ is a function \(f_{Q,K,V}:\mathbb{R}^{N\times d}\to\mathbb{R}^{N\times d^{\prime}}\) with
\[f_{Q,K,V}(X)=\mathrm{softmax}(\underbrace{XQ}_{\in\mathbb{R}^{N\times m}} \underbrace{((XK^{1})\star\dots\star(XK^{s-1}))^{\gamma}}_{\in\mathbb{R}^{m \times N^{s-1}}})(\underbrace{((XV^{1})\star\dots\star(XV^{s-1}))}_{\in \mathbb{R}^{N^{s-1}\times d^{\prime}}}.\]
The input to the row-wise softmax is an \(N\times N^{s-1}\) matrix. Let \(\mathcal{A}_{d,m,d^{\prime},p}^{\otimes s}\) denote the set containing all such attention units.
Note that \(\mathcal{A}_{d,m,d^{\prime},p}^{\otimes 2}=\mathcal{A}_{d,m,d^{\prime},p}\). Because \(s\)-order self-attention units have the same domain and codomain as standard self-attention, multiple units can be analogous combined to construct multi-headed attention units and full transformer models. We define \(\mathcal{A}_{d,m,d^{\prime},p}^{\otimes s}\) and \(\mathcal{T}_{d,m,d^{\prime},p}^{D,H,\otimes s}\) accordingly.
The purpose of the \(s\)-order transformer model as a theoretical construct is to posit how strictly generalizing the architecture in order to permit higher order outer products transfers the expressive powers of standard transformer architectures to more sophisticated interactions among elements of the input sequence \(X\). The model is not defined to be immediately practical, due to its steep computational cost of evaluation.
However, the trade-offs involved in using such architectures resemble those already made by using transformer models instead of fully-connected networks. Transformers are already computationally wasteful relative to the number of the parameters, and these models likely succeed only because extremely efficient factorized parameterization exist. Likewise, third-order transformers could indeed be practical if even more factorization proves useful, since the computational costs may prove mild if the embedding dimension \(m\), number of heads \(H\), and depth \(D\) necessary to succeed on a task exceed the sequence length \(N\) for standard second-order transformers.
### Efficient representation of \(\mathrm{Match3}\) with third-order self-attention
**Theorem 18** (\(\mathrm{Match3}\) construction with third-order self-attention).: _For any sequence length \(N\), input range \(M=N^{O(1)}\), and fixed-precision bit complexity \(p=O(\log M)\), there exists a third-order transformer architecture \(f\in\mathcal{T}_{1,m,1,p}^{1,\otimes 3}\) with a single self-attention unit with embedding dimension \(m=5\) such that for all \(X\in[M]^{N}\), \(f(X)=\mathrm{Match3}(X)\)._Proof of Theorem 18.: The proof is almost identical to that of Theorem 6, except that we instead use a different key and query transforms to express a different trigonometric function:
\[Q\phi(x_{i}) =c\left(\cos\left(\frac{2\pi x_{i}}{M}\right),-\cos\left(\frac{2\pi x _{i}}{M}\right),\sin\left(\frac{2\pi x_{i}}{M}\right),\sin\left(\frac{2\pi x_{i }}{M}\right),1\right),\] \[K^{1}\phi(x_{i}) =\left(\cos\left(\frac{2\pi x_{i}}{M}\right),\sin\left(\frac{2\pi x _{i}}{M}\right),-\cos\left(\frac{2\pi x_{i}}{M}\right),\sin\left(\frac{2\pi x_ {i}}{M}\right),0\right),\] \[K^{2}\phi(x_{i}) =\left(\cos\left(\frac{2\pi x_{i}}{M}\right),\sin\left(\frac{2\pi x _{i}}{M}\right),\sin\left(\frac{2\pi x_{i}}{M}\right),-\cos\left(\frac{2\pi x_ {i}}{M}\right),0\right).\]
Together, these ensure that the resulting tensor products reduce to a trigonometric expression that is maximized when \(x_{i}+x_{j_{1}}+x_{j_{2}}=0\pmod{M}\). That is,
\[(\phi(X)Q((\phi(X)K^{1})\star(\phi(X)K^{2}))^{\mathsf{T}})_{i,(j_{1}-1)+j_{2}} =c\cos\left(\frac{2\pi(x_{i}+x_{j_{1}}+x_{j_{2}})}{M}\right).\]
We similarly let \(V^{1}\phi(x_{i})=V^{2}\phi(x_{i})=\vec{1}\) and \(V^{1}\phi(x^{\prime})=V^{2}\phi(x^{\prime})=\vec{0}\). The remaining choice of \(c\) and the output MLP, and the analysis of the softmax proceeds identically to the previous proof.
### Heuristic argument for Informal Conjecture 1
**Conjecture 19** (Formal version of Informal Conjecture 1).: _For sufficiently large \(N\) and any \(d\geq 1\), for all \(M\geq N+1\) and \(mpHD\leq N^{\Omega(1)}\), there is no \(f\in\mathcal{T}^{D,H}_{1,m,1,p}\) satisfying \(f(X)=\mathrm{Match3}(X)\) for all \(X\in[M]^{N}\)._
We believe that the conjecture holds due to a heuristic information-theoretic argument. Define the distribution \(\mathcal{D}\) over inputs \(X\in\mathbb{R}^{N}\) that will be used to show that the model cannot compute \(\mathrm{Match3}\) for \(M=N^{4}\) with high probability. We draw \(\mathbf{X}\) from \(\mathcal{D}\) as follows:
1. With probability \(\frac{1}{2}\), draw each \(\mathbf{x}_{i}\) iid from \(\mathrm{Unif}([M])\).
2. With probability \(\frac{1}{2}\), draw \(j_{1},j_{2},j_{3}\) iid from \(\mathrm{Unif}(\binom{[N]}{3})\). For all \(i\neq j_{3}\), draw each \(\mathbf{x}_{i}\) iid from \(\mathrm{Unif}([M])\). Let \(\mathbf{x}_{j_{3}}=-\mathbf{x}_{j_{1}}-\mathbf{x}_{j_{2}}\pmod{M}\).
Note that under event \(E_{1}\), a three matching elements exist with probability at most \(\frac{1}{N}\), and
\[\Pr\left[\mathrm{Match3}(\mathbf{X})=\vec{0}\mid E_{1}\right]\geq 1-\frac{1}{N}.\]
Under event \(E_{2}\), a triple of matching elements is always planted, so \(\mathrm{Match3}(\mathbf{X})\neq\vec{0}\). It would suffice to prove that--unless a transformer is sufficiently large--it is impossible to determine whether \(\mathrm{Match3}(\mathbf{X})=\vec{0}\) with probability at least \(0.9\).
Under \(\mathcal{D}\), any subset of \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\}\) consists of iid integers drawn uniformly from \([M]\), unless all of \(\mathbf{x}_{j_{1}},\mathbf{x}_{j_{2}},\mathbf{x}_{j_{3}}\) appear in the subset. Consider a transformer architecture with \(p\)-bit precision, \(m\)-dimensional embeddings, \(H\) heads per layer, and \(D\) layers. We argue informally that a single-element output of a self-attention unit can take into account information about \(mp\) more inputs \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) than that it had in the previous layer. By induction, after \(D\) layers of \(H\)-headed self-attention with interleaved MLPs, each element is a function of at most \(mpHD\) inputs. Until an element exists that is a function of at least two of the three of \(\mathbf{x}_{j_{1}},\mathbf{x}_{j_{2}},\mathbf{x}_{j_{3}}\), we assume that the elements "known" by each output are chosen independently of the indices \(j_{1},j_{2},j_{3}\). (Given two elements of the triple, the third element can be identified with a single self-attention unit.) Hence, we argue that it suffices to show that the probability any two elements of the triple \(j_{1},j_{2},j_{3}\) occurring within any of the \(N\) sets of \(mpHD\) inputs is vanishingly small for sufficiently large transformer parameters. The probability of single collection having any of two of the three inputs is at most
\[\frac{3\binom{mpHD}{2}}{\binom{N}{2}}\leq 3\left(\frac{empHD}{N}\right)^{2}.\]
Thus, the probability that any collection has all three inputs is no more than \(3(empHD)^{2}/N\). If \(mpHD=O(\sqrt{N})\), then the randomly chosen triple will not jointly appear as the outcome of a single element of a self-attention unit with probability at least \(0.9\), and the transformer will be unexpected to successfully distinguish between the two cases.
Should the conjecture hold, it would represented a tight lower bound on the size of the smallest standard transformer architecture necessary to compute \(\mathrm{Match3}\).
**Theorem 20** (Tightness of Conjecture 19).: _For any sequence length \(N\), if the input range satisfies \(M=N^{O(1)}\) and the transformer size parameters satisfy \(p\geq\log(M)\), \(H=1\), \(m\geq 4\), and \(mD\geq CN^{2}\) for some universal constant \(C\), then there exists a transformer architecture \(f\in\mathcal{T}^{D,H}_{1,m,1,p}\) such that \(f(X)=\mathrm{Match3}(X)\)._
Proof.: We construct an architecture that collects a group of candidate pairs in each layer of single-headed self-attention and verifies whether there exists a triple incorporating each pair that satisfies the summation property. Then, all candidate triples are disposed of, and the subsequent layer collects a new family of candidates.
To do so, we first let \(\ell:=\left\lfloor\frac{m}{2}\right\rfloor-1\geq 1\) represent the total number of pairs shared in each layer of attention. We let \(P=\binom{|N|}{2}\) represent a collection of all pairs of indices and partition it into \(D\) subsets \(P_{1},\ldots,P_{D}\), each containing \(\ell\) distinct pairs. (Since \(|P|=\frac{N(N+1)}{2}\), any \(D\) satisfying the theorem's preconditions is sufficiently large for this to be a proper partition.) Our construction ensures that there exist \(x_{i}+x_{j_{1}}+x_{j_{2}}=0\pmod{M}\) for \((j_{1},j_{2})\in P_{k}\), then the \(k\)th layer of self attention will verify its existence and mark \(x_{i}\) as belonging to the match. Throughout the network, we maintain that the first two dimensions of any embedding of the \(i\)th element correspond to \(x_{i}\in[M]\) and a bit indicating whether a match has been found yet containing \(x_{i}\).
Consider the first layer of self-attention, and let \(P_{1}=\{(i_{1},j_{1}),\ldots,(i_{\ell},j_{\ell})\}\). We set the input MLP \(\phi_{1}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) and respective matrices \(Q^{1},K^{1}\in\mathbb{R}^{m\times m}\) such that
\[Q^{1}\phi_{1}(x_{i})=ce_{1}\text{ and }K^{1}\phi_{1}(x_{i})=\begin{cases}e_{1}& \text{if }i\in P_{1}\\ \vec{0}&\text{otherwise,}\end{cases}\]
for sufficiently large \(c\). We additionally let
\[V^{1}\phi_{1}(x_{i})=\begin{cases}(2\ell+1)\cdot(x_{i};0;\vec{0})&i\not\in P_{ 1},\\ (2\ell+1)\cdot(x_{i};0;x_{i}e_{2-1})&i=i_{\iota},\\ (2\ell+1)\cdot(x_{i};0;x_{i}e_{2\iota})&i=j_{\iota}.\end{cases}\]
By making use of a residual connection, we ensure that the \(i\)th outcome of the self-attention is \((x_{i},0,x_{i_{1}},x_{j_{1}},\ldots,x_{i_{\ell}},x_{j_{\ell}})\). We encode an MLP to compute
\[(x_{i},0,x_{i_{1}},x_{j_{1}},\ldots,x_{i_{\ell}},x_{j_{\ell}})\mapsto\left(x_ {i},\mathbbm{1}\left\{\exists\iota\in[\ell]\operatorname{s.t.}x_{i}+x_{i_{ \iota}}+x_{j_{\iota}}=\vec{0}\pmod{M}\right\};\vec{0}\right).\]
We repeat this construction \(D\) times, with the only modifications being the replacement of \(P_{1}\) and the fact that the second dimension of the embedding remains 1 after being set to that value. After \(D\) layers, the final MLP outputs the value of the second dimension, which will be 1 if and only if the respective \(x_{i}\) belongs to a three-way match.
### Sharper separations for embedded subgraph detection problems
In pursuit of proving separations analogous to the one between Theorem 18 and Conjecture 19, we draw techniques for proving lower bounds for graph problems in the Congest model of distributed computation with restricted bandwidth [20].5The problems we consider take, as input, the adjacency matrix \(X\in\{0,1\}^{N\times N}\) of an \(N\)-vertex graph \(G=(\mathcal{V},\mathcal{E})\) with \(\mathcal{V}=[N]\), so \(x_{i,j}=\mathbbm{1}\left\{(i,j)\in\mathcal{E}\right\}\). We may regard each row of \(X\) as a high-dimensional (\(d=N\)) embedding of the \(i\)-th vertex containing information about which (outgoing) edges are incident to the \(i\)-th vertex. We consider the following problems:
\[\mathrm{DirectedCycle3}(X) =(\mathbbm{1}\left\{\exists j_{1},j_{2}\in[N]\;\mathrm{s.t.}\;x_{ i,j_{1}}x_{j_{1},j_{2}}x_{j_{2},i}=1\right\})_{i\in[N]}\,;\] \[\mathrm{Cycle5}(X) =(\mathbbm{1}\left\{\exists j_{1},j_{2},j_{3},j_{4}\in[N]\; \mathrm{s.t.}\;x_{i,j_{1}}x_{j_{1},j_{2}}x_{j_{2},j_{3}}x_{j_{3},j_{4}}x_{j_{4 },i}=1\right\})_{i\in[N]}\,,\] \[\text{with }\mathrm{dom}(\mathrm{Cycle5})=\{X:\,X=X^{\uptau}\}.\]
The former treats \(X\) as a directed graph (where \(X\) need not be symmetric) and asks whether each input belongs to a directed 3-cycle. The latter insists that \(X\) be an undirected graph by enforcing symmetry and determines membership in (undirected) 5-cycles.
However, solving these problems with any transformer model of constant order trivially requires having the product of the precision \(p\), embedding dimension \(m\), heads per layer \(H\), and depth \(D\) grow polynomially with \(N\), since each attention unit is limited to considering at most \(pm\) bits of information from each input. Such a lower bound is not interesting for dense graphs, where every vertex may have \(\Omega(N)\) incident edges; the bottleneck is not due to any feature of standard attention units (and would persist with higher-order attention).
To circumvent this issue, we consider an augmented self-attention unit, which permits each element of the self-attention tensor to depend on both its respective inner product and on the presence of edges among corresponding inputs.
**Definition 8**.: For order \(s\geq 2\), input dimension \(d\), output dimension \(d^{\prime}\), embedding dimension \(m\), bit complexity \(p\), matrices \(Q,K^{1},\ldots,K^{s-1}\in\mathbb{R}^{d\times m}\) and \(V^{1},\ldots,V^{s-1}\in\mathbb{R}^{d\times d^{\prime}}\) (encoded with \(p\)-bit fixed-point numbers), and cell-wise attention tensor function \(\kappa:\{0,1\}^{s(s-1)}\times\mathbb{R}\rightarrow\mathbb{R}\), an _s-order graph self-attention unit_ is a function \(f_{Q,K,V}:\mathbb{R}^{N\times d}\rightarrow\mathbb{R}^{N\times d^{\prime}}\) with
\[f_{Q,K,V}(X)=\mathrm{softmax}(\kappa(X,XQ((XK^{1})\star\cdots\star(XK^{s-1}))^ {\uptau}))((XV^{1})\star\cdots\star(XV^{s-1})).\]
For attention tensor \(A\in\mathbb{R}^{N^{\otimes s}}\), we abuse notation by writing \(\kappa(X,A)\) as short-hand for the particular cell-wise application of a fixed function, incorporating information about all relevant edges:
\[\kappa(X,A)_{i_{1},\ldots,i_{s}}=\kappa(x_{i_{1},i_{2}},x_{i_{1},i_{3}}, \ldots,x_{i_{s},i_{s-1}},x_{i_{s},i_{s-2}},A_{i_{1},\ldots,i_{s}}).\]
Let \(\mathcal{AG}_{d,m,d^{\prime},p}^{\otimes s}\) and \(\mathcal{TG}_{d,m,d^{\prime},p}^{D,H,\otimes s}\) denote all such attention units and all such transformers respectively.
Now, we provide four results that exhibit separations between orders of graph self-attention.
**Theorem 21** (Hardness of representing \(\mathrm{Cycle5}\) with standard graph transformer).: _For sufficiently large \(N\), any \(f\in\mathcal{TG}_{N,m,1,p}^{D,H}\) satisfying \(f(X)=\mathrm{Cycle5}(X)\) for all \(X\in\{0,1\}^{N\times N}\) with \(X=X^{\uptau}\) requires \(mpHD=\Omega(N/\log^{2}N)\)._
**Theorem 22** (Efficient construction of \(\mathrm{Cycle5}\) with fifth-order graph transformer).: _For sequence length \(N\) and bit-complexity \(p=O(\log N)\), there exists a fourth-order graph transformer architecture \(f\in\mathcal{TG}_{N,1,1,p}^{1,1,\otimes 3}\) with a single graph self-attention unit such that for all \(X\in\{0,1\}^{N\times N}\) with \(X=X^{\uptau}\), \(f(X)=\mathrm{Cycle5}(X)\)._
**Theorem 23** (Hardness of representing \(\mathrm{DirectedCycle3}\) with standard graph transformer).: _For sufficiently large \(N\), any \(f\in\mathcal{TG}_{N,m,1,p}^{D,H}\) satisfying \(f(X)=\mathrm{DirectedCycle3}(X)\) for all \(X\in\{0,1\}^{N\times N}\) requires \(mpHD=\Omega(N/\log^{2}N)\)._
**Theorem 24** (Efficient construction of \(\mathrm{DirectedCycle3}\) with fourth-order graph transformer).: _For sequence length \(N\) and bit-complexity \(p=O(\log N)\), there exists a third-order graph transformer architecture \(f\in\mathcal{TG}_{N,1,1,p}^{1,1,\otimes 3}\) with a single graph self-attention unit such that for all \(X\in\{0,1\}^{N\times N}\), \(f(X)=\mathrm{DirectedCycle3}(X)\)._
The proofs of Theorems 22 and 24 are immediate from the construction. Because each cell of the self-attention tensor has explicit access the the existence of all relevant edges, \(\kappa\) can be configured to ensure that cell's value is large if and only if the requisite edges for the desired structure all exist. Taking a softmax with a blank element (like in Theorem 6) ensures that the outcome of the self-attention unit for a given element distinguishes between whether or not it belongs to a 5-cycle or a directed 3-cycle. The output MLP ensure that the proper output is returned.
We prove Theorems 21 and 23 by introducing a particular Congest communication graph that can be used to simulate any model in \(\mathcal{T}\mathcal{G}^{D,H}_{d,m,d^{\prime},p}\) (and hence, also any model in \(\mathcal{T}^{D,H}_{d,m,d^{\prime},p}\)) in \(O(mHD\log N)\) rounds of communication. Then, we show for each problem that we can encode each instance of the set disjointness communication problem as an instance of \(\mathrm{Cycle5}\) (or \(\mathrm{DirectedCycle3}\)) and derive a contradiction from the communication graph.
#### c.6.1 A Congest communication graph that generalizes standard graph transformer computation
The key principle of our analysis is that the predominant limitation of a transformer model is in its communication bandwidth and _not_ its computational abilities. We model transformers as having element-wise multi-layer perceptron units with unbounded computational ability (but bounded precision inputs and outputs) and self-attention units, which compute linear combinations of inputs in a carefully regimented way that limits the ability of individual elements to share information with one another. Here, we introduce a specific Congest graph for each sequence length \(N\) and show that every transformer has a communication protocol that simulates its computation in this graph.
For fixed \(N\), we design an undirected Congest graph \(G^{N}=(V^{N},E^{N})\) with \(O(N^{2})\) nodes, each having degree at most 3. (Note that this graph is _not_ the same as the graph provided as input \(X\) to a transformer; this graph is consistent across all transformers taking input of sequence size \(N\).) Let \(u_{1},\ldots,u_{N}\) be nodes in \(V^{N}\) corresponding to each input. For every pair \(i,j\in[N]\), let \(v_{i,j}\) be a node as well. For each \(i\in[N]\), let \(B_{i}=(V_{i},E_{i})\) be a balanced binary trees having root \(u_{i}\) and leaves \(v_{i,1},\ldots,v_{i,N},v_{1,i},\ldots,v_{N,i}\). Hence, each \(B_{i}\) has \(O(N)\) vertices of degree 3 and is of depth \(O(\log N)\). Let \(V^{N}=V_{1}\cup\cdots\cup V_{N}\) and \(E^{N}=E_{1}\cup\cdots\cup E_{N}\). Noting that \(E_{1},\ldots,E_{N}\) are disjoint and that \(V_{1},\ldots,V_{N}\) are disjoint, except for leaves \(v_{i,j}\), we ascertain that \(G^{N}\) contains \(O(N^{2})\) vertices of degree at most 3 and has diameter \(O(\log N)\). We visualize the graph \(G^{N}\) with a highlighted tree \(B_{1}\) in Figure 4.
**Lemma 25**.: _For any transformer \(f\in\mathcal{T}\mathcal{G}^{D,H}_{d,m,d^{\prime},p}\) and any \(X\in\mathbb{R}^{N\times d}\) with \(p\)-bit fixed-precision numbers, there exists a Congest communication protocol on the graph \(G^{N}\) that shares \(p\) bits of information between adjacent vertices per round satisfying the following characteristics:_
Figure 4: The Congest graph \(G^{N}\) visualized for \(N=6\) with root nodes \(\{u_{i}\}_{i\in[N]}\) in blue, leaf nodes \(\{v_{i,j}\}_{i,j\in[N]}\) in green, and the nodes \(V_{1}\) of the binary tree \(B_{1}\) shaded red and edges \(E_{1}\) colored red.
* _Before any communication begins, each node_ \(u_{i}\) _is provided with_ \(x_{i}\) _and each node_ \(v_{i,j}\) _is provided with_ \(x_{i,j}\) _and_ \(x_{j,i}\)_._
* _After_ \(T=O(HD(m+\log N))\) _rounds of communication, each node_ \(u_{i}\) _outputs_ \(f(X)_{i}\)_._
Proof.: It suffices to give a protocol that computes the outcome of a single-headed unit of graph self-attention with parameters \(Q,K,V\in\mathbb{R}^{m\times m}\) and \(\kappa:\{-1,1\}^{2}\times\mathbb{R}\rightarrow\mathbb{R}\) and transmits its \(i\)th output back to \(u_{i}\) in \(O(m\log N)\) rounds of \(p\)-bit communication. The remainder of the argument involves computing the outcomes of all element-wise MLPs within respective vertices \(u_{1},\ldots,u_{N}\) (since we assume each node to have unbounded computational power in the Congest model) and to repeat variants of the protocol \(HD\) times for every individual self-attention unit. Because the protocol is designed for a particular transformer architecture \(f\), we can assume that every node in the Congest graph has knows every parameter of \(f\).
We give the protocol in stages. We assume inductively that every input to \(f\), \(y_{1},\ldots,y_{N}\in\mathbb{R}^{m}\), is known by its respective vertex \(u_{1},\ldots,u_{N}\).
1. Every vertex \(u_{i}\) computes \(Q^{\intercal}y_{i}\in\mathbb{R}^{m}\) and propagates it to every vertex \(v_{i,1},\ldots,v_{i,N}\). This can be done in \(O(m+\log N)\) rounds by transferring one \(p\)-bit fixed-precision number per round from an element of the binary tree \(B_{i}\) to each of its children per round. Because the respective edges \(E_{1},\ldots,E_{N}\) are disjoint, this operation can be carried out in parallel.
2. Each \(u_{i}\) computes \(K^{\intercal}y_{i},V^{\intercal}y_{i}\in\mathbb{R}^{m}\) and propagates them to \(v_{1,i},\ldots,v_{N,i}\) in \(O(m+\log N)\) rounds.
3. Each \(v_{i,j}\), using their knowledge of \(x_{i,j}\) and \(x_{j,i}\), computes \(\alpha_{i,j}\coloneqq\exp(\kappa(x_{i,j},x_{j,i},y_{i}^{\intercal}QK^{\intercal }y_{j}))\). This takes zero rounds.
4. Each \(u_{i}\) computes \(\sum_{j=1}^{N}\alpha_{i,j}\) by propagating each \(\alpha_{i,j}\) in \(v_{i,j}\) up \(B_{i}\) to \(u_{i}\), iteratively summing terms passed up. This takes \(O(\log N)\) rounds.
5. Similarly, \(u_{i}\) computes \(\sum_{j=1}^{N}\alpha_{i,j}V^{\intercal}y_{j}\) in \(O(m\log N)\) rounds. Then, it computes \[\frac{\sum_{j=1}^{N}\alpha_{i,j}V^{\intercal}y_{j}}{\sum_{j=1}^{N}\alpha_{i,j }},\] which is the target output of the self-attention unit.
Because all steps are achievable in parallel with \(O(m+\log N)\) rounds, the claim follows.
#### c.6.2 Reduction from set disjointness
Before proving Theorems 21 and 23 by embedding an instance of a transformer model into an instance of each subgraph identification problem, we first introduce a partition of the vertices \(V^{N}\) of the Congest graph into those possessed by Alice and Bob for use in a two-party communication protocol. We call those two sets \(V^{N}_{a}\) and \(V^{N}_{b}\).
Note that the previous section made no assumptions about the organization of edges in the binary tree. We thus add an additional condition: that each binary tree \(B_{i}\) can be oriented to respect the left-to-right ordering \(v_{i,1},v_{1,i},\ldots,v_{i,N},v_{N,i}\). Let \(u_{i}\in V^{N}_{a}\) if and only if \(i\leq\frac{N}{2}\), and \(v_{i,j}\in V^{N}_{a}\) if and only if \(\min(i,j)\leq\frac{N}{2}\). We label are remaining nodes in \(B_{i}\) by labeling a parent node \(w_{p}\) as a function of its child nodes \(w_{\ell}\) and \(w_{r}\) using the following rules:
1. If \(w_{\ell},w_{r}\in V^{N}_{a}\), then let \(w_{p}\in V^{N}_{a}\).
2. If \(w_{\ell},w_{r}\in V^{N}_{b}\), then let \(w_{p}\in V^{N}_{b}\).
3. Otherwise, let \(w_{p}\in V^{N}_{a}\) if and only if root \(u_{i}\in V^{N}_{a}\).
This partition, which we visualize in Figure 5, bounds the number of bits Alice and Bob can exchange by simulating a protocol on Congest graph \(G^{N}\).
**Lemma 26**.: _Suppose Alice and Bob simulate an \(R\)-round \(p\)-bit protocol on Congest communication graph \(G^{N}\) where Alice has access to all vertices \(V_{a}^{N}\) and Bob \(V_{b}^{N}\). No other communication is permitted besides sharing bits as permitted by the Congest protocol between neighboring vertices. Then, Alice and Bob exchange at most \(O(pRN\log N)\) bits._
Proof.: It suffices to show that the partition \(V_{a}^{N},V_{b}^{N}\) induces a cut of size at most \(O(N\log N)\); this ensures that each can send no more than \(O(pN\log N)\) bits per round.
Per the rules defined above, an edge in \((w_{p},w_{\ell})\) and \((w_{p},w_{r})\) is cut if and only if they are described by case (c). Within each tree \(B_{i}\) under the orientation described above, an inductive argument shows that in every layer, all elements in \(V_{a}^{N}\) are to the left of all elements in \(V_{b}^{N}\). Thus, there exists at most one parent of that layer that belongs to case (c), and thus, no more than one cut edge per layer. Because each tree has \(O(\log N)\) layers and because there are \(N\) trees, the partition cuts at most \(O(N\log N)\) edges.
It remains to embed an instance of DISJ in \(V_{a}^{N},V_{b}^{N}\) for each problem such that its output corresponds identically with that of DISJ.
Proof of Theorem 21.: Assume for the sake of simplicity that \(N\) is divisible by 5. Let \(a,b\in\{0,1\}^{n}\) for \(n=\frac{N^{2}}{25}\) be an input to DISJ, and let Alice and Bob possess \(a\) and \(b\) respectively. We index those vectors as \(a=(a_{1,1},a_{1,2},\ldots,a_{N/5,N/5-1},a_{N/5,N/5})\) and \(b=(b_{1,1},\ldots,b_{N/5,N/5})\) for ease of analysis. We design input matrix \(X\in\{0,1\}^{N\times N}\) as follows:
* If \(i\in(0,\frac{N}{5}]\) and \(j\in(\frac{N}{5},\frac{2N}{5}]\), then \(x_{i,j}=x_{j,i}=a_{i,j-N/5}\).
* If \(i\in(\frac{N}{5},\frac{3N}{5}]\) and \(j\in(\frac{2N}{5},\frac{4N}{5}]\), then \(x_{i,j}=x_{j,i}=\delta_{i,j-N/5}\).
* If \(i\in(\frac{3N}{5},\frac{4N}{5}]\) and \(j\in(\frac{4N}{5},N]\), then \(x_{i,j}=x_{j,i}=b_{j-4N/5,i-3N/5}\).
* If \(i\in(\frac{4N}{5},N]\) and \(j\in(0,\frac{N}{5}]\), then \(x_{i,j}=x_{j,i}=\delta_{i,j+4N/5}\).
* Otherwise, \(x_{i,j}=0\).
Figure 5: The Congest graph \(G^{N}\) with vertices partitioned into sets \(V_{a}^{N}\) (violet) and \(V_{b}^{N}\) (orange) for \(N=6\). The six edges cut by the partition are colored red.
This ensures that \(X\) has a 5-cycle if and only there exist \(i,j\in(0,\frac{N}{5}]\) such that \(a_{i,j}b_{i,j}=1\)6. In addition, note that under the protocol in Lemma 25, Alice's and Bob's inputs \(a\) and \(b\) are known exclusively by nodes belonging to \(V_{a}^{N}\) and \(V_{b}^{N}\) respectively.
Footnote 6: We consider 5-cycles rather than 4-cycles because a spurious 4-cycle could exist among edges \(\{x_{i,j}:i\in(0,\frac{N}{5}],\;j\in(\frac{N}{5},\frac{2N}{5}]\}\).
Consider any transformer architecture \(f\in\mathcal{TG}_{N,m,1,p}^{D,H}\) that computes \(\mathrm{Cycle}5\). By Lemma 25, there exists a protocol on the Congest graph \(G^{N}\) that computes \(\mathrm{Cycle}5\) after \(O(HD(m+\log N))\) rounds of communication of \(p\)-bits each. If Alice and Bob simulate this protocol, and output 1 if and only if at least one of their outputs indicates the existence of a \(\mathrm{Cycle}5\), then they successfully decide DISJ. By Lemma 26, this communication algorithm solves DISJ after exchanging \(O(mpHDN\log^{2}N)\) bits of communication. However, Fact 5 implies that no communication algorithm can do so without exchanging \(\Omega(n)=\Omega(N^{2})\) bits, which concludes the proof.
Proof of Theorem 23.: The proof is identical to its predecessor, but uses a different embedding of an instance \(a,b\in\{0,1\}^{n}\) to DISJ. Let \(n=\frac{N^{2}}{16}\). Then:
* If \(i\in(0,\frac{N}{4}]\) and \(j\in(\frac{N}{2},\frac{3N}{4}]\), then \(x_{i,j}=a_{i,j-N/2}\).
* If \(i\in(\frac{N}{2},\frac{3N}{4}]\) and \(j\in(\frac{3N}{4},N]\), then \(x_{i,j}=b_{j-3N/4,i-N/2}\).
* If \(i\in(\frac{3N}{4},N]\) and \(j\in(0,\frac{N}{4}]\), then \(x_{i,j}=\delta_{i,j+3N/4}\).
* Otherwise, \(x_{i,j}=0\).
This construction ensures that a directed 3-cycle exists if and only if a corresponding pair of elements in \(a\) and \(b\) are both 1.
## Appendix D Experiment details
This section describes the experimental setup behind Figure 2, and provides further experiments suggesting an _implicit bias_ of transformers for \(q\mathrm{SA}\), in particular when compared with MLPs and RNNs.
Experimental setup.Experiments used synthetic data, generated for \(q\mathrm{SA}\) with \(n=1000\) training and testing examples, a sequence length \(N=20\), \(q=3\), with the individual inputs described in more detail as follows.
* The positional encoding of element \(i\) is a random vector sampled uniformly from the sphere in \(\mathbb{R}^{d_{0}}\) with \(d_{0}:=\lceil 1+2\ln(N)\rceil\), a quantity which agrees with the theory but was not tuned.
* A sequence element then consists of the data portion \(z\in\mathbb{R}^{d_{1}}\) where \(d_{1}=4\), also sampled from the unit sphere, then the positional encoding of this sequence element, and then \(q\) further positional encodings identifying elements to average to produce the output; this differs from (and is more tractable than) the presentation in Section 3, where the positional encoding is provided as an integer and the MLP layer input to our attention layers is expected to choose a sufficient positional encoding.
As such, the total dimension of a sequence element is \(d_{1}+(q+1)d_{0}=32\). The architectures are detailed as follows.
* The attention is identical to the description in the paper body, with the additional detail of the width and embedding dimension \(m\) being fixed to \(100\).
* Figure 6 also contains an MLP, which first flattens the input, then has a single hidden ReLU layer of width 256, before a final linear layer and an output reshaping to match the desired output sequence shapes.
* Figure 6 also contains an LSTM, which is a standard pytorch LSTM with \(2\) layers and a hidden state size \(800\), which is \(200\) times larger than the target output dimension \(4\).
Experiments fit the regression loss using Adam and a minibatch size of 32, with default precision, and take a few minutes to run on an NVIDIA TITAN XP, and would be much faster on standard modern hardware.
Further discussion of Figure 2 and Figure 7.In Figure 2 and Figure 7, we plot (post-softmax) alignment matrices after \(T\in\{0,1000,40000\}\) iterations of Adam. The alignment matrices in Figure 2 are taken from the training example whose loss is the median loss across all examples. Figure 7 is similar, but additionally shows the examples of minimal and maximal loss.
Further discussion of Figure 6.Figure 6 plots training and testing error curves for the same attention architecture as in Figure 2, but with further MLP and LSTM architectures as described above. but also an MLP trained on flattened (vectorized) error bars reflect \(5\) separate training runs from random initialization. A few variations of these architectures were attempted, however curves did not qualitatively change, and in particular, only the attention layer achieves good generalization across all attempts.
Figure 6: Test and train error curves of fitting various architectures to \(q\mathrm{SA}\), where the horizontal axis denotes thousands of training iterations, and the vertical axis denotes the regression objective; see Section D for further details.
Figure 7: Alignment plots as in Figure 2, but using examples with minimum, median, and maximum loss, whereas Figure 2 only uses the example with median loss. | ## Review
### Summary
This paper investigates the representational power of attention layers in transformer networks, focusing on their advantages and limitations compared to other neural network architectures. It introduces three computational tasks: sparse averaging (qSA), pair matching, and triple matching, demonstrating that transformers excel in the qSA task with logarithmic scaling in input size, whereas their performance diminishes in the triple matching task, which requires polynomial scaling. The authors utilize communication complexity principles to derive these results, providing both theoretical insights and implications for future research in deep learning, particularly in natural language processing. The paper makes a compelling first step toward understanding transformer limitations and their potential for practical applications, despite some concerns about the relevance of the tasks to real-world scenarios.
### Strengths
- The paper offers original contributions to the understanding of attention layers in transformer networks, providing a mathematical analysis of their representation power.
- It rigorously compares transformers with fully connected networks and recurrent networks, establishing both strengths and limitations.
- The use of communication complexity techniques enhances the depth and reliability of the analysis.
- The mathematical formulations and proofs are of high quality, showcasing the authors' expertise.
- The paper addresses a notable gap in the literature regarding the theoretical understanding of transformers, with implications for model design in NLP.
### Weaknesses
- The fixed precision result in Theorem 2 is still dependent on growing precision with data size, leaving open the question of constant precision.
- The relevance of the computational tasks (qSA, Match2, Match3) to practical applications is not clearly established, which may limit its impact.
- Some sections, particularly the introduction and notation, could be clearer to enhance reader comprehension.
- The paper lacks empirical evaluation of the proposed tasks, which could validate theoretical findings and strengthen conclusions.
- The complexity of mathematical formulations may pose challenges for readers without a strong background in the subject.
### Questions
- How do the tasks presented connect to real-world applications in NLP or other domains?
- What are the specific bottlenecks in developing transformers that can approximate q-SA with fixed constant precision?
- Is the application of communication complexity in this context novel?
- Can the authors provide insights on how the theoretical results relate to practical performance in deep learning tasks?
### Soundness
**Score:** 3
**Description:** 3 = good: The paper is technically sound with solid theoretical foundations, though some concerns about precision and practical connections remain.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is generally well-written, but some sections could benefit from clearer explanations and more intuitive presentations of complex concepts.
### Contribution
**Score:** 3
**Description:** 3 = good: The contributions are significant in advancing theoretical understanding of transformers, although further empirical validation would enhance their relevance.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact potential, but it requires improvements in clarity and empirical evaluation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a technically sound exploration of the representational limits of transformers, making original contributions to the field. While it has areas for improvement, especially concerning practical relevance and empirical validation, the theoretical insights are valuable and can influence future research directions. The overall assessment suggests a strong foundation that justifies acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Explore to Generalize in Zero-Shot RL
Ev Zisselman, Itai Lavie, Daniel Soudry, Aviv Tamar
Technion - Israel Institute of Technology
Correspondence E-mail: [email protected]
###### Abstract
We study zero-shot generalization in reinforcement learning--optimizing a policy on a set of training tasks to perform well on a similar but unseen test task. To mitigate overfitting, previous work explored different notions of invariance to the task. However, on problems such as the ProcGen Maze, an adequate solution that is invariant to the task visualization does not exist, and therefore invariance-based approaches fail. Our insight is that learning a policy that effectively _explores_ the domain is harder to memorize than a policy that maximizes reward for a specific task, and therefore we expect such learned behavior to generalize well; we indeed demonstrate this empirically on several domains that are difficult for invariance-based approaches. Our _Explore to Generalize_ algorithm (ExpGen) builds on this insight: we train an additional ensemble of agents that optimize reward. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions, which generalize well and drive us to a novel part of the state space, where the ensemble may potentially agree again. We show that our approach is the state-of-the-art on tasks of the ProcGen challenge that have thus far eluded effective generalization, yielding a success rate of \(83\%\) on the Maze task and \(74\%\) on Heist with \(200\) training levels. ExpGen can also be combined with an invariance based approach to gain the best of both worlds, setting new state-of-the-art results on ProcGen. Code available at [https://github.com/EvZissel/expgen](https://github.com/EvZissel/expgen).
## 1 Introduction
Recent developments in reinforcement learning (RL) led to algorithms that surpass human experts in a broad range of tasks (Mnih et al., 2015; Vinyals et al., 2019; Schrittwieser et al., 2020; Wurman et al., 2022). In most cases, the RL agent is tested on the same task it was trained on, and is not guaranteed to perform well on unseen tasks. In zero-shot generalization for RL (ZSG-RL), however, the goal is to train an agent on training domains to act optimally in a new, previously unseen test environment (Kirk et al., 2021). A standard evaluation suite for ZSG-RL is the ProcGen benchmark (Cobbe et al., 2020), containing 16 games, each with levels that are procedurally generated to vary in visual properties (e.g., color of agents in BigFish, Fig. 0(a), or background image in Jumper, Fig. 0(c)) and dynamics (e.g., wall positions in Maze, Fig. 0(d), and key positions in Heist, Fig. 0(e)).
Previous studies focused on identifying various _invariance properties_ in the tasks, and designing corresponding _invariant policies_, through an assortment of regularization and augmentation techniques (Igl et al., 2019; Cobbe et al., 2019; Wang et al., 2020; Lee et al., 2019; Raileanu et al., 2021; Raileanu and Fergus, 2021; Cobbe et al., 2021; Sonar et al., 2021; Bertran et al., 2020; Li et al., 2021). For example, a policy that is invariant to the color of agents is likely to generalize well in BigFish. More intricate invariances include the order of observations in a trajectory (Raileanu and Fergus, 2021), and the length of a trajectory, as reflected in the value function (Raileanu and Fergus, 2021).
Can ZSG-RL be reduced to only finding invariant policies? As a counter-argument, consider the following thought experiment2. Imagine Maze, but with the walls and goal hidden in the observation (Fig. 1f). Arguably, this is the most task-invariant observation possible, such that a solution can still be obtained in a reasonable time. An agent with memory can be trained to optimally solve all training tasks: figuring out wall positions by trying to move ahead and observing the resulting motion, and identifying based on its movement history in which training maze it is currently in. Obviously, such a strategy will not generalize to test mazes. Indeed, as depicted in Figure 2, performance in tasks like Maze and Heist, where the strategy for solving any particular training task must be _indicative_ of that task, has largely not improved by methods based on invariance (e.g. UCB-DrAC and IDAAC).
Footnote 2: We validated this experiment empirically, using a recurrent policy on Maze with 128 training tasks, observing 85% success rate on training domains, and test success similar to a random policy (see Appendix A).
Interestingly, decent zero-shot generalization can be obtained even without a policy that generalizes well. As described by Ghosh et al. (2021), an agent can overcome test-time errors in its policy by treating the perfect policy as an _unobserved_ variable. The resulting decision making problem, termed the _epistemic POMDP_, may require some exploration at test time to resolve uncertainty. Ghosh et al. (2021) further proposed the LEEP algorithm based on this principle, which trains an ensemble of agents and essentially chooses randomly between the members when the ensemble does not agree, and was the first method to present substantial generalization improvement on Maze.
In this work, we follow the epistemic POMDP idea, but ask: _how to improve exploration at test time?_ Our approach is based on a novel discovery: when we train an agent to _explore_ the training domains using a maximum entropy objective (Hazan et al., 2019; Mutti et al., 2021), we observe that the learned exploration behavior generalizes surprisingly well--much better than the generalization attained when training the agent to maximize reward. Intuitively, this can be explained by the fact that reward is a strong signal that leads to a specific behavior that the agent can'memorize' during training, while exploration is naturally more varied, making it harder to memorize and overfit.
Exploration by itself, however, is not useful for solving new tasks. Our algorithm, _Explore to Generalize_ (ExpGen), additionally trains an ensemble of reward-seeking agents. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions _using the exploration policy_, which we demonstrate to generalize, and drive us to a novel part of the state space, where the ensemble may potentially agree again.
ExpGen is simple to implement, and can be used with any reward maximizing RL algorithm. Combined with vanilla PPO, ExpGen significantly improves the state-of-the-art (SOTA) on several ProcGen games for which previous methods fail (see Fig. 2). ExpGen also significantly improves
Figure 1: (a),(b),(c),(d) and (e) displays screenshot of ProcGen games. (f) Imaginary maze with goal and walls removed (see text for explanation).
Figure 2: Normalized test Performance for ExpGen, LEEP, IDAAC, DAAC, and PPO, on five ProcGen games. ExpGen shows state-of-the-art performance on test levels of Maze, Heist and Jumper; games that are notoriously challenging for other leading approaches. The scores are normalized as proposed by (Cobbe et al., 2020).
upon LEEP, due to its effective test-time exploration strategy. For example, on Maze with \(200\) training levels, our method obtains \(83\%\) success on test tasks, whereas the previous state-of-the-art achieved \(66\%\). When combined with IDAAC (Raileanu and Fergus, 2021), the leading invariance-based algorithm, ExpGen achieves state-of-the-art performance on the full ProcGen suite (the full results are provided in Appendix D).
## 2 Related Work
Generalization in RLThe recent survey by Kirk et al. (2021) provides an extensive review of generalization in RL; here, we provide a brief overview. One approach to generalization is by artificially increasing the number of training tasks, using either procedural generation (Cobbe et al., 2019, 2020), or augmentations (Kostrikov et al., 2020; Ye et al., 2020; Lee et al., 2019; Raileanu et al., 2021), task interpolation (Yao et al., 2021) or various regularization technique, such as dropout (Igl et al., 2020) and batch normalization (Farebrother et al., 2018; Igl et al., 2020). Leading approaches, namely IDAAC (Raileanu and Fergus, 2021) and PPG (Cobbe et al., 2021), investigate the advantages of decoupling policy and value functions for generalization, whereas Jiang et al. (2021) propose automatic curriculum learning of levels.
A different approach is to add inductive bias to the neural network policy or learning algorithm. Approaches such as Tamar et al. (2016), Vlastelica et al. (2021), Boutilier et al. (2020) embed a differentiable planner or learning algorithm into the neural network. Other methods (Kansky et al., 2017; Toyer et al., 2018; Rivlin et al., 2020) combine learning with classical graph planning to generalize across various planning domains. These approaches require some knowledge about the problem structure (e.g., a relevant planning algorithm), while our approach does not require any task-specific knowledge. Another line of work aims to learn policies or features that are invariant across the different training tasks (Sonar et al., 2021; Bertran et al., 2020; Li et al., 2021; Igl et al., 2019; Stooke et al., 2021; Mazoure et al., 2020) and are thus more robust to sensory variations.
Using an ensemble to direct exploration to unknown areas of the state space was proposed in the model-based TEXPLORE algorithm of Hester and Stone (2013), where an ensemble of transition models was averaged to induce exploratory actions when the models differ in their prediction. Observing that exploration can help with zero-shot generalization, the model-free LEEP algorithm by Ghosh et al. (2021) is most relevant to our work. LEEP trains an ensemble of policies, each on a separate subset of the training environment, with a loss function that encourages agreement between the ensemble members. Effectively, the KL loss in LEEP encourages random actions when the agents of the ensemble do not agree, which is related to our method. However, random actions can be significantly less effective in exploring a domain than a policy that is explicitly trained to explore, such as a maximum-entropy policy. Consequentially, we observe that our approach leads to significantly better performance at test time.
State Space Maximum Entropy ExplorationMaximum entropy exploration (maxEnt, Hazan et al. 2019; Mutti et al. 2021) is an unsupervised learning framework that trains policies that maximize the entropy of their state-visitation frequency, leading to a behavior that continuously explores the environment state space. Recently, maximum entropy policies have gained attention in RL (Liu and Abbeel, 2021; Xu et al., 2021; Seo et al., 2021; Hazan et al., 2019; Mutti et al., 2021) mainly in the context of unsupervised pre-training. In that setting, the agent is allowed to train for a long period without access to environment rewards, and only during test the agent gets exposed to the reward signal and performs a limited fine-tuning adaptation learning. Importantly, these works expose the agent to the same environments during pre-training and test phases, with the only distinction being the lack of extrinsic reward during pre-training. To the best of our knowledge, our observation that maxEnt policies generalize well in the zero-shot setting is novel.
## 3 Problem Setting and Background
We describe our problem setting and provide background on maxEnt exploration.
#### Reinforcement Learning (RL)
In Reinforcement Learning an agent interacts with an unknown, stochastic environment and collects rewards. This is modeled by a Partially Observed Markov Decision Process (POMDP) (Bertsekas, 2012), which is the tuple \(M=(S,A,O,P_{init},P,\Sigma,r,\gamma)\), where \(S\in\mathbb{R}^{|S|}\) and \(A\in\mathbb{R}^{|A|}\) are the state and actions spaces, \(O\) is the observation space, \(P_{init}\) is an initial state distribution, \(P\) is the transition kernel, \(\Sigma\) is the observation function, \(r:S\times A\rightarrow\mathbb{R}\) is the reward function, and \(\gamma\in[0,1)\) is the discount factor. The agent starts from initial state \(s_{0}\sim P_{init}\) and at time \(t\) performs an action \(a_{t}\) on the environment that yields a reward \(r_{t}=r(s_{t},a_{t})\), and an observation \(o_{t}=\Sigma(s_{t},a_{t})\in O\). Consequently, the environment transitions into the next state according to \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\). Let the history at time \(t\) be \(h_{t}=\{o_{0},a_{0},r_{0},o_{1},a_{1},r_{1}\ldots,o_{t}\}\), the sequence of observations, actions and rewards. The agent's next action is outlined by a policy \(\pi\), which is a stochastic mapping from the history to an action probability \(\pi(a|h_{t})=P(a_{t}=a|h_{t})\). In our formulation, a history-dependent policy (and not a Markov policy) is required both due to partially observed states, epistemic uncertainty (Ghosh et al., 2021), and also for optimal maxEnt exploration (Mutti et al., 2022).
#### Zero-Shot Generalization for RL
We assume a prior distribution over POMDPs \(P(M)\), defined over some space of POMDPs. For a given POMDP, an optimal policy maximizes the expected discounted return \(\mathbb{E}_{\pi,M}[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})]\), where the expectation is taken over the policy \(\pi(h_{t})\), and the state transition probability \(s_{t}\sim P\) of POMDP \(M\). Our generalization objective in this work is to maximize the discounted cumulative reward taken _in expectation over the POMDP prior_, also termed the _population risk_:
\[\mathcal{R}_{pop}(\pi)=\mathbb{E}_{M\sim P(M)}\left[\mathbb{E}_{\pi,M}\left[\sum _{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\right]\right]. \tag{1}\]
Seeking a policy that performs well in expectation over any POMDP from the prior corresponds to zero-shot generalization.
We assume access to \(N\) training POMDPs \(M_{1},\ldots,M_{N}\) sampled from the prior, \(M_{i}\sim P(M)\). Our goal is to use \(M_{1},\ldots,M_{N}\) to learn a policy that performs well on objective 1. A common approach is to optimize the _empirical risk_ objective:
\[\mathcal{R}_{emp}(\pi)=\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{\pi,M_{i}}\left[ \sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\right]=\mathbb{E}_{M\sim\hat{P}(M )}\left[\mathbb{E}_{\pi,M}\left[\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t}) \right]\right], \tag{2}\]
where the empirical POMDP distribution can be different from the true distribution, i.e. \(\hat{P}(M)\neq P(M)\). In general, a policy that optimizes the empirical risk (Eq. 2) may perform poorly on the population risk (Eq. 1)--this is known as overfitting in statistical learning theory (Shalev-Shwartz and Ben-David, 2014), and has been analyzed recently also for RL (Tamar et al., 2022).
#### Maximum Entropy Exploration
In the following we provide the definitions for the state distribution and the maximum entropy exploration objective. For simplicity, we discuss MDPs--the fully observed special case of POMDPs where \(O=S\), and \(\Sigma(s,a)=s\).
A policy \(\pi\), through its interaction with an MDP, induces a \(t\)-step state distribution \(d_{t,\pi}(s)=p(s_{t}=s|\pi)\) over the state space \(S\). Let \(d_{t,\pi}(s,a)=p(s_{t}=s,a_{t}=a|\pi)\) be its \(t\)-step state-action counterpart. For the infinite horizon setting, the stationary state distribution is defined as \(d_{\pi}(s)=lim_{t\rightarrow\infty}d_{t,\pi}(s)\), and its \(\gamma\)-discounted version as \(d_{\gamma,\pi}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}d_{t,\pi}(s)\). We denote the state marginal distribution as \(d_{T,\pi}(s)=\frac{1}{T}\sum_{t=0}^{T}d_{t,\pi}(s)\), which is a marginalization of the \(t\)-step state distribution over a finite time \(T\). The objective of maximum entropy exploration is given by:
\[\mathcal{H}(d(\cdot))=-\mathbb{E}_{s\sim d}[\log(d(s))], \tag{3}\]
where \(d\) can be regarded as either the stationary state distribution \(d_{\pi}\)(Mutti and Restelli, 2020), the discounted state distribution \(d_{\gamma,\pi}\)(Hazan et al., 2019) or the marginal state distribution \(d_{T,\pi}\)(Lee et al., 2019, Mutti and Restelli, 2020). In our work we focus on the finite horizon setting andadapt the marginal state distribution \(d_{T,\pi}\) in which \(T\) equals the episode horizon \(H\), i.e. we seek to maximize the objective:
\[\mathcal{R}_{\mathcal{H}}(\pi)=\mathbb{E}_{M\sim\hat{P}(M)}\left[\mathcal{H}(d_{ H,\pi})\right]=\mathbb{E}_{M\sim\hat{P}(M)}\left[\mathcal{H}\left(\frac{1}{H} \sum_{t=0}^{H}d_{t,\pi}(s)\right)\right], \tag{4}\]
which yields a policy that "equally" visits all states during the episode. Existing works that target maximum entropy exploration rely on estimating the density of the agent's state visitation distribution [Hazan et al., 2019, Lee et al., 2019b]. More recently, a branch of algorithms that employ nonparametric entropy estimation [Liu and Abbeel, 2021a, Mutti et al., 2021, Seo et al., 2021] has emerged, circumventing the burden of density estimation. Here, we follow this common thread and adapt the non-parametric entropy estimation approach; we estimate the entropy using the particle-based \(k\)-nearest neighbor (\(k\)-NN estimator) [Beirlant et al., 1997, Singh et al., 2003], as elaborated in the next section.
## 4 The Generalization Ability of Maximum Entropy Exploration
In this section we present an empirical observation--policies trained for maximum entropy exploration (maxEnt policy) generalize well. First, we explain the training procedure of our maxEnt policy, then we show empirical results supporting this observation.
### Training State Space Maximum Entropy Policy
To tackle objective (4), we estimate the entropy using the particle-based \(k\)-NN estimator [Beirlant et al., 1997, Singh et al., 2003], as described here. Let \(X\) be a random variable over the support \(\chi\subset\mathbb{R}^{m}\) with a probability mass function \(p\). Given the probability of this random variable, its entropy is obtained by \(\mathcal{H}_{X}(p)=-\mathbb{E}_{x\sim p}[\log(p)]\). Without access to its distribution \(p\), the entropy can be estimated using \(N\) samples \(\{x_{i}\}_{i=1}^{N}\) by the \(k\)-NN estimator Singh et al. [2003]:
\[\hat{\mathcal{H}}_{X}^{k,N}(p)\approx\frac{1}{N}\sum_{i=1}^{N}\log\left(\left\| x_{i}-x_{i}^{k\text{-NN}}\right\|_{2}\right), \tag{5}\]
where \(x_{i}^{k\text{-NN}}\) is the \(k\)-NN sample of \(x_{i}\) from the set \(\{x_{i}\}_{i=1}^{N}\).
To estimate the distribution \(d_{H,\pi}\) over the states \(S\), we consider each trajectory as \(H\) samples of states \(\{s_{t}\}_{t=1}^{H}\) and take \(s_{t}^{k\text{-NN}}\) to be the \(k\)-NN of the state \(s_{t}\) within the trajectory, as proposed by previous works [APT, Liu and Abbeel (2021a), RE3, Seo et al. (2021), and APS, Liu and Abbeel (2021a)],
\[\hat{\mathcal{H}}^{k,H}(d_{H,\pi})\approx\frac{1}{H}\sum_{t=1}^{H}\log\left( \left\|s_{t}-s_{t}^{k\text{-NN}}\right\|_{2}\right). \tag{6}\]
Next, similar to previous works, since this sampled estimation of the entropy (Eq. 6) is a sum of functions that operate on each state separately, it can be considered as an expected reward objective \(\hat{\mathcal{H}}^{k,H}(d_{H,\pi})\approx\frac{1}{H}\sum_{t=1}^{H}r_{I}(s_{t})\) with the intrinsic reward function:
\[r_{I}(s_{t}):=\log(\left\|s_{t}-s_{t}^{k\text{-NN}}\right\|_{2}). \tag{7}\]
This formulation enables us to deploy any RL algorithm to approximately optimize objective (4). Specifically, in our work we use the policy gradient algorithm PPO [Schulman et al., 2017], where at every time step \(t\) the state \(s_{t}^{k\text{-NN}}\) is chosen from previous states \(\{s_{i}\}_{i=1}^{t-1}\) of the same episode.
Another challenge stems from the computational complexity of calculating the \(L_{2}\) norm of the \(k\)-NN (Eq. 7) at every time step \(t\). To improve computational efficiency, we introduce the following approximation: instead of taking the full observation as the state \(s_{i}\) (i.e. \(64\times 64\) RGB image), we sub-sample (denoted \(\downarrow\)) the observation by applying average pooling of \(3\times 3\) to produce an image \(s_{i}^{\downarrow}\) of size \(21\times 21\), resulting in:
\[r_{I}(s_{t}):=\log\left(\left\|s_{t}^{\downarrow}-s_{t}^{k\text{-NN}, \downarrow}\right\|_{2}\right). \tag{8}\]We emphasize that we do not modify the termination condition of each game. However, a maxEnt policy will learn to _avoid_ termination, as this increases the sum of intrinsic rewards. In Figure 3 we display the states visited by a maxEnt policy on Maze. We also experimented with \(L_{0}\) as the state similarity measure instead of \(L_{2}\), which resulted in similar performance (see Appendix F.2).
### Generalization of maxEnt Policy
The generalization gap describes the difference between the reward accumulated during training \(\mathcal{R}_{emp}(\pi)\) and testing \(\mathcal{R}_{pop}(\pi)\) of a policy, where we approximate the population score by testing on a large population of tasks withheld during training. We can evaluate the generalization gap for either an extrinsic reward, or for an intrinsic reward, such as the reward that elicits maxEnt exploration (Eq. 8). In the latter, the generalization gap captures how well the agent's exploration strategy generalizes.
We found that agents trained for maximum entropy exploration exhibit a smaller generalization gap compared with the standard approach of training solely with extrinsic reward. Intuitively, this can be attributed to the extrinsic reward serving as an 'easy' signal to learn from, and overfit to in the training environments. To assess the generalization quality of the maxEnt policy, we train agents on \(200,500,1000\) and \(5000\) instances of ProcGen's Maze, Jumper and Miner environments using the intrinsic reward (Eq. 8). The policies are equipped with a memory unit (GRU, Cho et al. 2014) to allow learning of deterministic policies that maximize the entropy (Mutti et al., 2022)3.
Footnote 3: An extensive discussion on the importance of memory for the maxEnt objective is in Appendix B.3.
The train and test return scores are shown in Fig. 3(a). In all three environments, we demonstrate a small generalization gap, as test performance on unseen levels closely follows the performance achieved during training. When considering Maze trained on \(200\) levels, we observe a small generalization gap of \(1.7\%\), meaning test performance closely follows train performance. For Jumper and Miner the maxEnt policy exhibits a small generalization gap of \(8.5\%\) and \(4.3\%\), respectively. In addition, we verify that the train results are near optimal by comparing with a hand designed approximately optimal exploration policy. For example, on Maze we use the well known maze exploring strategy _wall follower_, also known as the left/right-hand rule (Hendrawan, 2020); see Appendix B.2 for details.
Next, we evaluate the generalization gap of agents trained to maximize the extrinsic reward4. The results for this experiment, shown in Fig. 3(b), illustrate that the generalization gap for extrinsic reward
Figure 4: **Generalization ability of maximum entropy vs. extrinsic reward: (a) Score of maximum entropy. (b) Score of extrinsic reward. Training for maximum entropy exhibits a small generalization gap in Maze, Jumper and Miner. Average and standard deviation are obtained using \(4\) seeds.**
Figure 3: Example of a maxEnt trajectory on Maze. The policy visits every reachable state and averts termination by avoiding the goal state.
is more prominent. For comparison, when trained on \(200\) levels, the figure shows a large generalization gap for Maze (\(38.8\%\)) and Jumper (\(27.5\%\)), while Miner exhibits a moderate generalization gap of \(13.1\%\). For an evaluation on all ProcGen games, please see Appendix B.1.
## 5 Explore to Generalize (ExpGen)
Our main insight is that, given the generalization property of the entropy maximization policy established above, an agent can apply this behavior in a test MDP and expect effective exploration _at test time_. In the following, we pair this insight with the epistemic POMDP idea, and propose to play the exploration policy when the agent faces epistemic uncertainty, hopefully driving the agent to a different state where the reward-seeking policy is more certain. This can be seen as an adaptation of the seminal _explicit explore or exploit_ idea (Kearns and Singh, 2002), to the setting of ZSG-RL.
### Algorithm
Our framework comprises two parts: an entropy maximizing network and an ensemble of networks that maximize an extrinsic reward to evaluate epistemic uncertainty. The first step entails training a network equipped with a memory unit to obtain a maxEnt policy \(\pi_{\mathcal{U}}\) that maximizes entropy, as described in section 4.1. Next, we train an ensemble of memory-less policy networks \(\{\pi_{r}^{j}\}_{j=1}^{m}\) to maximize extrinsic reward. Following Ghosh et al. (2021), we shall use the ensemble to assess epistemic uncertainty. Different from Ghosh et al. (2021), however, we do not change the RL loss function, and use an off-the-shelf RL algorithm (such as PPO (Schulman et al., 2017) or IDAAC (Stooke et al., 2021)).
```
1:Input: ensemble size \(m\),
2: initial state \(s_{0}=\textsc{Environment.reset}()\).
3:\(n_{\pi_{\mathcal{U}}}=0\)
4:Train maxEnt policy \(\pi_{\mathcal{U}}\) using intrinsic reward \(r_{I}\) (Eq: 7).
5:Train \(m\) policies \(\pi_{r}^{1},\pi_{r}^{2}\dots\pi_{r}^{m}\) using extrinsic reward \(r_{ext}\).
6:for\(t=1\)to\(H\)do
7:\(a_{i}\sim\pi_{r}^{i}(\cdot|s_{t})\)
8:\(n_{\pi_{\mathcal{H}}}\gets n_{\pi_{\mathcal{U}}}-1\)
9:if\(a_{i}\in\textsc{Consensus}(a_{j}\mid j\in\{1\dots m\})\) and \(n_{\pi_{\mathcal{H}}}<0\)then
10:\(a_{t}=a_{i}\)
11:else
12:\(a_{\mathcal{H}}\sim\pi_{\mathcal{H}}(\cdot|h_{t})\)
13:\(a_{t}=a_{\mathcal{H}}\)
14:\(n_{\pi_{\mathcal{H}}}\sim Geom(\alpha)\)
15:endif
16:\(s_{t+1}\leftarrow\textsc{Environment.step}(a_{t})\)
17:endfor
```
**Algorithm 1** Explore to Generalize (ExpGen)
At test time, we couple these two components into a combined agent \(\boldsymbol{\pi}\) (detailed as pseudo-code in Algorithm 1). We consider domains with a finite action space, and say that the policy \(\pi_{r}^{i}\) is certain at state \(s\) if its action \(a_{i}\!\sim\!\pi_{r}^{i}(a|s)\) is in consensus, the agent \(\boldsymbol{\pi}\) takes a sequence of \(n_{\pi_{\mathcal{H}}}\) actions from the entropy maximization policy \(\pi_{\mathcal{H}}\), which encourages exploratory behavior.
Agent meta-stabilitySwitching between two policies may result in a case where the agent repeatedly toggles between two states--if, say, the maxEnt policy takes the agent from state \(s_{1}\) to a state \(s_{2}\), where the ensemble agrees on an action that again moves to state \(s_{1}\). To avoid such "meta-stable" behavior, we randomly choose the number of maxEnt steps \(n_{\pi_{\mathcal{H}}}\) from a Geometric distribution, \(n_{\pi_{\mathcal{H}}}\sim Geom(\alpha)\).
## 6 Experiments
We evaluate our algorithm on the ProcGen benchmark, which employs a discrete 15-dimensional action space and generates RGB observations of size \(64\times 64\times 3\). Our experimental setup follows ProcGen's 'easy' configuration, wherein agents are trained on \(200\) levels for \(25M\) steps and subsequently tested on random levels (Cobbe et al., 2020). All agents are implemented using the IMPALA convolutional architecture (Espeholt et al., 2018), and trained using PPO (Schulman et al., 2017) or IDAAC (Raileanu and Fergus, 2021). For the maximum entropy agent \(\pi_{\mathcal{H}}\) we incorporate a single GRU (Cho et al., 2014) at the final embedding of the IMPALA convolutional architecture. For all games, we use the same parameter \(\alpha=0.5\) of the Geometric distribution and form an ensemble of 10 networks. For further information regarding our experimental setup and specific hyperparameters, please refer to Appendix C.
### Generalization Performance
We compare our algorithm to six leading algorithms: vanilla PPO (Schulman et al., 2017), PLR (Jiang et al., 2021) that utilizes automatic curriculum-based learning, UCB-DrAC (Raileanu et al., 2021), which incorporates data augmentation to learn policies invariant to different input transformations, PPG (Cobbe et al., 2021), which decouples the optimization of policy and value function during learning, and IDAAC (Raileanu and Fergus, 2021), the previous state-of-the-art algorithm on ProcGen that decouples policy learning from value function learning and employs adversarial loss to enforce invariance to spurious features. Lastly, we evaluate our algorithm against LEEP (Ghosh et al., 2021), the only algorithm that, to our knowledge, managed to improve upon the performance of vanilla PPO on Maze and Heist. The evaluation matches the train and test setting detailed by the contending algorithms and their performance is provided as reported by their authors. For evaluating LEEP and IDAAC, we use the original implementation provided by the authors. 5
Footnote 5: For LEEP and IDAAC, we followed the prescribed hyperparameter values of the papers’ authors (we directly corresponded with them). For some domains, we could not reproduce the exact results, and in those cases, we used their reported scores, giving them an advantage.
Tables 2 and 1 show the train and test scores, respectively, for all ProcGen games. The tables show that ExpGen combined with PPO achieves a notable gain over the baselines on Maze, Heist and Jumper, while on other games, invariance-based approaches perform better (for example, IDAAC leads on BigFish, Plunder and Climber, whereas PPG leads on CaveFlyer, and UCB-DrAC leads on Dodgeball). These results correspond to our observation that for some domains, invariance cannot be used to completely resolve epistemic uncertainty. We emphasize that _ExpGen substantially outperforms LEEP on all games_, showing that our improved exploration at test time is significant. In Appendix E we compare ExpGen with LEEP trained for \(50M\) environment steps, showing a similar trend. When combining ExpGen with the leading invariance-based approach, IDAAC, we establish that ExpGen is in-fact _complementary_ to the advantage of such algorithms, setting a new state-of-the-art performance in ProcGen. A notable exception is Dodgeball, where all current methods still fail.
Figures 6 and 7 show aggregate statistics of ExpGen, PPO, PLR, UCB-DrAC, PPG and IDAAC for all games 6, affirming the dominance of ExpGen+IDAAC as the state-of-the-art. The results are obtained using \(10\) runs per game, with scores normalized as in Appendix C.1. The shaded regions indicate \(95\%\) Confidence Intervals (CIs) and are estimated using the percentile stratified bootstrap with \(2,000\) (Fig. 6) and \(50,000\) (Fig. 7) bootstrap re-samples. Fig. 6 (Left) compares algorithm score-distribution, illustrating the advantage of the proposed approach across all games. Fig. 6 (Right)
Figure 5: Test performance of PPO trained using the reward \(r_{total}\) that combines intrinsic and extrinsic rewards, weighted by \(\beta\) (Eq. 9). Each figure details the results for different values of discount factor \(\gamma\). All networks are randomly initialized and trained on \(200\) maze levels, and their mean is computed over \(4\) runs with different seeds. The figures show an improvement over the PPO baseline for \(\gamma=0.5\). In all cases, ExpGen outperforms the combined reward agent.
shows the probability of improvement of algorithm \(X\) against algorithm \(Y\). The first row (ExpGen vs. IDAAC) demonstrates that the proposed approach surpasses IDAAC with probability \(0.6\) and subsequent rows emphasize the superiority of ExpGen over contending methods at an even higher probability. This is because ExpGen improves upon IDAAC in several key challenging tasks and is on-par in the rest. Fig. 7 provides aggregate metrics of mean, median and IQM scores and optimality gap (as \(1-mean\)) for all ProcGen games. The figure shows that ExpGen outperforms the contending methods in all measures.
Ablation StudyOne may wonder if the ensemble in ExpGen is necessary, or whether the observation that the maxEnt policy generalizes well can be exploited using a single policy. We investigate the effect of combining the intrinsic and extrinsic rewards, \(r_{I}\) and \(r_{ext}\), respectively, into a single reward as a weighted sum:
\[r_{\mathrm{total}}=\beta r_{I}+(1-\beta)r_{\mathrm{ext}}, \tag{9}\]
and train for \(\beta=\{0.1,0.3,0.5,0.7,0.9\}\) on Maze. Figure 5 shows the train and test scores over \(50M\) steps for different values of discount factor \(\gamma\). We obtain the best test score for \(\gamma=0.5\) and \(\beta=0.1\), illustrating an improvement compared with the PPO baseline. When comparing with ExpGen, the combined reward (Eq. 9) exhibits inferior performance with slightly higher variance. In Appendix C.2 and F, we also provide an ablation study of ensemble size and draw comparisons to variants of our algorithm.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline Game & PPO & PLR & UCB-DrAC & PPG & IDAAC & LEEP & ExpGen & ExpGen \\ & & & & & & & & (PPO) & (IDAAC) \\ \hline BigFish & \(2.9\pm 1.1\) & \(10.9\pm 2.8\) & \(9.2\pm 2.0\) & \(11.2\pm 1.4\) & \(\mathbf{18.5\pm 1.2}\) & \(4.9\pm 0.9\) & \(6.0\pm 0.5\) & \(\mathbf{18.5\pm 1.9}\) \\ StarPilot & \(24.9\pm 1.0\) & \(27.9\pm 4.4\) & \(30.0\pm 1.3\) & \(\mathbf{47.2\pm 1.6}\) & \(37.0\pm 2.3\) & \(3.2\pm 2.2\) & \(31.0\pm 0.9\) & \(39.8\pm 2.9\) \\ Fruit Bot & \(26.2\pm 1.2\) & \(28.0\pm 1.4\) & \(27.6\pm 0.4\) & \(27.8\pm 0.6\) & \(\mathbf{27.9\pm 0.5}\) & \(16.4\pm 1.6\) & \(26.2\pm 0.6\) & \(\mathbf{28.4\pm 0.4}\) \\ BossFight & \(7.4\pm 0.4\) & \(8.9\pm 0.4\) & \(7.8\pm 0.6\) & \(\mathbf{10.3\pm 0.2}\) & \(\mathbf{9.8\pm 0.6}\) & \(0.5\pm 0.3\) & \(7.7\pm 0.2\) & \(9.8\pm 0.5\) \\ Ninja & \(6.1\pm 0.2\) & \(\mathbf{7.2\pm 0.4}\) & \(6.6\pm 0.4\) & \(6.6\pm 0.1\) & \(6.8\pm 0.4\) & \(4.4\pm 0.5\) & \(6.6\pm 0.2\) & \(6.6\pm 0.3\) \\ Plunder & \(7.8\pm 1.6\) & \(8.7\pm 2.2\) & \(8.3\pm 1.1\) & \(14.3\pm 2.0\) & \(\mathbf{23.3\pm 1.4}\) & \(4.4\pm 0.3\) & \(5.5\pm 1.3\) & \(\mathbf{23.6\pm 1.4}\) \\ CueyF & \(5.5\pm 0.5\) & \(6.3\pm 0.5\) & \(5.0\pm 0.8\) & \(\mathbf{7.0\pm 0.4}\) & \(5.0\pm 0.6\) & \(4.9\pm 0.2\) & \(5.7\pm 0.3\) & \(5.3\pm 0.7\) \\ CoinRun & \(8.6\pm 0.2\) & \(8.8\pm 0.5\) & \(8.6\pm 0.2\) & \(8.9\pm 0.1\) & \(\mathbf{9.4\pm 0.1}\) & \(7.3\pm 0.4\) & \(8.8\pm 0.1\) & \(\mathbf{9.3\pm 0.3}\) \\ Jumper & \(5.8\pm 0.3\) & \(5.8\pm 0.5\) & \(6.2\pm 0.3\) & \(5.9\pm 0.1\) & \(6.3\pm 0.2\) & \(5.4\pm 1.2\) & \(\mathbf{6.7\pm 0.3}\) & \(\mathbf{6.8\pm 0.5}\) \\ Chaser & \(3.1\pm 0.9\) & \(6.9\pm 1.2\) & \(6.3\pm 0.6\) & \(\mathbf{9.8\pm 0.5}\) & \(6.8\pm 1.0\) & \(3.0\pm 0.1\) & \(3.6\pm 1.6\) & \(7.1\pm 1.4\) \\ Climber & \(5.4\pm 0.5\) & \(6.3\pm 0.8\) & \(6.3\pm 0.6\) & \(2.8\pm 0.4\) & \(8.3\pm 0.4\) & \(2.6\pm 0.9\) & \(5.9\pm 0.5\) & \(\mathbf{9.5\pm 0.3}\) \\ Dodgeball & \(2.2\pm 0.4\) & \(1.8\pm 0.5\) & \(\mathbf{4.2\pm 0.9}\) & \(2.3\pm 0.3\) & \(3.2\pm 0.3\) & \(1.9\pm 0.2\) & \(2.9\pm 0.3\) & \(2.8\pm 0.2\) \\ Heist & \(2.4\pm 0.5\) & \(2.9\pm 0.5\) & \(3.5\pm 0.4\) & \(2.8\pm 0.4\) & \(3.5\pm 0.2\) & \(4.5\pm 0.3\) & \(\mathbf{7.4\pm 0.2}\) & \(\mathbf{7.2\pm 0.5}\) \\ Leaper & \(4.9\pm 2.2\) & \(6.8\pm 1.2\) & \(4.8\pm 0.9\) & \(\mathbf{8.5\pm 1.0}\) & \(7.7\pm 1.0\) & \(4.4\pm 0.2\) & \(4.0\pm 1.7\) & \(7.6\pm 1.2\) \\ Maze & \(5.6\pm 0.1\) & \(5.5\pm 0.8\) & \(6.3\pm 0.1\) & \(5.1\pm 0.3\) & \(5.6\pm 0.3\) & \(6.6\pm 0.2\) & \(\mathbf{8.3\pm 0.2}\) & \(7.8\pm 0.2\) \\ Miner & \(7.8\pm 0.3\) & \(9.6\pm 0.6\) & \(9.2\pm 0.6\) & \(7.4\pm 0.2\) & \(\mathbf{9.5\pm 0.4}\) & \(1.1\pm 0.1\) & \(8.0\pm 0.7\) & \(\mathbf{9.8\pm 0.3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Test score** of ProcGen games trained on \(200\) levels for \(25M\) environment steps. We compare our algorithm to PPO, PLR, UCB-DrAC, PPG, IDAAC and LEEP. The mean and standard deviation are computed over \(10\) runs with different seeds.
Figure 6: Comparison across all ProcGen games, with \(95\%\) bootstrap CIs highlighted in color. **Left.** Score distributions of ExpGen, PPO, PLR, UCB-DrAC, PPG and IDAAC. **Right.** Shows in each row, the probability of algorithm \(X\) outperforming algorithm \(Y\). The comparison illustrates the superiority of ExpGen over the leading contender IDAAC with probability \(0.6\), as well as over other methods with even higher probability.
## 7 Discussion and Limitations
We observed that policies trained to explore, using maximum entropy RL, exhibited generalization of their exploration behavior in the zero-shot RL setting. Based on this insight, we proposed ExpGen--a ZSG-RL algorithm that takes a maxEnt exploration step whenever an ensemble of policies trained for reward maximization does not agree on the current action. We demonstrated that this simple approach performs well on all ZSG-RL domains of the ProcGen benchmark.
One burning question is _why does maxEnt exploration generalize so well?_ An intuitive argument is that the maxEnt policy in an MDP is _invariant_ to the reward. Thus, if for every training MDP there are many different rewards, each prescribing a different behavior, the maxEnt policy has to be invariant to this variability. In other words, the maxEnt policy contains _no information_ about the rewards in the data, and generalization is well known to be bounded by the mutual information between the policy and the training data (Bassily et al., 2018). Perhaps an even more interesting question is whether the maxEnt policy is also less sensitive to variations in the dynamics of the MDPs. We leave this as an open theoretical problem.
Another consideration is safety. In some domains, a wrong action can lead to a disaster, and in such cases, exploration at test time should be hedged. One possibility is to add to ExpGen's policy ensemble an ensemble of advantage functions, and use it to weigh the action agreement (Rotman et al., 2020). Intuitively, the ensemble should agree that unsafe actions have a low advantage, and not select them at test time.
Finally, we point out that while our work made significant progress on generalization in several ProcGen games, the performance on Dodgeball remains low for all methods we are aware of. An interesting question is whether performance on Dodgeball can be improved by combining invariance-based techniques (other than IDAAC) with exploration at test time, or whether Dodgeball represents a different class of problems that requires a completely different approach.
\begin{table}
\begin{tabular}{l|c c c c c c c|c c} \hline \hline Game & PPO & PLR & UCB-DrAC & PPG & IDAAC & LEEP & ExpGen & ExpGen \\ & & & & & & & & (PPO) & (IDAAC) \\ \hline BigFish & \(8.9\pm 2.0\) & \(7.8\pm 1.0\) & \(12.8\pm 1.8\) & \(19.9\pm 1.7\) & \(\boldsymbol{21.8\pm 1.8}\) & \(8.9\pm 0.9\) & \(7.0\pm 0.4\) & \(\boldsymbol{21.5\pm 2.3}\) \\ StarPilot & \(29.0\pm 1.1\) & \(2.6\pm 0.3\) & \(33.1\pm 1.3\) & \(\boldsymbol{49.6\pm 2.1}\) & \(38.6\pm 2.2\) & \(5.3\pm 0.3\) & \(34.3\pm 1.6\) & \(40.0\pm 2.7\) \\ FruitBot & \(28.8\pm 0.6\) & \(15.9\pm 1.3\) & \(29.3\pm 0.5\) & \(\boldsymbol{31.1\pm 0.5}\) & \(29.1\pm 0.7\) & \(17.4\pm 0.7\) & \(28.9\pm 0.6\) & \(29.5\pm 0.5\) \\ BossFight & \(8.0\pm 0.4\) & \(8.7\pm 0.7\) & \(8.1\pm 0.4\) & \(\boldsymbol{11.1\pm 0.1}\) & \(10.4\pm 0.4\) & \(0.3\pm 0.1\) & \(7.9\pm 0.6\) & \(9.9\pm 0.7\) \\ Ninja & \(7.3\pm 0.2\) & \(5.4\pm 0.5\) & \(8.0\pm 0.4\) & \(8.9\pm 0.2\) & \(\boldsymbol{8.9\pm 0.3}\) & \(4.6\pm 0.2\) & \(8.5\pm 0.3\) & \(7.9\pm 0.6\) \\ Plunder & \(9.4\pm 1.7\) & \(4.1\pm 1.3\) & \(10.2\pm 1.8\) & \(16.4\pm 1.9\) & \(\boldsymbol{24.6\pm 1.6}\) & \(4.9\pm 0.2\) & \(5.8\pm 1.4\) & \(\boldsymbol{26.1\pm 2.7}\) \\ CaveFlyer & \(7.3\pm 0.7\) & \(6.4\pm 0.1\) & \(5.8\pm 0.9\) & \(\boldsymbol{9.5\pm 0.2}\) & \(6.2\pm 0.6\) & \(4.9\pm 0.3\) & \(6.8\pm 0.4\) & \(5.5\pm 0.5\) \\ CoinRun & \(9.4\pm 0.3\) & \(5.4\pm 0.4\) & \(9.4\pm 0.2\) & \(\boldsymbol{9.9\pm 0.0}\) & \(9.8\pm 0.1\) & \(6.7\pm 0.1\) & \(9.8\pm 0.1\) & \(9.1\pm 0.4\) \\ Jumper & \(8.6\pm 0.1\) & \(3.6\pm 0.5\) & \(8.2\pm 0.1\) & \(8.7\pm 0.1\) & \(\boldsymbol{8.7\pm 0.2}\) & \(5.7\pm 0.1\) & \(7.9\pm 0.2\) & \(8.1\pm 0.4\) \\ Chaser & \(3.7\pm 1.2\) & \(6.3\pm 0.7\) & \(7.0\pm 0.6\) & \(\boldsymbol{10.7\pm 0.4}\) & \(7.5\pm 0.8\) & \(2.6\pm 0.1\) & \(4.7\pm 1.8\) & \(\boldsymbol{6.9\pm 1.1}\) \\ Climber & \(6.9\pm 1.0\) & \(6.2\pm 0.8\) & \(8.6\pm 0.6\) & \(10.2\pm 0.2\) & \(10.2\pm 0.7\) & \(3.5\pm 0.3\) & \(7.7\pm 0.4\) & \(\boldsymbol{11.4\pm 0.2}\) \\ Dodgeball & \(6.4\pm 0.6\) & \(2.0\pm 1.1\) & \(\boldsymbol{7.3\pm 0.8}\) & \(5.5\pm 0.5\) & \(4.9\pm 0.3\) & \(3.3\pm 0.1\) & \(5.8\pm 0.5\) & \(5.3\pm 0.5\) \\ Heist & \(6.1\pm 0.8\) & \(1.2\pm 0.4\) & \(6.2\pm 0.6\) & \(7.4\pm 0.4\) & \(4.5\pm 0.3\) & \(7.1\pm 0.2\) & \(\boldsymbol{9.4\pm 0.1}\) & \(7.0\pm 0.6\) \\ Leaper & \(5.5\pm 0.4\) & \(6.4\pm 0.4\) & \(5.0\pm 0.9\) & \(\boldsymbol{9.3\pm 1.1}\) & \(8.3\pm 0.7\) & \(4.3\pm 2.3\) & \(4.3\pm 2.0\) & \(8.1\pm 0.9\) \\ Maze & \(9.1\pm 0.2\) & \(4.1\pm 0.5\) & \(8.5\pm 0.3\) & \(9.0\pm 0.2\) & \(6.4\pm 0.5\) & \(9.4\pm 0.3\) & \(\boldsymbol{9.6\pm 0.1}\) & \(7.4\pm 0.4\) \\ Miner & \(11.3\pm 0.3\) & \(9.7\pm 0.4\) & \(\boldsymbol{12.0\pm 0.3}\) & \(11.3\pm 1.0\) & \(11.5\pm 0.5\) & \(1.9\pm 0.6\) & \(9.0\pm 0.8\) & \(11.9\pm 0.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Train score** of ProcGen games trained on \(200\) levels for \(25M\) environment steps. We compare our algorithm to PPO, PLR, UCB-DrAC, PPG, IDAAC and LEEP. The mean and standard deviation are computed over \(10\) runs with different seeds.
Figure 7: Aggregate metrics for all ProcGen games: mean, median and IQM scores (higher is better) and optimality gap (lower is better), with \(95\%\) CIs highlighted in color. ExpGen outperforms the contending methods in all measures.
AcknowledgmentsThe research of DS was Funded by the European Union (ERC, A-B-C-Deep, 101039436). The research of EZ and AT was Funded by the European Union (ERC, Bayes-RL, 101041250). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency (ERCEA). Neither the European Union nor the granting authority can be held responsible for them. DS also acknowledges the support of the Schmidt Career Advancement Chair in AI.
## References
* Agarwal et al. (2021) Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. _Advances in neural information processing systems_, 34:29304-29320, 2021.
* Bassily et al. (2018) Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, and Amir Yehudayoff. Learners that use little information. In _Algorithmic Learning Theory_, pages 25-55. PMLR, 2018.
* Beirlant et al. (1997) Jan Beirlant, Edward J Dudewicz, Laszlo Gyorfi, Edward C Van der Meulen, et al. Nonparametric entropy estimation: An overview. _International Journal of Mathematical and Statistical Sciences_, 6(1):17-39, 1997.
* Bertran et al. (2020) Martin Bertran, Natalia Martinez, Mariano Phielipp, and Guillermo Sapiro. Instance-based generalization in reinforcement learning. _Advances in Neural Information Processing Systems_, 33:11333-11344, 2020.
* Bertsekas (2012) Dimitri Bertsekas. _Dynamic programming and optimal control: Volume I_, volume 1. Athena scientific, 2012.
* Boutilier et al. (2020) Craig Boutilier, Chih-wei Hsu, Branislav Kveton, Martin Mladenov, Csaba Szepesvari, and Manzil Zaheer. Differentiable meta-learning of bandit policies. _Advances in Neural Information Processing Systems_, 33, 2020.
* Cho et al. (2014) Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. _arXiv preprint arXiv:1409.1259_, 2014.
* Cobbe et al. (2019) Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. In _International Conference on Machine Learning_, pages 1282-1289. PMLR, 2019.
* Cobbe et al. (2020) Karl Cobbe, Chris Hesse, Jacob Hilton, and John Schulman. Leveraging procedural generation to benchmark reinforcement learning. In _International conference on machine learning_, pages 2048-2056. PMLR, 2020.
* Cobbe et al. (2021) Karl W Cobbe, Jacob Hilton, Oleg Klimov, and John Schulman. Phasic policy gradient. In _International Conference on Machine Learning_, pages 2020-2027. PMLR, 2021.
* Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. In _International conference on machine learning_, pages 1407-1416. PMLR, 2018.
* Farebrother et al. (2018) Jesse Farebrother, Marlos C Machado, and Michael Bowling. Generalization and regularization in dqn. _arXiv preprint arXiv:1810.00123_, 2018.
* Ghosh et al. (2021) Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P Adams, and Sergey Levine. Why generalization in rl is difficult: Epistemic pomdps and implicit partial observability. _Advances in Neural Information Processing Systems_, 34:25502-25515, 2021.
* Hazan et al. (2019) Elad Hazan, Sham Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. In _International Conference on Machine Learning_, pages 2681-2691. PMLR, 2019.
* Hendrawan (2020) YF Hendrawan. Comparison of hand follower and dead-end filler algorithm in solving perfect mazes. In _Journal of Physics: Conference Series_, volume 1569, page 022059. IOP Publishing, 2020.
* Hendrawan (2020)Todd Hester and Peter Stone. Texplore: real-time sample-efficient reinforcement learning for robots. _Machine learning_, 90:385-429, 2013.
* Igl et al. (2019) Maximilian Igl, Kamil Ciosek, Yingzhen Li, Sebastian Tschiatschek, Cheng Zhang, Sam Devlin, and Katja Hofmann. Generalization in reinforcement learning with selective noise injection and information bottleneck. _Advances in neural information processing systems_, 32, 2019.
* Igl et al. (2020) Maximilian Igl, Gregory Farquhar, Jelena Luketina, Wendelin Boehmer, and Shimon Whiteson. The impact of non-stationarity on generalisation in deep reinforcement learning. _arXiv preprint arXiv:2006.05826_, 2020.
* Jiang et al. (2021) Minqi Jiang, Edward Grefenstette, and Tim Rocktaschel. Prioritized level replay. In _International Conference on Machine Learning_, pages 4940-4950. PMLR, 2021.
* Kansky et al. (2017) Ken Kansky, Tom Silver, David A Mely, Mohamed Eldawy, Miguel Lazaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. In _International conference on machine learning_, pages 1809-1818. PMLR, 2017.
* Kearns and Singh (2002) Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. _Machine learning_, 49(2):209-232, 2002.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* Kirk et al. (2021) Robert Kirk, Amy Zhang, Edward Grefenstette, and Tim Rocktaschel. A survey of generalisation in deep reinforcement learning. _arXiv preprint arXiv:2111.09794_, 2021.
* Kostrikov et al. (2020) Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. _arXiv preprint arXiv:2004.13649_, 2020.
* Lee et al. (2019a) Kimin Lee, Kibok Lee, Jinwoo Shin, and Honglak Lee. Network randomization: A simple technique for generalization in deep reinforcement learning. _arXiv preprint arXiv:1910.05396_, 2019a.
* Lee et al. (2019b) Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, and Ruslan Salakhutdinov. Efficient exploration via state marginal matching. _arXiv preprint arXiv:1906.05274_, 2019b.
* Li et al. (2021) Bonnie Li, Vincent Francois-Lavet, Thang Doan, and Joelle Pineau. Domain adversarial reinforcement learning. _arXiv preprint arXiv:2102.07097_, 2021.
* Liu and Abbeel (2021) Hao Liu and Pieter Abbeel. Apps: Active pretraining with successor features. In _International Conference on Machine Learning_, pages 6736-6747. PMLR, 2021a.
* Liu and Abbeel (2021b) Hao Liu and Pieter Abbeel. Behavior from the void: Unsupervised active pre-training. _Advances in Neural Information Processing Systems_, 34:18459-18473, 2021b.
* Mazoure et al. (2020) Bogdan Mazoure, Remi Tachet des Combes, Thang Long Doan, Philip Bachman, and R Devon Hjelm. Deep reinforcement and infomax learning. _Advances in Neural Information Processing Systems_, 33:3686-3698, 2020.
* Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. _nature_, 518(7540):529-533, 2015.
* Mutti and Restelli (2020) Mirco Mutti and Marcello Restelli. An intrinsically-motivated approach for learning highly exploring and fast mixing policies. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 5232-5239, 2020.
* Mutti et al. (2021) Mirco Mutti, Lorenzo Pratissoli, and Marcello Restelli. Task-agnostic exploration via policy gradient of a non-parametric state entropy estimate. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 9028-9036, 2021.
* Mutti et al. (2022) Mirco Mutti, Riccardo De Santi, and Marcello Restelli. The importance of non-markovianity in maximum state entropy exploration. _arXiv preprint arXiv:2202.03060_, 2022.
* Mnih et al. (2021)Roberta Raileanu and Rob Fergus. Decoupling value and policy for generalization in reinforcement learning. In _International Conference on Machine Learning_, pages 8787-8798. PMLR, 2021.
* Raileanu et al. (2021) Roberta Raileanu, Maxwell Goldstein, Denis Yarats, Ilya Kostrikov, and Rob Fergus. Automatic data augmentation for generalization in reinforcement learning. _Advances in Neural Information Processing Systems_, 34:5402-5415, 2021.
* Rivlin et al. (2020) Or Rivlin, Tamir Hazan, and Erez Karpas. Generalized planning with deep reinforcement learning. _arXiv preprint arXiv:2005.02305_, 2020.
* Rotman et al. (2020) Noga H Rotman, Michael Schapira, and Aviv Tamar. Online safety assurance for learning-augmented systems. In _Proceedings of the 19th ACM Workshop on Hot Topics in Networks_, pages 88-95, 2020.
* Schrittwieser et al. (2020) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. _Nature_, 588(7839):604-609, 2020.
* Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _arXiv preprint arXiv:1707.06347_, 2017.
* Seo et al. (2021) Younggyo Seo, Lili Chen, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. State entropy maximization with random encoders for efficient exploration. In _International Conference on Machine Learning_, pages 9443-9454. PMLR, 2021.
* Shalev-Shwartz and Ben-David (2014) Shai Shalev-Shwartz and Shai Ben-David. _Understanding machine learning: From theory to algorithms_. Cambridge university press, 2014.
* Singh et al. (2003) Harshinder Singh, Neeraj Misra, Vladimir Hnizdo, Adam Fedorowicz, and Eugene Demchuk. Nearest neighbor estimates of entropy. _American journal of mathematical and management sciences_, 23(3-4):301-321, 2003.
* Sonar et al. (2021) Anoopkumar Sonar, Vincent Pacelli, and Anirudha Majumdar. Invariant policy optimization: Towards stronger generalization in reinforcement learning. In _Learning for Dynamics and Control_, pages 21-33. PMLR, 2021.
* Stooke et al. (2021) Adam Stooke, Kimin Lee, Pieter Abbeel, and Michael Laskin. Decoupling representation learning from reinforcement learning. In _International Conference on Machine Learning_, pages 9870-9879. PMLR, 2021.
* Tamar et al. (2016) Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. _Advances in neural information processing systems_, 29, 2016.
* Tamar et al. (2022) Aviv Tamar, Daniel Soudry, and Ev Zisselman. Regularization guarantees generalization in bayesian reinforcement learning through algorithmic stability. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pages 8423-8431, 2022.
* Toyer et al. (2018) Sam Toyer, Felipe Trevizan, Sylvie Thiebaux, and Lexing Xie. Action schema networks: Generalised policies with deep learning. In _Thirty-Second AAAI Conference on Artificial Intelligence_, 2018.
* Vinyals et al. (2019) Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. _Nature_, 575(7782):350-354, 2019.
* Vlastelica et al. (2021) Marin Vlastelica, Michal Rolinek, and Georg Martius. Neuro-algorithmic policies enable fast combinatorial generalization. _arXiv preprint arXiv:2102.07456_, 2021.
* Wang et al. (2020) Kaixin Wang, Bingyi Kang, Jie Shao, and Jiashi Feng. Improving generalization in reinforcement learning with mixture regularization. _Advances in Neural Information Processing Systems_, 33:7968-7978, 2020.
* Wurman et al. (2022) Peter R Wurman, Samuel Barrett, Kenta Kawamoto, James MacGlashan, Kaushik Subramanian, Thomas J Walsh, Roberto Capobianco, Alisa Devlic, Franziska Eckert, Florian Fuchs, et al. Outracing champion gran turismo drivers with deep reinforcement learning. _Nature_, 602(7896):223-228, 2022.
* Wang et al. (2021)Huaxiu Yao, Linjun Zhang, and Chelsea Finn. Meta-learning with fewer tasks through task interpolation. _arXiv preprint arXiv:2106.02695_, 2021.
* Yarats et al. [2021] Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Reinforcement learning with prototypical representations. In _International Conference on Machine Learning_, pages 11920-11931. PMLR, 2021.
* Ye et al. [2020] Chang Ye, Ahmed Khalifa, Philip Bontrager, and Julian Togelius. Rotation, translation, and cropping for zero-shot generalization. In _2020 IEEE Conference on Games (CoG)_, pages 57-64. IEEE, 2020.
Hidden Maze Experiment
In this section we empirically validate our thought experiment described in the Introduction: we train a recurrent policy on hidden mazes using 128 training levels (with fixed environment colors). Figure 7(a) depicts the observations of an agent along its trajectory; the agent sees only its own location (green spot), whereas the entire maze layout, corridors and goal, are hidden. In Fig. 7(b) we visualize the full, unobserved, state of the agent. The train and test results are shown in Fig 9, indicating severe overfitting to the training levels: test performance failed to improve beyond the random initial policy during training. Indeed, the recurrent policy memorizes the agent's training trajectories instead of learning generalized behavior. Hiding the maze's goal and corridors leads to agent-behavior that is the most invariant to instance specific observations--the observations include the minimum information, such that the agent can still learn to solve the maze.
This experiment demonstrates why methods based on observation invariance (e.g., IDAAC RALEanu and Fergus (2021)) do not improve performance on Maze-like tasks, despite significantly improving performance on games such as BigFish and Plunder, where invariance to colors helps to generalize.
Figure 8: Hidden Maze experiment where the agent only observes its own location (green spot). Both the goal (purple spot) and corridors are not observable (appear as walls).
## Appendix B Maximum Entropy Policy
This section elaborates on the implementation details of the maxEnt oracle and provides a performance evaluation of the maxEnt policy.
### Generalization Gap of maxEnt vs PPO across all ProcGen Environments
Recall from Section 4.2 that the (normalized) generalization gap is described by
\[(\mathcal{R}_{emp}(\pi)-\mathcal{R}_{pop}(\pi))/\mathcal{R}_{emp}(\pi),\]
where \(R_{emp}(\pi)=\mathcal{R}_{train}(\pi)\) and \(R_{pop}(\pi)=\mathcal{R}_{test}(\pi)\). Fig 10 shows the generalization ability of the maxEnt exploration policy compared to PPO, obtained from training on \(200\) training levels.
The figure demonstrates that the maxEnt exploration policies transfer better in zero-shot generalization (achieve a smaller generalization gap) across all ProcGen games apart from Ninja. This holds true
Figure 10: The normalized generalization gap [%] of maxEnt and PPO for all ProcGen games, trained on \(200\) training levels and averaged over \(4\) seeds (lower is better).
Figure 9: PPO performance on the hidden maze task, indicating severe overfitting. Train (Orange) and Test (blue) performance are displayed for \(10\) seeds, alongside their means (highlighted in bold).
even in environments where the ExpGen algorithm is on par but does not exceed the baseline, pointing to the importance of exploratory behavior in some environments, but not in others.
### Computing the maxEnt Oracle
The maxEnt oracle prescribes the maximum intrinsic return achievable per environment instance. To simplify the computation of the maxEnt oracle score, in this section, we evaluate the maxEnt score using the \(L_{0}\) instead of the \(L_{2}\) norm and use the first nearest neighbor (\(k=1\)). In Maze, we implement the oracle using the right-hand rule (Hendrawan, 2020) for maze exploration: If there is no wall and no goal on the right, the agent turns right, otherwise, it continues straight. If there is a wall (or goal) ahead, it goes left (see Figure 3). In Jumper, we first down-sample (by average pooling) from \(64\times 64\) to \(21\times 21\) pixels to form cells of \(3\times 3\), and the oracle score is the number of background cells (cells that the agent can visit). Here we assume that the agent has a size of \(3\times 3\) pixels. In Miner, the "easy" environment is partitioned into \(10\times 10\) cells. The maximum entropy is the number of cells that contain "dirt" (i.e., cells that the agent can excavate in order to make traversable).
Figure 11: **Generalization ability of maximum entropy and extrinsic reward policy: (a.top row) Score of maximum entropy policy, normalized by the oracle score. (a.bottom row) Success Rate of maximum entropy policy. (b) Score of extrinsic reward policy. Training for maximum entropy exhibits a small generalization gap in Maze, Jumper and Miner. Average and standard deviation are obtained using \(4\) seeds.**
In Fig. 10(a) the top row describes the intrinsic return of the maxEnt policy, normalized by the oracle's return (described above). We draw a comparison between agents with memory (GRU) and without. For the Maze environment (top-right), the agent achieves over \(90\%\) of the oracle's performance when employing a memory unit (GRU). For the Jumper and Miner environments, the agents approach \(60\%\) and \(80\%\) of the oracle's score, respectively.
Next, the bottom row of Fig. 10(a) details the success rate as the ratio of instances in which the agent successfully reached the oracle's score (meaning that the entire traversable region has been explored). For the Maze environment, the agent achieves a success-rate of \(70\%\), whereas for the Jumper and Miner the agent fails to meet the oracle's score. This indicates that the oracle, as per our implementation for Jumper and Miner, captures states that are in-effect unreachable, and thus the agent is unable to match the oracle's score.
### The Importance of a Memory Unit for maxEnt
We evaluate the importance of memory for maxEnt: In Fig. 10(a) we train a maxEnt agent with and without memory (GRU). The results indicate that for Maze, the GRU is vital in order to maximize performance for all various sizes of training set. In Jumper, we see that the GRU provides an advantage when training on \(200\) levels, however the benefit diminishes when additional training levels are available. For Miner, there appears to be no significant advantage for incorporating memory with \(200\) training levels. However, the benefit of a GRU becomes noticeable once more training levels are available (beyond \(500\)).
When looking at the extrinsic reward (Fig. 10(b)), we see an interesting effect after introducing a GRU. Maze and Jumper suffer a degradation in performance with \(200\) levels (indicating overfit), while Miner appears to be unaffected.
In summary, we empirically show that memory is beneficial for the maxEnt policy on Maze, Jumper and Miner. Interestingly, we demonstrate that the introduction of memory for training to maximize extrinsic reward causes the agent to overfit in Maze and Jumper with \(200\) training levels.
## Appendix C Experimental Setup
This section describes the constants and hyperparameters used as part of the evaluation of our algorithm.
### Normalization Constants
In Figures 2 and 12 we compare the performance of the various algorithms. The results are normalized in accordance with (Cobbe et al., 2020), which defines the normalized return as
\[R_{norm}=(R-R_{min})/(R_{max}-R_{min}).\]
The \(R_{min}\) and \(R_{max}\) of each environment are detailed in Table 3. Note that the test performance of PPO on Heist (see Fig. 2) is lower than \(R_{min}\) (the trivial performance), indicating severe overfitting.
Table 7 shows an evaluation of the maxEnt gap and ExpGen using different neighbor size \(k_{\mathrm{NN}}\) on the Maze environment. We found that the best performance is obtained for \(k_{\mathrm{NN}}\in\{1,2,3,4,5\}\). Thus, we choose \(k_{\mathrm{NN}}=2\), the second nearest neighbor, for all games.
## Appendix D Results for all ProcGen Games
Figure 12 details the normalized test performance for all ProcGen games. Normalization is performed according to (Cobbe et al., 2020) as described in Appendix C.1. The figure demonstrates
\begin{table}
\begin{tabular}{||l||c c c||} \hline Game & \(k\) \\ \hline Maze & 6 \\ Jumper & \(4\) \\ Miner & \(2\) \\ Heist & \(8\) \\ BigFish & \(8\) \\ StarPilot & \(1\) \\ FruitBot & \(1\) \\ BossFight & \(1\) \\ Plunder & \(2\) \\ CaveFlyer & \(2\) \\ CoinRun & \(1\) \\ Chaser & \(2\) \\ Climber & \(2\) \\ Dodgeball & \(2\) \\ Leaper & \(1\) \\ Ninja & \(2\) \\ \hline \end{tabular}
\end{table}
Table 4: Consensus size \(k\) as hyperparameter for each game.
\begin{table}
\begin{tabular}{||c c c c c||} \hline & \multicolumn{2}{c}{Hard} & \multicolumn{2}{c||}{Easy} \\ \hline Environment & \(R_{min}\) & \(R_{max}\) & \(R_{min}\) & \(R_{max}\) \\ \hline \hline CoinRun & 5 & 10 & 5 & 10 \\ StarPilot & 1.5 & 35 & 2.5 & 64 \\ CaveFlyer & 2 & 13.4 & 3.5 & 12 \\ Dodgeball & 1.5 & 19 & 1.5 & 19 \\ FruitBot & -.5 & 27.2 & -1.5 & 32.4 \\ Chaser &.5 & 14.2 &.5 & 13 \\ Miner & 1.5 & 20 & 1.5 & 13 \\ Jumper & 1 & 10 & 3 & 10 \\ Leaper & 1.5 & 10 & 3 & 10 \\ Maze & 4 & 10 & 5 & 10 \\ BigFish & 0 & 40 & 1 & 40 \\ Heist & 2 & 10 & 3.5 & 10 \\ Climber & 1 & 12.6 & 2 & 12.6 \\ Plunder & 3 & 30 & 4.5 & 30 \\ Ninja & 2 & 10 & 3.5 & 10 \\ BossFight &.5 & 13 &.5 & 13 \\ \hline \end{tabular}
\end{table}
Table 3: Normalization Constants.
\begin{table}
\begin{tabular}{|l||c|c|c|c|} \hline Ensemble size & 4 & 6 & 8 & 10 \\ \hline ExpGen & \(8.02\pm 0.06\) & \(8.15\pm 0.19\) & \(8.00\pm 0.12\) & \(\mathbf{8.22\pm 0.11}\) \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study of ensemble size and its effect on the test score. Each network in the ensemble is trained on \(200\) instances of Maze. The results show improved performance for large ensemble size. The mean and standard deviation are computed using \(10\) runs with different seeds.
that ExpGen establishes state-of-the-art results on several challenging games and achieves on-par performance with the leading approach on the remaining games.
## Appendix E Results after convergence
Tables 8 and 9 detail the train and test performance of ExpGen, LEEP and PPO, when trained for \(50M\) environment steps. Table 9 shows that ExpGen surpasses LEEP and PPO in most games.
\begin{table}
\begin{tabular}{|l||c|} \hline Parameter & Value \\ \hline \(\gamma\) &.999 \\ \(\lambda\) &.95 \\ \# timesteps per rollout & 512 \\ Epochs per rollout & 3 \\ \# minibatches per epoch & 8 \\ Entropy bonus (\(k_{H}\)) &.01 \\ PPO clip range &.2 \\ Reward Normalization? & Yes \\ Learning rate & 5e-4 \\ \# workers & 1 \\ \# environments per worker & 32 \\ Total timesteps & 25M \\ GRU? & Only for maxEnt \\ Frame Stack? & No \\ \hline \end{tabular}
\end{table}
Table 6: PPO Hyperparameters.
\begin{table}
\begin{tabular}{|l|l l l l|} \hline Neighbor & \multicolumn{3}{c|}{Maze} & Environment & \\ Size \(k_{\rm NN}\) & maxEnt (Train) & maxEnt (Test) & maxEnt Gap [\%] & ExpGen (Test) \\ \hline
1 & \(17.5\pm 0.5\) & \(15.3\pm 1.4\) & **12.7\%** & \(7.9\pm 0.2\) \\
2 & \(33.9\pm 0.2\) & \(31.3\pm 1.5\) & **7.7\%** & \(8.3\pm 0.2\) \\
3 & \(50.2\pm 2.0\) & \(42.5\pm 2.8\) & **15.3\%** & \(8.2\pm 0.2\) \\
4 & \(62.9\pm 2.6\) & \(52.3\pm 3.0\) & **16.8\%** & \(8.2\pm 0.1\) \\
5 & \(70.6\pm 3.3\) & \(57.8\pm 4.6\) & **15.5\%** & \(8.2\pm 0.1\) \\
6 & \(80.0\pm 3.2\) & \(65.3\pm 1.4\) & \(18.4\%\) & \(8.1\pm 0.2\) \\
7 & \(87.2\pm 1.4\) & \(71.1\pm 1.1\) & \(18.5\%\) & \(8.0\pm 0.1\) \\
8 & \(93.8\pm 1.0\) & \(75.3\pm 2.3\) & \(19.7\%\) & \(7.8\pm 0.2\) \\
9 & \(96.6\pm 4.8\) & \(78.0\pm 2.0\) & \(19.3\%\) & \(8.1\pm 0.2\) \\
10 & \(102.2\pm 1.8\) & \(89.1\pm 0.6\) & **12.8\%** & \(7.9\pm 0.1\) \\ \hline \end{tabular}
\end{table}
Table 7: Hyperparameter search for neighborhood size \(k_{\rm NN}\) for the Maze environment. The table presents the maxEnt gap and the performance of ExpGen for varying values of \(k_{\rm NN}\). The mean and variance are computed for \(3\) seeds.
\begin{table}
\begin{tabular}{|l||l|l|l|} \hline Game & PPO & LEEP & ExpGen \\ \hline Maze & \(9.45\pm 0.21\) & \(9.84\pm 0.05\) & \(9.71\pm 0.11\) \\ Heist & \(7.97\pm 0.56\) & \(6.86\pm 0.68\) & \(9.62\pm 0.08\) \\ Jumper & \(8.62\pm 0.08\) & \(6.1\pm 0.4\) & \(8.00\pm 0.13\) \\ Miner & \(12.86\pm 0.06\) & \(1.9\pm 0.2\) & \(12.53\pm 0.12\) \\ BigFish & \(14.24\pm 3.36\) & \(8.82\pm 0.36\) & \(5.95\pm 0.32\) \\ Climber & \(8.76\pm 0.41\) & \(4.6\pm 0.4\) & \(8.89\pm 0.29\) \\ Dodgeball & \(8.88\pm 0.38\) & \(6.22\pm 0.55\) & \(8.73\pm 0.40\) \\ Plunder & \(9.64\pm 1.85\) & \(5.1\pm 0.1\) & \(8.06\pm 0.98\) \\ Ninja & \(9.10\pm 0.32\) & \(5.2\pm 0.3\) & \(8.90\pm 0.30\) \\ CaveFlyer & \(8.98\pm 0.59\) & \(5.4\pm 0.1\) & \(8.75\pm 0.59\) \\ \hline \end{tabular}
\end{table}
Table 8: **Train score** of ProcGen environments trained on 200 instances for 50M environment steps. We compare our algorithm to the baselines LEEP and PPO. The mean and standard deviation are computed over 8 runs with different seeds.
## Appendix F Ablation Study
In the following sections, we provide ablation studies of an ExpGen variant that combines random actions and an evaluation of \(L_{0}\) state similarity measure.
### Ensemble Combined with Random Actions
We compare the proposed approach to a variant of ExpGen denoted by Ensemble+random where we train an ensemble and at test time select a random action if the ensemble networks fail to reach a consensus. The results are shown in Table 10, indicating that selecting the maximum entropy policy upon ensemble disagreements yields superior results.
Figure 12: Normalized test Performance for PPO, IDAAC, and ExpGen+IDAAC, on all ProcGen games. ExpGen achieves state-of-the-art performance on test levels of Maze, Heist, and Jumper and on-par performance in the remaining games.
\begin{table}
\begin{tabular}{|l||l|l|l|} \hline Game & PPO & LEEP & ExpGen \\ \hline Maze & \(5.78\pm 0.39\) & \(6.78\pm 0.21\) & \(\textbf{8.33}\pm\textbf{0.14}\) \\ Heist & \(2.54\pm 0.45\) & \(4.42\pm 0.57\) & \(\textbf{6.91}\pm\textbf{0.24}\) \\ Jumper & \(5.78\pm 0.28\) & \(6.4\pm 0.4\) & \(\textbf{6.64}\pm\textbf{0.15}\) \\ Miner & \(8.76\pm 0.33\) & \(0.8\pm 0.1\) & \(\textbf{9.48}\pm\textbf{0.39}\) \\ BigFish & \(3.82\pm 1.98\) & \(5.5\pm 0.41\) & \(\textbf{5.99}\pm\textbf{0.64}\) \\ Climber & \(6.14\pm 0.50\) & \(2.6\pm 0.4\) & \(6.29\pm 0.54\) \\ Dodgeball & \(3.71\pm 0.55\) & \(\textbf{4.58}\pm\textbf{0.47}\) & \(3.84\pm 0.56\) \\ Plunder & \(\textbf{7.78}\pm\textbf{1.74}\) & \(4.2\pm 0.2\) & \(6.91\pm 1.00\) \\ Ninja & \(6.94\pm 0.30\) & \(4.9\pm 0.8\) & \(\textbf{6.75}\pm\textbf{6.75}\) \\ CaveFlyer & \(6.19\pm 0.66\) & \(2.6\pm 0.2\) & \(\textbf{6.36}\pm\textbf{0.49}\) \\ \hline \end{tabular}
\end{table}
Table 9: **Test score** of ProcGen environments trained on \(200\) instances for \(50M\) environment steps. We compare our algorithm to the baselines LEEP and PPO. The mean and standard deviation are computed over \(8\) runs with different seeds.
### Evaluation of Various Similarity Measures for maxEnt
Tables 11 and 12 present the results of our evaluation of ExpGen equipped with a maxEnt exploration policy that uses either the \(L_{0}\) or \(L_{2}\) norms. The experiment targets the Maze and Heist environments and uses the same train and test procedures as in the main paper (\(25M\) training steps, score mean and standard deviation are measured over \(10\) seeds).
The results demonstrate that both \(L_{2}\) and \(L_{0}\) allow ExpGen to surpass the PPO baseline for Maze and Heist environments, in which they perform similarly well at test time. This indicates that both are valid measures of state similarity for the maxEnt policy.
## Appendix G Sample Complexity
One may wonder whether the leading approaches would benefit from training on additional environment steps. A trained agent can still fail at test time either due to poor generalization performance (overfitting on a small number of training domains) or due to insufficient training steps of the policy (underfitting). In this work, we are interested in the former and design our experiments such that no method underfits. Figure 13 shows IDAAC training for \(100M\) steps on Maze and Jumper, illustrating that the best test performance is obtained at around \(25M\) steps, and training for longer does not contribute further (and can even degrade performance). Therefore, although ExpGen requires more environment steps (on account of training its ensemble of constituent reward policies), training for longer does not place our baseline (IDAAC) at any sort of a disadvantage.
\begin{table}
\begin{tabular}{|l||c|c|} \hline Algorithm & Train & Test \\ \hline ExpGen & \(\mathbf{9.6\pm 0.2}\) & \(\mathbf{8.2\pm 0.1}\) \\ Ensemble + random & \(9.3\pm 0.1\) & \(6.2\pm 0.2\) \\ LEEP & \(9.4\pm 0.3\) & \(6.6\pm 0.2\) \\ PPO + GRU & \(9.5\pm 0.2\) & \(5.4\pm 0.3\) \\ PPO & \(9.1\pm 0.2\) & \(5.6\pm 0.1\) \\ \hline \end{tabular}
\end{table}
Table 10: Ablation study of ExpGen on Maze. The table shows testing scores of networks trained on \(200\) maze instances. We present a comparison between the proposed approach and LEEP, PPO+GRU and PPO, as well as an alternative ensemble policy with random actions upon ensemble disagreement. The mean and standard deviation are computed using \(10\) runs with different seeds.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Game & ExpGen \(L_{0}\) (Train) & ExpGen \(L_{2}\) (Train) & PPO (Train) \\ \hline Heist & \(\mathbf{9.4\pm 0.3}\) & \(\mathbf{9.4\pm 0.1}\) & \(6.1\pm 0.8\) \\ Maze & \(\mathbf{9.6\pm 0.2}\) & \(\mathbf{9.6\pm 0.1}\) & \(9.1\pm 0.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 11: Train scores of ExpGen using maxEnt policy with either \(L_{0}\) or \(L_{2}\) compared with PPO. The mean and standard deviation are measured over \(10\) seeds.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Game & ExpGen \(L_{0}\) (Test) & ExpGen \(L_{2}\) (Test) & PPO (Test) \\ \hline Heist & \(\mathbf{7.4\pm 0.1}\) & \(\mathbf{7.4\pm 0.2}\) & \(2.4\pm 0.5\) \\ Maze & \(\mathbf{8.2\pm 0.1}\) & \(\mathbf{8.3\pm 0.2}\) & \(5.6\pm 0.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 12: Test scores of ExpGen using maxEnt policy with either \(L_{0}\) or \(L_{2}\) compared with PPO. The mean and standard deviation are measured over \(10\) seeds.
Figure 13: The mean and median of the accumulated reward for IDAAC trained for \(100M\) steps, averaged over \(10\) runs with different seeds. The curves show that test-reward stagnates and even decreases beyond \(25M\) steps. | ## Review
### Summary
This paper presents a novel algorithm called ExpGen that addresses zero-shot generalization in reinforcement learning (RL) by leveraging maximum entropy (MaxEnt) exploration policies. The authors argue that these policies exhibit a smaller generalization gap than traditional reward-seeking policies. ExpGen integrates an exploratory policy that utilizes MaxEnt rewards and an ensemble of exploitation policies trained on extrinsic rewards, executing the action of the ensemble or the exploratory policy based on their agreement at test time. The algorithm is evaluated on the ProcGen benchmark, demonstrating significant performance improvements in some games while underperforming in others. Subsequent revisions and rebuttals during the review process addressed initial concerns and strengthened the paper's arguments.
### Strengths
- The paper addresses an important and understudied problem of zero-shot generalization in RL.
- The insights regarding MaxEnt rewards and their favorable generalization properties are novel.
- The paper is well-written with a clear introduction and contextualization of related work.
- Experiments are well-organized and present clear results, including an effective limitations section.
- The proposed algorithm shows promise and could contribute significantly to the field.
### Weaknesses
- The experimental results do not convincingly establish the superiority of ExpGen across all tasks.
- There are concerns about the computational inefficiency due to training and inference involving multiple policies.
- Justification for using the L0 norm in the k-NN distance metric is lacking, which raises questions about the soundness of the method.
- Some figures are difficult to read and need clearer captions.
- The reliance on specific experimental settings without broader validation may limit the algorithm's applicability.
### Questions
- How do you select the k-NN? Is it also done using the L0 norm?
- What is your hypothesis why the L0 norm works better?
- How are the scores of the main experiments in tables computed?
- Is there an ablation study for combining ExpGen with IDAAC?
- Do all methods undergo proper hyperparameter tuning by the authors?
### Soundness
**Score:** 3
**Description:** 3 = good; while the foundational insights are strong, the methodological concerns regarding the L0 norm and the evaluation protocol need further clarification.
### Presentation
**Score:** 2
**Description:** 2 = fair; the paper would benefit from improved clarity in figures and overall presentation, as some sections are difficult to interpret.
### Contribution
**Score:** 3
**Description:** 3 = good; the proposed algorithm shows potential contributions to RL, but its effectiveness is not fully established across diverse environments.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid and impactful, with some areas requiring refinement before final acceptance.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel approach to zero-shot generalization in RL with significant potential contributions. Despite some methodological weaknesses and presentation issues, the authors have addressed key concerns effectively during the rebuttal. The findings are relevant and could lead to further advancements in RL, warranting acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Fairness Aware Counterfactuals for Subgroups
Loukas Kavouras
IMSI / Athena RC
[email protected] &Konstantinos Tsopelas
IMSI / Athena RC
[email protected] &Giorgos Giannopoulos
IMSI / Athena RC
[email protected] &Dimitris Sacharidis
ULB
[email protected] &Eleni Psaroudaki
NTUA & IMSI / Athena RC
[email protected] &Nikolaos Theologitis
IMSI / Athena RC
[email protected] &Dimitrios Rontogiannis
IMSI / Athena RC
[email protected] &Dimitris Fotakis
NTUA & Archimedes / Athena RC
[email protected] &Ioannis Emiris
NKUA & IMSI / Athena RC
[email protected]
###### Abstract
In this work, we present Fairness Aware Counterfactuals for Subgroups (FACTS), a framework for auditing subgroup fairness through counterfactual explanations. We start with revisiting (and generalizing) existing notions and introducing new, more refined notions of subgroup fairness. We aim to (a) formulate different aspects of the difficulty of individuals in certain subgroups to achieve recourse, i.e., receive the desired outcome, either at the micro level, considering members of the subgroup individually, or at the macro level, considering the subgroup as a whole, and (b) introduce notions of subgroup fairness that are robust, if not totally oblivious, to the cost of achieving recourse. We accompany these notions with an efficient, model-agnostic, highly parameterizable, and explainable framework for evaluating subgroup fairness. We demonstrate the advantages, the wide applicability, and the efficiency of our approach through a thorough experimental evaluation of different benchmark datasets.
## 1 Introduction
Machine learning is now an integral part of decision-making processes across various domains, e.g., medical applications [22], employment [5; 7], recommender systems [29], education [26], credit assessment [4]. Its decisions affect our everyday life directly, and, if unjust or discriminative, could potentially harm our society [28]. Multiple examples of discrimination or bias towards specific population subgroups in such applications [33] create the need not only for explainable and interpretable machine learning that is more trustworthy [6], but also for auditing models in order to detect hidden bias for subgroups [30].
Bias towards protected subgroups is most often detected by various notions of _fairness of prediction_, e.g., statistical parity, where all subgroups defined by a protected attribute should have the same probability of being assigned the positive (favorable) predicted class. These definitions capture the _explicit_ bias reflected in the model's predictions. Nevertheless, an _implicit_ form of bias is thedifficulty for, or the _burden_[32, 20] of, an individual (or a group thereof) to achieve _recourse_, i.e., perform the necessary _actions_ to change their features so as to obtain the favorable outcome [10, 35]. Recourse provides explainability (i.e., a counterfactual explanation [38]) and actionability to an affected individual, and is a legal necessity in various domains, e.g., the Equal Credit Opportunity Act mandates that an individual can demand to learn the reasons for a loan denial. _Fairness of recourse_ captures the notion that the protected subgroups should bear equal burden [10, 14, 37, 20].
To illustrate these notions, consider a company that supports its promotion decisions with an AI system that classifies employees as good candidates for promotion, the favorable positive class, based on various performance metrics, including their cycle time efficiency (CTE) and the annual contract value (ACV) for the projects they lead. Figure 0(a) draws ten employees from the negative predicted class as points in the CTE-ACV plane and also depicts the decision boundary of the classifier. Race is the protected attribute, and there are two protected subgroups with five employees each, depicted as circles and triangles. For each employee, the arrow depicts the best _action_ to achieve recourse, i.e., to cross the decision boundary, and the number indicates the _cost_ of the action, here simply computed as the distance to the boundary [10]. For example, \(x_{1}\) may increase their chances for promotion mostly by acquiring more high-valued projects, while \(x_{2}\) mostly by increasing their efficiency. Burden is defined as the mean cost for a protected subgroup [32]. For the protected race 0, the burden is 2, while it is 2.2 for race 1, indicating thus unfairness of recourse against race 1. In contrast, assuming there is an equal number of employees of each race in the company, the classifier satisfies fairness of prediction in terms of statistical parity (equal positive rate in the subgroups).
While fairness of recourse is an important concept that captures a distinct notion of algorithmic bias, as also explained in [14, 37], we argue that it is much more nuanced than the mean cost of recourse (aka burden) considered in all prior work [10, 32, 37, 20], and raise three issues.
First, the mean cost, which comprises a _micro viewpoint_ of the problem, does not provide the complete picture of how the cost of recourse varies among individuals and may lead to contrasting conclusions. Consider again Figure 0(a), and observe that for race 1 all but one employee can achieve recourse with cost at most 2, while an outlier achieves recourse with cost 6. It is this outlier that raises the mean cost for race 1 above that of race 0. For the two race subgroups, Figure 0(b) shows the cumulative distribution of cost, termed the _effectiveness-cost distribution_ (ecd) in Section 2. These distributions allow the fairness auditor to inspect the _tradeoff_ between cost and recourse, and define the appropriate fairness of recourse notion. For example, they may consider that actions with a cost of more than 2 are unrealistic (e.g., because they cannot be realized within some timeframe), and thus investigate how many employees can achieve recourse under this constraint; we refer to this as _equal effectiveness within budget_ in Section 2.3. Under this notion, there is unfairness against race 0, as only 60% of race 0 employees (compared to 80% of race 1) can realistically achieve recourse.
There are several options to go beyond the mean cost. One is to consider fairness of recourse at the individual level, and compare an individual with their counterfactual counterpart had their protected attribute changed value [37]. However, this approach is impractical as, similar to other causal-based definitions of fairness, e.g., [21], it requires strong assumptions about the causal structure in the
Figure 1: (a) An example of affected individuals, the decision boundary, actions, and a subpopulation (in the shaded region), depicted in the feature space; (b) Cumulative distribution of cost of recourse for the individuals in (a); (c) Comparison of two actions to achieve recourse for two individuals.
domain [15]. In contrast, we argue that it's preferable to investigate fairness in subpopulations and inspect the trade-off between cost and recourse.
Second, there are many cases where the aforementioned micro-level aggregation of individuals' costs is not meaningful in auditing real-world systems. To account for this, we introduce a _macro viewpoint_ where a group of individuals is considered as a whole, and an action is applied to and assessed _collectively_ for all individuals in the group. An action represents an external horizontal intervention, such as an affirmative action in society, or focused measures in an organization that would change the attributes of some subpopulation (e.g., decrease tax or loan interest rates, increase productivity skills). In the macro viewpoint, the cost of recourse does not burden the individuals, but the external third party, e.g., the society or an organization. Moreover, the macro viewpoint offers a more intuitive way to _audit_ a system for fairness of recourse, as it seeks to uncover systemic biases that apply to a large number of individuals.
To illustrate the macro viewpoint, consider the group within the shaded region in Figure 0(a). In the micro viewpoint, each employee seeks recourse individually, and both race subgroups have the same distribution of costs. However, we can observe that race 0 employees, like \(x_{1}\) achieve recourse by actions in the ACV direction, while race 1 employees, like \(x_{2}\) in the orthogonal CTE direction. This becomes apparent when we take the macro viewpoint, and investigate the effect of action \(a_{1}\), depicted on the border of the shaded region, discovering that it is disproportionally effective on the race subgroups (leads to recourse for two-thirds of one subgroup but for none in the other). In this example, action \(a_{1}\) might represent the effect of a training program to enhance productivity skills, and the macro viewpoint finds that it would perpetuate the existing burden of race 0 employees.
Third, existing notions of fairness of recourse have an important practical limitation: they require a cost function that captures one's ability to modify one's attributes, whose definition may involve a learning process [31], or an adaptation of off-the-shelf functions by practitioners [35]. Even the idea of which attributes are actionable, in the sense that one can change (e.g., one cannot get younger), or "ethical" to suggest as actionable (e.g., change of marital status from married to divorced) hides many complications [36].
Conclusions drawn about fairness of recourse crucially depend on the cost definition. Consider individuals \(x_{1}\), \(x_{2}\), and actions \(a_{1}\), \(a_{2}\), shown in Figure 0(c). Observe that is hard to say which action is cheaper, as one needs to compare changes _within_ and _across_ very dissimilar attributes. Suppose the cost function indicates action \(a_{1}\) is cheaper; both individuals achieve recourse with the same cost. However, if action \(a_{2}\) is cheaper, only \(x_{2}\) achieves recourse. Is the classifier fair?
To address this limitation, we propose definitions that are _oblivious to the cost function_. The idea is to compare the effectiveness of actions to the protected subgroups, rather than the cost of recourse for the subgroups. One way to define a cost-oblivious notion of fairness of recourse is the _equal choice for recourse_ (see Section 2.3), which we illustrate using the example in Figure 0(c). According to it, the classifier is unfair against \(x_{1}\), as \(x_{1}\) has only one option, while \(x_{2}\) has two options to achieve recourse among the set of actions \(\{a_{1},a_{2}\}\).
ContributionOur aim is to showcase that fairness of recourse is an important and distinct notion of algorithmic fairness with several facets not previously explored. We make a series of conceptual and technical contributions.
Conceptually, we distinguish between two different viewpoints. The _micro viewpoint_ follows literature on recourse fairness [10, 14, 37, 20] in that each individual chooses the action that is cheaper for them, but revisits existing notions. It considers the trade-off between cost and recourse and defines several novel notions that capture different aspects of it. The _macro viewpoint_ considers how an action collectively affects a group of individuals, and quantifies its effectiveness. It allows the formulation of _cost-oblivious_ notions of recourse fairness. It also leads to an alternative trade-off between cost and recourse that may reveal systemic forms of bias.
Technically, we propose an efficient, interpretable, model-agnostic, highly parametrizable framework termed FACTS (Fairness-Aware Counterfactuals for Subgroups) to audit for fairness of recourse. FACTS is _efficient_ in computing the effectiveness-cost distribution, which captures the trade-off between cost and recourse, in both the micro or macro viewpoint. The key idea is that instead of determining the best action for each individual independently (i.e., finding their nearest counterfactual explanation [38]), it enumerates the space of actions and determines how many and which individualsachieve recourse through each action. Furthermore, FACTS employs a systematic way to explore the feature space and discover any subspace such that recourse bias exists among the protected subgroups within. FACTS ranks the subspaces in decreasing order of recourse bias it detects, and for each provides an _interpretable_ summary of its findings.
Related WorkWe distinguish between _fairness of predictions_ and _fairness of recourse_. The former aims to capture and quantify unfairness by comparing directly the model's predictions [27; 2] at: the individual level, e.g., individual fairness [9], counterfactual/causal-based fairness [21; 19; 26]; and the group level, e.g., demographic parity [39], equal odd/opportunity [12].
Fairness of recourse is a more recent notion, related to _counterfactual explanations_[38], which explains a prediction for an individual (the factual) by presenting the "best" counterfactual that would result in the opposite prediction, offering thus _recourse_ to the individual. Best, typically means the _nearest counterfactual_ in terms of a distance metric in the feature space. Another perspective, which we adopt here, is to consider the _action_ that transforms a factual into a counterfactual, and specify a _cost function_ to quantify the effort required by an individual to perform an action. In the simplest case, the cost function can be the distance between factual and counterfactual, but it can also encode the _feasibility_ of an action (e.g., it is impossible to decrease age) and the _plausibility_ of a counterfactual (e.g., it is out-of-distribution). It is also possible to view actions as interventions that act on a structural causal model capturing cause-effect relationships among attributes [15]. Hereafter, we adopt the most general definition, where a cost function is available, and assume that the best counterfactual explanation is the one that comes from the minimum cost action. Counterfactual explanations have been suggested as a mechanism to detect possible bias against protected subgroups, e.g., when they require changes in protected attributes [13].
Fairness of recourse, first introduced in [35] and formalized in [10], is defined at the group level as the disparity of the mean cost to achieve recourse (called burden in subsequent works) among the protected subgroups. Fairness of recourse for an individual is when they require the same cost to achieve recourse in the actual world and in an imaginary world where they would have a different value in the protected attribute [37]. This definition however only applies when a structural causal model of the world is available. Our work expands on these ideas and proposes alternate definitions that capture a macro and a micro viewpoint of fairness of recourse.
There is a line of work on auditing models for fairness of predictions at the subpopulation level [17; 18]. For example, [34] identifies subpopulations that show dependence between a performance measure and the protected attribute. [3] determines whether people are harmed due to their membership in a specific group by examining a ranking of features that are most associated with the model's behavior. There is no equivalent work for fairness of recourse, although the need to consider the subpopulation is recognized in [16] due to uncertainty in assumptions or to intentionally study fairness. Our work is the first that audits for fairness of recourse at the subpopulation level.
A final related line of work is global explainability. For example, recourse summaries [31; 24; 25] summarizes individual counterfactual explanations _globally_, and as the authors in [31; 25] suggest can be used to manually audit for unfairness in subgroups of interest. [23] aims to explain how a model behaves in subspaces characterized by certain features of interest. [8] uses counterfactuals to unveil whether a black-box model, that already complies with the regulations that demand the omission of sensitive attributes, is still biased or not, by trying to find a relation between proxy features and bias. Our work is related to these methods in that we also compute counterfactual explanations for all instances, albeit in a more efficient manner, and with the goal to audit fairness of recourse on subpopulation level.
## 2 Fairness of Recourse for Subgroups
### Preliminaries
We consider a **feature space**\(X=X_{1}\times\cdots\times X_{n}\), where \(X_{n}\) denotes the **protected feature**, which, for ease of presentation, takes two protected values \(\{0,1\}\). For an instance \(x\in X\), we use the notation \(x.X_{i}\) to refer to its value in feature \(X_{i}\).
We consider a binary **classifier**\(h:X\rightarrow\{-1,1\}\) where the positive outcome is the favorable one. For a given \(h\), we are concerned with a dataset \(D\) of **adversely affected individuals**, i.e., those who receive the unfavorable outcome. We prefer the term instance to refer to any point in the feature space \(X\), and the term individual to refer to an instance from the dataset \(D\).
We define an **action**\(a\) as a set of changes to feature values, e.g., \(a=\{\textit{country}\rightarrow\textit{US},\textit{education-num}\to 12\}\). We denote as \(A\) the set of possible actions. An action \(a\) when applied to an individual (a factual instance) \(x\) results in a **counterfactual** instance \(x^{\prime}=a(x)\). If the individual \(x\) was adversely affected (\(h(x)=-1\)) and the action results in a counterfactual that receives the desired outcome (\(h(a(x))=1\)), we say that action \(a\) offers recourse to the individual \(x\) and is thus **effective**. In line with the literature, we also refer to an effective action as a **counterfactual explanation** for individual \(x\)[38].
An action \(a\) incurs a **cost** to an individual \(x\), which we denote as \(\textsf{cost}(a,x)\). The cost function captures both how _feasible_ the action \(a\) is for the individual \(x\), and how _plausible_ the counterfactual \(a(x)\) is [14].
Given a set of actions \(A\), we define the **recourse cost**\(\textsf{rc}(A,x)\) of an individual \(x\) as the minimum cost among effective actions if there is one, or otherwise some maximum cost represented as \(c_{\infty}\):
\[\textsf{rc}(A,x)=\begin{cases}\min\{\textsf{cost}(a,x)|a\in A:h(a(x))=1\},& \text{if }\exists a\in A:h(a(x))=1;\\ c_{\infty},&\text{otherwise}.\end{cases}\]
An effective action of minimum cost is also called a **nearest counterfactual explanation**[14].
We define a **subspace**\(X_{p}\subseteq X\) using a **predicate**\(p\), which is a conjunction of feature-level predicates of the form "_feature-operator-value_", e.g., the predicate \(p=(\textit{country}=\textit{US})\land(\textit{education-num}\geq 9)\) defines instances from the US that have more than 9 years of education.
Given a predicate \(p\), we define the subpopulation **group**\(G_{p}\subseteq D\) as the set of affected individuals that satisfy \(p\), i.e., \(G_{p}=\{x\in D|p(x)\}\). We further distinguish between the **protected subgroups**\(G_{p,1}=\{x\in D|p(x)\land x.X_{n}=1\}\) and \(G_{p,0}=\{x\in D|p(x)\land x.X_{n}=0\}\). When the predicate \(p\) is understood, we may omit it in the designation of a group to simplify notation.
### Effectiveness-Cost Trade-Off
For a specific action \(a\), we naturally define its **effectiveness** (eff) for a group \(G\), as the proportion of individuals from \(G\) that achieve recourse through \(a\):
\[\textsf{eff}(a,G)=\frac{1}{|G|}|\{x\in G|h(a(x))=1\}|.\]
Note that effectiveness is termed correctness in [31] and coverage in [25]. We want to examine how recourse is achieved for the group \(G\) through a set of possible actions \(A\). We define the **aggregate effectiveness** (aeff) of \(A\) for \(G\) in two distinct ways.
In the _micro viewpoint_, the individuals in the group are considered independently, and each may choose the action that benefits itself the most. Concretely, we define the **micro-effectiveness** of set of actions \(A\) for group \(G\) as the proportion of individuals in \(G\) that can achieve recourse through _some_ action in \(A\), i.e.,:
\[\textsf{aeff}_{\mu}(A,G)=\frac{1}{|G|}|\{x\in G|\exists a\in A,\textsf{eff}(a, x)=1\}|.\]
In the _macro viewpoint_, the group is considered as a whole, and an action is applied collectively to all individuals in the group. Concretely, we define the **macro-effectiveness** of set of actions \(A\) for group \(G\) as the largest proportion of individuals in \(G\) that can achieve recourse through _the same_ action in \(A\), i.e.,:
\[\textsf{aeff}_{\mathsf{M}}(A,G)=\max_{a\in A}\frac{1}{|G|}|\{x\in G|\,\textsf {eff}(a,x)=1\}|.\]
For a group \(G\), actions \(A\), and a cost budget \(c\), we define the **in-budget actions** as the set of actions that cost at most \(c\) for any individual in \(G\):
\[A_{c}=\{a\in A|\forall x\in G,\textsf{cost}(a,x)\leq c\}.\]
We define the **effectiveness-cost distribution** (ecd) as the function that for a cost budget \(c\) returns the aggregate effectiveness possible with in-budget actions:\[\mathsf{ecd}(c;A,G)=\mathsf{aeff}(A_{c},G).\]
We use \(\mathsf{ecd}_{\mu}\), \(\mathsf{ecd}_{\mathsf{M}}\) to refer to the micro, macro viewpoints of aggregate effectiveness. A similar concept, termed the coverage-cost profile, appears in [25].
The value \(\mathsf{ecd}(c;A,G)\) is the proportion of individuals in \(G\) that can achieve recourse through actions \(A\) with cost at most \(c\). Therefore, the \(\mathsf{ecd}\) function has an intuitive probabilistic interpretation. Consider the subspace \(X_{p}\) determined by predicate \(p\), and define the random variable \(C\) as the cost required by an instance \(x\in X_{p}\) to achieve recourse. The function \(\mathsf{ecd}(c;A,G_{p})\) is the empirical cumulative distribution function of \(C\) using sample \(G_{p}\).
The **inverse effectiveness-cost distribution** function \(\mathsf{ecd}^{-1}(\phi;A,G)\) takes as input an effectiveness level \(\phi\in[0,1]\) and returns the minimum cost required so that \(\phi|G|\) individuals achieve recourse.
### Definitions of Subgroup Recourse Fairness
We define recourse fairness of classifier \(h\) for a group \(G\) by comparing the \(\mathsf{ecd}\) functions of the protected subgroups \(G_{0}\), \(G_{1}\) in different ways.
The first two definitions are _cost-oblivious_, and apply whenever we can ignore the cost function. Specifically, given a set of actions \(A\) and a group \(G\), we assume that all actions in \(A\) are considered equivalent, and that the cost of any action is the same for all individuals in the group; i.e., \(\mathsf{cost}(a,x)=\mathsf{cost}(a^{\prime},x^{\prime})\), \(\forall a\neq a^{\prime}\in A,x\neq x^{\prime}\in G\). The definitions simply compare the aggregate effectiveness of a set of actions on the protected subgroups.
**Equal Effectiveness** This definition has a micro and a macro interpretation, and says that the classifier is fair if the same proportion of individuals in the protected subgroups can achieve recourse: \(\mathsf{aeff}(A,G_{0})=\mathsf{aeff}(A,G_{1})\).
**Equal Choice for Recourse** This definition has only a macro interpretation and claims that the classifier is fair if the protected subgroups can choose among the same number of sufficiently effective actions to achieve recourse, where sufficiently effective means the actions should work for at least \(100\phi\%\) (for \(\phi\in[0,1]\)) of the subgroup:
\[|\{a\in A|\,\mathsf{eff}(a,G_{0})\geq\phi\}|=|\{a\in A|\,\mathsf{eff}(a,G_{1}) \geq\phi\}|.\]
The next three definitions assume the \(\mathsf{cost}\) function is specified, and have both a micro and a macro interpretation.
**Equal Effectiveness within Budget** The classifier is fair if the same proportion of individuals in the protected subgroups can achieve recourse with a cost at most \(c\):
\[\mathsf{ecd}(c;A,G_{0})=\mathsf{ecd}(c;A,G_{1}).\]
**Equal Cost of Effectiveness** The classifier is fair if the minimum cost to achieve aggregate effectiveness of \(\phi\in[0,1]\) in the protected subgroups is equal:
\[\mathsf{ecd}^{-1}(\phi;A,G_{0})=\mathsf{ecd}^{-1}(\phi;A,G_{1}).\]
**Fair Effectiveness-Cost Trade-Off** The classifier is fair if the protected subgroups have the same effectiveness-cost distribution, or equivalently for each cost budget \(c\), their aggregate effectiveness is equal:
\[\max_{c}|\,\mathsf{ecd}(c;A,G_{0})-\mathsf{ecd}(c;A,G_{1})|=0.\]
The left-hand side represents the two-sample Kolmogorov-Smirnov statistic for the empirical cumulative distributions (\(\mathsf{ecd}\)) of the protected subgroups. We say that the classifier is fair with confidence \(\alpha\) if this statistic is less than \(\sqrt{-\ln(\alpha/2)\frac{|G_{\mathsf{ecd}}|+|G_{\mathsf{e},1}|}{2|G_{\mathsf{ e},0}||G_{\mathsf{e},1}|}}\).
The last definition takes a micro viewpoint and extends the notion of _burden_[32] from literature to the case where not all individuals may achieve recourse. The **mean recourse cost** of a group \(G\),
\[\bar{\mathsf{rc}}(A,G)=\frac{1}{|G|}\sum_{x\in G}\mathsf{rc}(A,x)\]considers individuals that cannot achieve recourse through \(A\) and have a recourse cost of \(c_{\infty}\). To exclude them, we denote as \(G^{*}\) the set of individuals of \(G\) that can achieve recourse through an action in \(A\), i.e., \(G^{*}=\{x\in G|\exists a\in A,h(a(x))=1\}\). Then the **conditional mean recourse cost** is the mean recourse cost among those that can achieve recourse:
\[\bar{\mathsf{rc}}^{*}(A,G)=\frac{1}{|G^{*}|}\sum_{x\in G^{*}}\mathsf{rc}(A,x).\]
If \(G=G^{*}\), the definitions coincide with burden.
**Equal (Conditional) Mean Recourse** The classifier is fair if the (conditional) mean recourse cost for the protected subgroups is the same:
\[\mathsf{rc}^{*}(A,G_{0};A)=\bar{\mathsf{rc}}^{*}(A,G_{1}).\]
Note that when the group \(G\) is the entire dataset of affected individuals, and all individuals can achieve recourse through \(A\), this fairness notion coincides with fairness of burden [35; 32].
## 3 Fairness-aware Counterfactuals for Subgroups
This section presents FACTS (Fairness-aware Counterfactuals for Subgroups), a framework that implements both the micro and the macro viewpoint, and all respective fairness definitions provided in Section 2.3 to support auditing of the "difficulty to achieve recourse" in subgroups. The output of FACTS comprises population groups that are assigned (a) an unfairness score that captures the disparity between protected subgroups according to different fairness definitions and allows us to rank groups, and (b) a user-intuitive, easily explainable counterfactual summary, which we term _Comparative Subgroup Counterfactuals_ (CSC).
Figure 2 presents an example result of FACTS derived from the adult dataset [1]. The "if clause" represents the subgroup \(G_{p}\), which contains all the affected individuals that satisfy the predicate \(p=(\textit{hours-per-week}=\textit{FullTime})\ \wedge\ (\textit{marital-status}=\textit{Married-civ-spouse})\ \wedge\ (\textit{occupation}=\textit{Adm-clerical})\). The information below the predicate refers to the protected subgroups \(G_{p,0}\) and \(G_{p,1}\) which are the female and male individuals of \(G_{p}\) respectively. With blue color, we highlight the percentage \(\mathsf{cov}(G_{p,i})=|G_{p,i}|/|D_{i}|\) which serves as an indicator of the size of the protected subgroup. The most important part of the representation is the actions applied that appear below each protected subgroup and are evaluated in terms of fairness metrics. In this example, the metric is the _Equal Cost of Effectiveness_ with effectiveness threshold \(\phi=0.7\). For \(G_{p,0}\) there is no action surpassing the threshold \(\phi=0.7\), therefore we display a message accordingly. On the contrary, the action \(a=\{\textit{hours-per-week}\rightarrow\textit{Overtime},\textit{occupation} \rightarrow\textit{Exec-managerial}\}\) has effectiveness \(0.72>\phi\) for \(G_{p,1}\), thus allowing a respective \(72\%\) of the male individuals of \(G_{p}\) to achieve recourse. The unfairness score is "inf", since no recourse is achieved for the female subgroup.
**Method overview** Deploying FACTS comprises three main steps: (a) _Subgroup and action space generation_ that creates the sets of groups \(\mathcal{G}\) and actions \(A\) to examine; (b) _Counterfactual summaries generation_ that applies appropriate actions to each group \(G\in\mathcal{G}\); and (c) _CSC construction and fairness ranking_ that applies the definitions of Section 2.3. Next, we describe these steps in detail.
**(a) Subgroup and action space generation** Subgroups are generated by executing the fp-growth [11] frequent itemset mining algorithm on \(D_{0}\) and on \(D_{1}\) resulting to the sets of subgroups \(\mathcal{G}_{0}\) and \(\mathcal{G}_{1}\) and then by computing the intersection \(\mathcal{G}=\mathcal{G}_{0}\bigcap\mathcal{G}_{1}\). In our setting, an item is a feature-level predicate of the form "feature-operator-value" and, consequently, an itemset is a predicate \(p\) defining a subgroup \(G_{p}\). This step guarantees that the evaluation in terms of fairness will be performed
Figure 2: CSC for a highly biased subgroup in terms of _Equal Cost of Effectiveness_ with \(\phi=0.7\).
between the common subgroups \(\mathcal{G}\) of the protected populations. The set of all actions \(A\) is generated by executing fp-growth on the unaffected population to increase the chance of more effective actions and to reduce the computational complexity. The above process is parameterizable w.r.t. the selection of the protected attribute(s) and the minimum frequency threshold for obtaining candidate subgroups. We emphasize that alternate mechanisms to generate the sets \(A\) and \(\mathcal{G}\) are possible.
**(b) Counterfactual summaries generation** For each subgroup \(G_{p}\in\mathcal{G}\), the following steps are performed: (i) Find valid actions, i.e., the actions in \(A\) that contain a subset of the features appearing in \(p\) and at least one different value in these features; (ii) For each valid action \(a\) compute \(\text{eff}(a,G_{p,0})\) and \(\text{eff}(a,G_{p,1})\). The aforementioned process extracts, for each subgroup \(G_{p}\), a subset \(V_{p}\) of the actions \(A\), with each action having exactly the same cost for all individuals of \(G_{p}\). Therefore, individuals of \(G_{p,0}\) and \(G_{p,1}\) are evaluated in terms of subgroup-level actions, with a fixed cost for all individuals of the subgroup, in contrast to methods that rely on aggregating the cost of individual counterfactuals. This approach provides a key advantage to our method in cases where the definition of the exact costs for actions is either difficult or ambiguous: a misguided or even completely erroneous attribution of a cost to an action will equally affect all individuals of the subgroup and only to the extent that the respective fairness definition allows it. In the setting of individual counterfactual cost aggregation, changes in the same sets of features could lead to highly varying action costs for different individuals within the same subgroup.
**(c) CSC construction and fairness ranking** FACTS evaluates all definitions of Section 2.3 on all subgroups, producing an unfairness score per definition, per subgroup. In particular, each definition of Section 2.3 quantifies a different aspect of the difficulty for a protected subgroup to achieve recourse. This quantification directly translates to difficulty scores for the protected subgroups \(G_{p,0}\) and \(G_{p,1}\) of each subgroup \(G_{p}\), which we compare accordingly (computing the absolute difference between them) to arrive at the unfairness score of each \(G_{p}\) based on the particular fairness metric.
The outcome of this process is the generation, for each fairness definition, of a ranked list of CSC representations, in decreasing order of their unfairness score. Apart from unfairness ranking, the CSC representations will allow users to intuitively understand unfairness by directly comparing differences in actions between the protected populations within a subgroup.
## 4 Experiments
This section presents the experimental evaluation of FACTS on the Adult dataset [1]; more information about the datasets, experimental setting, and additional results can be found in the appendix. The code is available at: [https://github.com/AutoFairAthenaRC/FACTS](https://github.com/AutoFairAthenaRC/FACTS). First, we briefly describe the experimental setting and then present and discuss Comparative Subgroup Counterfactuals for subgroups ranked as the most unfair according to various definitions of Section 2.3.
**Experimental Setting** The first step was the dataset cleanup (e.g., removing missing values and duplicate features, creating bins for continuous features like age). The resulting dataset was split randomly with a 70:30 split ratio and was used to train and test respectively a logistic regression model (consequently used as the black-box model to audit). For the generation of the subgroups and the set of actions we used fp-growth with a 1% support threshold on the test set. We also implemented various cost functions, depending on the type of feature, i.e., categorical, ordinal, and numerical. A detailed description of the experimental setting, the models used, and the processes of our framework can be found in the supplementary material.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**Subgroup 1**} & \multicolumn{4}{c}{**Subgroup 2**} \\ \cline{2-9} & \multicolumn{1}{c}{**rank**} & \multicolumn{1}{c}{**bias**} & \multicolumn{1}{c}{**augural**} & \multicolumn{1}{c}{**unfairness score**} & \multicolumn{1}{c}{**rank**} & \multicolumn{1}{c}{**bias**} & \multicolumn{1}{c}{**gainst**} & \multicolumn{1}{c}{**unfairness score**} \\ \hline Equal Effectiveness & 2950 & Male & 0.11 & 10063 & Female & 0.0004 & 275 & Female & 0.32 \\ Equal Choice for Recourse (\(o=0.3\)) & Fair & 0 & 12 & Female & 2 & Fair & - & 0 \\ Equal Choice for Recourse (\(o=0.7\)) & 6 & Male & 1 & 1 & Female & 6 & Fair & - & 0 \\ Equal Effectiveness within Budget (\(c=5\)) & Fair & 0 & 2966 & Female & 0.056 & 70 & Female & 0.3 \\ Equal Effectiveness within Budget (\(c=10\)) & 2150 & Male & 0.11 & 2518 & Female & 0.0004 & 276 & Female & 0.3 \\ Equal Effectiveness within Budget (\(c=10\)) & 2575 & Male & 0.11 & 9221 & Female & 0.0004 & 272 & Female & 0.3 \\ Equal Cost of Effectiveness (\(o=0.3\)) & Fair & - & 0 & Fair & 0 & 1 & Female & 1 & Fair & 1 \\ Equal Cost of Effectiveness (\(o=0.7\)) & 1 & Male & 1 & 12 & Female & 2 & Fair & - & 0 \\ Fair Effectiveness-Cost Trade-Off & 4005 & Male & 0.11 & 3579 & Female & 0.13 & 306 & Female & 0.32 \\ Equal Confidential Mean Recourse & Pair & - & 0 & 3145 & Female & 0.35 & Fair & - & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Unfair subgroups identified in the Adult dataset.
**Unfair subgroups** Table 1 presents three subgroups which were ranked at position 1 according to three different definitions: _Equal Cost of Effectiveness (\(\phi\) = 0.7), Equal Choice for Recourse (\(\phi\) = 0.7)_ and _Equal Cost of Effectiveness (\(\phi\) = 0.3)_, meaning that these subgroups were detected to have the highest unfairness according to the respective definitions. For each subgroup, its _rank_, _bias against_, and _unfairness score_ are provided for all definitions presented in the left-most column. When the unfairness score is 0, we display the value "Fair" in the rank column. Note that subgroups with exactly the same score w.r.t. a definition will receive the same rank. The CSC representations for the fairness metric that ranked the three subgroups of Table 1 at the first position are shown in Figure 3.
Subgroup 1 is ranked first (highly unfair) based on _Equal Cost of Effectiveness with \(\phi=0.7\)_, while it is ranked much lower or it is even considered as fair according to most of the remaining definitions (see the values of the "rank" column for Subgroup 1). The same pattern is observed for Subgroup 2 and Subgroup 3: they are ranked first based on _Equal Choice for Recourse with \(\phi=0.7\)_ and _Equal Cost of Effectiveness with \(\phi=0.3\)_ accordingly, but much lower according to the remaining definitions. This finding provides a strong indication of the utility of the different fairness definitions, i.e., the fact that they are able to capture different aspects of the difficulty in achieving recourse.1
Footnote 1: Additional examples, as well as statistics measuring this pattern on a significantly larger sample, are included in the supplementary material to further support this finding.
What is more, these different aspects can easily be motivated by real-world auditing needs. For example, out of the aforementioned definitions, _Equal Cost of Effectiveness with \(\phi=0.7\)_ would be suitable in a scenario where a horizontal intervention to support a subpopulation needs to be performed, but a limited number of actions is affordable. In this case, the macro viewpoint demonstrated in the CSC Subgroup 1 (top result in Figure 3) serves exactly this purpose: one can easily derive single, group-level actions that can effectively achieve recourse for a desired percentage of the unfavored
Figure 3: Comparative Subgroup Counterfactuals for the subgroups of Table 1.
subpopulation. On the other hand, _Equal Choice for Recourse (\(\phi=0.3\))_, for which the CSC of a top result is shown in the middle of Figure 3, is mainly suitable for cases where assigning costs to actions might be cumbersome or even dangerous/unethical. This definition is oblivious to costs and measures bias based on the difference in the number of sufficiently effective actions to achieve recourse between the protected subgroups.
Another important observation comes again from Subgroup 1, where bias against the _Male_ protected subgroup is detected, contrary to empirical findings from works employing more statistical-oriented approaches (e.g., [34]), where only subgroups with bias against the _Female_ protected subgroup are reported. We deem this finding important, since it directly links to the problem of gerrymandering [17], which consists in partitioning attributes in such way and granularity to mask bias in subgroups. Our framework demonstrates robustness to this phenomenon, given that it can be properly configured to examine sufficiently small subgroups via the minimum frequency threshold described in Section 3.
## 5 Limitations
There exist numerous challenges and constraints when attempting to determine costs associated with specific counterfactuals. The cost functions involved are intricate, dataset-specific, and frequently necessitate the expertise of a domain specialist. These cost functions may depend on individual characteristics or human interpretation of the perceived difficulty of specific changes, potentially giving rise to concerns like breaches of user privacy (as the expert may require access to all individual user characteristics [25]) or the risk of malicious specialists manipulating cost functions to conceal existing biases.
We acknowledge the difficulties of finding the dataset-dependent cost functions and proceed to implement various "natural" cost functions tailored to different feature types (e.g., \(L_{1}\) norm, exponential distances). These are only suggestions to the expert who is going to use our framework, are susceptible to change, and can be tailored for the specific datasets. Our primary focus is to identify potential biases towards specific subpopulations and recognizing the aforementioned inherent difficulties in defining costs, we introduce fairness metrics that remain independent of cost considerations. We recognize that a single metric, whether cost-dependent or not, cannot identify all forms of bias that may be present within a model's predictions. We do not aim to replace the existing statistical measures for bias detection, rather than complement them by focusing on the _bias of achieving recourse_.
It is essential to note that FACTS, like other resource generation algorithms, may be susceptible to errors, as demonstrated in prior research [31]. Outcomes produced by FACTS can vary based on the chosen configuration, including hyperparameters and metric selection. These variations can either obscure biases present within a classifier or indicate their absence. It is crucial to emphasize that since FACTS is an explainable framework for bias assessment within subgroups, its primary purpose is to serve as a guidance tool for auditors to identify subgroups requiring in-depth analysis of fairness criteria violations. Responsible usage of our framework necessitates transparency regarding the chosen configuration. Therefore, we recommend running FACTS with different hyperparameters and utilizing as many metrics as possible. Appendix G provides a comprehensive discussion of FACTS' limitations and responsible disclosure guidelines.
## 6 Conclusion
In this work, we delve deeper into the difficulty (or burden) of achieving recourse, an implicit and less studied type of bias. In particular, we go beyond existing works, by introducing a framework of fairness notions and definitions that, among others, conceptualizes the distinction between the micro and macro viewpoint of the problem, allows the consideration of a subgroup as a whole when exploring recourses, and supports _oblivious to action cost_ fairness auditing. We complement this framework with an efficient implementation that allows the detection and ranking of subgroups according to the introduced fairness definitions and produces intuitive, explainable subgroup representations in the form of counterfactual summaries. Our next steps include the extension of the set of fairness definitions, focusing on the comparison of the effectiveness-cost distribution; improvements on the exploration, filtering, and ranking of the subgroup representations, via considering hierarchical relations or high-coverage of subgroups; a user study to evaluate the interpretability of fairness judgments, and the development of a fully-fledged auditing tool.
## Acknowledgments and Disclosure of Funding
This work has been funded by the European Union's Horizon Europe research and innovation programme under Grant Agreement No. 101070568 (AutoFair).
## References
* [1] K. Bache and M. Lichman. UCI machine learning repository, 2013.
* [2] Reuben Binns. Fairness in machine learning: Lessons from political philosophy. In _Conference on fairness, accountability and transparency_, pages 149-159. PMLR, 2018.
* [3] Emily Black, Samuel Yeom, and Matt Fredrikson. Fliptest: fairness testing via optimal transport. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency_, pages 111-121, 2020.
* [4] Naama Boer, Daniel Deutch, Nave Frost, and Tova Milo. Just in time: Personal temporal insights for altering model decisions. In _2019 IEEE 35th International Conference on Data Engineering (ICDE)_, pages 1988-1991. IEEE, 2019.
* [5] Miranda Bogen and Aaron Rieke. Help wanted: An examination of hiring algorithms, equity, and bias. _Analysis & Policy Observatory_, 2018.
* [6] Nadia Burkart and Marco F Huber. A survey on the explainability of supervised machine learning. _Journal of Artificial Intelligence Research_, 70:245-317, 2021.
* [7] Lee Cohen, Zachary C Lipton, and Yishay Mansour. Efficient candidate screening under multiple tests and implications for fairness. _arXiv preprint arXiv:1905.11361_, 2019.
* [8] Giandomenico Cornacchia, Vito Walter Anelli, Giovanni Maria Biancofiore, Fedelucio Narducci, Claudio Pomo, Azzurra Ragone, and Eugenio Di Sciascio. Auditing fairness under unawareness through counterfactual reasoning. _Information Processing & Management_, 60(2):103224, 2023.
* [9] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In _Proceedings of the 3rd innovations in theoretical computer science conference_, pages 214-226, 2012.
* [10] Vivek Gupta, Pegah Nokhiz, Chitradeep Dutta Roy, and Suresh Venkatasubramanian. Equalizing recourse across groups. _arXiv preprint arXiv:1909.03166_, 2019.
* [11] Jiawei Han, Jian Pei, and Yiwen Yin. Mining frequent patterns without candidate generation. In Weidong Chen, Jeffrey F. Naughton, and Philip A. Bernstein, editors, _Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, May 16-18, 2000, Dallas, Texas, USA_, pages 1-12. ACM, 2000.
* [12] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. _Advances in neural information processing systems_, 29, 2016.
* [13] Amir-Hossein Karimi, Gilles Barthe, Borja Balle, and Isabel Valera. Model-agnostic counterfactual explanations for consequential decisions. In _AISTATS_, volume 108 of _Proceedings of Machine Learning Research_, pages 895-905. PMLR, 2020.
* [14] Amir-Hossein Karimi, Gilles Barthe, Bernhard Scholkopf, and Isabel Valera. A survey of algorithmic recourse: contrastive explanations and consequential recommendations. _ACM Computing Surveys_, 55(5):1-29, 2022.
* [15] Amir-Hossein Karimi, Bernhard Scholkopf, and Isabel Valera. Algorithmic recourse: from counterfactual explanations to interventions. In _FAccT_, pages 353-362. ACM, 2021.
* [16] Amir-Hossein Karimi, Julius Von Kugelgen, Bernhard Scholkopf, and Isabel Valera. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. _Advances in neural information processing systems_, 33:265-277, 2020.
* [17] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In _International conference on machine learning_, pages 2564-2572. PMLR, 2018.
* [18] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. An empirical study of rich subgroup fairness for machine learning. In _Proceedings of the conference on fairness, accountability, and transparency_, pages 100-109, 2019.
* [19] Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Scholkopf. Avoiding discrimination through causal reasoning. _Advances in neural information processing systems_, 30, 2017.
* [20] Alejandro Kuratomi, Evaggelia Pitoura, Panagiotis Papapetrou, Tony Lindgren, and Panayiotis Tsaparas. Measuring the burden of (un) fairness using counterfactuals. In _Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part I_, pages 402-417. Springer, 2023.
* [21] Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. _Advances in neural information processing systems_, 30, 2017.
* [22] Evangelia Kyrimi, Mariana Raniere Neves, Scott McLachlan, Martin Neil, William Marsh, and Norman Fenton. Medical idioms for clinical bayesian network development. _Journal of Biomedical Informatics_, 108:103495, 2020.
* [23] Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. Faithful and customizable explanations of black box models. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 131-138, 2019.
* [24] Dan Ley, Saumitra Mishra, and Daniele Magazzeni. Global counterfactual explanations: Investigations, implementations and improvements. _arXiv preprint arXiv:2204.06917_, 2022.
* [25] Dan Ley, Saumitra Mishra, and Daniele Magazzeni. Globe-ce: A translation-based approach for global counterfactual explanations. In _International Conference on Machine Learning_, 2023.
* [26] Joshua R Loftus, Chris Russell, Matt J Kusner, and Ricardo Silva. Causal reasoning for algorithmic fairness. _arXiv preprint arXiv:1805.05859_, 2018.
* [27] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. _ACM Comput. Surv._, 54(6), jul 2021.
* [28] Osonde A Osoba and William Welser IV. _An intelligence in our image: The risks of bias and errors in artificial intelligence_. Rand Corporation, 2017.
* [29] Evaggelia Pitoura, Kostas Stefanidis, and Georgia Koutrika. Fairness in rankings and recommenders: Models, methods and research directions. In _2021 IEEE 37th International Conference on Data Engineering (ICDE)_, pages 2358-2361. IEEE, 2021.
* [30] Inioluwa Deborah Raji and Joy Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial ai products. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 429-435, 2019.
* [31] Kaivalya Rawal and Himabindu Lakkaraju. Beyond individualized recourse: Interpretable and interactive summaries of actionable recourses. _Advances in Neural Information Processing Systems_, 33:12187-12198, 2020.
* [32] Shubham Sharma, Jette Henderson, and Joydeep Ghosh. CERTIFAI: A common framework to provide explanations and analyse the fairness and robustness of black-box models. In _AIES_, pages 166-172. ACM, 2020.
* [33] Jason Tashea. Courts are using ai to sentence criminals. that must stop now, Apr 2017.
* [34] Florian Tramer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Jean-Pierre Hubaux, Mathias Humbert, Ari Juels, and Huang Lin. Fairtest: Discovering unwarranted associations in data-driven applications. In _2017 IEEE European Symposium on Security and Privacy (EuroS&P)_, pages 401-416. IEEE, 2017.
* [35] Berk Ustun, Alexander Spangher, and Yang Liu. Actionable recourse in linear classification. In _Proceedings of the conference on fairness, accountability, and transparency_, pages 10-19, 2019.
* [36] Suresh Venkatasubramanian and Mark Alfano. The philosophical basis of algorithmic recourse. In _Proceedings of the 2020 conference on fairness, accountability, and transparency_, pages 284-293, 2020.
* [37] Julius von Kugelgen, Amir-Hossein Karimi, Umang Bhatt, Isabel Valera, Adrian Weller, and Bernhard Scholkopf. On the fairness of causal algorithmic recourse. In _AAAI_, pages 9584-9594. AAAI Press, 2022.
* [38] Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. _Harv. JL & Tech._, 31:841, 2017.
* [39] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In _Proceedings of the 26th international conference on world wide web_, pages 1171-1180, 2017.
## Appendix
This is the appendix to the main paper, discussing how numerical attributes are handled (Appendix A), describing in detail the experimental setting (Appendix B), presenting the datasets (Appendix C), providing additional results (Appendix D) and discussion (Appendix E), quantitatively comparing the various fairness notions (Appendix F) and describing in detail limitations and usage guidelines of our framework to avoid malicious usage (Appendix G).
## Appendix A Discussion about Numerical Attributes
FACTS works for datasets and models with numerical or continuous features; in fact it works with any mixture of categorical and numerical features. The core idea is that numerical attributes are binned. These bins are used to define subpopulations, e.g., people in the salary range \([40K,50K]\), and actions, e.g., make salary \([50K,60K]\). So the action "if salary is \([40K,50K]\), make salary \([50K,60K]\)" means that all individuals within that salary range are mapped to their counterfactuals where their salaries are increased by \(10K\).
Binning is necessary when defining subpopulations, as we want to explore the entire feature space and present conclusions in a manner that is easily understandable by humans, e.g., "there is unfairness against females when married=no, salary in \([40K,50K]\)".
Binning is also necessary when considering actions over numerical attributes. As explained, binning of granularity \(10K\) for salary means that we consider actions that change salary by \(\pm 10K\), \(\pm 20K\), etc. We emphasize that because the number of actions/counterfactuals is infinite, all methods that work with model-agnostic counterfactual-based explanations (e.g., FACTS, and [31, 25]) have to explore only a small set of actions.
Our goal is to approximate the effectiveness-cost distribution, which means we should consider as many actions as possible. Therefore, on the one hand, binning for actions should rather be fine-grained, while on the other hand, binning for subpopulations should be moderately coarse-grained so as to draw conclusions that concern many affected individuals.
With that said, the binning granularity for actions and subpopulations can differ. For example consider bins of length \(10K\) for subpopulations and \(5K\) for actions. An action, "if salary is \([40K,50K]\), make salary \([55K,60K]\)" means that individuals with salary in \([40K,45K]\) increase their salary by \(15K\), and individuals with salary in \([45K,50K]\) increase their salary by \(10K\).
Experimental Setting
ModelsTo conduct our experiments, we have used the Logistic Regression2 classification model, where we use the default implementation of the python package scikit-learn3. This model corresponds to the black box one that our framework audits in terms of fairness of recourse.
Footnote 2: [https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
Footnote 3: [https://scikit-learn.org/stable/index.html](https://scikit-learn.org/stable/index.html)
Train-Test SplitFor our experiments, all datasets are split into training and test sets with proportions 70% and 30%, respectively. Both shuffling of the data and stratification based on the labels were employed. Our results can be reproduced using the random seed value \(131313\) in the data split function (train_test_split4 from the python package scikit-learn). FACTS is deployed solely on the test set.
Footnote 4: [https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
Frequent Itemset MiningThe set of subgroups and the set of actions are generated by executing the fp-growth5 algorithm for frequent itemset mining. We used the implementation in the Python package mlxtend6. We deploy fp-growth with support threshold **1%**, i.e., we require the return of subgroups and actions with at least 1% frequency in the respective populations. Recall that subgroups are derived from the affected populations \(D_{0}\) and \(D_{1}\) and actions are derived from the unaffected population.
Footnote 5: [https://rasbt.github.io/mlxtend/user_guide/frequent_patterns/fpgrowth/](https://rasbt.github.io/mlxtend/user_guide/frequent_patterns/fpgrowth/)
Footnote 6: [https://github.com/rasbt/mlxtend](https://github.com/rasbt/mlxtend)
Effectiveness and BudgetsAs we have stated in Section 2 of our main paper, the metrics _Equal Choice for Recourse_ and _Equal Cost of Effectiveness_ require the definition of a target effectiveness level \(\phi\), while the metric _Equal Effectiveness within Budget_ requires the definition of a target cost level (or budget) \(c\).
Regarding the metrics that require the definition of an effectiveness level \(\phi\), we used two different values arbitrarily, i.e., a relatively low effectiveness level of \(\phi=30\%\) and a relatively high effectiveness level of \(\phi=70\%\).
For the estimation of budget-level values \(c\) we followed a more elaborate procedure. Specifically,
1. Compute the _Equal Cost of Effectiveness_ (micro definition) with a target effectiveness level of \(\phi=50\%\) to calculate, for all subgroups \(G\), the minimum cost required to flip the prediction for at least \(50\%\) of both \(G_{0}\) and \(G_{1}\).
2. Gather all such minimum costs of step 1 in an array.
3. Choose budget values as percentiles of this set of cost values. We have chosen the **30%, 60%** and **90%** percentiles arbitrarily.
Cost FunctionsOur implementation allows the user to define any cost function based on their domain knowledge and requirements. For evaluation and demonstration purposes, we implement an indicative set of cost functions, according to which, the cost of a change of a feature value \(v\) to the value \(v^{\prime}\) is defined as follows:
1. **Numerical features:**\(|norm(v)-norm(v^{\prime})|\), where \(norm\) is a function that normalizes values to \([0,1]\).
2. **Categorical features:**\(1\) if \(v\neq v^{\prime}\), and \(0\) otherwise.
3. **Ordinal features:**\(|pos(v)-pos(v^{\prime})|\), where \(pos\) is a function that provides the order for each value.
Additionally to the above costs, the user is able to define a feature-specific weight that indicates the difficulty of changing the given feature through an action. Thus, for each dataset, the cost of actions can be simply determined by specifying the numerical, categorical, and ordinal features, as well as the weights for each feature.
FeasibilityApart from the cost of actions, we also take care of some obvious unfeasible actions such as that the age and education features can not be reduced and actions should not lead to unknown or missing values.
Compute resourcesExperiments were run on commodity hardware (AMD Ryzen 5 5600H processor, 8GB RAM). On the software side, all experiments were run in an isolated conda environment using Python 3.9.16.
Datasets Description
We have used five datasets in our experimental evaluation; the main paper presented results only on the first. For each dataset, we provide details about the preprocessing procedure, specify feature types, and list the cost feature weights applied.
### Adult
We have generated CSCs in the Adult dataset7 using two different features as protected attributes, i.e.,'sex', and 'race'. The assessment of bias for each protected attribute is done separately. The results for'sex' as the protected attribute are presented in the main paper. Before we present our results for race as the protected attribute, we briefly discuss the preprocessing procedures and feature weights used for the adult dataset.
Footnote 7: [https://raw.githubusercontent.com/columbia/fairtest/master/data/adult/adult.csv](https://raw.githubusercontent.com/columbia/fairtest/master/data/adult/adult.csv)
PreprocessingWe removed the features 'fnlwgt' and 'education' and any rows with unknown values. The 'hours-per-week' and 'age' features have been discretized into 5 bins each.
FeaturesAll features have been treated as categorical, except for 'capital-gain' and 'capital-loss', which are numeric, and 'education-num' and 'hours-per-week', which we treat as ordinal. The feature weights that we used for the cost function are presented in Table 2. We need to remind here that this comprises only an indicative weight assignment to serve our experimentation; the weight below try to capture the notion of how feasible/actionable it is to perform a change to a specific feature.
### Compas
We have generated CSCs in the COMPAS dataset8 for race as the protected attribute. Apart from our results, we provide some brief information regarding preprocessing procedures and the cost feature weights for the COMPAS dataset.
Footnote 8: [https://aif360.readthedocs.io/en/latest/modules/generated/aif360.sklearn.datasets.fetch_compas.html](https://aif360.readthedocs.io/en/latest/modules/generated/aif360.sklearn.datasets.fetch_compas.html)
PreprocessingWe discard the features 'age' and 'c_charge_desc'. The 'priors_count' feature has been discretized into 5 bins: [-0.1,1), [1, 5) [5, 10) [10, 15) and [15, 38), while trying to keep the frequencies of each bin approximately equal (the distribution of values is highly asymmetric so this is not possible with the direct use of e.g., pandas.qcut9).
Footnote 9: [https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html)
FeaturesWe treat the features 'juv_fel_count', 'juv_misch_count', 'juv_other_count' as numerical and the rest as categorical. The feature weights used for the cost function are shown in Table 3.
### Ssl
We have generated CSCs in the SSL dataset10 for race as the protected attribute. Before we move to our results, we discuss briefly preprocessing procedures and feature weights applied in the SSL dataset.
\begin{table}
\begin{tabular}{l c l l} \hline \hline
**feature name** & **weight value** & **feature name** & **weight value** \\ \hline native-country & 4 & Workclass & 2 \\ marital-status & 5 & hours-per-week & 2 \\ relationship & 5 & capital-gain & 1 \\ age & 10 & capital-loss & 1 \\ occupation & 4 & education-num & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Cost Feature Weights for AdultPreprocessingWe remove all rows with missing values ('U' or 'X') from the dataset. We also discretize the feature 'PREDICTOR RAT TREND IN CRIMINAL ACTIVITY' into 6 bins. Finally, since the target labels are values between 0 and 500, we 'binarize' them by assuming values above 344 to be positively impacted and below 345 negatively impacted (following the principles used in [3].
FeaturesIn this dataset, we treat all features as numerical (apart from the protected race feature). Table 4 presents the feature weights used for the cost function.
### Ad Campaign
We have generated CSCs in the Ad Campaign dataset11 for gender as the protected attribute.
PreprocessingWe decided not to remove missing values, since they represent the vast majority of values for all features. However, we did not allow actions that lead to missing values in the CSCs representation.
Footnote 11: [https://developer.ibm.com/exchanges/data/all/bias-in-advertising/](https://developer.ibm.com/exchanges/data/all/bias-in-advertising/)
\begin{table}
\begin{tabular}{l c} \hline \hline
**feature name** & **weight value** \\ \hline PREDICTOR RAT AGE AT LATEST ARREST & 10 \\ PREDICTOR RAT VICTIM SHOOTING INCIDENTS & 1 \\ PREDICTOR RAT VICTIM BATTERY OR ASSAULT & 1 \\ PREDICTOR RAT ARESITS VIOLENT OFFENSES & 1 \\ PREDICTOR RAT GANG AFFILIATION & 1 \\ PREDICTOR RAT NARCOTIC ARRESTS & 1 \\ PREDICTOR RAT TREND IN CRIMINAL ACTIVITY & 1 \\ PREDICTOR RAT UUW ARRESTS & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Cost Feature Weights for SSL
\begin{table}
\begin{tabular}{l c} \hline \hline
**feature name** & **weight value** \\ \hline age\_cat & 10 \\ juv\_fel\_count & 1 \\ juv\_fel\_count & 1 \\ juv\_other\_count & 1 \\ priors\_count & 1 \\ c\_charge\_degree & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Cost Feature Weights for COMPAS
\begin{table}
\begin{tabular}{l c} \hline \hline
**feature name** & **weight value** \\ \hline religion & 5 \\ politics & 2 \\ parents & 3 \\ age & 10 \\ income & 3 \\ area & 2 \\ college\_educated & 3 \\ homeowner & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The assigned weights for each feature used in the calculation of the cost function in the Ad Campaign datasetFeaturesIn this dataset, we treat all features, apart from the protected one, as categorical. The feature weights used for the cost function are shown in Table 5.
### Credit
We have generated CSCs in the Statlog (German Credit) dataset12 for sex as the protected attribute.
Footnote 12: [https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data](https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data)
PreprocessingWe performed only minimal preprocessing. This involved adding column names, based on the provided documentation, since they are not automatically included in the dataset. We also simplified the values of the'sex' feature into 'Male' and 'Female', from their original, more complex form (which originally was included in a coded version of the combination of personal status and sex). Finally, we changed the labels from \(\{1,2\}\) to \(\{0,1\}\).
FeaturesThe features 'duration', 'credit', 'rate','residence', 'age', 'cards', and 'liables' are treated as numerical, while the remaining features are regarded as categorical. We assigned the same weight value, specifically 1, for all features in the dataset when calculating the cost function, as depicted in Table 6.
\begin{table}
\begin{tabular}{l c} \hline \hline
**feature name** & **weight value** \\ \hline status & 1 \\ duration & 1 \\ hist & 1 \\ purpose & 1 \\ credit & 1 \\ savings & 1 \\ empl & 1 \\ rate & 1 \\ sex & 1 \\ others & 1 \\ residence & 1 \\ property & 1 \\ age & 1 \\ plans & 1 \\ housing & 1 \\ cards & 1 \\ job & 1 \\ liables & 1 \\ telephone & 1 \\ foreign & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The assigned weights for each feature used in the calculation of the cost function in the German Credit datasetAdditional Results
This section repeats the experiment described in the main paper, concerning the Adult dataset with 'gender' as the protected attribute (Section 4), in three other cases. Specifically, we provide three subgroups that were ranked first in terms of unfairness according to a metric, highlight why they were marked as unfair by our framework, and summarize their unfairness scores according to the rest of the metrics.
### Results for Adult with race as the protected attribute
We showcase three prevalent subgroups for which the rankings assigned by different fairness definitions truly yield different kinds of information. This is showcased in Table 7. We once again note that the results presented here are for 'race' as a protected attribute, while the corresponding results for 'gender' are presented in Section 4 of the main paper.
In Figure 4 we present the Comparative Subgroup Counterfactual representation for the subgroups of Table 7 that corresponds to the fairness metric for which each subgroup presents the minimum rank.
These results are in line with the findings reported in the main paper (Section 4), on the same dataset (Adult), but on a different protected attribute (race instead of gender). Subgroups ranked first (highly unfair) with respect to a specific definition, are ranked much lower or even considered as fair according to most of the remaining definitions. This serves as an indication of the utility of the different fairness definitions, which is further strengthened by the diversity of the respective CSCs of Table 7. For example, the Subgroup 1 CSC (ranked first _Equal Choice for Recourse (\(\phi=0.3\))_), demonstrates unfairness by contradicting a plethora of actions for the "White" protected subgroup, as opposed to much fewer actions for the the "Non-White" protected subgroup. For Subgroup 2, a much more concise representation is provided, tied to the respective definition (_Equal Cost of Effectiveness (\(\phi=0.3\))_): no recourses are identified for the desired percentage of the "Non-White" unfavored population, as opposed to the "White" unfavored population.
### Results for COMPAS
We present some ranking statistics for three interesting subgroups for all fairness definitions (Table 8). The Comparative Subgroup Counterfactuals for the same three subgroups are shown in Figure 5.
### Results for SSL
In Table 9 we present a summary of the ranking statistics for three interesting subgroups. and their respective Comparative Subgroup Counterfactuals in Figure 6.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**Shapgroup 1**} & \multicolumn{4}{c}{**Shapgroup 2**} & \multicolumn{4}{c}{**Shapgroup 3**} \\ \cline{2-10} & rank & bias against & unfairness score & rank & bias against & unfairness score & rank & bias against & unfairness score \\ \hline Equal Effectiveness & Fair & Fair & 0.0 & 3047/0.00 & N-White & 0.115 & 1682.0 & Non-White & 0.162 \\ Equal Choice for Recourse (\(\phi=0.3\)) & F & Fair & 0.0 & 10.0 & Non-White & 1.0 & Fair & Fair & 0.0 \\ Equal Choice for Recourse (\(\phi=0.7\)) & F & Fair & 0.0 & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Equal Effectiveness within Baseline (\(\phi=1.7\)) & Fair & Fair & 0.0 & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Equal Effectiveness within Baseline (\(\phi=1.0\)) & Fair & Fair & 0.0 & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Equal Effectiveness within Baseline (\(\phi=1.0\)) & 300 & Non-White & 0.242 & 2201.00 & N-White & 0.115 & 4035.0 & Non-White & 0.071 \\ Equal Effectiveness within Baseline (\(\phi=2.5\)) & F & Fair & 0.0 & 2978.00 & Non-White & 0.115 & 1663.0 & Non-White & 0.162 \\ Equal Cost of Effectiveness (\(\phi=0.3\)) & 18.0 & Non-White & 0.15 & 1 & Non-White & 1.0 & Fair & Fair & 0.0 \\ Equal Cost of Effectiveness (\(\phi=0.7\)) & Fair & Fair & 0.0 & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Fair Effectiveness-Cost Trade-Off & 999.00 & Non-White & 0.242 & 4597.0 & Non-White & 0.115 & 2644.0 & Non-White & 0.162 \\ Equal Confidential Mean Recourse & 5897.0 & White & 0.021 & 5390.0 & White & 0.047 & 1 & Non-White & 0.162 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Example of three unfair subgroups in Adult (protected attribute race)
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**Shapgroup 1**} & \multicolumn{4}{c}{**Shapgroup 2**} & \multicolumn{4}{c}{**Shapgroup 3**} \\ \cline{2-10} & rank & bias against & unfairness score & rank & bias against & unfairness score & rank & bias against & unfairness score \\ \hline Equal Effectiveness & Fair & Fair & 0.0 & 116.0 & African-American & 0.151 & 220.0 & African-American & 0.071 \\ Equal Choice for Recourse (\(\phi=0.3\)) & F & Fair & 0.0 & 3.0 & African-American & 1.0 & Fair & Fair & 0.0 \\ Equal Effectiveness within Baseline (\(\phi=0.7\)) & F & Fairness-American & 3.0 & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Equal Effectiveness within Baseline (\(\phi=1.7\)) & 66.0 & African-American & 0.167 & 79.0 & African-American & 0.151 & 185.0 & African-American & 0.071 \\ Equal Effectiveness within Baseline (\(\phi=1.0\)) & 84.0 & African-American & 0.167 & 108.0 & African-American & 0.151 & 220.0 & African-American & 0.071 \\ Equal Cost of Effectiveness (\(\phi=0.3\)) & F & Fair & 0.0 & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Equal Confidential Mean Recourse & 63.0 & African-American & 0.5 & 214.0 & African-American & 0.151 & 376.0 & African-American & 0.071 \\ Equal Confidential Mean Recourse & 99.0 & African-American & 1.667 & Fair & Fair & 0.0 & 1 & African-American & 0.071 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Example of three unfair subgroups in COMPAS
### Results for Ad Campaign
In Table 10 we present, as we did for the other datasets, the ranking results for 3 interesting subgroups, while in Figure 7, we show the respective Comparative Subgroup Counterfactuals for these subgroups.
Figure 4: Example of three Comparative Subgroup Counterfactuals in Adult (protected attribute race); ref. Table 7
**Subgroup 1**
If age_cat = 25 - 45, c_charge_degree = M. juv_mid_count = 0, priors_count = (10.0, 15.0); Protected Subgroup = 'Caucasian', 1.03% covered Make c_charge_degree=F, priors_count=(-0.1, 1.0) with effectiveness 100.00%. Make priors_count=(-0.1, 1.0) with effectiveness 100.00%. Make priors_count=(-0.1, 1.0) with effectiveness 100.00%. Make pgc_cat=Greater than 45, priors_count=(-0.1, 1.0) with effectiveness 100.00%. Make pgc_cat=Greater than 45, c_charge_degree=F, priors_count=(-0.1, 1.0) with effectiveness 100.00%. Make pgc_cat=Greater than 43, c_charge_degree=F, priors_count=(1.0, 5.0) with effectiveness 100.00%. Make pgc_cat=Greater than 45, priors_count=(1.0, 5.0) with effectiveness 100.00%. Protected Subgroup = 'African-American', 1.16% covered Make pgc_cat=(-0.1, 1.0) with effectiveness 83.33%. Make pgc_cat=Greater than 45, priors_count=(-0.1, 1.0) with effectiveness 100.00%. Make pgc_cat=Greater than 45, c_charge_degree=F, priors_count=(-0.1, 1.0) with effectiveness 100.00%. Make pgc_cat=Greater than 45, priors_count=(1.0, 5.0) with effectiveness 83.33%. Bias against African-American due to Equal Choice for Recourse (threshold = 0.7). Unfairness score = 3.
**Subgroup 2**
If c_charge_degree = M. juv_other_count = 1: Protected Subgroup = 'Caucasian', 3.59% covered Make jw_other_count = 0 with effectiveness 42.86%. Protected Subgroup = 'African-American', 3.48% covered No recourses for this subgroup. Bias against African-American due to Equal Cost of Effectiveness (threshold = 0.3). Unfairness score = inf
**Subgroup 3**
If age_cat = Greater than 45, c_charge_degree = F. juv_fel_count = 0, juv_mid_count = 0, juv_other_count = 0, sea = Male: Protected Subgroup = 'Caucasian', 7.18% covered Make c_charge_degree=M with effectiveness 7.14%. Protected Subgroup = 'African-American', 6.00% covered Make c_charge_degree=M with effectiveness 0.00%. Bias against African-American due to Equal Conditional Mean Recourse. Unfairness score = inf.
**Subgroup 1**
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Subgroup 1**} & \multicolumn{3}{c}{**Subgroup 2**} & \multicolumn{3}{c}{**Subgroup 3**} \\ \cline{2-11} & rank & bias against & unfairness score & rank & bias against & unfairness score & rank & bias against & unfairness score \\ \hline Equal Effectiveness & 163.0 & Black & 0.076 & 70.0 & Black & 0.663 & 979.0 & Black & 0.151 \\ Equal Choice for Recourse (\(\phi=0.3\)) & Fair & Fair & 0.0 & 12.0 & Black & 1.0 & 12.0 & Black & 1.0 \\ Equal Choice for Recourse (\(\phi=0.7\)) & 13.0 & Black & 3.0 & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Equal Effectiveness within Budget (\(\phi=1\)) & Fair & Fair & 0.0 & 195.0 & Black & 0.663 & 1062.0 & White & 0.138 \\ Equal Effectiveness within Budget (\(\phi=2\)) & 242.70 & Black & 0.111 & 126.0 & Black & 0.663 & 3668.0 & White & 0.043 \\ Equal Effectiveness within Budget (\(\phi=10\)) & 2557.0 & Black & 0.076 & 73.0 & Black & 0.663 & 1496.0 & Black & 0.151 \\ Equal Cost of Effectiveness (\(0=0.3\)) & Fair & Fair & 0.0 & 1 & Black & inf & 1 & Black & inf \\ Equal Cost of Effectiveness (\(0=0.7\)) & 1 & Black & inf & Fair & Fair & 0.0 & Fair & Fair & 0.0 \\ Fair Effectiveness-Cost Trade-Off & 3393.0 & Black & 0.111 & 443.0 & Black & 0.663 & 2685.0 & Black & 0.151 \\ Equal (Conditional) Mean Recourse & 3486.0 & Black & 0.053 & 1 & Black & inf & 1374.0 & White & 0.95 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Example of three unfair subgroups in Ad Campaign
Figure 5: Example of three Comparative Subgroup Counterfactuals in COMPAS; ref. Table 8
**Subgroup 1** If PREDICTOR RAT ARENESS VOLENT OFFENSES = 1, PREDICTOR RAT NARCOTIC ARENESS = 1, PREDICTOR RAT VICTIM BATTERY OR ASSAULT = 1: 104% covered
Projected Subgroup = 'Black' 104% covered
No recentes for this subgroup.
Protected Subgroup = 'White' 1.00% covered
Make PREDICTOR RAT ARENESS VOLENT OFFENSES = 0, PREDICTOR RAT NARCOTIC ARENESS = 0, PREDICTOR RAT VICTIM BATTERY OR ASSAULT = 0 with effectiveness 72.73%
Make PREDICTOR RAT ARENESS VOLENT OFFENSES = 0, PREDICTOR RAT NARCOTIC ARENESS = 1, PREDICTOR RAT VICTIM
Make PREDICTOR RAT ARENESS VOLENT OFFENSES = 0, PREDICTOR RAT NARCOTIC ARENESS = 2, PREDICTOR RAT VICTIM BATTERY OR ASSAULT = 0 with effectiveness 72.73%
Bias against Black due to Equal Cost of Effectiveness (threshold = 0.7). Unfairness score = inf.
**Subgroup 1**
If age = 45-54, area = Unknown, parents = 1:
Protected Subgroup = Male', 1.22%c covered
Make age=55-64, area=Rural with effectiveness 30.77%.
Protected Subgroup = 'Female', 1.13%c covered
No recourses for this subgroup.
Bias against Female due to Equal Cost of Effectiveness (threshold=0.3). Unfairness score = inf.
Further Discussion on Experiments
### Model Agnostic Results
We note that our method is model-agnostic and does not depend on the model class. Nonetheless, we have considered two additional models, XGBoost and a Neural Network, for the Adult dataset. We provide an example CSC for Subgroup 1 of Figure 3, one for the output of each model (XGBoost in Figure 8 and NN in Figure 9)
### Generalization to multiclass demographic groups
The generalization of the algorithm to multiple protected groups (comprising multiclass demographic groups) is a straightforward process. For instance, we can compare the burden of each protected group against all other groups using a one-vs-all approach. An example of this methodology applied to the Adult dataset and its multi-valued race attribute is illustrated in Figure 10.
### Runtime and statistics
In Table 11, we present the runtime of FACTS for all datasets. We also provide statistics on the output generated by fp-growth for all these datasets, offering preliminary insights into scalability.
Figure 8: Output of FACTS on an XGBoost model, for the same subgroup as Subgroup 1 of Figure 3.
Figure 10: Output of FACTS for Adult for a multi-valued protected attribute (race); all protected subgroups other than ‘Black’ have three options (actions) to achieve recourse with an effectiveness of at least 0.7, while ‘Black’ has a single option. Therefore we conclude bias against the ‘Black’ using the Equal Choice of Recourse.
Figure 9: Output of FACTS on a simple NN model, for the same subgroup as Subgroup 1 of Figure 3.
Additionally, we include the runtime for the Adult dataset, with sex serving as the protected attribute, for various combinations of minimum support threshold and the number of bins in the numerical attributes (see Table 13). These results encompass both the total runtime of FACTS and the time taken by FACTS in the assessment of fairness step. In Table 14 we report the corresponding statistics from fp-growth for the same combinations as presented in 13. It is worth noting that our primary contribution is conceptual in nature, focusing on the formalization of recourse unfairness notions, rather than technical, i.e., how to efficiently determine a good set of actions to explore.
\begin{table}
\begin{tabular}{l c c c c c c} \hline & **Adult (sex)** & **Adult (race)** & **COMPAS** & **SSL** & **Ad campaign** & **German credit** \\ \hline
**\# male (white) subgroups (affected)** & 27,510 & 27,767 & 1431 & 7,836 & 1,658 & 244,777 \\
**\# female (non-white) subgroups (affected)** & 28,176 & 24,664 & 3,984 & 7,801 & 1,700 & 241,287 \\
**\# actions (unaffected)** & 57,008 & 56,394 & 2,895 & 17,270 & 3,476 & 83,631 \\
**\# common subgroups** & 13,300 & 18,692 & 95 & 6,552 & 1,651 & 36,970 \\
**\# valid actions** & 97,413 & 112,879 & 188 & 86,782 & 4,274 & 117,143 \\
**\# subgroups** & 12,556 & 15,672 & 88 & 6,551 & 1,280 & 31,509 \\ \hline \end{tabular}
\end{table}
Table 12: Statistics on the output of fp-growth for all datasets
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{2}{l}{**(min-support threshold, \# bins)**} & (0.1,10) & (0.05,10) & (0.01,10) & (0.05,5) & (0.05,20) \\ \hline
**\# of male subgroups (affected)** & 997 & 3,017 & 25,569 & 4,017 & 2,309 \\
**\# of female subgroups (affected)** & 933 & 3,024 & 26,173 & 4,021 & 2,397 \\
**\# of common subgroups** & 453 & 1,583 & 12,093 & 1,969 & 1,126 \\
**\# of actions (unaffected)** & 2,828 & 7,777 & 60,006 & 10,709 & 5,961 \\
**\# of all valid actions** & 351 & 3,120 & 87,687 & 4,348 & 2,538 \\
**\# of subgroups** & 283 & 1,235 & 10,975 & 1,706 & 887 \\ \hline \end{tabular}
\end{table}
Table 13: Runtime for Adult (sex) while varying min-support threshold and number of bins
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **Adult (sex)** & **Adult (race)** & **COMPAS** & **SSL** & **Ad campaign** & **German credit** \\ \hline
**Runtime (seconds)** & 1,875 & 2,208 & 11 & 1,807 & 860 & 689 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Runtime for all datasets (min-support threshold: 0.01 for Adult, COMPAS, SSL, Ad campaign; 0.04 for German credit)Comparison of Fairness Metrics
The goal of this section is to answer the question: "How different are the fairness of recourse metrics". To answer this, we consider all subgroups and compare how they rank in terms of unfairness according to 12 distinct metrics. The results justify our claim in the main paper that the fairness metrics capture different aspects of recourse unfairness. For each dataset and protected attribute, we provide (a) the ranking analysis table, and (b) the aggregated rankings table.
The first column of the _ranking analysis_ table shows the number of the most unfair subgroups per metric, i.e., how many ties are in rank 1. Depending on the unit of the unfairness score being compared between the protected subgroups (namely: cost, effectiveness, or number of actions), the number of ties can vary greatly. Therefore, we expect to have virtually no ties when comparing effectiveness percentages and to have many ties when comparing costs. The second and third columns show the number of subgroups where we observe bias in one direction (e.g., against males) and the opposite (e.g., against females) among the top 10% most unfair subgroups.
The _aggregated rankings_ table is used as evidence that different fairness metrics capture different types of recourse unfairness. Each row concerns the subgroups that are the most unfair (i.e., tied at rank 1) according to each fairness metric. The values in the row indicate the average percentile ranks of these subgroups (i.e., what percentage of subgroups are more unfair) when ranked according to the other fairness metrics, shown as columns. Concretely, the value \(v\) of each cell \(i,j\) of this table is computed as follows:
1. We collect all subgroups of the fairness metric appearing in row \(i\) that are ranked first (the most biased) due to this metric.
2. We compute the average ranking \(a\) of these subgroups in the fairness metric appearing in column \(j\).
3. We divide \(a\) with the largest ranking tier of the fairness metric of column \(j\) to arrive at \(v\).
Each non-diagonal value of this table represents the relative ranking based on the specific metric of the column for all the subgroups that are ranked first in the metric of the respective row (all diagonal values of this table are left empty). A relative ranking of \(v\) in a specific metric \(m\) means that the most unfair subgroups of another metric are ranked lower on average (thus are fairer) for metric \(m\).
### Comparison on Adult for protected attribute gender
The number of affected individuals in the test set for the adult dataset is 10,205. We first split the affected individuals on the set of affected males \(D_{1}\) and the set of affected females \(D_{0}\). The number of subgroups formed by running fp-growth with support threshold \(1\%\) on \(D_{1}\) and on \(D_{0}\) and computing their intersection is 12,880. Our fairness metrics will evaluate and rank these subgroups based on the actions applied.
Tables 15 and 16 present the ranking analysis and the aggregated rankings respectively on the gender attribute, on the Adult dataset. Next, we briefly discuss the findings from these two tables; similar findings stand for the respective tables of the other datasets, thus we omit the respective discussion.
### Comparison on COMPAS
The number of affected individuals in the test set for the Adult dataset is 10,205. We first split the affected individuals on the set of affected whites \(D_{1}\) and the set of affected non-whites \(D_{0}\). The number of subgroups formed by running fp-growth with support threshold \(1\%\) on \(D_{1}\) and on \(D_{0}\) and computing their intersection is 16,621. Our fairness metrics will evaluate and rank these subgroups based on the actions applied.
### Comparison on COMPAS
The number of affected individuals in the test set for the COMPAS dataset is 745. We first split the affected individuals on the set of affected Caucasians \(D_{1}\) and the set of affected African Americans \(D_{0}\). The number of subgroups formed by running fp-growth with support threshold \(1\%\) on \(D_{1}\) and on \(D_{0}\) and computing their intersection is 995. Our fairness metrics will evaluate and rank these subgroups based on the actions applied.
### Comparison on SSL
The number of affected individuals in the test set for the SSL dataset is 11,343. We first split the affected individuals on the set of affected blacks \(D_{1}\) and the set of affected whites \(D_{0}\) based on the race attribute (appears with the name RACE CODE CD in the dataset). The number of subgroups
\begin{table}
\begin{tabular}{l r r r r r r r r r} \hline \hline & \begin{tabular}{c} \# Most Unfair \\ Subgroups \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Males \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} &
\begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} \\ \hline (Equal Cost of Effectiveness(Macro), 0.3) & 1673 & 56 & 206 \\ (Equal Cost of Effectiveness(Macro), 0.7) & 301 & 26 & 37 \\ (Equal Choice for Recourse, 0.3) & 2 & 54 & 286 \\ (Equal Choice for Recourse, 0.7) & 6 & 31 & 50 \\ Equal Effectiveness & 1 & 39 & 1040 \\ (Equal Effectiveness within Budget, 5.0) & 1 & 41 & 616 \\ (Equal Effectiveness within Budget, 10.0) & 1 & 6 & 904 \\ (Equal Effectiveness within Budget, 18.0) & 1 & 22 & 964 \\ (Equal Cost of Effectiveness(Micro), 0.3) & 1523 & 10 & 226 \\ (Equal Cost of Effectiveness(Micro), 0.7) & 290 & 38 & 27 \\ Equal/Conditional Mean Recourse & 764 & 540 & 565 \\ (Fair Effectiveness-Cost Trade-Off, value) & 1 & 61 & 1156 \\ \hline \hline \end{tabular}
\end{table}
Table 15: Ranking Analysis in Adult (protected attribute gender)
\begin{table}
\begin{tabular}{l r r r r r r r r r r} \hline \hline & \begin{tabular}{c} \# Most Unfair \\ Subgroups \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Males \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} & \begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} &
\begin{tabular}{c} \# Subgroups w. Bias against Females \\ (in Top 10\% Unfair Subgroups) \\ \end{tabular} \\ \hline (Equal Cost of Effectiveness(Macro), 0.3) & 1731 & 0 & 295 \\ (Equal Cost of Effectiveness(Macro), 0.7) & 325 & 7 & 51 \\ (Equal Choice for Recourse, 0.3) & 1 & 2 & 391 \\ Equal Choice for Recourse, 0.7) & 2 & 10 & 60 \\ Equal Effectiveness within Budget, 1.15 & 1 & 50 & 24 \\ Equal Effectiveness within Budget, 10.0 & 1 & 3 & 1251 \\ Equal Effectiveness within Budget, 21.0) & 1 & 0 & 1423 \\ Equal Cost of Effectiveness(Micro), 0.3) & 1720 & 0 & 294 \\ Equal Cost of Effectiveness(Micro), 0.7) & 325 & 7 & 51 \\ Equal/Conditional Mean Recourse & 2545 & 53 & 1316 \\ (Fair Effectiveness-Cost Trade-Off, value) & 2 & 0 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 17: Ranking Analysis in Adult (protected attribute race)formed by running fp-growth with support threshold \(1\%\) on \(D_{1}\) and on \(D_{0}\) and computing their intersection is 6,551. Our fairness metrics will evaluate and rank these subgroups based on the actions applied.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \# Most Unfair & \# Subgroups w. Bias against Whites & \# Subgroups w. Bias against Blacks & \# Subgroups w. Bias against Blacks \\ & \# Subgroups w. Bias against & \# Subgroups w. Bias against & \# Subgroups w. Bias against & \# Subgroups w. Bias against & \# Subgroups w. Bias against \\ \hline (Equal Cost of Effectiveness(Macro), 0.3) & 371 & 10 & 107 \\ (Equal Cost of Effectiveness(Macro), 0.7) & 627 & 26 & 124 \\ (Equal Choice for Recourse, 0.3) & 1 & 108 & 184 & 184 & 184 & 184 \\ Equal Choice for Recourse, 0.7) & 16 & 78 & 229 \\ Equal Effectiveness & 1 & 15 & 389 \\ Equal Effectiveness within Budget, 1.0) & 18 & 18 & 436 \\ Equal Effectiveness within Budget, 2.0) & 2 & 19 & 532 \\ Equal Effectiveness within Budget, 10.0) & 1 & 15 & 548 & 548 & 135 \\ Equal Cost of Effectiveness(Micro), 0.3) & 458 & 135 & 130 & 134 & 134 \\ Equal Cost of Effectiveness(Micro), 0.7) & 671 & 23 & 130 & 130 \\ Equal(Conditional Mean Recourse) & 100 & 41 & 434 & 134 & 134 \\ (Fair Effectiveness-Cost Trade-Off, value) & 80 & 76 & 544 \\ \hline \hline \end{tabular}
\end{table}
Table 22: Aggregated Rankings in SSL
### Comparison on Ad Campaign
The number of affected individuals in the test set for the Ad campaign dataset is 273,773. We first split the affected individuals on the set of affected males \(D_{1}\) and the set of affected females \(D_{0}\) based on the gender attribute. The number of subgroups formed by running fp-growth with support threshold \(1\%\) on \(D_{1}\) and on \(D_{0}\) and computing their intersection is 1,432. Our fairness metrics will evaluate and rank these subgroups based on the actions applied.
Discussion of Limitations & Responsible Usage Guidelines
Determining costs for specific counterfactuals poses numerous challenges and constraints. The associated cost functions are intricate, dataset-specific, and frequently require the expertise of a domain specialist. These cost functions can depend on individual characteristics or human interpretation of the perceived difficulty of specific changes, potentially giving rise to concerns like breaches of user privacy (as the expert may require access to all individual user characteristics [25]) or potential manipulation of cost functions by malicious specialists to conceal existing biases.
Recognizing the difficulties of finding the dataset-dependent cost functions, we implement various "natural" cost functions tailored to different feature types (e.g., \(L_{1}\) norm, exponential distances, etc.). These serve only as suggestions to the expert using our framework, are susceptible to change, and can be customized for specific datasets. While the creation of precise cost models is an interesting research area, it is not our primary focus. Instead, our primary goal is to identify a specific type of bias (difficulty to achieve recourse) towards specific subpopulations. Therefore, we have introduced fairness metrics that are independent of cost considerations, recognizing the aforementioned inherent difficulties in defining costs. We recognize that a single metric, whether cost-dependent or not, cannot identify all forms of bias that may be present within a model's predictions. We do not aim to replace the existing statistical measures for bias detection, rather than complement them and capture possible gaps, as described in the introduction of our paper and Section 5.
Our framework evaluates subgroups based on a range of fairness metrics, identifying 'candidate' subgroups that may experience unfair treatment under scrutiny. While the number of such 'candidate' subgroups can be substantial across different metrics, this abundance can serve as a safeguard against malicious users, as cost-oblivious rules are immune to user influence. We emphasize our efforts to produce comprehensive summaries that are easily understandable for the end-users and help them make informative decisions regarding the existence of the groups that are unfairly treated.
The ultimate decision regarding which of the 'candidate' subgroups are considered unfairly treated rests with the expert. Our comprehensive summaries highlight the top-scoring 'candidate' subgroups across multiple metrics, with the ultimate goal of helping the end users and deterring malicious users from deliberately ignoring the framework's suggestions. Explanations accompany FACTS' suggestions, requiring users to justify their rejection and serving as a barrier against deliberate malicious actions. While our summaries are interpretable and customizable, we plan to enhance our framework with visualizations in future work to further assist end-users. A user study would also be helpful to assess FACTS' interpretability to the end-user.
It is important to note that FACTS, like other resource generation algorithms, may be susceptible to errors, as demonstrated in prior research [31]. The outcomes produced by FACTS can vary based on the chosen configuration, including hyperparameters and metric selection. These variations can either obscure biases present within a classifier or indicate their absence. It is crucial to emphasize that since FACTS is an explainable framework for bias assessment within subgroups, its primary purpose is to serve as a guidance tool for auditors to identify subgroups requiring in-depth analysis of fairness criteria violations. Responsible usage of our framework necessitates transparency regarding the chosen configuration. Therefore, we recommend running FACTS with different hyperparameters and utilizing as many metrics as possible.
Another potential limitation relates to GDPR and similar regulations, which may restrict access to protected attributes (confidential information, etc.) even in bias detection applications. It's important to clarify that our framework maintains privacy and avoids personal information leakage. Only summaries of results are presented to the end-users, limited to subgroups that impose the minimum size requirement. These features could be further enhanced by adding restrictions to the size requirements if legislation or standardized requirements existed. In essence, our approach addresses complex challenges in bias detection, while upholding fairness, respecting privacy requirements, and ensuring interpretability for end-users.
An acknowledged constraint of our framework pertains to how we generate actions, as explained in Section 3. Both actions and subgroups are derived using a frequent itemset mining algorithm. We have consciously opted against altering our approach to action generation, to minimize computational complexity and enhance action effectiveness. Importantly, the same actions are applied to all groups sharing a common predicate during the computation of fairness metrics. | ## Review
### Summary
The paper presents FACTS (Fairness Aware Counterfactuals for Subgroups), a framework designed to audit the fairness of recourse in machine learning models. It distinguishes between micro and macro viewpoints of fairness and introduces several novel metrics for evaluating subgroup fairness through counterfactual explanations. The authors demonstrate the effectiveness of FACTS on various datasets, providing insights into the challenges individuals face when seeking recourse. The work is well-structured and addresses an important, timely issue in machine learning fairness, contributing to the understanding of recourse bias and proposing methods for its evaluation.
### Strengths
- 1. The work addresses a significant problem in machine learning by focusing on recourse fairness, an area that has not been extensively explored.
- 2. The paper is well-written, with clear explanations of motivations, definitions, and proposed metrics.
- 3. The experimental evaluation is solid, utilizing multiple datasets which showcase the effectiveness of the FACTS framework.
- 4. The framework is model-agnostic, interpretable, and highly parameterizable, making it applicable across different machine learning contexts.
- 5. Extensive bibliographic references enhance the credibility of the research.
### Weaknesses
- 1. The experiments section is insufficiently detailed, lacking thorough evaluations across various model classes and datasets.
- 2. There is no clear methodology described for how to evaluate the efficacy of the FACTS framework in experimental setups.
- 3. The paper does not adequately address limitations or provide a societal impact statement related to the use of the FACTS framework.
- 4. Certain assumptions made in the paper, such as uniform costs across all instances, may not hold true, leading to potential biases.
- 5. The figures used to illustrate results could be improved for clarity and visual appeal.
### Questions
- 1. How does FACTS handle the definition of cost functions for recourse?
- 2. Can the algorithm generalize to multiclass outcomes and continuous features?
- 3. What statistical tests are used to ensure the significance of the subgroup findings?
- 4. How does the performance of FACTS compare with existing state-of-the-art approaches?
- 5. How does the algorithm scale with high-dimensional feature spaces?
### Soundness
**Score:** 3
**Description:** 3 = good. The methodology and experimental results are logically sound, but some aspects lack thorough evaluation and clarity, which impacts overall reliability.
### Presentation
**Score:** 3
**Description:** 3 = good. The paper is well-structured and easy to read, but could benefit from clearer figures and a more concise introduction.
### Contribution
**Score:** 3
**Description:** 3 = good. The paper contributes valuable insights into the fairness of recourse, though it has some limitations that could affect its practical implementation.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact potential, but requires some improvements in evaluation depth and clarity.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel framework addressing an important aspect of fairness in machine learning. While there are some weaknesses in experimental detail and clarity, the significance of the contribution, as well as the overall soundness and presentation, support an acceptance decision. The work holds potential for future research and practical applications in auditing fairness in decision-making processes.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Im-Promptu: In-Context Composition
from Image Prompts
Bhishma Dedhia
Department of Electrical and Computer Engineering, Princeton University
Michael Chang
Department of Computer Science, University of California Berkeley
Jake C. Snell
Department of Computer Science, Princeton University
Thomas L. Griffiths
Niraj K. Jha
Department of Psychology, Princeton University
###### Abstract
Large language models are few-shot learners that can solve diverse tasks from a handful of demonstrations. This implicit understanding of tasks suggests that the attention mechanisms over word tokens may play a role in analogical reasoning. In this work, we investigate whether analogical reasoning can enable in-context composition over composable elements of visual stimuli. First, we introduce a suite of three benchmarks to test the generalization properties of a visual in-context learner. We formalize the notion of an analogy-based in-context learner and use it to design a meta-learning framework called _Im-Promptu_. Whereas the requisite token granularity for language is well established, the appropriate compositional granularity for enabling in-context generalization in visual stimuli is usually unspecified. To this end, we use _Im-Promptu_ to train multiple agents with different levels of compositionality, including vector representations, patch representations, and object slots. Our experiments reveal tradeoffs between extrapolation abilities and the degree of compositionality, with non-compositional representations extending learned composition rules to unseen domains but performing poorly on combinatorial tasks. Patch-based representations require patches to contain entire objects for robust extrapolation. At the same time, object-centric tokenizers coupled with a cross-attention module generate consistent and high-fidelity solutions, with these inductive biases being particularly crucial for compositional generalization. Lastly, we demonstrate a use case of _Im-Promptu_ as an intuitive programming interface for image generation.
## 1 Introduction
No thought can be formed that isn't informed by the past; or, more precisely, we think only thanks to analogies that link our present to our past.
Humans represent complex concepts by combining and organizing simpler concepts [2, 3]. This endows us with an uncanny skill for constructing an unlimited number of concepts by composing a relatively small set of previously learned basic building blocks, an ability more formally called _compositional generalization_. For example, we can generate sentences by combining a dictionary of words in different ways and solve a wide range of mathematical problems using a small set of basic arithmetic operations and variables. In the case of the visual world, objects form naturallycomposable entities and object-centric primitives can be flexibly combined using abstract syntactic rules to yield novel and counterfactual scenes.
The growing body of work in object-centric learning methods [4, 5, 6] enable the extraction of latent object representations from perceptual scenes. However, slot composition methods have been limited to _ad-hoc_ composition rules based on hard-coding slots into concept libraries [7, 8]. A central challenge in learning a generative model that explicitly models the compositional grammar of the visual world is that the seemingly unlimited number of composition rules that undergird real-life visual entities prevents the _explicit_ specification of each rule. For example, a warm spring day consists of blooming flowers and freshly verdant expanses, but come autumn, the same scene stipulates the presence of golden headovers carpeted with fallen leaves. State-of-the-art text-to-image models [9, 10, 11, 12] circumvent this bottleneck by grounding images in natural language. This aligns the representations of entities in an image to the compositional structure of language. While the visual quality of these models is stunning, they often require engineering complex language prompts to elicit the desired output. They also suffer from limited understanding of object relations [13]. Moreover, studies [14] have shown that infants as young as six months old can extract compositional structure from sequences of visual events, even before they acquire language. Such compositional abstraction independent of language faculty exhibited by humans remains elusive for contemporary models.
Large language models (LLMs) [15, 16, 17, 18, 19, 20], on the other hand, demonstrate _implicit_ composition skills. Pre-trained on large text corpora, these models have impressive emergent few-shot learning capabilities. They can solve novel tasks on the fly without gradient updates by following examples shown in the instruction prompts, a process known as _in-context learning_. Under the hood, in-context learning uses the inherently compositional nature of the input text to implicitly infer the task structure and how to apply the task structure to a query [21]. In the most simple form, the prompt-based few-shot learning paradigm can be formalized as _analogy solving_ of the type \(A:B::C:D\), where \(A\) represents an example task, \(B\) represents the example solution, and \(C\) denotes the query. The generated solution is the analogy completion \(D\). Analogies over compositional elements, therefore, provide a direct isomorphism between in-context learning for language and learning implicit composition rules for visual objects, as depicted in Fig. 1. While LLMs use pre-specified tokenizers [22, 23, 24], the compositional granularity for _implicit_ visual understanding is unspecified. On the other hand, slot methods induce object-level representations but fail to learn composition rules. In this work, we then ask:
_Can analogy-solving enable in-context generalization over object-centric compositional elements of visual entities?_
While many linguistic corpora have been extracted from the Internet for text-based tasks, to the best of our knowledge, no datasets exist for measuring in-context composition capability from visual prompts. To this end, we introduce a suite of three diverse benchmarks in Section 3 built on top of rich combinatorial spaces to systematically test image-based in-context learning abilities: (a) 3D Shapes [25], (b) BitMoji Faces, and (c) CLEVr Objects [26]. If the attention over word tokens enables analogical understanding in LLMs, can we transfer this mechanism to visual analogies? We formulate a unifying framework in Section 4.1 to formalize this notion. We present a visual analogy-based meta-learning algorithm called _Im-Promptu_ in Section 4.2 and use it to train agents with a cross-attention module. How does one tokenize visual inputs? To answer this, we model several generative agents in Section 5, encompassing a continuum ranging from strong compositional learners using object-centric abstractions to non-compositional monolithic representation learners. Extensive experiments presented in Section 6 show the dependence of robust in-context learning for visual stimuli on object-centric slots as visual tokenizers and cross-attention. We demonstrate the use
Figure 1: LLMs perform in-context understanding of a task from a few examples in an analogical manner (left). We use analogy-making to implicitly understand the composition rules over visual stimuli made of object-like entities (right).
of our visual analogy framework in task extrapolation (Section 6.2), combinatorial generalization (Section 6.3), and generating counterfactual images (Section 6.5).
## 2 Related Work
In this section, we discuss prior works and draw connections to relevant research areas.
### Object-Centric Learning:
This growing body of work targets unsupervised decomposition of scenes into a set of object-centric vectors from raw perception data. A common approach involves modeling an autoencoder consisting of a latent factorization of object slots that are independently decoded to reconstruct the scene [4, 5, 6, 27, 28, 29]. These methods effectively segment scenes into representations of underlying objects but do not learn the flexible combination of object representations. On the other hand, the authors in [7] replace the independent slot decoding bias with an Image-GPT [30] and demonstrate novel slot composition. While this work largely answers how to compose a _given_ set of visual entities, their slot selection is performed manually through organization of slots into clustered libraries. In our work, we use a meta-learning approach to help the learner implicitly understand the composition task. Recently, the authors of [31] recast the iterative slot refinement problem as finding of fixed points through implicit differentiation to improve the stability and computational cost of the slot-attention (SA) technique. The authors of [32] use specific inductive biases to disentangle object contents from spatial information that enables relational reasoning. Our work produces a more generalized composition beyond spatial relations.
### Analogical Reasoning:
The ability to make analogies has been shown to have important implications in various domains, such as problem-solving, creativity, learning, and counterfactual thinking [1, 33, 34, 35, 36]. Various models have been proposed to explain the underlying mechanisms of analogy-making. Popular symbolic approaches include ARGUS [37], Transfer Frames [38], and Structural Mapping Engine [39], connectionist models include LISA [40] and Star-1 [41], and models that occupy a middle ground like Copycat [42]. Contemporary deep-learning models [43, 44, 45] attempt to solve Raven's Progressive Matrices and Bongard Problems, types of visual analogies present in IQ tests. The authors of [46] propose a generative deep visual analogy-making model based on a parallelogram regularizer on the latent space to disentangle variation factors and traverse transformation manifolds. Most similar to us, a recent work [47] reformulates the analogy-making task as Inpainting in an image grid and proposes an auto-encoder pre-training task to that end.
### Compositional Generative Models:
Prior works [48, 49] use energy-based models to represent composable factors of variation of a scene. However, these methods use concept labels or are grounded in text and are limited to a few concept composition operations. A follow-up work [50] describes unsupervised energy-based concepts; however, the re-composition is performed via manually-picked concepts. Recent works in text-to-image models use language grounding to induce compositionality. DALL-E [9] uses language modeling over the joint space of the text and image tokens to facilitate image generation. The latest text-to-image models [10, 11, 12] use diffusion modeling to guide the image generated from the text. A growing body of work [51, 52, 53] explores text-grounded controllable image editing via prompt engineering, where most parts of the image are preserved, but specific objects are re-composed.
### Modular Representations and Attention:
A swath of recent works proposes neural methods that combine modular primitives with recurrent mechanisms and cross-attention. The authors of [54] propose the Recurrent Independent Mechanisms (RIMs) to learn weakly coupled dynamical systems with independent compositional neural modules that communicate via a sparse attention bottleneck. Another work [55] uses object-centric abstractions as 'files' to store factorized declarative knowledge and the dynamics of the environment. A follow-up work [56] introduces neural abstractions of the classical Production Systems framework to learn rule-based templates, thereby facilitating compositional dynamics of the environment that can quickly adapt to novel stimuli. Most recently, the authors in [57] propose an attention-based architecture to learn in-context causal relationships between entities.
## 3 Benchmarks
Humans can contextualize past visual stimuli in the face of everyday experiences and even use them for imagining counterfactuals. Having seen an oak table, we can, with little difficulty, look at a metal armchair and understand the concept of an oak armchair. How do we then systematically test such abilities of a visual in-context learner? In this work, we posit that an answer to this question lies in combinatorial visual spaces with controllable factors of variation, which provide a _semantically rich, simple, scalable, and composable_ avenue for setting up such tasks. We create a suite of three benchmarks (Fig. 2) from compositional image creators that include (a) 3D Shapes [25], (b) BitMoji Faces, and (c) CLEVr Objects [26]. Each benchmark has a split of primitive training tasks and out-of-distribution test tasks. The _primitive_ tasks (Fig. 2, left) constitute an analogy where a _single_ composable element of the visual stimulus is modified in isolation. The description of each benchmark with its primitive set of tasks is given next:
**3.1 3D Shapes Image Prompts:** This dataset consists of static scenes of various objects lying on a colored floor in front of a colored wall viewed from different orientations. To set up primitive tasks, we consider the four properties \(P\) = {object color, wall color, floor color, scene orientation}, where each property can take multiple values within a domain. For each such task, the latent relation \(r(.)\) for the analogy prompt takes a property \(p\in P\) and modifies it from a _source_ domain value to a _target_ value. This benchmark comprises \(80000\) such tasks with a maximum of four supporting image pairs for each instance and with a roughly equal split across properties.
**3.2 BitMoji Faces Image Prompts:** BitMoji is an avatar creator service for social media users that allows them to create intricate cartoon faces. We queried the BitMoji API [58] to collect faces with four underlying dynamic elements \(P=\{\texttt{skin tone},\texttt{facial hair style},\texttt{hair style},\texttt{eyewear}\}\). Much like the previous case, primitive tasks are generated by intervening on a _source_ value of a property and modifying it to a _target_ value. We populate the training set with 80000 tasks, with four demonstrations available per task.
**3.3 CLEVr Objects Image Prompts:** CLEVr is a popular visual question-answering dataset with the visual component consisting of multiple objects lying in a scene. We use the CLEVr rendering engine to set up primitive tasks that include adding and deleting the same object across various scenes. In this fashion, we generate \(60000\) tasks with three examples per task.
A composable image space allows the concurrent modification of individual visual elements. One can construct _composite_ tasks (Fig. 2, right) out of a combination of primitives shown in the training set. A \(k\)-composite task is denoted as \(R^{k}(.)=r_{1}\circ r_{2}\circ\cdots\circ r_{k}\) where \(r_{1},\cdots,r_{k}\in\mathcal{R}\). We provide details of the \(k\)-compositeness test of each of our agents in Section 6.3. Finer details about each
Figure 2: We use combinatorial image spaces to set up visual in-context learning tasks. Each task requires the generation of a solution from a query and supporting examples. The latent description of the task is described within the square parentheses. Left: Primitive tasks modify a composable element in isolation. Right: Composite tasks combine primitives to test the combinatorial generalization skills of the learner.
benchmark, along with visual examples from every task, are detailed in Appendix A. We will release the complete dataset of primitive and composite tasks upon the publication of this article.
## 4 Methods
To learn compositional structure over visual stimuli, we begin by formulating a unified mechanistic interpretation of an in-context compositional learner using insights from design elements of LLMs.
### In-Context Learning as Analogy Completion:
A general _compositional_ in-context learner \(\mathcal{M}_{\phi,\alpha,\theta}(.)\) can be formalized via the following three key components:
* An encoder \(\mathcal{E}_{\phi}(.)\) that maps the input space \(\mathcal{X}\) to the compositional space \(\mathcal{C}\triangleq\mathcal{E}_{\phi}(.):\mathcal{X}\longmapsto\mathcal{C}\)
* An executor \(\mathcal{T}_{\alpha}(.\,,.\,,.)\) that maps compositional entities from example tasks, example solutions, and a query task to the query solution \(\triangleq\mathcal{T}_{\alpha}(.,.,.):\mathcal{C}\times\mathcal{C}\times \mathcal{C}\longmapsto\mathcal{C}\)
* A decoder \(\mathcal{D}_{\theta}(.)\) that maps the compositional space back to the input space \(\triangleq\mathcal{D}_{\theta}(.):\mathcal{C}\longmapsto\mathcal{X}\)
Coupling the above functions together yields the learner:
\[\mathcal{M}\triangleq\mathcal{D}_{\theta}\left(\mathcal{T}_{\alpha}\left( \mathcal{E}_{\phi}(.),\mathcal{E}_{\phi}(.),\mathcal{E}_{\phi}(.)\right) \right):\mathcal{X}\times\mathcal{X}\times\mathcal{X}\longmapsto\mathcal{X} \tag{1}\]
The inclusion of \(\mathcal{E}_{\phi}(.)\) and \(\mathcal{D}_{\theta}(.)\) is crucial for compositional abstraction since the input space may not be _a priori_ composable as in the case of images. Beyond this, we do not place any strong parametric constraints on the model. Let \(\mathcal{R}\) denote a task set. For any task \(r\in\mathcal{R}\), define the task analogy \(\zeta\) over input pairs \(x_{1},x_{2}\) as \(x_{1}:r(x_{1})::x_{2}:r(x_{2})\triangleq A:B::C:D\). An effective in-context learner over \(\mathcal{R}\) then satisfies the following property [59]:
\[\forall r\in\mathcal{R},\;\mathbb{E}_{\zeta\sim\mathcal{P}(r,\mathcal{X})} \left[\mathcal{L}\left(D;\mathcal{M}_{\phi,\alpha,\theta}(A,B,C)\right)\right] \leq\epsilon \tag{2}\]
Here, \(\mathcal{P}\) denotes the distribution of the analogies resulting from the input space \(\mathcal{X}\) and task \(r(.)\), \(\mathcal{L}(.)\) denotes a loss metric, and \(\epsilon\) is an error bound. The above property simply states that a good in-context learner solves analogies defined over a task set within some finite error bound. In Appendix B, we describe LLMs as an instantiation of this framework.
### Im-Promptu Learning:
Pre-training LLMs on large-scale text data leads to the emergence of highly transferable internal representations that allow them to adjust to novel tasks by simple priming on a few demonstrations. However, it is still unclear whether the underlying structure of the training data, architecture, or inductive biases instilled by choice of pre-training learning algorithms cause these intriguing properties to emerge. How can we encourage the emergence of such a general-purpose representation for visual data? In this work, we take an explicit approach based on the formalism presented in the previous section. Having established the desideratum (Eq. 2), we set up a learning procedure called _Im-Promptu_ that trains the model to generate analogy completions from image prompts. To this end, given a task set \(\mathcal{R}\), in each minibatch, we sample an analogy \(A:B::C:D\) that follows a latent primitive \(r(.)\in\mathcal{R}\). Given a loss criterion, \(\mathcal{L}\), we make the following stochastic gradient updates of the model parameters:
\[\{\phi^{t+1},\alpha^{t+1},\theta^{t+1}\}=\{\phi^{t},\alpha^{t},\theta^{t}\}\;- \frac{\alpha}{N}\sum_{i=1}^{N}\nabla_{\theta^{t},\alpha^{t},\phi^{t}} \mathcal{L}\left(D,\mathcal{M}_{\phi^{t},\alpha^{t},\theta^{t}}\left(A,B,C \right)\right) \tag{3}\]
where the step size \(\alpha\) is a hyperparameter. The full _Im-Promptu_ algorithm is laid out in Algorithm 1.
``` Input:\(\mathcal{R}\), \(\mathcal{R}\), \(\mathcal{R}
The encoder \(e_{\phi}(.)\) and decoder \(d_{\theta}(.)\) are convolutional and deconvolutional networks, respectively. The inference network \(f_{\alpha}(.)\) and executor network \(h_{\beta}(.)\) are modeled as multi-layered perceptrons. Here, \([\ ]\) denotes the concatenation operator. This architecture is detailed in Appendix D.1.
```
Require: Task set \(\mathcal{R}\), Step Size \(\alpha\) Initialize parameters \(\theta,\alpha,\phi\) of model \(\mathcal{M}\) while not done do Sample primitive task \(r(.)\sim\mathcal{R}\) Sample \(N\) input image pairs \(X_{1}^{1:N},X_{2}^{1:N}\sim\mathcal{P}_{train}\) for\(i\) in \(1\cdots N\)do Sample analogy \(\zeta^{i}\triangleq x_{1}^{i}:r(x_{1}^{i})::x_{2}^{i}:r(x_{2}^{i})\triangleq A :B::C:D\) endfor Update \(\{\theta,\alpha,\phi\}=\{\theta,\alpha,\phi\}-\frac{\alpha}{N}\nabla_{\theta, \alpha,\phi}\sum_{i}\mathcal{L}_{\mathcal{M}}(\zeta^{i})\) endwhile
```
**Algorithm 1** Im-Prompt Learning Algorithm
### Dipainting Model:
Inspired by recent work on inpainting for In-Context learning [47], this agent (Inpaint., visualized in Appendix D.2) is modeled via an autoencoder architecture that fills in missing patches of an image. The architecture is composed of a discrete variational autoencoder (dVAE) that discretizes the input patches and a Vision Transformer [60] auto-encoder to impute missing values. While the monolithic agent separately encodes the components of the visual analogy, this model jointly represents them as a \(2\times 2\) image grid. The model is pre-trained using the masked autoencoder reconstruction task [61]. Subsequently, at inference time, the analogy solution is represented as a missing image in the grid. Details have been given in Appendix D.2.
### Object-Centric Learner (OCL):
This agent, as depicted in Fig. 3, is a strongly compositional learner that uses object-centric inductive biases. The encoder and decoder of the framework in Section 4.1 are parametrized using Slot Attention Transformer (SLATE) [7] (see Appendix C.2) as follows:
\[\mathcal{E}_{\phi}(.)\triangleq SA_{\phi}(f_{\phi}^{\text{dVAE}}(.)),\ \mathcal{D}_{\theta}(.)\triangleq g_{\theta}^{dVAE}(g_{\theta}^{GPT}(.)) \tag{5}\]
A Context Encoder Transformer (CET) \(\triangleq\mathcal{T}_{\alpha}(.\,.\,.)\) (.) with interleaved layers of: \((a)\) cross-attention on the context slots and \((b)\) self-attention, induces sparse modifications on the query slots to complete the analogy.
\[S_{1:N}^{A}=\mathcal{E}_{\phi}(A),\ S_{1:N}^{B}=\mathcal{E}_{\phi}(B),\ S_{1:N}^{C}=\mathcal{E}_{\phi}(C) \tag{6}\]
\[S_{1:N}^{D}=CET\left(\text{query}=S_{1:N}^{C},\text{keys, values}=S_{1:N}^{A},S_{1:N}^{B}\right) \tag{7}\]
The latent sequence \(\hat{Z}_{D}\) predicted by Image-GPT is mapped by the dVAE back to the pixel space \(\hat{D}\).
\[\hat{Z}_{D}=g_{\theta}^{GPT}(S_{1:N}^{D}),\ \hat{D}=g_{\theta}^{dVAE}(\hat{Z}_{D}) \tag{8}\]
The CET forward pass is shown in Appendix D.3.
### Sequential Prompter:
This (Seq., visualized in Appendix D.4) is an ablation over the OCL that replaces CET with an LLM-like sequence prompt obtained from a simple concatenation of slots of images \(A\), \(B\), and \(C\), injected with position embeddings \(p_{i}\).
\[\mathcal{T}_{\alpha}(.,.,.)\triangleq[S_{1:N}^{A}\,S_{1:N}^{B},\ S_{1:N}^{C}]+p_{i} \tag{9}\]
While the OCL explicitly encodes the entire context into a fixed-length slot sequence via the CET, the ablated agent models a longer concatenated sequence.
### Patch Learner:
This agent straddles the compositionality extremes of Monolithic Learners and OCLs, and uses image patch abstractions. The SA module from OCL is ablated to get a discrete sequence of \(4\times 4\) patch latents as the degree of compositional granularity \(\mathcal{E}_{\phi}(.)\triangleq f_{\phi}^{\text{dVAE}}(.)\). A larger image implies longer sequences for this modeling choice.
## 6 Experiments
In this section, we set up experiments to answer (1) whether analogical reasoning enables in-context generalization, (2) does such a generalization require key mechanisms from LLMs, namely compositionality and attention, (3) if compositionality is a key ingredient, what is the correct granularity for visual tokens, and (4) what are the limits to the generalization.
**6.1**: **Training Setup:** We trained each agent on the primitive set of tasks across the three benchmarks using _Im-Promptu_ (Section 4.2). The SA-based object-centric encoders used in the OCL and Sequential Prompter were pre-trained as a SLATE autoencoder, as described by the authors in [7]. On the other hand, the Patch Learner was trained completely from scratch. The loss metric \(\mathcal{L}\) used to train the agents was cross-entropy (CE) loss between the true latent sequence \(Z_{D}\) and the predicted solution \(\hat{Z}_{D}\) obtained from Image-GPT, i.e., \(\mathcal{L}_{impromptu}=CE(Z_{D},\hat{Z}_{D})\).
In addition to the above loss, the dVAE was trained using the mean-squared error (MSE) loss over the raw pixel space to yield the full loss function \(\mathcal{L}=\mathcal{L}_{impromptu}+MSE(D,\hat{D})\). For inference, answers of the transformer-based agents were sampled from the Image-GPT decoder using top-\(k\) nucleus sampling [62]. Hyperparameters for training and inference have been laid out in Appendix E.
**6.2**: **Primitive Task Extrapolation:** In the first out-of-distribution paradigm, we tested the ability of agents to apply learned rules to different domain pairs. In order to do this, we held out \(20\%\) of source-target pairs (see Section 3 for the definition of source and target) from the training primitives and tested the ability of agents to generalize from learned rules to unseen source-target pairs. For example, the object color property in the 3D Shapes benchmark can take \(10\) unique values and \(10\times 9=90\) source-target combinations. Thus, an agent was trained on only \(90\times 0.8=72\) pairs for object-hue primitives. We made sure that each value in the domain set was shown at least once as either the target or the source.
Fig. 4(a) plots scores of different agents across benchmarks against two key metrics: (1) MSE (lower is better) that quantitatively compares the construction against the ground truth and (2) Frechet inception distance (FID, lower is better) score to measure the perceptual quality of the composition. We make several interesting observations:
**(R1.1) Simple pixel manipulation produces implausible outputs.** The pixel baseline has poor MSE scores (\(\sim\) 138 for 3D Shapes, \(\sim\) 895 for BitMoji) with clearly apparent artifacts in the output. While linear pixel transformations are sufficient when an object is independent of the remaining entities, it is unable to model complex dependencies (see Fig. 4(b),(d) under 'Pixel').
**(R1.2) Inpainting model struggles to generalize beyond i.i.d. inputs.** We observed that the Inpainting model is able to cogently complete analogies on the i.i.d. validation set (see Appendix F.1 for outputs). However, when tested on extrapolated primitives, it only fills in the high-level structure (see Fig. 4 (b), (c)) and, in the case of more complex inputs, produces entirely incoherent outputs (Fig. 4 (d)). We posit that a top-down method like inpainting pre-ordains larger datasets for improved generalization and, as such, suffers from sample inefficiency.
**(R1.3) Monolithic Learners are reasonably effective at generalizing from learned rules on structured stimuli.** These agents have the lowest MSE scores on 3D Shapes and BitMoji Faces, both spatially consistent datasets. However, the output deteriorated over CLEVr Objects where the
Figure 3: The OCL uses compositional object vectors to answer image prompts. (**1**) The encoder \(\mathcal{E}_{\phi}(.)\) is a pre-trained SLATE [7] encoder that uses a dVAE to obtain a latent discrete sequence for the input image and runs SA [6] over the sequence to extract object-centric slots. (**2**) The CET modifies query slots \(S^{\mathcal{C}}_{1:N}\) via cross-attention over the support prompt slots \(S^{A}_{1:N},S^{B}_{1:N}\). (**3**) The modified slots are used to prompt an Image-GPT [30] that outputs the latent discrete sequence of the task solution \(D\) that is decoded by the dVAE decoder.
latent scene configuration is diverse. The edges of the generated shapes and lighting details were significantly blurred, and occluded objects were often fused together (FID \(\sim\) 92, worse than Pixel baseline, Fig. 4(d) under 'Monolithic').
**(R1.4) Patch Learner and Sequential Prompter benefit from additional examples.** For these agents, the addition of context examples was markedly apparent from the significant MSE drops for \(>\) 1 shot (see Fig. 4(a)). We posit that since these models learn from longer contexts, additional examples are particularly effective in regularizing the output sequence manifold of Image-GPT.
### Composite Task Extrapolation:
In this generalization paradigm, we probed the compositional generalization capabilities of the agents. A \(k\)-composite task (\(k\geq 2\), see Section 3) is an unordered composite function of \(k\) primitives. The value of \(k\) is varied across each benchmark, where higher values of \(k\) yield visually complex contexts. To solve such a task, the agent must identify the underlying primitive relations and combinatorially modify the query.
In Fig. 5(a), we note (R1.1) and (R1.3) from the previous setup in this extrapolation paradigm as well, but we make additional key observations next:
**(R2.1) The effect of object-centric biases is strong.** This is indicated by the widening MSE gap between the OCL and monolithic agent for increasing values of \(k\) across all three benchmarks (see Fig. 5(a)-(c)). Moreover, the OCL and the Sequential Prompter generate outputs with consistent perceptual quality (lower FID scores).
**(R2.2) Monolithic and Patch Learners apply shortcuts.** The monolithic agent tends to produce a Frankenstein-like image, as if a superposition of context images (see Fig. 5(d)-(f) under 'Monolithic'). In the case of the CLEVr Objects dataset, both the Monolithic and Patch Learners often simply filled in 'blobs' agnostic to the color and shape of objects (Fig. 5(f), Appendix F.4), with missing shadows leading to lower MSE scores but poor visual quality.
### Analysis:
Next, we discuss key elements required for in-context composition.
**(A1) In-context generalization critically hinges on compositional granularity.** As evident from (R1.2), monolithic representations suffice for reliably learning primitive composition rules over simple contexts. However, they suffer from fidelity issues when the visual space becomes more complex. Beyond primitive generalization, patch abstractions are effective when the underlying space
Figure 4: Primitive task extrapolation: (a) Plot showing scores where the left y-axis and the bars represent the MSE scores, while the right y-axis and the dotted line denote the FID scores. (b)-(d) Comparison of agent solutions with the dotted red circle representing anomalies with respect to the ground truth.
has object-patch equivalence, as evident from the CLEVr Objects dataset (Fig. 5(c)). However, even with object-patch equivalence, the Patch Learner omits the inter-object dependencies (e.g., object occlusions and shadows). It further fails to learn complex and diverse objects spanning multiple patches reliably (does poorly on the BitMoji Faces dataset with diverse facial components). Slot-based compositionality with the CET not only enables the generalization of the learned primitives but also enables combinatorial generalization to composite tasks that require abstracting out the composition rule over each entity in isolation as well as composing the resultant interplay between them to generate a globally consistent output.
**(A2) Encoding the context via cross-attention leads to sample efficiency.** Longer slot sequences in the Sequential Prompter require multi-shot contexts across all benchmarks. The sample inefficiency was particularly observed on the CLEVr Objects dataset that uses six slots and, hence, much longer sequences. Cross-attention via CET is essential for implicit inference, enabling the learner to encode specific objects from the context.
### 'Counterfactual Prompt Engineering' with Im-Promptu:
LLMs have spurred interest in several priming techniques [63, 64] that can 'engineer' the model into following user instructions. While this natural-language-based programming language provides an intuitive interface for tasks that have a fundamental degree of language abstraction (writing code, summarization, symbolic
Figure 5: Composite task extrapolation: (a)-(c) Plots showing scores across benchmarks where the left y-axis and the bars represent the MSE scores, while the right y-axis and the dotted line denote the FID scores. \(k\) denotes the level of compositeness. (d)-(f) Comparison of agent solutions with the dotted red circle representing anomalies with respect to the ground truth.
manipulation, etc.), it is not _a priori_ obvious that textual descriptions should also scaffold image generators. Coming up with natural-language prompts for complex visual entities is often tedious and unintuitive. Instead, explaining an image is naturally easier from its entities and composition over them1. To this end, _Im-Promptu_ provides an avenue for generating counterfactuals via image prompts. In Fig. 6(a), we demonstrate the creation of a scene from its object components via the OCL. As a point of comparison, in Fig. 6(b), using image-inpainting and natural language prompts with DALL-E [9] yields an unreliable output with distorted objects. More practically, the OCL reduces human effort significantly in rendering images with desired properties by using existing visual assets. The procedure to generate scenes from OCL can be found in Appendix G.
Footnote 1: “No one is an artist unless he carries his picture in his head before painting it, and is sure of his method and composition” – Claude Monet
## 7 Conclusion
This work investigated whether analogy-solving can enable in-context compositional generalization over visual entities. We first designed a new test suite. We probed if attention and compositionality, key ingredients in LLMs, also play a crucial role in the visual domain. We transported these ingredients from language to model several agents. Our experiments showed that non-compositional agents do not generalize well beyond primitive task extrapolation. Even with compositionality baked in, the performance of patch-based learners depends on idiosyncrasies of the visual structure. However, we found that object-centric biases consistently facilitate an implicit understanding of composition rules and generate outputs with global semantic consistency. Our ablation of the CET suggested that cross-attention plays a crucial role, just like language models, which posits the importance of analogy and attention in understanding the underpinnings of in-context learning. Future research challenges include collecting real-world primitives from photo-realistic graphic engines or human labels, improving object-centric inductive biases for real-life entities, and designing better prompting techniques. We hope that our work will spur further research on visual in-context learning.
## Acknowledgments and Disclosure of Funding
We would like to thank Sreejan Kumar for helpful discussions and insights throughout the course of this work. The experiments reported in this article were performed on the computational resources managed and supported by Princeton Research Computing at Princeton University. This project was funded by NSF Grant No. CCF-2203399 and ONR Grant No. N00014-18-1-2873. JS gratefully acknowledges financial support from the Schmidt DataX Fund at Princeton University made possible through a major gift from the Schmidt Futures Foundation.
Figure 6: Scene creation from object components: (a) OCL trained via _Im-Promptu_ is used to generate a scene of objects. Image-Prompts engineered using existing assets reliably generate the scene object by object. (b) The same scene is generated via language grounding and in-painting features of DALL-E [9]. Object properties are distorted and specifying the location of objects is tedious via language.
## References
* [1] Douglas R. Hofstadter and Emmanuel Sander. _Surfaces and Essences: Analogy as the Fuel and Fire of Thinking_. Basic Books, 2013.
* [2] Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One Shot Learning of Simple Visual Concepts. _Cognitive Science_, 33, 2011.
* [3] Brenden M. Lake, Tal Linzen, and Marco Baroni. Human Few-Shot Learning of Compositional Instructions. _CoRR_, abs/1901.04587, 2019.
* [4] Christopher P. Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. MONet: Unsupervised Scene Decomposition and Representation. _CoRR_, abs/2303.08774, 2019.
* [5] Klaus Greff, Raphael Lopez Kaufman, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner. Multi-Object Representation Learning with Iterative Variational Inference. _CoRR_, abs/1903.00450, 2019.
* [6] Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-Centric Learning with Slot Attention. _CoRR_, abs/2006.15055, 2020.
* [7] Gautam Singh, Fei Deng, and Sungjin Ahn. Illiterate DALL-E Learns to Compose. _CoRR_, abs/2110.11405, 2021.
* [8] Jindong Jiang, Fei Deng, Gautam Singh, and Sungjin Ahn. Object-Centric Slot Diffusion. _CoRR_, abs/2303.10834, 2023.
* [9] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-Shot Text-to-Image Generation. _CoRR_, abs/2102.12092, 2021.
* [10] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical Text-Conditional Image Generation with CLIP Latents. _CoRR_, abs/2204.06125, 2022.
* [11] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. _CoRR_, abs/2205/11487, 2022.
* [12] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-Resolution Image Synthesis with Latent Diffusion Models. _CoRR_, abs/2112.10752, 2021.
* [13] Gary Marcus, Ernest Davis, and Scott Aaronson. A Very Preliminary Analysis of DALL-E 2. _CoRR_, abs/2202.13807, 2022.
* [14] Fei Xu, Elizabeth S. Spelke, and Sydney Goddard. Number Sense in Human Infants. _Developmental Science_, 8(1):88-101, 2005.
* [15] Tom B. Brown et al. Language Models are Few-Shot Learners. _CoRR_, abs/2005.14165, 2020.
* [16] Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. [https://github.com/kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax), May 2021.
* [17] Aakanksha Chowdhery et al. PaLM: Scaling Language Modeling with Pathways. _CoRR_, abs/2204.02311, 2022.
* [18] Opher Lieber, Or Sharir, Barak Lenz, and Yoav Shoham. Jurassic-1: Technical Details and Evaluation. [https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf](https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/61138924626a6981ee09caf6_jurassic_tech_paper.pdf), 2022.
* [19] Teven Le Scao et al. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. _CoRR_, abs/2211.05100, 2022.
* [20] OpenAI. GPT-4 Technical Report. _CoRR_, abs/2303.08774, 2023.
* [21] Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, and Tengyu Ma. Larger Language Models do In-context Learning Differently. _CoRR_, abs/2303.03846, 2023.
* [22] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Words with Subword Units. _CoRR_, abs/1508.07909, 2016.
* [23] Mike Schuster and Kaisuke Nakajima. Japanese and Korean Voice Search. In _IEEE International Conference on Acoustics, Speech and Signal Processing_, pages 5149-5152, 2012.
* [24] Taku Kudo and John Richardson. SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing. _CoRR_, abs/1808.06226, 2018.
* [25] Chris Burgess and Hyunjik Kim. 3D Shapes Dataset. [https://github.com/deepmind/3dshapes-dataset/](https://github.com/deepmind/3dshapes-dataset/), 2018.
* [26] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. _CoRR_, abs/1612.06890, 2016.
* [27] Klaus Greff, Sjoerd van Steenkiste, and Jurgen Schmidhuber. On the Binding Problem in Artificial Neural Networks. _CoRR_, abs/2012.05208, 2020.
* [28] Martin Engelcke, Adam R. Kosiorek, Oiwi Parker Jones, and Ingmar Posner. GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations. _CoRR_, abs/1907.13052, 2019.
* [29] Martin Engelcke, Oiwi Parker Jones, and Ingmar Posner. GENESIS-V2: Inferring Unordered Object Representations without Iterative Refinement. _CoRR_, abs/2104/09958, 2021.
* [30] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative Pretraining From Pixels. In Hal Daume III and Aarti Singh, editors, _Proceedings of the 37th International Conference on Machine Learning_, volume 119 of _Proceedings of Machine Learning Research_, pages 1691-1703. PMLR, 13-18 July 2020.
* [31] Michael Chang, Thomas L. Griffiths, and Sergey Levine. Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation. _CoRR_, abs/2207.00787, 2022.
* [32] James C. R. Whittington, Rishabh Kabra, Loic Matthey, Christopher P. Burgess, and Alexander Lerchner. Constellation: Learning Relational Abstractions over Objects for Compositional Imagination. _CoRR_, abs/2107.11153, 2021.
* [33] J. P. Guilford. Creativity. _American Psychologist_, 5(9):444, 1950.
* [34] Dedre Gentner. Structure-Mapping: A Theoretical Framework for Analogy. _Cognitive Science_, 7(2):155-170, 1983.
* [35] Arthur B. Markman and Dedre Gentner. Structural Alignment in Analogy and Similarity. _American Psychologist_, 48(1):40, 1993.
* [36] Keith J. Holyoak and Paul Thagard. _Mental Leaps: Analogy in Creative Thought_. MIT Press, 1995.
* [37] W. R. Reitman. _Cognition and Thought: An Information Processing Approach_. John Wiley and Sons, 1965.
* [38] Patrick H. Winston. Learning by Creatifying Transfer Frames. _Artificial Intelligence_, 10(2):147-172, 1978.
* [39] Brian Falkenhainer, Kenneth D. Forbus, and Dedre Gentner. The Structure-Mapping Engine: Algorithm and Examples. _Artificial Intelligence_, 41(1):1-63, 1989.
* [40] Jeffrey E. Hummel and Keith J. Holyoak. Distributed Representations of Structure: A Theory of Analogical Access and Mapping. _Psychological Review_, 104(3):427-466, 1997.
* [41] Graeme S. Halford, William H. Wilson, Jian Guo, Ross W. Gayler, Janet Wiles, and J. E. M. Stewart. _Connectionist Implications for Processing Capacity Limitations in Analogies_, pages 363-415. Ablex Publishing, 1994.
* [42] Douglas R. Hofstadter. _Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought_. Basic Books, 1995.
* [43] Stefan Depeweg, Constantin A. Rothkopf, and Frank Jakel. Solving Bongard Problems with a Visual Language and Pragmatic Reasoning. _CoRR_, abs/1804.04452, 2018.
* [44] Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. RAVEN: A Dataset for Relational and Analogical Visual rEasoNing. _CoRR_, abs/1903.02741, 2019.
* [45] Felix Hill, Adam Santoro, David G. T. Barrett, Ari S. Morcos, and Timothy Lillicrap. Learning to Make Analogies by Contrasting Abstract Relational Structure. _CoRR_, abs/1902.00120, 2019.
* [46] Scott E. Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep Visual Analogy-Making. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 28. Curran Associates, Inc., 2015.
* [47] Amir Bar, Yossi Gandelsman, Trevor Darrell, Amir Globerson, and Alexei A. Efros. Visual Prompting via Image Inpainting. _CoRR_, abs/22098.00647, 2022.
* [48] Yilun Du, Shuang Li, and Igor Mordatch. Compositional Visual Generation with Energy Based Models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 6637-6647. Curran Associates, Inc., 2020.
* [49] Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B. Tenenbaum. Compositional Visual Generation with Composable Diffusion Models. _CoRR_, abs/2206.01714, 2022.
* [50] Yilun Du, Shuang Li, Yash Sharma, Joshua B. Tenenbaum, and Igor Mordatch. Unsupervised Learning of Compositional Energy Concepts. _CoRR_, abs/2111.03042, 2021.
* [51] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aherman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-Prompt Image Editing with Cross Attention Control. _CoRR_, abs/2208.01626, 2022.
* [52] Dani Valevski, Matan Kalman, Yossi Matias, and Yaniv Leviathan. UniTune: Text-Driven Image Editing by Fine Tuning an Image Generation Model on a Single Image. _CoRR_, abs/2210.09477, 2022.
* [53] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-Based Real Image Editing with Diffusion Models. _CoRR_, abs/2210.09276, 2022.
* [54] Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Scholkopf. Recurrent Independent Mechanisms. _CoRR_, abs/1909.10893, 2020.
* [55] Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Sergey Levine, Charles Blundell, Yoshua Bengio, and Michael Mozer. Object Files and Schemata: Factorizing Declarative and Procedural Knowledge in Dynamical Systems. _CoRR_, abs/2006.16225, 2020.
* [56] Anirudh Goyal, Aniket Didolkar, Nan Rosemary Ke, Charles Blundell, Philippe Beaudoin, Nicolas Heess, Michael Mozer, and Yoshua Bengio. Neural Production Systems: Learning Rule-Governed Visual Dynamics. _CoRR_, abs/2103.01937, 2022.
* [57] Nan Rosemary Ke, Silvia Chiappa, Jane Wang, Anirudh Goyal, Jorg Bornschein, Melanie Rey, Theophane Weber, Matthew Botvinic, Michael Mozer, and Danilo Jimenez Rezende. Learning to Induce Causal Structure. _CoRR_, abs/2204.04875, 2022.
* [58] Snap Inc. Bitmoji. [https://www.bitmoji.com/](https://www.bitmoji.com/), accessed 2023.
* [59] Shivam Garg, Dimitris Tsipras, Percy Liang, and Gregory Valiant. What Can Transformers Learn In-Context? A Case Study of Simple Function Classes. _CoRR_, abs/2208.01066, 2022.
* [60] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. _CoRR_, abs/2010.11929, 2021.
* [61] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked Autoencoders Are Scalable Vision Learners. _CoRR_, abs/2111.06377, 2021.
* [62] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The Curious Case of Neural Text Degeneration. _CoRR_, abs/1904.09751, 2020.
* [63] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. _CoRR_, abs/2107.13586, 2021.
* [64] Simon Willison. In Defense of Prompt Engineering. [https://simonwillison.net/2023/Feb/21/in-defense-of-prompt-engineering/](https://simonwillison.net/2023/Feb/21/in-defense-of-prompt-engineering/), February 2023.
* [65] Jason Tyler Rolfe. Discrete Variational Autoencoders. _CoRR_, abs/1609.02200, 2016.
* [66] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural Discrete Representation Learning. _CoRR_, abs/1711.00937, 2017. | ## Review
### Summary
This paper investigates the potential of analogical reasoning to enable in-context compositional generalization in visual modalities. It introduces a framework called Im-Promptu and constructs three benchmarks focused on visual analogy-making, including 3D shapes, BitMoji Faces, and CLEVR. The authors explore various visual representations, such as pixel-based and object-centric methods, to test the efficacy of their approach. The findings highlight the interaction between the degree of compositionality and extrapolation capabilities, ultimately aiming to contribute valuable insights to the field of visual in-context learning.
### Strengths
- The paper constructs three benchmarks to test compositional generalization in visual analogy-making, which is a novel area of study in computer vision.
- The introduction of the Im-Promptu framework is a significant contribution, providing a new approach for in-context learning.
- The experimental evaluations demonstrate the effectiveness of object-centric representations for compositional generalization.
- The work emphasizes the importance of in-context learning in visual modalities, which is an under-explored area compared to language models.
### Weaknesses
- The novelty of the proposed Im-Promptu framework is limited, as it resembles existing methods in visual analogy-making without clear differentiation.
- The benchmarks primarily utilize synthetic data, raising concerns about the generalizability of the results to real-world scenarios.
- The paper lacks comprehensive evaluations across a wider range of tasks beyond the three benchmarks presented.
- The clarity of the paper could be improved, as some sections are dense with technical details and may confuse readers.
### Questions
- What are the fundamental differences between the proposed Im-Promptu learning and existing visual analogy-making frameworks?
- What is the main technical contribution of this paper?
- Is there any ablation study regarding the number of in-context demonstrations used?
- Can the output of tasks be text rather than images in the proposed framework?
- How is the position-preserving functionality in the generated images realized by the Object-Centric Learner?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is sound but lacks sufficient evaluation on real-world data to support its claims.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper is generally well-structured, some sections are overly complex and could benefit from clearer explanations.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper presents novel ideas and frameworks but lacks clear articulation of its unique contributions compared to existing works.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper has a solid foundation with significant contributions but requires more comprehensive evaluations and clarifications.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates originality and soundness in exploring in-context learning in visual modalities. While it shows significant potential, particularly in addressing a novel area, it requires further evaluation on real-world datasets and clearer presentation of its contributions to fully establish its impact.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Time-uniform confidence bands for the CDF under nonstationarity
Paul Mineiro
Microsoft Research
[email protected] &Steve Howard
The Voleon Group
[email protected]
###### Abstract
Estimation of a complete univariate distribution from a sequence of observations is a useful primitive for both manual and automated decision making. This problem has received extensive attention in the i.i.d. setting, but the arbitrary data dependent setting remains largely unaddressed. We present computationally felicitious time-uniform and value-uniform bounds on the CDF of the running averaged conditional distribution of a sequence of real-valued random variables. Consistent with known impossibility results, our CDF bounds are always valid but sometimes trivial when the instance is too hard, and we give an instance-dependent convergence guarantee. The importance-weighted extension is appropriate for estimating complete counterfactual distributions of rewards given data from a randomized experiment, e.g., from an A/B test or a contextual bandit.
## 1 Introduction
What would have happened if I had acted differently? Although as old as time itself, successful companies have recently embraced this question via offline estimation of counterfactual outcomes using data from existing randomized experiments or contextual bandits. The problem is important in diverse domains such as software testing (Lindon et al., 2022; Wang and Chapman, 2022), portfolio management (Liu, 2021), and medicine (Shen et al., 2022). These experiments are run in the real (digital) world, which is rich enough to demand non-asymptotic statistical techniques under non-parametric and non-stationary (i.e., not i.i.d.) models. Although existing methods apply for estimating _average_ outcomes in this general setting (either under the observed distribution or counterfactual ones), estimating a complete _distribution_ of outcomes is heretofore only possible with additional assumptions: see Table 1 for a summary and Section 5 for complete discussion of related work.
To fix ideas, we briefly describe an application from Lindon et al. (2022) in the context of canary testing: rolling out changes in an online service to a small, random subset of users in order to detect accidental performance regressions while minimizing effect on overall user experience. The metric of interest measures latency for fetching content from the service. It is common to look beyond the mean of the latency distribution and especially to check for regressions in upper quantiles. As such, the authors choose to estimate bounds on the entire CDF of this latency metric under both the control and treatment arms and check for a statistically significant differences at any point in the CDF. The hope is to detect regressions as soon as possible, often within seconds or minutes, so the authors employ a sequential method which allows an automated system to continuously update the CDF bounds as data accumulates and to stop as soon as a significant regression is detected. Statistically, this translates into the requirement of confidence bands for the CDF which are both uniform over time (valid after every update) and uniform over values (so we can check for regressions at any quantile). We seek such bounds whose statistical validity is guaranteed under a minimum of assumptions.
Intriguingly, this problem is provably impossible in the general data dependent setting (Rakhlin et al., 2015). Consequently, our bounds always achieve non-asymptotic coverage, but may converge to zero width slowly or not at all, depending on the hardness of the instance. We call this design principle AVAST (\(\Delta\)lways \(\Delta\)laid \(\Delta\)nd Sometimes \(\mathsf{Trivial}\)).
#### Contributions
1. In Section 3.2 we provide a time- and value-uniform upper bound on the CDF of the averaged historical conditional distribution of a discrete-time real-valued random process. Consistent with the lack of sequential uniform convergence of linear threshold functions (Rakhlin et al., 2015), the bounds are \(\underline{\text{Always}}\)\(\underline{\text{Valid}}\) (see Theorem 3.1) And \(\underline{\text{Sometimes}}\)\(\underline{\text{Trivial}}\), i.e., the width guarantee is instance-dependent (see Theorem 3.3) and may not converge to zero width in the infinite data limit. When the data generating process is smooth with respect to the uniform distribution on the unit interval, the bound width adapts to the unknown smoothness parameter, following the framework of smoothed online learning (Rakhlin et al., 2011; Haghtalab et al., 2020, 2022b, 20, 2022).
2. In Section 3.3 we extend the previous technique to distributions with support over the entire real line, and further to distributions with a known countably infinite or unknown nowhere dense set of discrete jumps; with analogous instance-dependent guarantees.
3. In Section 3.4 we extend the previous techniques to importance-weighted random variables, achieving our ultimate goal of estimating a complete counterfactual distribution of outcomes.
We exhibit our techniques in various simulations in Section 4. Computationally our procedures have comparable cost to point estimation of the empirical CDF, as the empirical CDF is a sufficient statistic.
## 2 Problem Setting
Let \(\left(\Omega,\mathcal{F},\left\{\mathcal{F}_{t}\right\}_{t\in\mathbb{N}}, \mathbb{P}\right)\) be a probability space equipped with a discrete-time filtration, on which let \(X_{t}\) be an adapted, real-valued random process. Let \(\mathbb{E}_{t}\left[\cdot\right]\doteq\mathbb{E}\left[\cdot|\mathcal{F}_{t}\right]\). The quantity of interest is the (random) map \(\overline{\text{CDF}}_{t}:\mathbb{R}\rightarrow\left[0,1\right]\), the CDF of the averaged historical conditional distribution at time \(t\):
\[\overline{\text{CDF}}_{t}(v)\doteq\frac{1}{t}\sum_{s\in t}\mathbb{E}_{s-1} \left[1_{X_{s}\leq v}\right]. \tag{1}\]
We desire simultaneously time- and value-uniform bounds which hold with high probability, i.e., adapted sequences of maps \(L_{t},U_{t}:\mathbb{R}\rightarrow\left[0,1\right]\) satisfying
\[\mathbb{P}\left(\tfrac{\text{\tiny{freq}}\mathbb{N}}{\text{\tiny{freq}}\mathbb{ N}}:L_{t}(v)\leq\overline{\text{CDF}}_{t}(v)\leq U_{t}(v)\right)\geq 1-2\alpha. \tag{2}\]
Note that the estimand \(\overline{\text{CDF}}_{t}\) is changing at each time step as we incorporate the conditional distribution of the latest observation into our historical average. The maps \(L_{t},U_{t}\) provide a sequence of confidence bands which contain this sequence of changing CDFs uniformly over time with high probability.
In the i.i.d. setting, Equation (1) is deterministic and independent of \(t\), reducing to the CDF of the (unknown) generating distribution. In this setting, the classic results of Glivenko (1933) and
\begin{table}
\begin{tabular}{l|c c c|c c|c} \hline \hline Reference & Time- & Non- & Non- & Non- & Counter- & \(w_{\max}\)- \\ & Uniform? & stationary? & asymptotic? & parametric? & factual? & free? \\ \hline HR22 & ✓ & & ✓ & ✓ & & N/A \\ HLLA21 & & & ✓ & ✓ & ✓ & \\ UnO21, [IID] & & & ✓ & ✓ & ✓ & ✓ \\ UnO21, [NS] & & ✓ & & & ✓ & ✓ \\ WS22, [84] & ✓ & & ✓ & ✓ & ✓ & ✓ \\ this paper & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison to prior art for quantile-uniform CDF estimation. _Time-uniform_: a sequence of confidence bands whose coverage holds uniformly over time with high probability. _Nonstationary_: does not require an i.i.d. assumption. _Non-asymptotic_: guarantees hold at all sample sizes. _Non-parametric_: guarantees apply over an infinite-dimensional class of distributions. _Counterfactual_: method applies to estimating a distribution other than the one which generated the observed data, via importance weighting. \(w_{\max}\)_-free_: guarantee applies without a bound on the maximum importance weight. See Section 5 for details.
Cantelli (1933) established uniform convergence of linear threshold functions; subsequently the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality characterized fixed-time and value-uniform convergence rates (Dvoretzky et al., 1956; Massart, 1990); extended later to simultaneously time- and value-uniform bounds (Howard and Ramdas, 2022). The latter result guarantees an \(O(t^{-1}\log(\log(t)))\) confidence interval width, matching the limit imposed by the Law of the Iterated Logarithm.
AVAST principleIn contrast, under arbitrary data dependence, linear threshold functions are not sequentially uniformly convergent, i.e., the averaged historical empirical CDF does not necessarily converge uniformly to the CDF of the averaged historical conditional distribution (Rakhlin et al., 2015). Consequently, additional assumptions are required to provide a guarantee that the confidence width decays to zero. In this paper we design bounds that are Always Valid And Sometimes Trivial, i.e., under worst-case data generation, \(\sup_{v}\lvert U_{t}(v)-L_{t}(v)\rvert=O(1)\) as \(t\to\infty\). Fortunately our bounds are also equipped with an instance-dependent width guarantee based upon the smoothness of the distribution to a reference measure qua Definition 3.2.
Additional NotationLet \(X_{a:b}=\{X_{s}\}_{s=a}^{b}\) denote a contiguous subsequence of a random process. Let \(\mathbb{P}_{t}\) denote the average historical conditional distribution, defined as a (random) distribution over the sample space \(\mathbb{R}\) by \(\mathbb{P}_{t}(A)\doteq t^{-1}\sum_{s\leqslant t}\mathbb{E}_{s-1}\left[1_{X_{ s}\in A}\right]\) for a Borel subset \(A\) (note \(\mathbb{P}_{t}\) represents the entire historical average while \(\mathbb{E}_{t}\) corresponds to a single conditional distribution).
## 3 Derivations
### High Level Design
Our approaches work as reductions, achieving the value- and time-uniform guarantee of Equation (2) by combining bounds \(\Lambda_{t},\Xi_{t}\) that satisfy a time-uniform guarantee at any fixed value \(\rho\),
\[\mathbb{P}\left(\forall t\in\mathbb{N}:\Lambda_{t}(\rho)\leqslant\overline{ \operatorname{CDF}}_{t}(\rho)\leqslant\Xi_{t}(\rho)\right)\geqslant 1-\delta( \rho). \tag{3}\]
The bounds \(\Lambda_{t},\Xi_{t}\) are tools for estimating a sequence of _scalars_, in this case \((\overline{\operatorname{CDF}}_{t}(\rho))_{t=1}^{\infty}\) for a fixed value \(\rho\). We show how to extend such tools to the more difficult problem of estimating a sequence of (cumulative distribution) _functions_.
There are multiple existing approaches to obtaining the guarantee of Equation (3): we provide a self-contained introduction in Appendix A. For ease of exposition, we will only discuss how to construct
Figure 1: Visualization of Algorithm 1. The values of interest are uncountably infinite; the algorithm allocates probability to maintain upper bounds on a countably infinite set of points \(\rho\) at different resolution levels via the monotonicity of \(\overline{\operatorname{CDF}}_{t}(v)\). As resolution increases, the value \(\rho\) better approximates \(v\), but the allocated probability decreases; the algorithm chooses the tightest of available bounds. Shaded nodes would be consulted for an upper bound for \(v=5/17\).
a time- and value-uniform upper bound by combining fixed-value, time-uniform upper bounds, and defer the analogous lower bound construction to Appendix B.2. Our approach is to compose these fixed-value bounds into a value-uniform bound by taking a union bound over a particular collection of values, leveraging monotonicity of the CDF.
Quantile vs Value SpaceIn the i.i.d. setting, a value-uniform guarantee can be obtained by taking a careful union bound over the unique value associated with each quantile (Howard and Ramdas, 2022). This "quantile space" approach has advantages, e.g., variance based discretization and covariance to monotonic transformations. However, under arbitrary data dependence, the value associated with each quantile can change. Therefore we proceed in "value space". See Appendix A.1 for more details.
### On the Unit Interval
Algorithm 1, visualized in Figure 1, constructs an upper bound on Equation (1) which, while valid for all values, is designed for random variables ranging over the unit interval. For a given value \(v\), it searches over upper bounds on the CDF evaluated at a decreasing sequence of values \(\rho_{1}\geq\rho_{2}\geq\cdots\geq v\) and exploits monotonicity of \(\overline{\mathrm{CDF}}_{t}(v)\). That is, at each level \(d=1,2,\dots\), we construct a discretizing grid of size \(\epsilon(d)\) over the unit interval, and construct a time-uniform upper bound on \(\overline{\mathrm{CDF}}_{t}(\rho)\) for each grid point \(\rho\) using the fixed-value confidence sequence oracle \(\Xi_{t}\). Then, for a given value \(v\), at each level \(d\) we make use of the fixed-value confidence sequence for smallest grid point \(\rho_{d}\geq v\), and we search for the level \(d\) which yields the minimal upper confidence bound. A union bound over the (countably infinite) possible choices for \(\rho_{d}\) controls the coverage of the overall procedure. Because the error probability \(\delta_{d}\) decreases with \(d\) (and the fixed-value confidence radius \(\Xi_{t}\) increases as \(\delta\) decreases), the procedure can terminate whenever no observations remain between the desired value \(v\) and the current upper bound \(\rho_{d}\), as all subsequent bounds are dominated.
The lower bound is derived analogously in Algorithm 2 (which we have left to Appendix B.2 for the sake of brevity) and leverages a lower confidence sequence \(\Lambda_{t}\left(\rho;\delta,\Psi_{t}\right)\) (instead of an upper confidence sequence) evaluated at an increasingly refined lower bound on the value \(\rho\leftarrow\epsilon(d)^{-1}|\epsilon(d)v|\).
**Theorem 3.1**.: _If \(\epsilon(d)\uparrow\infty\) as \(d\uparrow\infty\), then Algorithms 1 and 2 terminate with probability one. Furthermore, if for all \(\rho\), \(\delta\), and \(d\) the algorithms \(\Lambda_{t}(\rho;\delta,\Psi_{t})\) and \(\Xi_{t}(\rho;\delta,\Psi_{t})\) satisfy_
\[P(\forall t:\overline{\mathrm{CDF}}_{t}(\rho)\geq\Lambda_{t}( \rho;\delta,\Psi_{t}))\geq 1-\delta, \tag{4}\] \[P(\forall t:\overline{\mathrm{CDF}}_{t}(\rho)\leq\Xi_{t}(\rho; \delta,\Psi_{t}))\geq 1-\delta, \tag{5}\]
_then guarantee (2) holds with \(U_{t},L_{t}\) given by the outputs of Algorithms 1 and 2, respectively._
Proof.: See Appendix B.3.
Theorem 3.1 ensures Algorithms 1 and 2 yield the desired time- and value-uniform coverage, essentially due to the union bound and the coverage guarantees of the oracles \(\Xi_{t},\Lambda_{t}\). However, coverage is also guaranteed by the trivial bounds \(0\leq\overline{\mathrm{CDF}}_{t}(v)\leq 1\). The critical question is: what is the bound width?
Smoothed Regret GuaranteeEven assuming \(X\) is entirely supported on the unit interval, on what distributions will Algorithm 1 provide a non-trivial bound? Because each \([\Lambda_{t}(\rho;\delta,\Psi_{t}),\Xi_{t}(\rho;\delta,\Psi_{t})]\) is a confidence sequence for the mean of the bounded random variable \(1_{X_{s}\leq\rho}\), we enjoy width guarantees at each of the (countably infinite) \(\rho\) which are covered by the union bound, but the guarantee degrades as the depth \(d\) increases. If the data generating process focuses on an increasingly small part of the unit interval over time, the width guarantees on our discretization will be insufficient to determine the distribution. Indeed, explicit constructions demonstrating the lack of sequential uniform convergence of linear threshold functions increasingly focus in this manner (Block et al., 2022).
Conversely, if \(\forall t:\overline{\mathrm{CDF}}_{t}(v)\) was Lipschitz continuous in \(v\), then our increasingly granular discretization would eventually overwhelm any fixed Lipschitz constant and guarantee uniform convergence. Theorem 3.3 expresses this intuition, but using the concept of smoothness rather than Lipschitz, as smoothness will allow us to generalize further (Rakhlin et al., 2011; Haghtalab et al., 2020, 2022b; Bacon et al., 2022).
**Definition 3.2**.: A distribution \(D\) is \(\xi\)-smooth wrt reference measure \(M\) if \(D\ll M\) and \(\operatorname{ess\,sup}_{M}\left({}^{\text{dD}}\!/_{\text{dM}}\right)\leqslant \xi^{-1}\).
When the reference measure is the uniform distribution on the unit interval, \(\xi\)-smoothness implies an \(\xi^{-1}\)-Lipschitz CDF. However, when the reference measure has its own curvature, or charges points, the concepts diverge. When reading Theorem 3.3, note \(\xi\leqslant 1\) (since the reference measure is a probability distribution) and as \(\xi\to 0\) the smoothness constraint is increasingly relaxed. Thus Theorem 3.3 states "for less smooth distributions, convergence is slowed."
**Theorem 3.3**.: _Let \(U_{t}(v)\) and \(L_{t}(v)\) be the upper and lower bounds returned by Algorithm 1 and Algorithm 2 respectively, when evaluated with \(\epsilon(d)=2^{d}\) and the confidence sequences \(\Lambda_{t}\) and \(\Xi_{t}\) of Equation (15). If \(\forall t:\mathbb{P}_{t}\) is \(\xi_{t}\)-smooth wrt the uniform distribution on the unit interval then_
\[\forall t,\forall v:U_{t}(v)-L_{t}(v)\leqslant \tag{6}\] \[\sqrt{\frac{V_{t}}{t}}+\tilde{O}\left(\sqrt{\frac{V_{t}}{t}}\log \left(\xi_{t}^{-2}\alpha^{-1}t^{3/2}\right)\right),\]
_where \(q_{t}\doteq\overline{\operatorname{CDF}}_{t}(v)\); \(V_{t}\doteq\nicefrac{{1}}{{t}}+\nicefrac{{(q_{t}-1/2)}}{{\log(q_{t}/1-q_{t})}}\); and \(\tilde{O}()\) elides polylog \(V_{t}\) factors._
Proof.: See Appendix C.
Theorem 3.3 matches our empirical results in two important aspects: (i) logarithmic dependence upon smoothness (e.g., Figure 4); (ii) tighter intervals for more extreme quantiles (e.g., Figure 2). Note the choice \(\epsilon(d)=2^{d}\) ensures the loop in Algorithm 1 terminates after at most \(\log_{2}(\Delta)\) iterations, where \(\Delta\) is the minimum difference between two distinct realized values.
Worked ExampleTo build intuition, in Appendix B.1 we explicitly calculate Algorithm 1 for a synthetic data set.
### Extensions
Arbitrary SupportIn Appendix D.1 we describe a variant of Algorithm 1 which uses a countable dense subset of the entire real line. It enjoys a similar guarantee to Theorem 3.3, but with an additional width which is logarithmic in the probe value \(v\): \(\tilde{O}\left(\sqrt{\frac{V_{t}}{t}}\log\left(\left(2+\xi_{t}|v|t^{-1/2} \right)^{2}\xi_{t}^{-2}\alpha^{-1}t^{3/2}\right)\right)\). Note in this case \(\xi_{t}\) is defined relative to (unnormalized) Lebesgue measure and can therefore exceed 1.
Discrete JumpsIf \(\mathbb{P}_{t}\) is smooth wrt a reference measure which charges a countably infinite number of known discrete points, we can explicitly union bound over these additional points proportional to their density in the reference measure. In this case we preserve the above value-uniform guarantees. See Appendix D.2 for more details.
For distributions which charge unknown discrete points, we note the proof of Theorem 3.3 only exploits smoothness local to \(v\). Therefore if the set of discrete points is nowhere dense, we eventually recover the guarantee of Equation (6) after a "burn-in" time \(t\) which is logarithmic in the minimum distance from \(v\) to a charged discrete point.
Figure 3: Nonstationary Polya simulation for two seeds approaching different average conditional CDFs. Bounds successfully track the true CDFs in both cases. See Section 4.2.
Figure 2: CDF bounds approaching the true CDF when sampling i.i.d. from a Beta(6,3) distribution. Note these bounds are simultaneously valid for all times and values.
### Importance-Weighted Variant
An important use case is estimating a distribution based upon observations produced from another distribution with a known shift, e.g., arising in transfer learning (Pan and Yang, 2010) or off-policy evaluation (Waudby-Smith et al., 2022). In this case the observations are tuples \((W_{t},X_{t})\), where the importance weight \(W_{t}\) is a Radon-Nikodym derivative, implying \(\forall t:\mathbb{E}_{t}\left[W_{t}\right]=1\) and a.s. \(W_{t}\geq 0\); and the goal is to estimate \(\overline{\mathrm{CDF}}_{t}(v)=t^{-1}\sum_{s\in t}\mathbb{E}_{s-1}\left[W_{s}1 \chi_{s\leq v}\right]\). The basic approach in Algorithm 1 and Algorithm 2 is still applicable in this setting, but different \(\Lambda_{t}\) and \(\Xi_{t}\) are required. In Appendix E we present details on two possible choices for \(\Lambda_{t}\) and \(\Xi_{t}\): the first is based upon the empirical Bernstein construction of Howard et al. (2021), and the second based upon the DDRM construction of Mineiro (2022). Both constructions leverage the \(L^{*}\) Adagrad bound of Orabona (2019) to enable lazy evaluation. The empirical Bernstein version is amenable to analysis and computationally lightweight, but requires finite importance weight variance to converge (the variance bound need not be known, as the construction adapts to the unknown variance). The DDRM version requires more computation but produces tighter intervals. See Section 4.1 for a comparison.
Inspired by the empirical Bernstein variant, the following analog of Theorem 3.3 holds. Note \(\mathbb{P}_{t}\) is the target (importance-weighted) distribution, not the observation (non-importance-weighted) distribution.
**Theorem 3.4**.: _Let \(U_{t}(v)\) and \(L_{t}(v)\) be the upper and lower bounds returned by Algorithm 1 and Algorithm 2 respectively with \(\epsilon(d)=2^{d}\) and the confidence sequences \(\Lambda_{t}\) and \(\Xi_{t}\) of Equation (18). If \(\forall t:\mathbb{P}_{t}\) is \(\xi_{t}\)-smooth wrt the uniform distribution on the unit interval then_
\[\begin{split}\forall t,&\forall v:U_{t}(v)-L_{t}(v) \leq\\ & B_{t}+\sqrt{\frac{(\tau+V_{t})/t}{t}}\\ &+\tilde{O}\left(\sqrt{\frac{(\tau+V_{t})/t}{t}}\log\left(\xi_{t} ^{-2}\alpha^{-1}\right)\right)\\ &+\tilde{O}(t^{-1}\log\left(\xi_{t}^{-2}\alpha^{-1}\right)),\end{split} \tag{7}\]
_where \(q_{t}\doteq\overline{\mathrm{CDF}}_{t}(v)\), \(K(q_{t})\doteq{(q_{t}-1/2)/\log(q_{t}/_{1-q_{t}})}\); \(V_{t}=O\left(K(q_{t})\sum_{s\leq t}W_{s}^{2}\right)\), \(B_{t}\doteq t^{-1}\sum_{s\leq t}(W_{s}-1)\), and \(\tilde{O}()\) elides polylog \(V_{t}\) factors._
Proof.: See Appendix E.2.
Theorem 3.4 exhibits the following key properties: (i) logarithmic dependence upon smoothness; (ii) tighter intervals for extreme quantiles and importance weights with smaller quadratic variation; (iii) no explicit dependence upon importance weight range; (iv) asymptotic zero width for importance weights with sub-linear quadratic variation.
Additional RemarksFirst, the importance-weighted average CDF is a well-defined mathematical quantity, but the interpretation as a counterfactual distribution of outcomes given different actions in the controlled experimentation setting involves subtleties: we refer the interested reader to Waudby-Smith et al. (2022) for a complete discussion. Second, the need for nonstationarity techniques for
Figure 4: As smoothness’decreases, we require more time to reach the same maximum confidence width. For low smoothness, DKW dominates our method. The logarithmic dependence matches our theory. See Section 4.1.
Figure 5: CDF bounds’approaching the true counterfactual CDF when sampling i.i.d. from a Beta(6,3) with infinite-variance importance weights, using DDRM for the oracle confidence sequence.
estimating the importance-weighted CDF is driven by the outcomes \((X_{t})\) and not the importance-weights \((W_{t})\). For example with off-policy contextual bandits, a changing historical policy does not induce nonstationarity, but a changing conditional reward distribution does.
## 4 Simulations
These simulations explore the empirical behaviour of Algorithm 1 and Algorithm 2 when instantiated with \(\epsilon(d)=2^{d}\) and curved boundary oracles \(\Lambda\) and \(\Xi\). To save space, precise details on the experiments as well additional figures are elided to Appendix F. Reference implementations which reproduce the figures are available at [https://github.com/microsoft/cserbust](https://github.com/microsoft/cserbust).
### The i.i.d. setting
These simulations exhibit our techniques on i.i.d. data. Although the i.i.d. setting does not fully exercise the technique, it is convenient for visualizing convergence to the unique true CDF. In this setting the DKW inequality applies, so to build intuition about our statistical efficiency, we compare our bounds with a naive time-uniform version of DKW resulting from a \((\nicefrac{{6}}{{\pi^{2}}}\epsilon^{2})\) union bound over time.
Beta distributionIn this case the data is smooth wrt the uniform distribution on \([0,1]\) so we can directly apply Algorithm 1 and Algorithm 2. Figure 2 shows the bounds converging to the true CDF as \(t\) increases for an i.i.d. Beta\((6,3)\) realization. Figure 8 compares the bound width to time-uniform DKW at \(t=10000\) for Beta distributions that are increasingly less smooth with respect to the uniform distribution. The DKW bound is identical for all, but our bound width increases as the smoothness decreases.
The additional figures in Appendix F clearly indicate tighter bounds at extreme quantiles, in correspondence with Theorem 3.3.
Beyond the unit intervalIn Figure 7 (main text) and Appendix F.1 we present further simulations of i.i.d. lognormal and Gaussian random variables, ranging over \(\mathbb{R}^{+}\) and \(\mathbb{R}\) respectively, and using Algorithm 3. The logarithmic dependence of the bound width upon the probe value is evident.
An Exhibition of FailureFigure 4 shows the (empirical) relative convergence when the data is simulated i.i.d. uniform over \([0,\epsilon]\) for decreasing \(\epsilon\) (hence decreasing smoothness). The reference width is the maximum bound width obtained with Algorithm 1 and Algorithm 2 at \(t_{\text{ref}}=10000\) and \(\epsilon=1/16\), and shown is the multiplicative factor of time required for the maximum bound width to match the reference width as smoothness varies. The trend is consistent with arbitrarily poor convergence with arbitrarily small \(\epsilon\). Because this is i.i.d. data, DKW applies and a uniform bound (independent of \(\epsilon\)) is available. Thus while our instance-dependent guarantees are valuable in practice, they can be dominated by stronger guarantees leveraging additional assumptions. On a positive note, a logarithmic dependence on smoothness is evident over many orders of magnitude, confirming the analysis of Theorem 3.3.
Figure 6: A nonstationary, importance-weighted simulation in which the factual distribution (red) diverges dramatically from the counterfactual distribution (green). The bound correctly covers the counterfactual CDF.
Importance-WeightedIn these simulations, in addition to being i.i.d., \(X_{t}\) and \(W_{t}\) are drawn independently of each other, so the importance weights merely increase the difficulty of ultimately estimating the same quantity.
In the importance-weighted case, an additional aspect is whether the importance-weights have finite or infinite variance. Figures 5 and 13 demonstrate convergence in both conditions when using DDRM for pointwise bounds. Figures 14 and 15 show the results using empirical Bernstein pointwise bounds. In theory, with enough samples and infinite precision, the infinite variance Pareto simulation would eventually cause the empirical Bernstein variant to reset to trivial bounds, but in practice this is not observed. Instead, DDRM is consistently tighter but also consistently more expensive to compute, as exemplified in Table 2. Thus either choice is potentially preferable.
(i.e. not time-uniform). In other words, given a sample of i.i.d. random variables \(X_{1},\ldots,X_{n}\sim F\), these fixed time bounds \([\dot{L}_{n}(x),\dot{U}_{n}(x)]_{x\in\mathbb{R}}\) satisfy a guarantee of the form:
\[\mathbb{P}(\forall x\in\mathbb{R},\;\dot{L}_{n}(x)\leq F(x)\leq\dot{U}_{n}(x)) \geq 1-\alpha, \tag{8}\]
for any desired error level \(\alpha\in(0,1)\). Howard and Ramdas (2022) developed confidence bands \([\overline{L}_{t}(x),\,\overline{U}_{t}(x)]_{x\in\mathbb{R},t\in\mathbb{N}}\) that are both quantile- _and_ time-uniform, meaning that they satisfy the stronger guarantee:
\[\mathbb{P}(\forall x\in\mathbb{R},t\in\mathbb{N},\;\overline{L}_{t}(x)\leq F(x )\leq\bar{U}_{t}(x))\geq 1-\alpha. \tag{9}\]
However, the bounds presented in Howard and Ramdas (2022) ultimately focused on the classical i.i.d. _on-policy_ setup, meaning the CDF for which confidence bands are derived is the same CDF as those of the observations \((X_{t})_{t=1}^{\infty}\). This is in contrast to off-policy evaluation problems such as in randomized controlled trials, adaptive A/B tests, or contextual bandits, where the goal is to estimate a distribution different from that which was collected (e.g. collecting data based on a Bernoulli experiment with the goal of estimating the counterfactual distribution under treatment or control). Chandak et al. (2021) and Huang et al. (2021) both introduced fixed-time (i.e. non-time-uniform) confidence bands for the off-policy CDF in contextual bandit problems, though their procedures are quite different, rely on different proof techniques, and have different properties from one another. Waudby-Smith et al. (2022, Section 4) later developed _time-uniform_ confidence bands in the off-policy setting, using a technique akin to Howard and Ramdas (2022, Theorem 5) and has several desirable properties in comparison to Chandak et al. (2021) and Huang et al. (2021) as outlined in Waudby-Smith et al. (2022, Table 2).
Nevertheless, regardless of time-uniformity or on/off-policy estimation, all of the aforementioned prior works assume that the distribution to be estimated is _fixed and unchanging over time_. The present paper takes a significant departure from the existing literature by deriving confidence bands that allow the distribution to change over time in a data-dependent manner, all while remaining time-uniform and applicable to off-policy problems in contextual bandits. Moreover, we achieve this by way of a novel stitching technique which is closely related to those of Howard and Ramdas (2022) and Waudby-Smith et al. (2022).
## 6 Discussion
This work constructs bounds by tracking specific values, in contrast with i.i.d. techniques which track specific quantiles. The value-based approach is amenable to proving correctness qua Theorem 3.1, but has the disadvantage of sensitivity to monotonic transformations. We speculate it is possible to be covariant to a fixed (wrt time) but unknown monotonic transformation without violating known impossibility results. A technique with this property would have increased practical utility.
## Acknowledgments and Disclosure of Funding
The authors thank Ian Waudby-Smith for insightful discussion and review.
## References
* Block et al. (2022) Adam Block, Yuval Dagan, Noah Golowich, and Alexander Rakhlin. Smoothed online learning is as easy as statistical learning. _arXiv preprint arXiv:2202.04690_, 2022.
* Cantelli (1933) Francesco Paolo Cantelli. Sulla determinazione empirica delle leggi di probabilita. _Giorn. Ist. Ital. Attuari_, 4(421-424), 1933.
* Chandak et al. (2021) Yash Chandak, Scott Niekum, Bruno da Silva, Erik Learned-Miller, Emma Brunskill, and Philip S Thomas. Universal off-policy evaluation. _Advances in Neural Information Processing Systems_, 34:27475-27490, 2021.
* Chatzigeorgiou (2013) Ioannis Chatzigeorgiou. Bounds on the lambert function and their application to the outage analysis of user cooperation. _IEEE Communications Letters_, 17(8):1505-1508, 2013.
* Chatzigeorgiou et al. (2021)Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. _The Annals of Mathematical Statistics_, pages 642-669, 1956.
* Fan et al. (2015) Xiequan Fan, Ion Grama, and Quansheng Liu. Exponential inequalities for martingales with applications. _Electronic Journal of Probability_, 20:1-22, 2015.
* Feller (1958) William Feller. _An introduction to probability theory and its applications, 3rd edition_. Wiley series in probability and mathematical statistics, 1958.
* Glivenko (1933) Valery Glivenko. Sulla determinazione empirica delle leggi di probabilita. _Gion. Ist. Ital. Attauri._, 4:92-99, 1933.
* Haghtalab et al. (2020) Nika Haghtalab, Tim Roughgarden, and Abhishek Shetty. Smoothed Analysis of Online and Differentially Private Learning. In _Advances in Neural Information Processing Systems_, volume 33, pages 9203-9215, 2020.
* Haghtalab et al. (2022a) Nika Haghtalab, Yanjun Han, Abhishek Shetty, and Kunhe Yang. Oracle-Efficient Online Learning for Beyond Worst-Case Adversaries, November 2022a. arXiv:2202.08549 [cs, stat].
* Haghtalab et al. (2022b) Nika Haghtalab, Tim Roughgarden, and Abhishek Shetty. Smoothed Analysis with Adaptive Adversaries. In _2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)_, pages 942-953, 2022b.
* Howard and Ramdas (2022) Steven R Howard and Aaditya Ramdas. Sequential estimation of quantiles with applications to A/B testing and best-arm identification. _Bernoulli_, 28(3):1704-1728, 2022.
* Howard et al. (2021) Steven R Howard, Aaditya Ramdas, Jon McAuliffe, and Jasjeet Sekhon. Time-uniform, nonparametric, nonasymptotic confidence sequences. _The Annals of Statistics_, 49(2):1055-1080, 2021.
* Huang et al. (2021) Audrey Huang, Liu Leqi, Zachary Lipton, and Kamyar Azizzadenesheli. Off-policy risk assessment in contextual bandits. _Advances in Neural Information Processing Systems_, 34:23714-23726, 2021.
* Kearns and Saul (1998) Michael Kearns and Lawrence Saul. Large deviation methods for approximate probabilistic inference. In _Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence_, pages 311-319, 1998.
* Lindon et al. (2022) Michael Lindon, Chris Sanden, and Vache Shirikian. Rapid regression detection in software deployments through sequential testing. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_, pages 3336-3346, 2022.
* Liu (2021) Wentao Liu. _Risk-Aware Financial Portfolio Management with Distributional Deep Deterministic Policy Gradient_. PhD thesis, University of Toronto (Canada), 2021.
* Massart (1990) Pascal Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. _The annals of Probability_, pages 1269-1283, 1990.
* Mineiro (2022) Paul Mineiro. A lower confidence sequence for the changing mean of non-negative right heavy-tailed observations with bounded mean. _arXiv preprint arXiv:2210.11133_, 2022.
* Olver et al. (2010) Frank WJ Olver, Daniel W Lozier, Ronald F Boisvert, and Charles W Clark. _NIST handbook of mathematical functions hardback and CD-ROM_. Cambridge university press, 2010.
* Orabona (2019) Francesco Orabona. A modern introduction to online learning. _arXiv preprint arXiv:1912.13213_, 2019.
* Pan and Yang (2010) Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. _IEEE Transactions on knowledge and data engineering_, 22(10):1345-1359, 2010.
* Pinelis (2020) Iosif Pinelis. Exact lower and upper bounds on the incomplete gamma function. _arXiv preprint arXiv:2005.06384_, 2020.
* Pan and Yang (2010)Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online Learning: Stochastic, Constrained, and Smoothed Adversaries. In _Advances in Neural Information Processing Systems_, volume 24, 2011.
* Rakhlin et al. (2015) Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Sequential complexities and uniform martingale laws of large numbers. _Probability theory and related fields_, 161(1):111-153, 2015.
* Shafer et al. (2011) Glenn Shafer, Alexander Shen, Nikolai Vereshchagin, and Vladimir Vovk. Test martingales, Bayes factors and p-values. _Statistical Science_, 26(1):84-101, 2011.
* Shen et al. (2022) Yi Shen, Jessilyn Dunn, and Michael M Zavlanos. Risk-averse multi-armed bandits with unobserved confounders: A case study in emotion regulation in mobile health. In _2022 IEEE 61st Conference on Decision and Control (CDC)_, pages 144-149. IEEE, 2022.
* Wang and Chapman (2022) Yuheng Wang and Margaret P Chapman. Risk-averse autonomous systems: A brief history and recent developments from the perspective of optimal control. _Artificial Intelligence_, page 103743, 2022.
* Waudby-Smith et al. (2022) Ian Waudby-Smith, Lili Wu, Aaditya Ramdas, Nikos Karampatziakis, and Paul Mineiro. Anytime-valid off-policy inference for contextual bandits. _arXiv preprint arXiv:2210.10768_, 2022.
Conflidence Sequences for Fixed \(v\)
Since our algorithm operates via reduction to pointwise confidence sequences, we provide a brief self-contained review here. We refer the interested reader to Howard et al. (2021) for a more thorough treatment.
A confidence sequence for a random process \(X_{t}\) is a time-indexed collection of confidence sets \(\text{CI}_{t}\) with a time-uniform coverage property \(\mathbb{P}\left(\forall t\in\mathbb{N}:X_{t}\in\text{CI}_{t}\right)\geqslant 1-\alpha\). For real random variables, the concept of a lower confidence sequence can be defined via \(\mathbb{P}\left(\forall t\in\mathbb{N}:X_{t}\geqslant L_{t}\right)\geqslant 1-\alpha\), and analogously for upper confidence sequences; and a lower and upper confidence sequence can be combined to form a confidence sequence \(\text{CI}_{t}\doteq\{x|L_{t}\leqslant x\leqslant U_{t}\}\) with coverage \((1-2\alpha)\) via a union bound.
One method for constructing a lower confidence sequence for a real valued parameter \(z\) is to exhibit a real-valued random process \(E_{t}(z)\) which, when evaluated at the true value \(z\)* of the parameter of interest, is a non-negative supermartingale with initial value of 1, in which case Ville's inequality ensures \(\mathbb{P}\left(\forall t\in\mathbb{N}:E_{t}(z^{\text{*}})\leqslant\alpha^{-1 }\right)\geqslant 1-\alpha\). If the process \(E_{t}(z)\) is monotonically increasing in \(z\), then the supremum of the lower contour set \(L_{t}\doteq\sup_{z}\left\{z|E_{t}(z)\leqslant\alpha^{-1}\right\}\) is suitable as a lower confidence sequence; an upper confidence sequence can be analogously defined.
We use the above strategy as follows. We bound these deviations using the following nonnegative martingale,
\[E_{t}(\lambda)\doteq\exp\left(\lambda S_{t}-\sum_{s\leqslant t} \log\left(h(\lambda,\theta_{s})\right)\right), \tag{10}\]
where \(\lambda\in\mathbb{R}\) is fixed and \(h(\lambda,z)\doteq(1-z)e^{-\lambda z}+ze^{\lambda(1-z)}\), the moment-generating function of a centered Bernoulli\((z)\) random variable. Equation (10) is a test martingale qua Shafer et al. (2011), i.e., it can be used to construct time-uniform bounds on \(\hat{q}_{t}-q_{t}\) via Ville's inequality.
Next we lower bound Equation (10),
\[E_{t}(\lambda)\doteq\exp\left(\lambda S_{t}-\sum_{s\leqslant t} \log\left(h(\lambda,\theta_{s})\right)\right), \tag{10}\]
and eliminate the explicit dependence upon \(\theta_{s}\), by noting \(\log h(\lambda,\cdot)\) is concave and therefore
\[E_{t}(\lambda)\geqslant\exp\left(\lambda t\left(q_{t}-\hat{q}_{ t}\right)-t\ \log h\left(\lambda,q_{t}\right)\right), \tag{11}\]
because \(\left(tf(q)=\max_{\theta\big{|}1^{\top}\theta=tq}\sum_{s\leqslant t}f(\theta_ {s})\right)\) for any concave \(f\). Equation (11) is monotonically increasing in \(q_{t}\) and therefore defines a lower confidence sequence. For an upper confidence sequence we use \(q_{t}=1-(1-q_{t})\) and a lower confidence sequence on \((1-q_{t})\).
Regarding the choice of \(\lambda\), in practice many \(\lambda\) are (implicitly) used via stitching (i.e., using different \(\lambda\) in different time epochs and majorizing the resulting bound in closed form) or mixing (i.e., using a particular fixed mixture of Equation (11) via a discrete sum or continuous integral over \(\lambda\)); our choices will depend upon whether we are designing for tight asymptotic rates or low computational footprint. We provide specific details associated with each theorem or experiment.
Note Equation (11) is invariant to permutations of \(X_{1:t}\) and hence the empirical CDF at time \(t\) is a sufficient statistic for calculating Equation (11) at any \(v\).
### Challenge with quantile space
In this section assume all CDFs are invertible for ease of exposition.
In the i.i.d. setting, Equation (10) can be evaluated at the (unknown) fixed \(v(q)\) which corresponds to quantile \(q\). Without knowledge of the values, one can assert the existence of such values for a countably infinite collection of quantiles and a careful union bound of Ville's inequality on a particular discretization can yield an LIL rate: this is the approach of Howard and Ramdas (2022). A key advantage of this approach is covariance to monotonic transformations.
Beyond the i.i.d. setting, one might hope to analogously evaluate Equation (10) at an unknown fixed value \(v_{t}(q)\) which for each \(t\) corresponds to quantile \(q\). Unfortunately, \(v_{t}(q)\) is not just unknown,but also unpredictable with respect to the initial filtration, and the derivation that Equation (10) is a martingale depends upon \(v\) being predictable. In the case that \(X_{t}\) is independent but not identically distributed, \(v_{t}(q)\) is initially predictable and therefore this approach could work, but would only be valid under this assumption.
The above argument does not completely foreclose the possibility of a quantile space approach, but merely serves to explain why the authors pursued a value space approach in this work. We encourage the interested reader to innovate.
## Appendix B Unit Interval Bounds
### Worked Example
Our (synthetic) data set consists of five values, each of which has occurred 1000 times:
\[D=\left\{(1000,0),(1000,\nicefrac{{1}}{{7}}),(1000,\nicefrac{{2}}{{7}}),(1000, \nicefrac{{3}}{{7}}),(1000,\nicefrac{{6}}{{7}})\right\}.\]
We use resolution \(\epsilon(d)=2^{d}\) and coverage error \(\alpha=\nicefrac{{1}}{{20}}\).
Upper bound for \(v=\nicefrac{{4}}{{7}}\)The upper bound algorithm starts with the trivial upper bound of 1. The first evaluated point \(\rho_{1}=2^{-1}[2^{1}v]=1\) again yields the trivial bound of 1. There are still empirical counts between the probe value (\(v=\nicefrac{{4}}{{7}}\)) and the bound value (\(\rho_{1}=1\)) so the algorithm continues. The second evaluated point \(\rho_{2}=2^{-2}[2^{2}v]=\nicefrac{{3}}{{4}}\), for which there are 4000 empirical counts below \(\rho_{2}\) out of 5000 total. The pointwise confidence sequence is evaluated with counts \((4000,5000)\) and coverage error \(\delta_{2}=\nicefrac{{\alpha}}{{24}}\), resulting in improved bound \(\approx 0.825\). Now, there are no empirical counts between the probe value (\(v=\nicefrac{{4}}{{7}}\)) and the bound value (\(\rho_{2}=\nicefrac{{3}}{{4}}\)) so the algorithm terminates. To see all subsequent bounds are dominated, note that a tighter upper bound \(\rho_{d>2}\) will result in the same empirical counts (4000 out of 5000) but a looser coverage error and hence worse bound.
Upper bound for \(v=\nicefrac{{13}}{{28}}\)The upper bound algorithm starts with the trivial upper bound of 1. The first evaluated point \(\rho_{1}=2^{-1}[2^{1}v]=\nicefrac{{1}}{{2}}\), for which there are 4000 empirical counts below \(\rho_{2}\) out of 5000 total. The pointwise confidence sequence is evaluated with counts \((4000,5000)\) and coverage error \(\delta_{1}=\nicefrac{{\alpha}}{{4}}\), resulting in improved bound \(\approx 0.822\). There are no empirical counts between the probe value \((v=\nicefrac{{13}}{{28}})\) and the bound value (\(\rho_{1}=\nicefrac{{1}}{{2}}\)) so the algorithm terminates. Relative to the previous example, the bound is slightly tighter as the discretization worked better for this \(v\).
Upper bound for \(v=\nicefrac{{5}}{{14}}\)The upper bound algorithm starts with the trivial upper bound of 1. The first evaluated point \(\rho_{1}=2^{-1}[2^{1}v]=\nicefrac{{1}}{{2}}\), for which there are 4000 empirical counts below \(\rho_{2}\) out of 5000 total. The pointwise confidence sequence is evaluated with counts \((4000,5000)\) and coverage error \(\delta_{1}=\nicefrac{{\alpha}}{{4}}\), resulting in improved bound \(\approx 0.822\). There are empirical counts between the probe value \((v=\nicefrac{{5}}{{14}})\) and the bound value (\(\rho_{1}=\nicefrac{{1}}{{2}}\)) so the algorithm continues. The second evaluated point \(\rho_{2}=2^{-2}[2^{2}v]=\nicefrac{{1}}{{2}}\), which has the same empirical counts but worse coverage error at this level, and hence does not improve the bound. There are empirical counts between the probe value \((v=\nicefrac{{5}}{{14}})\) and the bound value (\(\rho_{2}=\nicefrac{{1}}{{2}}\)) so the algorithm continues. The third evaluated point \(\rho_{3}=2^{-3}[2^{3}v]=\nicefrac{{3}}{{8}}\), for which there are 3000 empirical counts below \(\rho_{3}\) out of 5000 total. The pointwise confidence sequence is evaluated with counts \((3000,5000)\) and coverage error \(\delta_{3}=\nicefrac{{\alpha}}{{96}}\), resulting in improved bound \(\approx 0.633\). There are no empirical counts between the probe value \((v=\nicefrac{{5}}{{14}})\) and the bound value (\(\rho_{3}=\nicefrac{{3}}{{8}}\)) so the algorithm terminates.
### Lower Bound
Algorithm 2 is extremely similar to Algorithm 1: the differences are indicated in comments. Careful inspection reveals the output of Algorithm 1, \(U_{t}(v)\), can be obtained from the output of Algorithm 2, \(L_{t}(v)\), via \(U_{t}(v)=1-L_{t}(1-v)\); but only if the sufficient statistics are adjusted such that \(\Xi_{t}(\rho_{d};\delta,\Psi_{t})=1-\Lambda_{t}(1-\rho_{d};\delta,\Psi_{t}^{ \prime})\). The reference implementation uses this strategy.
### Proof of Theorem 3.1
We prove the results for the upper bound Algorithm 1; the argument for the lower bound Algorithm 2 is similar.
The algorithm terminates when we find a \(d\) such that \(0=\sum_{s\leqslant t}1_{X_{s}\in(v,\rho_{d}]}\). Since \(\epsilon(d)\uparrow\infty\) as \(d\uparrow\infty\), we have \(\rho_{d}=\epsilon(d)[\epsilon(d)^{-1}v]\downarrow v\), so that \(\sum_{s\leqslant t}1_{X_{s}\in(v,\rho_{d}]}\downarrow 0\). So the algorithm must terminate.
At level \(d\), we have \(\epsilon(d)\) confidence sequences. The \(i^{\text{th}}\) confidence sequence at level \(d\) satisfies
\[P(\exists t:\overline{\text{CDF}}_{t}(i/\epsilon(d))>\Xi_{t}(i/\epsilon(d); \delta_{d},d,\Psi_{t}))\leqslant\frac{\alpha}{2^{d}\epsilon(d)}. \tag{12}\]
Taking a union bound over all confidence sequences at all levels, we have
\[P\left(\exists d\in\mathbb{N},i\in\{1,\ldots,d\},t\in\mathbb{N}:\overline{ \text{CDF}}_{t}(i/\epsilon(d))>\Xi_{t}(i/\epsilon(d);\delta,d,\Psi_{t})\right) \leqslant\alpha. \tag{13}\]
Thus we are assured that, for any \(v\in\mathbb{R}\),
\[P(\forall t,d:\overline{\text{CDF}}_{t}(v)\leqslant\overline{\text{CDF}}_{t}( \rho_{d})\leqslant\Xi_{t}(\rho_{d};\delta_{d},d,\Psi_{t}))\geqslant 1-\alpha. \tag{14}\]
Algorithm 1 will return \(\Xi_{t}(\rho_{d};\delta_{d},d,\Psi_{t})\) for some \(d\) unless all such values are larger than one, in which case it returns the trivial upper bound of one. This proves the upper-bound half of guarantee (2). A similar argument proves the lower-bound half, and union bound over the upper and lower bounds finishes the argument.
```
Input: value \(v\); confidence \(\alpha\); sufficient statistic \(\Psi_{t}\). // comments below indicate differences from upper bound //\(\Psi_{t}\doteq X_{1:t}\) or \(\Psi_{t}\doteq(W_{1:t},X_{1:t})\) Output:\(L_{t}(v)\) satisfying Equation (2). if\(v<0\)thenreturn\(0\)endif// check for underflow of range rather than overflow \(l\gets 0\)// initialize with \(0\) instead of \(1\) \(v\leftarrow\min\left(1,v\right)\)// project onto \([0,1]\) using \(\min\) instead of \(\max\) for\(d=1\)to\(\infty\)do \(\rho_{d}\leftarrow\epsilon(d)^{-1}[\epsilon(d)v]\)// use floor instead of ceiling \(\delta_{d}\leftarrow\nicefrac{{\alpha}}{{2^{d}\epsilon(d)}}\) \(l\leftarrow\max\left(l,\Lambda_{t}\left(\rho_{d};\delta,\Psi_{t}\right)\right)\)// use lower bound instead of upper bound if\(0=\sum_{s\leqslant t}1_{X_{s}\in[\rho_{d},v)}\)thenreturn\(l\) endif endfor
```
**Algorithm 2** Unit Interval Lower Bound. \(\epsilon(d)\) is an increasing function specifying the resolution of discretization at level \(d\). \(\Lambda_{t}\left(\rho;\delta,d,\Psi_{t}\right)\) is a lower confidence sequence for fixed value \(\rho\) with coverage at least \((1-\delta)\).
## Appendix C Proof of Theorem 3.3
**Theorem 3.3**.: _Let \(U_{t}(v)\) and \(L_{t}(v)\) be the upper and lower bounds returned by Algorithm 1 and Algorithm 2 respectively, when evaluated with \(\epsilon(d)=2^{d}\) and the confidence sequences \(\Lambda_{t}\) and \(\Xi_{t}\) of Equation (15). If \(\forall t:\mathbb{P}_{t}\) is \(\xi_{t}\)-smooth wrt the uniform distribution on the unit interval then_
\[\forall t,\forall v:U_{t}(v)-L_{t}(v)\leqslant \tag{6}\] \[\sqrt{\frac{V_{t}}{t}}+\tilde{O}\left(\sqrt{\frac{V_{t}}{t}\log \left(\xi_{t}^{-2}\alpha^{-1}t^{3/2}\right)}\right),\]
_where \(q_{t}\doteq\overline{\text{CDF}}_{t}(v)\); \(V_{t}\doteq\nicefrac{{1}}{{t}}+\nicefrac{{(q_{t}-1/2)}}{{\log(\nicefrac{{q_{t }}}{{1-q_{t}}})}}\); and \(\tilde{O}()\) elides polylog \(V_{t}\) factors._
Note \(v\) is fixed for the entire argument below, and \(\xi_{t}\) denotes the unknown smoothness parameter at time \(t\).
We will argue that the upper confidence radius \(U_{t}(v)-t^{-1}\sum_{s\leqslant t}1_{X_{s}\leqslant v}\) has the desired rate. An analogous argument applies to the lower confidence radius \(t^{-1}\sum_{s\leqslant t}1_{X_{s}\leqslant v}-L_{t}(v)\), and the confidence width \(U_{t}(v)-L_{t}(v)\) is the sum of these two.
For the proof we introduce an integer parameter \(\eta\geqslant 2\) which controls both the grid spacing (\(\epsilon(d)=\eta^{d}\)) and the allocation of error probabilities to levels (\(\delta_{d}=\alpha/(\eta^{d}\epsilon(d))\)). In the main paper we set \(\eta=2\).
At level \(d\) we construct \(\eta^{d}\) confidence sequences on an evenly-spaced grid of values \(1/\eta^{d},2/\eta^{d},\ldots,1\). We divide total error probability \(\alpha/\eta^{d}\) at level \(d\) among these \(\eta^{d}\) confidence sequences, so that each individual confidence sequence has error probability \(\alpha/\eta^{2d}\).
For a fixed bet \(\lambda\) and value \(\rho\), \(S_{t}\) defined in Section 3.2 is sub-Bernoulli qua Howard et al. (2021, Definition 1) and therefore sub-Gaussian with variance process \(V_{t}\doteq tK(q_{t})\), where \(K(p)\doteq\nicefrac{{(2p-1)}}{{2}\log(\nicefrac{{\eta}}{{1-\rho}})}\) is from Kearns and Saul (1998); from Howard et al. (2021, Proposition 5) it follows that there exists an explicit mixture distribution over \(\lambda\) such that
\[M(t;q_{t},\tau)\doteq\sqrt{2\left(tK(q_{t})+\tau\right)\log\left(\frac{\eta^{2 d}}{2\alpha}\sqrt{\frac{tK(q_{t})+\tau}{\tau}}+1\right)} \tag{15}\]
is a (curved) uniform crossing boundary, i.e., satisfies
\[\frac{\alpha}{\eta^{2d}}\geqslant\mathbb{P}\left(\exists t\geqslant 1:S_{t} \geqslant\frac{M(t;q_{t},\tau)}{t}\right),\]
where \(S_{t}\doteq\overline{\mathrm{CDF}}_{t}(\rho)-t^{-1}\sum_{s\leqslant t}1_{X_{s} \leqslant\rho}\) is from Equation (10), and \(\tau\) is a hyperparameter to be determined further below.
Because the values at level \(d\) are \(1/\eta^{d}\) apart, the worst-case discretization error in the estimated average CDF value is
\[\overline{\mathrm{CDF}}_{t}(\epsilon(d)[\epsilon(d)^{-1}v])-\overline{\mathrm{ CDF}}_{t}(v)\leqslant 1/(\xi_{t}\eta^{d}),\]
and the total worst-case confidence radius including discretization error is
\[r_{d}(t)=\frac{1}{\xi_{t}\eta^{d}}+\sqrt{\frac{2\left(K(q_{t})+\tau/t\right)}{ t}\log\left(\frac{\eta^{2d}}{2\alpha}\sqrt{\frac{tK(q_{t})+\tau}{\tau}}+1 \right)}.\]
Now evaluate at \(d\) such that \(\sqrt{\psi_{t}}<\xi_{t}\eta^{d}\leqslant\eta\sqrt{\psi_{t}}\) where \(\psi_{t}\doteq t\left(K(q_{t})+\tau/t\right)^{-1}\),
\[r_{d}(t)\leqslant\sqrt{\frac{K(q_{t})+\tau/t}{t}}+\sqrt{\frac{2\left(K(q_{t}) +\tau/t\right)}{t}\log\left(\frac{\xi_{t}^{-2}\eta^{2}}{2\alpha}\left(\frac{t }{K(q_{t})+\tau/t}\right)\sqrt{\frac{tK(q_{t})+\tau}{\tau}}+1\right)}.\]
The final result is not very sensitive to the choice of \(\tau\), and we use \(\tau=1\) in practice.
## Appendix D Extensions
### Arbitrary Support
Algorithm 3 is a variation on Algorithm 1 which does not assume a bounded range, and instead uses a countably discrete dense subset of the entire real line. Using the same argument of Theorem 3.3 with the modified probability from the modified union bound, we have
\[|k_{d}|-1<\eta^{-d}|v|\leqslant|k_{d}|,\] \[\xi_{t}/\sqrt{\psi_{t}}>\eta^{-d}\geqslant\eta^{-1}\xi_{t}/\sqrt {\psi_{t}}\] \[\implies 1+|k_{d}|<2+\xi_{t}|v|/\sqrt{\psi_{t}}\] \[\implies r_{d}(t)\leqslant\tilde{O}\left(\sqrt{\frac{V_{t}}{t}}\log \left(\left(2+\xi_{t}|v|t^{-1/2}\right)^{2}\xi_{t}^{-2}\alpha^{-1}t^{3/2} \right)\right),\]
demonstrating a logarithmic penalty in the probe value \(v\) (e.g., Figure 7).
**Algorithm 3** Entire Real Line Upper Bound. \(\epsilon(d)\) is an increasing function specifying the resolution of discretization at level \(d\). \(\Xi_{t}\left(\rho;\delta,d,\Psi_{t}\right)\) is an upper confidence sequence for fixed value \(\rho\) with coverage at least \((1-\delta)\).
**Input:** value \(v\); confidence \(\alpha\); sufficient statistic \(\Psi_{t}\).
//e.g. \(\Psi_{t}\doteq X_{1:t}\) or \(\Psi_{t}\doteq(W_{1:t},X_{1:t})\)
**Output:**\(U_{t}(v)\) satisfying Equation (2).
\(u\gets 1\)
**for**\(d=1\)**to**\(\infty\)**do**
\(k_{d}\leftarrow[\epsilon(d)^{-1}v]\) // Sub-optimal: see text for details
\(\rho_{d}\leftarrow\epsilon(d)k_{d}\)
\(\delta_{d}\leftarrow(\nicefrac{{\alpha}}{{2^{d}}})\left(\nicefrac{{3}}{{( \pi^{2}-3)(1+|k_{d}|)^{2}}}\right)\) // Union bound over \(d\in\mathbb{N}\) and \(k_{d}\in\mathbb{Z}\)
\(u\leftarrow\min\left(u,\Xi_{t}\left(\rho_{d};\delta_{d},d,\Psi_{t}\right)\right)\)
**if**\(0=\sum_{s\leqslant t}1_{X_{s}\epsilon(v,\rho_{d})}\)**then**
**return**\(u\)
**end for**
**Algorithm 4** Entire Real Line Upper Bound. \(\epsilon(d)\) is an increasing function specifying the resolution of discretization at level \(d\). \(\Xi_{t}\left(\rho;\delta,d,\Psi_{t}\right)\) is an upper confidence sequence for fixed value \(\rho\) with coverage at least \((1-\delta)\).
**Sub-optimality of \(k_{d}\)**The choice of \(k_{d}\) in Algorithm 3 is amenable to analysis, but unlike in Algorithm 1, it is not optimal. In Algorithm 1 the probability is allocated uniformly at each depth, and therefore the closest grid point provides the tightest estimate. However in Algorithm 3, the probability budget decreases with \(|k_{d}|\) and because \(k_{d}\) can be negative, it is possible that a different \(k_{d}\) can produce a tighter upper bound. Since every \(k_{d}\) is covered by the union bound, in principle we could optimize over all \(k_{d}\) but it is unclear how to do this efficiently. In our implementation we do not search over all \(k_{d}\), but we do adjust \(k_{d}\) to be closest to the origin with the same empirical counts.
### Discrete Jumps
Known Countably InfiniteSuppose \(D\) is smooth wrt a reference measure \(M\), where \(M\) is of the form
\[M=\tilde{M}+\sum_{i\in I}\zeta_{i}1_{v_{i}},\]
with \(I\) a countable index set, \(1\geqslant\sum_{i\in I}\zeta_{i}\) and \(\tilde{M}\) a sub-probability measure normalizing to \((1-\sum_{i\in I}\zeta_{i})\). Then we can allocate \((1-\sum_{i\in I}\zeta_{i})\) of our overall coverage probability to bounding \(\tilde{M}\) using Algorithm 1 and Algorithm 2. For the remaining \(\{v_{i}\}_{i\in I}\) we can run explicit pointwise bounds each with coverage probability fraction \(\zeta_{i}\).
Computationally, early termination of the infinite search over the discrete bounds is possible. Suppose (wlog) \(I\) indexes \(\zeta\) in non-increasing order, i.e., \(i\leqslant j\implies\zeta_{i}\leqslant\zeta_{j}\): then as soon as there are no remaining empirical counts between the desired value \(v\) and the most recent discrete value \(v_{i}\), the search over discrete bounds can terminate.
## Appendix E Importance-Weighted Variant
### Modified Bounds
Algorithm 1 and Algorithm 2 are unmodified, with the caveat that the oracles \(\Lambda_{t}\) and \(\Xi_{t}\) must now operate on an importance-weighted realization \((W_{1:t},X_{1:t})\), rather then directly on the realization \(X_{1:t}\).
#### e.1.1 DDRM Variant
For simplicity we describe the lower bound \(\Lambda_{t}\) only. The upper bound is derived analogously via the equality \(Y_{s}=W_{s}-(W_{s}-Y_{s})\) and a lower bound on \((W_{s}-Y_{s})\): see Waudby-Smith et al. (2022, Remark 3) for more details.
This is the Heavy NSM from Mineiro (2022) combined with the \(L\)* bound of Orabona (2019, SS4.2.3). The Heavy NSM allow us to handle importance weights with unbounded variance, while the Adagrad \(L\)* bound facilitates lazy evaluation.
For fixed \(v\), let \(Y_{t}=W_{t}1_{X_{s}\succeq v}\) be a non-negative real-valued discrete-time random process, let \(\hat{Y}_{t}\in[0,1]\) be a predictable sequence, and let \(\lambda\in[0,1)\) be a fixed scalar bet. Then
\[E_{t}(\lambda)\doteq\exp\left(\lambda\left(\sum_{s\leq t}\hat{Y}_{s}-\mathbb{E} _{s-1}\left[Y_{s}\right]\right)+\sum_{s\leq t}\log\left(1+\lambda\left(Y_{s}- \hat{Y}_{s}\right)\right)\right)\]
is a test supermartingale (Mineiro, 2022, SS3). Manipulating,
\[E_{t}(\lambda) =\exp\left(\lambda\left(\sum_{s\leq t}Y_{s}-\mathbb{E}_{s-1}\left[ Y_{s}\right]\right)-\sum_{s\leq t}\underbrace{\left(\lambda\left(Y_{s}-\hat{Y}_{s} \right)-\log\left(1+\lambda\left(Y_{s}-\hat{Y}_{s}\right)\right)\right)}_{ \triangleq h\left(\lambda\left(Y_{s}-\hat{Y}_{s}\right)\right)}\right)\] \[=\exp\left(\lambda\left(\sum_{s\leq t}Y_{s}-\mathbb{E}_{s-1}\left[ Y_{s}\right]\right)-\sum_{s\leq t}h\left(\lambda\left(Y_{s}-\hat{Y}_{s}\right) \right)\right)\] \[\geqslant\exp\left(\lambda\left(\sum_{s\leq t}Y_{s}-\mathbb{E}_{s -1}\left[Y_{s}\right]\right)-\left(\sum_{s\leq t}h\left(\lambda\left(Y_{s}- \hat{Y}_{t}^{*}\right)\right)\right)-\text{Reg}(t)\right)\] ( \[\dagger\] ) \[=\exp\left(\lambda\left(t\hat{Y}_{t}^{*}-\sum_{s\leq t}\mathbb{E} _{s-1}\left[Y_{s}\right]\right)+\sum_{s\leq t}\log\left(1+\lambda\left(Y_{s}- \hat{Y}_{t}^{*}\right)\right)-\text{Reg}(t)\right),\]
where for \((\dagger)\) we use a no-regret learner on \(h()\) with regret \(\text{Reg}(t)\) to any constant prediction \(\hat{Y}_{t}^{*}\in[0,1]\). The function \(h()\) is \(M\)-smooth with \(M=\frac{\lambda^{2}}{(1-\lambda)^{2}}\) so we can get an \(L^{*}\) bound (Orabona, 2019, SS4.2.3) of
\[\text{Reg}(t) =4\frac{\lambda^{2}}{(1-\lambda)^{2}}+4\frac{\lambda}{1-\lambda} \sqrt{\sum_{s\leq t}h\left(\lambda\left(Y_{s}-\hat{Y}_{t}^{*}\right)\right)}\] \[=4\frac{\lambda^{2}}{(1-\lambda)^{2}}+4\frac{\lambda}{1-\lambda} \sqrt{\left(-t\hat{Y}_{t}^{*}+\sum_{s\leq t}Y_{s}\right)-\sum_{s\leq t}\log \left(1+\lambda\left(Y_{s}-\hat{Y}_{t}^{*}\right)\right)},\]
thus essentially our variance process is inflated by a square-root. In exchange we do not have to actually run the no-regret algorithm, which eases the computational burden. We can compete with any in-hindsight prediction: if we choose to compete with the clipped running mean \(\overline{Y_{t}}\) then we end up with
\[E_{t}(\lambda)\geqslant\exp\left(\lambda\left(\min\left(t,\sum_{s\leq t}Y_{s} \right)-\mathbb{E}_{s-1}\left[Y_{s}\right]\right)+\sum_{s\leq t}\log\left(1+ \lambda\left(Y_{s}-\overline{Y_{t}}\right)\right)-\text{Reg}(t)\right), \tag{16}\]
which is implemented in the reference implementation as LogApprox:getLowerBoundWithRegret(lam). The \(\lambda\)-s are mixed using DDRM from Mineiro (2022, Thm. 4), implemented via the DDRM class and the getDDRMCSLowerBound method in the reference implementation. getDDRMCSLowerBound provably correctly early terminates the infinite sum by leveraging
\[\sum_{s\leq t}\log\left(1+\lambda\left(Y_{s}-\overline{Y_{t}}\right)\right) \leqslant\lambda\left(\sum_{s\leq t}Y_{s}-t\overline{Y_{t}}\right)\]
as seen in the termination criterion of the inner method logwealth(mu).
To minimize computational overhead, we can lower bound \(\log(a+b)\) for \(b\geqslant 0\) using strong concavity qua Mineiro (2022, Thm. 3), resulting in the following geometrically spaced collection of sufficient statistics:
\[(1+k)^{n_{t}}=z_{l}\leqslant z<z_{u}=(1+k)z_{l}=(1+k)^{n_{l}+1},\]
along with distinct statistics for \(z=0\). \(k\) is a hyperparameter controlling the granularity of the discretization (tighter lower bound vs. more space overhead): we use \(k=1/4\) exclusively in our experiments. Note the coverage guarantee is preserved for any choice of \(k\) since we are lower bounding the wealth.
Given these statistics, the wealth can be lower bounded given any bet \(\lambda\) and any in-hindsight prediction \(\hat{Y}_{t}^{*}\) via
\[f(z) \doteq\log\left(1+\lambda\left(z-\hat{Y}_{t}^{*}\right)\right),\] \[f(z) \geqslant\alpha f(z_{l})+(1-\alpha)f(z_{u})+\frac{1}{2}\alpha(1- \alpha)m(z_{l}),\] \[\alpha \doteq\frac{z_{u}-z}{z_{u}-z_{l}},\] \[m(z_{l}) \doteq\left(\frac{kz_{l}\lambda}{kz_{l}\lambda+1-\lambda\hat{Y}_{t }^{*}}\right)^{2}.\]
Thus when accumulating the statistics, for each \(Y_{s}=W_{s}1_{X_{s}\geqslant v}\), a value of \(\alpha\) must be accumulated at key \(f(z_{l})\), a value of \((1-\alpha)\) accumulated at key \(f(z_{u})\), and a value of \(\alpha(1-\alpha)\) accumulated at key \(m(z_{l})\). The LogApprox::update method from the reference implementation implements this.
Because these sufficient statistics are data linear, a further computational trick is to accumulate the sufficient statistics with equality only, i.e., for \(Y_{s}=W_{s}1_{X_{s}=v}\); and when the CDF curve is desired, combine these point statistics into cumulative statistics. In this manner only \(O(1)\) incremental work is done per datapoint; while an additional \(O(t\log(t))\) work is done to accumulate all the sufficient statistics only when the bounds need be computed. The method StreamingDDRMECDF::Frozen::__init__from the reference implementation contains this logic.
#### e.1.2 Empirical Bernstein Variant
For simplicity we describe the lower bound \(\Lambda_{t}\) only. The upper bound is derived analogously via the equality \(Y_{s}=W_{s}-(W_{s}-Y_{s})\) and a lower bound on \((W_{s}-Y_{s})\): see Waudby-Smith et al. (2022, Remark 3) for more details.
This is the empirical Bernstein NSM from Howard et al. (2021) combined with the \(L\)* bound of Orabona (2019, SS4.2.3). Relative to DDRM it is faster to compute, has a more concise sufficient statistic, and is easier to analyze; but it is wider empirically, and theoretically requires finite importance weight variance to converge.
For fixed \(v\), let \(Y_{t}=W_{t}1_{X_{t}\geqslant v}\) be a non-negative real-valued discrete-time random process, let \(\hat{Y}_{t}\in[0,1]\) be a predictable sequence, and let \(\lambda\in[0,1)\) be a fixed scalar bet. Then
\[E_{t}(\lambda)\doteq\exp\left(\lambda\left(\sum_{s\in t}\hat{Y}_{s}-\mathbb{E} _{s-1}\left[Y_{s}\right]\right)+\sum_{s\in t}\log\left(1+\lambda\left(Y_{s}- \hat{Y}_{s}\right)\right)\right)\]
is a test supermartingale (Mineiro, 2022, SS3). Manipulating,
\[E_{t}(\lambda) \doteq\exp\left(\lambda\left(\sum_{s\in t}Y_{s}-\mathbb{E}_{s-1} \left[Y_{s}\right]\right)-\sum_{s\in t}\underbrace{\left(\lambda\left(Y_{s}- \hat{Y}_{s}\right)-\log\left(1+\lambda\left(Y_{s}-\hat{Y}_{s}\right)\right) \right)}_{\triangleq h\left(\lambda\left(Y_{s}-\hat{Y}_{s}\right)\right)}\right)\] \[\geqslant\exp\left(\lambda\left(\sum_{s\in t}Y_{s}-\mathbb{E}_{s- 1}\left[Y_{s}\right]\right)-h(-\lambda)\sum_{s\in t}\left(Y_{s}-\hat{Y}_{s} \right)^{2}\right)\] [Fan, Lemma 4.1] \[\geqslant\exp\left(\lambda\left(\sum_{s\in t}Y_{s}-\mathbb{E}_{s- 1}\left[Y_{s}\right]\right)-h(-\lambda)\left(\text{Reg}(t)+\sum_{s\in t}\left( Y_{s}-Y_{t}^{*}\right)^{2}\right)\right)\] ( \[\dagger\] ) \[\doteq\exp\left(\lambda S_{t}-h(-\lambda)V_{t}\right),\]
where \(S_{t}=\sum_{s\in t}Y_{s}-\mathbb{E}_{s-1}\left[Y_{s}\right]\) and for (\(\dagger\)) we use a no-regret learner on squared loss on feasible set \([0,1]\) with regret \(\text{Reg}(t)\) to any constant in-hindsight prediction \(\hat{Y}_{t}^{*}\in[0,1]\). Since \(Y_{s}\) is unbounded above, the loss is not Lipschitz and we can't get fast rates for squared loss, but we can run Adagrad and get an \(L^{*}\) bound,
\[\text{Reg}(t) =2\sqrt{2}\sqrt{\sum_{s\leq t}g_{s}^{2}}\] \[=4\sqrt{2}\sqrt{\sum_{s\leq t}(Y_{s}-\hat{Y}_{s})^{2}}\] \[\leq 4\sqrt{2}\sqrt{\text{Reg}(t)+\sum_{s\leq t}(Y_{s}-\hat{Y}_{t }^{*})^{2}},\] \[\implies\text{Reg}(t) \leq 16+4\sqrt{2}\sqrt{8+\sum_{s\leq t}(Y_{s}-\hat{Y}_{t}^{*})^{2}}.\]
Thus basically our variance process is inflated by an additive square root.
We will compete with \(Y_{t}^{*}=\min\left(1,\frac{1}{t}\sum_{s}Y_{s}\right)\).
A key advantage of the empirical Bernstein over DDRM is the availability of both a conjugate (closed-form) mixture over \(\lambda\) and a closed-form majorized stitched boundary. This yields both computational speedup and analytical tractability.
For a conjugate mixture, we use the truncated gamma prior from Waudby-Smith et al. (2022, Theorem 2) which yields mixture wealth
\[M_{t}^{\text{EB}}\doteq\left(\frac{\tau^{\tau}e^{-\tau}}{\Gamma(\tau)-\Gamma( \tau,\tau)}\right)\left(\frac{1}{\tau+V_{t}}\right){}_{1}F_{1}\left(1,V_{t}+ \tau+1,S_{t}+V_{t}+\tau\right), \tag{17}\]
where \({}_{1}F_{1}(\ldots)\) is Kummer's confluent hypergeometric function and \(\Gamma(\cdot,\cdot)\) is the upper incomplete gamma function. For the hyperparameter, we use \(\tau=1\).
### Proof of Theorem 3.4
**Theorem 3.4**.: _Let \(U_{t}(v)\) and \(L_{t}(v)\) be the upper and lower bounds returned by Algorithm 1 and Algorithm 2 respectively with \(\epsilon(d)=2^{d}\) and the confidence sequences \(\Lambda_{t}\) and \(\Xi_{t}\) of Equation (18). If \(\forall t:\mathbb{P}_{t}\) is \(\xi_{t}\)-smooth wrt the uniform distribution on the unit interval then_
\[\begin{split}\forall t,\forall v&:U_{t}(v)-L_{t}(v) \leqslant\\ B_{t}&+\sqrt{\frac{(\tau+V_{t})/t}{t}}\\ &+\tilde{O}\left(\sqrt{\frac{(\tau+V_{t})/t}{t}}\log\left(\xi_{t} ^{-2}\alpha^{-1}\right)\right)\\ &+\tilde{O}(t^{-1}\log\left(\xi_{t}^{-2}\alpha^{-1}\right)),\end{split} \tag{7}\]
_where \(q_{t}\doteq\overline{\text{CDF}}_{t}(v)\), \(K(q_{t})\doteq(q_{t}-1/2)/_{\log(q_{t}/1-q_{t})}\); \(V_{t}=O\left(K(q_{t})\sum_{s\leq t}W_{s}^{2}\right)\), \(B_{t}\doteq t^{-1}\sum_{s\leq t}(W_{s}-1)\), and \(\tilde{O}()\) elides polylog \(V_{t}\) factors._
Note \(v\) is fixed for the entire argument below, and \(\xi_{t}\) denotes the unknown smoothness parameter at time \(t\).
We will argue that the upper confidence radius \(U_{t}(v)-t^{-1}\sum_{s\leq t}W_{s}1_{X_{s}\leq v}\) has the desired rate. An analogous argument applies to the lower confidence radius. One difference from the non-importance-weighted case is that, to be sub-exponential, the lower bound is constructed from an upper bound on \(U_{t}^{\prime}(v)=W_{s}(1-1_{X_{s}\leq v})\) via \(L_{t}(v)-1-U_{t}^{\prime}(v)\), which introduces an additional \(B_{t}=t^{-1}\sum_{s\leq t}(W_{s}-1)\) term to the width. (Note, because \(\forall t:\mathbb{E}_{t}[W_{t}-1]=0\), this term will concentrate, but we will simply use the realized value here.)
For the proof we introduce an integer parameter \(\eta\geqslant 2\) which controls both the grid spacing (\(\epsilon(d)=\eta^{d}\)) and the allocation of error probabilities to levels (\(\delta_{d}=\alpha/(\eta^{d}\epsilon(d))\)). In the main paper we set \(\eta=2\).
At level \(d\) we construct \(\eta^{d}\) confidence sequences on an evenly-spaced grid of values \(1/\eta^{d},2/\eta^{d},\ldots,1\). We divide total error probability \(\alpha/\eta^{d}\) at level \(d\) among these \(\eta^{d}\) confidence sequences, so that each individual confidence sequence has error probability \(\alpha/\eta^{2d}\).
For a fixed bet \(\lambda\) and value \(\rho\), \(S_{t}\) defined in Appendix E.1.2 is sub-exponential qua Howard et al. (2021, Definition 1) and therefore from Lemma E.1 there exists an explicit mixture distribution over \(\lambda\) inducing (curved) boundary
\[\frac{\alpha}{\eta^{2d}} \geq\mathbb{P}\left(\exists t\geq 1:\frac{S_{t}}{t}\geq\max \left(\frac{C(\tau)}{t},u\left(V_{t};\tau,\frac{\alpha}{\eta^{2d}}\right) \right)\right),\] \[u\left(V_{t};\tau,\frac{\alpha}{\eta^{2d}}\right) =\sqrt{2\left(\frac{(\tau+V_{t})/t}{t}\right)\log\left(\sqrt{\frac {\tau+V_{t}}{2\pi}}e^{-\frac{1}{12(\tau+V_{t})+1}}\left(\frac{1+\eta^{2d} \alpha^{-1}}{C(\tau)}\right)\right)}\] \[\qquad+\frac{1}{t}\log\left(\sqrt{\frac{\tau+V_{t}}{2\pi}}e^{- \frac{1}{12(\tau+V_{t})+1}}\left(\frac{1+\eta^{2d}\alpha^{-1}}{C(\tau)}\right) \right), \tag{18}\]
where \(S_{t}\doteq\overline{\mathrm{CDF}}_{t}(\rho)-t^{-1}\sum_{s\leq t}W_{s}1_{X_{s} \leq\rho}\), and \(\tau\) is a hyperparameter to be determined further below.
Because the values at level \(d\) are \(1/\eta^{d}\) apart, the worst-case discretization error in the estimated average CDF value is
\[\overline{\mathrm{CDF}}_{t}(\epsilon(d)[\epsilon(d)^{-1}v])-\overline{\mathrm{ CDF}}_{t}(v)\leq 1/(\xi_{t}\eta^{d}),\]
and the total worst-case confidence radius including discretization error is
\[r_{d}(t)=\frac{1}{\xi_{t}\eta^{d}}+\max\left(\frac{C(\tau)}{t},u\left(V_{t}; \tau,\frac{\alpha}{\eta^{2d}}\right)\right).\]
Now evaluate at \(d\) such that \(\sqrt{\psi_{t}}<\xi_{t}\eta^{d}\leq\eta\sqrt{\psi_{t}}\) where \(\psi_{t}\doteq t\left((\tau+V_{t})/t\right)^{-1}\),
\[r_{d}(t) \leq\frac{1}{\sqrt{\psi_{t}}}+\max\left(\frac{C(\tau)}{t},u\left( V_{t};\tau,\frac{\alpha}{\eta^{2}\xi_{t}^{-2}\psi_{t}}\right)\right)\] \[=\sqrt{\frac{(\tau+V_{t})/t}{t}}+\tilde{O}\left(\sqrt{\frac{( \tau+V_{t})/t}{t}\log\left(\xi_{t}^{-2}\alpha^{-1}\right)}\right)+\tilde{O}(t^ {-1}\log\left(\xi_{t}^{-2}\alpha^{-1}\right)),\]
where \(\tilde{O}()\) elides polylog \(V_{t}\) factors. The final result is not very sensitive to the choice of \(\tau\), and we use \(\tau=1\) in practice.
**Lemma E.1**.: _Suppose_
\[\exp\left(\lambda S_{t}-\psi_{e}(\lambda)V_{t}\right),\] \[\psi_{e}(\lambda)\doteq-\lambda-\log(1-\lambda),\]
_is sub-\(\psi_{e}\) qua Howard et al. (2021, Definition 1); then there exists an explicit mixture distribution over \(\lambda\) with hyperparameter \(\tau>0\) such that_
\[\alpha \geq\mathbb{P}\left(\exists t\geq 1:\frac{S_{t}}{t}\geq\max \left(\frac{C(\tau)}{t},u\left(V_{t};\tau,\alpha\right)\right)\right),\] \[u\left(V_{t};\tau,\alpha\right) =\sqrt{2\left(\frac{(\tau+V_{t})/t}{t}\right)\log\left(\sqrt{ \frac{\tau+V_{t}}{2\pi}}e^{-\frac{1}{12(\tau+V_{t})+1}}\left(\frac{1+\alpha^ {-1}}{C(\tau)}\right)\right)}\] \[\qquad+\frac{1}{t}\log\left(\sqrt{\frac{\tau+V_{t}}{2\pi}}e^{- \frac{1}{12(\tau+V_{t})+1}}\left(\frac{1+\alpha^{-1}}{C(\tau)}\right)\right),\] \[C(\tau) \doteq\frac{\tau^{\tau}e^{-\tau}}{\Gamma(\tau)-\Gamma(\tau,\tau)},\]
_is a (curved) uniform crossing boundary._Proof.: We can form the conjugate mixture using a truncated gamma prior from Howard et al. (2021, Proposition 9), in the form from Waudby-Smith et al. (2022, Theorem 2), which is our Equation (17).
\[M_{t}^{\text{EB}}\doteq\left(\frac{\tau^{\tau}e^{-\tau}}{\Gamma(\tau)-\Gamma( \tau,\tau)}\right)\left(\frac{1}{\tau+V_{t}}\right){}_{1}F_{1}\left(1,V_{t}+ \tau+1,S_{t}+V_{t}+\tau\right),\]
where \({}_{1}F_{1}(\ldots)\) is Kummer's confluent hypergeometric function. Using Olver et al. (2010, identity 13.6.5),
\[{}_{1}F_{1}(1,a+1,x)=e^{x}ax^{-a}\left(\Gamma(a)-\Gamma(a,x)\right)\]
where \(\Gamma(a,x)\) is the (unregularized) upper incomplete gamma function. From Pinelis (2020, Theorem 1.2) we have
\[\Gamma(a,x) <\frac{x^{a}e^{-x}}{x-a}\] \[\implies{}_{1}F_{1}(1,a+1,x) \geqslant e^{x}ax^{-a}\Gamma(a)-\frac{a}{x-a}.\]
Applying this to the mixture yields
\[M_{t}^{\text{EB}} \geqslant\frac{C(\tau)e^{\tau+V_{t}+S_{t}}}{(\tau+V_{t}+S_{t})^{ \tau+V_{t}}}\Gamma\left(\tau+V_{t}\right)-\frac{C(\tau)}{S_{t}}\] \[\geqslant\frac{C(\tau)e^{\tau+V_{t}+S_{t}}}{(\tau+V_{t}+S_{t})^{ \tau+V_{t}}}\Gamma\left(\tau+V_{t}\right)-1, (\dagger)\]
where \((\dagger)\) follows from the self-imposed constraint \(S_{t}\geqslant C(\tau)\). This yields crossing boundary
\[\alpha^{-1} =\frac{C(\tau)e^{\tau+V_{t}+S_{t}}}{(\tau+V_{t}+S_{t})^{\tau+V_{t} }}\Gamma\left(\tau+V_{t}\right)-1,\] \[\frac{e^{\tau+V_{t}+S_{t}}}{\left(1+\frac{S_{t}}{\tau+V_{t}} \right)^{\tau+V_{t}}} =\left(\frac{(\tau+V_{t})^{\tau+V_{t}}}{\Gamma\left(\tau+V_{t} \right)}\right)\left(\frac{1+\alpha^{-1}}{C(\tau)}\right)\doteq\left(\frac{( \tau+V_{t})^{\tau+V_{t}}}{\Gamma\left(\tau+V_{t}\right)}\right)\phi_{t}(\tau, \alpha),\] \[\frac{e^{1+\frac{S_{t}}{\tau+V_{t}}}}{\left(1+\frac{S_{t}}{\tau+ V_{t}}\right)} =\left(\frac{(\tau+V_{t})^{\tau+V_{t}}}{\Gamma\left(\tau+V_{t} \right)}\right)^{\frac{1}{\tau+V_{t}}}\phi_{t}(\tau,\alpha)^{\frac{1}{\tau+V_ {t}}}\doteq z_{t},\] \[S_{t} =(\tau+V_{t})\left(-1-W_{-1}\left(-z_{t}^{-1}\right)\right).\]
Chatzgeorgiou (2013, Theorem 1) states
\[W_{-1}(-e^{-u-1}) \in-1-\sqrt{2u}+\left[-u,-\frac{2}{3}u\right]\] \[\implies-1-W_{-1}(-e^{-u-1}) \in\sqrt{2u}+\left[\frac{2}{3}u,u\right].\]
Substituting yields
\[(\tau+V_{t})\left(-1-W_{-1}\left(-z_{t}^{-1}\right)\right)\leqslant(\tau+V_{t })\left(\sqrt{2\log\left(\frac{z_{t}}{e^{1}}\right)}+\log\left(\frac{z_{t}}{ e^{1}}\right)\right). \tag{19}\]
From Feller (1958, Equation (9.8)) we have
\[\Gamma(1+n) \in\sqrt{2\pi n}\left(\frac{n}{e^{1}}\right)^{n}\left[e^{\frac{1} {2n+1}},e^{\frac{1}{2n}}\right]\] \[\implies\left(\frac{(\tau+V_{t})^{\tau+V_{t}}}{\Gamma\left(\tau+ V_{t}\right)}\right)^{\frac{1}{\tau+V_{t}}} \in\left(\frac{\tau+V_{t}}{2\pi}\right)^{\frac{1}{2(\tau+V_{t})}}e^{1}\left[e^ {-\frac{1}{12(\tau+V_{t})^{2}}},e^{-\frac{1}{12(\tau+V_{t})^{2}+(\tau+V_{t})} }\right].\]
Therefore
\[(\tau+V_{t})\sqrt{2\log\left(\frac{z_{t}}{e^{1}}\right)} \leqslant(\tau+V_{t})\sqrt{2\log\left(\left(\frac{\tau+V_{t}}{2 \pi}\right)^{\frac{1}{2(\tau+V_{t})}}e^{-\frac{1}{12(\tau+V_{t})^{2}+(\tau+V_{ t})}}\phi_{t}(\tau,\alpha)^{\frac{1}{\tau+V_{t}}}\right)}\] \[=\sqrt{2\left(\tau+V_{t}\right)\log\left(\sqrt{\frac{\tau+V_{t}} {2\pi}}e^{-\frac{1}{12(\tau+V_{t})+\tau}}\phi_{t}(\tau,\alpha)\right)}, \tag{20}\]\[(\tau+V_{t})\log\left(\frac{z_{t}}{e^{1}}\right) \leqslant(\tau+V_{t})\log\left(\left(\frac{\tau+V_{t}}{2\pi}\right) ^{\frac{1}{2(\tau+V_{t})}}e^{-\frac{1}{12(\tau+V_{t})^{2}+(\tau+V_{t})}}\phi_{ t}(\tau,\alpha)^{\frac{1}{\tau+V_{t}}}\right)\] \[=\log\left(\sqrt{\frac{\tau+V_{t}}{2\pi}}e^{-\frac{1}{12(\tau+V_{t })+1}}\phi_{t}(\tau,\alpha)\right). \tag{21}\]
Combining Equations (19) to (21) yields the crossing boundary
\[\frac{S_{t}}{t} =\sqrt{2\left(\frac{(\tau+V_{t})/t}{t}\right)\log\left(\sqrt{ \frac{\tau+V_{t}}{2\pi}}e^{-\frac{1}{12(\tau+V_{t})+1}}\left(\frac{1+\alpha^{ -1}}{C(\tau)}\right)\right)}\] \[\qquad+\frac{1}{t}\log\left(\sqrt{\frac{\tau+V_{t}}{2\pi}}e^{- \frac{1}{12(\tau+V_{t})+1}}\left(\frac{1+\alpha^{-1}}{C(\tau)}\right)\right).\]
## Appendix F Simulations
### i.i.d. setting
For non-importance-weighted simulations, we use the Beta-Binomial boundary of Howard et al. (2021) for \(\Lambda_{t}\) and \(\Xi_{t}\). The curved boundary is induced by the test NSM
\[W_{t}(b;\hat{q}_{t},q_{t}) =\frac{\int_{q_{t}}^{1}d\text{Beta}\left(p;bq_{t},b(1-q_{t})\right) \,\left(\frac{p}{q_{t}}\right)^{t\hat{q}_{t}}\left(\frac{1-p}{1-q_{t}}\right)^ {t(1-\hat{q}_{t})}}{\int_{q_{t}}^{1}d\text{Beta}\left(p;bq_{t},b(1-q_{t})\right)}\] \[=\frac{1}{(1-q_{t})^{t(1-\hat{q}_{t})}q_{t}^{t\hat{q}_{t}}}\left( \frac{\text{Beta}(q_{t},1,bq_{t}+t\hat{q}_{t},b(1-q_{t})+t(1-\hat{q}_{t}))}{ \text{Beta}(q_{t},1,bq_{t},b(1-q_{t}))}\right)\]
with prior parameter \(b=1\). Further documentation and details are in the reference implementation csnsquantile.ipynb.
The importance-weighted simulations use the constructions from Appendix E: the reference implementation is in csnsopquantile.ipynb for the DDRM variant and csnsopquantile-ebern.ipynb for the empirical Bernstein variant.
Figure 11: Demonstration of the variant described in Section 3.3 and Appendix D.1 for distributions with arbitrary support, based on i.i.d. sampling from a variety of Gaussian distributions. Logarithmic range dependence is evident.
Figure 12: Maximum bound width, scaled by \(\sqrt{t/\log(t)}\) to remove the primary trend, as a function of \(t\), for nonstationary Polya simulations with different \(\gamma_{t}\) schedules. See Section 4.2
Figure 8: Comparison to naive time-uniform DKW (which is only valid in the i.i.d. setting) for Beta distributions of varying smoothness. Decreasing smoothness degrades our bound.
Figure 7: Demonstration of the variant described in Section 3.3 and Appendix D.1 for distributions with arbitrary support, based on i.i.d. sampling from a variety of lognormal distributions. Logarithmic range dependence is evident.
Figure 9: CDF bounds approaching the true CDF when sampling i.i.d. from a lognormal(0, 1) distribution. Recall these bounds are simultaneously valid for all times and values.
Figure 8: Comparison to naive time-uniform DKW (which is only valid in the i.i.d. setting) for Beta distributions of varying smoothness. Decreasing smoothness degrades our bound.
Figure 10: CDF bounds approaching the true CDF when sampling i.i.d. from a Gaussian(0, 1) distribution. Recall these bounds are simultaneously valid for all times and values.
Figure 7: Demonstration of the variant described in Section 3.3 and Appendix D.1 for distributions with arbitrary support, based on i.i.d. sampling from a variety of lognormal distributions. Logarithmic range dependence is evident.
Figure 14: CDF bounds\({}^{\prime}\) approaching the true counterfactual CDF when sampling i.i.d. from a Beta(6,3) with finite-variance importance weights, using Empirical Bernstein for the oracle confidence sequence.
Figure 13: CDF bounds’ approaching the true counterfactual CDF when sampling i.i.d. from a Beta(6,3) with finite-variance importance weights, using DDRM for the oracle confidence sequence.
Figure 15: CDF bounds’ approaching the true counterfactual CDF when sampling i.i.d. from a Beta(6,3) with infinite-variance importance weights, using Empirical Bernstein for the oracle confidence sequence. Despite apparent convergence, eventually this simulation would reset the Empirical Bernstein oracle confidence sequence to trivial bounds. | ## Review
### Summary
This paper proposes a method for constructing time-uniform and value-uniform bounds on the cumulative distribution function (CDF) of the averaged historical distribution of a real-valued random process, particularly under nonstationarity and dependence among variables. Building on the work of Howard and Ramdas (2022), the authors derive uniform bounds that do not require the assumption of independent and identically distributed (i.i.d.) samples. The paper includes theoretical guarantees for the proposed methods and simulations demonstrating their effectiveness in various scenarios. However, it requires clearer explanations and definitions to enhance accessibility for readers not deeply familiar with the material.
### Strengths
- Reliable bounds on the CDF for dependent data settings, which is valuable as iid assumptions do not always hold.
- The considered problem of constructing confidence bands for CDFs under nonstationarity is of great practical importance.
- The proposed technique appears new and interesting, with theoretical guarantees provided in Theorem 3.3.
- The authors simulate their algorithm on several synthetic examples, showcasing its adaptive performance.
### Weaknesses
- The paper is hard to read and follow, assuming prior knowledge from the reader, thus requiring more detailed explanations.
- Key concepts, such as confidence sequences and their role in algorithms, are insufficiently explained.
- The introduction lacks clarity regarding the problem being solved and could benefit from including more background information.
- Comparisons with existing results need to be clearer, particularly regarding technical difficulties and contributions.
- Some figures are poorly labeled or too small, impacting readability.
### Questions
- What is the instance-adaptivity of Theorem 3.3, and how does it maintain time-uniformity without dependence on previous parameters?
- Is the i.i.d. assumption made explicit throughout Section 3?
- Can the authors clarify the roles of certain inputs in Algorithm 1 and provide concrete examples to aid comprehension?
- What are the major technical difficulties addressed in deriving the main results, and could the authors summarize these more explicitly?
### Soundness
**Score:** 2
**Description:** Fair: The methodology is generally sound, but there are concerns regarding the clarity of certain theoretical aspects and proofs.
### Presentation
**Score:** 2
**Description:** Fair: The writing is often difficult to follow, with numerous instances of insufficient explanations and definitions, though some parts are well-structured.
### Contribution
**Score:** 3
**Description:** Good: The paper introduces noteworthy contributions to the field, especially concerning confidence bands for dependent variables.
### Rating
**Score:** 5
**Description:** Borderline accept: The paper is technically solid with significant contributions but requires improvements in clarity and presentation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents valuable contributions to the understanding and methodology of constructing confidence bands for nonstationary processes. Despite the concerns regarding presentation and clarity, the technical merits and originality of the work warrant acceptance, provided that the authors address the feedback related to exposition and detail in future revisions.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Schema-learning and rebinding as mechanisms of in-context learning and emergence
Sivaramakrishnan Swaminathan Antoine Dedieu Rajkumar Vasudeva Raju Murray Shanahan Miguel Lazaro-Gredilla Dileep George Google DeepMind
{sivark,adedieu,rajvraju,mshanahan,lazarogredilla,dileepgeorge}@google.com
###### Abstract
In-context learning (ICL) is one of the most powerful and most unexpected capabilities to emerge in recent transformer-based large language models (LLMs). Yet the mechanisms that underlie it are poorly understood. In this paper, we demonstrate that comparable ICL capabilities can be acquired by an alternative sequence prediction learning method, namely clone-structured causal graphs (CSCGs). A key property of CSCGs is that, unlike transformer-based LLMs, they are _interpretable_, which considerably simplifies the task of explaining how ICL works. We show that ICL in CSCG uses a combination of (a) learning template (schema) circuits for pattern completion, (b) retrieving relevant templates in a context-sensitive manner, and (c) rebinding novel tokens to appropriate slots in the templates. We go on to marshall evidence for the hypothesis that similar mechanisms underlie ICL in LLMs. For example, we find that, with CSCGs as with LLMs, different capabilities emerge at different levels of overparameterization, suggesting that overparameterization helps in learning more complex template (schema) circuits. By showing how ICL can be achieved with small models and datasets, we open up a path to novel architectures, and take a vital step towards a more general understanding of the mechanics behind this important capability.
## 1 Introduction
In a pre-trained sequence model, _in-context learning_ (ICL), or _few-shot prompting_, is the ability to learn a new task from a small set of examples presented within the context (the prompt) at inference time. Surprisingly, large language models (LLMs) trained on sufficient data exhibit ICL, even though they are trained only with the objective of next token prediction [1, 2]. A good deal of the ongoing excitement surrounding LLMs arises from this unexpected capacity, since it dramatically enlarges their set of potential applications. Ongoing attempts to understand this capability take a variety of forms, including higher-level normative accounts using Bayesian inference [3], and mechanistic explanations involving implicit gradient descent [4] or induction heads [5]. Despite this, the mechanisms that underlie ICL in LLMs remain somewhat mysterious.
We take an alternative approach, studying a sequence-learning model called a clone-structured causal graph (CSCG) [6, 7] to reveal the conditions that drive ICL. We show that ICL can be explained as a combination of (a) learning template circuits for pattern completion, (b) retrieving relevant templates in a context-sensitive manner, and (c) rebinding of novel tokens to appropriate slots in templates [8]. Unlike n-gram models, CSCGs allow transitive generalization in the latent space: they assign semantically sensible non-zero probabilities to sequences never seen during training to ensure that the contexts (prompts) used for retrieval are not pure memorizations. In addition, the binding of novel tokens to slots in learned templates allows the same structural knowledge to be applied to entirely novel inputs. We hypothesize how similar mechanisms could exist in transformer-based LLMs. Byelucidating the principles that underpin the mechanics of ICL, we hope to pave the way for the design of novel architectures for abstraction and generalization, while the building blocks we identify guide the search for mechanistically interpretable [9] and editable [10] circuits in transformers [11].
## 2 Rebinding algorithm for clone-structured causal graphs
### Background on clone-structured causal graphs (CSCGs)
Consider an agent executing a series of discrete actions \(a_{1},\ldots,a_{N-1}\) with \(a_{n}\in\{1,\ldots,N_{\text{actions}}\}\), e.g. walking in a room. As a result of each action, the agent receives a perceptually aliased observation [12], resulting in the stream of random variables \(X_{1},\ldots,X_{N}\) with observed values \(x_{1},\ldots,x_{N}\), where each \(x_{n}\in\{1,\ldots,N_{\text{obs}}\}\). CSCG [6] is a probabilistic sequence learning model that introduces a latent explanatory variable \(Z_{n}\) at each timestep \(n\), with values \(z_{n}\in\{1,\ldots,N_{\text{latent}}\}\), to model the action-conditional stream of observations as
\[P(x_{1},\ldots,x_{N}|a_{1},\ldots,a_{N-1})=\sum_{z_{1},\ldots,z_{n}}P(x_{1}|z _{1})P(z_{1})\prod_{n=2}^{N}P(x_{n}|z_{n})P(z_{n}|z_{n-1},a_{n-1}).\]
A transition tensor \(T:\,T_{ijk}=P(Z_{n}=k|Z_{n-1}=j,a_{n-1}=i)\,\forall n\) represents the action-conditional dynamics. \(T\) defines a directed multigraph, whose nodes correspond to the values of \(z\). Conditioned on an action, each entry of \(T\) is the weight of a directed edge between two nodes (from the row index to the column index of that entry). A CSCG can thus recover a graph that represents the latent causal structure [13] of the environment (see Fig. 1D for an example) which can then be used for planning.
An emission matrix \(E:\,E_{ij}=P(X_{n}=j|Z_{n}=i)\,\,\forall n\) represents the observation probabilities. CSCGs have a deterministic observation model: for any latent value \(z\), the same observation \(x\) is always emitted. Multiple values of \(z\) can result in the same observed \(x\), making the model overcomplete [14]. The restriction to deterministic \(E\) makes CSCGs less general than a hidden Markov model (HMM), but easier to learn [6]. A CSCG can disambiguate multiple aliased percepts (same observation \(x\)) into distinct causes (different latent values \(z\)) given a sufficiently long context. If the observations correspond to word tokens, CSCGs can also be used as a language model, with a single action that accesses the next token1.
Footnote 1: This can be generalized to include actions that skip over one or more next tokens.
### Rebinding in CSCGs
On encountering a new environment with a similar structure but different observations, an agent can learn that environment faster by reusing the latent graph \(T\) from prior experience and relearning just the emission matrix, through a _rebinding_ process [15]. Rebinding can be interpreted as a soft intervention on the agent's model [16; 17]. See Fig. 1D & F for examples of two rooms that share the same latent structure but different observations. When a new emission matrix binds to an existing schema, it has to respect the _clone structure_ of the original emission matrix (Fig. 1E). The clone structure function \(\mathcal{C}(\cdot)\in 1,\ldots,N_{\text{obs}}\) partitions the latent state in \(N_{\text{obs}}\)_slots_: two latent states \(z=i\) and \(z=i^{\prime}\) belong to the same slot iff \(\mathcal{C}(i)=\mathcal{C}(i^{\prime})\). An emission matrix respects the clone structure \(\mathcal{C}\) if \(\mathcal{C}(i)=\mathcal{C}(i^{\prime})\implies E_{ij}=E_{i^{\prime}j}\,\forall \,i,i^{\prime},j.\) The 3-tuple \(\{T,\mathcal{C},E\}\) defines a _grounded schema_, the tuple \(\{T,\mathcal{C}\}\) defines an _ungrounded schema with clone structure_ and \(T\) alone is a _schema_[15].
#### 2.2.1 Fast rebinding by attending to surprise
Often, environment changes are localized such that most of the latent structure and observation mapping is preserved while just a few observations need to be rebound: for example, just replacing the carpet in a room while the wall colors remain the same, or getting exposed to a new word in a familiar context. This insight can be utilized to derive an algorithm that focuses the update of the emission matrix only to those observations that were found surprising by the existing model.
Suppose that at test time, a grounded schema \(\{T,\mathcal{C},E^{0}\}\) is exposed to a sequence with novel observations. Algorithm 1 proposes a fast procedure to update the emission matrix to the new observations by only performing local updates, and to bind the updated emission matrix to the existing schema \(T\), defining a new grounded schema \(\{T,\mathcal{C},E^{\text{rb}}\}\). We call this process _fast rebinding_.
Given a prompt \((x_{1},\ldots,x_{N})\) and a surprise threshold, Algorithm 1 proceeds by (a) identifying emission matrix entries that need updating then (b) updating these entries using the Expectation-Maximization (EM) algorithm [18]. The conditional probability \(P(X_{n}=j\mid x_{\setminus n})\) of tokens at timestep \(n\) given all other tokens is used to identify timesteps and latent states that are surprising. Step 3 identifies _anchors_, i.e., latent states corresponding to observations that are correctly predicted with high confidence: anchors are not rebound. Step 4 identifies _candidates for rebinding_ as latent states (a) not among the anchor states and (b) corresponding to timesteps at which observations are incorrectly predicted with high confidence. Finally, instead of re-learning the whole emission matrix [15, Appendix A.2], Step 5 (detailed in Appendix A) _locally updates_ the emission matrix by only applying EM on the latent states and timesteps identified in Step 4. As a result, only a small subset of rows differ between \(E^{0}\) and \(E^{\mathrm{rb}}\). Protected rows correspond to either (a) anchors in the current prompt or (b) slots not relevant to the current prompt but possibly relevant to future observations. Section 6 discusses how a similar mechanism could be implemented in transformers.
**Input:** Grounded schema \(\{T,\mathcal{C},E^{0}\}\), pseudocount \(\epsilon\), prompt \((x_{1},\ldots,x_{N})\), surprise probability threshold \(p_{\text{surprise}}\).
**Output:** Rebound emission matrix \(E^{\mathrm{rb}}\)
```
1: Define \(\tilde{E}^{0}\propto E^{0}+\epsilon\), with normalized rows.
2: For timestep \(n\), use the emission matrix \(\tilde{E}^{0}\) to compute \(P(X_{n}=j\mid x_{\setminus n})=P(X_{n}=j\mid x_{1},\ldots,x_{n-1},x_{n+1}, \ldots,x_{N})\), \(\forall j\leq N_{\text{obs}}\)
3: Identify latent states and timesteps that can act as anchors: \(\mathcal{A}=\left\{(i,n)\mid\,P(X_{n}=x_{n}\mid x_{\setminus n})>p_{\text{ surprise}},\text{ and }\mathcal{C}(i)=x_{n}\right\}\)
4: Identify latent states to be rebound (and their timesteps): \(\mathcal{R}=\left\{(i,n)\mid\,P(X_{n}=j\mid x_{\setminus n})>p_{\text{ surprise}},\,j\neq x_{n},\,\,(\cdot,n)\notin\mathcal{A},\,(i,\cdot)\notin \mathcal{A}\text{ and }\mathcal{C}(i)=j\right\}\)
5: Fix \(T\), and use EM to update the emission matrix (initialized with \(E^{0}\), and without using any pseudocount) by only using the beliefs for latent states \(i\) and timesteps \(n\) such that \((i,n)\in\mathcal{R}\).
Figure 1: **A**. Inducing room structure (_cognitive maps_) from sequential sensory observations is challenging due to perceptual aliasing – local observations do not identify locations uniquely. **B**. Cloned hidden Markov models (HMMs) [7]. Each observation is mapped to multiple clone states in the latent space. **C**. Graphical model for CSCGs [6], extending cloned HMMs by incorporating actions. CSCGs utilize the latent space to overcome the perceptual aliasing problem. Different clones learn to represent different temporal contexts to recover the latent structure of the room. **D**. Learned CSCG for the room shown in panel A consists of a latent transition matrix and an emission matrix. We visualize the model in two ways: (i) stacking clone states for respective observations into columns, and (ii) clones as nodes in a transition graph, colored with their respective emissions. **E**. The emission matrix imposes a _slot_ structure – nodes within the same slot are constrained to bind to the same observation. A new environment with the same latent structure but different observation mapping (Room 2) can be learned quickly by freezing the transition graph and slot structure, and learning a new emission matrix by rebinding slots to a new set of observations. **F**. CSCG for a _different room_ learned purely through rebinding.
After rebinding, we complete the prompt by performing MAP inference conditioned on the provided prompt in the rebound CSCG. We run the max-product algorithm [19] forward (the backward messages are all uniform) thus generating a series of MAP observations for the tokens following the prompt. We stop once we generate a delimiter token. See Algorithm 2 in Appendix B for details.
## 3 Outline of the overall argument using CSCG
### Context-dependent latent representations and transitive generalization
The clone structure of CSCGs allows context-based separation and appropriate blending for language modeling. For example, the sense of the word "bank" in "bank robber" is different from the one in "river bank". CSCG learning disambiguates these contexts in the latent space by wiring them to different clones to improve predictive accuracy. In Fig. 2A, the sentences "river bank resort", and "one bank robber" use different clones of "bank". Sequences can have probabilistic branching: "one bank robber" can terminate at "\n", or continue to "eating at river bank resort" or "eating bread and honey", or "eating bread and butter at river bank resort" (Fig. 2B). CSCGs also allow the merging of contexts that result in transitive generalization: even if training data has only the sequences "bread and butter", and "milk and honey", if they go through the same clone state "and", the model will generalize to "bread and honey" and "milk and butter", assigning non-zero probability to those sequences. Due to the combination of context-sensitive separation and transitivity, related topics, concepts, and algorithms get clustered into sub-networks that pass through the same clones. A prompt's context would activate its sub-network, and transitive generalization allows for prompts that are not exact memorizations. As we show in Section 4, the Bayesian inference perspective on ICL [3] corresponds to this context-sensitive and transitively generalizing storage and retrieval alone, and is insufficient to explain the ICL properties we consider in the next sections.
### Learning flexible schemas (template circuits) and rebinding
Just like learning room layouts, CSCG can learn automata circuits [20] for sequence-to-sequence (seq2seq) algorithms. See Fig. 2 for CSCG circuits for computing parity, copying a sequence, and reversing sequences of multiple lengths. The list reversal circuit in Fig. 2E is bound to the specific symbols \(A,B,C,D,E\) used in training. For use as a template, slots in this graph must be able to appropriately bind to contents (arbitrary symbols) that occur in context at test time [8; 21]. The rebinding mechanism (formalized in Algorithm 1) can intuitively be understood as operating based on prediction errors - when the latent context strongly predicts the latent state corresponding to a time instant, but the actual observation is mismatched, rebinding adjusts the emission matrix to wire all the clones of that latent state to the surprising observation. Such a mechanism to mix and gate previous knowledge with new content allows circuits learned during training to become flexible templates
Figure 2: **A.** CSCGs allow both separation of contexts and transitive generalization. The word “bank” is wired to multiple clones corresponding to the different contexts it is used in. If “milk and honey”, and “bread and butter” are seen in training, transitive generalization occurs if they get wired through the same “and” clone: “bread and honey” and “milk and butter” appear as valid sequences. **B.** Probabilistic branching & merging of sequences. **C – F.** Exemplar CSCG circuits for copying a sequence, parity operation, reversing a list with exactly five elements, reversing lists with a variable number of elements. **G**. Rebinding to new observations: dashed gray arrows correspond to old emissions while green arrows correspond to newly rebound emissions.
with slots that can dynamically bind to new inputs as required. For example, in the list reversal schema in Fig. 2F, tokens "["], and "]" are prior contents that detect the beginning and end of the list - these act as anchors for grounding the schema in the observations. Probabilistic branching based on the end of list token "]" allows for length generalization, whereas absorbing arbitrary symbols into the slots corresponding to \(A,B,C,D,E\) allows the algorithm to generalize to new symbols. Fig. 2G illustrates the outcome of this rebinding mechanism where the slots emitting \(A,B,C,D,E\) are respectively rebound to symbols \(K,M,N,P,R\) from the input prompt. Similarly, in the sentence "I wrote in a notebook using a dax", rebinding can absorb the new token "dax" into the context by binding it to a clone corresponding to "pencil" or "pen", and use the new word in those contexts.
### Instruction-based or content-based retrieval and completion of tasks
Zero-shot task recognition as content-based retrieval using rebinding:Many striking examples of zero-shot learning involve recognizing tasks from prompts, and repeating them on new inputs. For example, given a prompt "Input: [p, q, r, s] Output: [p, p, q, q, r, r, s, s]; Input: [l, m, n, o] Output: [l, l, m, m, n, n, o]" LLMs can infer the task as repeating the elements of the sequence, and apply that to complete the output for a new input prompt even when the tokens "p, q, r, s, l, m, n, o" were not seen during training, in association with this task. The rebinding mechanism offers a natural explanation for this. Given the prompt, expectation maximization (EM) [18] simultaneously evaluates the different rebindings to multiple latent algorithm schemas to infer the best binding, which is then applied to complete the query prompt.
Instruction-based retrieval:When algorithms are trained with prefixed language instructions, CSCGs learn instruction sub-networks that directly point to the circuits that represent the algorithms (see Section 4.2). The algorithm can be retrieved by direct prompting with language instructions that can be significantly different from training instructions due to transitive generalization and rebinding.
### Emergence
We hypothesize and empirically demonstrate in Section 4, that emergence is explainable as the combined effects of the above properties (context-separation, transitive generalization, schema-formation, and rebinding), model capacity, and patterns in the data. Training on a bigger dataset results in the induction of more templates that might not have occurred in the smaller dataset. Learning the schematic circuits for more complex algorithms or more patterns in the data requires greater model capacity because overparameterization helps in the optimization process.
## 4 Results
We substantiate the above argument using empirical results on three datasets: (a) the GINC benchmark introduced in [3], (b) a suite of algorithm learning tasks that we introduce in our LIALT datasets, and (c) a zero-shot word usage induction task on a CSCG language model.
### Context-sensitive retrieval on GINC dataset matches Bayesian inference explanation
**Dataset:** The GINC dataset [3] introduced for studying ICL, is generated from a uniform mixture of five factorial HMMs [22]. Each factorial HMM is referred to as a _concept_. A document is created by concatenating independent sentence samples from a concept. The in-context test prompts have examples of lengths \(k\in\{3,5,8,10\}\), varying in number from \(n=0\) to \(n=64\), with \(2500\) prompts for each setting \((k,n)\). Each prompt uniformly selects a concept, samples \(n-1\) examples \(x^{(1)}_{:k},\ldots,x^{(n-1)}_{:k}\) of length \(k\), and one example \(x^{(n)}_{:k-1}\) of length \(k-1\). The in-context task is to infer the most likely last token of the last example, i.e., \(\operatorname*{argmax}_{x^{(n)}_{k-1}}p\left(x^{(n)}_{k-1}|x^{(1)}_{:k},\ldots,x^{(n-1)}_{:k},x^{(n)}_{:k-1}\right)\). Since the vocabulary is shared among different latent concepts, observations in GINC are aliased like in natural language, and solving the task requires the model to disambiguate the aliased observations to correctly infer the latent concepts.
**Training:** We train a single CSCG with \(50\) clones on the GINC dataset for \(100\) full-batch EM iterations using a pseudocount [6] of \(\epsilon=10^{-2}\). Given a test prompt, CSCG infers the most likely hidden sequence for that prompt, then predicts the next most likely observation.
**Results:** CSCG learns different latent sub-networks corresponding to the five latent concepts in the GINC dataset ( Fig. 3A), and inference on a supplied prompt retrieves the correct latent sub-network (Fig. 3C). Increasing the prompt length improves the localization of the sub-network and the particular states within the sub-network. Figure 3C visualizes the decoded latent state distribution for an example prompt in the zero-shot setting (\(n=0\)). The decoding starts out uncertain, and improves as the prompt gets longer. This localization (on the graph) results in effective schema retrieval, and hence accurate prompt completion. Figure 3B[left] reports the in-context accuracy--defined as the average ratio of correct predictions--for each \((k,n)\) pair of the GINC test set. CSCG in-context accuracy matches the patterns exhibited by LSTMs and transformers in [3], while slightly improving their performance. Fig. 3B also shows that a CSCG with larger capacity, i.e. with \(50\) clones per token, better separates the latent concepts and significantly outperforms a CSCG with only \(10\) clones per token. Fig. 9[left] in Appendix C displays the CSCG in-context confidence: for larger contexts, CSCG is better at disambiguating aliasing and the averaged predictions probabilities are higher. Finally, Fig. 9[right] shows that similarly to the transformer and LSTM in [3], CSCG fails at ICL when test prompts are sampled from concepts unseen during training. The GINC results match the context-based retrieval argument in Section 3.1: ICL in this setting is the retrieval of a shared latent concept between the prompt and the model. By using the long-range coherence of concepts in the training documents, the model learns to separate concepts into different latent representations. Despite the train and prompt distribution mismatch [3], CSCG succeeds at prompt completion because the representation allows transitive mixing.
### Learning schemas for seq2seq algorithms and generalization using rebinding
**Training dataset:** To test the ability of CSCG to learn _algorithms_ that generalize to novel inputs not seen during training, we construct the Language Instructed Algorithm Learning Tasks (LIALT) dataset. The LIALT training set contains demonstrations of \(13\) list and matrix algorithms displayed in Fig. 4A[top-left]. A demonstration consists of a multi-word language instruction--each algorithm has five different instructions--followed by \(10\) input-output examples of that algorithm. See Tables 2 & 3 in Appendix D.1 for the complete list of instructions used. For each instruction, the dataset contains \(20\) demonstrations. Within a demonstration, the language instruction and the examples are separated by a "/" delimiter. Demonstrations are separated by a "\(\backslash\)n" delimiter. The input lists and matrices values are created by uniformly sampling from a vocabulary of \(676\) tokens, created by random pairings of uppercase letters. List operations examples vary in length from \(3\) to \(6\), and the matrix operations are of sizes \(2\times 2\) or \(3\times 3\). Fig. 4A [bottom-left] shows the training data format.
Figure 3: **A.** Visualizing the transition graph of a CSCG with \(50\) clones trained on the GINC dataset from [3]. The clones cluster into five groups – one per _concept_. **B.**[Left] In-context accuracy averaged over the GINC test dataset (with \(95\%\) confidence intervals (CIs) as in [3], for the same model For contexts of \(8\) and \(10\) tokens, the model predicts the most likely next token at least \(95\%\) of the time—including in the zero-shot regime. [Right] In-context accuracy decreases when we reduce the number of clones to \(10\)—for \(k\in\{8,10\}\) it drops from above \(95\%\) to below \(75\%\). The numerical values are reported in Appendix C, Table 1. **C.** Decoded latent state distributions (increasing intensities of black for higher density) for the CSCG with 50 clones, for an \(n=0\) & \(k=10\) prompt “o y w r m r o y aj”, when truncated to different lengths (\(k=2,3,5,8,10\)). Longer prompts improve latent state estimation—resulting in better concept retrieval, and next token prediction.
Test dataset:LIALT has two test datasets, respectively containing: (a) instruction-based retrieval prompts, and (b) example-based retrieval prompts. An instruction-based retrieval test prompt consists of a natural language instruction followed by a single input. An example-based retrieval test prompt consists of a first input-output example of an algorithm, without any natural instruction, followed by a second input. All the lists and matrices in the two test datasets contain novel tokens. For both types of prompts, the in-context task is to predict the algorithm's output when applied to the (last) input. Note that for an example-based prompt, CSCG has to infer the algorithm used from the first example. Each test set contains \(100\) prompts, constructed by uniformly sampling instructions, and list or matrix tokens. Fig. 4A [right] shows the formats of these two test sets.
Training:For each token, a CSCG allocates its number of clones proportionally to the number of distinct contexts in the training data in which it occurs2. We parameterize CSCG capacity via this proportionality factor - the "overallocation ratio". We train CSCGs for an increasing sequence of overallocation ratios on the training data with \(500\) EM iterations and a pseudocount of \(\epsilon=10^{-6}\). After running EM, we run \(10\) iterations of Viterbi training [23].
Footnote 2: As the same token might occur in different contexts in the training data, knowing the context allows predicting the sequence of following tokens, up to the next "/" delimiter.
Figure 4: **A.** [Top-left] List and matrix algorithms used in the LIALT dataset. Format of the training set [bottom-left] and examples of the two LIALT test sets [right]. **B.** Example of a learned circuit for the “reverse” algorithm, displayed by stacking clones [left] or unrolling them [right].
Figure 5: [Left] In-context accuracy (ICA) with \(95\%\) CIs after a single EM iteration, as a function of the overallocation ratio for a CSCG trained on LIALT and averaged [top] on the instruction-based LIALT test set [bottom] on the example-based LIALT test set. ICA increases with model capacity. [Right] ICA with standard errors per task on the two LIALT test sets: for each task, overparametrization improves performance. Invisible bars indicate zero accuracy for the respective combinations of model and task. All the numerical values are in Appendix D.3. Figure 11 in the Appendix visualizes the same quantities after EM convergence; the similarity demonstrates that the fast rebinding algorithm is not just localized in its updates, but also rapid.
**Results:** CSCGs with sufficient model capacity successfully learn the algorithms in the training set, and rebinding generalizes those algorithms to novel tokens. Fig. 4B shows the extracted circuit for the list reversal algorithm. Fig. 5[left] presents the in-context accuracy of CSCGs (using \(\epsilon=10^{-6}\) and \(p_{\text{surprise}}=0.1\)) on the two LIALT test sets: the best performing CSCG (a) successfully rebinds the learned schemas to the test prompts' novel tokens and (b) correctly infers the algorithm from a single input-output pair for example-based prompts. Fig. 5 also shows that model size drives ICL performance [left] even when breaking down the performance by tasks [right].
The learned CSCG (initialized with an overallocation ratio of \(3\)) is visualized in Fig. 10 in the Appendix, using stacked clones. Fig. 6A shows the transition graph using the Kamada-Kawai algorithm [24]. It reveals thirteen loosely connected clusters corresponding to the thirteen algorithms present in the LIALT dataset. Fig. 6B illustrates the rebinding process, with the decoded distributions over latent states of the learned CSCG model, for two different example-based prompts. Even before any rebinding, the identification of anchors and slots already restricts the decoding to schemas compatible with the prompt _structure_--in this case based on brackets & delimiters. However, the structure is insufficient to disambiguate completely between the compatible schemas (list operations corresponding to reversal, circular forward shift, and circular backward shift), and both the chosen prompts result in the same latent state distribution. Hence, the decoded distribution after the first E-step localizes to the three compatible schemas. In the M-step that follows, the slots in all three schemas will be rebound for this prompt. At the end of the first EM iteration, the new bindings for slots in the correct schema will be highly certain given the consistent evidence, while inconsistent evidence will lead to uncertain bindings for the other slots. In the E-step of the second iteration, the respective levels of certainty in the bindings then help promote the correct algorithm schema to become the most likely decoding--and complete the prompt appropriately. Note that a single EM step is sufficient to derive the correct rebinding in these examples. Compare Figs. 5 & 11, and the
Figure 6: **A.** Transition graph of the CSCG model learned on the LIALT dataset, visualized using the Kamada-Kawai algorithm. **B.** Visualizing the inferred probability of the observation at timestep \(n\), conditioned on observations at all other timesteps, before rebinding. This drives the identification of anchors and slots selected for rebinding. **C.** Decoded latent state distributions (and predicted prompt completions) for the two different example-based LIALT prompts specified in subfig. B: (top) before rebinding, and (bottom) after one iteration of EM. Fig. 12 in Appendix D.3.2 extends the same visualization to EM convergence. The left prompt corresponds to the operation of circularly shifting the list forward, and the right prompt corresponds to reversing the list.
tables in Appendix Sec. D.3 for how the in-context completion performance after the first EM step in the rebinding process is very similar to that at the end of EM convergence.
The LIALT results substantiate the arguments we made in Sections 3.2 and 3.3. Bayesian inference of the latent context based on long-term coherence (sufficient for the GINC results in Section 4.1) does not explain the remapping of a latent representation to completely new tokens as required for generalizing on the LIALT algorithms. Without rebinding, even a prompt containing a full-length example of an algorithm but with novel tokens does not retrieve the correct algorithm schema or produce the correct completion based on inference over the latent states alone (Fig. 6B, first row). By contrast, simultaneously inferring the rebindings and the latent states results in accurate retrieval of the algorithm schema and the correct prompt completion (Fig. 6B, second row). CSCGs are thus able to learn seq2seq algorithms and generalize those algorithms to novel tokens using rebinding.
**Emergence:** ICL performance of CSCG on the LIALT dataset shows characteristics attributed to emergence. In-context accuracy has a clear dependency on the level of overparameterization of CSCG, offering evidence in support of our hypothesis in Section 3.4.
### Dax test
In language, the "dax" test [25] is used to demonstrate the capability of a model to absorb the usage of an entirely new word from a single presentation. To test for this capability, we train a CSCG on the PreCo dataset [26], which is a large-scale English dataset for coreference resolution. We then test the model on five word-replaced query prompts, where certain words in the prompts do not appear in the training set. We use Algorithm 1 with \(\epsilon=10^{-6}\) and \(p_{\text{surprise}}=\frac{1}{16}\) to rebind the emission matrix on each of these prompts, each time probing the model for completing a sentence by filling in the blanks (uncertain inputs) using MAP inference. Fig. 7 shows these results.
## 5 Related work
**In-context learning:** Similar to how humans learn by analogy [27] and how synaptic plasticity allows the brain to rapidly adapt to a new task [28], ICL capabilities [1] allows a pre-trained model to learn a new task given only a few examples. [29; 30] showed how demonstrations that explicitly guide the reasoning process improve the ICL performance of transformers on new complex tasks. We clarify below some concepts that should not be confused with ICL, and then discuss some works that aim at understanding ICL and the factors that influence it.
**Supervised learning (SL) and few-shot learning (FSL):** SL approaches learn a mapping that minimizes a loss on the training data: gradient methods are a popular paradigm [31; 32; 33]. In FSL, a model learns to rapidly adapt to a new task from a limited number of supervised examples [34; 35; 36], and performs this same task at inference. In contrast, ICL tasks are only revealed at inference. [37; 38] showed that finetuning transformers on ICL instructions improves their ICL performance.
**Meta-learning:** The meta-learning paradigm aims at learning to adapt to a new task with only a few examples [39; 40; 41] by using multiple learning experiences. In contrast, ICL directly emerges from the pre-trained model. [42; 43] proposed a meta-learning framework for ICL where the model is fine-tuned: it learns to leverage few-shot examples and to adapt to new tasks at inference time.
**How ICL works:**[3] explained ICL as implicit Bayesian inference and constructed the GINC dataset (see Section 4.1) for demonstrating ICL. [44] abstracted ICL as an algorithm learning problem and found that a transformer can implicitly infer a hypothesis function. Similarly, [45] showed that a transformer can be trained to perform ICL of unseen linear functions, with performance
Figure 7: Examples of the dax test on a CSCG trained on the PreCo dataset. In each row, the novel word in red (e.g. “terras”) is absorbed by binding it to the clones of the corresponding word in blue (e.g. “planets”). The CSCG can then use the new token in similar contexts, as demonstrated by the fill-in-the-blanks probing.
comparable to the optimal least squares estimator. [46] showed that, in the linear case, transformers implicitly implement gradient descent and train an implicit linear model on the ICL examples. [4] proposed a dual between transformer attention and gradient methods and suggested pre-trained models as meta-optimizers. They presented ICL as implicit finetuning, where the forward pass on the demonstrative examples produces meta-gradients. Finally, [5] showed the existence of "induction heads" in transformers, that emerge during training, copy previous patterns, and drive ICL capacities.
**What influences ICL:**[1, 47] indicated that LLMs' ICL performance "emerges" then keeps improving when the model size increases. [48] proposed a substitute for the positional encoding, and demonstrated how transformers can learn schemas for algorithmic tasks and generalize to test sequences longer than any seen during training. Some works have highlighted the role of the training data in ICL. [49] showed that ICL emerge when the training data has a large number of rare classes and when examples appear in clusters. while [50] demonstrated that ICL emerge when a model is trained on a combination of multiple corpora, and that low perplexity and ICL performance do not always correlate. [51, 52] found that ICL is highly unstable and is influenced by the prompting template, the selection of in-context examples, and the order of the examples. [53] showed that the ICL performance is driven by the exposure to the label space, the input distribution, and the overall format of the sequence. Similarly, [54] found that selecting ICL examples with closer embeddings to ICL test sample improves ICL performance, and [55] showed that adding explanations in-context improves performance. Finally, [56] recently claimed that the sharp emergence of ICL in larger models might be an artifact of the metrics, not a fundamental property of the model.
## 6 Discussion
With ICL deconstructed into schema learning, schema retrieval, and slot rebinding, an interesting avenue for future work would be to probe various sequence models for how robustly each of these components are manifested - or even construct models around these principles. Here we consider how this framework might map to transformers, where the phenomenon of ICL was originally observed. Unlike CSCGs, transformers buffer the inputs and represent location as positional encoding, allowing attention to gate by the structure of the prompt, along with the contents. Prior explanations [3, 4] do not distinguish the role of sequence positions vis-a-vis contents; we argue that theories might need to emphasize this distinction (see Fig. 8A) to fully understand the inductive biases behind ICL. We conjecture (see Fig. 8B) that layers of the transformer implement multiple mixed templates of positions and content, evaluated at different offsets of a prompt. The template assembly that can auto-regressively match the prompt wins out the competition to gate the content. The rebinding mechanism requires only a few iterations of sparse updates to the emission matrix, and can be temporally "unrolled" into a forward pass, allowing ICL behavior with fixed weights since the slotting process lives in the space of activations.
Coming back to CSCGs, implementations can scale to larger models and datasets by exploiting sparsity and parallelizing computations in the EM steps. Allowing a factorized latent space and adding skip connections would also allow compositionality while enabling scalability. Further, while we have illustrated here the concept of rebinding to attach new symbols to existing slots, rebinding "through time" can also target connections between clones, enabling compositional behavior in-context. We leave these explorations for future research. Our goal here has been to elucidate a general framework for ICL behavior, leveraging the interpretability of CSCGs. We hope this demystifies the ICL behavior observed in LLMs by analogy, showcases avenues for further research on ICL capabilities, and provides broad impetus for interpretable methods.
Figure 8: **A. Learned templates in a transformer could involve content, position, or a mix of both. **B.** Activations in the forward pass of a transformer could be selected among pre-learned templates that mix content and position to achieve ICL without weight changes.
## Acknowledgements
We thank Stephanie Chan, Andrew Lampinen, Anirudh Goyal, Dharshan Kumaran, Neel Nanda and Guangyao Zhou for helpful discussions and comments on the draft.
## References
* [1] Tom Brown et al. "Language models are few-shot learners". In: _Advances in neural information processing systems_ 33 (2020), pp. 1877-1901.
* [2] Taylor Webb, Keith J Holyoak, and Hongjing Lu. "Emergent Analogical Reasoning in Large Language Models". In: _arXiv preprint arXiv:2212.09196_ (2022).
* [3] Sang Michael Xie et al. "An explanation of in-context learning as implicit bayesian inference". In: _arXiv preprint arXiv:2111.02080_ (2021).
* [4] Damai Dai et al. "Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta Optimizers". In: _arXiv preprint arXiv:2212.10559_ (2022).
* [5] Catherine Olsson et al. "In-context learning and induction heads". In: _arXiv preprint arXiv:2209.11895_ (2022).
* [6] Dileep George et al. "Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps". In: _Nature communications_ 12.1 (2021), p. 2392.
* [7] Antoine Dedieu et al. "Learning higher-order sequential structure with cloned HMMs". In: _arXiv preprint arXiv:1905.00507_ (2019).
* [8] Murray Shanahan and Melanie Mitchell. "Abstraction for Deep Reinforcement Learning". In: _Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22_. 2022, pp. 5588-5596.
* [9] Neel Nanda et al. "Progress measures for grokking via mechanistic interpretability". In: _arXiv preprint arXiv:2301.05217_ (2023).
* [10] Kevin Meng et al. "Locating and Editing Factual Associations in GPT". In: _Advances in Neural Information Processing Systems_ 36 (2022).
* [11] Ashish Vaswani et al. "Attention is all you need". In: _Advances in neural information processing systems_ 30 (2017).
* [12] Lonnie Chrisman. "Reinforcement learning with perceptual aliasing: The perceptual distinctions approach". In: _AAAI_. Vol. 1992. Citeseer. 1992, pp. 183-188.
* [13] Judea Pearl. _Causality_. Cambridge university press, 2009.
* [14] Vatsal Sharan et al. "Learning overcomplete hmms". In: _Advances in Neural Information Processing Systems_ 30 (2017).
* [15] J Swaroop Guntupalli et al. "Graph schemas as abstractions for transfer learning, inference, and planning". In: _arXiv preprint arXiv:2302.07350_ (2023).
* [16] Jonas Peters, Dominik Janzing, and Bernhard Scholkopf. _Elements of causal inference: foundations and learning algorithms_. The MIT Press, 2017.
* [17] Daniel Eaton and Kevin Murphy. "Exact Bayesian structure learning from uncertain interventions". In: _Artificial intelligence and statistics_. PMLR. 2007, pp. 107-114.
* [18] Arthur P Dempster, Nan M Laird, and Donald B Rubin. "Maximum likelihood from incomplete data via the EM algorithm". In: _Journal of the royal statistical society: series B (methodological)_ 39.1 (1977), pp. 1-22.
* [19] Judea Pearl. _Probabilistic reasoning in intelligent systems: networks of plausible inference_. Morgan kaufmann, 1988.
* [20] Bingbin Liu et al. "Transformers learn shortcuts to automata". In: _arXiv preprint arXiv:2210.10749_ (2022).
* [21] Oren Kolodny, Arnon Lotem, and Shimon Edelman. "Learning a generative probabilistic grammar of experience: A process-level model of language acquisition". In: _Cognitive Science_ 39.2 (2015), pp. 227-267.
* [22] Zoubin Ghahramani and Michael Jordan. "Factorial hidden Markov models". In: _Advances in Neural Information Processing Systems_ 8 (1995).
* [23] Frederick Jelinek. "Continuous speech recognition by statistical methods". In: _Proceedings of the IEEE_ 64.4 (1976), pp. 532-556.
* [24] Tomihisa Kamada, Satoru Kawai, et al. "An algorithm for drawing general undirected graphs". In: _Information processing letters_ 31.1 (1989), pp. 7-15.
* [25] Haley A Vlach and Catherine A DeBrock. "Remember dax? Relations between children's cross-situational word learning, memory, and language abilities". In: _Journal of memory and language_ 93 (2017), pp. 217-230.
* [26] Hong Chen et al. "PreCo: A large-scale dataset in preschool vocabulary for coreference resolution". In: _arXiv preprint arXiv:1810.09807_ (2018).
* [27] Patrick H Winston. "Learning and reasoning by analogy". In: _Communications of the ACM_ 23.12 (1980), pp. 689-703.
* [28] Katie C Bittner et al. "Behavioral time scale synaptic plasticity underlies CA1 place fields". In: _Science_ 357.6355 (2017), pp. 1033-1036.
* [29] Jason Wei et al. "Chain of thought prompting elicits reasoning in large language models". In: _arXiv preprint arXiv:2201.11903_ (2022).
* [30] Denny Zhou et al. "Least-to-most prompting enables complex reasoning in large language models". In: _arXiv preprint arXiv:2205.10625_ (2022).
* [31] Leon Bottou, Frank E Curtis, and Jorge Nocedal. "Optimization methods for large-scale machine learning". In: _SIAM review_ 60.2 (2018), pp. 223-311.
* [32] Sebastian Ruder. "An overview of gradient descent optimization algorithms". In: _arXiv preprint arXiv:1609.04747_ (2016).
* [33] Diederik P Kingma and Jimmy Ba. "Adam: A method for stochastic optimization". In: _arXiv preprint arXiv:1412.6980_ (2014).
* [34] Yaqing Wang et al. "Generalizing from a few examples: A survey on few-shot learning". In: _ACM computing surveys (csur)_ 53.3 (2020), pp. 1-34.
* [35] Li Fei-Fei, Robert Fergus, and Pietro Perona. "One-shot learning of object categories". In: _IEEE transactions on pattern analysis and machine intelligence_ 28.4 (2006), pp. 594-611.
* [36] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. "Human-level concept learning through probabilistic program induction". In: _Science_ 350.6266 (2015), pp. 1332-1338.
* [37] Jason Wei et al. "Finetuned language models are zero-shot learners". In: _arXiv preprint arXiv:2109.01652_ (2021).
* [38] Victor Sanh et al. "Multitask prompted training enables zero-shot task generalization". In: _arXiv preprint arXiv:2110.08207_ (2021).
* [39] Devang K Naik and Richard J Mammone. "Meta-neural networks that learn by learning". In: _[Proceedings 1992] IJCNN International Joint Conference on Neural Networks_. Vol. 1. IEEE. 1992, pp. 437-442.
* [40] Sachin Ravi and Hugo Larochelle. "Optimization as a model for few-shot learning". In: _International conference on learning representations_. 2017.
* [41] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. "Learning to learn using gradient descent". In: _Artificial Neural Networks--ICANN 2001: International Conference Vienna, Austria, August 21-25, 2001 Proceedings 11_. Springer. 2001, pp. 87-94.
* [42] Sewon Min et al. "Metaicl: Learning to learn in context". In: _arXiv preprint arXiv:2110.15943_ (2021).
* [43] Yanda Chen et al. "Meta-learning via language model in-context tuning". In: _arXiv preprint arXiv:2110.07814_ (2021).
* [44] Yingcong Li et al. "Transformers as Algorithms: Generalization and Stability in In-context Learning". In: ().
* [45] Shivam Garg et al. "What can transformers learn in-context? a case study of simple function classes". In: _Advances in Neural Information Processing Systems_ 35 (2022), pp. 30583-30598.
* [46] Ekin Akyurek et al. "What learning algorithm is in-context learning? investigations with linear models". In: _arXiv preprint arXiv:2211.15661_ (2022).
* [47] Jason Wei et al. "Emergent abilities of large language models". In: _arXiv preprint arXiv:2206.07682_ (2022).
* [48] Yuxuan Li and James L McClelland. "Systematic Generalization and Emergent Structures in Transformers Trained on Structured Tasks". In: _arXiv preprint arXiv:2210.00400_ (2022).
* [49] Stephanie Chan et al. "Data distributional properties drive emergent in-context learning in transformers". In: _Advances in Neural Information Processing Systems_ 35 (2022), pp. 18878-18891.
* [50] Seongjin Shin et al. "On the effect of pretraining corpora on in-context learning by a large-scale language model". In: _arXiv preprint arXiv:2204.13509_ (2022).
* [51] Zihao Zhao et al. "Calibrate before use: Improving few-shot performance of language models". In: _International Conference on Machine Learning_. PMLR. 2021, pp. 12697-12706.
* [52] Yao Lu et al. "Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity". In: _arXiv preprint arXiv:2104.08786_ (2021).
* [53] Sewon Min et al. "Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?" In: _arXiv preprint arXiv:2202.12837_ (2022).
* [54] Jiachang Liu et al. "What Makes Good In-Context Examples for GPT-\(3\)?" In: _arXiv preprint arXiv:2101.06804_ (2021).
* [55] Andrew K Lampinen et al. "Can language models learn from explanations in context?" In: _arXiv preprint arXiv:2204.02329_ (2022).
* [56] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. "Are Emergent Abilities of Large Language Models a Mirage?" In: _arXiv preprint arXiv:2304.15004_ (2023).
Locally updating the emission matrix with the transition matrix fixed
We reuse the same notations as in [15], Appendix A.2. The authors describe the EM algorithm for learning the emission matrix of a CSCG with a fixed transition matrix. In particular their M step defines the new emission matrix as:
\[E(j)=\sum_{n=1}^{N}1_{X_{n}=j}\;\gamma(n)\oslash\sum_{n=1}^{N}\gamma(n),\;\forall j\]
where \(E(j)\) is a column of the emission matrix corresponding to the emission \(j\), \(1_{X_{n}=j}\) is an indicator function, \(\oslash\) is the element-wise division and \(\gamma(n)\) is derived by the authors from the forward and backward probabilities. The \(i\) entry of the vector \(E(j)\) is then defined as:
\[E_{ij}=\frac{\sum_{n=1}^{N}1_{X_{n}=j}\;\gamma_{i}(n)}{\sum_{n=1}^{N}\gamma_{i} (n)}.\]
In contrast, in Section 2.2, Step 5 of Algorithm 1 only updates the row \(i\) for which we can find a pair \((i,n)\in\mathcal{R}\), by only using the beliefs at timestep \(n\). For this row, the \(j\)th entry becomes:
\[E_{ij}=\frac{\sum_{n:\;(i,n)\in\mathcal{R}}1_{X_{n}=j}\;\gamma_{i}(n)}{\sum_{n :\;(i,n)\in\mathcal{R}}\gamma_{i}(n)}.\]
The pseudocount used in Step 1 of Algorithm 1 is an uncertainty parameter that lets the model smooth over incorrect observations. More details of this parameter are available in [6].
## Appendix B Prompt completion algorithm
Algorithm 2 describes the prompt completion algorithm introduced in Section 2.2. It implicitly considers a single action, which takes the next sequence element.
```
0: Grounded schema \(\{T,\mathcal{C},E^{\mathrm{rb}}\}\) with rebound CSCG emission matrix \(E^{\mathrm{rb}}\), delimiter token \(x_{\emptyset}\), prompt \(x^{\text{(prompt)}}=(x_{1},\ldots,x_{N})\)
0: A completed prompt \(x^{\text{(completed)}}=(x_{1},\ldots,x_{N},x_{N+1},\ldots,x_{N+P}=x_{\emptyset})\)
1: Run max-product for MAP inference and return \(z^{\text{MAP}}=(z_{1},\ldots,z_{N})=\operatorname*{argmax}_{z}P(z|x^{\text{( prompt)}})\).
2: Set \(\ell=0\). While \(x_{N+\ell}\neq x_{\emptyset}\), increment \(\ell\leftarrow\ell+1\) and sample the next most likely observation: \(z_{N+\ell}\in\operatorname*{argmax}_{j}T_{z_{N+\ell-1},\;j}\) and \(x_{N+\ell}\in\operatorname*{argmax}_{j}E^{\mathrm{rb}}_{z_{N+\ell},\;j}\).
```
**Algorithm 2** - Prompt completion Additional materials for the GINC dataset
First, we present two additional plots for the GINC experiment.
Second, we present the table of results associated with Fig. 3 for the CSCGs with \(10\) and \(50\) clones.
**CSCG performs better on zero-shot prompts than on few-shot prompts:** We observe that, for short contexts, CSCG in-context accuracy is higher on zero-shot prompts \(n=0\) than on few-shot prompts \(n=1,2,\ldots\). We hypothesize that the difference between the training and the prompt distributions creates a gap that lowers few-shot in-context accuracy. The performance gap disappears
\begin{table}
\begin{tabular}{c c c c} \hline \hline Context length & No. of examples & CSCG with \(10\) clones & CSCG with \(50\) clones \\ \hline \hline \(3\) & \(0\) & \(0.509\) (\(0.020\)) & \(0.534\) (\(0.020\)) \\ & \(1\) & \(0.351\) (\(0.019\)) & \(0.445\) (\(0.019\)) \\ & \(2\) & \(0.366\) (\(0.019\)) & \(0.453\) (\(0.020\)) \\ & \(4\) & \(0.356\) (\(0.019\)) & \(0.468\) (\(0.020\)) \\ & \(8\) & \(0.360\) (\(0.019\)) & \(0.454\) (\(0.020\)) \\ & \(16\) & \(0.354\) (\(0.019\)) & \(0.460\) (\(0.020\)) \\ & \(32\) & \(0.354\) (\(0.019\)) & \(0.441\) (\(0.0219\)) \\ & \(64\) & \(0.369\) (\(0.019\)) & \(0.468\) (\(0.020\)) \\ \hline \(5\) & \(0\) & \(0.682\) (\(0.018\)) & \(0.927\) (\(0.010\)) \\ & \(1\) & \(0.640\) (\(0.019\)) & \(0.927\) (\(0.012\)) \\ & \(2\) & \(0.629\) (\(0.019\)) & \(0.904\) (\(0.012\)) \\ & \(4\) & \(0.654\) (\(0.019\)) & \(0.883\) (\(0.013\)) \\ & \(8\) & \(0.627\) (\(0.019\)) & \(0.894\) (\(0.012\)) \\ & \(16\) & \(0.637\) (\(0.019\)) & \(0.902\) (\(0.012\)) \\ & \(32\) & \(0.634\) (\(0.019\)) & \(0.901\) (\(0.012\)) \\ & \(64\) & \(0.637\) (\(0.019\)) & \(0.899\) (\(0.012\)) \\ \hline \(8\) & \(0\) & \(0.696\) (\(0.018\)) & \(0.969\) (\(0.007\)) \\ & \(1\) & \(0.694\) (\(0.018\)) & \(0.972\) (\(0.007\)) \\ & \(2\) & \(0.686\) (\(0.018\)) & \(0.972\) (\(0.006\)) \\ & \(4\) & \(0.681\) (\(0.018\)) & \(0.978\) (\(0.006\)) \\ & \(8\) & \(0.690\) (\(0.018\)) & \(0.973\) (\(0.006\)) \\ & \(16\) & \(0.686\) (\(0.018\)) & \(0.975\) (\(0.006\)) \\ & \(32\) & \(0.676\) (\(0.018\)) & \(0.968\) (\(0.006\)) \\ & \(64\) & \(0.694\) (\(0.018\)) & \(0.975\) (\(0.007\)) \\ \hline \(10\) & \(0\) & \(0.684\) (\(0.018\)) & \(0.975\) (\(0.006\)) \\ & \(1\) & \(0.705\) (\(0.018\)) & \(0.977\) (\(0.006\)) \\ & \(2\) & \(0.674\) (\(0.018\)) & \(0.971\) (\(0.006\)) \\ & \(4\) & \(0.713\) (\(0.018\)) & \(0.974\) (\(0.006\)) \\ & \(8\) & \(0.690\) (\(0.018\)) & \(0.977\) (\(0.006\)) \\ & \(16\) & \(0.689\) (\(0.018\)) & \(0.977\) (\(0.006\)) \\ & \(32\) & \(0.712\) (\(0.018\)) & \(0.978\) (\(0.006\)) \\ & \(64\) & \(0.690\) (\(0.018\)) & \(0.978\) (\(0.006\)) \\ \hline \hline \end{tabular}
\end{table}
Table 1: In-context accuracy for a CSCG with \(10\) clones and a CSCG \(50\) clones trained on the GINC dataset, averaged (with \(95\%\) confidence intervals) on each each pair \((k,n)\) of context length and number of examples \(n\) of the GINC test set.
Figure 9: [Left] In-context confidence for the CSCG with \(50\) clones on the GINC test dataset, defined as the averaged probability of the predictions. For larger values of \(k\), CSCG correctly infers the context of the aliased observations and is more confident in its predictions. [Right] Similar to the transformer and LSTM reported in [3], CSCG fails to extrapolate and has a low in-context accuracy when the test prompts are sampled from five novel concepts, unseen during training.
for larger contexts \(k\in\{8,10\}\) as they "overpower" the train-test distribution divergence. [3] made a similar observation for transformers. However, their performance gap was also observable for larger contexts.
## Appendix D Additional materials for the LIALT dataset
### Natural language instructions
Tables 2 and 3 present the natural language instructions respectively used for the nine list algorithms and four matrix algorithms of the LIALT dataset. Language instructions are grouped in clusters of five: all five instructions within one cluster describe to the same algorithm. As described in the main text, each demonstration of the LIALT training and first test set uniformly selects one instruction.
### Learned CSCG model
Our next Figure 10 displays the transition graph of the CSCG model trained on the LIALT dataset with an overallocation ratio of 3, with stacked clones for each symbol.
\begin{table}
\begin{tabular}{l} \hline \hline “print the element at index zero of the list” \\ “print the first element from the list” \\ “return the leading element from the list” \\ “find the head element from the list” \\ “feature the starting element from the list” \\ “print the element at index two of the list” \\ “find the third element from the list” \\ “locate the third element from the list” \\ “output the third item from the list” \\ “return the element in third place from the list” \\ “duplicate each list item” \\ “replicate every element in the list” \\ “make a copy of each element in the list” \\ “clone each element in the list” \\ “create a second instance of every element in the list” \\ “print every other member in the list starting with the second member” \\ “retrieve alternate items in the list starting with the second tour” \\ “return every other object in the list starting with the second object” \\ “retrieve every other entry in the list starting with the second entry” \\ “output old indexed elements” \\ “rotate the list elements one place backward” \\ “move the list elements one position to the left” \\ “change the items of the list one position backward” \\ “displace the elements of the list one index backward” \\ “roll the list items one position backward” \\ \hline \hline \end{tabular}
\end{table}
Table 2: Natural language instructions for the list algorithms used in the LIALT dataset
\begin{table}
\begin{tabular}{l} \hline \hline “return the matrix transpose” \\ “reliver the transpose of the matrix” \\ “get the transpond matrix” \\ “compute the transpond form of the matrix” \\ “define the transpond matrix” \\ “find the matrix element in the second row and second column” \\ “find the value in the second row and second column of the matrix” \\ “fetch the matrix element located in row 2 and column \\ “fruit the value at 2 2 in the matrix” \\ “rotrive the matrix element at 2 2” \\ \hline \hline \end{tabular}
\end{table}
Table 3: Natural language instructions for the matrix algorithms used in the LIALT datasetFigure 10: CSCG model learned on the LIALT dataset, visualized with stacked clones.
### Results on the LIALT dataset
#### d.3.1 After a single EM iteration
Presented below are the tables of results associated with Fig. 5. Table 4 contains the in-context accuracies averaged on the entire test set, Table 5 contains the in-context accuracies per task on instructions-based prompts, and Table 6 contains the in-context accuracies per task on example-based prompts.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{Overallocation ratio} & \multicolumn{1}{c}{Instruction-based prompts} & \multicolumn{1}{c}{Example-based prompts} \\ \hline
0.1 & 0.00 (0.00) & 0.00 (0.00) \\
0.3 & 0.16 (0.07) & 0.11 (0.06) \\
1.0 & 0.54 (0.10) & 0.49 (0.10) \\
3.0 & 0.89 (0.06) & 0.93 (0.05) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Average in-context accuracy of each CSCG model—with \(95\%\) confidence intervals—as a function of CSCG overallocation on both (a) the instruction-based LILAT test set and (b) the example-based LIALT test set.
Figure 11: [Left] In-context accuracy (with \(95\%\) CIs) after EM convergence, as a function of the overallocation ratio for a CSCG trained on LIALT and averaged [top] on the instruction-based LIALT test set [bottom] on the example-based LIALT test set. In-context accuracy increases for CSCGs with larger capacities. [Right] In-context accuracy (with standard errors) per task on the two LIALT test sets: for each task, overparametrization improves performance. All the numerical values are in Appendix D.3. Invisible bars indicate zero accuracy for the respective combination of model and task.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Overallocation ratio} \\ \cline{2-5} Task & 0.1 & 0.3 & 1.0 & 3.0 \\ \hline list 1st elem. & 0.00 (0.00) & 0.00 (0.00) & 0.89 (0.10) & 1.00 (0.00) \\ list 2nd elem. & 0.00 (0.00) & 0.60 (0.15) & 0.70 (0.14) & 1.00 (0.00) \\ list 3rd elem. & 0.00 (0.00) & 0.60 (0.15) & 0.60 (0.15) & 1.00 (0.00) \\ list reverse & 0.00 (0.00) & 0.00 (0.00) & 0.70 (0.14) & 1.00 (0.00) \\ list repeat twice & 0.00 (0.00) & 0.00 (0.00) & 0.50 (0.25) & 0.75 (0.22) \\ list all. even & 0.00 (0.00) & 0.00 (0.00) & 0.50 (0.20) & 1.00 (0.00) \\ list alt. odd & 0.00 (0.00) & 0.00 (0.00) & 0.71 (0.17) & 0.86 (0.13) \\ list circ. shift fw. & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.56 (0.17) \\ list circ. shift bw. & 0.00 (0.00) & 0.00 (0.00) & 0.14 (0.13) & 1.00 (0.00) \\ matrix diagonal & 0.00 (0.00) & 0.00 (0.00) & 0.50 (0.18) & 1.00 (0.00) \\ matrix transpose & 0.00 (0.00) & 0.00 (0.00) & 0.50 (0.20) & 1.00 (0.00) \\ matrix roll columns & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.17 (0.15) \\ matrix elem. at idx. & 0.00 (0.00) & 0.50 (0.18) & 1.00 (0.00) & 1.00 (0.00) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Average in-context accuracy by task—with standard errors—as a function of CSCG overallocation on instruction-based prompts.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Overallocation ratio} \\ \cline{2-5} Task & 0.1 & 0.3 & 1.0 & 3.0 \\ \hline list 1st elem. & 0.00 (0.00) & 0.00 (0.00) & 1.00 (0.00) & 1.00 (0.00) \\ list 2nd elem. & 0.00 (0.00) & 0.12 (0.12) & 1.00 (0.00) & 1.00 (0.00) \\ list 3rd elem. & 0.00 (0.00) & 0.38 (0.17) & 1.00 (0.00) & 1.00 (0.00) \\ list reverse & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.88 (0.12) \\ list repeat twice & 0.00 (0.00) & 0.00 (0.00) & 0.50 (0.18) & 0.88 (0.12) \\ list alt. even & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 1.00 (0.00) \\ list alt. odd & 0.00 (0.00) & 0.00 (0.00) & 0.50 (0.18) & 1.00 (0.00) \\ list circ. shift fw. & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.88 (0.12) \\ list circ. shift bw. & 0.00 (0.00) & 0.00 (0.00) & 0.29 (0.17) & 1.00 (0.00) \\ matrix diagonal & 0.00 (0.00) & 0.67 (0.19) & 0.67 (0.19) & 1.00 (0.00) \\ matrix transpose & 0.00 (0.00) & 0.00 (0.00) & 0.43 (0.19) & 1.00 (0.00) \\ matrix roll columns & 0.00 (0.00) & 0.00 (0.00) & 0.00 (0.00) & 0.50 (0.18) \\ matrix elem. at idx. & 0.00 (0.00) & 0.38 (0.17) & 1.00 (0.00) & 1.00 (0.00) \\ \hline \hline \end{tabular}
\end{table}
Table 9: Average in-context accuracy by task—with standard errors—as a function of CSCG overallocation on example-based prompts.
### Example failures
Finally, we present a few examples which illustrate the failure modes of our approach. These are primarily a consequence of imperfections in the learned CSCG model. Each example is presented in the format (prompt, ground truth correct output, actual model response).
1. For these failures, the instruction circuit has been wired to the wrong algorithm circuit (possibly driven by the ambiguity of the forward slash delimiter separating the instruction from the example), resulting in the retrieval of the wrong schema. * output odd indexed elements / [ U V B Q K I ] [ U B K ] / [ V Q I ] / * flip the list / [ S E J ] / [ S E E J J ] * reverse the list / [ R T B ] / [ R R T T B B ] / * mirror the list / [ B A O T ] [ T O A B ] / [ B B A A O O T T ] /
2. For these failures, the schema has been learned incorrectly. * switch the items of the list one position forward / [ L N G X M T ] [ T L N G X M T ] [ T L N G X M T ]... * shift the columns of the matrix to the right / [ [ D Y ] [ V F ] ] [ [ F V ] ] / [ get * / [ Z J B ] [ Z Z J J B B ] / [ B A E F W L ] / [ B B A E F F W W L L ] / * / [ V P X T ] [ P T ] / [ V F J P E W ] / [ F P W ] / [ F P W ] [ F P W ]...
Figure 12: Extension of Fig. 6 to EM convergence. **A.** Visualizing the inferred probability of the observation at timestep \(n\), conditioned on observations at all other timesteps, before rebinding. This drives the identification of anchors and slots selected for rebinding. **B.** Decoded latent state distributions — and predicted prompt completions — for the two different example-based LIAIT prompts specified in subfig. A: (top) before rebinding, (middle) after one iteration of EM, and (bottom) after EM convergence. The left prompt corresponds to the operation of circularly shifting the list forward, and the right prompt corresponds to reversing the list. | ## Review
### Summary
The paper introduces clone-structured causal graphs (CSCGs) as a novel model for understanding in-context learning (ICL) in large language models (LLMs). The authors present experiments demonstrating that CSCGs can successfully replicate certain ICL behaviors by utilizing properties such as context-based separation and transitive generalization. They argue that CSCGs provide insights into the mechanisms behind ICL and compare their performance with standard LLMs across various benchmarks. While the theoretical contributions are noteworthy, the empirical support for some claims, particularly regarding interpretability and connections to transformers, requires further elaboration.
### Strengths
- The presentation of CSCGs as a model of in-context learning is novel and relevant to the community.
- Experiments show that CSCGs can learn meaningful concepts and generalize well, supporting the argument that overparameterization is beneficial.
- The exploration of ICL is timely and contributes to understanding this emerging area.
- The paper provides interesting insights into ICL mechanisms and has targeted, well-organized experiments.
- The LIALT dataset offers valuable benchmarks for evaluating ICL capabilities.
### Weaknesses
- Key claims about interpretability are not substantiated; the experiments lack a meaningful exploration of how CSCGs are more interpretable than existing models.
- Some experiments are superficial or limited in scale, which raises questions about the representativeness of the findings.
- The relationship between CSCGs and transformers is conjectured but not empirically supported, limiting the paper's impact.
- The setup of CSCGs is distant from real-world LLMs, and this gap should be acknowledged.
- The paper lacks a clear limitations section discussing potential challenges of scaling CSCGs to larger datasets.
### Questions
- How does the size of the latent graph scale with list size?
- What is the computational cost of fitting a CSCG with EM, especially in relation to typical pre-training corpora?
- Are the results in the figures representative, or could they be cherry-picked?
- What was the process for identifying schemas in the experiments, and are there principled detection methods?
- How is the transition matrix learned in the CSCG model, and what training data is used for the experiments?
### Soundness
**Score:** 3
**Description:** 3 = good. While the paper presents interesting ideas and a novel model, some claims lack empirical validation, particularly regarding interpretability and connections to existing frameworks.
### Presentation
**Score:** 3
**Description:** 3 = good. The paper is generally well-organized and presents its ideas clearly, but some sections could benefit from improved clarity and detail, particularly regarding limitations.
### Contribution
**Score:** 3
**Description:** 3 = good. The work offers novel insights into ICL and introduces a new model, though some claims require further empirical support to fully establish their significance.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements. The paper is technically solid and impactful, but it would benefit from addressing the weaknesses, particularly regarding empirical validation of key claims.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper presents novel contributions to the understanding of in-context learning through the introduction of CSCGs. While there are some weaknesses related to empirical support for claims and clarity regarding limitations, the overall soundness, presentation, and contribution justify an acceptance decision. The paper is timely and relevant, with implications for future research in the field.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
On the Last-iterate Convergence in Time-varying Zero-sum Games: Extra Gradient Succeeds where Optimism Fails
Yi Feng
[email protected]
&Hu Fu
[email protected]
&Qun Hu
[email protected]
&Ping Li
[email protected]
&Ioannis Panageas
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Sureg
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
[email protected]
&Qun Hu
512021310186@live.
where \(\mathcal{X}\subset\mathbb{R}^{n}\) and \(\mathcal{Y}\subset\mathbb{R}^{m}\) are convex sets, and \(A\) is a \(n\times m\) payoff matrix. The above captures two-player zero-sum games in which \(x^{\top}Ay\) is interpreted as the payment of the "min player" \(x\) to the "max player" \(y\). If \(\mathcal{X}=\mathbb{R}^{n}\) and \(\mathcal{Y}=\mathbb{R}^{m}\) the setting is called _unconstrained_, otherwise it is _constrained_. Soon after the minimax theorem of Von Neumann was established (for compact \(\mathcal{X},\mathcal{Y}\)), learning dynamics such as fictitious play (Brown [1951]) were proposed for solving min-max optimization problems. Blackwell's approachability theorem (Blackwell [1956]) further propelled the field of online learning, which lead to the discovery of several learning algorithms; such learning methods include multiplicative-weights-update method, online gradient descent/ascent and their optimistic variants and extra-gradient methods.
Last Iterate Convergence.There have been a vast literature on whether or not the aforementioned dynamics converge in an average sense or exhibit last-iterate convergence when applied to zero-sum games. Dating back to Nesterov [2005], there have been quite a few results showing that online learning algorithms have last-iterate convergence to Nash equilibria in zero-sum games. Examples include optimistic multiplicative weights update (Daskalakis and Panageas [2019], Wei et al. [2021]), optimistic gradient descent ascent (OGDA) (Daskalakis et al. [2018], Liang and Stokes [2019a]) (applied even to GANs) for unconstrained zero-sum games, OGDA for constrained zero-sum games (Wei et al. [2021], Cai et al. [2022], Gorbunov et al. [2022b]) and extra-gradient methods (Mertikopoulos et al. [2019], Cai et al. [2022], Gorbunov et al. [2022a]) using various techniques, including sum of squares.
Nevertheless, all aforementioned results assume that the underlying repeated zero-sum game remains invariant throughout the learning process. In many learning environments that assumption is unrealistic, see Duvocelle et al. [2022], Mai et al. [2018], Cardoso et al. [2019]) and references therein. One more realistic learning setting is where the underlying game is actually changing; this game is called time-varying. There have been quite a few works that deal with time-varying games, where they aim at analyzing the duality gap or dynamic regret (Zhang et al. [2022]) and references therein for OGDA and variants. However, in all these prior works, last-iterate convergence has not been investigated; the main purpose of this paper is to fill in this gap. We aim at addressing the following question:
_Will learning algorithms such as optimistic gradient descent ascent or extra-gradient exhibit last-iterate converge in time-varying zero-sum games?_
Our contributions.We consider unconstrained two-player zero-sum games with a time-varying payoff matrix (that is the payoff matrix \(A_{t}\) depends on time \(t\))
\[\min_{x\in\mathbb{R}^{n}}\max_{y\in\mathbb{R}^{m}}x^{\top}A_{t}y,\] (Time-varying zero sum game)
in which the payoff matrix \(A_{t}\) varies with time in the following two ways:
* **Periodic games:**\(A_{t}\) is a periodic function with period \(T\), i.e., \(A_{t+T}=A_{t}\).
* **Convergent perturbed games:**\(A_{t}=A+B_{t}\), \(\lim_{t\rightarrow\infty}B_{t}=0\).
In a repeated time-varying zero-sum game, players choose their learning algorithms and repeatedly play the zero-sum game. In the \(t\)-th round, when the players use strategy \((x_{t},y_{t})\), they receive their payoff \(-A_{t}^{\top}x_{t}\) and \(A_{t}y_{t}\).
In this paper we show the following results:
* For **periodic games:** We prove that when two players use extra-gradient, their strategies will converge to the common Nash equilibrium of the games within a period with an exponential rate, see Theorem 3.1 for details. Additionally, we provide an example where optimistic gradient descent ascent and negative momentum method diverge from the equilibrium with an exponential rate, see Theorem 3.2
To the best of our knowledge, this is the first result that provides a clear separation between the behavior of extra-gradient methods and optimistic gradient descent ascent.
* For **convergent perturbed games:** Assuming \(\sum_{t=1}^{\infty}\|B_{t}\|_{2}\) is bounded, we prove that the extra-gradient, optimistic gradient descent ascent, and negative momentum method all converge to the Nash equilibrium of the game defined by payoff matrix \(A\) with a rate determined by \(\{B_{t}\}_{t}\) and singular values of \(A\), see Theorem 3.3 Furthermore, we prove that extra-gradient will asymptotically converge to equilibrium without any additional assumptions on perturbations besides \(\lim_{t\rightarrow\infty}B_{t}=0\), see Theorem 3.4Related work on time-varying games.The closest work to ours that argues about stabilization of mirror descent type dynamics on convergent strongly monotone games is [20]. Most of the literature has been focused on proving either recurrence/oscillating behavior of learning dynamics in time-varying periodic games [17, 18, 19] and references therein or performing regret analysis [13, 14, 15]. In particular, the latter work extends results on RVU [16] bounds to argue about dynamic regret [21].
Technical Comparison.We investigate the last iterate behaviors of learning dynamics through their formulation of linear difference systems. This approach has also been used to establish last iterate convergence results for learning dynamics in time-independent games [19, 18, 17]. One common key point of these works is to prove that a certain matrix has no eigenvalues with modulus larger than \(1\), then the last iterate convergence can be guaranteed by the general results of autonomous difference systems. However, this method cannot be generalized to the time-varying games where the corresponding difference systems are non-autonomous. In particular, the dynamical behavior of a non-autonomous system is not determined by the eigenvalues of a single matrix. In fact, it is difficult to establish convergence/divergence results even for non-autonomous system with special structures, such as periodic or perturbed systems. In this paper, to get such results, we employ both general results in linear difference systems, such as Floquet theorem and Gronwall inequality, and special structures of the difference systems associated to learning dynamics.
Organization.In Section2 we present the necessary background for this work. The main results are stated in Section3. In Section4 we present numerical experiments and in Section5 we conclude with a discussion and propose some future research problems.
## 2 Preliminaries
### Definitions
Zero-sum bilinear game.An unconstrained two players zero-sum game consists of two agents \(\mathcal{N}=\{1,2\}\), and losses of both players are determined via payoff matrix \(A\in\mathbb{R}^{n\times m}\). Given that player \(1\) selects strategy \(x\in\mathbb{R}^{n}\) and player \(2\) selects strategy \(y\in\mathbb{R}^{m}\), player \(1\) receives loss \(u_{1}(x,y)=\langle x,Ay\rangle\), and player \(2\) receives loss \(u_{2}(x,y)=-\langle y,A^{\top}x\rangle\). Naturally, players want to minimize their loss resulting the following min-max problem:
\[\min_{x\in\mathbb{R}^{n}}\max_{y\in\mathbb{R}^{m}}x^{\top}Ay\] (Zero-Sum Game)
Note that the set \(\{(x^{*},y^{*})|A^{\top}x^{*}=0,Ay^{*}=0\}\) represents the set of equilibrium of the game.
Time-varying zero-sum bilinear game.In this paper, we study games in which the payoff matrices vary over time and we define two kinds of such time-varying games.
**Definition 2.1** (Periodic games).: _A periodic game with period \(T\) is an infinite sequence of zero-sum bilinear games \(\{A_{t}\}_{t=0}^{\infty}\subset\mathbb{R}^{n\times m}\), and \(A_{t+T}=A_{t}\) for all \(t\geq 0\)._
Note that the periodic game defined here is the same as Definition 1 in [18] except for the fact that we are considering a discrete time setting. Therefore, we do not make the assumption that payoff entries are smoothly dependent on \(t\).
**Definition 2.2** (Convergent perturbed games).: _A convergent perturbed game is an infinite sequence of zero-sum bilinear games \(\{A_{t}\}_{t=0}^{\infty}\subset\mathbb{R}^{n\times m}\), and \(\lim_{t\rightarrow\infty}A_{t}=A\) for some \(A\in\mathbb{R}^{n\times m}\). Equivalently, write \(A_{t}=A+B_{t}\), then \(\lim_{t\rightarrow\infty}B_{t}=0\). We will refer to the zero-sum bilinear game defined by \(A\) as **stable game**._
Learning dynamics in games.In this paper, we consider three kinds of learning dynamics : optimistic gradient descent ascent (OGDA), extra-gradient (EG), and negative momentum method. All these methods possess the last-iterate convergence property in repeated game with a time-independent payoff matrix, as demonstrated in previous literature. However, here we state their forms within a time-varying context.
**Optimistic gradient descent-ascent.** We study the optimistic descent ascent method (OGDA) defined as follows:
\[x_{t+1} =x_{t}-2\eta A_{t}y_{t}+\eta A_{t-1}y_{t-1},\] (OGDA) \[y_{t+1} =y_{t}+2\eta A_{t}^{\top}x_{t}-\eta A_{t-1}^{\top}x_{t-1}.\]
Optimistic gradient descent ascent method was proposed in (Popov[1980]), and here we choose the same parameters as (Daskalakis et al.[2018]). The last iterate convergence property of OGDA in unconstrained bilinear game with a time-independent payoff was proved in (Daskalakis et al.[2018]). Recently, there are also works analyzing the regret behaviors of OGDA under a time varying setting (Zhang et al.[2022], (Anagnostes et al.[2023])).
**Extra gradient.** We study the extra gradient descent ascent method (EG) defined as follows:
\[x_{t+\frac{1}{2}} =x_{t}-\gamma A_{t}y_{t},\;\;y_{t+\frac{1}{2}}=y_{t}+\gamma A_{t} ^{\top}x_{t},\] (EG) \[x_{t+1} =x_{t}-\alpha A_{t}y_{t+\frac{1}{2}},\;\;y_{t+1}=y_{t}+\alpha A_{ t}^{\top}x_{t+\frac{1}{2}}.\]
Note that the extra-gradient method first calculates an intermediate state before proceeding to the next state. Extra-gradient was firstly proposed in (Korpelevich[1976]) with the restriction that \(\alpha=\gamma\). Here we choose the parameters same as in (Liang and Stokes[2019]), where the linear convergence rate of extra-gradient in the bilinear zero-sum game with time-independent was also proven. Convergence of extra-gradient on convex-concave game was analyzed in (Nemirovski[2004], Monteiro and Svaited[2010]), and convergence guarantees for special non-convex-non-concave game was provided in (Mertikopoulos et al.[2019]).
**Negative momentum method.** We study the alternating negative momentum method (NM), defined as follows:
\[x_{t+1} =x_{t}-\eta A_{t}y_{t}+\beta_{1}(x_{t}-x_{t-1}),\] (NM) \[y_{t+1} =y_{t}+\eta A_{t+1}^{\top}x_{t+1}+\beta_{2}(y_{t}-y_{t-1}),\]
where \(\beta_{1},\beta_{2}\leq 0\) are the momentum parameters.
Applications of negative momentum method in game optimization was firstly proposed in (Cidel et al.[2019]). Note that the algorithm has an alternating implementation: the update rule of \(y_{t+1}\) uses the payoff \(A_{t+1}^{\top}x_{t+1}\), thus in each round, the second player chooses his strategy after the first player has chosen his. It was shown in (Cidel et al.[2019]) that both the negative momentum parameters and alternating implementations are crucial for convergence in bilinear zero-sum games with time-independent payoff matrices. Analysis of the convergence rate of negative momentum method in strongly-convex strongly-concave games was provided in (Zhang and Wang[2021]).
### Difference systems
The analysis of the last-iterate behavior of learning algorithms can be reduced to analyzing the dynamical behavior of the associated linear difference systems (Zhang and Yu[2020], Daskalakis and Panageas[2018]). When the payoff matrix is time-independent, the associated difference systems is autonomous. However, as we are studying games with time-varying payoff matrices, we have to deal with non-autonomous difference systems. In general, the convergence behavior of non-autonomous difference systems is much more complicated than that of autonomous ones (Colonius and Kliemann[2014]).
**Linear difference system.** Given a sequence of iterate matrices \(\{\mathcal{A}_{t}\}_{t=1}^{\infty}\subset\mathbb{R}^{n\times n}\) and initial condition \(X_{0}\in\mathbb{R}^{n}\), a linear difference system has form
\[X_{t+1}=\mathcal{A}_{t}X_{t}.\] (Linear difference system)
If \(\mathcal{A}_{t}\equiv\mathcal{A}\) is a matrix independent of time, the system is called an **autonomous system**, otherwise, it is called a **non-autonomous system**. We care about the asymptotic behavior of \(X_{t}\), that is, what can we say about \(X_{t}\) as \(t\to\infty\).
**Definition 2.3**.: _A point \(X\) is called **asymptotically stable** under the linear difference system if_
\[\exists\ \delta>0,\forall\ Y,\ s.t.\ \|Y-X\|_{2}\leq\delta\Rightarrow\lim_{t\to \infty}\|(\prod_{t=1}^{\infty}\mathcal{A}_{t})Y-X\|_{2}=0.\]
_Moreover, \(X\) is called **exponentially asymptotically stable** if the above limit has an exponentially convergence rate, i.e., \(\exists\ \alpha\in(0,1)\), such that_
\[\|(\prod_{t=1}^{s}\mathcal{A}_{t})Y-X\|_{2}\leq\alpha^{s}\|Y-X\|_{2}.\]
It is well known that for autonomous linear systems, being asymptotically stable is equivalent to being exponentially asymptotically stable, as shown in Thm 1.5.11 in [Colonius and KliemannColonius and Kliemann, 2014]. However, this equivalence does not hold for non-autonomous systems.
Linear difference system has a formal solution \(X_{T+1}=\prod_{t=0}^{T}\mathcal{A}_{t}X_{0}\). However, such a representation does not yield much information about the asymptotic behavior of solution as \(t\to\infty\), except in the case of autonomous system. In this paper, we mainly consider two classes of non-autonomous linear difference systems: if the iterate matrix \(\mathcal{A}_{t}\) is a periodic function of \(t\), the system is called a periodic system; and if \(\mathcal{A}_{t}\) has a limit as \(t\) tends to infinity, the system is called a perturbed system.
**Periodic linear system.** If the iterate matrix \(\mathcal{A}_{t}\) in a linear difference system satisfies \(\mathcal{A}_{t}=\mathcal{A}_{t+T}\), \(\forall t\in\mathbb{Z}\), then the system is called a periodic linear system. Denote \(\widetilde{\mathcal{A}}=\prod_{j=1}^{T}\mathcal{A}_{T-j}\). For a \(T\)-periodic equation, the eigenvalues \(\alpha\in\mathbb{C}\) of \(\widetilde{\mathcal{A}}\) is called the Floquet multipliers, and the Floquet exponents are defined by \(\lambda_{j}=\frac{1}{T}\ln(|\alpha_{j}|)\). The following Floquet theorem characterizes the stability of a periodic linear difference equation.
**Proposition 2.4** (Floquet theorem, [Colonius and KliemannColonius and Kliemann, 2014]).: _The zero solution of a periodic linear difference equation is asymptotically stable if and only if all Floquet exponents are negative._
Although Floquet theorem provides a method to determine whether a periodic system converges in general, considerable further work may be necessary in order to obtain explicit convergence criteria for specific equation. That is because even if we know the modulus of the largest eigenvalue of each \(\mathcal{A}_{t}\), it is usually difficult to compute the the modulus of the largest eigenvalue of each \(\prod_{t=0}^{T}\mathcal{A}_{t}\) due to the complex behavior of eigenvalues under matrix multiplication.
**Perturbed linear system.** If the iterative matrix \(\mathcal{A}_{t}\) in a linear difference system satisfies \(\mathcal{A}_{t}=\mathcal{A}+\mathcal{B}_{t}\) and \(\lim_{t\to\infty}\mathcal{B}_{t}=0\), then the system is called a perturbed linear system. The convergence behavior of a perturbed linear system is not clear in the literature. A general result in this direction is the following Perron's theorem:
**Theorem 2.5** (Perron's theorem, [PitukPituk2002]).: _If \(X_{n}\) is a solution of a perturbed linear system, then either \(X_{n}=0\) for all sufficient large \(n\) or \(\rho=\lim_{n\to\infty}\sqrt[n]{\|X_{n}\|_{2}}\) exists and is equal to the modulus of one of the eigenvalues of matrix \(\mathcal{A}\)._
This result can only guarantee the convergence of \(X_{n}\) when all eigenvalues of \(\mathcal{A}\) have modulus smaller than 1. In this case, \(\lim_{n\to\infty}X_{n}=0\). However, for analyzing the non-autonomous linear systems associated to learning dynamics, it is not sufficient as we will show that the stablized matrix of these systems generally has eigenvalues equal to \(1\).
## 3 Main results
In this section, we present our main results. We present the relationship between learning dynamics and linear difference equation in Section5.1 investigate the last-iterate behaviors of learning dynamics in a periodic game in Section5.2 and investigate the last-iterate behaviors of learning dynamics in a convergent perturbed game in Section5.3 Proofs are deferred to the appendix.
### Learning dynamics as linear difference systems
Formalizing learning dynamics as linear difference systems is useful for studying their dynamical behaviors. In the following, we present the formulation of optimistic gradient descent ascent, extra-gradient, and negative momentum method as linear difference systems.
**Proposition 3.1**.: _Optimistic gradient descent ascent can be written as the following linear difference system:_
\[\begin{bmatrix}x_{t+1}\\ y_{t+1}\\ x_{t}\\ y_{t}\end{bmatrix}=\begin{bmatrix}I&-2\eta A_{t}&0&\eta A_{t-1}\\ 2\eta A_{t}^{\top}&I&-\eta A_{t-1}^{\top}&0\\ I&0&0&0\\ 0&I&0&0\end{bmatrix}\begin{bmatrix}x_{t}\\ y_{t}\\ x_{t-1}\\ y_{t-1}\end{bmatrix}. \tag{2}\]
_Extra-gradient can be written as the following linear difference system :_
\[\begin{bmatrix}x_{t+1}\\ y_{t+1}\end{bmatrix}=\begin{bmatrix}I-\alpha\gamma A_{t}A_{t}^{\top}&-\alpha A_ {t}\\ \\ \alpha A_{t}^{\top}&I-\gamma\alpha A_{t}^{\top}A_{t}\end{bmatrix}\begin{bmatrix} x_{t}\\ y_{t}\end{bmatrix}. \tag{3}\]
_Negative momentum method can be written as the following linear difference system:_
\[\begin{bmatrix}x_{t+1}\\ y_{t+1}\\ x_{t}\\ y_{t}\end{bmatrix}=\begin{bmatrix}(1+\beta_{1})I&-\eta A_{t}&-\beta_{1}I&0\\ \\ \eta(1+\beta_{1})A_{t+1}^{\top}&(1+\beta_{2})I-\eta^{2}A_{t+1}^{\top}A_{t}&- \eta\beta_{1}A_{t+1}^{\top}&-\beta_{2}I\\ \\ I&0&0&0\\ \\ 0&I&0&0\end{bmatrix}\begin{bmatrix}x_{t}\\ y_{t}\\ x_{t-1}\\ y_{t-1}\end{bmatrix}. \tag{4}\]
It is easy to verify that these linear difference systems are equivalent to their corresponding learning dynamics by directly writing down the matrix-vector product.
We will also refer to the iterative matrix of these linear difference systems as the **iterative matrix of their corresponding learning dynamics**. In the following, we will study the convergence/divergence behaviors of these linear difference systems under the condition that \(\{A_{t}\}_{t}\) is a periodic game or convergent perturbed game. Note that although an intermediate step \((x_{t+\frac{1}{2}},y_{t+\frac{1}{2}})\) is required in extra-gradient, it is eliminated in 3.
### Periodic games
Recall that in a periodic game with period \(\mathcal{T}\), the payoff matrices \(\{A_{s}\}_{s=1}^{\infty}\) satisfy \(A_{s+\mathcal{T}}=A_{s}\) for any \(s>0\). Define
\[\Delta_{i,t}=\|A_{i}^{\top}x_{t}\|_{2}+\|A_{iy}t\|_{2} \tag{5}\]
for \(i\in[\mathcal{T}]\). As in [4], we use \(\Delta_{i,t}\) as a measurement of the distance between the current strategy \((x_{t},y_{t})\) and a Nash equilibrium of the zero-sum bilinear game defined by the kernel space of payoff matrix \(A_{i}\). Note that if \((x^{*},y^{*})\) is a Nash equilibrium of the game defined by \(A_{i}\), then \((A_{i}^{\top}x^{*},A_{i}y^{*})=(0,0)\), and
\[\Delta_{i,t}=\|A_{i}^{\top}(x_{t}-x^{*})\|_{2}+\|A_{i}(y_{t}-y^{*})\|_{2}\]
thus when strategy is close to equilibrium, \(\Delta_{i,t}\) will be small. Moreover, \(\Delta_{i,t}=0\) if and only if \((x_{t},y_{t})\) is an equilibrium. In this section, we will consider the convergence/growth rate of \(\Delta_{i,t}\) at a function of \(t\).
We firstly consider Extra-gradient method. Denote the iterative matrix in the linear difference form of Extra-gradient 3 as \(\mathcal{A}_{t}\). In the following theorem, we prove that if two players use the Extra-gradient method, their strategies will converge to the common Nash equilibrium of games in the period with an exponential rate.
**Theorem 3.1**.: _When two players use extra-gradient in a periodic games with period \(T\), with step size \(\alpha=\gamma<\frac{1}{\sigma}\) where \(\sigma=\max\{\sigma^{\prime}|\sigma^{\prime}\) is a singular value of \(A_{i}\) for some \(i\in[\mathcal{T}]\}\). Then_
\[\Delta_{i,t}\in\mathcal{O}\left((\lambda_{*})^{t/\mathcal{T}}\cdot\text{Poly} (t)\right),\ \forall i\in[\mathcal{T}]\]
_where \(\lambda_{*}=\max\{\ |\lambda|\ |\ \lambda\ \text{is an eigenvalue of}\ \ \left(\prod_{t=1}^{\mathcal{T}}\mathcal{A}_{t}\right),\lambda\neq 1\}\), and \(\lambda_{*}<1\)._
Note that in a periodic game, the iterative matrices of learning dynamics are also periodic, which means that the learning difference systems of these learning dynamics are periodic systems. According to Floquet theorem, see proposition 2.4 the study of dynamical behaviors of a periodic system can be reduced to the autonomous system whose asymptotic behavior is determined by \(\prod_{t=1}^{\mathcal{T}}\mathcal{A}_{t}\).
The key point on the proof of Theorem 3.1 is an observation that the iterative matrix of extra-gradient is a normal matrix, which makes it possible to calculate the Jordan normal form of \(\prod_{t=1}^{\mathcal{T}}\mathcal{A}_{t}\) for arbitrary large \(\mathcal{T}\). The details of proof are left to Appendix B
In the following theorem, we provide an example demonstrating that when two players use the optimistic gradient descent ascent or negative momentum method, their strategy will diverge at an exponential rate, regardless of how they choose their step-sizes and momentum parameters.
**Theorem 3.2**.: _Consider a periodic game with period \(\mathcal{T}=2\), and described by the following payoff matrix_
\[A_{t}=\left\{\begin{array}{ll}\left[1,-1\right],&t\;\;\text{is}\;\;\text{odd }\\ \left[-1,1\right],&t\;\;\text{is}\;\;\text{even}\end{array}\right. \tag{6}\]
_with \(x_{t}\in\mathbb{R},\;y_{t}\in\mathbb{R}^{2}\). If two players use optimistic gradient descent ascent or negative momentum method, then regardless of how they choose step sizes and momentum parameters, we have_
\[\sup_{s\in[t]}\Delta_{i,s}\in\Omega(\lambda^{t}),\;\text{where}\;\lambda>1,\; i\in\{1,2\}.\]
_Here \(\lambda\) is determined by the largest modulus of the eigenvalues of the iterative matrix of optimistic gradient descent ascent or negative momentum method._
To prove theorem 3.2 we directly calculate the characteristic polynomials of the iterative matrices products in one period for optimistic gradient descent ascent and negative momentum method under the game defined by \(\textcircled{\raisebox{-0.86pt}{\rule{0.86pt}{0.86pt}}}\). To show that these systems have an exponential divergence rate, it is sufficient to demonstrate that their characteristic polynomials have a root with modulus larger than \(1\). We achieve this by using the Schur stable theorem, which is also employed to demonstrate the last iterate convergence of several learning dynamics in time-independent game (Zhang and Yu [2020]). The proof is deferred to Appendix C
In Figure 1, we present the function curves of \(\Delta_{1,t}\) for these three types of game dynamics under the periodic game defined by \(\textcircled{\raisebox{-0.86pt}{\rule{0.86pt}{0.86pt}}}\). From the experimental results, extra-gradient converges, while both optimistic gradient descent ascent and negative momentum method diverge.
### Convergent perturbed game
Recall that the payoff matrix of a convergent perturbed game has form \(A_{t}=A+B_{t}\), where \(\lim_{t\rightarrow\infty}B_{t}=0\), and we refer to the zero-sum game defined by payoff matrix \(A\) as the stable game. We denote
\[\Delta_{t}=\|A^{\top}x_{t}\|_{2}+\|Ay_{t}\|_{2}, \tag{7}\]
thus \(\Delta_{t}\) measures the distance between the strategy \((x_{t},y_{t})\) in the \(t\)-th round and the Nash equilibrium of the stable game defined by \(A\). Moreover, \(\Delta_{t}=0\) if and only if \((x_{t},y_{t})\) is an equilibrium of the stable game.
Figure 1: Function curves of \(\Delta_{1,t}\) of the game presented in Theorem 3.2 Extra-gradient converges, while the other two methods diverge.
In the literature on linear difference systems, a common assumption that needs to be added for convergence guarantee is the following bounded accumulated perturbations (BAP) assumption (Benzaid and Lutz [1987], Elaydi et al. [1999], Elaydi and Gyori [1995]):
\[\sum_{t=0}^{\infty}\lVert B_{t}\rVert_{2}\;\text{is bounded}.\] (BAP assumption)
In the following theorem, we prove that under BAP assumption, all three learning dynamics considered in this paper will make \(\Delta_{t}\) converge to \(0\), with a rate dependent on the vanishing rate of \(B_{t}\).
**Theorem 3.3**.: _Assume that the BAP assumption holds, i.e., \(\sum_{t=0}^{\infty}\lVert B_{t}\rVert_{2}\) is bounded, and let \(\sigma\) be the maximum modulus of the singular value of payoff matrix \(A\), then with parameters choice:_
* _for extra-gradient with step size_ \(\alpha=\eta<\frac{1}{2\sigma}\)_,_
* _for optimistic gradient descent ascent with step size_ \(\eta<\frac{1}{2\sigma}\)_,_
* _for negative momentum method with step size_ \(\eta<\frac{1}{\sigma}\) _and momentum parameters_ \(\beta_{1}=-\frac{1}{2}\) _and_ \(\beta_{2}=0\)_,_
_we have \(\Delta_{t}\) converge to \(0\) with rate \(\mathcal{O}(f(t))\). Here_
\[f(t)=\max\{\lambda^{t},\sum_{i=t/2}^{\infty}\lVert B_{i}\rVert_{2}\},\]
_and \(\lambda\in(0,1)\) is determined by the eigenvalues of the iterative matrix of corresponding learning dynamics and the payoff matrix \(A\) of the stable game._
There are two main ingredients in the proof of Theorem 3.3 firstly, we show that the iterative matrices associated with these learning dynamics are diagonalizable; secondly, these matrices do not have eigenvalues with modulus larger than \(1\). Moreover, we prove a general results which states any linear difference system satisfying these two conditions will converge. The details of proof are left to Appendix E
**Remark 3.2**.: _The BAP assumption can be converted into a constraint on the vanishing rate of \(B_{t}\): if \(B_{t}\) has a vanishing rate like \(\mathcal{O}(\frac{1}{t^{1+\epsilon}})\), for some arbitrary small \(\epsilon>0\), then \(\sum_{t=0}^{\infty}\lVert B_{t}\rVert_{2}\) is bounded. We also note that the condition for \(B_{t}\) to vanish at a rate \(\mathcal{O}(\frac{1}{t^{1+\epsilon}}),\;\forall\epsilon>0\) is necessary to ensure convergence in general linear difference system. For example, consider the 1-dimension system \(x_{t}=(1+\frac{1}{t})x_{t-1}\) where a \(\mathcal{O}(\frac{1}{t})\) convergence rate of perturbations leads to \(x_{t}\) diverging with a \(\Omega(t)\) rate._
Surprisingly, in the next theorem, we prove that Extra-gradient makes \(\Delta_{t}\) asymptotically converge to \(0\), without making any further assumptions about the converge rate of \(B_{t}\).
**Theorem 3.4**.: _In a convergent perturbed game, if two players use Extra-gradient, there holds \(\lim_{t\rightarrow\infty}\Delta_{t}=0\) with step size \(\alpha=\eta<\frac{1}{2\sigma}\) where \(\sigma\) is the maximum modulus of the singular value of payoff matrix \(A\)._
To prove Theorem 3.4 we first observe that \(\lVert x_{t}\rVert_{2}+\lVert y_{t}\rVert_{2}\) a non-increasing function of \(t\) since the iterative matrix of extra-gradient is a normal matrix. Next, we prove that if Theorem 3.4 doesn't hold, then \(\lVert x_{t}\rVert_{2}+\lVert y_{t}\rVert_{2}\) will decrease by a constant for infinite number of times, thus leading to a contradiction with the non-increasing property and the non-negativity of \(\lVert x_{t}\rVert_{2}+\lVert y_{t}\rVert_{2}\). The proof is deferred to Appendix E
## 4 Experiments
In this section we present numerical results for our theoretical results in Section E
**Experiments on Theorem 3.1** We verify Theorem 3.1 through a period-\(3\) game, the payoff matrices are chosen to be
\[A_{1}=\begin{bmatrix}1&2\\ 2&4\end{bmatrix},A_{2}=\begin{bmatrix}3&7\\ 7&1\end{bmatrix},A_{3}=\begin{bmatrix}4&2\\ 4&2\end{bmatrix}.\]We run extra-gradient and optimistic gradient descent ascent on this example, both with step size = 0.01, the experimental results are presented in Figure 2. We can see extra-gradient (left) makes \(\Delta_{i,t}\) converge, while optimistic gradient descent ascent (right) makes \(\Delta_{i,t}\) diverge. This result supports Theorem 3.1 and provides a numerical example of the separation of extra-gradient and optimistic gradient descent ascent in periodic games.
**Experiments on Theorem 3.3** We verify Theorem 3.3 by examples:
\[A=\begin{bmatrix}2&3\\ 4&6\end{bmatrix},B_{1,t}=B\cdot t^{-1.1},B_{2,t}=B\cdot t^{-4},\;\text{and}\;,B _{3,t}=B\cdot t^{-8}\]
where \(B=[[-15,70],[-90,90]]\). The step size is chosen to be \(0.005\). The initial points are chosen to be \(x_{0}=[15,13],x_{-1}=[11,3]\) and \(y_{0}=[35,1],y_{-1}=[35,1]\). The experimental results are presented in Figure 3 all of the three dynamics make \(\Delta_{t}\) converge to \(0\), and slower convergence rate of perturbations can decelerate the convergence rate of learning dynamics, thus support the convergence result in Theorem 3.3. We also observe that the convergence rate is faster than the upper bound provided in the theorem. Therefore, we conjecture the bound of convergence rate can be improved.
**Experiments on Theorem 3.4** We verify Theorem 3.4 by two group of examples. The perturbations are :
\[B_{1,t}=B\cdot t^{-0.4},\;B_{2,t}=B\cdot t^{-0.3},\;B_{3,t}=B\cdot t^{-0.2},\]
and
\[B_{4,t}=B\cdot\log(t)^{-1.8},\;B_{5,t}=B\cdot\log(t)^{-1.5},\;B_{6,t}=B\cdot \log(t)^{-1.3}.\]
where \(B=[[-10,10],[-10,10]]\). The payoff matrix of stable game is chosen to be \(A=[[2,3],[4,6]]\). Note that the perturbations do not satisfy (BAP assumption). The experimental results are shown in Figure 4. We can see all these curves converge to \(0\), thus support the result in Theorem 3.4 Furthermore, we can observe that large perturbations may lead to more oscillations during the convergence processes, which in turn slows down the rate of convergence. We present more experiments in Appendix C
Figure 3: Values of \(\Delta_{t}\) for extra-gradient (left), optimistic gradient descent ascent (middle), negative momentum method (right).
Figure 2: Function curves of \(\Delta_{t}\) for extra-gradient (left), and optimistic gradient descent ascent (right).
## 5 Discussion
In this paper, we study the last-iterate behavior of extra-gradient, optimistic gradient descent ascent, and negative momentum method in two types of time-varying games : periodic game and convergent perturbed game. In the case of periodic game, we prove that extra-gradient will converge to a Nash equilibrium while other two methods diverge. To the best of our knowledge, this is the first result that provides a clear separation between the behavior of extra-gradient methods and optimistic gradient descent ascent. In the case of convergent perturbed game, we prove all three learning dynamics converge to Nash equilibrium under the BAP assumption, which is commonly used in the literature on dynamical systems.
Our results leave many interesting open questions. Firstly, is the BAP assumption necessary for ensuring convergence in optimistic gradient descent ascent and negative momentum method? Secondly, the bound of convergence rate in Theorem 3.3 may rather slight as we shown in experiments section. Obtaining a tighter bound on the convergence rate is an important future research problem. Thirdly, can the results here be generalized to other settings, such as constrained zero-sum game?
## Acknowledgement
Yi Feng is supported by the Fundamental Research Funds for the Central Universities. Ioannis Panageas wants to thank a startup grant. Xiao Wang acknowledges Grant 202110458 from SUFE and support from the Shanghai Research Center for Data Science and Decision Technology.
## References
* Anagnostides et al. (2023) Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, and Tuomas Sandholm. On the convergence of no-regret learning dynamics in time-varying games. _arXiv preprint arXiv:2301.11241_, 2023.
* Benzaid and Lutz (1987) Z Benzaid and DA Lutz. Asymptotic representation of solutions of perturbed systems of linear difference equations. _Studies in Applied Mathematics_, 77(3):195-221, 1987.
* Blackwell (1956) David Blackwell. An analog of the minimax theorem for vector payoffs. In _Pacific J. Math._, pages 1-8, 1956.
* Brown (1951) George W Brown. Iterative solution of games by fictitious play. _Act. Anal. Prod Allocation_, 13(1):374, 1951.
* Cai et al. (2022) Yang Cai, Argyris Oikonomou, and Weiqiang Zheng. Finite-time last-iterate convergence for learning in multi-player games. In _NeurIPS_, 2022.
* Cardoso et al. (2019) Adrian Rivera Cardoso, Jacob D. Abernethy, He Wang, and Huan Xu. Competing against nash equilibria in adversarially changing zero-sum games. In _Proceedings of the 36th International Conference on Machine Learning, ICML 2019_, volume 97 of _Proceedings of Machine Learning Research_, pages 921-930. PMLR, 2019.
* Colonius and Kliemann (2014) Fritz Colonius and Wolfgang Kliemann. _Dynamical systems and linear algebra_, volume 158. American Mathematical Society, 2014.
* Colonius et al. (2019)Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. _Advances in neural information processing systems_, 31, 2018.
* Leibniz-Zentrum fur Informatik, 2019.
* Daskalakis et al. (2018) Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. Training GANs with Optimism. In _Proceedings of ICLR_, 2018.
* Duvocelle et al. (2022) Benoit Duvocelle, Panayotis Mertikopoulos, Mathias Staudigl, and Dries Vermeulen. Multiagent online learning in time-varying games. _Mathematics of Operations Research_, 2022.
* Elaydi and Gyori (1995) S Elaydi and I Gyori. Asymptotic theory for delay difference equations. _Journal of Difference Equations and Applications_, 1(2):99-116, 1995.
* Elaydi et al. (1999) Saber Elaydi, Satoru Murakami, and Etsuyo Kamiyama. Asymptotic equivalence for difference equations with infinite delay. _Journal of Difference Equations and Applications_, 5(1):1-23, 1999.
* Fiez et al. (2021) Tanner Fiez, Ryann Sim, Stratis Skoulakis, Georgios Piliouras, and Lillian Ratliff. Online learning in periodic zero-sum games. _Advances in Neural Information Processing Systems_, 34, 2021.
* Gidel et al. (2019) Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Remi Le Priol, Gabriel Huang, Simon Lacoste-Julien, and Ioannis Mitliagkas. Negative momentum for improved game dynamics. In _The 22nd International Conference on Artificial Intelligence and Statistics_, pages 1802-1811. PMLR, 2019.
* Gorbunov et al. (2022a) Eduard Gorbunov, Nicolas Loizou, and Gauthier Gidel. Extragradient method: O(1/K) last-iterate convergence for monotone variational inequalities and connections with cocoercivity. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera, editors, _International Conference on Artificial Intelligence and Statistics, AISTATS 2022, 28-30 March 2022, Virtual Event_, volume 151 of _Proceedings of Machine Learning Research_, pages 366-402. PMLR, 2022a.
* Gorbunov et al. (2022b) Eduard Gorbunov, Adrien Taylor, and Gauthier Gidel. Last-iterate convergence of optimistic gradient method for monotone variational inequalities. In _NeurIPS_, 2022b.
* Korpelevich (1976) Galina M Korpelevich. The extragradient method for finding saddle points and other problems. _Matecon_, 12:747-756, 1976.
* Liang and Stokes (2019) Tengyuan Liang and James Stokes. Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. _AISTATS_, 2019a.
* Liang and Stokes (2019) Tengyuan Liang and James Stokes. Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. In _The 22nd International Conference on Artificial Intelligence and Statistics_, pages 907-915. PMLR, 2019b.
* Mai et al. (2018) Tung Mai, Milena Mihail, Ioannis Panageas, Will Ratcliff, Vijay V. Vazirani, and Peter Yunker. Cycles in zero-sum differential games and biological diversity. In Eva Tardos, Edith Elkind, and Rakesh Vohra, editors, _Proceedings of the 2018 ACM Conference on Economics and Computation, Ithaca, NY, USA, June 18-22, 2018_, pages 339-350. ACM, 2018.
* Mertikopoulos et al. (2019) Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, and Georgios Piliouras. Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net, 2019.
* Monteiro and Svaiter (2010) Renato DC Monteiro and Benar Fux Svaiter. On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. _SIAM Journal on Optimization_, 20(6):2755-2787, 2010.
* Nemirovski (2004) Arkadi Nemirovski. Prox-method with rate of convergence o (1/t) for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems. _SIAM Journal on Optimization_, 15(1):229-251, 2004.
* Nemirovski et al. (2019)Yurii Nesterov. Smooth minimization of non-smooth functions. _Math. Program._, 103(1):127-152, 2005.
* Pituk (2002) Mihaly Pituk. More on poincare's and perron's theorems for difference equations. _The Journal of Difference Equations and Applications_, 8(3):201-216, 2002.
* Popov (1980) Leonid Denisovich Popov. A modification of the arrow-hurwicz method for search of saddle points. _Mathematical notes of the Academy of Sciences of the USSR_, 28:845-848, 1980.
* Roy et al. (2019) Abhishek Roy, Yifang Chen, Krishnakumar Balasubramanian, and Prasant Mohapatra. Online and bandit algorithms for nonstationary stochastic saddle-point optimization. _CoRR_, abs/1912.01698, 2019.
* Syrgkanis et al. (2015) Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E. Schapire. Fast convergence of regularized learning in games. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada_, pages 2989-2997, 2015.
* Wei et al. (2021) Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Linear last-iterate convergence in constrained saddle-point optimization. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021.
* Zhang and Wang (2021) Guodong Zhang and Yuanhao Wang. On the suboptimality of negative momentum for minimax optimization. In _International Conference on Artificial Intelligence and Statistics_, pages 2098-2106. PMLR, 2021.
* Zhang and Yu (2020) Guojun Zhang and Yaoliang Yu. Convergence of gradient methods on bilinear zero-sum games. In _International Conference on Learning Representations_, 2020.
* Zhang et al. (2022) Mengxiao Zhang, Peng Zhao, Haipeng Luo, and Zhi-Hua Zhou. No-regret learning in time-varying zero-sum games. In _International Conference on Machine Learning, ICML 2022_, volume 162 of _Proceedings of Machine Learning Research_, pages 26772-26808. PMLR, 2022.
* Zinkevich (2003) Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Tom Fawcett and Nina Mishra, editors, _Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA_, pages 928-936. AAAI Press, 2003. | ## Review
### Summary
This paper investigates the last-iterate convergence of three algorithms (Optimistic Gradient Descent Ascent, Extra Gradient, and Negative Momentum Method) in two types of time-varying zero-sum games: periodic and convergent perturbed games. It demonstrates that the Extra Gradient algorithm converges in periodic games, while the other two algorithms may diverge. Additionally, it shows that all algorithms converge in convergent perturbed games. The paper includes thorough theoretical analysis and supporting experimental results, contributing to the understanding of learning dynamics in zero-sum games.
### Strengths
- The problem studied is natural and clearly explained, with strong results, including the interesting split in behavior between the algorithms.
- The paper is well-written, organized, and easy to follow.
- The authors provide a significant contribution to the understanding of last-iterate convergence in time-varying games.
- The analysis is sound, and the empirical results validate the theoretical insights.
- The work leverages technical innovations from the literature on non-autonomous linear difference systems.
### Weaknesses
- The classes of games studied could benefit from better motivation. There is a lack of clarity on their significance beyond technical convenience.
- Some results lack intuitive explanations, particularly regarding the distinct behaviors of the algorithms.
- Figures could be improved for clarity, as some are not well-plotted.
- The notion of equilibrium is inconsistently defined in the paper.
- The negative momentum method may not apply in strictly online settings where both players must act first.
### Questions
- What is the range of $t$ in Theorem 3.1, and can the authors clarify the implications of choosing $t=T$?
- Could the authors provide more intuitive explanations for the convergence properties of the different algorithms?
- What specific reasons justify focusing on periodic and convergent perturbed games?
- Do the results extend to more general cases of time-varying games?
- Can the authors discuss whether the optimistic gradient descent ascent and negative momentum methods converge in the same instances as shown in the experiments?
### Soundness
**Score:** 15
**Description:** 3 = good; The theoretical foundation is solid with clear proofs and sound analysis.
### Presentation
**Score:** 15
**Description:** 3 = good; The paper is well-organized and accessible, though it could benefit from improved clarity in figures and terminology.
### Contribution
**Score:** 14
**Description:** 3 = good; The work represents a meaningful advancement in the field, though there are suggestions for broader applications.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; The paper is technically solid with moderate-to-high impact, though it has minor issues that need addressing.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents original research with significant implications for understanding learning dynamics in zero-sum games. Despite some minor weaknesses and questions regarding motivation and clarity, the overall soundness and contribution of the work outweigh these concerns, justifying an acceptance decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# HotBEV: Hardware-oriented Transformer-based Multi-View 3D Detector for BEV Perception
Peiyan Dong\({}^{1}\)1, Zhenglun Kong\({}^{1*}\), Xin Meng\({}^{1*}\), Pinrui Yu\({}^{1}\), Yifan Gong\({}^{1}\), Geng Yuan\({}^{2}\),
Hao Tang\({}^{3}\)2, Yanzhi Wang\({}^{1}\)
\({}^{1}\)Northeastern University, \({}^{2}\)University of Georgia, \({}^{3}\)Carnegie Mellon University
{dong.pe, kong.zhe, yanz.wang}@northeastern.edu
Equal ContributionCorresponding Author
Footnote 1: footnotemark:
###### Abstract
The bird's-eye-view (BEV) perception plays a critical role in autonomous driving systems, involving the accurate and efficient detection and tracking of objects from a top-down perspective. To achieve real-time decision-making in self-driving scenarios, low-latency computation is essential. While recent approaches to BEV detection have focused on improving detection precision using Lift-Splat-Shoot (LSS)-based or transformer-based schemas, the substantial computational and memory burden of these approaches increases the risk of system crashes when multiple on-vehicle tasks run simultaneously. Unfortunately, there is a dearth of literature on efficient BEV detector paradigms, let alone achieving realistic speedups. Unlike existing works that focus on reducing computation costs, this paper focuses on developing an efficient model design that prioritizes actual on-device latency. To achieve this goal, we propose a latency-aware design methodology that considers key hardware properties, such as memory access cost and degree of parallelism. Given the prevalence of GPUs as the main computation platform for autonomous driving systems, we develop a theoretical latency prediction model and introduce efficient building operators. By leveraging these operators and following an effective local-to-global visual modeling process, we propose a hardware-oriented backbone that is also optimized for strong feature capturing and fusing. Using these insights, we present a new hardware-oriented framework for efficient yet accurate camera-view BEV detectors. Experiments show that HotBEV achieves a 2%\(\sim\)23% NDS gain, and 2%\(\sim\)7.8% mAP gain with a 1.1\(\times\)\(\sim\)3.4\(\times\) speedups compared to existing works on V100; On multiple GPU devices such as GPU GTX 2080 and the low-end GTX 1080, HotBEV achieves 1.1\(\times\)\(\sim\)6.3\(\times\) faster than others. The code is available at HotBEV
## 1 Introduction
Recently, there has been a growing interest in 3D object detection (also known as BEV perception tasks) from multi-camera images, especially in the context of autonomous driving. While LiDAR-based methods have made remarkable progress, camera-only-based approaches have gained extensive attention due to their lower expenses and ability to identify color-based road elements such as traffic lights and signs [1]. Mainstream camera-only-based approaches for multi-camera 3D object detection tasks can be broadly classified into two research domains: 1) 3D reconstruction detectors, which project 2D features from the image view into the bird's-eye view (BEV) to extract intrinsic and extrinsic information [2][3][4][5][6][7]. However, these approaches suffer from degradation due to the inaccuracy of the depth information predicted from 2D features. 2) 3D projection detectors,which encode information such as 3D position and object queries into 2D features and then sample corresponding features for end-to-end 3D bounding box prediction [11]. These approaches overcome the problem of inaccuracies in the BEV features caused by the prediction of depth values or distributions. In particular, 3D projection detectors are considered one of the dominant future multi-view BEV perception techniques.
Despite the impressive performance exhibited by 3D projection detectors, their on-device inference speed is limited by various factors. i) Detectors employing a CNN-based backbone often demonstrate inferior detection performance compared to those with a transformer-based backbone [10]. Additionally, transformers are known to possess computation and memory complexities [11], which obstruct scalability, particularly when deploying them on resource-limited devices. ii) Many prior works [12][13] prioritized efficient model design by only relying on hardware-agnostic metrics like computation FLOPs, neglecting to account for the actual hardware's performance, such as the actual inference latency. Current detectors lack optimizations for specific hardware deployments, including factors like memory access cost, degree of parallelism, and compiler characteristics, all of which significantly impact hardware throughput during inference. The quadratic memory complexity associated with original self-attention operations further exacerbates the disparity between theoretical FLOPs and actual hardware performance. Notably, most current in-vehicle chips adopt the GPU architecture, which poses greater challenges and requires further investigation despite its essential role. iii) To bridge the gap between the theoretical and practical efficiency of deep models, researchers have started considering the real latency in the network design phase. Hardware profiling [12][13], which is generally utilized to estimate on-device speed, is dedicated and time-consuming. And profiling should be reduce once the model structure, size, and hardware type change. Thousands of measurements are required for each operation to ensure data correctness. In contrast, there is a need for a practical, plug-and-play model capable of efficiently estimating inference latency, specifically designed for general GPU architectures.
In this paper, we propose a hardware-oriented transformer-based framework (HotBEV) for camera-only 3D detection tasks, which achieves both higher detection precision and remarkable speedup across both high-end GPUs and low-end GPUs (see Figure[1]). Firstly, we propose a theoretical latency prediction model by considering the algorithm, the scheduling strategy, and the hardware properties. Given a target GPU, we directly use the latency, rather than the computation FLOPs, to guide our algorithm design. Then we perform a latency breakdown of major modules in classic camera-only detectors and figure out that the backbone is usually the speed bottleneck. After benchmarking
Figure 1: (a) Comparison with previous backbone methods in real-world object detection. HotBEV captures the global/local information better than others. (b) HotBEV aims to search for specialized model designs for target GPU devices.
Figure 2: The trade-off between performance (mAP/NDS) and hardware efficiency (FPS) for different methods on the nuScenes test dataset. HotBEV outperforms SOTA efficient transformer-based detectors on both precision and efficiency. Models are evaluated on NVIDIA V100.
* We propose a latency-aware design methodology for 3D object detection based on BEV perception.
* We propose a hardware-oriented backbone that excels at capturing and fusing features.
* According to our analysis of efficient operators and strong feature modeling, we propose a new hardware-oriented framework for the BEV detector.
* We conduct experiments to showcase the superior inference accuracy of HotBEV compared to SOTA efficient BEV detectors while also highlighting its significant hardware efficiency.
## 2 Related Work
Current multi-view 3D object detectors can be divided into two schemas: LSS-based schema [3] and transformer-based schema.
**LSS-based schema**. BEVDet [2] is the first study that combines LSS and LiDAR detection head which uses LSS to extract BEV feature and LiDAR detection head to propose 3D bounding boxes. By introducing previous frames, BEVDet4D [3] acquires the ability of velocity prediction. For the above works, the models are complex, so many hardware platforms do not support some inside operators.
**Transformer-based schema**. BEVDepth [2] uses LiDAR to generate depth GT for supervision and encodes camera intrinsic and extrinsic parameters to enhance the model's ability of depth detection. DETR3D [2] extends DETR [11] into 3D space, using a transformer to generate 3D bounding boxes. Following the DETR3D, PETR [3] converts 3D coordinates into 3D position embedding to perceive the 3D scene. And BEVFormer [11] uses deformable transformer [11] to extract features from images and cross attention to link the characteristics between different frames for velocity prediction. Although better performance is achieved through these works, the computation cost (e.g., over 170 GFLOPs) or realistic speed has yet to be optimized. We are the first to work on transformer-based 3D detectors to explore comparable performance and fast inference speed on diverse hardware platforms while maintaining acceptable detection accuracy.
**Hardware-aware network design.** Several existing works have addressed the issue of realistic latency during the network design phase, exploring two distinct directions. Firstly, some researchers evaluate speed directly on targeted devices and derive guidelines for developing hand-designed efficient models, as demonstrated by [12]. Secondly, others employ the technique of neural architecture search (NAS) to search for fast models, exemplified by [13]. Nevertheless, conducting speed tests on various hardware platforms for our proposed structures can be exceedingly labor-intensive due to the extensive range of candidates and their corresponding properties. Additionally, obtaining precise
Figure 3: **Device speed breakdown. Left: The latency breakdown of representative Camera-only Detectors (Input resolution: 3×256x704 for PETRv2, others 3×900x1600). Middle and Left: Results are obtained on NVIDIA V100, GeForce GTX 2080 Ti, and GeForce GTX 1080 Ti. The on-device speed for frequently used backbone and various operators is reported. (Input resolution: 3×224×224) standard backbones on high-end (V100) and low-end (GTX 1080 Ti) GPUs, we propose efficient operators and fusion techniques for model on-device implementation. Based on these operators and the process of vision modeling, we design a hardware-oriented backbone with strong feature enhancement by information interaction between model stages. Then we extend the latency-aware design methodology to other parts, such as image embedding and decoder, and propose the basic design paradigm of HotBEV. Finally, guided by the latency prediction model, we generate the family of HotBEV through a standard search algorithm. Experiments show that our proposed framework can achieve a 2%\(\sim\)23% NDS gain and 2%\(\sim\)7.8% mAP with a 1.1×3.4× speedups compared to existing works on V100 (see Figure 2). On multiple GPU devices such as GPU GTX 2080 HotBEV can reach 1.1×6.3× faster than other models; for the low-end GTX 1080, our framework can achieve 1.1×4.9× faster than others. Overall, our contributions are summarized as follows:
results for small-scale structures on low-end devices presents its own set of challenges. In contrast, our latency prediction model directly estimates the inference speed on specific computing platforms, circumventing these difficulties.
## 3 Methodology
In this section, we present our approach to efficiently and accurately predict the on-device latency of an architecture on the target device using the latency prediction model. Additionally, we detail efficient operators and fusion implementation techniques. According to the process of vision modeling, we introduce an efficient embedding module and a hardware-oriented backbone that excels at capturing and fusing features. Then our proposed basic design paradigm for HotBEV is based on these efficient operators, the process of vision modeling, and the detection flow of DETR3D [2]. Finally, guided by the latency prediction model, we generate a family of models known as HotBEV through a standard search algorithm [18]. Note that we also adapt the Temporal Aligned Module (TAM) in [19] to efficiently improve performance and robustness.
### Latency-aware Design
#### 3.1.1 Latency Prediction Model
We introduce a novel latency prediction model, denoted by \(E\), which enables direct prediction of the latency of runtime design choices on any target GPU. This allows for the efficient identification of optimal model settings on corresponding platforms. Specifically, the latency predictor \(E\) considers the hardware properties \(H\), layer type \(T\), channel dimension \(C\), and input granularity \(G\) as input for each design choice and outputs the predicted latency of the block as \(l=E(H,T,C,G)\).
**Hardware model.** We model a hardware device as multiple processing engines (PEs), allowing for parallel computation with varying degrees of parallelism. As illustrated in Figure 2, we represent the memory system using a three-level structure [20, 21], which includes: 1) off-chip memory, 2) on-chip global memory, and 3) memory in the PEs. This hardware model enables accurate prediction of the latency of data movement and computation.
**Latency prediction modeling.** It includes three steps: 1) _Input/output shape definition._ The initial step is to calculate the input and output shapes. 2) _Operation mapping to hardware._ Based on our hardware model, we first divide the output feature map into multiple tiles and assign each tile to a separate PE for parallel processing. 3) _Latency estimation._ We evaluate the latency of each tile, which comprises both data movement and computation latency: \(l\)=\(l_{data}+l_{compute}\). For \(l_{data}\), we leverage our hardware model (see Figure 2) and compute the sum of input and output data movement latencies as \(l_{data}\)=\(l_{in}+l_{out}\). These latencies are estimated based on hardware bandwidth and input granularity \(G\) (equivalent to resolution scale). We assume that each PE moves the required input feature maps and weights just once to compute an output tile. For \(l_{compute}\), we use the maximum throughput of FP32 computation on PEs and the FLOPs required to compute a single output tile. The total computation latency can be determined by considering the number of tiles and PEs. We test three types of hardware devices, NVIDIA V100, NVIDIA GTX 2080, and NVIDIA GTX 1080. For a more detailed description of our latency prediction model, please refer to Appendix A
#### 3.1.2 Efficient Operators and Fusion Techniques of Model Implementation
The development of efficient network architectures for resource-limited devices has greatly benefited from reduced parameters and floating-point operations (FLOPs) and improved accuracy. However, conventional efficiency metrics, such as FLOPs, overlook memory cost and degree of parallelism. In this study, we aim to improve network runtime speed and detection performance by identifying and modifying building blocks that are not hardware-friendly. To achieve this, we first perform a latency breakdown of major modules in classic camera-only detectors, DETR3D, PETR [3], PETRv2 [19],
Figure 4: **Hardware Modeling. The memory system includes 1) off-chip memory, 2) on-chip global memory, and 3) memory in the PEs.**
BEVDeT [1], and BEVFormer [1], which reveals that the backbone is usually the speed bottleneck (66%\(\sim\)78%). Therefore, we deploy common backbones on high-end (V100) and low-end (GTX 1080 Ti) GPUs and benchmark their speed, as illustrated in Figure 3. We then introduce efficient operators and fusion techniques to improve the speed and detection performance of these networks.
**Convolutional modulation for efficient global modeling.** Instead of calculating the similarity score matrix (attention matrix [2]), we simplify self-attention by modulating the value \(V\) with convolutional features as Figure 5. Our approach uses convolutional modulation rather than self-attention to build relationships since they are more memory-efficient, particularly when processing high-resolution images. Due to the modulation operation, our method differs from traditional residual blocks and can adapt to the input content.
**Fusing BatchNorm into the preceding fully-connected layer.** After analyzing various backbones as shown in Figure 5, we found that LayerNorm (LN) accounts for approximately 10% to 15% of the network's total latency. Dynamic normalization techniques like LN gather running statistics during inference, resulting in delays that impact speed. On the other hand, BatchNorm (BN) is more memory and computation-efficient, as it is fused with the preceding fully-connected layer, reducing data movement and computation load (see Figure 4). To this end, we modified the WMSA/SWMSA in Swin [10] by replacing LN-Linear with Linear-BN, which we refer to as WMSA\({}_{bn}\)/SWMSA\({}_{bn}\). By switching to a BN-based design, we achieved a 13% to 15% speedup with no significant loss in accuracy (less than 0.3%). Based on the accuracy analysis in Appendix A one advantage of the window-based self-attention on the efficient implementation compared to other transformer architectures: It can replace the LayerNorm with BatchNorm and then perform the fusing technique with acceptable accuracy. And replacing the LayerNorm directly with the BatchNorm in the original self-attention is difficult since the model training cannot converge. Therefore, we consider WMSA\({}_{bn}\)/SWMSA\({}_{bn}\) as our design candidates.
**Fusing multiple branches into one single branch in reparameterized CNNs.** Multi-branch structures come with increased data movement cost, as the activation values of each branch are saved into PE memory or on-chip memory (if the PE memory is insufficient) to compute the subsequent tensor in the graph. Additionally, the synchronization cost arising from multiple branches impacts the overall runtime [2]. To address these challenges, we use RepCNN [2] as a network component, which fuses multiple branches into more single-branch substructures during inference. This approach enables even distribution of computation among multiple PEs, preventing imbalanced computation overheads associated with multiple branches. The resulting operator fusion improves memory access and parallel computation on multiple PEs.
### Architecture Design
We present HotBEV, a hardware-oriented framework for multi-view 3D detection. Given a set of \(I\)={\(I^{i}\in\mathbb{R}^{3\times H\times W},i=1,\ldots,N\)}, our framework leverages a hardware-oriented backbone (HOB) to extract 2D multi-view features, \(F^{2d}\)={\(F^{2d}_{i}\in\mathbb{R}^{C\times H_{\times}W},i=1,\ldots,N\)}. The _coordinates generator_ generates 3D coordinates, which are aligned with the coordinate system of the frame \(t\) using the temporal aligned module (TAM) concerning the previous frame \(t-1\). Next, the 2D features and 3D coordinates from adjacent frames are concatenated and fed into the 3D _position encoder_ to obtain 3D-aware features. Our hardware-oriented decoder uses these features as keys and values, which interacts with detection queries initialized from standard learnable 3D anchor points with a small MLP network. Finally, the updated queries are input to the detection head for the final prediction. For _coordinates generator_ and _position encoder_, please refer to [3].
#### 3.2.1 Hardware-Oriented Backbone with Strong Feature Enhancement
**Hardware-oriented backbone.** Based on our GPU breakdown [3] we found that the backbone consistently caused the most latency. To maintain detection precision while removing this speed bottleneck, we propose a powerful transformer encoder for feature capturing and fusing, as shown in the backbone design in Figure 5. We divide the backbone into four stages \(S\), following the granularity of data flow in ResNet [2]. Since features scale from the local to the global visual receptive field, we introduce the HOB block design as shown in Figure 5. In each HOB block, we use several consecutive local-wise attention mechanisms to extract local information (texture-level semantics), followed by global attention to enhance global information (abstract-level semantics) in the feature map. Furthermore, we insert a semantic-augmented module (consisting of an upsampling layer and global attention) into every two consecutive HOB blocks (except between Stage 1 and 2) to further enhance low-level semantics in the current stage, as shown in Figure 12. Notably, unlike other methods such as [10, 25], we simultaneously enhance texture-level and abstract-level information in each stage. To reinforce texture-level semantics, we leverage information interaction between stages, not just between layers in the same stage. By using the efficient operators described in Section 8.1.2 we provide the "two-phase design space" (DS) of the HOB backbone:
\[\begin{split}\text{DS}^{1}_{i,local,s=1,2,3},&\in \{\text{RepCNN}^{i},\text{WMSA}^{i}_{bn},\text{SWMSA}^{i}_{bn}\},\\ \text{DS}^{2}_{i,local,s=4},&\in\{\text{WMSA}^{i}_{ bn},\text{SWMSA}^{i}_{bn}\},\\ \text{DS}^{1,2}_{i,global}&\in\{\text{ConvModula}\},\end{split} \tag{1}\]
where _local_ represents the candidates of local-wise attention while _global_ for the global attention; the 1st phase covers \(S_{1}\), \(S_{2}\), \(S_{3}\) of the backbone, and the \(2nd\) phase for \(S_{4}\); \(i\) denotes the \(i^{th}\) block.
**Image embedding module.** Figure 2 reveals that patch embedding is a speed bottleneck on multiple platforms due to its reliance on a non-overlapping convolutional layer with large kernel size and stride. Unfortunately, most compilers and acceleration techniques, such as Winograd, do not support this type of layer well. To address this issue, we propose a faster downsampling method using a convolution stem, which consists of three hardware-efficient 3\(\times\)3 convolutions with a stride of 2. To obtain input embeddings \(L_{0}\) of size \(\frac{H}{G}\times\frac{W}{G}\times C\) for an input image \(x\)\(\in\)\(\mathbb{R}^{H\times W\times 3}\), we divide it into \(\frac{H\times W}{G\times G}\) patches and feed them to the convolution stem:
\[t_{0}^{\frac{H}{G}\times\frac{W}{G}\times D_{1}}=\text{PatchEmbed}(x^{H\times W \times 3}). \tag{2}\]
#### 3.2.2 Hardware-Oriented Decoder
To alleviate the convergence difficulties in the 3D scene, we initialize a set of learnable 3D anchor points with a 0\(\sim\)1 uniform distribution. Then, we input the coordinates of the 3D anchor points into a small MLP network with two linear layers of Sigmoid function in between and generate the initial object query \(Q_{0}\). The decoder is applied to predict the final abstract semantics and generate the bounding box. Our decoder consists of six global attention blocks and is divided into two parts, as shown in Figure 2(d). The front three layers take object queries as the only input and perform the self-attention computation, aiming at separating different objects as [14]; The back three layers take object queries as queries and image features as keys and values. As shown in Figure 12 they perform the cross-attention between object queries and image features to extract the content and position of
Figure 5: **The design of our proposed HotBEV. The workflow follows the DETR3D, and the main components include that (a) Overall backbone: includes four stages, where feature size scales along the stage. To enhance the semantic information of low-level features, one semantic-augmented module (SAM) is added at the end of each stage. (b) Design of the HOB block, which contains multiple local/global attention layers. (c) The convolutional modulation for efficient global modeling. (d) Hardware-Oriented Decoder.**
the object. For efficient modeling, we leverage the convolutional modulation layer as the decoder component (more details in Appendix A).
To generate temporal-aligned 3D coordinates, we adapted the technique TAM [12]. Subsequently, we combine the resulting 3D coordinates with the 2D features and feed them into a 3D _position encoder_[8]. This allows us to obtain temporal-aligned 3D-aware features, which improve the model's localization, attitude, speed estimation, and overall robustness.
### Training
Our design space (Eq. [1]), also as search space is a selection of possible blocks, including RepCNN, WMSA, SWMA (for local-wise attention layer), and convolutional modulation (for global-wise attention layer). We propose a simple, fast yet effective gradient-based search algorithm to obtain a candidate network that just needs to train the supernet for once. To train our supernet, we adopt the Gumble Softmax sampling to get the importance score for the blocks within each search space/stage. During each step of training, a number of blocks are sampled to obtain a subnet structure. The latency of this subnet can be estimated using our latency prediction model.
**Supernet design.** We use the _two-phase design space_ (DS) as the search space and train the supernet for the HOB backbone. We only search the backbone's structures, dimensions \(C\), and input granularity \(G\), while the decoder uses fixed structures with dimensions adapted to the backbone.
**Latency-aware model slimming.** It has three steps:
1) Train the supernet with the Gumble Softmax sampling [20] to get the importance score for the blocks within each DS.
2) Use latency prediction model \(E\) (Section 8.1.1) to estimate the on-device latency of each candidate.
3) Perform latency-aware model slimming on the supernet obtained from step 1) by FPS evaluated with predictor \(E\). Specifically, we use the score \(s\) of each candidates to define the importance score of DS\({}_{i}\) as \(\frac{s^{\text{softmax}}_{i}+s^{\text{WSASA}}_{i}}{s^{\text{softmax}}_{i}}\) for \(S_{1}\), \(S_{2}\), \(S_{3}\), and \(\frac{s^{\text{WSASA}}_{i}}{s^{\text{softmax}}_{i}}\) for \(S_{4}\). We sum up all the scores of all DS within that \(S\) and deduce the score for each \(S\). Then we define the evolution process (all performed in the current least important \(S\)): a) remove the \(1^{st}\) SWMSA ; b) remove the \(1^{st}\) WMSA; c) reduce the width by multiples of 16. Then we predict the current FPS \(f\) and decide by FPS_NDS_drop(-%*f). This process is repeated until the target throughput is reached.
## 4 Experiments
### Datasets and Implementation Details
We conduct comprehensive experiments on the nuScenes dataset [21], which contains 1000 driving scenes of 20-second length for each. The scenes are officially split into 700, 150, and 150 scenes for training, validation, and testing. The dataset includes approximately 1.4M camera images.
\begin{table}
\begin{tabular}{l|c c|c c|c c c c c|c} \hline \hline
**Method** & **Backbone** & **Resolution** & **Frames** & **NDS \(\uparrow\)** & **mAP \(\uparrow\)** & **mATE \(\downarrow\)** & **mASE \(\downarrow\)** & **mAGE \(\downarrow\)** & **mAVE \(\downarrow\)** & **mAGE \(\downarrow\)** & **FPS** \\ \hline \hline REVDet & ResNet50 & 256 × 704 & 1 & 0.379 & 0.298 & 0.725 & 0.279 & 0.589 & 0.86 & 0.245 & 25.8 \\ BEVDesDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDDstDstDstDstDstDDstDstDstDDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDDstDstDstDstDstDDstDDstDstDstDstDstDDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDDstDstDstDstDstDDstDstDstDstDstDDstDstDstDstDstDstDstDDstDstDDstDstDstDstDDstDstDstDstDstDDstDstDstDstDstDstDstDDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDDstDstDstDstDstDstDDstDstDstDDstDstDDstDstDstDstDstDstDDstDDstDstDstDstDstDstDstDstDDstDstDDstDstDstDstDstDstDstDstDDstDstDstDstDstDDstDDstDDstDstDDstDstDstDstDDstDstDstDDstDstDstDDstDDstDstDstDDstDstDDstDDstDstDDstDstDstDstDDstDstDstDstDDstDstDstDstDDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDDstDstDstDstDstDstDstDstDstDDstDstDstDstDstDEvaluation metrics include mean Average Precision (mAP) and five types of true positive metrics (TP metrics): mean Average Translation Error (mATE), mean Average Scale Error (mASE), mean Average Orientation Error (mAOE), mean Average Velocity Error (mAVE), mean Average Attribute Error (mAAE). We also report nuScenes Detection Score (NDS) to capture all aspects of the nuScenes detection tasks. We follow the training recipe from PETR but mainly report results with 24 training epochs to compare with other detection models. Experiments were run on 8 NVIDIA V100.
### Model Accuracy and Speed Performance
**Main results.** As delineated in Table [1] our model is compared to an array of existing camera-based methodologies. These include FCOS3D [23], DETR3D [2], PGD [2], PETR [3], PETRv2 [1], Focal-PETR [30], BEVDet [4], BEVDet4D [5], BEVFormer [1], BEVDepth [2], and PolarDETR [31]. The table lists the backbone type, image resolution, number of frames, inference speed (FPS), and accuracy on the nuScenes validation set for each method. The backbone options include ResNet50 and ResNet101-DCN [25]. Our streamlined models surpass other methods in both performance score and inference speed.
In particular, our compact model, HOB-tiny (256 \(\times\) 704), with 0.487 NDS, 0.362 mAP, and 19.8 FPS, outperforms BEVDet (0.379 NDS, 0.298 mAP, 16.7 FPS), BEVDet4D (0.457 NDS, 0.322 mAP, 16.7 FPS), PETRv2 (0.456 NDS, 0.349 mAP, 18.9 FPS), and BEVDepth (0.475 NDS, 0.351 mAP, 15.7 FPS) with a ResNet50 backbone, achieving a 2.5% \(\sim\) 28.5% NDS gain, 3.1% \(\sim\) 21.5% mAP gain and 4.8% \(\sim\) 26.1% FPS gain. The reductions in the orientation error, velocity error, and attribute error all contribute to enhancing the NDS score. Even with a larger input resolution, our model continues to offer an unrivaled balance of accuracy and speed compared to extant work. For instance, our HOB-nano model (512 \(\times\) 1408) achieves 2% NDS and 13.6% FPS increase with a 1.3% mAP and score difference when compared to Focal-PETR (ResNet101-DCN, 512 \(\times\) 1408). Our HOB-tiny model (512 \(\times\) 1408) surpasses PolarDETR (0.488 NDS, 0.383 mAP, 3.5 FPS) in both speed and accuracy. Our HOB-base model (512 \(\times\) 1408) also surpasses BEVFormer (0.517 NDS, 0.416 mAP, 3 FPS) in both speed and accuracy. For detailed configurations of our model and detailed analysis of temporal modeling and robustness related to frame length, please refer to Appendix A
Our research specifically targets small models, so our results are particularly favorable for these models compared to other studies. For a comprehensive understanding, we also present a comparison with baseline models that possess larger backbones and increased input sizes in Table [2]. Notably, we surpass baseline models in frames per second (FPS) while maintaining comparable accuracy levels.
\begin{table}
\begin{tabular}{l|c c|c c|c c c c|c} \hline \hline
**Method** & **Backbone** & **Resolution** & **NDS\(\uparrow\)** & **mAP\(\uparrow\)** & **mATE \(\downarrow\)** & **mASE \(\downarrow\)** & **mAOE \(\downarrow\)** & **mAVE \(\downarrow\)** & **Average \(\downarrow\)** & **FPS** \\ \hline PETR & ResNet101 & 1600x900 & 0.442 & 0.37 & 0.711 & 0.267 & 0.383 & 0.865 & 0.201 & 5.7 \\ PETRv2 & ResNet101 & 1600x640 & 0.524 & 0.421 & 0.681 & 0.267 & 0.357 & 0.377 & 0.186 & - \\ BEVDet4D & Swin-B & 1600x640 & 0.515 & 0.396 & 0.619 & 0.26 & 0.361 & 0.399 & 0.189 & - \\ BEVDepth & ResNet101 & 512x108 & 0.535 & 0.412 & 0.565 & 0.266 & 0.358 & 0.331 & 0.19 & 2.3 \\ BEVFormerv2 & ResNet50 & 1600x640 & 0.529 & 0.423 & 0.618 & 0.273 & 0.413 & 0.333 & 0.181 & - \\ PolarFormerT & ResNet101 & 1600x900 & 0.528 & 0.432 & 0.648 & 0.27 & 0.348 & 0.409 & 0.201 & 3.5 \\ Sparse4D & ResNet101-DCN & 900x1600 & 0.541 & 0.436 & 0.633 & 0.279 & 0.363 & 0.317 & 0.177 & 4.3 \\ \hline
**HotBEV** & HOB-base & 512x1408 & **0.525** & **0.427** & **0.62** & **0.221** & **0.36** & **0.55** & **0.163** & 5.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison for large backbone and resolutions on the nuScenes val set (FPS on V100).
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline
**Methods** & **NDS\(\uparrow\)** & **mAP\(\uparrow\)** & **FPS** \\ \hline PointPillars & 61.3 & 52.3 & 29 \\ SECOND & 63 & 52.6 & 14.3 \\ CenterPoint & 66.8 & 59.6 & 12.4 \\ \hline HotBEV-nano & 47 & 38.5 & 31.8 \\ HotBEV-tiny & 51.2 & 40.7 & 20.4 \\ HOB-base & 52.5 & 42.7 & 16.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison with 3D reconstruction-based BEV detectors on the nuScenes val set.
Moreover, according to [33], the camera processing modules occupy 33% \(\sim\) 42% of the whole latency distribution for the post-fusion system. Our hardware-oriented design with enhanced image feature representation can be leveraged in the camera encoder part of the current in-vehicle detection system to improve on-device efficiency.
**Performance on multiple GPU devices.** To evaluate the hardware throughput, we implement the latency-aware model slimming on two other devices: NVIDIA GTX 2080 Ti which has a 5.5M size L2 cache with 448 Gbps memory bandwidth; NVIDIA GTX 1080 Ti which has a 5.5M size L2 cache with 325 Gbps memory bandwidth; We report the average FPS of over 1000 inferences. As depicted in Figure 5, our approach surpasses current camera-only BEV frameworks in terms of both hardware efficiency and performance. Other methods typically overlook the constraints imposed by limited memory and parallelism in on-device runtimes, leading to further degradation in speed performance on devices with limited resources. Without hardware optimization, e.g., intsh quantization, some methods fail to produce results within a reasonable timeframe. In contrast, our models strike the optimal balance between speed and performance, making them the superior choice among existing approaches. For example, on GTX 2080, our HotBEV (0.385 mAP) is 4.5\(\times\) faster than BEVDet (0.393 mAP); our HotBEV (0.407 mAP) is 2.6\(\times\) faster than BEVformer (0.416 mAP); our HotBEV (0.427 mAP) is 1.4\(\times\) faster than PolarDETR (0.383 mAP).
The speed-up effect is superior to that in Table [1], demonstrating our framework's enhanced GPU generalization ability compared to other approaches. It's worth noting that our logic extends beyond autonomous driving. Firstly, our backbone is designed to cater to general vision tasks. Secondly, the hardware model is specifically optimized for GeMM. In the appendix, we showcase our implementation on diverse general vision tasks such as classification and 2D detection.
**Performance on multiple commercial Orin.** It is necessary to test the speed on an actual commercial chip. As shown in Table [1], we test our HotBEV models on Orin to validate our framework. Before testing, we quantized our model into INT8 with a tensorRT engine. And then run the test 50 times for each model to obtain stable results.
### Ablation Study
In this section, we conduct the ablations with HotBEV-nano trained 24 epochs. The backbone is pre-trained on ImageNet dataset [34] and trained on Nuscenes. The input image size is 256\(\times\)704, and the number of detection queries is set to 900.
#### 4.3.1 Analysis of Components in HotBEV
**Major components of HOB.** Table [1] studies how global attention (GA) and semantic-augmented module (SAM) contribute to HOB performance. We only modify the backbone network without disabling the 3D position encoder, TAM, and decoder modules. 1 and 2 show that inserting our GA after the local-wise attention can improve 0.9% mAP and 4.7% NDS without significant impact on speed. 2 and 3 show that inserting one GA block costs 2.3 FPS yet only gains 0.3% mAP. Once we replace the GA with the SAM, mAP is increased from 34% to 35%, while NDS is increased from 44.7% to 45.5%. So SAM can enhance performance and be better than simple global modeling.
\begin{table}
\begin{tabular}{l|l|l|l|l|l} \hline \hline
**Methods** & **Backbone** & **Resolution** & **NDS\({}^{\dagger}\)** & **mAP\({}^{\dagger}\)** & **FPS** \\ \hline HotBEV & HOB-anno & 512x1408 & 0.47 & 0.385 & 31.8 \\ HotBEV & HOB-tiny & 512x1408 & 0.512 & 0.407 & 20.4 \\ HotBEV & HOB-base & 512x1408 & 0.525 & 0.427 & 16.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Our proposed results on Orin.
Figure 6: **Left and middle**: The trade-off between performance (NDS) and hardware efficiency (FPS) for different detection methods on the nuScenes val set with different GPUs. **Right**: Various granularity settings on the HOB backbone of HotBEV.
\begin{table}
\begin{tabular}{c c c|c c c} \hline \hline Index & GA & SAM & NDS & mAP & FPS \\ \hline \(\copyright\) & - & - & 0.396 & 0.328 & 21.3 \\ \(\copyright\) & ✓ & - & 0.443 & 0.337 & 21.4 \\ \(\copyright\) & ✓ & GA & 0.447 & 0.340 & 23.7 \\ \(\copyright\) & ✓ & ✓ & **0.455** & **0.350** & **24.5** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of HOB.
#### 4.3.2 Analysis of Prediction Modeling
**Empirical validation.** To evaluate the prediction model, we varied the input granularity \(G\) and used the 1-\(st\) block in Swin-T as a case study. Figure 7 compares our predictions and the actual testing latency on two different GPUs, the NVIDIA V100 and GTX 1080 Ti. The results demonstrate that the prediction model can accurately estimate the actual latency across a wide range of input granularities.
**More granularities settings.** We test various granularity settings on the HOB backbone of HotBEV-nano to examine the effects of \(G\). The results on the Tesla-V100 GPU are presented in the left sub-figure of Figure 8. We increase \(G\) from 2 and 64, which consistently improves the realistic efficiency of both the NVIDIA V100 and GTX 1080 GPU. It can be found that the finest granularity (\(G{=}2\)) causes substantial inefficiency despite the mAP improvement. Otherwise, the coarsest granularity (\(G{=}64\)) benefits the speedup with large detection precision degradation. In specific, all networks are transformed into fixed matrix operations on GPU platforms (General Matrix Multiply, GeMM). The latency prediction model evaluates the speed of matrix multiplication and the corresponding data movement, so it is applicable to general networks. For example, this modeling can be generalized in other networks as the prediction results on the 1st block in ResNet-50 in Appendix [1].
**More discussion on latency predictor.** Our proposed latency predictor provides some opportunities. (1) Efficient model generation, which also proceeds with AI democratization. As the benchmarking-based approach needs one-day training, our proposed theoretical latency predictor is training-free. For example, the benchmarking-based approach requires 5 days to generate the dataset of 5 different devices if 5 target models are demanded. In contrast, our proposed is off-the-shelf. Our method provides the opportunity for inexpensive and efficient research for users who do not have access to target devices. For instance, when the in-vehicle Orin chip is not accessible, the related efficient model research on the Orin chip can still be advanced. In conclusion, our approach makes sense for today's rapidly growing demand for autonomous driving. (2) The proposed latency predictor focuses on modeling the latency of Matrix Multiplication (MM) with generalizability. Indeed, strong GPU simulators cannot accurately model the behavior of the latest NVIDIA GPUs. However, our purpose is not to describe the behavior of GPUs. We want to reflect on the relative performance of latency for different layer types and sizes on target GPUs. This is because our search goal is to minimize the relative time in the search space of the current device. For generalizability, our design focuses on latency modeling of MM, the typical computation operation in DNNs, which is mainly impacted by the computing performance of Tensor Core, not other specific operators, so the proposed predictor has generalizability, as shown in Figure 7 of our paper.
## 5 Conclusions and Limitations
We present a hardware-oriented transformer-based framework (HotBEV) for camera-only 3D detection tasks, which achieves higher detection precision and remarkable speedup across high-end and low-end GPUs. Firstly, we propose a theoretical, plug-and-play latency prediction model. Given a target GPU, we directly use the latency to guide our algorithm design. Based on the latency breakdown of classic camera-only detectors, we identify the backbone as the main speed bottleneck. Then, we propose efficient operators and fusion techniques for model on-device implementation. Based on these operators and the process of vision modeling, we design a hardware-oriented backbone with strong feature enhancement. Then we propose the basic design paradigm of HotBEV. Finally, guided by the latency prediction model, we generate the family of HotBEV through a standard search algorithm. Experiments show the superior inference accuracy of HotBEV compared to SOTA BEV detectors with significant on-device speed. Notably, while our primary focus lies in the camera-only method for BEV perception, our latency-aware design methodology can also be applied to fusion-based BEV methods, enabling efficient algorithm design that aligns with real-world requirements.
## 6 Acknowledgment
This work is supported in part by National Science Foundation CCF-1937500.
Figure 7: **Latency prediction results.** Results are tested on NVIDIA V100 and GTX 1080 Ti.
## References
* [1]Z. Li, W. Wang, H. Li, E. Xie, C. Sima, T. Lu, Q. Yu, and J. Dai (2022) BeVformer: learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers. arXiv preprint arXiv:2203.17270. Cited by: SS1, SS2.
* [2]Y. Li, Z. Ge, G. Yu, J. Yang, Z. Wang, Y. Shi, J. Sun, and Z. Li (2022) BeVdepth: acquisition of reliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092. Cited by: SS1, SS2.
* [3]J. Philion and S. Fidler (2020) Lift, splat, shoot: encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In European Conference on Computer Vision, pp. 194-210. Cited by: SS1, SS2.
* [4]J. Huang, G. Huang, Z. Zhu, and D. Du (2021) BeVdet: high-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790. Cited by: SS1, SS2.
* [5]J. Huang and G. Huang (2022) BeVdet4d: exploit temporal cues in multi-camera 3d object detection. arXiv preprint arXiv:2203.17054. Cited by: SS1, SS2.
* [6]E. Xie, Z. Yu, D. Zhou, J. Philion, A. Anandkumar, S. Fidler, P. Luo, and J. M. Alvarez (2022) M^ 2bev: multi-camera joint 3d detection and segmentation with unified birds-eye view representation. arXiv preprint arXiv:2204.05088. Cited by: SS1, SS2.
* [7]H. Wu, B. Xiao, N. Codella, M. Liu, X. Dai, L. Yuan, and L. Zhang (2021) Cvt: introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808. Cited by: SS1, SS2.
* [8]Y. Liu, T. Wang, X. Zhang, and J. Sun (2022) Petr: position embedding transformation for multi-view 3d object detection. arXiv preprint arXiv:2203.05625. Cited by: SS1, SS2.
* [9]Y. Wang, V. Campagnolo Guizilini, T. Zhang, Y. Wang, H. Zhao, and J. Solomon (2022) Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning, pp. 180-191. Cited by: SS1, SS2.
* [10]Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo (2021) Swin transformer: hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012-10022. Cited by: SS1, SS2.
* [11]Z. Kong, P. Dong, X. Ma, X. Meng, W. Niu, M. Sun, B. Ren, M. Qin, H. Tang, and Y. Wang (2021) SPVIT: enabling faster vision transformers via soft token pruning. arXiv preprint arXiv:2112.13890. Cited by: SS1, SS2.
* [12]R. Zhao, Y. Hu, J. Dotzel, C. De Sa, and Z. Zhang (2019) Improving neural network quantization without retraining using outlier channel splitting. In International conference on machine learning, pp. 7543-7552. Cited by: SS1, SS2.
* [13]L. Deng, G. Li, S. Han, L. Shi, and Y. Xie (2020) Model compression and hardware acceleration for neural networks: a comprehensive survey. Proceedings of the IEEE108 (4), pp. 485-532. Cited by: SS1, SS2.
* [14]N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko (2020) End-to-end object detection with transformers. In European Conference on Computer Vision (ECCV), pp. 213-229. Cited by: SS1, SS2.
* [15]X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai (2020) Deformable detr: deformable transformers for end-to-end object detection. In International Conference on Learning Representations (ICLR), Cited by: SS1, SS2.
* [16]N. Ma, X. Zhang, H. Zheng, and J. Sun (2018) Shufflenet v2: practical guidelines for efficient cnn architecture design. In Proceedings of the European conference on computer vision (ECCV), pp. 116-131. Cited by: SS1, SS2.
* [17] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2820-2828, 2019.
* [18] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In _Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XVI 16_, pages 544-560. Springer, 2020.
* [19] Yingfei Liu, Junjie Yan, Fan Jia, Shuailin Li, Qi Gao, Tiancai Wang, Xiangyu Zhang, and Jian Sun. Petrv2: A unified framework for 3d perception from multi-camera images. _arXiv preprint arXiv:2206.01256_, 2022.
* [20] John L Hennessy and David A Patterson. _Computer architecture: a quantitative approach_. Elsevier, 2011.
* [21] Peiyan Dong, Zhenglun Kong, Xin Meng, Peng Zhang, Hao Tang, Yanzhi Wang, and Chih-Hsien Chou. Speeddetr: Speed-aware transformers for end-to-end object detection. 2023.
* [22] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in Neural Information Processing Systems_, pages 5998-6008, 2017.
* [23] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 7132-7141, 2018.
* [24] Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, and Jian Sun. Repvgg: Making vgg-style convnets great again. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 13733-13742, 2021.
* [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [26] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. _arXiv preprint arXiv:1806.09055_, 2018.
* [27] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 11621-11631, 2020.
* [28] Tai Wang, Xinge Zhu, Jiangmiao Pang, and Dahua Lin. Fcos3d: Fully convolutional one-stage monocular 3d object detection. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 913-922, 2021.
* [29] Tai Wang, ZHU Xinge, Jiangmiao Pang, and Dahua Lin. Probabilistic and geometric depth: Detecting objects in perspective. In _Conference on Robot Learning_, pages 1475-1485. PMLR, 2022.
* [30] Shihao Wang, Xiaohui Jiang, and Ying Li. Focal-petr: Embracing foreground for efficient multi-camera 3d object detection. _arXiv preprint arXiv:2212.05505_, 2022.
* [31] Shaoyu Chen, Xinggang Wang, Tianheng Cheng, Qian Zhang, Chang Huang, and Wenyu Liu. Polar parametrization for vision-based surround-view 3d detection. _arXiv preprint arXiv:2206.10965_, 2022.
* [32] U.S.News. 10 Cars That Are Almost Self-Driving.
* [33] Trung Pham, Mehran Maghoumi, Wanli Jiang, Bala Siva Sashank Jujjavarapu, Mehdi Sajjadi Xin Liu, Hsuan-Chu Lin, Chen, et al. Nvautonet: Fast and accurate 360o 3d perception for self driving. _arXiv preprint arXiv:2303.12976_, 2023.
* [34] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pages 248-255. Ieee, 2009.
* [35] Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, and Zicheng Liu. Mobile-former: Bridging mobilenet and transformer. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5270-5279, 2022.
* [36] Sachin Mehta and Mohammad Rastegari. Mobilevit: light-weight, general-purpose, and mobile-friendly vision transformer. _arXiv preprint arXiv:2110.02178_, 2021.
* [37] Qibin Hou, Cheng-Ze Lu, Ming-Ming Cheng, and Jiashi Feng. Conv2former: A simple transformer-style convnet for visual recognition. _arXiv preprint arXiv:2211.11943_, 2022.
* [38] Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, et al. Xcit: Cross-covariance image transformers. _arXiv preprint arXiv:2106.09681_, 2021.
* [39] Yingming Wang, Xiangyu Zhang, Tong Yang, and Jian Sun. Anchor detr: Query design for transformer-based detector. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 36, pages 2567-2575, 2022.
* [40] Benjamin Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Herve Jegou, and Matthijs Douze. Levit: a vision transformer in convnet's clothing for faster inference. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 12259-12269, 2021. | ## Review
### Summary
This paper presents HotBEV, a novel hardware-efficient transformer-based framework designed for camera-only 3D detection tasks. By prioritizing latency-aware design and considering hardware properties such as memory access cost, HotBEV achieves impressive reductions in computational delay, enabling real-time decision-making in autonomous driving scenarios. The framework is compatible with both high-end and low-end GPUs, showcasing its versatility. Rigorous experimental validation demonstrates its superior performance in terms of speed and accuracy compared to existing solutions, making it an important contribution to the field. However, concerns regarding the fairness of comparisons and the clarity of the experimental methodologies are raised, requiring attention before final publication.
### Strengths
- The model is compatible with both high-end and low-end GPUs, demonstrating a broad range of applicability.
- The proposed method achieves a balance between model speed and detection precision.
- The utilization of a theoretical latency prediction model to guide design choices is an innovative approach.
- The paper is well-written and provides comprehensive evaluations on a large dataset (nuScenes).
- The design methodology prioritizes hardware efficiency and customizes architecture to the target GPU, facilitating adaptability.
### Weaknesses
- The comparison in main experiments is not fair; the methodology employs four frames for temporal fusion while comparative methods use fewer frames.
- The exploration of convolutional modulation versus self-attention lacks sufficient literature coverage.
- The organization and writing of the paper are dense, making it hard to follow in certain sections.
- Latency prediction's novelty compared to existing hardware-aware designs is unclear.
- Some experiments lack comparison with stronger baselines and alternative datasets.
### Questions
- How does the proposed methodology's speed and accuracy compare with 3D reconstruction-based BEV detectors?
- What are the differences between the hardware-oriented decoder and existing methods like DETR3D and PETR?
- Can the predicted latency be evaluated against realistic latency?
- What justifications exist for focusing on multi-frame models over 3D representations?
- How does the proposed approach leverage unique task properties?
### Soundness
**Score:** 2
**Description:** Fair. The paper presents some sound ideas, but significant concerns about comparisons and methodology exist.
### Presentation
**Score:** 3
**Description:** Good. The paper is generally well-written, but parts are dense and unclear, requiring improvements for clarity.
### Contribution
**Score:** 3
**Description:** Good. The framework offers valuable contributions to hardware-efficient designs for 3D detection, with promising results.
### Rating
**Score:** 5
**Description:** Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, but limited evaluation and clarity issues need addressing.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an innovative approach for hardware-efficient 3D detection that is timely and relevant to the field of autonomous driving. Despite some weaknesses in experimental comparisons and presentation clarity, the overall contribution and potential impact of the work justify acceptance, provided that the authors address the reviewers' concerns in their final submission.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Posthoc privacy guarantees for collaborative inference with modified Propose-Test-Release
Abhishek Singh\({}^{1}\), Praneeth Vepakomma\({}^{1}\), Vivek Sharma\({}^{1,2,3,}\), Ramesh Raskar\({}^{1}\)
\({}^{1}\)MIT Media Lab, \({}^{2}\)MGH, Harvard Medical School, \({}^{3}\)Sony AI
Part of this work was carried out while he was at MIT and MGH, Harvard Medical School.
###### Abstract
Cloud-based machine learning inference is an emerging paradigm where users query by sending their data through a service provider who runs an ML model on that data and returns back the answer. Due to increased concerns over data privacy, recent works have proposed Collaborative Inference (CI) to learn a privacy-preserving encoding of sensitive user data before it is shared with an untrusted service provider. Existing works so far evaluate the privacy of these encodings through empirical reconstruction attacks. In this work, we develop a new framework that provides formal privacy guarantees for an arbitrarily trained neural network by linking its local Lipschitz constant with its local sensitivity. To guarantee privacy using local sensitivity, we extend the Propose-Test-Release (PTR) framework to make it tractable for neural network queries. We verify the efficacy of our framework experimentally on real-world datasets and elucidate the role of Adversarial Representation Learning (ARL) in improving the privacy-utility trade-off. The source code and other details are available at tremblerz.github.io/posthoc.
## 1 Introduction
While training ML models privately has seen tremendous progress[1, 45, 10, 26] in the last few years, protecting privacy during the inference phase remains elusive as these models get deployed by cloud-based service providers. Cryptographic techniques[43, 31, 40, 27] address this challenge by computing over encrypted data. To combat the high computational cost of encryption techniques, several recent works [66, 65, 5, 56, 57] have proposed Collaborative Inference (CI) - an alternate paradigm where users share a lower dimensional embedding of raw data where task-irrelevant information is suppressed. However, CI techniques are currently incompatible with formal privacy frameworks and their evaluation is currently based on empirical reconstruction attacks. The central issue in CI is the usage of Deep Neural Networks (DNNs) making it unsuitable for formal and worst-case privacy analysis. In this work, we take the first step towards a formal privacy framework for CI that will enable a more rigorous evaluation and a reliable understanding of the CI techniques.
The key aspect of any CI algorithm is an _obfuscator_ (typically a DNN), which is trained to encode a user's private data such that an attacker can not reconstruct the original data from its encoding. Hence, CI techniques typically evaluate the privacy of their representations by empirically measuring the information leakage using a proxy adversary. However, existing works [57, 66, 18, 56] show that a proxy adversary's performance as a measure of protection could be unreliable because a future adversary can come up with a stronger attack. Alternately a few existing CI techniques have used theoretical tools [20, 69, 4, 68, 62, 5, 39] for measuring information leakage empirically. However, most of these works analyze specific obfuscation techniques and lack formal privacy definitions. Therefore, we first introduce a privacy definition applicable to CI by adopting a different threat model from that of differential privacy (DP)[12] because (as we explain in detail in Sec 2) protecting membership inference is at odds with achieving a non-trivial privacy-utility trade-off in CI.
Our threat model for the reconstruction attack is motivated by the intuition that the obfuscator discloses coarse-grained and filters out fine-grained information for preventing reconstruction. We hypothesize that to do so, the _obfuscator_ should act as a contractive mapping, and as a result, increases the _stability_ of the (obfuscation) function in the local neighborhood of data. We formalize this intuition by introducing a privacy definition in the metric space of data and experimentally validate it in Sec 5. Our privacy definition is an instantiation of metric-DP by Chatzikokolakis et al. Instead of evaluating the global Lipschitz constant of DNNs, we evaluate the Lipschitz constant only in the local neighborhood of the user's sensitive data. We then extend the Propose-Test-Release (PTR)[11] framework to formalize our local neighborhood based measurement of the Lipschitz constant.
We adopt the PTR framework due to its posthoc and data-adaptive nature - as the privacy of encoding is evaluated in the neighborhood of the private user input and after the _obfuscator_ has been trained. This approach is different from conventional privacy mechanisms where the mechanism is fixed and the perturbation does not depend on data. Existing seminal works in data-adaptive privacy such as Propose-Test-Release (PTR) [11], smooth sensitivity [41] and Lipschitz extensions [48] have exploited this posthoc approach to enhance the utility of the mechanisms. Data adaptive mechanisms are an important area of study as quoted in a recent workshop report on the challenges in Differential Privacy [9] - "_Going data-adaptive is a crucial step in bringing DP algorithms to an acceptable level of utility in applications._". In the context of our work - the benefit of using the PTR framework is two-fold i) Utility - the _obfuscator_ is expected to behave better at samples that match the data distribution from its training dataset are sampled, and ii) Computation - instead of computing global sensitivity, we only need to compute sensitivity in the small neighborhood around private data points. While the original PTR is intractable due to a well-known [9] expensive computation step, we design a new formulation that substitutes the expensive computation step with a tractable alternate.
Computational tractability is generally the main concern that makes CI incompatible with formal privacy because giving worst-case guarantees for _obfuscator_ is generally difficult. While training and convergence guarantees for DNNs remain elusive, we find that the certified robustness community has made rapid progress in the past few years in giving worst-case guarantees during the inference stage of DNNs. Specifically, we utilize recent efficient formulations for the Lipschitz constant computation of DNNs [25, 55]. We measure the stability of the learned _obfuscator_ model using the Lipschitz constant and link it with the local sensitivity under our privacy definition. We bridge the gap between the Lipschitz constant and our extended PTR framework by proposing a binary search algorithm that computes this Lipschitz constant multiple times in the neighborhood of sensitive data. This bridge makes our work amenable to any differentiable _obfuscator_ with ReLU non-linearity.
The **scope** of our paper is to provide privacy guarantees against reconstruction attacks for existing CI techniques, _i.e._, our goal is not to develop a new CI algorithm but rather to develop a formal privacy framework compatible with existing algorithms. A majority of the CI techniques protect either a sensitive attribute or reconstruction of the input. We _only consider_ sensitive input in this work. Furthermore, we _only focus_ on protecting the privacy of data during the inference stage, and assume that ML models can be trained privately. Our privacy definition and the mechanism are built upon \(d_{\mathcal{X}}\)-privacy [8] and PTR [11]. Existing instantiations of \(d_{\mathcal{X}}\)-privacy include geo-indistinguishability [2] and location-dependent privacy[32] that share a similar goal as ours of sharing coarse-grained information. Our work differs because we use DNN queries and high-dimensional datasets. We refer the reader to the supplementary for a detailed literature review.
In Sec 2 we begin with the preliminaries of DP and its variant for metric spaces. Then, we introduce the privacy definition relevant for CI in Sec 3. Next, we design our framework by extending PTR and proving its privacy guarantees in Sec 4. In Sec 5 we experimentally demonstrate the feasibility of our framework. We now give the **summary of our contributions:**
1. _Proposing a privacy definition that formalizes reconstruction privacy in the context of CI. Protecting against membership inference (meaningfully) is not possible in CI and hence traditional DP can not be applied directly as discussed in Sec 2._
2. _Extending the (PTR) algorithm to guarantee privacy using the local Lipschitz constant. The standard PTR can not be applied due to its intractability for queries beyond median and mode (see - Page 152, 2nd paragraph of Dwork et al. and Sec 4)._
3. _Experiments 1) to verify efficacy for both utility and computability, 2) ablation on different design choices, 3) empirical reconstruction attacks, and 4) identifying the role ARL techniques play in improving privacy-utility trade-off in CI._
## 2 Preliminaries
Differential privacy[12] is a widely used framework for answering a query, \(f\), on a dataset \(\mathbf{x}\in\mathcal{X}\) by applying a mechanism \(\mathcal{M}(\cdot)\) such that the probability distribution of the output of the mechanism \(\mathcal{M}(\mathbf{x})\) is _similar_ regardless the presence or absence of any individual in the dataset \(\mathbf{x}\). More formally, \(\mathcal{M}\) satisfies \((\epsilon,\delta)\)-DP if \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\) such that \(d_{H}(\mathbf{x},\mathbf{x}^{\prime})\leq 1\), and for all (measurable) output \(S\) over the range of \(\mathcal{M}\)
\[\mathbb{P}(\mathcal{M}(\mathbf{x})\in S)\leq e^{\epsilon}\mathbb{P}(\mathcal{ M}(\mathbf{x}^{\prime})\in S)+\delta,\]
where \(d_{H}\) is the hamming distance. This definition is based on a trusted central server model, where a trusted third party collects sensitive data and computes \(\mathcal{M}(\mathbf{x})\) to share with untrusted parties. In _local_-DP[28], this model has been extended such that each user shares \(\mathcal{M}(\mathbf{x})\), and the service provider is untrusted. Our threat model is the same as local DP. However, unlike local DP, data is not aggregated over multiple individuals. Specifically, every sample is evaluated independently by the service provider and there is no aggregation involved. For ex.- a user shares a face image to receive an age prediction from the service provider, here the answer to the query depends exactly on the user's input. With traditional local DP, it is "impossible" to achieve good utility and privacy at the same time because any two samples (no matter how different) are neighboring databases, more formally, \(d_{H}(\mathbf{x},\mathbf{x}^{\prime})\leq 1,\ \forall\mathbf{x},\mathbf{x}^{ \prime}\in\mathcal{X}\) for ML inference. Informally, this notion of neighboring databases under the standard DP definition would imply that the outcome for any two samples should be _almost_ indistinguishable no matter how different the samples are. This privacy definition could be too restrictive for our ML inference application where the data instance necessarily needs a certain degree of distinguishability to obtain utility from the service provider. This observation is formalized in the impossibility result of instance encoding[7] for private learning. To subside this fundamental conflict between the privacy definition and our application, we look at the definition of \(d_{\mathcal{X}}\)-privacy[8] that generalizes the DP definition to a general distance metric beyond hamming distance as follows:
\[\mathbb{P}(\mathcal{M}(\mathbf{x})\in S)\leq e^{d_{\mathcal{X}}(\mathbf{x}, \mathbf{x}^{\prime})}\mathbb{P}(\mathcal{M}(\mathbf{x}^{\prime})\in S), \tag{1}\]
here \(d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\) is a function that gives a level of indistinguishability between two datasets \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\). DP can be viewed as a special case of \(d_{\mathcal{X}}\)-privacy by keeping \(d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})=\epsilon d_{H}(\mathbf{x}, \mathbf{x}^{\prime})\). Choosing a different distance metric yields a stronger or weaker privacy guarantee. Due to the impossibility of achieving both privacy and utility with hamming distance, we move towards a weaker privacy definition by only focusing on achieving privacy against reconstruction attacks instead of membership inference. Intuitively, we choose a distance metric that provides indistinguishability only within a neighborhood of similar _looking_ samples.
## 3 Privacy Definition and Threat Model
To formalize reconstruction privacy, we hypothesize that semantically similar points are close to each other on a data manifold, i.e. semantically similar samples are closer in a space where distance is defined as geodesic on the data manifold. Therefore, one way to bound the reconstruction of
Figure 1: **Posthoc Privacy framework**: We project a high dimensional data instance to a lower dimensional embedding. The goal of the _embedder_ is to measure a semantically relevant distance between different instances. The embedding is fed to the _Obfuscator_ that compresses similar inputs in a small volume. In traditional ARL, the obfuscated instance is shared with the untrusted service provider without any formal privacy guarantee. In this work, by analyzing the stability of the obfuscator network, we perturb the obfuscated instance to provide a formal privacy guarantee.
is by making it indistinguishable among semantically similar points. The extent of reconstruction privacy would therefore depend upon the radius of the neighborhood. We formalize it with a privacy parameter \(R\) that allows a user to control how big this indistinguishable neighborhood should be.
**Definition 1**.: _A mechanism \(\mathcal{M}\) satisfies (\(\epsilon,\delta,\textit{R}\))-reconstruction privacy if \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\) s.t. \(d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\leq\textit{R}\) and \(S\subseteq Range(\mathcal{M})\)_
\[\mathbb{P}(\mathcal{M}(\mathbf{x})\in S)\leq e^{\epsilon}\mathbb{P}(\mathcal{ M}(\mathbf{x}^{\prime})\in S)+\delta. \tag{2}\]
Note that the above equation is exactly the same as (\(\epsilon,\delta\))-DP except for the definition of neighboring databases. In this way, our privacy definition can be seen as a mix of standard DP and \(d_{\mathcal{X}}\)-privacy.
### Threat and Attack model
Our threat and attack model is the same as previous collaborative inference works [66; 65]. Specifically, we focus on attackers that aim to reconstruct the original sample from its encoding. That is, given an encoding \(\mathcal{M}(\mathbf{x})\) obtained from a sample \(\mathbf{x}\), an attacker will attempt to recover the original sample (\(\mathbf{x}\)). Our threat model is the same as local DP. Specifically, all side information including access to the obfuscator model, embedder model, and the framework in Fig 1 is known to the adversary and it can interact with them as much as it wants. The attacker can use this information to improve its attack algorithm. We (informally) note that an attacker always has a prior about raw data \(\mathbb{P}(\mathbf{x})\) and it will update its belief based on the posterior \(\mathbb{P}(\mathbf{x}|\mathcal{M}(\mathbf{x}))\). The goal of our privacy mechanism is to ensure that the distance between the prior and posterior distribution is bounded. Our definition indeed enjoys such an interpretation due to being an instantiation of \(d_{\mathcal{X}}\)-privacy. A more formal description of this Bayesian formulation is given by Chatzikokolakis et al. in Theorem 4. We emulate such attackers in Sec 5 by training an ML model on a dataset of \(\mathcal{M}(\mathbf{x})\) as input and \(\mathbf{x}\) as output, then we evaluate its reconstruction error on encodings of previously unseen \(\mathcal{M}(\mathbf{x})\).
### Comparison with Differential Privacy
Conceptually, the usage of hamming distance in DP for neighboring databases provides a level of protection at the sample level. Such a privacy requirement can be at odds with the goal of prediction that necessarily requires discrimination between samples belonging to different concept classes. We relax this dichotomy by changing the distance metric to account for reconstruction privacy. Intuitively, an attacker can only obtain an accurate reconstruction of data up to a neighborhood in the space of data. Therefore, the size of the neighborhood is a privacy parameter \(R\) such that a higher value of \(R\) provides higher privacy. This privacy parameter is equivalent to the _group size_[12] used often in the DP literature. By default, this value is kept at 1 in DP but can be kept higher if correlated individuals (multiple samples, family) are present in a database instead of a single individual. We state the equivalence between the group privacy definition and standard DP informally -
**Lemma 2.2** in [60]: Any (\(\epsilon\), \(\delta\))-differentially private mechanism is (\(\textit{Re}\), \(\textit{Re}^{(R-1)\epsilon}\delta\))-differentially private for groups of size \(R\).
This lemma also applies to our proposed definition. However, we emphasize that privacy parameters of (\(\epsilon,\delta\))-DP mechanism can not be compared trivially with a (\(\epsilon,\delta,\textit{R}\))-reconstruction privacy mechanism because same value of \(\epsilon\) and \(\delta\) provide different levels of protection due to different definitions of neighboring databases. We experimentally demonstrate this claim in Sec. 5.
Choice of \(d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\) and its impact on the privacy guarantees offered
Our privacy definition is an instantiation of \(d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\) that permits a broad class of distance metrics for the privacy definition. Since the goal of our work is to protect against reconstruction - \(\ell_{1}\) or \(\ell_{2}\) distance is a reasonable choice. However, the space over which \(\ell\) norm should be considered is a challenging question. Existing papers have used metrics such as \(\ell_{1}\) or \(\ell_{2}\) norm in the raw data space [21; 6; 17; 3] for guaranteeing reconstruction privacy when training ML models. However, for high-dimensional datasets typically considered in CI such as images, guaranteeing reconstruction in ambient dimensions can lead to unintended privacy guarantee. In our work, we consider pre-processing the samples by projecting them into an embedding space, where samples that are semantically closer and farther have \(\ell\) norm smaller and higher respectively. We evaluate the privacy-utility trade-off in our experiments for both ambient space and embedding space.
## 4 Privacy Mechanism
Our goal is to design a framework that can provide a formal privacy guarantee for encodings informally privatized in CI. However, CI algorithms use non-linear neural networks trained on non-convex objectives making it difficult to perform any worst-case analysis. Therefore, we take a posthoc approach where we reason about privacy after the model is trained. Specifically, we apply the PTR mechanism by [11]. Applying PTR directly to our query (ARL) is not computationally feasible because PTR requires estimating local sensitivity at multiple points whereas evaluating the local sensitivity of a neural network query is not even feasible at a single point. Therefore, we design a tractable variant of PTR that utilizes the local Lipschitz constant estimator to compute privacy related parameters. We refer the reader to supplementary for background on the Lipschitz constant estimation and PTR.
CI algorithms consist of three computational blocks in the training stage: 1) _obfuscator_ (\(f(\cdot)\)) that generates an (informally private) representation (\(\mathbf{\tilde{z}}\)) of data, 2) _proxy adversary_ that reconstructs the data from the representation produced by the _obfuscator_, and 3) _classifier_ that performs the given task using the obfuscated representation. The _classifier_ and _proxy adversary_ are trained to minimize the task loss and reconstruction loss, respectively. The _obfuscator_ is trained to minimize the task loss but maximize the reconstruction loss. This setup results in a min-max optimization where the trade-off between task performance and reconstruction is controlled by a hyper-parameter \(\alpha\). Note that a few CI techniques[42; 44; 61] do not require a proxy adversary but still learn an obfuscator model using other regularizers.
At a high level, our framework applies the mechanism \(\mathcal{M}\) such that the final released data \(\mathbf{\hat{z}}=\mathcal{M}(\mathbf{x})\) has a privacy guarantee discussed in Eq 2. Like PTR, we start with a proposal (\(\Delta^{p}_{LS}\)) on the upper bound of the local sensitivity of \(\mathbf{x}\). To test the correctness of \(\Delta^{p}_{LS}\), we compute the size of the biggest possible neighborhood such that the local Lipschitz constant of the _obfuscator_ network in that neighborhood is smaller than the proposed bound. Next, we privately verify the correctness of the proposed bound. We do not release the data (denoted by \(\bot\)) if the proposed bound is invalid. Otherwise, we perturb the data using the Laplace mechanism. Next, we discuss the framework and privacy guarantees in more detail.
**Global Sensitivity and Lipschitz constant** of a query \(f:\mathcal{X}\rightarrow\mathcal{Y}\) are related in the \(d_{\mathcal{X}}\)-privacy framework. Global sensitivity of a query \(f(\cdot)\) is the smallest value of \(\Delta\) (if it exists) such that \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\), \(d_{\mathcal{Y}}(f(\mathbf{x}),f(\mathbf{x}^{\prime}))\leq\Delta d_{\mathcal{X} }(\mathbf{x},\mathbf{x}^{\prime})\). While global sensitivity is a measure over all possible pairs of data in the data domain \(\mathcal{X}\), local sensitivity (\(\Delta_{LS}\)) is defined with respect to a given dataset \(\mathbf{x}\) such that \(\forall\mathbf{x}^{\prime}\in\mathcal{X}\), \(d_{\mathcal{Y}}(f(\mathbf{x}),f(\mathbf{x}^{\prime}))\leq\Delta_{LS}(\mathbf{x })d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\). We integrate the notion of similarity in a neighborhood (motivated in Sec 2) by defining the local sensitivity of a neighborhood \(\mathcal{N}(\mathbf{x},R)\) around \(\mathbf{x}\) of radius \(R\) where \(\mathcal{N}(\mathbf{x},R)=\{\mathbf{x}^{\prime}|d_{\mathcal{X}}(\mathbf{x}, \mathbf{x}^{\prime})\leq R,\ \forall\mathbf{x}^{\prime}\in\mathcal{X}\}\). Therefore, the local sensitivity of query \(f\) on \(\mathbf{x}\) in the \(R\)-neighborhood is defined \(\forall\mathbf{x}^{\prime}\in\mathcal{N}(\mathbf{x},R)\) such that
\[d_{\mathcal{Y}}(f(\mathbf{x}),f(\mathbf{x}^{\prime}))\leq\Delta_{LS}(\mathbf{x },R)d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime}). \tag{3}\]
We note that if \(d_{\mathcal{X}}\) is hamming distance and \(R\) is 1 then this formulation is exactly same as local sensitivity in \(\epsilon\)-DP[12]. The equation above can be re-written as:
\[\Delta_{LS}(\mathbf{x},R)=\sup_{\mathbf{x}^{\prime}\in\mathcal{N}(\mathbf{x}, R)}\frac{d_{\mathcal{Y}}(f(\mathbf{x}),f(\mathbf{x}^{\prime}))}{d_{ \mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})}. \tag{4}\]
This formulation of local sensitivity is similar to the definition of the local Lipschitz constant. The local Lipschitz constant \(\mathcal{L}\) of \(f\) for a given open neighborhood \(\mathcal{N}\subseteq\mathcal{X}\) is defined as follows:
\[\mathcal{L}^{\alpha,\beta}(f,\mathcal{N})=\sup_{\mathbf{x}^{\prime},\mathbf{x} ^{\prime\prime}\in\mathcal{N}}\frac{||f(\mathbf{x}^{\prime})-f(\mathbf{x}^{ \prime\prime})||_{\alpha}}{||\mathbf{x}^{\prime}-\mathbf{x}^{\prime\prime}||_ {\beta}}\quad(\mathbf{x}^{\prime}\neq\mathbf{x}^{\prime\prime}) \tag{5}\]
We note that while the local sensitivity of \(\mathbf{x}\) is described around the neighborhood of \(\mathbf{x}\), the Lipschitz constant is defined for every possible pair of points in a given neighborhood. Therefore, in Lemma 4.1 we show that the local Lipschitz in the neighborhood of \(\mathbf{x}\) is an upper bound on the local sensitivity.
**Lemma 4.1**.: _For a given \(f\) and for \(d_{\mathcal{Y}}\leftarrow\ell_{\alpha}\) and \(d_{\mathcal{X}}\leftarrow\ell_{\beta}\), \(\Delta_{LS}(\mathbf{x})\leq\mathcal{L}(f,\mathcal{N}(\mathbf{x},R))\). Proof in Supplementary._
Since local sensitivity is upper bounded by the Lipschitz constant, evaluating the Lipschitz constant suffices as an alternative to evaluating local sensitivity.
**Lower bound on testing the validity of \(\Delta_{LS}^{p}\):** The PTR algorithm [11] suggests a proposal on the upper bound (\(\Delta_{LS}^{p}\)) of local sensitivity and then finds the distance between the given dataset (\(\mathbf{x}\)) and the closest dataset for which the proposed upper bound is not valid. Let \(\gamma(\cdot)\) be a distance query and \(\Delta_{LS}(\mathbf{x})\) be the local sensitivity defined as per the DP framework with respect to \(\mathbf{x}\) such that
\[\gamma(\mathbf{x})=\min_{\mathbf{x}^{\prime}\in\mathcal{X}}\{d_{H}(\mathbf{x}, \mathbf{x}^{\prime})\;\;s.t.\;\;\Delta_{LS}(\mathbf{x}^{\prime})>\Delta_{LS}^{ p}\}. \tag{6}\]
For our privacy definition, the query \(\gamma(\mathbf{x},\textit{R})\) can be formulated as follows:
\[\gamma(\mathbf{x},\textit{R})=\min_{\mathbf{x}^{\prime}\in\mathcal{X}}\{d_{ \mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\;\;s.t.\;\;\Delta_{LS}(\mathbf{x} ^{\prime},\textit{R})>\Delta_{LS}^{p}\}. \tag{7}\]
We note that keeping \(d_{\mathcal{X}}=d_{H}\) and \(R=1\), makes the \(\gamma\) query exactly same as defined in the eq 6. Generally, computing \(\gamma(\cdot)\) is intractable due to local sensitivity estimation required for every \(\mathbf{x}^{\prime}\in\mathcal{X}\) (which depends upon a non-linear neural network in our case). We emphasize that this step is intractable at two levels, first, we require estimating the local sensitivity of a neural network query. Second, we require this local sensitivity over all samples in the data domain. Therefore, we make it tractable by computing a lower bound over \(\gamma(\mathbf{x},\textit{R})\) by designing a function \(\phi(\cdot)\) s.t. \(\phi(\mathbf{x},\textit{R})\leq\gamma(\mathbf{x},\textit{R})\). Intuitively, \(\phi(\cdot)\) finds the largest possible neighborhood around \(\mathbf{x}\) such that the local Lipschitz constant of the neighborhood is smaller than the proposed local sensitivity. Because the subset of points around \(\mathbf{x}\) whose neighborhood does not violate \(\Delta_{LS}^{p}\) is half of the size of the original neighborhood in the worst case, we return half of the size of neighborhood as the output. We describe the algorithm procedurally in the supplementary material. More formally,
\[\phi(\mathbf{x},\textit{R})=\frac{1}{2}\cdot\operatorname*{arg\,max}_{\textit{ R}^{\prime}\geq R}\{\mathcal{L}(f,\mathcal{N}(\mathbf{x},\textit{R}^{\prime})) \leq\Delta_{LS}^{p}\}\]
If there is no solution to the equation above, then we return \(0\).
**Lemma 4.2**.: \(\phi(\mathbf{x},\textit{R})\leq\gamma(\mathbf{x},\textit{R})\)_. Proof in Supplementary._
**Privately testing the lower bound**: The next step in the PTR algorithm requires testing if \(\gamma(\mathbf{x})\leq\ln(\frac{1}{\delta}/\epsilon)\). If the condition is true, then no-answer (\(\bot\)) is released instead of data. Since the \(\gamma\) query depends upon \(\mathbf{x}\), PTR privatizes it by applying laplace mechanism, i.e. \(\hat{\gamma}(\mathbf{x})=\gamma(\mathbf{x})+\mathsf{Lap}(1/\epsilon)\). The query has a sensitivity of 1 since the \(\gamma\) could differ at most by 1 for any two neighboring databases. In our framework, we compute \(\phi(\mathbf{x},\textit{R})\) to lower bound the value of \(\gamma(\mathbf{x},\textit{R})\). Therefore, we need to privatize the \(\phi\) query. We prove that for distance metrics in \(d_{\mathcal{X}}\)-privacy, the global sensitivity of the \(\phi(\mathbf{x})\) query is 1.
**Lemma 4.3**.: _The query \(\phi(\cdot)\) has a global sensitivity of \(1\), i.e. \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X},d_{abs}(\phi(\mathbf{x}, \textit{R}),\phi(\mathbf{x}^{\prime},\textit{R}))\leq d_{\mathcal{X}}(\mathbf{x },\mathbf{x}^{\prime})\). Proof in Supplementary._
After computing \(\phi(\mathbf{x},\textit{R})\), we add noise sampled from a laplace distribution, i.e. \(\hat{\phi}(\mathbf{x},\textit{R})=\phi(\mathbf{x},\textit{R})+\mathsf{Lap}( \textit{R}/\epsilon)\). Next, we check if \(\hat{\phi}(\mathbf{x},\textit{R})\leq\ln(\frac{1}{\delta})\cdot\textit{R}/\epsilon\), then we release \(\bot\), otherwise we release \(\hat{\mathbf{z}}=f(g(\mathbf{x}))+\mathsf{Lap}(\Delta_{LS}^{p}/\epsilon)\). Next, we prove that the mechanism \(\mathcal{M}_{1}\) described above satisfies _reconstruction-privacy_.
**Theorem 4.4**.: _Mechanism \(\mathcal{M}_{1}\) satisfies uniform \((2\epsilon,\delta/2,\textit{R})\)-reconstruction privacy Eq. 2, i.e. \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X},s.t.\;d_{\mathcal{X}}( \mathbf{x},\mathbf{x}^{\prime})\leq\textit{R}\)_
\[\mathbb{P}(\mathcal{M}(\mathbf{x})\in S)\leq e^{2\epsilon}\mathbb{P}(\mathcal{ M}(\mathbf{x}^{\prime})\in S)+\frac{\delta}{2} \tag{8}\]
Proof Sketch: Our proof is similar to the proof for the PTR framework[12] except the peculiarity introduced due to our metric space formulation. First, we show that not releasing the answer (\(\bot\)) satisfies the privacy definition. Next, we divide the proof into two parts, when the proposed bound is incorrect (i.e. \(\Delta_{LS}(\mathbf{x},\textit{R})>\Delta_{LS}^{p}\)) and when it is correct. Let \(\hat{\textit{R}}\) be the output of query \(\phi\).
\[\begin{array}{c}\frac{\mathbb{P}[\hat{\phi}(\mathbf{x},\textit{R})=\hat{ \textit{R}}]}{\mathbb{P}[\hat{\phi}(\mathbf{x}^{\prime},\textit{R})=\hat{ \textit{R}}]}=\frac{\exp(-(\frac{|\phi(\mathbf{x},\textit{R})-\hat{\textit{R} }|}{\textit{R}}\cdot\epsilon))}{\exp(-(\frac{|\phi(\mathbf{x},\textit{R})- \hat{\textit{R}}|}{\textit{R}}\cdot\epsilon))}\\ \leq\exp(|\phi(\mathbf{x}^{\prime},\textit{R})-\phi(\mathbf{x},\textit{R})| \cdot\frac{\epsilon}{\textit{R}})\leq\exp(d_{\mathcal{X}}(\mathbf{x}, \mathbf{x}^{\prime})\cdot\frac{\epsilon}{\textit{R}})\leq\exp(\epsilon)\end{array}\]Therefore, using the post-processing property - \(\mathbb{P}[\mathcal{M}(\mathbf{x})=\bot]\leq e^{\epsilon}\mathbb{P}[\mathcal{M}( \mathbf{x}^{\prime})=\bot]\). Here, the first inequality is due to triangle inequality, the second one is due to Lemma 4.3 and the third inequality follows from \(d_{\mathcal{X}}(\mathbf{x},\mathbf{x}^{\prime})\leq R\). Note that when \(\Delta_{LS}(\mathbf{x},R)>\Delta_{LS}^{p}\), \(\phi(\mathbf{x},R)=0\). Therefore, the probability for the test to release the answer in this case is
\[\mathbb{P}[\mathcal{M}(\mathbf{x})\neq\bot]=\mathbb{P}[\phi(\mathbf{x},R)+ \mathsf{Lap}(\frac{R}{\epsilon})>\log(\frac{1}{\delta})\cdot\frac{R}{\epsilon }]=\mathbb{P}[\mathsf{Lap}(\frac{R}{\epsilon})>\log(\frac{1}{\delta})\cdot \frac{R}{\epsilon}]\]
Based on the CDF of Laplace distribution, \(\mathbb{P}[\mathcal{M}(\mathbf{x})\neq\bot]=\frac{\delta}{2}\). Therefore, if \(\Delta_{LS}(\mathbf{x},R)>\Delta_{LS}^{p}\), for any \(S\subseteq\mathbb{R}^{d}\cup\bot\) in the output space of \(\mathcal{M}\)
\[\mathbb{P}[\mathcal{M}(\mathbf{x})\in S]=\mathbb{P}[\mathcal{M}( \mathbf{x})\in S\cap\{\bot\}]+\mathbb{P}[\mathcal{M}(\mathbf{x})\in S\cap\{ \mathbb{R}^{d}\}]\] \[\leq e^{\epsilon}\mathbb{P}[\mathcal{M}(\mathbf{x}^{\prime})\in S \cap\{\bot\}]+\mathbb{P}[\mathcal{M}(\mathbf{x})\neq\bot]\leq e^{\epsilon} \mathbb{P}[\mathcal{M}(\mathbf{x}^{\prime})\in S]+\frac{\delta}{2}\]
If \(\Delta_{LS}(\mathbf{x},R)\leq\Delta_{LS}^{p}\) then the mechanism is a composition of two (\(\epsilon,\delta,R\))-reconstruction private algorithm where the first algorithm (\(\phi(\mathbf{x},R)\)) is \((\epsilon,\delta/2,R)\)-reconstruction private and the second algorithm is \((\epsilon,0,R)\)-reconstruction private. Using composition, the algorithm is \((2\epsilon,\delta/2,R)\)-reconstruction private. We describe \(\mathcal{M}_{1}\) step by step in the algorithm in supplementary. To summarize, we designed the posthoc privacy framework that extends the PTR framework by making it tractable to get \((\epsilon,\delta,R)\)-reconstruction privacy. The exact local Lipschitz constant of the neural network based obfuscator is estimated using mixed-integer programming based optimization developed by [25].
**Computational feasibility**: Our key idea is to add extra computation on the client side to turn informally private representations to their formal counterpart. This extra computational cost is due to the estimation of the local Lipschitz constant of the _obfuscator_ network. However, the following two factors of our framework make it practically feasible -
1. We compute the local Lipschitz constant (i.e. in a small neighborhood around a given point): Our extension of the propose-test-release framework only requires us to operate in a small local neighborhood instead of estimating the global Lipschitz constant which would be much more computationally expensive.
2. Low number of parameters for obfuscator: Instead of estimating the Lipschitz constant of the whole prediction model, we only require estimation of the obfuscator - a neural network that has a significantly lower number of parameters in comparison to the prediction model.
We experimentally validate both of the above benefits in Sec 5. The fact that the local Lipschitz constant is being computed over the same obfuscator allows room for optimizing performance by caching. Our goal is to demonstrate the feasibility of bridging formal privacy guarantees and CI mechanisms, hence, we did not explore such performance speedups.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \multicolumn{2}{c|}{} & \multicolumn{3}{c|}{MNIST (\(\epsilon=0,0.10\)), (\(\epsilon=\infty,0.93\))} & \multicolumn{3}{c|}{PMNIST (\(\epsilon=0,0.10\)), (\(\epsilon=\infty,0.781\))} & \multicolumn{3}{c|}{UTFace (\(\epsilon=0,0.502\)), (\(\epsilon=\infty,0.732\))} \\ \hline \hline & \multicolumn{3}{c|}{Informal} & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=5\) & \(\epsilon=10\) & \multicolumn{3}{c|}{Informal} & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=5\) & \(\epsilon=10\) & \multicolumn{3}{c|}{Informal} & \(\epsilon=1\) & \(\epsilon=2\) & \(\epsilon=5\) & \(\epsilon=10\) \\ \hline Encoder & **0.93** & 0.428 & 0.673 & 0.883 & **0.921** & 0.779 & 0.228 & 0.355 & 0.605 & **0.722** & 0.724 & 0.617 & 0.673 & 0.717 & 0.721 \\ \hline ARL & 0.917 & 0.329 & 0.532 & 0.792 & 0.882 & 0.747 & 0.214 & 0.319 & 0.557 & 0.685 & 0.71 & 0.605 & 0.649 & 0.691 & 0.707 \\ \hline C & 0.926 & 0.443 & 0.684 & 0.881 & 0.917 & **0.781** & 0.158 & 0.225 & 0.422 & 0.608 & **0.73** & 0.623 & 0.673 & **0.718** & **0.724** \\ \hline N & 0.932 & 0.279 & 0.496 & 0.816 & 0.902 & 0.559 & 0.136 & 0.127 & 0.310 & 0.462 & 0.725 & 0.614 & 0.667 & 0.708 & 0.715 \\ \hline ARL-C & 0.896 & 0.424 & 0.648 & 0.839 & 0.883 & 0.761 & 0.196 & 0.314 & 0.537 & 0.682 & 0.709 & 0.632 & 0.684 & 0.70 & 0.705 \\ \hline ARL-N & 0.88 & 0.118 & 0.139 & 0.21 & 0.325 & 0.717 & 0.294 & 0.467 & 0.657 & 0.705 & 0.71 & 0.628 & 0.674 & 0.701 & 0.708 \\ \hline C-N & 0.929 & 0.353 & 0.574 & 0.844 & 0.913 & 0.774 & 0.161 & 0.224 & 0.411 & 0.599 & 0.727 & 0.616 & 0.671 & 0.712 & 0.722 \\ \hline ARL-C & 0.921 & **0.514** & **0.751** & **0.891** & 0.912 & 0.706 & **0.371** & **0.554** & **0.678** & 0.695 & 0.712 & **0.650** & **0.600** & 0.700 & 0.700 \\ \hline \end{tabular}
\end{table}
Table 1: **Performance comparison for different baselines: Our posthoc framework enables comparison between different obfuscation techniques by fixing the privacy budget (\(\epsilon\)). First four rows are different approaches to protect against data reconstruction and the remaining rows below are combinations of different approaches. The top row refers to the accuracy corresponding to different datasets under two extremes of epsilons. ARL refers to widely used adversarial representation learning approach for regularizing representation based on a proxy attacker[35, 38, 65, 56]. Contrastive refers to contrastive learning based informally privatizing mechanism introduced in [44].**
## 5 Experiments
**Experimental Setup:** We evaluate different aspects of our proposed framework - i) **E1**: comparison between different adversarial appraches, ii) **E2**: comparison with local differential privacy (LDP), iii) **E3**: computational tractability of our proposed framework, and iv) **E4**: investigating role of ARL in achieving privacy. We use MNIST[34], FMNIST[64] and UTKFace[67] dataset for all experiments. All of them contain samples with high ambient dimensions (MNIST-784, FMNIST-784 and UTKFace-4096). We use a deep CNN based \(\beta\)-VAE[22] for the _pre-processing_. We use LipMip[25] for computing Lipschitz constant over \(\ell_{\infty}\) norm in the input space and \(\ell_{1}\) norm in the output space. Our baselines include a simple _Encoder_ that projects the data to smaller dimensions using a neural network. This encoder type approach has been used in the literature as Split Learning [19]. We include widely used ARL techniques [65; 56; 38; 35; 66] and adversarial contrastive learning[44] which we denote as C. We use noisy regularization (denoted by N) to improve classifier performance. We refer the reader to supplementary for a detailed experimental setup, codebase, and hyper-parameters.
**E1: Privacy-Utility Trade-off:** Since our framework enables comparison between different obfuscation techniques under same privacy budget, we evaluate test set accuracy as utility in Table 1. Our results indicate that ARL complemented with contrastive and noise regularization helps in attaining overall best performance among all possible combinations.
**E2: Comparison between ARL and LDP:** While \(\epsilon\)-LDP definition provides a different privacy guarantee than our proposed privacy definition, for reference we compare the performance between LDP and CI and report results in supplementary. We note that for low values of \(\epsilon\), LDP techniques do not yield any practical utility. This corroborates with the impossibility result of instance encoding [7] and our discussion in Sec 2 about the inapplicability of traditional DP in the context of CI.
**E3: Computational feasibility:** We report end-to-end runtime evaluation on a CPU-based client and achieve **2 sec/image** (MNIST) and **3.5 sec/image** (UTKFace). While plenty of room exists for optimizing this runtime, we believe current numbers serve as a reasonable starting point for providing formal privacy in ARL. As discussed in Sec 4, we compare the computation time of the _obfuscator_ across three factors relevant to our setup - i) Dimensionality of the input, ii) Size of the neighborhood, and iii) Number of layers in the _Obfuscator_. We report the results in Fig. 2.
**E4: What role does ARL play in achieving privacy?** We investigate the contribution of adversarial training in improving the privacy-utility trade-off. We train _obfuscator_ models with different values of \(\alpha\) (weighing factor) for adversarial training. Our results in Fig 3 indicate that higher weighing of adversarial regularization reduces the local Lipschitz constant, hence reducing the local sensitivity of the neural network. Furthermore, for high values of \(\alpha\), the change in the local Lipschitz constant reduces significantly for different sizes (\(R\)) of the neighborhood. These two observations can potentially explain that ARL improves reconstruction privacy by reducing the sensitivity of the _obfuscator_. However, as we observe in Table 1, the classifier can reduce its utility if ARL is not complemented with noisy and contrastive regularization. We believe this finding could be of independent interest to the adversarial defense community.
Figure 2: **Runtime evaluation of local lipschitz computation** for different (a) neighborhood radius, (b) input dimensions, and (c) number of layers. While the runtime increases exponentially with dimensions, it plateaus with increase in neighborhood radius. Since the input dimensions are same as embedding dimensions it makes the algorithm favorable to our analysis.
## 6 Discussion
**How to select privacy parameter _R_?** One of the key difference between \((\epsilon,\delta,R)\)-neighborhood privacy and \((\epsilon,\delta)\)-DP is the additional parameter \(R\). The choice of \(R\) depends upon the neighborhood in which a user wishes to get an \(\epsilon\) level of indistinguishability. We discussed the equivalence of \(R\) and \(k\) in group Differential Privacy in Sec 3.2. To understand the role of \(R\), we perform reconstruction attacks on privatized encoding obtained from our framework by training an ML model to reconstruct original images. We compare reconstruction results for different values of \(\epsilon\) and \(R\) on four distinct metrics for images and report in Supplementary. To assess the level of indistinguishability, we project the original images into embedding space and sample points from the boundary of neighborhoods of different \(R\). We observe that as the boundary of the neighborhood increases, the images become perceptually different from the original image. For extremely large radii, the images change significantly enough that their corresponding label may change too. Such visualizations can be used to semantically understand different values of \(R\).
**How to propose \(\Delta^{p}_{LS}\)?** Our framework requires a proposal on the upper bound of local sensitivity in a private manner. One possible way to obtain \(\Delta^{p}_{LS}\) is by using the Lipschitz constant of training data samples used in training the _obfuscator_. We choose \(\Delta^{p}_{LS}\) by computing the mean (\(\mu\)) and standard deviation (\(\sigma\)) of local sensitivity on the training dataset (assumed to be known under our threat model), then we keep \(\Delta^{p}_{LS}=\mu+n*\sigma\) where \(n\) allows a trade-off between the likelihood of releasing the samples under PTR and adding extra noise to data. We used \(n=3\) in our experiments. Since empirically, the value of local sensitivity appears to be following a gaussian, using confidence interval serves as a good proxy. Fig 3 shows that for higher values of \(\alpha\), the variability in the local Lipschitz constant decreases indicating the validity of the bound would hold for a large number of samples.
**Limitations:** i) Since we utilize the PTR framework, outlier samples may not get released due to high sensitivity, this is expected since these outlier samples are likely to be misclassified anyway. ii) Lipschitz constant computation is limited to ReLU based DNNs, therefore more sophisticated obfuscator architectures are currently not compatible with our proposed framework and we leave it as part of the future work. iii) Choosing the privacy parameter (_R_) could be challenging for certain datasets and might vary based on user preferences. We believe the choice of such a parameter would depend upon the context in which CI is being applied.
## 7 Related Work
**Collaborative Inference** techniques aim to _learn_ a task-oriented privacy preserving encoding of data. Majority of the works in this area either protect against sensitive attribute leakage [20; 50; 5; 36] or input reconstruction [52; 56; 39; 35; 38]. These techniques usually evaluate their privacy using empirical attacks since the mechanism is learned using gradient based min-max optimization making it infeasible for the worst-case privacy analysis. The problem is exacerbated in the context of input reconstruction because of the lack of formal definition for reconstruction privacy. The goal of our work is to create a framework that make the existing techniques amenable to formal privacy analysis.
Figure 3: **Local sensitivity comparison for different values of \(\alpha\)**: The five bars for each \(\alpha\) represent different neighborhood radii. Increase in the value of \(\alpha\) decreases the local Lipschitz constant (upper bound on local sensitivity) indicating lesser amount of noise to be added for the same level of privacy.
While theoretical analyses [68; 69; 51] of ARL objectives have identified fundamental trade-offs between utility and attribute leakage, they are difficult to formalize as a worst-case privacy guarantee especially for deep neural networks.
**Privacy definitions** that extend the DP definition to incorporate some of its restrictions [29] include \(d_{\mathcal{X}}\)-Privacy [8], and Pufferfish [30]. Our privacy definition is a specific instantiation of the \(d_{\mathcal{X}}\)-Privacy [8] framework that extends DP to general metric spaces. Our instantiation is focused on reconstruction privacy for individual samples instead of membership inference attacks [13]. Existing works in DP for reconstruction attacks [6; 58] focus on protecting training data from an attacker that has access to model weights (whitebox attacker) or output of model queries (blackbox attacker). In contrast, our work only focuses on the privacy of the data used during the prediction stage and not training data.
**Lipschitz constant** estimation for neural networks has been used to guarantee network's stability to perturbations. Existing works either provide an upper bound [63; 33; 14], exact Lipschitz constant [25; 24] or Lipschitz constant regularization [53; 23] during the training stage. Some existing works have explored the relationship between adversarial robustness and DP model training [46; 47; 59]. We utilize similar ideas of perturbation stability but for privacy. Shavit and Gjura [54] use Lipschitz neural networks [16] to learn a private mechanism design for summary statistics such as median, however their mechanism design lack privacy guarantee.
**Posthoc approach to privacy** applies privacy preserving mechanism in a data dependent manner. Smooth sensitivity [41] and PTR [11] reduce the noise magnitude because the local sensitivity is only equal to global sensitivity in the worst case and not average case. Privacy odometer [49], Ex-post privacy loss [37] and Renyi privacy filter [15] track privacy loss as the query is applied over data. Our works builds upon the PTR framework in order to give high privacy for less sensitive data. However, as we show in Sec 4, our framework reformulates the PTR algorithm to make it tractable under our setup.
## 8 Conclusion
In this work we take a first step towards bridging empirical techniques in ARL and formal privacy mechanisms. The posthoc nature of our framework allows reasoning about privacy without any modification to the obfuscation algorithm, hence, making it easy to integrate for benchmarking existing and future techniques. We introduced a privacy definition that formalizes ARL and designed a corresponding privacy mechanism. An exciting future direction for extending our framework is to integrate NNs for designing more effective privacy mechanisms for other sophisticated queries beyond CI. Data adaptive mechanisms are an exciting area in DP and our improvement to PTR can be potentially applied for other queries where PTR might be intractable currently.
## 9 Acknowledgements and Disclosure of funding
We thank Mohammad Mohammadi Amiri for his valuable feedback on the early draft. This work is supported by NSF award number 1729931.
## References
* [1] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC conference on computer and communications security_, pages 308-318, 2016.
* [2] Miguel E Andres, Nicolas E Bordenabe, Konstantinos Chatzikokolakis, and Catuscia Palamidessi. Geo-indistinguishability: Differential privacy for location-based systems. In _Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security_, pages 901-914, 2013.
* [3] Borja Balle, Giovanni Cherubin, and Jamie Hayes. Reconstructing training data with informed adversaries. In _2022 IEEE Symposium on Security and Privacy (SP)_, pages 1138-1156. IEEE, 2022.
* [4] Yuksel Ozan Basciftci, Ye Wang, and Prakash Ishwar. On privacy-utility tradeoffs for constrained data release mechanisms. In _2016 Information Theory and Applications Workshop (ITA)_, pages 1-6. IEEE, 2016.
* [5] Martin Bertran, Natalia Martinez, Afroditi Papadaki, Qiang Qiu, Miguel Rodrigues, Galen Reeves, and Guillermo Sapiro. Adversarially learned representations for information obfuscation and inference. In _International Conference on Machine Learning_, pages 614-623. PMLR, 2019.
* [6] Abhishek Bhowmick, John Duchi, Julien Freudiger, Gaurav Kapoor, and Ryan Rogers. Protection against reconstruction and its applications in private federated learning. _arXiv preprint arXiv:1812.00984_, 2018.
* [7] Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, and Florian Tramer. Is private learning possible with instance encoding? _arXiv preprint arXiv:2011.05315_, 2020.
* [8] Konstantinos Chatzikokolakis, Miguel E Andres, Nicolas Emilio Bordenabe, and Catuscia Palamidessi. Broadening the scope of differential privacy using metrics. In _International Symposium on Privacy Enhancing Technologies Symposium_, pages 82-102. Springer, 2013.
* [9] Rachel Cummings, Damien Desfontaines, David Evans, Roxana Geambasu, Matthew Jagielski, Yangsibo Huang, Peter Kairouz, Gautam Kamath, Sewoong Oh, Olga Ohrimenko, et al. Challenges towards the next frontier in privacy. _arXiv preprint arXiv:2304.06929_, 2023.
* [10] Jian Du, Song Li, Moran Feng, and Siheng Chen. Dynamic differential-privacy preserving sgd. _arXiv preprint arXiv:2111.00173_, 2021.
* [11] Cynthia Dwork and Jing Lei. Differential privacy and robust statistics. In _Proceedings of the forty-first annual ACM symposium on Theory of computing_, pages 371-380, 2009.
* [12] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. _Found. Trends Theor. Comput. Sci._, 9(3-4):211-407, 2014.
* [13] Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. Exposed! a survey of attacks on private data. _Annual Review of Statistics and Its Application_, 4:61-84, 2017.
* [14] Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. Efficient and accurate estimation of lipschitz constants for deep neural networks. _Advances in Neural Information Processing Systems_, 32, 2019.
* [15] Vitaly Feldman and Tijana Zrnic. Individual privacy accounting via a renyi filter. _Advances in Neural Information Processing Systems_, 34, 2021.
* [16] Henry Gouk, Eibe Frank, Bernhard Pfahringer, and Michael Cree. Regularisation of neural networks by enforcing lipschitz continuity. arxiv e-prints, page. _arXiv preprint arXiv:1804.04368_, 2018.
* [17] Chuan Guo, Brian Karrer, Kamalika Chaudhuri, and Laurens van der Maaten. Bounding training data reconstruction in private (deep) learning. In _International Conference on Machine Learning_, pages 8056-8071. PMLR, 2022.
* [18] Hao Guo, Brian Dolhansky, Eric Hsin, Phong Dinh, Cristian Canton Ferrer, and Song Wang. Deep poisoning: Towards robust image data sharing against visual disclosure. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, pages 686-696, 2021.
* [19] Otkrist Gupta and Ramesh Raskar. Distributed learning of deep neural network over multiple agents. _Journal of Network and Computer Applications_, 116:1-8, 2018.
* [20] Jihun Hamm. Minimax filter: Learning to preserve privacy from inference attacks. _The Journal of Machine Learning Research_, 18(1):4704-4734, 2017.
* [21] Awni Hannun, Chuan Guo, and Laurens van der Maaten. Measuring data leakage in machine-learning models with fisher information. In _Uncertainty in Artificial Intelligence_, pages 760-770. PMLR, 2021.
* [22] Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. 2016.
* [23] Yujia Huang, Huan Zhang, Yuanyuan Shi, J Zico Kolter, and Anima Anandkumar. Training certifiably robust neural networks with efficient local lipschitz bounds. _Advances in Neural Information Processing Systems_, 34, 2021.
* [24] Matt Jordan and Alex Dimakis. Provable lipschitz certification for generative models. In _International Conference on Machine Learning_, pages 5118-5126. PMLR, 2021.
* [25] Matt Jordan and Alexandros G Dimakis. Exactly computing the local lipschitz constant of relu networks. _Advances in Neural Information Processing Systems_, 33:7344-7353, 2020.
* [26] James Jordon, Jinsung Yoon, and Mihaela Van Der Schaar. Pate-gan: Generating synthetic data with differential privacy guarantees. In _International conference on learning representations_, 2018.
* [27] Chiraag Juvekar, Vinod Vaikuntanathan, and Anantha Chandrakasan. {GAZELLE}: A low latency framework for secure neural network inference. In _27th USENIX Security Symposium (USENIX Security 18)_, pages 1651-1669, 2018.
* [28] Shiva Prasad Kasiviswanathan, Homin K Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? _SIAM Journal on Computing_, 40(3):793-826, 2011.
* [29] Daniel Kifer and Ashwin Machanavajjhala. No free lunch in data privacy. In _Proceedings of the 2011 ACM SIGMOD International Conference on Management of data_, pages 193-204, 2011.
* [30] Daniel Kifer and Ashwin Machanavajjhala. Pufferfish: A framework for mathematical privacy definitions. _ACM Transactions on Database Systems (TODS)_, 39(1):1-36, 2014.
* [31] Brian Knott, Shobha Venkataraman, Awni Hannun, Shubho Sengupta, Mark Ibrahim, and Laurens van der Maaten. Crypten: Secure multi-party computation meets machine learning. _Advances in Neural Information Processing Systems_, 34, 2021.
* [32]Fragkiskos Koufogiannis and George J Pappas. Location-dependent privacy. In _2016 IEEE 55th Conference on Decision and Control (CDC)_, pages 7586-7591. IEEE, 2016.
* [33] Fabian Latorre, Paul Rolland, and Volkan Cevher. Lipschitz constant estimation of neural networks via sparse polynomial optimization. _arXiv preprint arXiv:2004.08688_, 2020.
* [34] Yann LeCun. The mnist database of handwritten digits. _http://yann. lecun. com/exdb/mnist/_, 1998.
* [35] Ang Li, Jiayi Guo, Huanrui Yang, Flora D Salim, and Yiran Chen. Deepobfuscator: Obfuscating intermediate representations with privacy-preserving adversarial learning on smartphones. In _Proceedings of the International Conference on Internet-of-Things Design and Implementation_, pages 28-39, 2021.
* [36] Yitong Li, Timothy Baldwin, and Trevor Cohn. Towards robust and privacy-preserving text representations. _arXiv preprint arXiv:1805.06093_, 2018.
* [37] Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, and Steven Z Wu. Accuracy first: Selecting a differential privacy level for accuracy constrained emr. _Advances in Neural Information Processing Systems_, 30, 2017.
* [38] Sicong Liu, Junzhao Du, Anshumali Shrivastava, and Lin Zhong. Privacy adversarial network: representation learning for mobile data privacy. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_, 3(4):1-18, 2019.
* [39] Fatemehsadat Mireshghallah, Mohammadkazem Taram, Ali Jalali, Ahmed Taha Taha Elthakeb, Dean Tullsen, and Hadi Esmaeilzadeh. Not all features are equal: Discovering essential features for preserving prediction privacy. In _Proceedings of the Web Conference 2021_, pages 669-680, 2021.
* [40] Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, and Raluca Ada Popa. Delphi: A cryptographic inference service for neural networks. In _29th USENIX Security Symposium (USENIX Security 20)_, pages 2505-2522, 2020.
* [41] Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In _Proceedings of the thirty-ninth annual ACM symposium on Theory of computing_, pages 75-84, 2007.
* [42] Seong Joon Oh, Rodrigo Benenson, Mario Fritz, and Bernt Schiele. Faceless person recognition: Privacy implications in social media. In _European Conference on Computer Vision_, pages 19-35. Springer, 2016.
* [43] Olga Ohrimenko, Felix Schuster, Cedric Fournet, Aastha Mehta, Sebastian Nowozin, Kapil Vaswani, and Manuel Costa. Oblivious {Multi-Party} machine learning on trusted processors. In _25th USENIX Security Symposium (USENIX Security 16)_, pages 619-636, 2016.
* [44] Seyed Ali Osia, Ali Shahin Shamsabadi, Sina Sajadmanesh, Ali Taheri, Kleomenis Katevas, Hamid R Rabiee, Nicholas D Lane, and Hamed Haddadi. A hybrid deep learning architecture for privacy-preserving mobile analytics. _IEEE Internet of Things Journal_, 7(5):4505-4518, 2020.
* [45] Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. _arXiv preprint arXiv:1610.05755_, 2016.
* [46] Hai Phan, My T Thai, Han Hu, Ruoming Jin, Tong Sun, and Dejing Dou. Scalable differential privacy with certified robustness in adversarial learning. In _International Conference on Machine Learning_, pages 7683-7694. PMLR, 2020.
* [47] Rafael Pinot, Florian Yger, Cedric Gouy-Pailler, and Jamal Aif. A unified view on differential privacy and robustness to adversarial examples. _arXiv preprint arXiv:1906.07982_, 2019.
* [48] Sofya Raskhodnikova and Adam Smith. Efficient lipschitz extensions for high-dimensional graph statistics and node private degree distributions. _arXiv preprint arXiv:1504.07912_, 2015.
* [49] Ryan M Rogers, Aaron Roth, Jonathan Ullman, and Salil Vadhan. Privacy odometers and filters: Pay-as-you-go composition. _Advances in Neural Information Processing Systems_, 29, 2016.
* [50] Proteek Chandan Roy and Vishnu Naresh Boddeti. Mitigating information leakage in image representations: A maximum entropy approach. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2586-2594, 2019.
* [51] Bashir Sadeghi and Vishnu Boddeti. On the fundamental trade-offs in learning invariant representations. _arXiv preprint arXiv:2109.03386_, 2021.
* [52] Mohammad Samragh, Hossein Hosseini, Aleksei Triastcyn, Kambiz Azarian, Joseph Soriaga, and Farinaz Koushanfar. Unsupervised information obfuscation for split inference of neural networks. _arXiv preprint arXiv:2104.11413_, 2021.
* [53] Kevin Scaman and Aladin Virmaux. Lipschitz regularity of deep neural networks: Analysis and efficient estimation. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_, NIPS'18, page 3839-3848, Red Hook, NY, USA, 2018. Curran Associates Inc.
* [54] Yonadav Shavit and Boriana Gjura. Exploring the use of lipschitz neural networks for automating the design of differentially private mechanisms. 2019.
* [55] Zhuoxing Shi, Yihan Wang, Huan Zhang, J Zico Kolter, and Cho-Jui Hsieh. Efficiently computing local lipschitz constants of neural networks via bound propagation. _Advances in Neural Information Processing Systems_, 35:2350-2364, 2022.
* [56] Abhishek Singh, Ayush Chopra, Ethan Garza, Emily Zhang, Praneeth Vepakomma, Vivek Sharma, and Ramesh Raskar. Disco: Dynamic and invariant sensitive channel obfuscation for deep neural networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12125-12135, 2021.
* [57] Brij Mohan Lal Srivastava, Aurelien Bellet, Marc Tommasi, and Emmanuel Vincent. Privacy-preserving adversarial representation learning in asr: Reality or illusion? _arXiv preprint arXiv:1911.04913_, 2019.
* [58] Pierre Stock, Igor Shilov, Ilya Mironov, and Alexandre Sablayrolles. Defending against reconstruction attacks with r\(\backslash\)'enyi differential privacy. _arXiv preprint arXiv:2202.07623_, 2022.
* [59] Nurislam Turysnbek, Aleksandr Petiushko, and Ivan Oseledets. Robustness threats of differential privacy. _arXiv preprint arXiv:2012.07828_, 2020.
* [60] Salil Vadhan. The complexity of differential privacy. In _Tutorials on the Foundations of Cryptography_, pages 347-450. Springer, 2017.
* [61] Praneeth Vepakomma, Abhishek Singh, Emily Zhang, Otkrist Gupta, and Ramesh Raskar. Nopeek-infer: Preventing face reconstruction attacks in distributed inference after on-premise training. In _2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021)_, pages 1-8. IEEE, 2021.
* [62] Ye Wang, Yuksel Ozan Basciftci, and Prakash Ishwar. Privacy-utility tradeoffs under constrained data release mechanisms. _arXiv preprint arXiv:1710.09295_, 2017.
* [63] Lily Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, Duane Boning, and Inderjit Dhillon. Towards fast computation of certified robustness for relu networks. In _International Conference on Machine Learning_, pages 5276-5285. PMLR, 2018.
* [64] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017.
* [65] Taihong Xiao, Yi-Hsuan Tsai, Kihyuk Sohn, Manmohan Chandraker, and Ming-Hsuan Yang. Adversarial learning of privacy-preserving and task-oriented representations. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 12434-12441, 2020.
* [66] Mengda Yang, Ziang Li, Juan Wang, Hongxin Hu, Ao Ren, Xiaoyang Xu, and Wenzhe Yi. Measuring data reconstruction defenses in collaborative inference systems. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022. URL [https://openreview.net/forum?id=UMdY6-r7yRu](https://openreview.net/forum?id=UMdY6-r7yRu).
* [67] Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. IEEE, 2017.
* [68] Han Zhao, Jianfeng Chi, Yuan Tian, and Geoffrey J Gordon. Trade-offs and guarantees of adversarial representation learning for information obfuscation. _Advances in Neural Information Processing Systems_, 33:9485-9496, 2020.
* [69] Han Zhao, Chen Dan, Bryon Aragam, Tommi S Jaakkola, Geoffrey J Gordon, and Pradeep Ravikumar. Fundamental limits and tradeoffs in invariant representation learning. _arXiv preprint arXiv:2012.10713_, 2020. | ## Review
### Summary
This paper addresses the challenge of privacy-preserving collaborative inference by proposing a framework for encoding sensitive data to limit the information that can be inferred by service providers during machine learning-based interactions. The authors introduce a formal method for evaluating privacy guarantees, inspired by differential privacy, and demonstrate its application using adversarial representation learning. While the proposed approach shows promise in protecting user privacy during inference, there are concerns regarding its practical utility and robustness against advanced attacks. The paper provides a detailed evaluation of the solution across various datasets, although clarity in certain areas and comparisons with existing methods could enhance the overall contribution.
### Strengths
- The paper made several technical improvements to render the proposed solution useful in practice.
- The proposed solution has been evaluated extensively across three diverse datasets.
- The paper is well-written and explains the limits of previous works on assessing the security of adversarial representation learning.
- Formal privacy guarantees are established, which are important for the studied problem.
- The move from differential privacy to d_x-privacy allows for more tractable local sensitivity estimation.
### Weaknesses
- Lack of discussion on the applicability of the framework to tabular datasets.
- There is insufficient evaluation against advanced attacks and the threat model is considered unrealistic.
- The paper does not provide local differential privacy, which may limit its perceived utility.
- More clarifications are needed on the robustness of the framework against white-box adversaries.
- Some experimental results are not convincing and require further justification or additional metrics.
### Questions
- What are some concrete applications where a user would reasonably be satisfied with the specific type of privacy afforded by this approach?
- Can the authors provide more discussion on the applicability to tabular data?
- How do the theoretical results compare with existing works that also analyze obfuscation techniques?
- What are the key technical differences between this approach and others like secure multiparty computation or homomorphic encryption?
- Why does the performance of ARL-C-N seem inferior to other baselines in certain cases?
### Soundness
**Score:** 3
**Description:** 3 = good: The proposed method is fundamentally sound, but some concerns regarding robustness and evaluation against advanced attacks need to be addressed.
### Presentation
**Score:** 3
**Description:** 3 = good: The presentation is generally clear and well-structured, but some parts require additional clarification and detail.
### Contribution
**Score:** 2
**Description:** 2 = fair: While the paper addresses an important problem, the practical utility and evaluation of the proposed method require significant improvements.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact potential, but it needs to address several concerns before full acceptance.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates originality in addressing privacy in collaborative inference and provides a solid technical framework. However, for it to be fully impactful, concerns regarding practical utility, robustness against attacks, and clarity in presentation need to be addressed in the final version. Overall, the soundness and contributions are good enough to warrant acceptance with minor clarifications.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# An Efficient Tester-Learner for Halfspaces
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
We give the first efficient algorithm for learning halfspaces in the testable learning model recently defined by Rubinfeld and Vasilyan [20]. In this model, a learner certifies that the accuracy of its output hypothesis is near optimal whenever the training set passes an associated test, and training sets drawn from some target distribution must pass the test. This model is more challenging than distribution-specific agnostic or Massart noise models where the learner is allowed to fail arbitrarily if the distributional assumption does not hold. We consider the setting where the target distribution is the standard Gaussian in \(d\) dimensions and the label noise is either Massart or adversarial (agnostic). For Massart noise, our tester-learner runs in polynomial time and outputs a hypothesis with (information-theoretically optimal) error \(\mathsf{opt}+\epsilon\) (and extends to any fixed strongly log-concave target distribution). For adversarial noise, our tester-learner obtains error \(O(\mathsf{opt})+\epsilon\) in polynomial time. Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution. Here we develop new tests of independent interest that make critical use of the labels and combine them with the moment-matching approach of [1]. This enables us to implement a testable variant of the algorithm of [1] for learning noisy halfspaces using nonconvex SGD.
## 1 Introduction
Learning halfspaces in the presence of noise is one of the most basic and well-studied problems in computational learning theory. A large body of work has obtained results for this problem under a variety of different noise models and distributional assumptions (see e.g. [1] for a survey). A major issue with common distributional assumptions such as Gaussianity, however, is that they can be hard or impossible to verify in the absence of any prior information.
The recently defined model of testable learning [20] addresses this issue by replacing such assumptions with efficiently testable ones. In this model, the learner is required to work with an arbitrary input distribution \(D_{\mathcal{X}\mathcal{Y}}\) and verify any assumptions it needs to succeed. It may choose to reject a given training set, but if it accepts, it is required to output a hypothesis with error close to \(\mathsf{opt}(\mathcal{C},D_{\mathcal{X}\mathcal{Y}})\), the optimal error achievable over \(D_{\mathcal{X}\mathcal{Y}}\) by any function in a concept class \(\mathcal{C}\). Further, whenever the training set is drawn from a distribution \(D_{\mathcal{X}\mathcal{Y}}\) whose marginal is truly a well-behaved target distribution \(D^{*}\) (such as the standard Gaussian), the algorithm is required to accept with high probability. Such an algorithm, or tester-learner, is then said to testably learn \(\mathcal{C}\) with respect to target marginal \(D^{*}\). (See Definition 2.1.) Note that unlike ordinary distribution-specific agnostic learners, a tester-learner must take some nontrivial action _regardless_ of the input distribution.
The work of [20, 1] established foundational algorithmic and statistical results for this model and showed that testable learning is in general provably harder than ordinary distribution-specific agnostic learning. As one of their main algorithmic results, they showed tester-learners forthe class of halfspaces over \(\mathbb{R}^{d}\) that succeed whenever the target marginal is Gaussian (or one of a more general class of distributions), achieving error \(\mathsf{opt}+\epsilon\) in time and sample complexity \(d^{\tilde{O}(1/\epsilon^{2})}\). This matches the running time of ordinary distribution-specific agnostic learning of halfspaces over the Gaussian using the standard approach of [14]. Their testers are simple and label-oblivious, and are based on checking whether the low-degree empirical moments of the unknown marginal match those of the target \(D^{*}\).
These works essentially resolve the question of designing tester-learners achieving error \(\mathsf{opt}+\epsilon\) for halfspaces, matching known hardness results for (ordinary) agnostic learning [1, 1, 2, 3]. Their running time, however, necessarily scales exponentially in \(1/\epsilon\).
A long line of research has sought to obtain more efficient algorithms at the cost of relaxing the optimality guarantee [1, 1, 3, 4]. These works give polynomial-time algorithms achieving bounds of the form \(\mathsf{opt}+\epsilon\) and \(O(\mathsf{opt})+\epsilon\) for the Massart and agnostic setting respectively under structured distributions (see Section 1.1 for more discussion). The main question we consider here is whether such guarantees can be obtained in the testable learning framework.
**Our contributions.** In this work we design the first tester-learners for halfspaces that run in fully polynomial time in all parameters. We match the optimality guarantees of fully polynomial-time learning algorithms under Gaussian marginals for the Massart noise model (where the labels arise from a halfspace but are flipped by an adversary with probability at most \(\eta\)) as well as for the agnostic model (where the labels can be completely arbitrary). In fact, for the Massart setting our guarantee holds with respect to any chosen target marginal \(D^{*}\) that is isotropic and strongly log-concave, and the same is true of the agnostic setting albeit with a slightly weaker guarantee.
**Theorem 1.1** (Formally stated as Theorem 4.1).: _Let \(\mathcal{C}\) be the class of origin-centered halfspaces over \(\mathbb{R}^{d}\), and let \(D^{*}\) be any isotropic strongly log-concave distribution. In the setting where the labels are corrupted with Massart noise at rate at most \(\eta<\frac{1}{2}\), \(\mathcal{C}\) can be testably learned w.r.t. \(D^{*}\) up to error \(\mathsf{opt}+\epsilon\) using \(\mathrm{poly}(d,\frac{1}{\epsilon},\frac{1}{1-2\eta})\) time and sample complexity._
**Theorem 1.2** (Formally stated as Theorem 5.1).: _Let \(\mathcal{C}\) be as above. In the adversarial noise or agnostic setting where the labels are completely arbitrary, \(\mathcal{C}\) can be testably learned w.r.t. \(\mathcal{N}(0,I_{d})\) up to error \(O(\mathsf{opt})+\epsilon\) using \(\mathrm{poly}(d,\frac{1}{\epsilon})\) time and sample complexity._
**Our techniques.** The tester-learners we develop are significantly more involved than prior work on testable learning. We build on the nonconvex optimization approach to learning noisy halfspaces due to [1, 1] as well as the structural results on fooling functions of halfspaces using moment matching due to [1]. Unlike the label-oblivious, global moment tests of [1, 1], our tests make crucial use of the labels and check _local_ properties of the distribution in regions described by certain candidate vectors. These candidates are approximate stationary points of a natural nonconvex surrogate of the 0-1 loss, obtained by running gradient descent. When the distribution is known to be well-behaved, [1, 1] showed that any such stationary point is in fact a good solution (for technical reasons we must use a slightly different surrogate loss). Their proof relies crucially on structural geometric properties that hold for these well-behaved distributions, an important one being that the probability mass of any region close to the origin is proportional to its geometric measure.
In the testable learning setting, we must efficiently check this property for candidate solutions. Since these regions may be described as intersections of halfspaces, we may hope to apply the moment-matching framework of [1]. Naively, however, they only allow us to check in polynomial time that the probability masses of such regions are within an additive constant of what they should be under the target marginal. But we can view these regions as sub-regions of a known band described by our candidate vector. By running moment tests on the distribution _conditioned_ on this band and exploiting the full strength of the moment-matching framework, we are able to effectively convert our weak additive approximations to good multiplicative ones. This allows us to argue that our stationary points are indeed good solutions.
**Limitations and Future Work.** In this paper we provide the first efficient tester-learners for halfspaces when the noise is either adversarial or Massart. An interesting direction for future work would be to design tester-learners for the agnostic setting whose target marginal distributions may lie within a large family (e.g., strongly log-concave distributions) but still achieve error of \(O(\mathsf{opt})\). Another interesting direction is providing tester-learners that are not tailored to a single target distribution, but are guaranteed to accept any member of a large family of distributions.
### Related work
We provide a partial summary of some of the most relevant prior and related work on efficient algorithms for learning halfspaces in the presence of adversarial label or Massart noise, and refer the reader to [1] for a survey.
In the distribution-specific agnostic setting where the marginal is assumed to be isotropic and log-concave, [10] showed an algorithm achieving error \(O(\mathsf{opt}^{1/3})+\epsilon\) for the class of origin-centered halfspaces. [1] later obtained \(O(\mathsf{opt})+\epsilon\) using an approach that introduced the principle of iterative _localization_, where the learner focuses attention on a band around a candidate halfspace in order to produce an improved candidate. [16] used this principle to obtain a PTAS for agnostically learning halfspaces under the uniform distribution on the sphere, and [1] extended it to more general \(s\)-concave distributions. Further works in this line include [13, 14, 15].
[1] introduced the simplest approach yet, based entirely on nonconvex SGD, and showed that it achieves \(O(\mathsf{opt})+\epsilon\) for origin-centered halfspaces over a wide class of structured distributions. Other related works include [17, 18].
In the Massart noise setting with noise rate bounded by \(\eta\), work of [19] gave the first efficient distribution-free algorithm achieving error \(\eta+\epsilon\); further improvements and followups include [18, 19]. However, the optimal error opt achievable by a halfspace may be much smaller than \(\eta\), and it has been shown that there are distributions where achieving error competitive with opt as opposed to \(\eta\) is computationally hard [13, 14]. As a result, the distribution-specific setting remains well-motivated for Massart noise. Early distribution-specific algorithms were given by [1, 2], but a key breakthrough was the nonconvex SGD approach introduced by [1], which achieved error \(\mathsf{opt}+\epsilon\) for origin-centered halfspaces efficiently over a wide range of distributions. This was later generalized by [1].
### Technical overview
Our starting point is the nonconvex optimization approach to learning noisy halfspaces due to [18, 19]. The algorithms in these works consist of running SGD on a natural non-convex surrogate \(\mathcal{L}_{\sigma}\) for the 0-1 loss, namely a smooth version of the ramp loss. The key structural property shown is that if the marginal distribution is structured (e.g. log-concave) and the slope of the ramp is picked appropriately, then any we that has large angle with an optimal \(\mathbf{w}^{*}\) cannot be an approximate stationary point of the surrogate loss \(\mathcal{L}_{\sigma}\), i.e. that \(\|\nabla\mathcal{L}_{\sigma}(\mathbf{w})\|\) must be large. This is proven by carefully analyzing the contributions to the gradient norm from certain critical regions of \(\mathrm{span}(\mathbf{w},\mathbf{w}^{*})\), and crucially using the distributional assumption that the probability masses of these regions are proportional to their geometric measures. (See Fig. 3.) In the testable learning setting, the main challenge we face in adapting this approach is checking such a property for the unknown distribution we have access to.
A preliminary observation is that the critical regions of \(\mathrm{span}(\mathbf{w},\mathbf{w}^{*})\) that we need to analyze are rectangles, and are hence functions of a small number of halfspaces. Encouragingly, one of the key structural results of the prior work of [1] pertains to "fooling" such functions. Concretely, they show that whenever the true marginal \(D_{\mathcal{X}}\) matches moments of degree at most \(\widetilde{O}(1/\tau^{2})\) with a target \(D^{*}\) that satisfies suitable concentration and anticoncentration properties, then \(|\operatorname{\mathbb{E}}_{D_{\mathcal{X}}}[f]-\operatorname{\mathbb{E}}_{D^ {*}}[f]|\leq\tau\) for any \(f\) that is a function of a small number of halfspaces. If we could run such a test and ensure that the probabilities of the critical regions over our empirical marginal are also related to their areas, then we would have a similar stationary point property.
However, the difficulty is that since we wish to run in fully polynomial time, we can only hope to fool such functions up to \(\tau\) that is a constant. Unfortunately, this is not sufficient to analyze the probability masses of the critical regions we care about as they may be very small.
The chief insight that lets us get around this issue is that each critical region \(R\) is in fact of a very specific form, namely a rectangle that is axis-aligned with \(\mathbf{w}\): \(R=\{\mathbf{x}:\langle\mathbf{w},\mathbf{x}\rangle\in[-\sigma,\sigma]\text{ and }\langle\mathbf{v},\mathbf{x}\rangle\in[\alpha,\beta]\}\) for some values \(\alpha,\beta,\sigma\) and some \(\mathbf{v}\) orthogonal to \(\mathbf{w}\). Moreover, we _know_\(\mathbf{w}\), meaning we can efficiently estimate the probability \(\operatorname{\mathbb{P}}_{D_{\mathcal{X}}}[\langle\mathbf{w},\mathbf{x} \rangle\in[-\sigma,\sigma]]\) up to constant multiplicative factors without needing moment tests. Denoting the band \(\{\mathbf{x}:\langle\mathbf{w},\mathbf{x}\rangle\in[-\sigma,\sigma]\}\) by \(T\) and writing \(\operatorname{\mathbb{P}}_{D_{\mathcal{X}}}[R]=\operatorname{\mathbb{P}}_{D_{ \mathcal{X}}}[\langle\mathbf{v},\mathbf{x}\rangle\in[\alpha,\beta]\mid\mathbf{x }\in T]\operatorname{\mathbb{P}}_{D_{\mathcal{X}}}[T]\), it turns out that we should expect \(\operatorname{\mathbb{P}}_{D_{\mathcal{X}}}[\langle\mathbf{v},\mathbf{x} \rangle\in[\alpha,\beta]\mid\mathbf{x}\in T]=\Theta(1)\), as this is what would occur under the structured target distribution \(D^{*}\). (Such a "localization" property is also at the heart of the algorithms for approximately learning halfspaces of, e.g., [1, 1].) To check this, it suffices to run tests that ensure that \(\mathbb{P}_{D_{\mathcal{X}}}[\langle\mathbf{v},\mathbf{x}\rangle\in[\alpha, \beta]\mid\mathbf{x}\in T]\) is within an additive constant of this probability under \(D^{*}\).
We can now describe the core of our algorithm (omitting some details such as the selection of the slope of the ramp). First, we run SGD on the surrogate loss \(\mathcal{L}\) to arrive at an approximate stationary point and candidate vector \(\mathbf{w}\) (technically a list of such candidates). Then, we define the band \(T\) based on \(\mathbf{w}\), and run tests on the empirical distribution conditioned on \(T\). Specifically, we check that the low-degree empirical moments conditioned on \(T\) match those of \(D^{*}\) conditioned on \(T\), and then apply the structural result of [1] to ensure conditional probabilities of the form \(\mathbb{P}_{D_{\mathcal{X}}}[\langle\mathbf{v},\mathbf{x}\rangle\in[\alpha, \beta]\mid\mathbf{x}\in T]\) match \(\mathbb{P}_{D^{*}}[\langle\mathbf{v},\mathbf{x}\rangle\in[\alpha,\beta]\mid \mathbf{x}\in T]\) up to a suitable additive constant. This suffices to ensure that even over our empirical marginal, the particular stationary point \(\mathbf{w}\) we have is indeed close in angular distance to an optimal \(\mathbf{w}^{*}\).
A final hurdle that remains, often taken for granted under structured distributions, is that closeness in angular distance \(\measuredangle(\mathbf{w},\mathbf{w}^{*})\) does not immediately translate to closeness in terms of agreement, \(\mathbb{P}[\operatorname{sign}(\langle\mathbf{w},\mathbf{x}\rangle)\neq \operatorname{sign}(\langle\mathbf{w}^{*},\mathbf{x}\rangle)]\), over our unknown marginal. Nevertheless, we show that when the target distribution is Gaussian, we can run polynomial-time tests that ensure that an angle of \(\theta=\measuredangle(\mathbf{w},\mathbf{w}^{*})\) translates to disagreement of at most \(O(\theta)\). When the target distribution is a general strongly log-concave distribution, we show a slightly weaker relationship: for any \(k\in\mathbb{N}\), we can run tests requiring time \(d^{\widetilde{O}(k)}\) that ensure that an angle of \(\theta\) translates to disagreement of at most \(O(\sqrt{k}\cdot\theta^{1-1/k})\). In the Massart noise setting, we can make \(\measuredangle(\mathbf{w},\mathbf{w}^{*})\) arbitrarily small, and so obtain our \(\operatorname{opt}+\epsilon\) guarantee for any target strongly log-concave distribution in polynomial time. In the adversarial noise setting, we face a more delicate tradeoff and can only make \(\measuredangle(\mathbf{w},\mathbf{w}^{*})\) as small as \(\Theta(\operatorname{opt})\). When the target distribution is Gaussian, this is enough to obtain final error \(O(\operatorname{opt})+\epsilon\) in polynomial time. When the target distribution is a general strongly log-concave distribution, we instead obtain \(\widetilde{O}(\operatorname{opt})+\epsilon\) in quasipolynomial time.
## 2 Preliminaries
Notation and setupThroughout, the domain will be \(\mathcal{X}=\mathbb{R}^{d}\), and labels will lie in \(\mathcal{Y}=\{\pm 1\}\). The unknown joint distribution over \(\mathcal{X}\times\mathcal{Y}\) that we have access to will be denoted by \(D_{\mathcal{X}\mathcal{Y}}\), and its marginal on \(\mathcal{X}\) will be denoted by \(D_{\mathcal{X}\mathcal{Y}}\). The target marginal on \(\mathcal{X}\) will be denoted by \(D^{\star}\). We use the following convention for monomials: for a multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\in\mathbb{Z}_{\geq 0}^{d}\), \(\mathbf{x}^{\alpha}\) denotes \(\prod_{i}x_{i}^{\alpha_{i}}\), and \(|\alpha|=\sum_{i}\alpha_{i}\) denotes its total degree. We use \(\mathcal{C}\) to denote a concept class mapping \(\mathbb{R}^{d}\) to \(\{\pm 1\}\), which throughout this paper will be the class of halfspaces or functions of halfspaces over \(\mathbb{R}^{d}\). We use \(\operatorname{opt}(\mathcal{C},D_{\mathcal{X}\mathcal{Y}})\) to denote the optimal error \(\inf_{f\in\mathcal{C}}\mathbb{P}_{(\mathbf{x},y)\sim D_{\mathcal{X}\mathcal{Y} }}[f(\mathbf{x})\neq y]\), or just \(\operatorname{opt}\) when \(\mathcal{C}\) and \(D_{\mathcal{X}\mathcal{Y}}\) are clear from context. We recall the definitions of the noise models we consider. In the Massart noise model, the labels satisfy \(\mathbb{P}_{y\sim D_{\mathcal{X}\mathcal{Y}}\mid\mathbf{x}}[y\neq\operatorname {sign}(\langle\mathbf{w}^{*},\mathbf{x}\rangle)\mid\mathbf{x}]=\eta(\mathbf{x})\), where \(\eta(\mathbf{x})\leq\eta<\frac{1}{2}\) for all \(\mathbf{x}\). In the adversarial label noise or agnostic model, the labels may be completely arbitrary. In both cases, the learner's goal is to produce a hypothesis with error competitive with \(\operatorname{opt}\).
We now formally define testable learning. The following definition is an equivalent reframing of the original definition [13, Def 4], folding the (label-aware) tester and learner into a single tester-learner.
**Definition 2.1** (Testable learning, [13]).: Let \(\mathcal{C}\) be a concept class mapping \(\mathbb{R}^{d}\) to \(\{\pm 1\}\). Let \(D^{*}\) be a certain target marginal on \(\mathbb{R}^{d}\). Let \(\epsilon,\delta>0\) be parameters, and let \(\psi:[0,1]\to[0,1]\) be some function. We say \(\mathcal{C}\) can be testably learned w.r.t. \(D^{*}\) up to error \(\psi(\operatorname{opt})+\epsilon\) with failure probability \(\delta\) if there exists a tester-learner \(A\) meeting the following specification. For any distribution \(D_{\mathcal{X}\mathcal{Y}}\) on \(\mathbb{R}^{d}\times\{\pm 1\}\), \(A\) takes in a large sample \(S\) drawn from \(D_{\mathcal{X}\mathcal{Y}}\), and either rejects \(S\) or accepts and produces a hypothesis \(h:\mathbb{R}^{d}\to\{\pm 1\}\). Further, the following conditions must be met:
1. (Soundness.) Whenever \(A\) accepts and produces a hypothesis \(h\), with probability at least \(1-\delta\) (over the randomness of \(S\) and \(A\)), \(h\) must satisfy \(\mathbb{P}_{(\mathbf{x},y)\sim D_{\mathcal{X}\mathcal{Y}}}[h(\mathbf{x})\neq y] \leq\psi(\operatorname{opt}(\mathcal{C},D_{\mathcal{X}\mathcal{Y}}))+\epsilon\).
2. (Completeness.) Whenever \(D_{\mathcal{X}\mathcal{Y}}\) truly has marginal \(D^{*}\), \(A\) must accept with probability at least \(1-\delta\) (over the randomness of \(S\) and \(A\)).
Testing properties of strongly log-concave distributions
In this section we define the testers that we will need for our algorithm. All the proofs from this section can be found in Appendix B. We begin with a structural lemma that strengthens the key structural result of [13], stated here as Proposition A.3. It states that even when we restrict an isotropic strongly log-concave \(D^{*}\) to a band around the origin, moment matching suffices to fool functions of halfspaces whose weights are orthogonal to the normal of the band.
**Proposition 3.1**.: _Let \(D^{*}\) be an isotropic strongly log-concave distribution. Let \(\mathbf{w}\in\mathbb{S}^{d-1}\) be any fixed direction. Let \(p\) be a constant. Let \(f:\mathbb{R}^{d}\to\mathbb{R}\) be a function of \(p\) halfspaces of the form in Eq. (A.2), with the additional restriction that its weights \(\mathbf{v}^{i}\in\mathbb{S}^{d-1}\) satisfy \(\langle\mathbf{v}^{i},\mathbf{w}\rangle=0\) for all \(i\). For some \(\sigma\in[0,1]\), let \(T\) denote the band \(\{\mathbf{x}:|\langle\mathbf{w},\mathbf{x}\rangle|\leq\sigma\}\). Let \(D\) be any distribution such that \(D_{|T}\) matches moments of degree at most \(k=\widetilde{O}(1/\tau^{2})\) with \(D^{*}_{|T}\) up to an additive slack of \(d^{-\widetilde{O}(k)}\). Then \(|\mathbb{E}_{D^{*}}[f\mid T]-\mathbb{E}_{D}[f\mid T]|\leq\tau\)._
We now describe some of the testers that we use. First, we need a tester that ensures that the distribution is concentrated in every single direction. More formally, the tester checks that the moments of the distribution along any direction are small.
**Proposition 3.2**.: _For any isotropic strongly log-concave \(D^{*}\), there exists some constants \(C_{1}\) and a tester \(T_{1}\) that takes a set \(S\subseteq\mathbb{R}^{d}\times\{\pm 1\}\), an even \(k\in\mathbb{N}\), a parameter \(\delta\in(0,1)\) and runs and in time \(\operatorname{poly}\left(d^{k},|S|,\log\frac{1}{\delta}\right)\). Let \(D\) denote the uniform distribution over \(S\). If \(T_{1}\) accepts, then for any \(\mathbf{v}\in\mathbb{S}^{d-1}\)_
\[\operatorname*{\mathbb{E}}_{(\mathbf{x},\mathbf{y})\sim D}[(\langle\mathbf{v}, \mathbf{x}\rangle)^{k}]\leq(C_{1}k)^{k/2}. \tag{3.1}\]
_Moreover, if \(S\) is obtained by taking at least \(\left(d^{k},\left(\log\frac{1}{\delta}\right)^{k}\right)^{C_{1}}\) i.i.d. samples from a distribution whose \(\mathbb{R}^{d}\)-marginal is \(D^{*}\), the test \(T_{1}\) passes with probability at least \(1-\delta\)._
Secondly, we will use a tester that makes sure the distribution is not concentrated too close to a specific hyperplane. This is one of the properties we will need to use in order to employ the localization technique of [1].
**Proposition 3.3**.: _For any isotropic strongly log-concave \(D^{*}\), there exist some constants \(C_{2},C_{3}\) and a tester \(T_{2}\) that takes a set \(S\subseteq\mathbb{R}^{d}\times\{\pm 1\}\) a vector \(\mathbf{w}\in\mathbb{S}^{d-1}\), parameters \(\sigma,\delta\in(0,1)\) and runs in time \(\operatorname{poly}\left(d,|S|,\log\frac{1}{\delta}\right)\). Let \(D\) denote the uniform distribution over \(S\). If \(T_{2}\) accepts, then_
\[\operatorname*{\mathbb{P}}_{(\mathbf{x},\mathbf{y})\sim D}[|\langle\mathbf{w},\mathbf{x}\rangle|\leq\sigma]\in(C_{2}\sigma,C_{3}\sigma). \tag{3.2}\]
_Moreover, if \(S\) is obtained by taking at least \(\frac{100}{K_{1}\sigma^{2}}\log\left(\frac{1}{\delta}\right)\) i.i.d. samples from a distribution whose \(\mathbb{R}^{d}\)-marginal is \(D^{*}\), the test \(T_{2}\) passes with probability at least \(1-\delta\)._
Finally, in order to use the localization idea of [1] in a manner similar to [10], we need to make sure that the distribution is well-behaved also within a band around to a certain hyperplane. The main property of the distribution that we establish is that functions of constantly many halfspaces have expectations very close to what they would be under our distributional assumption. As we show later in this work, having the aforementioned property allows us to derive many other properties that strongly log-concave distributions have, including many of the key properties that make the localization technique successful.
**Proposition 3.4**.: _For any isotropic strongly log-concave \(D^{*}\) and a constant \(C_{4}\), there exists a constant \(C_{5}\) and a tester \(T_{3}\) that takes a set \(S\subseteq\mathbb{R}^{d}\times\{\pm 1\}\) a vector \(\mathbf{w}\in\mathbb{S}^{d-1}\), parameters \(\sigma,\tau\,\delta\in(0,1)\) and runs in time \(\operatorname{poly}\left(d^{\widetilde{O}(\frac{1}{\tau})},\frac{1}{\sigma}, |S|,\log\frac{1}{\delta}\right)\). Let \(D\) denote the uniform distribution over \(S\), let \(T\) denote the band \(\{\mathbf{x}:|\langle\mathbf{w},\mathbf{x}\rangle|\leq\sigma\}\) and let \(\mathcal{F}_{\mathbf{w}}\) denote the set \(\{\pm 1\}\)-valued functions of \(C_{4}\) halfspaces whose weight vectors are orthogonal to \(\mathbf{w}\). If \(T_{3}\) accepts, then_
\[\max_{f\in\mathcal{F}_{\mathbf{w}}}\left|\operatorname*{\mathbb{E }}_{\mathbf{x}\sim D^{*}}[f(\mathbf{x})\mid\mathbf{x}\in T]-\operatorname*{ \mathbb{E}}_{(\mathbf{x},\mathbf{y})\sim D}[f(\mathbf{x})\mid\mathbf{x}\in T ]\right|\leq\tau, \tag{3.3}\] \[\max_{\mathbf{v}\in\mathbb{S}^{d-1}:\,\langle\mathbf{v},\mathbf{w }\rangle=0}\left|\operatorname*{\mathbb{E}}_{\mathbf{x}\sim D^{*}}[(\langle \mathbf{v},\mathbf{x}\rangle)^{2}\mid\mathbf{x}\in T]-\operatorname*{\mathbb{E }}_{(\mathbf{x},\mathbf{y})\sim D}[(\langle\mathbf{v},\mathbf{x}\rangle)^{2} \mid\mathbf{x}\in T]\right|\leq\tau. \tag{3.4}\]_Moreover, if \(S\) is obtained by taking at least \(\left(\frac{1}{\tau}\cdot\frac{1}{\sigma}\cdot d^{\frac{1}{\tau^{2}}}\log^{C_{5}}( \frac{1}{\tau})\cdot\left(\log\frac{1}{\delta}\right)^{\frac{1}{\tau^{2}}}\log^{ C_{5}}(\frac{1}{\tau})\right)^{C_{5}}\) i.i.d. samples from a distribution whose \(\mathbb{R}^{d}\)-marginal is \(D^{*}\), the test \(T_{3}\) passes with probability at least \(1-\delta\)._
## 4 Testably learning halfspaces with Massart noise
In this section we prove that we can testably learn halfspaces with Massart noise with respect to isotropic strongly log-concave distributions (see Definition A.1).
**Theorem 4.1** (Tester-Learner for Halfspaces with Massart Noise).: _Let \(D_{\mathcal{X}\mathcal{Y}}\) be a distribution over \(\mathbb{R}^{d}\times\{\pm 1\}\) and let \(D^{*}\) be an isotropic strongly log-concave distribution over \(\mathbb{R}^{d}\). Let \(\mathcal{C}\) be the class of origin centered halfspaces in \(\mathbb{R}^{d}\). Then, for any \(\eta<1/2\), \(\epsilon>0\) and \(\delta\in(0,1)\), there exists an algorithm (Algorithm 1) that testably learns \(\mathcal{C}\) w.r.t. \(D^{*}\) up to excess error \(\epsilon\) and error probability at most \(\delta\) in the Massart noise model with rate at most \(\eta\), using time and a number of samples from \(D_{\mathcal{X}\mathcal{Y}}\) that are polynomial in \(d,1/\epsilon,\frac{1}{1-2\eta}\) and \(\log(1/\delta)\)._
```
0: Training sets \(S_{1},S_{2}\), parameters \(\sigma\), \(\delta\), \(\alpha\)
0: A near-optimal weight vector \(\mathbf{w}\), or rejection
0: Run PSGD on the empirical loss \(\mathcal{L}_{\sigma}\) over \(S_{1}\) to get a list \(L\) of candidate vectors.
0: Test whether \(L\) contains an \(\alpha\)-approximate stationary point \(\mathbf{w}\) of the empirical loss \(\mathcal{L}_{\sigma}\) over \(S_{2}\).
0: Reject if no such \(\mathbf{w}\) exists.
0:for each candidate \(\mathbf{w}^{\prime}\) in \(\{\mathbf{w},-\mathbf{w}\}\)do
0: Let \(B^{\prime}_{\mathbf{w}}(\sigma)\) denote the band \(\{\mathbf{x}:|\langle\mathbf{w}^{\prime},\mathbf{x}\rangle|\leq\sigma\}\). Let \(\mathcal{F}^{\prime}_{\mathbf{w}}\) denote the class of functions of at most two halfspaces with weights orthogonal to \(\mathbf{w}^{\prime}\).
0: Let \(\delta^{\prime}=\Theta(\delta)\).
0: Run \(T_{1}(S_{2},k=2,\delta)\) to verify that the empirical marginal is approximately isotropic. Reject if \(T_{1}\) rejects.
0: Run \(T_{2}(S_{2},\mathbf{w}^{\prime},\sigma,\delta^{\prime})\) to verify that \(\mathbb{P}_{S}[B^{\prime}_{\mathbf{w}}]=\Theta(\sigma)\). Reject if \(T_{2}\) rejects.
0: Run \(T_{3}(S_{2},\mathbf{w}^{\prime},\sigma=\sigma/6,\tau,\delta^{\prime})\) and \(T_{3}(S,\mathbf{w}^{\prime},\sigma=\sigma/2,\tau,\delta^{\prime})\) for a suitable constant \(\tau\) to verify that the empirical distribution conditioned on \(B^{\prime}_{\mathbf{w}}(\sigma/6)\) and \(B^{\prime}_{\mathbf{w}}(\sigma/2)\) fools \(\mathcal{F}^{\prime}_{\mathbf{w}}\) up to \(\tau\). Reject if \(T_{3}\) rejects.
0: Estimate the empirical error of \(\mathbf{w}^{\prime}\) on \(S\).
0: If all tests have accepted, output \(\mathbf{w}^{\prime}\in\{\mathbf{w},-\mathbf{w}\}\) with the best empirical error.
**Algorithm 1**Tester-learner for halfspaces
To show our result, we revisit the approach of [14] for learning halfspaces with Massart noise under well-behaved distributions. Their result is based on the idea of minimizing a surrogate loss that is non convex, but whose stationary points correspond to halfspaces with low error. They also require that their surrogate loss is sufficiently smooth, so that one can find a stationary point efficiently. While the distributional assumptions that are used to demonstrate that stationary points of the surrogate loss can be discovered efficiently are mild, the main technical lemma, which demostrates that any stationary point suffices, requires assumptions that are not necessarily testable. We establish a label-dependent approach for testing, making use of tests that are applied during the course of our algorithm.
We consider a slightly different surrogate loss than the one used in [14]. In particular, for \(\sigma>0\), we let
\[\mathcal{L}_{\sigma}(\mathbf{w})=\underset{(\mathbf{x},y)\sim D_{\mathcal{X} \mathcal{Y}}}{\mathbb{E}}\bigg{[}\ell_{\sigma}\bigg{(}-y\frac{\langle\mathbf{ w},\mathbf{x}\rangle}{\|\mathbf{w}\|_{2}}\bigg{)}\bigg{]}, \tag{4.1}\]
where \(\ell_{\sigma}:\mathbb{R}\rightarrow[0,1]\) is a smooth approximation to the ramp function with the properties described in Proposition C.1 (see Appendix C), obtained using a piecewise polynomial of degree \(3\). Unlike the standard logistic function, our loss function has derivative exactly \(0\) away from the origin (for \(|t|>\sigma/2\)). This makes the analysis of the gradient of \(\mathcal{L}_{\sigma}\) easier, since the contribution from points lying outside a certain band is exactly \(0\).
The smoothness allows us to run PSGD to obtain stationary points efficiently, and we now state the convergence lemma we need.
**Proposition 4.2** (PSGD Convergence, Lemmas 4.2 and B.2 in [41]).: _Let \(\mathcal{L}_{\sigma}\) be as in Equation (4.1) with \(\sigma\in(0,1]\), \(\ell_{\sigma}\) as described in Proposition C.1 and \(D_{\mathcal{X}\mathcal{Y}}\), such that the marginal \(D_{\mathcal{X}}\) on \(\mathbb{R}^{d}\) satisfies Property (3.1) for \(k=2\). Then, for any \(\epsilon>0\) and \(\delta\in(0,1)\), there is an algorithm whose time and sample complexity is \(O(\frac{d}{\sigma^{4}}+\frac{\log(1/\delta)}{\epsilon^{4}\sigma^{4}})\), which, having access to samples from \(D_{\mathcal{X}\mathcal{Y}}\), outputs a list \(L\) of vectors \(\mathbf{w}\in\mathbb{S}^{d-1}\) with \(|L|=O(\frac{d}{\sigma^{4}}+\frac{\log(1/\delta)}{\epsilon^{4}\sigma^{4}})\) so that there exists \(\mathbf{w}\in L\) with_
\[\|\nabla_{\mathbf{w}}\mathcal{L}_{\sigma}(\mathbf{w})\|_{2}\leq\epsilon\,,\text { with probability at least }1-\delta\,.\]
_In particular, the algorithm performs Stochastic Gradient Descent on \(\mathcal{L}_{\sigma}\) Projected on \(\mathbb{S}^{d-1}\) (PSGD)._
It now suffices to show that, upon performing PSGD on \(\mathcal{L}_{\sigma}\), for some appropriate choice of \(\sigma\), we acquire a list of vectors that testably contain a vector which is approximately optimal. We first prove the following lemma, whose distributional assumptions are relaxed compared to the corresponding structural Lemma 3.2 of [41]. In particular, instead of requiring the marginal distribution to be "well-behaved", we assume that the quantities of interest (for the purposes of our proof) have expected values under the true marginal distribution that are close, up to multiplicative factors, to their expected values under some "well-behaved" (in fact, strongly log-concave) distribution. While some of the quantities of interest have values that are miniscule and estimating them up to multiplicative factors could be too costly, it turns out that the source of their vanishing scaling can be completely attributed to factors of the form \(\mathbb{P}[|\langle\mathbf{w},\mathbf{x}\rangle|\leq\sigma]\) (where \(\sigma\) is small), which, due to standard concentration arguments, can be approximated up to multiplicative factors, given \(\mathbf{w}\in\mathbb{S}^{d-1}\) and \(\sigma>0\) (see Proposition 3.3). As a result, we may estimate the remaining factors up to sufficiently small additive constants (see Proposition 3.4) to get multiplicative overall closeness to the "well behaved" baseline. We defer the proof of the following Lemma to Appendix C.1.
**Lemma 4.3**.: _Let \(\mathcal{L}_{\sigma}\) be as in Equation (4.1) with \(\sigma\in(0,1]\), \(\ell_{\sigma}\) as described in Proposition C.1, let \(\mathbf{w}\in\mathbb{S}^{d-1}\) and consider \(D_{\mathcal{X}\mathcal{Y}}\) such that the marginal \(D_{\mathcal{X}}\) on \(\mathbb{R}^{d}\) satisfies Properties (3.2) and (3.3) for \(C_{4}=2\) and accuracy \(\tau\). Let \(\mathbf{w}^{*}\in\mathbb{S}^{d-1}\) define an optimum halfspace and let \(\eta<1/2\) be an upper bound on the rate of the Massart noise. Then, there are constants \(c_{1},c_{2},c_{3}>0\) such that if \(\|\nabla_{\mathbf{w}}\mathcal{L}_{\sigma}(\mathbf{w})\|_{2}<c_{1}(1-2\eta)\) and \(\tau\leq c_{2}\), then_
\[\angle(\mathbf{w},\mathbf{w}^{*})\leq\frac{c_{3}}{1-2\eta}\cdot\sigma\quad \text{or}\quad\angle(-\mathbf{w},\mathbf{w}^{*})\leq\frac{c_{3}}{1-2\eta}\cdot\sigma\]
Combining Proposition 4.2 and Lemma 4.3, we get that for any choice of the parameter \(\sigma\in(0,1]\), by running PSGD on \(\mathcal{L}_{\sigma}\), we can construct a list of vectors of polynomial size (in all relevant parameters) that testably contains a vector that is close to the optimum weight vector. In order to link the zero-one loss to the angular similarity between a weight vector and the optimum vector, we use the following Proposition (for the proof, see Appendix C.2).
**Proposition 4.4**.: _Let \(D_{\mathcal{X}\mathcal{Y}}\) be a distribution over \(\mathbb{R}^{d}\times\{\pm 1\}\), \(\mathbf{w}^{*}\in\arg\min_{\mathbf{w}\in\mathbb{S}^{d-1}}\mathbb{P}_{D_{ \mathcal{X}\mathcal{Y}}}[y\neq\operatorname{sign}(\langle\mathbf{w},\mathbf{x }\rangle)]\) and \(\mathbf{w}\in\mathbb{S}^{d-1}\). Then, for any \(\theta\geq\angle(\mathbf{w},\mathbf{w}^{*})\), \(\theta\in[0,\pi/4]\), if the marginal \(D_{\mathcal{X}}\) on \(\mathbb{R}^{d}\) satisfies Property (3.1) for \(C_{1}>0\) and some even \(k\in\mathbb{N}\) and Property (3.2) with \(\sigma\) set to \((C_{1}k)^{\frac{k}{2(k+1)}}\cdot(\tan\theta)^{\frac{k}{k+1}}\), then, there exists a constant \(c>0\) such that the following is true._
\[\operatorname*{\mathbb{P}}_{D_{\mathcal{X}\mathcal{Y}}}[y\neq\operatorname{sign }(\langle\mathbf{w},\mathbf{x}\rangle)]\leq\mathsf{opt}+c\cdot k^{1/2}\cdot \theta^{1-\frac{1}{k+1}}\,.\]
We are now ready to prove Theorem 4.1.
Proof of Theorem 4.1.: Throughout the proof we consider \(\delta^{\prime}\) to be a sufficiently small polynomial in all the relevant parameters. Each of the failure events will have probability at least \(\delta^{\prime}\) and their number will be polynomial in all the relevant parameters, so by the union bound, we may pick \(\delta^{\prime}\) so that the probability of failure is at most \(\delta\).
The algorithm we run is Algorithm 1, with appropriate selection of parameters and given samples \(S_{1}\), \(S_{2}\), each of which are sufficiently large sets of independent samples from the true unknown distribution \(D_{\mathcal{X}\mathcal{Y}}\). For some \(\sigma\in(0,1]\) to be defined later, we run PSGD on the empirical loss \(\mathcal{L}_{\sigma}\) over \(S_{1}\) as described in Proposition 4.2 with \(\epsilon=c_{1}(1-2\eta)\sigma/4\), where \(c_{1}\) is given by Lemma 4.3. By Proposition 4.2, we get a list \(L\) of vectors \(\mathbf{w}\in\mathbb{S}^{d-1}\) with \(|L|=\operatorname{poly}(d,1/\sigma)\) such that there exists \(\mathbf{w}\in L\) with \(\|\nabla_{\mathbf{w}}\mathcal{L}_{\sigma}(\mathbf{w})\|_{2}<\frac{1}{2}c_{1}( 1-2\eta)\) under the true distribution, if the marginal is isotropic.
Having acquired the list \(L\) using sample \(S_{1}\), we use the independent samples in \(S_{2}\) to test whether \(L\) contains an approximately stationary point of the empirical loss on \(S_{2}\). If this is not the case, then we may safely reject: for large enough \(|S_{1}|\), if the distribution is indeed isotropic strongly logconcave, there is an approximate stationary of the population loss in \(L\) and if \(|S_{2}|\) is large enough, the gradient of the empirical loss on \(S_{2}\) will be close to the gradient of the population loss on each of the elements of \(L\), due to appropriate concentration bounds for log-concave distributions as well as the fact that the elements of \(L\) are independent from \(S_{2}\). For the following, let \(\mathbf{w}\) be a point such that \(\|\nabla_{\mathbf{w}}\mathcal{L}_{\sigma}(\mathbf{w})\|_{2}<c_{1}(1-2\eta)\) under theempirical distribution over \(S_{2}\)
In Lemma 4.3 and Proposition 4.4 we have identified certain properties of the marginal distribution that are sufficient for our purposes, given that \(L\) contains an approximately stationary point of the empirical (surrogate) loss on \(S_{2}\). Our testers \(T_{1},T_{2},T_{3}\) verify that these properties hold for the empirical marginal over our sample \(S_{2}\), and it will be convenient to analyze the optimality of our algorithm purely over \(S_{2}\). In particular, we will need to require that \(|S_{2}|\) is sufficiently large, so that when the true marginal is indeed the target \(D^{*}\), our testers succeed with high probability (for the corresponding sample complexity, see Propositions 3.2, 3.3 and 3.4). Moreover, by standard generalization theory, since the VC dimension of halfspaces is only \(O(d)\) and for us \(|S_{2}|\) is a large \(\operatorname{poly}(d,1/\epsilon)\), both the error of our final output and the optimal error over \(S_{2}\) will be close to that over \(D_{\mathcal{X}\mathcal{Y}}\). So in what follows, we will abuse notation and refer to the uniform distribution over \(S_{2}\) as \(D_{\mathcal{X}\mathcal{Y}}\) and the optimal error over \(S_{2}\) simply as opt.
We proceed with some basic tests. Throughout the rest of the algorithm, whenever a tester fails, we reject, otherwise we proceed. First, we run testers \(T_{2}\) with inputs \((\mathbf{w},\sigma/2,\delta^{\prime})\) and \((\mathbf{w},\sigma/6,\delta^{\prime})\) (Proposition 3.3) and \(T_{3}\) with inputs \((\mathbf{w},\sigma/2,c_{2},\delta^{\prime})\) and with \((\mathbf{w},\sigma/6,c_{2},\delta^{\prime})\) (Proposition 3.4, \(c_{2}\) as defined in Lemma 4.3). This ensures that for the approximate stationary point \(\mathbf{w}\) of the \(\mathcal{L}_{\sigma}\), the probability within the band \(B_{\mathbf{w}}(\sigma/2)=\{\mathbf{x}:|\langle\mathbf{w},\mathbf{x}\rangle| \leq\sigma/2\}\) is \(\Theta(\sigma)\) (and similarly for \(B_{\mathbf{w}}(\sigma/6)\)) and moreover that our marginal conditioned on each of the bands fools (up to an additive constant) functions of halfspaces with weights orthogonal to \(\mathbf{w}\). As a result, we may apply Lemma 4.3 to \(\mathbf{w}\) and form a list of \(2\) vectors \(\{\mathbf{w},-\mathbf{w}\}\) which contains some \(\mathbf{w}^{\prime}\) with \(\measuredangle(\mathbf{w}^{\prime},\mathbf{w}^{*})\leq c_{2}\sigma/(1-2\eta)\) (where \(c_{3}\) is as defined in Lemma 4.3).
Figure 1: Critical regions in the proofs of main structural lemmas (Lemmas 4.3, 5.2). We analyze the contributions of the regions labeled \(A_{1},A_{2}\) to the quantities \(A_{1},A_{2}\) in the proofs. Specifically, the regions \(A_{1}\) (which have height \(\sigma/3\) so that the value of \(\ell^{\prime}_{\sigma}(\mathbf{x}_{\mathbf{w}})\) for any \(\mathbf{x}\) in these regions is exactly \(1/\sigma\), by Proposition C.1) form a subset of the region \(\mathcal{G}\), and their probability mass under \(D_{\mathcal{X}}\) is (up to a multiplicative factor) a lower bound on the quantity \(A_{1}\) (see Eq (C.3)). Similarly, the region \(A_{2}\) is a subset of the intersection of \(\mathcal{G}^{c}\) with the band of height \(\sigma\), and has probability mass that is (up to a multiplicative factor) an upper bound on the quantity \(A_{2}\) (see Eq (C.4)).
We run \(T_{1}\) (Proposition 3.2) with \(k=2\) to verify that the marginals are approximately isotropic and we use \(T_{2}\) once again, with appropriate parameters for each \(\mathbf{w}\) and its negation, to apply Proposition 4.4 and get that \(\{\mathbf{w},-\mathbf{w}\}\) contains a vector \(\mathbf{w}^{\prime}\) with
\[\operatorname*{\mathbb{P}}_{D_{\mathcal{X}\mathcal{Y}}}[y\neq\operatorname*{ sign}(\langle\mathbf{w}^{\prime},\mathbf{x}\rangle)]\leq\mathsf{opt}+c\cdot\theta^{2/3},\]
where \(\measuredangle(\mathbf{w}^{\prime},\mathbf{w}^{*})\leq\theta:=c_{2}\sigma/ \sqrt{1-2\eta}\). By picking \(\sigma=\Theta(\epsilon^{3/2}(1-2\eta))\), we get
\[\operatorname*{\mathbb{P}}_{D_{\mathcal{X}\mathcal{Y}}}[y\neq\operatorname*{ sign}(\langle\mathbf{w}^{\prime},\mathbf{x}\rangle)]\leq\mathsf{opt}+\epsilon\,.\]
However, we do not know which of the weight vectors in \(\{\mathbf{w},-\mathbf{w}\}\) is the one guaranteed to achieve small error. In order to discover this vector, we estimate the probability of error of each of the corresponding halfspaces (which can be done efficiently, due to Hoeffding's bound) and pick the one with the smallest error. This final step does not require any distributional assumptions and we do not need to perform any further tests.
## 5 Testably learning halfspaces in the agnostic setting
In this section, we provide our result on efficiently and testably learning halfspaces in the agnostic setting with respect to isotropic strongly log-concave target marginals. We defer the proofs to Appendix D. The algorithm we use is once more Algorithm 1, but we call it multiple times for different choices of the parameter \(\sigma\), reject if any call rejects and output the vector that achieved the minimum empirical error overall, otherwise. Also, the tester \(T_{1}\) is called for a general \(k\) (not necessarily \(k=2\)).
**Theorem 5.1** (Efficient Tester-Learner for Halfspaces in the Agnostic Setting).: _Let \(D_{\mathcal{X}\mathcal{Y}}\) be a distribution over \(\mathbb{R}^{d}\times\{\pm 1\}\) and let \(D^{*}\) be a strongly log-concave distribution over \(\mathbb{R}^{d}\) (Definition A.1). Let \(\mathcal{C}\) be the class of origin centered halfspaces in \(\mathbb{R}^{d}\). Then, for any even \(k\in\mathbb{N}\), any \(\epsilon>0\) and \(\delta\in(0,1)\), there exists an algorithm that agnostically testably learns \(\mathcal{C}\) w.r.t. \(D^{*}\) up to error \(O(k^{1/2}\cdot\mathsf{opt}^{1-\frac{1}{k+1}})+\epsilon\), where \(\mathsf{opt}=\min_{\mathbf{w}\in\mathbb{S}^{d-1}}\operatorname*{\mathbb{P}}_{ D_{\mathcal{X}\mathcal{Y}}}[y\neq\operatorname*{sign}(\langle\mathbf{w}, \mathbf{x}\rangle)]\), and error probability at most \(\delta\), using time and a number of samples from \(D_{\mathcal{X}\mathcal{Y}}\) that are polynomial in \(d^{\tilde{O}(k)},(1/\epsilon)^{\tilde{O}(k)}\) and \((\log(1/\delta))^{O(k)}\)._
_In particular, by picking some appropriate \(k\leq\log^{2}d\), we obtain error \(\tilde{O}(\mathsf{opt})+\epsilon\) in quasipolynomial time and sample complexity, i.e. \(\operatorname*{poly}(2^{\operatorname*{polylog}d},(\frac{1}{\epsilon})^{ \operatorname*{polylog}d})\)._
To prove Theorem 5.1, we may follow a similar approach as the one we used for the case of Massart noise. However, in this case, the main structural lemma regarding the quality of the stationary points involves an additional requirement about the parameter \(\sigma\). In particular, \(\sigma\) cannot be arbitrarily small with respect to the error of the optimum halfspace, because, in this case, there is no upper bound on the amount of noise that any specific point \(\mathbf{x}\) might be associated with. As a result, picking \(\sigma\) to be arbitrarily small would imply that our algorithm only considers points that lie within a region that has arbitrarily small probability and can hence be completely corrupted with the adversarial opt budget. On the other hand, the polynomial slackness that the testability requirement introduces (through Proposition 4.4) between the error we achieve and the angular distance guarantee we can get via finding a stationary point of \(\mathcal{L}_{\sigma}\) (which is now coupled with opt), appears to the exponent of the guarantee we achieve in Theorem 5.1.
**Lemma 5.2**.: _Let \(\mathcal{L}_{\sigma}\) be as in Equation (4.1) with \(\sigma\in(0,1]\), \(\ell_{\sigma}\) as described in Proposition C.1, let \(\mathbf{w}\in\mathbb{S}^{d-1}\) and consider \(D_{\mathcal{X}\mathcal{Y}}\) such that the marginal \(D_{\mathcal{X}}\) on \(\mathbb{R}^{d}\) satisfies Properties (3.2), (3.3) and (3.4) for \(\mathbf{w}\) with \(\mathcal{C}_{4}=2\) and accuracy parameter \(\tau\). Let \(\mathsf{opt}\) be the minimum error achieved by some origin centered halfspace and let \(\mathbf{w}^{*}\in\mathbb{S}^{d-1}\) be a corresponding vector. Then, there are constants \(c_{1},c_{2},c_{3},c_{4}>0\) such that if \(\mathsf{opt}\leq c_{1}\sigma\), \(\|\nabla_{\mathbf{w}}\mathcal{L}_{\sigma}(\mathbf{w})\|_{2}<c_{2}\), and \(\tau\leq c_{3}\) then_
\[\measuredangle(\mathbf{w},\mathbf{w}^{*})\leq c_{4}\sigma\quad\text{or}\quad \measuredangle(-\mathbf{w},\mathbf{w}^{*})\leq c_{4}\sigma.\]
We obtain our main result for Gaussian target marginals by refining Proposition 4.4 for the specific case when the target marginal distribution \(D^{*}\) is the standard multivariate Gaussian distribution. The algorithm for the Gaussian case is similar to the one of Theorem 5.1, but it runs different tests for the improved version (see Proposition D.1) of Proposition 4.4.
**Theorem 5.3**.: _In Theorem 5.1, if \(D^{*}\) is the standard Gaussian in \(d\) dimensions, we obtain error \(O(\mathsf{opt})+\epsilon\) in polynomial time and sample complexity, i.e. \(\operatorname*{poly}(d,1/\epsilon,\log(1/\delta))\)._
## References
* [ABHU15] Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, and Ruth Urner. Efficient learning of linear separators under bounded noise. In _Conference on Learning Theory_, pages 167-190. PMLR, 2015.
* [ABHZ16] Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, and Hongyang Zhang. Learning and 1-bit compressed sensing under asymmetric noise. In _Conference on Learning Theory_, pages 152-192. PMLR, 2016.
* [ABL17] Pranjal Awasthi, Maria Florina Balcan, and Philip M Long. The power of localization for efficiently learning linear separators with noise. _Journal of the ACM (JACM)_, 63(6):1-27, 2017.
* [BH21] Maria-Florina Balcan and Nika Haghtalab. Noise in classification. _Beyond the Worst-Case Analysis of Algorithms_, page 361, 2021.
* [BZ17] Maria-Florina F Balcan and Hongyang Zhang. Sample and computationally efficient learning algorithms under s-concave distributions. _Advances in Neural Information Processing Systems_, 30, 2017.
* [Dan15] Amit Daniely. A ptas for agnostically learning halfspaces. In _Conference on Learning Theory_, pages 484-502. PMLR, 2015.
* [DGT19] Ilias Diakonikolas, Themis Gouleakis, and Christos Tzamos. Distribution-independent pac learning of halfspaces with massart noise. _Advances in Neural Information Processing Systems_, 32, 2019.
* [DK22] Ilias Diakonikolas and Daniel Kane. Near-optimal statistical query hardness of learning halfspaces with massart noise. In _Conference on Learning Theory_, pages 4258-4282. PMLR, 2022.
* [DKK\({}^{+}\)22] Ilias Diakonikolas, Daniel M Kane, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. Learning general halfspaces with general massart noise under the gaussian distribution. In _Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing_, pages 874-885, 2022.
* [DKMR22] Ilias Diakonikolas, Daniel Kane, Pasin Manurangsi, and Lisheng Ren. Cryptographic hardness of learning halfspaces with massart noise. In _Advances in Neural Information Processing Systems_, 2022.
* [DKPZ21] Ilias Diakonikolas, Daniel M Kane, Thanasis Pittas, and Nikos Zarifis. The optimality of polynomial regression for agnostic learning under gaussian marginals in the sq model. In _Conference on Learning Theory_, pages 1552-1584. PMLR, 2021.
* [DKS18] Ilias Diakonikolas, Daniel M Kane, and Alistair Stewart. Learning geometric concepts with nasty noise. In _Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing_, pages 1061-1073, 2018.
* [DKT21] Ilias Diakonikolas, Daniel Kane, and Christos Tzamos. Forster decomposition and learning halfspaces with noise. _Advances in Neural Information Processing Systems_, 34:7732-7744, 2021.
* [DKTZ20a] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. Learning halfspaces with massart noise under structured distributions. In _Conference on Learning Theory_, pages 1486-1513. PMLR, 2020.
* [DKTZ20b] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. Non-convex sgd learns halfspaces with adversarial label noise. _Advances in Neural Information Processing Systems_, 33:18540-18549, 2020.
* [DKTZ22] Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis. Learning general halfspaces with adversarial label noise via online gradient descent. In _International Conference on Machine Learning_, pages 5118-5141. PMLR, 2022.
* [DKZ20] Ilias Diakonikolas, Daniel Kane, and Nikos Zarifis. Near-optimal sq lower bounds for agnostically learning halfspaces and relus under gaussian marginals. _Advances in Neural Information Processing Systems_, 33:13586-13596, 2020.
* [DTK22] Ilias Diakonikolas, Christos Tzamos, and Daniel M Kane. A strongly polynomial algorithm for approximate forster transforms and its application to halfspace learning. _arXiv preprint arXiv:2212.03008_, 2022.
* [GGK20] Surbhi Goel, Aravind Gollakota, and Adam Klivans. Statistical-query lower bounds via functional gradients. _Advances in Neural Information Processing Systems_, 33:2147-2158, 2020.
* [GKK23] Aravind Gollakota, Adam R Klivans, and Pravesh K Kothari. A moment-matching approach to testable learning and a new characterization of rademacher complexity. _Proceedings of the fifty-fifth annual ACM Symposium on Theory of Computing_, 2023. To appear. (document), 1, 1.2, 3.
* [KKMS08] Adam Tauman Kalai, Adam R Klivans, Yishay Mansour, and Rocco A Servedio. Agnostically learning halfspaces. _SIAM Journal on Computing_, 37(6):1777-1805, 2008.
* [KLS09] Adam R Klivans, Philip M Long, and Rocco A Servedio. Learning halfspaces with malicious noise. _Journal of Machine Learning Research_, 10(12), 2009.
* [RV23] Ronitt Rubinfeld and Arsen Vasilyan. Testing distributional assumptions of learning algorithms. _Proceedings of the fifty-fifth annual ACM Symposium on Theory of Computing_, 2023. To appear. (document), 1, 1, 2, 2.
* [SW14] Adrien Saumard and Jon A Wellner. Log-concavity and strong log-concavity: a review. _Statistics surveys_, 8:45, 2014.
* [YZ17] Songbai Yan and Chicheng Zhang. Revisiting perceptron: Efficient and label-optimal learning of halfspaces. _Advances in Neural Information Processing Systems_, 30, 2017.
* [Zha18] Chicheng Zhang. Efficient active learning of sparse halfspaces. In _Conference on Learning Theory_, pages 1856-1880. PMLR, 2018.
* [ZL21] Chicheng Zhang and Yinan Li. Improved algorithms for efficient active learning halfspaces with massart and tsybakov noise. In _Conference on Learning Theory_, pages 4526-4527. PMLR, 2021.
* [ZSA20] Chicheng Zhang, Jie Shen, and Pranjal Awasthi. Efficient active learning of sparse halfspaces with arbitrary bounded noise. _Advances in Neural Information Processing Systems_, 33:7184-7197, 2020. | ## Review
### Summary
This paper presents an efficient algorithm for learning halfspaces within the testable learning framework proposed by Rubinfeld and Vasilyan. The algorithm operates under the assumption of a target distribution that is either Gaussian or strongly log-concave, and it addresses both Massart and agnostic noise scenarios. The main contributions include a two-fold algorithmic guarantee: achieving an error margin of opt+ε for Massart noise and O(opt)+ε for the agnostic setting. The methodology utilizes a non-convex optimization approach and incorporates novel ideas such as moment matching and a three-stage testing procedure. Overall, the work significantly advances the understanding of testable learning in halfspace scenarios and is presented with clarity and thoroughness.
### Strengths
- Extends the testable learning framework to a fundamental machine learning problem, learning halfspaces.
- Presents novel algorithmic techniques that efficiently handle both Massart and agnostic noise.
- Results are strong and technically solid, contributing meaningful advancements in the field.
- Well-structured and clearly written, making it accessible and understandable.
- Practical implications of the algorithm are promising, with potential for implementation and real-world applications.
### Weaknesses
- The hypothesis class is limited to homogeneous halfspaces without bias.
- In the agnostic scenario, the error bound may exceed the optimal error by a constant factor.
- The practical applicability of the testable learning model is not fully explored.
- Some minor typographical errors and missing references were noted.
### Questions
- What are the technical hurdles that prevent the current approach from handling non-homogeneous halfspaces?
- Is there an inherent difficulty that explains the weaker guarantees in agnostic testable learning compared to Massart?
- Can this approach be adapted to design tester-learners for classes beyond halfspaces?
- Did the authors consider the Tsybakov noise model as a potential area for future work?
### Soundness
**Score:** 3
**Description:** 3 = good: The results are generally strong and the methodology sound, but there are limitations in the assumptions made about the noise models and hypothesis classes.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-written with clear explanations, though some minor typographical issues detract from the overall presentation.
### Contribution
**Score:** 3
**Description:** 3 = good: The paper provides a solid contribution to the field of testable learning, though its impact in broader applications remains to be fully understood.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements: The paper is technically solid with significant contributions, but it requires some revisions in clarity and minor errors.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper demonstrates originality and significant contributions to the field of machine learning, particularly in the context of testable learning for halfspaces. While there are minor weaknesses and questions that need addressing, the overall soundness, clarity, and practical relevance of the results warrant acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Glime: General, Stable and Local LIME Explanation
Zeren Tan
Tsinghua University
[email protected] &Yang Tian
Tsinghua University
[email protected] &Jian Li
Tsinghua University
[email protected]
###### Abstract
As black-box machine learning models grow in complexity and find applications in high-stakes scenarios, it is imperative to provide explanations for their predictions. Although Local Interpretable Model-agnostic Explanations (LIME) [22] is a widely adopted method for understanding model behaviors, it is unstable with respect to random seeds [35, 24, 3] and exhibits low local fidelity (i.e., how well the explanation approximates the model's local behaviors) [21, 16]. Our study shows that this instability problem stems from small sample weights, leading to the dominance of regularization and slow convergence. Additionally, LIME's sampling neighborhood is non-local and biased towards the reference, resulting in poor local fidelity and sensitivity to reference choice. To tackle these challenges, we introduce Glime, an enhanced framework extending LIME and unifying several prior methods. Within the Glime framework, we derive an equivalent formulation of LIME that achieves significantly faster convergence and improved stability. By employing a local and unbiased sampling distribution, Glime generates explanations with higher local fidelity compared to LIME. Glime explanations are independent of reference choice. Moreover, Glime offers users the flexibility to choose a sampling distribution based on their specific scenarios.
## 1 Introduction
Why a patient is predicted to have a brain tumor [10]? Why a credit application is rejected [11]? Why a picture is identified as an electric guitar [22]? As black-box machine learning models continue to evolve in complexity and are employed in critical applications, it is imperative to provide explanations for their predictions, making interpretability a central concern [1]. In response to this imperative, various explanation methods have been proposed [39, 26, 4, 19, 22, 28, 30], aiming to provide insights into the internal mechanisms of deep learning models.
Among the various explanation methods, Local Interpretable Model-agnostic Explanations (LIME) [22] has attracted significant attention, particularly in image classification tasks. LIME explains predictions by assigning each region within an image a weight indicating the influence of this region to the output. This methodology entails segmenting the image into super-pixels, as illustrated in the lower-left portion of Figure 0(a), introducing perturbations, and subsequently approximating the local model prediction using a linear model. The approximation is achieved by solving a weighted Ridge regression problem, which estimates the impact (i.e., weight) of each super-pixel on the classifier's output.
Nevertheless, LIME has encountered significant instability due to its random sampling procedure [35, 24, 3]. In LIME, a set of samples perturbing the original image is taken. As illustrated in Figure 0(a), LIME explanations generated with two different random seeds display notable disparities, despite using a large sample size (16384). The Jaccard index, measuring similarity between two explanations on a scale from 0 to 1 (with higher values indicating better similarity), is below 0.4. While many prior studies aim to enhance LIME's stability, some sacrifice computational time for stability [24, 40], and others may entail the risk of overfitting [35]. The evident drawback of unstableexplanations lies in their potential to mislead end-users and hinder the identification of model bugs and biases, given that LIME explanations lack consistency across different random seeds.
In addition to its inherent instability, LIME has been found to have poor local fidelity [16, 21]. As depicted in Figure 0(a), the \(R^{2}\) value for LIME on the sample image approaches zero (refer also to Figure 3(b)). This problem arises from the non-local and skewed sampling space of LIME, which is biased towards the reference. More precisely, the sampling space of LIME consists of the corner points of the hypercube defined by the explained instance and the selected reference. For instance, in the left section of Figure 0(b), only four red points fall within LIME's sampling space, yet these points are distant from \(\mathbf{x}\). As illustrated in Figure 3, the \(L_{2}\) distance between LIME samples of the input \(\mathbf{x}\) and \(\mathbf{x}\) is approximately \(0.7\|\mathbf{x}\|_{2}\) on ImageNet. Although LIME incorporates a weighting function to enforce locality, an explanation cannot be considered as local if the samples themselves are non-local, leading to a lack of local fidelity in the explanation. Moreover, the hypercube exhibits bias towards the reference, resulting in explanations designed to explain only a portion of the local neighborhood. This bias causes LIME to generate different explanations for different references, as illustrated in Figure 0(b) (refer to Appendix A.4 for more analysis and results).
To tackle these challenges, we present Glime--a local explanation framework that generalizes LIME and five other methods: KernelSHAP [19], SmoothGrad [28], Gradient [36], DLIME [35], and ALIME [24]. Through a flexible sample distribution design, Glime produces explanations that are more stable and faithful. Addressing LIME's instability issue, within Glime, we derive an equivalent form of LIME, denoted as Glime-Binomial, by integrating the weighting function into the sampling distribution. Glime-Binomial ensures exponential convergence acceleration compared to LIME when the regularization term is presented. Consequently, Glime-Binomial demonstrates improved stability compared to LIME while preserving superior local fidelity (see Figure 4). Furthermore, Glime enhances both local fidelity and stability by sampling from a local distribution independent of any specific reference point.
In summary, our contributions can be outlined as follows:
* We conduct an in-depth analysis to find the source of LIME's instability, revealing the interplay between the weighting function and the regularization term as the primary cause. Additionally, we attribute LIME's suboptimal local fidelity to its non-local and biased sampling space.
* We introduce Glime as a more general local explanation framework, offering a flexible design for the sampling distribution. With varying sampling distributions and weights, Glime serves as a generalization of LIME and five other preceding local explanation methods.
Figure 1: **Glime enhances stability and local fidelity compared to LIME.** (a) LIME demonstrates instability with the default parameter \(\sigma\), while Glime consistently provides meaningful explanations. (b) LIME samples from a biased and non-local neighborhood, a limitation overcome by Glime.
* By integrating weights into the sampling distribution, we present a specialized instance of Glime with a binomial sampling distribution, denoted as Glime-Binomial. We demonstrate that Glime-Binomial, while maintaining equivalence to LIME, achieves faster convergence with significantly fewer samples. This indicates that enforcing locality in the sampling distribution is better than using a weighting function.
* With regard to local fidelity, Glime empowers users to devise explanation methods that exhibit greater local fidelity. This is achieved by selecting a local and unbiased sampling distribution tailored to the specific scenario in which Glime is applied.
## 2 Preliminary
### Notations
Let \(\mathcal{X}\) and \(\mathcal{Y}\) denote the input and output spaces, respectively, where \(\mathcal{X}\subset\mathbb{R}^{D}\) and \(\mathcal{Y}\subset\mathbb{R}\). We specifically consider the scenario in which \(\mathcal{X}\) represents the space of images, and \(f:\mathcal{X}\rightarrow\mathcal{Y}\) serves as a machine learning model accepting an input \(\mathbf{x}\in\mathcal{X}\). This study focuses on the classification problem, wherein \(f\) produces the probability that the image belongs to a certain class, resulting in \(\mathcal{Y}=[0,1]\).
Before proceeding with explanation computations, a set of features \(\{s_{i}\}_{i=1}^{d}\) is derived by applying a transformation to \(\mathbf{x}\). For instance, \(\{s_{i}\}_{i=1}^{d}\) could represent image segments (also referred to as super-pixels in LIME) or feature maps obtained from a convolutional neural network. Alternatively, \(\{s_{i}\}_{i=1}^{d}\) may correspond to raw features, i.e., \(\mathbf{x}\) itself. In this context, \(\|\cdot\|_{0}\), \(\|\cdot\|_{1}\), and \(\|\cdot\|_{2}\) denote the \(\ell_{0}\), \(\ell_{1}\), and \(\ell_{2}\) norms, respectively, with \(\odot\) representing the element-wise product. Boldface letters are employed to denote vectors and matrices, while non-boldface letters represent scalars or features. \(B_{\mathbf{x}}(\epsilon)\) denotes the ball centered at \(\mathbf{x}\) with radius \(\epsilon\).
### A brief introduction to LIME
In this section, we present the original definition and implementation of LIME [22] in the context of image classification. LIME, as a local explanation method, constructs a linear model when provided with an input \(\mathbf{x}\) that requires an explanation. The coefficients of this linear model serve as the feature importance explanation for \(\mathbf{x}\).
**Features.** For an input \(\mathbf{x}\), LIME computes a feature importance vector for the set of features. In the image classification setting, for an image \(\mathbf{x}\), LIME initially segments \(\mathbf{x}\) into super-pixels \(s_{1},\ldots,s_{d}\) using a segmentation algorithm such as Quickshift [32]. Each super-pixel is regarded as a feature for the input \(\mathbf{x}\).
**Sample generation.** Subsequently, LIME generates samples within the local vicinity of \(\mathbf{x}\) as follows. First, random samples are generated uniformly from \(\{0,1\}^{d}\). The \(j\)-th coordinate \(z^{\prime}_{j}\) for each sample \(\mathbf{z^{\prime}}\) is either 1 or 0, indicating the presence or absence of the super-pixel \(s_{j}\). When \(s_{j}\) is absent, it is replaced by a reference value \(r_{j}\). Common choices for the reference value include a black image, a blurred version of the super-pixel, or the average value of the super-pixel [29; 22; 8]. Then, these \(\mathbf{z^{\prime}}\) samples are transformed into samples in the original input space \(\mathbb{R}^{D}\) by combining them with \(\mathbf{x}=(s_{1},\ldots,s_{d})\) using the element-wise product as follows: \(\mathbf{z}=\mathbf{x}\odot\mathbf{z^{\prime}}+\mathbf{r}\odot(1-\mathbf{z^{ \prime}})\), where \(\mathbf{r}\) is the vector of reference values for each super-pixel, and \(\odot\) represents the element-wise product. In other words, \(\mathbf{z}\in\mathcal{X}\) is an image that is the same as \(\mathbf{x}\), except that those super-pixels \(s_{j}\) with \(z^{\prime}_{j}=0\) are replaced by reference values.
**Feature attributions.** For each sample \(\mathbf{z^{\prime}}\) and the corresponding image \(\mathbf{z}\), we compute the prediction \(f(\mathbf{z})\). Finally, LIME solves the following regression problem to obtain a feature importance vector (also known as feature attributions) for the super-pixels:
\[\mathbf{w}^{\text{LIME}}=\operatorname*{arg\,min}_{\mathbf{v}}\mathbb{E}_{ \mathbf{z^{\prime}}\sim\text{Uni}(\{0,1\}^{d})}[\pi(\mathbf{z^{\prime}})(f( \mathbf{z})-\mathbf{v}^{\top}\mathbf{z^{\prime}})^{2}]+\lambda\|\mathbf{v}\|_{2 }^{2}, \tag{1}\]
where \(\mathbf{z}=\mathbf{x}\odot\mathbf{z^{\prime}}+\mathbf{r}\odot(1-\mathbf{z^{ \prime}})\), \(\pi(\mathbf{z^{\prime}})=\exp\{-\|\mathbf{1}-\mathbf{z^{\prime}}\|_{2}^{2}/ \sigma^{2}\}\), and \(\sigma\) is the kernel width parameter.
**Remark 2.1**.: _In practice, we draw samples \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{n}\) from the uniform distribution Uni\((\{0,1\}^{d})\) to estimate the expectation in Equation 1. In the original LIME implementation [22], \(\lambda=\alpha/n\) for a constant \(\alpha>0\). This choice has been widely adopted in prior studies [40; 8; 19; 9; 5; 20; 34]. We use \(\hat{\mathbf{w}}^{\text{LIME}}\) to represent the empirical estimation of \(\mathbf{w}^{\text{LIME}}\)._
### LIME is unstable and has poor local fidelity
**Instability.** To capture the local characteristics of the neighborhood around the input \(\mathbf{x}\), LIME utilizes the sample weighting function \(\pi(\cdot)\) to assign low weights to samples that exclude numerous super-pixels and, consequently, are located far from \(\mathbf{x}\). The parameter \(\sigma\) controls the level of locality, with a small \(\sigma\) assigning high weights exclusively to samples very close to \(\mathbf{x}\) and a large \(\sigma\) permitting notable weights for samples farther from \(\mathbf{x}\) as well. The default value for \(\sigma\) in LIME is \(0.25\) for image data. However, as depicted in Figure 0(a), LIME demonstrates instability, a phenomenon also noted in prior studies [35; 24; 34]. As showed in Section 4, this instability arises from small \(\sigma\) values, leading to very small sample weights and, consequently, slow convergence.
**Poor local fidelity.** LIME also suffers from poor local fidelity [16; 21]. The sampling space of LIME is depicted in Figure 0(b). Generally, the samples in LIME exhibit considerable distance from the instance being explained, as illustrated in Figure 3, rendering them non-local. Despite LIME's incorporation of weights to promote locality, it fails to provide accurate explanations for local behaviors when the samples themselves lack local proximity. Moreover, the sampling space of LIME is influenced by the reference, resulting in a biased sampling space and a consequent degradation of local fidelity.
## 3 A general local explanation framework: Glime
### The definition of Glime
We first present the definition of Glime and show how it computes the explanation vector \(\mathbf{w}^{\text{GLIME}}\). Analogous to LIME, Glime functions by constructing a model within the neighborhood of the input \(\mathbf{x}\), utilizing sampled data from this neighborhood. The coefficients obtained from this model are subsequently employed as the feature importance explanation for \(\mathbf{x}\).
**Feature space.** For the provided input \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{D}\), the feature importance explanation is computed for a set of features \(\mathbf{s}=(s_{1},\ldots,s_{d})\) derived from applying a transformation to \(\mathbf{x}\). These features \(\mathbf{s}\) can represent image segments (referred to as super-pixels in LIME) or feature maps obtained from a convolutional neural network. Alternatively, the features \(\mathbf{s}\) can correspond to raw features, i.e., the individual pixels of \(\mathbf{x}\). In the context of LIME, the method specifically operates on super-pixels.
**Sample generation.** Given features \(\mathbf{s}\), a sample \(\mathbf{z}^{\prime}\) can be generated from the distribution \(\mathcal{P}\) defined on the feature space (e.g., \(\mathbf{s}\) are super-pixels segmented by a segmentation algorithm such Quickshift [32] and \(\mathcal{P}=\text{Uni}(\{0,1\}^{d})\) in LIME). It's important to note that \(\mathbf{z}^{\prime}\) may not belong to \(\mathcal{X}\) and cannot be directly input into the model \(f\). Consequently, we reconstruct \(\mathbf{z}\in\mathbb{R}^{D}\) in the original input space for each \(\mathbf{z}^{\prime}\) and obtain \(f(\mathbf{z})\) (in LIME, a reference \(\mathbf{r}\) is first chosen and then \(\mathbf{z}=\mathbf{x}\odot\mathbf{z}^{\prime}+\mathbf{r}\odot(1-\mathbf{z}^{ \prime})\)). Both \(\mathbf{z}\) and \(\mathbf{z}^{\prime}\) are then utilized to compute feature attributions.
**Feature attributions.** For each sample \(\mathbf{z}^{\prime}\) and its corresponding \(\mathbf{z}\), we compute the prediction \(f(\mathbf{z})\). Our aim is to approximate the local behaviors of \(f\) around \(\mathbf{x}\) using a function \(g\) that operates on the feature space. \(g\) can take various forms such as a linear model, a decision tree, or any Boolean function operating on Fourier bases [37]. The loss function \(\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))\) quantifies the approximation gap for the given sample \(\mathbf{z}^{\prime}\). In the case of LIME, \(g(\mathbf{z}^{\prime})=\mathbf{v}^{\top}\mathbf{z}^{\prime}\), and \(\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))=(f(\mathbf{z})-g(\mathbf{z}^{ \prime}))^{2}\). To derive feature attributions, the following optimization problem is solved:
\[\mathbf{w}^{\text{GLIME}}=\operatorname*{arg\,min}_{\mathbf{v}}\mathbb{E}_{ \mathbf{z}^{\prime}\sim\mathcal{P}}[\pi(\mathbf{z}^{\prime})\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))]+\lambda R(\mathbf{v}), \tag{2}\]
where \(\pi(\cdot)\) is a weighting function and \(R(\cdot)\) serves as a regularization function, e.g., \(\|\cdot\|_{1}\) or \(\|\cdot\|_{2}^{2}\) (which is used by LIME). We use \(\hat{\mathbf{w}}^{\text{GLIME}}\) to represent the empirical estimation of \(\mathbf{w}^{\text{GLIME}}\).
**Connection with Existing Frameworks.** Our formulation exhibits similarities with previous frameworks [22; 12]. The generality of Glime stems from two key aspects: (1) Glime operates within a broader feature space \(\mathbb{R}^{d}\), in contrast to [22], which is constrained to \(\{0,1\}^{d}\), and [12], which is confined to raw features in \(\mathbb{R}^{D}\). (2) Glime can accommodate a more extensive range of distribution choices tailored to specific use cases.
### An alternative formulation of Glime without the weighting function
Indeed, we can readily transform Equation 2 into an equivalent formulation without the weighting function. While this adjustment simplifies the formulation, it also accelerates convergence by sampling from the transformed distribution (see Section 4.1 and Figure 3(a)). Specifically, we define the transformed sampling distribution as \(\widetilde{\mathcal{P}}(\mathbf{z}^{\prime})=\frac{\pi(\mathbf{z}^{\prime})P( \mathbf{z}^{\prime})}{\int\pi(\mathbf{t})P(\mathbf{t})\mathrm{d}\mathbf{t}}\). Utilizing \(\widetilde{\mathcal{P}}\) as the sampling distribution, Equation 2 can be equivalently expressed as
\[\mathbf{w}^{\text{Glime}}=\operatorname*{arg\,min}_{\mathbf{v}}\mathbb{E}_{ \mathbf{z}^{\prime}\sim\widetilde{\mathcal{P}}}[\ell(f(\mathbf{z}),g(\mathbf{ z}^{\prime}))]+\frac{\lambda}{Z}R(\mathbf{v}),\quad Z=\mathbb{E}_{\mathbf{t}\sim \mathcal{P}}[\pi(\mathbf{t})\mathcal{P}(\mathbf{t})] \tag{3}\]
It is noteworthy that the feature attributions obtained by solving Equation 3 are equivalent to those obtained by solving Equation 2 (see Appendix B.1 for a formal proof). Therefore, the use of \(\pi(\cdot)\) in the formulation is not necessary and can be omitted. Hence, unless otherwise specified, Glime refers to the framework without the weighting function.
### Glime unifies several previous explanation methods
This section shows how Glime unifies previous methods. For a comprehensive understanding of the background regarding these methods, kindly refer to Appendix A.6.
**LIME [22] and Glime-Binomial.** In the case of LIME, it initiates the explanation process by segmenting pixels \(x_{1},\cdots,x_{D}\) into super-pixels \(s_{1},\cdots,s_{d}\). The binary vector \(\mathbf{z}^{\prime}\sim\mathcal{P}=\text{Uni}(\{0,1\}^{d})\) signifies the absence or presence of corresponding super-pixels. Subsequently, \(\mathbf{z}=\mathbf{x}\odot\mathbf{z}^{\prime}+\mathbf{r}\odot(\mathbf{1}- \mathbf{z}^{\prime})\). The linear model \(g(\mathbf{z}^{\prime})=\mathbf{v}^{\top}\mathbf{z}^{\prime}\) is defined on \(\{0,1\}^{d}\). For image explanations, \(\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))=(f(\mathbf{z})-g(\mathbf{z}^{\prime }))^{2}\), and the default setting is \(\pi(\mathbf{z}^{\prime})=\exp(-\|\mathbf{1}-\mathbf{z}^{\prime}\|_{0}^{2}/ \sigma^{2})\), \(R(\mathbf{v})=\|\mathbf{v}\|_{2}^{2}\)[22]. Remarkably, LIME is equivalent to the special case Glime-Binomial without the weighting function (see Appendix B.2 for the formal proof). The sampling distribution of Glime-Binomial is defined as \(\mathcal{P}(\mathbf{z}^{\prime},\|\mathbf{z}^{\prime}\|_{0}=k)=e^{k/\sigma^{2 }}/(1+e^{1/\sigma^{2}})^{d}\), where \(k=0,1,\ldots,d\). This distribution is essentially a Binomial distribution. To generate a sample \(\mathbf{z}^{\prime}\in\{0,1\}^{d}\), one can independently draw \(z^{\prime}_{i}\in\{0,1\}\) with \(\mathbb{P}(z_{i}=1)=1/(1+e^{-1/\sigma^{2}})\) for \(i=1,\ldots,d\). The feature importance vector obtained by solving Equation 3 under Glime-Binomial is denoted as \(\mathbf{w}^{\text{Binomial}}\).
**KernelSHAP [19].** In our framework, the formulation of KernelSHAP aligns with that of LIME, with the only difference being \(R(\mathbf{v})=0\) and \(\pi(\mathbf{z}^{\prime})=(d-1)/(\binom{d}{\|\mathbf{z}^{\prime}\|_{0}}\| \mathbf{z}^{\prime}\|_{0}(d-\|\mathbf{z}^{\prime}\|_{0}))\).
**SmoothGrad [28].** SmoothGrad functions on raw features, specifically pixels in the case of an image. Here, \(\mathbf{z}=\mathbf{z}^{\prime}+\mathbf{x}\), where \(\mathbf{z}^{\prime}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\). The loss function \(\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))\) is represented by the squared loss, while \(\pi(\mathbf{z}^{\prime})=1\) and \(R(\mathbf{v})=0\), as established in Appendix B.6.
**Gradient [36].** The Gradient explanation is essentially the limit of SmoothGrad as \(\sigma\) approaches 0.
**DLIME [35].** DLIME functions on raw features, where \(\mathcal{P}\) is defined over the training data that have the same label with the nearest neighbor of \(\mathbf{x}\). The linear model \(g(\mathbf{z}^{\prime})=\mathbf{v}^{\top}\mathbf{z}^{\prime}\) is employed with the square loss function \(\ell\) and the regularization term \(R(\mathbf{v})=0\).
**ALIME [24].** ALIME employs an auto-encoder trained on the training data, with its feature space defined as the output space of the auto-encoder. The sample generation process involves introducing Gaussian noise to \(\mathbf{x}\). The weighting function in ALIME is denoted as \(\pi(\mathbf{z}^{\prime})=\exp(-\|\mathcal{AE}(\mathbf{x})-\mathcal{AE}( \mathbf{z}^{\prime})\|_{1})\), where \(\mathcal{AE}(\cdot)\) represents the auto-encoder. The squared loss function is chosen as the loss function and no regularization function is applied.
## 4 Stable and locally faithful explanations in Glime
### Glime-Binomial converges exponentially faster than LIME
To understand the instability of LIME, we demonstrate that the sample weights in LIME are very small, resulting in the domination of the regularization term. Consequently, LIME tends to produce explanations that are close to zero. Additionally, the small weights in LIME lead to a considerably slower convergence compared to Glime-Binomial, despite both methods converging to the same limit.
**Small sample weights in LIME.** The distribution of the ratio of non-zero elements to the total number of super-pixels, along with the corresponding weights for LIME and Glime-Binomial, is depicted in Figure 2. Notably, most samples exhibit approximately \(d/2\) non-zero elements. However, when \(\sigma\) takes values such as 0.25 or 0.5, a significant portion of samples attains weights that are nearly zero. For instance, when \(\sigma=0.25\) and \(\|\mathbf{z}^{\prime}\|_{0}=d/2\), \(\pi(\mathbf{z}^{\prime})\) reduces to \(\exp(-8d)\), which is approximately \(10^{-70}\) for \(d=20\). Even with \(\|\mathbf{z}^{\prime}\|_{0}=d-1\), \(\pi(\mathbf{z}^{\prime})\) equals \(e^{-16}\), approximating \(10^{-7}\). Since LIME samples from \(\text{Uni}(\{0,1\}^{d})\), the probability that a sample \(\mathbf{z}^{\prime}\) has \(\|\mathbf{z}^{\prime}\|_{0}=d-1\) or \(d\) is approximately \(2\times 10^{-5}\) when \(d=20\). Therefore, most samples have very small weights. Consequently, the sample estimation of the expectation in Equation 1 tends to be much smaller than the true expectation with high probability and is thus inaccurate (see Appendix B.3 for more details). Given the default regularization strength \(\lambda=1\), this imbalance implies the domination of the regularization term in the objective function of Equation 1. As a result, LIME tends to yield explanations close to zero in such cases, diminishing their meaningfulness and leading to instability.
**Glime converges exponentially faster than LIME in the presence of regularization.** Through the integration of the weighting function into the sampling process, every sample uniformly carries a weight of 1, contributing equally to Equation 3. Our analysis reveals that Glime requires substantially fewer samples than LIME to transition beyond the regime where the regularization term dominates. Consequently, Glime-Binomial converges exponentially faster than LIME. Recall that \(\hat{\mathbf{w}}^{\text{LIME}}\) and \(\hat{\mathbf{w}}^{\text{Glime}}\) represent the empirical solutions of Equation 1 and Equation 3, respectively, obtained by replacing the expectations with the sample average. \(\hat{\mathbf{w}}^{\text{Binomial}}\) is the empirical solution of Equation 3 with the transformed sampling distribution \(\widetilde{\mathcal{P}}(\mathbf{z}^{\prime},\|\mathbf{z}^{\prime}\|_{0}=k)=e^{ k/\sigma^{2}}/(1+e^{1/\sigma^{2}})^{d}\), where \(k=0,1,\cdots,d\). In the subsequent theorem, we present the sample complexity bound for LIME (refer to Appendix B.4 for proof).
**Theorem 4.1**.: _Suppose samples \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{n}\sim\text{Uni}(\{0,1\}^{d})\) are used to compute the LIME explanation. For any \(\epsilon>0,\delta\in(0,1)\), if \(n=\Omega(\epsilon^{-2}d^{9}2^{8d}e^{8/\sigma^{2}}\log(4d/\delta)),\lambda\leq n,\) we have \(\mathbb{P}(\|\hat{\mathbf{w}}^{\text{LIME}}-\mathbf{w}^{\text{LIME}}\|_{2}< \epsilon)\geq 1-\delta\). \(\mathbf{w}^{\text{LIME}}=\lim_{n\rightarrow\infty}\hat{\mathbf{w}}^{\text{ LIME}}\)._
Next, we present the sample complexity bound for Glime (refer to Appendix B.5 for proof).
**Theorem 4.2**.: _Suppose \(\mathbf{z}^{\prime}\)\(\sim\)\(\mathcal{P}\) such that the largest eigenvalue of \(\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top}\) is bounded by \(R\) and \(\mathbb{E}[\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top}]=(\alpha_{1}-\alpha _{2})\mathbf{I}+\alpha_{2}\mathbf{1}\mathbf{1}^{\top},\|\text{Var}(\mathbf{z} ^{\prime}(\mathbf{z}^{\prime})^{\top})\|_{2}\leq\nu^{2},\)\(|(\mathbf{z}^{\prime}f(\mathbf{z}))_{i}|\leq M\) for some \(M>0\). \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{n}\) are i.i.d. samples from \(\mathcal{P}\) and are used to compute Glime explanation \(\hat{\mathbf{w}}^{\text{Glime}}\). For any \(\epsilon>0,\delta\in(0,1)\), if \(n=\Omega(\epsilon^{-2}M^{2}\nu^{2}d^{3}\gamma^{4}\log(4d/\delta))\) where \(\gamma\) is a function of \(\lambda,d,\alpha_{1},\alpha_{2}\), we have \(\mathbb{P}(\|\hat{\mathbf{w}}^{\text{Glime}}-\mathbf{w}^{\text{Glime}}\|_{2}< \epsilon)\geq 1-\delta\). \(\mathbf{w}^{\text{Glime}}=\lim_{n\rightarrow\infty}\hat{\mathbf{w}}^{\text{ Glime}}\)._
Since Glime-Binomial samples from a binomial distribution, which is sub-Gaussian with parameters \(M=\sqrt{d}\), \(\nu=2\), \(\alpha_{1}=1/(1+e^{-1/\sigma^{2}})\), \(\alpha_{2}=1/(1+e^{-1/\sigma^{2}})^{2}\), and \(\gamma(\alpha_{1},\alpha_{2},d)=de^{2/\sigma^{2}}\) (refer to Appendix B.5 for proof), we derive the following corollary:
**Corollary 4.3**.: _Suppose \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{n}\) are i.i.d. samples from \(\widetilde{\mathcal{P}}(\mathbf{z}^{\prime},\|\mathbf{z}^{\prime}\|_{0}=k)=e^ {k/\sigma^{2}}/(1+e^{1/\sigma^{2}})^{d},k=1,\ldots,d\) and are used to compute Glime-Binomial explanation. For any \(\epsilon>0,\delta\in(0,1)\), if \(n=\Omega(\epsilon^{-2}d^{5}e^{4/\sigma^{2}}\log(4d/\delta)),\) we have \(\mathbb{P}(\|\hat{\mathbf{w}}^{\text{Binomial}}-\mathbf{w}^{\text{Binomial}}\|_{2 }<\epsilon)\geq 1-\delta\). \(\mathbf{w}^{\text{Binomial}}=\lim_{n\rightarrow\infty}\hat{\mathbf{w}}^{\text{ Binomial}}\)._
Figure 2: **The distribution and the weight of \(\|\mathbf{z}^{\prime}\|_{0}\). In LIME, the distribution of \(\|\mathbf{z}^{\prime}\|_{0}\) follows a binomial distribution, and it is independent of \(\sigma\). In Glime-Binomial, when \(\sigma\) is small, \(\|\mathbf{z}^{\prime}\|_{0}\) concentrates around \(d\), while in LIME, most samples exhibit negligible weights except for the all-one vector 1. As \(\sigma\) increases, the distributions in Glime-Binomial converge to those in LIME, and all LIME samples attain non-negligible weights.**Comparing the sample complexities outlined in Theorem 4.1 and Corollary 4.3, it becomes evident that LIME necessitates an exponential increase of \(\exp(d,\sigma^{-2})\) more samples than Glime-Binomial for convergence. Despite both LIME and Glime-Binomial samples being defined on the binary set \(\{0,1\}\), the weight \(\pi(\mathbf{z}^{\prime})\) associated with a sample \(\mathbf{z}^{\prime}\) in LIME is notably small. Consequently, the square loss term in LIME is significantly diminished compared to that in Glime-Binomial. This situation results in the domination of the regularization term over the square loss term, leading to solutions that are close to zero. For stable solutions, it is crucial that the square loss term is comparable to the regularization term. Consequently, Glime-Binomial requires significantly fewer samples than LIME to achieve stability.
### Designing locally faithful explanation methods within Glime
**Non-local and biased sampling in LIME.** LIME employs uniform sampling from \(\{0,1\}^{d}\) and subsequently maps the samples to the original input space \(\mathcal{X}\) with the inclusion of a reference. Despite the integration of a weighting function to enhance locality, the samples \(\{\mathbf{z}_{i}\}_{i=1}^{n}\) generated by LIME often exhibit non-local characteristics, limiting their efficacy in capturing the local behaviors of the model \(f\) (as depicted in Figure 3). This observation aligns with findings in [16, 21], which demonstrate that LIME frequently approximates the global behaviors instead of the local behaviors of \(f\). As illustrated earlier, the weighting function contributes to LIME's instability, emphasizing the need for explicit enforcement of locality in the sampling process.
**Local and unbiased sampling in Glime.** In response to these challenges, Glime introduces a sampling procedure that systematically enforces locality without reliance on a reference point. One approach involves sampling \(\mathbf{z}^{\prime}\sim\mathcal{P}=\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) and subsequently obtaining \(\mathbf{z}=\mathbf{x}+\mathbf{z}^{\prime}\). This method, referred to as Glime-Gauss, utilizes a weighting function \(\pi(\cdot)\equiv 1\), with other components chosen to mirror those of LIME. The feature attributions derived from this approach successfully mitigate the aforementioned issues. Similarly, alternative distributions, such as \(\mathcal{P}=\text{Laplace}(\mathbf{0},\sigma)\) or \(\mathcal{P}=\text{Uni}([-\sigma,\sigma]^{d})\), can be employed, resulting in explanation methods known as Glime-Laplace and Glime-Uniform, respectively.
### Sampling distribution selection for user-specific objectives
Users may possess specific objectives they wish the explanation method to fulfill. For instance, if a user seeks to enhance local fidelity within a neighborhood of radius \(\epsilon\), they can choose a distribution and corresponding parameters aligned with this objective (as depicted in Figure 5). The flexible design of the sample distribution in Glime empowers users to opt for a distribution that aligns with their particular use cases. Furthermore, within the Glime framework, it is feasible to integrate feature correlation into the sampling distribution, providing enhanced flexibility. In summary, Glime affords users the capability to make more tailored choices based on their individual needs and objectives.
Figure 3: **The distribution of sample distances to the original input. The samples produced by LIME display a considerable distance from the original input \(\mathbf{x}\), whereas the samples generated by Glime demonstrate a more localized distribution. LIME has a tendency to overlook sampling points that are in close proximity to \(\mathbf{x}\).**
Experiments
**Dataset and models.** Our experiments are conducted on the ImageNet dataset1. Specifically, we randomly choose 100 classes and select an image at random from each class. The models chosen for explanation are ResNet18 [13] and the tiny Swin-Transformer [18] (refer to Appendix A.7 for results). Our implementation is derived from the official implementation of LIME2. The default segmentation algorithm in LIME, Quickshift [32], is employed. Implementation details of our experiments are provided in Appendix A.1. For experiment results on text data, please refer to Appendix A.9. For experiment results on ALIME, please refer to Appendix A.8.
Footnote 1: Code is available at [https://github.com/thutzr/GLIME-General-Stable-and-Local-LIME-Explanation](https://github.com/thutzr/GLIME-General-Stable-and-Local-LIME-Explanation)
Footnote 2: [https://github.com/marcotcr/lime](https://github.com/marcotcr/lime)
**Metrics.** (1) _Stability_: To gauge the stability of an explanation method, we calculate the average top-\(K\) Jaccard Index (JI) for explanations generated by 10 different random seeds. Let \(\mathbf{w}_{1},\ldots,\mathbf{w}_{10}\) denote the explanations obtained from 10 random seeds. The indices corresponding to the top-\(K\) largest values in \(\mathbf{w}_{i}\) are denoted as \(R_{i,:K}\). The average Jaccard Index between pairs of \(R_{i,:K}\) and \(R_{j,:K}\) is then computed, where \(\text{JI}(A,B)=|A\cap B|/|A\cup B|\).
(2) _Local Fidelity_: To evaluate the local fidelity of explanations, reflecting how well they capture the local behaviors of the model, we employ two approaches. For LIME, which uses a non-local sampling neighborhood, we use the \(R^{2}\) score returned by the LIME implementation for local fidelity assessment [33]. Within Glime, we generate samples \(\{\mathbf{z}_{i}\}_{i=1}^{m}\) and \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{m}\) from the neighborhood \(B_{\mathbf{x}}(\epsilon)\). The squared difference between the model's output and the explanation's output on these samples is computed. Specifically, for a sample \(\mathbf{z}\), we calculate \((f(\mathbf{z})-\hat{\mathbf{w}}^{\top}\mathbf{z}^{\prime})^{2}\) for the explanation \(\hat{\mathbf{w}}\). The local fidelity of an explanation \(\hat{\mathbf{w}}\) at the input \(\mathbf{x}\) is defined as \(1/(1+\frac{1}{m}\sum_{i}(f(\mathbf{z}_{i})-\hat{\mathbf{w}}^{\top}\mathbf{z}^{ \prime}_{i})^{2})\), following the definition in [16]. To ensure a fair comparison between different distributions in Glime, we set the variance parameter of each distribution to match that of the Gaussian distribution. For instance, when sampling from the Laplace distribution, we use \(\text{Laplace}(\mathbf{0},\sigma/\sqrt{2})\), and when sampling from the uniform distribution, we use \(\text{Uni}([-\sqrt{3}\sigma,\sqrt{3}\sigma]^{d})\).
### Stability of LIME and Glime
**LIME's instability and the influence of regularization/weighting.** In Figure 3(a), it is evident that LIME without the weighting function (\(\text{LIME}+\pi=1\)) demonstrates greater stability compared to its weighted counterpart, especially when \(\sigma\) is small (e.g., \(\sigma=0.25,0.5\)). This implies that the weighting function contributes to instability in LIME. Additionally, we observe that LIME without regularization (\(\text{LIME}+\lambda=0\)) exhibits higher stability than the regularized LIME, although the improvement is not substantial. This is because, when \(\sigma\) is small, the sample weights approach zero, causing the Ridge regression problem to become low-rank, leading to unstable solutions. Conversely, when \(\sigma\) is large, significant weights are assigned to all samples, reducing the effectiveness of regularization. For instance, when \(\sigma=5\) and \(d=40\), most samples carry weights around 0.45, and even samples with only one non-zero element left possess weights of approximately 0.2. In such scenarios, the regularization term does not dominate, even with limited samples. This observation is substantiated by the comparable performance of LIME, LIME\(+\pi=1\), and LIME\(+\lambda=0\) when \(\sigma=1\) and \(5\). Further results are presented in Appendix A.2.
**Enhancing stability in LIME with Glime.** In Figure 3(a), it is evident that LIME achieves a Jaccard Index of approximately 0.4 even with over 2000 samples when using the default \(\sigma=0.25\). In contrast, both Glime-Binomial and Glime-Gauss provide stable explanations with only 200-400 samples. Moreover, with an increase in the value of \(\sigma\), the convergence speed of LIME also improves. However, Glime-Binomial consistently outperforms LIME, requiring fewer samples for comparable stability. The logarithmic scale of the horizontal axis in Figure 3(a) highlights the exponential faster convergence of Glime compared to LIME.
**Convergence of LIME and Glime-Binomial to a common limit.** In Figure 8 of Appendix A.3, we explore the difference and correlation between explanations generated by LIME and Glime-Binomial. Mean Squared Error (MSE) and Mean Absolute Error (MAE) are employed as metrics to quantify the dissimilarity between the explanations, while Pearson correlation and Spearman rank correlation assess their degree of correlation. As the sample size increases, both LIME and GlimeBinomial exhibit greater similarity and higher correlation. The dissimilarity in their explanations diminishes rapidly, approaching zero when \(\sigma\) is significantly large (e.g., \(\sigma=5\)).
### Local fidelity of LIME and Glime
**Enhancing local fidelity with Glime.** A comparison of the local fidelity between LIME and the explanation methods generated by Glime is presented in Figure 3(b). Utilizing 2048 samples for each image to compute the \(R^{2}\) score, Glime consistently demonstrates superior local fidelity compared to LIME. Particularly, when \(\sigma=0.25\) and \(0.5\), LIME exhibits local fidelity that is close to zero, signifying that the linear approximation model \((\hat{\mathbf{w}}^{\text{LIME}})^{\top}\mathbf{z}^{\prime}\) is nearly constant. Through the explicit integration of locality into the sampling process, Glime significantly improves the local fidelity of the explanations.
**Local fidelity analysis of Glime under various sampling distributions.** In Figure 5, we assess the local fidelity of Glime employing diverse sampling distributions: \(\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\), \(\text{Laplace}(\mathbf{0},\sigma/\sqrt{2})\)
Figure 4: Glime consistently enhances stability and local fidelity compared to LIME across various values of \(\sigma\).
Figure 5: **Local fidelity of Glime across different neighborhood radii.** Explanations produced under a distribution with a standard deviation of \(\sigma\) demonstrate the ability to capture behaviors within local neighborhoods with radii exceeding \(\sigma\).
and \(\text{Uni}([-\sqrt{3}\sigma,\sqrt{3}\sigma]^{d})\). The title of each sub-figure corresponds to the standard deviation of these distributions. Notably, we observe that the value of \(\sigma\) does not precisely align with the radius \(\epsilon\) of the intended local neighborhood for explanation. Instead, local fidelity tends to peak at larger \(\epsilon\) values than the corresponding \(\sigma\). Moreover, different sampling distributions achieve optimal local fidelity for distinct \(\epsilon\) values. This underscores the significance of selecting an appropriate distribution and parameter values based on the specific radius \(\epsilon\) of the local neighborhood requiring an explanation. Unlike LIME, Glime provides the flexibility to accommodate such choices. For additional results and analysis, please refer to Appendix A.5.
### Human experiments
In addition to numerical experiments, we conducted human-interpretability experiments to evaluate whether Glime provides more meaningful explanations to end-users. The experiments consist of two parts, with 10 participants involved in each. The details of the procedures employed in conducting the experiments is presented in the following.
1. Can Glime improve the comprehension of the model's predictions? To assess this, we choose images for which the model's predictions are accurate. Participants are presented with the original images, accompanied by explanations generated by both LIME and Glime. They are then asked to evaluate the degree of alignment between the explanations from these algorithms and their intuitive understanding. Using a 1-5 scale, where 1 indicates a significant mismatch and 5 signifies a strong correspondence, participants rate the level of agreement.
2. Can Glime assist in identifying the model's errors? To explore this, we select images for which the model's predictions are incorrect. Participants receive the original images along with explanations generated by both LIME and Glime. They are then asked to assess the degree to which these explanations aid in understanding the model's behaviors and uncovering the reasons behind the inaccurate predictions. Using a 1-5 scale, where 1 indicates no assistance and 5 signifies substantial aid, participants rate the level of support provided by the explanations.
Figure 6 presents the experimental results. When participants examined images with accurate model predictions, along with explanations from LIME and Glime, they assigned an average score of 2.96 to LIME and 3.37 to Glime. On average, Glime received a score 0.41 higher than LIME. Notably, in seven out of the ten instances, Glime achieved a higher average score than LIME.
In contrast, when participants examined images with incorrect model predictions, accompanied by explanations from LIME and Glime, they assigned an average score of 2.33 to LIME and 3.42 to Glime. Notably, Glime outperformed LIME with an average score 1.09 higher across all ten images. These results strongly indicate that Glime excels in explaining the model's behaviors.
Figure 6: Glime helps explaining model predictions better than LIME.
Related work
**Post-hoc local explanation methods.** In contrast to inherently interpretable models, black-box models can be explained through post-hoc explanation methods, which are broadly categorized as model-agnostic or model-specific. Model-specific approaches, such as Gradient [2, 27], SmoothGrad [28], and Integrated Gradient [30], assume that the explained model is differentiable and that gradient access is available. For instance, SmoothGrad generates samples from a Gaussian distribution centered at the given input and computes their average gradient to mitigate noise. On the other hand, model-agnostic methods, including LIME [22] and Anchor [23], aim to approximate the local model behaviors using interpretable models, such as linear models or rule lists. Another widely-used model-agnostic method, SHAP [19], provides a unified framework that computes feature attributions based on the Shapley value and adheres to several axioms.
**Instability of LIME.** Despite being widely employed, LIME is known to be unstable, evidenced by divergent explanations under different random seeds [35, 34, 38]. Many efforts have been devoted to stabilize LIME explanations. Zafar et al. [35] introduced a deterministic algorithm that utilizes hierarchical clustering for grouping training data and k-nearest neighbors for selecting relevant data samples. However, the resulting explanations may not be a good local approximation. Addressing this concern, Shankaranarayana et al. [24] trained an auto-encoder to function as a more suitable weighting function in LIME. Shi et al. [25] incorporated feature correlation into the sampling step and considered a more restricted sampling distribution, thereby enhancing stability. Zhou et al. [40] employed a hypothesis testing framework to determine the necessary number of samples for ensuring stable explanations. However, this improvement came at the expense of a substantial increase in computation time.
**Impact of references.** LIME, along with various other explanation methods, relies on references (also known as baseline inputs) to generate samples. References serve as uninformative inputs meant to represent the absence of features [4, 30, 26]. Choosing an inappropriate reference can lead to misleading explanations. For instance, if a black image is selected as the reference, important black pixels may not be highlighted [15, 6]. The challenge lies in determining the appropriate reference, as different types of references may yield different explanations [14, 6, 15]. In [15], both black and white references are utilized, while [7] employs constant, noisy, and Gaussian blur references simultaneously. To address the reference specification issue, [6] proposes Expected Gradient, considering each instance in the data distribution as a reference and averaging explanations computed across all references.
## 7 Conclusion
In this paper, we introduce Glime, a novel framework that extends the LIME method for local feature importance explanations. By explicitly incorporating locality into the sampling procedure and enabling more flexible distribution choices, Glime mitigates the limitations of LIME, such as instability and low local fidelity. Experimental results on ImageNet data demonstrate that Glime significantly enhances stability and local fidelity compared to LIME. While our experiments primarily focus on image data, the applicability of our approach readily extends to text and tabular data.
## 8 Acknowledgement
The authors would like to thank the anonymous reviewers for their constructive comments. Zeren Tan and Jian Li are supported by the National Natural Science Foundation of China Grant (62161146004). Yang Tian is supported by the Artificial and General Intelligence Research Program of Guo Qiang Research Institute at Tsinghua University (2020GQG1017).
## References
* [1] Alejandro Barredo Arrieta, Natalia Diaz-Rodriguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. _Information fusion_, 58:82-115, 2020.
* [2] David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Muller. How to explain individual classification decisions. _The Journal of Machine Learning Research_, 11:1803-1831, 2010.
* [3] Naman Bansal, Chirag Agarwal, and Anh Nguyen. Sam: The sensitivity of attribution methods to hyperparameters. In _Proceedings of the ieee/cvf conference on computer vision and pattern recognition_, pages 8673-8683, 2020.
* [4] Alexander Binder, Gregoire Montavon, Sebastian Lapuschkin, Klaus-Robert Muller, and Wojciech Samek. Layer-wise relevance propagation for neural networks with local renormalization layers. In _Artificial Neural Networks and Machine Learning-ICANN 2016: 25th International Conference on Artificial Neural Networks, Barcelona, Spain, September 6-9, 2016, Proceedings, Part II 25_, pages 63-71. Springer, 2016.
* [5] Ian C Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: A unified framework for model explanation. _The Journal of Machine Learning Research_, 22(1):9477-9566, 2021.
* [6] Gabriel Erion, Joseph D Janizek, Pascal Sturmfels, Scott M Lundberg, and Su-In Lee. Improving performance of deep learning models with axiomatic attribution priors and expected gradients. _Nature machine intelligence_, 3(7):620-631, 2021.
* [7] Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In _Proceedings of the IEEE international conference on computer vision_, pages 3429-3437, 2017.
* [8] Damien Garreau and Dina Mardaoui. What does lime really see in images? In _International conference on machine learning_, pages 3620-3629. PMLR, 2021.
* [9] Damien Garreau and Ulrike von Luxburg. Looking deeper into tabular lime. _arXiv preprint arXiv:2008.11092_, 2020.
* [10] Loveleen Gaur, Mohan Bhandari, Tanvi Razdan, Saurav Mallik, and Zhongming Zhao. Explanation-driven deep learning model for prediction of brain tumour status using mri image data. _Frontiers in Genetics_, page 448, 2022.
* [11] Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, and Freddy Lecue. Interpretable credit application predictions with counterfactual explanations. _arXiv preprint arXiv:1811.05245_, 2018.
* [12] Tessa Han, Suraj Srinivas, and Himabindu Lakkaraju. Which explanation should i choose? a function approximation perspective to characterizing post hoc explanations. _arXiv preprint arXiv:2206.01254_, 2022.
* [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [14] Saachi Jain, Hadi Salman, Eric Wong, Pengchuan Zhang, Vibhav Vineet, Sai Vemprala, and Aleksander Madry. Missingness bias in model debugging. _arXiv preprint arXiv:2204.08945_, 2022.
* [15] Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viegas, and Michael Terry. Xrai: Better attributions through regions. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 4948-4957, 2019.
* [16] Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, and Marcin Detyniecki. Defining locality for surrogates in post-hoc interpretablity. _arXiv preprint arXiv:1806.07498_, 2018.
* [17] Wu Lin, Mohammad Emtiyaz Khan, and Mark Schmidt. Stein's lemma for the reparameterization trick with exponential family mixtures. _arXiv preprint arXiv:1910.13398_, 2019.
* [18] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 10012-10022, 2021.
* [19] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. _Advances in neural information processing systems_, 30, 2017.
* [20] Christoph Molnar. _Interpretable machine learning_. Lulu. com, 2020.
* [21] Amir Hossein Akhavan Rahnama and Henrik Bostrom. A study of data and label shift in the lime framework. _arXiv preprint arXiv:1910.14421_, 2019.
* [22] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. " why should i trust you?" explaining the predictions of any classifier. In _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_, pages 1135-1144, 2016.
* [23] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In _Proceedings of the AAAI conference on artificial intelligence_, volume 32, 2018.
* [24] Sharath M Shankaranarayana and Davor Runje. Alime: Autoencoder based approach for local interpretability. In _Intelligent Data Engineering and Automated Learning-IDEAL 2019: 20th International Conference, Manchester, UK, November 14-16, 2019, Proceedings, Part I 20_, pages 454-463. Springer, 2019.
* [25] Sheng Shi, Xinfeng Zhang, and Wei Fan. A modified perturbed sampling method for local interpretable model-agnostic explanation. _arXiv preprint arXiv:2002.07434_, 2020.
* [26] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In _International conference on machine learning_, pages 3145-3153. PMLR, 2017.
* [27] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. _arXiv preprint arXiv:1312.6034_, 2013.
* [28] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viegas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. _arXiv preprint arXiv:1706.03825_, 2017.
* [29] Pascal Sturmfels, Scott Lundberg, and Su-In Lee. Visualizing the impact of feature attribution baselines. _Distill_, 5(1):e22, 2020.
* [30] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In _International conference on machine learning_, pages 3319-3328. PMLR, 2017.
* [31] Joel A Tropp et al. An introduction to matrix concentration inequalities. _Foundations and Trends(r) in Machine Learning_, 8(1-2):1-230, 2015.
* [32] Andrea Vedaldi and Stefano Soatto. Quick shift and kernel methods for mode seeking. In _Computer Vision-ECCV 2008: 10th European Conference on Computer Vision, Marseille, France, October 12-18, 2008, Proceedings, Part IV 10_, pages 705-718. Springer, 2008.
* [33] Giorgio Visani, Enrico Bagli, and Federico Chesani. Optilime: Optimized lime explanations for diagnostic computer algorithms. _arXiv preprint arXiv:2006.05714_, 2020.
* [34] Giorgio Visani, Enrico Bagli, Federico Chesani, Alessandro Poluzzi, and Davide Capuzzo. Statistical stability indices for lime: Obtaining reliable explanations for machine learning models. _Journal of the Operational Research Society_, 73(1):91-101, 2022.
* [35] Muhammad Rehman Zafar and Naimul Mefraz Khan. Dime: A deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. _arXiv preprint arXiv:1906.10263_, 2019.
* [36] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In _Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13_, pages 818-833. Springer, 2014.
* [37] Yifan Zhang, Haowei He, and Yang Yuan. Consistent and truthful interpretation with fourier analysis. _arXiv preprint arXiv:2210.17426_, 2022.
* [38] Yujia Zhang, Kuangyan Song, Yiming Sun, Sarah Tan, and Madeleine Udell. " why should you trust my explanation?" understanding uncertainty in lime explanations. _arXiv preprint arXiv:1904.12991_, 2019.
* [39] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 2921-2929, 2016.
* [40] Zhengze Zhou, Giles Hooker, and Fei Wang. S-lime: Stabilized-lime for model explanation. In _Proceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining_, pages 2429-2438, 2021.
More discussions
### Implementation details
**Dataset selection.** The experiments use images from the validation set of the ImageNet-1k dataset. To ensure consistency, a fixed random seed (2022) is employed. Specifically, 100 classes are uniformly chosen at random, and for each class, an image is randomly selected.
**Models.** The pretrained models used are sourced from torchvision.models, with the weights parameter set to IMAGENET1K_V1.
**Feature transformation.** The initial step involves cropping each image to dimensions of (224, 224, 3). The quickshift method from scikit-image is then employed to segment images into super-pixels, with specific parameters set as follows: kernel_size=4, max_dist=200, ratio=0.2, and random_seed=2023. This approach aligns with the default setting in LIME, except for the modified fixed random seed. Consistency in the random seed ensures that identical images result in the same super-pixels, thereby isolating the source of instability to the calculation of explanations. However, for different images, they are still segmented in different ways.
**Computing explanations.** The implemented procedure follows the original setup in LIME. The hide_color parameter is configured as None, causing the average value of each super-pixel to act as its reference when the super-pixel is removed. The distance_metric is explicitly set to l2, as recommended for image data in LIME [22]. The default value for alpha in Ridge regression is 1, unless otherwise specified. For each image, the model \(f\) infers the most probable label, and the explanation pertaining to that label is computed. Ten different random seeds are utilized to compute explanations for each image. The random_seed parameter in both the LineImageExplainer and the explain_instance function is set to these specific random seeds.
### Stability of LIME and Glime
Figure 7 illustrates the top-1, top-5, top-10, and average Jaccard indices. Importantly, the results presented in Figure 7 closely align with those in Figure 3(a). In summary, it is evident that Glime consistently provides more stable explanations compared to LIME.
### LIME and Glime-Binomial converge to the same limit
In Figure 8, the difference and correlation between explanations generated by LIME and Glime-Binomial are presented. With an increasing sample size, the explanations from LIME and Glime-Binomial become more similar and correlated. The difference between their explanations rapidly converges to zero, particularly when \(\sigma\) is large, such as in the case of \(\sigma=5\). While LIME exhibits a slower convergence, especially with small \(\sigma\), it is impractical to continue sampling until their difference fully converges. Nevertheless, the correlation between LIME and Glime-Binomial strengthens with an increasing number of samples, indicating their convergence to the same limit as the sample size grows.
### LIME explanations are different for different references.
The earlier work by Jain et al. [14] has underscored the instability of LIME regarding references. As shown in Section 4.2, this instability originates from LIME's sampling distribution, which relies on the chosen reference \(\mathbf{r}\). Additional empirical evidence is presented in Figure 9. Six distinct references--black, white, red, blue, yellow image, and the average value of the removed super-pixel (the default setting for LIME)--are selected. The average Jaccard indices for explanations computed using these various references are detailed in Figure 9. The results underscore the sensitivity of LIME to different references.
Different references result in LIME identifying distinct features as the most influential, even with a sample size surpassing 2000. Particularly noteworthy is that, with a sample size exceeding 2000, the top-1 Jaccard index consistently remains below 0.7, underscoring LIME's sensitivity to reference variations.
Figure 8: **Difference and correlation between LIME and Glime-Binomial explanations. Mean Squared Error (MSE) and Mean Absolute Error (MAE) serve as metrics to evaluate the dissimilarity between explanations provided by LIME and Glime-Binomial. Pearson and Spearman correlation coefficients are employed to quantify the correlation between these explanations. With an increasing number of samples, the explanations from LIME and Glime-Binomial tend to show greater similarity. Notably, the dissimilarity and correlation of explanations between LIME and Glime-Binomial converge more rapidly when \(\sigma\) is higher.**
Figure 7: Top-1, 5, 10, and average Jaccard indices are computed for various methods. The average Jaccard index is obtained by averaging the top-1 to top-d Jaccard indices.
### The local fidelity of Glime
Figure 5 presents the local fidelity of Glime, showcasing samples from the \(\ell_{2}\) neighborhood \(\{\mathbf{z}\|\|\mathbf{z}-\mathbf{x}\|_{2}\leq\epsilon\}\) around \(\mathbf{x}\). Additionally, Figure 10 and Figure 11 illustrate the local fidelity of Glime within the \(\ell_{1}\) neighborhood \(\{\mathbf{z}\|\|\mathbf{z}-\mathbf{x}\|_{1}\leq\epsilon\}\) and the \(\ell_{\infty}\) neighborhood \(\{\mathbf{z}\|\|\mathbf{z}-\mathbf{x}\|_{\infty}\leq\epsilon\}\), respectively.
A comparison between Figure 5 and Figure 10 reveals that, for the same \(\sigma\), Glime can explain the local behaviors of \(f\) within a larger radius in the \(\ell_{1}\) neighborhood compared to the \(\ell_{2}\) neighborhood. This difference arises from the fact that \(\{\mathbf{z}\|\|\mathbf{z}-\mathbf{x}\|_{2}\leq\epsilon\}\) defines a larger neighborhood compared to \(\{\mathbf{z}\|\|\mathbf{z}-\mathbf{x}\|_{1}\leq\epsilon\}\) with the same radius \(\epsilon\).
Likewise, the set \(\{\mathbf{z}\|\|\mathbf{z}-\mathbf{x}\|_{\infty}\leq\epsilon\}\) denotes a larger neighborhood than \(\{\mathbf{z}\|\|\mathbf{z}-\mathbf{x}\|_{2}\leq\epsilon\}\), causing the local fidelity to peak at a smaller radius \(\epsilon\) for the \(\ell_{\infty}\) neighborhood compared to the \(\ell_{2}\) neighborhood under the same \(\sigma\).
Remarkably, Glime-Laplace consistently demonstrates superior local fidelity compared to Glime-Gauss and Glime-Uniform. Nevertheless, in cases with larger \(\epsilon\), Glime-Gauss sometimes surpasses the others. This observation implies that the choice of sampling distribution should be contingent on the particular radius of the local neighborhood intended for explanation by the user.
Figure 11: The local fidelity of Glime in the \(\ell_{\infty}\) neighborhood
Figure 10: The local fidelity of Glime in the \(\ell_{1}\) neighborhood
Figure 9: The Top-\(K\) Jaccard index of explanations, computed with different references, consistently stays below 0.7, even when the sample size exceeds 2000.
### Glime unifies several previous methods
**KernelSHAP [19].** KernelSHAP integrates the principles of LIME and Shapley values. While LIME employs a linear explanation model to locally approximate \(f\), KernelSHAP seeks a linear explanation model that adheres to the axioms of Shapley values, including local accuracy, missingness, and consistency [19]. Achieving this involves careful selection of the loss function \(\ell(\cdot,\cdot)\), the weighting function \(\pi(\cdot)\), and the regularization term \(R\). The choices for these parameters in LIME often violate local accuracy and/or consistency, unlike the selections made in KernelSHAP, which are proven to adhere to these axioms (refer to Theorem 2 in [19]).
**Gradient [2, 27].** This method computes the gradient \(\nabla f\) to assess the impact of each feature under infinitesimal perturbation [2, 27].
**SmoothGrad [28].** Acknowledging that standard gradient explanations may contain noise, SmoothGrad introduces a method to alleviate noise by averaging gradients within the local neighborhood of the explained input [28]. Consequently, the feature importance scores are computed as \(\mathbb{E}_{\mathbf{c}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})}[\nabla f( \mathbf{x}+\epsilon)]\).
**DLIME [35].** Diverging from random sampling, DLIME seeks a deterministic approach to sample acquisition. In its process, DLIME employs agglomerative Hierarchical Clustering to group the training data, and subsequently utilizes k-Nearest Neighbour to select the cluster corresponding to the explained instance. The DLIME explanation is then derived by constructing a linear model based on the data points within the identified cluster.
**ALIME [24]:** ALIME leverages an auto-encoder to assign weights to samples. Initially, an auto-encoder, denoted as \(\mathcal{AE}(\cdot)\), is trained on the training data. Subsequently, the method involves sampling \(n\) nearest points to \(\mathbf{x}\) from the training dataset. The distances between these samples and the explained instance \(\mathbf{x}\) are assessed using the \(\ell_{1}\) distance between their embeddings, obtained through the application of the auto-encoder \(\mathcal{AE}(\cdot)\). For a sample \(\mathbf{z}\), its distance from \(\mathbf{x}\) is measured as \(\|\mathcal{AE}(\mathbf{z})-\mathcal{AE}(\mathbf{x})\|_{1}\), and its weight is computed as \(\exp(-\|\mathcal{AE}(\mathbf{z})-\mathcal{AE}(\mathbf{x})\|_{1})\). The final explanation is derived by solving a weighted Ridge regression problem.
### Results on tiny Swin-Transformer [18]
The findings on the tiny Swin-Transformer align with those on ResNet18, providing additional confirmation that Glime enhances stability and local fidelity compared to LIME. Please refer to Figure 12, Figure 13 and Figure 14 for results.
### Comparing Glime with ALIME
While ALIME [24] improves upon the stability and local fidelity of LIME, Glime consistently surpasses ALIME. A key difference between ALIME and LIME lies in their methodologies: ALIME employs an encoder to transform samples into an embedding space, calculating their distance from the input to be explained as \(\|\mathcal{AE}(\mathbf{z})-\mathcal{AE}(\mathbf{x})\|_{1}\), whereas LIME utilizes a binary vector \(\mathbf{z}\in\{0,1\}^{d}\) to represent a sample, measuring the distance from the explained input as \(\|\mathbf{1}-\mathbf{z}\|_{2}\).
Because ALIME relies on distance in the embedding space to assign weights to samples, there is a risk of generating very small sample weights if the produced samples are far from \(\mathbf{x}\), potentially resulting in instability issues.
In our ImageNet experiments comparing Glime and ALIME, we utilize the VGG16 model from the repository imagenet-autoconcoder3 as the encoder in ALIME. The outcomes of these experiments are detailed in Table 1. The findings demonstrate that, although ALIME demonstrates enhanced stability compared to LIME, this improvement is not as substantial as the improvement achieved by Glime, particularly under conditions of small \(\sigma\) or sample size.
Footnote 3: [https://github.com/Horizon2333/imagenet-autoencoder](https://github.com/Horizon2333/imagenet-autoencoder)
### Experiment results on IMDb
The DistilBERT model is employed in experimental evaluations on the IMDb dataset, where 100 data points are selected for explanation. The comparison between Glime-Binomial and LIMEis depicted in Figure 15 using the Jaccard Index. Our findings indicate that Glime-Binomial consistently exhibits higher stability than LIME across a range of \(\sigma\) values and sample sizes. Notably, at smaller \(\sigma\) values, Glime-Binomial demonstrates a substantial improvement in stability compared to LIME.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\# samples} & 128 & 256 & 512 & 1024 \\ \hline \multirow{3}{*}{\(\sigma=0.25\)} & Glime-Binomial & 0.952 & 0.981 & 0.993 & 0.998 \\ \cline{2-6} & Glime-Gauss & 0.872 & 0.885 & 0.898 & 0.911 \\ \cline{2-6} & LIME & 0.618 & 0.691 & 0.750 & 0.803 \\ \hline \multirow{3}{*}{\(\sigma=0.5\)} & Glime-Binomial & 0.596 & 0.688 & 0.739 & 0.772 \\ \cline{2-6} & Glime-Gauss & 0.875 & 0.891 & 0.904 & 0.912 \\ \cline{2-6} & ALIME & 0.525 & 0.588 & 0.641 & 0.688 \\ \hline \multirow{3}{*}{\(\sigma=1\)} & Glime-Binomial & 0.533 & 0.602 & 0.676 & 0.725 \\ \cline{2-6} & Glime-Gauss & 0.883 & 0.894 & 0.908 & 0.915 \\ \cline{1-1} \cline{2-6} & ALIME & 0.519 & 0.567 & 0.615 & 0.660 \\ \hline \multirow{3}{*}{\(\sigma=5\)} & Glime-Binomial & 0.493 & 0.545 & 0.605 & 0.661 \\ \cline{2-6} & Glime-Gauss & 0.865 & 0.883 & 0.898 & 0.910 \\ \cline{1-1} \cline{2-6} & ALIME & 0.489 & 0.539 & 0.589 & 0.640 \\ \hline \end{tabular}
\end{table}
Table 1: **Top-20 Jaccard Index of Glime-Binomial, Glime-Gauss, and ALIME.**Glime-Binomial and Glime-Gauss exhibit significantly higher stability than ALIME, particularly in scenarios with small \(\sigma\) or limited samples.
Figure 12: Glime markedly enhances both stability and local fidelity compared to LIME across various values of \(\sigma\).
Figure 14: **Difference and correlation in LIME and Glime-Binomial explanations (tiny Swin-Transformer).** Mean Squared Error (MSE) and Mean Absolute Error (MAE) quantify the divergence between LIME and Glime-Binomial explanations. Pearson and Spearman correlation coefficients gauge the correlation. With an increasing sample size, the explanations tend to align more closely. Notably, the difference and correlation converge more rapidly with larger values of \(\sigma\).
Figure 13: Top-1, 5, 10, and average Jaccard indices are calculated for various methods, and the average Jaccard index represents the mean of top-\(1,\cdots,d\) indices for the tiny Swin-Transformer.
## Appendix B Proofs
### Equivalent Glime formulation without \(\pi(\cdot)\)
By integrating the weighting function into the sampling distribution, the problem to be solved is
\[\mathbf{w}^{\text{Glime}} =\operatorname*{arg\,min}_{\mathbf{v}}\mathbb{E}_{\mathbf{z}^{ \prime}\sim\mathcal{P}}[\pi(\mathbf{z}^{\prime})\ell(f(\mathbf{z}),g(\mathbf{ z}^{\prime}))]+\lambda R(\mathbf{v})\] \[=\operatorname*{arg\,min}_{\mathbf{v}}\int_{\mathbb{R}^{d}}\pi( \mathbf{z}^{\prime})\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))\mathcal{P}( \mathbf{z})\mathrm{d}\mathbf{z}+\lambda R(\mathbf{v})\] \[=\] \[= \operatorname*{arg\,min}_{\mathbf{v}}\frac{\int_{\mathbb{R}^{d}} \ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))\pi(\mathbf{z}^{\prime})\mathcal{P}( \mathbf{z})\mathrm{d}\mathbf{z}+\frac{\lambda R(\mathbf{v})}{\int_{\mathbb{R}^ {d}}\pi(\mathbf{u}^{\prime})\mathcal{P}(\mathbf{u})\mathrm{d}\mathbf{u}}+ \frac{\lambda R(\mathbf{v})}{\int_{\mathbb{R}^{d}}\pi(\mathbf{u}^{\prime}) \mathcal{P}(\mathbf{u})\mathrm{d}\mathbf{u}}\] \[= \operatorname*{arg\,min}_{\mathbf{v}}\int_{\mathbb{R}^{d}}\ell(f (\mathbf{z}),g(\mathbf{z}^{\prime}))\widetilde{\mathcal{P}}(\mathbf{z})\mathrm{ d}\mathbf{z}+\frac{\lambda}{Z}R(\mathbf{v})\quad\widetilde{\mathcal{P}}(\mathbf{z})= \frac{\pi(\mathbf{z}^{\prime})\mathcal{P}(\mathbf{z})}{Z},Z=\int_{\mathbb{R}^{ d}}\pi(\mathbf{u}^{\prime})\mathcal{P}(\mathbf{u})\mathrm{d}\mathbf{u}\] \[= \operatorname*{arg\,min}_{\mathbf{v}}\mathbb{E}_{\mathbf{z}^{ \prime}\sim\widetilde{\mathcal{P}}}[\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime})) ]+\frac{\lambda}{Z}R(\mathbf{v})\]
### Equivalence between LIME and Glime-Binomial
For LIME, \(\mathcal{P}=\text{Uni}(\{0,1\}^{d})\) and thus \(\mathcal{P}(\mathbf{z}^{\prime},\|\mathbf{z}^{\prime}\|_{0}=k)=\frac{1}{2^{d} },k=0,1,\cdots,d\), so that
\[Z=\int_{\mathbb{R}^{d}}\pi(\mathbf{u}^{\prime})\mathcal{P}(\mathbf{u})\mathrm{ d}\mathbf{u}=\sum_{k=0}^{d}e^{(k-d)/\sigma^{2}}\frac{\binom{d}{k}}{2^{d}}=\frac{e^{ -d/\sigma^{2}}}{2^{d}}(1+e^{1/\sigma^{2}})^{d}\]
Thus, we have
\[\widetilde{\mathcal{P}}(\mathbf{z})=\frac{\pi(\mathbf{z}^{\prime})\mathcal{P}( \mathbf{z})}{Z}=\frac{e^{(k-d)/\sigma^{2}}2^{-d}}{Z}=\frac{e^{k/\sigma^{2}}}{ (1+e^{1/\sigma^{2}})^{d}}\]
Therefore, Glime-binomial is equivalent to LIME.
Figure 15: Glime significantly enhances both stability and local fidelity compared to LIME across various \(\sigma\) values.
### LIME requires many samples to accurately estimate the expectation term in Equation 1.
In Figure 2, it is evident that a lot of samples generated by LIME possess considerably small weights. Consequently, the sample estimation of the expectation in Equation 1 tends to be much smaller than the true expectation with high probability. In such instances, the regularization term would have a dominant influence on the overall objective.
Consider specific parameters, such as \(\sigma=0.25\), \(n=1000\), \(d=20\) (where \(\sigma\) and \(n\) are the default values in the original implementation of LIME). The probability of obtaining a sample \(\mathbf{z}^{\prime}\) with \(\|\mathbf{z}^{\prime}\|_{0}=d-1\) or \(d\) is approximately \(\frac{d}{2^{d}}+\frac{1}{2^{d}}=\frac{d+1}{2^{d}}\approx 2\times 10^{-5}\). Let's consider a typical scenario where \(|f(\mathbf{z})|\in[0,1]\), and \((f(\mathbf{z})-\mathbf{v}^{\top}\mathbf{z}^{\prime})^{2}\) is approximately \(0.1\) for most \(\mathbf{z},\mathbf{z}^{\prime}\). In this case, \(\mathbb{E}_{\mathbf{z}^{\prime}\sim\text{Uni}\{(0,1^{d})\}}[\pi(\mathbf{z}^{ \prime})(f(\mathbf{z})-\mathbf{v}^{\top}\mathbf{z}^{\prime})^{2}]\approx 0.1 \cdot\sum_{k=0}^{d}\frac{e^{(k-d)/\sigma^{2}}}{2^{d}}\approx 10^{-7}\). However, if we lack samples \(\mathbf{z}^{\prime}\) with \(\|\mathbf{z}^{\prime}\|_{0}=d-1\) or \(d\), then all samples \(\mathbf{z}^{\prime}_{i}\) with \(\|\mathbf{z}^{\prime}_{i}\|_{0}\leq d-2\) have weights \(\pi(\mathbf{z}^{\prime}_{i})\leq\exp(-\frac{2}{\sigma^{2}})\approx 1.26\times 10^{ -14}\). This leads to the sample average \(\frac{1}{n}\sum_{i=1}^{n}\pi(\mathbf{z}^{\prime}_{i})(f(\mathbf{z}_{i})-\mathbf{ v}^{\top}\mathbf{z}^{\prime}_{i})^{2}\leq 1.26\times 10^{-15}\ll 10^{-7}\). The huge difference between the magnitude of the expectation term in Equation 1 and the sample average of this expectation indicates that the sample average is not an accurate estimation of \(\mathbb{E}_{\mathbf{z}^{\prime}\sim\text{Uni}\{(0,1^{d})\}}[\pi(\mathbf{z}^{ \prime})(f(\mathbf{z})-\mathbf{v}^{\top}\mathbf{z}^{\prime})^{2}]\) (if we do not get enough samples). Additionally, under these circumstances, the regularization term is likely to dominate the sample average term, leading to an underestimation of the intended value of \(\mathbf{v}\). In conclusion, the original sampling method for LIME, even with extensively used default parameters, is not anticipated to yield meaningful explanations.
### Proof of Theorem 4.1
**Theorem B.1**.: _Suppose samples \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{n}\sim\text{Uni}(\{0,1\}^{d})\) are used to compute LIME explanation. For any \(\epsilon>0,\delta\in(0,1)\), if \(n=\Omega(\epsilon^{-2}d^{5}2^{4d}e^{4/\sigma^{2}}\log(4d/\delta)),\lambda\leq n\), we have \(\mathbb{P}(\|\hat{\mathbf{w}}^{\text{LIME}}-\mathbf{w}^{\text{LIME}}\|_{2}< \epsilon)\geq 1-\delta\). \(\mathbf{w}^{\text{LIME}}=\lim_{n\rightarrow\infty}\hat{\mathbf{w}}^{\text{LIME}}\)._
Proof.: To compute the LIME explanation with \(n\) samples, the following optimization problem is solved:
\[\hat{\mathbf{w}}^{\text{LIME}}=\operatorname*{arg\,min}_{\mathbf{v}}\frac{1}{ n}\sum_{i=1}^{n}\pi(\mathbf{z}^{\prime}_{i})(f(\mathbf{z}_{i})-\mathbf{v}^{ \top}\mathbf{z}^{\prime}_{i})^{2}+\frac{\lambda}{n}\|\mathbf{v}\|_{2}^{2}.\]
Let \(L=\frac{1}{n}\sum_{i=1}^{n}\pi(\mathbf{z}^{\prime}_{i})(f(\mathbf{z}_{i})- \mathbf{v}^{\top}\mathbf{z}^{\prime}_{i})^{2}+\frac{\lambda}{n}\|\mathbf{v}\| _{2}^{2}\). Setting the gradient of \(L\) with respect to \(\mathbf{v}\) to zero, we obtain:
\[-2\frac{1}{n}\pi(\mathbf{z}^{\prime}_{i})(f(\mathbf{z}_{i})-\mathbf{v}^{\top }\mathbf{z}^{\prime}_{i})\mathbf{z}^{\prime}_{i}+\frac{2}{n}\lambda\mathbf{v}=0,\]
which leads to:
\[\hat{\mathbf{w}}^{\text{LIME}}=\bigg{(}\frac{1}{n}\sum_{i=1}^{n}\pi(\mathbf{z} ^{\prime}_{i})\mathbf{z}^{\prime}_{i}(\mathbf{z}^{\prime}_{i})^{\top}+\frac{ \lambda}{n}\bigg{)}^{-1}\bigg{(}\frac{1}{n}\sum_{i=1}^{n}\pi(\mathbf{z}^{ \prime}_{i})\mathbf{z}^{\prime}_{i}f(\mathbf{z}_{i})\bigg{)}.\]
Denote \(\mathbf{\Sigma}_{n}=\frac{1}{n}\sum_{i=1}^{n}\pi(\mathbf{z}^{\prime}_{i}) \mathbf{z}^{\prime}_{i}(\mathbf{z}^{\prime}_{i}(\mathbf{z}^{\prime}_{i})^{\top} +\frac{\lambda}{n}\), \(\mathbf{\Gamma}_{n}=\frac{1}{n}\sum_{i=1}^{n}\pi(\mathbf{z}^{\prime}_{i}) \mathbf{z}^{\prime}_{i}f(\mathbf{z}_{i})\), \(\mathbf{\Sigma}=\lim_{n\rightarrow\infty}\mathbf{\Sigma}_{n}\), and \(\mathbf{\Gamma}=\lim_{n\rightarrow\infty}\mathbf{\Gamma}_{n}\). Then, we have:
\[\hat{\mathbf{w}}^{\text{LIME}}=\mathbf{\Sigma}_{n}^{-1}\mathbf{\Gamma}_{n}, \quad\mathbf{w}^{\text{LIME}}=\mathbf{\Sigma}^{-1}\mathbf{\Gamma}.\]
To prove the concentration of \(\hat{\mathbf{w}}^{\text{LIME}}\), we follow the proofs in [8]: (1) First, we prove the concentration of \(\mathbf{\Sigma}_{n}\); (2) Then, we bound \(\|\mathbf{\Sigma}^{-1}\|_{F}^{2}\); (3) Next, we prove the concentration of \(\mathbf{\Gamma}_{n}\); (4) Finally, we use the following inequality:
\[\|\mathbf{\Sigma}_{n}^{-1}\mathbf{\Gamma}_{n}-\mathbf{\Sigma}^{-1}\mathbf{ \Gamma}\|\leq 2\|\mathbf{\Sigma}^{-1}\|_{F}\|\mathbf{\Gamma}_{n}-\mathbf{\Gamma} \|_{2}+2\|\mathbf{\Sigma}^{-1}\|_{F}^{2}\|\mathbf{\Gamma}\|\|\mathbf{\Sigma}_{ n}-\mathbf{\Sigma}\|,\]
when \(\|\mathbf{\Sigma}^{-1}(\mathbf{\Sigma}_{n}-\mathbf{\Sigma})\|\leq 0.32\)[8].
Before establishing concentration results, we first derive the expression for \(\mathbf{\Sigma}\).
**Expression of \(\mathbf{\Sigma}\).**
\[\mathbf{\Sigma}_{n}=\begin{bmatrix}\frac{1}{n}\sum_{i}\pi(\mathbf{z}^{\prime}_{i })(\mathbf{z}^{\prime}_{i1})^{2}+\frac{\lambda}{n}&\frac{1}{n}\sum_{i}\pi( \mathbf{z}^{\prime}_{i})\mathbf{z}^{\prime}_{i1}\mathbf{z}^{\prime}_{i2}&\cdots &\frac{1}{n}\sum_{i}\pi(\mathbf{z}^{\prime}_{i})\mathbf{z}^{\prime}_{i1}\mathbf{z }^{\prime}_{id}\\ \frac{1}{n}\sum_{i}\pi(\mathbf{z}^{\prime}_{i})\mathbf{z}^{\prime}_{i1}\mathbf{z }^{\prime}_{i2}&\frac{1}{n}\sum_{i}\pi(\mathbf{z}^{\prime}_{i})(\mathbf{z}^{ \prime}_{i2})^{2}+\frac{\lambda}{n}&\cdots&\frac{1}{n}\sum_{i}\pi(\mathbf{z}^{ \prime}_{i})\mathbf{z}^{\prime}_{i2}\mathbf{z}^{\prime}_{id}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{1}{n}\sum_{i}\pi(\mathbf{z}^{\prime}_{i})\mathbf{z}^{\prime}_{i1}\mathbf{z }^{\prime}_{id}&\frac{1}{n}\sum_{i}\pi(\mathbf{z}^{\prime}_{i})\mathbf{z}^{ \prime}_{i2}\mathbf{z}^{\prime}_{id}&\cdots&\frac{1}{n}\sum_{i}\pi(\mathbf{z}^{ \prime}_{i})(\mathbf{z}^{\prime}_{id})^{2}+\frac{\lambda}{n}\end{bmatrix}\]By taking \(n\to\infty\), we have
\[\mathbf{\Sigma}_{n}\to\mathbf{\Sigma}=(\alpha_{1}-\alpha_{2})\mathbf{I}+\alpha_{2} \mathbf{1}\mathbf{1}^{\top}\]
where
\[\alpha_{1}= \mathbb{E}_{\mathbf{z}^{\prime}\sim\mathsf{Uni}\left(\{0,1\}^{d} \right)}[\pi(\mathbf{z}^{\prime})z_{i}^{\prime}]=\mathbb{E}_{\mathbf{z}^{ \prime}\sim\mathsf{Uni}\left(\{0,1\}^{d}\right)}[\pi(\mathbf{z}^{\prime})(z_{ i}^{\prime})^{2}]\] \[= \sum_{k=0}^{d}e^{(k-d)/\sigma^{2}}\mathbb{P}(z_{i}^{\prime}=1|| \mathbf{z}^{\prime}||_{0}=k)\mathbb{P}(\|\mathbf{z}^{\prime}\|_{0}=k)\] \[= \sum_{k=0}^{d}e^{(k-d)/\sigma^{2}}\frac{k}{d}\frac{\binom{d}{k-1} }{2^{d}}\] \[= \sum_{k=0}^{d}e^{(k-d)/\sigma^{2}}\frac{\binom{d-1}{k-1}}{2^{d}}\] \[= \sum_{k=0}^{d}e^{(k-1)/\sigma^{2}}e^{(1-d)/\sigma^{2}}\frac{ \binom{d-1}{k-1}}{2^{d}}\] \[= e^{(1-d)/\sigma^{2}}\frac{(1+e^{\frac{1}{\sigma^{2}}})^{d-1}}{2 ^{d}}=\frac{(1+e^{-\frac{1}{\sigma^{2}}})^{d-1}}{2^{d}}\] \[\alpha_{2}= \mathbb{E}_{\mathbf{z}^{\prime}\sim\mathsf{Uni}\left(\{0,1\}^{d} \right)}[\pi(\mathbf{z}^{\prime})z_{i}^{\prime}z_{j}^{\prime}]\] \[= \frac{1}{Z}\sum_{k=0}^{d}e^{(k-d)/\sigma^{2}}\mathbb{P}(z_{i}^{ \prime}=1,z_{j}^{\prime}=1||\mathbf{z}^{\prime}||_{0}=k)\mathbb{P}(\|\mathbf{z }^{\prime}\|_{0}=k)\] \[= \sum_{k=0}^{d}e^{(k-d)/\sigma^{2}}\frac{k(k-1)}{d(d-1)}\frac{ \binom{d}{k}}{2^{d}}\] \[= \sum_{k=0}^{d}e^{(k-d)/\sigma^{2}}\frac{\binom{d-2}{k-2}}{2^{d}}\] \[= \sum_{k=0}^{d}e^{(k-2)/\sigma^{2}}e^{(2-d)/\sigma^{2}}\frac{ \binom{d-2}{k-2}}{2^{d}}\] \[= e^{(2-d)/\sigma^{2}}\frac{(1+e^{\frac{1}{\sigma^{2}}})^{d-2}}{2 ^{d}}=\frac{(1+e^{-\frac{1}{\sigma^{2}}})^{d-2}}{2^{d}}\]
By Sherman-Morrison formula, we have
\[\mathbf{\Sigma}^{-1} =((\alpha_{1}-\alpha_{2})\mathbf{I}+\alpha_{2}\mathbf{1}\mathbf{1 }^{\top})^{-1}=\frac{1}{\alpha_{1}-\alpha_{2}}(\mathbf{I}+\frac{\alpha_{2}}{ \alpha_{1}-\alpha_{2}}\mathbf{1}\mathbf{1}^{\top})^{-1}\] \[=\frac{1}{\alpha_{1}-\alpha_{2}}(\mathbf{I}-\frac{\frac{\alpha_{ 2}}{\alpha_{1}-\alpha_{2}}\mathbf{1}\mathbf{1}^{\top}}{1+\frac{\alpha_{2}}{ \alpha_{1}-\alpha_{2}}d})=(\beta_{1}-\beta_{2})\mathbf{I}+\beta_{2}\mathbf{1} \mathbf{1}^{\top}\]
where
\[\beta_{1}=\frac{\alpha_{1}+(d-2)\alpha_{2}}{(\alpha_{1}-\alpha_{2})(\alpha_{1} +(d-1)\alpha_{2})},\quad\beta_{2}=-\frac{\alpha_{2}}{(\alpha_{1}-\alpha_{2})( \alpha_{1}+(d-1)\alpha_{2})}\]
In the following, we aim to establish the concentration of \(\hat{\mathbf{w}}^{\text{LIME}}\).
**Concentration of \(\mathbf{\Sigma}_{n}\).** Considering \(0\leq\pi(\cdot)\leq 1\) and \(\mathbf{z}_{i}\in\{0,1\}^{d}\), each element within \(\mathbf{\Sigma}_{n}\) resides within the interval of \([0,2]\). Moreover, as
\[\frac{1}{2^{d}}\leq\alpha_{1}=\frac{(1+e^{-\frac{1}{\sigma^{2}}})^{d-1}}{2^{d} }\leq\frac{2^{d-1}}{2^{d}}=\frac{1}{2}\]
\[\frac{1}{2^{d}}\leq\alpha_{2}=\frac{(1+e^{-\frac{1}{\sigma^{2}}})^{d-2}}{2^{d} }\leq\frac{2^{d-2}}{2^{d}}=\frac{1}{4}\]\[\frac{e^{-1/\sigma^{2}}}{2^{d}}\leq\alpha_{1}-\alpha_{2}=e^{-\frac{1}{\sigma^{2}}} \frac{(1+e^{-\frac{1}{\sigma^{2}}})^{d-2}}{2^{d}}\leq\frac{1}{4}\]
The elements within \(\mathbf{\Sigma}\) are within the range of \([0,\frac{1}{4}]\). Consequently, the elements in \(\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\) fall within the range of \([-\frac{1}{4},2]\).
Referring to the matrix Hoeffding's inequality [31], it holds true that for all \(t>0\),
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|_{2}\geq t)\leq 2d\exp(-\frac{nt ^{2}}{32d^{2}})\]
**Bounding \(\|\mathbf{\Sigma}^{-1}\|_{F}^{2}\).**
\[\|\mathbf{\Sigma}^{-1}\|_{F}^{2}=d\beta_{1}^{2}+(d^{2}-d)\beta_{2}^{2}\]
Because
\[\frac{d}{2^{d}}\leq\alpha_{1}+(d-1)\alpha_{2}\leq\frac{2+(d-1)}{4}=\frac{d+1} {4}\]
\[\frac{d-1}{2^{d}}\leq\alpha_{1}+(d-2)\alpha_{2}\leq\frac{2+(d-2)}{4}=\frac{d} {4}\]
we have
\[|\beta_{1}|=\left|\frac{\alpha_{1}+(d-2)\alpha_{2}}{(\alpha_{1}-\alpha_{2})( \alpha_{1}+(d-1)\alpha_{2})}\right|\leq\left|\frac{1}{\alpha_{1}-\alpha_{2}} \right|\leq 2^{d}e^{1/\sigma^{2}},\beta_{1}^{2}\leq 2^{2d}e^{2/\sigma^{2}}\]
\[|\beta_{2}|=\left|-\frac{\alpha_{2}}{(\alpha_{1}-\alpha_{2})(\alpha_{1}+(d-1) \alpha_{2})}\right|=\left|e^{1/\sigma^{2}}\frac{1}{(\alpha_{1}+(d-1)\alpha_{2} )}\right|\leq d^{-1}2^{d}e^{1/\sigma^{2}},\]
\[\beta_{2}^{2}\leq d^{-2}2^{2d}e^{2/\sigma^{2}}\]
so that
\[\|\mathbf{\Sigma}^{-1}\|_{F}^{2}=d\beta_{1}^{2}+(d^{2}-d)\beta_{2}^{2}\leq d2^ {2d}e^{2/\sigma^{2}}+(d^{2}-d)d^{-2}2^{2d}e^{2/\sigma^{2}}\leq 2d2^{2d}e^{2/ \sigma^{2}}\]
**Concentration of \(\mathbf{\Gamma}_{n}\).** With \(|f|\leq 1\), all elements within both \(\mathbf{\Gamma}_{n}\) and \(\mathbf{\Gamma}\) exist within the range of \([0,1]\). According to matrix Hoeffding's inequality [31], for all \(t>0\),
\[\mathbb{P}(\|\mathbf{\Gamma}_{n}-\mathbf{\Gamma}\|\geq t)\leq 2d\exp\left(- \frac{nt^{2}}{8d}\right)\]
**Concentration of \(\hat{\mathbf{w}}^{\text{LIME}}\).** When \(\|\mathbf{\Sigma}^{-1}(\mathbf{\Sigma}_{n}-\mathbf{\Sigma})\|\leq 0.32\)[8], we have
\[\|\mathbf{\Sigma}_{n}^{-1}\mathbf{\Gamma}_{n}-\mathbf{\Sigma}^{-1}\mathbf{ \Gamma}\|\leq 2\|\mathbf{\Sigma}^{-1}\|_{F}\|\mathbf{\Gamma}_{n}-\mathbf{ \Gamma}\|_{2}+2\|\mathbf{\Sigma}^{-1}\|_{F}^{2}\|\mathbf{\Gamma}\|\|\mathbf{ \Sigma}_{n}-\mathbf{\Sigma}\|\]
Given that
\[\|\mathbf{\Sigma}^{-1}(\mathbf{\Sigma}_{n}-\mathbf{\Sigma})\|\leq\|\mathbf{ \Sigma}^{-1}\|\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|\leq 2^{1/2}d^{1/2}2^{d}e^ {1/\sigma^{2}}\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|\]
Exploiting the concentration of \(\mathbf{\Sigma}_{n}\), where \(n\geq n_{1}=2^{7}d^{3}2^{2d}e^{2/\sigma^{2}}\log(4d/\delta)\) and \(t=t_{1}=5^{-2}2^{2.5}d^{-0.5}2^{-d}e^{-1/\sigma^{2}}\), we have
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|_{2}\geq t)\leq 2d\exp\left(- \frac{nt^{2}}{32d^{2}}\right)\leq 2d\exp\left(-\frac{nt^{2}}{32d^{2}}\right)\leq \frac{\delta}{2}\]Therefore, with a probability of at least \(1-\frac{\delta}{2}\), we have
\[\|\mathbf{\Sigma}^{-1}(\mathbf{\Sigma}_{n}-\mathbf{\Sigma})\|\leq\|\mathbf{\Sigma }^{-1}\|\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|\leq 2^{1/2}d^{1/2}2^{d}e^{1/\sigma^{2}} \|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|\leq 0.32\]
For \(n\geq n_{2}=2^{8}\epsilon^{-2}d^{2}2^{2d}e^{2/\sigma^{2}}\log(4d/\delta)\) and \(t_{2}=2^{-2.5}d^{-0.5}2^{-d}e^{-1/\sigma^{2}}\epsilon\), the following concentration inequality holds:
\[\mathbb{P}(\|\mathbf{\Gamma}_{n}-\mathbf{\Gamma}\|\geq t_{2})\leq 2d\exp\left(- \frac{n_{2}t_{2}^{2}}{8d}\right)\leq\frac{\delta}{2}\]
In this context, with a probability of at least \(1-\frac{\delta}{2}\), we have
\[\|\mathbf{\Sigma}^{-1}\|\|\mathbf{\Gamma}_{n}-\mathbf{\Gamma}\|\leq\frac{ \epsilon}{4}\]
Considering \(\|\mathbf{\Gamma}\|\leq\sqrt{d}\), we select \(n\geq n_{3}=2^{9}\epsilon^{-2}d^{5}2^{4d}e^{4/\sigma^{2}}\log(4d/\delta)\) and \(t_{3}=2^{-3}d^{-1.5}2^{-2d}e^{-2/\sigma^{2}}\epsilon\), leading to
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|_{2}\geq t_{3})\leq 2d\exp \left(-\frac{n_{3}t_{3}^{2}}{32d^{2}}\right)\leq 2d\exp\left(-\frac{n_{3}t_{3}^{2}}{32d^ {2}}\right)\leq\frac{\delta}{2}\]
With a probability at least \(1-\delta/2\), we have
\[\|\mathbf{\Sigma}^{-1}\|^{2}\|\mathbf{\Gamma}\|\|\mathbf{\Sigma}_{n}-\mathbf{ \Sigma}\|\leq\frac{\epsilon}{4}\]
In summary, we choose \(n\geq\max\{n_{1},n_{2},n_{3}\}\), and then for all \(\epsilon>0,\delta\in(0,1)\)
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}^{-1}\mathbf{\Gamma}_{n}-\mathbf{\Sigma}^{-1} \mathbf{\Gamma}\|\geq\epsilon)\leq\delta\]
### Proof of Theorem 4.2 and Corollary 4.3
**Theorem B.2**.: _Suppose \(\mathbf{z}^{\prime}{\sim}{\mathcal{P}}\) such that the largest eigenvalue of \(\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top}\) is bounded by \(R\) and \(\mathbb{E}[\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top}]=(\alpha_{1}-\alpha _{2})\mathbf{I}\mathbf{I}+\alpha_{2}\mathbf{1}\mathbf{1}^{\top},\|\text{Var}( \mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top})\|_{2}\leq\nu^{2}\), \(|(\mathbf{z}^{\prime}f(\mathbf{z}))_{i}|\leq M\) for some \(M>0\). \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{n}\) are i.i.d. samples from \({\mathcal{P}}\) and are used to compute Gline explanation \(\hat{\mathbf{w}}^{\text{Gline}}\). For any \(\epsilon>0,\delta\in(0,1)\), if \(n=\Omega(\epsilon^{-2}M^{2}\nu^{2}d^{3}\gamma^{4}\log(4d/\delta))\) where \(\gamma^{2}=d\beta_{1}^{2}+(d^{2}-d)\beta_{2}^{2},\beta_{1}=(\alpha_{1}+(d-2) \alpha_{2})/\beta_{0},\beta_{2}=-\alpha_{2}/\beta_{0},\beta_{0}=(\alpha_{1}- \alpha_{2})(\alpha_{1}+(d-1)\alpha_{2}))\), we have \(\mathbb{P}(\|\hat{\mathbf{w}}^{\text{Gline}}-\mathbf{w}^{\text{Gline}}\|_{2}< \epsilon)\geq 1-\delta\). \(\mathbf{w}^{\text{Gline}}=\lim_{n\rightarrow\infty}\hat{\mathbf{w}}^{\text{Gline}}\)._
Proof.: The proof closely resembles that of Theorem 4.1. Employing the same derivation, we deduce that:
\[\mathbf{\Sigma}=(\alpha_{1}+\lambda-\alpha_{2})\mathbf{I}+\alpha_{2}\mathbf{1} \mathbf{1}^{\top},\quad\mathbf{\Sigma}^{-1}=(\beta_{1}-\beta_{2})\mathbf{I}+ \beta_{2}\mathbf{1}\mathbf{1}^{\top}\]
where
\[\beta_{1}=\frac{\alpha_{1}+\lambda+(d-2)\alpha_{2}}{(\alpha_{1}+\lambda-\alpha _{2})(\alpha_{1}+\lambda+(d-1)\alpha_{2})},\quad\beta_{2}=-\frac{\alpha_{2}}{ (\alpha_{1}+\lambda-\alpha_{2})(\alpha_{1}+\lambda+(d-1)\alpha_{2})}\]
Given that \(\lambda_{\max}(z^{\prime}(z^{\prime})^{\top})\leq R\) and \(\|\text{Var}(\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top})\|_{2}\leq\nu^{2}\), according to the matrix Hoeffding's inequality [31], for all \(t>0\):
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|_{2}\geq t)\leq 2d\exp\left(- \frac{nt^{2}}{8\nu^{2}}\right)\]Applying Hoeffding's inequality coordinate-wise, we obtain:
\[\mathbb{P}(\|\mathbf{\Gamma}_{n}-\mathbf{\Gamma}\|\geq t)\leq 2d\exp\left(-\frac{nt^{2 }}{8M^{2}d^{2}}\right)\]
Additionally,
\[\|\mathbf{\Sigma}^{-1}\|_{F}^{2}=d\beta_{1}^{2}+(d^{2}-d)\beta_{2}^{2}=\gamma^{2}\]
By selecting \(n\geq n_{1}=2^{5}\gamma^{2}\nu^{2}\log(4d/\delta)\) and \(t_{1}=2^{3}5^{-2}\gamma^{-1}\), we obtain
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|_{2}\geq t_{1})\leq 2d\exp \left(-\frac{n_{1}t_{1}^{2}}{8\nu^{2}}\right)\leq\frac{\delta}{2}\]
with a probability of at least \(1-\delta/2\).
\[\|\mathbf{\Sigma}^{-1}(\mathbf{\Sigma}_{n}-\mathbf{\Sigma})\|\leq\|\mathbf{ \Sigma}^{-1}\|\cdot\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|\leq\gamma t_{1}=0.32\]
Letting \(n\geq n_{2}=2^{5}\epsilon^{-2}M^{2}d^{2}\gamma^{2}\log(4d/\delta)\) and \(t_{2}=2^{-2}\epsilon\gamma^{-1}\), we have
\[\mathbb{P}(\|\mathbf{\Gamma}_{n}-\mathbf{\Gamma}\|\geq t_{2})\leq 2d\exp \left(-\frac{n_{2}t_{2}^{2}}{8M^{2}d^{2}}\right)\leq\frac{\delta}{2}\]
with a probability of at least \(1-\delta/2\).
\[\|\mathbf{\Sigma}\|\|\mathbf{\Gamma}_{n}-\mathbf{\Gamma}\|\leq\gamma t_{2}\leq \frac{\epsilon}{4}\]
As \(\|\mathbf{\Gamma}\|\leq M\), by choosing \(n\geq n_{3}=2^{5}\epsilon^{-2}M^{2}\nu^{2}d\gamma^{4}\log(4d/\delta)\) and \(t_{3}=2^{-2}\epsilon M^{-1}d^{-0.5}\gamma^{-2}\), we have
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}-\mathbf{\Sigma}\|_{2}\geq t_{3})\leq 2d\exp \left(-\frac{n_{3}t_{3}^{2}}{2\nu^{2}}\right)\leq\frac{\delta}{2}\]
and with a probability of at least \(1-\delta/2\),
\[\|\mathbf{\Sigma}^{-1}\|^{2}\|\mathbf{\Gamma}\|\|\mathbf{\Sigma}_{n}-\mathbf{ \Sigma}\|\leq\gamma^{2}Md^{0.5}t_{3}=\frac{\epsilon}{4}\]
Therefore, by choosing \(n=\max\{n_{1},n_{2},n_{3}\}\), we have
\[\mathbb{P}(\|\mathbf{\Sigma}_{n}^{-1}\mathbf{\Gamma}_{n}-\mathbf{\Sigma}^{-1} \mathbf{\Gamma}\|\geq\epsilon)\leq\delta\]
**Corollary B.3**.: _Suppose \(\{\mathbf{z}^{\prime}_{i}\}_{i=1}^{n}\) are i.i.d. samples from \(\mathbb{P}(\mathbf{z}^{\prime},\|\mathbf{z}^{\prime}\|_{0}=k)=e^{k/\sigma^{2}}/ (1+e^{1/\sigma^{2}})^{d},k=1,\ldots,d\) are used to compute Glime-Binomial explanation. For any \(\epsilon>0,\delta\in(0,1)\), if \(n=\Omega(\epsilon^{-2}d^{5}e^{4/\sigma^{2}}\log(4d/\delta)),\) we have \(\mathbb{P}(\|\hat{\mathbf{w}}^{\text{Binomial}}-\mathbf{w}^{\text{Binomial}}\| _{2}<\epsilon)\geq 1-\delta\). \(\mathbf{w}^{\text{Binomial}}=\lim_{n\rightarrow\infty}\hat{\mathbf{w}}^{\text{ Binomial}}\)._
Proof.: For Glime-Binomial, each coordinate of \(\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top}\) follows a Bernoulli distribution, ensuring the bounded variance of both \(\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{\top}\) and \((\mathbf{z}^{\prime}f(\mathbf{z}^{\prime}))_{i}\). Additionally, we have
\[\|\mathbf{\Gamma}\|\leq\sqrt{d},\]\[\alpha_{1}= \mathbb{E}[(z_{i}^{2})^{\prime}]=\mathbb{E}[z_{i}^{\prime}]=\frac{e^{ 1/\sigma^{2}}}{1+e^{1/\sigma^{2}}}\] \[= \sum_{k=0}^{d}\mathbb{P}(z_{i}^{\prime}=1||\mathbf{z}^{\prime}||_{ 0}=k)\mathbb{P}(\|\mathbf{z}^{\prime}\|_{0}=k)\] \[= \sum_{k=0}^{d}\frac{k}{d}\frac{\binom{d}{k}e^{k/\sigma^{2}}}{(1+e ^{1/\sigma^{2}})^{d}}\] \[= \sum_{k=0}^{d}\frac{\binom{d-1}{k-1}e^{k/\sigma^{2}}}{(1+e^{1/ \sigma^{2}})^{d}}\] \[= \frac{(1+e^{1/\sigma^{2}})^{d-1}}{(1+e^{1/\sigma^{2}})^{d}}e^{1/ \sigma^{2}}=\frac{e^{1/\sigma^{2}}}{1+e^{1/\sigma^{2}}}\] \[\alpha_{2}= \mathbb{E}[z_{i}^{\prime}z_{j}^{\prime}]=\frac{e^{1/\sigma^{2}}} {1+e^{1/\sigma^{2}}}\] \[= \sum_{k=0}^{d}\mathbb{P}(z_{i}^{\prime}=1,z_{j}^{\prime}=1|| \mathbf{z}^{\prime}\|_{0}=k)\mathbb{P}(\|\mathbf{z}^{\prime}\|_{0}=k)\] \[= \sum_{k=0}^{d}\frac{k(k-1)}{d(d-1)}\frac{\binom{d}{k}e^{k/\sigma^ {2}}}{(1+e^{1/\sigma^{2}})^{d}}\] \[= \sum_{k=0}^{d}\frac{\binom{d-2}{k-2}e^{k/\sigma^{2}}}{(1+e^{1/ \sigma^{2}})^{d}}\] \[= \frac{(1+e^{1/\sigma^{2}})^{d-2}}{(1+e^{1/\sigma^{2}})^{d}}e^{2/ \sigma^{2}}=\frac{e^{2/\sigma^{2}}}{(1+e^{1/\sigma^{2}})^{2}}=\alpha_{1}^{2}\] \[|\beta_{1}|^{2}= |\frac{\alpha_{1}+\lambda+(d-2)\alpha_{2}}{(\alpha_{1}+\lambda- \alpha_{2})(\alpha_{1}+\lambda+(d-1)\alpha_{2})}|^{2}\] \[\leq |\frac{1}{\alpha_{1}+\lambda-\alpha_{2}}|\leq\frac{1}{|\alpha_{1} -\alpha_{2}|}=e^{-1/\sigma^{2}}(1+e^{1/\sigma^{2}})^{2}\leq 4e^{1/\sigma^{2}}\] \[|\beta_{2}|^{2}= |-\frac{\alpha_{2}}{(\alpha_{1}+\lambda-\alpha_{2})(\alpha_{1}+ \lambda+(d-1)\alpha_{2})}|^{2}\] \[\leq \frac{\alpha_{2}^{2}}{(\alpha_{1}-\alpha_{2})(\alpha_{1}+\lambda +(d-1)\alpha_{2})^{2}}\] \[= \frac{\alpha_{1}\alpha_{2}}{(1-\alpha_{1})(\alpha_{1}+(d-1)\alpha _{2})^{2}}\] \[\leq \frac{\alpha_{1}\alpha_{2}}{(1-\alpha_{1})((d-1)\alpha_{2})^{2}}= \frac{e^{-1/\sigma^{2}}(1+e^{1/\sigma^{2}})^{2}}{(d-1)^{2}}\leq\frac{2^{2}e^{ 1/\sigma^{2}}}{(d-1)^{2}}\]
Therefore,
\[d\beta_{1}^{2}+(d^{2}-d)\beta_{2}^{2}\leq de^{1/\sigma^{2}}+e^{1/\sigma^{2}} \frac{d}{d-1}\leq de^{1/\sigma^{2}}\]
### Formulation of SmoothGrad
**Proposition B.4**.: _SmoothGrad is equivalent to Glime formulation with \(\mathbf{z}=\mathbf{z}^{\prime}+\mathbf{x}\) where \(\mathbf{z}^{\prime}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\), \(\ell(f(\mathbf{z}),g(\mathbf{z}^{\prime}))=(f(\mathbf{z})-g(\mathbf{z}^{ \prime}))^{2}\) and \(\pi(\mathbf{z})=1,\Omega(\mathbf{v})=0\)._
_The explanation returned by Glime for \(f\) at \(\mathbf{x}\) with infinitely many samples under the above setting is_
\[\mathbf{w}^{*}=\frac{1}{\sigma^{2}}\mathbb{E}_{\mathbf{z}^{\prime}\sim\mathcal{ N}(\mathbf{0},\sigma^{2}\mathbf{I})}[\mathbf{z}^{\prime}f(\mathbf{z}^{\prime}+ \mathbf{x})]=\mathbb{E}_{\mathbf{z}^{\prime}\sim\mathcal{N}(\mathbf{0}, \sigma^{2}\mathbf{I})}[\nabla f(\mathbf{x}+\mathbf{z}^{\prime})]\]
_which is exactly SmoothGrad explanation. When \(\sigma\to 0\), \(\mathbf{w}^{*}\rightarrow\nabla f(\mathbf{x}+\mathbf{z})|_{\mathbf{z}=\mathbf{ 0}}\)._Proof.: To establish this proposition, we commence by deriving the expression for the Glime explanation vector \(\mathbf{w}^{*}\).
**Exact Expression of \(\mathbf{\Sigma}\):** For each \(i=1,\cdots,n\), let \(\mathbf{z}^{\prime}_{i}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\). In this context,
\[\hat{\mathbf{\Sigma}}_{n}=\begin{bmatrix}\frac{1}{n}\sum_{k}(z_{k1}^{2})^{ \prime}&\cdots&\frac{1}{n}\sum_{k}z_{l1}^{\prime}z_{kd}^{\prime}\\ \vdots&\ddots&\vdots\\ \frac{1}{n}\sum_{k}z_{kd}^{\prime}z_{k1}^{\prime}&\cdots&\frac{1}{n}\sum_{k}(z_ {kd}^{2})^{\prime}\end{bmatrix}\]
This implies
\[\mathbf{\Sigma}=\mathbb{E}_{\mathbf{z}^{\prime}\sim\mathcal{N}( \mathbf{0},\sigma^{2}\mathbf{I})}[\mathbf{z}^{\prime}(\mathbf{z}^{\prime})^{ \top}]=\begin{bmatrix}\sigma^{2}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&\sigma^{2}\end{bmatrix}\] \[\mathbf{\Sigma}^{-1}=\begin{bmatrix}\frac{1}{\sigma^{2}}&\cdots&0 \\ \vdots&\ddots&\vdots\\ 0&\cdots&\frac{1}{\sigma^{2}}\end{bmatrix}\]
Consequently, we obtain
\[\mathbf{w}^{*}=\mathbf{\Sigma}^{-1}\mathbf{\Gamma}=\frac{1}{\sigma^{2}} \mathbb{E}_{\mathbf{z}^{\prime}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I} )}[\mathbf{z}^{\prime}f(\mathbf{x}+\mathbf{z}^{\prime})]=\mathbb{E}_{\mathbf{z }^{\prime}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})}[\nabla f(\mathbf{x }+\mathbf{z}^{\prime})]\]
The final equality is a direct consequence of Stein's lemma [17]. | ## Review
### Summary
The paper presents GLIME, a novel approach that addresses the instability and diminished local fidelity issues inherent in the original LIME method for model-agnostic explanations. By employing a new sampling scheme that guarantees a faster sampling rate and modifying the sampling distribution to prioritize nearby samples, GLIME improves upon LIME. The authors provide a rigorous theoretical analysis and empirical demonstrations to validate their claims. Overall, the work contributes significantly to the field of explainable AI by offering a more stable and general framework for model explanations, particularly in the image domain.
### Strengths
- The problem addressed is specific and well-formulated.
- The proposed solution is simple and effective, supported by rigorous sample complexity analysis.
- The paper's clarity and readability are commendable, making the technical contributions accessible.
- GLIME shows potential to become a standard replacement for LIME.
### Weaknesses
- The applicability of GLIME is limited to the image domain, with no evaluation on other data types.
- The empirical section lacks comparisons to similar methods like ALIME.
- Some theoretical properties of GLIME are not empirically verified.
- Figures require improvement for better clarity and understanding.
### Questions
- Could the local fidelity problem be addressed by adjusting the regularization weight?
- What are GLIME's shortcomings, and what future improvements are planned?
- Has a comparison between GLIME and ALIME been considered?
- What are the empirical results of using GLIME on different datasets?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical foundations are solid, but some empirical validations are lacking.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is mostly clear but can benefit from improved figures and organization.
### Contribution
**Score:** 3
**Description:** 3 = good; the contribution is significant but could be better contextualized within the broader field of model explanation methods.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid with moderate-to-high impact in the field.
### Paper Decision
**Decision:** Accept
**Reasons:** The paper presents an original approach to a significant issue in explainable AI, demonstrating soundness and clarity in its methodology. Although there are some weaknesses, particularly regarding the empirical validation and broader applicability, the overall contribution to the field is strong, warranting acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Neural Frailty Machine: Beyond proportional hazard assumption in neural survival regressions
Ruofan Wu\({}^{*}\)
Equal contribution
Jiawei Qiao\({}^{*}\)
Equal contribution
Mingzhe Wu\({}^{\lx@sectionsign}\)
Wen Yu\({}^{\ddagger}\)
Ming Zheng\({}^{\ddagger}\)
Tengfei Liu\({}^{\dagger}\)
Tianyi Zhang\({}^{\dagger}\)
Weiqiang Wang\({}^{\dagger}\)
\({}^{\dagger}\)Ant Group
\({}^{\ddagger}\)Fudan University
\({}^{\lx@sectionsign}\)Coupang
{ruofan.wrf, aaron.ltf, zty113091, weiqiang.wwq}@antgroup.com
[email protected], [email protected], {wenyu, mingzheng}@fudan.edu.cn
###### Abstract
We present neural frailty machine (NFM), a powerful and flexible neural modeling framework for survival regressions. The NFM framework utilizes the classical idea of multiplicative frailty in survival analysis as a principled way of extending the proportional hazard assumption, at the same time being able to leverage the strong approximation power of neural architectures for handling nonlinear covariate dependence. Two concrete models are derived under the framework that extends neural proportional hazard models and nonparametric hazard regression models. Both models allow efficient training under the likelihood objective. Theoretically, for both proposed models, we establish statistical guarantees of neural function approximation with respect to nonparametric components via characterizing their rate of convergence. Empirically, we provide synthetic experiments that verify our theoretical statements. We also conduct experimental evaluations over \(6\) benchmark datasets of different scales, showing that the proposed NFM models achieve predictive performance comparable to or sometimes surpassing state-of-the-art survival models. Our code is publicly availabel at [https://github.com/Rorschach1989/nfm](https://github.com/Rorschach1989/nfm)
## 1 Introduction
Regression analysis of time-to-event data [40] has been among the most important modeling tools for clinical studies and has witnessed a growing interest in areas like corporate finance [22], recommendation systems [38], and computational advertising [71]. The key feature that differentiates time-to-event data from other types of data is that they are often _incompletely observed_, with the most prevailing form of incompleteness being the _right censoring_ mechanism [40]. In the right censoring mechanism, the duration time of a sampled subject is (sometimes) only known to be larger than the observation time instead of being recorded precisely. It is well known in the community of survival analysis that even in the case of linear regression, naively discarding the censored observations produces estimation results that are statistically biased [7], at the same time losses sample efficiency if the censoring proportion is high.
Cox's proportional hazard (CoxPH) model [14] using the convex objective of negative partial likelihood [15] is the _de facto_ choice in modeling right censored time-to-event data (hereafter abbreviated as censored data without misunderstandings). The model is _semiparametric_[4] in the sense that the baseline hazard function needs no parametric assumptions. The original formulation of CoxPH modelassumes a linear form and therefore has limited flexibility since the truth is not necessarily linear. Subsequent studies extended CoxPH model to nonlinear variants using ideas from nonparametric regression [36; 8; 9], ensemble learning [37], and neural networks [25; 42]. While such extensions allowed a more flexible nonlinear dependence structure with the covariates, the learning objectives were still derived under the proportional hazards (PH) assumption, which was shown to be inadequate in many real-world scenarios [31]. The most notable case was the failure of modeling the phenomenon of crossing hazards [61]. It is thus of significant interest to explore extensions of CoxPH that both allow nonlinear dependence over covariates and relaxations of the PH assumption.
Frailty models [69; 21] are among the most important research topics in modern survival analysis, in that they provide a principled way of extending CoxPH model via incorporating a multiplicative random effect to capture unobserved heterogeneity. The resulting parameterization contains many useful variants of CoxPH like the proportional odds model [3], under specific choices of frailty families. While the theory of frailty models has been well-established [48; 49; 53; 45], most of them focused on the linear case. Recent developments on applying neural approaches to survival analysis [42; 46; 64; 56] have shown promising results in terms of empirical predictive performance, with most of them lacking theoretical discussions. Therefore, it is of significant interest to build more powerful frailty models via adopting techniques in modern deep learning [29] with provable statistical guarantees.
In this paper, we present a general framework for neural extensions of frailty models called the **neural frailty machine (NFM)**. Two concrete neural architectures are derived under the framework: The first one adopts the proportional frailty assumption, allowing an intuitive interpretation of the neural CoxPH model with a multiplicative random effect. The second one further relaxes the proportional frailty assumption and could be viewed as an extension of nonparametric hazard regression (NHR) [13; 44], sometimes referred to as "fully neural" models under the context of neural survival analysis [52]. We summarize our contributions as follows.
* We propose the neural frailty machine (NFM) framework as a principled way of incorporating unobserved heterogeneity into neural survival regression models. The framework includes many commonly used survival regression models as special cases.
* We derive two model architectures based on the NFM framework that extend neural CoxPH models and neural NHR models. Both models allow stochastic training and scale to large datasets.
* Theoretically, we show _statistical correctness_ of the two proposed models via characterizing the rates of convergence of the proposed nonparametric function estimators. The proof technique is different from previous theoretical studies on neural survival analysis and is applicable to many other types of neural survival models.
* Empirically, we verify the _empirical efficacy_ of the proposed framework via conducting extensive studies on various benchmark datasets at different scales. Under standard performance metrics, both models are empirically shown to perform competitively, matching or sometimes outperforming state-of-the-art neural survival models.
## 2 Related works
### Nonlinear extensions of CoxPH
Most nonlinear extensions of CoxPH model stem from the equivalence of partial likelihood and semiparametric profile likelihood [50] of CoxPH model, resulting in nonlinear variants that essentially replaces the linear term in partial likelihood with nonlinear variants: [36] used smoothing splines, [8; 9] used local polynomial regression [24]. The empirical success of tree-based models inspired subsequent developments like [37] that equip tree-based models such as gradient boosting trees and random forests with losses in the form of negative log partial likelihood. Early developments of neural survival analysis [25] adopted similar extension strategies and obtained neural versions of partial likelihood. Later attempts [42] suggested using the successful practice of stochastic training which is believed to be at the heart of the empirical success of modern neural methods [34]. However, stochastic training under the partial likelihood objective is highly non-trivial, as mini-batch versions of log partial likelihood [42] are no longer valid stochastic gradients of the full-sample log partial likelihood [64].
### Beyond CoxPH in survival analysis
In linear survival modeling, there are standard alternatives to CoxPH such as the accelerated failure time (AFT) model [7; 73], the extended hazard regression model [23], and the family of linear transformation models [74]. While these models allow certain types of nonlinear extensions, the resulting form of (conditional) hazard function is still restricted to be of a specific form. The idea of nonparametric hazard regression (NHR) [13; 44; 63] further improves the flexibility of nonparametric survival analysis via directly modeling the conditional hazard function by nonparametric regression techniques such as spline approximation. Neural versions of NHR have been developed lately such as the CoxTime model [46]. [56] used a neural network to approximate the conditional survival function and could be thus viewed as another trivial extension of NHR.
Aside from developments in NHR, [47] proposed a discrete-time model with its objective being a mix of the discrete likelihood and a rank-based score; [75] proposed a neural version of the extended hazard model, unifying both neural CoxPH and neural AFT model; [64] used an ODE approach to model the hazard and cumulative hazard functions.
### Theoretical justification of neural survival models
Despite the abundance of neural survival models, assessment of their theoretical properties remains nascent. In [76], the authors developed minimax theories of partially linear cox model using neural networks as the functional approximator. [75] provided convergence guarantees of neural estimates under the extended hazard model. The theoretical developments therein rely on specific forms of objective function (partial likelihood and kernel pseudo-likelihood) and are not directly applicable to the standard likelihood-based objective which is frequently used in survival analysis.
## 3 Methodology
### The neural frailty machine framework
Let \(\tilde{T}\geq 0\) be the interested event time with survival function denoted by \(S(t)=\mathbb{P}(\tilde{T}>t)\) associated with a feature(covariate) vector \(Z\in\mathbb{R}^{d}\). Suppose that \(\tilde{T}\) is a continuous random variable and let \(f(t)\) be its density function. Then \(\lambda(t)=f(t)/S(t)\) is the hazard function and \(\Lambda(t)=\int_{0}^{t}\lambda(s)ds\) is the cumulative hazard function. Aside from the covariate \(Z\), we use a positive scalar random variable \(\omega\in\mathbb{R}^{+}\) to express the unobserved heterogeneity corresponding to individuals, or _frailty_. 2. In this paper we will assume the following generating scheme of \(\tilde{T}\) via specifying its conditional hazard function:
Footnote 2: For example in medical biology, it was observed that genetically identical animals kept in as similar an environment as possible will typically not behave the same upon exposure to environmental carcinogens [6]
\[\lambda(t|Z,\omega)=\omega\widetilde{\nu}(t,Z). \tag{1}\]
Here \(\widetilde{\nu}\) is an unspecified non-negative function, and we let the distribution of \(\omega\) be parameterized by a one-dimensional parameter \(\theta\in\mathbb{R}\). 3 The formulation (1) is quite general and contains several important models in both traditional and neural survival analysis:
Footnote 3: The choice of one-dimensional frailty family is mostly for simplicity and clearness of theoretical derivations. Note that there exist multi-dimensional frailty families like the PVF family [69]. Generalizing our theoretical results to such kinds of families would require additional sets of regularity conditions, and will be left to future explorations.
1. When \(\omega\) follows parametric distributional assumptions, and \(\widetilde{\nu}(t,Z)=\lambda(t)e^{\beta^{\top}Z}\), (1) reduces to the standard proportional frailty model [45]. A special case is when \(\omega\) is degenerate, i.e., it has no randomness, then the model corresponds to the classic CoxPH model.
2. When \(\omega\) is degenerate and \(\widetilde{\nu}\) is arbitrary, the model becomes equivalent to nonparametric hazard regression (NHR) [13; 44]. In NHR, the function parameter of interest is usually the logarithm of the (conditional) hazard function.
In this paper we construct neural approximations to the logarithm of \(\widetilde{\nu}\), i.e., \(\nu(t,Z)=\log\widetilde{\nu}(t,Z)\). The resulting models are called **Neural Frailty Machines (NFM)**. Depending on the prior knowledge of the function \(\nu\), we propose two function approximation schemes:
**The proportional frailty (PF) scheme** assumes the dependence of \(\nu\) on event time and covariates to be completely _decoupled_, i.e.,
\[\nu(t,Z)=h(t)+m(Z). \tag{2}\]
Proportional-style assumption over hazard functions has been shown to be a useful inductive bias in survival analysis. We will treat both \(h\) and \(m\) in (2) as function parameters, and device two multi-layer perceptrons (MLP) to approximate them separately.
**The fully neural (FN) scheme** imposes no a priori assumptions over \(\nu\) and is the most general version of NFM. It is straightforward to see that the most commonly used survival models, such as CoxPH, AFT[73], EH[75], or PF models are included in the proposed model space as special cases. We treat \(\nu=\nu(t,Z)\) as the function parameter with input dimension \(d+1\) and use a multi-layer perceptron (MLP) as the function approximator to \(\nu\). Similar approximation schemes with respect to the hazard function have been proposed in some recent works [52; 56], referred to as "fully neural approaches" without theoretical characterizations.
**The choice of frailty family** There are many commonly used families of frailty distributions [45; 21; 69], among which the most popular one is the _gamma frailty_, where \(\omega\) follows a gamma distribution with mean \(1\) and variance \(\theta\). We briefly introduce some other types of frailty families in appendix A.
### Parameter learning under censored observations
In time-to-event modeling scenarios, the event times are typically observed under right censoring. Let \(C\) be the right censoring time which is assumed to be conditionally independent of the event time \(\tilde{T}\) given \(Z\), i.e., \(\tilde{T}\perp C|Z\). In data collection, one can observe the minimum of the survival time and the censoring time, that is, observe \(T=\tilde{T}\wedge C\) as well as the censoring indicator \(\delta=I(\tilde{T}\leqslant C)\), where \(a\wedge b=\min(a,b)\) for constants \(a\) and \(b\) and \(I(\cdot)\) stands for the indicator function. We assume \(n\) independent and identically distributed (i.i.d.) copies of \((T,\delta,Z)\) are used as the training sample \((T_{i},\delta_{i},Z_{i}),i\in[n]\), where we use \([n]\) to denote the set \(\{1,2,\ldots,n\}\). Additionally, we assume the unobserved frailties are independent and identically distributed, i.e., \(\omega_{i}\overset{\text{i.i.d.}}{\sim}f_{\theta}(\omega),i\in[n]\). Next, we derive the learning procedure based on the **observed log-likelihood (OLL)** objective under both PF and FN scheme. To obtain the observed likelihood, we first integrate the conditional survival function given the frailty:
\[S(t|Z)=\mathbb{E}_{\omega\sim f_{\theta}}\left[e^{-\omega\int_{0}^{t}e^{\nu(s, Z)}ds}\right]=:e^{-G_{\theta}\left(\int_{0}^{t}e^{\nu(s,Z)}ds\right)}. \tag{3}\]
Here the _frailty transform_\(G_{\theta}(x)=-\log\left(\mathbb{E}_{\omega\sim f_{\theta}}\left[e^{-\omega x }\right]\right)\) is defined as the negative of the logarithm of the Laplace transform of the frailty distribution. The conditional cumulative hazard function is thus \(\Lambda(t|Z)=G_{\theta}(\int_{0}^{t}e^{\nu(s,Z)}ds)\). For the PF scheme of NFM, we use two MLPs \(\widehat{h}=\widehat{h}(t;\mathbf{W}^{h},\mathbf{b}^{h})\) and \(\widehat{m}=\widehat{m}(Z;\mathbf{W}^{m},\mathbf{b}^{m})\) as function approximators to \(\nu\) and \(m\), parameterized by \((\mathbf{W}^{h},\mathbf{b}^{h})\) and \((\mathbf{W}^{m},\mathbf{b}^{m})\), respectively.4 According to standard results on censored data likelihood [40], we write the learning objective under the PF scheme as:
Footnote 4: Here we adopt the conventional notation that \(\mathbf{W}\) is the collection of the weight matrices of the MLP in all layers, and \(\mathbf{b}\) corresponds to the collection of the bias vectors in all layers.
\[\begin{split}&\mathcal{L}(\mathbf{W}^{h},\mathbf{b}^{h},\mathbf{W} ^{m},\mathbf{b}^{m},\theta)\\ =&\frac{1}{n}\left[\sum_{i\in[n]}\delta_{i}\log g_{ \theta}\left(e^{\widehat{m}(Z_{i})}\int_{0}^{T_{i}}e^{\widehat{h}(s)}ds\right) +\delta_{i}\widehat{h}(T_{i})+\delta_{i}\widehat{m}(Z_{i})-G_{\theta}\left(e^ {\widehat{m}(Z_{i})}\int_{0}^{T_{i}}e^{\widehat{h}(s)}ds\right)\right].\end{split} \tag{4}\]
Here we define \(g_{\theta}(x)=\frac{\partial}{\partial x}G_{\theta}(x)\). Let \((\widehat{\mathbf{W}}_{n}^{h},\widehat{\mathbf{b}}_{n}^{h},\widehat{\mathbf{W }}_{m}^{m},\widehat{\mathbf{b}}_{m}^{m},\widehat{\theta}_{n})\) be the maximizer of (4) and further denote \(\widehat{h}_{n}(t)=\widehat{h}(t;\widehat{\mathbf{W}}_{n}^{h},\widehat{\mathbf{ b}}_{n}^{h})\) and \(\widehat{m}_{n}(Z)=\widehat{m}(Z;\widehat{\mathbf{W}}_{n}^{m},\widehat{\mathbf{b}}_{n}^{m})\). The resulting estimators for conditional cumulative hazard and survival functions are:
\[\widehat{\Lambda}_{\mathsf{PF}}(t|Z)=G_{\widehat{\theta}_{n}}\left(\int_{0}^{t} e^{\widehat{h}_{n}(s)+\widehat{m}_{n}(Z)}ds\right),\qquad\widehat{S}_{\mathsf{PF}}(t|Z)=e^{- \widehat{\Lambda}_{\mathsf{PF}}(t|Z)}, \tag{5}\]For the FN scheme, we use \(\widehat{\nu}=\widehat{\nu}(t,Z;\mathbf{W}^{\nu},\mathbf{b}^{\nu})\) to approximate \(\nu(t,Z)\) parameterized by \((\mathbf{W}^{\nu},\mathbf{b}^{\nu})\). The OLL objective is written as:
\[\begin{split}&\mathcal{L}(\mathbf{W}^{\nu},\mathbf{b}^{\nu},\theta) \\ =&\frac{1}{n}\left[\sum_{i\in[n]}\delta_{i}\log g_{ \theta}\left(\int_{0}^{T_{i}}e^{\widehat{\nu}_{n}(s,Z_{i};\mathbf{W}^{\nu}, \mathbf{b}^{\nu})}ds\right)+\delta_{i}\widehat{\nu}(T_{i},Z_{i};\mathbf{W}^{ \nu},\mathbf{b}^{\nu})-G_{\theta}\left(\int_{0}^{T_{i}}e^{\widehat{\nu}(s,Z_{i} ;\mathbf{W}^{\nu},\mathbf{b}^{\nu})}ds\right)\right].\end{split} \tag{6}\]
Let \((\widehat{\mathbf{W}}_{n}^{\nu},\widehat{\mathbf{b}}_{n}^{\nu},\widehat{\theta }_{n})\) be the maximizer of (6), and further denote \(\widehat{\nu}_{n}(t,Z)=\widehat{\nu}(t,Z;\widehat{\mathbf{W}}_{n}^{\nu}, \widehat{\mathbf{b}}_{n}^{\nu})\). The conditional cumulative hazard and survival functions are therefore estimated as:
\[\widehat{\Lambda}_{\mathsf{FN}}(t|Z)=G_{\widehat{\theta}_{n}}\left(\int_{0}^{ t}e^{\widehat{\nu}_{n}(s,Z)}ds\right),\qquad\widehat{S}_{\mathsf{FN}}(t|Z)=e^{- \widehat{\Lambda}_{\mathsf{FN}}(t|Z)}. \tag{7}\]
The evaluation of objectives like (6) and its gradient requires computing a definite integral of an exponentially transformed MLP function. Instead of using exact computations that are available for only a restricted type of activation functions and network structures, we use numerical integration for such kinds of evaluations, using the method of Clenshaw-Curtis quadrature [5], which has shown competitive performance and efficiency in recent applications to monotonic neural networks [68].
_Remark 3.1_.: The interpretation of frailty terms differs in the two schemes. In the PF scheme, introducing the frailty effect strictly increases the modeling capability (i.e., the capability of modeling crossing hazard) in comparison to CoxPH or neural variants of CoxPH [45]. In the FN scheme, it is arguable that in the i.i.d. case, the marginal hazard function is a reparameterization of the hazard function in the context of NHR. Therefore, we view the incorporation of frailty effect as injecting a domain-specific inductive bias that has proven to be useful in survival analysis and time-to-event regression modeling and verify this claim empirically in section 5.2. Moreover, frailty becomes especially helpful when handling correlated or clustered data where the frailty term is assumed to be shared among certain groups of individuals [53]. Extending NFM to such scenarios is valuable and we left it to future explorations.
## 4 Statistical guarantees
In this section, we present statistical guarantees regarding both NFM estimates in the sense of nonparametric regression [65], where we obtain rates of convergence to the ground truth function parameters (which is frequently referred to as the _true parameter_ in statistics literature). The results in this section is interpreted as showing the _statistical correctness_ of our approach.
**Proof strategy** Our proof technique is based on the method of sieves [60; 59; 11] that views neural networks as a special kind of nonlinear sieve [11] that satisfies desirable approximation properties [72]. Our strategy is different from previous theoretical works on neural survival models [75; 76] where the developments implicitly requires the loss function to be well-controlled by the \(L_{2}\) loss and is therefore not directly applicable to our model due to the flexibility in choosing the frailty transform. Since both models produce estimates of function parameters, we need to specify a suitable function space to work with. Here we choose the following Holder ball as was also used in previous works on nonparametric estimation using neural networks [57; 26; 76]
\[\mathcal{W}^{\beta}_{M}(\mathcal{X})=\left\{f:\max_{\alpha:|\alpha|\leq\beta} \operatorname*{esssup}_{x\in\mathcal{X}}|D^{\alpha}(f(x))|\leq M\right\}, \tag{8}\]
where the domain \(\mathcal{X}\) is assumed to be a subset of \(d\)-dimensional euclidean space. \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\) is a \(d\)-dimensional tuple of nonnegative integers satisfying \(|\alpha|=\alpha_{1}+\cdots+\alpha_{d}\) and \(D^{\alpha}f=\frac{\partial^{|\alpha|}f}{\partial x_{1}^{\alpha_{1}}\cdots x_{ d}^{\alpha_{d}}}\) is the weak derivative of \(f\). Now assume that \(M\) is a reasonably large constant, and let \(\Theta\) be a closed interval over the real line. We make the following assumptions for the _true parameters_ under both schemes:
**Condition 4.1** (True parameter, PF scheme).: _The euclidean parameter \(\theta_{0}\in\Theta\subset\mathbb{R}\), and the two function parameters \(m_{0}\in\mathcal{W}^{\beta}_{M}([-1,1]^{d}),h_{0}\in\mathcal{W}^{\beta}_{M}([0,\tau])\), and \(\tau>0\) is the ending time of the study duration, which is usually adopted in the theoretical studies in survival analysis [67]._
**Condition 4.2** (True parameter, FN scheme).: _The euclidean parameter \(\theta_{0}\in\Theta\subset\mathbb{R}\), and the function parameter \(\nu_{0}\in\mathcal{W}^{\beta}_{M}([0,\tau]\times[-1,1]^{d})\),_Next, we construct sieve spaces for function parameter approximation via restricting the complexity of the MLPs to "scale" with the sample size \(n\).
**Condition 4.3** (Sieve space, PF scheme).: _The sieve space \(\mathcal{H}_{n}\) is constructed as a set of MLPs satisfying \(\widehat{h}\in\mathcal{W}^{\beta}_{M_{n}}([0,\tau])\), with depth of order \(O(\log n)\) and total number of parameters of order \(O(n^{\frac{1}{d+2}}\log n)\). The sieve space \(\mathcal{M}_{n}\) is constructed as a set of MLPs satisfying \(\widehat{m}\in\mathcal{W}^{\beta}_{M_{m}}([-1,1]^{d})\), with depth of order \(O(\log n)\) and total number of parameters of order \(O(n^{\frac{d}{d+2}}\log n)\). Here \(M_{h}\) and \(M_{m}\) are sufficiently large constants such that every function in \(\mathcal{W}^{\beta}_{M}([-1,1]^{d})\) and \(\mathcal{W}^{\beta}_{M}([0,\tau])\) could be accurately approximated by functions inside \(\mathcal{H}_{n}\) and \(\mathcal{M}_{n}\), according to [72, Theorem 1]._
**Condition 4.4** (Sieve space, FN scheme).: _The sieve space \(\mathcal{V}_{n}\) is constructed as a set of MLPs satisfying \(\widehat{\nu}\in\mathcal{W}^{\beta}_{M_{\nu}}([0,\tau])\), with depth of order \(O(\log n)\) and total number of parameters of order \(O(n^{\frac{d+1}{d+2}}\log n)\). Here \(M_{\nu}\) is a sufficiently large constant such that \(\mathcal{V}_{n}\) satisfies approximation properties, analogous to condition 4.3._
For technical reasons, we will assume the nonparametric function estimators are constrained to fall inside the corresponding sieve spaces, i.e., \(\widehat{h}_{n}\in\mathcal{H}_{n}\), \(\widehat{m}_{n}\in\mathcal{M}_{n}\) and \(\widehat{\nu}\in\mathcal{V}_{n}\). This will not affect the implementation of optimization routines as was discussed in [26]. Furthermore, we restrict the estimate \(\widehat{\theta}_{n}\in\Theta\) in both PF and FN schemes.
Additionally, we need the following regularity condition on the function \(G_{\theta}(x)\):
**Condition 4.5**.: \(G_{\theta}(x)\) _is viewed as a bivariate function \(G:\Theta\times\mathcal{B}\mapsto\mathbb{R}\), where \(\mathcal{B}\) is a compact set on \(\mathbb{R}\). The functions \(G_{\theta}(x),\frac{\partial}{\partial\theta}G_{\theta}(x),\frac{\partial}{ \partial x}G_{\theta}(x),\log g_{\theta}(x),\frac{\partial}{\partial\theta}\log g _{\theta}(x)\), \(\frac{\partial}{\partial x}\log g_{\theta}(x)\) are bounded on \(\Theta\times\mathcal{B}\)._
We define two metrics that measures convergence of parameter estimates: For the PF scheme, let \(\phi_{0}=(h_{0},m_{0},\theta_{0})\) be the true parameters and \(\widehat{\phi}_{n}=(\widehat{h}_{n},\widehat{m}_{n},\widehat{\theta}_{n})\) be the estimates. We abbreviate \(\mathbb{P}_{\phi_{0},Z=z}\) as the conditional probability distribution of \((T,\delta)\) given \(Z=z\) under the true parameter, and \(\mathbb{P}_{\widehat{\phi}_{n},Z=z}\) as the conditional probability distribution of \((T,\delta)\) given \(Z=z\) under the estimates. Define the following metric
\[d_{\mathsf{PF}}\left(\widehat{\phi}_{n},\phi_{0}\right)=\sqrt{\mathbb{E}_{z \sim\mathbb{P}_{Z}}\left[H^{2}(\mathbb{P}_{\widehat{\phi}_{n},Z=z}\parallel \mathbb{P}_{\phi_{0},Z=z})\right]}, \tag{9}\]
where \(H^{2}(\mathbb{P}\parallel\mathbb{Q})=\int\left(\sqrt{d\mathbb{P}}-\sqrt{d \mathbb{Q}}\right)^{2}\) is the squared Hellinger distance between probability distributions \(\mathbb{P}\) and \(\mathbb{Q}\). The case for the FN scheme is similar: Let \(\psi_{0}=(\nu_{0},\theta_{0})\) be the parameters and \(\widehat{\nu}_{n}=(\widehat{\nu}_{n},\widehat{\theta}_{n})\) be the estimates. Analogous to the definitions above, we define \(\mathbb{P}_{\psi_{0},Z=z}\) as the true conditional distribution given \(Z=z\), and \(\mathbb{P}_{\widehat{\psi}_{n},Z=z}\) be the estimated conditional distribution, we will use the following metric in the FN scheme:
\[d_{\mathsf{FN}}\left(\widehat{\psi}_{n},\psi_{0}\right)=\sqrt{\mathbb{E}_{z \sim\mathbb{P}_{Z}}\left[H^{2}(\mathbb{P}_{\widehat{\psi}_{n},Z=z}\parallel \mathbb{P}_{\psi_{0},Z=z})\right]}. \tag{10}\]
Now we state our main theorems. We denote \(\mathbb{P}\) as the data generating distribution and use \(\widetilde{O}\) to hide poly-logarithmic factors in the big-O notation.
**Theorem 4.6** (Rate of convergence, PF scheme).: _In the PF scheme, under condition 4.1, 4.3, 4.5, we have that \(d_{\mathsf{PF}}\left(\widehat{\phi}_{n},\phi_{0}\right)=\widetilde{O}_{\mathbb{ P}}\left(n^{-\frac{\beta}{2\beta+2\beta}}\right)\)._
**Theorem 4.7** (Rate of convergence, FN scheme).: _In the FN scheme, under condition 4.2, 4.4, 4.5, we have that \(d_{\mathsf{FN}}\left(\widehat{\psi}_{n},\psi_{0}\right)=\widetilde{O}_{\mathbb{ P}}\left(n^{-\frac{\beta}{2\beta+2\beta+2\beta}}\right)\)._
_Remark 4.8_.: The idea of using Hellinger distance to measure the convergence rate of sieve MLEs was proposed in [70]. Obtaining rates under a stronger topology such as \(L_{2}\) is possible if the likelihood function satisfies certain conditions such as the curvature condition [26]. However, such kind of conditions is in general too stringent for likelihood-based objectives, instead, we use Hellinger distance that has minimal requirements. Consequently, our proof strategy is applicable to many other survival models that rely on neural function approximation such as [56], with some modification to the regularity conditions. For proper choices of metrics in sieve theory, see also the discussion in [11, Chapter 2].
## 5 Experiments
In this section, we report the empirical performance of NFM, we will focus on the following two research questions:
**RQ\(1\)(Verfication of statistical correctness):** The results listed in section characterized the convergence results _in theory_, providing a crude guide on the number of samples required for an accurate estimate. Nonetheless, theoretical rates are often pessimistic, thus we want to investigate **whether a moderate number of sample size suffices for good approximation**.
**RQ\(2\)(Assessment of empirical efficacy):** While NFM is theoretically sound in terms of _estimation accuracy_, the theory we have developed does not necessarily guarantee its _empirical efficacy_ as a method of doing prognosis. It is therefore valuable to inspect **how useful NFM is regarding real-world predictive tasks in survival analysis**.
### Synthetic experiments
To answer RQ\(1\), we conduct synthetic experiments to check the empirical convergence. Specifically, we investigate the empirical recovery of underlying ground truth parameters under various level of sample size.
**Ground truth** We set the true underlying model to be a nonlinear gamma-frailty model with a \(5\)-dimensional feature. We generate three training datasets of different scales, with \(n\in\{1000,5000,10000\}\). The assessment will be made on a fixed test sample of \(100\) hold-out points that are independently drawn from the generating scheme of the event time. A censoring mechanism is applied such that the censoring ratio is around \(40\%\) for each dataset. The precise form of the frailty model as well as the generating distribution of the feature vectors are detailed in appendix C.2.
**Empirical recovery results** We report the empirical recovery of the nonlinear component \(\nu(t,Z)\) regarding the hold-out test set in in figure 1. We observe from graphical illustrations that under a moderate sample size \(n=1000\), NFM already exhibits satisfactory recovery for a (relatively low-dimensional) feature space, which is the prevailing case in most public benchmark datasets. We also present additional assessments about: (i) The recovery of \(m(Z)\) when using PF scheme in appendix D.1, (ii) The recovery of survival functions under both PF and FN scheme in appendix D.2, and (iii) The numerical recovery results of survival function in appendix D.3.
Figure 1: Visualizations of synthetic data results under the NFM framework. The plots in the first row compare the empirical estimates of the nonparametric component \(\nu(t,Z)\) against its true value evaluated on \(100\) hold-out points, under the PF scheme. The plots in the second row are obtained using the FN scheme, with analogous semantics to the first row.
### Real-world data experiments
To answer RQ\(2\), we conduct extensive empirical assessments over \(6\) benchmark datasets, comprising five survival datasets and one non-survival dataset. The survival datasets include the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) [16], the Rotterdam tumor bank and German Breast Cancer Study Group (RotGBSG)[43], the Assay Of Serum Free Light Chain (FLCHAIN) [20], the Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT) [43], and the Medical Information Mart for Intensive Care (MIMIC-III) [39]. For all the survival datasets, the event of interest is defined as the mortality after admission. In our experiments, we view METABRIC, RotGBSG, FLCHAIN, and SUPPORT as small-scale datasets and MIMIC-III as a moderate-scale dataset. We additionally use the KGBOX dataset [46] as a large-scale evaluation. In this dataset, an event time is observed if a customer churns from the KKBOX platform. We summarize the basic statistics of all the datasets in table 3.
**Baselines** We compare NFM with \(12\) baselines. The first one is linear CoxPH model [14]. Gradient Boosting Machine (GBM) [27, 10] and Random Survival Forests (RSF) [37] are two tree-based nonparametric survival regression methods. DeepSurv [42] and CoxTime [46] are two models that adopt neural variants of partial likelihood as objectives. SuMo-net [56] is a neural variant of NHR. We additionally chose six latest state-of-the-art neural survival models: DeepHit [47], SurvNode [32], DeepEH [75], DCM [51], DeSurv [19] and SODEN [64]. Among the chosen baselines, DeepSurv and SuMo-net are viewed as implementations of neural CoxPH and neural NHR and are therefore of particular interest for the empirical verification of the efficacy of frailty.
**Evaluation strategy** We use two standard metrics in survival predictions for evaluating model performance: integrated Brier score (IBS) and integrated negative binomial log-likelihood (INBLL). Both metrics are derived from the following:
\[\mathcal{S}(\ell,t_{1},t_{2})=\int_{t_{2}}^{t_{1}}\frac{1}{n}\sum_{i=1}^{n} \left[\frac{\ell(0,\widehat{S}(t|Z_{i}))I(T_{i}\leq t,\delta_{i}=1)}{\widehat {S}_{C}(T_{i})}+\frac{\ell(1,\widehat{S}(t|Z_{i}))I(T_{i}>t)}{\widehat{S}_{C} (t)}\right]dt. \tag{11}\]
Where \(\widehat{S}_{C}(t)\) is an estimate of the survival function \(S_{C}(t)\) of the censoring variable, obtained by the Kaplan-Meier estimate [41] of the censored observations on the test data. \(\ell:\{0,1\}\times[0,1]\mapsto\mathbb{R}^{+}\) is some proper loss function for binary classification [28]. The IBS metric corresponds to \(\ell\) being the square loss, and the INBLL metric corresponds to \(\ell\) being the negative binomial (Bernoulli) log-likelihood [30]. Both IBS and INBLL are proper scoring rules if the censoring times and survival times are independent. 5 We additionally report the result of another widely used metric, the concordance index (C-index), in appendix D. Since all the survival datasets do not have standard
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Model & \multicolumn{2}{c}{METABRIC} & \multicolumn{2}{c}{RotGBSG} & \multicolumn{2}{c}{FLCHAIN} & \multicolumn{2}{c}{SUPPORT} \\ \cline{2-9} & IBS & INBLL & IBS & INBLL & IBS & INBLL & IBS & INBLL \\ \hline CoxPH & 16.46\(\pm\)0.90 & 49.57\(\pm\)2.66 & 18.25\(\pm\)0.44 & 53.76\(\pm\)1.11 & 10.05\(\pm\)0.38 & 33.18\(\pm\)1.16 & 20.54\(\pm\)0.38 & 59.58\(\pm\)0.86 \\ GBM & 16.61\(\pm\)0.52 & 49.87\(\pm\)2.44 & 17.83\(\pm\)0.44 & 52.78\(\pm\)1.11 & 0.98\(\pm\)0.37 & 32.88\(\pm\)1.05 & 19.18\(\pm\)0.30 & 56.46\(\pm\)0.10 \\ RSF & 16.62\(\pm\)0.54 & 49.61\(\pm\)1.54 & 17.89\(\pm\)0.42 & 52.77\(\pm\)1.01 & **9.96\(\pm\)**0.37 & 32.92\(\pm\)1.05 & 19.11\(\pm\)0.40 & 56.28\(\pm\)1.00 \\ DeepSurv & 16.55\(\pm\)0.93 & 49.85\(\pm\)5.30 & 17.80\(\pm\)0.49 & 52.62\(\pm\)1.25 & 10.09\(\pm\)0.38 & 33.28\(\pm\)1.15 & 19.20\(\pm\)0.41 & 56.48\(\pm\)1.08 \\ CoxTime & 16.54\(\pm\)0.83 & 49.67\(\pm\)2.67 & 17.80\(\pm\)0.58 & 52.56\(\pm\)1.47 & 10.28\(\pm\)0.45 & 34.18\(\pm\)1.53 & 19.17\(\pm\)0.40 & 56.45\(\pm\)1.10 \\ DeepHit & 17.50\(\pm\)0.83 & 52.10\(\pm\)1.06 & 19.61\(\pm\)0.38 & 56.67\(\pm\)1.00 & 11.81\(\pm\)0.39 & 37.72\(\pm\)1.02 & 20.66\(\pm\)0.62 & 60.06\(\pm\)0.72 \\ DeepEH & 16.56\(\pm\)0.85 & 49.42\(\pm\)1.53 & 17.62\(\pm\)0.52 & 52.08\(\pm\)1.27 & 10.11\(\pm\)0.37 & 33.30\(\pm\)1.10 & 19.30\(\pm\)0.30 & 56.67\(\pm\)0.94 \\ SuMo-net & 16.49\(\pm\)0.83 & 49.74\(\pm\)2.12 & 17.77\(\pm\)0.47 & 52.62\(\pm\)1.11 & 10.07\(\pm\)0.40 & 33.20\(\pm\)1.10 & 19.40\(\pm\)0.38 & 56.77\(\pm\)0.96 \\ SODEN & 16.52\(\pm\)0.83 & 49.93\(\pm\)1.97 & **17.05\(\pm\)0.63** & **50.45\(\pm\)**1.97 & 19.10\(\pm\)0.24 & 33.37\(\pm\)0.57 & 19.07\(\pm\)0.50 & 56.51\(\pm\)1.35 \\ SurvNode & 16.67\(\pm\)1.32 & 49.73\(\pm\)3.89 & 17.42\(\pm\)0.53 & 51.70\(\pm\)1.16 & 10.40\(\pm\)0.29 & 34.37\(\pm\)1.03 & 19.58\(\pm\)0.34 & 57.49\(\pm\)0.84 \\ DCM & 16.58\(\pm\)0.87 & 49.48\(\pm\)2.23 & 17.66\(\pm\)0.54 & 52.26\(\pm\)1.23 & 10.13\(\pm\)0.36 & 33.40\(\pm\)1.38 & 19.20\(\pm\)0.42 & 56.88\(\pm\)1.09 \\ DeSurv & 16.17\(\pm\)0.75 & 49.61\(\pm\)1.25 & 17.98\(\pm\)0.46 & 53.23\(\pm\)1.15 & 10.06\(\pm\)0.62 & 33.18\(\pm\)1.93 & 19.50\(\pm\)0.40 & 57.28\(\pm\)0.89 \\ \hline
**NFM-PF** & 16.33\(\pm\)0.75 & 49.07\(\pm\)1.96 & 17.60\(\pm\)0.55 & 52.12\(\pm\)1.34 & **9.96\(\pm\)**0.39 & **32.84\(\pm\)**1.15 & 19.14\(\pm\)0.39 & 56.35\(\pm\)1.50 \\
**NFM-FN** & **16.11\(\pm\)0.81** & **48.21\(\pm\)0.04** & 17.66\(\pm\)0.52 & 52.41\(\pm\)1.22 & 10.05\(\pm\)0.39 & 33.11\(\pm\)1.10 & **18.97\(\pm\)**0.60 & **55.87\(\pm\)**1.50 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Survival prediction results measured in IBS and INBLL metric (%) on four small-scale survival datasets. In each column, the **boldfaced** score denotes the best result and the underlined score represents the second-best result (both in mean).
train/test splits, we follow previous practice [75] that uses \(5\)-fold cross-validation (CV): \(1\) fold is for testing, and \(20\%\) of the rest is held out for validation. In our experiments, we observed that a single random split into \(5\) folds does not produce stable results for most survival datasets. Therefore we perform \(10\) different CV runs for each survival dataset and report average metrics as well as their standard deviations. For the KGBOX dataset, we use the standard train/valid/test splits that are available via the pycox package [46] and report results based on \(10\) trial runs.
**Experimental setup** We follow standard preprocessing strategies [42; 46; 75] that standardize continuous features into zero mean and unit variance, and do one-hot encodings for all categorical features. We adopt MLP with ReLU activation for all function approximators, including \(\widehat{h}\), \(\widehat{m}\) in PF scheme, and \(\widehat{\nu}\) in FN scheme, across all datasets, with the number of layers (depth) and the number of hidden units (width) within each layer being tunable. We tune the frailty transform over several standard choices: gamma frailty, Box-Cox transformation frailty and \(\text{IGG}(\alpha)\) frailty, with their precise forms detailed in appendix C.3. A more detailed description of the tuning procedure, as well as training configurations for baseline models, are reported in appendix C.3.
**Results** We report experimental results of small-scale datasets in table 1, and results of two larger datasets in table 2. The proposed NFM framework achieves competitive performance which is comparable to the other state-of-the-art models. In particular, NFM attains best performance in mean on \(5\) of the \(6\) datasets, and is statistically significantly better over all the baselines at \(0.05\) empirical level on the MIMIC-III dataset.
**Ablation on the benefits of frailty** To better understand the additional benefits of introducing the frailty formulation, we compute the (relative) performance gain of NFM-PF and NFM-FN, against their non-frailty counterparts, namely DeepSurv [42] and SuMo-net [56]. The evaluation is conducted for all three metrics mentioned in this paper. The results are shown in table 6. The results suggest a solid improvement in incorporating frailty, as the relative increase in performance could be over \(10\%\) for both NFM models. A more detailed discussion is presented in section D.5.
## 6 Discussion and conclusion
We have introduced NFM as a flexible and powerful neural modeling framework for survival analysis, which is shown to be both statistically correct in theory, and empirically effective in predictive tasks. While our proposed framework provides a theoretically-principled tool of neural survival modeling, a few limitations and challenges need to be addressed in future works including predictive guarantees and better evaluation protocols, which we elaborate in appendix D.6.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{MIMIC-III} & \multicolumn{2}{c}{KKBOX} \\ \cline{2-5} & IBS & INBLL & IBS & INBLL \\ \hline CoxPH & \(20.40_{\pm 0.00}\) & \(60.02_{\pm 0.00}\) & \(12.60_{\pm 0.00}\) & \(39.40_{\pm 0.00}\) \\ GBM & \(17.70_{\pm 0.00}\) & \(52.30_{\pm 0.00}\) & \(11.81_{\pm 0.00}\) & \(38.15_{\pm 0.00}\) \\ RSF & \(17.79_{\pm 0.19}\) & \(53.34_{\pm 0.41}\) & \(14.46_{\pm 0.00}\) & \(44.39_{\pm 0.00}\) \\ DeepSurv & \(18.58_{\pm 0.92}\) & \(55.98_{\pm 2.43}\) & \(11.31_{\pm 0.05}\) & \(35.28_{\pm 0.15}\) \\ CoxTime & \(17.68_{\pm 1.36}\) & \(52.08_{\pm 3.06}\) & \(10.70_{\pm 0.06}\) & \(33.10_{\pm 0.21}\) \\ DeepHit & \(19.80_{\pm 1.31}\) & \(59.03_{\pm 4.20}\) & \(16.00_{\pm 0.34}\) & \(48.64_{\pm 1.04}\) \\ SuMo-net & \(18.62_{\pm 1.23}\) & \(54.51_{\pm 2.97}\) & \(11.58_{\pm 0.11}\) & \(36.61_{\pm 0.28}\) \\ DCM & \(18.02_{\pm 0.49}\) & \(52.83_{\pm 0.94}\) & \(10.71_{\pm 0.11}\) & \(33.24_{\pm 0.06}\) \\ DeSurv & \(18.19_{\pm 0.65}\) & \(54.69_{\pm 2.83}\) & \(10.77_{\pm 0.21}\) & \(33.22_{\pm 0.10}\) \\ \hline
**NFM-PF** & \(\mathbf{16.28}_{\pm 0.36}\) & \(\mathbf{49.18}_{\pm 0.92}\) & \(11.02_{\pm 0.11}\) & \(35.10_{\pm 0.22}\) \\
**NFM-FN** & \(\underline{17.47}_{\pm 0.45}\) & \(\underline{51.48}_{\pm 1.23}\) & \(\mathbf{10.63}_{\pm 0.08}\) & \(\mathbf{32.81}_{\pm 0.14}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Survival prediction results measured in IBS and INBLL metric (%) on two larger datasets. In each column, the **boldfaced** score denotes the best result and the underlined score represents the second-best result (both in mean). Two models are not reported, namely SODEN and DeepEH, as we found empirically that their computational/memory cost is significantly worse than the rest, and we fail to obtain reasonable performances over the two datasets for these two models.
Acknowledgements
We would like to thank professor Zhiliang Ying and professor Guanhua Fang for helpful discussions. Wen Yu's research is supported by the National Natural Science Foundation of China Grants (\(12071088\)). Ming Zheng's research is supported by the National Natural Science Foundation of China Grants (\(12271106\)).
## References
* [1] L. Antolini, P. Boracchi, and E. Biganzoli. A time-dependent discrimination index for survival data. _Statistics in medicine_, 24(24):3927-3944, 2005.
* [2] P. L. Bartlett, N. Harvey, C. Liaw, and A. Mehrabian. Nearly-tight vc-dimension and pseudodimension bounds for piecewise linear neural networks. _The Journal of Machine Learning Research_, 20(1):2285-2301, 2019.
* [3] S. Bennett. Analysis of survival data by the proportional odds model. _Statistics in medicine_, 2(2):273-277, 1983.
* [4] P. J. Bickel, C. A. Klaassen, Y. Ritov, J. Klaassen, J. A. Wellner, and Y. Ritov. _Efficient and adaptive estimation for semiparametric models_, volume 4. Springer, 1993.
* [5] J. P. Boyd. _Chebyshev and Fourier spectral methods_. Courier Corporation, 2001.
* [6] P. Brennan. Gene-environment interaction and aetiology of cancer: what does it mean and how can we measure it? _Carcinogenesis_, 23(3):381-387, 2002.
* [7] J. Buckley and I. James. Linear regression with censored data. _Biometrika_, 66(3):429-436, 1979.
* [8] J. Cai, J. Fan, J. Jiang, and H. Zhou. Partially linear hazard regression for multivariate survival data. _Journal of the American Statistical Association_, 102(478):538-551, 2007.
* [9] J. Cai, J. Fan, J. Jiang, and H. Zhou. Partially linear hazard regression with varying coefficients for multivariate survival data. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 70(1):141-158, 2008.
* [10] T. Chen and C. Guestrin. Xgboost: A scalable tree boosting system. In _Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining_, pages 785-794, 2016.
* [11] X. Chen. Large sample sieve estimation of semi-nonparametric models. _Handbook of econometrics_, 6:5549-5632, 2007.
* [12] X. Chen and X. Shen. Sieve extremum estimates for weakly dependent data. _Econometrica_, pages 289-314, 1998.
* [13] D. D. Cox and F. O'Sullivan. Asymptotic analysis of penalized likelihood and related estimators. _The Annals of Statistics_, pages 1676-1695, 1990.
* [14] D. R. Cox. Regression models and life-tables. _Journal of the Royal Statistical Society: Series B (Methodological)_, 34(2):187-202, 1972.
* [15] D. R. Cox. Partial likelihood. _Biometrika_, 62(2):269-276, 1975.
* [16] C. Curtis, S. P. Shah, S.-F. Chin, G. Turashvili, O. M. Rueda, M. J. Dunning, D. Speed, A. G. Lynch, S. Samarajiwa, Y. Yuan, et al. The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. _Nature_, 486(7403):346-352, 2012.
* [17] J. Cuzick. Rank regression. _The Annals of Statistics_, pages 1369-1389, 1988.
* [18] D. M. Dabrowska and K. A. Doksum. Partial likelihood in transformation models with censored data. _Scandinavian journal of statistics_, pages 1-23, 1988.
* [19] D. Danks and C. Yau. Derivative-based neural modelling of cumulative distribution functions for survival analysis. In _International Conference on Artificial Intelligence and Statistics_, pages 7240-7256. PMLR, 2022.
* [20] A. Dispenzieri, J. A. Katzmann, R. A. Kyle, D. R. Larson, T. M. Therneau, C. L. Colby, R. J. Clark, G. P. Mead, S. Kumar, L. J. Melton III, et al. Use of nonclonal serum immunoglobulin free light chains to predict overall survival in the general population. In _Mayo Clinic Proceedings_, volume 87, pages 517-523. Elsevier, 2012.
* [21] L. Duchateau and P. Janssen. _The frailty model_. Springer Science & Business Media, 2007.
* [22] D. Duffie, A. Eckner, G. Horel, and L. Saita. Frailty correlated default. _The Journal of Finance_, 64(5):2089-2123, 2009.
* [23] J. Etezadi-Amoli and A. Ciampi. Extended hazard regression for censored survival data with covariates: a spline approximation for the baseline hazard function. _Biometrics_, pages 181-192, 1987.
* [24] J. Fan and I. Gijbels. _Local Polynomial Modelling and Its Applications: Monographs on Statistics and Applied Probability 66_, volume 66. CRC Press, 1996.
* [25] D. Faraggi and R. Simon. A neural network model for survival data. _Statistics in medicine_, 14(1):73-82, 1995.
* [26] M. H. Farrell, T. Liang, and S. Misra. Deep neural networks for estimation and inference. _Econometrica_, 89(1):181-213, 2021.
* [27] J. H. Friedman. Greedy function approximation: a gradient boosting machine. _Annals of statistics_, pages 1189-1232, 2001.
* [28] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. _Journal of the American statistical Association_, 102(477):359-378, 2007.
* [29] I. Goodfellow, Y. Bengio, and A. Courville. _Deep learning_. 2016.
* [30] E. Graf, C. Schmoor, W. Sauerbrei, and M. Schumacher. Assessment and comparison of prognostic classification schemes for survival data. _Statistics in medicine_, 18(17-18):2529-2545, 1999.
* [31] R. J. Gray. Estimation of regression parameters and the hazard function in transformed linear survival models. _Biometrics_, 56(2):571-576, 2000.
* [32] S. Groha, S. M. Schmon, and A. Gusev. A general framework for survival analysis and multi-state modelling. _arXiv preprint arXiv:2006.04893_, 2020.
* [33] X. Han, M. Goldstein, A. Puli, T. Wies, A. Perotte, and R. Ranganath. Inverse-weighted survival games. _Advances in neural information processing systems_, 34:2160-2172, 2021.
* [34] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In _International conference on machine learning_, pages 1225-1234. PMLR, 2016.
* [35] P. Hougaard. Life table methods for heterogeneous populations: distributions describing the heterogeneity. _Biometrika_, 71(1):75-83, 1984.
* [36] J. Huang. Efficient estimation of the partly linear additive cox model. _The annals of Statistics_, 27(5):1536-1563, 1999.
* [37] H. Ishwaran, U. B. Kogalur, E. H. Blackstone, and M. S. Lauer. Random survival forests. _The annals of applied statistics_, 2(3):841-860, 2008.
* [38] H. Jing and A. J. Smola. Neural survival recommender. In _Proceedings of the Tenth ACM International Conference on Web Search and Data Mining_, pages 515-524, 2017.
* [39] A. E. Johnson, T. J. Pollard, L. Shen, L.-w. H. Lehman, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. Anthony Celi, and R. G. Mark. Mimic-iii, a freely accessible critical care database. _Scientific data_, 3(1):1-9, 2016.
* [40] J. D. Kalbfleisch and R. L. Prentice. _The Statistical Analysis of Failure Time Data_, volume 360. John Wiley & Sons, 2002.
* [41] E. L. Kaplan and P. Meier. Nonparametric estimation from incomplete observations. _Journal of the American statistical association_, 53(282):457-481, 1958.
* [42] J. L. Katzman, U. Shaham, A. Cloninger, J. Bates, T. Jiang, and Y. Kluger. Deepsurv: personalized treatment recommender system using a cox proportional hazards deep neural network. _BMC medical research methodology_, 18(1):1-12, 2018.
* [43] W. A. Knaus, F. E. Harrell, J. Lynn, L. Goldman, R. S. Phillips, A. F. Connors, N. V. Dawson, W. J. Fulkerson, R. M. Califf, N. Desbiens, et al. The support prognostic model: Objective estimates of survival for seriously ill hospitalized adults. _Annals of internal medicine_, 122(3):191-203, 1995.
* [44] C. Kooperberg, C. J. Stone, and Y. K. Truong. Hazard regression. _Journal of the American Statistical Association_, 90(429):78-94, 1995.
* [45] M. R. Kosorok, B. L. Lee, and J. P. Fine. Robust inference for univariate proportional hazards frailty regression models. _The Annals of Statistics_, 32(4):1448-1491, 2004.
* [46] H. Kvamme, O. Borgan, and I. Scheel. Time-to-event prediction with neural networks and cox regression. _arXiv preprint arXiv:1907.00825_, 2019.
* [47] C. Lee, W. Zame, J. Yoon, and M. Van Der Schaar. Deephit: A deep learning approach to survival analysis with competing risks. In _Proceedings of the AAAI conference on artificial intelligence_, volume 32, 2018.
* [48] S. A. Murphy. Consistency in a proportional hazards model incorporating a random effect. _The Annals of Statistics_, 22(2):712-731, 1994.
* [49] S. A. Murphy. Asymptotic theory for the frailty model. _The annals of statistics_, pages 182-198, 1995.
* [50] S. A. Murphy and A. W. Van der Vaart. On profile likelihood. _Journal of the American Statistical Association_, 95(450):449-465, 2000.
* [51] C. Nagpal, S. Yadlowsky, N. Rostamzadeh, and K. Heller. Deep cox mixtures for survival regression. In _Machine Learning for Healthcare Conference_, pages 674-708. PMLR, 2021.
* [52] T. Omi, n. ueda, and K. Aihara. Fully neural network based model for general temporal point processes. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32. Curran Associates, Inc., 2019.
* [53] E. Parner. Asymptotic theory for the correlated gamma-frailty model. _The Annals of Statistics_, 26(1):183-214, 1998.
* [54] S. Polsterl. scikit-survival: A library for time-to-event analysis built on top of scikit-learn. _Journal of Machine Learning Research_, 21(212):1-6, 2020.
* [55] S. Purushotham, C. Meng, Z. Che, and Y. Liu. Benchmarking deep learning models on large healthcare datasets. _Journal of biomedical informatics_, 83:112-134, 2018.
* [56] D. Rindt, R. Hu, D. Steinsaltz, and D. Sejdinovic. Survival regression with proper scoring rules and monotonic neural networks. Proceedings of Machine Learning Research. Journal of Machine Learning Research, 2022.
* [57] J. Schmidt-Hieber. Nonparametric regression using deep neural networks with relu activation function. _The Annals of Statistics_, 48(4):1875-1897, 2020.
* [58] S. Shalev-Shwartz and S. Ben-David. _Understanding machine learning: From theory to algorithms_. Cambridge university press, 2014.
* [59] X. Shen. On methods of sieves and penalization. _The Annals of Statistics_, 25(6):2555-2591, 1997.
* [60] X. Shen and W. H. Wong. Convergence rate of sieve estimates. _The Annals of Statistics_, pages 580-615, 1994.
* [61] D. M. Stablein and I. Koutrouvelis. A two-sample test sensitive to crossing hazards in uncensored and singly censored data. _Biometrics_, pages 643-652, 1985.
* [62] H. Steck, B. Krishnapuram, C. Dehing-oberije, P. Lambin, and V. C. Raykar. On ranking in survival analysis: Bounds on the concordance index. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, _Advances in Neural Information Processing Systems_, volume 20. Curran Associates, Inc., 2007.
* [63] R. L. Strawderman and A. A. Tsiatis. On the asymptotic properties of a flexible hazard estimator. _The Annals of Statistics_, 24(1):41-63, 1996.
* [64] W. Tang, J. Ma, Q. Mei, and J. Zhu. Soden: A scalable continuous-time survival model through ordinary differential equation networks. _Journal of Machine Learning Research_, 23(34):1-29, 2022.
* [65] A. Tsybakov. _Introduction to Nonparametric Estimation_. Springer Series in Statistics. Springer New York, 2008.
* [66] A. van der Vaart, A. van der Vaart, A. van der Vaart, and J. Wellner. _Weak Convergence and Empirical Processes: With Applications to Statistics_. Springer Series in Statistics. Springer, 1996.
* [67] A. W. Van der Vaart. _Asymptotic statistics_, volume 3. Cambridge university press, 2000.
* [68] A. Wehenkel and G. Louppe. Unconstrained monotonic neural networks. _Advances in neural information processing systems_, 32, 2019.
* [69] A. Wienke. _Frailty models in survival analysis_. Chapman and Hall/CRC, 2010.
* [70] W. H. Wong and X. Shen. Probability inequalities for likelihood ratios and convergence rates of sieve mles. _The Annals of Statistics_, pages 339-362, 1995.
* [71] W. C.-H. Wu, M.-Y. Yeh, and M.-S. Chen. Predicting winning price in real time bidding with censored data. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pages 1305-1314, 2015.
* [72] D. Yarotsky. Error bounds for approximations with deep relu networks. _Neural Networks_, 94:103-114, 2017.
* [73] Z. Ying. A large sample study of rank estimation for censored regression data. _The Annals of Statistics_, pages 76-99, 1993.
* [74] D. Zeng and D. Lin. Efficient estimation of semiparametric transformation models for counting processes. _Biometrika_, 93(3):627-640, 2006.
* [75] Q. Zhong, J. W. Mueller, and J.-L. Wang. Deep extended hazard models for survival analysis. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, _Advances in Neural Information Processing Systems_, volume 34, pages 15111-15124. Curran Associates, Inc., 2021.
* [76] Q. Zhong, J. W. Mueller, and J.-L. Wang. Deep learning for the partially linear cox model. _The Annals of Statistics_, 2021.
Examples of frailty specifications
We list several commonly used frailty models, and specify their corresponding characteristics via their frailty transform \(G_{\theta}\):
**Gamma frailty:** Arguably the gamma frailty is the most widely used frailty model [48, 49, 53, 69, 21], with
\[G_{\theta}(x)=\frac{1}{\theta}\log(1+\theta x),\theta\geq 0. \tag{12}\]
When \(\theta=0\), \(G_{0}(x)=\lim_{\theta\to 0}G_{\theta}(x)\) is defined as the (pointwise) limit. A notable fact of the gamma frailty specification is that when the proportional frailty (PF) assumption (2) is met, if \(\theta=0\), the model degenerates to CoxPH. Otherwise if \(\theta=1\), the model corresponds to the proportional odds (PO) model [3].
**Box-Cox transformation frailty:** Under this specification, we have
\[G_{\theta}(x)=\frac{(1+x)^{\theta}-1}{\theta},\theta\geq 0. \tag{13}\]
The case of \(\theta=0\) is defined analogously to that of gamma frailty, which corresponds to the PO model under the PF assumption. When \(\theta=1\), the model reduces to CoxPH under the PF assumption.
**IGG\((\alpha)\) frailty:** This is an extension of gamma frailty [45] and includes other types of frailty specifications like the inverse gaussian frailty [35], with
\[G_{\theta}(x)=\frac{1-\alpha}{\alpha\theta}\left[\left(1+\frac{ \theta x}{1-\alpha}\right)^{\alpha}-1\right],\theta\geq 0,\alpha\in[0,1). \tag{14}\]
In the one-dimensional parameter paradigm, the parameter \(\alpha\) is assumed known instead of being learnable. When \(\alpha=1/2\), we obtain the gamma frailty model. When \(\alpha\to 0\), the limit corresponds to the inverse Gaussian frailty.
**Satistiability of regularity condition 4.5** In [45, Proposition 1], the authors verified the regularity condition of gamma and \(\text{IGG}(\alpha)\) frailties. Using a similar argument, it is straightforward to verify the regularity of Box-Cox transformation frailty.
## Appendix B Proofs of theorems
### Preliminary
Additional definitionsThe theory of empirical processes [66] will be involved heavily in the proof. Therefore we briefly introduce some common notations: For a function class \(\mathcal{F}\), define \(N\left(\epsilon,\mathcal{F},\|\cdot\|\right)\) to be the covering number of \(\mathcal{F}\) with respect to norm \(\|\cdot\|\) under radius \(\epsilon\), and define \(N_{\|}\left(\epsilon,\mathcal{F},\|\cdot\|\right)\) to be the bracketing number of \(\mathcal{F}\) with respect to norm \(\|\cdot\|\) under radius \(\epsilon\). We use VC (\(\mathcal{F}\)) to denote the VC-dimension of \(\mathcal{F}\). Moreover, we use the notation \(a\lesssim b\) to denote \(a\leq Cb\) for some positive constant \(C\).
Before proving theorem 4.6 and 4.7, we introduce some additional notations that will be useful throughout the proof process.
In the PF scheme, define
\[l(T,\delta,Z;h,m,\theta)= \delta\log g_{\theta}\left(e^{m(Z)}\int_{0}^{T}e^{h(s)}ds\right) +\delta h(T)+\delta m(Z)\] \[-G_{\theta}\left(e^{m(Z)}\int_{0}^{T}e^{h(s)}ds\right),\]
where we denote \(g_{\theta}=G^{\prime}(\theta)\). Under the definition of the sieve space stated in condition 4.3, we restate the parameter estimates as
\[\left(\widehat{h}_{n},\widehat{m}_{n},\widehat{\theta}_{n}\right)=\operatorname {argmax}_{\widehat{h}\in\mathcal{H}_{n},\widehat{m}\in\mathcal{M}_{n},\theta \in\Theta}\frac{1}{n}\sum_{i\in[n]}l(T_{i},\delta_{i},Z_{i};\widehat{h}, \widehat{m},\theta).\]Similarly, in the FN scheme, we define
\[l(T,\delta,Z;\nu,\theta)=\delta\log g_{\theta}\left(\int_{0}^{T}e^{\nu(s,Z)}ds \right)+\delta\nu(T,Z)-G_{\theta}\left(\int_{0}^{T}e^{\nu(s,Z)}ds\right)\]
Under the definition of the sieve space stated in condition 4.4, we restate the parameter estimates as
\[\left(\widehat{\nu}_{n}(t,z),\widehat{\theta}_{n}\right)=\operatorname*{argmax }_{\widehat{\nu}\in\mathcal{V}_{n},\theta\in\Theta}\frac{1}{n}\sum_{i\in[n]}l( T_{i},\delta_{i},Z_{i};\widehat{\nu},\theta).\]
We denote the conditional density function and survival function of the event time \(\tilde{T}\) given \(Z\) by \(f_{\tilde{T}|Z}(t)\) and \(S_{\tilde{T}|Z}(t)\), respectively. Similarly, we denote the conditional density function and survival function of the censoring time \(C\) given \(Z\) by \(f_{C|Z}(t)\) and \(S_{C|Z}(t)\). Under the assumption that \(\tilde{T}\perp C\mid Z\), the joint conditional density of the observed time \(T\) and the censoring indicator \(\delta\) given \(Z\) can be expressed as the following:
\[p(T,\delta\mid Z) = f_{\tilde{T}|Z}(T)^{\delta}S_{\tilde{T}|Z}(T)^{1-\delta}f_{C|Z}( T)^{1-\delta}S_{C|Z}(T)^{\delta}\] \[= \lambda_{\tilde{T}|Z}(T)^{\delta}S_{\tilde{T}|Z}(T)f_{C|Z}(T)^{1 -\delta}S_{C|Z}(T)^{\delta},\]
where \(\lambda_{\tilde{T}|Z}(T)\) is the conditional hazard function of the survival time \(\tilde{T}\) given \(Z\).
Under the model assumption of PF scheme, \(p(T,\delta\mid Z)\) can be expressed by
\[p(T,\delta\mid Z;h,m,\theta)=\exp\left(l(T,\delta,Z;h,m,\theta)\right)f_{C|Z}(T )^{1-\delta}S_{C|Z}(T)^{\delta}.\]
For \(\phi_{0}=(h_{0},m_{0},\theta_{0})\) and an estimator \(\widehat{\phi}=(\widehat{h},\widehat{m},\widehat{\theta})\), the defined distance \(d_{\textsf{PF}}\left(\widehat{\phi},\phi_{0}\right)\) can be explicitly expresses by
\[d_{\textsf{FN}}\left(\widehat{\psi},\psi_{0}\right)=\sqrt{\mathbb{E}_{Z}\left[ \int\left|\sqrt{p(T,\delta\mid Z;\widehat{h},\widehat{m},\widehat{\theta})}- \sqrt{p(T,\delta\mid Z;h_{0},m_{0},\theta_{0})}\right|^{2}\mu(dT\times d \delta)\right]}.\]
Here the dominating measure \(\mu\) is defined such that for any (measurable) function \(r(T,\delta)\)
\[\int r(T,\delta)\mu(dT\times d\delta)=\int_{0}^{\tau}r(T,\delta=1)dT+\int_{0}^ {\tau}r(T,\delta=0)dT\]
Under the model assumption of FN scheme, \(p(T,\delta\mid Z)\) can be expressed by
\[p(T,\delta\mid Z;\nu,\theta)=\exp\left(l(T,\delta,Z;\nu,\theta)\right)f_{C|Z}( T)^{1-\delta}S_{C|Z}(T)^{\delta}.\]
For \(\psi_{0}=(\nu_{0},\theta_{0})\) and an estimator \(\widehat{\psi}=(\widehat{\nu},\widehat{\theta})\), the defined distance \(d_{\textsf{FN}}\left(\widehat{\psi},\psi_{0}\right)\) can be explicitly expresses by
\[d_{\textsf{FN}}\left(\widehat{\psi},\psi_{0}\right)=\sqrt{\mathbb{E}_{Z}\left[ \int\left|\sqrt{p(T,\delta\mid Z;\widehat{\nu},\widehat{\theta})}-\sqrt{p(T, \delta\mid Z;\nu_{0},\theta_{0})}\right|^{2}\mu(dT\times d\delta)\right]}.\]
### Technical lemmas
The following lemmas are needed for the proof of Theorem 4.6 and 4.7. Hereafter for notational convenience, we will use \(\widehat{h},\widehat{m}\) for arbitrary elements in the corresponding sieve space listed in condition 4.3, \(\widehat{\nu}\) for an arbitrary element in the sieve space listed in condition 4.4, and \(\widehat{\theta}\) for an arbitrary element in \(\Theta\).
**Lemma B.1**.: _Under condition 4.1, 4.3, 4.5, for \((T,\delta,Z)\in[0,\tau]\times\{0,1\}\times[-1,1]^{d}\), the following terms are bounded:_1. \(l(T,\delta,Z;h_{0},m_{0},\theta_{0})\) _with true parameter_ \((h_{0},m_{0},\theta_{0})\)__
2. \(l(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})\) _with parameter estimates_ \((\widehat{h},\widehat{m},\widehat{\theta})\) _in any sieve space listed in condition_ 4.3_._
**Lemma B.2**.: _Under condition 4.2, 4.4, 4.5, for \((T,\delta,Z)\in[0,\tau]\times\{0,1\}\times[-1,1]^{d}\), the following terms are bounded:_
1. \(l(T,\delta,Z;\nu_{0},\theta_{0})\) _with true parameter_ \((\nu_{0},\theta_{0})\)__
2. \(l(T,\delta,Z;\widehat{\nu},\widehat{\theta})\) _with parameter estimates_ \((\widehat{\nu},\widehat{\theta})\) _in any sieve space listed in condition_ 4.4_._
**Lemma B.3**.: _Under condition 4.1, 4.3, 4.5, let \((\widehat{h},\widehat{m},\widehat{\theta})\), \((\widehat{h}_{1},\widehat{m}_{1},\widehat{\theta}_{1})\), and \((\widehat{h}_{2},\widehat{m}_{2},\widehat{\theta}_{2})\) be arbitrary three parameter triples inside the sieve space defined in condition 4.3, the following two inequalities hold._
\[\|l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})\|_{\infty}\lesssim|\theta_{0}-\widehat{\theta} |+\|h_{0}-\widehat{h}\|_{\infty}+\|m_{0}-\widehat{m}\|_{\infty}\] \[\|l(T,\delta,Z;\widehat{h}_{1},\widehat{m}_{1},\widehat{\theta}_{ 1})-l(T,\delta,Z;\widehat{h}_{2},\widehat{\theta}_{2})\|_{\infty}\lesssim| \widehat{\theta}_{1}-\widehat{\theta}_{2}|+\|\widehat{h}_{1}-\widehat{h}_{2}\| _{\infty}+\|\widehat{m}_{1}-\widehat{m}_{2}\|_{\infty}.\]
**Lemma B.4**.: _Under condition 4.2, 4.4, 4.5, let \((\widehat{\nu},\widehat{\theta})\), \((\widehat{\nu}_{1},\widehat{\theta}_{1})\), and \((\widehat{\nu}_{2},\widehat{\theta}_{2})\) be arbitrary three parameter tuples inside the sieve space defined in condition 4.4., the following inequalities hold._
\[\|l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu}, \widehat{\theta})\|_{\infty}\lesssim|\theta_{0}-\widehat{\theta}|+\|\nu_{0}- \widehat{\nu}\|_{\infty}\] \[\|l(T,\delta,Z;\widehat{\nu}_{1},\widehat{\theta}_{1})-l(T,\delta,Z;\widehat{\nu}_{2},\widehat{\theta}_{2})\|_{\infty}\lesssim|\widehat{\theta} _{1}-\widehat{\theta}_{2}|+\|\widehat{\nu}_{1}-\widehat{\nu}_{2}\|_{\infty}.\]
**Lemma B.5** (Approximating error of PF scheme).: _In the PF scheme, for any \(n\), there exists an element in the corresponding sieve space \(\pi_{n}\phi_{0}=(\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0})\), satisfying \(d_{\mathsf{PF}}\left(\pi_{n}\phi_{0},\phi_{0}\right)=O\left(n^{-\frac{\beta}{ \beta+d}}\right)\)._
**Lemma B.6** (Approximating error of FN scheme).: _In the FN scheme, for any \(n\), there exists an element in the corresponding sieve space \(\pi_{n}\psi=(\pi_{n}\nu_{0},\pi_{n}\theta_{0})\) satisfying \(d_{\mathsf{FN}}\left(\pi_{n}\psi_{0},\psi_{0}\right)=O\left(n^{-\frac{\beta}{ \beta+d+1}}\right)\)._
**Lemma B.7**.: _Suppose \(\mathcal{F}\) is a class of functions satisfying that \(N(\varepsilon,\mathcal{F},\|\cdot\|)<\infty\) for \(\forall\varepsilon>0\). We define \(\widetilde{N}(\varepsilon,\mathcal{F},\|\cdot\|)\) to be the minimal number of \(\varepsilon\)-balls \(B(f,\varepsilon)=\{g:\|g-f\|<\varepsilon\}\) needed to cover \(\mathcal{F}\) with radius \(\varepsilon\) and further constrain that \(f\in\mathcal{F}\). Then we have_
\[N(\varepsilon,\mathcal{F},\|\cdot\|)\leq\widetilde{N}(\varepsilon, \mathcal{F},\|\cdot\|)\leq N(\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|).\]
**Lemma B.8**.: _Suppose \(\mathcal{F}\) is a class of functions satisfying that \(N_{[\![}\varepsilon,\mathcal{F},\|\cdot\|_{\infty})<\infty\) for \(\forall\varepsilon>0\). We define \(\widetilde{N}_{\widetilde{\mathbb{I}}}(\varepsilon,\mathcal{F},\|\cdot\|_{ \infty})\) to be the minimal number of brackets \([l,u]\) needed to cover \(\mathcal{F}\) with \(\|l-u\|_{\infty}<\varepsilon\) and further constrain that \(f\in\mathcal{F}\), \(l=f-\frac{\varepsilon}{2}\) and \(u=f+\frac{\varepsilon}{2}\). Then we have_
\[N_{[\![}\varepsilon,\mathcal{F},\|\cdot\|_{\infty})\leq\widetilde{N}_{[\![} \varepsilon,\mathcal{F},\|\cdot\|_{\infty})\leq N_{[\![}(\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|_{\infty})\]
_Furthermore, we have \(\widetilde{N}_{[\![}\varepsilon,\mathcal{F},\|\cdot\|_{\infty})=\widetilde{N} (\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|_{\infty})\)._
**Lemma B.9** (Model capacity of PF scheme).: _Let \(\mathcal{F}_{n}=\{l(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta}):\widehat {h}\in\mathcal{H}_{n},\widehat{m}\in\mathcal{M}_{n},\widehat{\theta}\in \Theta\}\). Under condition 4.5, with \(s_{h}=\frac{2\beta}{2\beta+d+1}\) and \(s_{m}=\frac{2\beta}{2\beta+d}\), there exist constants \(c_{h}\) and \(c_{m}>0\) such that_
\[N_{[\![}\varepsilon,\mathcal{F}_{n},\|\cdot\|_{\infty})\lesssim\frac{1}{ \varepsilon}N(c_{h}\varepsilon^{1/s_{h}},\mathcal{H}_{n},\|\cdot\|_{2})\times N (c_{m}\varepsilon^{1/s_{m}},\mathcal{M}_{n},\|\cdot\|_{2}).\]
**Lemma B.10** (Model capacity of FN scheme).: _Let \(\mathcal{G}_{n}=\{l(T,\delta,Z;\widehat{\nu},\widehat{\theta}):\widehat{\nu}\in \mathcal{V}_{n},\widehat{\theta}\in\Theta\}\). Under condition 4.5, with \(s_{\nu}=\frac{2\beta}{2\beta+d+1}\), there exists a constant \(c_{\nu}>0\) such that_
\[N_{[\![}\varepsilon,\mathcal{G}_{n},\|\cdot\|_{\infty})\lesssim\frac{1}{ \varepsilon}N(c_{\nu}\varepsilon^{1/s_{\nu}},\mathcal{V}_{n},\|\cdot\|_{2}).\]
### Proofs of theorem 4.6 and 4.7
Proof of theorem 4.6.: The proof is divided into four steps.
Step \(1\)We denote \(\phi_{0}=(h_{0},m_{0},\theta_{0})\) and \(\widehat{\phi}=(\widehat{h},\widehat{m},\widehat{\theta})\), where \(\widehat{h}\in\mathcal{H}_{n}\),\(\widehat{m}\in\mathcal{M}_{n}\) and \(\widehat{\theta}\in\Theta\). For arbitrary small \(\varepsilon>0\), we have that
\[\inf_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\geq\varepsilon} \mathbb{E}\left[l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})\right]\] \[=\inf_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\geq\varepsilon} \mathbb{E}_{Z}\left[\mathbb{E}_{T,\delta|Z}\left[\log p(T,\delta\mid Z;h_{0},m _{0},\theta_{0})-\log p(T,\delta\mid Z;\widehat{h},\widehat{m},\widehat{\theta })\right]\right]\] \[=\inf_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\geq\varepsilon} \mathbb{E}_{Z}\left[\text{D}_{\text{KL}}\left(\mathbb{P}_{\widehat{\phi},Z} \parallel\mathbb{P}_{\phi_{0},Z}\right)\right]\]
Using the fact that \(\text{D}_{\text{KL}}\left(\mathbb{P}_{\widehat{\phi},Z}\parallel\mathbb{P}_{ \phi_{0},Z}\right)\geq 2H^{2}(\mathbb{P}_{\widehat{\phi},Z}\parallel \mathbb{P}_{\phi_{0},Z})\). Thus, we further obtain that
\[\inf_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\geq\varepsilon} \mathbb{E}\left[l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})\right]\] \[\geq\inf_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\geq\varepsilon} 2\mathbb{E}_{Z}\left[H^{2}(\mathbb{P}_{\widehat{\phi},Z}\parallel\mathbb{P}_{ \phi_{0},Z})\right]\] \[=2\inf_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\geq\varepsilon}d_{ \text{PF}}^{2}\left(\widehat{\phi},\phi_{0}\right)\] \[\geq 2\varepsilon^{2}.\]
Step \(2\)Consider the following derivation.
\[\sup_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\leq\varepsilon} \text{Var}\left[l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})\right]\] \[\leq\sup_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\leq\varepsilon} \mathbb{E}\left[\left(l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z; \widehat{h},\widehat{m},\widehat{\theta})\right)^{2}\right]\] \[=\sup_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\leq\varepsilon} \mathbb{E}_{Z}\mathbb{E}_{T,\delta|Z}\left[\log p(T,\delta,Z;h_{0},m_{0},\theta _{0})-\log p(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})\right]^{2}\] \[=4\sup_{d_{\text{PF}}(\widehat{\phi},\phi_{0})\leq\varepsilon} \mathbb{E}_{Z}\left[\int\left(p(T,\delta,Z;h_{0},m_{0},\theta_{0})\left(\log \sqrt{\frac{p(T,\delta,Z;h_{0},m_{0},\theta_{0})}{p(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})}}\right)^{2}\right)\mu(dT\times d\delta)\right]\]
By Taylor's expansion on \(\log x\), there exists \(\xi(T,\delta,Z)\) between \(p^{\frac{1}{2}}(T,\delta,Z;h_{0},m_{0},\theta_{0})\) and \(p^{\frac{1}{2}}(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})\) pointwisely such that
\[p(T,\delta,Z;h_{0},m_{0},\theta_{0})\left(\log\sqrt{\frac{p(T, \delta,Z;h_{0},m_{0},\theta_{0})}{p(T,\delta,Z;\widehat{h},\widehat{m}, \widehat{\theta})}}\right)^{2}\] \[=p(T,\delta,Z;h_{0},m_{0},\theta_{0})\left(\log\sqrt{p(T,\delta,Z ;h_{0},m_{0},\theta_{0})}-\log\sqrt{p(T,\delta,Z;\widehat{h},\widehat{m}, \widehat{\theta})}\right)^{2}\] \[=\frac{p(T,\delta,Z;h_{0},m_{0},\theta_{0})}{\xi(T,\delta,Z)^{2}} \left(\sqrt{p(T,\delta,Z;h_{0},m_{0},\theta_{0})}-\sqrt{p(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})}\right)^{2}\]
Since
\[\frac{p(T,\delta,Z;h_{0},m_{0},\theta_{0})}{p(T,\delta,Z;\widehat{h},\widehat{m },\widehat{\theta})}=e^{l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z; \widehat{h},\widehat{m},\widehat{\theta})}\]
by lemma B.1, \(l(T,\delta,Z;h_{0},m_{0},\theta_{0})\) and \(l(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})\) are bounded among\([0,\tau]\times\{0,1\}\times[-1,1]^{d}\) uniformly on all \(\widehat{\phi}=(\widehat{h},\widehat{m},\widehat{\theta})\). Thus, there exist constants \(C_{1}\) and \(C_{2}\) such that \(0<C_{1}\leq p(T,\delta,Z;h_{0},m_{0},\theta_{0})/p(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})\leq C_{2}\). This leads to the fact that \(p(T,\delta,Z;h_{0},m_{0},\theta_{0})\frac{1}{\xi(T,\delta,Z)^{2}}\) is bounded. We further obtained that
\[p(T,\delta,Z;h_{0},m_{0},\theta_{0})\left(\log\sqrt{p(T,\delta,Z;h_{0},m_{0}, \theta_{0})}-\log\sqrt{p(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})} \right)^{2}\]
\[\lesssim\left|\sqrt{p(T,\delta,Z;h_{0},m_{0},\theta_{0})}-\sqrt{p(T,\delta,Z; \widehat{h},\widehat{m},\widehat{\theta})}\right|^{2}.\]
Thus, we have that
\[\sup_{d_{\mathsf{PF}}\left[\widehat{\phi},\phi_{0}\right]\leq \varepsilon}\text{Var}(l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z; \widehat{h},\widehat{m},\widehat{\theta}))\] \[\lesssim\sup_{d_{\mathsf{PF}}\left(\widehat{\phi},\phi_{0}\right) \leq\varepsilon}\mathbb{E}_{Z}\left[\int\left|\sqrt{p(T,\delta,Z;h_{0},m_{0}, \theta_{0})}-\sqrt{p(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})} \right|^{2}\mu(dT\times d\delta)\right]\] \[=\sup_{d_{\mathsf{PF}}\left(\widehat{\phi},\phi_{0}\right)\leq \varepsilon}d_{\mathsf{PF}}\left(\widehat{\phi},\phi_{0}\right)\] \[\leq\varepsilon^{2}.\]
Step \(3\)We define that \(\widetilde{\mathcal{F}}_{n}=\{l(T,\delta,Z;\widehat{h},\widehat{m},\widehat{ \theta})-l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0}):\widehat{h} \in\mathcal{H}_{n},\widehat{m}\in\mathcal{M}_{n},\widehat{\theta}\in\Theta\}\). Here \((\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0})\) have been defined in lemma B.5. Obviously, we have that \(\log N_{\|}(\varepsilon,\widetilde{\mathcal{F}}_{n},\|\cdot\|_{\infty})=\log N _{\|}(\varepsilon,\mathcal{F}_{n},\|\cdot\|_{\infty})\), where \(\mathcal{F}\) is defined in lemma B.9. By lemma B.9, we further have that
\[\log N_{\|}(\varepsilon,\mathcal{F}_{n},\|\cdot\|_{\infty})\lesssim\log\frac{ 1}{\varepsilon}+\log N(c_{h}\varepsilon^{1/s_{h}},\mathcal{H}_{n},\|\cdot\|_{ 2})+\log N(c_{m}\varepsilon^{1/s_{m}},\mathcal{M}_{n},\|\cdot\|_{2}).\]
According to [2, Theorem 7], under condition 4.3, we have that the VC-dimension of \(\mathcal{H}_{n}\) and \(\mathcal{M}_{n}\) satisfy that \(\text{VC}\left(\mathcal{H}_{n}\right)\lesssim n^{\frac{1}{\beta+2}}\log^{3}n\) and \(\text{VC}\left(\mathcal{M}_{n}\right)\lesssim n^{\frac{1}{\beta+2}}\log^{3}n\). Thus, we obtain that
\[\log N(c_{h}\varepsilon^{1/s_{h}},\mathcal{H}_{n},\|\cdot\|_{2})\lesssim\frac {\text{VC}\left(\mathcal{H}_{n}\right)}{s_{h}}\log\frac{1}{\varepsilon}\lesssim n ^{\frac{1}{\beta+2}}\log^{3}n\log\frac{1}{\varepsilon},\]
and
\[\log N(c_{m}\varepsilon^{1/s_{m}},\mathcal{M}_{n},\|\cdot\|_{2})\lesssim\frac {\text{VC}\left(\mathcal{M}_{n}\right)}{s_{\nu}}\log\frac{1}{\varepsilon} \lesssim n^{\frac{d}{\beta+2}}\log^{3}n\log\frac{1}{\varepsilon}.\]
Thus, we obtain that \(\log N_{\|}(\varepsilon,\widetilde{\mathcal{F}}_{n},\|\cdot\|_{\infty}) \lesssim n^{\frac{d}{\beta+2}}\log^{3}n\log\frac{1}{\varepsilon}\).
Step \(4\)By the Cauchy-Schwartz inequality, we have that
\[\sqrt{\mathbb{E}\left[l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0 })-l(T,\delta,Z;h_{0},m_{0},\theta_{0})\right]}\]
\[\leq\left[\mathbb{E}(l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0}) -l(T,\delta,Z;h_{0},m_{0},\theta_{0}))^{2}\right]^{\frac{1}{4}}.\]
Similar to the second part and by lemma B.5, we further have that
\[\sqrt{\mathbb{E}\left[l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0 })-l(T,\delta,Z;h_{0},m_{0},\theta_{0})\right]}\lesssim\sqrt{d_{\mathsf{PF}} \left(\pi_{n}\phi_{0},\phi_{0}\right)}\lesssim n^{-\frac{\beta}{2\beta+2}}.\]
Now let
\[\tau=\frac{\beta}{2\beta+2d}-2\frac{\log\log n}{\log n}\]
By Step 1,2,3 and [60, Theorem 1], we have
\[d_{\mathsf{PF}}\left(\widehat{\phi}_{n},\phi_{0}\right)=\max\left(n^{-\tau},d _{\mathsf{PF}}\left(\pi_{n}\phi_{0},\phi_{0}\right),\right.\]
\[\left.\sqrt{\mathbb{E}\left[l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n} \theta_{0})-l(T,\delta,Z;h_{0},m_{0},\theta_{0})\right]}\right)\]
By lemma B.5, \(d_{\mathsf{PF}}\left(\pi_{n}\phi_{0},\phi_{0}\right)=O(n^{-\frac{\beta}{2\beta+ 2}})\).
By Step 4, \(\sqrt{\mathbb{E}\left[l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0}) -l(T,\delta,Z;h_{0},m_{0},\theta_{0})\right]}=O\left(n^{-\frac{\beta}{2\beta+ 2d}}\right)\). Thus, we have \(d_{\mathsf{PF}}\left(\widehat{\phi}_{n},\phi_{0}\right)=O(n^{-\frac{\beta}{2 \beta+2d}}\log^{2}n)=\widetilde{O}(n^{-\frac{\beta}{2\beta+2d}})\).
Proof of theorem 4.7.: The proof is divided into four steps.
Step \(1\)We denote \(\psi_{0}=(\nu_{0},\theta_{0})\) and \(\widehat{\psi}=(\widehat{\nu},\widehat{\theta})\), where \(\widehat{\nu}\in\mathcal{V}_{n}\) and \(\widehat{\theta}\in\Theta\). For arbitrary \(0<\varepsilon\leq 1\), we have that
\[\inf_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\geq \varepsilon}\mathbb{E}\left[l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z; \widehat{\nu},\widehat{\theta})\right]\] \[=\inf_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\geq \varepsilon}\mathbb{E}_{Z}\left[\mathbb{E}_{T,\delta|Z}\left[\log p(T,\delta \mid Z;\nu_{0},\theta_{0})-\log p(T,\delta\mid Z;\widehat{\nu},\widehat{\theta })\right]\right]\] \[=\inf_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\geq \varepsilon}\mathbb{E}_{Z}\left[\mathbb{D}_{\text{KL}}\left(\mathbb{P}_{ \widehat{\psi},Z}\parallel\parallel\mathbb{P}_{\psi_{0},Z}\right)\right]\]
Using the fact that \(KL(\mathbb{P}_{\widehat{\psi},Z}\parallel\mathbb{P}_{\psi_{0},Z})\geq 2H^{2}( \mathbb{P}_{\widehat{\psi},Z}\parallel\mathbb{P}_{\psi_{0},Z})\). Thus, we further obtain that
\[\inf_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\geq \varepsilon}\mathbb{E}\left[l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z; \widehat{\nu},\widehat{\theta})\right]\] \[\geq\inf_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\geq \varepsilon}2\mathbb{E}_{Z}\left[H^{2}(\mathbb{P}_{\widehat{\varphi},Z} \parallel\mathbb{P}_{\psi_{0},Z})\right]\] \[=2\inf_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\geq \varepsilon}d_{\text{rn}}^{2}\left(\widehat{\psi},\psi_{0}\right)\] \[\geq 2\varepsilon^{2}.\]
Step \(2\)We consider the following derivation.
\[\sup_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\leq \varepsilon}\text{Var}\left[l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z; \widehat{\nu},\widehat{\theta})\right]\] \[\leq\sup_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\leq \varepsilon}\mathbb{E}\left[\left(l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu},\widehat{\theta})\right)^{2}\right]\] \[=4\sup_{d_{\text{rn}}\left(\widehat{\psi},\psi_{0}\right)\leq \varepsilon}\mathbb{E}_{Z}\left[\int\left(p(T,\delta,Z;\nu_{0},\theta_{0})( \log\sqrt{\frac{p(T,\delta,Z;\nu_{0},\theta_{0})}{p(T,\delta,Z;\widehat{\nu},\widehat{\theta})}})^{2}\right)\mu(dT\times d\delta)\right]\]
By Taylor's expansion on \(\log x\), there exists \(\eta(T,\delta,Z)\) between \(\sqrt{p(T,\delta,Z;\nu_{0},\theta_{0})}\) and \(\sqrt{p(T,\delta,Z;\widehat{\nu},\widehat{\theta})}\) pointwisely such that
\[p(T,\delta,Z;\nu_{0},\theta_{0})(\log\sqrt{\frac{p(T,\delta,Z; \nu_{0},\theta_{0})}{p(T,\delta,Z;\widehat{\nu},\widehat{\theta})}})^{2}\] \[=p(T,\delta,Z;\nu_{0},\theta_{0})\left(\log\sqrt{p(T,\delta,Z; \nu_{0},\theta_{0})}-\log\sqrt{p(T,\delta,Z;\widehat{\nu},\widehat{\theta})} \right)^{2}\] \[=\frac{p(T,\delta,Z;\nu_{0},\theta_{0})}{\eta(T,\delta,Z)^{2}} \left(\sqrt{p(T,\delta,Z;\nu_{0},\theta_{0})}-\sqrt{p(T,\delta,Z;\widehat{ \nu},\widehat{\theta})}\right)^{2}\]
Since \(p(T,\delta,Z;\nu_{0},\theta_{0})/p(T,\delta,Z;\widehat{\nu},\widehat{\theta})= e^{l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu},\widehat{\theta})}\), by lemma B.2, \(l(T,\delta,Z;\nu_{0},\theta_{0})\) and \(l(T,\delta,Z;\widehat{\nu},\widehat{\theta})\) are bounded on \([0,\tau]\times\{0,1\}\times[-1,1]^{d}\) uniformly for all \(\widehat{\psi}=(\widehat{\nu},\widehat{\theta})\). Thus there exist constants \(C_{3}\) and \(C_{4}\) such that \(0<C_{3}\leq p(T,\delta,Z;\nu_{0},\theta_{0})/p(T,\delta,Z;\widehat{\nu}, \widehat{\theta})\leq C_{4}\). This leads to the fact that \(p(T,\delta,Z;\nu_{0},\theta_{0})\frac{1}{\eta(T,\delta,Z)^{2}}\) is bounded. We further have that
\[p(T,\delta,Z;\nu_{0},\theta_{0})\left(\log\sqrt{p(T,\delta,Z; \nu_{0},\theta_{0})}-\log\sqrt{p(T,\delta,Z;\widehat{\nu},\widehat{\theta})} \right)^{2}\] \[\lesssim\left|\sqrt{p(T,\delta,Z;\nu_{0},\theta_{0})}-\sqrt{p(T, \delta,Z;\widehat{\nu},\widehat{\theta})}\right|^{2}.\]Thus, we have that
\[\sup_{d_{\mathsf{FN}}(\widehat{\varphi},\psi_{0})\leq\varepsilon} \mathsf{Var}\left[l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu}, \widehat{\theta})\right]\] \[\lesssim\sup_{d_{\mathsf{FN}}(\widehat{\varphi},\psi_{0})\leq \varepsilon}\mathbb{E}_{Z}\left[\int\left|\sqrt{p(T,\delta,Z;\nu_{0},\theta_{0 })}-\sqrt{p(T,\delta,Z;\widehat{\nu},\widehat{\theta})}\right|^{2}\mu(dT\times d \delta)\right]\] \[=\sup_{d_{\mathsf{FN}}(\widehat{\varphi},\psi_{0})\leq\varepsilon} d_{\mathsf{FN}}\left(\widehat{\psi},\psi_{0}\right)\] \[\leq\varepsilon^{2}.\]
Step \(3\)We define that \(\widetilde{\mathcal{G}}_{n}=\{l(T,\delta,Z;\widehat{\nu},\widehat{\theta})-l (T,\delta,Z;\pi_{n}\nu_{0},\pi_{n}\theta_{0}):\widehat{\nu}\in\mathcal{V}_{n}, \theta\in\Theta\}\). Here \((\pi_{n}\nu_{0},\pi_{n}\theta_{0})\) have been defined in lemma B.6. Obviously, we have that \(\log N_{\llbracket}(\varepsilon,\widetilde{\mathcal{G}}_{n},\|\cdot\|_{\infty })=\log N_{\llbracket}(\varepsilon,\mathcal{G}_{n},\|\cdot\|_{\infty})\), where \(\mathcal{G}\) is defined in lemma B.10. By lemma B.10, we further obtain that
\[\log N_{\llbracket}(\varepsilon,\mathcal{G}_{n},\|\cdot\|_{\infty})\lesssim \log\frac{1}{\varepsilon}+\log N(c_{\nu}\varepsilon^{1/s_{\nu}},\mathcal{V}_{n },\|\cdot\|_{2}).\]
According to [2, Theorem 7], under condition 4.4, we have that the VC-dimension of \(\mathcal{V}_{n}\) satisfies that \(\mathsf{VC}\left(\mathcal{V}_{n}\right)\lesssim n^{\frac{d+1}{\beta+d+1}}\log ^{3}n\). Thus, we obtain that
\[\log N(c_{h}\varepsilon^{1/s_{\nu}},\mathcal{V}_{n},\|\cdot\|_{2})\lesssim \frac{\mathsf{VC}\left(\mathcal{V}_{n}\right)}{s_{\nu}}\log\frac{1}{\varepsilon }\lesssim n^{\frac{d+1}{\beta+d+1}}\log^{3}n\log\frac{1}{\varepsilon}.\]
Furthermore, we get that \(\log N_{\llbracket}(\varepsilon,\widetilde{\mathcal{G}}_{n},\|\cdot\|_{ \infty})\lesssim n^{\frac{d+1}{\beta+d+1}}\log^{3}n\log\frac{1}{\varepsilon}\).
Step \(4\)By the Cauchy-Schwartz inequality, we have that
\[\sqrt{\mathbb{E}[l(T,\delta,Z;\pi_{n}\nu_{0},\pi_{n}\theta_{0})-l(T,\delta,Z; \nu_{0},\theta_{0})]}\leq\left[\mathbb{E}\left(l(T,\delta,Z;\pi_{n}\nu_{0},\pi _{n}\theta_{0})-l(T,\delta,Z;\nu_{0},\theta_{0})\right)^{2}\right]^{\frac{1} {4}}.\]
Similar to the second part and by lemma B.6, we further obtain that
\[\sqrt{\mathbb{E}[l(T,\delta,Z;\pi_{n}\nu_{0},\pi_{n}\theta_{0})-l(T,\delta,Z; \nu_{0},\theta_{0})]}\lesssim\sqrt{d_{\mathsf{FN}}\left(\pi_{n}\psi_{0},\psi_ {0}\right)}\lesssim n^{-\frac{\beta}{2\beta+2d+2}}\]
Now let
\[\tau=\frac{\beta}{2\beta+2d+2}-2\frac{\log\log n}{\log n}.\]
By step 1,2,3 and Step 1,2,3 and [60, Theorem 1],
\[d_{\mathsf{FN}}\left(\widehat{\psi}_{n},\psi_{0}\right)=\max\left(n^{-\tau}, d_{\mathsf{FN}}\left(\pi_{n}\psi_{0},\psi_{0}\right),\right.\]
\[\left.\sqrt{\mathbb{E}[l(T,\delta,Z;\pi_{n}\nu_{0},\pi_{n}\theta_{0})-l(T, \delta,Z;\nu_{0},\theta_{0})]}\right)\]
By lemmaB.6, \(d_{\mathsf{FN}}\left(\pi_{n}\psi_{0},\psi_{0}\right)=O(n^{-\frac{\beta}{\beta+ d+1}})\)
By Step 4, \(\sqrt{\mathbb{E}[l(T,\delta,Z;\pi_{n}\nu_{0},\pi_{n}\theta_{0})-l(T,\delta,Z; \nu_{0},\theta_{0})]}=O(n^{-\frac{\beta}{2\beta+2d+2}})\). Thus, we have \(d_{\mathsf{FN}}\left(\widehat{\psi}_{n},\psi_{0}\right)=O(n^{-\frac{\beta}{2 \beta+2d+2}}\log^{2}n)=\widetilde{O}(n^{-\frac{\beta}{2\beta+2d+2}})\).
### Proofs of technical lemmas
Proof of lemma b.1.: Since \(h_{0}(T)\in\mathcal{W}_{M}^{\beta}([0,\tau])\) and \(m_{0}(Z)\in\mathcal{W}_{M}^{\beta}([-1,1]^{d})\), we have that \(h_{0}(T)\leq M\), \(m_{0}(Z)\leq M\) and \(e^{m_{0}(Z)}\int_{0}^{T}h_{0}(s)ds\leq\tau e^{2M}\). Let \(\mathcal{B}=[0,\tau e^{2M}]\), we have that
\[|l(T,\delta,Z;h_{0},m_{0},\theta_{0})|\] \[\leq 2M+\sup_{x\in\mathcal{B}}|\log g_{\theta_{0}}(x)|+\sup_{x\in \mathcal{B}}|G_{\theta_{0}}(x)|\]
By condition 5, we have that \(l(T,\delta,Z;h_{0},m_{0},\theta_{0})\) is bounded for \((T,\delta,Z)\in[0,\tau]\times\{0,1\}\times[-1,1]^{d}\). The proof of the boundness of \(l(T,\delta,Z;\widehat{h},\widehat{m},\widehat{\theta})\) is similar.
Proof of lemma b.2.: Since \(\nu_{0}(T,Z)\in\mathcal{W}^{\beta}_{M}([0,\tau]\times[-1,1]^{d})\), we have \(\nu_{0}(T,Z)\leq M\) and \(\int_{0}^{T}e^{\nu(s,Z)}ds\leq\tau e^{M}\). Let \(\mathcal{B}=[0,\tau e^{M}]\), we have that
\[|l(T,\delta,Z;\nu_{0},\theta_{0})|\] \[\leq\left|\log G^{\prime}_{\theta_{0}}\left(\int_{0}^{T}e^{\nu_{0 }(s,Z)}ds\right)\right|+|\nu_{0}(T,Z)|+\left|G_{\theta_{0}}\left(\int_{0}^{T}e ^{\nu_{0}(s,Z)}ds\right)\right|\] \[\leq M+\sup_{x\in\mathcal{B}}\left|\log G^{\prime}_{\theta_{0}}(x )\right|+\sup_{x\in\mathcal{B}}\left|G_{\theta_{0}}(x)\right|.\]
By condition 4.5, we have that \(l(T,\delta,Z;\nu_{0},\theta_{0})\) is bounded among \((T,\delta,Z)\in[0,\tau]\times\{0,1\}\times[-1,1]^{d}\) The proof of the boundness of \(l(T,\delta,Z;\widehat{\nu},\widehat{\theta})\) is similar.
Proof of lemma b.3.: By definition, we have that
\[|l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})|\] \[\leq\left|\log g_{\theta_{0}}\left(e^{m_{0}(Z)}\int_{0}^{T}e^{h_{0 }(s)}ds\right)-\log g_{\widehat{\theta}}\left(e^{\widehat{m}(Z)}\int_{0}^{T}e ^{\widehat{h}(s)}ds\right)\right|+\left|h_{0}(T)-\widehat{h}(T)\right|\] \[+|m_{0}(Z)-\widehat{m}(Z)|+\left|G_{\theta_{0}}\left(e^{m_{0}(Z)} \int_{0}^{T}e^{h_{0}(s)}ds\right)-G_{\widehat{\theta}}\left(e^{\widehat{m}(Z)} \int_{0}^{T}e^{\widehat{h}(s)}ds\right)\right|.\]
Let \(\mathcal{B}=[0,\tau\max(e^{2M},e^{M_{h}+M_{m}})]\). By Taylor's expansion, we can further show that
\[|l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})|\] \[\leq\sup_{\tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial\log g_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{\theta}}} \right|\cdot\left|\theta_{0}-\widehat{\theta}\right|\] \[+\sup_{\tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial\log g_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{x}}} \right|\cdot\left|e^{m_{0}(Z)}\int_{0}^{T}e^{h_{0}(s)}ds-e^{\widehat{m}(Z)} \int_{0}^{T}e^{\widehat{h}(s)}ds\right|\] \[+|h_{0}(T)-\widehat{h}(T)|+|m_{0}(Z)-\widehat{m}(Z)|+\sup_{ \tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left|\frac{\partial G_{ \tilde{\theta}}(\tilde{x})}{\partial_{\tilde{\theta}}}\right|\cdot\left| \theta_{0}-\widehat{\theta}\right|\] \[+\sup_{\tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial G_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{x}}}\right| \cdot\left|e^{m_{0}(Z)}\int_{0}^{T}e^{h_{0}(s)}ds-e^{\widehat{m}(Z)}\int_{0}^ {T}e^{\widehat{h}(s)}ds\right|.\]
Again, by Taylor's expansion, we have that
\[\left|e^{m_{0}(Z)}\int_{0}^{T}e^{h_{0}(s)}ds-e^{\widehat{m}(Z)} \int_{0}^{T}e^{\widehat{h}(s)}ds\right|\] \[\leq\left|e^{m_{0}(Z)}\int_{0}^{T}(e^{h_{0}(s)}-e^{\widehat{h}(s) })ds\right|+\left|(e^{m_{0}(Z)}-e^{\widehat{m}(Z)})\int_{0}^{T}e^{\widehat{h}( s)}ds\right|\] \[\leq e^{M}\cdot\tau e^{\max(M,M_{h})}\left\|h_{0}-\widehat{h} \right\|_{\infty}+\tau e^{M_{h}}\cdot e^{\max(M,M_{m})}\|m_{0}-\widehat{m}\|_ {\infty}.\]
Finally, we obtain that
\[|l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h}, \widehat{m},\widehat{\theta})|\] \[\leq\sup_{\tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial\log g_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{x}}} \right|\cdot\left[e^{M}\cdot\tau e^{\max(M,M_{h})}\|h_{0}-\widehat{h}\|_{\infty }+\tau e^{M_{h}}\cdot e^{\max(M,M_{m})}\|m_{0}-\widehat{m}\|_{\infty}\right]\] \[+\sup_{\tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial\log g_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{\theta}}} \right|\cdot\left|\theta_{0}-\widehat{\theta}\right|+\left|h_{0}(T)-\widehat{h }(T)\right|+|m_{0}(Z)-\widehat{m}(Z)|\] \[+\sup_{\tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial G_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{x}}}\right| \cdot\left|\theta_{0}-\widehat{\theta}\right|\] \[+\sup_{\tilde{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial G_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{x}}}\right|\cdot \left[e^{M}\cdot\tau e^{\max(M,M_{h})}\|h_{0}-\widehat{h}\|_{\infty}+\tau e^{ M_{h}}\cdot e^{\max(M,M_{m})}\|m_{0}-\widehat{m}\|_{\infty}\right].\]Taking supremum on both sides, we conclude that
\[\|l(T,\delta,Z;h_{0},m_{0},\theta_{0})-l(T,\delta,Z;\widehat{h},\widehat{m}, \widehat{\theta})\|_{\infty}\lesssim|\theta_{0}-\widehat{\theta}|+\|h_{0}- \widehat{h}\|_{\infty}+\|m_{0}-\widehat{m}\|_{\infty}.\]
The proof of the second inequality is similar.
Proof of lemma b.4.: By definition, we have that
\[|l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu}, \widehat{\theta})|\] \[\leq\left|\log g_{\theta_{0}}\left(\int_{0}^{T}e^{\nu_{0}(s,Z)}ds \right)-\log g_{\widehat{\theta}}\left(\int_{0}^{T}e^{\widehat{\nu}(s,Z)}ds \right)\right|+|\nu_{0}(T,Z)-\widehat{\nu}(T,Z)|\] \[+\left|G_{\theta_{0}}\left(\int_{0}^{T}e^{\nu_{0}(s,Z)}ds\right)- G_{\widehat{\theta}}\left(\int_{0}^{T}e^{\widehat{\nu}(s,Z)}ds\right)\right|.\]
Let \(\mathcal{B}=[0,\tau\max(e^{M_{\nu}},e^{M_{\nu}})]\). By Taylor's expansion, we can further show that
\[|l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu}, \widehat{\theta})|\] \[\leq\sup_{\widehat{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial\log g_{\widehat{\theta}}(\tilde{x})}{\partial_{\tilde{\theta} }}\right|\cdot\left|\theta_{0}-\widehat{\theta}\right|+\sup_{\widehat{\theta} \in\Theta,\tilde{x}\in\mathcal{B}}\left|\frac{\partial\log g_{\widehat{\theta }}(\tilde{x})}{\partial_{\tilde{x}}}\right|\cdot\left|\int_{0}^{T}e^{\nu_{0}( s,Z)}ds-\int_{0}^{T}e^{\widehat{\nu}(s,Z)}ds\right|\] \[+|\nu_{0}(T,Z)-\widehat{\nu}(T,Z)|+\sup_{\widehat{\theta}\in \Theta,\tilde{x}\in\mathcal{B}}\left|\frac{\partial G_{\widehat{\theta}}( \tilde{x})}{\partial_{\tilde{\theta}}}\right|\cdot|\theta_{0}-\widehat{\theta}|\] \[+\sup_{\widehat{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial G_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{x}}}\right| \cdot\left|\int_{0}^{T}e^{\nu_{0}(s,Z)}ds-\int_{0}^{T}e^{\widehat{\nu}(s,Z)}ds \right|.\]
Again, by Taylor's expansion,
\[\left|\int_{0}^{T}e^{\nu_{0}(s,Z)}ds-\int_{0}^{T}e^{\widehat{\nu}(s,Z)}ds \right|\leq\tau e^{\max(M,M_{\nu})}\|\nu_{0}-\widehat{\nu}\|_{\infty},\]
Finally, we obtain that
\[\left|l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu},\widehat{\theta})\right|\] \[+|\nu_{0}(T,Z)-\widehat{\nu}(T,Z)|+\sup_{\widehat{\theta}\in \Theta,\tilde{x}\in\mathcal{B}}\left|\frac{\partial G_{\widehat{\theta}}( \tilde{x})}{\partial_{\tilde{\theta}}}\right|\cdot\left|\theta_{0}-\widehat{ \theta}\right|\] \[+\sup_{\widehat{\theta}\in\Theta,\tilde{x}\in\mathcal{B}}\left| \frac{\partial G_{\tilde{\theta}}(\tilde{x})}{\partial_{\tilde{x}}}\right| \cdot\tau e^{\max(M,M_{\nu})}\left\|\nu_{0}-\widehat{\nu}\right\|_{\infty}.\]
Taking supremum on both sides, we conclude that
\[\|l(T,\delta,Z;\nu_{0},\theta_{0})-l(T,\delta,Z;\widehat{\nu},\widehat{\theta })\|_{\infty}\lesssim|\theta_{0}-\widehat{\theta}|+\|\nu_{0}-\widehat{\nu}\|_ {\infty},\]
The proof of the second inequality is similar.
Proof of lemma b.5.: According to [72, Theorem 1], there exist approximating functions \(\widehat{h}^{*}\) and \(\widehat{m}^{*}\) such that \(\|\widehat{h}^{*}-h_{0}\|_{\infty}=O\left(n^{-\frac{\beta}{\beta+2}}\right)\) and \(\|\widehat{m}^{*}-m_{0}\|_{\infty}=O\left(n^{-\frac{\beta}{\beta+2}}\right)\). Let \(\pi_{n}h_{0}=\widehat{h}^{*}\)\(\pi_{n}m_{0}=\widehat{m}^{*}\), and \(\pi_{n}\theta=\theta_{0}\). We have that
\[d_{\mathsf{PF}}\left(\pi_{n}\phi_{0},\phi_{0}\right)\] \[=\sqrt{\mathbb{E}_{Z}\left[\int|\sqrt{p(T,\delta\mid Z;\pi_{n}h_{0 },\pi_{n}m_{0},\pi_{n}\theta_{0})}-\sqrt{p(T,\delta\mid Z;h_{0},m_{0},\theta_{0 })}|^{2}\mu(dT\times d\delta)\right]}\] \[=\sqrt{\mathbb{E}_{Z}\left[\int[e^{\frac{1}{2}l(T,\delta,Z;\pi_{n} h_{0},\pi_{n}m_{0},\pi_{n}\theta_{0})}-e^{\frac{1}{2}l(T,\delta,Z;h_{0},m_{0}, \theta_{0})}]^{2}f_{C\mid Z}(T)^{1-\delta}S_{C\mid Z}(T)^{\delta}\mu(dT\times d \delta)\right]}\] \[\leq\left\|e^{\frac{1}{2}l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0}, \pi_{n}\theta_{0})}-e^{\frac{1}{2}l(T,\delta,Z;h_{0},m_{0},\theta_{0})}\right\| _{\infty}\] \[\qquad\qquad\times\sqrt{\mathbb{E}_{Z}\left[\int f_{C\mid Z}(T)^{ 1-\delta}S_{C\mid Z}(T)^{\delta}\mu(dT\times d\delta)\right]}.\]
By lemma B.1 and B.3, we have that
\[\left\|e^{\frac{1}{2}l(T,\delta,Z;\pi_{n}h_{0},\pi_{n}m_{0},\pi_{ n}\theta_{0})}-e^{\frac{1}{2}l(T,\delta,Z;h_{0},m_{0},\theta_{0})}\right\| _{\infty}\] \[\lesssim\|\pi_{n}\theta_{0}-\theta_{0}\|+\|\pi_{n}h_{0}-h_{0}\|_ {\infty}+\|\pi_{n}m_{0}-m_{0}\|_{\infty}\] \[=O\left(n^{-\frac{\alpha}{\beta+2}}\right).\]
Since \(f_{C\mid Z}(T)^{1-\delta}\leq f_{C\mid Z}(T)+1\) and \(S_{C\mid Z}(T)^{\delta}\leq 1\), we also have that
\[\sqrt{\mathbb{E}_{Z}\left[\int f_{C\mid Z}(T)^{1-\delta}S_{C\mid Z }(T)^{\delta}\mu(dT\times d\delta)\right]} \leq \sqrt{\mathbb{E}_{Z}\left[\int(1+f_{C\mid Z}(T))\mu(dT\times d \delta)\right]}\] \[\leq \sqrt{2+2\tau}.\]
Thus, we obtain that \(d_{\mathsf{PF}}\left(\pi_{n}\phi_{0},\phi_{0}\right)=O\left(n^{-\frac{\beta}{ \beta+4}}\right)\).
Proof of lemma b.6.: According to [72, Theorem 1], there exists an approximating function \(\widehat{\nu}^{*}\) such that \(\|\widehat{\nu}^{*}-\nu_{0}\|_{\infty}=O\left(n^{-\frac{\beta}{\beta+4}}\right)\). Let \(\pi_{n}\nu_{0}=\widehat{\nu}^{*}\) and \(\pi_{n}\theta_{0}=\theta_{0}\). We have that
\[d_{\mathsf{FN}}\left(\pi_{n}\psi_{0},\psi_{0}\right)\] \[=\sqrt{\mathbb{E}_{Z}\left[\int\left|\sqrt{p(T,\delta\mid Z;\pi_ {n}\nu_{0},\pi_{n}\theta_{0})}-\sqrt{p(T,\delta\mid Z;\nu_{0},\theta_{0})} \right|^{2}\mu(dT\times d\delta)\right]}\] \[=\sqrt{\mathbb{E}_{Z}\left[\int\left[e^{\frac{1}{2}l(T,\delta,Z; \pi_{n}\nu_{0},\pi_{n}\theta_{0})}-e^{\frac{1}{2}l(T,\delta,Z;\nu_{0},\theta_{ 0})}\right]^{2}f_{C\mid Z}(T)^{1-\delta}S_{C\mid Z}(T)^{\delta}\mu(dT\times d \delta)\right]}\] \[\leq\left\|\frac{1}{2}e^{l(T,\delta,Z;\pi_{n}\nu_{0},\pi_{n} \theta_{0})}-\frac{1}{2}e^{l(T,\delta,Z;\nu_{0},\theta_{0})}\right\|_{\infty} \sqrt{\mathbb{E}_{Z}\left[\int f_{C\mid Z}(T)^{1-\delta}S_{C\mid Z}(T)^{ \delta}\mu(dT\times d\delta)\right]}.\]
By lemma B.2 and B.4, we have that
\[\left\|e^{\frac{1}{2}l(T,\delta,Z;\pi_{n}\nu_{0},\pi_{n}\theta_{0 })}-e^{\frac{1}{2}l(T,\delta,Z;\nu_{0},\theta_{0})}\right\|_{\infty} \lesssim\|\pi_{n}\theta_{0}-\theta_{0}\|+\|\pi_{n}\nu_{0}-\nu_{0 }\|_{\infty}\] \[=O\left(n^{-\frac{\beta}{\beta+4+1}}\right).\]
Since \(f_{C\mid Z}(T)^{1-\delta}\leq f_{C\mid Z}(T)+1\) and \(S_{C\mid Z}(T)^{\delta}\leq 1\), we also have that
\[\sqrt{\mathbb{E}_{Z}\left[\int f_{C\mid Z}(T)^{1-\delta}S_{C\mid Z }(T)^{\delta}\mu(dT\times d\delta)\right]} \leq \sqrt{\mathbb{E}_{Z}\left[\int(1+f_{C\mid Z}(T))\mu(dT\times d \delta)\right]}\] \[\leq \sqrt{2+2\tau}.\]
Thus, we obtain that \(d_{\mathsf{FN}}\left(\pi_{n}\psi_{0},\psi_{0}\right)=O\left(n^{-\frac{\beta}{ \beta+4}}\right)\).
Proof of lemma b.7.: The left inequality is trivial according to the definition of covering number. We need to show that the correctness of the right inequality.
Suppose that we have \(\{B(g_{i},\frac{\varepsilon}{2})\},i=1\ldots,N\), where \(N=N(\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|)\), are the minimal number of \(\frac{\varepsilon}{2}\)-ball that covers \(\mathcal{F}\). Then there exists at least one \(f_{i}\in\mathcal{F}\) such that \(f_{i}\in B(g_{i},\varepsilon)\). Consider the following \(\varepsilon-balls\)\(\{B(f_{i},\varepsilon)\},i=1\ldots,N\). For arbitrary \(f\in\mathcal{F}\cap B(g_{i},\frac{\varepsilon}{2})\), we have that \(\|f-f_{i}\|\leq\|f-g_{i}\|+\|f_{i}-g_{i}\|\leq\varepsilon\). Thus \(\{B(f_{i},\varepsilon)\},i=1\ldots,N\) forms a \(\varepsilon\)-covering of \(\mathcal{F}\). By definition, we have that \(\widetilde{N}(\varepsilon,\mathcal{F},\|\cdot\|)\leq N(\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|)\).
Proof of lemma b.8.: The proof of the first two inequalities follows exactly the same steps of lemma b.7. Here we just need to mention the rest of the statement that \(\widetilde{N}_{[]}(\varepsilon,\mathcal{F},\|\cdot\|_{\infty})=\widetilde{N} (\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|_{\infty})\). We first choose a set of \(\frac{\varepsilon}{2}\)-covering balls \(\{B(f_{i},\frac{\varepsilon}{2})\},i=1,\ldots,N_{1}\), where \(N_{1}=\widetilde{N}(\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|_{\infty})\). Now we construct a set of brackets \(\{[l_{i},u_{i}]\},i=1\ldots,N_{1}\), where \(l_{i}=f_{i}-\frac{\varepsilon}{2}\) and \(u_{i}=f_{i}+\frac{\varepsilon}{2}\). Noting that the bracket \(\{[l_{i},u_{i}]\}\) is exactly the same as \(B(f_{i},\frac{\varepsilon}{2})\), The set \(\{[l_{i},u_{i}]\},i=1,\ldots,N_{1}\) covers \(\mathcal{F}\), which leads to \(\widetilde{N}_{[]}(\varepsilon,\mathcal{F},\|\cdot\|_{\infty})\leq\widetilde{N }(\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|_{\infty})\). Likewise, we have that \(\widetilde{N}_{[]}(\varepsilon,\mathcal{F},\|\cdot\|_{\infty})\geq\widetilde{N }(\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|_{\infty})\). Consequently, we have that \(\widetilde{N}_{[]}(\varepsilon,\mathcal{F},\|\cdot\|_{\infty})=\widetilde{N} (\frac{\varepsilon}{2},\mathcal{F},\|\cdot\|_{\infty})\).
Proof of lemma b.9.: By lemma b.8, first we have that \(N_{[]}(\varepsilon,\mathcal{F}_{n},\|\cdot\|_{\infty})\leq\widetilde{N}_{[]}( \varepsilon,\mathcal{F}_{n},\|\cdot\|_{\infty})\). By lemma b.3, there exists a constant \(c_{1}>0\) such that for arbitrary \(\widehat{h}_{1},\widehat{h}_{2}\in\mathcal{H}_{n}\),\(\widehat{m}_{1},\widehat{m}_{2}\in\mathcal{M}_{n}\) and \(\widehat{\theta}_{1},\widehat{\theta}_{2}\in\Theta\), we have that
\[\|l(T,\delta,Z;\widehat{h}_{1},\widehat{m}_{1},\theta_{1})-l(T,\delta,Z; \widehat{h}_{2},\widehat{m}_{2},\theta_{2})\|_{\infty}\leq c_{1}[|\widehat{ \theta}_{1}-\widehat{\theta}_{2}|+|\widehat{h}_{1}-\widehat{h}_{2}|_{\infty} +\|\widehat{m}_{1}-\widehat{m}_{2}\|_{\infty}],\]
which indicates that as long as \(|\widehat{\theta}_{1}-\widehat{\theta}_{2}|\leq\frac{\varepsilon}{3c_{1}}\), \(|\widehat{h}_{1}-\widehat{h}_{2}|_{\infty}\leq\frac{\varepsilon}{3c_{1}}\) and \(\|\widehat{m}_{1}-\widehat{m}_{2}\|_{\infty}\leq\frac{\varepsilon}{3c_{1}}\), we have that \(\|l(T,\delta,Z;\widehat{h}_{1},\widehat{m}_{1},\theta_{1})-l(T,\delta,Z; \widehat{h}_{2},\widehat{m}_{2},\theta_{2})\|_{\infty}\leq\varepsilon\). Consequently, we have that
\[\widetilde{N}_{[]}(\varepsilon,\mathcal{F}_{n},\|\cdot\|_{\infty})\leq \widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\Theta,\|\cdot\|_{\infty})\times \widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\mathcal{H}_{n},\|\cdot\|_{ \infty})\times\widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\mathcal{M}_{n}, \|\cdot\|_{\infty}).\]
Since \(\Theta\) is a compact set on \(\mathbb{R}\), by lemma b.8 and traditional volume argument, we have that \(\widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\Theta,\|\cdot\|_{\infty})\leq N _{[]}(\frac{\varepsilon}{6c_{1}},\Theta,\|\cdot\|_{\infty})\lesssim\frac{1}{\varepsilon}\).
For \(\widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\mathcal{H}_{n},\|\cdot\|_{ \infty})\), by lemma b.8, we have that \(\widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\mathcal{H}_{n},\|\cdot\|_{ \infty})=\widetilde{N}(\frac{\varepsilon}{3c_{1}},\mathcal{H}_{n},\|\cdot\|_{ \infty})\). By [12, Lemma 2], there exists a constant \(c_{2}>0\) such that \(\|\widehat{h}_{1}-\widehat{h}_{2}\|_{\infty}\leq c_{2}\|\widehat{h}_{1}- \widehat{h}_{2}\|_{2}^{s_{h}}\), which leads to \(\widetilde{N}(\frac{\varepsilon}{3c_{1}},\mathcal{H}_{n},\|\cdot\|_{\infty}) \leq\widetilde{N}(\frac{\varepsilon^{1/s_{h}}}{(3c_{1}c_{2})^{1/s_{h}}}, \mathcal{H}_{n},\|\cdot\|_{2})\). By lemma b.7 we further have that \(\widetilde{N}(\frac{\varepsilon^{1/s_{h}}}{(3c_{1}c_{2})^{1/s_{h}}}, \mathcal{H}_{n},\|\cdot\|_{2})\leq N(\frac{\varepsilon^{1/s_{h}}}{2(3c_{1}c_{2} )^{1/s_{h}}},\mathcal{H}_{n},\|\cdot\|_{2})\). Let \(c_{h}=\frac{1}{2(3c_{1}c_{2})^{1/s_{h}}}\). We have that \(\widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\mathcal{H}_{n},\|\cdot\|_{ \infty})\leq N(c_{h}\varepsilon^{1/s_{h}},\mathcal{H}_{n},\|\cdot\|_{2})\).
Similarly, there exists a constant \(c_{m}>0\) such that \(\widetilde{N}_{[]}(\frac{\varepsilon}{3c_{1}},\mathcal{M}_{n},\|\cdot\|_{ \infty})\leq N(c_{m}\varepsilon^{1/s_{m}},\mathcal{M}_{n},\|\cdot\|_{2})\). Thus, finally we can obtain that
\[N_{[]}(\varepsilon,\mathcal{F}_{n},\|\cdot\|_{\infty})\lesssim\frac{1}{ \varepsilon}N(c_{h}\varepsilon^{1/s_{h}},\mathcal{H}_{n},\|\cdot\|_{2}) \times N(c_{m}\varepsilon^{1/s_{m}},\mathcal{M}_{n},\|\cdot\|_{2}).\]
Proof of lemma b.10.: By lemma b.8, first we have \(N_{[]}(\varepsilon,\mathcal{G}_{n},\|\cdot\|_{\infty})\leq\widetilde{N}_{[]}( \varepsilon,\mathcal{G}_{n},\|\cdot\|_{\infty})\). By lemma b.4, there exists a constant \(c_{3}>0\) such that for arbitrary \(\widehat{\nu}_{1},\widehat{\nu}_{2}\in\mathcal{V}_{n}\) and \(\widehat{\theta}_{1},\widehat{\theta}_{2}\in\Theta\), we have that
\[\|l(T,\delta,Z;\widehat{\nu}_{1},\widehat{\theta}_{1})-l(T,\delta,Z;\widehat{ \nu}_{2},\widehat{\theta}_{2})\|_{\infty}\leq c_{3}[\|\widehat{\theta}_{1}- \widehat{\theta}_{2}|+\|\widehat{\nu}_{1}-\widehat{\nu}_{2}\|_{\infty}],\]
which indicates that as long as \(|\widehat{\theta}_{1}-\widehat{\theta}_{2}|\leq\frac{\varepsilon}{2c_{3}}\)and \(\|\widehat{\nu}_{1}-\widehat{\nu}_{2}\|_{\infty}\leq\frac{\varepsilon}{2c_{3}}\), we have that \(\|l(T,\delta,Z;\widehat{\nu}_{1},\widehat{\theta}_{1})-l(T,\delta,Z;\widehat{\nu}_{ 2},\widehat{\theta}_{2})\|_{\infty}\leq\varepsilon\). Thus, we have:
\[\widetilde{N}_{[]}(\varepsilon,\mathcal{G}_{n},\|\cdot\|_{\infty})\leq \widetilde{N}_{[]}(\frac{\varepsilon}{2c_{3}},\Theta,\|\cdot\|_{\infty}) \times\widetilde{N}_{[]}(\frac{\varepsilon}{2c_{3}},\mathcal{V}_{n},\|\cdot\|_{ \infty}).\]Since \(\Theta\) is a compact set on \(\mathbb{R}\), by lemma B.8 and traditional volume argument, we have that \(\widetilde{N}_{\|}(\frac{\varepsilon}{2c_{3}},\Theta,\|\cdot\|_{\infty})\leq N_{ \|}(\frac{\varepsilon}{4c_{3}},\Theta,\|\cdot\|_{\infty})\lesssim\frac{1}{\varepsilon}\).
For \(\widetilde{N}_{\|}(\frac{\varepsilon}{2c_{3}},\mathcal{V}_{n},\|\cdot\|_{\infty})\), by lemma B.8, we have that \(\widetilde{N}_{\|}(\frac{\varepsilon}{2c_{3}},\mathcal{V}_{n},\|\cdot\|_{ \infty})=\widetilde{N}(\frac{\varepsilon}{2c_{3}},\mathcal{V}_{n},\|\cdot\|_{ \infty})\). By [12, Lemma 2], there exists a constant \(c_{4}>0\) such that \(\|\widehat{\nu_{1}}-\widehat{\nu}_{2}\|_{\infty}\leq c_{4}\|\widehat{\nu_{1}}- \widehat{\nu}_{2}\|_{2}^{3_{\text{\tiny{B}}}}\), which leads to \(\widetilde{N}(\frac{\varepsilon}{2c_{3}},\mathcal{V}_{n},\|\cdot\|_{\infty}) \leq\widetilde{N}(\frac{\varepsilon^{1/s_{\nu}}}{(2c_{3}c_{4})^{1/s_{\nu}}}, \mathcal{V}_{n},\|\cdot\|_{2})\). By lemma B.7 we further have \(\widetilde{N}(\frac{\varepsilon^{1/s_{\nu}}}{(2c_{3}c_{4})^{1/s_{\nu}}}, \mathcal{V}_{n},\|\cdot\|_{2})\leq N(\frac{\varepsilon^{1/s_{\nu}}}{2(2c_{3}c _{4})^{1/s_{\nu}}},\mathcal{V}_{n},\|\cdot\|_{2})\). Let \(c_{\nu}=\frac{1}{2(2c_{3}c_{4})^{1/s_{\nu}}}\), we have that \(\widetilde{N}_{\|}(\frac{\varepsilon}{2c_{3}},\mathcal{V}_{n},\|\cdot\|_{ \infty})\leq N(c_{\nu}\varepsilon^{1/s_{\nu}},\mathcal{V}_{n},\|\cdot\|_{2})\).
Thus, finally we can obtain that
\[N_{\|}(\varepsilon,\mathcal{G}_{n},\|\cdot\|_{\infty})\lesssim\frac{1}{ \varepsilon}N(c_{\nu}\varepsilon^{1/s_{\nu}},\mathcal{V}_{n},\|\cdot\|_{2}).\]
## Appendix C Experimental details
### Dataset summary
We report summaries of descriptive statistics of the \(6\) benchmark datasets used in section 5.2 in table 3.
### Details of synthetic experiments
Since the true model is assumed to be of PF form, we generate event time according to the following transformed regression model [18]:
\[\log H(\tilde{T})=-m(Z)+\epsilon, \tag{15}\]
where \(H(t)=\int_{0}^{t}e^{h(s)}ds\) with \(h\) defined in (2). The error term \(\epsilon\) is generated such that \(e^{\epsilon}\) has cumulative hazard function \(G_{\theta}\). The formulation (15) is the equivalent to (2) [18, 17, 45]. In our experiments, the covariates are of dimension \(5\), sampled independently from the uniform distribution over \([0,1]\). We set \(h(t)=t\) and hence \(H(t)=e^{t}\). The function form of \(m(Z)\) is set to be \(m(Z)=\sin(\langle Z,\beta\rangle)+\langle\sin(Z),\beta\rangle\), where \(\beta=(0.1,0.2,0.3,0.4,0.5)\). Then censoring time \(C\) is generated according to
\[\log H(C)=-m(Z)+\epsilon_{C}, \tag{16}\]
which reuses covariate \(Z\), and draws independently a noise vector \(\epsilon_{C}\) such that the censoring ratio is controlled at around \(40\%\). We generate three datasets with \(n\in\{1000,5000,10000\}\) respectively.
**Hyperparameter configurations** We specify below the network architectures and optimization configurations used in all the tasks:
**PF scheme:** For both \(\widehat{m}\) and \(\widehat{h}\), we use \(64\) hidden units for \(n=1000\), \(128\) hidden units for \(n=5000\) and \(256\) hidden units for \(n=10000\). We train each model for \(100\) epochs with batch size \(128\), optimized using Adam with learning rate \(0.0001\), and no weight decay.
**FN scheme:** For both \(\widehat{\nu}\), we use \(64\) hidden units for \(n=1000\), \(128\) hidden units for \(n=5000\) and \(256\) hidden units for \(n=10000\). We train each model for \(100\) epochs with batch size \(128\), optimized using Adam with learning rate \(0.0001\), and no weight decay.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & METABRIC & RotGBSG & FLCHAIN & SUPPORT & MIMIC-III & KKBOX \\ \hline Size & \(1904\) & \(2232\) & \(6524\) & \(8873\) & \(35953\) & \(2646746\) \\ Censoring rate & \(0.423\) & \(0.432\) & \(0.699\) & \(0.320\) & \(0.901\) & \(0.280\) \\ Features & \(9\) & \(7\) & \(8\) & \(14\) & \(26\) & \(15\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Descriptive statistics of benchmark datasets
### Details of public data experiments
**Dataset preprocessing** For METABRIC, RotGBSG, FLCHAIN, SUPPORT and KKBOX dataset, we take the version provided in the pycox package [46]. We standardize continuous features into zero mean and unit variance and do one-hot encodings for all categorical features. For the MIMIC-III dataset, we follow the preprocessing routines in [55] which extracts \(26\) features. The event of interest is defined as the mortality after admission, and the censored time is defined as the last time of being discharged from the hospital. The definition is similar to that in [64]. But since the dataset is not open sourced, according to our implementation the resulting dataset exhibits a much higher censoring rate (\(90.2\%\) as compared to \(61.0\%\) as reported in the SODEN paper [64]). Since the major purpose of this paper is for the proposal of the NFM framework, We use our own version of the processed dataset to further verify the predictive performance of NFM.
**Hyperparameter configurations** We follow the general training template that uses MLP as all nonparametric function approximators (i.e., \(\widehat{m}\) and \(\widehat{h}\) in the PF scheme, and \(\widehat{\nu}\) in the FN scheme), and train for \(100\) epochs across all datasets using Adam as the optimizer. The tunable parameters and their respective tuning ranges are reported as follows:
**Number of layers (network depth)** We tune the network depth \(L\in\{2,3,4\}\). Typically, the performance of two-layer MLPs is sufficiently satisfactory.
**Number of hidden units in each layer (network width)** We tune the network width \(W\in\{2^{k},5\leq k\leq 10\}\).
**Optional dropout** We optionally apply dropout with probability \(p\in\{0.1,0.2,0.3,0.5,0.7\}\).
**Batch size** We tune batch size within the range \(\{128,256,512\}\), in the KKBOX dataset, we also tested with larger batch sizes \(\{1024\}\).
**Learning rate and weight decay** We tune both the learning rate and weight decay coefficient of Adam within range \(\{0.01,0.001,0.0001\}\).
**Frailty specification** We tested gamma frailty, Box-Cox transformation frailty, and \(\text{IGG}(\alpha)\) frailty with \(\alpha\in\{0,0.25,0.75\}\). Here note that \(\text{IGG}(0.5)\) is equivalent to gamma frailty. We also empirically tried to set \(\alpha\) to be a learnable parameter and found that this additional flexibility provides little performance improvement regarding the datasets used for evaluation.
### Implementations
We use pytorch to implement NFM. **The source code is provided in the supplementary material**. For the baseline models:
* We use the implementations of CoxPH, GBM, and RSF from the sksurv package [54], for the KKBOX dataset, we use the XGBoost library [10] to implement GBM and RSF, which might yield some performance degradation.
* We use the pycox package to implement DeepSurv, CoxTime, and DeepHit models.
* We use the official code provided in the SODEN paper [64] to implement SODEN.
* We obtain results of SuMo and DeepEH based on our re-implementations.
## Appendix D Additional experiments
### Recovery assessment of \(m(Z)\) in PF scheme
We plot empirical recovery results targeting the \(m\) function in (2) in figure 2. The result demonstrates satisfactory recovery with a moderate amount of data, i.e., \(n\geq 1000\).
### Recovery assessment of survival functions
To assess the recovery performance of NFM with respect to survival functions, we consider the following setup: under the same data generation framework as in section C.2, we compute the test feature \(\bar{Z}\) as the sample mean of all the \(100\) hold-out test points. And plot \(\widehat{S}(t|\bar{Z})\) against the groundtruth \(S(t|\bar{Z})\) regarding both PF and FN schemes. The results are shown in figure 3. The results suggest that both scheme provides accurate estimation of survival functions when the sample size is sufficiently large.
### Numerical results of the synthetic experiments
Following [75], we report the relative integrated mean squared error (RISE) of the estimated survival function against the ground truth and list the results in table. The results suggest that the goodness of fit becomes better with a larger sample size. Moreover, since the true model in the simulation is generated as an PF model, we found PF to perform slightly better than FN, which is reasonable since the inductive bias of PF is more correct in this setup.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \(N=1000\) & \(N=5000\) & \(N=10000\) \\ \hline NFM-PF & \(0.0473\) & \(0.0145\) & \(0.0137\) \\ NFM-FN & \(0.0430\) & \(0.0184\) & \(0.0165\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: RISE of the estimated survival function in synthetic experiments
Figure 3: Visualizations of synthetic data results under the NFM framework. The plots in the first row compare the empirical estimates of the survival function \(S(t|\bar{Z})\) against its true value with \(\bar{Z}\) being the average of the features of the \(100\) hold-out points, under the PF scheme. The plots in the second row are obtained using the FN scheme, with analogous semantics to the first row.
Figure 2: Visualizations of synthetic data results under the PF scheme of NFM framework, regarding empirical recovery of the \(m\) function in (2)
### Performance evaluations under the concordance index (C-index)
The concordance index (C-index) [1] is yet another evaluation metric that is commonly used in survival analysis. The C-index estimates the probability that, for a random pair of individuals, the predicted survival times of the two individuals have the same ordering as their true survival times. Formally, C-index is defined as
\[\mathcal{C}=\mathbb{P}\left[\widehat{S}(T_{i}\mid Z_{i})<\widehat{S}(T_{j}\mid Z _{j})\mid T_{i}<T_{j},\delta_{i}=1\right]. \tag{17}\]
We report performance evaluations based on C-index over all the \(6\) benchmark datasets in table 5. From table 5, it appears that there's no clear winner regarding the C-index metric across the \(6\) selected datasets. We conjecture this phenomenon to be closely related to the loose correlation between the C-index and the likelihood-based learning objective, as was observed in [56]. Therefore we compute the average rank of each model as an overall assessment of performance, as illustrated in the last column in table 5. The results suggest that the two NFM models perform better on average.
### Benefits of frailty
We compute the (relative) performance gain of NFM-PF and NFM-FN, against their non-frailty counterparts, namely DeepSurv [42] and SuMo-net [56] based on results in table 1, table 2 and table 5. The results are shown in table 6 The results suggest solid improvement in incorporating frailty, especially for IBS and INBLL metrics, as the relative increase in performance could be over \(10\%\) for both NFM models. For the IBS and INBLL metrics, the performance improvement is consistent across all datasets. The only performance degradation appears on the MIMIC-III dataset evaluated under C-index. This phenomenon is also understandable: Since the DeepSurv model utilized a variant
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{NFM-PF vs DeepSurv} & \multicolumn{3}{c}{NFM-FN vs SuMo-net} \\ & IBS & INBLL & C-index & IBS & INBLL & C-index \\ \hline METABRIC & \(+1.33\%\) & \(+1.56\%\) & \(+1.61\%\) & \(+2.30\%\) & \(+3.08\%\) & \(+2.79\%\) \\ RotGBSG & \(+1.11\%\) & \(+0.95\%\) & \(+0.84\%\) & \(+0.62\%\) & \(+0.40\%\) & \(+0.79\%\) \\ FLCHAIN & \(+1.29\%\) & \(+1.32\%\) & \(+0.52\%\) & \(+0.20\%\) & \(+0.27\%\) & \(+0.01\%\) \\ SUPPORT & \(+0.31\%\) & \(+0.23\%\) & \(+0.69\%\) & \(+2.22\%\) & \(+1.76\%\) & \(+0.05\%\) \\ MIMIC-III & \(+12.38\%\) & \(+12.15\%\) & \(=\)0.64\% & \(+6.18\%\) & \(+5.56\%\) & \(+5.18\%\) \\ KKBOX & \(+2.56\%\) & \(+0.51\%\) & \(+0.75\%\) & \(+8.20\%\) & \(+10.38\%\) & \(+2.17\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Relative improvement of NFM models in comparison to their non-frailty counterparts, measured in IBS, INBLL, and C-index.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Model & METABRIC & RotGBSG & FLCHAIN & SUPPORT & MIMIC-III & KKBOX & Ave. Rank \\ \hline CoxPH & \(63.42_{1.81}\) & \(66.14_{\pm 1.46}\) & \(79.09_{\pm 1.11}\) & \(56.89_{\pm 0.91}\) & \(74.91_{\pm 0.00}\) & \(83.01_{\pm 0.00}\) & \(11.33\) \\ GBM & \(64.02_{\pm 1.79}\) & \(67.35_{\pm 1.16}\) & \(\mathbf{79.47}_{\pm 1.08}\) & \(61.46_{\pm 0.80}\) & \(75.20_{\pm 0.00}\) & \(85.84_{\pm 0.00}\) & \(7.17\) \\ RSF & \(64.74_{\pm 1.82}\) & \(67.33_{\pm 1.34}\) & \(78.75_{\pm 1.07}\) & \(61.63_{\pm 0.84}\) & \(75.47_{\pm 0.17}\) & \(85.79_{\pm 0.00}\) & \(8.00\) \\ DeepSurv & \(63.95_{\pm 2.12}\) & \(67.20_{\pm 1.22}\) & \(79.04_{\pm 1.14}\) & \(60.91_{\pm 0.85}\) & \(80.08_{\pm 0.44}\) & \(85.59_{\pm 0.08}\) & \(8.50\) \\ CoxTime & \(66.22_{\pm 1.69}\) & \(67.41_{\pm 1.35}\) & \(78.95_{\pm 1.01}\) & \(61.54_{\pm 0.87}\) & \(78.78_{\pm 0.62}\) & \(\mathbf{87.31}_{\pm 0.24}\) & \(5.00\) \\ DeepHt & \(66.33_{\pm 1.61}\) & \(66.38_{\pm 1.07}\) & \(78.48_{\pm 1.09}\) & \(\mathbf{63.20}_{\pm 0.85}\) & \(79.16_{\pm 0.59}\) & \(86.12_{\pm 0.26}\) & \(6.50\) \\ DeepEH & \(66.59_{\pm 2.00}\) & \(\mathbf{67.93}_{\pm 1.28}\) & \(78.71_{\pm 1.14}\) & \(61.51_{\pm 1.04}\) & & & & \\ SuMo-net & \(64.82_{\pm 1.80}\) & \(67.20_{\pm 1.31}\) & \(79.28_{\pm 1.02}\) & \(62.18_{\pm 0.78}\) & \(76.23_{\pm 1.06}\) & \(84.77_{\pm 0.02}\) & \(7.00\) \\ SODEN & \(64.82_{\pm 1.05}\) & \(66.97_{\pm 0.75}\) & \(79.00_{\pm 0.96}\) & \(61.10_{\pm 0.59}\) & \(-\) & & 10.17 \\ SurvNode & \(64.64_{\pm 4.91}\) & \(67.30_{\pm 1.65}\) & \(76.11_{\pm 0.98}\) & \(55.37_{\pm 0.77}\) & & & & 11.50 \\ DCM & \(65.76_{\pm 1.25}\) & \(66.75_{\pm 1.35}\) & \(78.61_{\pm 0.79}\) & \(62.19_{\pm 0.95}\) & \(76.45_{\pm 0.34}\) & \(83.48_{\pm 0.07}\) & \(\mathbf{8.33}\) \\ DeSurv & \(65.85_{\pm 0.22}\) & \(67.30_{\pm 1.45}\) & \(78.97_{\pm 1.64}\) & \(61.47_{\pm 0.97}\) & \(\mathbf{80.97}_{\pm 0.30}\) & \(86.11_{\pm 0.05}\) & \(5.67\) \\ \hline
**NFM-PF** & \(64.98_{\pm 1.87}\) & \(67.77_{\pm 1.35}\) & \(79.45_{\pm 1.03}\) & \(61.33_{\pm 0.83}\) & \(79.56_{\pm 0.15}\) & \(86.23_{\pm 0.01}\) & \(4.67\) \\
**NFM-FN** & \(\mathbf{66.63}_{\pm 1.82}\) & \(67.73_{\pm 1.29}\) & \(79.29_{\pm 0.93}\) & \(62.21_{\pm 0.41}\) & \(80.18_{\pm 0.20}\) & \(86.61_{\pm 0.01}\) & \(\mathbf{2.16}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Survival prediction results measured in C-index (%) on all the \(6\) benchmark datasets. In each column, the **boldfaced** score denotes the best result and the underlined score represents the second-best result. The average rank of each model is reported in the rightmost column. We did not manage to obtain reasonable results for DeepEH and SODEN on two larger datasets MIMIC-III and KKBOX, and we set corresponding ranks to be the worst on those datasets.
of partial likelihood (PL) for model training, as previous works [62] pointed out that PL type objective is closely related to the ranking problem. As C-index could be considered a certain type of ranking measure, it is possible that DeepSurv obtains better ranking performance than NFM-type models which are trained using scale-sensitive likelihood objective.
### Limitations
In this section we discuss the limitations of this paper form both theoretical and empirical standpoints.
#### Theoretical limitations
As we have established formal statistical guarantees regarding the estimation properties of NFM, the guanrantees do not necessarily imply that NFM perform well on prediction tasks under metrics such as IBS and INBLL. Following the spirit of classical learning theory [58], for prediction guarantees it is ideal to directly optimize the underlying metric or its surrogates, which is difficult in survival problems since the metrics involve both a model over the event time and a working model on the censoring time. So far as we have noticed, the only effort that aims to address this issue is the method of inversely-weighted survival games [33]. However, the authors in [33] did not provide rigorous learning-theoretic statements, which is a promising research direction for future works.
#### Empirical limitations
While NFM is shown to be a competitive survival model for prognosis empirically, we have observed from the empirical results that we can hardly obtain _statistically significant improvements_ over the baselines, which is a common problem exhibited in previous works on neural survival regressions [75, 56]. We conjecture that this phenomenon is primarily due to two facts: Firstly, there lacks open-to-public large-scale survival datasets that allows scalable evaluation of neural survival models. Secondly, for most of the current available datasets, there are no authoritative train-test splits. Consequently, most experiments are done using cross-validation on moderate scale datasets, causing the resulting variability of the modeling algorithms to be relatively large. Therefore we suggest the survival analysis community to release (in a privacy-preserving manner) more large-scale, sanitized datasets equipped with standard train-test splits, which we believe will greatly benefit the state-of-the-art for neural survival modeling. | ## Review
### Summary
This paper presents a novel framework for survival regression called the neural frailty machine, which extends traditional frailty models by incorporating neural network architectures. The authors demonstrate that existing methods can be viewed as special cases of their proposed framework while providing theoretical convergence guarantees. Empirical evaluations indicate marginal improvements over existing methods, although the results vary across datasets. The theoretical analysis is well-articulated, and the paper is generally well-written, but the experimental results do not robustly demonstrate the advantages of the proposed methods compared to existing approaches.
### Strengths
- The paper proposes two new models, NFM-PF and NFM-PM, which combine the frailty model with neural networks.
- The theoretical analysis provided for the proposed models is interesting and shows correctness.
- The authors successfully include frailty in their modeling framework, addressing some limitations of previous works.
- The writing is clear and easy to follow.
### Weaknesses
- The empirical evaluation does not convincingly demonstrate the practical advantages of using frailty in neural network models.
- The experiments primarily reflect differences in neural network architectures rather than the benefits of the frailty concept.
- There is insufficient discussion around the methodology and results, particularly regarding the performance of the proposed approaches.
- The evaluation metrics used, such as IBS and IBNLL, are not proper scoring rules; better alternatives should be considered.
### Questions
- What is the intuition behind introducing the PF model, and how does it differ from existing methods?
- Can the authors explain the variability in performance between the PF and FN models across different datasets?
- How does the empirical convergence compare with the theoretical predictions?
- What impact do the parameters of the Clenshaw-Curtis integrator have on the performance of the method?
- Is there a rigorous connection between the number of parameters in the network and the number of samples used?
### Soundness
**Score:** 3
**Description:** Good - The theoretical framework is sound, but the empirical results raise concerns about the robustness of the claims.
### Presentation
**Score:** 3
**Description:** Good - The paper is well-structured and clear, but some areas require more detailed discussion.
### Contribution
**Score:** 2
**Description:** Fair - While the theoretical contributions are valuable, the practical implications and novelty compared to existing methods are limited.
### Rating
**Score:** 5
**Description:** Borderline accept - The paper shows technical solidity and originality but requires improvements in empirical evaluations and clarity in several discussions.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a theoretically interesting approach to survival analysis using neural networks. Despite some weaknesses in empirical evaluation and discussions, the originality and the sound theoretical framework provide a solid basis for acceptance. The contributions to the field, while needing further clarification and validation through stronger experimental results, justify its acceptance as a poster.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Addressing Negative Transfer in Diffusion Models
Hyojun Go\({}^{1}\)1 JinYoung Kim\({}^{1*}\)1 Yunsung Lee\({}^{2*}\)1 Seunghyun Lee\({}^{3*}\)1
Shinhyeok Oh\({}^{3}\) Hyeongdon Moon\({}^{4}\) Seungtaek Choi\({}^{5\dagger}\)1
Twelvelabs\({}^{1}\) Wrtn Technologies\({}^{2}\) Riiid\({}^{3}\) EPFL\({}^{4}\) Yanolja\({}^{5}\)
{gohyojun15, seago0828}@gmail.com\({}^{1}\), [email protected]\({}^{2}\), {seunghyun.lee shinhyeok.oh}@riiid.co\({}^{3}\), [email protected]\({}^{4}\), [email protected]\({}^{5}\)
Footnote 1: Co-first author \({}^{1,2,4,5}\)Work done while at Riiid \({}^{\dagger}\)Corresponding author
###### Abstract
Diffusion-based generative models have achieved remarkable success in various domains. It trains a shared model on denoising tasks that encompass different noise levels simultaneously, representing a form of multi-task learning (MTL). However, analyzing and improving diffusion models from an MTL perspective remains under-explored. In particular, MTL can sometimes lead to the well-known phenomenon of _negative transfer_, which results in the performance degradation of certain tasks due to conflicts between tasks. In this paper, we first aim to analyze diffusion training from an MTL standpoint, presenting two key observations: **(O1)** the task affinity between denoising tasks diminishes as the gap between noise levels widens, and **(O2)** negative transfer can arise even in diffusion training. Building upon these observations, we aim to enhance diffusion training by mitigating negative transfer. To achieve this, we propose leveraging existing MTL methods, but the presence of a huge number of denoising tasks makes this computationally expensive to calculate the necessary per-task loss or gradient. To address this challenge, we propose clustering the denoising tasks into small task clusters and applying MTL methods to them. Specifically, based on **(O2)**, we employ interval clustering to enforce temporal proximity among denoising tasks within clusters. We show that interval clustering can be solved using dynamic programming, utilizing signal-to-noise ratio, timestep, and task affinity for clustering objectives. Through this, our approach addresses the issue of negative transfer in diffusion models by allowing for efficient computation of MTL methods. We validate the efficacy of proposed clustering and its integration with MTL methods through various experiments, demonstrating 1) improved generation quality and 2) faster training convergence of diffusion models. Our project page is available at [https://gohyojun15.github.io/ANT_diffusion/](https://gohyojun15.github.io/ANT_diffusion/).
## 1 Introduction
Diffusion-based generative models [20, 66, 71] have accomplished remarkable achievements in various generative tasks, including image [8], video [21, 23], 3D shape [44, 54], and text generation [38]. In particular, they have shown excellent performance and flexibility in a wide range of image generation settings, including unconditional [28, 47], class-conditional [22], and text-conditional image generation [1, 48, 55]. Consequently, improving diffusion models has garnered significant interest.
The framework of diffusion models [20, 66, 71] comprises gradually corrupting the data towards a given noise distribution and its subsequent reverse process. A model is optimized by minimizing the weighted sum of denoising score-matching losses across various noise levels [20, 69] for learning the reverse process. This can be interpreted as diffusion training aiming to train a single shared model todenoising its input across various noise levels. Therefore, diffusion training is inherently multi-task learning (MTL) in nature, where _each noise level_ represents _a distinct denoising task_.
However, analyzing and improving diffusion models from an MTL perspective remains under-explored. In particular, sharing one model between tasks may lead to competition between conflicting tasks, resulting in a phenomenon known as _negative transfer_[24; 25; 57; 78], leading to poorer performance compared to learning individual tasks with separate models. _Negative transfer_ has been a critical issue in MTL research, and related works have demonstrated that the performance of multi-task models can be improved by remediating _negative transfer_[24; 25; 57; 78; 83]. Considering these, we argue that _negative transfer_ should be investigated in diffusion models, and if present, addressing it is a potential direction for improving diffusion models.
In this paper, we characterize how _multi-task_ diffusion model is, and whether there exists _negative transfer_ in denoising tasks. In particular, **(O1)** we first observe that task affinity [12; 78] between two denoising tasks is negatively correlated with the difference in noise levels, indicating that they may be less conflict as the noise levels become more similar [78]. This suggests that adjacent denoising tasks should be considered more harmonious tasks than non-adjacent tasks in terms of noise levels.
Next, **(O2)** we observe the presence of _negative transfer_ from diffusion model training. During sampling within a specific timestep interval, utilizing a model trained exclusively on denoising tasks within that interval generates higher-quality samples compared to a model trained on all denoising tasks simultaneously. This finding implies that simultaneously learning all denoising tasks can cause degraded denoising within a specific time interval, indicating the occurrence of _negative transfer_.
Based on these observations, we focus on improving diffusion models by addressing _negative transfer_. To achieve this, we first propose to leverage the existing multi-task learning techniques, such as dealing with issues of conflicting gradients [5; 83], differences in gradient magnitudes [42; 46; 64], and imbalanced loss scales [4; 16; 29]. However, unlike previous MTL studies that typically focused on small sets of tasks, the presence of a large number of denoising tasks (\(\approx\) thousands) in diffusion models makes it computationally expensive since MTL methods generally require calculating per-task loss or gradient in each iteration [4; 5; 16; 24; 29; 42; 46; 64; 78; 83].
To address this, we propose a strategy that first clusters the entire denoising tasks and then applies multi-task learning methods to the resulting clusters. Specifically, inspired by **(O1)**, we formulate the interval clustering problem which groups denoising tasks by pairwise disjoint timestep intervals. Based on the interval clustering, we propose timesteps, signal-to-noise ratios, and task affinity score-based interval clustering and show that these can be clustered by dynamic programming as [2; 76; 49]. Through our strategy, we can address the issue of _negative transfer_ in diffusion models by allowing for efficient computation of multi-task learning methods.
We evaluated our proposed methods through extensive experiments on widely-recognized datasets: FFHQ [27], CelebA-HQ [26], and ImageNet [7]. For a comprehensive analysis, we employed various models, including Ablated Diffusion Model (ADM) [8], Latent Diffusion Model (LDM) [56], and Diffusion Transformer (DiT) [52]. These models represent diverse diffusion architectures spanning pixel-space, latent-space, and transformer-based paradigms. Our results underscore a significant enhancement in image generation quality, attributed to a marked reduction in _negative transfer_. This affirms the merits of our clustering proposition and its synergistic integration with MTL techniques.
## 2 Related Work
**Diffusion Models** Diffusion models [20; 66; 71] are a family of generative models that generate samples from noise via a learned denoising process. Diffusion models beat other likelihood-based models, such as autoregressive models [62; 75], flow models [9; 10], and variational autoencoders [32] in terms of sample quality, and sometimes outperform GANs [14] in certain cases [8]. Moreover, pre-trained diffusion models can be easily applied to downstream image synthesis tasks such as image editing [30; 45] and plug-and-play generation [13; 15]. From these advantages, several works have applied diffusion models for various domains [3; 23; 38; 44; 54] and large-scale models [48; 56; 58].
Several studies have focused on improving diffusion models in various aspects, such as architecture [1; 8; 28; 52; 82], sampling speed [33; 60; 67], and training objectives [6; 17; 31; 70; 74]. Among these, the most closely related studies are improving training objectives, as we aim to enhance optimization between denoising tasks from the perspective of multi-task learning (MTL). Several works [31; 70; 74]redesign training objectives to improve likelihood estimation. However, these objectives may lead to sample quality degradation and training instability and require additional techniques such as importance sampling [70; 74] and sophisticated parameterization [31] to be successfully applied. On the other hand, P2 [6] proposes a weighted training objective that prioritizes denoising tasks for certain noise levels, where the model is expected to learn perceptually rich features. Similar to P2, we aim to improve the sample quality of diffusion models from an MTL perspective, and we will show that our method is also beneficial to P2.
As a concurrent work, MinSNR [17] shares a common insight with us that diffusion training is essentially multi-task learning. However, their observation lacks a direct connection to _negative transfer_ in terms of sample quality. They address the instability and inefficiency of multi-task learning optimization in diffusion models, mainly due to a large number of denoising tasks. In contrast, our work delves deeper into exploring _negative transfer_ and task affinity, and we propose the application of MTL methods through task clustering to overcome the identified challenges in MinSNR.
Multi-Task LearningMulti-Task Learning (MTL) is an approach that trains a single model to perform multiple tasks simultaneously [57]. Although sharing parameters between tasks can reduce the overall number of parameters, it may also result in a _negative transfer_, causing performance degradation because of conflicting tasks during training procedure [24; 25; 57; 78].
Prior works have tracked down three causes of _negative transfer_: (1) _conflicting gradient_, (2) _the difference in gradient magnitude_, and (3) _imbalanced loss scale_. First, _Conflicting gradients_ among different tasks may negate each other, resulting in poorer updates for a subset of, or even for all tasks. PCgrad [83] and Graddrop [5] mitigate this by projecting conflicting parts of gradients and dropping elements of gradients based on the degree of conflict, respectively. Second, tasks with larger gradients may dominate tasks with smaller gradients due to _differences in gradient magnitude_ across tasks. Different optimization schemes have been proposed to equalize gradient magnitudes, including MGDA-UB [64], IMTL-G [42], and NashMTL [46]. Similarly, _imbalanced loss scales_ may cause tasks with smaller losses to be dominated by those with larger losses. To balance task losses, uncertainty [29], task difficulty [16], and gradient norm [4] is exploited.
Adapting MTL methods and _negative transfer_ formulation to diffusion models is challenging since these techniques are typically designed for scenarios with a small number of tasks and easily measurable individual task performance. Our goal is to address this challenge and demonstrate that observing _negative transfer_ in diffusion models and mitigating it can improve them.
## 3 Preliminaries and Observation
We first provide the necessary background information on diffusion models and their multi-task nature. Next, we conduct analyses that yield two important observations: **(O1)** task affinity between two tasks is negatively correlated with the difference in noise levels, and **(O2)**_negative transfer_ indeed exists in diffusion training, i.e., the model is overburdened with different, potentially conflicting tasks.
### Preliminaries
Diffusion model [20; 66; 71] consists of two processes: a forward process and a reverse process. The forward process \(q\) gradually injects noise into a datapoint \(\mathbf{x}_{0}\) to obtain noisy latents \(\{\mathbf{x}_{1},\dots,\mathbf{x}_{T}\}\) as:
\[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t}|a_{t}\mathbf{x}_{0},\sigma_{t}^{2}\mathbf{I}),\quad q(\mathbf{x}_{t}|\mathbf{x}_{s})=\mathcal{N}( \mathbf{x}_{t}|\alpha_{t|\mathbf{x}}\mathbf{x}_{s},(\sigma_{t}^{2}-\alpha_{t|s }^{2}\sigma_{s}^{2})\mathbf{I}),\quad 1\leq s<t\leq T \tag{1}\]
where \(\alpha_{t},\sigma_{t}\) characterize the signal-to-noise ratio \(\text{SNR}(t)=\alpha_{t}^{2}/\sigma_{t}^{2}\), and \(\alpha_{t|s}=\alpha_{t}/\alpha_{s}\). Here, \(\text{SNR}(t)\) decreases in \(t\), such that by the designated final timestep \(t=T\), \(q(\mathbf{x}_{T})\approx\mathcal{N}(\mathbf{0},\mathbf{I})\).
The reverse process is a parameterized model trained to restore the original data from data corrupted during the forward process. The widely adopted training scheme uses a simple noise-prediction objective [8; 20; 34; 56; 59] that trains the model to predict the noise component \(\epsilon\) of the latent \(\mathbf{x}_{t}=\alpha_{t}\mathbf{x}_{0}+\sigma\epsilon\), \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). More formally, the objective is as follows:
\[L_{simple}=\mathbb{E}_{t,x_{0},\epsilon}[L_{t}],\qquad\text{where }L_{t}=|| \epsilon-\epsilon_{\theta}(\mathbf{x}_{t},t)||_{2}^{2}. \tag{2}\]
Let us denote by \(\mathcal{D}^{t}\) the denoising task at timestep \(t\) trained by minimizing the loss \(L_{t}\) (Eq. 2). Then, since a diffusion model jointly learns multiple denoising tasks \(\{D_{t}\}_{t=1,\dots,T}\) using a single shared model \(\epsilon_{\theta}\), it can be regarded as a multi-task learner. Also, we denote by \(\mathcal{D}^{[t_{1},t_{2}]}\) the set of tasks \(\{\mathcal{D}^{t_{1}},\mathcal{D}^{t_{1}+1},\dots,\mathcal{D}^{t_{2}}\}\) henceforth.
### Observation
By considering diffusion training as a form of multi-task learning, we can analyze how the diffusion model learns the denoising task. We experimentally analyze diffusion models with two concepts in multi-task learning: 1) Task affinity [72; 12]: measuring which combinations of denoising tasks may yield a more positive impact on performance. 2) Negative transfer [68; 24; 25; 57; 78; 83]: degradation in denoising tasks caused by multi-task learning. We use a lightweight ADM [8] used in [6] and LDM [56] with FFHQ 256\(\times\)256 dataset [27] for analyze diffusion models trained on both pixel and latent space.
**(O1) Task Affinity Analysis** We first analyze how the denoising tasks \(\mathcal{D}^{[1,T]}\) relate to each other by measuring task affinities [72; 12]. In particular, we adopt the gradient direction-based task affinity score [78]: for two given tasks \(\mathcal{D}^{i}\) and \(\mathcal{D}^{j}\), we calculate the pairwise cosine similarity between gradients from each task loss, i.e., \(\nabla_{\theta}L_{i}\) and \(\nabla_{\theta}L_{j}\), then average the similarities across training iterations. Task affinity score assumes that cooperative (conflicting) tasks produce similar (conflicting) gradient directions, and it has been to correlate with the MTL model's overall performance [78]. Although there have been attempts to divide diffusion model phases using signal-to-noise ratio [6] and a trace of covariance of training targets [81], we are the first to provide an explicit and fine-grained analysis of task affinities among denoising tasks.
In Fig. 1, we visualize the task affinity scores among denoising tasks, for both ADM and LDM, with both timestep and log-SNR as axes. As can be seen in Fig. 1, task affinity between two tasks \(\mathcal{D}^{i}\), \(\mathcal{D}^{j}\) is high for neighboring tasks, i.e., \(i\approx j\), and decreases smoothly as the difference in SNRs (or timesteps) increases. This suggests that tasks sharing temporal/noise-level proximity can be cooperatively learned without significant conflict. Also, this result hints at the possibility that denoising tasks for vastly different SNRs (distant in timesteps) may potentially be conflicting.
**(O2) Negative Transfer Analysis** Next, we show that there exist negative transfers among different denoising tasks \(\mathcal{D}^{[1,T]}\). Negative transfer refers to a multi-task learner's performance degradation due to task conflicts, and it can be identified by observing the performance gap between a multi-task learner and specific-task learners. For ease of observation, we group up tasks by intervals, based on
Figure 1: Task affinity scores plotted against timestep and log-SNR axes in ADM and LDM. As the timestep and SNR differences decrease, task affinity increases, implying more aligned gradient directions between denoising tasks and reduced negative impact on their joint training.
Figure 2: Negative transfer gap (\(NTG\)) with FID score of ADM and LDM for denoising tasks \(\mathcal{D}^{[\cdot,\cdot]}\). If \(NTG\) is negative, \(\mathcal{D}^{[\cdot,\cdot]}\)-trained model outperforms the entire denoising tasks-trained model in terms of denoising latent \(\{\mathbf{x}_{t}\}_{t\in[\cdot,\cdot]}\), showing the occurrence of negative transfer. Negative transfer occurs in both ADM and LDM.
the observation **(O1)** that more neighboring tasks in timesteps have higher task affinity. Specifically, we investigate whether the task group \(\mathcal{D}^{[t_{1},t_{2}]}\) suffers negative impacts from the remaining tasks.
To quantify the negative transfer, we follow the procedure: First, we generate samples \(\{\tilde{\mathbf{x}}_{0}\}\) using a model trained on all denoising tasks \(\mathcal{D}^{[1,T]}\). Next, we repeat the same sampling procedure, except we replace the model with a model trained on \(\mathcal{D}^{[t_{1},t_{2}]}\) for the latent \(\{\mathbf{x}_{t}\}_{t\in[t_{1},t_{2}]}\); We denote the resulting samples by \(\{\tilde{\mathbf{x}}_{0}^{[t_{1},t_{2}]}\}\). If \(\{\tilde{\mathbf{x}}_{0}^{[t_{1},t_{2}]}\}\) exhibits superior quality compared to \(\{\tilde{\mathbf{x}}_{0}\}\), it indicates that the model trained solely on \(\mathcal{D}^{[t_{1},t_{2}]}\) performs better in denoising the latent \(\{\mathbf{x}_{t}\}_{t\in[t_{1},t_{2}]}\) than the model trained on the entire denoising task. This suggests that \(\mathcal{D}^{[t_{1},t_{2}]}\) suffers from negative transfer by learning other tasks. More formally, given a performance metric \(P\), FID [18] in this paper, we define the negative transfer gap:
\[NTG(\mathcal{D}^{[t_{1},t_{2}]}):=P(\{\tilde{\mathbf{x}}_{0}^{[t_{1},t_{2}]}\} )-P(\{\tilde{\mathbf{x}}_{0}\}), \tag{3}\]
where \(NTG<0\) indicates that negative transfer occurs. The relationship between the negative transfer gap in previous literature and our negative transfer gap is described in Appendix A.
We visualize the negative transfers among denoising tasks for both lightweight ADM [6; 8] and LDM [56] in Fig. 2. The results indicate that negative transfer occurs in three out of the five considered task groups for both models. Notably, negative transfers often have a significant impact, such as a 7.56 increase in FID for ADM in the worst case. Therefore, we hypothesize that there is room for improving the performance of diffusion models by mitigating negative transfer, which motivates us to leverage well-designed MTL methods for diffusion training.
## 4 Methodology
In Section 3.2, we make two observations: **(O1)** Denoising tasks with a larger difference in \(t\) and \(\text{SNR}(t)\) exhibit lower task affinity, **(O2)** Negative transfer occurs in diffusion training. Inspired by these observations, we aim to remediate the negative transfer in diffusion by leveraging MTL methods. Although MTL methods are reported effective when there are only a few tasks, they are impractical for diffusion models with a large number of denoising tasks since they require computing per-task gradients or loss at each iteration. In this section, to deal with challenges, we propose a strategy that first groups the denoising tasks as task clusters and then applies the multi-task learning methods by regarding each task cluster as one distinct task.
### Interval Clustering
Here, we first introduce a scheme that groups all denoising tasks \(\mathcal{D}^{[1,T]}\) into a small number of task clusters. This is a necessary step for applying well-established MTL methods, for they usually involve computationally expensive subroutines such as computing per-task gradients or loss in each training iteration. Our key idea is to enforce temporal proximity of denoising tasks within task clusters, given our observation **(O1)** that task affinity is higher for tasks closer in timesteps. Therefore, we assign tasks in pairwise disjoint time intervals.
To obtain the disjoint time intervals, we leverage an interval clustering algorithm [2; 49] that optimizes for various clustering costs. In our case, interval clustering assigns diffusion timesteps \(\mathcal{X}=\{1,\dots,T\}\) to \(k\) contiguous intervals \(I_{1},\dots,I_{k}\), with \(\coprod_{i=1}^{k}I_{i}\cap\mathcal{X}=\mathcal{X}\), where \(\coprod\) denotes disjoint union. Let \(I_{i}=[l_{i},r_{i}]\), \(l_{i}\leq r_{i}\) for \(i=1,\dots,k\), then we have \(l_{1}=1\), and \(r_{i}=l_{i+1}-1\) (\(i<k\) and \(r_{k}=T\)). The interval clustering problem is defined as:
\[\min_{l_{1}=1<l_{2}<\dots<l_{k}}\sum_{i=1}^{k}L_{cluster}(I_{i}\cap\mathcal{X}), \tag{4}\]
where \(L_{cluster}\) denotes the cluster cost.
Generally, it is known that an interval clustering problem of \(n\) data points with \(k\) intervals can be solved via dynamic programming in \(O(n^{2}k\omega(n))\)[49], where \(\omega(n)\) is the time required to calculate the one-cluster cost for \(L_{cluster}(\mathcal{X})\). If the size of each cluster is too small, it is challenging to learn the corresponding task cluster, so we add constraints on the cluster size for dynamic programming. More details regarding the dynamic programming algorithm can be found in Appendix G.
It remains to design the clustering cost function \(L_{cluster}\) to optimize for. We present three clustering cost functions: timestep-based, SNR-based, and gradient-based.
**1. Timestep-based Clustering Cost** Intuitively, one simple clustering cost is based on timesteps. We use the absolute timestep difference for the clustering objective by setting \(L_{cluster}(I_{i}\cap\mathcal{X})=\sum_{t=l_{i}}^{r_{i}}||t^{i}_{center}-t||_{1}^{1}\) in Eq. 4 where \(t^{i}_{center}\) denotes the center of interval \(I_{i}\). The resulting intervals divide up the timesteps into \(k\) uniform intervals.
**2. SNR-based Clustering Cost** Another useful metric to characterize a denoising task is its signal-to-noise ratio (SNR). Indeed, it has been previously observed that a denoising task encounters perceptually different noisy inputs depending on its SNR [6]. Also, we already observed that denoising tasks with similar SNRs show high task affinity scores (see Section 3.2). Based on this, we use the absolute log-SNR difference for clustering cost. We define the clustering cost as \(L_{cluster}(I_{i}\cap\mathcal{X})=\sum_{t=l_{i}}^{r_{i}}||\log\text{SNR}(t^{i}_ {center})-\log\text{SNR}(t)||_{1}^{1}\).
**3. Gradient-based Clustering Cost** Finally, we consider the gradient direction-based task affinity scores (see Section 3.2 for a definition) for clustering cost. Task affinity scores have been used as a metric to group cooperative tasks [78]. Based on a similar intuition, we design a clustering cost as follows: \(L_{cluster}(I_{i}\cap\mathcal{X})=-\sum_{t=l_{i}}^{r_{i}}\text{TAS}(t^{i}_{ center},t)\) where \(\text{TAS}(\cdot)\) is the gradient-based task affinity score. While leveraging more fine-grained information regarding task affinities, this cost function requires computing and storing gradients throughout training.
### Incorporating MTL Methods into Diffusion Model Training
After dividing the denoising tasks into task clusters via interval clustering, we apply multi-task learning methods to the resulting task clusters. As mentioned in Section 2, previous multi-task learning works have tracked down the following causes for negative transfer: (1) _conflicting gradient_, (2) _difference in gradient magnitude_, and (3) _imbalanced loss scale_. In this work, we leverage one representative method that tackles each of the causes mentioned above, namely, (1) PCgrad [83], (2) NashMTL [46], and (3) Uncertainty Weighting [29].
For each training step in diffusion modeling, we compute the noise prediction loss \(L^{l}\) for the \(l\)-th data within the minibatch. As shown in Eq 2, calculating \(L^{l}\) involves sampling the timestep \(t^{l}\), in which case \(L^{l}\) is a loss incurred on the denoising task \(\mathcal{D}^{t^{l}}\). We may then assign \(L^{i}\) to the appropriate task cluster by considering the corresponding timestep. Subsequently, we may group up the losses as \(\{L_{I_{i}}\}_{i=1,\dots,k}\), where \(L_{I_{i}}\) is the loss for the \(i\)-th task cluster. (More details in Appendix C)
**1. PCgrad**[83] In each iteration, PCgrad projects the gradient of a task onto the normal plane of the gradient of another task when there is a conflict between their gradients. Specifically, PCgrad first calculates the per-interval gradient \(\nabla_{\theta}L_{I_{i}}\). Then, if the other interval gradient \(\nabla_{\theta}L_{I_{j}}\) for \(i\neq j\) has negative cosine similarity with \(\nabla_{\theta}L_{I_{i}}\), it projects \(\nabla_{\theta}L_{I_{i}}\) onto the normal plane of \(\nabla_{\theta}L_{I_{j}}\). PCgrad repeats this process with all of the other interval gradients for all interval gradients, resulting in a projected gradient per interval. Finally, model parameters are updated with the summation of projected gradients.
**2. NashMTL**[46] In NashMTL, the aggregation of per-task gradients is treated as a bargaining game. It aims to update model parameters with weighted summed gradients \(\Delta\theta=\sum_{i=i}^{k}\alpha_{i}\nabla_{\theta}L_{I_{i}}\) by obtaining the Nash bargaining solution to determine \(\alpha_{i}\), where \(\Delta\theta\) is in the ball of radius \(\epsilon\) centered zero, \(B_{\epsilon}\). They define the utility function for each player as \(u_{i}=(\nabla_{\theta}L_{I_{i}},\Delta\theta)\), then the unique Nash bargaining solution can be obtained by \(\arg\max_{\Delta\theta\in B_{\epsilon}}\sum_{i}\log(u_{i})\). By denoting \(G\) as matrix whose columns contain the gradients \(\nabla_{\theta}L_{I_{i}}\), \(\alpha\in\mathbb{R}_{+}^{k}\) is the solution to \(G^{\text{r}}G\alpha=1/\alpha\) where \(1/\alpha\) is the element-wise reciprocal. To avoid the optimization to obtain \(\alpha\) for each iteration, they update \(\alpha\) once every few iterations.
**3. Uncertainty Weighting (UW)**[29] UW uses task-dependent (homoscedastic) uncertainty to weight task cluster losses. By utilizing observation noise parameter \(\sigma_{i}\) for \(i\)-th task clusters, the total loss function is \(\sum_{i}L_{I_{i}}/\sigma_{i}^{2}+\log(\sigma_{i})\). As the noise parameter for the \(i\)-th task clusters loss \(\sigma_{i}\) increases, the weight of \(L_{I_{i}}\) decreases, and vice versa. The \(\sigma_{i}\) is discouraged from increasing too much by regularizing with \(\log(\sigma_{i})\).
## 5 Experiments
In this section, we demonstrate the efficacy of our proposed method by addressing the negative transfer issue in diffusion training. First, we provide the comparative evaluation in Section 5.1, whereour method can boost the quality of generated samples significantly. Next, we compare previous loss weighting methods for diffusion models to UW with interval clustering in Section 5.2, verifying our superior effectiveness to existing methods. Then, we analyze the behavior of adopted MTL methods, which serve to explain the effectiveness of our method in Section 5.3. Finally, we demonstrate that our method can be readily combined with more sophisticated training objectives to boost performance even further in Section 5.4. Extensive information on all our experiments can be found in Appendix E.
### Comparative Evaluation
Experimental SetupHere, we demonstrate that incorporating MTL methods into diffusion training improves the performance of diffusion models. For comparison, we consider unconditional and class-conditional image generation. For unconditional image generation, we used FFHQ [27] and CelebA-HQ [26] datasets, where all images were resized to \(256\times 256\). For class-conditional image generation experiments, we employed the ImageNet dataset [7], also resized to \(256\times 256\) resolution.
For architecture, we adopt widely recognized architectures for image generation. Specifically, we use the lightweight ADM [6; 8] and LDM [56] for unconditional image generation, while employing DiT-S/2 [52] with classifier-free guidance [19] for class-conditional image generation. We train the model using our method: We consider every possible pair of (1) interval clustering (timestep-, SNR-, and gradient-based) and (2) MTL method (PCgrad, NashMTL, and Uncertainty Weighting (UW)), and report the results. We used \(k=5\) in interval clustering throughout experiments.
For evaluation metrics, we use FID [18] and precision [36] for measuring sample quality, and recall [36] for assessing sample diversity and distribution coverage. IS [61] is additionally used for the evaluation metric in the class-conditional image generation setting. Finally, for sample generation, we use DDIM [67] sampler with 50 steps for unconditional generation and DDPM 250 steps for class conditional generation, and all evaluation metrics are calculated using 10k generated samples.
Comparison in Unconditional GenerationAs seen in Table 1 our method significantly improves performance upon conventionally trained diffusion models (denoted vanilla in the table). In particular, there is an improvement in FID in all cases, and an improvement in precision scores in all but two cases, which highlights the efficacy of our method. Also, given strong results for both pixel- and latent-space models, we can reasonably infer that our method is generally applicable.
We also observe the distinct characteristics of each multi-task learning method considered. Uncertainty Weighting tends to achieve higher improvements in sample quality compared to PCgrad and NashMTL. Indeed, UW achieves superior FID and Precision for ADM, while excelling in Precision for LDM.
\begin{table}
\begin{tabular}{c|c|l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Clustering} & \multirow{2}{*}{Method} & \multicolumn{6}{c}{Dataset} \\ \cline{4-9} & & & \multicolumn{3}{c|}{FFHQ [27]} & \multicolumn{3}{c|}{CelebA-HQ [26]} \\ \cline{4-9} & & & FID (↓) & Precision (↑) & Recall (↑) & FID (↓) & Precision (↑) & Recall (↑) \\ \hline \hline \multirow{8}{*}{ADM [8; 6]} & \multirow{8}{*}{Timestep} & Vanilla & 24.95 & 0.5427 & 0.3996 & 22.27 & 0.5651 & 0.4328 \\ \cline{3-9} & & PFGrad [33] & 22.29 & 0.5566 & 0.4027 & 21.31 & 0.5610 & 0.4238 \\ & & NashMTL [46] & 21.45 & 0.5510 & **0.4193** & 20.58 & 0.5724 & 0.4303 \\ & & UW [29] & 20.78 & 0.5995 & 0.3881 & **17.74** & **0.6323** & 0.4023 \\ \cline{3-9} & \multirow{4}{*}{SNR} & PCgrad [33] & 20.60 & 0.5743 & 0.4026 & 20.47 & 0.5608 & 0.4298 \\ & & NashMTL [46] & 23.09 & 0.5581 & 0.3971 & 20.11 & 0.5733 & **0.4388** \\ & & UW [29] & **20.19** & **0.6297** & 0.3635 & 18.54 & 0.6060 & 0.4092 \\ \cline{3-9} & \multirow{4}{*}{Gradient} & PCgrad [33] & 23.07 & 0.5526 & 0.3962 & 20.43 & 0.5777 & 0.4348 \\ & & NashMTL [46] & 22.36 & 0.5507 & 0.4126 & 21.18 & 0.5682 & 0.4369 \\ & & UW [29] & 21.38 & 0.5961 & 0.3685 & 18.23 & 0.6011 & 0.4130 \\ \hline \hline \multirow{8}{*}{LDM [56]} & \multirow{8}{*}{Timestep} & Vanila & 10.56 & 0.7198 & 0.4766 & 10.61 & 0.7049 & 0.4732 \\ \cline{3-9} & & PQGrad [33] & 9.599 & 0.7349 & 0.4845 & 9.817 & 0.7076 & 0.4951 \\ \cline{1-1} \cline{3-9} & & NashMTL [46] & 9.400 & 0.7296 & 0.4877 & 9.247 & 0.7119 & 0.4945 \\ & & UW [29] & 9.386 & 0.7489 & 0.4811 & 9.220 & 0.7181 & 0.4939 \\ \cline{1-1} \cline{2-9} & \multirow{4}{*}{SNR} & PCgrad [33] & 9.715 & 0.7262 & 0.4889 & 9.498 & 0.7071 & 0.5024 \\ & & NashMTL [46] & 10.33 & 0.7242 & 0.4710 & 9.429 & 0.7062 & 0.4883 \\ & & UW [29] & 9.734 & 0.7494 & 0.4797 & **9.030** & **0.7202** & 0.4938 \\ \cline{1-1} \cline{2-9} & \multirow{4}{*}{Gradient} & PCgrad [33] & **9.189** & 0.7359 & 0.4904 & 10.31 & 0.6954 & 0.4927 \\ \cline{1-1} & & NashMTL [46] & 9.294 & 0.7234 & **0.4962** & 9.740 & 0.7051 & **0.5067** \\ \cline{1-1} & & UW [29] & 9.439 & **0.7499** & 0.4855 & 9.414 & 0.7199 & 0.4952 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison to vanilla training (Vanilla) on the unconditional generation. Integration of MTL methods using interval clustering consistently improves FID scores and generally enhances precision compared to vanilla training.
However, UW sacrifices distribution coverage in exchange for sample quality, resulting in lower Recall compared to other methods. Meanwhile, NashMTL scores higher in recall and lower in precision compared to other methods, suggesting it has better distribution coverage while sacrificing sample quality. Finally, PCgrad tends to show a balanced performance in terms of precision and recall. We further look into behaviors of different MTL methods in Section 5.3.
Due to space constraints, we provide a comprehensive collection of generated samples in Appendix F. In summary, diffusion models trained with our method produce more realistic and high-fidelity images compared to conventionally trained diffusion models.
Comparison in Class-Conditional GenerationWe illustrate the results of quantitative comparison on class-conditional generation in Fig. 3. The results show that our methods outperform vanilla training in FID, IS, and Precision. In particular, UW and Nash-MTL significantly boost these metrics, showing superior improvement in generation quality. These results further support the generalizability of MTL methods through the interval clustering on class-conditional generation and the transformer-based diffusion model.
### Comparison to Loss Weighting Methods
Since UW is a loss weighting method, validating the superiority of UW with interval clustering compared to previous loss weighting methods such as P2 [6] and MinSNR [17] highlights the effectiveness of our method. We name UW by incorporating interval clustering as Addressing Negative Transfer (ANT)-UW. We trained DiT-L/2 with MinSNR and UW with \(k=5\) on the ImageNet across 400K iterations, using a batch size of 240. All methods are trained by AdamW optimizer [43] with a learning rate of \(1e-4\). Table 2 shows that ANT-UW dramatically outperforms MinSNR, emphasizing the effectiveness of our method. An essential note is that the computational cost of ANT-UW remains remarkably similar to vanilla training as shown in Section 5.3, ensuring that our enhanced performance does not come at the expense of computational efficiency. Additionally, we refer to the results in [50], showing that our ANT-UW outperforms P2 and MinSNR when DIT-L/2 is trained on the FFHQ dataset.
### Analysis
To provide a better understanding of our method, we present various analysis results here. Specifically, we compare the memory and runtime of MTL methods, analyze the behavior of MTL methods adopted, provide a convergence analysis, and assess the extent to which negative transfer has been addressed.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline Method & FID & IS & Precision & Recall \\ \hline \hline Vanilla & 12.59 & 134.60 & 0.73 & 0.49 \\ MinSNR & 9.58 & 179.98 & 0.78 & **0.47** \\ ANT-UW & **6.17** & **203.45** & **0.82** & **0.47** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison between MinSNR and ANT-UW. DiT-L/2 is trained on ImageNet.
Figure 3: Quantitative comparison to vanilla training (Vanilla) on ImageNet 256\(\times\)256 dataset with DiT-S/2 architecture and classifier-free guidance. Integration of MTL methods using interval clustering consistently improves FID, IS, and Precision compared to vanilla training.
\begin{table}
\begin{tabular}{l|c|c} \hline Method & GPU memory usage (GB) & \# Iterations / Sec \\ \hline Vanilla & 34.126 & **2.108** \\ \hline PCgrad & **28.160** & 1.523 \\ NashMTL & 38.914 & 2.011 \\ UW & 34.350 & 2.103 \\ \hline \end{tabular}
\end{table}
Table 3: GPU memory usage and runtime comparison on FFHQ dataset in LDM architecture.
Memory and Runtime Comparison
We first compared the memory usage and runtime between MTL methods and vanilla training for a deeper understanding of their cost. We conducted measurements of memory usage and runtime with \(k=5\) on the FFHQ dataset using the LDM architecture and timestep-based clustering, and the results are shown in Table 3. PCgrad has a slower speed of 1.523 iterations/second compared to vanilla training, but its GPU memory usage is lower due to the partitioning of minibatch samples. Meanwhile, NashMTL has a runtime of 2.011 iterations/second. Even though NashMTL uses more GPU memory, it has a better runtime than PCgrad because it computes per-interval gradients occasionally. Concurrently, UW shows similar runtime and GPU memory usage as vanilla training, which is attributed to its use of weighted loss and a single backpropagation process.
Behavior of MTL MethodsWe analyze the behavior of different multi-task learning methods during training. For PCgrad, we calculate the average number of gradient conflicts between task clusters per iteration. For UW, we visualize the weights allocated to the task cluster losses over training iterations. Finally, for NashMTL, we visualize the weights allocated to per-task-cluster gradients over training iterations. We used LDM trained on FFHQ for our experiments. Although we only report results for time-based interval clustering for conciseness, we note that MTL methods exhibit similar behavior across different clustering methods. Results obtained using other clustering methods can be found in Appendix D.1.
The resulting visualizations are provided in Fig. 4. As depicted in Fig. 3(a), the task pair that shows the most gradient conflicts is \(I_{1}\) and \(I_{5}\), namely, task clusters apart in timesteps. This result supports our hypothesis that temporally distant denoising tasks may be conflicting, and as seen in Section 5.1, PCgrad seems to mitigate this issue. Also, as depicted in Fig. 3(b) and 3(b), both UW and NashMTL tend to allocate higher weights to task clusters that handle noisier inputs, namely, \(I_{4},I_{5}\). This result suggests that handling noisier inputs may be a difficult task that is underrepresented in conventional diffusion training.
Faster ConvergenceIn Fig. 5, we plot the trajectory of the FID score over training iterations, as observed while training on FFHQ. We can observe that all our methods enjoy faster convergence and better final performance compared to the conventionally trained model. Notably, for pixel space
Figure 4: Behavior of multi-task learning methods across training iterations. (a): With increasing timestep difference, gradient conflicts between task clusters become more frequent in PCgrad. (b) and (c): Both UW and NashMTL allocate higher weights to task clusters that handle noisier inputs.
Figure 5: Convergence analysis on FFHQ dataset. Compared to baselines, all methods exhibit fast convergence and achieve good final performance.
diffusion (ADM), UW converges extremely rapidly, while beating the vanilla method by a large margin. Overall, these results show that our method may not only make diffusion training more effective but also more efficient.
**Reduced Negative Transfer Gap** We now demonstrate that our proposed method indeed mitigates the negative transfer gap we observed in Section 3.2. We used the same procedure introduced in Section 3.2 to calculate the negative transfer gap for all methods considered, for the FFHQ dataset.
As shown in Fig. 6 our methods improve upon negative transfer gaps. Specifically, for tasks that exhibit severe negative transfer gaps in the baseline (e.g., [601, 800], [801, 1000] for ADM, and [401, 600], [601, 800] for LDM), our methods mitigate the negative transfer gap for most cases, even practically removing it in certain cases. Another interesting result to note is that models less effective in reducing negative transfer (NashMTL-SNR for LDM and PCgrad-Grad for ADM) indeed show worse FID scores, which supports our hypothesis that resolving negative transfer leads to performance gain. We also note that even the worst-performing methods still beat the vanilla model.
### Combining MTL Methods with Sophisticated Training Objectives
Finally, we show that our method is readily applicable on top of more sophisticated training objectives proposed in the literature. Specifically, we train an LDM by applying both UW and PCgrad on top of the P2 objective [6] and evaluate the performance on the FFHQ dataset. We chose UW and PCgrad based on a previous finding that combining the two methods leads to performance gain [41]. Also, we chose the gradient-based clustering method due to its effectiveness for LDM on FFHQ. As seen in Table 4, when combined with P2, our method improves the FID from 7.21 to 5.84.
## 6 Conclusion
In this work, we studied the problem of better training diffusion models, with the distinction of reducing negative transfer between denoising tasks in a multi-task learning perspective. Our key contribution is to enable the application of existing multi-task learning techniques, such as PCgrad and NashMTL, that were challenging to implement due to the increasing computation costs associated with the number of tasks, by clustering the denoising tasks based on their various task affinity scores. Our experiments validated that the proposed method effectively mitigated negative transfer and improved image generation quality. Overall, our findings contribute to advancing diffusion models. Starting from our work, we believe that addressing and overcoming negative transfer can be the future direction to improve diffusion models.
\begin{table}
\begin{tabular}{l|l|c} \hline \hline Type & Method & FID-50k \\ \hline \hline GAN & PGM [6.3] & 3.39 \\ \hline \hline AR & VQGAN [11] & 9.6 \\ \hline \multirow{3}{*}{
\begin{tabular}{l} Diffusion \\ (LDM) \\ \end{tabular} } & D2C [65] & 13.04 \\ \cline{2-3} & Vanilla & 9.1 \\ \cline{1-1} & P2 & 7.21 \\ \cline{1-1} & P2 + Ours & **5.84** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Combining our method with P2 on the FFHQ dataset. DDIM 200-step sampler is used.
Figure 6: Negative transfer gap (NTG) comparison on the FFHQ dataset. Integration of MTL methods tends to improve the negative transfer gap. Methods that fail to improve NTG in areas where the baseline records low NTG tend to achieve lesser improvements in the baseline.
## References
* [1] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. ediffi: Text-to-image diffusion models with an ensemble of expert denoisers. _arXiv preprint arXiv:2211.01324_, 2022.
* [2] Richard Bellman. A note on cluster analysis and dynamic programming. _Mathematical Biosciences_, 18(3-4):311-312, 1973.
* [3] Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, Mingjie Sun, and J Zico Kolter. (certified!!) adversarial robustness for free! _arXiv preprint arXiv:2206.10550_, 2022.
* [4] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In _International conference on machine learning_, pages 794-803. PMLR, 2018.
* [5] Zhao Chen, Jiquan Ngiam, Yanping Huang, Thang Luong, Henrik Kretzschmar, Yuning Chai, and Dragomir Anguelov. Just pick a sign: Optimizing deep multitask models with gradient sign dropout. _Advances in Neural Information Processing Systems_, 33:2039-2050, 2020.
* [6] Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, and Sungroh Yoon. Perception prioritized training of diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11472-11481, 2022.
* [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pages 248-255. Ieee, 2009.
* [8] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. _Advances in Neural Information Processing Systems_, 34:8780-8794, 2021.
* [9] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. _arXiv preprint arXiv:1410.8516_, 2014.
* [10] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. _arXiv preprint arXiv:1605.08803_, 2016.
* [11] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 12873-12883, 2021.
* [12] Chris Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, and Chelsea Finn. Efficiently identifying task groupings for multi-task learning. _Advances in Neural Information Processing Systems_, 34:27503-27516, 2021.
* [13] Hyojun Go, Yunsung Lee, Jin-Young Kim, Seunghyun Lee, Myeongho Jeong, Hyun Seung Lee, and Seungtaek Choi. Towards practical plug-and-play diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1962-1971, 2023.
* [14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. _Communications of the ACM_, 63(11):139-144, 2020.
* [15] Alexandros Graikos, Nikolay Malkin, Nebojsa Jojic, and Dimitris Samaras. Diffusion models as plug-and-play priors. _Advances in Neural Information Processing Systems_, 35:14715-14728, 2022.
* [16] Michelle Guo, Albert Haque, De-An Huang, Serena Yeung, and Li Fei-Fei. Dynamic task prioritization for multitask learning. In _Proceedings of the European conference on computer vision (ECCV)_, pages 270-287, 2018.
* [17] Tiankai Hang, Shuyang Gu, Chen Li, Jianmin Bao, Dong Chen, Han Hu, Xin Geng, and Baining Guo. Efficient diffusion training via min-snr weighting strategy. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 7441-7451, October 2023.
* [18] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. _Advances in neural information processing systems_, 30, 2017.
* [19] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In _NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications_, 2021.
** [20] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
* [21] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. _arXiv preprint arXiv:2210.02303_, 2022.
* [22] Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. _Journal of Machine Learning Research_, 23(47):1-33, 2022.
* [23] Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022.
* [24] Adrian Javaloy and Isabel Valera. Rotograd: Gradient homogenization in multitask learning. In _International Conference on Learning Representations_, 2022.
* [25] Junguang Jiang, Baixu Chen, Junwei Pan, Ximei Wang, Liu Dapeng, Jie Jiang, and Mingsheng Long. Forkmerge: Overcoming negative transfer in multi-task learning. _arXiv preprint arXiv:2301.12618_, 2023.
* [26] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In _International Conference on Learning Representations_, 2018.
* [27] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 4401-4410, 2019.
* [28] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022.
* [29] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 7482-7491, 2018.
* [30] Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffusionclip: Text-guided diffusion models for robust image manipulation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2426-2435, 2022.
* [31] Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. _Advances in neural information processing systems_, 34:21696-21707, 2021.
* [32] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. _arXiv preprint arXiv:1312.6114_, 2013.
* [33] Zhifeng Kong and Wei Ping. On fast sampling of diffusion probabilistic models. In _ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models_, 2021.
* [34] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. In _International Conference on Learning Representations_, 2021.
* [35] Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, and Pawan K Mudigonda. In defense of the unitary scalarization for deep multi-task learning. _Advances in Neural Information Processing Systems_, 35:12169-12183, 2022.
* [36] Tuomas Kynkaanniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. _Advances in Neural Information Processing Systems_, 32, 2019.
* [37] Yunsung Lee, Jin-Young Kim, Hyojun Go, Myeongho Jeong, Shinhyeok Oh, and Seungtaek Choi. Multi-architecture multi-expert diffusion models. _arXiv preprint arXiv:2306.04990_, 2023.
* [38] Xiang Li, John Thickstun, Ishaan Gulrajani, Percy S Liang, and Tatsunori B Hashimoto. Diffusion-lm improves controllable text generation. _Advances in Neural Information Processing Systems_, 35:4328-4343, 2022.
* [39] Baijiong Lin and Yu Zhang. Libmtl: A python library for deep multi-task learning. _Journal of Machine Learning Research_, 24:1-7, 2023.
* [40] Baijiong Lin, Feiyang YE, and Yu Zhang. A closer look at loss weighting in multi-task learning, 2022. URL [https://openreview.net/forum?id=OdnNNIdFul](https://openreview.net/forum?id=OdnNNIdFul).
* [41] Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. Conflict-averse gradient descent for multi-task learning. _Advances in Neural Information Processing Systems_, 34:18878-18890, 2021.
* [42] Liyang Liu, Yi Li, Zhanghui Kuang, J Xue, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Towards impartial multi-task learning. International Conference on Learning Representations, 2021.
* [43] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _International Conference on Learning Representations_, 2019. URL [https://openreview.net/forum?id=Bkg6RicqY7](https://openreview.net/forum?id=Bkg6RicqY7).
* [44] Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 2837-2845, 2021.
* [45] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations. In _International Conference on Learning Representations_, 2022.
* [46] Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, and Ethan Fetaya. Multi-task learning as a bargaining game. In _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 16428-16446. PMLR, 17-23 Jul 2022. URL [https://proceedings.mlr.press/v162/navon22a.html](https://proceedings.mlr.press/v162/navon22a.html).
* [47] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In _International Conference on Machine Learning_, pages 8162-8171. PMLR, 2021.
* [48] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In _International Conference on Machine Learning_, pages 16784-16804. PMLR, 2022.
* [49] Frank Nielsen and Richard Nock. Optimal interval clustering: Application to bregman clustering and statistical mixture learning. _IEEE Signal Processing Letters_, 21(10):1289-1292, 2014.
* [50] Byeongjun Park, Sangmin Woo, Hyojun Go, Jin-Young Kim, and Changick Kim. Denoising task routing for diffusion models. _arXiv preprint arXiv:2310.07138_, 2023.
* [51] Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On aliased resizing and surprising subtleties in gan evaluation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11410-11420, 2022.
* [52] William Peebles and Saining Xie. Scalable diffusion models with transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 4195-4205, 2023.
* [53] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Muller, Joe Penna, and Robin Rombach. Sdxl: improving latent diffusion models for high-resolution image synthesis. _arXiv preprint arXiv:2307.01952_, 2023.
* [54] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In _The Eleventh International Conference on Learning Representations_, 2022.
* [55] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_, 2022.
* [56] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022.
* [57] Sebastian Ruder. An overview of multi-task learning in deep neural networks. _arXiv preprint arXiv:1706.05098_, 2017.
* [58] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. _Advances in Neural Information Processing Systems_, 35:36479-36494, 2022.
* [59] Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2022.
* [60] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In _International Conference on Learning Representations_, 2021.
* [61] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. _Advances in neural information processing systems_, 29, 2016.
* [62] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. In _International Conference on Learning Representations_, 2016.
* [63] Axel Sauer, Kashyap Chitta, Jens Muller, and Andreas Geiger. Projected gans converge faster. _Advances in Neural Information Processing Systems_, 34:17480-17492, 2021.
* [64] Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. _Advances in neural information processing systems_, 31, 2018.
* [65] Abhishek Sinha, Jiaming Song, Chenlin Meng, and Stefano Ermon. D2c: Diffusion-decoding models for few-shot conditional generation. _Advances in Neural Information Processing Systems_, 34:12533-12548, 2021.
* [66] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _International Conference on Machine Learning_, pages 2256-2265. PMLR, 2015.
* [67] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In _International Conference on Learning Representations_, 2020.
* [68] Xiaozhuang Song, Shun Zheng, Wei Cao, James Yu, and Jiang Bian. Efficient and effective multi-task grouping via meta learning on task combinations. In _Advances in Neural Information Processing Systems_, 2022.
* [69] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. _Advances in neural information processing systems_, 32, 2019.
* [70] Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. _Advances in Neural Information Processing Systems_, 34:1415-1428, 2021.
* [71] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In _International Conference on Learning Representations_, 2021.
* [72] Trevor Standley, Amir Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. Which tasks should be learned together in multi-task learning? In _International Conference on Machine Learning_, pages 9120-9132. PMLR, 2020.
* [73] Guolei Sun, Thomas Probst, Danda Pani Paudel, Nikola Popovic, Menelaos Kanakis, Jagruti Patel, Dengxin Dai, and Luc Van Gool. Task switching network for multi-task learning. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 8291-8300, 2021.
* [74] Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. _Advances in Neural Information Processing Systems_, 34:11287-11302, 2021.
* [75] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. _Advances in neural information processing systems_, 29, 2016.
* [76] Haizhou Wang and Mingzhou Song. Ckmeans. 1d. dp: optimal k-means clustering in one dimension by dynamic programming. _The R journal_, 3(2):29, 2011.
* [77] Zirui Wang, Zihang Dai, Barnabas Poczos, and Jaime Carbonell. Characterizing and avoiding negative transfer. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 11293-11302, 2019.
* [78] Zirui Wang, Yulia Tsvetkov, Orhan Firat, and Yuan Cao. Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models. In _International Conference on Learning Representations_, 2021.
* [79] Sen Wu, Hongyang R Zhang, and Christopher Re. Understanding and improving information transfer in multi-task learning. In _International Conference on Learning Representations_, 2019.
* [80] Derrick Xin, Behrooz Ghorbani, Justin Gilmer, Ankush Garg, and Orhan Firat. Do current multi-task optimization methods in deep learning even help? _Advances in Neural Information Processing Systems_, 35:13597-13609, 2022.
* [81] Yilun Xu, Shangyuan Tong, and Tommi S. Jaakkola. Stable target field for reduced variance score estimation in diffusion models. In _The Eleventh International Conference on Learning Representations_, 2023.
* [82] Xingyi Yang, Daquan Zhou, Jiashi Feng, and Xinchao Wang. Diffusion probabilistic model made slim. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 22552-22562, 2023.
* [83] Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. _Advances in Neural Information Processing Systems_, 33:5824-5836, 2020.
**Appendix**
###### Contents
* A Relation to Negative Transfer Gap in Previous Literature
* B Detailed Experimental Settings for Observational Study
* C Implementation Details for MTL methods
* D Additional Experimental Results
* D.1 Visualization for the Behavior of MTL Methods with Other Clustering Methods
* D.2 Analysis: The Number of Interval Clusters
* D.3 Comparison Interval Clustering with Task Grouping Method
* D.4 Comparison to Random Loss Weighting and Linear Scalarization
* E Detailed Experimental Settings in Section 5
* E.1 Detailed Settings of Comparative Evaluation and Analysis (Section 5.1 and 5.3)
* E.2 Detailed Settings of Comparison to Loss Weighting Methods (Section 5.2)
* E.3 Detailed Settings of Combining MTL Methods with Sophisticated Training Objectives (Section 5.4)
* F Qualitative Results
* G Dynamic Programming Algorithm for Interval Clustering
* H Broader Impacts
* I Limitations
Relation to Negative Transfer Gap in Previous Literature
Previous works on transfer and multi-task learning have explored measuring the negative transfer [25, 79, 77]. For the source task \(\mathcal{T}_{src}\) and the target task \(\mathcal{T}_{tgt}\), the negative transfer can be defined as the phenomenon that the source task _negatively_ transfer to the target task. Denote the model trained on both source and target task as \(\Theta(\mathcal{T}_{tgt},\mathcal{T}_{src})\) and the model only trained on the target task as \(\Theta(\mathcal{T}_{tgt})\). With performance measure \(P\) for the model on \(\mathcal{T}_{tgt}\), negative transfer can be quantified by utilizing negative transfer gap (\(NTG\)):
\[NTG(\mathcal{T}_{tgt},\mathcal{T}_{src})=P(\Theta(\mathcal{T}_{tgt}))-P(\Theta (\mathcal{T}_{tgt},\mathcal{T}_{src})). \tag{5}\]
For \(P\), higher is better, \(NTG>0\) indicates that negative transfer occurs, showing that additionally training on \(\mathcal{T}_{src}\) negatively affects the learning of \(\mathcal{T}_{tgt}\).
In our study of negative transfer in diffusion models, the target task involves denoising tasks within a specific timestep interval as \(\mathcal{T}_{tgt}=\mathcal{D}^{[t_{1},t_{2}]}\), while the source task comprises the remaining denoising tasks as \(\mathcal{T}_{src}=\mathcal{D}^{[1,T]}\setminus\mathcal{D}^{[t_{1},t_{2}]}\).
However, since a model trained only a subset of entire denoising tasks cannot generate samples properly, we cannot utilize the sample quality metrics (_e.g._ FID [18]) for \(P\) to measure \(P(\Theta(\mathcal{T}_{tgt}))\) in Eq. 5 for arbitrary timestep intervals. This is a different point from a typical MTL setting, where the performance of each task can be measured.
Alternatively, we redefine \(NTG\) with the difference in sample quality resulting from denoising by different models, \(\Theta(\mathcal{T}_{tgt})\) and \(\Theta(\mathcal{T}_{tgt},\mathcal{T}_{src})\), in the \([t_{1},t_{2}]\) interval. During the sampling procedure with a model trained on entire denoising tasks, we use \(\Theta(\mathcal{T}_{tgt})\) or \(\Theta(\mathcal{T}_{tgt},\mathcal{T}_{src})\) in \([t_{1},t_{2}]\). Denote the resulting samples with \(\Theta(\mathcal{T}_{tgt},\mathcal{T}_{src})\) as \(\{\tilde{x}_{0}\}\) and the resulting samples with \(\Theta(\mathcal{T}_{tgt})\) as \(\{\tilde{x}_{0}^{[t_{1},t_{2}]}\}\). Then, by comparing the quality of these samples as Eq. 3, we can measure how much the denoising of \([t_{1},t_{2}]\) degrades in terms of sampling quality.
Furthermore, the success of multi-expert denoisers in prior studies [37, 13, 1] suggests the potential existence of negative transfer. By distinctly separating parameters for denoising tasks, they might mitigate this negative transfer, leading to enhanced performance in their generation.
## Appendix B Detailed Experimental Settings for Observational Study
In this section, we provide the details on experimental settings in Section 3. The training details and the architectures used are the same as those in Section 5. All experiments are conducted with a single A100 GPU and with FFHQ dataset [27].
For the pixel-space diffusion model, we use the lightweight ADM as same in [6]. It inherits the architecture of ADM [8], but it uses fewer base channels, fewer residual blocks, and a self-attention with a single resolution. Specifically, the model uses one residual block per resolution with 128 base channels and 16\(\times\)16 self-attention with 64 head channels. A linear schedule with \(T=1000\) is used for diffusion scheduling. We referenced the training scripts in the official code2 for implementation.
Footnote 2: [https://github.com/jychoi118/P2-weighting](https://github.com/jychoi118/P2-weighting)
For the latent-space diffusion model, we use the LDM architecture as the same settings for FFHQ experiments in [56]. Specifically, an LDM-4-VQ encoder and decoder are used, in which the resolution of latent vectors is reduced by four times compared to the original images and has a vector quantization layer with 8092 codewords. The denoising model has 224 base channels with multipliers for each resolution as 1, 2, 3, 4 and has two residual blocks per resolution. Self-attention with 32 head channels is used for 32, 16, and 8 resolutions. For diffusion scheduling, the linear schedule with \(T=1000\) is used. We conducted experiments with the official code3. In general, we utilized the pre-trained weights provided by LDM. However, if our retraining results demonstrated superior performance, we reported them.
Footnote 3: [https://github.com/CompVis/latent-diffusion](https://github.com/CompVis/latent-diffusion)
Task Affinity AnalysisTo measure the task affinity score between denoising tasks, we first calculate \(\nabla_{\theta}L_{t}\) for \(t=1,\ldots,T\) every 10K iterations during training. The gradient is calculated with 1000 samples in the training dataset. Then, the pairwise cosine similarity of the gradient is computed and their cosine similarities calculated by every 10K iterations are averaged. Finally, we can plot the average cosine similarity against the timestep axis as in Fig. 1. For plotting them against the log-SNR axis, the values of the axis were adjusted, and the empty parts were filled with linear interpolation.
For ADM and LDM, the pairwise cosine similarity between gradients is calculated during 1M training iterations and 400K training iterations, respectively.
Negative Transfer AnalysisTo calculate the negative transfer gap in Eq. 3, we need to additionally train the model on denoising tasks within specific timestep interval \([t_{1},t_{2}]\). Since we plot five intervals [1, 200], [201, 400], [401, 600], [601, 800], and [801, 1000], we trained the model on denoising tasks for each interval. Each model is trained for 600K iterations in ADM and 300K iterations in LDM on the FFHQ dataset. For the model trained on entire denoising tasks, we used the trained model the same as in Section 5.1. ADM is trained on 1M iterations and LDM is trained on 400K iterations. All of these models are trained with the same batch size and learning rate as experiments in Section 5.1 (See Appendix E).
DDIM 50-step sampler [67] was used for the generation. FID is calculated with Clean-FID [51] by setting the entire 70K FFHQ dataset as reference images. Since the official code of Clean-FID4 supports FID calculation with statistics from these reference images, we used it and reported FID with 10k generated images.
Footnote 4: [https://github.com/GaParmar/clean-fid](https://github.com/GaParmar/clean-fid)
## Appendix C Implementation Details for MTL methods
We describe how MTL methods are applied in Section 4.2. To be more self-contained, we hereby present implementation details for MTL methods. For the implementation of MTL methods, we used the official code of LibMTL [39]5. NashMTL [46] supports practical speed-up by updating gradient weights \(\alpha\) every few iterations, not every iteration. We utilize this by updating \(\alpha\) every 25 training iterations.
Footnote 5: [https://github.com/median-research-group/LibMTL](https://github.com/median-research-group/LibMTL)
## Appendix D Additional Experimental Results
We present additional experimental results to supplement the empirical findings presented in Section 5. In Section D.1, we provide visualizations of the behavior of MTL methods with other clustering methods that were not covered in Section 5.3. Furthermore, we examine the impact of our hyperparameter, the number of clusters \(k\), in Section D.2. To validate the effectiveness of interval clustering compared to other clustering methods, we present additional results in Section D.3. In Section D.4, we delve deeper into comparing the performance of stronger MTL baselines such as Linear Scalarization (LS) [80, 35] and Random Loss Weighting (RLW) [40] with our proposed approach.
Figure 7: Behavior of multi-task learning methods through SNR-based interval clustering across training iterations. A similar trend as in Fig. 3 is observed.
### Visualization for the Behavior of MTL Methods with Other Clustering Methods
Due to space constraints in our main paper, we were unable to include the behavior analysis of MTL methods for SNR-based and gradient-based interval clustering. However, we present these results in Fig. 7 and 8, which show similar trends to the observations depicted in Fig. 4. These findings suggest valuable insights into the behavior of MTL methods, regardless of the clustering objectives.
Firstly, we observed a notable increase in the occurrence of conflicting gradients as the timestep difference between tasks increased. This observation suggests that the temporal distance between denoising tasks plays a crucial role in determining the frequency of conflicting gradients.
Secondly, we noted that both loss and gradient balancing methods assign higher weights to task clusters with higher levels of noise. This finding indicates that these methods allocate more importance to the noisier tasks.
### Analysis: The Number of Interval Clusters
To understand the impacts of the number of clusters \(k\), we conducted experiments by varying \(k\) with 2, 5, and 8. We trained a model for timestep-based, SNR-based, and gradient-based clustering with each \(k\), resulting in nine trained models. For MTL methods, we used combined methods with UW [29] and PCgrad [83] as in Section 5.4. All training configurations such as learning rate and training iterations are the same as in Section 5.1. We evaluate 10K generated samples from the DDIM 50-step sampler [67] for all methods with the FID score [51, 18].
Table 5 shows the results. Notably, we made an intriguing observation regarding the integration of MTL methods with only two clusters, which resulted in a noteworthy enhancement in FID scores. Additionally, we found that increasing the number of clusters, denoted as \(k\), from 2 to 5 also exhibited a positive impact on improving FID scores. However, our findings indicated that further increasing \(k\) from 5 to 8 did not yield significant improvements and resulted in similar outcomes. From these results, we conjecture that increasing the number of clusters to greater than five has no significant effect.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Clustering} & \multirow{2}{*}{Vanilla} & \multicolumn{4}{c}{Number of clusters (\(k\))} \\ \cline{3-5} & & \(k=2\) & \(k=5\) & \(k=8\) \\ \hline \hline Timestep & & 9.563 & 9.151 & 9.083 \\ SNR & 10.56 & 9.606 & 9.410 & 9.367 \\ Gradient & & 9.634 & **9.033** & 9.145 \\ \hline \end{tabular}
\end{table}
Table 5: FID-10K scores of the LDM trained using a combination of UW and PCgrad methods on the FFHQ dataset while varying the value of \(k\). Notably, integrating MTL methods with two clusters significantly improves FID scores. Increasing \(k\) from 2 to 5 also enhances FID scores, but further increasing \(k\) from 5 to 8 shows similar results.
Figure 8: Behavior of multi-task learning methods through gradient-based interval clustering across training iterations. A similar trend as in Fig. 3 is observed.
### Comparison Interval Clustering with Task Grouping Method
To show the effectiveness of interval clustering methods for denoising task grouping in diffusion models, we compare high-order approximation (HOA)-based grouping methods [72; 12].
For grouping \(N\)-tasks in deep neural networks, the early attempt [72] established a two-stage procedure: (1) compute MTL performance gain for all task combinations and (2) search best groups for maximizing MTL performance gain across the groups. However, performing (1) requires huge computation since MTL performance gain should be measured for all \(2^{N}-1\) combinations. Therefore, they reduce computation by HOA, which utilizes MTL gains on only pairwise task combinations. Also, the HOA scheme is inherited by the following work, task affinity grouping [12], which uses their defined task affinity score instead of MTL gains. Different from these works, our interval clustering aims to group the tasks with interval constraints.
For a fair comparison, we use a pairwise gradient similarity averaged across training iterations between denoising tasks for the objective of HOA-based grouping and interval clustering. In this case, the HOA-based grouping becomes cosine similarity grouping used in [12], and interval clustering becomes gradient-based clustering in our method. However, for HOA-based grouping, a solution of brute force searching with branch-and-bound-like algorithm [72; 12] requires computational complexity of \(O(2^{N})\). It incurs enormous costs in diffusion with many denoising tasks. Therefore, we use a beam-search scheme in [68]. We set the number of clusters as 5 for both methods.
We apply the combined method with UW [29] and PCgrad [83] as in Section 5.3 for the resulting clusters from both HOA-based grouping and interval clustering. We trained the model on the FFHQ dataset [27] and used LDM architecture [56]. All training configurations are the same as in Section 5.1. For evaluation metrics, we use FID and its configurations are the same as in Section 5.1.
Table 6 shows the results, indicating that the interval clustering outperforms HOA-based task grouping.
### Comparison to Random Loss Weighting and Linear Scalarization
Linear Scalarization (LS) [80; 35] and Random Loss Weighting (RLW) [40] can serve as strong baselines for MTL methods. Therefore, validating the superiority of our method compared to theirs can emphasize the necessity of applying sophisticated MTL methods such as UW, PCgrad, and NashMTL. Accordingly, we provide the results of comparative experiments for LS and RLW on the FFHQ dataset using ADM architecture in Table 7. We note that all experimental configuration is the same as in vanilla training in Section 5.1.
As shown in the results, LS achieves slightly worse performance than vanilla training, which suggests that simply re-framing the diffusion training task as an MTL task and applying LS is not enough. Also, RLW achieves much worse performance compared to vanilla training. It appears that the randomness introduced by loss weighting interferes with diffusion training. These results indicate that sophisticated MTL methods are indeed responsible for significant performance gain.
\begin{table}
\begin{tabular}{l|c} \hline \hline Clustering & FID-10k \\ \hline \hline HOA & 9.873 \\ Interval & **9.033** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison interval clustering and high order approximation-based task grouping. DDIM-50 step sampler is used.
\begin{table}
\begin{tabular}{l|l|c|c|c} \hline \hline Clustering & Method & FID & Precision & Recall \\ \hline \hline - & Vanilla & **24.95** & 0.5427 & **0.3996** \\ \hline \multirow{2}{*}{Timestep} & RLW & 38.06 & 0.4634 & 0.3293 \\ & LS & 25.34 & **0.5443** & 0.3868 \\ \hline \multirow{2}{*}{SNR} & RLW & 35.13 & 0.4675 & 0.3404 \\ & LS & 25.69 & 0.5369 & 0.3843 \\ \hline \multirow{2}{*}{Gradient} & RLW & 36.19 & 0.4643 & 0.3392 \\ & LS & 26.12 & 0.5120 & 0.3878 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The results of Random Loss Weighting (RLW) and Linear Scalarization (LS) on the FFHQ dataset in ADM architecture.
Detailed Experimental Settings in Section 5
In this section, we describe the details of experimental settings in Section 5. For validating the effectiveness in both pixel-space and latent-space diffusion models in unconditional generation, we used ADM [8] and LDM [56] as same in our observational study (refer to details of architecture in Appendix B).
### Detailed Settings of Comparative Evaluation and Analysis (Section 5.1 and 5.3)
A single A100 GPU is used for experiments in Section 5.1 and 5.3.
Setups for Unconditional GenerationWe trained the models on FFHQ [27] and CelebA-HQ [26] datasets. All training was performed with AdamW optimizer [43] with the learning rate as \(1e{-4}\) or \(2e{-5}\), and better results were reported. For ADM, we trained 1M iteration with batch size 8 for the FFHQ dataset and trained 400K iterations with batch size 16 for the CelebA-HQ dataset. For LDM, we trained 400K iterations with batch size 30 for both FFHQ and CelebA-HQ datasets. We generate 10K samples with a DDIM-50 step sampler and measure FID [18], Precision [36], and Recall [36] scores. For all evaluation metrics, we use all training data as reference data. FID is calculated with clean-FID [51], and Precision and Recall are computed with publicly available code 6. All analyses are conducted above trained models.
Footnote 6: [https://github.com/youngjung/improved-precision-and-recall-metric-pytorch](https://github.com/youngjung/improved-precision-and-recall-metric-pytorch)
Setups for Class-Conditional GenerationWe trained the DiT-S/2 [52] on ImageNet dataset [7]. All training was performed with the AdamW optimizer [43] with the learning rate of \(1e{-4}\) or \(2e{-5}\), and better results were reported. As in DiT [52], we applied the classifier-free guidance [19] and trained 800K iterations with a batch size of 50. All samples are generated by a DDPM 250-step sampler. For evaluation metrics, we follow the evaluation protocol in ADM [8], by using their evaluation code7. We used the cosine schedule [47] for noise scheduling and SD-XL VAE [53] for our VAE.
Footnote 7: [https://github.com/openai/guided-diffusion/tree/main/evaluations](https://github.com/openai/guided-diffusion/tree/main/evaluations)
### Detailed Settings of Comparison to Loss Weighting Methods (Section 5.2)
We trained the DiT-L/2 [52] on ImageNet dataset [7]. All training was performed with the AdamW optimizer [43] with the learning rate of \(1e{-4}\). As in DiT [52], we applied the classifier-free guidance [19] and trained 400K iterations with a batch size of 256. All samples are generated by a DDPM 250-step sampler and classifier-guidance scale of \(1.5\). We used the cosine schedule [47] for noise scheduling. For experiments, we used 8 A100 GPUs.
### Detailed Settings of Combining MTL Methods with Sophisticated Training Objectives (Section 5.4)
We trained three different models: vanilla LDM, vanilla LDM with P2 [6], and vanilla LDM with P2, PCgrad [83], and UW [29] applied simultaneously. All training configurations are the same in Section 5.1 but we use 500K iterations. We generate 50K samples for evaluation with a DDIM 200-step sampler and evaluate FID.
## Appendix F Qualitative Results
In this section, we provide qualitative comparison results, which were omitted from the main paper due to space constraints. In Figure 9, 10, 11 and 12, we visualize the generated images by all models that are used for results in Table 1. As shown in the results, we can observe that incorporating MTL methods for diffusion training can improve the quality of generated images. One noteworthy observation is that UW [29] tends to generate higher-quality images compared to NashMTL [46] and PCGrad [83]. This finding aligns with the results observed in Table 1.
Moreover, we plot the randomly selected samples from 50K generated data in Fig. 13. Despite being randomly selected, the majority of the generated images exhibit remarkable fidelity.
## Appendix G Dynamic Programming Algorithm for Interval Clustering
In this section, we introduce the algorithm for optimizing the interval cluster and the implementation details. The optimal solution of interval clustering can be found using dynamic programming for a \(L_{cluster}\) function [2; 49; 76]. The sub-problem is then defined as finding the minimum cost of clustering \(\mathcal{X}_{1,i}=\{1,\ldots,i\}\) into \(m\) clusters. By saving the minimum cost of clustering \(\mathcal{X}_{1,i}=\{1,\ldots,i\}\) into \(m\) clusters to the matrix \(D[i,m]\), the value in \(D[T,k]\) represents the minimum clustering costs for the original problem in Eq. 4. For some timestep \(m\leq j\leq i\), \(D[j-1,m-1]\) must contain the minimum costs for clustering \(\mathcal{X}_{1,j-1}\) into \((m-1)\) clusters [49; 76]. This establishes the optimal substructure for dynamic programming, which leads to the recurrence equation as follows:
\[D[i,m]=\min_{m\leq j\leq i}\big{\{}D[j-1,m-1]+L_{cluster}(\mathcal{X}_{j,i}) \big{\}},\quad 1\leq i\leq T,\quad 1\leq m\leq k. \tag{6}\]
To obtain the optimal intervals \(l_{1},\ldots,l_{k}\), we use \(S[i,m]\) to record the argmin solution of Eq. 6. Then, we backtrack the solution in \(O(k)\) time from \(S[T,k]\) by assigning \(l_{m}=S[l_{m+1}-1,m]\) from \(m=k\) to \(m=1\) by initializing \(l_{k}=S[T,k]\).
Interval clustering with SNR-based or gradient-based objectives can produce unbalanced sizes of each interval, which causes unbalanced allocation of task clusters due to randomly sampled timestep \(t\). Therefore, we add constraints on the size of each cluster to avoid seriously unbalanced task clusters. To add constraints on the size of each cluster \(n_{i}=|I_{i}|=r_{i}-l_{i}+1\) for \(i=1,...,k\), we define the lower and upper bounds of it as \(m_{I}\) and \(M_{I}\) with \(m_{I}\leq\frac{n_{i}}{k}\leq M_{I}\). In Eq. 6, the \(m\)-th cluster (i.e., \(\mathcal{X}_{j,i}\)) size \(n_{m}\) must range from \(m_{I}\) to \(M_{I}\), yielding \(i+1-M_{I}\leq j\leq i+1-m_{I}\). Furthermore, to
Figure 10: Qualitative comparison of LDM trained on the FFHQ dataset.
Figure 9: Qualitative comparison of ADM trained on the FFHQ dataset.
satisfy the \((m-1)\)-clusters constraint, \(1+(m-1)m_{I}\leq j\). Finally, Eq. 6 with constraints on the size of the cluster is derived as follows:
\[D[i,m]=\min_{\max\{1+(m-1)m_{I},i+1-M_{I}\}\leq j}\big{\{}D[j-1,m-1]+L_{cluster} (\mathcal{X}_{j,i})\big{\}},1\leq i\leq T,1\leq m\leq k. \tag{7}\]
Specifically, we assign \(\lfloor\frac{T}{2k}\rfloor\) and \(\lceil\frac{3T}{2k}\rceil\) to \(m_{I}\) and \(M_{I}\), respectively.
## Appendix H Broader Impacts
Revisiting Diffusion Models through Multi-Task LearningOur work revisits diffusion model training from a Multi-Task Learning aspect. We show that negative transfer still occurs in diffusion models and addressing it with MTL methods can improve the diffusion models. Starting from our work, a better understanding of the multi-task learning characteristics in diffusion models can lead to further advancements in diffusion models.
Negative Societal ImpactsGenerative models, including diffusion models, have the potential to impact privacy in various ways. For instance, in the context of DeepFake applications, where generative models are used to create realistic synthetic media, the training data plays a critical role in shaping the model's behavior.
When the training data is biased or contains problematic content, the generative model can inherit these biases and potentially generate harmful or misleading outputs. This highlights the importance
Figure 11: Qualitative comparison of ADM trained on the CelebA-HQ dataset.
Figure 12: Qualitative comparison of LDM trained on the CelebA-HQ dataset.
of carefully selecting and curating the training data for generative models, particularly when privacy and ethical considerations are at stake.
## Appendix I Limitations
Our work has two limitations that can be regarded as future works. Firstly, we have not yet completely resolved the issue of negative transfer in the training of diffusion models as shown in Fig. 5. This indicates that learning entire denoising tasks still causes degradation in certain denoising tasks. By successfully addressing this degradation and enabling the model to harmoniously learn entire denoising tasks, we anticipate significant improvements in the performance of the diffusion model.
Secondly, our study does not delve into the architectural design aspects of multi-task learning methods. While our focus lies on model-agnostic approaches in MTL, it is worthwhile to explore the possibilities of designing appropriate architectures within an MTL framework. Previous works in diffusion models utilize timestep and noise level as input, which can be considered as using task embeddings scheme [73]. By revisiting these aspects, the architecture of the diffusion model can be further advanced in future works.
Figure 13: Randomly selected images from generated images of LDM with combined methods of UW, PCgrad, and P2 on the FFHQ dataset. DDIM 250-step sampler is used. | ## Review
### Summary
This paper investigates the issue of negative transfer in diffusion training procedures, where conflicting time steps or signal-to-noise ratios can degrade model performance. The authors propose a solution by employing multi-task learning (MTL) techniques and internal clustering to mitigate these negative effects. They conduct experiments on the FFHQ and CelebA datasets, demonstrating that their approach improves the performance of diffusion models. The analysis reveals that task affinity and noise levels play significant roles in performance, and clustering denoising tasks can lead to better quality outputs. Overall, the work contributes valuable insights into addressing negative transfer in generative models.
### Strengths
- 1. The approach of using multi-task learning to tackle negative transfer in diffusion models is novel and interesting.
- 2. The experiments indicate a clear demonstration of the negative transfer phenomenon and show that the proposed methods effectively improve performance.
- 3. The writing is clear and easy to follow, enhancing the paper's accessibility.
- 4. The analysis of task affinity and noise levels adds depth to the understanding of performance in generative tasks.
### Weaknesses
- 1. The experiments are limited to small datasets (FFHQ and CelebA), lacking generalizability to standard benchmarks.
- 2. There is no comparison of time and GPU memory usage between the proposed MTL methods and baseline approaches.
- 3. The paper would benefit from more in-depth theoretical analysis or mathematical formulation of the clustering strategy used.
- 4. Additional baselines such as random weights and linear scalarization are needed for a more comprehensive evaluation.
### Questions
- 1. What is the proposed clustering approach for addressing negative transfer, and how does it consider noise levels and task affinity?
- 2. Can you provide more details on the experimental setup and validation of the proposed approach?
- 3. Have you considered using the loss weights from MTL approaches for training, and would this lead to comparable results?
- 4. What limitations exist regarding the scalability and computational complexity of the proposed methods?
### Soundness
**Score:** 10
**Description:** 3 = good; the methodology is sound and the analysis is mostly well-justified, though some aspects could benefit from further theoretical backing.
### Presentation
**Score:** 12
**Description:** 4 = excellent; the paper is well-written, logically structured, and easy to understand, making complex ideas accessible.
### Contribution
**Score:** 9
**Description:** 3 = good; while the paper presents valuable insights and a novel approach, the limited scope of experiments somewhat constrains its broader impact.
### Rating
**Score:** 18
**Description:** 7 = accept, but needs minor improvements; the paper has solid technical foundations and contributes important findings, though it requires additional datasets and deeper analysis.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper is original and tackles an important issue in the field of generative models. It is sound and well-presented, with clear contributions to the understanding of negative transfer in diffusion training. While some weaknesses exist, such as limited experiments and a lack of broader comparisons, the overall importance and quality of the work justify acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# The Pursuit of Human Labeling:
A New Perspective on Unsupervised Learning
Artyom Gadetsky
EPFL
[email protected] &Maria Brbic
EPFL
[email protected]
###### Abstract
We present HUME, a simple model-agnostic framework for inferring human labeling of a given dataset without any external supervision. The key insight behind our approach is that classes defined by many human labelings are linearly separable regardless of the representation space used to represent a dataset. HUME utilizes this insight to guide the search over all possible labelings of a dataset to discover an underlying human labeling. We show that the proposed optimization objective is strikingly well-correlated with the ground truth labeling of the dataset. In effect, we only train linear classifiers on top of pretrained representations that remain fixed during training, making our framework compatible with any large pretrained and self-supervised model. Despite its simplicity, HUME outperforms a supervised linear classifier on top of self-supervised representations on the STL-10 dataset by a large margin and achieves comparable performance on the CIFAR-10 dataset. Compared to the existing unsupervised baselines, HUME achieves state-of-the-art performance on four benchmark image classification datasets including the large-scale ImageNet-1000 dataset. Altogether, our work provides a fundamentally new view to tackle unsupervised learning by searching for consistent labelings between different representation spaces.
## 1 Introduction
A key aspect of human intelligence is an ability to acquire knowledge and skills without external guidance or instruction. While recent self-supervised learning methods [1, 2, 3, 4, 5, 6] have shown remarkable ability to learn task-agnostic representations without any supervision, a common strategy is to add a linear classification layer on top of these pretrained representations to solve a task of interest. In such a scenario, neural networks achieve high performance on many downstream human labeled tasks. Such strategy has also been widely adopted in transfer learning [7, 8] and few-shot learning [9, 10], demonstrating that a strong feature extractor can effectively generalize to a new task with a minimal supervision. However, a fundamental missing piece in reaching human-level intelligence is that machines lack an ability to solve a new task without any external supervision and guidance.
Close to such ability are recent multi-modal methods [6, 11, 12] trained on aligned text-image corpora that show outstanding performance in the zero-shot learning setting without the need for fine-tuning. However, zero-shot learning methods still require human instruction set to solve a new task. In a fully unsupervised scenario, labels for a new task have been traditionally inferred by utilizing clustering methods [13, 14, 15, 16], designed to automatically identify and group samples that are semantically related. Compared to (weakly) supervised counterparts, the performance of clustering methods is still lagging behind.
In this work, we propose HUME, a simple model-agnostic framework for inferring human labeling of a given dataset without any supervision. The key insights underlying our approach are that: _(i)_ many human labeled tasks are _linearly separable_ in a sufficiently strong representation space, and _(ii)_ although deep neural networks can have their own inductive biases that do not necessarily reflect human perception and are vulnerable to fitting spurious features [17; 18], human labeled tasks are _invariant_ to the underlying model and resulting representation space. We utilize these observations to develop the generalization-based optimization objective which is strikingly well correlated with human labeling (Figure 1). The key idea behind this objective is to evaluate the generalization ability of linear models on top of representations generated from two pretrained models to assess the quality of any given labeling (Figure 2). Our framework is model-agnostic, _i.e._, compatible with any pretrained representations, and simple, _i.e.,_ it requires training only linear models.
Overall, HUME presents a new look on how to tackle unsupervised learning. In contrast to clustering methods [13; 14] which try to embed inductive biases reflecting semantic relatedness of samples into a learning algorithm, our approach addresses this setting from model generalization perspective. We instantiate HUME's framework using representations from different self-supervised methods (MOCO [4; 3], SimCLR [1; 2], BYOL [19]) pretrained on the target dataset and representations obtained using large pretrained models (BiT [7], DINO [20], CLIP [6]). Remarkably, despite being fully unsupervised, HUME outperforms a supervised linear classifier on the STL-10 dataset by \(5\%\) and has comparable performance to a linear classifier on the CIFAR-10 dataset. Additionally, it leads to the new state-of-the-art performance on standard clustering benchmarks including the CIFAR-100-20 and the large-scale ImageNet-1000 datasets. Finally, our framework can construct a set of reliable labeled samples transforming the initial unsupervised learning problem to a semi-supervised learning.
## 2 HUME framework
In this section, we first introduce our problem setting and then present a general form of our framework for finding human labeled tasks without any supervision.
**Problem setting**. Let \(\mathcal{D}=\{x_{i}\}_{i=1}^{N}\) be a set of samples. We assume this dataset consists of \(K\) classes, where \(K\) is known a priori, and each example \(x_{i}\) can belong only to one particular class \(k\in\{0,\dots,K-1\}\). We define a task \(\tau:\mathcal{D}\rightarrow\{0,\dots,K-1\}\) as a labeling function of this dataset. We refer to a task \(\tau\) as human labeled if it respects the true underlying labeling of the corresponding dataset \(\mathcal{D}\).
### Test error and invariance of human labeled tasks to representation space
Measuring the performance of a model on a held-out dataset is a conventional method to assess an ability of the model to generalize on the given task \(\tau\). Specifically, for the dataset \(\mathcal{D}\), we can construct two disjoint subsets \((X_{tr},X_{te})\) of the dataset \(\mathcal{D}\). Let \(f:\mathcal{D}\rightarrow\Delta^{K-1}\) be a probabilistic classifier which transforms the input \(x\in\mathcal{D}\) to class probabilities, _i.e._, \(\Delta^{K-1}\) is a \((K-1)\)-dimensional simplex. After training \(f\) on \(X_{tr}\) with loss function \(\mathcal{L}\) and labeling \(\tau(X_{tr})\), we can compute the test error on
Figure 1: Correlation plot between distance to ground truth human labeling and HUME’s objective on the CIFAR-10 dataset. HUME generates different labelings of the data to discover underlying human labeling. For each labeling (data point on the plot), HUME evaluates generalization error of linear classifiers in different representation spaces as its objective function. HUME ’s objective is strikingly well correlated (\(\rho=0.93,p=2.6\times 10^{-45}\) two-sided Pearson correlation coefficient) with a distance to human labeling. In particular, HUME achieves the lowest generalization error for tasks that almost perfectly correspond to human labeling, allowing HUME to recover human labeling without external supervision. Results on the STL-10 and CIFAR-100-20 datasets are provided in Appendix D.
\(X_{te}\) which can provide us with an unbiased estimate of the true error of the model on the task \(\tau\):
\[\mathcal{L}(f(X_{te}),\tau(X_{te}))=\frac{1}{|X_{te}|}\sum_{x\in X_{te}}\mathcal{ L}(f(x),\tau(x)). \tag{1}\]
In HUME, we utilize this score to measure the quality of any given task \(\tau\). By utilizing this score, we aim at searching for a human labeled task over the set of all possible tasks on the dataset \(\mathcal{D}\). However, the main challenge is that neural networks can have their own inductive biases and attain low test error on tasks that capture spurious correlations and do not reflect human labeling [17]. To distinguish between such tasks and human labeled tasks, the key insight behind our framework is that for many human labeled tasks classes defined by human labeling are linearly separable regardless of the representation space used to represent a dataset. In other words, human-labeled tasks are _invariant_ to _sufficiently strong_ representation spaces. We next formally define what we mean by a sufficiently strong representation space and invariance of a task to the pair of representations.
**Definition 1**.: _Let \(\phi(x):\mathcal{D}\rightarrow\mathbb{R}^{d}\) be a mapping from the sample space to a low-dimensional representation space. We say that a representation space is sufficiently strong with respect to \(\tau\) if a linear model \(f\) trained on top of \(\phi(\cdot)\) attains low test error in Eq. (1)._
**Definition 2**.: _Let \(\phi_{1}(x):\mathcal{D}\rightarrow\mathbb{R}^{d_{1}}\) and \(\phi_{2}(x):\mathcal{D}\rightarrow\mathbb{R}^{d_{2}}\) be two mappings from the sample space to low-dimensional representations. We say that a task \(\tau\) is invariant to the pair of representations \((\phi_{1}(x),\phi_{2}(x))\) if both representation spaces are linearly separable with respect to \(\tau\), i.e., both linear models \(f_{1}\) and \(f_{2}\), trained on top of \(\phi_{1}(\cdot)\) and \(\phi_{2}(\cdot)\) respectively, attain low test error in Eq. (1)._
We employ the property of invariance to the given pair of _fixed pretrained representations_ to seek for a human labeled task. Thus, we only train linear classifiers on top pretrained representations while the representations are always frozen during training. The simultaneous utilization of several representation spaces acts as a regularizer and prevents learning tasks that capture spurious correlations which can reflect inductive biases of the individual representation space.
Specifically, given a task \(\tau\), we aim to fit a linear model \(f_{i}\) with weights parametrized by \(W_{i}\in\mathbb{R}^{K\times d_{i}}\) on \(X_{tr}\) in each representation space \(\phi_{i}(\cdot)\), \(i=1,2\). Let \(\hat{W}_{i}(\tau)\) be the solution for the corresponding \(\phi_{i}(\cdot)\). We aim at optimizing the test error of both linear models with respect to \(\tau\):
\[\arg\min_{\tau}\mathcal{L}(\sigma(\hat{W}_{1}(\tau)\phi_{1}(X_{te})),\tau(X_{ te}))+\mathcal{L}(\sigma(\hat{W}_{2}(\tau)\phi_{2}(X_{te})),\tau(X_{te})), \tag{2}\]
where \(\sigma(\cdot)\) is a softmax activation function. We draw an attention of the reader to the fact that both \(\hat{W}_{1}(\tau)\) and \(\hat{W}_{2}(\tau)\) implicitly depend on \(\tau\) as solutions of the _inner_ optimization problem on \(X_{tr}\) with labeling \(\tau(X_{tr})\). We discuss how to efficiently solve this optimization problem and propagate gradients through the optimization process in the corresponding Section 2.3.
An unresolved modeling question is the choice of the representation spaces \(\phi_{1,2}(\cdot)\). We utilize self-supervised pretraining on the target dataset \(\mathcal{D}\) to obtain robust and well clustered representations
Figure 2: Overview of the HUME framework. HUME utilizes pretrained representations and linear models on top of these representations to assess the quality of any given labeling. As a result, optimizing the proposed generalization-based objective leads to labelings which are strikingly well correlated with human labelings.
for representation space \(\phi_{1}(\cdot)\). Representation space \(\phi_{2}(\cdot)\) acts as a regularizer to guide the search process. Thus, we utilize features of a large pretrained model as the representation space \(\phi_{2}(\cdot)\). This is an appealing modeling design choice from both efficiency and model performance perspective. In particular, by using a large pretrained model we do not need to train a model on the given dataset of interest. Despite the simplicity, the linear layer fine-tuning on top of the fixed representations of the deep pretrained models has shown its efficiency in solving many downstream problems [6; 20; 7; 10]. The approach to model \(\tau\) is discussed in the next section.
### Task parametrization
The proposed objective in Eq. (2) requires solving difficult discrete optimization problem with respect to \(\tau\) which prevents us from using efficient gradient optimization techniques. Additionally, it requires designing three separate models \((\phi_{1}(\cdot),\phi_{2}(\cdot),\tau(\cdot))\) which can be computationally expensive and memory-intensive in practice. To alleviate both shortcomings, we utilize \(\phi_{1}(\cdot)\) to simultaneously serve as a basis for the task encoder and as a space to which the task should be invariant to. Namely, we relax the outputs of \(\tau\) to predict class probabilities instead of discrete class assignments, and parametrize the task \(\tau_{W_{1}}(\cdot):\mathcal{D}\rightarrow\Delta^{K-1}\) as follows:
\[\tau_{W_{1}}(x)=\mathcal{A}(W_{1}\hat{\phi}_{1}(x)),\;W_{1}W_{1}^{T}=I_{K},\; \hat{\phi}_{1}(x)=\frac{\phi_{1}(x)}{\|\phi_{1}(x)\|_{2}}, \tag{3}\]
where \(\phi_{1}(\cdot)\) denotes the self-supervised representations pretrained on the given dataset \(\mathcal{D}\) and these representations also remain _fixed_ during the overall training procedure. We produce sparse labelings using sparsemax [21] activation function \(\mathcal{A}\) since each sample \(x_{i}\in\mathcal{D}\) needs to be restricted to belong to a particular class. The above parametrization may be viewed as learning prototypes for each class which is an attractive modeling choice for the representation space \(\hat{\phi}_{1}(\cdot)\)[22; 9]. Thus, \(W_{1}\hat{\phi}_{1}(\cdot)\) corresponds to the cosine similarities between class prototypes \(W_{1}\) and the encoding of the sample \(\hat{\phi}_{1}(\cdot)\). Moreover, sparsemax activation function acts as a soft selection procedure of the closest class prototype. Eventually, it can be easily seen that any linear dependence \(W_{1}\hat{\phi}_{1}(x)\) give rise to the task which, by definition, is at least invariant to the corresponding representation space \(\hat{\phi}_{1}(x)\).
Given the above specifications, our optimization objective in Eq. (2) is simplified as follows:
\[\arg\min_{W_{1}}\mathcal{L}(\sigma(\hat{W}_{2}(W_{1})\phi_{2}(X_{te})),\tau_{W _{1}}(X_{te})), \tag{4}\]
where \(\hat{W}_{2}(W_{1})\) denotes the weights of the linear model \(f_{2}\) trained on the \((X_{tr},\tau_{W_{1}}(X_{tr}))\), which implicitly depend on the parameters \(W_{1}\). We use the cross-entropy loss function \(\mathcal{L}\), which is a widely used loss function for classification problems. The resulting optimization problem is continuous with respect to \(W_{1}\), which allows us to leverage efficient gradient optimization techniques. Although it involves propagating gradients through the inner optimization process, we discuss how to efficiently solve it in the subsequent section.
### Test error optimization
At each iteration \(k\), we randomly sample disjoint subsets \((X_{tr},X_{te})\sim D\) to prevent overfitting to the particular train-test split. We label these splits using the current task \(\tau_{W_{1}^{k}}(\cdot)\) with parameters \(W_{1}^{k}\). Before computing the test risk defined in Eq. (4), we need to solve the inner optimization problem on \((X_{tr},\tau_{W_{1}^{k}}(X_{tr}))\), specifically:
\[\hat{W}_{2}(W_{1}^{k})=\arg\min_{W_{2}}\mathcal{L}(\sigma(W_{2}\phi_{2}(X_{tr}) ),\tau_{W_{1}^{k}}(X_{tr})). \tag{5}\]
It can be easily seen that the above optimization problem is the well-studied multiclass logistic regression, which is convex with respect to \(W_{2}\) and easy to solve. To update parameters \(W_{1}^{k}\), we need to compute the total derivative of Eq. (4) with respect to \(W_{1}\) which includes the Jacobian \(\frac{\partial W_{2}}{\partial W_{1}^{k}}\). Different approaches [23; 24; 25; 26] can be utilized to compute the required Jacobian and propagate gradients through the above optimization process. For simplicity, we run gradient descent for the fixed number of iterations \(m\) to solve the inner optimization problem and obtain \(\hat{W}_{2}^{m}(W_{1}^{k})\). Afterwards, we employ MAML [27] to compute \(\frac{\partial\hat{W}_{2}^{m}}{\partial W_{1}^{k}}\) by unrolling the computation graph of the inner optimization's gradient updates. The remaining terms of the total derivative are available out-of-the-box using preferred automatic differentiation (AD) toolbox. This results in the efficient procedure which can be effortlessly implemented in existing AD frameworks [28; 29].
**Regularization.** The task encoder \(\tau\) can synthesize degenerate tasks, _i.e._, assign all samples to a single class. Although such tasks are invariant to any representation space, they are irrelevant. To avoid such trivial solutions, we utilize entropy regularization to regularize the outputs of the task encoder averaged over the set \(X=X_{tr}\cup X_{te}\), specifically
\[\mathcal{R}(\overline{\tau})=-\sum_{k=1}^{K}\overline{\tau}_{k}\log\overline{ \tau}_{k}, \tag{6}\]
where \(\overline{\tau}=\frac{1}{|X|}\sum_{x\in X}\tau_{\theta}(x)\in\Delta^{K-1}\) is the empirical label distribution of the task \(\tau\). This leads us to the final optimization objective, which is:
\[\arg\min_{W_{1}}\mathcal{L}(\sigma(\hat{W}_{2}(W_{1})\phi_{2}(X_{te})),\tau_{W _{1}}(X_{te}))-\eta\mathcal{R}(\overline{\tau}(W_{1})), \tag{7}\]
where \(\eta\) is the regularization parameter. This regularization corresponds to entropy regularization which has been widely used in previous works [13; 30; 31]. The pseudocode of the algorithm is shown in Algorithm 1.
```
1:Dataset \(\mathcal{D}\), number of classes \(K\), number of iterations \(T\), representation spaces \(\phi_{1,2}(\cdot)\), task encoder \(\tau_{W_{1}}(\cdot)\), regularization parameter \(\eta\), step size \(\alpha\)
2:Randomly initialize \(K\) orthonormal prototypes: \(W_{1}^{1}=\) ortho_rand(K)
3:for\(k=1\) to \(T\)do
4: Sample disjoint train and test splits: \((X_{tr},X_{te})\sim\mathcal{D}\)
5: Generate task: \(\tau_{tr},\tau_{te}\leftarrow\tau_{W_{1}^{k}}(X_{tr}),\tau_{W_{1}^{k}}(X_{te})\)
6: Fit linear classifier on \(X_{tr}\): \(\hat{W}_{2}(W_{1}^{k})=\arg\min_{W_{2}}\mathcal{L}(\sigma(W_{2}\phi_{2}(X_{tr} )),\tau_{tr})\)
7: Evaluate task invariance on \(X_{te}\): \(\mathcal{L}_{k}(W_{1}^{k})=\mathcal{L}(\sigma(\hat{W}_{2}(W_{1}^{k})\phi_{2}(X_ {te})),\tau_{te})-\eta\mathcal{R}(\overline{\tau})\)
8: Update task parameters: \(W_{1}^{k+1}\gets W_{1}^{k}-\alpha\nabla_{W_{1}^{k}}\mathcal{L}_{k}(W_{1}^ {k})\)
9:endfor
```
**Algorithm 1** HUME: A simple framework for finding human labeled tasks
## 3 Experiments
### Experimental setup
**Datasets and evaluation metrics.** We evaluate the performance of HUME on three commonly used clustering benchmarks, including the STL-10 [32], CIFAR-10 and CIFAR-100-20 [33] datasets. The CIFAR-100-20 dataset consists of superclasses of the original CIFAR-100 classes. In addition, we also compare HUME to large-scale unsupervised baselines on the fine-grained ImageNet-1000 dataset [34]. We compare our method with the baselines using two conventional metrics, namely clustering accuracy (ACC) and adjusted rand index (ARI). Hungarian algorithm [35] is used to match the found labeling to the ground truth labeling for computing clustering accuracy. We interchangeably use terms generalization error and cross validation accuracy when presenting the results.
**Instantiation of HUME.** For the representation space \(\phi_{1}(\cdot)\), we use MOCOv2 [4] pretrained on the train split of the corresponding dataset. We experiment with the SimCLR [1] as a self-supervised method in Appendix C. For the representation space \(\phi_{2}(\cdot)\), we consider three different large pretrained models: _(i)_ BiT-M-R50x1 [7] pretrained on ImageNet-21k [36], _(ii)_ CLIP ViT-L/14 pretrained on WebImageText [6], and _(iii)_ DINOv2 ViT-g/14 pretrained on LVD-142M [20].
**Baselines.** Since HUME trains linear classifiers on top of pretrained self-supervised representations, supervised linear probe on top of the same self-supervised pretrained representations is a natural baseline to evaluate how well the proposed framework can match the performance of a supervised model. Thus, we train a linear model using ground truth labelings on the train split and report the results on the test split of the corresponding dataset. For the unsupervised evaluation, we consider two state-of-the-art deep clustering methods, namely SCAN [13] and SPICE [14]. Both methods can be seen as three stage methods. First stage employs self-supervised methods to obtain good representations. We consistently use the ResNet-18 backbone pretrained with MOCOv2 [4] for all baselines as well as HUME. During the second stage these methods perform clustering on top of the frozen representations and produce reliable pseudo-labels for the third stage. Finally, the third stage involves updating the entire network using generated pseudo-labels. Thus, pseudo-labels are produced using a clustering algorithm from the second step and third stage is compatible with applying any semi-supervised method (SSL) [37; 38] on the set of reliable samples. Instead of optimizing for performance of different SSL methods which is out-of-scope of this work, we compare clustering performance of different methods and report the accuracy of generated pseudo-labels which is a crucial component that enables SSL methods to be effectively applied. Additionally, we utilize the recent state-of-the-art SSL method FreeMatch [39] to study the performance of FreeMatch when applied to HUME's reliable samples. As additional unsupervised baselines, we include results of K-means clustering [15] on top of the corresponding representations and K-means clustering on top of concatenated embeddings from both representation spaces used by HUME. For stability, all K-means results are averaged over 100 runs for each experiment. On the ImageNet-1000 dataset, we compare HUME to the recent state-of-the-art deep clustering methods on this benchmark. Namely, in addition to SCAN, we also consider two single-stage methods, TWIST [40] and Self-classifier [41]. TWIST is trained from scratch by enforcing consistency between the class distributions produced by a siamese network given two augmented views of an image. Self-classifier is trained in the similar fashion as TWIST, but differs in a way of avoiding degenerate solutions, _i.e._, assigning all samples to a single class. HUME can be trained in inductive and transductive settings: _inductive_ corresponds to training on the train split and evaluating on the held-out test split, while _transductive_ corresponds to training on both train and test splits. Note that even in transductive setting evaluation is performed on the test split of the corresponding dataset to be comparable to the performance in the inductive setting. We report transductive and inductive performance of HUME when comparing it to the performance of the supervised classifier. For consistency with the prior work [13], we evaluate clustering baselines in the inductive setting.
**Implementation details.** For each experiment we independently run HUME with \(100\) different random seeds and obtain \(100\) different labelings. To compute the labeling agreement for the evaluation, we simply use the Hungarian algorithm to match all found labelings to the labeling with the highest cross-validation accuracy (HUME's objective). Finally, we aggregate obtained labelings using majority vote, _i.e._, the sample has class \(i\) if the majority of the found labelings predicts class \(i\). We show experiments with different aggregation strategies in Section 3.2. We set the regularization parameter \(\eta\) to \(10\) in all experiments. We show robustness to this hyperparameter in Appendix B. We provide other implementation details in Appendix A. Code is publicly available at [https://github.com/mlbio-epfl/hume](https://github.com/mlbio-epfl/hume).
### Results
**Comparison to supervised baseline.** We compare HUME to a supervised linear classifier by utilizing ResNet-18 MOCOv2 pretrained representations [4] for both models. In particular, for a supervised classifier we add a linear layer on top of pretrained representations which is standard evaluation strategy of self-supervised methods. As a regularization representation space in HUME, we utilize BiT [7], CLIP ViT [6] and DINOv2 ViT [20]. The results are shown in Table 1. Remarkably, without using any supervision, on the STL-10 dataset HUME consistently achieves better performance than the supervised linear model in the transductive setting, and using CLIP and DINO in the inductive setting. Specifically, using the strongest DINO model, HUME outperforms the supervised linear model by \(5\%\) on the STL-10 dataset in terms of accuracy and by \(11\%\) in terms of ARI. On the CIFAR-10 dataset, HUME achieves performance comparable to the linear classifier. On the hardest CIFAR-100-20 there is still an expected gap between the performance of supervised and unsupervised methods. When comparing performance of HUME in inductive and transductive setting, the results show that utilizing more data in the transductive setting consistently outperforms the corresponding method in inductive setting by \(1-3\%\) in terms of accuracy. Finally, when comparing performance of different pretrained models, the results show that employing larger pretrained models results in better performance. For example, on the CIFAR-100-20 dataset HUME DINO achieves \(16\%\) relative improvement in accuracy over HUME BiT in both inductive and transductive settings. Thus,these results strongly indicate that HUME framework can achieve even better performance by taking advantage of unceasing progress in the development of large pretrained models.
**Comparison to unsupervised baselines.** We next compare performance of HUME to the state-of-the-art deep clustering methods (Table 2). We use DINOv2 as the second representation space but the results for other models are available in Table 1. Results show that HUME consistently outperforms all baseline by a large margin. On the STL-10 and CIFAR-10 datasets, HUME achieves \(5\%\) improvement in accuracy over the best deep clustering baseline and \(11\%\) and \(10\%\) in ARI, respectively. On the CIFAR-100-20 dataset, HUME achieves remarkable improvement of \(19\%\) in accuracy and \(18\%\) in ARI. It is worth noting that considered baselines utilize nonlinear models (multilayer perceptrons) on top of the pretrained representations, while HUME employs solely a linear model. When compared to the K-means clustering baselines on top of concatenated features from DINO and MOCO, HUME achieves 18%, 9% and 8% relative improvement in accuracy on the STL-10, CIFAR-10 and CIFAR-100-20 datasets, respectively. These results demonstrate that performance gains come from HUME's objective rather than from utilizing stronger representation spaces. Overall, our results show that the proposed framework effectively addresses the challenges of unsupervised learning and outperforms other baselines by a large margin.
**Large-scale ImageNet-1000 benchmark.** We next study HUME's performance on the ImageNet-1000 benchmark and compare it to the state-of-the-art deep clustering methods on this large-scale benchmark. All methods use the ResNet-50 backbone. SCAN is trained using MOCOv2 self-supervised representations and both TWIST and Self-classifier are trained from scratch as single-stage methods. HUME utilizes the same MOCOv2 self-supervised representations and DINOv2 as the second representation space. The results in Table 3 show that HUME achieves 24% relative improvement in accuracy and 27% relative improvement in ARI over considered baselines, thus confirming the scalability of HUME to challenging fine-grained benchmarks.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**STL-10**} & \multicolumn{2}{c}{**CIFAR-10**} & \multicolumn{2}{c}{**CIFAR-100-20**} \\
**Method** & **ACC** & **ARI** & **ACC** & **ARI** & **ACC** & **ARI** \\ \hline
**MOCO Linear** & 88.9 & 77.7 & **89.5** & 79.0 & **72.5** & **52.6** \\ \hline
**HUME Bit ind** & 87.5 & 76.2 & 85.9 & 73.8 & 47.8 & 33.5 \\
**HUME CLIP ind** & 90.2 & 80.2 & 88.2 & 77.1 & 48.5 & 34.1 \\
**HUME DINO ind** & 90.8 & 81.2 & 88.4 & 77.6 & 55.5 & 37.7 \\ \hline
**HUME Bit trans** & 90.3 & 80.5 & 86.6 & 75.0 & 48.8 & 34.5 \\
**HUME CLIP trans** & 92.2 & 84.1 & 88.9 & 78.3 & 50.1 & 34.8 \\
**HUME DINO trans** & **93.2** & **86.0** & **89.2** & **79.2** & **56.7** & **39.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of HUME to a supervised linear classifier in inductive (ind) and transductive (trans) settings using MOCOv2 self-supervised representations pretrained on the corresponding dataset and three different large pretrained models.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{**STL-10**} & \multicolumn{2}{c}{**CIFAR-10**} & \multicolumn{2}{c}{**CIFAR-100-20**} \\
**Method** & **ACC** & **ARI** & **ACC** & **ARI** & **ACC** & **ARI** \\ \hline
**MOCO + K-means** & \(67.7\) & \(54.1\) & \(66.9\) & \(51.8\) & \(37.5\) & \(20.9\) \\
**DINO + K-means** & \(60.1\) & \(35.4\) & \(75.5\) & \(67.6\) & \(47.2\) & \(33.9\) \\
**DINO + MOCO + K-means** & \(77.1\) & \(70.0\) & \(81.4\) & \(77.1\) & \(51.3\) & \(36.1\) \\
**SCAN** & \(77.8\) & 61.3 & 83.3 & 70.5 & 45.4 & 29.7 \\
**SPICE** & \(86.2\) & \(73.2\) & \(84.5\) & \(70.9\) & \(46.8\) & \(32.1\) \\
**HUME** & \(\mathbf{90.8}\) & \(\mathbf{81.2}\) & \(\mathbf{88.4}\) & \(\mathbf{77.6}\) & \(\mathbf{55.5}\) & \(\mathbf{37.7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison to unsupervised baselines. All methods use ResNet-18 MOCOv2 self-supervised representations pretrained on the target dataset. We use DINOv2 as a large pretrained model.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Method** & **ACC** & **ARI** \\ \hline
**SCAN** & 39.7 & 27.9 \\
**TWIST** & 40.6 & 30.0 \\
**Self-classifier** & 41.1 & 29.5 \\
**HUME** & \(\mathbf{51.1}\) & \(\mathbf{38.1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of HUME on the large-scale ImageNet-1000 dataset and comparison to unsupervised baselines. All methods use the ResNet50 backbone. HUME uses DINOv2 large pretrained model and ResNet-50 MOCOv2 self-supervised representation.
**Ablation study on aggregation strategy.** HUME achieves a strikingly high correlation between its objective function and ground truth labeling (Figure 1). However, due to the internal stochasticity, the found labeling with the lowest generalization error does not need to always correspond to the highest accuracy. Thus, to stabilize the predictions we obtain final labeling by aggregating found labelings from independent runs. We simply use the majority vote of all tasks in all our experiments. Here, we investigate the effect of using different aggregation strategies on the performance. Figure 3 (a) shows the results on the CIFAR-10 dataset and the corresponding plots for the STL-10 and CIFAR-100-20 datasets are shown in Appendix F. Given the high correlation of our objective and human labeling, we aggregate top-\(n\) of labelings w.r.t. generalization error and compute the corresponding point on the plot. Thus, the leftmost strategy (at \(x=1\)) corresponds to the one labeling that has the lowest generalization error, while the rightmost data point (at \(x=100\)) corresponds to aggregation over all generated labelings, _i.e._, the strategy we adopt in all experiments. By aggregation over multiple labelings the algorithm produces more stable predictions. Expectedly, given the high correlation between HUME's objective and human labeling, we can achieve even better performance by aggregating only top-\(n\) predictions compared to our current strategy that considers all tasks; however, we do not optimize for this in our experiments. Finally, it can be seen that utilizing larger pretrained models eliminates the need of aggregation procedure, since they lead to stronger and robust performance.
**Reliable samples for semi-supervised learning (SSL).** We next aim to answer whether HUME can be used to reliably generate labeled examples for SSL methods. By generating a few reliable samples per class (pseudo-labels), an unsupervised learning problem can be transformed into an SSL problem [14]. Using these reliable samples as initial labels, any SSL method can be applied. HUME can produce such reliable samples in a simple way. Specifically, we say that a sample is reliable if _(i)_ the majority of the found labelings assigns it to the same class, and _(ii)_ the majority of the sample neighbors have the same label. We provide the detailed description of the algorithm in Appendix G. We evaluate the accuracy of the generated reliable samples on the CIFAR-10 dataset and show results in Figure 3 (b). In the standard SSL evaluation setting that uses \(4\) labeled examples per class [39, 42, 43], HUME produces samples with perfect accuracy. Moreover, even with \(15\) labeled examples per class, reliable samples generated by HUME still have perfect accuracy. Remarkably, in other frequently evaluated SSL settings on the CIFAR-10 dataset [39, 42, 43] with \(25\) and \(400\) samples per class, accuracy of reliable samples produced by HUME is near perfect (\(99.6\%\) and \(99.7\%\) respectively). Results on the STL-10 and CIFAR-100-20 datasets and additional statistics of the reliable samples are provided in the Appendix E.
We next utilize the recent state-of-the-art SSL method FreeMatch [39] to compare the results of running FreeMatch with reliable samples produced by HUME to running FreeMatch with ground truth labeling. The results in Table 4 show that in the extremely low data regime FreeMatch with reliable samples produced by HUME outperforms FreeMatch with ground truth labels. Indeed, FreeMatch with ground truth labels utilizes samples which are sampled uniformly at random, while
Figure 3: **(a) Different aggregation strategies on the CIFAR-10 dataset. We use MOCOv2 and different large pretrained models to instantiate HUME. (b) Accuracy of the reliable samples on the CIFAR-10 dataset. We use MOCOv2 and DINOv2 to instantiate HUME. The well-established setting for testing SSL methods on the CIFAR-10 dataset uses \(4\), \(25\) and \(400\) reliable samples per class (depicted with lines in red color).**HUME's reliable samples by definition are such samples whose most neighbours belong to the same class predicted by HUME. Consequently, FreeMatch which is based on adaptive thresholding for pseudo-labeling, benefits from utilizing labeled samples for which it can confidently set the threshold, especially in a low data regime. For instance, FreeMatch with reliable samples achieves \(10\%\) relative improvement in accuracy over FreeMatch with ground truth labeling on the CIFAR-10 dataset with one sample per class. Comparing FreeMatch with \(4\) reliable samples produced by HUME with an original HUME method, we observe improvement of \(8\%\) on the CIFAR-10 dataset, demonstrating that HUME's results can be further improved by applying SSL with HUME's reliable samples. Overall, our results strongly demonstrate that HUME is highly compatible with SSL methods and can be used to produce reliable labeling for SSL methods.
## 4 Related work
**Self-supervised learning.** Self-supervised methods [44; 45; 46; 47; 48; 49; 5; 19] aim to define a pretext task which leads to learning representations that are useful for downstream tasks. Recently, contrastive approaches [1; 2; 3; 4; 6; 20] have seen a significant interest in the community. These approaches learn representations by contrasting positive pairs against negative pairs. Another line of work relies on incorporating beneficial inductive biases such as image rotation [49], solving Jigsaw puzzles [46] or by introducing sequential information that comes from video [45]. Regardless of a learning approach, a linear probe, _i.e._, training a linear classifier on top of the frozen representations using the groundtruth labeling, is a frequently used evaluation protocol for self-supervised methods. In our work, we turn this evaluation protocol into an optimization objective with the goal to recover the human labeling in a completely unsupervised manner. In Appendix C, we show that stronger representations lead to better unsupervised performance mirroring the linear probe evaluation. Given that HUME framework is model-agnostic, it can constantly deliver better unsupervised performance by employing continuous advancements of self-supervised approaches. Furthermore, HUME can be used to evaluate performance of self-supervised methods in an unsupervised manner.
**Transfer learning.** Transfer learning is a machine learning paradigm which utilizes large scale pretraining of deep neural networks to transfer knowledge to low-resource downstream problems [50; 7]. This paradigm has been successfully applied in a wide range of applications including but not limited to few-shot learning [10; 9], domain adaptation [51; 52] and domain generalization [53; 54]. Recently, foundation models [55; 56; 11; 57; 12] trained on a vast amount of data achieved breakthroughs in different fields. The well-established pipeline of transfer learning is to fine-tune weights of a linear classifier on top of the frozen representations in a supervised manner [50]. In our framework, we leverage strong linear transferability of these representations to act as a regularizer in guiding the search process of human labeled tasks. Thus, it can be also seen as performing an unsupervised transfer learning procedure. While language-image foundation models such as CLIP [6] require human instruction set to solve a new task, HUME provides a solution to bypass this requirement.
**Clustering.** Clustering is a long studied machine learning problem [58]. Recently, deep clustering methods [59; 13; 60] have shown benefits over traditional approaches. The typical approach to clustering problem is to encourage samples to have the same class assignment as its neighbours in a representation space [13; 60; 61]. Other recent methods rely on self-labeling, _i.e._, gradually fitting the neural network to its own most confident predictions [14; 16; 62; 63]. Alternative approaches [64; 59] train an embedding space and class prototypes to further assign samples to the closest prototype in the embedding space. In contrast, our framework redefines the way to approach a clustering problem.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{**STL-10**} & \multicolumn{3}{c}{**CIFAR-10**} \\
**Method** & **4** & **100** & **1** & **4** & **25** & **400** \\ \hline
**FreeMatch w/ reliable samples** & \(\mathbf{81.3\pm 4.3}\) & \(91.3\pm 0.3\) & \(\mathbf{91.2\pm 5.3}\) & \(\mathbf{95.1\pm 0.1}\) & \(93.3\pm 2.5\) & \(94.3\pm 0.4\) \\
**FreeMatch w/ ground truth labels** & \(77.8\pm 1.1\) & \(\mathbf{94.0\pm 0.1}\) & \(83.3\pm 9.6\) & \(94.8\pm 0.3\) & \(\mathbf{95.1\pm 0.0}\) & \(\mathbf{95.7\pm 0.1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of accuracy of FreeMatch trained using reliable samples produced by HUME and FreeMatch trained using ground truth labeling on the STL-10 and on the CIFAR-10 dataset. We apply FreeMatch using \(4\) and \(100\) samples per class on the STL-10 dataset and \(1\), \(4\), \(25\) and \(400\) samples per class on the CIFAR-10 dataset. Each experiment is run \(3\) times and the results are averaged.
Without explicitly relying on notions of semantic similarity, we seek to find the most generalizable labeling regardless of the representation space in the space of all possible labelings.
**Generalization.** One of the conventional protocols to assess an ability of the model to generalize is to employ held-out data after model training. In addition, different measures of generalization [65, 66, 67, 68, 69, 70] have been developed to study and evaluate model generalization from different perspectives. Although, traditionally a generalization error is used as a metric, recent work [17] employs test-time agreement of two deep neural networks to find labelings on which the corresponding neural network can generalize well. Often, these labelings reflect inductive biases of the learning algorithm and can be used to analyze deep neural networks from a data-driven perspective.
**Meta-optimization.** The proposed optimization procedure requires optimization through the inner optimization process. This type of optimization problems also naturally arises in a gradient-based meta-optimization [27, 71, 72]. Other works [23, 25, 24] tackle the similar problem and study how to embed convex differentiable optimization problems as layers in deep neural networks. Although meta-optimization methods are, in general, computationally heavy and memory intensive, the proposed optimization procedure only requires propagating gradients through a single linear layer. This makes HUME a simple and efficient method to find human labelings. In addition, the inner optimization problem (Eq. 5) is convex allowing for the utilization of the specialized methods such as L-BFGS [73, 74], making our framework easily extendable to further boost the efficiency.
**Semi-supervised learning.** Semi-supervised learning (SSL) assumes that along with an abundance of unlabeled data, a small amount of labeled examples is given for each class. Consequently, SSL methods [39, 45, 37, 75, 76, 38] effectively employ available labeled data to increase the model's generalization performance on entire dataset. As shown in Figure 3 (b) and in Table 4, HUME can be used to construct reliable samples and convert the initial unsupervised problem to the SSL problem. Thus, the proposed framework can also be used to eliminate the need of even a few costly human annotations, bridging the gap between supervised and semi-supervised learning.
## 5 Limitations
The instantiation of HUME employs advances from different fields of study to solve the proposed optimization problem and consequently, it inherits some limitations which we discuss below.
**Meta-optimization.** For simplicity, HUME leverages MAML [27] to solve the proposed optimization problem (Eq. 7). However, its non-convex nature prevents convergence to the labelings which attain global optima. Though, it can be seen that the aforementioned objective is a special case of well-studied bilevel optimization problems with a convex inner part. Thus, HUME can greatly benefit from utilization of specialized optimization methods [77, 78, 79, 80] to further boost the performance.
**Number of classes and empirical label distribution.** HUME assumes the number of classes and empirical label distribution is known a priori - a common assumption in existing unsupervised learning approaches [13, 14, 59]. Although, in this work, we also follow these assumptions, the proposed framework is compatible with methods which can estimate the required quantities [81, 82].
## 6 Conclusion
We introduced HUME, a simple framework for discovering human labelings that sheds a new light on solving the unsupervised learning problem. HUME is based on the observation that a linear model can separate classes defined by human labelings regardless of the representation space used to encode the data. We utilize this observation to search for linearly generalizable data labeling in representation spaces of two pretrained models. Our approach improves considerably over existing unsupervised baselines on all of the considered benchmarks. Additionally, it shows superior performance to a supervised baseline on the STL-10 dataset and competitive on the CIFAR-10 dataset. HUME could also be used to generate labelings for semi-supervised methods and to evaluate quality of self-supervised representations. Our model-agnostic framework will greatly benefit from new more powerful self-supervised and large pretrained models that will be developed in the future.
## References
* [1] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _International Conference on Machine Learning_, 2020.
* [2] Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. In _Advances in Neural Information Processing Systems_, 2020.
* [3] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In _Computer Vision and Pattern Recognition_, 2020.
* [4] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. _arXiv preprint arXiv:2003.04297_, 2020.
* [5] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. In _Computer Vision and Pattern Recognition_, 2022.
* [6] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, et al. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_, 2021.
* [7] Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big Transfer (BiT): General visual representation learning. In _European Conference on Computer Vision_, 2020.
* [8] Jeff Z. HaoChen, Colin Wei, Ananya Kumar, and Tengyu Ma. Beyond separability: Analyzing the linear transferability of contrastive representations to related subpopulations. In _Advances in Neural Information Processing Systems_, 2022.
* [9] Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classification. In _International Conference on Learning Representations_, 2020.
* [10] Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In _International Conference on Learning Representations_, 2019.
* [11] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, et al. Flamingo: a visual language model for few-shot learning. In _Advances in Neural Information Processing Systems_, 2022.
* [12] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. _arXiv preprint arXiv:2204.06125_, 2022.
* [13] Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. SCAN: Learning to classify images without labels. In _European Conference on Computer Vision_, 2020.
* [14] Chuang Niu, Hongming Shan, and Ge Wang. SPICE: Semantic pseudo-labeling for image clustering. _IEEE Transactions on Image Processing_, 31:7264-7278, 2022.
* [15] J. B. MacQueen. Some methods for classification and analysis of multivariate observations. In _Berkeley Symposium on Mathematical Statistics and Probability_, 1967.
* [16] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In _European Conference on Computer Vision_, 2018.
* [17] Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, and Amir Zamir. Task discovery: Finding the tasks that neural networks generalize on. In _Advances in Neural Information Processing Systems_, 2022.
* [18] Fereshte Khani and Percy Liang. Removing spurious features can hurt accuracy and affect groups disproportionately. In _ACM Conference on Fairness, Accountability, and Transparency_, 2021.
- a new approach to self-supervised learning. In _Advances in Neural Information Processing Systems_, 2020.
* Oquab et al. [2023] Maxime Oquab, Timothee Darcet, Theo Moutakanni, Huy Vo, Marc Szafraniec, et al. DINOv2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_, 2023.
* Martins and Astudillo [2016] Andre Martins and Ramon Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label classification. In _International Conference on Machine Learning_, 2016.
* Snell et al. [2017] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In _Advances in Neural Information Processing Systems_, 2017.
* Agrawal et al. [2019] Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and J. Zico Kolter. Differentiable convex optimization layers. In _Advances in Neural Information Processing Systems_, 2019.
* Amos and Kolter [2017] Brandon Amos and J. Zico Kolter. OptNet: Differentiable optimization as a layer in neural networks. In _International Conference on Machine Learning_, 2017.
* Agrawal et al. [2019] Akshay Agrawal, Shane Barratt, Stephen Boyd, Enzo Busseti, and Walaa M. Moursi. Differentiating through a cone program. _Journal of Applied and Numerical Optimization_, 1(2):107-115, 2019.
* Barratt [2019] Shane Barratt. On the differentiability of the solution to convex optimization problems. _arXiv preprint arXiv:1804.05098_, 2019.
* Finn et al. [2017] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In _International Conference on Machine Learning_, 2017.
* Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, et al. PyTorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems_, 2019.
* Abadi et al. [2016] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. _arXiv preprint arXiv:1603.04467_, 2016.
* Arazo et al. [2020] Eric Arazo, Diego Ortego, Paul Albert, Noel E O'Connor, and Kevin McGuinness. Pseudolabeling and confirmation bias in deep semi-supervised learning. In _International Joint Conference on Neural Networks_, 2020.
* Cao et al. [2022] Kaidi Cao, Maria Brbic, and Jure Leskovec. Open-world semi-supervised learning. In _International Conference on Learning Representations_, 2022.
* Coates et al. [2011] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In _International Conference on Artificial Intelligence and Statistics_, 2011.
* Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. _Technical report, University of Toronto_, 2009.
* Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In _Conference on Computer Vision and Pattern Recognition_, 2009.
* Kuhn [1955] Harold W Kuhn. The Hungarian method for the assignment problem. _Naval research logistics quarterly_, 2(1-2):83-97, 1955.
* Ridnik et al. [2021] Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik. ImageNet-21K pretraining for the masses. In _Advances in Neural Information Processing Systems Track on Datasets and Benchmarks_, 2021.
* Sohn et al. [2020] Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, et al. FixMatch: Simplifying semi-supervised learning with consistency and confidence. In _Advances in Neural Information Processing Systems_, 2020.
* [38] David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, et al. ReMixMatch: Semi-supervised learning with distribution matching and augmentation anchoring. In _International Conference on Learning Representations_, 2020.
* [39] Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, et al. FreeMatch: Self-adaptive thresholding for semi-supervised learning. In _International Conference on Learning Representations_, 2023.
* [40] Wang Feng, Kong Tao, Zhang Rufeng, and Liu Huaping. Self-supervised learning by estimating twin class distribution. _IEEE Transactions on Image Processing_, 2023.
* [41] Elad Amrani, Leonid Karlinsky, and Alex Bronstein. Self-supervised classification network. In _European Conference on Computer Vision_, 2022.
* [42] Jianfeng Wang, Thomas Lukasiewicz, Daniela Massiceti, Xiaolin Hu, Vladimir Pavlovic, and Alexandros Neophytou. NP-Match: When neural processes meet semi-supervised learning. In _International Conference on Machine Learning_, 2022.
* [43] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, et al. FlexMatch: Boosting semi-supervised learning with curriculum pseudo labeling. In _Advances in Neural Information Processing Systems_, 2021.
* [44] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In _International Conference on Computer Vision_, 2015.
* [45] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In _International Conference on Computer Vision_, 2015.
* [46] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving Jigsaw puzzles. In _European Conference on Computer Vision_, 2016.
* [47] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In _European Conference on Computer Vision_, 2016.
* [48] Deepak Pathak, Ross Girshick, Piotr Dollar, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In _Computer Vision and Pattern Recognition_, 2017.
* [49] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In _International Conference on Learning Representations_, 2018.
* [50] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, et al. A comprehensive survey on transfer learning. _Proceedings of the IEEE_, 109(1):43-76, 2020.
* [51] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, and Kate Saenko. Universal domain adaptation through self supervision. In _Advances in Neural Information Processing Systems_, 2020.
* [52] Kuniaki Saito and Kate Saenko. Ovanet: One-vs-all network for universal domain adaptation. In _International Conference on Computer Vision_, 2021.
* [53] Gilles Blanchard, Aniket Anand Deshmukh, Urun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. _The Journal of Machine Learning Research_, 22(1):46-100, 2021.
* [54] Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis, Kate Saenko, and Bryan A. Plummer. ERM++: An improved baseline for domain generalization. _arXiv preprint arXiv:2304.01973_, 2023.
* [55] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, et al. On the opportunities and risks of foundation models. _arXiv preprint arXiv:2108.07258_, 2022.
* [56] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, et al. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_, 2020.
* [57] OpenAI. GPT-4 technical report. _arXiv preprint arXiv:2303.08774_, 2023.
* [58] Anil Kumar Jain, M Narasimha Murty, and Patrick Flynn. Data clustering: a review. _ACM Computing Surveys_, 31(3):264-323, 1999.
* [59] Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In _International Conference on Machine Learning_, 2016.
* [60] Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. Deep adaptive image clustering. In _International Conference on Computer Vision_, 2017.
* [61] Jianwei Yang, Devi Parikh, and Dhruv Batra. Joint unsupervised learning of deep representations and image clusters. In _Conference on Computer Vision and Pattern Recognition_, 2016.
* [62] Mathilde Caron, Piotr Bojanowski, Julien Mairal, and Armand Joulin. Unsupervised pre-training of image features on non-curated data. In _International Conference on Computer Vision_, 2019.
* [63] YM Asano, C Rupprecht, and A Vedaldi. Self-labelling via simultaneous clustering and representation learning. In _International Conference on Learning Representations_, 2020.
* [64] Philip Haeusser, Johannes Plapp, Vladimir Golkov, Elie Aljalbout, and Daniel Cremers. Associative deep clustering: Training a classification network with no labels. In _German Conference on Pattern Recognition_, 2019.
* [65] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In _Conference on Learning Theory_, 2015.
* [66] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In _International Conference on Learning Representations_, 2017.
* [67] Tengyuan Liang, Tomaso Poggio, Alexander Rakhlin, and James Stokes. Fisher-Rao metric, geometry, and complexity of neural networks. In _International Conference on Artificial Intelligence and Statistics_, 2019.
* [68] Vaishnavh Nagarajan and J. Zico Kolter. Generalization in deep networks: The role of distance from initialization. In _NeurIPS 2017 Workshop on Deep Learning: Bridging Theory and Practice_, 2017.
* [69] Samuel L. Smith and Quoc V. Le. A Bayesian perspective on generalization and stochastic gradient descent. In _International Conference on Learning Representations_, 2018.
* [70] Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, and J Zico Kolter. Assessing generalization of SGD via disagreement. In _International Conference on Learning Representations_, 2022.
* [71] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. _arXiv preprint arXiv:1803.02999_, 2018.
* [72] Ondrej Bohdal, Yongxin Yang, and Timothy Hospedales. EvoGrad: Efficient gradient-based meta-learning and hyperparameter optimization. In _Advances in Neural Information Processing Systems_, 2021.
* [73] Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. _Mathematical Programming_, 45:503-528, 1989.
* [74] Stephen P Boyd and Lieven Vandenberghe. Convex optimization. _Cambridge University Press_, 2004.
* [75] Yi Xu, Jiandong Ding, Lu Zhang, and Shuigeng Zhou. DP-SSL: Towards robust semi-supervised learning with a few labeled samples. In _Advances in Neural Information Processing Systems_, 2021.
* [76] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. MixMatch: A holistic approach to semi-supervised learning. In _Advances in Neural Information Processing Systems_, 2019.
* [77] Aurelien Ouattara and Anil Aswani. Duality approach to bilevel programs with a convex lower level. In _American Control Conference_, 2018.
* [78] Jiri V Outrata. On the numerical solution of a class of Stackelberg problems. _Zeitschrift fur Operations Research_, 34:255-277, 1990.
* [79] Mihai Anitescu. On using the elastic mode in nonlinear programming approaches to mathematical programs with complementarity constraints. _SIAM Journal on Optimization_, 15(4):1203-1236, 2005.
* [80] Jane J Ye and DL Zhu. Optimality conditions for bilevel programming problems. _Optimization_, 33(1):9-27, 1995.
* [81] Kai Han, Andrea Vedaldi, and Andrew Zisserman. Learning to discover novel visual categories via deep transfer clustering. In _International Conference on Computer Vision_, 2019.
* [82] Yiqi Wang, Zhan Shi, Xifeng Guo, Xinwang Liu, En Zhu, and Jianping Yin. Deep embedding for determining the number of clusters. In _AAAI Conference on Artificial Intelligence_, 2018.
Implementation details
We utilize ResNet-18 backbone [1] as \(\phi_{1}(\cdot)\) pretrained on the train split of the corresponding dataset with MOCOv2 [2]. Features are obtained after _avgpool_ with dimension equal to \(512\). For the ImageNet-1k dataset, we utilize ResNet-50 backbone as \(\phi_{1}(\cdot)\) pretrained with MOCOv2. Here, features are obtained after _avgpool_ with dimension equal to \(2048\). We use the same backbone and pretraining strategy for baselines as well. To enforce orthogonality constraint on the weights of the task encoder we apply Pytorch parametrizations [3]. When precomputing representations we employ standard data preprocessing pipeline of the corresponding model and do not utilize any augmentations during HUME's training. In all experiments, we use the following hyperparameters: number of iterations \(T=1000\), Adam optimizer [4] with step size \(\alpha=0.001\) and temperature of the sparsemax activation function \(\gamma=0.1\). We anneal temperature and step size by \(10\) after \(100\) and \(200\) iterations. We set regularization parameter \(\eta\) to value \(10\) in all experiments and we show ablation for this hyperparameter in Appendix B. To solve inner optimization problem we run gradient descent for \(300\) iterations with step size equal to \(0.001\). At each iteration we sample without replacement \(10000\) examples from the dataset to construct subset \((X_{tr},X_{te}),|X_{tr}|=9000,|X_{te}|=1000\). Since STL-10 dataset has less overall number of samples, we use \(5000\) (\(|X_{tr}|=4500,|X_{te}|=500\)) in the inductive setting and \(8000\) (\(|X_{tr}|=7200,|X_{te}|=800\)) in the transductive setting. For the ImageNet-1000 dataset we use inner and outer step size equal to \(0.1\), number of inner steps equal to \(100\), sample 20000 examples with \(|X_{tr}|=14000,|X_{te}|=6000\) on each iteration. We do not anneal temperature and step size for the ImageNet-1000 dataset and other hyperparameters remain the same. To reduce the gradient variance, we average the final optimization objective (Eq. 7) over \(20\) random subsets on each iteration. To stabilize training in early iterations, we clip gradient norm to 1 before updating task encoder's parameters. We use \(N_{neigh}=500\) in Algorithm G1 to construct reliable samples for semi-supervised learning.
## Appendix B Robustness to a regularization parameter
HUME incorporates entropy regularization of the empirical label distribution in the final optimization objective (Eq. 7) to avoid trivial solutions, _i.e._, assigning all samples to a single class. To investigate the effect of the corresponding hyperparameter \(\eta\), we run HUME from 100 random initializations \(W_{1}\) for each \(\eta\in\{0,1,2,5,10\}\) on the CIFAR-10 dataset. Figure B1 shows results with different values of hyperaparameter \(\eta\). The results show that \(\eta\) is indeed a necessary component of HUME objective, _i.e._, setting \(\eta=0\) leads to degenerate labelings since assigning all samples to a single class is trivially invariant to any pair of representation spaces. Furthermore, the results show that HUME is robust to different positive values of the parameter \(\eta\). We set \(\eta=10\) in all experiments.
Ablation study on different self-supervised methods
In all experiments, we use MOCOv2 [2] self-supervised representations. Here, we evaluate HUME's performance with different self-supervised learning method. In particular, we utilize SimCLR [5] and [6] to obtain representation space \(\phi_{1}(\cdot)\). Table C1 shows the results in the inductive setting using DINO large pretrained model as the second representation space. The results show that HUME instantiated with MOCO consistently outperforms HUME instantiated with SimCLR on all of the datasets. This is expected result since MOCO representations are stronger also when assessed by a supervised linear probe. Alternatively, utilizing BYOL shows consistent improvements over MOCO representations. These results demonstrate that HUME can improve by employing stronger self-supervised representations. Interestingly, even with SimCLR representations HUME still outperforms unsupervised baselines pretrained with MOCOv2 in Table 2.
## Appendix D Correlation plots on the STL-10 and CIFAR-100-20 datasets.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{**STL-10**} & \multicolumn{2}{c}{**CIFAR-10**} & \multicolumn{2}{c}{**CIFAR-100-20**} \\
**Method** & **ACC** & **ARI** & **ACC** & **ARI** & **ACC** & **ARI** \\ \hline
**SimCLR Linear** & 85.3 & 71.2 & 87.1 & 74.4 & 70.4 & 49.7 \\
**MOCO Linear** & 88.9 & 77.7 & 89.5 & 79.0 & 72.5 & 52.6 \\
**BYOL Linear** & 92.1 & 83.6 & 90.7 & 81.0 & 77.2 & 59.3 \\ \hline
**HUME SimCLR** & 86.9 & 74.3 & 85.2 & 71.8 & 51.8 & 33.9 \\
**HUME MOCO** & 90.8 & 81.2 & 88.4 & 77.6 & 55.5 & 37.7 \\
**HUME BYOL** & 91.5 & 82.7 & 89.8 & 79.7 & 56.0 & 40.3 \\ \hline \hline \end{tabular}
\end{table}
Table C1: Comparison of different self-supervised representations. We use DINOv2 large pretrained model. Stronger self-supervised representations lead to better performance.
The HUME's objective is to search over the labelings of a dataset by minimizing a generalization error. To show the correlation between HUME's objective and the ground truth labeling, we plot correlation between generalization error of the labeling measured by cross-validation accuracy with respect to the found labeling and accuracy of the found labeling with respect to ground truth labeling. In addition to the results on the CIFAR-10 dataset presented in the main paper, Figure D1 shows the correlation plots on the STL-10 and CIFAR-100-20 datasets. The results demonstrate that HUME achieves the lowest generalization error for tasks that almost perfectly correspond to the ground truth labeling on the STL-10 dataset, allowing HUME to recover human labeling without external supervision. On the CIFAR-100-20 dataset even the supervised linear model on top of MOCOv2 self-supervised representations does not attain low generalization error (72.5% accuracy in Table 1). Consequently, HUME's performance also reduces, thus this additionally suggests that employing stronger representations will lead to better performance of HUME as also shown in Appendix C. Nevertheless, Figure D1 shows fairly-positive correlation (\(\rho=0.47,p=5.9\times 10^{-7}\)) between distance to ground truth human labeling and HUME's objective, thus confirming the applicability of HUME to more challenging setups when one of the representation spaces might be insufficiently strong.
## Appendix E Quality of the reliable samples produced by HUME
HUME can be used to produce reliable samples which can be further utilized with any semi-supervised learning (SSL) method. To measure the quality of reliable samples, we use two different statistics of the produced reliable samples: per class balance and per class accuracy. Per class balance measures number of samples for each ground truth class, _i.e._, \(\sum_{j\in R}[y_{j}=k]\), where \(R\) is the set of indices of produced reliable samples, \(y_{j}\) is the ground truth label of sample \(j\), \(k\) represents one of the ground truth classes number and \([\cdot]\) corresponds to Iverson bracket. Per class accuracy measures the average per class accuracy of the corresponding set of the reliable samples with respect to ground truth labeling. We follow standard protocol for the evaluation of SSL learning methods [7] and consider 4, \(100\) samples per class on the STL-10 dataset and \(1\), \(4\), \(25\), \(400\) samples per class on the CIFAR-10 dataset. We provide results averaged across all classes in Table E1 and the corresponding standard deviations across classes in Table E2. In addition to the provided statistics, Figure E1 presents accuracies of the reliable samples on the STL-10 and CIFAR-100-20 datasets for wider range of number of reliable samples per class. Overall, the results show that on the STL-10 and CIFAR-10 datasets HUME shows almost perfect balance and mean per class accuracy, _i.e._, up to \(100\) samples per class on STL-10 and up to \(400\) samples per class on CIFAR-10, thus demonstrating that HUME can produce reliable pseudo-labels for SSL methods.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**STL-10**} & \multicolumn{3}{c}{**CIFAR-10**} & \multicolumn{3}{c}{**CIFAR-100-20**} \\
**Quantity** & **4** & **100** & **1** & **4** & **25** & **400** & **1** & **10** & **50** & **100** \\ \hline
**Mean Per Class Balance** & \(4.0\) & \(100.0\) & \(1.0\) & \(4.0\) & \(25.0\) & \(400.0\) & \(0.6\) & \(6.3\) & \(26.3\) & \(52.6\) \\
**Mean Per Class Accuracy** & \(100.0\) & \(99.6\) & \(100.0\) & \(100.0\) & \(99.6\) & \(99.7\) & \(72.7\) & \(62.3\) & \(51.1\) & \(48.9\) \\ \hline \hline \end{tabular}
\end{table}
Table E1: Mean per class balance and mean per class accuracy for the reliable samples produced by HUME on the STL-10, CIFAR-10 and CIFAR-100 datasets. Mean is computed over the number of classes in the corresponding dataset.
Appendix F Ablation study on different aggregation strategies on the STL-10 and CIFAR-100-20 datasets
We additionally study the effect of the proposed aggregation strategy on the STL-10 and CIFAR-100-20 datasets in an inductive setting. We employ MOCOv2 self-supervised representations as representation space \(\phi_{1}(\cdot)\) and show the results for different large pretrained models as representation space \(\phi_{2}(\cdot)\). Figure F1 shows the results for the STL-10 and CIFAR-100-20 datasets, respectively. We observe the similar behaviour to the results obtained in the main paper on the CIFAR-10 dataset. Namely, the proposed aggregation strategy stabilizes the results and provides robust predictions. It is worth noting that even using top-5 labelings in the majority vote is enough to produce stable results. For weaker models such as BiT, aggregation strategy has more effect on the performance and the optimal strategy is to aggregate around top-10 tasks. This is expected given the high correlation between HUME's objective and accuracy on ground truth labels since this strategy gives robust performance and tasks are closer to human labeled tasks. Larger models such as DINO show high robustness to the aggregation strategy. It is important to emphasize that in experiments we always report average across all tasks and do not optimize for different aggregation strategies.
Figure F1: Different aggregation strategies on the **(a)** STL-10 dataset and **(b)** CIFAR-100-20 dataset. We use MOCOv2 self-supervised representations pretrained on the corresponding dataset and each line on the plot corresponds to the type of the large pretrained model.
Algorithm for constructing reliable samples
We showed that HUME can be utilized to construct a set of reliable labeled examples to transform an initial unsupervised learning problem to a semi-supervised problem. It is worth noting that standard semi-supervised setting requires a balanced labeled set, _i.e._, equal number of labeled samples for each class. For simplicity, we adapt the approach presented in SPICE [8] to produce a balanced set of reliable samples. Namely, we sort all samples per class by _(i)_ number of labellings in the majority vote which predict the same class, and _(ii)_ number of neighbours of the sample which have the same class. Thus, we consider a sample reliable if both quantities are high. Finally, given the sorted order we take the required number of samples to stand as a set of reliable samples. The proposed algorithm is outlined in Algorithm G1.
```
0: Dataset \(\mathcal{D}\), number of classes \(K\), number of samples per class \(N_{k}\), number of neighbours \(N_{neigh}\), self-supervised representation space \(\phi_{1}(\cdot)\), trained labelings \(\tau_{1},\ldots,\tau_{m}\)
1: Compute majority vote: \(\tau_{\text{MA}}(x)=\arg\max\limits_{k=1,\ldots,K}\sum_{i=1}^{m}\mathbbm{1} \left[\tau_{i}(x)=k\right]\)
2: Count number of agreed labelings: \(\mathcal{A}^{\tau}(x)=\sum_{i=1}^{m}\mathbbm{1}[\tau_{i}(x)=\tau_{\text{MA}}(x)]\)
3: Find nearest neighbours in representation space \(\phi_{1}\): \(\mathcal{N}(x)\gets N_{neigh}\) nearest neighbours for sample \(x\in\mathcal{D}\)
4: Count number of agreed nearest neighbours: \(\mathcal{A}^{\text{m}}(x)=\sum_{z\in\mathcal{N}(x)}\mathbbm{1}[\tau_{\text{MA }}(z)=\tau_{\text{MA}}(x)]\)
5: Initialize set of reliable samples: \(\mathcal{R}=\emptyset\)
6:for\(k=1\) to \(K\)do
7: Take per class samples: \(S_{k}=\{x\in\mathcal{D}|\tau_{\text{MA}}(x)=k\}\)
8: Sort \(S_{k}\) in descending order by lexicographic comparison of tuples \((\mathcal{A}^{\text{m}}(x),\mathcal{A}^{\tau}(x))\)
9: Take top-\(N_{k}\) samples from the sorted \(\hat{S}_{k}\) and update set of reliable samples: \(\mathcal{R}=\mathcal{R}\cup\text{top-}N_{k}(\hat{S}_{k})\)
10:endfor
```
**Algorithm G1** Reliable samples construction
## References
* [1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Conference on Computer Vision and Pattern Recognition_, 2016.
* [2] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. _arXiv preprint arXiv:2003.04297_, 2020.
* [3] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, et al. PyTorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems_, 2019.
* [4] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2017.
* [5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In _International Conference on Machine Learning_, 2020.
* a new approach to self-supervised learning. In _Advances in Neural Information Processing Systems_, 2020.
* [7] Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, et al. FreeMatch: Self-adaptive thresholding for semi-supervised learning. In _International Conference on Learning Representations_, 2023.
* [8] Chuang Niu, Hongming Shan, and Ge Wang. Spice: Semantic pseudo-labeling for image clustering. _IEEE Transactions on Image Processing_, 31:7264-7278, 2022. | ## Review
### Summary
The paper presents HUME, an innovative unsupervised learning method that infers human labels from image datasets by assuming linear separability across different representation spaces. This framework utilizes bi-level optimization, where an inner loop optimizes pseudo-labels and an outer loop minimizes the classification loss of a separate linear classifier. HUME demonstrates state-of-the-art performance on benchmarks such as CIFAR and STL-10, showcasing its effectiveness in generating semantically meaningful clusters without supervision. The approach is well-grounded in recent findings and offers valuable insights for future research in the field.
### Strengths
- HUME is a clever method that effectively utilizes linear probing across diverse representation spaces to achieve semantically meaningful clusters.
- The insight that human labels should be linearly separable in strong representation spaces is original and significant.
- The paper is well-organized, clearly written, and presents strong justifications for each component of the method.
- HUME outperforms state-of-the-art methods on various unsupervised learning benchmarks, demonstrating its robustness.
- The inclusion of multiple datasets and models in the experiments enhances the generalizability of the findings.
### Weaknesses
- The performance gains may not solely be attributed to HUME but could also stem from the quality of the second representation space. Additional baseline experiments are needed for clarification.
- The assumption of linear separability may not hold for more challenging datasets, raising concerns about HUME's scalability.
- Comparative analyses with other unsupervised methods that utilize large pretrained models are limited, potentially skewing the perceived effectiveness of HUME.
- Some results suggest that if classes are already linearly separable, minimal labeled examples would suffice, questioning the necessity of HUME for certain tasks.
- The paper lacks a thorough evaluation of full semi-supervised learning models, which would strengthen its claims.
### Questions
- What is the class balance when producing reliable samples for SSL, and how does this impact performance across different classes?
- Can the method scale to larger datasets considering the need for second-order gradients, and how does this affect performance?
- What are the computational costs associated with HUME, particularly regarding the use of 300 iterations for MAML's inner loop?
- Could the authors clarify what they mean by 'unbiased estimate' and provide the distance measure used in the figures?
- How do the results generalize to other datasets, and what analyses can be provided to support this?
### Soundness
**Score:** 3
**Description:** 3 = good: The methods and results are sound, though further validation and additional experiments could improve robustness.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is generally well-written but could benefit from clearer explanations in some sections and minor typographical corrections.
### Contribution
**Score:** 4
**Description:** 4 = excellent: The work makes a significant contribution to the field by proposing a novel and impactful approach to unsupervised learning.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements: The paper is technically solid with high impact potential, though it requires some clarifications and additional validation.
### Paper Decision
**Decision:** Accept (spotlight)
**Reasons:** The decision to accept is based on the paper's originality, soundness, and significant contributions to unsupervised learning, despite some weaknesses and areas for improvement that could be addressed in the final version.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Post Hoc Explanations of Language Models Can Improve Language Models
Satyapriya Krishna
Harvard University
Jiaqi Ma
University of Illinois Urbana-Champaign
Dylan Slack
University of California, Irvine
Asma Ghandeharioun
Google Inc
Sameer Singh
University of California, Irvine
Himabindu Lakkaraju
Harvard University
###### Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing complex tasks. Moreover, recent research has shown that incorporating human-annotated rationales (e.g., Chain-of-Thought prompting) during in-context learning can significantly enhance the performance of these models, particularly on tasks that require reasoning capabilities. However, incorporating such rationales poses challenges in terms of scalability as this requires a high degree of human involvement. In this work, we present a novel framework, Amplifying Model Performance by Leveraging In-Context Learning with Post Hoc Explanations (AMPLIFY), which addresses the aforementioned challenges by automating the process of rationale generation. To this end, we leverage post hoc explanation methods which output attribution scores (explanations) capturing the influence of each of the input features on model predictions. More specifically, we construct automated natural language rationales that embed insights from post hoc explanations to provide corrective signals to LLMs. Extensive experimentation with real-world datasets demonstrates that our framework, AMPLIFY, leads to prediction accuracy improvements of about 10-25% over a wide range of tasks, including those where prior approaches which rely on human-annotated rationales such as Chain-of-Thought prompting fall short. Our work makes one of the first attempts at highlighting the potential of post hoc explanations as valuable tools for enhancing the effectiveness of LLMs. Furthermore, we conduct additional empirical analyses and ablation studies to demonstrate the impact of each of the components of AMPLIFY, which, in turn, lead to critical insights for refining in-context learning.
## 1 Introduction
In recent years, Large Language Models (LLMs) [3] have ushered in a new era for machine learning research. These models are exhibiting emergent capabilities that enable them to excel at complex tasks that involve sophisticated capabilities such as reasoning and language understanding [34; 5]. Moreover, these models do not only exhibit remarkable performance on tasks they were trained for but also quickly adapt to other novel and complex tasks. This is made possible through a mechanism known as _in-context learning_, which allows these models to learn from a limited number of input and label pairs, commonly referred to as few-shot prompts[8], provided during test time. Prior research has also demonstrated that the performance of these models on sophisticated reasoning tasks can be significantly improved by presenting them with human-annotated rationales alongsideinput/label pairs during test time [35, 15]. While incorporating such human-annotated rationales has contributed to enhancing model performance, it is not a scalable approach as it involves a lot of human intervention, thus, limiting its applicability to the ever-expanding range of tasks that are being handled by LLMs [29, 38]. Additionally, prompting these models with human-annotated rationales is not always effective, and may lead to drops in performance on certain tasks requiring sophisticated language understanding as demonstrated by prior works [31]. This could be due to the fact that the seemingly innocuous biases introduced by human-annotated rationales do not necessarily align with the performance goals [33].
To address the aforementioned challenges, we propose a novel framework Amplifying Model Performance by Leveraging In-Context Learning with Post Hoc Explanations (AMPLIFY) which can automatically generate rationales to improve the performance of LLMs on tasks involving sophisticated reasoning and language understanding. To this end, we leverage post hoc explanation methods which output attribution scores that capture the influence of each of the input features on model predictions. More specifically, our framework constructs natural language rationales that embed insights from post hoc explanations to provide corrective signals to LLMs. For example, when a LLM makes an error on an instance, our framework enables the model to correct itself by first identifying the top keywords by computing post hoc explanations (e.g., gradients) of the model with respect to the given instance and its true label, and then prompting the model to pay attention to the top keywords identified, thus, amplifying this signal. To illustrate, let us consider a sentiment classification task where a LLM is assessing the sentiment associated with the sentence _"RRR movie has a great story and amazing visuals."_. If the model is incorrect in its assessment, it can be provided with a corrective input such as "The keywords 'great' and 'amazing' are important cues in predicting the sentiment of this sentence" where the keywords themselves are automatically identified by a post hoc explanation method. While post hoc explanations have generally been considered valuable tools for deepening our understanding of model behavior [6] and for identifying root causes of errors made by ML models [10, 11], our work is the first to explore their utility in improving the performance of LLMs.
While the use of post hoc explanation methods to rectify LLMs' behavior eliminates the need for human intervention, it encounters two significant challenges: First, calculating gradients (and hence post hoc explanations) for LLMs with several billions of parameters is computationally intensive. Second, many LLMs operate as black boxes, depriving end users of access to gradients or other internal model details. To mitigate these issues, we compute post hoc explanations for a proxy model (e.g. GPT-2, BERT, etc.) that is considerably smaller than a large language model with billions of parameters. We then incorporate these explanations into input prompts for larger language models (e.g. GPT-3, GPT-3.5, etc.). This approach not only improves efficiency and feasibility (as we are now calculating gradients for models with 100-200 million parameters, instead of those with 100 billion parameters) but also takes advantage of the accessibility of smaller models that are open sourced. In summary, our AMPLIFY framework follows a four-step approach: (1) Select a proxy model for which post hoc explanation generation is computationally viable; (2) Identify samples that are most likely to provide corrective signals to LLMs; (3) Compute post hoc explanations for the samples identified
Figure 1: The AMPLIFY framework consists of four steps aimed at improving the performance of LLMs. (1) We select a proxy model, such as GPT-2 or BERT, which is significantly smaller in size compared to the LLMs and for which it is computationally feasible to generate post hoc explanations. (2) By leveraging the validation set, we identify samples that were misclassified by the LLM. Subsequently, we select the samples that the proxy model exhibits the highest level of confidence in misclassifying. (3) We then use explainability techniques to compute explanations for the selected samples with respect to their ground truth labels. (4) We construct the few-shot prompt for LLM using the samples selected and their corresponding explanations to feed as input to LLM for prediction.
in the previous step; (4) Construct a few-shot prompt using the samples chosen in step (2), their true labels, and the post hoc explanations obtained in step 3 as rationales. This prompt is then provided as input to the LLM at test time.
Our findings demonstrate that AMPLIFY leads to performance improvements of about 10-25% across a wide range of tasks, including those where previously considered prompting techniques such as Chain-of-Thought prompting which rely on human-annotated explanations, fall short. This highlights the potential of post hoc explanation methods as valuable tools for enhancing the effectiveness of LLMs. Furthermore, we conduct an extensive empirical analysis to examine the impact of each step of our framework AMPLIFY. This allows for a better understanding of the change in LLM performance with different choices of proxy model (step 1), selection strategy (step 2), post hoc explanation method (step 3), and rationale templates (step 4). Thus, we offer critical insights for refining in-context learning while addressing the limitations posed by methods dependent on human-annotated rationales.
## 2 Related Works
In-context Learning.Over the past decade, numerous language models have been developed to excel at a wide range of complex predictive tasks [34]. This is accomplished by training and fine-tuning language models on datasets associated with various tasks. While these advancements have led to highly effective language models for numerous tasks, they have also increased the models' parameter sizes and the computational costs for additional fine-tuning on new tasks. To address this issue, recent research has demonstrated that modern language models can learn new tasks in-context, which allows the model to perform well on new tasks by simply providing a few task samples in the prompt [16]. This method of learning contrasts with the conventional fine-tuning process, which incurs additional computational costs [16]. This in-context learning ability is even more pronounced in extremely large language models (>100 billion parameters), where it is also referred to as an "emergent ability" [34]. These findings have garnered significant attention, and various approaches have been proposed to enhance in-context learning by incorporating additional signals into the prompt [15]. The current state-of-the-art among such approaches is the Chain-of-Thought (CoT) technique [35] which augments prompts with human-annotated rationales comprising of step-by-step instructions on how to perform a new task. This method has substantially improved language models' capacity to tackle highly challenging tasks that involve sophisticated reasoning. However, this method relies heavily on human annotations and is therefore not very scalable. Further, prior works have demonstrate that this method also leads to poor performance in certain kinds of reasonings tasks[31]. These limitations persist, and there are largely no solutions for them yet. In this work, we propose an approach that demonstrates a lot of promise in alleviating the aforementioned issues.
Post Hoc Explanations.As language models have become more capable and complex, understanding their behavior and the rationale behind their predictions has grown increasingly challenging [9]. To understand the predictions made by these black boxes, various methods have been developed to provide explanations in the form of feature attributions which capture the influence of each input feature on a given model prediction. These methods are known as post hoc explanation methods [19]. Post hoc explanation methods can be broadly classified into two primary categories: (1) perturbation-based methods and (2) gradient-based methods. Perturbation-based methods involve creating an interpretable approximation of the original black-box model using perturbations of input samples. Notable examples of these methods include LIME [23], SHAP [18], Occlusion [37], and others. In addition, gradient-based methods such as SmoothGrad and Integrated Gradients [28; 30] calculate model gradients with respect to input features to determine the sensitivity of the model's output to each feature. However, to the best of our knowledge, the utility of these explanation methods has not been studied in the context of LLMs. Our work makes the first attempt at exploring the utility of these methods in improving the performance of LLMs.
## 3 Our Framework AMPLIFY
In this section, we describe our framework Amplifying Model Performance by Leveraging In-Context Learning with Post Hoc Explanations (AMPLIFY) in detail. Recall that the main goal of our framework is to eliminate the need for human-annotated rationales, and to instead generate automated rationales which can, in turn, provide corrective signals to improve the performance of LLMs on sophisticated reasoning and language understanding tasks. To this end, our framework leverages post hoc explanations to construct such rationales. More specifically, our framework adopts the following four-step approach: (1) Select a proxy model for which post hoc explanation generation is computationally viable; (2) Identify samples that are most likely to provide corrective signals to LLMs; (3) Compute post hoc explanations for the samples identified in the previous step; (4) Construct a few-shot prompt using the samples chosen in step (2), their true labels, and the post hoc explanations (rationales) obtained in step 3. This prompt is then provided as input to the LLM at test time. We discuss each of these steps in more detail below.
Step (1): Proxy Model Selection. In this step, we choose a proxy model, typically one that is substantially smaller in size compared to LLMs with billions of parameters, so that generating post hoc explanations is computationally viable. Further, we consider a couple of strategies when selecting a suitable proxy model: (i) Use a pre-trained model such as GPT-2, BERT, etc., which has been shown to perform quite well on several tasks and is thousands of times smaller than LLMs (GPT-3, Bloom, etc.) or (ii) Fine-tune or pre-train a smaller language model from scratch on the target task. The major difference between the two strategies is that the first one requires no additional computational cost as we directly use a pre-trained (potentially open-sourced) model. We test both proxy model selection strategies to discern performance variations between them. Lastly, it is important to note that proxy models of the size we select in this step (e.g., GPT-2, BERT etc.) do not exhibit complex reasoning abilities [34]. Consequently, they do not perform well on reasoning tasks by themselves [29]. However, our analyses (more details in Section 4) demonstrate that such smaller models can be used to improve the reasoning capabilities and task performance of LLMs.
Step (2): Few-shot Sample Selection. The goal of this step is to identify samples i.e., (input, label) pairs that are most likely to provide corrective signals to the LLM. To this end, we first identify instances from the validation set that are misclassified by the LLM. We then rank these instances using a metric we introduce called the Misclassification Confidence Score (MCS). Formally, \(\mathbf{MCS}(\mathbf{x})=\mathbf{f}(\mathbf{x})_{\mathbf{y}}-\mathbf{f}( \mathbf{x})_{\mathbf{\hat{y}}}\). Here, \(\mathbf{x}\in\mathbb{R}^{\text{N}}\) represents the input sequence containing N tokens, \(f:\mathbb{R}^{\text{N}}\rightarrow\mathbb{R}^{|\mathcal{L}|}\) is the fine-tuned language model that produces class probability scores for each label in the label set \(\mathcal{L}\), \(f(\mathbf{x})_{\mathbf{y}}\) represents the class probability score for the incorrect label (\(y\)) predicted by the model, and \(f(\mathbf{x})_{\mathbf{\hat{y}}}\) represents the class probability score for the ground truth label (\(\hat{y}\)). The samples that exhibit the highest MCS represent the most egregious misclassifications. By incorporating these samples and their corresponding corrective rationales into the few-shot prompt, the LLM is likely to receive strong supervision to avoid similar misclassifications. In summary, this step results in \(s\) samples of (input (\(\mathbf{x}\)), label (\(\hat{y}\))) pairs, filtered from the validation set, that are likely to carry the most useful corrective signals to assist LLMs.
Step (3): Rationale Generation. In this step, we compute post hoc explanations for each sample obtained from the previous step. Specifically, for each sample, we use a post hoc explanation method with the (input, label) pair, along with the proxy model, which then calculates the attribution scores for each token in the input sentence. These attribution scores, associated with each token, indicate the contribution each token in the input sentence makes towards the proxy model's prediction of the provided label. We then compute attribution scores for each word by averaging the attributions obtained for each token in that word. Finally, we filter the top-\(k\) words with the highest attribution scores. As a result, this step outputs a set of \(k\) words for each sample selected in the previous step. The most commonly used post hoc explanation methods for computing attribution scores of input tokens are based on gradient computations [19]. That is, the attribution for the token \(\mathbf{x_{i}}\) in input \(\mathbf{x}\in\mathbb{R}^{\text{N}}\) is computed as \(\frac{\partial f(\mathbf{x})_{\mathbf{\hat{y}}}}{\partial\mathbf{x_{i}}}\), as is the case with Vanilla Gradients [25]. We experiment with several other post hoc explanation methods discussed in more detail in the experiment section.
Step (4): Prompt Design for LLMs. In the final step, we proceed to construct the corrective rationale for each selected sample using the template: _"The key words: \(word_{1}\), \(word_{2}\),...and \(word_{k}\) are important clues to predict [Label] as the correct answer."_ In this template, "[ \(word_{1}\), \(word_{2}\)..., and \(word_{k}\) ]" refers to the top-\(k\) most important words in the input sentence for the true label, which were obtained from the previous step. Using the few-shot template _[Input][Rationale][Label]_, we construct the \(s\)-shot prompt as _"[Input\({}_{1}\)][Rationale\({}_{1}\)][Label\({}_{1}\)]...[Input\({}_{s}\)][Rationale\({}_{s}\)][Label\({}_{s}\)]"_, which is simply the concatenation of _[Input][Rationale][Label]_ for each selected sample. This constructed prompt is then combined with the input of the test sample to form the final input prompt for the LLMs, enabling them to make predictions for the samples in the test set. This process is illustrated in the last step of Figure 1.
## 4 Experimental Evaluation
In this section, we discuss our empirical evaluation in detail. First, we describe our experiment setup and provide details about the datasets and tasks we experiment with. Next, we evaluate the effectiveness of our framework in improving task performance of LLMs on a variety of real-world tasks. Lastly, we examine the impact of each step of our framework AMPLIFY by conducting rigorous ablation studies.
Datasets. We evaluate our framework AMPLIFY on some of the popular datasets from the BigBench-Hard[29] benchmark. More specifically, we experiment with: (1) The _Snarks_[29] dataset which gauges a model's proficiency in discerning sarcastic sentences from a selection of alternatives; (2) The _Causal Judgment_[29] dataset, designed to evaluate a model's ability in accurately deducing the causative factors of an event from a detailed summary; (3) The _Ruin Names_[29] task, which involves the identification of comical modifications to artist or movie names; (4) The _Formal Fallacies_[29] task, where machine learning models are put to the test to distinguish between logically sound arguments and those with logical discrepancies; (5) The _Salient Translation Error Detection_[29] task, engineered to train models in identifying one out of six predetermined translation errors given a pair of translations; (6) The _CommonsenseQA_[32] dataset, a multiple-choice question answering platform that necessitates a wide variety of commonsense knowledge for accurately determining the correct responses; (7) Lastly, the _Coin Flip_[35] dataset, a synthetically generated dataset used for assessing the symbolic reasoning capacity of LLMs.
Large Language Models. Our methodology is assessed in comparison to baseline approaches on two LLMs. First, GPT-3 [4], a language model with 175 billion parameters, demonstrates robust performance across a range of natural language processing tasks without the need for explicit training or fine-tuning. Second, GPT-3.5 [1] is a series of models that were trained on a mix of text and code data before the end of the fourth quarter in 2021. These models, expressly crafted for chat applications, function as an advanced version of InstructGPT [20].
Post Hoc Explanation Techniques.In this study, we use three widely adopted post hoc explanation methods to generate explanations that are later incorporated as rationales into prompts for in-context learning. These methods include Vanilla Gradients [25], Gradient x Input [24], and contrastive explanations [36]. Vanilla Gradients [25] calculates feature attributions by computing the norm of the gradients of model output with respect to each token's embedding. Gradient x Input derives attribution scores by taking the product of gradient and input embedding. Finally, contrastive gradients determine attribution scores by subtracting the gradients with respect to the model prediction token from those associated with the truth label. We apply these explanation techniques to two proxy models for the generation of corrective rationales in step 3 of AMPLIFY: GPT-2 (\(\sim\)125 Mn parameters)[21] and BERT (\(\sim\)110 Mn parameters) [7].
Baseline Methods. In our experiments, we evaluate the performance of AMPLIFY in comparison to two alternative approaches, namely Answer-Only (AO) prompts [29] and Chain-of-Thought (CoT) [35]. AO prompting represents the standard few-shot prompting technique, in which the input prompt consists of a few (input, label) pairs and the test input sentence, followed by an answer delimiter ('A:') for the LLM to generate the response. Chain-of-Thought (CoT), on the other hand, is the current state-of-the-art method that enhances the input prompt by including human-annotated rationales for each few-shot example. The LLM is then expected to generate a rationale followed by an answer.
Implementation Details. In our experiments, we implemented the AO and CoT baselines using the same methodology as described in their respective works. For CoT, we directly used the provided rationales from the original work for the corresponding datasets [35]. In the case of AMPLIFY, we employed GPT-2[22] fine-tuned for the target task as the proxy model for step 1, unless stated otherwise. We utilized a rationale template with \(k=5\), which is of the form: _"The key words: word\({}_{1}\), word\({}_{2}\),...and word\({}_{5}\) are important clues to predict [ground truth label] as the correct answer"_. These keywords _"word\({}_{1}\), word\({}_{2}\),...and word\({}_{5}\)"_ were chosen based on the highest attribution scores obtainedfrom the post hoc explanation computed in step 3. To compute these attribution scores, we used Gradient x Input as the default post hoc explanation method for generating explanations.
### Empirical Analysis
Overall Task Performance.We demonstrate the effectiveness of AMPLIFY by comparing the prediction accuracy of LLMs using prompts generated by AMPLIFY against baselines, i.e., Answers-Only (AO) prompts and Chain-of-Thought (CoT) prompts. Table 1 displays the results of GPT-3 and GPT-3.5 across all datasets. We observe that incorporating rationales generated by our approach leads to a substantial improvement in accuracy compared to both standard Answer-Only (AO) prompts and Chain-of-Thought (CoT) prompts. Specifically, GPT-3.5 achieves state-of-the-art performance on the Snarks dataset with a 91.6% accuracy in identifying the correct option; this is 16% better than standard answer-only prompting and over 20% better than CoT. Similar trends were observed for Causal Judgment, where our method delivered the best performance of 76.3%, significantly surpassing CoT (63.1%) and AO (57.8%). When using GPT-3, our approach attained the highest performance in Ruin Names (78.6%), a trend also evident in the case of Formal Fallacies. Finally, our method achieved the top performance with the GPT-3.5, registering an accuracy of 60.8% for the Salient Translation Error Detection task. In the scenarios of commonsense reasoning (CommonsenseQA) and symbolic reasoning (Coin Flip) tasks, we noticed consistent trends, with AMPLIFY recording the highest performance. In the next set of analyses, we examine the effects of different steps in AMPLIFY on the performance of LLMs.
Impact of Proxy Model Selection on LLM Performance.In the following analysis, we investigate how the choices made at each step of AMPLIFY affect the performance of LLM on the seven tasks. We begin with step 1, which involves selecting a proxy model for sample selection (step 2) and computing post hoc explanations(step 3). In the experiments conducted to calculate the overall model performance, as shown in Table 1, we utilized a finetuned GPT-2 model for the target task as
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{[29, 35]} & \multicolumn{3}{c}{Human-Rater [38]} & \multicolumn{3}{c}{GPT-3} & \multicolumn{3}{c}{GPT-3.5} \\ \cline{2-11} Experiment Tasks & Random & SOTA & Avg. & Max & AO & CoT & AMPLIFY & AO & CoT & AMPLIFY \\ \hline Snarks & 50.0 & 71.3 & 76.7 & 100 & 52.7 & 61.1 & 80.5 & 75.0 & 69.4 & **91.6** \\ Causal Judgment & 50.0 & 62.1 & 69.6 & 100 & 55.2 & 55.2 & 60.5 & 57.8 & 63.1 & **76.3** \\ Ruin Names & 25.0 & 72.8 & 77.7 & 100 & 64.0 & 62.9 & **78.6** & 69.6 & 62.9 & 77.5 \\ Formal Fallacies & 25.0 & 52.2 & 90.8 & 100 & 53.6 & 50.8 & **60.1** & 51.4 & 54.6 & 59.9 \\ Salient Translation Error Detection & 16.7 & 31.9 & 36.7 & 80.0 & 48.2 & 50.2 & 51.7 & 43.2 & 54.7 & **60.8** \\ CommonsenseQA & 20.0 & 80.0 & 90.0 & 100 & 69.3 & 72.6 & 73.5 & 75.7 & 75.2 & **77.9** \\ Coin Flip (OOD) & - & - & - & 54.7 & 63.3 & **65.7** & 52.9 & 61.0 & 65.3 \\ \hline All Tasks _(avg)_ & 31.1 & 61.7 & 73.5 & 96.6 & 56.8 & 58.0 & 67.2 & 60.8 & 62.9 & **72.7** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Few-shot prompting performance of several large language models on the seven datasets. AO: standard “answer-only” prompting. CoT: chain-of-thought prompting. Best model performance is in bold. The LLMs we experimented with are GPT-3 and GPT-3.5. The recorded performance in this table represents the percentage of test samples for which the LLM accurately predicted the true label.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{GPT-3} & \multicolumn{3}{c}{GPT-3.5} \\ \cline{2-7} Experiment Tasks & E = 0 & E = 10 & E = 200 & E = 0 & E = 10 & E = 200 \\ \hline Snarks & 77.7 & 80.5 & 80.5 & 88.8 & 88.8 & **91.6** \\ Causal Judgment & 55.2 & 57.8 & 60.5 & 71.0 & 73.6 & **76.3** \\ Ruin Names & 74.1 & 75.2 & **78.6** & 65.1 & 67.4 & 77.5 \\ Formal Fallacies & 53.7 & 56.9 & **60.1** & 48.3 & 51.6 & 59.8 \\ Salient Translation Error Detection & 49.7 & 51.2 & 51.7 & 57.7 & 60.8 & **60.8** \\ CommonsenseQA & 69.1 & 72.6 & 73.5 & 71.9 & 75.8 & **77.9** \\ Coin Flip (OOD) & 56.4 & 60.8 & **65.7** & 55.4 & 61.4 & 65.3 \\ \hline All Tasks _(avg)_ & 62.2 & 65.0 & 67.2 & 65.4 & 68.4 & **72.7** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Few-shot prompting performance of multiple LLMs on the seven datasets when post hoc explanations, which form the rationale in the prompt constructed during step 4 of AMPLIFY, are generated using models with varying degrees of fine-tuning of the proxy model (GPT-2 in this case). Here, “E” represents the number of epochs the proxy model was fine-tuned. “E = 0” indicates that the proxy model was used to generate post hoc explanations without any fine-tuning. The recorded performance in this table represents the percentage of test samples for which the LLM accurately predicted the true label.
the proxy model. While using GPT-2 for finetuning is computationally cheaper compared to other LLMs, it is still expensive to finetune a model with more than 100 million parameters. Therefore, we examined the performance of LLMs based on the amount of fine-tuning, measured in terms of the number of epochs (E). This analysis aimed to understand the impact of finetuning on improving model performance. Table 2 presents the model performance scores of LLMs when the proxy model is GPT-2 without any fine-tuning on the target task (E=0), with minor fine-tuning (E=10), and when GPT-2 has achieved its best performance at epoch E=200. As depicted in Table 2, we observe that the model performance of LLMs with GPT-2 (E=0) is already quite close to the best performance achieved when GPT-2 is finetuned to saturation (E=200) for most datasets. Specifically, the performance of GPT-3.5 for Snarks with GPT-2 (E=0) is 88.8%, which is significantly better than the best performance of CoT shown in Table 1. This pattern is also evident in the case of Causal Judgment, Salient Translation Error Detection, and Ruin Names. There are two primary reasons for this behavior. Firstly, GPT-2 possesses sufficient language understanding capabilities to provide useful post hoc explanations that lead to accurate predictions. Secondly, most of the tasks where GPT-2 (E=0) achieved the best or near-best performance have very limited training data, which might not be sufficient for GPT-2 to learn anything beyond what it has already acquired during pre-training. This observation is further supported by the findings presented in Table 5 in Appendix A, where the accuracy improvements for most datasets are not substantial. These findings suggest that an additional step of fine-tuning a pre-trained model may not always be necessary when selecting a proxy model in step 1 of AMPLIFY, thereby reducing computational costs even further. Lastly, we observe similar trends when BERT is chosen as the proxy model, and the detailed results are presented in Appendix A.4.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{GPT-3} & \multicolumn{4}{c}{GPT-3.5} \\ \cline{2-9} Experiment Tasks & Grad & Grad\(\times\)Inp & C-Grad & C-Grad\(\times\)Inp & Grad & Grad\(\times\)Inp & C-Grad & C-Grad\(\times\)Inp \\ \hline Snarks & 77.7 & 80.5 & 80.5 & 86.1 & 88.8 & **91.6** & 88.8 & 91.6 \\ Causal Judgment & 60.5 & 60.5 & 57.8 & 60.5 & 71.0 & **76.3** & 71.0 & 73.6 \\ Ruiin Names & 71.9 & **78.6** & 75.2 & 77.5 & 65.1 & 77.5 & 73.0 & 74.1 \\ Formal Fallacies & 59.7 & **60.1** & 59.7 & 58.6 & 59.9 & 59.9 & 59.4 & 57.6 \\ Salient Translation Error Detection & 49.7 & 51.7 & 51.7 & 50.7 & 59.7 & **60.8** & 60.8 & 60.8 \\ CommonsenseQA & 72.1 & 73.5 & 72.9 & 73.0 & 73.7 & **77.9** & 75.5 & 77.9 \\ Coin Flip (OOD) & 62.9 & 64.1 & 62.6 & **65.7** & 62.6 & 63.9 & 62.4 & 65.3 \\ \hline All Tasks (_avg_) & 64.9 & 67.0 & 65.7 & 67.3 & 68.6 & **72.5** & 70.1 & 71.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The table presents a performance comparison for when prompt is created using explanations generated by four different post hoc explanation methods. Grad: Vanilla Gradient Method, Grad \(\times\) Inp : Gradient x Input Method, C-Grad and C-Grad\(\times\)Inp are contrastive version of Vanilla gradient and Gradient x Input. The recorded performance in this table represents the percentage of test samples for which the LLM accurately predicted the true label.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{GPT-3} & \multicolumn{4}{c}{GPT-3.5} \\ \cline{2-9} Experiment Tasks & Random & L-MCS & H-MCS & F-Exp & Random & L-MCS & H-MCS & F-Exp \\ \hline Snarks & 69.4 & 80.5 & 80.5 & 77.7 & 69.4 & 88.8 & **91.6** & 88.8 \\ Causal Judgment & 57.8 & 60.5 & 60.5 & 57.8 & 68.4 & 73.6 & **76.3** & 71.0 \\ Ruin Names & 65.1 & 77.5 & **78.6** & 74.1 & 66.2 & 77.5 & 77.5 & 73.0 \\ Formal Fallacies & 52.3 & 57.9 & **60.1** & 59.7 & 46.7 & 51.5 & 59.9 & 58.6 \\ Salient Translation Error Detection & 48.7 & 50.2 & 51.7 & 51.7 & 53.2 & 59.2 & **60.8** & 58.7 \\ CommonsenseQA & 67.6 & 71.5 & 73.5 & 70.9 & 72.9 & 76.6 & **77.9** & 77.2 \\ Coin Flip (OOD) & 54.7 & 60.1 & **65.7** & 58.5 & 57.8 & 61.6 & 65.3 & 61.1 \\ \hline All Tasks (_avg_) & 59.3 & 65.4 & 67.2 & 64.3 & 62.0 & 69.8 & **72.7** & 69.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Few-shot prompting performance of various large language models on the seven datasets is analyzed based on different selection strategies used for choosing samples during prompt design of LLMs, specifically in step 2 of Figure 1. The “Random” selection refers to randomly chosen samples. “L-MCS ” signifies the selection of samples with the lowest Misclassification Confidence Score (MCS). “H-MCS ” represents the selection strategy of choosing samples with the Misclassification Confidence Score (MCS) for prompt design. “F-Exp” indicates the selection strategy of choosing samples with the most faithful explanations for LLM prompt. The recorded performance in this table represents the percentage of test samples for which the LLM accurately predicted the true label.
**Impact of Selection Strategies on LLM Performance.** In this analysis, we focus on step 2 of AMPLIFY, which involves selecting samples for few-shot prompts that can provide effective corrective signals to assist LLMs in reducing misclassifications. As explained in section 3, this step in AMPLIFY includes two sub-steps: first, identifying misclassified samples by LLM on the validation set, and second, selecting samples with the highest MCS calculated with respect to proxy model. The first step is straightforward as we specifically focus on samples that were misclassified earlier by LLMs. For subsequent filtering based on confidence, we experiment with several sample selection strategies to better understand how the performance of the language model is sensitive to these different strategies. The results, in terms of language model performance scores, are shown in Table 3. Specifically, we experiment with three selection strategies in addition to the one that selects samples with the highest MCS score, referred to as "H-MCS " in Table 3: (1) "Random": randomly selecting samples without considering Misclassification Confidence Score, (2) "L-MCS": selecting samples with the lowest MCS, and (3) "F-EXP": selecting samples based on the most faithful explanations, measured by the difference in model output when the top-K features identified by the explanation are perturbed [27, 33]. Among these strategies, we find that selecting samples with the highest Misclassification Confidence Score yields the best performance, outperforming all other strategies. Random selection performs the worst across all datasets, while the performance using "L-MCS" is comparable to that of "H-MCS" selections. This suggests that samples for which the model is less confident in its predictions can still provide reasonable corrective signals in reducing misclassifications. Notably, the LLM performance with "F-EXP" sample selection is worse than "L-MCS" for most datasets (Snarks, Causal Judgment, Ruin Names, and CommonsenseQA), indicating that relying solely on faithful explanations may not be sufficient for achieving optimal performance.
Figure 2: This image exemplifies an instance of Causal Judgment task where standard prompts and CoT produce inaccurate responses. The CoT response fails to take into account that the red wire should not make contact with the battery, which caused the short circuit. In contrast, the response generated by AMPLIFY emphasizes this crucial detail. Please note that while we inject rationales in terms of k individual words, we do not restrict LLMs from generating rationales in terms of phrases or multiple words. This is why we often see LLM-generated rationales having multi-word clues, such as _”red wire,” ”never supposed,”_ and so on.
Impact of Post Hoc Explanation Method on LLM Performance.We then examine the impact on LLM performance due to the choice of post hoc explanation used to identify top-k keywords in step 3. To investigate this, we employ four different explanation methods for step 3 of AMPLIFY and record the LLM performance corresponding to each post hoc explanation method choice in Table 4. Specifically, the four post hoc explanation methods used in this analysis are: (1) Vanilla Gradients [25] (Grad), (2) Gradient \(\times\) Input [24] (Grad\(\times\)Inp), (3) Contrastive Gradient [36] (C-Grad), and (4) Contrastive Gradient \(\times\) Input [36] (C-Grad\(\times\)Inp). Based on Table 4, we observe that the LLM performs best in general when Gradient x Input or its contrastive variant is used to generate explanations. However, we also note that there aren't drastic changes in LLM performance across different methods. For instance, GPT-3.5 performance on Snarks doesn't change much across different methods, as the accuracy remains consistently around 88.8-91.6%. This suggests that LLM performance isn't sensitive to rationales generated using different variants of gradient-based post hoc explanation methods.
Impact of Rationale Template on LLM Performance.Lastly, in the final step of AMPLIFY, we generate a few-shot prompt by combining an (input, label) pair and its corresponding set of more important words using the rationale template as _"The key words: word\({}_{1}\), word\({}_{2}\),...and word\({}_{5}\) are crucial clues for predicting [ground truth label] as the correct answer"_. We have observed that while using a task-independent rationale template leads to improvements in performance, tailoring the rationale to the question asked in the input sentence for a given dataset also provides benefits. For example, in the case of Causal Judgment, each sample includes a generic question: _"How would a typical person answer each of the following questions about causation?"_ If we utilize the rationale template as _"After observing the key words: word\({}_{1}\), word\({}_{2}\),...and word\({}_{5}\), a typical person would respond with [label] as the correct answer"_, we notice a slight enhancement in GPT-3 performance, rising from 60.5% to 63.1%. However, we did not observe the model's sensitivity to minor changes in the template, such as punctuation variations. Further discussions on the impact of hyperparameters associated with AMPLIFY can be found in Appendix A.3.
Qualitative Analysis.In addition to quantitatively evaluating the performance of AMPLIFY compared to other baselines, we also qualitatively examine how LLM responses differ for certain test samples using each of the baseline prompting approaches, and compare them to the responses generated by AMPLIFY. Figure 2 illustrates the responses generated by GPT-3.5 for an input sample using the Standard Prompt (AO), Chain-of-Thought (CoT), and AMPLIFY. In this particular example, both AO and CoT yield incorrect responses, whereas AMPLIFY produces the correct response. Analyzing the responses reveals that CoT and AO miss an important sentence in the sample: _"The red wire is never supposed to touch the battery as it moves around inside the machine"_. Interestingly, GPT-3.5 captures this crucial information when the prompt is augmented with post hoc explanations using AMPLIFY. We observe similar examples for CommonsenseQA, such as _"Q: Unlike a spider and its many observers, people only have what? Answer Choices: (A) tongues (B) names (C) brains (D) feelings (E) two eyes."_. In this case, CoT incorrectly selects option (C), whereas AMPLIFY correctly predicts option (E). The complete response is shown in Figure 3 in the Appendix A. This error also stems from the same issue of CoT overlooking crucial information that the question pertains to eyes rather than the entire body, a nuance that AMPLIFY successfully captures. This demonstrates that the rationales generated by AMPLIFY can assist LLMs in capturing critical signals that might have otherwise been overlooked.
## 5 Conclusion
In this work, we introduce AMPLIFY, a novel framework aimed at improving the performance of LLMs by replacing human-annotated rationales with automated rationales obtained using post hoc explanation techniques. Our unique four-step approach leverages smaller, open-source models for efficient computation of post hoc explanations. Our framework results in performance improvements of 10-25% across diverse tasks, outperforming conventional techniques such as CoT prompting which rely on human-annotated rationales. Our findings highlight the potential of post hoc explanation methods as valuable tools for enhancing the effectiveness of LLMs.
## Acknowledgments and Disclosure of Funding
We thank Adam Pearce for helpful feedback on writing and presentation of this work. This work is supported in part by the NSF awards IIS-2008461, IIS-2040989, IIS-2238714, and research awards from Google, JP Morgan, Amazon, Harvard Data Science Initiative, and the Digital, Data, and Design (D\({}^{\wedge}\)3) Institute at Harvard. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. This work was also funded in part by Hasso Plattner Institute (HPI) through the UCI-HPI fellowship, and in part by the NSF awards IIS-2008956, IIS-2046873, and IIS-2040989.
## References
* [1]G. Gpt-3.5-turbo. [https://platform.openai.com/docs/model-index-for-researchers](https://platform.openai.com/docs/model-index-for-researchers),. Accessed: 2022-01-01.
* [2] Gpt-4 system card. [https://cdn.openai.com/papers/gpt-4-system-card.pdf](https://cdn.openai.com/papers/gpt-4-system-card.pdf),. Accessed: 2022-01-01.
* [3] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, et al. On the opportunities and risks of foundation models. _arXiv preprint arXiv:2108.07258_, 2021.
* [4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_, 33:1877-1901, 2020.
* [5] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. _arXiv preprint arXiv:2303.12712_, 2023.
* [6] S. Choudhary, N. Chatterjee, and S. K. Saha. Interpretation of black box nlp models: A survey. _arXiv preprint arXiv:2203.17081_, 2022.
* [7] J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In _Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)_, 2018.
* [8] Q. Dong, L. Li, D. Dai, C. Zheng, Z. Wu, B. Chang, X. Sun, J. Xu, and Z. Sui. A survey for in-context learning. _arXiv preprint arXiv:2301.00234_, 2022.
* [9] F. Doshi-Velez and B. Kim. Towards a rigorous science of interpretable machine learning. _arXiv preprint arXiv:1702.08608_, 2017.
* [10] M. Idahl, L. Lyu, U. Gadiraju, and A. Anand. Towards benchmarking the utility of explanations for model debugging. _arXiv preprint arXiv:2105.04505_, 2021.
* [11] S. Jesus, C. Belem, V. Balayan, J. Bento, P. Saleiro, P. Bizarro, and J. Gama. How can i choose an explainer? an application-grounded evaluation of post-hoc explanations. In _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_, pages 805-815, 2021.
* [12] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung. Survey of hallucination in natural language generation. _ACM Computing Surveys_, 55(12):1-38, 2023.
* [13] S. Krishna, T. Han, A. Gu, J. Pombra, S. Jabbari, S. Wu, and H. Lakkaraju. The disagreement problem in explainable machine learning: A practitioner's perspective. _arXiv preprint arXiv:2202.01602_, 2022.
* [14] H. Lakkaraju, N. Arsov, and O. Bastani. Robust and stable black box explanations. In _International Conference on Machine Learning_, pages 5628-5638. PMLR, 2020.
* [15] A. K. Lampinen, I. Dasgupta, S. C. Chan, K. Matthewson, M. H. Tessler, A. Creswell, J. L. McClelland, J. X. Wang, and F. Hill. Can language models learn from explanations in context? _arXiv preprint arXiv:2204.02329_, 2022.
* [16] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. _ACM Computing Surveys_, 55(9):1-35, 2023.
* [17] A. S. Luccioni, S. Viguier, and A.-L. Ligozat. Estimating the carbon footprint of bloom, a 176b parameter language model. _arXiv preprint arXiv:2211.02001_, 2022.
* [18] S. M. Lundberg and S.-I. Lee. A unified approach to interpreting model predictions. In _Advances in Neural Information Processing Systems_, 2017.
* [19] A. Madsen, S. Reddy, and S. Chandar. Post-hoc interpretability for neural nlp: A survey. _ACM Computing Surveys_, 55(8):1-42, 2022.
* [20] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_, 35:27730-27744, 2022.
* [21] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019.
* [22] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9, 2019.
* [23] M. T. Ribeiro, S. Singh, and C. Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In _ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, 2016.
* [24] A. Shrikumar, P. Greenside, and A. Kundaje. Learning important features through propagating activation differences. In _Proceedings of the 34th International Conference on Machine Learning_, 2017.
* [25] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In _International Conference on Learning Representations_, 2014.
* [26] D. Slack, A. Hilgard, S. Singh, and H. Lakkaraju. Reliable post hoc explanations: Modeling uncertainty in explainability. _Advances in neural information processing systems_, 34:9391-9404, 2021.
* [27] D. Slack, S. Krishna, H. Lakkaraju, and S. Singh. Talktomodel: Understanding machine learning models with open ended dialogues. _arXiv preprint arXiv:2207.04154_, 2022.
* [28] D. Smilkov, N. Thorat, B. Kim, F. Viegas, and M. Wattenberg. Smoothgrad: Removing noise by adding noise. _arXiv preprint arXiv:1706.03825_, 2017.
* [29] A. Srivastava, A. Rastogi, A. Rao, A. A. M. Shoeb, A. Abid, A. Fisch, A. R. Brown, A. Santoro, A. Gupta, A. Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. _arXiv preprint arXiv:2206.04615_, 2022.
* [30] M. Sundararajan, A. Taly, and Q. Yan. Axiomatic attribution for deep networks. In _International Conference on Machine Learning_, 2017.
* [31] M. Suzgun, N. Scales, N. Scharli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. V. Le, E. H. Chi, D. Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. _arXiv preprint arXiv:2210.09261_, 2022.
* [32] A. Talmor, J. Herzig, N. Lourie, and J. Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4149-4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL [https://aclanthology.org/N19-1421](https://aclanthology.org/N19-1421).
* [33] M. Turpin, J. Michael, E. Perez, and S. R. Bowman. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. _arXiv preprint arXiv:2305.04388_, 2023.
* [34] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, et al. Emergent abilities of large language models. _arXiv preprint arXiv:2206.07682_, 2022.
* [35] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. _arXiv preprint arXiv:2201.11903_, 2022.
* [36] K. Yin and G. Neubig. Interpreting language models with contrastive explanations. _arXiv preprint arXiv:2202.10419_, 2022.
* [37] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In _Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part 1 13_, pages 818-833. Springer, 2014.
* [38] W. Zhong, R. Cui, Y. Guo, Y. Liang, S. Lu, Y. Wang, A. Saied, W. Chen, and N. Duan. Agavel: A human-centric benchmark for evaluating foundation models. _arXiv preprint arXiv:2304.06364_, 2023.
Appendix
### Proxy Model Task Performance
### Qualitative Analysis
Figure 3 shows an example from CommonsenseQA where GPT-3.5 responses using AO and CoT prompting yield an incorrect answer. The most likely reason for this is that these prompt strategies don't seem to capture all the key points of the input sentence, i.e., the context in the input is based on eyes rather than the overall body. However, this crucial detail is captured when GPT-3.5 is prompted with AMPLIFY. We observe that the GPT-3.5 response is correct, and it acknowledges "eyes" as the most important clue in making the correct prediction.
### Hyper-parameter Analysis
Recall that AMPLIFY has two other primary hyper-parameters apart from the rationale template choice discussed in our empirical findings, namely, \(s\), which is the size of the few-shot prompt created for LLMs, and \(k\), which is the number of most important tokens identified by the post hoc explanation. Table 6 shows the LLM performance variations for different combinations of (\(k\), \(s\)). It is important to note that AMPLIFY does not have scalability constraints with increasing \(s\) and \(k\), as AMPLIFY computes prompts automatically. This is unlike CoT, where increasing the size of the few-shot prompt would require more human effort to generate relevant chains of thoughts.
### Impact of BERT as Proxy Model on LLM Performance
Table 7 shows LLM performance when BERT is used as the proxy model in step 1 of AMPLIFY. We observe similar trends as those observed for the case of GPT-2, where fine-tuning proxy model provides marginal improvements in general. This indicates that the fine-tuning step could be avoided in most cases to reduce additional computational overhead.
## Appendix B Limitations and Broader Impacts
Our work proposes a new framework, AMPLIFY, which focuses on improving the task performance of LLMs by injecting automatically generated rationales. This framework results in the reduction of reliance on processes that require heavy human intervention. These processes, which rely on rationales based on human annotations, often suffer from noise and biases, which may transfer to LLMs during in-context learning. We hope that automated rationale creation will provide a solution to mitigate this problem. While our approach provides significant improvements in model performance, the broader negative impact pertaining to LLMs, such as safety concerns in the form of misinformation[2], social bias[2], hallucination[12], etc., and the massive carbon footprint due to heavy usage of LLMs [17], may still persist even when using our proposed framework. Other than the limitations of LLMs, our framework relies on post hoc explanation methods to create automated rationales; hence, AMPLIFY may also inherit widely studied issues with post hoc explanations such as robustness[14], the disagreement problem[13], stability[26], etc.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{GPT-2} & \multicolumn{2}{c}{BERT} \\ \cline{2-5} Experiment Tasks & Pre-trained & Fine-tuned & Pre-trained & Fine-tuned \\ \hline \hline Smarks & 38.8 & 47.2 & 30.5 & 38.8 \\ Causal Judgment & 44.7 & 55.2 & 44.7 & 52.6 \\ \hline \hline Ruin Names & 07.8 & 26.9 & 10.1 & 22.4 \\ Formal Fallacies & 50.5 & 54.4 & 51.6 & 53.5 \\ Salient Translation Error Detection & 14.0 & 27.1 & 11.5 & 22.6 \\ CommonsenseQA & 07.4 & 29.1 & 08.8 & 26.9 \\ Coin Flip & 45.2 & 59.4 & 51.1 & 59.7 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Proxy models performance on the target tasks with and without fine-tuning.
## Appendix C Additional Experiments
### Larger Proxy Model: GPT-2-Medium
While we experimented with the fine-tuning of proxy models, it's important to note that this step can be eliminated by using a more capable pretrained proxy model, while still achieving performance gains over baselines. The Table 8 shows that the performance of LLM surpasses the baseline (CoT) when we use gpt2-medium instead of gpt2-small, without any fine-tuning. This demonstrates that fine-tuning of the proxy model is not mandatory. Our motivation to show results for fine-tuning models in the paper is to demonstrate the improvement in LLM performance when the proxy model is further fine tuned.
### Multi-step Problem Solving : GSM8k
For our experiments, we focused on tasks that require complex language understanding[31], which are also cases where post hoc explanations have been found to be useful in capturing important features, hence providing useful explanations[19]. However, we also experimented with GSM8k (math problem dataset) used in CoT [35] and observed that AMPLIFY outperforms AO but performs lower than CoT, as shown in Table 9.
While we outperform the standard few-shot approach, the underperformance of AMPLIFY when compared to CoT is expected because solving math problems requires multi-step reasoning, a complex function which is beyond what post hoc explanations are designed to explain. We further wish to clarify that we do not present AMPLIFY as a replacement for CoT, but rather as a superior alternative for tasks requiring complex language understanding; these are tasks for which obtaining chains-of-thought through human annotations is exceptionally challenging [31].
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{GPT-3 (\(k\),\(s\))} & \multicolumn{4}{c}{GPT-3.5 (\(k\),\(s\))} \\ Experiment Tasks & (2, 5) & (5, 5) & (5, 10) & (7, 10) & (2, 5) & (5, 5) & (5, 10) & (7, 10) \\ \hline Snarks & 63.8 & 72.2 & 80.5 & 80.5 & 75.0 & 80.5 & **91.6** & 88.8 \\ Causal Judgment & 52.6 & 57.8 & 60.5 & 60.5 & 65.7 & 73.6 & 76.3 & **76.3** \\ Ruin Names & 64.0 & 75.2 & 76.4 & **78.6** & 73.0 & 75.2 & 77.5 & 77.5 \\ Formal Fallacies & 55.5 & 57.9 & 59.8 & **59.8** & 56.3 & 58.8 & 59.6 & 59.6 \\ Salient Translation Error Detection & 49.7 & 50.2 & 51.2 & 51.2 & 52.7 & 56.2 & 60.8 & **60.8** \\ CommonsenseQA & 72.8 & 73.1 & 73.3 & 73.5 & 76.0 & 76.7 & 77.6 & **77.9** \\ Coin Flip (OOD) & 64.9 & 65.3 & 65.7 & **65.7** & 63.3 & 65.0 & 65.3 & 65.3 \\ \hline All Tasks _(avg)_ & 60.4 & 64.5 & 66.7 & 67.1 & 66.0 & 69.4 & **72.6** & 72.3 \\ \hline \hline \end{tabular}
\end{table}
Table 6: This figure shows LLM performance for the different selections of \(k\) and \(s\) hyper-parameters of AMPLIFY, as denoted by (\(k\), \(s\)) for each column. In general, we observe (\(k\) = 7, \(s\) = 10) achieves the best results for most of the datasets.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{4}{c}{GPT-3} & \multicolumn{4}{c}{GPT-3.5} \\ Experiment Tasks & E = 0 & E = 10 & E = 200 & E = 0 & E = 10 & E = 200 \\ \hline Snarks & 66.6 & 72.2 & 72.2 & 80.8 & 80.8 & **88.8** \\ Causal Judgment & 50.0 & 52.6 & 57.8 & 71.0 & 73.6 & **73.6** \\ Ruin Names & 70.7 & 73.0 & **73.0** & 71.9 & 71.9 & 71.9 \\ Formal Fallacies & 56.2 & 56.9 & **58.5** & 56.7 & 56.9 & 57.8 \\ Salient Translation Error Detection & 50.2 & 51.2 & 51.2 & 56.2 & 59.2 & **60.8** \\ CommonsenseQA & 71.3 & 71.8 & 72.4 & 76.1 & 76.5 & **77.4** \\ Coin Flip (OOD) & 65.4 & 65.8 & **65.9** & 63.7 & 64.3 & 65.1 \\ \hline All Tasks _(avg)_ & 61.2 & 63.1 & 68.0 & 68.0 & 69.0 & **70.7** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Few-shot prompting performance of multiple LLMs on the seven datasets when post hoc explanations, which form the rationale in the prompt constructed during step 4 of AMPLIFY, are generated using models with varying degrees of fine-tuning of the proxy model (BERT in this case). Here, “E” represents the number of epochs the proxy model was fine-tuned. “E = 0” indicates that the proxy model was used to generate post hoc explanations without any fine-tuning. The recorded performance in this table represents the percentage of test samples for which the LLM accurately predicted the true label.
### Analysis on Other Big-Bench Hard Tasks
We conducted experiments on some more tasks from Big-Bench Hard [31] and observed similar gains achieved by AMPLIFY, shown in Table 10. However, we observed only minimal improvement in the task of word sorting. This is because word sorting requires an understanding of lexical properties over linguistic semantics.
\begin{table}
\begin{tabular}{l c c c} \hline Experiment Tasks & AMPLIFY (gpt2-small) & AMPLIFY (gpt2-medium) & CoT \\ \hline Snarks & 88.8 & **91.6** & 69.4 \\ Causal Judgment & **71.0** & **71.0** & 63.1 \\ Ruin Names & 65.1 & **70.7** & 62.9 \\ Formal Fallacies & 48.3 & **56.0** & 54.6 \\ Salient Translation & 57.7 & **60.8** & 54.7 \\ CommonsenseQA & 71.9 & **75.5** & 75.2 \\ Coin Flip (OOD) & 55.4 & 59.6 & **61.0** \\ \hline \end{tabular}
\end{table}
Table 8: The table presents a performance comparison for different models on various experiment tasks. The recorded performance in this table represents the percentage of test samples for which the model accurately predicted the true label.
Figure 3: This image exemplifies an instance of CommonsenseQA task where standard prompts and CoT produce inaccurate responses. The CoT response fails to take into account the context in the question being related to eyes. In contrast, the response generated by AMPLIFY emphasizes this crucial detail.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Experiment Tasks & Random & SOTA & Avg. & Max & AO & CoT & AMPLIFY \\ \hline Disambiguation QA & 33.2 & 51.6 & 66.6 & 93.3 & 66.6 & 70.5 & **74.5** \\ Word Sorting & 0 & 33.1 & 62.6 & 100 & 37.8 & 43.1 & **43.6** \\ Hyperbaton & 50.0 & 67.1 & 74.7 & 100 & 68.5 & 77.4 & **79.7** \\ \hline \end{tabular}
\end{table}
Table 10: The table presents a performance comparison for the Disambiguation QA, Word Sorting, and Hyperbaton. The recorded performance in this table represents the percentage of test samples for which the model accurately solved the task.
\begin{table}
\begin{tabular}{l c c c} \hline Experiment Tasks & AO & CoT & AMPLIFY (proxy model : gpt2-small) \\ \hline GSM8k & 22.7 & **43.5** & 27.4 \\ \hline \end{tabular}
\end{table}
Table 9: The table presents a performance comparison for the GSM8k task using three different methods: AO, CoT, and AMPLIFY with gpt2-small as the proxy model. The recorded performance in this table represents the percentage of test samples for which the model accurately solved the math problem. | ## Review
### Summary
This paper introduces the AMPLIFY framework, which utilizes post-hoc explanations from a proxy model to enhance the performance of large language models (LLMs) in in-context learning. The framework comprises four stages: selecting a lightweight proxy model, identifying misclassified examples, computing attribution scores to highlight important features, and crafting prompts for LLMs. Experimental results demonstrate significant improvements over baseline methods across various tasks, indicating the potential of AMPLIFY in providing effective, automated few-shot demonstrations without requiring extensive human annotation. Despite some concerns regarding its practicality and comparative evaluations, the method offers a novel approach to improving LLM performance.
### Strengths
- The paper is well-written, clear, and well-motivated.
- The use of post-hoc explanations to enhance prompting methods is a novel and interesting approach.
- The experimental results show substantial performance improvements across multiple tasks.
- Extensive ablation experiments are included, providing thorough analyses of the proposed method's effectiveness.
- The work opens new avenues for using post-hoc explanation techniques in LLMs.
### Weaknesses
- The method requires additional resources, such as fine-tuning a proxy model for each target task, which may not be practical.
- The selection of misclassified samples from the validation set raises questions about the validity of the few-shot approach.
- The framework's reliance on the size of the validation set and other details regarding baseline comparisons are insufficiently addressed.
- There is a lack of comparison with other approaches, such as Auto-CoT, which could provide a more robust context for evaluating performance.
- The performance of AMPLIFY with non-finetuned proxy models is inconsistent across tasks and is not adequately discussed.
### Questions
- What is the minimum size of a validation set required to ensure reliable explanations from the proxy model?
- How does the performance of AMPLIFY compare to CoT in complex reasoning tasks like math problems?
- What training datasets are used when fine-tuning the proxy model?
- Can the authors clarify the default number of shots used in experiments for different LLMs?
- Is AMPLIFY expected to generalize well to models with fewer than 100B parameters?
### Soundness
**Score:** 3
**Description:** 3 = good: The methodology is sound and results are generally robust, although some concerns regarding experimental design and comparisons remain.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-structured and mostly clear, but could benefit from additional details regarding methodology and results.
### Contribution
**Score:** 3
**Description:** 3 = good: The paper contributes significantly to the field by introducing a novel method for utilizing post-hoc explanations, though it lacks comprehensive comparisons with existing methods.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically solid with moderate-to-high impact, but it has some notable weaknesses that need addressing in revisions.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel approach that effectively enhances LLM performance using post-hoc explanations, demonstrating significant improvements across tasks. While there are concerns about the practicality of the method and the robustness of the comparisons, the contributions to the field are notable. The clarity of the presentation and the thoroughness of the experimental evaluations further support the decision to accept, albeit with recommendations for additional clarifications and comparisons.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# CoVR: Learning Composed Video Retrieval
from Web Video Captions
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
Composed Image Retrieval (CoIR) has recently gained popularity as a task that considers _both_ text and image queries together, to search for relevant images in a database. Most CoIR approaches require manually annotated datasets, containing image-text-image triplets, where the text describes a modification from the query image to the target image. However, manual curation of CoIR triplets is expensive and prevents scalability. In this work, we instead propose a scalable automatic dataset creation methodology that generates triplets given video-caption _pairs_. To this end, we mine paired videos with a similar caption from a large database, and leverage a large language model to generate the corresponding modification text. We automatically construct our WebVid-CoVR dataset by applying this procedure to the large WebVid2M collection, resulting in 1.6M triplets. Moreover, we introduce a new benchmark for composed _video_ retrieval (CoVR) and contribute a manually annotated evaluation set, along with baseline results. We further show that training a CoVR model on our dataset transfers well to CoIR, improving the state of the art in the zero-shot setup on both the CIRR and FashionIQ benchmarks. Our code, datasets, and models will be made publicly available.
## 1 Introduction
Consider the scenario where a traveller takes a picture of a landmark or scenic spot and wants to discover videos that capture the essence of that location, by specifying certain conditions via text. For example, the query image in Figure 1 (of a fountain in Barcelona), along with the text "during show" should bring the video showcasing the fountain show. Further refining the text query such as "during show at night", would allow the traveller to decide whether to wait for the show until the night time. In this work, our goal is composed video retrieval (CoVR), where the user performs such multi-modal search, by querying an image of a particular visual concept and a modification text, to find videos that exhibit the similar visual characteristics with the desired modification, in a dynamic context.
Figure 1: **Task: Composed Video Retrieval (CoVR) seeks to retrieve _videos_ from a database by searching with both a query image and a query text. The text typically specifies the desired modification to the query image. In this example, a traveller might wonder how the photographed place looks like during a fountain show, by describing several modifications, such as “during show at night, with people, with fireworks”.**CoVR has many use cases, including but not limited to searching online videos for finding reviews of a specific product, how-to videos of a tool for specific usages, live events in specific locations, sports matches of specific players. Similar to composed image retrieval (CoIR), CoVR is also particularly useful when conveying a concept with a visual is easier and/or more accurate than only using words (e.g., unknown location/object, a specific camera view, a specific color).
Given the increased momentum in vision and language research in the recent years [31, 45], CoIR has emerged as a new task [57], and since then witnessed improvements of both models and benchmarks [6, 7, 21, 28, 37, 58]. However, to the best of our knowledge, CoVR was not studied before. A key challenge in building CoVR models is the difficulty of gathering suitable training data of image-text-video triplets. We overcome this limitation by developing an automatic approach to generate triplets from existing video-caption collections. Specifically, we mine video pairs whose corresponding captions slightly differ in text space. We automatically describe this difference with a language model, which we train for a _modification-text generation_ task. In particular, we use manually annotated triplets, each containing: (a) source caption, (b) target caption, (c) the modification text. We then finetune a large language model (LLM) [54] by inputting (a-b), and outputting (c). We assume the resulting modification to describe the difference between the corresponding videos, thus obtaining video-text-video triplets (see Figure 2 for an overview). When training our CoVR/CoIR models, we can select one or more frames from the videos, enabling multiple settings (i.e., retrieving images or videos).
We apply our triplet generation approach to the WebVid2M dataset [4] which contains 2.5M Web-scraped video-caption pairs. This results in the WebVid-CoVR training dataset with 1.6M CoVR triplets. By virtue of its automatic generation procedure, WebVid-CoVR is inherently noisy. To efficiently train on such large-scale and noisy training data, we use a contrastive loss [55] and additionally sample hard negatives that have the same source caption but different target captions. We design a CoVR model based on the cross-modal BLIP [31] and use query scoring [5] to exploit information from multiple video frames. Training this model on WebVid-CoVR transfers well to the CoIR task, in both zero-shot and finetuning settings, and achieves state-of-the-art results on the CIRR and FashionIQ benchmarks in the zero-shot setup. Finally, to foster research in CoVR, we repeat our generation procedure on a separate subset of the WebVid10M dataset [4] and manually select correctly generated samples to constitute WebVid-CoVR\({}_{m}\), a test set of 2,435 CoVR triplets. We find that our model achieves promising results on WebVid-CoVR\({}_{m}\) compared to standard baselines.
To summarize, our contributions are: (i) We propose a scalable approach to automatically generate composed visual retrieval training data. We apply this pipeline to the WebVid2M dataset and generate the WebVid-CoVR training dataset with 1.6M CoVR triplets. (ii) We show that training a CoVR model on WebVid-CoVR transfers well to the CoIR task, and achieves state-of-the-art results on the CIRR and FashionIQ benchmarks in the zero-shot setup. (iii) We evaluate our model on WebVid
Figure 2: **Method overview:** We automatically mine similar caption pairs from a large video-caption database from the Web, and use our modification text generation language model (MTG-LLM) to describe the difference between the two captions. MTG-LLM is trained on a dataset of 715 triplet text annotations [8]. The resulting triplet of two corresponding videos (query \(q\) and target video \(v\)) and the modification text (\(t\)) is therefore obtained fully automatically, allowing a scalable CoVR training data generation.
CoVR\({}_{m}\), a new CoVR benchmark that we manually annotate. Our code and dataset are provided in the Supplementary Material, and will be publicly released together with our models.
## 2 Related Work
**Composed image retrieval (CoIR).** CoIR [57] has been an active area of research in recent years [7, 14, 25]. Most methods designed for this problem use manually annotated data for training. Some recent works, such as Pic2Word [47] and SEARLE [6], explore zero-shot CoIR setups where no manually annotated CoIR triplet is used. These approaches build on CLIP [45] and train directly on unlabeled image(-text) data. In contrast, we use unlabeled video-text pairs to automatically generate composed video retrieval (CoVR) triplets, train a CoVR model on the generated data, and study zero-shot and finetuning transfer of the resulting model on both CoIR and CoVR.
**Datasets for composed image retrieval.** CIRR [37] and Fashion-IQ [58] are the two most widely used CoIR benchmarks. Both are manually annotated, hence small scale (about 30K triplets, see Table 1) due to the high cost implied in collecting CoIR triplets. To scale up, two concurrent works proposed larger, automatically generated CoIR datasets: LaSCo [28] and SynthTriplets18M [21]. However, these two datasets are currently not publicly available. The LaSCo dataset [28] is generated using the visual question answering annotations and the pairing between images and counterfactual images in the VQAv2 dataset [3]. In detail, this dataset provides for each (image, question, answer) triplet a counterfactual triplet with the same question and different image and answer. In contrast, we do not rely on such expensive annotation schemes. SynthTriplets18M [21] uses the text-conditioned image editing framework InstructPix2Pix [8] to automatically generate CoIR data. Their edit text generation process is similar to ours, but our generation process differs in that we automatically mine similar videos from a dataset of unlabeled video-text pairs to construct CoVR triplets instead of generating visual data. In experiments, we show the superiority of our generation procedure as we achieve much higher CoIR results (e.g., 38% vs 19% zero-shot R@1 on CIRR while generating fewer data). Lastly, our WebVid-CoVR dataset is composed of videos, and not limited to still images.
**Vision-language pretraining.** Many strong multi-modal models have been pretrained on large datasets of image-caption pairs [2, 13, 24, 27, 30, 32, 34, 38, 45, 48, 51, 67, 71] or video-caption pairs [1, 29, 33, 41, 42, 53, 59, 60, 68, 69, 70]. In contrast, we generate CoVR training data from video-caption pairs instead of directly training on them. Our data generation approach is also related to other generation approaches used for other tasks, e.g., action recognition [43], visual question answering [62, 63] and visual dialog [35]. However, unlike all these tasks, the CoVR task requires retrieving visual data.
**Video retrieval.** Text-to-video retrieval has received great attention over the last few years [17, 18, 19, 36, 39, 40, 46, 59, 61, 64, 65]. We also make use of multiple video frames with query scoring similar to [5]. However, different from these methods, we focus on _composed_ video retrieval, where the query consists of both text and visual data.
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline Dataset & Type & \#Triplets & \#Visuals & \#Unique words & Avg. text length & Domain \\ \hline CIRR [37] & \(\blacksquare\) & 36,554 & 21,185 & 7,129 & 59.51 & Natural \\ FashionIQ [58] & \(\blacksquare\) & 30,132 & 7,988 & 4,425 & 27.13 & Fashion \\ CIRCO [6] & \(\blacksquare\) & 1,020 & - & - & - & Natural \\ LaSCo [28] & \(\blacksquare\) & 389,305 & 121,479 & 13,488 & 30.70 & Natural \\ SynthTriplets18M [21] & \(\blacksquare\) & 18,000,000 & - & - & - & Synthetic \\ WebVid-CoVR & \(\blacksquare\) & 1,648,789 & 130,775 & 19,163 & 23.36 & Natural \\ WebVid-CoVR\({}_{m}\) & \(\blacksquare\) & 2,435 & 2,435 & 1,764 & 22.03 & Natural \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Existing datasets:** We compare our proposed WebVid-CoVR training dataset and its manually annotated test set WebVid-CoVR\({}_{m}\) with existing composed visual retrieval datasets. \(\blacksquare\) denotes image, \(\blacksquare\) denotes video datasets. We contribute the largest training dataset for the natural domain. Note that, while SynthTriplets18M is larger, the transfer performance to real images is ineffective potentially due to a domain gap (see Table 3).
Automatic Triplet Generation and CoVR Training
The goal of the composed video retrieval (CoVR) task is, given an input video or image \(q\) and a modification text \(t\), to retrieve a modified video \(v\) in a large database of videos. We wish to avoid the manual annotation of \((q,t,v)\) triplets for training. Hence we automatically generate such triplets from Web-scraped video-caption pairs, as explained in Section 3.1 and illustrated in Figure 2. The resulting WebVid-CoVR dataset, together with its manually curated evaluation set, is presented in Section 3.2. Finally, we present how we train a CoVR model using WebVid-CoVR in Section 3.3.
### Generating composed video retrieval triplets
Given a large (Web-scraped) dataset of video-caption pairs \((v,c)\), we wish to automatically generate video-text-video CoVR triplets \((q,t,v)\) where the text \(t\) describes a modification to the visual query \(q\). However, the dataset of video-caption pairs neither contains annotations of paired videos, nor modification text that describes their difference. Hence we propose a methodology to automatically mine paired videos and describe their difference, as described below. Note that for illustration, we take as an example the WebVid2M dataset [4] with 2.5M video-caption pairs, but this methodology could be applied to other large datasets of video-text (or image-text) pairs.
**Mining paired videos by pairing captions.** In order to obtain paired videos, we leverage their captions. The core idea is that videos with similar captions are likely to have similar visual content. Specifically, we consider captions that differ by a single word, excluding punctuation marks. For instance, the caption _"Young woman smiling"_ is paired with _"Old woman smiling"_ and _"Young couple smiling"_. In the 2M distinct captions from WebVid2M, this process allows us to identify a vast pool of 1.2M distinct caption pairs with 177K distinct captions, resulting in 3.1M paired videos.
**Filtering caption pairs.** We wish to automatically generate the modification text between paired videos using their (paired) captions. However, caption pairs with the same meaning are likely to result in meaningless differences. On the contrary, caption pairs that differ too much are likely to result in large visual differences that cannot be easily described. To address these issues, we filter out caption pairs that are too similar and too dissimilar. Specifically, we exclude caption pairs with CLIP text embedding similarity \(\geq\) 0.96 (e.g., _"Fit and happy young couple playing in the park"_ and _"Fit and happy young couple play in the park"_) and caption pairs with CLIP text embedding similarity \(\leq\) 0.6 (e.g., _"Zebra on a white background"_ and _"Coins on a white background"_). We also exclude pairs where the captions differ by a digit (which mostly consist of date in practice), or by an out-of-vocabulary word. Finally, we remove templated captions such as _"abstract of"_, _"concept of"_, and _"flag of"_ which are over-represented.
**Generating a modification text from paired captions.** In order to generate a modification text between paired videos, we apply a modification text generation large language model (MTG-LLM) to their corresponding paired captions. We describe the MTG-LLM inference process below and then explain its training details. The MTG-LLM takes as input two paired captions and generates a modification text that describes the difference between the two captions (see Fig. 2). In detail, the generation is auto-regressive, i.e., we recursively sample from the token likelihood distribution conditioned on the previously generated tokens until an end-of-sentence token is reached. To increase the diversity of the generated samples, we use top-k sampling instead of maximum-likelihood-based methods such as beam search and its variants [56]. Note that we only generate a single modification text per caption pair for computational efficiency, but the MTG-LLM could be used to generate multiple modification texts per caption pair which could serve as a data augmentation in future work.
We now describe the training details of the MTG-LLM. We start from a LLM pretrained with a next token prediction objective on a Web-scale text dataset [54]. We then finetune this LLM for the MTG task on a manually annotated text dataset. In particular, we repurpose the editing dataset from InstructPix2Pix [8], which provides a modification text and a target caption for 700 input captions. We augment this dataset with 15 additional annotations that are useful in our use case. These examples involve transformations such as changing singular nouns to plural (_tree_ to _trees_), as well as addressing specific edge cases. More details can be found in the Supplementary Material.
**Filtering video pairs.** We wish to avoid some modification texts being over-represented in the dataset as it could harm training. Hence, if there are more than 10 video pairs associated with the same pair of captions (therefore leading to the same modification text), we only select 10 video pairs. As the CoVR task typically involves similar query-target video pairs, we choose pairs of videos with the highest visual similarity, as measured by the CLIP visual embedding similarity computed at the middle frame of the videos.
### Analysis of WebVid-CoVR
**WebVid-CoVR: a large-scale CoVR training dataset.** By applying the previously described pipeline to the WebVid2M dataset [4], we generate WebVid-CoVR, a dataset containing 1.6M CoVR triplets, which is significantly more than prior datasets (see Table 1). On average, a video lasts 16.8 seconds, a modification text contains 4.8 words, and one target video is associated with 12.7 triplets. WebVid-CoVR is highly diverse with 131K distinct videos and 467K distinct modification texts. Examples of CoVR triplets from the WebVid-CoVR dataset are illustrated in Figure 3. These examples show the diversity of the data in WebVid-CoVR, and its noise due to the automatic generation procedure. We provide further analysis of the WebVid-CoVR dataset in the supplementary material.
**WebVid-CoVR\({}_{m}\): a new CoVR evaluation benchmark.** Due to the noise in WebVid-CoVR, we manually annotate a small test set, dubbed WebVid-CoVR\({}_{m}\), for evaluation. For this, we first repeat the data generation procedure described in Section 3.1, but on a different corpus of video-caption pairs. Specifically, we consider video-caption pairs from the WebVid10M corpus [4] that are not included in the WebVid2M dataset, resulting in a pool of 8 million video-caption pairs. This ensures that other models using WebVid2M for pretraining have not been exposed to any of the test examples. In the video pairs filtering stage, for each pair of captions, we here only keep one pair of videos (the one with the highest visual similarity). This results in 163K candidate triplets that could be used for testing purposes. We randomly sample 7K triplets that we use for validation and randomly sample 3.1K other triplets that we manually annotate as described below.
We augment the 3.1K triplets by generating two additional modification texts with the MTG-LLM. The annotator reads the three generated modification texts, looks at three frames from the query and target videos, and either keeps the best modification text if at least one is valid or discards the sample. Through this meticulous annotation process, we ensure that the test set comprises high-quality and meaningful CoVR triplets. This results in a test set of 2.4K triplets, i.e., about 23% of the examples are considered as noisy and are discarded.
### Training on WebVid-CoVR
Here, we describe our CoVR model architecture and how we train it on our WebVid-CoVR dataset.
**CoVR-BLIP model architecture.** Our model architecture builds upon a pretrained image-text model, BLIP [31]. The BLIP model is pretrained on a large dataset of image-caption pairs with three vision-language objectives: image-text contrastive learning, image-text matching, and image-conditioned language modeling. However, BLIP is not pretrained for composed visual retrieval with both visual and text inputs. Therefore we adapt BLIP to the CoIR/CoVR task as follows.
Figure 3: **Examples of generated CoVR triplets in WebVid-CoVR: The middle frame of each video is shown with its corresponding caption, with the distinct word highlighted in bold. Additionally, the generated modification text is displayed on top of each pair of videos.**
We use the BLIP image encoder to encode the image query. The resulting visual features and the modification text are then forwarded to the BLIP image-grounded text encoder together, which outputs a multi-modal embedding \(f_{i}\in\mathbb{R}^{d}\) where \(d\) is the embedding dimension. To retrieve a target video from a database of videos \(V\), we compute embedding vectors for all possible videos as follows. We uniformly sample \(N\) frames from the video and compute a weighted mean of the BLIP image embeddings to obtain the video embedding vector \(\hat{v}\in\mathbb{R}^{d}\). The weights are obtained by computing the image-caption similarity for every video frame with BLIP image and text encoder, respectively, similar to [4] in the context of text-to-video retrieval. Finally, given a multi-modal embedding \(f_{i}\), the retrieved video is the one that maximizes the embedding similarity, i.e., \(\text{arg}\max_{v\in V}(\hat{v}.f_{i}^{T})\).
**Training.** In order to train on WebVid-CoVR, we use a contrastive learning approach [44; 55], as it has been shown to be effective to learn strong multi-modal representations from large-scale noisy data [41; 45]. We make several design choices to maximize its efficiency. First, we create a training batch by sampling distinct target videos and for each target video, we randomly sample an associated query image and modification text. This ensures that the same target video appears only once in a batch and maximizes the number of different target videos that can be used as negatives in contrastive learning.
Second, following HN-NCE [44], we use as negatives all target videos \(v_{j\in\mathcal{B}}\) in the batch \(\mathcal{B}\) and additionally increase the weight of most similar samples. In addition, we mine hard negative samples that we select based on the captions associated with the videos in WebVid2M. Specifically, for a given \((q_{i},t_{i},v_{i})\) triplet, we consider as hard negatives all instances in the batch \((q_{j},t_{j},v_{j})\in HN(i)\) where \(q_{i}\) and \(q_{j}\) have the same caption but \(v_{i}\) and \(v_{j}\) have different captions. In addition, to reduce the number of noisy negatives with the same semantic content as a given sample \(i\), we exclude from the computation of the loss samples \((q_{j},t_{j},v_{j})\in P(i)\) for which \(v_{i}\) and \(v_{j}\) have the same caption.
Formally, given a training batch \(\mathcal{B}\) of triplets \((q_{i},t_{i},v_{i})\), we minimize the following loss:
\[\mathcal{L}(\mathcal{B})=\sum_{i\in\mathcal{B}}\{ -\text{log}(\frac{e^{S_{i,i}/\tau}}{\sum_{j\in\mathcal{B}\setminus P(i)}e ^{S_{i,j}/\tau}w_{i,j}+\alpha\sum_{j\in HN(i)}e^{S_{i,j}/\tau}})\] \[-\text{log}(\frac{e^{S_{i,i}/\tau}}{\sum_{j\in\mathcal{B}\setminus P (i)}e^{S_{j,i}/\tau}w_{j,i}+\alpha\sum_{j\in HN(i)}e^{S_{j,i}/\tau}})\}\]
where \(\alpha\) and \(\tau\) are learnable parameters, \(S_{i,j}\) is the cosine similarity between the multi-modal embedding \(f_{i}\) and the target video embedding \(\hat{v}_{i}\), \(HN(i)\) is the set of hard negatives, \(P(i)\) is the set of noisy negatives and \(w_{i,j}\) is set as in [44].
## 4 Experiments
In this Section, we first describe the experimental protocol including the datasets, evaluation metrics, and implementation details (Section 4.1). We then present the results of CoVR on our new video benchmark (Section 4.2), as well as transfer results of CoIR on standard image benchmarks (Section 4.3). Finally, we provide ablations on our key components (Section 4.4).
### Experimental setup
**Datasets.** WebVid-CoVR is our proposed training CoVR dataset presented in Section 3.2, and WebVid-CoVR\({}_{m}\) is our new CoVR benchmark presented in Section 3.2.
CIRR [37] is a manually annotated CoIR dataset that contains open-domain natural images from NLVR2 [52]. It contains 36.5K queries annotated on 19K different images. CIRR includes two benchmarks: a standard one with the target search space as the entire validation corpus, and a fine-grained _subset_, where the search space is a subgroup of six images similar to the query image (based on pretrained ResNet15 feature distance). The dataset is divided into training, validation, and testing splits with 28,225/16,742, 4,181/2,265 and 4,148/2,178 queries/images, respectively.
FashionIQ [58] is a CoIR dataset that contains images of fashion products, divided into three categories of Shirts, Dresses, and Tops/Tees. The query and target images were automatically paired based on title similarities (crawled from the web), and modification texts were then manually annotated. This dataset consists of 30K queries annotated on 40.5K different images. It is divided into training and validation splits with 18,000/45,429 and 6,016/15,415 queries/images, respectively.
* [233]**Evaluation metrics.** Following standard evaluation protocols [37], we report the video retrieval recall at rank 1, 5, 10, and 50. Recall at rank k (R@k) quantifies the number of times the correct video is among the top k results. MeanR denotes the average of R@1, R@5, R@10, and R@50. Higher recall means better performance.
* [232]**Implementation details.** For our MTG-LLM, we use LLaMA 7B model [54] that we finetune for one epoch with an initial learning rate of \(3e^{-5}\) for MTG. For our CoVR model, we use the BLIP with ViT-L [16] at 384 pixels finetuned for text-image retrieval on COCO and freeze the ViT for computational efficiency. We train our CoVR model on WebVid-CoVR for 3 epochs with a batch size of 2048 and an initial learning rate of \(1e^{-5}\). To finetune on CIRR/FashionIQ, we train for 6/3 epochs with a batch size of 2048/1024 and an initial learning rate of \(5e^{-5}/1e^{-4}\). Experiments are conducted on 4 NVIDIA A100-SXM4-80GB GPUs. More details are included in the Supplementary Material.
### Composed video retrieval results
We report CoVR results on our WebVid-CoVR\({}_{m}\) test set in Table 2. For models trained on WebVid-CoVR, we find that using both modalities is crucial for performance, as the model with visual and text inputs outperforms both the text-only and the visual-only models. Furthermore, using multiple target video frames is beneficial, as the model with 15 frames improves over the model with 1 frame.
\begin{table}
\begin{tabular}{c c c|c c c c c} \hline \hline Train on WebVid-CoVR & Method & Input modalities & \#frames & R@1 & R@5 & R@10 & R@50 \\ \hline \multirow{4}{*}{No} & Random & - & - & 0.08 & 0.21 & 0.49 & 2.34 \\ & CoVR-BLIP & Text & - & 19.88 & 37.66 & 45.91 & 66.08 \\ & CoVR-BLIP & Visual & 15 & 37.04 & 61.36 & 69.94 & 87.23 \\ & CoVR-BLIP & Visual+Text & 15 & 15.98 & 33.22 & 41.36 & 59.18 \\ \hline \multirow{4}{*}{Yes} & CoVR-BLIP & Text & - & 20.78 & 41.68 & 51.29 & 71.05 \\ & CoVR-BLIP & Visual & 15 & 37.04 & 61.36 & 69.94 & 87.23 \\ \cline{1-1} & CoVR-BLIP & Visual+Text & 1 & 53.43 & 80.00 & 87.27 & 97.66 \\ \cline{1-1} & CoVR-BLIP & Visual+Text & 15 & **54.87** & **80.99** & **88.30** & **98.11** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Benchmarking on the WebVid-CoVR\({}_{m}\) test set:** We find that training on WebVid-CoVR, using both the visual and text input modalities, and using multiple frames to model the target video are all important factors of CoVR performance.
\begin{table}
\begin{tabular}{c l c|c c c c c c c} \hline \hline \multirow{2}{*}{Mode} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Recall@K} & \multicolumn{4}{c}{R\({}_{\text{subject}}\)@K} \\ & & & K=1 & K=5 & K=10 & K=50 & K=1 & K=2 & K=3 \\ \hline \multirow{8}{*}{No} & TIRG [57]† & - & & 14.61 & 48.37 & 64.08 & 90.03 & 22.67 & 44.97 & 65.14 \\ & TIRG-LastConv [57]† & - & & 11.04 & 35.68 & 51.27 & 83.29 & 23.82 & 45.65 & 64.55 \\ & MAAF [15]† & - & & 10.31 & 33.03 & 48.30 & 80.06 & 21.05 & 41.81 & 61.60 \\ & MAAF-BERT [15]† & - & & 10.12 & 33.10 & 48.01 & 80.57 & 22.04 & 42.41 & 62.14 \\ & MAAF-TT [15]† & - & & 9.90 & 32.86 & 48.83 & 80.27 & 21.17 & 42.04 & 60.91 \\ & MAAF-RP [15]† & - & & 10.22 & 33.32 & 48.68 & 81.84 & 21.41 & 42.17 & 61.60 \\ & ARTEMIS [14] & - & & 16.96 & 46.10 & 61.31 & 87.73 & 39.99 & 62.20 & 75.67 \\ & CIRPLANT [37] & - & & 19.55 & 52.55 & 68.39 & 92.38 & 39.20 & 63.03 & 79.49 \\ & TrI-BLIP [7, 28] & - & & 20.89 & 48.07 & 61.16 & 83.71 & 50.22 & 73.16 & 86.82 \\ (CIRR) & CompDiff [21] & SynthTriplets18M [21] & 22.35 & 54.36 & 73.41 & 91.77 & 35.84 & 61.11 & 76.60 \\ & Combiner [7] & - & & 33.59 & 65.35 & 77.35 & 95.21 & 62.39 & 81.81 & 92.02 \\ & CASE [28] & - & & 48.00 & 79.11 & 87.25 & **97.57** & 75.88 & **90.58** & **96.00** \\ & CASE [28] & LaSCo [28] & 48.79 & 99.88 & 85.51 & 97.49 & 76.39 & 90.12 & 95.86 \\ & CASE [28] & LaSCo [28]+COCO [10] & 49.35 & **80.02** & **88.75** & 97.47 & **76.48** & 90.37 & 95.71 \\ \cline{2-8} & CoVR-BLIP & - & & 49.33 & 78.51 & 86.53 & 94.53 & 75.81 & 88.29 & 92.99 \\ & CoVR-BLIP & WebVid-CoVR & **50.55** & 79.23 & 87.30 & 94.70 & 75.69 & 88.58 & 93.33 \\ \hline \multirow{8}{*}{Zero} & Random† & - & 0.04 & 0.22 & 0.44 & 2.18 & 16.67 & 33.33 & 50.00 \\ & CompDiff [21] & SynthTriplets18M [21] & 19.37 & 53.81 & 72.02 & 90.85 & 28.96 & 49.21 & 67.03 \\ \cline{1-1} & PieZWord [47] & Conceptual Captions [49] & 23.90 & 51.70 & 65.30 & 87.80 & - & - \\ \cline{1-1} & CASE [28] & LaSCo [28] & 30.89 & 60.75 & 73.88 & 92.84 & 60.17 & 80.17 & 90.41 \\ \cline{1-1} & CASE [28] & LaSCo [28]+COCO [10] & 35.40 & 65.78 & **78.53** & **94.63** & 64.29 & 82.66 & **91.61** \\ \cline{1-1} \cline{2-8} & CoVR-BLIP & - & & 19.76 & 41.23 & 50.89 & 71.64 & 63.04 & 81.01 & 89.37 \\ \cline{1-1} \cline{2-8} & CoVR-BLIP & WebVid-CoVR & **38.55** & **66.80** & 77.25 & 91.61 & **69.42** & **84.22** & 91.16 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **State-of-the-art comparison on the CIRR test set:** Our model benefits from training on WebVid-CoVR in the zero-shot setting, and in the finetuning setting where it performs competitively. \(\dagger\) denotes results reported by [37].
We also evaluate baselines that are not trained on WebVid-CoVR and that directly apply the pretrained BLIP model [31] to the CoVR task. These baselines outperform the random baseline but underperform compared to models trained on WebVid-CoVR, showing the benefit of our automatically generated training dataset. Note that BLIP [31] is pretrained for image-text retrieval but not for image-text-image retrieval, hence the drop in performance when applied directly to CoVR with both input modalities compared to only using visual information.
### Transfer learning to composed image retrieval
While our focus is video retrieval, we also experiment with transferring our CoVR models to image retrieval tasks on standard CoIR benchmarks. We define zero-shot CoIR as not using any manually annotated CoIR triplet for training. We perform zero-shot CoIR by directly applying our model trained on our automatically generated WebVid-CoVR dataset to CoIR tasks and also explore finetuning our model on the training set of the downstream benchmark.
Tables 3 and 4 report results on CIRR and Fashion-IQ datasets, respectively. These results show that our model highly benefits from training on WebVid-CoVR, especially in the zero-shot setting, on both datasets. In addition, our model achieves state-of-the-art zero-shot performance on both CIRR and FashionIQ, and performs competitively in the finetuning setting on both benchmarks.
\begin{table}
\begin{tabular}{c l|c c|c c c|c c|c c} \hline \hline & & Pretraining & \multicolumn{3}{c|}{Shirt} & \multicolumn{3}{c|}{Dress} & \multicolumn{3}{c|}{Toptee} & \multicolumn{3}{c}{Average} \\ Mode & Method & Data & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 \\ \hline \hline \multirow{7}{*}{\begin{tabular}{} \end{tabular} } & JVSM [11] & - & 12.0 & 27.1 & 10.7 & 25.9 & 13.0 & 26.9 & 11.9 & 26.6 \\ & CRPLAPANT [37] & - & 17.53 & 38.81 & 17.45 & 40.41 & 61.64 & 45.38 & 18.87 & 41.53 \\ & TRACE w/BER [23] & - & 20.80 & 40.80 & 22.70 & 44.91 & 24.22 & 49.80 & 22.57 & 46.19 \\ & VAL w/GloVe [12] & - & 22.38 & 44.15 & 22.53 & 44.00 & 27.53 & 51.68 & 24.15 & 46.61 \\ & MAAF [15] & - & 21.3 & 44.2 & 23.8 & 48.6 & 27.9 & 53.6 & 24.3 & 48.8 \\ & CurlingNet [66] & - & 21.45 & 44.56 & 26.15 & 53.24 & 30.12 & 55.23 & 25.90 & 51.01 \\ & RTIC-GCN [50] & - & 23.79 & 47.25 & 29.15 & 54.04 & 31.61 & 57.98 & 28.18 & 53.09 \\ & CoSMo[26] & - & 24.90 & 49.18 & 25.64 & 50.30 & 29.21 & 57.46 & 26.58 & 52.31 \\ & MErEMISi[14] & - & 21.78 & 43.64 & 27.16 & 52.40 & 29.20 & 53.83 & 26.05 & 50.29 \\ & DCNet[25] & - & 23.95 & 47.30 & 28.95 & 56.07 & 30.44 & 58.29 & 27.78 & 53.89 \\ & SAC w/BERT[22] & - & 28.02 & 51.86 & 26.52 & 51.01 & 32.70 & 61.23 & 29.08 & 54.70 \\ & FashionVLP[20] & - & 31.89 & 58.44 & 32.42 & 60.29 & 38.51 & 68.79 & 34.27 & 62.51 \\ & LF-CLIP (Combiner) [7] & - & 36.36 & 58.00 & 31.63 & 56.67 & 38.19 & 62.42 & 35.39 & 59.03 \\ & LF-BLIP [7, 28] & - & 25.39 & 43.57 & 25.31 & 44.05 & 26.54 & 44.48 & 25.75 & 43.98 \\ & CASE [28] & LaSCo [28] & **48.78** & **70.23** & **47.44** & **69.36** & 50.18 & 72.24 & 48.79 & **70.68** \\ & CoVR-BLIP & - & 48.04 & 68.20 & 44.92 & 68.91 & 52.47 & **74.71** & 48.48 & 70.61 \\ & CoVR-BLIP & WebVid-CoVR & **48.48** & 67.86 & 45.31 & 68.37 & **53.14** & 73.94 & **48.98** & 70.06 \\ \hline \multirow{7}{*}{
\begin{tabular}{} \end{tabular} } & Random & - & 0.16 & 0.70 & 0.26 & 1.31 & 0.19 & 0.95 & 0.06 & 0.32 \\ & PiceWord [47] & CC3M [9] & 26.2 & 43.6 & 20.0 & 40.2 & 27.9 & 47.4 & 24.7 & 43.7 \\ \cline{1-1} \cline{2-11} & CoVR-BLIP & - & 16.68 & 30.67 & 13.44 & 31.93 & 17.85 & 35.70 & 15.99 & 32.77 \\ \cline{1-1} & CoVR-BLIP & WebVid-CoVR & **30.37** & **46.27** & **21.81** & **39.02** & **30.85** & **49.06** & **27.68** & **44.78** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **State-of-the-art comparison on the FashionIQ validation set:** Our model benefits from training on WebVid-CoVR in the zero-shot setting, and in the finetuning setting. CC3M is Conceptual Captions 3M [9].
\begin{table}
\begin{tabular}{c c c c|c c|c c|c c} \hline \hline \multicolumn{2}{c|}{_Initial_} & \multicolumn{2}{c|}{_Generated_} & \multicolumn{3}{c|}{WebVid-CoVR\({}_{m}\)} & \multicolumn{3}{c|}{CIRR} & \multicolumn{3}{c}{FashionIQ} \\ \#videos & \#target videos & \#triplets & Filtering & R@1 & MeanR & R@1 & MeanR & R@10 & MeanR \\ \hline
0 & - & - & - & 15.98 & 37.44 & 19.76 & 45.88 & 15.99 & 24.38 \\ \hline
200k & 10k & 4k & ✓ & 25.13 & 51.22 & 33.90 & 63.32 & 26.22 & 35.83 \\
500k & 14k & 66k & ✓ & 46.04 & 74.24 & 38.31 & 67.80 & **28.76** & **37.78** \\
1M & 38k & 269k & ✓ & 48.46 & 76.47 & 38.51 & 67.95 & 28.41 & 37.38 \\ \hline
2.5M & 130k & 1.6M & ✓ & **54.87** & **80.57** & **38.55** & **68.55** & 27.68 & 36.23 \\ \hline
2.5M & 212k & 3.6M & ✗ & 49.86 & 76.12 & 34.10 & 64.77 & 25.81 & 34.16 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Data size:** We experimentally validate the importance of the number of videos used for data generation and of filtering the generated data, evaluated by downstream performance on WebVid-CoVR\({}_{m}\) (test), CIRR (test), and FashionIQ (val). All models are trained for the same number of iterations on the generated data. Training batches are made up with distinct target videos.
### Ablation studies
In this Section, we ablate the importance of several key aspects of our method by evaluating the downstream performance of the model trained only on WebVid-CoVR.
**Importance of data scale.** In Table 5, we evaluate the importance of the scale of the dataset of video-captions used in our generation pipeline. We construct subsets of videos such that larger ones include smaller ones, and only keep triplets that contain the sampled videos for training. We find that results steadily increase when using more videos, demonstrating that our method largely benefits from scaling the size of the seed dataset of video-captions. We also observe the importance of the filtering techniques described in Section 3.1, as the model trained on unfiltered generated data underperforms.
**Modification text generation.** We use a large language model finetuned for modification text generation as explained in Section 3.1. We here compare this solution to a rule-based baseline that uses several templates to generate the modification text given the two captions that differ by one word. Specifically, the modification text is based on the two different words from the captions. We generate templates that use these words and chose one at random during training. These templates include variations such as _"Remove_txt_diff_1" and _"Change_txt_diff_1_for_txt_diff_2". A full list of all the templates can be seen in the Supplementary Material. In Table 6, we show that our large language model generates better modification texts than the rule-based baseline, by evaluating the results of the model trained on the generated data. Qualitative examples comparing the two approaches are provided in the Supplementary Material.
**Training strategies.** In Table 7, we first show the benefit on WebVid-CoVR of training by iterating on target videos instead of CoVR triplets. This is to avoid having the same target video appearing multiple times in a training batch, hence increasing the number of correct negatives that are used in the contrastive loss. Furthermore, sampling hard negatives, as described in Section 3.3, also slightly benefits the downstream performance.
## 5 Conclusions, Limitations, and Societal Impacts
In this work, we studied the new task of CoVR by proposing a simple yet effective methodology to create automatic training data. Our results on several benchmarks (including our manually curated video benchmark, as well as existing image benchmarks) suggest that, while noisy, such an automated and scalable approach can provide effective CoVR model training. One potential limitation of our method is that our dataset may not depict some visible changes due to the way we generate triplets. Moroever, our modification text generation model is suboptimal due to only inputting text (i.e., without looking at images). Future work can incorporate visually grounded modification generation.
**Societal impact.** Our model constitutes a generic multi-modal search tool, but is not intended for a specific application. While there are helpful use cases such as online shopping, traveling, and personal development (i.e., how-to), there may be potential privacy risks associated to surveillance applications, searching for a specific person in videos.
\begin{table}
\begin{tabular}{l|c c c c|c c c c} \hline \hline & & \multicolumn{3}{c|}{WebVid-CoVR} & \multicolumn{3}{c}{CIRR} \\ Model & R@1 & R@5 & R@10 & R@50 & R@1 & R@5 & R@10 & R@50 \\ \hline Rule-based & 43.00 & 70.10 & 79.38 & 94.58 & 15.90 & 39.06 & 52.36 & 79.22 \\ MTG-LLM & **54.87** & **80.99** & **88.30** & **98.11** & **38.55** & **66.80** & **77.25** & **91.61** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Modification text generation:** We compare our MTG-LLM to a rule-based MTG baseline and observe important gains in the downstream performance of the model trained on the generated data. All models are trained for the same number of iterations on the generated data.
\begin{table}
\begin{tabular}{l l|c c c c|c c c c} \hline \hline & & \multicolumn{3}{c|}{WebVid-CoVR\({}_{m}\)} & \multicolumn{3}{c}{CIRR} \\ Iteration & Hard negatives & R@1 & R@5 & R@10 & R@50 & R@1 & R@5 & R@10 & R@50 \\ \hline Triplets & ✓ & 47.68 & 76.14 & 85.46 & 97.25 & 38.53 & 65.66 & 76.22 & 90.34 \\ Videos & ✗ & 54.00 & 80.53 & 88.01 & 98.03 & 38.34 & 66.75 & 77.21 & 91.42 \\ Videos & ✓ & **54.87** & **80.99** & **88.30** & **98.11** & **38.55** & **66.80** & **77.25** & **91.61** \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Ablations on training strategies:** Constructing batches of distinct target videos (and not CoVR triplets) and our hard negative mining both benefit the downstream CoVR/CoIR performance.
## References
* [1] Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. _NeurIPS_, 2021.
* [2] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In _NeurIPS_, 2022.
* [3] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In _ICCV_, 2015.
* [4] Max Bain, Arsha Nagrani, Gul Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In _ICCV_, 2021.
* [5] Max Bain, Arsha Nagrani, Gul Varol, and Andrew Zisserman. A CLIP-hitchhiker's guide to long video retrieval. _arXiv:2205.08508_, 2022.
* [6] Alberto Baldrati, Lorenzo Agnolucci, Marco Bertini, and Alberto Del Bimbo. Zero-shot composed image retrieval with textual inversion. _arXiv:2303.15247_, 2023.
* [7] Alberto Baldrati, Marco Bertini, Tiberio Uricchio, and Alberto Del Bimbo. Effective conditioned and composed image retrieval combining CLIP-based features. In _CVPR_, 2022.
* [8] Tim Brooks, Aleksander Holynski, and Alexei A Efros. InstructPix2Pix: Learning to follow image editing instructions. _arXiv:2211.09800_, 2022.
* [9] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. In _CVPR_, 2021.
* [10] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. _arXiv:1504.00325_, 2015.
* [11] Yanbei Chen and Loris Bazzani. Learning joint visual semantic matching embeddings for language-guided retrieval. In _ECCV_, 2020.
* [12] Yanbei Chen, Shaogang Gong, and Loris Bazzani. Image search with text feedback by visiolinguistic attention learning. In _CVPR_, 2020.
* [13] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: Universal image-text representation learning. In _ECCV_, 2020.
* [14] Ginger Delmas, Rafael Sampaio de Rezende, Gabriela Csurka, and Diane Larlus. ARTEMIS: Attention-based Retrieval with Text-Explicit Matching and Implicit Similarity. In _ICLR_, 2022.
* [15] Eric Dodds, Jack Culpepper, Simao Herdade, Yang Zhang, and Kofi Boakye. Modality-agnostic attention fusion for visual search with text feedback. _CoRR_, abs/2007.00145, 2020.
* [16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _ICLR_, 2021.
* [17] Han Fang, Pengfei Xiong, Luhui Xu, and Yu Chen. CLIP2Video: Mastering video-text retrieval via image clip. _arXiv:2106.11097_, 2021.
* [18] Zijian Gao, Jingyu Liu, Sheng Chen, Dedan Chang, Hao Zhang, and Jinwei Yuan. CLIP2TV: an empirical study on transformer-based methods for video-text retrieval. _arXiv:2111.05610_, 2021.
* [19] Yuying Ge, Yixiao Ge, Xihui Liu, Dian Li, Ying Shan, Xiaohu Qie, and Ping Luo. Bridgeformer: Bridging video-text retrieval with multiple choice questions. In _CVPR_, 2022.
** [20] Sonam Goenka, Zhaoheng Zheng, Ayush Jaiswal, Rakesh Chada, Yue Wu, Varsha Hedau, and Pradeep Natarajan. FashionVLP: Vision language transformer for fashion retrieval with feedback. In _CVPR_, 2022.
* [21] Geonmo Gu, Sanghyuk Chun, Wonjae Kim, HeeJae Jun, Yoohoon Kang, and Sangdoo Yun. CompDiff: Versatile composed image retrieval with latent diffusion. _arXiv:2303.11916_, 2023.
* [22] Surgan Jandial, Pinkesh Badjatiya, Pranit Chawla, Ayush Chopra, Mausoom Sarkar, and Balaji Krishnamurthy. SAC: Semantic attention composition for text-conditioned image retrieval. In _WACV_, 2022.
* [23] Surgan Jandial, Ayush Chopra, Pinkesh Badjatiya, Pranit Chawla, Mausoom Sarkar, and Balaji Krishnamurthy. TRACE: Transform aggregate and compose visiolinguistic representations for image search with text feedback. _CoRR_, abs/2009.01485, 2020.
* [24] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In _ICML_, 2021.
* [25] Jongseok Kim, Youngjae Yu, Hoeseong Kim, and Gunhee Kim. Dual compositional learning in interactive image retrieval. _AAAI_, 2021.
* [26] Seungmin Lee, Dongwan Kim, and Bohyung Han. CoSMo: Content-style modulation for image retrieval with text feedback. In _CVPR_, 2021.
* [27] Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L Berg, Mohit Bansal, and Jingjing Liu. Less is more: ClipBERT for video-and-language learning via sparse sampling. In _CVPR_, 2021.
* [28] Matan Levy, Rami Ben-Ari, Nir Darshan, and Dani Lischinski. Data roaming and early fusion for composed image retrieval. _arXiv:2303.09429_, 2023.
* [29] Dongxu Li, Junnan Li, Hongdong Li, Juan Carlos Niebles, and Steven CH Hoi. Align and prompt: Video-and-language pre-training with entity prompts. In _CVPR_, 2022.
* [30] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In _ICML_, 2023.
* [31] Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _ICML_, 2022.
* [32] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In _NeurIPS_, 2021.
* [33] Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. HERO: Hierarchical encoder for video+language omni-representation pre-training. In _EMNLP_, 2020.
* [34] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In _ECCV_, 2020.
* [35] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. _arXiv:2304.08485_, 2023.
* [36] Yuqi Liu, Pengfei Xiong, Luhui Xu, Shengming Cao, and Qin Jin. TS2-Net: Token shift and selection transformer for text-video retrieval. In _ECCV_, 2022.
* [37] Zheyuan Liu, Cristian Rodriguez-Opazo, Damien Teney, and Stephen Gould. Image retrieval on real-life images with pre-trained vision-and-language models. In _ICCV_, 2021.
* [38] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In _NeurIPS_, 2019.
* [39] Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, and Tianrui Li. CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval. _arXiv:2104.08860_, 2021.
* [40] Yiwei Ma, Guohai Xu, Xiaoshuai Sun, Ming Yan, Ji Zhang, and Rongrong Ji. X-CLIP: End-to-end multi-grained contrastive learning for video-text retrieval. In _ACMMM_, 2022.
* [41] Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. End-to-end learning of visual representations from uncurated instructional videos. In _CVPR_, 2020.
* [42] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. HowTo100M: Learning a text-video embedding by watching hundred million narrated video clips. In _ICCV_, 2019.
* [43] Arsha Nagrani, Chen Sun, David Ross, Rahul Sukthankar, Cordelia Schmid, and Andrew Zisserman. Speech2action: Cross-modal supervision for action recognition. In _CVPR_, 2020.
* [44] Filip Radenovic, Abhimanyu Dubey, Abhishek Kadian, Todor Mihaylov, Simon Vandenhende, Yash Patel, Yi Wen, Vignesh Ramanathan, and Dhruv Mahajan. Filtering, distillation, and hard negatives for vision-language pre-training. In _arXiv_, 2023.
* [45] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In _ICML_, 2021.
* [46] Hanoona Rasheed, Muhammad Uzair Khattak, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. Fine-tuned clip models are efficient video learners. In _CVPR_, 2023.
* [47] Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, and Tomas Pfister. Pic2word: Mapping pictures to words for zero-shot composed image retrieval. _CVPR_, 2023.
* [48] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade W Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. LAION-5B: An open large-scale dataset for training next generation image-text models. In _NeurIPS Datasets and Benchmarks Track_, 2022.
* [49] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_, 2018.
* [50] Minchul Shin, Yoonjae Cho, ByungSoo Ko, and Geonmo Gu. RTIC: Residual Learning for Text and Image Composition using Graph Convolutional Network. _arXiv:2104.03015_, 2021.
* [51] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. VL-BERT: Pre-training of generic visual-linguistic representations. In _ICLR_, 2019.
* [52] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. In _ACL_, 2019.
* [53] Yuchong Sun, Hongwei Xue, Ruihua Song, Bei Liu, Huan Yang, and Jianlong Fu. Long-form video-language pre-training with multimodal temporal contrastive learning. In _NeurIPS_, 2022.
* [54] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothe Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. _arXiv:2302.13971_, 2023.
* [55] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. _arXiv:1807.03748_, 2018.
* [41] Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. Diverse beam search: Decoding diverse solutions from neural sequence models. _arXiv:1610.02424_, 2016.
* an empirical odyssey. In _CVPR_, 2019.
* [43] Hui Wu, Yupeng Gao, Xiaoxiao Guo, Ziad Al-Halah, Steven Rennie, Kristen Grauman, and Rogerio Feris. Fashion IQ: A new dataset towards retrieving images by natural language feedback. In _CVPR_, 2021.
* [44] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. VideoCLIP: Contrastive pre-training for zero-shot video-text understanding. In _EMNLP_, 2021.
* [45] Hongwei Xue, Tiankai Hang, Yanhong Zeng, Yuchong Sun, Bei Liu, Huan Yang, Jianlong Fu, and Baining Guo. Advancing high-resolution video-language representation with large-scale video transcriptions. In _CVPR_, 2022.
* [46] Hongwei Xue, Yuchong Sun, Bei Liu, Jianlong Fu, Ruihua Song, Houqiang Li, and Jiebo Luo. CLIP-ViP: Adapting pre-trained image-text model to video-language representation alignment. _arXiv_, 2022.
* [47] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just ask: Learning to answer questions from millions of narrated videos. In _ICCV_, 2021.
* [48] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Learning to answer visual questions from web videos. _IEEE TPAMI_, 2022.
* [49] Jianwei Yang, Yonatan Bisk, and Jianfeng Gao. TACo: Token-aware cascade contrastive learning for video-text alignment. _arXiv_, 2021.
* [50] Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. FILIP: Fine-grained interactive language-image pre-training. In _ICLR_, 2022.
* [51] Youngjae Yu, Seunghwan Lee, Yuncheol Choi, and Gunhee Kim. CurlingNet: Compositional learning between images and text for fashionIQ data. _arXiv:2003.1229_, 2020.
* [52] Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, et al. Florence: A new foundation model for computer vision. _arXiv:2111.11432_, 2021.
* [53] Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yanpeng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, and Yejin Choi. MERLOT Reserve: Neural script knowledge through vision and language and sound. In _CVPR_, 2022.
* [54] Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. MERLOT: Multimodal neural script knowledge models. In _NeurIPS_, 2021.
* [55] Yue Zhao, Ishan Misra, Philipp Krahenbuhl, and Rohit Girdhar. Learning video representations from large language models. In _CVPR_, 2023.
* [56] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J Corso, and Jianfeng Gao. Unified vision-language pre-training for image captioning and VQA. In _AAAI_, 2020. | ## Review
### Summary
This paper presents a novel approach to Compositional Video Retrieval (CoVR) by creating a large-scale dataset called WebVid-CoVR, which consists of 1.6 million triplets generated from video-caption pairs. The authors propose a systematic methodology for dataset creation and introduce a benchmark for CoVR, along with a smaller, manually annotated evaluation set. The experimental results indicate that the model trained on the WebVid-CoVR dataset shows promising performance in both zero-shot and fine-tuning scenarios. However, there are concerns regarding the dataset's limitations and the clarity of some aspects of the presentation.
### Strengths
- The automatic triplet generation pipeline is well-designed and scalable.
- Strong baseline results on CoVR demonstrate transferability to Compositional Image Retrieval (CoIR).
- The paper introduces a novel task in video retrieval and establishes a useful benchmark.
- The methodology for creating the dataset is logical, especially the manually annotated portion.
- The paper is generally well-written and easy to follow.
### Weaknesses
- The triplet generation relies heavily on caption similarity, potentially ignoring visual characteristics.
- There is insufficient analysis of the dataset, particularly regarding visual elements.
- The details regarding the overhead for dataset augmentation are lacking.
- Only one model is evaluated, and additional baselines should be included.
- The proposed method does not adequately address dynamic content in examples.
- The training method is standard and lacks innovation, and the training data is limited in scope.
### Questions
- What annotations are added to the dataset? (line 145)
- What is the main source of noise within the dataset?
- Has any analysis been performed on the types of modifications in the dataset?
- How are learnable parameters \alpha and \tau determined?
- Could the generated rules be paraphrased by a LLM, and would this affect performance?
- Is the Top K sampling referring to tokens within a modification text?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is sound, but there are notable gaps in evaluation and analysis.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is generally clear, but some sections are ambiguous and could be better articulated.
### Contribution
**Score:** 3
**Description:** 3 = good; the dataset and task are significant contributions, though there are limitations that need to be addressed.
### Rating
**Score:** 5
**Description:** 5 = borderline accept; the paper is technically solid but requires additional evaluation and clarity.
### Paper Decision
**Decision:** Reject
**Reasons:** The paper presents interesting ideas and a significant dataset; however, it lacks comprehensive evaluation, and there are several ambiguities in the presentation. The reliance on caption similarity without adequately addressing visual similarity and the limited dataset analysis are critical concerns. Moreover, the overall lack of a reviewer champion indicates a consensus that further work is necessary before acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
[https://www.camel-ai.org](https://www.camel-ai.org)
Guohao Li
Equal contribution
Hasan Abed Al Kader Hammoud
Equal contribution
Hani Itani
Equal contribution
Dmitrii Khizbullin
Bernard Ghanem
King Abdullah University of Science and Technology (KAUST)
###### Abstract
The rapid advancement of chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents, and provides insight into their "cognitive" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named _role-playing_. Our approach involves using _inception prompting_ to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how _role-playing_ can be used to generate conversational data for studying the behaviors and capabilities of a society of agents, providing a valuable resource for investigating conversational language models. In particular, we conduct comprehensive studies on _instruction-following cooperation_ in multi-agent settings. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond: [https://github.com/camel-ai/camel](https://github.com/camel-ai/camel).
## 1 Introduction
_"What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle."_
_- Marvin Minsky, The Society of Mind, p. 308_
Confronted with the complexities of real-world tasks, solving them often requires multiple steps. The rapid progress of chat-based large-scale language models (LLMs) has yielded remarkable achievements in complex task-solving [82, 84, 116, 89, 5, 10, 122, 13]. Nevertheless, it is worth noting that their success is heavily reliant on human input to guide the conversation in the right direction. This reliance necessitates users to provide relevant and precise prompts based on their intentions and the chat agent's feedback. This can be challenging, time-consuming, and sometimes impossible. Crafting effective prompts often demands a deep understanding and expertise of a particular domain of knowledge. Consider an individual who lacks trading expertise; they would find it difficult to create suitable prompts for directing a chat agent to develop a trading application. This predicament is raising a crucial question: can we replace human intervention with an autonomous communicative agent capable of steering the conversation toward task completion with minimal human supervision? To tackle this issue, it is crucial to conduct more research exploring the potential,capabilities, and limitations of communicative agents that operate entirely on their own to complete tasks. Understanding how multiple agents interact with each other is important for anticipating the future of artificial intelligence. The dynamics of collaborating or competing agents play a key role in determining the success of AI systems [6; 26; 27; 84; 99; 9; 10].
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their "cognitive" processes. Several challenges arise when asking a society of agents to autonomously cooperate on completing tasks. Examples we encountered in our preliminary analysis include _role flipping_, _assistant repeating instructions_, _fake replies_, and _infinite loop of messages_. Therefore, it is critical to investigate ways to align these models with human intentions and to explore means enabling their effective cooperation. To address these issues, we propose a novel cooperative agent framework named _role-playing_ to automate cooperation between communicative agents. Specifically, our proposed approach involves using _role-playing_ with _inception prompting_ to autonomously guide the communicative agents toward task completion. Only a preliminary _idea_ is needed from human to guide the conversations toward complex task-solving.
Our library, which we make publicly available, provides modular functionality, and includes implementations of different agents, examples of well-crafted prompts, and data explorers. We hope our library serves as a ground for future research in various areas such as multi-agent systems, cooperative AI, game theory simulations, social analysis, AI ethics, AI alignment, and beyond. In addition, our _role-playing_ method provides a highly scalable way to generate conversational data for studying the behaviors and capabilities of chat agents. We showcase how _role-playing_ can be used to let chat agents communicate with each other for task completion and record their conversations for behavior analysis and capability understanding. In particular, we consider two cooperative scenarios of role-playing and generate two large conversational, task-oriented, and instruction-following datasets: _AI Society_ and _Code_. We also use our framework to collect two single-turn question-answer datasets, _Math_ and _Science_, for LLM ability emergence study. Furthermore, we generate a _Misalignment_ dataset that is a simulation of possible malicious applications which demonstrate the potential risks of an unaligned autonomous agent system. The datasets offer a valuable resource for investigating conversational language models, enabling them to comprehend and react to human language more effectively. Furthermore, our _role-playing_ offers a scalable method of creating conversational instruction-following data, which can potentially enhance the development of more advanced language models. We show that solutions derived from our _role-playing_ framework outperform those generated in a single shot by gpt-3.5-turbo [82] in both GPT4 and human evaluations. We also study knowledge emergence in LLMs by fine-tuning LLaMA [117] on progressively growing datasets generated through our framework. Additionally, we evaluate our code generation capabilities through benchmarking our final model on HumanEval [18] and HumanEval\({}^{+}\)[69].
**Contributions.** Our contributions are fourfold: (1) We introduce a novel cooperative agent framework, _role-playing_, that allows communicative agents to collaborate autonomously toward completing tasks while requiring minimal human intervention; (2) Our framework offers a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems. It illuminates the challenges of achieving autonomous cooperation, and provides strategies for addressing them. We showcase the potential power of multi-agent collaboration for complex-task solving; (3) We demonstrate the significant emergence of LLM training abilities by utilizing the datasets we have collected from simulating four distinct agent collaboration scenarios; (4) We have open-sourced our library, containing implementations of various agents, data generation pipelines, data analysis tools, and collected datasets, to support research on communicative agents and beyond.
## 2 Related Work
**Communicative Agents.** Communication between agents has been studied for a long time [76; 77]. There are many ways to facilitate communication between agents, and with agents [29; 90; 97]. Among these, natural language is considered the most natural form of communication [97]. By enabling agents to function as communicators themselves, they become capable of solving complex tasks [113; 85; 72; 3; 30; 111; 79; 41; 28; 102; 80; 106; 35; 49; 2; 51; 1; 55; 50; 65; 92]. Communication between AI agents can occur in a competitive setting [115; 108] or a cooperative setting [40; 27; 11; 137; 70]. Cooperative AI refers to artificial intelligence systems that are designed to work together with humans and other AI systems to achieve common goals [24; 125]. Cooperative AI systems take into account the needs and capabilities of other agents in the system and actively seek to collaborate and coordinate their actions with them, which has many potential benefits, includingincreased efficiency, improved decision-making, and the ability to tackle complex problems that are beyond the reach of any single agent. However, designing effective cooperative AI systems is still an active area of research, as it requires addressing a range of technical, ethical, and social challenges [27]. Our work enables communicative agents to engage in a conversation and cooperate with each other to solve assigned tasks. The agents, each assigned a distinct role, are expected to apply their expertise and knowledge to solve their common task.
**Instructional LLMs and Prompt Engineering.** LLMs are trained on diverse text data and excel in text completion, with various downstream NLP applications [12; 22; 47; 131; 117]. However, InstructGPT suggests that LLMs may not align with user intent, proposing reinforcement learning from human feedback (RLHPs) [23] and Instruction Fine-Tuning (IFT) [121] to improve LLMs' relevance and appropriateness to user instructions. Special types of instruction or prompting methods, such as Chain-of-Thought (CoT) [123], zero-shot-CoT [61], and ReAct [126], have recently been developed to enhance the performance of LLMs on reasoning, arithmetic and decision making tasks [134; 118; 52; 73; 31; 103; 43; 64; 132; 46; 133; 105; 128; 25; 81; 109]. These techniques underpin the impressive capabilities of recent dialogue LLMs [106; 116; 36; 9; 82; 13], which aim to simulate human-like conversations and provide personalized and interactive experiences for users, exhibiting the behavior of conversational AI agents [33]. However, generating instruction datasets is a crucial challenge in building instruct-based LLMs, with existing datasets ranging from crowdsourced to generated. Hand-crafted instruction instances are available in [120], while leveraging previously crowdsourced NLP datasets is a less labor-intensive curation approach [121; 71; 78; 53]. LLMs have been explored for data generation in [101; 63; 68; 114], and Self-Instruct [119] proposes a semi-automated process for instruction instance generation. Unnatural-Instruction [48] collects instruction instances by prompting a language model with only three seed examples and paraphrasing the generated instances to expand the dataset. There is also a large chunk of work that has proposed methods for automatic dataset creation [67; 57; 19; 75; 20; 98; 59; 96; 129; 62; 130; 86; 8]. Another important challenge is prompt engineering. The quality of the prompt used to guide LLMs significantly affects its performance [91; 12; 66]. While LMs pre-trained on large data can implicitly learn tasks with few-shot prompting, hand-crafted prompts may not always suffice. Automated prompt generation methods have been proposed, such as gradient-guided search [104], mining-based and paraphrasing-based techniques [54], a meta-prompt [93], and automatic instruction selection and generation [136]. In this work, we introduce a conversational LLM auto-prompting method called _Inception Prompting_, which enables agents to prompt each other to solve tasks through _Role-Playing_. The AI user continuously provides instructions to the AI assistant for task-solving. This enables us to save the streaming instruction-solution pairs and create diverse, instructional, conversational, and task-oriented datasets. These datasets can be used to analyze the behavior and capabilities of LLMs and for future research for fine-tuning LLMs with conversational instructions.
**AI Alignment.** AI alignment is a field that aims to ensure that AI systems adhere to their intended goals, interests, and values, as envisioned by their designers [4; 39; 110; 32; 38; 74; 10]. The first attempt at AI alignment was made through the "Three Laws of Robotics," which was introduced by Isaac Asimov in his science fiction stories [6]. Developing aligned AI systems is crucial for achieving desired objectives while avoiding unintended consequences. Research in AI alignment focuses on discouraging AI models from producing false, offensive, deceptive, or manipulative information that could result in various harms [56; 112; 42; 37]. Achieving a high level of alignment requires researchers to grapple with complex ethical, philosophical, and technical issues. We conduct extensive experiments to study different _role-playing_ situations, which probe the alignment of LLMs.
## 3 Methodology
In this paper, we focus on studying communicative agents under cooperative settings where they share common interests. In particular, we study the assistant-user scenario, where a preliminary idea is given at the start. Agents will conceptualize the idea into a specific task and complete it autonomously through conversations.
### Role-playing Framework
_"What's the most resilient parasite? An Idea. A single idea from the human mind can build cities. An idea can transform the world and rewrite all the rules. Which is why I have to steal it."_
_- Dom Cobb, Inception_Our proposed framework is a novel _role-playing_ approach for studying multiple communicative agents. Specifically, we concentrate on task-oriented role-playing that involves one _AI assistant_ and one _AI user_. After the multi-agent system receives a preliminary _idea_ and the _role assignment_ from human users, a _task-specific agent_ will provide a detailed description to make the idea specific. Afterwards, the AI assistant and AI user will cooperate on completing the specified task through multi-turn conversations until the AI user determines the task is done. The AI user is responsible for giving instructions to the AI assistant and directing the conversation toward task completion. On the other hand, the AI assistant is designed to follow the instructions from the AI user and respond with specific solutions. The whole _role-playing_ framework is depicted in Figure 1.
**Human Input and Task Specifying.** The _role-playing_ session will be instantiated from an _idea_ and _selected roles_ by humans. As an example in Figure 1, a human has a preliminary idea to _develop a trading bot for the stock market_. Humans may or may not have the knowledge about how the idea can be realized. What is needed is only to designate the potential roles that can implement the idea. For instance, a _Python Programmer_ could collaborate with a _Stock Trader_ to realize the idea of _developing a trading bot for the stock market_. After the idea and roles are determined, the _task specifier_ agent will brainstorm a specific task that the AI Assistant role can help with the AI user role to complete based on the input idea. An example of a specified task in this scenario could be: _develop a trading bot with a sentiment analysis tool that can monitor social media platforms for positive or negative comments about a particular stock, and execute trades based on sentiment analysis results_. The main motivation for introducing a task specifier is that conversational agents usually require a concrete task prompt for realizing the task which might be challenging or time-consuming for a non-domain expert. Therefore, the task specifier agent serves as an enhanced imagination module for the idea implementation. Please note that, when studying our framework at a large scale for AI society and Code scenarios, we generate _roles_ and _ideas_ automatically by prompting LLMs instead of relying on human inputs. For our generated Math and Science datasets we generated problem _topics_, _subtopics_, and _problems_ automatically by prompting LLMs.
**AI Assistant-User Role Assignment.** After the task specification, The AI assistant role and the AI user role will be assigned to the user agent and the assistant agent correspondingly to complete the specified task. In practice, a system message is passed to each agent declaring their role. We refer to the assistant system prompt/message by \(\mathcal{P_{A}}\) and that of the user by \(\mathcal{P_{U}}\). The system messages are passed to the agents before the conversations start. Let \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) denote two large-scale auto-regressive language models [82]. When the system message is passed to those models respectively, we
Figure 1: **CAMEL Role-Playing Framework. Our role-playing setup starts with the human user having an idea they want to implement, e.g. develop a trading bot for the stock market. The roles involved in this task would be an AI assistant agent who is a python programmer and an AI user agent who is a stock trader. The task is made more specific using our task specifier agent, leading to a well-defined task for the assistant to solve. Both AI user and AI assistant are provided with the specified task, after which they collaboratively communicate by chatting with each other in an instruction-following fashion to solve the specified task.**
obtain \(\mathcal{A}\leftarrow\mathcal{F}_{1}^{\mathcal{P}_{\mathcal{A}}}\) and \(\mathcal{U}\leftarrow\mathcal{F}_{2}^{\mathcal{P}_{\mathcal{U}}}\) which are referred to as the assistant and user agents respectively. In Figure 1, the AI assistant and the AI user are assigned the roles of a _Python Programmer_ and a _Stock Trader_ at the beginning of the role-playing session respectively. The AI user serves as a task planner, engaging in interactive planning to determine feasible steps for the AI assistant to execute. Meanwhile, the AI assistant acts as a task executor, offering solutions, executing planned steps, and providing responses to the AI user.
**Conversation Towards Task-Solving.** After the role assignment is completed, the AI assistant \(\mathcal{A}\) and AI user \(\mathcal{U}\) will collaborate in an instruction-following manner to accomplish the task. In the AI assistant-user scenario, the AI user is responsible for providing instructions, and the assistant is expected to respond with a solution that fulfills the instructions. Formally, we denote the user instruction message obtained at time \(t\) by \(\mathcal{I}_{t}\) and the assistant solution by \(\mathcal{S}_{t}\). The set of conversational messages obtained up until time \(t\) is denoted by Equation (1) shown below:
\[\mathcal{M}_{t}=\{(\mathcal{I}_{0},\mathcal{S}_{0}),...,(\mathcal{I}_{t}, \mathcal{S}_{t})\}=\{(\mathcal{I}_{i},\mathcal{S}_{i})\}|_{i=0}^{t} \tag{1}\]
At the next time step, \(t+1\), the AI user \(\mathcal{U}\) takes the historical conversation message set \(\mathcal{M}_{t}\) and provides a new instruction \(\mathcal{I}_{t+1}\), as shown in Equation (2). The produced instruction message \(\mathcal{I}_{t+1}\) is then passed, along with message set \(\mathcal{M}_{t}\), to the AI assistant \(\mathcal{A}\). The AI assistant will then respond with a solution, denoted by \(\mathcal{S}_{t+1}\) in Equation (3):
\[\mathcal{I}_{t+1}=\mathcal{U}(\mathcal{M}t)\hskip 56.905512pt(2)\hskip 56.905512pt \mathcal{S}t+1=\mathcal{A}(\mathcal{M}t,\mathcal{I}t+1) \tag{3}\]
After obtaining the solution \(\mathcal{S}_{t+1}\) to the instruction \(\mathcal{I}_{t+1}\), the message set is updated using Equation (4) to obtain \(\mathcal{M}_{t+1}\):
\[\mathcal{M}_{t+1}\leftarrow\mathcal{M}_{t}\cup(\mathcal{I}_{t+1},\mathcal{S}_ {t+1}) \tag{4}\]
Note that the formulation above not only models AI-AI communicative scenarios, but it can also be easily extended to model human-AI communication or communication between more than two agents. Specifically, we can use message-passing graphs to model communication between an arbitrary number of agents. In Figure 1, we observe that the AI user initiates the _installation and import of essential Python libraries for sentiment analysis and stock trading_ by instructing the AI assistant through conversations. This example is drawn from our experiments, and the entire conversation is available in the Appendix.
**Critic-In-The-Loop.** To enhance the controllability of the role-playing framework, we introduce a critic agent capable of selecting proposals from or providing feedback to the role-playing agents. This enables tree-search-like decision-making for task-solving. In practice, the critic can be either an AI agent or a human. The detailed implementation and case studies can be found in the Appendix.
### Inception Prompting
Since prompt engineering is crucial to our role-playing framework, this section delves deeply into our prompting techniques. Our prompt engineering occurs solely at the beginning of role-playing, for task specification and role assignment. Once the conversation phase commences, the AI assistant and AI user prompt each other automatically in a loop until termination. As such, we refer to our technique as _Inception Prompting_. Our Inception prompt consists of three prompts: the task specifier prompt \(\mathcal{P}_{\mathcal{T}}\), the assistant system prompt \(\mathcal{P}_{\mathcal{A}}\), and the user system prompt \(\mathcal{P}_{\mathcal{U}}\). As an example, we consider the inception prompt of the _AI Society_ scenario. The templates for these prompts of _AI Society_ role-playing are shown in Figure 2. The task specifier prompt contains information about the roles of the AI assistant and AI user in the role-playing session. Therefore, the task specifier agent can take a preliminary task/idea as input and generate a specific task using imagination. The AI assistant system prompt \(\mathcal{P}_{\mathcal{A}}\) and the AI user system prompt \(\mathcal{P}_{\mathcal{U}}\) are mostly symmetrical and include information about the assigned task and roles, communication protocols, termination conditions, and constraints or requirements to avoid unwanted behaviors. The prompt designs for both roles are crucial to achieve autonomous cooperation between agents. It is non-trivial to engineer prompts that ensure agents act in alignment with our intentions. We take the prompt templates from the _AI Society_ in Figure 2 as an example to explain our key design choices. The prompts used for the Code scenario follow a similar sprint as the AI society scenario, but with some additional engineering related to programming languages. More details in the Appendix.
**Prompt Engineering.** To delve deeper into the details in Figure 2, we start by chunking the various parts of the AI assistant system prompt \(\mathcal{P}_{\mathcal{A}}\) shown below:
* Never forget you are a <ASSISTANT_ROLE> and I am a <USER_ROLE>. This assigns the chosen role to the assistant agent and provides it with information about the user's role.
* Never flip roles! Never instruct me! This prevents agents from flipping roles. In some cases, we have observed the assistant and the user switching roles, where the assistant suddenly takes control and instructs the user, and the user follows those instructions.
* You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons. This prohibits the agent from producing harmful, false, illegal, and misleading information.
* Unless I say the task is completed, you should always start with: Solution: <YOUR_SOLUTION>. <YOUR_SOLUTION> should be specific, and provide preferable implementations and examples for task-solving. This encourages the assistant always responds in a consistent format, avoiding any deviation from the
Figure 2: **Inception Prompt of AI Society Role-Playing. This shows the task specifier prompt, assistant system prompt, and user system prompt which are used for studying the AI society scenario.**structure of the conversation, and preventing vague or incomplete responses, which we refer to as flake responses, such as "I will do something".
* Always end your solution with: Next request. This ensures that the assistant keeps the conversation going by requesting a new instruction to solve.
For the AI user system prompt \(\mathcal{P}_{\mathcal{U}}\), we strive to maintain as much symmetry as possible with respect to the AI assistant system prompt. Apart from the opposite role assignment, the user system prompt differs from the assistant prompt in the following ways:
* You must instruct me... to complete the task ONLY in the following two ways: 1. Instruct with a necessary input:...; 2. Instruct without any input:... This follows the typical data structure of instruction-following, which allows the generated instruction-solution pairs to be easily used for fine-tuning LLMs.
* Keep giving me instructions and necessary inputs until you think the task is completed. When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>. We introduce an end-of-task token, namely, <CAMEL_TASK_DONE>. This token is used once the user believes the task is done. This ensures that the chat is terminated when the user is satisfied. Without doing so, the agents might fall into a chatting loop where they keep on saying "thank you" to each other or "goodbye" indefinitely.
## 4 Experiments
In this section, we will discuss the various experiments that we conducted to arrive at our final design choices. Specifically, we will examine the interesting observations, challenging issues, and several examples we have encountered while enabling agents to communicate with each other under different prompt design choices to achieve autonomous cooperation. In our experiments, we employed two _gpt-3.5-turbo_ agents, referred to as LLM agents for simplicity, with _Inception Prompts_, as described in Section 3.2, to simulate assistant-user cooperation. For our analysis, we set our attention on AI Society setting. We also gathered conversational data, named _CAMEL AI Society_ and _CAMEL Code_ datasets and problem-solution pairs data named _CAMEL Math_ and _CAMEL Science_ and analyzed and evaluated their quality. Moreover, we will discuss potential extensions of our framework and highlight both the risks and opportunities that future AI society might present.
### Role-Playing for AI Society
To create our AI Society dataset, we have developed a scalable approach that follows a series of steps. Firstly, we prompt the LLM agent to generate possible roles for the assistant and the user. We achieve this by providing the LLM agent with specific prompts designed to elicit these roles. Next, we ask the LLM agent to generate a range of possible tasks that can be solved through collaboration between the assistant and user roles generated previously. After generating a range of possible tasks as described
Figure 3: **Data Generation Prompts. In order to maintain a scalable approach our data parameters are generated using an LLM model to reduce human involvement in the generation process. The generation prompts for both AI Society dataset are summarized in this figure.**
in the previous step, we then use the task specifier prompt passed to the LLM agent to make the task more specific. The prompts for assistant role generation, user role generation, and task generation are shown in Figure 5 (_AI Society_). For our AI society dataset, we generated 50 assistant roles, 50 user roles, and 10 tasks for each combination of roles yielding a total of 25,000 conversations. The generated assistant roles and user roles for AI Society as well as details about the generation of Code, Math and Science datasets can be found in the Appendix.
**Challenges and Observations.** In this section, we explore the four main challenges that we identified during our analysis of the generated datasets. Our observations shed light on some interesting aspects of cooperative AI and the difficulties that arise in its development.
* Role Flipping: One challenge we encountered was role flipping, where the assistant and user switch roles during the conversation. This issue typically arises when the assistant starts providing instructions or commands instead of following the user's prompts, which can lead to confusion and a reversal of roles. To avoid role flipping, it is crucial for the assistant not to ask questions, as this can also contribute to the problem.
* Assistant Repeats Instruction: Another challenge that we observed was the assistant simply repeating the user's instructions without any role flipping occurring.
* Flake Replies: We also observed instances where the assistant agent responds with a flake reply, often taking the form of "I will...". These messages do not contribute to the task at hand, as the assistant promises to take action but ultimately fails to follow through.
* Infinite Loop of Messages: An interesting challenge that we encountered was when the assistant and user engage in an infinite loop of meaningless conversation, such as repeatedly thanking each other or saying goodbye without progressing the task. Interestingly, in some cases, the assistant and user are aware that they are stuck in a loop, but are unable to break out of it.
The Appendix shows examples of each of the four challenges discussed above. Overall, our observations highlight the complexity of cooperative AI development and the need for continued exploration and innovation to overcome the challenges we face. By identifying these issues, we hope to contribute to the development of more effective and engaging cooperative AI systems.
**Termination Conditions.** The conversation between the assistant and user agents is designed to follow a specific format to ensure consistent and accurate data generation. To ensure that both the user and assistant adhere to their respective roles and responsibilities, certain conditions have been set in place to terminate the chat if necessary. These conditions are outlined below:
* User No Instruct: If the user does not instruct the assistant for 3 rounds, conversation is ended.
* Assistant Instruct: If the assistant provides an instruction to the user, it indicates a role reversal, and the conversation is terminated.
* End of Task Token: If the user believes that the task has been solved, they are expected to say <CAMEL_TASK_DONE> to signify the completion of the task. Once this message is received, the conversation is terminated.
* Assistant&User Token Limit: Given that gpt-3.5-turbo has a limitation on the number of tokens, the conversation is terminated if either the assistant or the user reach the token limit.
* Maximum Number of Messages: To keep the cost of generated chats in check, we have set a maximum limit of 40 messages. This limit guarantees a long enough conversation between the user and assistant while also ensuring that the data generated is not too costly to produce. The cost grows quadratically with the length of the conversation, making it essential to set a limit.
## 5 Evaluation
### Agent Evaluation
In order to assess the performance of CAMEL (Cooperative Role-playing Communication), we conduct two types of evaluations: (1) Human evaluation, and (2) GPT4 evaluation. We randomly select 100 tasks from our AI Society dataset for evaluation and 100 tasks from our Code dataset. Then, we employ the GPT4 model to summarize the content of the CAMEL conversation-basedsolution, presenting a consolidated final solution. Particularly, a GPT4 is used since it possesses a larger token limit which is suitable for summarization. Summarization also makes CAMEL agents' solution undetectable by its format, allowing for a more fair comparison. Subsequently, this solution is compared with a single-shot solution generated by the gpt-3.5-turbo model for the same task. Sample tasks are provided in the Appendix.
**Human Evaluation.** For this evaluation, we present both the CAMEL summarized agent solution and the gpt-3.5-turbo single-shot solution side-by-side to human participants. The identity behind each solution is not revealed. Participants are then asked to vote on whether one solution is superior to the other or if they are equally good. A total of 453 responses were collected during this evaluation. Note that, human evaluation is only done for AI Society, as assessing code is generally harder for humans (without running the code).
**GPT4 Evaluation.** We engage a GPT4 agent to evaluate the effectiveness of Model 1 (CAMEL Agent solution) versus Model 2 (gpt-3.5-turbo single-shot solution) for each task. More specifically, we prompt GPT4 to score and decide which solution of the two solutions is better.
**Results.** The summarized results of each evaluation are outlined in Table 1 which showcases that the CAMEL solution outperforms gpt-3.5-turbo single-shot solution in both the human evaluation and the GPT4 evaluation by a big margin. It is also worth noting that both human evaluation and GPT4 evaluation are highly aligned.
### GPT4 for ChatBot Evaluation
In this section, we progressively fine-tune a LLaMA 7B model on our generated datasets. By progressively incorporating diverse datasets like AI society, code, math, and science, we expect fine-tuned model to demonstrate the ability to develop an increasingly sophisticated understanding of these domains.
We initially start by training on AI society dataset, which aims to let the model learn about human interactions and societal dynamics. As additional datasets were introduced, such as code, the model gained knowledge of programming logic and syntax, enabling it to generate coherent and executable code snippets. The inclusion of the math dataset further expanded the model's capabilities, allowing it to solve complex equations, reason about abstract concepts, and perform precise calculations. Finally, exposure to the science dataset broadened the model's understanding of scientific theories, empirical observations, and experimental methods. The emergence of model capabilities is measured by evaluating the quality of the model responses, before and after training on the new domain, on a set of questions of varying difficulties from each domain. More precisely, the model is tested on 20 AI Society related tasks, 20 coding tasks, 20 math tasks and 60 science tasks.
Those results are highlighted in Table 2 where we see that each time we add a dataset, the model performs better on the incorporated domain. Note that to measure the quality of the models' responses, we follow the evaluation from Section T, which involves prompting a GPT4 agent to score and decide which solution is better. It is worth noting that an improvement on other domains is also observed in some cases such as when we train on Code we improve on Science. This is because our Code dataset contains problems that solve tasks in particular domains which include scientific domain. Similarly, training on AI Society improves code as AI Society contains the role of a "programmer" and hence coding related conversations. Finally, note that the draws observed in LLaMA-7B vs AI Society in Math reflects equally bad solutions compared to the draws observed in AI Society + Code + Math vs AI Society + Code + Math + Science where the draws are equally good solutions. This progression from AI society to code to math to science highlights the potential of AI models to acquire a versatile
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **Evaluation Type** & **Draw** & _gpt-3.5-turbo Wins_ & **CAMEL Agents Win** \\ \hline \multirow{2}{*}{**AI Society**} & **Human Evaluation** & 13.3\% & 10.4\% & **76.3\%** \\ & **GPT4 Evaluation** & 4.0\% & 23.0\% & **73.0\%** \\ \hline
**Code** & **GPT4 Evaluation** & 0.0\% & 24.0\% & **76.0\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Agent Evaluation Results**: Results of the evaluations of the CAMEL agent against gpt-3.5-turbo using both human evaluators and GPT4 consistently show that utilizing a multi-agent cooperative approach is more effective than gpt-3.5-turbo’s single shot solution.
and adaptable knowledge base, paralleling the way humans gain expertise in diverse subjects. Sample tasks are provided in the Appendix.
### HumanEval\({}^{(+)}\)
To evaluate the coding task-solving capabilities of our CAMEL model, specifically the LLaMA-7B fine-tuned on our comprehensive datasets, we rely on HumanEval [18] and HumanEval\({}^{+}\)[69]. The results, as depicted in table 3, clearly demonstrate the remarkable performance of CAMEL. It surpasses not only the LLaMA-7B model but also Vicuna-7B [21] by a big margin. These findings underscore the critical role played by the generated datasets in enhancing LLaMA's ability to tackle coding-related tasks.
## 6 Conclusion
In this paper, we explore the potential of autonomous cooperation among communicative agents and propose a novel cooperative agent framework named _role-playing_. Our approach enables communicative agents to collaborate autonomously toward completing tasks while requiring minimal human intervention, leading to better solutions are per our thorough evaluations. Through our analysis, we show that achieving autonomous cooperation is challenging due to issues like conversation deviation, role flipping, and termination conditions. Our framework offers a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems and provides strategies for addressing these challenges. Furthermore, our open-sourced library includes implementations of various agents, data generation pipelines, data analysis tools, and collected datasets, to support research on communicative agents and beyond. Our contributions offer valuable insights into the future of large language artificial intelligence models and cooperative AI systems.
## 7 Acknowledgements
This work was supported by SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI).
\begin{table}
\begin{tabular}{c|c c c c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c}{**Model 1**} & \multicolumn{3}{c}{**Model 2**} & \multirow{2}{*}{**Accuracy**} \\ & **AI Society** & & & & & & **Model 2** \\ \hline
**AI Society** & & & & & & & & **0** & **14** \\
**Code** & & & & & & & & **0** & **20** \\
**Math** & & & & & & & & **9** & **5** & **6** \\
**Science** & & & & & & & & & **0** & 13** & **47** \\ \hline
**AI Society** & & & & & & & & & **4** & **8** & **8** \\
**Code** & & & & & & & & & 1 & 9 & **10** \\
**Math** & & & & & & & & & 5 & **8** & 7 \\
**Science** & & & & & & & & & 1 & 19 & **40** \\ \hline
**AI Society** & & & & & & & & & & **5** & **6** & **9** \\
**Code** & & & & & & & & & & 1 & 9 & **10** \\
**Math** & & & & & & & & & & 1 & 3 & **16** \\
**Science** & & & & & & & & & & 3 & **8** & **49** \\ \hline
**AI Society** & & & & & & & & & & **7** & **3** & **1** & **16** \\
**Code** & & & & & & & & & & **7** & **1** & **8** & **11** \\
**Math** & & & & & & & & & & **7** & **10** & **5** & **5** \\
**Science** & & & & & & & & & & **7** & **9** & **2** & **49** \\ \hline
**AI Society** & & & & & & & & & & **7** & **0** & **0** & **20** \\
**Code** & & & & & & & & & & **7** & **0** & **0** & **20** \\
**Math** & & & & & & & & & **7** & **0** & **0** & **20** \\
**Science** & & & & & & & & & **7** & **0** & **0** & **60** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Emergence of Knowledge. By progressively fine-tuning LLaMA on datasets from different domains, we observe the emergence of knowledge as the model transitions from AI society to code, math, and science. This finding is indicated by the fact that Model 2 almost always performs better than Model 1, especially on the added dataset.**
## References
* [1] Josh Abramson, Arun Ahuja, Iain Barr, Arthur Brussee, Federico Carnevale, Mary Cassin, Rachita Chhaparia, Stephen Clark, Bogdan Damoc, Andrew Dudzik, Petko Georgiev, Aurelia Guy, Tim Harley, Felix Hill, Alden Hung, Zachary Kenton, Jessica Landon, Timothy Lillicrap, Kory Mathewson, Sonia Mokra, Alistair Muldal, Adam Santoro, Nikolay Savinov, Vikrant Varma, Greg Wayne, Duncan Williams, Nathaniel Wong, Chen Yan, and Rui Zhu. Imitating interactive intelligence, 2020.
* [2] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Lau, Carolina Parada, Peter Pastor, Jornell Quianbao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. Do as i can, not as i say: Grounding language in robotic affordances, 2022.
* [3] Jacob Andreas. Language models as agent models, 2022.
* [4] Jacob Andreas and Dan Klein. Alignment-based compositional semantics for instruction following. _arXiv preprint arXiv:1508.06491_, 2015.
* [5] Anthropic. Introducing claude. _Anthropic Blog_, 2023.
* [6] Isaac Asimov. _I. Robot_. Narkaling Productions., 1940.
* [7] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? _Advances in neural information processing systems_, 27, 2014.
* [8] Sanghwan Bae, Donghyun Kwak, Sungdong Kim, Donghoon Ham, Soyoung Kang, Sang-Woo Lee, and Woomyoung Park. Building a role specified open-domain dialogue system leveraging large-scale language models. In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 2128-2150, Seattle, United States, July 2022. Association for Computational Linguistics.
* [9] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. _arXiv preprint arXiv:2204.05862_, 2022.
* [10] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. _arXiv preprint arXiv:2212.08073_, 2022.
* [11] Nolan Bard, Jakob N Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, et al. The hanabi challenge: A new frontier for ai research. _Artificial Intelligence_, 280:103216, 2020.
* [12] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_, 33:1877-1901, 2020.
* [13] Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. _arXiv preprint arXiv:2303.12712_, 2023.
* [14] N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, A Roberts, T Brown, D Song, U Erlingsson, et al. Extracting training data from large language models. arxiv. _Preprint posted online December_, 14, 2020.
* [15] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. _arXiv preprint arXiv:2301.13188_, 2023.
* [16] Harrison Chase. Langchain. 2022.
* [17] Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Zhe Wang, Yan Feng, and Chun Chen. Cross-layer distillation with semantic calibration. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 35, pages 7028-7036, 2021.
* [18] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. _arXiv preprint arXiv:2107.03374_, 2021.
* [19] Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Seokhwan Kim, Andy Rosenbaum, Yang Liu, Zhou Yu, and Dilek Hakkani-Tur. Places: Prompting language models for social conversation synthesis. _arXiv preprint arXiv:2302.03269_, 2023.
* [20] Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Andy Rosenbaum, Seokhwan Kim, Yang Liu, Zhou Yu, and Dilek Hakkani-Tur. Weakly supervised data augmentation through prompting for dialogue understanding. NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research, 2022.
* [21] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.
* [22] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_, 2022.
* [23] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. _Advances in neural information processing systems_, 30, 2017.
* [24] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In _AAAI/IAAI_, 1998.
* [25] Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning, 2022.
* [26] Allan Dafoe, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, and Thore Graepel. Cooperative ai: machines must learn to find common ground. _Nature_, 593(7857):33-36, 2021.
* [27] Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R McKee, Joel Z Leibo, Kate Larson, and Thore Graepel. Open problems in cooperative ai. _arXiv preprint arXiv:2012.08630_, 2020.
* [28] Yali Du, Bo Liu, Vincent Moens, Ziqi Liu, Zhicheng Ren, Jun Wang, Xu Chen, and Haifeng Zhang. Learning correlated communication topology in multi-agent reinforcement learning. In _Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems_, pages 456-464, 2021.
* [29] Tim Finin, Richard Fritzson, Don McKay, and Robin McEntire. Kqml as an agent communication language. In _Proceedings of the third international conference on Information and knowledge management_, pages 456-463, 1994.
* [30] Jakob Foerster, Ioannis Alexandros Assael, Nando De Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. _Advances in neural information processing systems_, 29, 2016.
* [31] Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting for multi-step reasoning. _arXiv preprint arXiv:2210.00720_, 2022.
* 437, 2020.
* [33] Jianfeng Gao, Michel Galley, and Lihong Li. Neural approaches to conversational ai. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_, pages 1371-1374, 2018.
* [34] Leo Gao, Jonathan Tow, Stella Bideman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021.
* [35] Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathrir, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Sofia Mokra, Nicholas Fernando, Boxin Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, and Geoffrey Irving. Improving alignment of dialogue agents via targeted human judgements, 2022.
* [36] Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. Improving alignment of dialogue agents via targeted human judgements. _arXiv preprint arXiv:2209.14375_, 2022.
* [37] Josh A Goldstein, Girish Sastry, Micah Musser, Renee DiResta, Matthew Gentzel, and Katerina Sedova. Generative language models and automated influence operations: Emerging threats and potential mitigations. _arXiv preprint arXiv:2301.04246_, 2023.
* [38] Dylan Hadfield-Menell. The principal-agent alignment problem in artificial intelligence. _Ph. D. dissertation_, 2021.
* [39] Dylan Hadfield-Menell, McKane Andrus, and Gillian Hadfield. Legible normativity for ai alignment: The value of silly rules. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 115-121, 2019.
* [40] Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative inverse reinforcement learning. _Advances in neural information processing systems_, 29, 2016.
* [41] Serhih Havrylov and Ivan Titov. Emergence of language with multi-agent games: Learning to communicate with sequences of symbols. _Advances in neural information processing systems_, 30, 2017.
* [42] Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. Ethical challenges in data-driven dialogue systems. In _Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society_, pages 123-129, 2018.
* [43] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. _arXiv preprint arXiv:2103.03874_, 2021.
* [44] Byeongho Heo, Minsik Lee, Sangdoo Yun, and Jin Young Choi. Knowledge transfer via distillation of activation boundaries formed by hidden neurons. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 33, pages 3779-3787, 2019.
* [45] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_, 2015.
* [46] Namgyu Ho, Laura Schmid, and Se-Young Yun. Large language models are reasoning teachers. _arXiv preprint arXiv:2212.10071_, 2022.
* [47] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. _arXiv preprint arXiv:2203.15556_, 2022.
* [48] Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tuning language models with (almost) no human labor. _arXiv preprint arXiv:2212.09689_, 2022.
* [49] Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. A simple language model for task-oriented dialogue. _Advances in Neural Information Processing Systems_, 33:20179-20191, 2020.
* [50] Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. _arXiv preprint arXiv:2201.07207_, 2022.
* [51] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. _arXiv preprint arXiv:2207.05608_, 2022.
* [52] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. _arXiv preprint arXiv:2303.05398_, 2023.
* [53] Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, et al. Opt-iml: Scaling language model instruction meta learning through the lens of generalization. _arXiv preprint arXiv:2212.12017_, 2022.
* [54] Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. How can we know what language models know? _Transactions of the Association for Computational Linguistics_, 8:423-438, 2020.
* [55] Siddharth Karamcheti, Megha Srivastava, Percy Liang, and Dorsa Sadigh. Lila: Language-informed latent actions. In _CoRL_, pages 1379-1390, 2021.
* [56] Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik, and Geoffrey Irving. Alignment of language agents. _arXiv preprint arXiv:2103.14659_, 2021.
* [57] Hyunwoo Kim, Jack Hessel, Liwei Jiang, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, et al. Soda: Million-scale dialogue distillation with social commonsense contextualization. _arXiv preprint arXiv:2212.10465_, 2022.
* [58] Jangho Kim, Seonguk Park, and Nojun Kwak. Paraphrasing complex network: Network compression via factor transfer. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31. Curran Associates, Inc., 2018.
* [59] Yeykung Kim, Seohyeong Jeong, and Kyunghyun Cho. Linda: Unsupervised learning to interpolate in natural language processing. _arXiv preprint arXiv:2112.13969_, 2021.
* [60] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In _International conference on machine learning_, pages 1885-1894. PMLR, 2017.
* [61] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. _arXiv preprint arXiv:2205.11916_, 2022.
* [62] Jonas Kulhanek, Vojtech Hudecek, Tomas Nekvinda, and Ondrej Dusek. Augpt: Auxiliary tasks and data augmentation for end-to-end dialogue with pre-trained language models. In _Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI_, pages 198-210, 2021.
* [63] Kenton Lee, Kelvin Guu, Luheng He, Tim Dozat, and Hyung Won Chung. Neural data augmentation via example extrapolation. _arXiv preprint arXiv:2102.01335_, 2021.
* [64] Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. 2022.
* [65] Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyurek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, and Yuke Zhu. Pre-trained language models for interactive decision-making, 2022.
* [66] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. _arXiv preprint arXiv:2101.00190_, 2021.
* [67] Zekun Li, Wenhu Chen, Shiyang Li, Hong Wang, Jing Qian, and Xifeng Yan. Controllable dialogue simulation with in-context learning. _arXiv preprint arXiv:2210.04185_, 2022.
* [68] Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. WANLI: Worker and AI collaboration for natural language inference dataset creation. In _Findings of the Association for Computational Linguistics: EMNLP 2022_, pages 6826-6847, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
* [69] Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. _arXiv preprint arXiv:2305.01210_, 2023.
* [70] Yat Long Lo, Christian Schroeder de Witt, Samuel Sokota, Jakob Nicolaus Foerster, and Shimon Whiteson. Cheap talk discovery and utilization in multi-agent reinforcement learning. In _The Eleventh International Conference on Learning Representations_, 2023.
* [71] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. _arXiv preprint arXiv:2301.13688_, 2023.
* [72] Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. _Advances in neural information processing systems_, 30, 2017.
* [73] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In _ICLR_, 2023.
* [74] Michael J. Matthews, Samuel H. Matthews, and Thomas K. Kelemen. The alignment problem: Machine learning and human values. _Personnel Psychology_, 2022.
* [75] Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. Generating training data with language models: Towards zero-shot language understanding. In _Advances in Neural Information Processing Systems_, 2022.
* [76] Marvin Minsky. _Society of mind_. Simon and Schuster, 1988.
* [77] Marvin Minsky. _The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind_. Simon and Schuster, 2007.
* [78] Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization via natural language crowdsourcing instructions. In _ACL_, 2022.
* [79] Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. In _Proceedings of the AAAI conference on artificial intelligence_, volume 32, 2018.
* [80] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. Webopt: Browser-assisted question-answering with human feedback, 2021.
* [81] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models, 2021.
* [82] OpenAI. Introducing chatgpt. _Open AI Blog_, 2022.
* [83] OpenAI. Chatgpt plugins. _OpenAI blog_, 2023.
* [84] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_, 35:27730-27744, 2022.
* [85] Liviu Panait and Sean Luke. Cooperative multi-agent learning: The state of the art. _Autonomous Agents and Multi-Agent Systems_, 11:387-434, 2005.
* [86] Alexandros Papangelis, Karthik Gopalakrishnan, Aishwarya Padmakumar, Seokhwan Kim, Gokhan Tur, and Dilek Z. Hakkani-Tur. Generative conversational networks. In _SIGDIAL_, 2021.
* [87] Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho. Relational knowledge distillation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3967-3976, 2019.
* [88] Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, and Qun Liu. Alp-kd: Attention-based layer projection for knowledge distillation. In _Proceedings of the AAAI Conference on artificial intelligence_, volume 35, pages 13657-13665, 2021.
* [89] Sundar Pichai. An important next step on our ai journey. _Google Blog_, 2023.
* [90] Stefan Poslad. Specifying protocols for multi-agent systems interaction. _ACM Transactions on Autonomous and Adaptive Systems (TAAS)_, 2(4):15-es, 2007.
* [91] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9, 2019.
* [92] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. A generalist agent, 2022.
* [93] Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In _Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems_, pages 1-7, 2021.
* [94] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models, 2021.
* [95] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. _arXiv preprint arXiv:1412.6550_, 2014.
* [96] Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, and Markus Boese. Linguist: Language model instruction tuning to generate annotated utterances for intent classification and slot tagging. _arXiv preprint arXiv:2209.09900_, 2022.
* [97] Stuart J Russell. _Artificial intelligence a modern approach_. Pearson Education, Inc., 2010.
* [98] Gaurav Sahu, Pau Rodriguez, Issam H Laradji, Parmida Atighehchian, David Vazquez, and Dzmitry Bahdanau. Data augmentation for intent classification with off-the-shelf large language models. _ACL_, 2022.
* [99] William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. _arXiv preprint arXiv:2206.05802_, 2022.
* [100] Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. _arXiv preprint arXiv:2302.04761_, 2023.
* [101] Timo Schick and Hinrich Schutze. Generating datasets with pretrained language models. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 6943-6951, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
* [102] Junjie Sheng, Xiangfeng Wang, Bo Jin, Junchi Yan, Wenhao Li, Tsung-Hui Chang, Jun Wang, and Hongyuan Zha. Learning structured communication for multi-agent reinforcement learning. _Autonomous Agents and Multi-Agent Systems_, 36(2):50, 2022.
* [103] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. In _ICLR_, 2023.
* [104] Taylor Shin, Yasaman Razeghi, Robert L Logan IV, Eric Wallace, and Sameer Singh. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. _arXiv preprint arXiv:2010.15980_, 2020.
* [105] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. _arXiv preprint arXiv:2303.11366_, 2023.
* [106] Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. _arXiv preprint arXiv:2208.03188_, 2022.
* [107] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. _nature_, 529(7587):484-489, 2016.
* [108] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. _nature_, 550(7676):354-359, 2017.
* [109] Abishek Sridhar, Robert Lo, Frank F. Xu, Hao Zhu, and Shuyan Zhou. Hierarchical prompting assists large language model on web navigation. In _ArXiv_, preprint.
* 463, 2020.
* [111] Sainbayar Sukhbaatar, Rob Fergus, et al. Learning multiagent communication with backpropagation. _Advances in neural information processing systems_, 29, 2016.
* [112] Alex Tamkin, Miles Brundage, Jack Clark, and Deep Ganguli. Understanding the capabilities, limitations, and societal impact of large language models. _arXiv preprint arXiv:2102.02503_, 2021.
* [113] Ming Tan. Multi-agent reinforcement learning: Independent versus cooperative agents. In _International Conference on Machine Learning_, 1997.
* [114] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca), 2023.
* [115] Gerald Tesauro et al. Temporal difference learning and td-gammon. _Communications of the ACM_, 38(3):58-68, 1995.
* [116] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog applications. _arXiv preprint arXiv:2201.08239_, 2022.
* [117] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023.
* [118] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In _ICLR_, 2023.
* [119] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. _arXiv preprint arXiv:2212.10560_, 2022.
* [120] Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap, et al. Supernaturalinstructions:generalization via declarative instructions on 1600+ tasks. In _EMNLP_, 2022.
* [121] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. _arXiv preprint arXiv:2109.01652_, 2021.
* [122] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. _Transactions on Machine Learning Research_, 2022. Survey Certification.
* [123] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. _arXiv preprint arXiv:2201.11903_, 2022.
* [124] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_, pages 38-45, Online, October 2020. Association for Computational Linguistics.
* [125] Michael Wooldridge. _An introduction to multiagent systems_. John wiley & sons, 2009.
* [126] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In _International Conference on Learning Representations (ICLR)_, 2023.
* [127] Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. _arXiv preprint arXiv:1612.03928_, 2016.
* [128] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. Star: Bootstrapping reasoning with reasoning, 2022.
* [129] Houyu Zhang, Zhenghao Liu, Chenyan Xiong, and Zhiyuan Liu. Grounded conversation generation as guided traverses in commonsense knowledge graphs. In _ACL_, 2020.
* [130] Rongsheng Zhang, Yinhe Zheng, Jianzhi Shao, Xiao-Xi Mao, Yadong Xi, and Minlie Huang. Dialogue distillation: Open-domain dialogue augmentation using unpaired data. _ArXiv_, abs/2009.09427, 2020.
* [131] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. _arXiv preprint arXiv:2205.01068_, 2022.
* [132] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. In _ICLR_, 2023.
* [133] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. _arXiv preprint arXiv:2302.00923_, 2023.
* [134] Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint arXiv:2205.10625_, 2022.
* [135] Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. Webarena: A realistic web environment for building autonomous agents. _arXiv preprint arXiv:2307.13854_, 2023.
* [136] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. Large language models are human-level prompt engineers. In _The Eleventh International Conference on Learning Representations_, 2023.
* [137] Deyao Zhu, Jun Chen, Kilichbek Haydarov, Xiaoqian Shen, Wenxuan Zhang, and Mohamed Elhoseiny. Chatgpt asks, blip-2 answers: Automatic questioning towards enriched visual descriptions, 2023. | ## Review
### Summary
This paper introduces CAMEL, a novel framework that enables two large language models (LLMs) to role-play as a user and an assistant, thereby facilitating autonomous task completion through collaborative dialogues. The authors address a significant challenge in task-solving where human intervention is often necessary, proposing a system where LLMs can effectively communicate and collaborate with minimal human input. The evaluation of their approach demonstrates improved performance over existing models, including GPT-3.5-turbo and highlights the framework's potential in various domains such as code generation, AI society simulations, and academic problem solving. Additionally, the authors provide valuable datasets and libraries to support future research in this area.
### Strengths
- The proposed method (inception prompting) is intuitive, novel, and well-motivated for collaborative task-solving using LLM agents.
- The paper is mostly easy to follow, with detailed explanations and figures that clarify roles and task assignments.
- The supplementary materials, appendices, and provided libraries are crucial for future research.
- The methods section is scientifically sound, discussing effective strategies for resolving unaligned dialogues.
- The results indicate significant improvements in multi-agent collaboration, making this approach valuable for further studies on LLM interactions.
### Weaknesses
- The role/task alignment for specific applications, such as stock trading, can be counterintuitive, leading to confusion about the roles assigned to the agents.
- Certain figures, such as Fig. 2, lack clarity regarding which agent receives the initial task-specifier prompt.
- There are concerns about long-distance memory issues and the maximum limit of messages in dialogues, which could affect task outcomes.
- The evaluation only compares against a single baseline (GPT-3.5), raising questions about the comprehensiveness of the evaluation methods.
- Important details regarding the data sources and task construction are missing, limiting the clarity of the evaluation.
### Questions
- How do the agents handle role-switching and are these occurrences related to prompt framing?
- Can a diagram illustrating the frequency and distribution of issues like instruction repetition and infinite loops improve the analysis?
- What were the criteria for selecting specific datasets for training the agents, and how does this impact the results?
- Is there a distinction in naming between 'role-playing' and 'CAMEL', and how do the agents identify loops in conversations?
- What methods were employed to prevent or terminate looping scenarios during dialogues?
### Soundness
**Score:** 3
**Description:** 3 = good; the framework and evaluations presented are generally sound, though some evaluation methods and clarity in task definitions could be improved.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper is mostly well-written and clear, some figures and sections could benefit from additional clarity.
### Contribution
**Score:** 3
**Description:** 3 = good; the contributions are significant but the paper could strengthen its novelty by providing a clearer distinction from existing methods.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper presents a technically solid approach with high impact potential, though some aspects require clarification.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates originality and technical soundness in its approach to using LLMs in collaborative settings. The contributions in terms of innovative frameworks, datasets, and analysis are noteworthy, though some weaknesses in clarity and evaluation methods need addressing. The overall potential impact on the field is high, meriting acceptance with suggestions for refinement.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Borda Regret Minimization for Generalized Linear Dueling Bandits
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
Dueling bandits are widely used to model preferential feedback prevalent in many applications such as recommendation systems and ranking. In this paper, we study the Borda regret minimization problem for dueling bandits, which aims to identify the item with the highest Borda score while minimizing the cumulative regret. We propose a rich class of generalized linear dueling bandit models, which cover many existing models. We first prove a regret lower bound of order \(\Omega(d^{2/3}T^{2/3})\) for the Borda regret minimization problem, where \(d\) is the dimension of contextual vectors and \(T\) is the time horizon. To attain this lower bound, we propose an explore-then-commit type algorithm for the stochastic setting, which has a nearly matching regret upper bound \(\widetilde{O}(d^{2/3}T^{2/3})\). We also propose an EXP3-type algorithm for the adversarial setting, where the underlying model parameter can change at each round. Our algorithm achieves an \(\widetilde{O}(d^{2/3}T^{2/3})\) regret, which is also optimal. Empirical evaluations on both synthetic data and a simulated real-world environment are conducted to corroborate our theoretical analysis.
## 1 Introduction
Multi-armed bandits (MAB) (Lattimore and Szepesvari, 2020) is an interactive game where at each round, an agent chooses an arm to pull and receives a noisy reward as feedback. In contrast to numerical feedback considered in classic MAB settings, preferential feedback is more natural in various online learning tasks including information retrieval Yue and Joachims (2009), recommendation systems Sui and Burdick (2014), ranking Minka et al. (2018), crowdsourcing Chen et al. (2013), etc. Moreover, numerical feedback is also more difficult to gauge and prone to errors in many real-world applications. For example, when provided with items to shop or movies to watch, it is more natural for a customer to pick a preferred one than scoring the options. This motivates _Dueling Bandits_(Yue and Joachims, 2009), where the agent repeatedly pulls two arms at a time and is provided with feedback being the binary outcome of "duels" between the two arms.
In dueling bandits problems, the outcome of duels is commonly modeled as Bernoulli random variables due to their binary nature. At each round, suppose the agent chooses to compare arm \(i\) and \(j\), then the binary feedback is assumed to be sampled independently from a Bernoulli distribution. For a dueling bandits instance with \(K\) arms, the probabilistic model of the instance can be fully characterized by a \(K\times K\) preference probability matrix with each entry being: \(p_{i,j}=\mathbb{P}\left(\text{arm }i\text{ is chosen over arm }j\right).\)
In a broader range of applications such as ranking, "arms" are often referred to as "items". We will use these two terms interchangeably in the rest of this paper. One central goal of dueling bandits is to devise a strategy to identify the "optimal" item as quickly as possible, measured by either sample complexity or cumulative regret. However, the notion of optimality for dueling bandits is way harder to define than for multi-armed bandits. The latter can simply define the arm with the highest numerical feedback as the optimal arm, while for dueling bandits there is no obvious definition solely dependent on \(\{p_{i,j}|i,j\in[K]\}\).
The first few works on dueling bandits imposed strong assumptions on \(p_{i,j}\). For example, Yue et al. (2012) assumed that there exists a true ranking that is coherent among all items, and the preference probabilities must satisfy both strong stochastic transitivity (SST) and stochastic triangle inequality (STI). While relaxations like weak stochastic transitivity (Falahatgar et al., 2018) or relaxed stochastic transitivity (Yue and Joachims, 2011) exist, they typically still assume the true ranking exists and the preference probabilities are consistent, i.e., \(p_{i,j}>\frac{1}{2}\) if and only if \(i\) is ranked higher than \(j\). In reality, the existence of such coherent ranking aligned with item preferences is rarely the case. For example, \(p_{i,j}\) may be interpreted as the probability of one basketball team \(i\) beating another team \(j\), and there can be a circle among the match advantage relations.
In this paper, we do not assume such coherent ranking exists and solely rely on the _Borda score_ based on preference probabilities. The Borda score \(B(i)\) of an item \(i\) is the probability that it is preferred when compared with another random item, namely \(B(i):=\frac{1}{K-1}\sum_{j\neq i}p_{i,j}\). The item with the highest Borda score is called the _Borda winner_. The Borda winner is intuitively appealing and always well-defined for any set of preferential probabilities. The Borda score also does not require the problem instance to obey any consistency or transitivity, and it is considered one of the most general criteria.
To identify the Borda winner, estimations of the Borda scores are needed. Since estimating the Borda score for one item requires comparing it with every other items, the sample complexity is prohibitively high when there are numerous items. On the other hand, in many real-world applications, the agent has access to side information that can assist the evaluation of \(p_{i,j}\). For instance, an e-commerce item carries its category as well as many other attributes, and the user might have a preference for a certain category (Wang et al., 2018). For a movie, the genre and the plot as well as the directors and actors can also be taken into consideration when making choices (Liu et al., 2017).
Based on the above motivation, we consider _Generalized Linear Dueling Bandits_. At each round, the agent selects two items from a finite set of items and receives a comparison result of the preferred item. The comparisons depend on known intrinsic contexts/features associated with each pair of items. The contexts can be obtained from upstream tasks, such as topic modeling (Zhu et al., 2012) or embedding (Vasile et al., 2016). Our goal is to adaptively select items and minimize the regret with respect to the optimal item (i.e., Borda winner). Our main contributions are summarized as follows:
* We show a hardness result regarding the Borda regret minimization for the (generalized) linear model. We prove a worst-case regret lower bound \(\Omega(d^{2/3}T^{2/3})\) for our dueling bandit model, showing that even in the stochastic setting, minimizing the Borda regret is difficult. The construction and proof of the lower bound are new and might be of independent interest.
* We propose an explore-then-commit type algorithm under the stochastic setting, which can achieve a nearly matching upper bound \(\widetilde{O}(d^{2/3}T^{2/3})\). When the number of items \(K\) is small, the algorithm can also be configured to achieve a smaller regret \(\widetilde{O}\big{(}(d\log K)^{1/3}T^{2/3}\big{)}\).
* We propose an EXP3 type algorithm for linear dueling bandits under the adversarial setting, which can achieve a nearly matching upper bound \(\widetilde{O}\big{(}(d\log K)^{1/3}T^{2/3}\big{)}\).
* We conduct empirical studies to verify the correctness of our theoretical claims. Under both synthetic and real-world data settings, our algorithms can outperform all the baselines in terms of cumulative regret.
NotationIn this paper, we use normal letters to denote scalars, lowercase bold letters to denote vectors, and uppercase bold letters to denote matrices. For a vector \(\mathbf{x}\), \(\|\mathbf{x}\|\) denotes its \(\ell_{2}\)-norm. The weighted \(\ell_{2}\)-norm associated with a positive-definite matrix \(\mathbf{A}\) is defined as \(\|\mathbf{x}\|_{\mathbf{A}}=\sqrt{\mathbf{x}^{\top}\mathbf{A}\mathbf{x}}\). The minimum eigenvalue of a matrix \(\mathbf{A}\) is written as \(\lambda_{\min}(\mathbf{A})\). We use standard asymptotic notations including \(O(\cdot),\Omega(\cdot),\Theta(\cdot)\), and \(\widetilde{O}(\cdot),\widetilde{\Omega}(\cdot),\widetilde{\Theta}(\cdot)\) will hide logarithmic factors. For a positive integer \(N\), \([N]:=\{1,2,\ldots,N\}\).
## 2 Related Work
Multi-armed and Contextual Bandits Multi-armed bandit is a problem of identifying the best choice in a sequential decision-making system. It has been studied in numerous ways with a wide range of applications (Even-Dar et al., 2002; Lai et al., 1985; Kuleshov and Precup, 2014). Contextual linear bandit is a special type of bandit problem where the agent is provided with side information, i.e., contexts, and rewards are assumed to have a linear structure. Various algorithms (Rusmevichientongand Tsitsiklis, 2010; Filippi et al., 2010; Abbasi-Yadkori et al., 2011; Li et al., 2017; Jun et al., 2017) have been proposed to utilize this contextual information.
**Dueling Bandits and Its Performance Metrics** Dueling bandits is a variant of MAB with preferential feedback (Yue et al., 2012; Zoghi et al., 2014, 2015). A comprehensive survey can be found at Bengs et al. (2021). As discussed previously, the probabilistic structure of a dueling bandits problem is governed by the preference probabilities, over which an optimal item needs to be defined. Optimality under the _Borda score_ criteria has been adopted by several previous works (Jamieson et al., 2015; Falahatgar et al., 2017; Heckel et al., 2018; Saha et al., 2021). The most relevant work to ours is Saha et al. (2021), where they studied the problem of regret minimization for adversarial dueling bandits and proved a \(T\)-round Borda regret upper bound \(\tilde{O}(K^{1/3}T^{2/3})\). They also provide an \(\Omega(K^{1/3}T^{2/3})\) lower bound for stationary dueling bandits using Borda regret.
Apart from the Borda score, _Copeland score_ is also a widely used criteria (Urvoy et al., 2013; Zoghi et al., 2014, 2015; Wu and Liu, 2016; Komiyama et al., 2016). It is defined as \(C(i):=\frac{1}{K^{-1}}\sum_{j\neq i}\mathbbm{1}\{p_{i,j}>1/2\}\). A Copeland winner is the item that beats the most number of other items. It can be viewed as a "thresholded" version of Borda winner. In addition to Borda and Copeland winners, optimality notions such as a von Neumann winner were also studied in Ramamohan et al. (2016); Dudik et al. (2015); Balsubramani et al. (2016).
Another line of work focuses on identifying the optimal item or the total ranking, assuming the preference probabilities are consistent. Common consistency conditions include Strong Stochastic Transitivity (Yue et al., 2012; Falahatgar et al., 2017, 2017, 2018; Weak Stochastic Transitivity (Falahatgar et al., 2018; Ren et al., 2019; Wu et al., 2022; Lou et al., 2022), Relaxed Stochastic Transitivity (Yue and Joachims, 2011) and Stochastic Triangle Inequality. Sometimes the aforementioned transitivity can also be implied by some structured models like the Bradley-Terry model. We emphasize that these consistency conditions are not assumed or implicitly implied in our setting.
**Contextual Dueling Bandits** In Dudik et al. (2015), contextual information is incorporated in the dueling bandits framework. Later, Saha (2021) studied a structured contextual dueling bandits setting where each item \(i\) has its own contextual vector \(\mathbf{x}_{i}\) (sometimes called Linear Stochastic Transitivity). Each item then has an intrinsic score \(v_{i}\) equal to the linear product of an unknown parameter vector \(\mathbf{\theta}^{*}\) and its contextual vector \(\mathbf{x}_{i}\). The preference probability between two items \(i\) and \(j\) is assumed to be \(\mu\left(v_{i}-v_{j}\right)\) where \(\mu(\cdot)\) is the logistic function. These intrinsic scores of items naturally define a ranking over items. The regret is also computed as the gap between the scores of pulled items and the best item. While in this paper, we assume that the contextual vectors are associated with item pairs and define regret on the Borda score. In Section A.1, we provide a more detailed discussion showing that the setting considered in Saha (2021) can be viewed as a special case of our model.
## 3 Backgrounds and Preliminaries
### Problem Setting
We first consider the stochastic preferential feedback model with \(K\) items in the fixed time horizon setting. We denote the item set by \([K]\) and let \(T\) be the total number of rounds. At each round \(t\), the agent can pick any pair of items \((i_{t},j_{t})\) to compare and receive stochastic feedback about whether item \(i_{t}\) is preferred over item \(j_{t}\), (denoted by \(i_{t}\succ j_{t}\)). We denote the probability of seeing the event \(i\succ j\) as \(p_{i,j}\in[0,1]\). Naturally, we assume \(p_{i,j}+p_{j,i}=1\), and \(p_{i,i}=1/2\).
In this paper, we are concerned with the generalized linear model (GLM), where there is assumed to exist an _unknown_ parameter \(\mathbf{\theta}^{*}\in\mathbb{R}^{d}\), and each pair of items \((i,j)\) has its own _known_ contextual/feature vector \(\mathbf{\phi}_{i,j}\in\mathbb{R}^{d}\) with \(\|\mathbf{\phi}_{i,j}\|\leq 1\). There is also a fixed known link function (sometimes called comparison function) \(\mu(\cdot)\) that is monotonically increasing and satisfies \(\mu(x)+\mu(-x)=1\), e.g. a linear function or the logistic function \(\mu(x)=1/(1+e^{-x})\). The preference probability is defined as \(p_{i,j}=\mu(\mathbf{\phi}_{i,j}^{\top}\mathbf{\theta}^{*})\). At each round, denote \(r_{t}=\mathbbm{1}\{i_{t}\succ j_{t}\}\), then we have
\[\mathbb{E}[r_{t}|i_{t},j_{t}]=p_{i_{t},j_{t}}=\mu(\mathbf{\phi}_{i_{t},j_{t}}^{\top }\mathbf{\theta}^{*}).\]
Then our model can also be written as
\[r_{t}=\mu(\mathbf{\phi}_{i_{t},j_{t}}^{\top}\mathbf{\theta}^{*})+\epsilon_{t},\]where the noises \(\{\epsilon_{t}\}_{t\in[T]}\) are zero-mean, \(1\)-sub-Gaussian and assumed independent from each other. Note that, given the constraint \(p_{i,j}+p_{j,i}=1\), it is implied that \(\mathbf{\phi}_{i,j}=-\mathbf{\phi}_{j,i}\) for any \(i\in[K],j\in[K]\).
The agent's goal is to maximize the cumulative Borda score. The (slightly modified 1) Borda score of item \(i\) is defined as \(B(i)=\frac{1}{K}\sum_{j=1}^{K}p_{i,j}\), and the Borda winner is defined as \(i^{*}=\operatorname*{argmax}_{i\in[K]}B(i)\). The problem of merely identifying the Borda winner was deemed trivial (Zoghi et al., 2014; Busa-Fekete et al., 2018) because for a fixed item \(i\), uniformly random sampling \(j\) and receiving feedback \(r_{i,j}=\operatorname*{Bernoulli}(p_{i,j})\) yield a Bernoulli random variable with its expectation being the Borda score \(B(i)\). This so-called _Borda reduction_ trick makes identifying the Borda winner as easy as the best-arm identification for \(K\)-armed bandits. Moreover, if the regret is defined as \(\operatorname{Regret}(T)=\sum_{t=1}^{T}(B(i^{*})-B(i_{t}))\), then any optimal algorithms for multi-arm bandits can achieve \(\widetilde{O}(\sqrt{T})\) regret.
Footnote 1: Previous works define Borda score as \(B^{\prime}_{i}=\frac{1}{K-1}\sum_{j\neq i}p_{i,j}\), excluding the diagonal term \(p_{i,i}=1/2\). Our definition is equivalent since the difference between two items satisfies \(B(i)-B_{j}=\frac{K-1}{K}(B^{\prime}_{i}-B^{\prime}_{j})\). Therefore, the regret will be in the same order for both definitions.
However, the above definition of regret does not respect the fact that a pair of items are selected at each round. When the agent chooses two items to compare, it is natural to define the regret so that both items contribute equally. A commonly used regret, e.g., in Saha et al. (2021), has the following form:
\[\operatorname{Regret}(T)=\sum_{t=1}^{T}\big{(}2B(i^{*})-B(i_{t})-B(j_{t})\big{)}, \tag{1}\]
where the regret is defined as the sum of the sub-optimality of both selected arms. Sub-optimality is measured by the gap between the Borda scores of the compared items and the Borda winner. This form of regret deems any classical multi-arm bandit algorithm with Borda reduction vacuous because taking \(j_{t}\) into consideration will invoke \(\Theta(T)\) regret.
Adversarial SettingSaha et al. (2021) considered an adversarial setting for the multi-armed case, where at each round \(t\), the comparison follows a potentially different probability model, denoted by \(\{p^{t}_{i,j}\}_{i,j\in[K]}\). In this paper, we consider its contextual counterpart. Formally, we assume there is an underlying parameter \(\mathbf{\theta}^{*}_{t}\), and at round \(t\), the preference probability is defined as \(p^{t}_{i,j}=\mu(\mathbf{\phi}^{\top}_{i,j}\mathbf{\theta}^{*}_{t})\).
The Borda score of item \(i\in[K]\) at round \(t\) is defined as \(B_{t}(i)=\frac{1}{K}\sum_{j=1}^{K}p^{t}_{i,j}\), and the Borda winner at round \(T\) is defined as \(i^{*}=\operatorname*{argmax}_{i\in[K]}\sum_{t=1}^{T}B_{t}(i)\). The \(T\)-round regret is thus defined as \(\operatorname{Regret}(T)=\sum_{t=1}^{T}\big{(}2B_{t}(i^{*})-B_{t}(i_{t})-B_{t }(j_{t})\big{)}\).
### Assumptions
In this section, we present the assumptions required for establishing theoretical guarantees. Due to the fact that the analysis technique is largely extracted from Li et al. (2017), we follow them to make assumptions to enable regret minimization for generalized linear dueling bandits.
We make a regularity assumption about the distribution of the contextual vectors:
**Assumption 1**.: There exists a constant \(\lambda_{0}>0\) such that \(\lambda_{\min}\big{(}\frac{1}{K^{2}}\sum_{i=1}^{K}\sum_{j=1}^{K}\mathbf{\phi}_{i, j}\mathbf{\phi}^{\top}_{i,j}\big{)}\geq\lambda_{0}\).
This assumption is only utilized to initialize the design matrix \(\mathbf{V}_{\tau}=\sum_{i=1}^{\tau}\mathbf{\phi}_{i_{t},j_{t}}\mathbf{\phi}^{\top}_{i_{t },j_{t}}\) so that the minimum eigenvalue is large enough. We follow Li et al. (2017) to deem \(\lambda_{0}\) as a constant.
We also need the following assumption regarding the link function \(\mu(\cdot)\):
**Assumption 2**.: Let \(\hat{\mu}\) be the first-order derivative of \(\mu\). We have \(\kappa:=\inf_{\|\mathbf{x}\|\leq 1,\|\mathbf{\theta}-\mathbf{\theta}^{*}\|\leq 1} \hat{\mu}(\mathbf{x}^{\top}\mathbf{\theta})>0\).
Assuming \(\kappa>0\) is necessary to ensure the maximum log-likelihood estimator can converge to the true parameter \(\mathbf{\theta}^{*}\)(Li et al., 2017, Section 3). This type of assumption is commonly made in previous works for generalized linear models (Filippi et al., 2010; Li et al., 2017; Faury et al., 2020).
Another common assumption is regarding the continuity and smoothness of the link function.
**Assumption 3**.: \(\mu\) is twice differentiable. Its first and second-order derivatives are upper-bounded by constants \(L_{\mu}\) and \(M_{\mu}\) respectively.
This is a very mild assumption. For example, it is easy to verify that the logistic link function satisfies Assumption 3 with \(L_{\mu}=M_{\mu}=1/4\).
## 4 The Hardness Result
This section presents Theorem 4, a worst-case regret lower bound for the stochastic linear dueling bandits. The proof of Theorem 4 relies on a class of hard instances, as shown in Figure 1. We show that any algorithm will incur a certain amount of regret when applied to this hard instance class. The constructed hard instances follow a stochastic linear model, which is a sub-class of the generalized linear model. Saha et al. (2021b) first proposed a similar construction for finite many arms with no contexts. Our construction is for a contextual setting and the proof of the lower bound takes a rather different route.
For any \(d>0\), we construct the class of hard instances as follows. An instance is specified by a vector \(\mathbf{\theta}\in\{-\Delta,+\Delta\}^{d}\). The instance contains \(2^{d+1}\) items (indexed from 0 to \(2^{d+1}-1\)). The preference probability for an instance is defined by \(p^{\mathbf{\theta}}_{i,j}\) as:
\[p^{\mathbf{\theta}}_{i,j}=\begin{cases}\frac{1}{2},\text{ if }i<2^{d},j<2^{d} \text{ or if }i\geq 2^{d},j\geq 2^{d}\\ \frac{3}{2},\text{ if }i<2^{d},j\geq 2^{d}\\ \frac{4}{4},\text{ if }i\geq 2^{d},j<2^{d}\end{cases}+\langle\mathbf{\phi}_{i,j}, \mathbf{\theta}\rangle,\]
and the \(d\)-dimensional feature vectors \(\mathbf{\phi}_{i,j}\) are given by
\[\mathbf{\phi}_{i,j}=\begin{cases}\mathbf{0},\text{ if }i<2^{d},j<2^{d}\text{ or if }i\geq 2^{d},j\geq 2^{d}\\ \textbf{bit}(i),\text{ if }i<2^{d},j\geq 2^{d}\\ -\textbf{bit}(j),\text{ if }i\geq 2^{d},j<2^{d},\end{cases}\]
where \(\textbf{bit}(\cdot)\) is the (shifted) bit representation of non-negative integers, i.e., suppose \(x\) has the binary representation \(x=b_{0}\times 2^{0}+b_{1}\times 2^{1}+\cdots+b_{d-1}\times 2^{d-1}\), then
\[\textbf{bit}(x)=(2b_{0}-1,2b_{1}-1,\ldots,2b_{d-1}-1)=2\mathbf{b}-1.\]
Note that \(\textbf{bit}(\cdot)\in\{-1,+1\}^{d}\), and that \(\mathbf{\phi}_{i,j}=-\mathbf{\phi}_{j,i}\) is satisfied. The definition of \(p^{\mathbf{\theta}}_{i,j}\) can be slightly tweaked to fit exactly the model described in Section 3 (see Remark 11 in Appendix).
Some calculation shows that the Borda scores of the \(2^{d+1}\) items are:
\[B^{\mathbf{\theta}}(i)=\begin{cases}\frac{5}{8}+\frac{1}{2}\langle\textbf{bit}(i),\mathbf{\theta}\rangle,\text{ if }i<2^{d},\\ \frac{3}{8},\text{ if }i\geq 2^{d}.\end{cases}\]
Intuitively, the former half of items (those indexed from \(0\) to \(2^{d}-1\)) are "good" items (one among them is optimal, others are nearly optimal), while the latter half of items are "bad" items. Under such hard instances, every time one of the two pulled items is a "bad" item, then a one-step regret
Figure 1: Illustration of the hard-to-learn preference probability matrix \(\{p^{\mathbf{\theta}}_{i,j}\}_{i\in[K],j\in[K]}\). There are \(K=2^{d+1}\) items in total. The first \(2^{d}\) items are “good” items with higher Borda scores, and the last \(2^{d}\) items are “bad” items. The upper right block \(\{p_{i,j}\}_{i\leq 2^{d},j\geq 2^{d}}\) is defined as shown in the blue bubble. The lower left block satisfies \(p_{i,j}=1-p_{j,i}\). For any \(\mathbf{\theta}\), there exist one and only best item \(i\) such that \(\textbf{bit}(i)=\textbf{sign}(\mathbf{\theta})\).
\(B^{\mathbf{\theta}}(i^{*})-B^{\mathbf{\theta}}(i)\geq 1/4\) is incurred. To minimize regret, we should thus try to avoid pulling "bad" items. However, in order to identify the best item among all "good" items, comparisons between "good" and "bad" items are necessary. The reason is simply that comparisons between "good" items give no information about the Borda scores as the comparison probabilities are \(p^{\mathbf{\theta}}_{i,j}=\frac{1}{2}\) for all \(i,j<2^{d}\). Hence, any algorithm that can decently distinguish among the "good" items has to pull "bad" ones for a fair amount of times, and large regret is thus incurred. A similar observation is also made by Saha et al. (2021).
This specific construction emphasizes the intrinsic hardness of Borda regret minimization: to differentiate the best item from its close competitors, the algorithm must query the bad items to gain information.
Formally, this class of hard instances leads to the following regret lower bound for both stochastic and adversarial settings:
**Theorem 4**.: For any algorithm \(\mathcal{A}\), there exists a hard instance \(\{p^{\mathbf{\theta}}_{i,j}\}\) with \(T>4d^{2}\), such that \(\mathcal{A}\) will incur expected regret at least \(\Omega(d^{2/3}T^{2/3})\).
The construction of this hard instance for linear dueling bandits is inspired by the worst-case lower bound for the stochastic linear bandit (Dani et al., 2008), which has the order \(\Omega(d\sqrt{T})\), while ours is \(\Omega(d^{2/3}T^{2/3})\). The difference is that for the linear or multi-armed stochastic bandit, eliminating bad arms can make further exploration less expensive. But in our case, any amount of exploration will not reduce the cost of further exploration. This essentially means that exploration and exploitation must be separate, which is also supported by the fact that a simple explore-then-commit algorithm shown in Section 5 can be nearly optimal.
## 5 Stochastic Contextual Dueling Bandit
### Algorithm Description
```
1:Input: time horizon \(T\), number of items \(K\), feature dimension \(d\), feature vectors \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\), exploration rounds \(\tau\), error tolerance \(\epsilon\), failure probability \(\delta\).
2:for\(t=1,2,\ldots,\tau\)do
3: sample \(i_{t}\sim\mathrm{Uniform}([K])\), \(j_{t}\sim\mathrm{Uniform}([K])\)
4: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
5:endfor
6: Find the G-optimal design \(\pi(i,j)\) based on \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\)
7: Let \(N(i,j)=\left\lceil\frac{d\pi(i,j)}{\epsilon^{2}}\right\rceil\) for any \((i,j)\in\text{supp}(\pi)\), denote \(N=\sum_{i=1}^{K}\sum_{j=1}^{K}N(i,j)\)
8:for\(i\in[K]\), \(j\in[K]\), \(s\in[N(i,j)]\)do
9: set \(t\gets t+1\), set \((i_{t},j_{t})=(i,j)\)
10: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
11:endfor
12: Calculate the empirical MLE estimator \(\widehat{\mathbf{\theta}}_{\tau+N}\) based on all \(\tau+N\) samples via (2)
13: Estimate the Borda score for each item: \[\widehat{B}(i)=\frac{1}{K}\sum_{j=1}^{K}\mu(\mathbf{\phi}_{i,j}^{\top}\widehat{ \mathbf{\theta}}_{\tau+N}),\qquad\widehat{i}=\operatorname*{argmax}_{i\in[K]} \widehat{B}(i)\]
14: Keep querying \((\widehat{i},\widehat{i})\) for the rest of the time.
```
**Algorithm 1**BETC-GLM
### Algorithm Description
```
1:Input: time horizon \(T\), number of items \(K\), feature dimension \(d\), feature vectors \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\), exploration rounds \(\tau\), error tolerance \(\epsilon\), failure probability \(\delta\).
2:for\(t=1,2,\ldots,\tau\)do
3: sample \(i_{t}\sim\mathrm{Uniform}([K])\), \(j_{t}\sim\mathrm{Uniform}([K])\)
4: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
5:endfor
6: Find the G-optimal design \(\pi(i,j)\) based on \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\)
7: Let \(N(i,j)=\left\lceil\frac{d\pi(i,j)}{\epsilon^{2}}\right\rceil\) for any \((i,j)\in\text{supp}(\pi)\), denote \(N=\sum_{i=1}^{K}\sum_{j=1}^{K}N(i,j)\)
8:for\(i\in[K]\), \(j\in[K]\), \(s\in[N(i,j)]\)do
9: set \(t\gets t+1\), set \((i_{t},j_{t})=(i,j)\)
10: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
11:endfor
12: Calculate the empirical MLE estimator \(\widehat{\mathbf{\theta}}_{\tau+N}\) based on all \(\tau+N\) samples via (2)
13: Estimate the Borda score for each item: \[\widehat{B}(i)=\frac{1}{K}\sum_{j=1}^{K}\mu(\mathbf{\phi}_{i,j}^{\top}\widehat{ \mathbf{\theta}}_{\tau+N}),\qquad\widehat{i}=\operatorname*{argmax}_{i\in[K]} \widehat{B}(i)\]
14: Keep querying \((\widehat{i},\widehat{i})\) for the rest of the time.
```
**Algorithm 2**BETC-GLM
### Algorithm Description
```
1:Input: time horizon \(T\), number of items \(K\), feature dimension \(d\), feature vectors \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\), exploration rounds \(\tau\), error tolerance \(\epsilon\), failure probability \(\delta\).
2:for\(t=1,2,\ldots,\tau\)do
3: sample \(i_{t}\sim\mathrm{Uniform}([K])\), \(j_{t}\sim\mathrm{Uniform}([K])\)
4: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
5:endfor
6: Find the G-optimal design \(\pi(i,j)\) based on \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\)
7: Let \(N(i,j)=\left\lceil\frac{d\pi(i,j)}{\epsilon^{2}}\right\rceil\) for any \((i,j)\in\text{supp}(\pi)\), denote \(N=\sum_{i=1}^{K}\sum_{j=1}^{K}N(i,j)\)
8:for\(i\in[K]\), \(j\in[K]\), \(s\in[N(i,j)]\)do
9: set \(t\gets t+1\), set \((i_{t},j_{t})=(i,j)\)
10: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
11:endfor
12: Calculate the empirical MLE estimator \(\widehat{\mathbf{\theta}}_{\tau+N}\) based on all \(\tau+N\) samples via (2)
13: Estimate the Borda score for each item: \[\widehat{B}(i)=\frac{1}{K}\sum_{j=1}^{K}\mu(\mathbf{\phi}_{i,j}^{\top}\widehat{ \mathbf{\theta}}_{\tau+N}),\qquad\widehat{i}=\operatorname*{argmax}_{i\in[K]} \widehat{B}(i)\]
14: Keep querying \((\widehat{i},\widehat{i})\) for the rest of the time.
```
**Algorithm 3**BETC-GLM
```
1:Input: time horizon \(T\), number of items \(K\), feature dimension \(d\), feature vectors \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\), exploration rounds \(\tau\), error tolerance \(\epsilon\), failure probability \(\delta\).
2:for\(t=1,2,\ldots,\tau\)do
3: sample \(i_{t}\sim\mathrm{Uniform}([K])\), \(j_{t}\sim\mathrm{Uniform}([K])\)
4: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
5:endfor
6: Find the G-optimal design \(\pi(i,j)\) based on \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\)
7: Let \(N(i,j)=\left\lceil\frac{d\pi(i,j)}{\epsilon^{2}}\right\rceil\) for any \((i,j)\in\text{supp}(\pi)\), denote \(N=\sum_{i=1}^{K}\sum_{j=1}^{K}N(i,j)\)
18:for\(i\in[K]\), \(j\in[K]\), \(s\in[N(i,j)]\)do
19: set \(t\gets t+1\), set \((i_{t},j_{t})=(i,j)\)
20: query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
21:endfor
22: Calculate the empirical MLE estimator \(\widehat{\mathbf{\theta}}_{\tau+N}\) based on all \(\tau+N\) samples via (2)
23: Estimate the Borda score for each item: \[\widehat{B}(i)=\frac{1}{K}\sum_{j=1}^{K}\mu(\mathbf{\phi}_{i,j}^{\top}\widehat{\mathbf{ \theta}}_{\tau+N}),\qquad\widehat{i}=\operatorname*{argmax}_{i\in[K]}\widehat{B}(i)\]
24: Keep querying \((\widehat{i},\widehat{i})\) for the rest of the time.
```
**Algorithm 4**BETC-GLM
At the high level, Algorithm 1 can be divided into two phases: the exploration phase (Line 2-11) and the exploitation phase (Line 12-14). The exploration phase ensures that the MLE estimator \(\widehat{\mathbf{\theta}}\) is accurate enough so that the estimated Borda score is within \(\widehat{O}(\epsilon)\)-range of the true Borda score (ignoring other quantities). Then the exploitation phase simply chooses the empirical Borda winner to incur small regret.
During the exploration phase, the algorithm first performs "pure exploration" (Line 2-5), which can be seen as an initialization step for the algorithm. The purpose of this step is to ensure the design matrix \(\mathbf{V}_{\tau+N}=\sum_{t=1}^{\tau+N}\mathbf{\phi}_{i_{t},j_{t}}\mathbf{\phi}_{i_{t},j _{t}}^{\top}\) is positive definite.
After that, the algorithm will perform the "designed exploration". Line 6 will find the G-optimal design, which minimizes the objective function \(g(\pi)=\max_{i,j}\|\mathbf{\phi}_{i,j}\|_{\mathbf{\gamma}(\pi)^{-1}}^{2}\), where \(\mathbf{V}(\pi):=\sum_{i,j}\pi(i,j)\mathbf{\phi}_{i,j}\mathbf{\phi}_{i,j}^{\top}\). The G-optimal design \(\pi^{*}(\cdot)\) satisfies \(\|\mathbf{\phi}_{i,j}\|_{\mathbf{V}(\pi^{*})^{-1}}^{2}\leq d\), and can be efficiently approximated by the Frank-Wolfe algorithm (See Remark 8 for a detailed discussion). Then the algorithm will follow \(\pi(\cdot)\) found at Line 6 to determine how many samples (Line 7) are needed. At Line 8-11, there are in total \(N=\sum_{i=1}^{K}\sum_{j=1}^{K}N(i,j)\) samples queried, and the algorithm shall index them by \(t=\tau+1,\tau+2,\ldots,\tau+N\).
At Line 12, the algorithm collects all the \(\tau+N\) samples and performs the maximum likelihood estimation (MLE). For the generalized linear model, the MLE estimator \(\widehat{\mathbf{\theta}}_{\tau+N}\) satisfies:
\[\sum_{t=1}^{\tau+N}\mu(\mathbf{\phi}_{i_{t},j_{t}}^{\top}\widehat{\mathbf{\theta}}_{ \tau+N})\mathbf{\phi}_{i_{t},j_{t}}=\sum_{t=1}^{\tau+N}r_{t}\mathbf{\phi}_{i_{t},j_{t}}, \tag{2}\]
or equivalently, it can be determined by solving a strongly concave optimization problem:
\[\widehat{\mathbf{\theta}}_{\tau+N}\in\operatorname*{argmax}_{\mathbf{\theta}}\sum_{t=1 }^{\tau+N}\bigg{(}r_{t}\mathbf{\phi}_{i_{t},j_{t}}^{\top}\mathbf{\theta}-m(\mathbf{\phi}_{i _{t},j_{t}}^{\top}\mathbf{\theta})\bigg{)},\]
where \(\dot{m}(\cdot)=\mu(\cdot)\). For the logistic link function, \(m(x)=\log(1+e^{x})\). As a special case of our generalized linear model, the linear model has a closed-form solution for (2). For example, if \(\mu(x)=\frac{1}{2}+x\), i.e. \(p_{i,j}=\frac{1}{2}+\mathbf{\phi}_{i,j}^{\top}\mathbf{\theta}^{*}\), then (2) becomes:
\[\widehat{\mathbf{\theta}}_{\tau+N}=\mathbf{V}_{\tau+N}^{-1}\sum_{t=1}^{\tau+N}(r_ {t}-1/2)\mathbf{\phi}_{i_{t},j_{t}},\]
where \(\mathbf{V}_{\tau+N}=\sum_{t=1}^{\tau+N}\mathbf{\phi}_{i_{t},j_{t}}\mathbf{\phi}_{i_{t},j_{t}}^{\top}\).
After the MLE estimator is obtained, Line 13 will calculate the estimated Borda score \(\widehat{B}(i)\) for each item based on \(\widehat{\mathbf{\theta}}_{\tau+N}\), and pick the empirically best one.
### A Matching Regret Upper Bound
Algorithm 1 can be configured to tightly match the worst-case lower bound. The configuration and performance are described as follows:
**Theorem 5**.: Suppose Assumption 1-3 hold and \(T=\Omega(d^{2})\). For any \(\delta>0\), if we set \(\tau=C_{4}\lambda_{0}^{-2}(d+\log(1/\delta))\) (\(C_{4}\) is a universal constant) and \(\epsilon=d^{1/6}T^{-1/3}\), then with probability at least \(1-2\delta\), Algorithm 1 will incur regret bounded by:
\[O\Big{(}\kappa^{-1}d^{2/3}T^{2/3}\sqrt{\log\big{(}T/d\delta\big{)}}\Big{)}.\]
By setting \(\delta=T^{-1}\), the expected regret is bounded as \(\widetilde{O}(\kappa^{-1}d^{2/3}T^{2/3})\).
For linear bandit models, such as the hard-to-learn instances in Section 4, \(\kappa\) is a universal constant. Therefore, Theorem 5 tightly matches the lower bound in Theorem 4, up to logarithmic factors.
**Remark 6** (Regret for Fewer Arms).: In typical scenarios, the number of items \(K\) is not exponentially large in the dimension \(d\). In this case, we can choose a different parameter set of \(\tau\) and \(\epsilon\) such that Algorithm 1 can achieve a smaller regret bound \(\widetilde{O}\big{(}\kappa^{-1}(d\log K)^{1/3}T^{2/3}\big{)}\) with smaller dependence on the dimension \(d\). See Theorem 10 in Appendix A.2.
**Remark 7** (Regret for Infinitely Many Arms).: In most practical scenarios of dueling bandits, it is adequate to consider a finite number \(K\) of items (e.g., ranking items). Nonetheless, BETC-GLMcan be easily adapted to accommodate infinitely many arms in terms of regret. We can construct a covering over all \(\mathbf{\phi}_{i,j}\) and perform optimal design and exploration on the covering set. The resulting regret will be the same as our upper bound, i.e., \(\widetilde{O}(d^{2/3}T^{2/3})\) up to some error caused by the epsilon net argument.
**Remark 8** (Approximate G-optimal Design).: Algorithm 1 assumes an exact G-optimal design \(\pi\) is obtained. In the experiments, we use the Frank-Wolfe algorithm to solve the constraint optimization problem (See Algorithm 5, Appendix G.3). To find a policy \(\pi\) such that \(g(\pi)\leq(1+\varepsilon)g(\pi^{*})\), roughly \(O(d/\varepsilon)\) optimization steps are needed. Such a near-optimal design will introduce a factor of \((1+\varepsilon)^{1/3}\) into the upper bounds.
## 6 Adversarial Contextual Dueling Bandit
This section addresses Borda regret minimization under the adversarial setting. As we introduced in Section 3.1, the unknown parameter \(\mathbf{\theta}_{t}\) can vary for each round \(t\), while the contextual vectors \(\mathbf{\phi}_{i,j}\) are fixed.
Our proposed algorithm, BEXP3, is designed for the contextual linear model. Formally, at round \(t\) and given pair \((i,j)\), we have \(p^{t}_{i,j}=\frac{1}{2}+\langle\mathbf{\phi}_{i,j},\mathbf{\theta}^{*}_{t}\rangle\).
### Algorithm Description
```
1:Input: time horizon \(T\), number of items \(K\), feature dimension \(d\), feature vectors \(\mathbf{\phi}_{i,j}\) for \(i\in[K]\), \(j\in[K]\), learning rate \(\eta\), exploration parameter \(\gamma\).
2:Initialize:\(q_{1}(i)=\frac{1}{K}\).
3:for\(t=1,\ldots,T\)do
4: Sample items \(i_{t}\sim q_{t}\), \(j_{t}\sim q_{t}\).
5: Query pair \((i_{t},j_{t})\) and receive feedback \(r_{t}\)
6: Calculate \(Q_{t}=\sum_{i\in[K]}\sum_{j\in[K]}q_{t}(i)q_{t}(j)\mathbf{\phi}_{i,j}\mathbf{\phi}_{i,j }^{\top},\widehat{\mathbf{\theta}}_{t}=Q_{t}^{-1}\mathbf{\phi}_{i_{t},j_{t}}r_{t}\).
7: Calculate the (shifted) Borda score estimates \(\widehat{B}_{t}(i)=\langle\frac{1}{K}\sum_{j\in[K]}\mathbf{\phi}_{i,j},\widehat{ \mathbf{\theta}}_{t}\rangle\).
8: Update for all \(i\in[K]\), set \[\widetilde{q}_{t+1}(i)=\frac{\exp(\eta\sum_{l=1}^{t}\widehat{B}_{l}(i))}{\sum _{j\in[K]}\exp(\eta\sum_{l=1}^{t}\widehat{B}_{l}(j))};\hskip 28.452756ptq_{t+1}(i )=(1-\gamma)\widetilde{q}_{t+1}(i)+\frac{\gamma}{K}.\]
9:endfor
```
**Algorithm 2** BEXP3
Algorithm 2 is adapted from the DEXP3 algorithm in Saha et al. (2021), which deals with the adversarial multi-armed dueling bandit. Algorithm 2 maintains a distribution \(q_{t}(\cdot)\) over \([K]\), initialized as uniform distribution (Line 2). At every round \(t\), two items are chosen following \(q_{t}\) independently. Then Line 6 calculates the one-sample unbiased estimate \(\widehat{\mathbf{\theta}}_{t}\) of the true underlying parameter \(\mathbf{\theta}^{*}_{t}\). Line 7 further calculates the unbiased estimate of the (shifted) Borda score. Note that the true Borda score at round \(t\) satisfies \(B_{t}(i)=\frac{1}{2}+\langle\frac{1}{K}\sum_{j\in[K]}\mathbf{\phi}_{i,j},\mathbf{ \theta}^{*}_{t}\rangle\). \(\widehat{B}_{t}\) instead only estimates the second term of the Borda score. This is a choice to simplify the proof. The cumulative estimated score \(\sum_{t=1}^{t}\widehat{B}_{l}(i)\) can be seen as the estimated cumulative reward of item \(i\) at round \(t\). In Line 8, \(q_{t+1}\) is defined by the classic exponential weight update, along with a uniform exploration policy controlled by \(\gamma\).
### Upper Bounds
Algorithm 2 can also be configured to tightly match the worst-case lower bound:
**Theorem 9**.: Suppose Assumption 1 holds. If we set \(\eta=(\log K)^{2/3}d^{-1/3}T^{-2/3}\) and \(\gamma=\sqrt{\eta d/\lambda_{0}}=(\log K)^{1/3}d^{1/3}T^{-1/3}\lambda_{0}^{-1/2}\), then the expected regret is upper-bounded by
\[O\big{(}(d\log K)^{1/3}T^{2/3}\big{)}.\]
Note that the lower bound construction is for the linear model and has \(K=O(2^{d})\), thus exactly matching the upper bound.
Experiments
This section compares the proposed algorithm BETC-GLM with existing ones that are capable of minimizing Borda regret. We use random responses (generated from fixed preferential matrices) to interact with all tested algorithms. Each algorithm is run for 50 times over a time horizon of \(T=10^{6}\). We report both the mean and the standard deviation of the cumulative Borda regret and supply some analysis. The following list summarizes all methods we studies in this section, a more complete description of the methods and parameters are available in Appendix E: BETC-GLM(-Match): Algorithm 1 proposed in this paper with different parameters. UCB-Borda: The UCB algorithm (Auer et al., 2002) using _Borda reduction_. DEXP3: Dueling-Exp3 developed by Saha et al. (2021a). ETC-Borda: A simple explore-then-commit algorithm that does not take any contextual information into account. BEXP3: The proposed method for adversarial Borda bandits displayed in Algorithm 2.
Generated Hard CaseWe first test the algorithms on the hard instances constructed in Section 4. We generate \(\mathbf{\theta}^{*}\) randomly from \(\{-\Delta,+\Delta\}^{d}\) with \(\Delta=\frac{1}{4d}\) so that the comparison probabilities \(p_{i,j}^{\mathbf{\theta}^{*}}\in[0,1]\) for all \(i,j\in[K]\). We pick the dimension \(d=6\) and the number of arms is therefore \(K=2^{d+1}=128\). Note the dual usage of \(d\) in our construction and the model setup in Section 3.1. We refer readers to Remark 11 in Appendix B for more details.
As depicted in Figure 1(a), the proposed algorithms (BETC-GLM, BEXP3) outperform the baseline algorithms in terms of cumulative regret when reaching the end of time horizon \(T\). For UCB-Borda, since it is not tailored for the dueling regret definition, it suffers from a linear regret as its second arm is always sampled uniformly at random, leading to a constant regret per round. DEXP3 and ETC-Borda are two algorithms designed for \(K\)-armed dueling bandits. Both are unable to utilize contextual information and thus demand more exploration. As expected, their regrets are higher than BETC-GLM or BEXP3.
Real-world DatasetTo showcase the performance of the algorithms in a real-world setting, we use EventTime dataset (Zhang et al., 2016). In this dataset, \(K=100\) historical events are compared in a pairwise fashion by crowd-sourced workers. We first calculate the empirical preference probabilities \(\widetilde{p}_{i,j}\) from the collected responses, and construct a generalized linear model based on the empirical preference probabilities. The algorithms are tested under this generalized linear model. Due to space limitations, more details are deferred to Appendix F.
As depicted in Figure 1(b), the proposed algorithm BETC-GLM outperforms the baseline algorithms in terms of cumulative regret when reaching the end of time horizon \(T\). The other proposed algorithm BEXP3 performs equally well even when misspecified (the algorithm is designed for linear setting, while the comparison probability follows a logistic model).
## 8 Conclusion and Future Work
In this paper, we introduced Borda regret into the generalized linear dueling bandits setting, along with an explore-then-commit type algorithm BETC-GLM and an EXP3 type algorithm BEXP3. The algorithms can achieve a nearly optimal regret upper bound, which we corroborate with a matching lower bound. The theoretical performance of the algorithms is verified empirically. It demonstrates superior performance compared to other baseline methods.
For future works, due to the fact that our exploration scheme guarantees an accurate estimate in all directions, our work can be extended to solve the top-k recovery or ranking problem, as long as a proper notion of regret can be identified.
Figure 2: The regret of the proposed algorithms (BETC-GLM, BEXP3) and the baseline algorithms (UCB-Borda, DEXP3, ETC-Borda).
## References
* Abbasi-Yadkori et al. (2011)Abbasi-Yadkori, Y., Pal, D. and Szepesvari, C. (2011). Improved algorithms for linear stochastic bandits. In _NIPS_.
* Auer et al. (2002)Auer, P., Cesa-Bianchi, N. and Fischer, P. (2002). Finite-time analysis of the multiarmed bandit problem. _Machine Learning_**47** 235-256.
* Balsubramani et al. (2016)Balsubramani, A., Karnin, Z., Schapire, R. E. and Zoghi, M. (2016). Instance-dependent regret bounds for dueling bandits. In _Conference on Learning Theory_. PMLR.
* Bengs et al. (2021)Bengs, V., Busa-Fekete, R., El Mesaoudi-Paul, A. and Hullermeier, E. (2021). Preference-based online learning with dueling bandits: A survey. _Journal of Machine Learning Research_**22** 7-1.
* Busa-Fekete et al. (2018)Busa-Fekete, R., Hullermeier, E. and Mesaoudi-Paul, A. E. (2018). Preference-based online learning with dueling bandits: A survey. _ArXiv_**abs/1807.11398**.
* Chen et al. (2013)Chen, X., Bennett, P. N., Collins-Thompson, K. and Horvitz, E. (2013). Pairwise ranking aggregation in a crowdsourced setting. In _Proceedings of the sixth ACM international conference on Web search and data mining_.
* Dani et al. (2008)Dani, V., Hayes, T. P. and Kakade, S. M. (2008). Stochastic linear optimization under bandit feedback. In _Annual Conference Computational Learning Theory_.
* Dudik et al. (2015)Dudik, M., Hofmann, K., Schapire, R. E., Slivkins, A. and Zoghi, M. (2015). Contextual dueling bandits. _ArXiv_**abs/1502.06362**.
* Even-Dar et al. (2002)Even-Dar, E., Mannor, S. and Mansour, Y. (2002). Pac bounds for multi-armed bandit and markov decision processes. In _Annual Conference Computational Learning Theory_.
* Falahatgar et al. (2017a)Falahatgar, M., Hao, Y., Orlitsky, A., Pichapati, V. and Ravindrakumar, V. (2017a). Maxing and ranking with few assumptions. _Advances in Neural Information Processing Systems_**30**.
* Falahatgar et al. (2018)Falahatgar, M., Jain, A., Orlitsky, A., Pichapati, V. and Ravindrakumar, V. (2018). The limits of maxing, ranking, and preference learning. In _International conference on machine learning_. PMLR.
* Falahatgar et al. (2017b)Falahatgar, M., Orlitsky, A., Pichapati, V. and Suresh, A. T. (2017b). Maximum selection and ranking under noisy comparisons. In _International Conference on Machine Learning_. PMLR.
* Faury et al. (2020)Faury, L., Abeille, M., Calauzenes, C. and Fercoq, O. (2020). Improved optimistic algorithms for logistic bandits. In _International Conference on Machine Learning_. PMLR.
* Filippi et al. (2010)Filippi, S., Cappe, O., Garivier, A. and Szepesvari, C. (2010). Parametric bandits: The generalized linear case. _Advances in Neural Information Processing Systems_**23**.
* Heckel et al. (2018)Heckel, R., Simchowitz, M., Ramchandran, K. and Wainwright, M. (2018). Approximate ranking from pairwise comparisons. In _International Conference on Artificial Intelligence and Statistics_. PMLR.
* Jamieson et al. (2015)Jamieson, K., Katariya, S., Deshpande, A. and Nowak, R. (2015). Sparse dueling bandits. In _Artificial Intelligence and Statistics_. PMLR.
* Jun et al. (2017)Jun, K.-S., Bhargava, A., Nowak, R. and Willett, R. (2017). Scalable generalized linear bandits: Online computation and hashing. _Advances in Neural Information Processing Systems_**30**.
* Komiyama et al. (2016)Komiyama, J., Honda, J. and Nakagawa, H. (2016). Copeland dueling bandit problem: Regret lower bound, optimal algorithm, and computationally efficient algorithm. In _International Conference on Machine Learning_. PMLR.
* Kuleshov and Precup (2014)Kuleshov, V. and Precup, D. (2014). Algorithms for multi-armed bandit problems. _arXiv preprint arXiv:1402.6028_.
* Lai et al. (1985)Lai, T. L., Robbins, H. et al. (1985). Asymptotically efficient adaptive allocation rules. _Advances in applied mathematics_**6** 4-22.
* Lattimore and Szepesvari (2020)Lattimore, T. and Szepesvari, C. (2020). _Bandit Algorithms_. Cambridge University Press.
* Li et al. (2017)Li, L., Lu, Y. and Zhou, D. (2017). Provably optimal algorithms for generalized linear contextual bandits. In _International Conference on Machine Learning_. PMLR.
* Liu et al. (2017)Liu, C., Jin, T., Hoi, S. C. H., Zhao, P. and Sun, J. (2017). Collaborative topic regression for online recommender systems: an online and bayesian approach. _Machine Learning_**106** 651-670.
* Lou et al. (2022)Lou, H., Jin, T., Wu, Y., Xu, P., Gu, Q. and Farnoud, F. (2022). Active ranking without strong stochastic transitivity. _Advances in neural information processing systems_.
* Minka et al. (2018)Minka, T. P., Cleven, R. and Zaykov, Y. (2018). Trueskill 2: An improved bayesian skill rating system.
* Ramamohan et al. (2016)Ramamohan, S., Rajkumar, A. and Agarwal, S. (2016). Dueling bandits: Beyond condorcet winners to general tournament solutions. In _NIPS_.
* Ren et al. (2019)Ren, W., Liu, J. K. and Shroff, N. (2019). On sample complexity upper and lower bounds for exact ranking from noisy comparisons. _Advances in Neural Information Processing Systems_**32**.
* Rusmevichientong and Tsitsiklis (2010)Rusmevichientong, P. and Tsitsiklis, J. N. (2010). Linearly parameterized bandits. _Mathematics of Operations Research_**35** 395-411.
* Saha (2021)Saha, A. (2021). Optimal algorithms for stochastic contextual preference bandits. _Advances in Neural Information Processing Systems_**34** 30050-30062.
* Saha et al. (2021a)Saha, A., Koren, T. and Mansour, Y. (2021a). Adversarial dueling bandits. _ArXiv_**abs/2010.14563**.
* Saha et al. (2021b)Saha, A., Koren, T. and Mansour, Y. (2021b). Adversarial dueling bandits. In _International Conference on Machine Learning_. PMLR.
* Sui and Burdick (2014)Sui, Y. and Burdick, J. (2014). Clinical online recommendation with subgroup rank feedback. In _Proceedings of the 8th ACM conference on recommender systems_.
* Urvoy et al. (2013)Urvoy, T., Clerot, F., Feraud, R. and Naamane, S. (2013). Generic exploration and k-armed voting bandits. In _ICML_.
* Vasile et al. (2016)Vasile, F., Smirnova, E. and Conneau, A. (2016). Meta-prod2vec: Product embeddings using side-information for recommendation. In _Proceedings of the 10th ACM conference on recommender systems_.
* Wang et al. (2018)Wang, J., Huang, P., Zhao, H., Zhang, Z., Zhao, B. and Lee (2018). Billion-scale commodity embedding for e-commerce recommendation in alibaba. _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_.
* Wu and Liu (2016)Wu, H. and Liu, X. (2016). Double thompson sampling for dueling bandits. _ArXiv_**abs/1604.07101**.
* Wu et al. (2022)Wu, Y., Jin, T., Lou, H., Xu, P., Farnoud, F. and Gu, Q. (2022). Adaptive sampling for heterogeneous rank aggregation from noisy pairwise comparisons. In _International Conference on Artificial Intelligence and Statistics_. PMLR.
* Yue et al. (2012)Yue, Y., Broder, J., Kleinberg, R. D. and Joachims, T. (2012). The k-armed dueling bandits problem. _J. Comput. Syst. Sci._**78** 1538-1556.
* Yue and Joachims (2009)Yue, Y. and Joachims, T. (2009). Interactively optimizing information retrieval systems as a dueling bandits problem. In _Proceedings of the 26th Annual International Conference on Machine Learning_.
* Yue and Joachims (2011)Yue, Y. and Joachims, T. (2011). Beat the mean bandit. In _International Conference on Machine Learning_.
* Zhang et al. (2016)Zhang, X., Li, G. and Feng, J. (2016). Crowdsourced top-k algorithms: An experimental evaluation. _Proc. VLDB Endow._**9** 612-623.
* Zhu et al. (2012)Zhu, J., Ahmed, A. and Xing, E. P. (2012). Medlda: maximum margin supervised topic models. _J. Mach. Learn. Res._**13** 2237-2278.
* Zoghi et al. (2015)Zoghi, M., Karnin, Z. S., Whiteson, S. and de Rijke, M. (2015). Copeland dueling bandits. In _NIPS_.
* Zoghi et al. (2014)Zoghi, M., Whiteson, S., Munos, R. and de Rijke, M. (2014a). Relative upper confidence bound for the k-armed dueling bandit problem. _ArXiv_**abs/1312.3393**.
* Zoghi et al. (2014b)Zoghi, M., Whiteson, S., Munos, R. and Rijke, M. (2014b). Relative upper confidence bound for the k-armed dueling bandit problem. In _International conference on machine learning_. PMLR. | ## Review
### Summary
This paper investigates the problem of minimizing Borda regret within the context of dueling bandits, specifically under a generalized linear model where each arm pair is associated with a context vector. The authors develop algorithms for both stochastic and adversarial settings, demonstrating that their solutions achieve asymptotic Borda regret that aligns with established lower bounds. While the paper presents some novel contributions, particularly in the form of new lower bounds and empirical results, concerns have been raised about the originality and novelty of the algorithms proposed. Overall, the paper addresses an interesting and significant problem in the field, although some aspects could benefit from further clarification and innovation.
### Strengths
- The paper is well written and easy to read.
- The authors provide a thorough treatment of previous works, illustrating the novelty of their results.
- The proposed algorithms match upper bounds and present interesting constructions for hard instances.
- Experimental results demonstrate empirical performance that outperforms state-of-the-art methods.
- The presentation of algorithms and proofs is reader-friendly and enhances clarity.
### Weaknesses
- The originality of the algorithms is limited, as they appear to be straightforward extensions of existing approaches.
- Concerns about the experimental setup and the convincing nature of the results.
- Some sections vital for understanding the novelty are missing from the main text.
- There are several minor typos and clarity issues present throughout the paper.
### Questions
- Have the authors considered relaxed assumptions that could provide insights into optimality for explore-then-commit algorithms?
- Does the hard instance in Remark 11 fulfill all assumptions 1-3, and if not, could another instance be provided?
- What makes the link function particularly challenging in the context of the BEXP3 algorithm?
### Soundness
**Score:** 3
**Description:** 3 = good; The theoretical arguments are generally sound, but some aspects of the algorithms’ originality and validation could be better articulated.
### Presentation
**Score:** 4
**Description:** 4 = excellent; The paper is well-structured and mostly clear, yet minor issues and typos slightly detract from overall clarity.
### Contribution
**Score:** 3
**Description:** 3 = good; The paper contributes to an important area with notable results, but the novelty of the algorithms is questionable.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; The paper is technically solid with moderate-to-high impact, but it requires further refinement and clarification in certain areas.
### Paper Decision
**Decision:** Reject
**Reasons:** The decision to reject is based on the lack of originality in the contributions and the concern that the techniques presented largely aggregate existing literature without significant innovation. The paper does not sufficiently meet the standards for acceptance at NeurIPS. Authors are encouraged to address the reviewers' comments and resubmit to a more suitable venue.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Undirected Probabilistic Model for Tensor Decomposition
Zerui Tao\({}^{1,2}\) Toshihisa Tanaka\({}^{1,2}\) Qibin Zhao\({}^{2,1}\)
[email protected] [email protected] [email protected]
\({}^{1}\)Tokyo University of Agriculture and Technology \({}^{2}\)RIKEN AIP
Corresponding author
###### Abstract
Tensor decompositions (TDs) serve as a powerful tool for analyzing multiway data. Traditional TDs incorporate prior knowledge about the data into the model, such as a directed generative process from latent factors to observations. In practice, selecting proper structural or distributional assumptions beforehand is crucial for obtaining a promising TD representation. However, since such prior knowledge is typically unavailable in real-world applications, choosing an appropriate TD model can be challenging. This paper aims to address this issue by introducing a flexible TD framework that discards the structural and distributional assumptions, in order to learn as much information from the data. Specifically, we construct a TD model that captures the joint probability of the data and latent tensor factors through a deep energy-based model (EBM). Neural networks are then employed to parameterize the joint energy function of tensor factors and tensor entries. The flexibility of EBM and neural networks enables the learning of underlying structures and distributions. In addition, by designing the energy function, our model unifies the learning process of different types of tensors, such as static tensors and dynamic tensors with time stamps. The resulting model presents a doubly intractable nature due to the presence of latent tensor factors and the unnormalized probability function. To efficiently train the model, we derive a variational upper bound of the conditional noise-contrastive estimation objective that learns the unnormalized joint probability by distinguishing data from conditional noises. We show advantages of our model on both synthetic and several real-world datasets.
## 1 Introduction
Tensor decompositions (TDs) serve as powerful tools for analyzing high-order and high-dimensional data, aiming to capture the inter-dependencies among different modes by utilizing multiple latent factors. TDs have demonstrated remarkable success in various machine learning tasks, including data imputation [53, 9], factor analysis [4], time-series forecasting [27], model compression [28, 41], generative models [10, 20] among others.
Existing TDs typically incorporate predefined directed graphical models into the generative process. These models specify the priors of latent factors and the conditional probabilities of observations, following specific contraction rules associated with the latent factors. Traditional contraction rules predominantly employ multi-linear products, like CP [15], Tucker [42], tensor train [29] and other variants [19, 6]. However, selecting an appropriate contraction rule for specific datasets is often challenging in real-world applications. Recent research, known as tensor network structure search [TNSS, 22, 23], has demonstrated that selecting an appropriate TN contraction rule significantly enhances the factorization performance. Another promising approach involves learning non-linear mappings from the data, utilizing techniques like nonparametric models [5, 45, 53] and deep neural networks[25; 9]. Empirical results demonstrate that non-linear TDs exhibit superior performance compared to traditional multi-linear TDs in various applications, attributed to their enhanced expressive power.
Despite the success of non-linear TDs in reducing structural assumptions, they often rely on simplistic distributional assumptions. Typically, a specific directed graphical model is adopted to model the generative process from latent factors to tensor entries, represented as \(p(x)=\int p(x\mid z)p(z)\,\mathrm{d}z\), where \(z\) denotes tensor factors and \(x\) represents observations. Additionally, the distributions are usually selected from exponential families for tractability, such as Gaussian and Bernoulli distributions. For instance, a Gaussian prior can be assigned to latent factors, and observed entries can be modeled using Gaussian distribution [32; 49] or Gaussian process [45; 53]. However, these prior assumptions regarding the probabilistic model can introduce model bias and reduce the effectiveness of TD models. In real-world applications, the latent factors might originate from unknown distributions, and the observations can exhibit complex multi-modal generative processes. Without knowing the underlying generative process, these simplistic assumptions can lead to inaccurate estimations.
To address these issues, this paper proposes to construct an undirected graphical model of TD. More specifically, a TD model that captures the joint probability of the data and latent tensor factors is constructed through a deep energy-based model (EBM), represented as \(p(x,z)\approx\exp(-f(x,z))\). Neural networks (NNs) are then employed to parameterize the joint energy function \(f(x,z)\). The flexibility of EBM and NNs facilitates the learning of underlying structures and distributions. Furthermore, our model unifies the learning process in the presence of side information, such as dynamic tensors with time stamps, by designing the energy function. The resulting model presents a doubly intractable nature due to the presence of latent tensor factors and the unnormalized probability density function (pdf). For efficient model training, we derive a variant of conditional noise-contrastive estimation [3] algorithm that learns the unnormalized joint probability by distinguishing data from conditional noises. The proposed model offers several advantages: (1) it features a flexible structure that can adapt to different distributions; (2) the undirected nature allows us to learn more general correlations than traditional directed TDs; (3) it can handle diverse tasks and encode auxiliary information by adjusting the energy function.
Experiments are conducted on synthetic and real-world datasets to showcase the advantages of our model. Through simulation studies, we demonstrate the capability of our model to handle data generated from diverse distributions, in contrast to traditional Gaussian-based models that yield unfaithful and biased estimates. Subsequently, experiments are performed on multiple real-world datasets to evaluate sparse and continuous-time tensor completion. Our model outperforms various baselines across multiple metrics and settings, highlighting the generality of the proposed model.
## 2 Backgrounds
NotationsWe adopt similar notations with [19]. Throughout the paper, we use lowercase letters, bold lowercase letters, bold capital letters and calligraphic bold capital letters to represent scalars, vectors, matrices and tensors, _e.g._, \(x\), \(\mathbf{x}\), \(\mathbf{X}\) and \(\mathbf{\mathcal{X}}\). Tensors refer to multi-way arrays which generalize matrices. For a \(D\)-order tensor \(\mathbf{\mathcal{X}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{D}}\), we denote its \((i_{1},\ldots,i_{D})\)-th entry as \(x_{\mathbf{i}}\).
### Tensor decomposition
Given a \(D\)-order tensor \(\mathbf{\mathcal{X}}\in\mathbb{R}^{I_{1}\times\cdots\times I_{D}}\), tensor decomposition (TD) aims to factorize \(\mathbf{\mathcal{X}}\) into \(D\) smaller latent factors \(\mathbf{Z}^{d=1,\ldots,D}\in\mathbb{R}^{I_{d}\times R_{d}}\) by using some predefined tensor contraction rules. The classical Tucker decomposition [42] assumes \(\mathbf{\mathcal{X}}=\mathbf{\mathcal{W}}\times_{1}\mathbf{Z}^{1}\times_{2}\cdots\times_{D }\mathbf{Z}^{D}\), where \(\mathbf{\mathcal{W}}\in\mathbb{R}^{R_{1}\times\cdots\times R_{D}}\) is the coefficient and \(\times_{d}\) denotes the matrix-tensor contraction [19]. Equivalently, each entry can be written as \(x_{\mathbf{i}}=\sum_{r=1}^{R_{1}}\cdots\sum_{r=1}^{R_{D}}w_{r_{1}\ldots r_{D} }z_{i_{1}r_{1}}^{1}\cdots z_{i_{D}r_{D}}^{D}\), where the tuple \((R_{1},\ldots,R_{D})\) is the Tucker rank of tensor \(\mathbf{\mathcal{X}}\). The latent factors \(\mathbf{Z}^{d}\) can capture information of each tensor mode and \(\mathbf{\mathcal{W}}\) represents the weight of each factors. CP decomposition [15] is a restricted form of Tucker by assuming \(\mathbf{\mathcal{W}}\) is super-diagonal, _i.e._, \(x_{\mathbf{i}}=\sum_{r=1}^{R}w_{r}z_{i_{1}r_{1}}^{1}\cdots z_{i_{D}r_{D}}^{D}\), where we simplify \(w_{r}=w_{r\ldots r}\). In this paper, we focus on probabilistic version of TDs, which serves as generalizations of traditional ones. The standard approach is to formulate TDs as a directed graphical model, \(p(\mathbf{\mathcal{X}})=\int p(\mathbf{\mathcal{X}}\mid\mathbf{Z})p(\mathbf{Z})\,\mathrm{d} \mathbf{Z}\), where \(\mathbf{Z}\) denotes \(\{\mathbf{Z}^{1},\ldots,\mathbf{Z}^{D}\}\) for simplicity. For continuous data, the \(p(\mathbf{\mathcal{X}}\mid\mathbf{Z})\) is usually assumed to be Gaussian and TDs are used to parameterize the mean of corresponding Gaussian distribution [32; 49; 50].
Despite the elegant form of these multi-linear contraction rules, they have limited flexibility that can be mitigated by extending TDs to their non-linear counterparts. We can think of TD as a function that maps the multiway latent factors to tensor entries. One extension is to add Gaussian process (GP) priors on the function to obtain a nonparametric model, which resembles a GP latent variable model. In particular, [53] proposed to stack the latent factors as \(\mathbf{m_{\mathbf{i}}}=[\mathbf{z}_{i_{1}}^{1},\ldots,\mathbf{z}_{i_{D}}^{D}]\in\mathbb{R}^ {DR}\) and then assign a GP prior on the functional mapping. In specific, for continuous data, it assumes \(x_{\mathbf{i}}\sim\mathcal{N}(\mu_{\mathbf{i}},\sigma^{2})\), where the mean function is a GP \(\mu_{\mathbf{i}}=f(\mathbf{m_{\mathbf{i}}})\sim\mathcal{GP}(0,k(\mathbf{m_{\mathbf{i}} },\cdot))\) associated with kernel \(k(\cdot,\cdot)\). Since GP has large computational complexity and designing the kernel requires ad-hoc expert domain knowledge, [25] proposed to parameterize the function using neural networks (NNs), \(x_{\mathbf{i}}\sim\mathcal{N}(\mu_{\mathbf{i}},\sigma^{2})\) where \(\mu_{\mathbf{i}}=f_{\mathrm{NN}}(\mathbf{m_{\mathbf{i}}})\). However, NNs easily overfit due to the high-dimensional and sparse nature of tensor data. To address this issue, [9] proposed to use Bayesian NN with spike-and-slab prior for sparse weights. All these models are based on directed graphical model, that assumes there exists a direct mapping from the latent factors to tensor entries and use simplistic distributions.
### Energy-based model
Energy-based model [EBM, 21] is a class of undirected probabilistic models, which uses an energy function to characterize the data distribution. Given observed data \(x\), the basic idea of EBM is to approximate the data distribution by a Boltzmann distribution, \(p_{\mathrm{data}}(x)\approx\frac{\exp(-f(x;\theta))}{Z(\theta)}\), where \(Z(\theta)=\int\exp(-f(x;\theta))\,\mathrm{d}x\) is the normalization constant (a.k.a., the partition function) and \(f(x;\theta)\) is the energy function. One classical example of EBM is the restricted Boltzmann machine (RBM) where the energy function has bi-linear form for tractability. In deep EBMs, the energy function is typically parameterized by deep neural networks. The main difficulty for training EBMs is to deal with the intractable normalization constant [36]. There are several ways to train EBMs, including contrastive divergence [14], score matching [16], noise-contrastive estimation [NCE, 11] and so on. In this paper, we focus on NCE, due to its efficiency and ability of tackling different data types.
We denote the unnormalized pdf as \(\phi(x;\theta)=\exp(-f(x;\theta))\). NCE consider the normalization constant \(Z(\theta)\) as a trainable parameter. However, maximum likelihood estimation (MLE) does not work for this case since \(Z(\theta)\) can be arbitrarily small and the log-likelihood goes to infinity. Instead, the NCE can be obtained by maximizing the following objective,
\[\mathcal{L}_{\mathrm{NCE}}(\theta)=\mathbb{E}_{x}\log h(x;\theta)+\nu\mathbb{E} _{y}\log(1-h(y;\theta)),\]
where \(x\) denotes observed data and \(y\) denotes noises generated from some known distribution \(p_{n}\), \(\nu\) is the ratio between noise and sample sizes, _i.e._, \(\nu=\#y/\#x\). And \(h(\cdot)\) can be regarded as a classifier that distinguish data from noises, defined as follows, \(h(u;\theta)=\frac{\phi(u;\theta)}{\phi(u;\theta)+\nu p_{n}(u)}\). It has been shown that NCE is consistent with MLE [11].
Although the noise distribution is essential for training efficiency, the selection of noises is currently limited to heuristics. A common intuitive is that the noise should be similar with the data. Indeed, [3] proposed to use conditional noises \(y\sim p_{c}(y\mid x)\) and minimizing the following loss function,
\[\mathcal{L}_{\mathrm{CNCE}}(\theta)=2\mathbb{E}_{xy}\log[1+\exp(-G(x,y))], \tag{1}\]
where \(G(u_{1},u_{2};\theta)=\log\frac{\phi(u_{1};\theta)p_{c}(u_{2}|u_{1})}{\phi(u_{ 2};\theta)p_{c}(u_{1}|u_{2})}\) with \(y\) drawn from \(p_{c}(y\mid x)\).
## 3 Proposed model
### Energy-based tensor decomposition
Even though many non-linear tensor decompositions (TD) have been proposed to enhance flexibility, existing methods typically adopt simplistic distributions such as Gaussian. This can be problematic for complex real-world data. To address the issue, we propose energy-based tensor decomposition (EnergyTD), by integrating EBMs in TD framework. Given an order-\(D\) tensor \(\mathbf{X}\) of shape \(I_{1}\times\cdots\times I_{D}\), we aim to factorize it into \(D\) smaller latent factors \(\mathbf{Z}^{d}\in\mathbb{R}^{I_{d}\times R},\forall d=1,\ldots,D\). We denote the latent factor associated with the \((i_{1},\ldots,i_{D})\)-th entry as \(\mathbf{m_{\mathbf{i}}}=[\mathbf{z}_{i_{1}}^{1},\ldots,\mathbf{z}_{i_{D}}^{D}]\in\mathbb{ R}^{DR}\), where \(\mathbf{z}_{i_{d}}^{d}\in\mathbb{R}^{R}\) is the \(i_{d}\)-th row of \(\mathbf{Z}^{d}\). Unlike traditional directed TDs trying to parameterize the conditional expectation \(\mathbb{E}[x_{\mathbf{i}}\mid\mathbf{m_{\mathbf{i}}}]=f(\mathbf{m_{\mathbf{i}}})\), we model the joint distribution using an EBM,
\[p(x_{\mathbf{i}},\mathbf{m_{\mathbf{i}}};\theta)=\frac{\exp(-f(x_{\mathbf{i}},\mathbf{ m_{\mathbf{i}}};\theta))}{Z(\theta)}, \tag{2}\]where \(f(\cdot,\cdot;\theta)\) is the energy function and \(Z(\theta)=\int\exp(-f(x_{\mathbf{i}},\mathbf{m}_{\mathbf{i}};\theta))\,\mathrm{d}x_{ \mathbf{i}}\,\mathrm{d}\mathbf{m}_{\mathbf{i}}\) is the partition function to make it a valid pdf. We further assume the joint probability of all entries are independent, _i.e._, \(p(\mathbf{\mathcal{X}},\mathbf{m})=\prod_{\mathbf{i}\in\Omega}p(x_{\mathbf{i}},\mathbf{z}_{ \mathbf{i}_{\mathbf{i}}}^{1},\ldots,\mathbf{z}_{\mathbf{i}_{\mathbf{i}_{\mathbf{i} }}}^{D})\), where \(\Omega\) denotes the set of observed entries. This is a standard setting in TDs and the dependence of tensor entries can be captured by sharing latent factors.
The expressive nature of the energy function enables us to easily handle diverse data types. For example, we can deal with discrete data by plugging one-hot codings into Eq. (2) to represent categorical probabilities. Additionally, the flexibility of NNs allows us to model tensors with side information, where each tensor entry incorporates additional features [34]. Specifically, in this paper, we focus on a particular case of dynamic tensors with continuous time stamps [48]. In this case, we consider an observed tensor as a time series \(\mathbf{\mathcal{X}}_{t}\), where the time stamp \(t\) is continuous, and each entry \(\mathbf{i}\) has its own specific time stamp \(t_{\mathbf{i}}\). To model the tensor time series, we assume that each entry follows the same distribution and construct the time-dependent energy function, \(p(x_{\mathbf{i}},\mathbf{m}_{\mathbf{i}};\theta,t_{\mathbf{i}})\propto\exp(-f(x_{ \mathbf{i}},\mathbf{m}_{\mathbf{i}},t_{\mathbf{i}};\theta))\), where the time stamp \(t_{\mathbf{i}}\) is considered as an auxiliary feature. The flexibility of NNs allows this function to learn general patterns across continuous time stamps. Experimental results demonstrate that this simple treatment can achieve good performances.
Network architectureThe network architecture plays a crucial role in learning accurate probabilistic manifolds. Specifically, we define the energy function as \(f(x_{\mathbf{i}},\mathbf{m}_{\mathbf{i}})=g_{1}(g_{2}(g_{3}(x_{\mathbf{i}}),g_{4}( \mathbf{m}_{\mathbf{i}})))\), where \(g_{3}\) and \(g_{4}\) are MLP layers that encode information from \(x_{\mathbf{i}}\) and \(\mathbf{m}_{\mathbf{i}}\), respectively. \(g_{2}\) is a summation or concatenation layer that induce coupling between tensor values and latent factors, and \(g_{1}\) is the output layer. Although we currently utilize only MLPs, it is worth noting that convolutional architectures, as demonstrated by [39, 25], can also be employed, which is a topic for future research. To handle dynamic tensors, we incorporate an extra sinusoidal positional encoding layer [37] denoted as \(g_{5}(t)\) to capture temporal information. This embedding utilizes random Fourier features as proposed by [31]. Consequently, the energy function can be expressed as \(f(x_{\mathbf{i}},\mathbf{m}_{\mathbf{i}},t_{\mathbf{i}})=g_{1}(g_{2}(g_{3}(x_{ \mathbf{i}}),g_{4}(\mathbf{m}_{\mathbf{i}}),g_{5}(t_{\mathbf{i}})))\). This architecture is commonly employed to capture temporal information and has been demonstrated to effectively learn high-frequency information when combined with MLPs [37].
Posterior samplingA significant application of TDs is to estimate the posterior of missing entries. Unlike traditional TDs, direct predictions cannot be obtained even after learning the latent factors due to the utilization of an undirected probabilistic model. Instead, we need to seek for sampling methods of \(p(x_{\mathbf{i}}\mid\mathbf{m}_{\mathbf{i}})\). One choice is score-based samplers by utilizing the score function \(\mathbb{V}_{x_{\mathbf{i}}}\log p(x_{\mathbf{i}}\mid\mathbf{m}_{\mathbf{i}})= \mathbb{V}_{x_{\mathbf{i}}}\log\frac{p(x_{\mathbf{i}},\mathbf{m}_{\mathbf{i}})}{p( \mathbf{m}_{\mathbf{i}})}=-\mathbb{V}_{x_{\mathbf{i}}}f(x_{\mathbf{i}},\mathbf{m}_{ \mathbf{i}})\), such as Langevin dynamics [44]. Score-based samplers are not suitable for handling discrete data. However, in our case, we model the one-dimensional pdf for each entry, enabling us to directly sample the discrete data. Consequently, for continuous data, the use of grid search is a viable approach to obtain maximum a posteriori (MAP) estimations.
### Learning objective
Despite the flexibility of the proposed model in Eq. (2), obtaining maximum likelihood estimation (MLE) becomes doubly intractable, as both the partition function \(Z(\theta)\) and the marginal distribution \(p(x_{\mathbf{i}})\) are intractable. Therefore, the CNCE loss Eq. (1) cannot be directly applied. In this section, we extend the variational approach [33] to construct a upper bound that addresses the challenge posed by intractable marginal distributions.
Denote the unnormalized pdf as \(\phi(x_{\mathbf{i}},\mathbf{m}_{\mathbf{i}};\theta)=\exp(-f(x_{\mathbf{i}},\mathbf{m} _{\mathbf{i}};\theta))\) and the unnormalized marginal pdf as \(\phi(x_{\mathbf{i}};\theta)=\int\phi(x_{\mathbf{i}},\mathbf{m}_{\mathbf{i}};\theta )\,\mathrm{d}\mathbf{m}_{\mathbf{i}}\). For clarity, we omit the index \(\mathbf{i}\) in the subsequent context of this subsection. We follow the idea of CNCE [3] to distinguish data \(x\) from conditional noises \(y\sim p_{c}(y\mid x)\). Firstly, Eq. (1) can be rewritten as
\[\mathcal{L}_{\mathrm{CNCE}}(\theta)=2\mathbb{E}_{xy}\log[1+1/r(x,y;\theta)], \tag{3}\]
where
\[r(x,y;\theta)=\frac{\phi(x;\theta)p_{c}(y\mid x)}{\phi(y;\theta)p_{c}(x\mid y)}. \tag{4}\]
However, for our problem, the unnormalized marginal probability \(\phi(x;\theta)\) is unknown. An additional variational distribution \(q(\mathbf{m};\varphi)\) is used to approximate the true posterior \(p(\mathbf{m}\mid\mathbf{\mathcal{X}};\theta)\). Note that in TDs where the data size is static, there is no need for amortized inference, which is different from previous ones like [2]. Equipped with the variational distribution, the unnormalized marginal distribution can be computed using importance sampling,
\[\phi(x;\theta)=\int\frac{\phi(x,\mathbf{m};\theta)q(\mathbf{m};\varphi)}{q(\mathbf{m};\varphi) }\,\mathrm{d}\mathbf{m}=\mathbb{E}_{q(\mathbf{m};\varphi)}\left[\frac{\phi(x,\mathbf{m}; \theta)}{q(\mathbf{m};\varphi)}\right]. \tag{5}\]
Plugging Eq. (5) into Eq. (4), we have
\[r(x,y;\theta)=\frac{\mathbb{E}_{q(\mathbf{m};\varphi)}[\phi(x,\mathbf{m};\theta)/q(\bm {m};\varphi)]p_{c}(y\mid x)}{\phi(y;\theta)p_{c}(x\mid y)}. \tag{6}\]
Since Eq. (3) is a convex function w.r.t. \(r(x,y;\theta)\), plugging Eq. (6) into Eq. (3) and applying the Jensen's inequality, we have the upper bound,
\[\mathcal{L}_{\mathrm{CNCE}}(\theta) =2\mathbb{E}_{xy}\log[1+1/r(x,y;\theta)]\] \[\leq 2\mathbb{E}_{xy}\mathbb{E}_{q(\mathbf{m};\varphi)}\log\left[1+ \frac{\phi(y;\theta)p_{c}(x\mid y)q(\mathbf{m};\varphi)}{\phi(x,\mathbf{m};\theta)p_{ c}(y\mid x)}\right]\triangleq\mathcal{L}_{\mathrm{VCNCE}(\theta,\varphi)}. \tag{7}\]
Following [33], we have the theorem about the tightness of the bound.
**Theorem 1**: _The difference between the VCNCE loss Eq. (7) and CNCE loss Eq. (1) is the expectation of the \(f\)-divergence,_
\[\mathcal{L}_{\mathrm{VCNCE}}(\theta,\varphi)-\mathcal{L}_{\mathrm{CNCE}}( \theta)=\mathbb{E}_{xy}[\mathbb{D}_{f_{xy}}(p(\mathbf{m}\mid x;\theta)\|q(\mathbf{m}; \varphi))],\]
_where \(f_{xy}(u)=\log(\frac{\kappa_{xy}+u^{-1}}{\kappa_{xy}+1})\) with \(\kappa_{xy}=\frac{\phi(x;\theta)p_{c}(y\mid x)}{\phi(y;\theta)p_{c}(x\mid y)}\)._
The proof can be found in appendix. Based on the theorem, we have the following corollaries to justify the optimization process.
**Corollary 1**: _When \(q(\mathbf{m};\varphi)\) equals to the true posterior, the CVNCE bound is tight, i.e.,_
\[\mathcal{L}_{\mathrm{VCNCE}}=\mathcal{L}_{\mathrm{CNCE}}\Longleftrightarrow q (\mathbf{m};\varphi)=p(\mathbf{m}\mid x;\theta).\]
**Corollary 2**: _The following two optimization problems are equivalent,_
\[\min_{\theta}\mathcal{L}_{\mathrm{CNCE}}(\theta)=\min_{\theta}\min_{q(\mathbf{m}; \varphi)}\mathcal{L}_{\mathrm{VCNCE}}(\theta,\varphi).\]
In practice, we need to seek for the sampled version of Eq. (7). Supposing we have \(N\) observed samples \(\{x_{\mathbf{i}}\}_{i=1}^{N}\), \(\nu\) noises \(\{y_{i,j}\}_{j=1}^{\nu}\) for each sample \(x_{\mathbf{i}}\) and using importance sampling for \(\phi(y;\theta)\), the sampled objective function is,
\[\mathcal{L}_{\mathrm{VCNCE}}(\theta,\varphi)=\frac{2}{\nu N}\sum_{\mathbf{i}= 1}^{N}\sum_{j=1}^{\nu}\mathbb{E}_{q(\mathbf{m}_{\mathbf{i}};\varphi)}\log\left[1+ \frac{\mathbb{E}_{q(\mathbf{m}_{\mathbf{i}};\varphi)}\left[\frac{\phi(y_{\mathbf{i }},\mathbf{m}_{\mathbf{i}};\theta)}{q(\mathbf{m}_{\mathbf{i}};\varphi)}\right]p_{c}(x_ {\mathbf{i}}\mid y_{\mathbf{i}},j)q(\mathbf{m}_{\mathbf{i}};\varphi)}{\phi(x_{ \mathbf{i}},\mathbf{m}_{\mathbf{i}};\theta)p_{c}(y_{\mathbf{i}},j\mid x_{\mathbf{ i}})}\right].\]
Specifically, we formulate \(q(\mathbf{m};\varphi)\) as a diagonal Gaussian and use reparameterization trick [18] to compute the expectation. When dealing with continuous data, we typically select conditional Gaussian noises, represented as \(p_{c}(y\mid x)=\mathcal{N}(y\mid x,\sigma^{2})\). This choice entails only one hyper-parameter \(\sigma\) that needs to be tuned. Another benefit is the symmetry of the conditional distribution for Gaussian noise, expressed as \(p_{c}(y\mid x)=p_{c}(x\mid y)\). Hence, the objective function can be further reduced. For binary or categorical data, such symmetric noises can also be derived [3].
The time complexity of the proposed objective is \(\mathcal{O}(\nu B(DRH+LH^{2}))\), where \(B\) is the batch size, \(\nu\) is the number of conditional noises, \(H\) is the number of hidden units per layer, \(L\) is the number of layers and \(D\) is the tensor order. The time complexity of our model is \(\nu\) times greater than traditional TDs, since we need to compute forward passes for \(\nu\) particles. However, as we only use small networks, the computational speed is still very fast (See Appendix C.3 for an illustration).
Related work
Traditional tensor decompositions (TDs) are based on multi-linear contraction rules, such as CP [15], Tucker [42], tensor networks [29; 51] and their variations [19; 6]. In this paper, we mainly focus on probabilistic TDs, which extend traditional methods by providing uncertainty estimates about both observations and latent factors [32; 49; 50; 26; 38]. These models build directed mapping from latent factors to tensor entries using multi-linear contraction rules, resulting in limited flexibility when dealing with complex datasets. An alternative approach involves replacing the multi-linear relations with non-linear ones. [5; 45; 52] introduced the use of tensor-variate Gaussian processes (GPs) for achieving nonparametric factorization. [53] further expanded on this concept by incorporating a GP prior on the function that maps latent factors to tensor entries, resulting in a nonparametric TD for sparse tensors. GP-based TDs are further extended using hierarchical priors [40], stochastic processes [43; 8]. Despite the success of GP-based TDs, nonparametric approaches can encounter computational challenges and may still lack sufficient flexibility. Recently, neural networks (NNs) are also applied to TDs. [25] suggested the utilization of convolutional NNs to map latent factors to tensor entries. Besides, [7] built a hierarchical version of the Tucker model and introduced non-linear mappings within each hierarchy of latent factors. To mitigate overfitting, [39] suggested the adoption of deep kernels in GP-based TD rather than using NNs directly. On the other hand, [9] proposed to use Bayesian NN with spike-and-slab prior to prevent from overfitting and obtain probabilistic estimates. More recently, [24] adopt neural ODEs to capture dynamic tensor trajectories. Other works regarding more flexible exponential families [13] or mixture of Gaussians [12] employ linear structures. While all these methods using directed mapping from latent factors to tensor entries, our model is fundamentally different from them, in that we construct much more flexible undirected probabilistic model of TD that can deal with diverse distributions and structures.
Another related direction is the energy-based model (EBM). To address the intractable pdf of EBMs, various training methods have been proposed, including contrastive divergence [CD, 14], score matching [SM, 16], noise-contrastive estimation [NCE, 11]. CD requires large steps of Monte Carlo samples, which can be computationally expensive for high-dimensional tensors. SM cannot handle discrete data, and learning latent variables with SM requires complex bi-level optimization [2]. Therefore, we focus on NCE in this paper. Learning energy-based TD is even more challenging because it involves multiple coupled latent factors that cannot be analytically marginalized. [33] proposed VNCE to handle unnormalized models with latent factors. We enhance their algorithm by using conditional noises [3] to improve the learning efficiency. One fundamental distinction between our model and traditional EBMs is that, through TD construction, we only need to learn one-dimensional distributions for each scalar entry, instead of the original high-dimensional tensor. Hence, our model avoids performance degradation when learning high-dimensional data using NCE.
## 5 Experiments
We demonstrate the proposed energy-based TD (EnergyTD) on synthetic data and several real-world applications. All the experiments are conducted on a Linux workstation with Intel Xeon Silver 4316 CPU, 256GB RAM and NVIDIA RTX A5000 GPUs (24GB memory each). The code is implemented based on PyTorch 1.12.1 [30]. More experimental details can be found in the appendix. The code is available at [https://github.com/taozerui/energy_td](https://github.com/taozerui/energy_td)
### Simulation study
Tensors with non-Gaussian distributionsTraditional TDs commonly assume that tensor entries follow a Gaussian distribution. However, in real-world applications, we often encounter highly complex distributions. In this experiment, we evaluate the capability of our model to learn distributions that deviate from the Gaussian assumption.
We consider a two-mode tensor of shape \(I\times I\), where we set \(I=8\). Firstly, two latent factors of shape \(I\times R\) are generated, where the rank \(R\) is set to \(5\). Then, conditioned on the latent factors, we generated tensor observations from particular distributions. For each entry, we generate \(N=200\) samples. Three types of distributions are considered: (1) Beta distribution; (2) Mixture of Gaussians (MoG) and (3) Exponential distribution. For Beta distribution, we generate latent factors from uniform distribution \(\mathbf{Z}^{i=1,2}\stackrel{{\mathrm{iid}}}{{\sim}}Uni(0.0,1.1)\). Then, we sample the observed tensor from Beta distribution \(x_{ij}\stackrel{{\mathrm{iid}}}{{\sim}}Beta((\mathbf{Z}^{1}\mathbf{Z}^{2,\intercal })_{ij},1.2)\). For MoG distribution, we draw latent factors from uniform distribution \(\mathbf{Z}^{i=1,2}\stackrel{{\mathrm{iid}}}{{\sim}}Uni(0.0,1.0)\). The tensor entries are then drawn from MoG \(x_{ij}\stackrel{{\mathrm{iid}}}{{\sim}}0.6\cdot\mathcal{N}(\cos(( \mathbf{Z}^{1}\mathbf{Z}^{2,\intercal})_{ij}),0.1^{2})+0.4\cdot\mathcal{N}(\sin((\mathbf{Z }^{1}\mathbf{Z}^{2,\intercal})_{ij}),0.25^{2})\). For Exponential distribution, we generate latent factors from uniform distribution \(\mathbf{Z}^{i=1,2}\stackrel{{\mathrm{iid}}}{{\sim}}Uni(0.0,1.0)\). The tensor entries are then sampled from Exponential distribution \(x_{ij}\stackrel{{\mathrm{iid}}}{{\sim}}Exp((\mathbf{Z}^{1}\mathbf{Z}^{2, \intercal})_{ij})\). We compare with GP tensor factorization [GPFT, 53], which assumes the entries follow Gaussian distribution.
The results of probability density function (pdf) estimation are presented in Fig. 1. We display the learned pdf of one single tensor entry. This reveals that GPTF is limited to capturing only the 1st moments (mean values) and overlooks higher-order information of more complex distributions. Our model exhibits greater flexibility in handling non-Gaussian distributions.
Continuous-time tensorsWe then consider a dynamic tensor, where each entry is a time series. We follow similar setting with [9] and use the same data size with the previous simulation, _i.e._, a two-mode tensor of shape \(8\times 8\) with each entry being a time series of length \(200\). We firstly generate latent factors of shape \(8\times 2\), with each rows drawn from \(\mathbf{z}_{i}^{1}\sim\mathcal{N}([0,2],2\cdot\mathbf{I})\) and \(\mathbf{z}_{i}^{2}\sim\mathcal{N}([1,1],2\cdot\mathbf{I})\). Then we generate \(N=200\) observed entries from time \(t\in[0,1]\). The tensor entries are computed by \(x_{i}(t)=\sum_{r_{1}=1}^{2}\sum_{r_{2}=1}^{2}z_{i_{1}r_{1}}^{2}z_{i_{2}r_{2}} ^{2}\omega_{r_{1}r_{2}}(t)\), where \(\omega_{11}(t)=\sin(2\pi t),\omega_{12}(t)=\cos(2\pi t),\omega_{21}(t)=\sin^{2 }(2\pi t)\) and \(\omega_{22}(t)=\cos(5\pi t)\sin^{2}(5\pi t)\). Finally, the data are normalized to range \([0,2]\). The synthetic data consist of low-frequency trends and high-frequency fluctuations. Apart from fully observed case, we also test with missing rates (MR) \(10\%\) and \(30\%\). In specific, for each entry, we randomly select a starting time and set the following consecutive \(10\%\) or \(30\%\) time stamps as missing.
We compare with two methods that are designed for dynamic tensors, the Bayesian continuous-time Tucker decomposition [BCTT, 8] and the nonparametric factor trajectory learning [NONFAT, 43].
Figure 1: Simulation results for different distributions. The blue line is the ground truth pdf. The yellow line is the kernel density estimation (KDE) plot of observed samples. The red line is the GPTF estimation, which is a Gaussian pdf. The green line is our method, computed by evaluating the unnormalized pdf on grids and calculating the partition function using Gaussian quadrature.
Figure 2: Simulation results for continuous-time tensor decomposition. The blue regions are observed and the red regions are missing. The trajectories of ground truth, BCTT, NONFAT and our model are drawn in black, blue, red and green lines, respectively.
BCTT treats Tucker core tensors as functions of time and NONFAT treats all GPTF factors as time series. Unlike BCTT with GP prior on the time domain, NONFAT uses GP prior on the frequency domain through inverse Fourier transform of original time series.
Fig. 2 displays the completion results. The learned trajectory of a single tensor entry is plotted. Higher missing rates result in the inability of BCTT to capture accurate trajectories, particularly in missing regions. NONFAT achieves more stable predictions, yet it tends to favor over-smoothed trajectories while disregarding high-frequency fluctuations. This behavior may be attributed to its unique construction, which introduces a GP prior in the frequency domain. Our utilization of flexible neural networks allows us to adapt to complex situations encompassing both low-frequency and high-frequency information.
### Tensor completion
We evaluate our model on two sparse tensor and two dynamic tensor completion applications. For real datasets, the energy function can be difficult to learn, when the pdf has a very sharp curve. Motivated by the idea of noise-perturbed score estimation [35], we add small i.i.d. Gaussian noises on the data during the training of EnergyTD as a form of smoothing technique. The results are reported on _clean_ test data. For EnergyTD, we use MAP estimates as described in Section 3.1.
#### 5.2.1 Sparse tensor completion
We test our model on two sparsely observed tensors: (1) _\(\mathit{Alog}\)_, a file access log dataset [52] of shape _200 users \(\times\) 100 actions \(\times\) 200 resources_ with about 0.33% nonzero entries; (2) _ACC_, a three-way tensor generated from a code repository management system [52] of shape _3k users \(\times\) 150 actions \(\times\) 30k resources_ with about 0.009% nonzero entries. We use the same dataset split as in [52] and report the 5-fold cross validation results.
Competing methodsWe compare with five baselines: (1) CP-WOPT [1], CP decomposition with stochastic optimization; (2) GPTF [53], a GP-based tensor factorization using stochastic variational inference; (3) HGP-GPTF [40], a GPTF equipped with hierarchical Gamma process prior; (4) POND [39], a probabilistic non-linear TD using deep kernels with convolutional NNs (CNNs); (5) CoSTCo [25], a non-linear TD that uses CNNs to map latent factors to tensor entries. CP-WOPT is provided in Matlab Tensor Toolbox [1]. We implement GPTF based on PyTorch by ourselves and use official implementations for HGP-GPTF2, POND3 and CoSTCo4.
Footnote 2: [https://github.com/ctilling/SparseTensorHGP](https://github.com/ctilling/SparseTensorHGP)
Footnote 3: [https://github.com/ctilling/POMD](https://github.com/ctilling/POMD)
Footnote 4: [https://github.com/USC-Melady/KDD19-CoSTCo](https://github.com/USC-Melady/KDD19-CoSTCo)
Experimental settings and resultsWe set batch size 1000 and run 1000 epochs for _\(\mathit{Alog}\)_, 100 epochs for _ACC_. For our model, we use Adam [17] optimizer. Learning rates of all models are chosen from \(\{1\mathrm{e}{-2},1\mathrm{e}{-3},1\mathrm{e}{-4}\}\). For all methods, we evaluate with rank \(R\in\{3,5,8,10\}\). All methods are evaluated by 5 runs with different random seeds.
\begin{table}
\begin{tabular}{l c c c c|c c c c} \hline \hline & \multicolumn{4}{c}{RMSE} & \multicolumn{4}{c}{MAE} \\ \cline{2-9} _Alog_ & Rank 3 & Rank 5 & Rank 8 & Rank 10 & Rank 3 & Rank 5 & Rank 8 & Rank 10 \\ \hline CP-WOPT & 1.486 \(\pm\) 0.282 & 1.386 \(\pm\) 0.043 & 1.228 \(\pm\) 0.063 & 1.355 \(\pm\) 0.079 & 0.694 \(\pm\) 0.098 & 0.664 \(\pm\) 0.018 & 0.610 \(\pm\) 0.027 & 0.658 \(\pm\) 0.026 \\ GPTF & 0.911 \(\pm\) 0.008 & 0.867 \(\pm\) 0.008 & 0.878 \(\pm\) 0.009 & 0.884 \(\pm\) 0.009 & 0.511 \(\pm\) 0.005 & 0.494 \(\pm\) 0.004 & 0.530 \(\pm\) 0.004 & 0.554 \(\pm\) 0.006 \\ HGP-GPTF & 0.896 \(\pm\) 0.011 & 0.867 \(\pm\) 0.009 & 0.850 \(\pm\) 0.011 & 0.844 \(\pm\) 0.006 & 0.479 \(\pm\) 0.007 & 0.473 \(\pm\) 0.003 & 0.474 \(\pm\) 0.004 & 0.480 \(\pm\) 0.004 \\ POND & 0.885 \(\pm\) 0.010 & 0.871 \(\pm\) 0.013 & 0.858 \(\pm\) 0.009 & 0.857 \(\pm\) 0.011 & 0.463 \(\pm\) 0.004 & 0.454 \(\pm\) 0.005 & 0.444 \(\pm\) 0.005 & 0.443 \(\pm\) 0.006 \\ CoSTCo & 0.999 \(\pm\) 0.007 & 0.936 \(\pm\) 0.017 & 0.930 \(\pm\) 0.024 & 0.909 \(\pm\) 0.014 & 0.523 \(\pm\) 0.006 & 0.481 \(\pm\) 0.007 & 0.514 \(\pm\) 0.031 & 0.481 \(\pm\) 0.008 \\ EnergyTD & **0.864 \(\pm\) 0.011** & **0.835 \(\pm\) 0.011** & **0.840 \(\pm\) 0.013** & **0.833 \(\pm\) 0.016** & **0.450 \(\pm\) 0.006** & **0.433 \(\pm\) 0.006** & **0.424 \(\pm\) 0.005** & **0.409 \(\pm\) 0.004** \\ \hline _ACC_ & & & & & & & & \\ \hline CP-WOPT & 0.533 \(\pm\) 0.039 & 0.592 \(\pm\) 0.037 & 0.603 \(\pm\) 0.028 & 0.589 \(\pm\) 0.022 & 0.138 \(\pm\) 0.004 & 0.147 \(\pm\) 0.005 & 0.148 \(\pm\) 0.003 & 0.147 \(\pm\) 0.004 \\ GPTF & 0.367 \(\pm\) 0.001 & 0.357 \(\pm\) 0.001 & 0.359 \(\pm\) 0.001 & 0.368 \(\pm\) 0.001 & 0.152 \(\pm\) 0.002 & 0.150 \(\pm\) 0.001 & 0.167 \(\pm\) 0.002 & 0.182 \(\pm\) 0.001 \\ HGP-GPTF & 0.355 \(\pm\) 0.001 & 0.344 \(\pm\) 0.001 & 0.341 \(\pm\) 0.001 & 0.338 \(\pm\) 0.001 & 0.125 \(\pm\) 0.003 & 0.129 \(\pm\) 0.001 & 0.139 \(\pm\) 0.000 & 0.145 \(\pm\) 0.002 \\ CoSTCo & 0.385 \(\pm\) 0.003 & 0.376 \(\pm\) 0.018 & 0.363 \(\pm\) 0.004 & 0.348 \(\pm\) 0.002 & 0.117 \(\pm\) 0.004 & 0.137 \(\pm\) 0.020 & 0.107 \(\pm\) 0.004 & **0.101 \(\pm\) 0.004** \\ EnergyTD & **0.348 \(\pm\) 0.005** & **0.336 \(\pm\) 0.004** & **0.328 \(\pm\) 0.003** & **0.328 \(\pm\) 0.003** & **0.110 \(\pm\) 0.008** & **0.101 \(\pm\) 0.006** & **0.094 \(\pm\) 0.006** & **0.101 \(\pm\) 0.009** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sparse tensor completion The completion results are shown in Table 1. The mean and standard deviation of the root mean square error (RMSE) and mean absolute error (MAE) are reported. Results from POND are not included for ACC due to the slow code execution and the substantial memory requirements for this dataset. Our model outperforms the baselines in nearly all cases on both RMSE and MAE. Moreover, the improvement is significant (\(p<0.05\)) for most cases. All non-linear methods exhibit substantial superiority over CP-WOPT, thereby highlighting the advantage of employing more flexible structures. Furthermore, the enhanced performance of our model may be attributed to the utilization of more flexible undirected probabilistic distributions. It is worth noting that our model, despite employing MLP layers, outperforms POND and CoSTCo, which utilize convolutional layers. We believe that our model can be further improved by designing more suitable network architectures.
#### 5.2.2 Continuous-time tensor completion
In this subsection, we evaluate our model on two continuous-time tensor datasets: (1) _Air_, the Beijing air quality dataset [47] of shape _12 sites \(\times\) 6 pollutants_ with about \(1\times 10^{4}\) observations at different time stamps; (2) _Click_, an ads click dataset [43] of shape _7 banner positions \(\times\) 2842 site domains \(\times\) 4127 mobile APPs_ with about \(5\times 10^{4}\) entries at different time stamps. We use the same dataset split as in [43] and report the 5-fold cross validation results.
Competing methodsWe compare with (1) Nonparametric factor trajectory learning [NONFAT, 43], which is the SOTA method for continuous-time tensor completion, and other baselines including: (2) Continuous-time CP [CTCP, 48], which uses polynomial splines to model dynamics of CP coefficients; (3) Continuous-time GP (CTGP), which extends GPTF [53] by injecting time stamps into GP kernels; (5) Continuous-time NN decomposition (CTNN), which directly uses time stamps as inputs in CoSTCo [25] to learn continuous dynamics; (5) Discrete-time NN decomposition with non-linear dynamics [NNDTN, 43], which employs RNN dynamics for time steps; (6) Tensor high-order interaction learning via ODEs [THIS-ODE, 24], where continuous-time trajectories of tensor entries are captured by neural ODEs. For baseline models (1-5), we use the official implementation5 provided by [43]. For (6) THIS-ODE, its official implementation6 is used.
Footnote 5: [https://github.com/wzhut/NONFAT](https://github.com/wzhut/NONFAT)
Footnote 6: [https://github.com/shiboli/THIS-ODE](https://github.com/shiboli/THIS-ODE)
Experimental settings and resultsWe set batch size 128 and run 400 epochs for _Air_, 200 epochs for _Click_. Notably, in the case of THIS-ODE, we only run 200 epochs for _Air_ and 75 epochs for _Click_ due to the slow speed of ODE solvers (See Appendix C.3 for an illustration). For our model, we use Adam [17] optimizer with learning rate chosen from \(\{1\mathrm{e}{-2},1\mathrm{e}{-3}\}\). The network architecture is described in Section 3.1. For baseline models, provided settings in their codebases are used. For all methods, we evaluate with rank \(R\in\{3,5,8,10\}\). All methods are evaluated by 5 runs with different random seeds.
\begin{table}
\begin{tabular}{l c c c|c c c c} \hline \hline & \multicolumn{4}{c}{RMSE} & \multicolumn{2}{c}{MAE} \\ \cline{2-9} _Air_ & Rank 3 & Rank 5 & Rank 8 & Rank 10 & Rank 3 & Rank 5 & Rank 8 & Rank 10 \\ \hline CTCP & 1.020 \(\pm\) 0.002 & 1.022 \(\pm\) 0.002 & 1.022 \(\pm\) 0.002 & 1.022 \(\pm\) 0.002 & 0.784 \(\pm\) 0.002 & 0.785 \(\pm\) 0.002 & 0.787 \(\pm\) 0.002 & 0.787 \(\pm\) 0.002 \\ CTGP & 0.475 \(\pm\) 0.000 & 0.463 \(\pm\) 0.000 & 0.459 \(\pm\) 0.000 & 0.458 \(\pm\) 0.000 & 0.318 \(\pm\) 0.000 & 0.304 \(\pm\) 0.000 & 0.301 \(\pm\) 0.000 & 0.299 \(\pm\) 0.000 \\ CTNN & 1.013 \(\pm\) 0.001 & 1.005 \(\pm\) 0.005 & 0.999 \(\pm\) 0.009 & 0.113 \(\pm\) 0.002 & 0.787 \(\pm\) 0.001 & 0.777 \(\pm\) 0.003 & 0.776 \(\pm\) 0.003 & 0.780 \(\pm\) 0.001 \\ NNDTN & 0.377 \(\pm\) 0.004 & 0.364 \(\pm\) 0.002 & 0.334 \(\pm\) 0.004 & 0.328 \(\pm\) 0.004 & 0.247 \(\pm\) 0.003 & 0.239 \(\pm\) 0.002 & 0.217 \(\pm\) 0.003 & 0.212 \(\pm\) 0.004 \\ NOWAFI & 0.339 \(\pm\) 0.002 & 0.335 \(\pm\) 0.002 & 0.351 \(\pm\) 0.005 & 0.342 \(\pm\) 0.002 & 0.224 \(\pm\) 0.002 & 0.219 \(\pm\) 0.001 & 0.228 \(\pm\) 0.003 & 0.223 \(\pm\) 0.001 \\ THIS-ODE & 0.569 \(\pm\) 0.001 & 0.566 \(\pm\) 0.004 & 0.542 \(\pm\) 0.005 & 0.541 \(\pm\) 0.002 & 0.415 \(\pm\) 0.002 & 0.409 \(\pm\) 0.004 & 0.395 \(\pm\) 0.004 & 0.391 \(\pm\) 0.001 \\ EnergyTD & **0.302 \(\pm\) 0.008** & **0.291 \(\pm\) 0.006** & **0.300 \(\pm\) 0.012** & **0.283 \(\pm\) 0.004** & **0.184 \(\pm\) 0.006** & **0.177 \(\pm\) 0.003** & **0.172 \(\pm\) 0.006** & **0.184 \(\pm\) 0.003** \\ \hline _Click_ & & & & & & & \\ \hline CTCP & 2.063 \(\pm\) 0.009 & 2.020 \(\pm\) 0.025 & 2.068 \(\pm\) 0.012 & 2.009 \(\pm\) 0.023 & 1.000 \(\pm\) 0.009 & 0.977 \(\pm\) 0.021 & 1.005 \(\pm\) 0.010 & 0.969 \(\pm\) 0.012 \\ CTGP & 1.424 \(\pm\) 0.002 & 1.423 \(\pm\) 0.004 & 1.404 \(\pm\) 0.004 & 1.392 \(\pm\) 0.002 & 0.880 \(\pm\) 0.003 & 0.877 \(\pm\) 0.003 & 0.856 \(\pm\) 0.002 & 0.849 \(\pm\) 0.001 \\ CTNN & 1.820 \(\pm\) 0.005 & 1.820 \(\pm\) 0.005 & 1.820 \(\pm\) 0.005 & 1.820 \(\pm\) 0.005 & 1.077 \(\pm\) 0.027 & 1.053 \(\pm\) 0.012 & 1.083 \(\pm\) 0.016 & 1.071 \(\pm\) 0.024 \\ NNDTN & 1.418 \(\pm\) 0.005 & 1.409 \(\pm\) 0.004 & 1.407 \(\pm\) 0.002 & 1.410 \(\pm\) 0.004 & 0.858 \(\pm\) 0.002 & 0.856 \(\pm\) 0.002 & 0.859 \(\pm\) 0.003 & 0.863 \(\pm\) 0.006 \\ NONFAT & 1.400 \(\pm\) 0.008 & 1.411 \(\pm\) 0.006 & 1.365 \(\pm\) 0.004 & **1.351 \(\pm\) 0.002** & 0.853 \(\pm\) 0.004 & 0.873 \(\pm\) 0.004 & 0.832 \(\pm\) 0.004 & 0.812 \(\pm\) 0.002 \\ THIS-ODE & 1.421 \(\pm\) 0.004 & 1.413 \(\pm\) 0.002 & 1.408 \(\pm\) 0.002 & 1.395 \(\pm\) 0.003 & 0.836 \(\pm\) 0.004 & 0.836 \(\pm\) 0.003 & 0.832 \(\pm\) 0.002 & 0.829 \(\pm\) 0.003 \\ EnergyTD & **1.396 \(\pm\) 0.003** & **1.385 \(\pm\) 0.003** & **1.356 \(\pm\) 0.001** & 1.357 \(\pm\) 0.001 & **0.777 \(\pm\) 0.003** & **0.775 \(\pm\) 0.003** & **0.772 \(\pm\) 0.002** & **0.773 \(\pm\) 0.001** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Continuous-time tensor completion Table 2 presents the completion results. It should be noted that the results presented here differ from those reported in [43] as we adhere to the standard definition of RMSE and MAE (see appendix for detail). Our model surpasses the baseline methods in almost all cases for both RMSE and MAE, with statistically significant improvements (\(p\) < 0.05) observed in most cases. Particularly, we observe that the improvements in MAE are notably more significant. One possible reason is that NONFAT is trained implicitly by minimizing the square loss (with regularization) as it adopts Gaussian assumption about the data. However, our model does not make such assumptions about the data distribution and the loss function. Hence, it can adapt to the data distribution more flexibly. Additionally, we observe that directly injecting time stamps into neural networks, as done in CTNN, is ineffective, thus highlighting the advantage of our model in learning more informative structures.
## 6 Conclusion
We introduce an innovative approach to undirected probabilistic tensor decomposition (TD), characterized by its exceptional flexibility in accommodating various structures and distributions. Specifically, our model integrates deep EBMs in TD to relax both structural and distributional assumptions, enabling it to handle complex real-world applications. To efficiently learn the doubly intractable pdf, we derive a VCNCE objective that is the upper bound of the CNCE loss. Experimental results demonstrate that our model can handle diverse distributions and outperforms baseline methods in multiple real-world applications. One limitation is that our final loss function is not a fully variational upper bound of CNCE, since we have to use importance samples to approximate the pdf of noise samples in Eq. (7). In the future, we aim to derive a fully variational bound as in [46]. Finally, we did not delve into the interpretability of learned factors in this work. However, exploring the interpretability of these factors represents a promising avenue for future research in the realm of tensor decompositions.
## Acknowledgments
Zerui Tao was supported by the RIKEN Junior Research Associate Program. This work was supported by the JSPS KAKENHI Grant Numbers JP20H04249, JP23H03419.
## References
* (1) Brett W Bader and Tamara G Kolda. Efficient matlab computations with sparse and factored tensors. _SIAM Journal on Scientific Computing_, 30(1):205-231, 2008.
* (2) Fan Bao, Chongxuan Li, Kun Xu, Hang Su, Jun Zhu, and Bo Zhang. Bi-level score matching for learning energy-based latent variable models. _Advances in Neural Information Processing Systems_, 33:18110-18122, 2020.
* (3) Ciwan Ceylan and Michael U Gutmann. Conditional noise-contrastive estimation of unnormalised models. In _International Conference on Machine Learning_, pages 726-734. PMLR, 2018.
* (4) Rong Chen, Dan Yang, and Cun-Hui Zhang. Factor models for high-dimensional tensor time series. _Journal of the American Statistical Association_, 117(537):94-116, 2022.
* (5) Wei Chu and Zoubin Ghahramani. Probabilistic models for incomplete multi-dimensional arrays. In _Artificial Intelligence and Statistics_, pages 89-96. PMLR, 2009.
* (6) Andrzej Cichocki, Namgil Lee, Ivan Oseledets, Anh-Huy Phan, Qibin Zhao, Danilo P Mandic, et al. Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions. _Foundations and Trends(r) in Machine Learning_, 9(4-5):249-429, 2016.
* (7) Jicong Fan. Multi-mode deep matrix and tensor factorization. In _international conference on learning representations_, 2021.
* (8) Shikai Fang, Akil Narayan, Robert Kirby, and Shandian Zhe. Bayesian continuous-time tucker decomposition. In _International Conference on Machine Learning_, pages 6235-6245. PMLR, 2022.
* [9] Shikai Fang, Zheng Wang, Zhimeng Pan, Ji Liu, and Shandian Zhe. Streaming bayesian deep tensor factorization. In _International Conference on Machine Learning_, pages 3133-3142. PMLR, 2021.
* [10] Ivan Glasser, Ryan Sweke, Nicola Pancotti, Jens Eisert, and Ignacio Cirac. Expressive power of tensor-network factorizations for probabilistic modeling. _Advances in neural information processing systems_, 32, 2019.
* [11] Michael Gutmann and Aapo Hyvarinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In _Proceedings of the thirteenth international conference on artificial intelligence and statistics_, pages 297-304. JMLR Workshop and Conference Proceedings, 2010.
* [12] Zhi Han, Yao Wang, Qian Zhao, Deyu Meng, Lin Lin, Yandong Tang, et al. A generalized model for robust tensor factorization with noise modeling by mixture of gaussians. _IEEE transactions on neural networks and learning systems_, 29(11):5380-5393, 2018.
* [13] Kohei Hayashi, Takashi Takenouchi, Tomohiro Shibata, Yuki Kamiya, Daishi Kato, Kazuo Kunieda, Keiji Yamada, and Kazushi Ikeda. Exponential family tensor factorization for missing-values prediction and anomaly detection. In _2010 IEEE International Conference on Data Mining_, pages 216-225. IEEE, 2010.
* [14] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. _Neural computation_, 14(8):1771-1800, 2002.
* [15] Frank L Hitchcock. The expression of a tensor or a polyadic as a sum of products. _Journal of Mathematics and Physics_, 6(1-4):164-189, 1927.
* [16] Aapo Hyvarinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. _Journal of Machine Learning Research_, 6(4), 2005.
* [17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [18] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. _arXiv preprint arXiv:1312.6114_, 2013.
* [19] Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. _SIAM review_, 51(3):455-500, 2009.
* [20] Maxim Kuznetsov, Daniil Polykovskiy, Dmitry P Vetrov, and Alex Zhebrak. A prior of a googol gaussians: a tensor ring induced prior for generative models. _Advances in Neural Information Processing Systems_, 32, 2019.
* [21] Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and Fujie Huang. A tutorial on energy-based learning. _Predicting structured data_, 1(0), 2006.
* [22] Chao Li and Zhun Sun. Evolutionary topology search for tensor network decomposition. In _International Conference on Machine Learning_, pages 5947-5957. PMLR, 2020.
* [23] Chao Li, Junhua Zeng, Zerui Tao, and Qibin Zhao. Permutation search of tensor network structures via local sampling. In _International Conference on Machine Learning_, pages 13106-13124. PMLR, 2022.
* [24] Shibo Li, Robert Kirby, and Shandian Zhe. Decomposing temporal high-order interactions via latent ODEs. In _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 12797-12812. PMLR, 2022.
* [25] Hanpeng Liu, Yaguang Li, Michael Tsang, and Yan Liu. Costco: A neural tensor completion model for sparse tensors. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 324-334, 2019.
* [26] Zhen Long, Ce Zhu, Jiani Liu, and Yipeng Liu. Bayesian low rank tensor ring for image recovery. _IEEE Transactions on Image Processing_, 30:3568-3580, 2021.
* [27] Jacob Miller, Guillaume Rabusseau, and John Terilla. Tensor networks for probabilistic sequence modeling. In _International Conference on Artificial Intelligence and Statistics_, pages 3079-3087. PMLR, 2021.
* [28] Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. _Advances in neural information processing systems_, 28, 2015.
* [29] Ivan V Oseledets. Tensor-train decomposition. _SIAM Journal on Scientific Computing_, 33(5):2295-2317, 2011.
* [30] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_, pages 8024-8035. Curran Associates, Inc., 2019.
* [31] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. _Advances in neural information processing systems_, 20, 2007.
* [32] Piyush Rai, Yingjian Wang, Shengbo Guo, Gary Chen, David Dunson, and Lawrence Carin. Scalable bayesian low-rank decomposition of incomplete multiway tensors. In _International Conference on Machine Learning_, pages 1800-1808. PMLR, 2014.
* [33] Benjamin Rhodes and Michael U Gutmann. Variational noise-contrastive estimation. In _The 22nd International Conference on Artificial Intelligence and Statistics_, pages 2741-2750. PMLR, 2019.
* [34] Qingquan Song, Hancheng Ge, James Caverlee, and Xia Hu. Tensor completion algorithms in big data analytics. _ACM Transactions on Knowledge Discovery from Data (TKDD)_, 13(1):1-48, 2019.
* [35] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. _Advances in neural information processing systems_, 32, 2019.
* [36] Yang Song and Diederik P Kingma. How to train your energy-based models. _arXiv preprint arXiv:2101.03288_, 2021.
* [37] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. _Advances in Neural Information Processing Systems_, 33:7537-7547, 2020.
* [38] Zerui Tao, Xuyang Zhao, Toshihisa Tanaka, and Qibin Zhao. Bayesian latent factor model for higher-order data. In _Asian Conference on Machine Learning_, pages 1285-1300. PMLR, 2021.
* [39] Conor Tillinghast, Shikai Fang, Kai Zhang, and Shandian Zhe. Probabilistic neural-kernel tensor decomposition. In _2020 IEEE International Conference on Data Mining (ICDM)_, pages 531-540. IEEE, 2020.
* [40] Conor Tillinghast, Zheng Wang, and Shandian Zhe. Nonparametric sparse tensor factorization with hierarchical gamma processes. In _International Conference on Machine Learning_, pages 21432-21448. PMLR, 2022.
* [41] Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. Compressing recurrent neural network with tensor train. In _2017 International Joint Conference on Neural Networks (IJCNN)_, pages 4451-4458. IEEE, 2017.
* [42] Ledyard R Tucker. Some mathematical notes on three-mode factor analysis. _Psychometrika_, 31(3):279-311, 1966.
* [43] Zheng Wang and Shandian Zhe. Nonparametric factor trajectory learning for dynamic tensor decomposition. In _International Conference on Machine Learning_, pages 23459-23469. PMLR, 2022.
* [44] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In _Proceedings of the 28th international conference on machine learning (ICML-11)_, pages 681-688, 2011.
* [45] Zenglin Xu, Feng Yan, and Yuan Qi. Infinite tucker decomposition: nonparametric bayesian models for multiway data analysis. In _Proceedings of the 29th International Coference on International Conference on Machine Learning_, pages 1675-1682, 2012.
* [46] Christopher Zach. Fully variational noise-contrastive estimation. In _Image Analysis: 23rd Scandinavian Conference, SCIA 2023, Sirkka, Finland, April 18-21, 2023, Proceedings, Part II_, pages 175-190. Springer, 2023.
* [47] Shuyi Zhang, Bin Guo, Anlan Dong, Jing He, Ziping Xu, and Song Xi Chen. Cautionary tales on air-quality improvement in beijing. _Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences_, 473(2205):20170457, 2017.
* [48] Yanqing Zhang, Xuan Bi, Niansheng Tang, and Annie Qu. Dynamic tensor recommender systems. _The Journal of Machine Learning Research_, 22(1):3032-3066, 2021.
* [49] Qibin Zhao, Liqing Zhang, and Andrzej Cichocki. Bayesian cp factorization of incomplete tensors with automatic rank determination. _IEEE transactions on pattern analysis and machine intelligence_, 37(9):1751-1763, 2015.
* [50] Qibin Zhao, Liqing Zhang, and Andrzej Cichocki. Bayesian sparse tucker models for dimension reduction and tensor completion. _arXiv preprint arXiv:1505.02343_, 2015.
* [51] Qibin Zhao, Guoxu Zhou, Shengli Xie, Liqing Zhang, and Andrzej Cichocki. Tensor ring decomposition. _arXiv preprint arXiv:1606.05535_, 2016.
* [52] Shandian Zhe, Zenglin Xu, Xinqi Chu, Yuan Qi, and Youngja Park. Scalable nonparametric multiway data analysis. In _Artificial Intelligence and Statistics_, pages 1125-1134. PMLR, 2015.
* [53] Shandian Zhe, Kai Zhang, Pengyuan Wang, Kuang-chih Lee, Zenglin Xu, Yuan Qi, and Zoubin Ghahramani. Distributed flexible nonlinear tensor factorization. _Advances in neural information processing systems_, 29, 2016.
Proof of Theorem 1
Firstly, we give the definition of \(f\)-divergence.
**Definition 1** (\(f\)**-divergence**): _The \(f\)-divergence between two probability density functions (pdf) \(p\) and \(q\) is defined as,_
\[\mathbb{D}_{f}(p\|q)=\mathbb{E}_{q}\left[f\left(\frac{p}{q}\right)\right],\]
_where \(f:[0,\infty)\rightarrow\mathbb{R}\) is a convex function and \(f(1)=0\)._
As shown in [33], since partition functions for \(\phi(x,m;\theta)\) and \(\phi(x;\theta)\) are the same, we have the following factorization,
\[\phi(x,\mathbf{m};\theta)=\phi(x;\theta)p(\mathbf{m}\mid x;\theta).\]
The difference between the two objective becomes,
\[\mathcal{L}_{\mathrm{VCNCE}}(\theta,\varphi)-\mathcal{L}_{\mathrm{ CNCE}}(\theta)\] \[= 2\mathbb{E}_{xy}\mathbb{E}_{q(\mathbf{m};\varphi)}\left\{\log\left[1 +\frac{\phi(y;\theta)p_{c}(x\mid y)q(\mathbf{m};\varphi)}{\phi(x,\mathbf{m};\theta)p_ {c}(y\mid x)}\right]-\log\left[1+\frac{\phi(y;\theta)p_{c}(x\mid y)}{\phi(x; \theta)p_{c}(y\mid x)}\right]\right\}\] \[= 2\mathbb{E}_{xy}\mathbb{E}_{q(\mathbf{m};\varphi)}\log\frac{\phi(x, \mathbf{m};\theta)\phi(x;\theta)p_{c}(y\mid x)+\phi(y;\theta)p_{c}(x\mid y)\phi(x ;\theta)q(\mathbf{m};\varphi)}{\phi(x,\mathbf{m};\theta)\phi(x;\theta)p_{c}(y\mid x)+ \phi(y;\theta)p_{c}(x\mid y)\phi(x,\mathbf{m};\theta)}\] \[= 2\mathbb{E}_{xy}\mathbb{E}_{q(\mathbf{m};\varphi)}\log\frac{p(\mathbf{m} \mid x;\theta)\phi(x;\theta)p_{c}(y\mid x)+\phi(y;\theta)p_{c}(x\mid y)q(\mathbf{ m};\varphi)}{p(\mathbf{m}\mid x;\theta)\phi(x;\theta)p_{c}(y\mid x)+\phi(y;\theta)p_{c}(x \mid y)p(\mathbf{m}\mid x;\theta)}\] \[= 2\mathbb{E}_{xy}[\mathbb{D}_{f_{xy}}(p(\mathbf{m}\mid x;\theta)\|q( \mathbf{m}))],\]
where
\[f_{xy}(u)=\log\left(\frac{\kappa_{xy}+u^{-1}}{\kappa_{xy}+1}\right),\]
with \(\kappa_{xy}=\frac{\phi(x;\theta)p_{c}(y\mid x)}{\phi(y;\theta)p_{c}(x\mid y)}\). It is straightforward to verify that \(f(1)=0\). The derivatives of \(f\) is
\[f^{\prime}(u)=-\frac{1}{u^{2}\kappa+u},\quad f^{\prime\prime}(u)=\frac{2u \kappa+1}{(u^{2}\kappa+u)^{2}}.\]
Since \(\kappa\) and \(u\) are positive, \(f\) is a convex function. Therefore, \(f\) satisfy the requirements of \(f\)-divergence.
## Appendix B Proof of Corollaries 1 and 2
Corollary 1 is a straightforward consequence of Theorem 1. Since the \(f\)-divergence becomes zero if and only if the two distributions are identical, we have,
\[\mathcal{L}_{\mathrm{VCNCE}}(\theta,\varphi)=\mathcal{L}_{\mathrm{CNCE}}( \theta)\Longleftrightarrow q(\mathbf{m};\varphi)=p(\mathbf{m}\mid x;\theta).\]
Moreover, since the \(f\)-divergence is positive and Theorem 1, we have
\[p(\mathbf{m}\mid x;\theta)=\operatorname*{arg\,min}_{q(\mathbf{m};\varphi)}\mathcal{L }_{\mathrm{VCNCE}}(\theta,q(\mathbf{m};\varphi)).\]
Then, plugging the optimal distribution gives the tight bound, we have,
\[\min_{\theta}\mathcal{L}_{\mathrm{CNCE}}(\theta)=\min_{\theta}\min_{q(\mathbf{m}; \varphi)}\mathcal{L}_{\mathrm{VCNCE}}(\theta,\varphi).\]
## Appendix C Experimental details
### Simulation study
Tensors with non-Gaussian distributionsFor both GPTF and our model, we set batch size to 1000 and run 500 epochs with Adam optimizer. The initial learning rate is \(1\mathrm{e}{-3}\) and subsequently reduced by 0.3 at \(60\%,75\%\) and \(90\%\) of the maximum epochs. Moreover, the rank is set to 3 for both models. For GPTF, radial basis function (RBF) kernel with band width 1.0 is used, where 100 inducing points is adopted for approximation. For the conditional distribution \(p(x_{\mathbf{i}}\mid\mathbf{m}_{\mathbf{i}})=\mathcal{N}(x_{\mathbf{i}}\mid f(\mathbf{m }_{\mathbf{i}}),\sigma^{2})\) in GPTF, \(\sigma\) is fixed and chosen as the sample standard variance. For our model, we use 5 hidden layers of width 64 for both \(g_{1}\), \(g_{3}\) and \(g_{4}\) defined in Section 3. \(g_{2}\) is a summation layer. We use ELU activation for non-linearity. For the VCNCE loss, the conditional noise distribution is set as \(p_{c}(y\mid x)=\mathcal{N}(y\mid x,0.3^{2})\) and \(\nu=10\) noise samples are used for each data point.
Continuous-time tensorsThe data sizes and optimization parameters are the same with the previous simulation. The rank of all models are set to 3. For NONFAT, 100 inducing points are used to approximate the kernel function. We run the NONFAT model for 5000 epochs because we find that the algorithm converges very slowly. Other hyper-parameters are chosen by their default settings. For BCTT, we do not modify their code and settings. For our model, we use 3 hidden layers of length 64 with ELU activation. The conditional noise distribution in the VCNCE loss is set to \(p_{c}(y\mid x)=\mathcal{N}(y\mid x,1)\) and \(\nu=20\) noise samples are used for each datum.
### Tensor completion
For all datasets, when training our model, we scale the data to \([0,1]\) based on the _training_ data. For testing, we multiply the scale statistic computed by the training data and evaluate the performance on the original domain. We do not employ such data normalization for baselines models, because that will influence their default settings.
#### c.2.1 Sparse tensor completion
For both _Alog_ and _ACC_, the batch size is set to 1000. We run 1000 epochs for _Alog_ and _100_ epochs for _ACC_ due to their different sample numbers. For _Alog_ dataset, we add i.i.d. Gaussian noises from \(\mathcal{N}(0,0.05^{2})\) during training, while for _ACC_, the standard variance is set to \(0.02\). The Adam optimizer is used with learning rate chosen from \(\{1\mathrm{e}{-2},1\mathrm{e}{-3},1\mathrm{e}{-4}\}\). We also use gradient clip with maximum infinity norm of 2.0 for training stability. Moreover, we use learning rate scheduler by reducing the initial learning rate by 0.3 at 40%, 60%, and 80% of the total iterations. For both datasets, we use 2 hidden layers of length 50 with ELU activation for \(g_{1}\), \(g_{3}\) and \(g_{4}\) for our model. For the VCNCE loss, we set \(\nu=20\) noise samples with noise variance tuned from \(\{0.3^{2},0.5^{2},0.8^{2},1.0^{2}\}\). In practice, we find that the noise variance is influential to the final performance, even we are using conditional noises. However, with VCNCE, there is only one hyper-parameter for the noise distribution. While for CNCE, one may need to tune both mean and variance of the noise.
#### c.2.2 Continuous-time tensor completion
For _Air_ and _Click_ datasets, we set batch size to \(128\). We run 400 epochs for _Air_ and 200 epochs for _Click_ due to their different data sizes. For _Alog_ dataset, we add i.i.d. Gaussian noises from \(\mathcal{N}(0,0.05^{2})\) during training, while for _ACC_, the variance is set as \(0.15^{2}\). To encode the temporal information into the energy function, we use the sinusoidal positional encoding, as described in Section 3. Other settings are the same with Appendix C.2.1.
It should be noted that we use the standard definition of root mean square error (RMSE) and mean absolute error (MAE), namely,
\[\mathrm{RMSE}=\sqrt{\frac{\sum_{i=1}^{N}(x_{i}-\hat{x}_{i})^{2}}{N}},\quad \mathrm{MAE}=\frac{\sum_{i=1}^{N}|x_{i}-\hat{x}_{i}|}{N},\]
where \(x_{i}\) is the ground truth and \(\hat{x}_{i}\) is the estimate. Therefore, the results are different from those presented in [43], where the authors used _relative_ versions of RMSE and MAE,
\[\mathrm{RMSE}=\sqrt{\sum_{i=1}^{N}\frac{(x_{i}-\hat{x}_{i})^{2}}{x_{i}^{2}}}, \quad\mathrm{MAE}=\sum_{i=1}^{N}\frac{|x_{i}-\hat{x}_{i}|}{|x_{i}|}.\]
We modify the evaluation part of their code7 and report the results.
Footnote 7: [https://github.com/wzhut/NONEfat](https://github.com/wzhut/NONEfat)
### Computational time
The time complexity of our model is \(\nu\) times greater than traditional TDs, since we need to compute forward passes for \(\nu\) particles. However, as we only use small networks, the computational speed is still very fast. To illustrate, we compare the runtime performance of several baselines and our model on a single RTX A5000 GPU. We conduct tests on the _Air_ dataset, with a batch size of 128 and tensor rank of 5. The reported running time is the average of the first 10 epochs. For our model, we set \(\nu=20\). The default settings are used for other baselines. Table 3 lists the computational time of CTGP, NNDTN, NONFAT and THIS-ODE, all of which perform better than other baselines. The results show that our model achieves better performances within a reasonable time, especially compared to THIS-ODE.
### Ablation study on the objective function
We conduct an additional ablation study to show the advantage of VCNCE over the variational noise-contrastive estimation [VCNCE, 33] objective. The main difference between the VNCE and VCNCE is that VNCE uses noises from a fixed Gaussian distribution, _e.g._, \(y\sim p_{n}(y)=\mathcal{N}(y\mid\mu,\sigma^{2})\), while VCNCE uses conditional noises, _e.g._, \(y\sim p_{c}(y\mid x)=\mathcal{N}(y\mid x,\sigma^{2})\). Hence, these two strategies yield different objective functions. The objective function of VNCE is defined as
\[\mathcal{L}_{\rm VNCE}=\mathbb{E}_{x}\mathbb{E}_{q(\mathbf{m}|x;\varphi )}\log\left(\frac{\phi(x,\mathbf{m};\theta)}{\phi(x,\mathbf{m};\theta)+\nu q(\mathbf{m} \mid x;\varphi)p_{n}(x)}\right)\\ +\nu\mathbb{E}_{y}\log\left(\frac{\nu p_{n}(y)}{\nu p_{n}(y)+ \mathbb{E}_{q(\mathbf{m}|y)}\left[\frac{\phi(y,\mathbf{m};\theta)}{q(\mathbf{m}|y)}\right] }\right),\]
where \(p_{n}(\cdot)\) is the fixed noise distribution. For VNCE, choosing inappropriate noise distributions may result in bad performances.
We test the proposed model on the _Air_ dataset, training on the VCNCE loss and VNCE loss, respectively. We set the batch size to 128 and run 400 epochs. Adam optimizer with initial learning rate \(1\mathrm{e}{-2}\) is adopted. The initial learning rate is subsequently reduce by 0.3 at \(20\%\), \(50\%\) and \(80\%\) of the total epochs. For VNCE, we set \(\mu=0\), which is a common practice in relevant literature. To show how the noise variance affects the learning process, we test different noise variances, _e.g._, \(\sigma\in\{0.3,0.5,0.7\}\) for both VNCE and VCNCE. Other settings are the same with Appendix C.2.2.
Fig. 3 depicts the RMSE and MAE on the test data when optimizing VNCE and VCNCE objective functions. We test five runs, plot mean values in lines and standard deviations in shadowed areas. It is shown that VCNCE gets better and more stable results on both RMSE and MAE.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & CTGP & NNDTN & NONFAT & THIS-ODE & EnergyTD \\ \hline Time/Epoch (in seconds) & 1.17\(\pm\)0.30 & 2.18\(\pm\)0.04 & 2.51\(\pm\)0.13 & 464.\(\pm\)131. & 5.30\(\pm\)0.37 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Computing timeFigure 3: Learning process of optimizing the VNCE and VCNCE loss. The first row is RMSE and the second row is MAE. | ## Review
### Summary
This paper presents an innovative approach to probabilistic tensor decomposition by integrating a deep energy-based model (EBM) to capture the joint probability of tensor observations and latent factors. Unlike traditional methods, the proposed EnergyTD method avoids restrictive structural and distributional assumptions, allowing for a more flexible modeling framework. The authors derive a variational upper bound for the conditional noise-contrastive estimation objective, demonstrating the effectiveness of their approach through extensive experiments on various tensor decomposition tasks. The results indicate that EnergyTD outperforms conventional methods, thus contributing significantly to the field of tensor decomposition.
### Strengths
- Introduces a novel framework using Energy-Based Models for tensor decomposition, which is innovative in the field.
- Demonstrates high adaptability for dynamic tensors and flexibility in modeling joint distributions.
- Experimental results show clear advantages over traditional methods, enhancing the paper's credibility.
- Well-written and organized, making it easy to follow the presented concepts.
### Weaknesses
- Lacks evaluation of time and space complexity of the proposed method, which is crucial for practical application.
- Code is not provided, which limits reproducibility.
- Discussion of potential drawbacks of using overly flexible models is insufficient, particularly regarding identifiability and robustness.
- Some technical contributions are based on existing theories, which may reduce the overall novelty.
### Questions
- What is the time/space complexity of the proposed method?
- How does the method scale with higher-order tensors?
- Can the authors provide more insight into the robustness of the inferred factors based on different initializations?
- What measures can be taken to address the issue of non-uniqueness in inferred latent distributions?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical foundations are sound, but there are some concerns about scalability and robustness.
### Presentation
**Score:** 3
**Description:** 3 = good; the paper is generally well-organized, but some sections, particularly on theoretical aspects, could be clearer.
### Contribution
**Score:** 3
**Description:** 3 = good; while the paper introduces an innovative approach, some contributions are less original and rely on existing frameworks.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid and presents moderate-to-high impact contributions with minor concerns regarding clarity and completeness.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel and effective approach to tensor decomposition, demonstrating significant contributions to the field. While there are some weaknesses regarding scalability, presentation clarity, and reproducibility concerns, the overall impact and soundness warrant an accept decision.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# From Pixels to UI Actions: Learning to Follow
Instructions via Graphical User Interfaces
Peter Shaw\({}^{1}\)1
Mandar Joshi\({}^{1}\)1
James Cohan\({}^{2}\)
Jonathan Berant\({}^{1}\)
Panupong Pasupat\({}^{1}\)
Hexiang Hu\({}^{1}\)
Urvashi Khandelwal\({}^{1}\)
Kenton Lee\({}^{1}\)
Kristina Toutanova\({}^{1}\)
\({}^{1}\) Google DeepMind
\({}^{2}\) Google
Equal contribution.
Footnote 1: footnotemark:
###### Abstract
Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available. These input representations have been often coupled with custom, task-specific action spaces. This paper focuses on creating agents that interact with the digital world using the same conceptual interface that humans commonly use -- via pixel-based screenshots and a generic action space corresponding to keyboard and mouse actions. Building upon recent progress in pixel-based pretraining, we show, for the first time, that it is possible for such agents to outperform human crowdworkers on the MiniWob++ benchmark of GUI-based instruction following tasks.
## 1 Introduction
Systems that can follow instructions to complete tasks through graphical user interfaces (GUIs) can help automate tedious tasks, improve accessibility, and expand the usefulness of digital assistants by allowing them to interact with tools and services. Despite the visual nature of GUIs, prior work has primarily focused on utilizing structured representations of the user interfaces (such as HTML sources, Document Object Model (DOM) trees, and Android view hierarchies) as well as custom, task-specific representations of high-level actions based on these structured representations (see SS6). Recent efforts have achieved positive outcomes thanks to the advances of powerful language models (Gur et al., 2022; Kim et al., 2023; Yao et al., 2022).
While structured and task-specific representations may be useful, they are not always available - some examples are web applications that use extensive scripting, sandboxed environments where access to DOM is limited, and mobile applications which often do not expose the underlying structure to external modules. Even when structured application source data is available, it may be hard to interpret due to obfuscation and misalignment with what actually appears on the GUIs. Finally, aligning human demonstrations with task-dependent actions is often challenging.
In contrast, people interact with GUIs by perceiving the visual input and using generic mouse and keyboard actions, without needing to inspect the application's source code for cues on its functionality. They can quickly learn to interact with new applications that offer familiar visual interfaces, regardless of differences in implementation technologies. In this paper we ask: _Can we build an agent that cancomplete tasks for users while relying solely on pixel-level visual representations of the GUI state, and generic low-level actions?_
Learning based on pixel-only inputs proved effective for game playing environments such as Atari (Mnih et al., 2015). However, for GUI-based instruction following tasks, learning from pixel-only inputs coupled with general low-level actions leads to several challenges. Interpreting GUIs visually requires understanding the interface layout, recognizing and interpreting visually-situated natural language, identifying visual elements, and predicting their functions and methods of interaction. A generic action space also poses the challenge of a more complex mapping between high-level textual instructions and corresponding sequences of low-level actions. As an example of the increased difficulty in this setting, on the MiniWob++ benchmark (Shi et al., 2017; Liu et al., 2018) of web GUI interaction, CC-Net (Humphreys et al., 2022) demonstrates human-level accuracy when accessing both screenshots and DOM structure, but its performance drops by 75% when the DOM information is removed from the agent's observations.
Here we present Pix2Act, a model that relies solely on pixel-based screenshots as input and selects actions corresponding to basic mouse and keyboard functionalities.2 We build on Pix2Struct(Lee et al., 2022), a Transformer-based (Vaswani et al., 2017) image-to-text model pre-trained to map
Figure 1: **Our agent learns to follow instructions via Graphical User Interfaces (GUs). Unlike most prior work studying instruction following for GUI-based tasks, our agent does not rely on text-based observations corresponding to DOM trees or HTML source code, or task-specific actions. Instead, our agent receives pixel-based observations and generates outputs corresponding to mouse and keyboard actions. The possible actions are encoded as text and shown on the top of the figure. We show examples of observations from various episodes for two benchmarks, MiniWob++ (top row) and WebShop (bottom row), that we adapt to study within the context of our general Chrome-based environment framework. See details in §2.**
screenshots to structured representations derived from HTML on web-scale data. Pix2Act tunes this model using a combination of human demonstrations and environment interactions, applying tree search to iteratively generate new expert trajectories for training. We develop a general browser-based environment framework, and adapt two benchmark datasets, MiniWob++ and WebShop (Yao et al., 2022), to our setting with a unified, general purpose observation and action format.
On MiniWob++, Pix2Act outperforms human crowdworkers and improves task score nearly 4x compared to the best prior results in our proposed setting (CC-Net without DOM). Ablations show that a key ingredient for Pix2Act's performance is the pixel-based pre-training of Pix2Struct.
Our contributions are as follows:
1. We show, for the first time, that an agent using pixel-only inputs and a generic action space can outperform human crowdworkers on the MiniWob++ benchmark, significantly improving over prior work on this setting, and reaching performance comparable to that of state-of-the-art agents that access DOM information and use a comparable number of human demonstrations.
2. We adapt the WebShop benchmark to our setting, using pixel-based observations and general low-level actions. We establish the first baseline on this setting, although there is still a performance gap relative to larger language models using HTML-based inputs and task-specific actions.
3. We show that Pix2Struct's pre-training via screenshot parsing is effective for GUI-based instruction following with pixel-based inputs. In the behavioral cloning setting, pre-training improves task scores from 17.1 to 66.5 on MiniWob++ and from 1.1 to 46.7 on WebShop.
4. We demonstrate the successful application of tree search as a relatively simple method for policy improvement for MiniWob++.
## 2 Environment
Following the reinforcement learning literature, we model GUI interaction as a Markov Decision Process (MDP): at each time step, our agent receives an observation and selects an action. We develop a common environment framework with shared observation and action formats for browser-based tasks. Similarly to prior work on web-based agents (Liu et al., 2018), we use Selenium to programmatically interact with the Google Chrome browser.
ObservationsTo form an observation, we first take a screenshot of the current browser window using Selenium and then augment it with additional information. First, if not already present, we render the natural language instruction on the top of the screenshot, following Lee et al. (2022). Second, as Selenium screenshots do not include cursors (which are typically rendered by the operating system), we draw a cursor on the screenshot to indicate the mouse pointer position. Finally, we render an indicator of whether the mouse button is currently pressed down, which is useful for dragging actions.
ActionsOur action space consists of raw mouse and keyboard actions, as shown in Figure 1, where X and Y refer to discrete coordinate bins, K is one or more keys, M is an optional modifier key such as "shift", and Z refers to a vertical scroll amount, also represented as a discrete bin.3 The begin_drag and end_drag actions can be used to execute "click and drag" actions. We use a configurable number of coordinate buckets per vertical and horizontal axis. Importantly, the DOM information is not provided by the environment and is therefore not used in any way to define observations or actions.
Footnote 3: We chose discrete bins because they enable a simple encoding of actions as tokens. Alternatives could include continuously-valued coordinates or relative movements with foveated binning (Baker et al., 2022).
Episodes and RewardsEpisodes continue until a terminal state or a configurable number of maximum steps is reached. For the environments we consider, the agent only receives a reward at a terminal state. This can be a binary reward based on whether the task was completed successfully or a partial reward based on how well the task was completed.
## 3 Proposed Agent
Our agent, Pix2Act, is based on the Pix2Struct model (Lee et al., 2022), which uses an image Transformer encoder and a text Transformer decoder. The architecture is based on Vision Transformer (Dosovitskiy et al., 2021) and T5 (Raffel et al., 2020). Pix2Struct is pre-trained on a _screenshot parsing_ task: predicting simplified HTMLs from screenshots with visually-masked regions. Such pre-training was proven effective for tasks related to understanding user interfaces in a non-interactive setting, such as screen summarization and widget captioning (Wang et al., 2021; Li et al., 2020). We use the Pix2Struct base variant with 282M parameters (12 encoder and 12 decoder layers; hidden size 768) for all our experiments. The model is called once per time step.
InputThe only input to the model is pixel-based observation from the environment. We can also condition on multiple previous observations by concatenating multiple frames. In preliminary experiments, we did not observe significant gains from conditioning on past observations for MiniWob++, and thus we only use the screenshot of the current step in our experiments. We reuse Pix2Struct's image processing by scaling input images up or down so as to extract the maximal number of fixed-size patches that still fit within the sequence length limit. We use resolutions of 160\(\times\)210 and 800\(\times\)600 for MiniWoB++ and WebShop, respectively.
OutputWe encode actions as text tokens, which are predicted autoregressively by the Transformer decoder. We use beam search over tokens to output the \(k\)-best actions (see Appendix B.1 for details).
Greedy PolicyFor interacting with the environment, we adopt a standard greedy policy, selecting the highest scoring action at each step, with one modification. To help prevent the agent from getting stuck in cycles, we track which actions have been taken for a given observation, and select the highest probability action in the beam that has not previously been taken given the current observation, which provides a modest increase in performance.
### Training
We explore two methods for training models to follow instructions via GUIs. First, similarly to prior work, we use Behavioral Cloning (BC), where we train our model using standard supervised learning to predict the given action for each observation in a set of human demonstrations. Second, given access to environments with reward signals, prior work has also explored Reinforcement Learning (RL) to further improve agent performance. As an alternative to common reinforcement learning algorithms such as REINFORCE (Williams, 2004) and PPO (Schulman et al., 2017), we apply tree search as a simple method for policy improvement.
Figure 2: **An example episode of our agent on the MiniWob++ use-colorwheel-2 task. At each step, the agent receives a new observation and outputs the next action to take. The screenshots include a rendered instruction that the agent needs to follow to successfully complete the episode. For MiniWob++, we use 32 vertical and horizontal coordinate bins to specify locations. We show the click location visually for this figure.**Tree SearchFor a given set of model parameters, tree search leverages the deterministic nature of the environment to look ahead at the consequences of possible actions to determine a more optimal policy than greedily selecting actions.
We adopt Monte Carlo Tree Search (MCTS) (Coulom, 2006), which outperformed more naive search algorithms in initial experiments, and has been successfully integrated with neural network policies in prior work (Silver et al., 2017; Anthony et al., 2017). Similarly to this prior work, we train a model to estimate a _value function_, which predicts the value (i.e., estimated future rewards) of a given state. We use a surrogate reward which penalizes the number of steps taken to encourage concise trajectories without unnecessary actions. We implement this value function approximator using the same Pix2Struct architecture used for our agent.4 However, instead of predicting actions, this model predicts state-values mapped to discrete buckets. To estimate the value of leaf states during MCTS, we use a combination of this value function approximator and rollouts using our greedy policy, similarly to Silver et al. (2017). See Appendix B for additional technical details.
Footnote 4: While it may be more efficient to share an encoder between these two Pix2Struct-based models that condition on the same inputs, we trained separate models for simplicity.
We can then use successful episodes found with this stronger tree search policy to improve our model. As this stronger model then yields a more effective tree search policy, we can continue to iteratively improve our model using this method. Notably, this approach requires no modifications to the fine-tuning procedure of Pix2Act, as, for simplicity, we tune on episodes from the tree search policy using standard supervised learning.
## 4 Benchmarks and Demonstrations
We adapt two benchmarks, MiniWob++ and WebShop, to our environment framework (SS2) which consists of pixel-based observations and generic low-level actions. We also map previously collected human demonstrations for these benchmarks to our observation and action spaces.
### MiniWob++
MiniWob++ (Liu et al., 2018) is a set of over a hundred web-browser based tasks. See Figures 1 and 2 for task examples. Each task consists of an algorithm for generating variations of the task and an instruction template, controlled by a random seed, with up to billions of possible configurations per task. The task instruction is given as (mostly) natural language text in the top yellow part, which in our framework can only be accessed visually. An automatic reward is given at the end of the task.
Human DemonstrationsWe use the human demonstrations collected by Humphreys et al. (2022). However, their demonstrations were collected using an X11-based environment, which is different from our Selenium-based environment. This results in different renderings of the same underlying environment state, introducing a shift between the screenshots seen during training and those observed at test time. Additionally, we need to map from their real-time X11-based action sequences to our action space. We were able to perform this mapping with a reasonable degree of success for 59 tasks. Notably, not all behaviors in the human demonstrations are supported in our Selenium-based environment. For example, Selenium does not implement the ability to highlight text and drag it into a text field, and such an action is widely used in the human demonstrations for tasks where text is copied and pasted. Additionally, while our environment framework intends to cover the basic functionality of most web interfaces, aspects of some MiniWob++ tasks, such as capturing real-time observations for animated elements, are not supported. See Appendix A for additional details.5
Footnote 5: Other prior work has used the demonstrations from Liu et al. (2018), which cover a different subset of MiniWob++ tasks. However, these demonstrations do not include screenshots or sufficient information to replay the episodes in a browser environment to collect new screenshots, and therefore cannot be applied to our setting.
Starting with approximately 1.3 million demonstrations across the 59 supported tasks, we filtered demonstrations with a reward of \(<0.8\), or approximately 6% of demonstrations. We were able to successfully convert 81% of the remaining demonstrations to our action space. We reserve 10% of the data for a development set. Demonstrations contain approximately 3 steps per task on average, although this varies considerably across tasks.
EvaluationWe report the mean score across seeds and tasks. The score is the MiniWob++ raw reward (without time decay) mapped from the original range \([-1,1]\) to the range \([0,100]\). The score is equivalent to the success rate (_i.e._ the proportion of episodes in which the agent receives a positive reward) for tasks with binary rewards. For episodes that do not complete due to reaching a maximum number of allowed steps, we assume a score of 0. For each task, we compute the mean over 100 random seeds, and then compute the mean over 59 MiniWob++ tasks.
### WebShop
WebShop (Yao et al., 2022) is a web-based shopping environment with over 1.1 million products from Amazon. The task is to find and purchase a product based on a human-authored text instruction. Finding a suitable product requires entering search queries, clicking on results, and determining the relevance of various products to the instruction. An automatic reward is computed based on similarity between the purchased product and the gold target product.
Human DemonstrationsWe use the 1,566 human demonstrations (with a train/development/test split of 1012/54/500) collected in Yao et al. (2022). As with the MiniWob++ demonstrations, we need to map between the observation and action sequences used in their setup to our framework. Yao et al. (2022) used high-level actions (_e.g._ "search" or "click[item]"), each of which could map to multiple lower-level actions in our environment. Specifically, for all actions involving a mouse click, we determine the coordinates of the center of the corresponding HTML element. For WebShop, the entire screen content is not always visible due to page heights exceeding the viewport dimensions. If the clicked element lies outside the visible area, we add scroll actions until the element is visible. Finally, we map search actions to two actions in our environment: clicking on the center of the search box and entering the search query followed by the _enter_ key. We render the HTML inputs in the human demonstrations using our browser to obtain screenshots. Additionally we found that rendering the last 5 actions (separated by <s>) on top of the screenshot to be helpful.
EvaluationConsistent with previous work, we report Task Score, which is the average reward across 500 test instructions.
## 5 Experiments and Analysis
### Training Details
We updated all model parameters during fine-tuning, including both the image encoder and text decoder. We used the Adafactor optimizer (Shazeer and Stern, 2018) with a learning rate of 0.01.
MiniWob++We finetuned a single model jointly on episodes from all tasks for a total of 26K steps using a batch size of 512, input/output sequence lengths of 512/16. We also evaluated using the tree search procedure described in SS3.1 to improve our agent. We performed 2 iterations of policy
Figure 3: **Main results evaluating PIX2Act (ours) on MiniWob++ (left) and WebShop (right). In this paper we focus on approaches that do not have access to DOM or HTML information, and receive pixel-based observations (blue). On this setting, PIX2Act significantly improves over prior work on MiniWob++ and establishes the first baseline on WebShop. Our method performs competitively with humans (green) and with methods that have access to DOM or HTML information (red) on MiniWob++, although there is a gap with the best performing methods that access HTML on WebShop (see §5.3 for detailed analysis).**
improvement with tree search, collecting a total of 826K episodes across all tasks, and tuning for a further 26K steps.
WebShopWe used only the provided human demonstrations to train our model.6 Due to its larger resolution and text-heavy data, we used a higher input sequence length of 4096. We also found it useful to perform intermediate finetuning on MiniWob++, followed by 10K steps of further finetuning on WebShop using a batch size of 256 (see SS5.3 for details).
Footnote 6: We did not explore applying RL techniques to WebShop in this work. Prior work (Yao et al., 2022) has not shown as significant an advantage to applying RL on WebShop relative to the large improvements shown by prior work on MiniWob++, which offers a near limitless variety of environments with reward signals for training.
### Main Results
We report the results of our models on MiniWob++ and WebShop in Figure 3. For MiniWob++, we also provide task-level comparisons between Pix2Act and human crowdworkers in Figure 4. There is limited prior work studying these tasks without access to DOM and HTML information. For MiniWob++, the only comparable baselines are from the CC-Net model of Humphreys et al. (2022), which mentions an ablation experiment where performance dropped by 75% from their primary results when the models conditioned on only screenshots without DOM information. As they did not provide per-task numbers for this ablation, we estimate the performance of CC-Net without DOM information by assuming that the drop in performance on the subset of tasks we study was also 75%. Regardless, it is clear that Pix2Act significantly outperforms CC-Net on this setting. The difference in performance can be largely attributed to the screenshot parsing pre-training of Lee et al. (2022). For WebShop, there is no prior work exploring such a setting, so we establish the first baseline.
### Ablations and Analysis
Pre-training ablationsTo study the impact of the pre-training on our model's ability to effectively learn to follow instructions via GUIs, we evaluate model performance without the pre-training procedure. For these experiments, we only compared performance of models trained using behavioral cloning. The results are shown in Figure 3, and demonstrate that pre-training is critical for our model's performance.
Comparison with models that use DOM or HTML as inputWe can also compare our results without access to DOM or HTML to previous methods that utilized these resources, including those which also leverage DOM information to construct specialized action spaces. The performance of the best model from prior work leveraging DOM or HTML information is shown in Figure 3.
For MiniWob++, the best model on this setting is CC-Net (Humphreys et al., 2022) trained with BC and RL and with access to both DOM and pixel-based observations.7Pix2Act achieves comparable performance to their best model, while relying on only a subset of the information used by CC-Net, and using a comparable number of human demonstrations for training. Pix2Act also outperforms
Figure 4: Comparing scores on MiniWob++ tasks of Pix2Act (blue) with human crowdworkers (green), ranked from left to right by the relative difference in performance.
CC-Net when each model is trained only with behavioral cloning, as CC-Net performance on this setting drops to 38.7 (results not shown in the Figure). Notably, CC-Net scores also drop by approximately 10% when the model is not given access to a dictionary of input strings provided by the environment. As shown in Figure 3, the key to our model's ability to achieve comparable performance without relying on DOM-based inputs is pixel-based pre-training. Another difference is that CC-Net uses a real time setting, which enables some forms of interaction not supported by our environment, and therefore can support a larger set of MiniWob++ tasks. On the other hand, for BC, CC-Net does not need to handle the shift in rendering format and potentially noisy action space conversion.
For WebShop, the best model on this setting is WebGUM (Furuta et al., 2023a), which leverages the HTML source, a custom action space for the shopping domain, and a Flan-T5-XL (Chung et al., 2022) backbone. WebGUM outperforms Pix2Act when compared on this setting. Some of this gap can be attributed to their simplified high-level action space, direct access to the relevant text on the page, and ability to transfer from Flan-T5's pretraining scale and instruction finetuning. Comparable improvements to the scale and pretraining of pixel-based models could reduce this gap.
We discuss other approaches that leverage DOM or HTML information further in SS6. We also offer a complete comparison across all MiniWob++ tasks in Appendix C.
Evaluating transfer across tasksTraining a pretrained, pixel-based model to interact with a GUI can intuitively lead to better generalization to new tasks that use common GUI design principles. To study this, we evaluate the ability of Pix2Act (without RL) to generalize to tasks unseen during training. Specifically, we hold out 9 out of 59 tasks and train on the remaining 50.8 We then evaluate performance on the held-out tasks, comparing initializing with Pix2Struct to random initialization. Table 1 illustrates that Pix2Act can reach a mean score of 28.3 on held out tasks compared to 65.5 when training on those tasks. Conversely, mean score is 7.6 when Pix2Struct initialization is not used. This shows that combining pretraining with a general GUI interface can lead to non-trivial generalization to held out tasks.
Footnote 8: We manually pick the 9 tasks to verify they include only actions or elements that would be reasonable to generalize to from the training tasks. The tasks are click-checkboxes-large, click-color, click-tab-2, click-tab-2-hard, count-shape, drag-shapes, use-color-wheel-2, use-slider-2.
For WebShop, we find that finetuning directly on WebShop (without intermediate finetuning on MiniWob++ as mentioned in 5.1) results in a drop of 4.0 in Task Score, demonstrating transfer learning benefits across these datasets.
Tree search analysisTable 2 shows the improvement in MiniWob++ scores by training on episodes generated by tree search. After an initial round of training on episodes generated by tree search, the effectiveness of tree search also improves due to improvements in the underlying model used to guide the search. The best greedy policy achieves performance close to the best tree search policy, but does not require access to reward signals or additional exploration at inference time. Our results indicate that we could further improve performance with more iterations of policy improvement via tree search.
\begin{table}
\begin{tabular}{l c c} \hline \hline Pre-training & Included & Heldout \\ \hline Yes & 65.5 & 28.3 \\ No & 11.0 & 7.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: We selected 9 MiniWob++ tasks and evaluated mean scores when they are _heldout_ from the training set. Pretraining leads to non-trivial generalization (28.3) to held out tasks that were unobserved at training time compared to a randomly initialized model (7.6). We also include scores when the tasks are _included_ during training for reference.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{Iteration} \\ \cline{2-4} Policy & 0 & 1 & 2 \\ \hline Greedy & 66.5 & 93.1 & 96.2 \\ Tree Search & 91.7 & 98.4 & — \\ \hline \hline \end{tabular}
\end{table}
Table 2: We compare average MiniWob++ scores using the greedy policy with one that uses tree search and lookahead, given the same underlying model. The model is initially trained on human demonstrations and iteratively improved by training on episodes generated by the tree search policy.
Related Work
We focus on agents that interact with GUIs, such as operating system dialogs or web pages, to accomplish a given task. Many early approaches relied on the structured information from the GUIs (Zettlemoyer and St. Amant, 1999; Allen et al., 2007; Branavan et al., 2010). This information could range from a flat list of GUI components and their properties, to the full hierarchical structure of the components (_e.g._ the DOM tree). The output space also depends on this structured information, often using GUI components as action targets (_e.g._ clicking button #7). As discussed in SS1, such structured information might not always be available, or might not align with what visually appears to the users.
When Shi et al. (2017) introduced the _World of Bits_ tasks, which was the precursor to MiniWob++ (Liu et al., 2018), they proposed a model based on a convolutional neural network that takes both visual and structured inputs and then performs generic low-level computer actions (_e.g._ clicking at a coordinate or pressing a key), similarly to Pix2Act. However, the model performed poorly compared to humans. Follow-up work studied specialized architectures for incorporating structured DOM information and restricted the action space to clicking and typing predetermined texts on DOM elements (Liu et al., 2018; Gur et al., 2018; Jia et al., 2019). Humphreys et al. (2022) reconsidered incorporating both visual and structured information as well as a low-level action space that aligns better to the human demonstrations. We discussed their approach, CC-Net, in SS5.3. Humphreys et al. (2022) also explored the benefits of large-scale human demonstrations, and we build on their work to utilize a large number of human demonstrations to train Pix2Act. This paper shows that Pix2Act, a model with pixel-only inputs, can outperform humans on MiniWob++ and match the state-of-the-art approaches that rely on DOM information.
Automating web-based tasks using large language models (LLMs) has also been broadly explored. For instance, WebGPT uses a text-based web browsing environment to search and navigate the web (Nakano et al., 2021). More relatedly, recent work has investigated prompting LLMs to produce agents that can generalize to tasks based on a small number of in-context examples. Yao et al. (2023) proposed ReAct, a few-shot prompted LLM, which uses observations derived from HTML and a custom action space to make predictions based on explicit reasoning steps. Similarly, Kim et al. (2023) proposed RCI, a prompted LLM that iteratively critiques and refines its outputs, also using HTML inputs and custom action spaces. These approaches achieve competitive performance on WebShop and MiniWob++, respectively, and are extremely sample-efficient, relying on just a handful of demonstrations per task. Gur et al. (2022) treated raw HTML as a string and fed it to LLMs pretrained on natural language. After fine-tuning them on demonstrations, the models improved MiniWob++ task success rate and sample efficiency compared to models that take DOM-based inputs and specialized architectures. Finally, WebGUM (Furuta et al., 2023), discussed in SS5.3, extends HTML-based models to integrate a vision encoder pretrained on ImageNet-21K.
Other work has focused on tasks related to mobile apps. Li and Li (2022) considered a model with pixel-based inputs similar to that of Lee et al. (2022), and included evaluations on tasks related to grounding instructions to screenshots, but did not consider interactive environments. Some work has considered instruction following tasks in mobile app environments (Li et al., 2020; Burns et al., 2022), but has generally not studied observation and action formats similar to ours, instead relying on inputs based on the Android view hierarchy. We focused on web-based GUIs so that we could use a consistent environment framework for simplicity. Besides GUIs, several works on video game agents also considered visual-only input and low-level actions. For example, most works on Atari games used the screenshot as visual input and predicted the controller buttons to press (Mnih et al., 2015). More recently, Baker et al. (2022), which focuses on learning from unlabeled videos, proposes an agent for Minecraft that uses pixel-based inputs paired with keyboard and mouse actions, similarly to Pix2Act.
## 7 Limitations and Discussion
Pixel-based vs. text-based representationsText-based representations may be practically useful when available, especially since they enable transferring knowledge from LLMs, demonstrating impressive few-shot learning with LLMs for MiniWob++ (Kim et al., 2023) and WebShop (Yao et al., 2023). When structured source is not available, OCR systems and models trained to predict the location and function of UI elements may also help connect models with the power of LLMs. On the other hand, similar advances in scaling and pre-training of vision or multimodal models could potentially enable similar capabilities in a pixel-based setting in the future, as we have shown the effectiveness of pixel-based pre-training (albeit at a smaller scale) for GUI-based tasks. Nevertheless, beyond addressing the case where HTML or DOM information is unavailable, we hope our study contributes towards a better understanding of the potential of pixel-based representations for instruction following via GUIs.
Tree SearchOur approach to policy improvement with tree search for MiniWob++ relied on the ability to procedurally generate new MiniWob++ environment and instruction variations and receive reward signals for task completion. Both aspects are unlikely to be available for some real world environments, and such an approach might need to rely on generative models of potential instructions and approximate reward models for task completion (Bahdanau et al. (2018); Du et al. (2023)). Our implementation also relied on the ability to reset the environment to an initial state, a useful feature for environments being used for exploration. Additionally, while we show that tree search can be sufficient to reach high performance on MiniWob++, we did not perform a detailed comparison relative to other search and RL algorithms in this study, which would be useful to better understand the most efficient approaches for learning from GUI-based environments.
Broader ImpactIn this paper we have trained and evaluated models only in offline environments. Responsibly deploying models in an environment where they can interact with online services would require additional considerations. Prior to enabling a model to access a new service, it would be important to sufficiently verify and/or constrain the behavior of the model to ensure that it is consistent with the terms-of-service for that service and does not otherwise cause harm. Ensuring sufficient data privacy could also be an important consideration for deploying models such as Pix2Act that rely on capturing screenshots from browsers.
There would be many potential risks associated with deploying models that could interact with services in violation of their terms-of-service or otherwise engage in various forms of spam, fraud, or abuse. Examples of such behavior could include impersonating human users, generating harmful content or spam, or engaging in denial-of-service attacks. Models that use the same conceptual interface humans use could potentially be more capable of breaking security defenses (e.g. solving CAPTCHAs) or engaging in forms of spam, fraud, or abuse that are more difficult to detect. It is therefore important for research related to security and techniques for detecting spam, fraud, and abuse to take such potential uses into account.
## Acknowledgments
We would like to thank Peter Humphreys, Toby Pohlen, and Gregory Thornton for their assistance with the MiniWob++ demonstraions. We also thank Ming-Wei Chang, Austin Huang, Luheng He, Tianze Shi, David Gaddy, Jacob Eisenstein, and Yi Luan for useful discussions and comments.
## References
* Allen et al. (2007) James F. Allen, Nathanael Chambers, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary D. Swift, and William Taysom. Plow: A collaborative task learning agent. In _AAAI Conference on Artificial Intelligence_, 2007.
* Anthony et al. (2017) Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning and tree search. _Advances in neural information processing systems_, 30, 2017.
* Bahdanau et al. (2018) Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. Learning to understand goal specifications by modelling reward. In _International Conference on Learning Representations_, 2018.
* Baker et al. (2022) Bowen Baker, Ilge Akkaya, Peter Zhokov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video pretraining (VPT): Learning to act by watching unlabeled online videos. _Advances in Neural Information Processing Systems_, 35:24639-24654, 2022.
* Branavan et al. (2010) S. R. K. Branavan, Luke Zettlemoyer, and Regina Barzilay. Reading between the lines: Learning to map high-level instructions to commands. In _Annual Meeting of the Association for Computational Linguistics_, 2010.
* Berman et al. (2010)Andrea Burns, Deniz Arsan, Sanjna Agrawal, Ranjitha Kumar, Kate Saenko, and Bryan A Plummer. Interactive mobile app navigation with uncertain or under-specified natural language commands. _arXiv preprint arXiv:2202.02312_, 2022.
* Chung et al. (2022) Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. _arXiv preprint arXiv:2210.11416_, 2022.
* Coulom (2006) Remi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In _Computers and Games_, 2006.
* Dosovitskiy et al. (2021) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _ICLR_, 2021.
* Du et al. (2023) Yuqing Du, Ksenia Konyushkova, Misha Denil, Akhil Raju, Jessica Landon, Felix Hill, Nando de Freitas, and Serkan Cabi. Vision-language models as success detectors. _arXiv preprint arXiv:2303.07280_, 2023.
* Furuta et al. (2023a) Hiroki Furuta, Ofir Nachum, Kuang-Huei Lee, Yutaka Matsuo, Shixiang Shane Gu, and Izzeddin Gur. Instruction-finetuned foundation models for multimodal web navigation. In _Workshop on Reincarnating Reinforcement Learning at ICLR 2023_, 2023a.
* Furuta et al. (2023b) Hiroki Furuta, Ofir Nachum, Kuang-Huei Lee, Yutaka Matsuo, Shixiang Shane Gu, and Izzeddin Gur. Instruction-finetuned foundation models for multimodal web navigation. In _First Workshop on Multimodal Representation Learning at ICLR_, 2023b.
* Gur et al. (2018) Izzeddin Gur, Ulrich Rueckert, Aleksandra Faust, and Dilek Hakkani-Tur. Learning to navigate the web. _arXiv preprint arXiv:1812.09195_, 2018.
* Gur et al. (2022) Izzeddin Gur, Ofir Nachum, Yingjie Miao, Mustafa Safdari, Austin Huang, Aakanksha Chowdhery, Sharan Narang, Noah Fiedel, and Aleksandra Faust. Understanding HTML with large language models. _arXiv preprint 2210.03945_, 2022.
* Humphreys et al. (2022) Peter C Humphreys, David Raposo, Tobias Pohlen, Gregory Thornton, Rachita Chhaparia, Alistair Muldal, Josh Abramson, Petko Georgiev, Adam Santoro, and Timothy Lillicrap. A data-driven approach for learning to control computers. In _International Conference on Machine Learning_, pages 9466-9482. PMLR, 2022.
* Jia et al. (2019) Sheng Jia, Jamie Ryan Kiros, and Jimmy Ba. Dom-q-net: Grounded rl on structured language. In _International Conference on Learning Representations_, 2019.
* Kim et al. (2023) Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. _arXiv preprint arXiv:2303.17491_, 2023.
* Lee et al. (2022) Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. Pix2Struct: Screenshot parsing as pretraining for visual language understanding. _arXiv preprint arXiv:2210.03347_, 2022.
* Li and Li (2022) Gang Li and Yang Li. Spotlight: Mobile ui understanding using vision-language models with a focus. In _The Eleventh International Conference on Learning Representations_, 2022.
* Li et al. (2020) Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge. Mapping natural language instructions to mobile ui action sequences. _arXiv preprint arXiv:2005.03776_, 2020a.
* Li et al. (2020) Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. Widget captioning: Generating natural language description for mobile user interface elements. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 5495-5510, Online, November 2020b. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.443. URL [https://aclanthology.org/2020.emnlp-main.443](https://aclanthology.org/2020.emnlp-main.443).
* Liu et al. (2018) Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In _International Conference on Learning Representations (ICLR)_, 2018. URL [https://arxiv.org/abs/1802.08802](https://arxiv.org/abs/1802.08802).
* Liu et al. (2019)Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Kirkeby Fidjeland, Georg Ostrovski, Stig Petersen, Charlie Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nature_, 518:529-533, 2015.
* Nakano et al. (2021) Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. _arXiv preprint arXiv:2112.09332_, 2021.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_, 21:1-67, 2020. URL [https://arxiv.org/abs/1910.10683](https://arxiv.org/abs/1910.10683).
* Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _ArXiv_, abs/1707.06347, 2017.
* Shazeer and Stern (2018) Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In _International Conference on Machine Learning_, pages 4596-4604. PMLR, 2018.
* Shi et al. (2017) Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In Doina Precup and Yee Whye Teh, editors, _Proceedings of the 34th International Conference on Machine Learning_, volume 70 of _Proceedings of Machine Learning Research_, pages 3135-3144. PMLR, 06-11 Aug 2017. URL [https://proceedings.mlr.press/v70/shii17a.html](https://proceedings.mlr.press/v70/shii17a.html).
* Silver et al. (2017) David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. _nature_, 550(7676):354-359, 2017.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* Wang et al. (2021) Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, and Yang Li. Screen2Words: Automatic mobile UI summarization with multimodal learning. In _The 34th Annual ACM Symposium on User Interface Software and Technology_, UIST '21, page 498-510, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450386357. doi: 10.1145/3472749.3474765. URL [https://doi.org/10.1145/3472749.3474765](https://doi.org/10.1145/3472749.3474765).
* Williams (2004) Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine Learning_, 8:229-256, 2004.
* Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. _arXiv preprint arXiv:1609.08144_, 2016.
* Yao et al. (2022) Shunyu Yao, Howard Chen, John Yang, and Karthik R Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022. URL [https://openreview.net/forum?id=R9KnuFlvnU](https://openreview.net/forum?id=R9KnuFlvnU).
* Yao et al. (2023) Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In _International Conference on Learning Representations (ICLR)_, 2023.
* Zettlemoyer and Amant (1999) Luke S Zettlemoyer and Robert St. Amant. A visual medium for programmatic control of interactive applications. In _Proceedings of the SIGCHI conference on Human Factors in Computing Systems_, pages 199-206, 1999.
Additional Dataset Details
### MiniWob++ Supported Tasks
MiniWob++ consists of 104 tasks. Most prior work (Shi et al., 2017; Liu et al., 2018; Gur et al., 2018; Jia et al., 2019) has evaluated performance on only a subset of these tasks, with the notable exception of Humphreys et al. (2022), which evaluated on all 104 tasks. We evaluated on 59 of these 104 tasks, based on our best effort attempt to (1) design a general purpose set of actions that could be implemented using Selenium and (2) convert the demonstrations collected by Humphreys et al. (2022) to our observation and action format. While further development of the conversion process and Selenium-based actions could potentially support more tasks, the 59 tasks we support still include a wide range of instructions and interactions. Note that determining the set of 59 tasks was based solely on the feasibility of conversion to our observation and action format, and _not_ based on model performance. Below we offer further details.
Several tasks in MiniWob++ feature animated elements. These tasks can require sampling observations in a real-time manner in order to capture the information needed to select the correct action. Also, the effects of an action may be delayed and therefore not captured by an observation sampled immediately after the action has executed. MiniWob++ provides a -modelay version for several tasks which removes such animations. We train and evaluate on the -modelay version of these tasks (choose-date, click-collapsible-2, click-collapsible, click-pie, use-autocomplete). We exclude choose-date-easy and choose-date-medium which offer simpler versions of choose-date but do not have a corresponding -modelay version. Additionally, we exclude chase-circle, drag-cube, moving-items, and simon-says, which feature animation without a -modelay version.
Many MiniWob++ tasks also involve vertical scrolling. In the human demonstrations, this can be implemented using a scroll wheel, or various clicking or dragging interactions with a vertical scroll bar rendered on the right side of a scrollable element. Mapping such interactions to actions that lead to equivalent scrolling in our Selenium-based environment is non-trivial. Therefore, for simplicity, we excluded tasks that involve scrolling: book-flight, click-scroll-list, email-inbox, email-inbox-nl-turk, read-table, read-table-2, scroll-text, scroll-text-2, search-engine, social-media, social-media-all, social-media-some, terminal.
Demonstrations for many MiniWob++ tasks also include copying and pasting text. In many cases, this was executed in the human demonstrations by double clicking a text string and then clicking and dragging it into an input field. Such an interaction is not supported in Selenium, which made it challenging to support these tasks. This led us to exclude the following tasks: login-user-popup, copy-paste, copy-paste-2, email-inbox-forward, email-inbox-forward-nl, email-inbox-forward-nl-turk, email-inbox-nosroll, email-inbox-reply, email-inbox-star-reply, enter-password, enter-text, enter-text-dynamic, find-word, login-user, multi-layouts, multi-orderings.
Finally, we excluded several other tasks for various other reasons. The choose-list task uses the HTML <select> tag to implement a drop-down menu, which is not supported properly by our Selenium-based environment. The click-menu and click-menu-2 tasks require unsupported mouseover effects. Demonstrations for the text-editor task features click and drag interactions to highlight text which do not have the same effect when executed in Selenium. There also appeared to be differences in how Selenium implemented the number input field for guess-number. Finally, we excluded several tasks due to low demonstration conversion success rates (focus-text, focus-text-2, use-spinner). Upon further investigation, this was due to episodes completing immediately after a "pointer down" event without a complete click for focus-text and focus-text-2, and due to frequent double clicking for use-spinner.
### MiniWob++ Rendering Differences
There are differences between the rendering of observations in the human demonstrations from Humphreys et al. (2022) and the rendering of environment state in our Selenium-based environment. We show an example in Figure 5, which shows subtle differences, _e.g._ in font style and in element sizes and positions.
Additional Technical Details
### Beam Search
As mentioned in SS3, we use beam search over tokens in the text decoder to produce a set of top-\(k\) actions for a given state, along with their approximate probabilities. We refer to these as approximate probabilities because they are subject to a length normalization factor (Wu et al., 2016) of \(0.6\) during beam search, following Raffel et al. (2020). For MiniWob and WebShop, our experiments used \(k=8\) and \(k=10\), respectively.
### Tree Search
Here we describe the details of the tree search approach described in SS3.1. We adopt Monte Carlo Tree Search (MCTS) (Coulom, 2006), and follow prior work which has integrated MCTS with neural networks (Silver et al., 2017; Anthony et al., 2017), which we apply to MiniWob++ environments. We performed a minimal amount of tuning to determine an approach that yielded improvements in mean score over the greedy policy, even for the most challenging tasks.
Problem SettingWe consider an environment with states \(\mathcal{S}\) and actions \(\mathcal{A}\). The reward function, \(r(s)\), returns a scalar corresponding to the reward given for transitioning to state \(s\in\mathcal{S}\), and is described below. MiniWob++ environments are randomly generated, but transitions are deterministic within an environment generated by a particular random seed. The transition function, \(f(s,a)\), returns the state resulting from taking action \(a\in\mathcal{A}\) in state \(s\in\mathcal{S}\).
Surrogate rewardRather than using the raw reward directly provided by the MiniWob++ environment, we consider a surrogate reward: \(r(s)=\alpha_{s}+r^{t}(s)\), where \(\alpha_{s}\) provides a small negative reward that encourages shorter trajectories without unnecessary actions. \(r^{t}(s)\) is the raw reward from the MiniWob++ environment if \(s\) is a terminal state and the raw reward is \(>0.8\), or \(0\) otherwise. We use \(\alpha_{S}=-\frac{1}{30}\). As all tasks can be completed within 30 steps, this is small enough to ensure a positive reward is possible for all tasks. Additionally, the penalty is small enough such that in practice the agent should not be incentivized to sacrifice raw reward to reduce the number of steps taken.
Value networkThe value function \(v^{\pi}(s)\) for a given policy \(\pi\) is the expected future rewards from state \(s\) if actions are selected according to policy \(\pi\). The optimal value function, \(v^{*}(s)\), is the expected future rewards if optimal actions are chosen. We attempt to learn an approximation of this function, \(\hat{v}_{\phi}(s)\approx v^{*}(s)\), parameterized as a Pix2Struct-initialized model with parameters \(\phi\), which we refer to as the _value network_. The model is trained on transitions from the human demonstrations, which demonstrate close to optimal behavior in many cases. For every state in the human demonstrations, we compute the actual future rewards for the given episode, according to the surrogate reward. We map these future rewards to discrete bins and represent them as integers in the Pix2Struct decoder. At inference time, we approximate the mean of the distribution over these discrete bins by considering the top-\(n\) predictions from the model using beam search (with \(n=3\)), weighted proportional to their respective probabilities.
Figure 5: Comparison of differences between the screenshots of the human demonstrations for MiniWob++ from Humphreys et al. (2022) (right) with how the same environment state is rendered in our Selenium-based environment (left).
Policy networkFor consistency with prior work, we will refer to the Pix2Struct model tuned to generate actions (_i.e._ Pix2Act) as the _policy network_, with parameters \(\theta\). The greedy policy \(\pi_{\theta}(s)\) selects the action \(a\) with the highest approximate probability \(p_{\theta}(a|s)\) in the top-\(k\) beam (see SSB.1), subject to the conditions described in SS3.
Search policyWe can use lookahead search to implement a policy, \(\pi_{\theta}^{*}(s)\), which leverages interactions with the environment (\(f(s,a)\) and \(r(s)\)) to select actions in a more optimal way than the greedy policy \(\pi_{\theta}(s)\). Both the policy network and value network are used to constrain and prioritize the search.
MCTS performs \(K\) rounds of traversing a search tree with nodes corresponding to states, and edges corresponding to actions. Due to the computational cost of the policy and value networks, we use a modest number of rounds, \(K=16\), for our experiments. The search tree is initialized with a single root node for state \(s\). Each round starts at \(s\) and traverses the tree. At each step \(t\) of a given round, an action \(a_{t}\) is selected for state \(s_{t}\), where \(a_{t}=\max_{a}Q(s_{t},a)+U(s_{t},a)\). \(Q(s_{t},a)\) is an average reward over all rounds that have traversed the associated edge. It is based on actual accumulated rewards during tree traversal and the value estimates of leaf states (described below).
\(U(s_{t},a)=c*p_{\theta}(a|s)*\frac{\sqrt{N(s_{t})}}{1+n(s_{t},a)}\) is a term that encourages exploration, where \(n(s_{t},a)\) is the number of times action \(a\) has been selected from state \(s_{t}\), \(N(s_{t})\) is the total number of times state \(s_{t}\) has been visited, and \(c\) is a scalar hyperparameter that we set to \(0.1\). Following Silver et al. (2017), we use the policy network to bias this exploration term. To constrain the search, we only consider the top-\(k\) actions according to the policy network, where \(k=8\) in our experiments.
If we select an action \(a_{t}\) for state \(s_{t}\) which has never been previously selected from \(s_{t}\), then the simulation ends and we add a new leaf state, \(s_{L}=f(s_{t},a)\), to the search tree. If \(s_{L}\) is not a terminal state, then we estimate its value (_i.e._ future returns) using both the value network and a rollout with the greedy policy. Specifically, following Silver et al. (2017), we estimate its value as \(\lambda*\tilde{v}_{\theta}(s_{L})+(1-\lambda)*v^{\pi_{\theta}}(s_{L})\) where \(v^{\pi_{\theta}}(s_{L})\) is equal to the actual returns from following the policy \(\pi_{\theta}\) starting at \(s_{L}\) for a maximum of \(20\) steps, with actual returns clipped to a minimum value of \(0\). Is there \(\lambda\) is a mixing parameter that we set to \(0.1\). For challenging environments, rollouts may be unlikely to find a terminal state with positive reward, and in such cases rollouts may not be very informative. On the other hand, the value network can provide poor value estimates for certain states, especially if they are not well represented in the human demonstrations. By combining both methods we aim to provide a better approximation of the value of leaf states. Returns are propagated up the tree to each parent \(s^{\prime}\) to update \(Q(s^{\prime},a)\). As \(Q(s_{L},a)\) is undefined prior to selecting \(a\) from \(s_{L}\) for the first time, we initialize \(Q(s_{L},a)\) for each action to be equal to the initial value estimate of \(s_{L}\) plus \(\alpha_{s}\).
To understand the impact of rollouts and value estimates using the value network, in Table 3 we compare mean scores over 12 challenging MiniWob++ tasks for different values of \(\lambda\): 0 (rollout only), 0.1 (both rollout and value network), and 1 (value network only). We also include the mean score using the greedy policy for reference. These results use the policy network and value network trained on the human demonstrations. The results show that using a combination of rollouts and the value network gives the best results. The value network is primarily useful for challenging tasks that require longer trajectories, such as number-checkboxes, relative to using rollouts only.
Once we have completed \(K\) rounds, \(\pi_{\theta}^{*}(s)\) selects the most visited action \(a\) for state \(s\), and we begin the process again at the subsequent state. We reuse the search tree for subsequent time steps for efficiency, so we require only \(K-n(s,a)\) additional rounds for the subsequent state.
Policy improvementWe can sample trajectories with \(\pi_{\theta}^{*}\), then update \(\theta\) by training \(\pi_{\theta}(s)\) to approximate \(\pi_{\theta}^{*}(s)\) for each \(s\) in the sampled trajectories. This then also improves \(\pi_{\theta}^{*}(s)\), as \(\theta\) informs how the search space is constrained and prioritized. Therefore, we can continue to iteratively improve \(\pi_{\theta}(s)\). To produce these trajectories, we randomly sample MiniWob++ tasks and seeds, and select
\begin{table}
\begin{tabular}{c c c c} \hline \hline Greedy Policy & \(\lambda=0\) (rollout only) & \(\lambda=0.1\) & \(\lambda=1\) (value network only) \\ \hline
28.8 & 74.2 & **78.3** & 57.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean scores for different policies over 12 challenging MiniWob++ tasks.
actions according to \(\pi_{\theta}^{*}\). We then filter trajectories where the raw reward is \(<0.8\). We then tune \(\theta\) on these new trajectories. For simplicity, we keep the value network (_i.e_. \(\phi\)) fixed.
We initially found that tuning on trajectories from MCTS could be unstable, leading to an early loss spike. To resolve this, we slightly decreased the learning rate (from \(1e-3\) to \(5e-4\)) and increased the number of warmup steps (from \(1000\) to \(4000\)) relative to the hyperparameters used for behavioral cloning.
### Compute Details
We fine-tuned models using 64 Google Cloud TPU v3 cores.
## Appendix C Additional Results
### Variance Estimates
We evaluated results for MiniWob++ based on 100 randomly selected seeds for each of the 59 tasks. To understand how results vary depending on which 100 seeds per task are used for evaluation, we ran 3 trials with different evaluation seeds for the strongest Pix2Act model reported in Table 3, yielding mean scores of \(96.2\), \(96.4\), and \(96.1\); the standard deviation across these trials was \(0.15\). For WebShop, there is a standard test set consisting of 500 instances, so selecting seeds for evaluation is not necessary.
### MiniWob++ Results Per Task
We show the performance of Pix2Act (ours) on each of the 59 MiniWob++ tasks we study, compared to other approaches, in Table 4. We compare with human crowdworker performance reported by Humphreys et al. (2022), CC-Net Humphreys et al. (2022), DOM-Q-Net Jia et al. (2019), DOMNET with workflow-guided execution Liu et al. (2018), QWeb Gur et al. (2018), RCI Kim et al. (2023), WebN-T5-3B Gur et al. (2022), and WebGUM Furuta et al. (2023). We also report scores for Pix2Act and CC-Net with behavioral cloning (BC) only. We do not include scores for GlobalCNN Shi et al. (2017), which reported only human normalized success rates. Other than Humphreys et al. (2022), prior work has primarily reported success rate (_i.e_. the percentage of episodes with positive rewards), which can be equivalently mapped to the scores we report for tasks without partial rewards. | ## Review
### Summary
This paper presents PIX2ACT, a novel approach to interact with graphical user interfaces (GUIs) using pixel-level visual representations instead of structured data like HTML or DOM trees. The authors demonstrate that their model can effectively perform GUI-based tasks using generic mouse and keyboard actions. Through rigorous validation on benchmarks such as MiniWob++ and WebShop, the study highlights the advantages of this pixel-only approach, which aligns closely with human interaction methods. The findings indicate that pre-training on image-to-text tasks can significantly enhance the model's performance, achieving results competitive with human operators in specific scenarios.
### Strengths
- The paper presents a clear motivation for developing a pixel-based agent that interacts with GUIs, closely mimicking human behavior.
- It employs well-crafted illustrations that enhance comprehension of the methods and findings.
- The proposed framework is rigorously validated on two datasets, demonstrating robustness through detailed comparisons with baselines.
- The use of the PIX2STRUCT pre-training method significantly improves performance in GUI tasks.
- The paper is well-written and easy to follow, providing valuable insights into the field of automated GUI interactions.
### Weaknesses
- The experimental design lacks certain ablations that would shed light on the effectiveness of each component in the agent framework.
- The rationale for embedding instructions into the screenshot rather than providing them separately is not clearly justified.
- The model's performance on more complex datasets remains unclear, as the chosen benchmarks may not fully represent real-world GUI complexities.
- There is an over-reliance on pre-training, which could limit adaptability in diverse applications.
- The evaluation procedure lacks detail, making it challenging to assess the rigor of the experiments conducted.
### Questions
- What are the potential implications of the model’s performance in real-time environments?
- How does the choice of pixel-based input impact the model's scalability with different resolutions or GUI complexities?
- What is the typical latency involved in the transformation of screenshots into actionable representations?
- Can the authors elaborate on the impact of using a greedy action selection policy in the context of long-term strategies?
- What specific steps were taken to ensure statistical significance in the reported results?
### Soundness
**Score:** 3
**Description:** 3 = good. The methodology is solid and well-supported by experimental results, although some aspects of the design could benefit from further clarification and detail.
### Presentation
**Score:** 3
**Description:** 3 = good. The writing is clear and organized, but some figures are blurry and the details in the evaluation section lack sufficient depth.
### Contribution
**Score:** 3
**Description:** 3 = good. The paper makes a notable contribution to the field of GUI interaction through its innovative approach and validation on established benchmarks.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically sound with moderate-to-high impact potential, though it requires some improvements in experimental design and clarity.
### Paper Decision
**Decision:** Accept (spotlight)
**Reasons:** The paper presents an original approach to pixel-based GUI interaction that demonstrates significant potential and performance. While there are some weaknesses in experimental design and clarity, the strengths and contributions outweigh these issues. Given its innovative nature and the thoroughness of the evaluations provided, the decision to accept is justified.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework
Hua Wang
Department of Statistics and Data Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
&Sheng Gao
Department of Statistics and Data Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
&Huanyu Zhang
Meta Platforms, Inc.
New York, NY 10003
[email protected]
&Weijie J. Su
Department of Statistics and Data Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
&Milan Shen
Meta Platforms, Inc.
Menlo Park, CA 94025
[email protected]
###### Abstract
Hyperparameter optimization, also known as hyperparameter tuning, is a widely recognized technique for improving model performance. Regrettably, when training private ML models, many practitioners often overlook the privacy risks associated with hyperparameter optimization, which could potentially expose sensitive information about the underlying dataset. Currently, the sole existing approach to allow privacy-preserving hyperparameter optimization is to uniformly and randomly select hyperparameters for a number of runs, subsequently reporting the best-performing hyperparameter. In contrast, in non-private settings, practitioners commonly utilize "adaptive" hyperparameter optimization methods such as Gaussian process-based optimization, which select the next candidate based on information gathered from previous outputs. This substantial contrast between private and non-private hyperparameter optimization underscores a critical concern. In our paper, we introduce DP-HyPO, a pioneering framework for "adaptive" private hyperparameter optimization, aiming to bridge the gap between private and non-private hyperparameter optimization. To accomplish this, we provide a comprehensive differential privacy analysis of our framework. Furthermore, we empirically demonstrate the effectiveness of DP-HyPO on a diverse set of real-world datasets.
## 1 Introduction
In recent decades, modern deep learning has demonstrated remarkable advancements in various applications. Nonetheless, numerous training tasks involve the utilization of sensitive information pertaining to individuals, giving rise to substantial concerns regarding privacy [29, 6]. To address these concerns, the concept of differential privacy (DP) was introduced by [12, 13]. DP provides a mathematically rigorous framework for quantifying privacy leakage, and it has gained widespreadacceptance as the most reliable approach for formally evaluating the privacy guarantees of machine learning algorithms.
When training deep learning models, the most popular method to ensure privacy is noisy (stochastic) gradient descent (DP-SGD) [4, 35]. DP-SGD typically resembles non-private gradient-based methods; however, it incorporates gradient clipping and noise injection. More specifically, each individual gradient is clipped to ensure a bounded \(\ell_{2}\) norm. Gaussian noise is then added to the average gradient which is utilized to update the model parameters. These adjustments guarantee a bounded sensitivity of each update, thereby enforcing DP through the introduction of additional noise.
In both non-private and private settings, hyperparameter optimization (HPO) plays a crucial role in achieving optimal model performance. Commonly used methods for HPO include grid search (GS), random search (RS), and Bayesian optimization (BO). GS and RS approaches are typically non-adaptive, as they select the best hyperparameter from a predetermined or randomly selected set. While these methods are straightforward to implement, they can be computationally expensive and inefficient when dealing with large search spaces. As the dimensionality of hyperparameters increases, the number of potential trials may grow exponentially. To address this challenge, adaptive HPO methods such as Bayesian optimization have been introduced [34, 14, 40]. BO leverages a probabilistic model that maps hyperparameters to objective metrics, striking a balance between exploration and exploitation. BO quickly emerged as the default method for complex HPO tasks, offering improved efficiency and effectiveness compared to non-adaptive methods.
While HPO is a well-studied problem, the integration of a DP constraint into HPO has received little attention. Previous works on DP machine learning often neglect to account for the privacy cost associated with HPO [1, 39, 42, 42]. These works either assume that the best parameters are known in advance or rely on a supplementary public dataset that closely resembles the private dataset distribution, which is not feasible in most real-world scenarios.
Only recently have researchers turned to the important concept of honest HPO [28], where the privacy cost during HPO cannot be overlooked. Private HPO poses greater challenges compared to the non-private case for two primary reasons. First, learning with DP-SGD introduces additional hyperparameters (e.g., clipping norm, the noise scale, and stopping time), which hugely adds complexity to the search for optimal hyperparameters. Second, DP-SGD is more sensitive to the selection of hyperparameter combinations, with its performance largely influenced by this choice [28, 10, 31].
To tackle this challenging question, previous studies such as [24, 32] propose running the base algorithm with different hyperparameters a random number of times. They demonstrate that this approach significantly benefits privacy accounting, contrary to the traditional scaling of privacy guarantees with the square root of the number of runs (based on the composition properties from [19]). While these papers make valuable contributions, their approaches only allow for uniformly random subsampling from a finite and pre-fixed set of candidate hyperparameters at each run. As a result, any advanced technique from HPO literature that requires adaptivity is either prohibited or necessitates a considerable privacy cost (polynomially dependent on the number of runs), creating a substantial gap between non-private and private HPO methods.
Given these considerations, a natural question arises: _Can private hyperparameter optimization be adaptive, without a huge privacy cost?_ In this paper, we provide an affirmative answer to this question.
### Our Contributions
* **We introduce the pioneering adaptive private hyperparameter optimization framework, DP-HyPO,** which enables practitioners to adapt to previous runs and focus on potentially superior hyperparameters. DP-HyPO permits the flexible use of non-DP adaptive hyperparameter optimization methods, such as Gaussian process, for enhanced efficiency, while avoiding the substantial privacy costs due to composition. In contrast to the non-adaptive methods presented in [32, 24], our adaptive framework, DP-HyPO, effectively bridges the gap between private and non-private hyperparameter optimization. Importantly, our framework not only encompasses the aforementioned non-adaptive methods as special cases, but also seamlessly integrates virtually all conceivable adaptive methods into the framework.
* **We provide sharp DP guarantees for the adaptive private hyperparameter optimization.** Specifically, when the training procedure is executed multiple times, with each iteration being DP on its own, outputting the best repetition is DP ensured by the composition property. However, applying composition results in excessively loose privacy guarantees. Prior work in [24, 32] presents bounds that are either independent of the number of repetitions or depend logarithmically on it. Nevertheless, these results require that the hyperparameter selection for each iteration follows a uniform sampling distribution. In contrast, DP-HyPO allows arbitrary adaptive sampling distributions based on previous runs. Utilizing the Renyi DP framework, we offer a strict generalization of those uniform results by providing an accurate characterization of the Renyi divergence between the adaptive sampling distributions of neighboring datasets, without any stability assumptions.
* **Empirically, we observe that the Gaussian process-based DP-HyPO algorithm outperforms its uniform counterpart across several practical scenarios.** Generally, practitioners can integrate any non-private adaptive HPO methods into the DP-HyPO framework, opening up a vast range of adaptive private HPO algorithm possibilities. Furthermore, DP-HyPO grants practitioners the flexibility to determine the privacy budget allocation for adaptivity, empowering them to balance between the adaptivity and privacy loss when confronting various hyperparameter optimization challenges.
## 2 Preliminaries
### Differential Privacy and Hyperparameter Optimization
Differential Privacy is a mathematically rigorous framework for quantifying privacy leakage. A DP algorithm promises that an adversary with perfect information about the entire private dataset in use - except for a single individual - would find it hard to distinguish between its presence or absence based on the output of the algorithm [12]. Formally, for \(\varepsilon>0\), and \(0\leq\delta<1\), we consider a (randomized) algorithm \(M:\mathcal{Z}^{n}\rightarrow\mathcal{Y}\) that takes as input a dataset.
**Definition 2.1** (Differential privacy).: A randomized algorithm \(M\) is \((\varepsilon,\delta)\)-DP if for any neighboring dataset \(D,D^{\prime}\in\mathcal{Z}^{n}\) differing by an arbitrary sample, and for any event \(E\), we have
\[\mathbb{P}[M(D)\in E]\leqslant\mathrm{e}^{\varepsilon}\cdot\mathbb{P}\left[M \left(D^{\prime}\right)\in E\right]+\delta.\]
Here, \(\varepsilon\) and \(\delta\) are privacy parameters that characterize the privacy guarantee of algorithm \(M\). One of the fundamental properties of DP is composition. When multiple DP algorithms are sequentially composed, the resulting algorithm remains private. The total privacy cost of the composition scales approximately with the square root of the number of compositions [19].
We now formalize the problem of hyperparameter optimization with DP guarantees, which builds upon the finite-candidate framework presented in [24, 32]. Specifically, we consider a set of base DP algorithms \(M_{\lambda}:\mathcal{Z}^{n}\rightarrow\mathcal{Y}\), where \(\lambda\in\Lambda\) represents a set of hyperparameters of interest, \(\mathcal{Z}^{n}\) is the domain of datasets, and \(\mathcal{Y}\) denotes the range of the algorithms. This set \(\Lambda\) may be any infinite set, e.g., the cross product of the learning rate \(\eta\) and clipping norm \(R\) in DP-SGD. We require that the set \(\Lambda\) is a measure space with an associated measure \(\mu\). Common choices for \(\mu\) include the counting measure or Lebesgue measure. We make a mild assumption that \(\mu(\Lambda)<\infty\).
Based on the previous research [32], we make two simplifying assumptions. First, we assume that there is a total ordering on the range \(\mathcal{Y}\), which allows us to compare two selected models based on their "performance measure", denoted by \(q\). Second, we assume that, for hyperparameter optimization purposes, we output the trained model, the hyperparameter, and the performance measure. Specifically, for any input dataset \(D\) and hyperparameter \(\lambda\), the return value of \(M_{\lambda}\) is \((x,q)\sim M_{\lambda}(D)\), where \(x\) represents the combination of the model parameters and the hyperparameter \(\lambda\), and \(q\) is the (noisy) performance measure of the model.
### Related Work
In this section, we focus on related work concerning private HPO, while deferring the discussion on non-private HPO to Appendix F.
Historically, research in DP machine learning has neglected the privacy cost associated with HPO [1, 39, 42]. It is only recently that researchers have begun to consider the honest HPO setting [28], in which the cost is taken into account.
A direct approach to addressing this issue involves composition-based analysis. If each training run of a hyperparameter satisfies DP, the entire HPO procedure also complies with DP through composition across all attempted hyperparameter values. However, the challenge with this method is that the privacy guarantee derived from accounting can be excessively loose, scaling polynomially with the number of runs.
Chaudhuri et al. [7] were the first to enhance the DP bounds for HPO by introducing additional stability assumptions on the learning algorithms. [24] made significant progress in enhancing DP bounds for HPO without relying on any stability properties of the learning algorithms. They proposed a simple procedure where a hyperparameter was randomly selected from a uniform distribution for each training run. This selection process was repeated a random number of times according to a geometric distribution, and the best model obtained from these runs was outputted. They showed that this procedure satisfied \((3\varepsilon,0)\)-DP as long as each training run of a hyperparameter was \((\varepsilon,0)\)-DP. Building upon this, [32] extended the procedure to accommodate negative binomial or Poisson distributions for the repeated uniform selection. They also offered more precise Renyi DP guarantees for this extended procedure. Furthermore, [8] explored a generalization of the procedure for top-\(k\) selection, considering \((\varepsilon,\delta)\)-DP guarantees.
In a related context, [28] explored a setting that appeared superficially similar to ours, as their title mentioned "adaptivity." However, their primary focus was on improving adaptive optimizers such as DP-Adam, which aimed to reduce the necessity of hyperparameter tuning, rather than the adaptive HPO discussed in this paper. Notably, in terms of privacy accounting, their approach only involved composing the privacy cost of each run without proposing any new method.
Another relevant area of research is DP selection, which encompasses well-known methods such as the exponential mechanism [25] and the sparse vector technique [13], along with subsequent studies. However, this line of research always assumes the existence of a low-sensitivity score function for each candidate, which is an unrealistic assumption for hyperparameter optimization.
## 3 DP-HyPO: General Framework for Private Hyperparameter Optimization
The obvious approach to the problem of differentially private hyperparameter optimization would be to run each base algorithm and simply return the best one. However, running such an algorithm on large hyperparameter space is not feasible due to the privacy cost growing linearly in the worst case.
While [24, 32] have successfully reduced the privacy cost for hyperparameter optimization from linear to constant, there are still two major drawbacks. First, none of the previous methods considers the case when the potential number of hyperparameter candidates is infinite, which is common in most hyperparameter optimization scenarios. In fact, we typically start with a range of hyperparameters that we are interested in, rather than a discrete set of candidates. Furthermore, prior methods are limited to the uniform sampling scheme over the hyperparameter domain \(\Lambda\). In practice, this setting is unrealistic since we want to "adapt" the selection based on previous results. For instance, one could use Gaussian process to adaptively choose the next hyperparameter for evaluation, based on all the previous outputs. However, no adaptive hyperparameter optimization method has been proposed or analyzed under the DP constraint. In this paper, we bridge this gap by introducing the first DP adaptive hyperparameter optimization framework.
### DP-HyPO Framework
To achieve adaptive hyperparameter optimization with differential privacy, we propose the DP-HyPO framework. Our approach keeps an adaptive sampling distribution \(\pi\) at each iteration that reflects accumulated information.
Let \(Q(D,\pi)\) be the procedure that randomly draws a hyperparameter \(\lambda\) from the distribution1\(\pi\in\mathcal{D}(\Lambda)\), and then returns the output from \(M_{\lambda}(D)\). We allow the sampling distribution to depend on both the dataset and previous outputs, and we denote as \(\pi^{(j)}\) the sampling distribution at the \(j\)-thiteration on dataset \(D\). Similarly, the sampling distribution at the \(j\)-th iteration on the neighborhood dataset \(D^{\prime}\) is denoted as \(\pi^{\prime(j)}\).
We now present the DP-HyPO framework, denoted as \(\mathcal{A}(D,\pi^{(0)},\mathcal{T},C,c)\), in Framework 1. The algorithm takes a prior distribution \(\pi^{(0)}\in\mathcal{D}(\Lambda)\) as input, which reflects arbitrary prior knowledge about the hyperparameter space. Another input is the distribution \(\mathcal{T}\) of the total repetitions of training runs. Importantly, we require it to be a random variable rather than a fixed number to preserve privacy. The last two inputs are \(C\) and \(c\), which are upper and lower bounds of the density of any posterior sampling distributions. A finite \(C\) and a positive \(c\) are required to bound the privacy cost of the entire framework.
```
Initialize \(\pi^{(0)}\), a prior distribution over \(\Lambda\). Initialize the result set \(A=\{\}\) Draw \(T\sim\mathcal{T}\) for\(j=0\) to \(T-1\)do \((x,q)\sim Q(D,\pi^{(j)})\) \(A=A\cup\{(x,q)\}\) Update \(\pi^{(j+1)}\) based on \(A\) according to any adaptive algorithm such that for all \(\lambda\in\Lambda\), \[c\leq\frac{\pi^{(j+1)}(\lambda)}{\pi^{(0)}(\lambda)}\leq C\] endfor Output \((x,q)\) from \(A\) with the highest \(q\)
```
**Framework 1** DP-HyPO \(\mathcal{A}(D,\pi^{(0)},\mathcal{T},C,c)\)
Notice that we intentionally leave the update rule for \(\pi^{(j+1)}\) unspecified in Framework 1 to reflect the fact that any adaptive update rule that leverages information from previous runs can be used. However, for a non-private adaptive HPO update rule, the requirement of bounded adaptive density \(c\leq\frac{\pi^{(j+1)}(\lambda)}{\pi^{(0)}(\lambda)}\leq C\) may be easily violated. In Section 3.2, We provide a simple projection technique to privatize any non-private update rules. In Section 4, we provide an instantiation of DP-HyPO using Gaussian process.
We now state our main privacy results for this framework in terms of Renyi Differential Privacy (RDP) [27]. RDP is a privacy measure that is more general than the commonly used \((\varepsilon,\delta)\)-DP and provides tighter privacy bounds for composition. We defer its exact definition to Definition A.2 in the appendix.
We note that different distributions of the number of selections (iterations), \(\mathcal{T}\), result in very different privacy guarantees. Here, we showcase the key idea for deriving the privacy guarantee of DP-HyPO framework by considering a special case when \(\mathcal{T}\) follows a truncated negative binomial distribution2\(\operatorname{NegBin}(\theta,\gamma)\) (the same assumption as in [32]). In fact, as we show in the proof of Theorem 1 in Appendix A, the privacy bounds only depend on \(\mathcal{T}\) directly through its probability generating function, and therefore one can adapt the proof to obtain the corresponding privacy guarantees for other probability families, for example, the Possion distribution considered in [32]. From here and on, unless otherwise specified, we will stick with \(\mathcal{T}=\operatorname{NegBin}(\theta,\gamma)\) for simplicity. We also assume for simplicity that the prior distribution \(\pi^{(0)}\) is a uniform distribution over \(\Lambda\). We provide more detailed discussion of handling informed prior other than uniform distribution in Appendix D.
Footnote 2: Truncated negative binomial distribution is a direct generalization of the geometric distribution. See Appendix B for its definition.
**Theorem 1**.: _Suppose that \(T\) follows truncated negative Binomial distribution \(T\sim\operatorname{NegBin}(\theta,\gamma)\). Let \(\theta\in(-1,\infty)\), \(\gamma\in(0,1)\), and \(0<c\leq C\). Suppose for all \(M_{\lambda}:\mathcal{Z}^{n}\to\mathcal{Y}\) over \(\lambda\in\Lambda\), the base algorithms satisfy \((\alpha,\varepsilon)\)-RDP and \((\hat{\alpha},\hat{\varepsilon})\)-RDP for some \(\varepsilon,\hat{\varepsilon}\geq 0,\alpha\in(1,\infty)\), and \(\hat{\alpha}\in[1,\infty)\). Then the DP-HyPO algorithm \(\mathcal{A}(D,\pi^{(0)},\operatorname{NegBin}(\theta,\gamma),C,c)\) satisfies \((\alpha,\varepsilon^{\prime})\)-RDP where_
\[\varepsilon^{\prime}=\varepsilon+(1+\theta)\cdot\left(1-\frac{1}{\hat{\alpha }}\right)\hat{\varepsilon}+\left(\frac{\alpha}{\alpha-1}+1+\theta\right) \log\frac{C}{c}+\frac{(1+\theta)\cdot\log(1/\gamma)}{\hat{\alpha}}+\frac{\log \mathbb{E}[T]}{\alpha-1}.\]To prove Theorem 1, one of our main technical contributions is Lemma A.4, which quantifies the Renyi divergence of the sampling distribution at each iteration between the neighboring datasets. We then leverage this crucial result and the probability generating function of \(\mathcal{T}\) to bound the Renyi divergence in the output of \(\mathcal{A}\). We defer the detailed proof to Appendix A.
Next, we present the case with pure DP guarantees. Recall the fact that \((\varepsilon,0)\)-DP is equivalent to \((\infty,\varepsilon)\)-RDP [27]. When both \(\alpha\) and \(\hat{\alpha}\) tend towards infinity, we easily obtain the following theorem in terms of \((\varepsilon,0)\)-DP.
**Theorem 2**.: _Suppose that \(T\) follows truncated negative Binomial distribution \(T\sim\mathrm{NegBin}(\theta,\gamma)\). Let \(\theta\in(-1,\infty)\) and \(\gamma\in(0,1)\). If all the base algorithms \(M_{\lambda}\) satisfies \((\varepsilon,0)\)-DP then the DP-HyPO algorithm \(\mathcal{A}(D,\pi^{(0)},\mathrm{NegBin}(\theta,\gamma),C,c)\) satisfies \(\left((2+\theta)\left(\varepsilon+\log\frac{C}{c}\right),0\right)\)-DP._
Theorem 1 and Theorem 2 provide practitioners the freedom to trade off between allocating more DP budget to enhance the base algorithm or to improve adaptivity. In particular, a higher value of \(\frac{C}{c}\) signifies greater adaptivity, while a larger \(\varepsilon\) improves the performance of base algorithms.
#### 3.1.1 Uniform Optimization Method as a Special Case
We present the uniform hyperparameter optimization method [32, 23] in Algorithm 2, which is a special case of our general DP-HyPO Framework with \(C=c=1\). Essentially, this algorithm never updates the sampling distribution \(\pi\).
```
Let \(\pi=\text{Unif}(\{1,...,|\Lambda|\})\), and \(A=\{\}\) Draw \(T\sim\text{NegBin}(\theta,\gamma)\) for\(j=0\) to \(T-1\)do \((x,q)\sim Q(D,\pi)\) \(A=A\cup\{(x,q)\}\) endfor Output \((x,q)\) from \(A\) with the highest \(q\)
```
**Algorithm 2** Uniform Hyperparameter Optimization \(\mathcal{U}(D,\theta,\gamma,\Lambda)\)
Our results in Theorem 1 and Theorem 2 generalize the main technical results of [32, 24]. Specifically, when \(C=c=1\) and \(\Lambda\) is a finite discrete set, our Theorem 1 precisely recovers Theorem 2 in [32]. Furthermore, when we set \(\theta=1\), the truncated negative binomial distribution reduces to the geometric distribution, and our Theorem 2 recovers Theorem 3.2 in [24].
### Practical Recipe to Privatize HPO Algorithms
In the DP-HyPO framework, we begin with a prior and adaptively update it based on the accumulated information. However, for privacy purposes, we require the density \(\pi^{(j)}\) to be bounded by some constants \(c\) and \(C\), which is due to the potential privacy leakage when updating \(\pi^{(j)}\) based on the history. It is crucial to note that this distribution \(\pi^{(j)}\) can be significantly different from the distribution \(\pi^{(j)}\) if we were given a different input dataset \(D^{\prime}\). Therefore, we require the probability mass/density function to satisfy \(\frac{c}{\mu(\Lambda)}\leq\pi^{(j)}(\lambda)\leq\frac{C}{\mu(\Lambda)}\) for all \(\lambda\in\Lambda\) to control the privacy loss due to adaptivity.
This requirement is not automatically satisfied and typically necessitates modifications to current non-private HPO methods. To address this challenge, we propose a general recipe to modify any non-private method. The idea is quite straightforward: throughout the algorithm, we maintain a non-private version of the distribution density \(\pi^{(j)}\). When sampling from the space \(\Lambda\), we perform a projection from \(\pi^{(j)}\) to the space consisting of bounded densities. Specifically, we define the space of essentially bounded density functions by \(S_{C,c}=\{f\in\Lambda^{\mathbb{R}^{+}}:\text{ess sup }f\leq\frac{C}{\mu(\Lambda)},\text{ess inf }f\geq\frac{c}{\mu(\Lambda)},\int_{\alpha\in\Lambda}f(\alpha)\mathrm{d}\alpha=1\}\). For such a space to be non-empty, we require that \(c\leq 1\leq C,\) where \(\mu\) is the measure on \(\Lambda\). This condition is well-defined as we assume \(\mu(\Lambda)<\infty\).
To privatize \(\pi^{(j)}\) at the \(j\)-th iteration, we project it into the space \(S_{C,c}\), by solving the following convex functional programming problem:
\[\min_{f} \ \ \|f-\pi^{(j)}\|_{2},\] (3.1) s.t. \[\ f\in S_{C,c}.\]
Note that this is a convex program since \(S_{C,c}\) is convex and closed. We denote the output from this optimization problem by \(\mathcal{P}_{S_{C,c}}(\pi^{(j)})\). Theoretically, problem (3.1) allows the hyperparameter space \(\Lambda\) to be general measurable space with arbitrary topological structure. However, empirically, practitioners need to discretize \(\Lambda\) to some extent to make the convex optimization computationally feasible. Compared to the previous work, our formulation provides the most general characterization of the problem and allows pratitioners to _adaptively_ and _iteratively_ choose a proper discretization as needed. Framework 1 tolerates a much finer level of discretization than the previous method, as the performance of latter degrades fast when the number of candidates increases. We also provide examples using CVX to solve this problem in Section 4.2. In Appendix C, we discuss about its practical implementation, and the connection to information projection.
## 4 Application: DP-HyPO with Gaussian Process
In this section, we provide an instantiation of DP-HyPO using Gaussian process (GP) [38]. GPs are popular non-parametric Bayesian models frequently employed for hyperparameter optimization. At the meta-level, GPs are trained to generate surrogate models by establishing a probability distribution over the performance measure \(q\). While traditional GP implementations are not private, we leverage the approach introduced in Section 3.2 to design a private version that adheres to the bounded density contraint.
We provide the algorithmic description in Section 4.1 and the empircal evaluation in Section 4.2.
### Algorithm Description
The following Algorithm (\(\mathcal{AGP}\)) is a private version of Gaussian process for hyperparameter tuning. In Algorithm 3, we utilize GP to construct a surrogate model that generates probability distributions for the performance measure \(q\). By estimating the mean and variance, we assign a "score" to each hyperparameter \(\lambda\), known as the estimated upper confidence bound (UCB). The weight factor \(\tau\) controls the balance between exploration and exploitation, where larger weights prioritize exploration by assigning higher scores to hyperparameters with greater uncertainty.
To transform these scores into a sampling distribution, we apply the softmax function across all hyperparameters, incorporating the parameter \(\beta\) as the inverse temperature. A higher value of \(\beta\) signifies increased confidence in the learned scores for each hyperparameter.
### Empirical Evaluations
We now evaluate the performance of our GP-based DP-HyPO (referred to as "GP") in various settings. Since DP-HyPO is the first adaptive private hyperparameter optimization method of its kind, we compare it to the special case of Uniform DP-HyPO (Algorithm 2), referred to as "Uniform", as proposed in [24, 32]. In this demonstration, we consider two pragmatic privacy configurations: the white-box setting and the black-box setting, contingent on whether adaptive HPO algorithms incur extra privacy cost. In the white-box scenario (Section 4.2.1 and 4.2.2), we conduct experiments involving training deep learning models on both the MNIST dataset and CIFAR-10 dataset. Conversely, when considering the black-box setting (Section 4.2.3), our attention shifts to a real-world Federated Learning (FL) task from the industry. These scenarios provide meaningful insights into the effectiveness and applicability of our GP-based DP-HyPO approach.
#### 4.2.1 MNIST Simulation
We begin with the white-box scenario, in which the data curator aims to provide overall protection to the published model. In this context, to accommodate adaptive HPO algorithms, it becomes necessary to reduce the budget allocated to the base algorithm.
In this section, we consider the MNIST dataset, where we employ DP-SGD to train a standard CNN. The base algorithms in this case are different DP-SGD models with varying hyperparameters, and we evaluate each base algorithm based on its accuracy. Our objective is to identify the best hyperparameters that produce the most optimal model within a given total privacy budget.
Specifically, we consider two variable hyperparameters: the learning rate \(\eta\) and clipping norm \(R\), while keeping the other parameters fixed. We ensure that both the GP algorithm and the Uniform algorithm operate under the same total privacy budget, guaranteeing a fair comparison.
Due to constraints on computational resources, we conduct a semi-real simulation using the MNIST dataset. For both base algorithms (with different noise multipliers), we cache the mean accuracy of \(5\) independently trained models for each discretized hyperparameter and treat that as a proxy for the "actual accuracy" of the hyperparameter. Each time we sample the accuracy of a hyperparameter, we add a Gaussian noise with a standard deviation of \(0.1\) to the cached mean. We evaluate the performance of the output model based on the "actual accuracy" corresponding to the selected hyperparameter. Further details on the simulation and parameter configuration can be found in Appendix E.1.
In the left panel of Figure 1, we demonstrated the comparison of performance of the Uniform and GP methods with total privacy budget \(\varepsilon=15\)3 and \(\delta=1e-5\). The accuracy reported is the actual accuracy of the output hyperparameter. From the figure, we see that when \(T\) is very small (\(T<8\)), GP method is slightly worse than Uniform method as GP spends \(\log(C/c)\) budget less than Uniform method for each base algorithm (the cost of adaptivity). However, we see that after a short period of exploration, GP consistently outperform Uniform, mostly due to the power of being adaptive. The superiority of GP is further demonstrated in Table 1, aggregating over geometric distribution.
Footnote 3: The \(\varepsilon\) values are seemingly very large. Nonetheless, the reported privacy budget encompasses the overall cost of the entire HPO, which is typically overlooked in the existing literature. Given that HPO roughly incurs three times the privacy cost of the base algorithm, an \(\varepsilon\) as high as \(15\) could be reported as only \(5\) in many other works.
#### 4.2.2 CIFAR-10 Simulation
When examining the results from MNIST, a legitimate critique arises: our DP-Hypo exhibits only marginal superiority over its uniform counterpart, which questions the assertion that adaptivity holds significant value. Our conjecture is that the hyperparameter landscape of MNIST is relatively uncomplicated, which limits the potential benefits of adaptive algorithms.
To test the hypothesis, we conduct experiments on the CIFAR-10 dataset, with a setup closely mirroring the previous experiment: we employ the same CNN model for training, and optimize the same set of hyperparameters, which are the learning rate \(\eta\) and clipping norm \(R\). The primary difference lies in how we generate the hyperparameter landscape. Given that a single run on CIFAR-10 is considerably more time-consuming than on MNIST, conducting multiple runs for every hyperparameter combination is unfeasible. To address this challenge, we leverage BoTorch [3],an open-sourced library for HPO, to generate the landscape. Since we operate in the white-box setting, where the base algorithms have distinct privacy budgets for the uniform and adaptive scenarios, we execute 50 runs and generate the landscape for each case, including the mean (\(\mu_{\lambda}\)) and standard error (\(\sigma_{\lambda}\)) of accuracy for each hyperparameter combination \(\lambda\). When the algorithm (GP or Uniform) visits a specific \(\lambda\), our oracle returns a noisy score \(q(\lambda)\) drawn from a normal distribution of \(N(\mu_{\lambda},\sigma_{\lambda})\). A more detailed description of our landscapes and parameter configuration can be found in Appendix E.2.
In the middle of Figure 1, we showcase a performance comparison between the Uniform and GP methods with a total privacy budget of \(\varepsilon=12\) and \(\delta=1e-5\). Clearly, GP consistently outperforms the Uniform method, with the largest performance gap occurring when the number of runs is around 10.
#### 4.2.3 Federated Learning
In this section, we move to the black-box setting, where the privacy budget allocated to the base algorithm remains fixed, while we allow extra privacy budget for HPO. That being said, the adaptivity can be achieved without compromising the utility of the base algorithm.
We explore another real-world scenario: a Federated Learning (FL) task conducted on a proprietary dataset 4 from industry. Our aim is to determine the optimal learning rates for the central server (using AdaGrad) and the individual users (using SGD). To simulate this scenario, we once again rely on the landscape generated by BoTorch [3], as shown in Figure 3 in Appendix E.3.
Footnote 4: We have to respect confidentiality constraints that limit our ability to provide extensive details about this dataset.
Under the assumption that base algorithms are black-box models with fixed privacy costs, we proceed with HPO while varying the degree of adaptivity. The experiment results are visualized in the right panel of Figure 1, and Table 2 presents the aggregated performance data.
We consistently observe that GP outperforms Uniform in the black-box setting. Furthermore, our findings suggest that allocating a larger privacy budget to the GP method facilitates the acquisition of adaptive information, resulting in improved performance in HPO. This highlights the flexibility of GP in utilizing privacy resources effectively.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Geometric(\(\gamma\)) & 0.001 & 0.002 & 0.003 & 0.005 & 0.01 & 0.02 & 0.025 & 0.03 \\ \hline \hline GP & 0.946 & 0.948 & 0.948 & 0.947 & 0.943 & 0.937 & 0.934 & 0.932 \\ \hline Uniform & 0.943 & 0.945 & 0.945 & 0.944 & 0.940 & 0.935 & 0.932 & 0.929 \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy of MNIST using Geometric Distribution with various different values of \(\gamma\) for Uniform and GP methods. Each number is the mean of \(200\) runs.
Figure 1: Left: The accuracy of the output hyperparameter in MNIST semi-real simulation, with \(\varepsilon=15\), \(\delta=0.00001\). Middle: The accuracy of the output hyperparameter in CIFAR-10, with \(\varepsilon=12\), \(\delta=0.00001\). Right: The loss of the output hyperparameter in FL. Error bars stands for \(95\%\) confidence. Curves for GP are calculated by averaging \(400\) independent runs, and curves for Uniform are calculated by averaging \(10000\) independent runs. For a clearer demonstration, we compare the performance for each fixed value of \(T\), and recognize that the actual performance is a weighted average across different values of \(T\).
## 5 Conclusion
In conclusion, this paper presents a novel framework, DP-HyPO. As the first adaptive HPO framework with sharp DP guarantees, DP-HyPO effectively bridges the gap between private and non-private HPO. Our work encompasses the random search method by [24; 32] as a special case, while also granting practitioners the ability to adaptively learn better sampling distributions based on previous runs. Importantly, DP-HyPO enables the conversion of any non-private adaptive HPO algorithm into a private one. Our framework proves to be a powerful tool for professionals seeking optimal model performance and robust DP guarantees.
The DP-HyPO framework presents two interesting future directions. One prospect involves an alternative HPO specification which is practically more favorable. Considering the extensive literature on HPO, there is a significant potential to improve the empirical performance by leveraging more advanced HPO methods. Secondly, there is an interest in establishing a theoretical utility guarantee for DP-HyPO. By leveraging similar proof methodologies to those in Theorem 3.3 in [24], it is feasible to provide basic utility guarantees for the general DP-HyPO, or for some specific configurations within DP-HyPO.
## 6 Acknowledgements
The authors would like to thank Max Balandat for his thoughtful comments and insights that helped us improve the paper.
## References
* (1) Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC conference on computer and communications security_, pages 308-318, 2016.
* (2) Martin S Andersen, Joachim Dahl, Lieven Vandenberghe, et al. Cvxopt: A python package for convex optimization. _Available at cvxopt. org_, 54, 2013.
* (3) Maximilian Balandat, Brian Karrer, Daniel Jiang, Samuel Daulton, Ben Letham, Andrew G Wilson, and Eytan Bakshy. Botorch: A framework for efficient monte-carlo bayesian optimization. _Advances in neural information processing systems_, 33:21524-21538, 2020.
* (4) Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In _2014 IEEE 55th annual symposium on foundations of computer science_, pages 464-473. IEEE, 2014.
* (5) James Bergstra, Remi Bardenet, Yoshua Bengio, and Balazs Kegl. Algorithms for hyper-parameter optimization. _Advances in neural information processing systems_, 24, 2011.
* (6) Nicholas Carlini, Chang Liu, Ulfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In _USENIX Security Symposium_, volume 267, 2019.
* (7) Kamalika Chaudhuri and Staal A Vinterbo. A stability-based validation procedure for differentially private machine learning. _Advances in Neural Information Processing Systems_, 26, 2013.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Geometric(\(\gamma\)) & 0.001 & 0.002 & 0.003 & 0.005 & 0.01 & 0.02 & 0.025 & 0.03 \\ \hline \hline GP (C = 1.25) & 0.00853 & 0.0088 & 0.00906 & 0.00958 & 0.0108 & 0.0129 & 0.0138 & 0.0146 \\ \hline GP (C = 1.33) & 0.00821 & 0.00847 & 0.00872 & 0.00921 & 0.0104 & 0.0123 & 0.0132 & 0.0140 \\ \hline GP (C = 1.5) & 0.00822 & 0.00848 & 0.00872 & 0.00920 & 0.0103 & 0.0123 & 0.0131 & 0.0130 \\ \hline Uniform & 0.0104 & 0.0106 & 0.0109 & 0.0113 & 0.0123 & 0.0141 & 0.0149 & 0.0156 \\ \hline \end{tabular}
\end{table}
Table 2: Loss of FL using Geometric Distribution with various different values of \(\gamma\) for Uniform and GP methods with different choice of \(C\) and \(c=1/C\). Each number is the mean of \(200\) runs.
* [8] Edith Cohen, Xin Lyu, Jelani Nelson, Tamas Sarlos, and Uri Stemmer. Generalized private selection and testing with high confidence. _arXiv preprint arXiv:2211.12063_, 2022.
* [9] Imre Csiszar and Frantisek Matus. Information projections revisited. _IEEE Transactions on Information Theory_, 49(6):1474-1490, 2003.
* [10] Soham De, Leonard Berrada, Jamie Hayes, Samuel L Smith, and Borja Balle. Unlocking high-accuracy differentially private image classification through scale. _arXiv preprint arXiv:2204.13650_, 2022.
* [11] Jinshuo Dong, Aaron Roth, and Weijie J Su. Gaussian differential privacy. _Journal of the Royal Statistical Society Series B: Statistical Methodology_, 84(1):3-37, 2022.
* [12] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In _Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3_, pages 265-284. Springer, 2006.
* [13] Cynthia Dwork, Moni Naor, Omer Reingold, Guy N Rothblum, and Salil Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In _Proceedings of the forty-first annual ACM symposium on Theory of computing_, pages 381-390, 2009.
* [14] Matthias Feurer and Frank Hutter. Hyperparameter optimization. _Automated machine learning: Methods, systems, challenges_, pages 3-33, 2019.
* [15] Yonatan Geifman and Ran El-Yaniv. Deep active learning with a neural architecture search. _Advances in Neural Information Processing Systems_, 32, 2019.
* [16] Xin He, Kaiyong Zhao, and Xiaowen Chu. Automl: A survey of the state-of-the-art. _Knowledge-Based Systems_, 212:106622, 2021.
* [17] Andrew Hundt, Varun Jain, and Gregory D Hager. sharpdarts: Faster and more accurate differentiable architecture search. _arXiv preprint arXiv:1903.09900_, 2019.
* [18] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In _Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. Selected Papers 5_, pages 507-523. Springer, 2011.
* [19] Peter Kairouz, Sewoong Oh, and Pramod Viswanath. The composition theorem for differential privacy. In _International conference on machine learning_, pages 1376-1385. PMLR, 2015.
* [20] Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, and Eric P Xing. Neural architecture search with bayesian optimisation and optimal transport. _Advances in neural information processing systems_, 31, 2018.
* [21] Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, and Oluwasanmi Koyejo. Information projection and approximate inference for structured sparse variables. In _Artificial Intelligence and Statistics_, pages 1358-1366. PMLR, 2017.
* [22] Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In _Uncertainty in artificial intelligence_, pages 367-377. PMLR, 2020.
* [23] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. _The Journal of Machine Learning Research_, 18(1):6765-6816, 2017.
* [24] Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In _Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing_, pages 298-309, 2019.
* [25] Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In _48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07)_, pages 94-103. IEEE, 2007.
* [26] Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Towards automatically-tuned neural networks. In _Workshop on automatic machine learning_, pages 58-65. PMLR, 2016.
* [27] Ilya Mironov. Renyi differential privacy. In _2017 IEEE 30th computer security foundations symposium (CSF)_, pages 263-275. IEEE, 2017.
* [28] Shubhankar Mohapatra, Sajin Sasy, Xi He, Gautam Kamath, and Om Thakkar. The role of adaptive optimizers for honest private hyperparameter selection. In _Proceedings of the aaai conference on artificial intelligence_, volume 36, pages 7806-7813, 2022.
* [29] Milad Nasr, Reza Shokri, and Amir Houmansadr. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In _2019 IEEE symposium on security and privacy (SP)_, pages 739-753. IEEE, 2019.
* [30] Renato Negrinho, Matthew Gormley, Geoffrey J Gordon, Darshan Patil, Nghia Le, and Daniel Ferreira. Towards modular and programmable architecture search. _Advances in neural information processing systems_, 32, 2019.
* [31] Ashwinee Panda, Xinyu Tang, Vikash Sehwag, Saeed Mahloujifar, and Prateek Mittal. Dp-raft: A differentially private recipe for accelerated fine-tuning. _arXiv preprint arXiv:2212.04486_, 2022.
* [32] Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with renyi differential privacy. In _International Conference on Learning Representations_, 2021.
* [33] Carl Edward Rasmussen. Gaussian processes in machine learning. In _Advanced Lectures on Machine Learning: ML Summer Schools 2003, Canberra, Australia, February 2-14, 2003, Tubingen, Germany, August 4-16, 2003, Revised Lectures_, pages 63-71. Springer, 2004.
* [34] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. Taking the human out of the loop: A review of bayesian optimization. _Proceedings of the IEEE_, 104(1):148-175, 2015.
* [35] Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In _2013 IEEE global conference on signal and information processing_, pages 245-248. IEEE, 2013.
* [36] Salil Vadhan. The complexity of differential privacy. _Tutorials on the Foundations of Cryptography: Dedicated to Oded Goldreich_, pages 347-450, 2017.
* [37] Hua Wang, Sheng Gao, Huanyu Zhang, Milan Shen, and Weijie J Su. Analytical composition of differential privacy via the edgeworth accountant. _arXiv preprint arXiv:2206.04236_, 2022.
* [38] Christopher KI Williams and Carl Edward Rasmussen. _Gaussian processes for machine learning_, volume 2. MIT press Cambridge, MA, 2006.
* [39] Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, and Tie-Yan Liu. Large scale private learning via low-rank reparametrization. In _International Conference on Machine Learning_, pages 12208-12218. PMLR, 2021.
* [40] Tong Yu and Hong Zhu. Hyper-parameter optimization: A review of algorithms and applications. _arXiv preprint arXiv:2003.05689_, 2020.
* [41] Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. _arXiv preprint arXiv:1807.06906_, 2018.
* [42] Huanyu Zhang, Ilya Mironov, and Meisam Hejazinia. Wide network learning with differential privacy. _arXiv preprint arXiv:2103.01294_, 2021.
Proofs of the technical results
### Proof of Main Results
First, we define Renyi divergence as follows.
**Definition A.1** (Renyi Divergences).: Let \(P\) and \(Q\) be probability distributions on a common space \(\Omega\). Assume that \(P\) is absolutely continuous with respect to \(Q\) - i.e., for all measurable \(E\subset\Omega\), if \(Q(E)=0\), then \(P(E)=0\). Let \(P(x)\) and \(Q(x)\) denote the densities of \(P\) and \(Q\) respectively. The KL divergence from \(P\) to \(Q\) is defined as
\[\mathrm{D}_{1}(P\|Q):=\mathop{\mathbb{E}}_{X\gets P}\left[ \log\left(\frac{P(X)}{Q(X)}\right)\right]=\int_{\Omega}P(x)\log\left(\frac{P(x )}{Q(x)}\right)\mathrm{d}x.\]
The max divergence from \(P\) to \(Q\) is defined as
\[\mathrm{D}_{\infty}(P\|Q):=\sup\left\{\log\left(\frac{P(E)}{Q(E) }\right):P(E)>0\right\}.\]
For \(\alpha\in(1,\infty)\), the Renyi divergence from \(P\) to \(Q\) of order \(\alpha\) is defined as
\[\mathrm{D}_{\alpha}(P\|Q) :=\frac{1}{\alpha-1}\log\left(\mathop{\mathbb{E}}_{X\gets P} \left[\left(\frac{P(X)}{Q(X)}\right)^{\alpha-1}\right]\right)\] \[=\frac{1}{\alpha-1}\log\left(\mathop{\mathbb{E}}_{X\gets Q} \left[\left(\frac{P(X)}{Q(X)}\right)^{\alpha}\right]\right)\] \[=\frac{1}{\alpha-1}\log\left(\int_{Q}P(x)^{\alpha}Q(x)^{1-\alpha} \mathrm{d}x\right).\]
We now present the definition of Renyi DP (RDP) in [27].
**Definition A.2** (Renyi Differential Privacy).: A randomized algorithm \(M:\mathcal{X}^{n}\rightarrow\mathcal{Y}\) is \((\alpha,\varepsilon)\)-Renyi differentially private if, for all neighbouring pairs of inputs \(D,D^{\prime}\in\mathcal{X}^{n},\mathrm{D}_{\alpha}\left(M(x)\|M\left(x^{ \prime}\right)\right)\leq\varepsilon\).
We define some additional notations for the sake of the proofs. In algorithm 1, for any \(1\leq j\leq T\), and neighboring dataset \(D\) and \(D^{\prime}\), we define the following notations for any \(y=(x,q)\in\mathcal{Y}\), the totally ordered range set.
\[P_{j}(y) =\mathbb{P}_{\tilde{y}\sim Q(D,\pi^{(j)})}(\tilde{y}=y)\quad \text{and}\quad P^{\prime}_{j}(y)=\mathbb{P}_{\tilde{y}\sim Q(D^{\prime},\pi^{ (j)})}(\tilde{y}=y)\] \[P_{j}(\leq y) =\mathbb{P}_{\tilde{y}\sim Q(D,\pi^{(j)})}(\tilde{y}\leq y)\quad \text{and}\quad P^{\prime}_{j}(\leq y)=\mathbb{P}_{\tilde{y}\sim Q(D^{\prime}, \pi^{(j)})}(\tilde{y}\leq y)\] \[P_{j}(<y) =\mathbb{P}_{\tilde{y}\sim Q(D,\pi^{(j)})}(\tilde{y}<y)\quad \text{and}\quad P^{\prime}_{j}(<y)=\mathbb{P}_{\tilde{y}\sim Q(D^{\prime},\pi^ {(j)})}(\tilde{y}<y).\]
By these definitions, we have \(P_{j}(\leq y)=P_{j}(<y)+P_{j}(y)\), and \(P^{\prime}_{j}(\leq y)=P^{\prime}_{j}(<y)+P^{\prime}_{j}(y)\). And additionally, we have
\[\frac{P_{j}(y)}{P^{\prime}_{j}(y)}=\frac{\int_{\lambda\in\Lambda }\mathbb{P}(M_{\lambda}(D)=y)\pi^{(j)}(\lambda)d\lambda}{\int_{\lambda\in \Lambda}\mathbb{P}(M_{\lambda}(D^{\prime})=y)\pi^{\prime(j)}(\lambda)d\lambda} \leq\sup_{\lambda\in\Lambda}\frac{\mathbb{P}(M_{\lambda}(D)=y) \pi^{(j)}(\lambda)}{\mathbb{P}(M_{\lambda}(D^{\prime})=y)\pi^{\prime(j)}( \lambda)}\] \[\leq\frac{C}{c}\cdot\sup_{\lambda\in\Lambda}\frac{\mathbb{P}(M_ {\lambda}(D)=y)}{\mathbb{P}(M_{\lambda}(D^{\prime})=y)}.\] (A.1)
Here, the first inequality follows from the simple property of integration, and the second inequality follows from the fact that \(\pi^{(j)}\) has bounded density between \(c\) and \(C\). Similarly, we have
\[\frac{P_{j}(\leq y)}{P^{\prime}_{j}(\leq y)}\leq\frac{C}{c}\cdot \sup_{\lambda\in\Lambda}\frac{\mathbb{P}(M_{\lambda}(D)\leq y)}{\mathbb{P}(M_ {\lambda}(D^{\prime})\leq y)},\] (A.2)
and
\[\frac{P_{j}(<y)}{P^{\prime}_{j}(<y)}\leq\frac{C}{c}\cdot\sup_{ \lambda\in\Lambda}\frac{\mathbb{P}(M_{\lambda}(D)<y)}{\mathbb{P}(M_{\lambda}(D ^{\prime})<y)}.\] (A.3)Note that \(D\) and \(D^{\prime}\) are neighboring datasets, and \(M_{\lambda}\) satisfies some DP guarantees. So the ratio \(\frac{\mathbb{P}(M_{\lambda}(D)\in E)}{\mathbb{P}(M_{\lambda}(D)^{\prime})\in E)}\) for any event \(E\) can be bounded.
For simplicity, we define the inner product of a distribution \(\pi\) with the vector \(\mathbf{M}(D)=(\mathbb{P}(M_{\lambda}(D)=y):\lambda\in\Lambda)\) as
\[\pi\cdot\mathbf{M}(D):=\int_{\lambda\in\Lambda}\mathbb{P}(M_{\lambda}(D)=y)\pi( \lambda)d\lambda.\] (A.4)
Now, we define additional notations to bound the probabilities. Recall \(S_{C,s}\) is given by \(\{f\in\Lambda^{\mathbb{R}^{+}}:\)\(\text{ess sup }f\leq C,\)\(\text{ess inf }f\geq c,\)\(\int_{\alpha\in\Lambda}f(\alpha)d\alpha=1.\}\). It is straightforward to see this is a compact set as it is the intersection of three compact sets. We define
\[P^{+}(y):=\sup_{\pi\in S_{C,c}}\int_{\lambda\in\Lambda}\mathbb{P}(M_{\lambda} (D)=y)\pi^{(j)}(\lambda)d\lambda=\pi^{+}\cdot\mathbf{M}(D),\] (A.5)
where \(\pi^{+}\) is the distribution that achieves the supreme in the compact set \(S_{C,c}\). Similarly, we define \(P^{\prime-}(y)\) for \(D^{\prime}\) as given by
\[P^{\prime-}(y):=\inf_{\pi\in S_{C,c}}\int_{\lambda\in\Lambda} \mathbb{P}(M_{\lambda}(D^{\prime})=y)\cdot\pi^{\prime(j)}(\lambda)d\lambda=\pi ^{\prime-}\cdot\mathbf{M}.\] (A.6)
Similarly, we can define \(P^{\prime+}(y)\) and \(P^{-}(y)\) accordingly. From the definition, we know that
\[P^{-}(y)\leq P_{j}(y)\leq P^{+}(y)\quad\text{and}\quad P^{\prime-}(y)\leq P^{ \prime}_{j}(y)\leq P^{\prime+}(y).\] (A.7)
We also have
\[\frac{P^{+}(y)}{P^{\prime-}(y)}=\frac{\pi^{*}\cdot\mathbf{M}(D)}{\pi^{ \prime-}\cdot\mathbf{M}(D^{\prime})}\leq\sup_{\lambda}\frac{\mathbb{P}(M_{\lambda }(D)=y)}{\mathbb{P}(M_{\lambda}(D^{\prime})=y)}\cdot\frac{C}{c}.\] (A.8)
It is similar to define
\[P^{+}(\leq y):=\sup_{\pi\in S_{C,c}}\int_{\lambda\in\Lambda} \mathbb{P}(M_{\lambda}(D)\leq y)\quad\text{and}\quad P^{\prime+}(\leq y):=\sup _{\pi\in S_{C,c}}\int_{\lambda\in\Lambda}\mathbb{P}(M_{\lambda}(D^{\prime}) \leq y)\] \[P^{-}(\leq y):=\inf_{\pi\in S_{C,c}}\int_{\lambda\in\Lambda} \mathbb{P}(M_{\lambda}(D)\leq y)\quad\text{and}\quad P^{\prime-}(\leq y):=\inf _{\pi\in S_{C,c}}\int_{\lambda\in\Lambda}\mathbb{P}(M_{\lambda}(D^{\prime}) \leq y)\] \[P^{+}(<y):=\sup_{\pi\in S_{C,c}}\int_{\lambda\in\Lambda} \mathbb{P}(M_{\lambda}(D)<y)\quad\text{and}\quad P^{\prime+}(<y):=\sup_{\pi\in S _{C,c}}\int_{\lambda\in\Lambda}\mathbb{P}(M_{\lambda}(D^{\prime})<y)\] \[P^{-}(<y):=\inf_{\pi\in S_{C,c}}\int_{\lambda\in\Lambda}\mathbb{ P}(M_{\lambda}(D)<y)\quad\text{and}\quad P^{\prime-}(<y):=\inf_{\pi\in S_{C,c}} \int_{\lambda\in\Lambda}\mathbb{P}(M_{\lambda}(D^{\prime})<y).\]
Following the exact same proof, we have
\[P^{-}(\leq y)\leq P_{j}(\leq y)\leq P^{+}(\leq y)\quad\text{and} \quad P^{\prime-}(\leq y)\leq P^{\prime}_{j}(\leq y)\leq P^{\prime+}(\leq y)\] (A.9) \[P^{-}(<y)\leq P_{j}(<y)\leq P^{+}(<y)\quad\text{and}\quad P^{ \prime-}(<y)\leq P^{\prime}_{j}(<y)\leq P^{\prime+}(<y)\] (A.10) \[\frac{P^{+}(\leq y)}{P^{\prime-}(\leq y)}\leq\sup_{\lambda}\frac{ \mathbb{P}(M_{\lambda}(D)\leq y)}{\mathbb{P}(M_{\lambda}(D^{\prime})\leq y)} \cdot\frac{C}{c}\quad\text{and}\quad\frac{P^{+}(<y)}{P^{\prime-}(<y)}\leq\sup _{\lambda}\frac{\mathbb{P}(M_{\lambda}(D)<y)}{\mathbb{P}(M_{\lambda}(D^{\prime })<y)}\cdot\frac{C}{c}.\] (A.11)
It is also straightforward to verify from the definition that
\[P^{+}(\leq y)=P^{+}(<y)+P^{+}(y)\quad\text{and}\quad P^{\prime+}( \leq y)=P^{\prime+}(<y)+P^{\prime+}(y)\] (A.12) \[P^{+}-(\leq y)=P^{-}(<y)+P^{-}(y)\quad\text{and}\quad P^{\prime-}(\leq y)=P^{ \prime-}(<y)+P^{\prime-}(y).\] (A.13)
**Lemma A.3**.: _Suppose if \(a_{\lambda},b_{\lambda}\) are non-negative and \(c_{\lambda},c^{\prime}_{\lambda}\) are positive for all \(\lambda\). Then we have_
\[\frac{\sum_{\lambda}a_{\lambda}c_{\lambda}}{\sum_{\lambda}b_{\lambda}c^{\prime }_{\lambda}}\leq\frac{\sum_{\lambda}a_{\lambda}}{\sum_{\lambda}b_{\lambda}} \cdot\sup_{\lambda,\lambda^{\prime}}\left|\frac{c_{\lambda}}{c^{\prime}_{ \lambda}}\right|.\]
Proof of Lemma a.3.: This lemma is pretty straight forward by comparing the coefficient for each term in the full expansion. Specifically, we re-write the inequality as
\[\sum_{\lambda}a_{\lambda}c_{\lambda}\sum_{\lambda^{\prime}}b^{\prime}_{\lambda }\leq\sum_{\lambda}a_{\lambda}\sum_{\lambda^{\prime}}b^{\prime}_{\lambda}c^{ \prime}_{\lambda}\cdot\sup_{\lambda,\lambda^{\prime}}\left|\frac{c_{\lambda}}{ c^{\prime}_{\lambda}}\right|.\] (A.14)For each term \(a_{\lambda}b^{\prime}_{\lambda}\), its coefficient on the left hand side of (A.14) is \(c_{\lambda}\), but its coefficient on the right hand side of (A.14) is \(c^{\prime}_{\lambda}\cdot\sup_{\lambda,\lambda^{\prime}}\left|\frac{c_{\lambda}}{ c^{\prime}_{\lambda}}\right|\). Since we always have \(c^{\prime}_{\lambda}\cdot\sup_{\lambda,\lambda^{\prime}}\left|\frac{c_{\lambda}} {c^{\prime}_{\lambda}}\right|\geq c_{\lambda}\), and \(a_{\lambda}b^{\prime}_{\lambda}\geq 0\), we know the inequality (A.14) holds.
Next, in order to present our results in terms of RDP guarantees, we prove the following lemma.
**Lemma A.4**.: _The Renyi divergence between \(P^{+}\) and \(P^{-}\) is be bounded as follows:_
\[\mathrm{D}_{\alpha}(P^{+}\|P^{\prime-})\leq\frac{\alpha}{\alpha-1}\log\frac{C} {c}+\sup_{\lambda\in\Lambda}\mathrm{D}_{\alpha}\left(M_{\lambda}(D)\|M_{ \lambda}(D^{\prime})\right)\]
Proof of Lemma a.4.: We write that
\[e^{(\alpha-1)\mathrm{D}_{\alpha}(P^{+}\|P^{\prime-})}=\sum_{y\in\mathcal{Y}}P^ {+}(y)^{\alpha}\cdot P^{\prime-}(y)^{1-\alpha}=\sum_{y\in\mathcal{Y}}\frac{ \left(\sum_{\lambda}\pi^{+}(\lambda)\mathbb{P}(M_{\lambda}(D)=y)\right)^{ \alpha}}{\left(\sum_{\lambda}\pi^{\prime-}(\lambda)\mathbb{P}(M_{\lambda}(D^{ \prime})=y)\right)^{\alpha-1}}\] (A.15)
Here, \(\pi^{+}\) and \(\pi^{\prime-}\) are defined in (A.5) and (A.6), so they are essentially \(\pi^{+}_{y}\) and \(\pi^{\prime-}_{y}\) as they depend on the value of \(y\). Therefore, we need to "remove" this dependence on \(y\) to leverage the RDP guarantees for each base algorithm \(M_{\lambda}\). We accomplish this task by bridging via \(\pi\), the uniform density on \(\Lambda\) (that is \(\pi(\lambda)=\pi(\lambda^{\prime})\) for any \(\lambda,\lambda^{\prime}\in\Lambda\)). Specifically, we define \(a_{\lambda}=\pi(\lambda)\mathbb{P}(M_{\lambda}(D)=y)\), \(b_{\lambda}=\pi(\lambda)\mathbb{P}(M_{\lambda}(D^{\prime})=y)\), \(c_{\lambda}=\frac{\pi^{+}_{y}(\lambda)}{\pi(\lambda)}\), and \(c^{\prime}_{\lambda}=\frac{\pi^{\prime-}_{y}(\lambda)}{\pi(\lambda)}\). We see that
\[\sup_{\lambda,\lambda^{\prime}}\left|\frac{c_{\lambda}}{c^{\prime}_{\lambda}} \right|=\sup_{\lambda,\lambda^{\prime}}\left|\frac{\pi^{+}_{y}(\lambda)/\pi( \lambda)}{\pi^{\prime-}_{y}(\lambda^{\prime})/\pi(\lambda^{\prime})}\right|= \sup_{\lambda,\lambda^{\prime}}\left|\frac{\pi^{+}_{y}(\lambda))}{\pi^{\prime -}_{y}(\lambda^{\prime})}\right|\leq C/c,\] (A.16)
since \(\pi\) is the uniform, and \(\pi^{+}_{y}\) and \(\pi^{\prime-}_{y}\) belongs to \(S_{C,c}\). We now apply Lemma A.3 with the above notations for each \(y\) to (A.15), and we have
\[\sum_{y\in\mathcal{Y}}\frac{\left(\sum_{\lambda}\pi^{+}(\lambda) \mathbb{P}(M_{\lambda}(D)=y)\right)^{\alpha}}{\left(\sum_{\lambda}\pi^{\prime -}(\lambda)\mathbb{P}(M_{\lambda}(D^{\prime})=y)\right)^{\alpha-1}}\] \[= \sum_{y\in\mathcal{Y}}\frac{\left(\sum_{\lambda}\pi(\lambda) \mathbb{P}(M_{\lambda}(D)=y)\cdot\frac{\pi^{+}(\lambda)}{\pi(\lambda)}\right) ^{\alpha-1}\left(\sum_{\lambda}\pi(\lambda)\mathbb{P}(M_{\lambda}(D)=y)\cdot \frac{\pi^{+}(\lambda)}{\pi(\lambda)}\right)}{\left(\sum_{\lambda}\pi(\lambda )\mathbb{P}(M_{\lambda}(D^{\prime})=y)\cdot\frac{\pi^{\prime-}(\lambda)}{ \pi(\lambda)}\right)^{\alpha-1}}\] \[= \sum_{y\in\mathcal{Y}}\frac{\left(\sum_{\lambda}a_{\lambda}\cdot c _{\lambda}\right)^{\alpha-1}\left(\sum_{\lambda}\pi(\lambda)\mathbb{P}(M_{ \lambda}(D)=y)\cdot\frac{\pi^{+}(\lambda)}{\pi(\lambda)}\right)}{\left(\sum_{ \lambda}b_{\lambda}\cdot c^{\prime}_{\lambda}\right)^{\alpha-1}}\] \[\leq \sum_{y\in\mathcal{Y}}\sup_{\lambda,\lambda^{\prime}}\left|\frac {c_{\lambda}}{c^{\prime}_{\lambda}}\right|^{\alpha-1}\frac{\left(\sum_{\lambda}a _{\lambda}\right)^{\alpha-1}\left(\sum_{\lambda}\pi(\lambda)\mathbb{P}(M_{ \lambda}(D)=y)\cdot\frac{\pi^{+}(\lambda)}{\pi(\lambda)}\right)}{\left(\sum_{ \lambda}b_{\lambda}\right)^{\alpha-1}}\] \[= \sum_{y\in\mathcal{Y}}\sup_{\lambda,\lambda^{\prime}}\left|\frac {c_{\lambda}}{c^{\prime}_{\lambda}}\right|^{\alpha-1}\frac{\left(\sum_{\lambda }a_{\lambda}\right)^{\alpha-1}\left(\sum_{\lambda}a_{\lambda}\cdot c_{\lambda }\right)}{\left(\sum_{\lambda}b_{\lambda}\right)^{\alpha-1}}\] \[\leq \sum_{y\in\mathcal{Y}}\sup_{\lambda,\lambda^{\prime}}\left|\frac {c_{\lambda}}{c^{\prime}_{\lambda}}\right|^{\alpha-1}\frac{\left(\sum_{\lambda }a_{\lambda}\right)^{\alpha-1}\left(\sum_{\lambda}a_{\lambda}\right)\cdot\sup_ {\lambda}c_{\lambda}}{\left(\sum_{\lambda}b_{\lambda}\right)^{\alpha-1}}\] \[\leq \sum_{y\in\mathcal{Y}}\left(\frac{C}{c}\right)^{\alpha-1}\frac{ \left(\sum_{\lambda}a_{\lambda}\right)^{\alpha-1}\left(\sum_{\lambda}a_{\lambda }\right)\cdot\left(\frac{C}{c}\right)}{\left(\sum_{\lambda}b_{\lambda}\right)^{ \alpha-1}}\] \[= \sum_{y\in\mathcal{Y}}\left(\frac{C}{c}\right)^{\alpha}\cdot \frac{\left(\sum_{\lambda}\pi(\lambda)\mathbb{P}(M_{\lambda}(D)=y)\right)^{ \alpha}}{\left(\sum_{\lambda}\pi(\lambda)\mathbb{P}(M_{\lambda}(D^{\prime})=y )\right)^{\alpha-1}}\]The first inequality is due to Lemma A.3, the second inequality is because \(a_{\lambda}\) are non-negative, and the last inequality is because of (A.16) and the fact that both \(\pi^{+}(\lambda)\) and \(\pi(\lambda)\) are defined in \(\mathbf{S}_{C,c}\), and thus their ratio is upper bounded by \(\frac{C}{c}\) for any \(\lambda\).
Now we only need to prove that for any fixed distribution \(\pi\) that doesn't depend on value \(y\), we have
\[\sum_{y\in\mathcal{Y}}\frac{\left(\sum_{\lambda}\pi(\lambda)\mathbb{P}(M_{ \lambda}(D)=y)\right)^{\alpha}}{\left(\sum_{\lambda}\pi(\lambda)\mathbb{P}(M_{ \lambda}(D^{\prime})=y)\right)^{\alpha-1}}\leq\sup_{\lambda\in\Lambda}e^{( \alpha-1)\mathrm{D}_{\alpha}\left(M_{\lambda}(D)\|M_{\lambda}(D^{\prime})\right)}.\] (A.17)
With this result, we immediately know the result holds for uniform distribution \(\pi\) as a special case. To prove this result, we first observe that the function \(f(u,v)=u^{\alpha}v^{1-\alpha}\) is a convex function. This is because the Hessian of \(f\) is
\[\begin{pmatrix}\alpha(\alpha-1)u^{\alpha-2}v^{1-\alpha}&-\alpha(\alpha-1)u^{ \alpha-1}v^{-\alpha}\\ -\alpha(\alpha-1)u^{\alpha-1}v^{-\alpha}&\alpha(\alpha-1)u^{\alpha}v^{-\alpha -1}\end{pmatrix},\]
which is easy to see to be positive semi-definite. And now, consider any distribution \(\pi\), denote \(u(\lambda)=\mathbb{P}(M_{\lambda}(D)=y)\) and \(v(\lambda)=\mathbb{P}(M_{\lambda}(D^{\prime})=y)\) by Jensen's inequality, we have
\[f(\sum_{\lambda}\pi(\lambda)u(\lambda),\sum_{\lambda}\pi(\lambda)v(\lambda)) \leq\sum_{\lambda}\pi(\lambda)f(u(\lambda),v(\lambda)).\]
By adding the summation over \(y\) on both side of the above inequality, we have
\[\sum_{y\in\mathcal{Y}}\frac{\left(\sum_{\lambda}\pi(\lambda) \mathbb{P}(M_{\lambda}(D)=y)\right)^{\alpha}}{\left(\sum_{\lambda}\pi(\lambda) \mathbb{P}(M_{\lambda}(D^{\prime})=y)\right)^{\alpha-1}} \leq\sum_{y\in\mathcal{Y}}\sum_{\lambda}\pi(\lambda)\frac{\mathbb{ P}(M_{\lambda}(D)=y)^{\alpha}}{\mathbb{P}(M_{\lambda}(D^{\prime})=y)^{\alpha-1}}\] \[=\sum_{\lambda}\sum_{y\in\mathcal{Y}}\pi(\lambda)\frac{\mathbb{P} (M_{\lambda}(D)=y)^{\alpha}}{\mathbb{P}(M_{\lambda}(D^{\prime})=y)^{\alpha-1}}\] \[\leq\sup_{\lambda}\sum_{y\in\mathcal{Y}}\frac{\mathbb{P}(M_{ \lambda}(D)=y)^{\alpha}}{\mathbb{P}(M_{\lambda}(D^{\prime})=y)^{\alpha-1}}.\]
The first equality is due to Fubini's theorem. And the second inequality is straight forward as one observe \(\pi(\lambda)\) only depends on \(\lambda\). This concludes the proof as we know that
\[e^{(\alpha-1)\mathrm{D}_{\alpha}(P^{+}\|P^{\prime-})} \leq\left(\frac{C}{c}\right)^{\alpha}\sup_{\lambda}\sum_{y\in \mathcal{Y}}\frac{\mathbb{P}(M_{\lambda}(D)=y)^{\alpha}}{\mathbb{P}(M_{ \lambda}(D^{\prime})=y)^{\alpha-1}}\] \[=\left(\frac{C}{c}\right)^{\alpha}\sup_{\lambda}e^{(\alpha-1) \mathrm{D}_{\alpha}(M_{\lambda}(D)\|M_{\lambda}(D^{\prime})}\]
or equivalently,
\[\mathrm{D}_{\alpha}(P^{+}\|P^{\prime-})\leq\frac{\alpha}{\alpha-1}\log\frac{C }{c}+\sup_{\lambda\in\Lambda}\mathrm{D}_{\alpha}\left(M_{\lambda}(D)\|M_{ \lambda}(D^{\prime})\right).\]
We now present our crucial technical lemma for adaptive hyperparameter tuning with any distribution on the number of repetitions \(T\). This is a generalization from [32].
**Lemma A.5**.: _Fix \(\alpha>1\). Let \(T\) be a random variable supported on \(\mathbb{N}_{\geq 0}\). Let \(f:[0,1]\rightarrow\mathbb{R}\) be the probability generating function of \(K\), that is, \(f(x)=\sum_{k=0}^{\infty}\mathbb{P}[T=k]x^{k}\)._
_Let \(M_{\lambda}\) and \(M_{\lambda}^{\prime}\) be the base algorithm for \(\lambda\in\Lambda\) on \(\mathcal{Y}\) on \(D\) and \(D^{\prime}\) respectively. Define \(A_{1}:=\mathcal{A}(D,\pi^{(0)},\mathcal{T},C,c)\), and \(A_{2}:=\mathcal{A}(D^{\prime},\pi^{(0)},\mathcal{T},C,c)\). Then_
\[\mathrm{D}_{\alpha}\left(A_{1}\|A_{2}\right)\leq\sup_{\lambda}\mathrm{D}_{ \alpha}\left(M_{\lambda}\|M_{\lambda}^{\prime}\right)+\frac{\alpha}{\alpha-1 }\log\frac{C}{c}+\frac{1}{\alpha-1}\log\left(f^{\prime}(q)^{\alpha}\cdot f^{ \prime}\left(q^{\prime}\right)^{1-\alpha}\right),\]
_where applying the same postprocessing to the bounding probabilities \(P^{+}\) and \(P^{\prime-}\) gives probabilities \(q\) and \(q^{\prime}\) respectively. This means that, there exist a function set \(g:\mathcal{Y}\rightarrow[0,1]\) such that \(q=\underset{X\gets D^{+}}{\mathbb{E}}[g(X)]\) and \(q^{\prime}=\underset{X\gets D^{\prime-}}{\mathbb{E}}\left[g(X^{\prime})\right]\)._Proof of Lemma a.5.: We consider the event that \(A_{1}\) outputs \(y\). By definition, we have
\[A_{1}(y) =\sum_{k=1}^{\infty}\mathbb{P}(T=k)[\prod_{j=1}^{k}P_{j}(\leq y)- \prod_{i=1}^{k}P_{j}(<y)]\] \[=\sum_{k=1}^{\infty}\mathbb{P}(T=k)[\sum_{i=1}^{k}P_{i}(y)\prod_{j =1}^{i-1}P_{j}(<y)\cdot\prod_{j=i+1}^{k}P_{j}(\leq y)]\] \[\leq\sum_{k=1}^{\infty}\mathbb{P}(T=k)[\sum_{i=1}^{k}P^{+}(y)\prod _{j=1}^{i-1}P^{+}(<y)\cdot\prod_{j=i+1}^{k}P^{+}(\leq y)]\] \[=\sum_{k=1}^{\infty}\mathbb{P}(T=k)[\sum_{i=1}^{k}P^{+}(y)\cdot P^ {+}(<y)^{i-1}\cdot P^{+}(\leq y)^{k-i}]\] \[=\sum_{k=1}^{\infty}\mathbb{P}(T=k)[P^{+}(\leq y)^{k}-P^{+}(<y)^{ k}]\] \[=f(P^{+}(\leq y))-f(P^{+}(<y))=P^{+}(y)\cdot\operatorname*{ \mathbb{E}}_{X\leftarrow\operatorname{Uniform}([P^{+}(<y),P^{+}(\leq y)])}[f^ {\prime}(X)].\]
The second equality is by partitioning on the events of the first time of getting \(y\), we use \(i\) to index such a time. The third inequality is using (A.7), (A.9), and (A.10). The third to last equality is by (A.12) and algebra. The second to last equality is by definition of the probability generating function \(f\). The last equality follows from definition of integral.
Similarly, we have
\[A_{2}(y)\geq\sum_{k=1}^{\infty}\mathbb{P}(T=k)[P^{\prime-}(\leq y)^{k}-P^{\prime -}(<y)^{k}]=P^{\prime-}(y)\cdot\operatorname*{\mathbb{E}}_{X\leftarrow \operatorname{Uniform}([P^{\prime-}(<y),P^{\prime-}(\leq y)])}[f^{\prime}(X)].\]
The rest part of the proof is standard and follows similarly as in [32]. Specifically, we have
\[e^{(\alpha-1)\operatorname{D}_{\alpha}(A_{1}\|A_{2})}\] \[=\sum_{y\in\mathcal{Y}}A_{1}(y)^{\alpha}\cdot A_{2}(y)^{1-\alpha}\] \[\leq\sum_{y\in\mathcal{Y}}P^{+}(y)^{\alpha}\cdot P^{\prime-}(y)^{ 1-\alpha}\cdot\operatorname*{\mathbb{E}}_{X\leftarrow[P^{+}(<y),P^{+}(\leq y)] }[f^{\prime}(X)]^{\alpha}\cdot\operatorname*{\mathbb{E}}_{X^{\prime} \leftarrow[P^{\prime-}(<y),P^{\prime-}(\leq y)]}[f^{\prime}(X^{\prime})]^{1-\alpha}\] \[\leq\sum_{y\in\mathcal{Y}}P^{+}(y)^{\alpha}\cdot P^{\prime-}(y)^{ 1-\alpha}\cdot\operatorname*{\mathbb{E}}_{X\leftarrow[P^{+}(<y),P^{+}(\leq y)] \atop X^{\prime}\leftarrow[P^{\prime-}(<y),P^{\prime-}(\leq y)]}\left[f^{ \prime}(X)^{\alpha}\cdot f^{\prime}\left(X^{\prime}\right)^{1-\alpha}\right]\] \[\leq\left(\frac{C}{c}\right)^{\alpha}\sup_{\lambda}e^{(\alpha-1) \operatorname{D}_{\alpha}\left(M_{\lambda}(D)\|M_{\lambda}(D^{\prime})\right) }\cdot\max_{y\in\mathcal{Y}}\operatorname*{\mathbb{E}}_{X\leftarrow[P^{+}(<y),P^{+}(\leq y)]\atop X^{\prime}\leftarrow[P^{\prime-}(<y),P^{\prime-}(\leq y )]}\left[f^{\prime}(X)^{\alpha}\cdot f^{\prime}\left(X^{\prime}\right)^{1- \alpha}\right].\]
The last inequality follows from Lemma A.4. The second inequality follows from the fact that, for any \(\alpha\in\mathbb{R}\), the function \(h:(0,\infty)^{2}\rightarrow(0,\infty)\) given by \(h(u,v)=u^{\alpha}\cdot v^{1-\alpha}\) is convex. Therefore, \(\mathbb{E}[U]^{\alpha}\mathbb{E}[V]^{1-\alpha}=h(\mathbb{E}[(U,V)])\leq \mathbb{E}[h(U,V)]=\mathbb{E}\left[U^{\alpha}\cdot V^{1-\alpha}\right]\) all positive random variables \((U,V)\). Note that \(X\) and \(X^{\prime}\) are required to be uniform separately, but their joint distribution can be arbitrary. As in [32], we will couple them so that \(\frac{X-P^{\prime}(<y)}{P^{+}(y)}=\frac{X^{\prime}-P^{\prime-}(<y)}{P^{\prime -}(y)}\). In particular, this implies that, for each \(y\in\mathcal{Y}\), there exists some \(t\in[0,1]\) such that
\[\operatorname*{\mathbb{E}}_{X\leftarrow[P^{+}(<y),P^{+}(\leq y)] }\left[f^{\prime}(X)^{\alpha}\cdot f^{\prime}\left(X^{\prime}\right)^{1- \alpha}\right]\leq f^{\prime}(P^{+}(<y)+t\cdot P^{+}(y))^{\alpha}\cdot f^{ \prime}\left(P^{\prime-}(<y)+t\cdot P^{\prime-}(y)\right)^{1-\alpha}\] \[X^{\prime}\leftarrow[P^{\prime-}(<y),P^{\prime-}(\leq y)]\]Therefore, we have
\[\mathrm{D}_{\alpha}\left(A_{1}\|A_{2}\right)\leq \sup_{\lambda}\mathrm{D}_{\alpha}\left(M_{\lambda}\|M_{\lambda}^{ \prime}\right)+\frac{\alpha}{\alpha-1}\log\frac{C}{c}\] \[+\frac{1}{\alpha-1}\log\left(\max_{\begin{subarray}{c}y\in \mathcal{Y}\\ t\in[0,1]\end{subarray}}f^{\prime}(P^{+}(<y)+t\cdot P^{+}(y))^{\alpha}\cdot f ^{\prime}\left(P^{\prime-}(<y)+t\cdot P^{\prime-}(y)\right)^{1-\alpha}\right).\]
To prove the result, we simply fix \(y_{*}\in\mathcal{Y}\) and \(t_{*}\in[0,1]\) achieving the maximum above and define
\[g(y):=\left\{\begin{array}{ll}1&\text{if }y<y_{*}\\ t_{*}&\text{if }y=y_{*}\\ 0&\text{if }y>y_{*}\end{array}\right.\]
The result directly follows by setting \(q=\underset{X\gets P^{+}}{\mathbb{E}}[g(X)]\) and \(q^{\prime}=\underset{X^{\prime}\gets P^{\prime-}}{\mathbb{E}}\left[g\left( X^{\prime}\right)\right]\).
Now we can prove Theorem 1, given the previous technical lemma. The proof share similarity to the proof of Theorem 2 in [32] with the key difference from the different form in Lemma A.5. We demonstrate this proof as follows for completeness.
Proof of Theorem 1.: We first specify the probability generating function of the truncated negative binomial distribution
\[f(x)=\underset{T\sim\mathbf{NegBin}(\theta,\gamma)}{\mathbb{E}}\left[x^{T} \right]=\begin{cases}\frac{(1-(1-\gamma)x)^{-\theta}-1}{\gamma-\theta-1}&\text {if }\theta\neq 0\\ \frac{\log(1-1-\gamma)x}{\log(\gamma)}&\text{if }\theta=0\end{cases}\]
Therefore,
\[f^{\prime}(x) =(1-(1-\gamma)x)^{-\theta-1}\cdot\begin{cases}\frac{\theta\cdot( 1-\gamma)}{\gamma-\theta-1}&\text{if }\theta\neq 0\\ \frac{1-\gamma}{\log(1/\gamma)}&\text{if }\theta=0\end{cases}\] \[=(1-(1-\gamma)x)^{-\theta-1}\cdot\gamma^{\theta+1}\cdot\mathbb{E }[T].\]
By Lemma A.5, for appropriate values \(q,q^{\prime}\in[0,1]\) and for all \(\alpha>1\) and all \(\hat{\alpha}>1,\) we have \[\begin{split}&\mathrm{D}_{\alpha}\left(A_{1}\|A_{2}\right)\\ &\leq\sup_{\lambda}\mathrm{D}_{\alpha}\left(M_{\lambda}\|M_{ \lambda}^{\prime}\right)+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{1}{ \alpha-1}\log\left(f^{\prime}(q)^{\alpha}\cdot f^{\prime}\left(q^{\prime} \right)^{1-\alpha}\right)\\ &\leq\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{1}{ \alpha-1}\log\left(\gamma^{\theta+1}\cdot\mathbb{E}[T]\cdot(1-(1-\gamma)q)^{- \alpha(\theta+1)}\cdot(1-(1-\gamma)q^{\prime})^{-(1-\alpha)(\theta+1)}\right)\\ &=\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}\\ &\quad+\frac{1}{\alpha-1}\log\left(\gamma^{\theta+1}\cdot\mathbb{ E}[T]\cdot\left(\left(\gamma+(1-\gamma)(1-q)\right)^{1-\hat{\alpha}}\cdot\left( \gamma+(1-\gamma)\left(1-q^{\prime}\right)\right)^{\hat{\alpha}}\right)^{\nu} \cdot\left(\gamma+(1-\gamma)(1-q)\right)^{u}\right)\\ &\quad\text{(Here, we let }\hat{\alpha}\nu=(\alpha-1)(1+\theta)\text{ and }(1-\hat{\alpha})\nu+u=-\alpha(\theta+1))\\ &\leq\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{1}{ \alpha-1}\log\left(\gamma^{\theta+1}\cdot\mathbb{E}[T]\cdot\left(\gamma+(1- \gamma)\cdot e^{(\hat{\alpha}-1)\mathrm{D}_{\hat{\alpha}}\left(P^{+}\|P^{-} \right)}\right)^{\nu}\cdot\left(\gamma+(1-\gamma)(1-q)\right)^{u}\right)\\ &\quad\text{(Here,}1-q\text{ and }1-q^{\prime}\text{ are postprocessings of some }P^{+}\text{ and }P^{\prime-}\text{ respectively and }e^{(\hat{\alpha}-1)\mathrm{D}_{\hat{\alpha}}\left(\cdot\|\cdot\right)}\text{ is convex })\\ &\leq\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{1}{ \alpha-1}\log\left(\gamma^{\theta+1}\cdot\mathbb{E}[T]\cdot\left(\gamma+(1- \gamma)\cdot e^{(\hat{\alpha}-1)\sup_{\lambda}\mathrm{D}_{\hat{\alpha}}\left(M_{ \lambda}\|M_{\lambda}^{\prime}\right)+\hat{\alpha}\log\frac{C}{c}}\right)^{ \nu}\cdot\left(\gamma+(1-\gamma)(1-q)\right)^{u}\right)\\ &\quad\text{(By Lemma A.4)}\\ &\leq\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{1}{ \alpha-1}\log\left(\gamma^{\theta+1}\cdot\mathbb{E}[T]\cdot\left(\gamma+(1- \gamma)\cdot e^{(\hat{\alpha}-1)\sup_{\lambda}\mathrm{D}_{\hat{\alpha}}\left(M_{ \lambda}\|M_{\lambda}^{\prime}\right)+\hat{\alpha}\log\frac{C}{c}}\right)^{ \nu}\cdot\left(\gamma+(1-\gamma)(1-q)\right)^{u}\right)\\ &\leq\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{1}{ \alpha-1}\log\left(\gamma^{\theta+1}\cdot\mathbb{E}[T]\cdot\left(\gamma+(1- \gamma)\cdot e^{(\hat{\alpha}-1)\sup_{\lambda}\mathrm{D}_{\hat{\alpha}}\left(M_{ \lambda}\|M_{\lambda}^{\prime}\right)+\hat{\alpha}\log\frac{C}{c}}\right)^{ \nu}\cdot\gamma^{u}\right)\\ &\quad\text{(Here }\gamma\leq\gamma+(1-\gamma)(1-q)\text{ and }u\leq 0)\\ &=\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{\nu}{ \alpha-1}\log\left(\gamma+(1-\gamma)\cdot e^{(\hat{\alpha}-1)\sup_{\lambda} \mathrm{D}_{\hat{\alpha}}\left(M_{\lambda}\|M_{\lambda}^{\prime}\right)+\hat{ \alpha}\log\frac{C}{c}}\right)+\frac{1}{\alpha-1}\log\left(\gamma^{\theta+1} \cdot\mathbb{E}[T]\cdot\gamma^{u}\right)\\ &=\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+\frac{\nu}{ \alpha-1}\left((\hat{\alpha}-1)\sup_{\lambda}\mathrm{D}_{\hat{\alpha}}\left(M_{ \lambda}\|M_{\lambda}^{\prime}\right)+\hat{\alpha}\log\frac{C}{c}\right)\\ &\quad\text{(Here we have }\nu=\frac{(\alpha-1)(1+\theta)}{\hat{\alpha}}\text{ and }u=-(1+\theta)\left(\frac{\alpha-1}{\hat{\alpha}}+1\right)\right)\\ &=\varepsilon+\frac{\alpha}{\alpha-1}\log\frac{C}{c}+(1+\theta) \left(1-\frac{1}{\hat{\alpha}}\right)\sup_{\lambda}\mathrm{D}_{\hat{\alpha}} \left(M_{\lambda}\|M_{\lambda}^{\prime}\right)+(1+\theta)\log\frac{C}{c}\\ &\quad\text{(}\text{Here* If \(\theta\neq 0\) and \(T\) is drawn from \(\text{NegBin}(\theta,\gamma)\), then \[\forall k\in\mathbb{N}\quad\mathbb{P}[T=k]=\frac{(1-\gamma)^{k}}{\gamma^{-\theta }-1}\cdot\prod_{\ell=0}^{k-1}\left(\frac{\ell+\theta}{\ell+1}\right)\] and \(\mathbb{E}[T]=\frac{\theta\cdot(1-\gamma)}{\gamma\cdot(1-\gamma^{\theta})}\). Note that when \(\theta=1\), it reduces to the geometric distribution with parameter \(\gamma\).
* If \(\theta=0\) and \(T\) is drawn from \(\text{NegBin}(0,\gamma)\), then \[\mathbb{P}[T=k]=\frac{(1-\gamma)^{k}}{k\cdot\log(1/\gamma)}\] and \(\mathbb{E}[T]=\frac{1/\gamma-1}{\log(1/\gamma)}\).
## Appendix C Privatization of Sampling Distribution
### General Functional Projection Framework
In section 3.2, we define the projection onto a convex set \(S_{C,c}\) as an optimization in terms of \(\ell_{2}\) loss. More generally, we can perform the following general projection at the \(j\)-th iteration by considering an additional penalty term, with a constant \(\nu\):
\[\min_{f} \|f-\pi^{(j)}\|_{2}+\nu KL(\pi^{(j)},f)\] (C.1) s.t. \[f\in S_{C,c}.\]
When \(\nu=0\), we recover the original \(\ell_{2}\) projection. Moreover, it's worth noting that our formulation has implications for the information projection literature [9, 21]. Specifically, as the penalty term parameter \(\nu\) approaches infinity, the optimization problem evolves into a minimization of KL divergence, recovering the objective function of information projection (in this instance, moment projection). However, the constraint sets in the literature of information projection are generally much simpler than our set \(S_{C,c}\), making it infeasible to directly borrow methods from its field. To the best of our knowledge, our framework is the first to address this specific problem in functional projection and establish a connection to information projection in the DP community.
### Practical Implementation of Functional Projection
Optimization program (3.1) is essentially a functional programming since \(f\) is a function on \(\Lambda\). However, when \(\Lambda\) represents a non-discrete parameter space, such functional minimization is typically difficult to solve analytically. Even within the literature of information projection, none of the methods considers our constraint set \(S_{C,c}\), which can be viewed as the intersections of uncountable single-point constraints on \(f\). To obtain a feasible solution to the optimization problem, we leverage the idea of discretization. Instead of viewing (3.1) as a functional projection problem, we manually discretize \(\Lambda\) and solve (3.1) as a minimization problem over a discrete set. Note that such approximation is unavoidable in numerical computations since computers can only manage discrete functions, even when we solve the functional projection analytically. Moreover, we also have the freedom of choosing the discretization grid without incurring extra privacy loss since the privacy cost is independent of the size of parameter space. By converting \(S_{C,c}\) into a set of finite constraints, we are able to solve the discrete optimization problem efficiently using CVXOPT [2].
## Appendix D DP-HyPO with General Prior Distribution
In the main manuscript, we assume \(\pi^{(0)}\) follows a uniform distribution over the parameter space \(\Lambda\) for simplicity. In practice, informed priors can be used when we want to integrate knowledge about the parameter space into sampling distribution, which is common in the Bayesian optimization framework. We now present the general DP-HyPO framework under the informed prior distribution.
To begin with, we define the space of essentially bounded density functions with respect to \(\pi^{(0)}\) as
\[S_{C,c}(\pi^{(0)})=\{f\in\Lambda^{\mathbb{R}^{+}}:\text{ess sup }f/\pi^{(0)}\leq C,\text{ess inf }f/\pi^{(0)}\geq c,\int_{\alpha\in\Lambda}f(\alpha)\mathrm{d}\alpha=1,f\ll \pi^{(0)}\}.\]When \(\pi^{(0)}=\frac{1}{\mu(\lambda)}\), we recover the original definition of \(S_{C,c}\). Note that here \(f\ll\pi^{(0)}\) means that \(f\) is absolute continuous with respect to the prior distribution \(\pi^{(0)}\) and this ensures that \(S_{C,c}(\pi^{(0)})\) is non-empty. Note that such condition is automatically satisfied when \(\pi^{(0)}\) is the uniform prior over the entire parameter space.
To define the projection of a density at the \(j\)-th iteration, \(\pi^{(j)}\), into the space \(S_{C,c}(\pi^{(0)})\), we consider the following functional programming problem:
\[\min_{f} \|f-\pi^{(j)}\|_{2}\] s.t. \[f\in S_{C,c}(\pi^{(0)}),\]
which is a direct generalization of Equation (3.1). As before, \(S_{C,c}(\pi^{(0)})\) is also convex and closed and the optimization program can be solved efficiently via discretization on \(\Lambda\).
## Appendix E Experiment Details
### MNIST Simulation
We now provide the detailed description of the experiment in Section 4.2.1. As specified therein, we consider two variable hyperparameters: the learning rate \(\eta\) and clipping norm \(R\), while keeping all the other hyperparameters fixed. We set the training batch size to be \(256\), and the total number of epoch to be \(10\). The value of \(\sigma\) is determined based on the allocated \(\varepsilon\) budget for each base algorithm. Specifically, \(\sigma=0.71\) for GP and \(\sigma=0.64\) for Uniform. For demonstration purposes, we set \(C\) to 2 and \(c\) to 0.75 in the GP method, so each base algorithm of Uniform has \(\log C/c\) more privacy budget than base algorithms in GP method. In Algorithm 3, we set \(\tau\) to 0.1 and \(\beta\) to 1. To facilitate the implementation of both methods, we discretize the learning rates and clipping norms as specified in the following setting to allow simple implementation of sampling and projection for Uniform and GP methods.
**Setting E.1**.: _we set a log-spaced grid discretization on \(\eta\) in the range \([0.0001,10]\) with a multiplicative factor of \(\sqrt[3]{10}\), resulting in \(16\) observations for \(\eta\). We also set a linear-spaced grid discretization on \(R\) in the range \([0.3,6]\) with an increment of \(0.3\), resulting in \(20\) observations for \(R\). This gives a total of \(320\) hyperparameters over the search region._
We specify the network structure we used in the simulation as below. It is the standard CNN in Tensorflow Privacy and Opacus.
class ConvNet(nn.Module): def __init__(self): super()__init__() self.conv1 = nn.Conv2d(1, 16, 8, 2, padding=3) self.conv2 = nn.Conv2d(16, 32, 4, 2) self.fc1 = nn.Linear(32 * 4 * 4, 32) self.fc2 = nn.Linear(32, 10)
def forward(self, x): x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 1) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 1) x = x.view(-1, 32 * 4 * 4) x = F.relu(self.fc1(x)) x = self.fc2(x) return x
Despite the simple nature of MNIST, the simulation of training CNN with the two methods over each different fixed \(T\) still take significant computation resources. Due to the constraints on computational resources, we conduct a semi-real simulation using the MNIST dataset. We cache the mean accuracy of \(5\) independently trained models for each discretized hyperparameter and treat that as a proxy for the "actual accuracy" of the hyperparameter. Each time we sample the accuracy of a hyperparameter, we add Gaussian noise with a standard deviation of \(0.1\) to the cached mean. We evaluate the performance of the output model based on the "actual accuracy" corresponding to the selected hyperparameter.
### CIFAR-10 Simulation
We also provide a description of the experiment in Section 4.2.2. We set the training batch size to be \(256\), and the total number of epoch to be \(10\). The value of \(\sigma\) is determined based on the allocated \(\varepsilon\) budget for each base algorithm. Specifically, \(\sigma=0.65\) for GP and \(\sigma=0.6\) for Uniform. Regarding our GP method, we adopt the same set of hyperparameters as used in our MNIST experiments, which include \(C=2\), \(c=0.75\), \(\tau=0.1\), and \(\beta=1\). As usual, we discretize the learning rates and clipping norms as specified in the following Setting.
**Setting E.2**.: _we set a log-spaced grid discretization on \(\eta\) in the range \([0.0001,1]\) with a multiplicative factor of \(10^{0.1}\), resulting in \(50\) observations for \(\eta\). We also set a linear-spaced grid discretization on \(R\) in the range \([0,100]\) with an increment of \(2\), resulting in \(50\) observations for \(R\). This gives a total of \(2500\) hyperparameter combinations over the search region._
We follow the same CNN model architecture with our MNIST experiments.
In Figure 2, we provide the hyperparameter landscape for \(\sigma=0.65\), as generated by BoTorch [3].
### Federated Learning Simulation
We now provide the detailed description of the experiment in Section 4.2.3. As specified therein, we considered a FL task on a proprietary dataset5. Our objective is to determine the optimal learning
Figure 3: Mean and Standard Error of the loss of the FL over the two hyperparameters.
Figure 2: Mean and standard error of the accuracy of DP-SGD over the two hyperparameters for \(\sigma=0.65\). The learning rate (log-scale) ranges from \(0.00001\) (left) to 1 (right) while the clipping norm ranges from 0 (top) to 100 (bottom). The landscape for \(\sigma=0.6\) is similar, with a better accuracy.
rates for the central server (using AdaGrad) and the individual users (using SGD). To simulate this scenario, we utilize the landscape generated by BoTorch [3], as illustrated in Figure 3, and consider it as our reference landscape for both mean and standard deviation of the loss for each hyperparameter. When the algorithm (GP or Uniform) visits a specific hyperparameter \(\lambda\), our oracle returns a noisy score \(q(\lambda)\) drawn from a normal distribution \(N(\mu_{\lambda},\sigma_{\lambda})\). Figure 3 displays a heatmap that presents the mean (\(\mu_{\lambda}\)) and standard error (\(\sigma_{\lambda}\)) structure of the loss over these two hyperparameters, providing insights into the landscape's characteristics.
## Appendix F Additional Related Work
In this section, we delve into a more detailed review of the pertinent literature.
We begin with non-private Hyperparameter Optimization, a critical topic in the realm of Automated Machine Learning (AutoML) [16]. The fundamental inquiry revolves around the generation of high-performing models within a specific search space. In historical context, two types of optimizations have proven significant in addressing this inquiry: architecture optimization and hyperparameter optimization. Architecture optimization pertains to model-specific parameters such as the number of neural network layers and their interconnectivity, while hyperparameter optimization concerns training-specific parameters, including the learning rate and minibatch size. In our paper, we incorporate both types of optimizations within our HPO framework. Practically speaking, \(\Lambda\) can encompass various learning rates and network architectures for selection. For HPO, elementary methods include grid search and random search [22, 17, 15]. Progressing beyond non-adaptive random approaches, surrogate model-based optimization presents an adaptive method, leveraging information from preceding results to construct a surrogate model of the objective function [26, 41, 20, 30]. These methods predominantly employs Bayesian optimization techniques, including Gaussian process [33], Random Forest [18], and tree-structured Parzen estimator [5].
Another important topic in this paper is Differential Privacy (DP). DP offers a mathematically robust framework for measuring privacy leakage. A DP algorithm promises that an adversary with perfect information about the entire private dataset in use - except for a single individual - would find it hard to distinguish between its presence or absence based on the output of the algorithm [12].
Historically, DP machine learning research has overlooked the privacy cost associated with HPO [1, 39, 42]. The focus has only recently shifted to the "honest HPO" setting, where this cost is factored in [28]. Addressing this issue directly involves employing a composition-based analysis. If each training run of a hyperparameter upholds DP, then the overall HPO procedure adheres to DP through composition across all attempted hyperparameter values. A plethora of literature on the composition of DP mechanisms attempts to quantify a better DP guarantee of the composition. Vadhan et al. [36] demonstrated that though \((\varepsilon,\delta)\)-DP possesses a simple mathematical form, deriving the precise privacy parameters of a composition is #-P hard. Despite this obstacle, numerous advanced techniques are available to calculate a reasonably accurate approximation of the privacy parameters, such as Moments Accountant [1], GDP Accountant [11], and Edgeworth Accountant [37]. The efficacy of these accountants is attributed to the fact that it is easier to reason about the privacy guarantees of compositions within the framework of Renyi differential privacy [27] or \(f\)-differential privacy [11]. These methods have found widespread application in DP machine learning. For instance, when training deep learning models, one of the most commonly adopted methods to ensure DP is via noisy stochastic gradient descent (noisy SGD) [4, 35], which uses Moments Accountant to better quantify the privacy guarantee.
Although using composition for HPO is a simple and straightforward approach, it carries with it a significant challenge. The privacy guarantee derived from composition accounting can be excessively loose, scaling polynomially with the number of runs. Chaudhuri et al. [7] were the first to enhance the DP bounds for HPO by introducing additional stability assumptions on the learning algorithms. [24] made significant progress in enhancing DP bounds for HPO without relying on any stability properties of the learning algorithms. They proposed a simple procedure where a hyperparameter was randomly selected from a uniform distribution for each training run. This selection process was repeated a random number of times according to a geometric distribution, and the best model obtained from these runs was outputted. They showed that this procedure satisfied \((3\varepsilon,0)\)-DP as long as each training run of a hyperparameter was \((\varepsilon,0)\)-DP. Building upon this, [32] extended the procedure to accommodate negative binomial or Poisson distributions for the repeated uniform selection. They also offered more precise Renyi DP guarantees for this extended procedure. Furthermore, [8] explored a generalization of the procedure for top-\(k\) selection, considering \((\varepsilon,\delta)\)-DP guarantees.
In a related context, [28] explored a setting that appeared superficially similar to ours, as their title mentioned "adaptivity." However, their primary focus was on improving adaptive optimizers such as DP-Adam, which aimed to reduce the necessity of hyperparameter tuning, rather than the adaptive HPO discussed in this paper. Notably, in terms of privacy accounting, their approach only involved composing the privacy cost of each run without proposing any new method.
Another relevant area of research is DP selection, which encompasses well-known methods such as the exponential mechanism [25] and the sparse vector technique [13], along with subsequent studies. However, this line of research always assumes the existence of a low-sensitivity score function for each candidate, which is an unrealistic assumption for hyperparameter optimization. | ## Review
### Summary
This paper proposes DP-HyPO, a differentially private adaptive hyperparameter optimization (HPO) framework that bridges the gap between non-private and private HPO methods. DP-HyPO aims to enhance the performance of private model training by allowing adaptive hyperparameter selection based on previous runs while maintaining privacy guarantees. The authors provide both theoretical analysis and empirical evaluations demonstrating the framework's advantages over non-adaptive methods, particularly in complex settings. By focusing on hyperparameter tuning within the context of privacy, this work contributes valuable insights into the effective integration of privacy-preserving techniques in machine learning workflows.
### Strengths
- Clear and intriguing presentation of the framework.
- New theoretical results analyzing privacy loss in adaptive private hyperparameter optimization.
- Flexible framework that allows various model-based HPO methods to be adapted to DP-HPO.
- The paper is well-written and easy to understand.
- The availability of code facilitates reproducibility.
### Weaknesses
- Experimental results lack depth; more extensive evaluations are needed.
- The paper's presentation could be improved for better flow and clarity.
- Some important concepts, like the definition of RDP, are inadequately covered in the main text.
- The focus on deep learning methods may limit the generalizability of the framework to other types of learning algorithms.
- Use of simplistic datasets like MNIST does not sufficiently challenge the proposed methods.
### Questions
- How can one interpret π^(0)=π(0)·μ(Λ) in Framework 1, which is a product of two distributions?
- What is the run time of performing the projection in Eq.(3.1)?
- Is it possible to derive convergence guarantees for DP-HyPO with Gaussian Process?
- What about deterministic learning algorithms? Is DP also preserved via the framework?
- What is the purpose of Figure 1? What are the details of experimental settings?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical foundation is generally solid, but some assumptions require more clarity and justification.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper is mostly well-written, there are several areas where the flow and clarity could be improved.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper addresses an important problem in the field and provides valuable insights, although some aspects could be explored more thoroughly.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact, but it requires further experimental analysis and clarity in presentation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a novel and relevant contribution to the field of differential privacy in hyperparameter optimization. While there are notable weaknesses in the experimental section and presentation, the theoretical framework and potential impact justify an acceptance decision. The clarity of the writing and the provision of code for reproducibility further enhance its value.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Learning bounded degree polytrees with samples
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
We establish finite-sample guarantees for efficient proper learning of bounded-degree _polytrees_, a rich class of high-dimensional probability distributions and a subclass of Bayesian networks, a widely-studied type of graphical models. Very recently, Bhattacharyya et al. (2021) obtained finite-sample guarantees for recovering tree-structured Bayesian networks, i.e., 1-polytrees. We considerably extend their results by providing an efficient algorithm which learns \(d\)-polytrees in polynomial time and sample complexity when the in-degree \(d\) is constant, provided that the underlying undirected graph (skeleton) is known. We complement our algorithm with an information-theoretic lower bound, showing that the dependence of our sample complexity is nearly tight in both the dimension and target accuracy parameters.
## 1 Introduction
Distribution learning, or density estimation, is the task of obtaining a good estimate of some unknown underlying probability distribution \(P\) from observational samples. Understanding which classes of distributions could be or could not be learnt efficiently is a fundamental problem in both computer science and statistics, where efficiency is measured in terms of _sample_ (data) and _computational_ (time) complexities.
_Bayesian networks_ (or _Bayes nets_ in short) represent a class of high-dimensional distributions that can be explicitly described by how each variable is be generated sequentially in a directed fashion. Being interpretable, Bayes nets have been used to model beliefs in a wide variety of domains (e.g. see Jensen and Nielsen, 2007; Koller and Friedman, 2009) and references therein). A fundamental problem in computational learning theory is to identify families of Bayes nets which can be learned efficiently from observational data.
Formally, a Bayes net is a probability distribution \(P\), defined over some directed acyclic graphs (DAG) \(G=(V,E)\) on \(|V|=n\) nodes that factorizes according to \(G\) (i.e. Markov with respect to \(G\)) in the following sense: \(P(v_{1},\ldots,v_{n})=\prod_{v_{1},\ldots,v_{n}}P(v\mid\pi(v))\), where \(\pi(v)\subseteq V\) are the parents of \(v\) in \(G\). While it is well-known that given the DAG (structure) of a Bayes net, there exists sample-efficient algorithms that output good hypotheses Dasgupta, 1997; Bhattacharyya et al., 2020), there is no known computationally efficient algorithms for obtaining the DAG of a Bayes net. In fact, it has long been understood that Bayes net structure learning is computationally expensive, in various general settings (Chickering et al., 2004). However, these hardness results fall short when the goal is learning the distribution \(P\) in the probabilistically approximately correct (PAC) (Valiant, 1984) sense (with respect to, say, KL divergence or total variation distance), rather than trying to recover an exact graph from the Markov equivalence class of \(P\).
_Polytrees_ are a subclass of Bayesian networks where the undirected graph underlying the DAG is a forest, i.e., there is no cycle for the undirected version of the DAG; a polytree with maximum in-degree \(d\) is also known as a \(d\)-polytree. With an infinite number of samples, one can recover theDAG of a non-degenerate polytree in the equivalence class with the Chow-Liu algorithm (Chow and Liu, 1968) and some additional conditional independence tests (Rebane and Pearl, 1988). However, this algorithm does _not_ work in the finite sample regime. The only known result for learning polytrees with finite sample guarantees is for 1-polytrees (Bhattacharyya et al., 2021). Furthermore, in the agnostic setting, when the goal is to find the closest polytree distribution to an arbitrary distribution \(P\), the learning problem becomes NP-hard (Dasgupta, 1999).
In this work, we investigate what happens when the given distribution is a \(d\)-polytree, for \(d>1\). _Are \(d\)-polytrees computationally hard to learn in the realizable PAC-learning setting?_ One motivation for studying polytrees is due to a recent work of Gao and Aragam (2021) which shows that polytrees are easier to learn than general Bayes nets due to the underlying graph being a tree, allowing typical causal assumptions such as faithfulness to be dropped when designing efficient learning algorithms.
**Contributions.** Our main contribution is a sample-efficient algorithm for proper Bayes net learning in the realizable setting, when provided with the ground truth skeleton (i.e., the underlying forest). Crucially, our result does not require any distributional assumptions such as strong faithfulness, etc.
**Theorem 1**.: _There exists an algorithm which, given \(m\) samples from a polytree \(P\) over \(\Sigma^{n}\), accuracy parameter \(\varepsilon>0\), failure probability \(\delta\), as well as its maximum in-degree \(d\) and the explicit description of the ground truth skeleton of \(P\), outputs a \(d\)-polytree \(\hat{P}\) such that \(d_{\mathrm{KL}}(P\parallel\hat{P})\leq\varepsilon\) with success probability at least \(1-\delta\), as long as_
\[m=\tilde{\Omega}\!\left(\frac{n\cdot|\Sigma|^{d+1}}{\varepsilon}\log\frac{1}{ \delta}\right)\.\]
_Moreover, the algorithm runs in time polynomial in \(m\), \(|\Sigma|^{d}\), and \(n^{d}\)._
We remark that our result holds when even given only an upper bound on the true in-degree \(d\). In particular, our result provides, for constant \(|\Sigma|\), \(d\), an upper bound of \(\tilde{O}(n/\varepsilon)\) on the sample complexity of learning \(O(1)\)-polytrees. Note that this dependence on the dimension \(n\) and the accuracy parameter \(\varepsilon\) are optimal, up to logarithmic factors: indeed, we establish in Theorem 15 an \(\Omega(n/\varepsilon)\) sample complexity lower bound for this question, even for \(d=2\) and \(|\Sigma|=2\).1
Footnote 1: We remark that (Bhattacharyya et al., 2021, Theorem 7.6) implies an \(\Omega(\frac{n}{\varepsilon}\log\frac{n}{\varepsilon})\) sample complexity lower bound for the analogous question when the skeleton is unknown and \(d=1\).
We also state sufficient conditions on the distribution that enable recovery of the ground truth skeleton. Informally, we require that the data processing inequality hold in a strong sense with respect to the edges in the skeleton graph. Under these conditions, combining with our main result in Theorem 1, we obtain a polynomial-time PAC algorithm to learn bounded-degree polytrees from samples.
**Other related work.** Structure learning of Bayesian networks is an old problem in machine learning and statistics that has been intensively studied; see, for example, Chapter 18 of Koller and Friedman (2009). Many of the early approaches required faithfulness, a condition which permits learning of the Markov equivalence class, e.g. Spirtes and Glymour (1991), Chickering (2002), Friedman et al. (2013). Finite sample complexity of such algorithms assuming faithfulness-like conditions has also been studied, e.g. Friedman and Yakhini (1996). An alternate line of more modern work has considered various other distributional assumptions that permits for efficient learning, e.g., Chickering and Meek (2002), Hoyer et al. (2008), Shimizu et al. (2006), Peters and Buhlmann (2014), Ghoshal and Honorio (2017), Park and Raskutti (2017), Aragam et al. (2019), with the latter three also showing analyzing finite sample complexity. Specifically for polytrees, Rebane and Pearl (1988), Geiger et al. (1990) studied recovery of the DAG for polytrees under the infinite sample regime. Gao and Aragam (2021) studied the more general problem of learning Bayes nets, and their sufficient conditions simplified in the setting of polytrees. Their approach emphasize more on the exact recovery, and thus the sample complexity has to depend on the minimum gap of some key mutual information terms. In contrast, we allow the algorithm to make mistakes when certain mutual information terms are too small to detect for the given sample complexity budget and achieve a PAC-type guarantee. As such, once the underlying skeleton is discovered, our sample complexity only depends on the \(d,n,\varepsilon\) and not on any distributional parameters.
There are existing works on Bayes net learning with tight bounds in total variation distance, with a focus on sample complexity (and not necessarily computational efficiency); for instance, (Canonne et al., 2020). Acharya et al. (2018) consider the problem of learning (in TV distance) a bounded-degree causal Bayesian network from interventions, assuming the underlying DAG is known.
**Outline of paper.** We begin with some preliminary notions and related work in Section 2. Section 3 then shows how to recover a polytree close in KL divergence, assuming knowledge of the skeleton and maximum in-degree. Section 4 gives sufficient conditions to recover the underlying skeleton from samples, while Section 5 provides our sample complexity lower bound. We conclude in Section 6 with some open directions and defer some full proofs to the appendix.
## 2 Preliminaries and tools from previous work
### Preliminary notions and notation
We write the disjoint union as \(\dot{\cup}\). For any set \(A\), let \(|A|\) denotes its size. We use hats to denote estimated quantities, e.g., \(\hat{I}(X;Y)\) will be the estimated mutual information of \(I(X;Y)\). We employ the standard asymptotic notation \(O(\cdot)\), \(\Omega(\cdot)\ \Theta(\cdot)\), and write \(\tilde{O}(\cdot)\) to omit polylogarithmic factors. Throughout, we identify probability distributions over discrete sets with their probability mass functions (pmf). We use \(d^{*}\) to denote the true maximum in-degree of the underlying polytree.
### Probability distribution definitions
We begin by defining KL divergence and squared Hellinger distances for a pair of discrete distributions with the same support.
**Definition 2** (KL divergence and squared Hellinger distance).: For distributions \(P,Q\) defined on the same discrete support \(\mathcal{X}\), their KL divergence and squared Hellinger distances are defined as \(d_{\mathrm{KL}}(P\ \|\ Q)=\sum_{x\in\mathcal{X}}P(x)\log\frac{P(x)}{Q(x)}\) and \(d_{\mathrm{H}}^{2}(P,Q)=1-\sum_{x\in\mathcal{X}}\sqrt{P(x)\cdot Q(x)}\) respectively.
Abusing notation, for a distribution \(P\) on variables \(X=\{X_{1},\ldots,X_{n}\}\), we write \(P_{S}\) to mean the projection of \(P\) to the subset of variables \(S\subseteq X\) and \(P_{G}\) to mean the projection of \(P\) onto a graph \(G\). More specifically, we have \(P_{G}(x_{1},\ldots,x_{n})=\prod_{x\in X}P(x\mid\pi_{G}(x))\) where \(\pi_{G}(x)\) are the parents of \(x\) in \(G\). Note that \(P_{G}\) is the closest distribution2 on \(G\) to \(P\) in \(d_{\mathrm{KL}}\), i.e. \(P_{G}=\operatorname*{argmin}_{Q\in G}d_{\mathrm{KL}}(P\ \|\ Q)\). By Chow and Liu (1968), we also know that
Footnote 2: One can verify this using Bhattacharyya et al. (2021, Lemma 3.3): For any distribution \(Q\) defined on graph \(G\), we have \(d_{\mathrm{KL}}(P\ \|\ Q)-d_{\mathrm{KL}}(P\ \|\ P_{G})=\sum_{v\in V}P(\pi_{G}(v))\cdot d_{ \mathrm{KL}}(P(v\mid\pi_{G}(v))\ \|\ Q(v\mid\pi_{G}(v)))\geq 0\).
\[d_{\mathrm{KL}}(P,P_{G})=-\sum_{i=1}^{n}I(X_{i};\pi_{G}(X_{i}))-H(P_{X})+\sum_ {i=1}^{n}H(P_{X_{i}})\, \tag{1}\]
where \(H\) is the entropy function. Note that only the first term depends on the graph structure of \(G\).
By maximizing the sum of mutual information (the negation of the first term in (1)), we can obtain an \(\varepsilon\)-approximated graph \(G\) such that \(d_{\mathrm{KL}}(P\ \|\ P_{G})\leq\varepsilon\). In the case of tree-structured distributions, this can be efficiently solved by using any maximum spanning tree algorithm; a natural generalization to bounded degree bayes nets remains open due to the hardness of solving the underlying optimization problem (Hoffgen, 1993). If any valid topological ordering of the target Bayes net \(P\) is present, then an efficient greedy approach is able to solve the problem.
**Definition 3** ((Conditional Mutual Information).: Given a distribution \(P\), the mutual information of two random variables \(X\) and \(Y\), supported on \(\mathcal{X}\) and \(\mathcal{Y}\) respectively, is defined as
\[I(X;Y)=\sum_{x\in\mathcal{X},y\in\mathcal{Y}}P(x,y)\cdot\log\left(\frac{P(x,y )}{P(x)\cdot P(y)}\right)\.\]
Conditioning on a third random variable \(Z\), supported on \(\mathcal{Z}\), the conditional mutual information is defined as:
\[I(X;Y\mid Z)=\sum_{x\in\mathcal{X},y\in\mathcal{Y},z\in\mathcal{Z}}P(x,y,z) \cdot\log\left(\frac{P(x,y,z)\cdot P(z)}{P(x,z)\cdot P(y,z)}\right)\.\]
By adapting a known testing result from (Bhattacharyya et al., 2021, Theorem 1.3), we can obtain the following corollary, which we will use. We provide the full derivation in the supplementary materials.
**Corollary 4** (Conditional Mutual Information Tester, adapted from [2, Theorem 1.3]).: _Fix any \(\varepsilon>0\). Let \((X,Y,Z)\) be three random variables over \(\Sigma_{X},\Sigma_{Y},\Sigma_{Z}\) respectively. Given the empirical distribution \((\hat{X},\hat{Y},\hat{Z})\) over a size \(N\) sample of \((X,Y,Z)\), there exists a universal constant \(0<C<1\) so that for any_
\[N\geq\Theta\left(\frac{|\Sigma_{X}|\cdot|\Sigma_{Y}|\cdot|\Sigma_{Z}|}{ \varepsilon}\cdot\log\frac{|\Sigma_{X}|\cdot|\Sigma_{Y}|\cdot|\Sigma_{Z}|}{ \delta}\cdot\log\frac{|\Sigma_{X}|\cdot|\Sigma_{Y}|\cdot|\Sigma_{Z}|\cdot\log( 1/\delta)}{\varepsilon}\right),\]
_the following statements hold with probability \(1-\delta\): (1) If \(I(X;Y\mid Z)=0\), then \(\hat{I}(X;Y\mid Z)<C\cdot\varepsilon\). (2) If \(\hat{I}(X;Y\mid Z)\geq C\cdot\varepsilon\), then \(I(X;Y\mid Z)>0\). (3) If \(\hat{I}(X;Y\mid Z)\leq C\cdot\varepsilon\), then \(I(X;Y\mid Z)<\varepsilon\). Unconditional statements involving \(I(X;Y)\) and \(\hat{I}(X;Y)\) hold similarly by choosing \(|\Sigma_{Z}|=1\)._
### Graph definitions
Let \(G=(V,E)\) be a graph on \(|V|=n\) vertices and \(|E|\) nodes where adjacencies are denoted with dashes, e.g. \(u-v\). For any vertex \(v\in V\), we use \(N(v)\subseteq V\setminus\{v\}\) to denote the neighbors of \(v\) and \(d(v)=|N(v)|\) to denote \(v\)'s degree. An undirected cycle is a sequence of \(k\geq 3\) vertices such that \(v_{1}-v_{2}-\ldots-v_{k}-v_{1}\). For any subset \(E^{\prime}\subseteq E\) of edges, we say that the graph \(H=(V,E^{\prime})\) is the edge-induced subgraph of \(G\) with respect to \(E^{\prime}\).
For oriented graphs, we use arrows to denote directed edges, e.g. \(u\to v\). We denote \(\pi(v)\) to denote the parents of \(v\) and \(d^{in}(v)\) to denote \(v\)'s incoming degree. An interesting directed subgraph on three vertices is the v-structure, where \(u\to v\gets w\) and \(u\mathord{\raise 1.29pt\hbox{$\searrow$}}w\); we say that \(v\) is the center of the v-structure. In this work, we study a generalized higher-degree version of v-structures: we define the notion of _deg-\(\ell\) v-structure_ as a node \(v\) with \(\ell\geq 2\) parents \(u_{1},u_{2}\ldots,u_{\ell}\). We say that a deg-\(\ell\) v-structure is said to be \(\varepsilon\)-strong if we can reliably identify them in the finite sample regime. In our context, it means that for all \(k\in[\ell]\), \(I(u_{k};\{u_{1},u_{2}\ldots,u_{\ell}\}\setminus u_{k}\mid v)\geq C\cdot\varepsilon\), for the universal constant \(0<C<1\) appearing in Corollary 4. A directed acyclic graph (DAG) is a fully oriented graph without any directed cycles (a sequence of \(k\geq 3\) vertices such that \(v_{1}\to v_{2}\to\ldots\to v_{k}\to v_{1}\)) and are commonly used to represent the conditional dependencies of a Bayes net.
For any partially directed graph, an _acyclic completion_ or _consistent extension_ refers to an assignment of edge directions to unoriented edges such that the resulting fully directed graph has no directed cycles; we say that a DAG \(G\) is _consistent_ with a partially directed graph \(H\) if \(G\) is an acyclic completion of \(H\). Meek rules are a set of 4 edge orientation rules that are sound and complete with respect to any given set of arcs that has a consistent DAG extension Meek [1995]. Given any edge orientation information, one can always repeatedly apply Meek rules till a fixed point to maximize the number of oriented arcs. One particular orientation rule (Meek \(R1\)) orients \(b\to c\) whenever a partially oriented graph has the configuration \(a\to b-c\) and \(a\mathord{\raise 1.29pt\hbox{$\searrow$}}c\) so as to avoid forming a new v-structure of the form \(a\to b\gets c\). In the same spirit, we define Meek \(R1(d^{*})\) to orient all incident unoriented edges away from \(v\) whenever \(v\) already has \(d^{*}\) parents in a partially oriented graph.
The _skeleton_\(\operatorname{skel}(G)\) of a graph \(G\) refers to the resulting undirected graph after unorienting all edges in \(G\), e.g. see Fig. 1. A graph \(G\) is a _polytree_ if \(\operatorname{skel}(G)\) is a forest. For \(d\geq 1\), a polytree \(G\) is a \(d\)-polytree if all vertices in \(G\) have at most \(d\) parents. Without loss of generality, by picking the minimal \(d\), we may assume that \(d\)-polytrees have a vertex with \(d\) parents. When we _freely orient_ a forest, we pick arbitrary root nodes in the connected components and orient to form a 1-polytree.
## 3 Recovering a good orientation given a skeleton and degree bound
In this section, we describe and analyze an algorithm for estimating a probability distribution \(P\) that is defined on a \(d^{*}\)-polytree \(G^{*}\). We assume that we are given \(\operatorname{skel}(G^{*})\) and \(d\) as input.
Note that for some distributions there could be more than one ground truth graph, e.g. when the Markov equivalence class has multiple graphs. In such situations, for analysis purposes, we are free to choose any graph that \(P\) is Markov with respect to. As the mutual information scores3 are the same for any graphs that \(P\) is Markov with respect to, the choice of \(G^{*}\) does not matter here.
### Algorithm
At any point in the algorithm, let us define the following sets. Let \(N(v)\) be the set of all neighbors of \(v\) in \(\operatorname{skel}(G^{*})\). Let \(N^{in}(v)\subseteq N(v)\) be the current set of incoming neighbors of \(v\). Let \(N^{out}(v)\subseteq N(v)\) be the current set of outgoing neighbors of \(v\). Let \(N^{un}(v)\subseteq N(v)\) be the current set of unoriented neighbors of \(v\). That is,
\[N(v)=N^{in}(v)\ \dot{\cup}\ N^{out}(v)\ \dot{\cup}\ N^{un}(v)\]
```
Input: Skeleton \(\operatorname{skel}(G^{*})=(V,E)\), max in-degree \(d^{*}\), threshold \(\varepsilon>0\), universal constant \(C\) Output: A complete orientation of \(\operatorname{skel}(G^{*})\)
1: Run Phase 1: Orient strong v-structures (Algorithm 3) \(\triangleright\)\(\mathcal{O}(n^{d^{*}})\) time
2: Run Phase 2: Local search and Meek \(R1(d^{*})\) (Algorithm 4) \(\triangleright\)\(\mathcal{O}(n^{3})\) time
3: Run Phase 3: Freely orient remaining unoriented edges (Algorithm 5) \(\triangleright\)\(\mathcal{O}(n)\) time via DFS
4:return\(\hat{G}\)
```
**Algorithm 1** Algorithm for known skeleton and max in-degree.
There are three phases to our algorithm. In Phase 1, we orient strong v-structures. In Phase 2, we locally check if an edge is forced to orient one way or another to avoid incurring too much error. In Phase 3, we orient the remaining unoriented edges as a 1-polytree. Since the remaining edges were not forced, we may orient the remaining edges in an arbitrary direction (while not incurring "too much error") as long as the final incoming degrees of any vertex does not increase by more than 1. Subroutine Orient (Algorithm 2) performs the necessary updates when we orient \(u-v\) to \(u\to v\).
```
Input: Vertices \(u\) and \(v\) where \(u-v\) is currently unoriented
1: Orient \(u-v\) as \(u\to v\).
2: Update \(N^{in}(v)\) to \(N^{in}(v)\cup\{u\}\) and \(N^{un}(v)\) to \(N^{un}(v)\setminus\{u\}\).
3: Update \(N^{out}(u)\) to \(N^{out}(u)\cup\{v\}\) and \(N^{un}(u)\) to \(N^{un}(u)\setminus\{v\}\).
```
**Algorithm 2**Orient: Subroutine to orient edges
### Analysis
Observe that we perform \(\mathcal{O}(n^{d^{*}})\) (conditional) mutual information tests in Algorithm 1. The following lemma (Lemma 5) ensures us that _all_ our tests will behave as expected with sufficient samples. As such, in the rest of our analysis, we analyze under the assumption that our tests are correct.
**Lemma 5**.: _Suppose all variables in the Bayesian network has alphabet \(\Sigma\). For \(\varepsilon^{\prime}>0\), with_
\[m\in\mathcal{O}\left(\frac{|\Sigma|^{d^{*}+1}}{\varepsilon^{\prime}}\cdot\log \frac{|\Sigma|^{d^{*}+1}\cdot n^{d^{*}}}{\delta}\cdot\log\frac{|\Sigma|^{d^{*} +1}\cdot\log(n^{d^{*}}/\delta)}{\varepsilon^{\prime}}\right)\]
_empirical samples, \(\mathcal{O}(n^{d^{*}})\) statements of the following forms, where \(\mathbf{X}\) and \(\mathbf{Y}\) are variable sets of size \(|\mathbf{X}\ \dot{\cup}\ \mathbf{Y}|\leq d\) and \(Z\) is possibly \(\emptyset\), all succeed with probability at least \(1-\delta\):_
Figure 1: Running polytree example with \(d^{*}=3\) where \(I(a;b,c)=I(b;a,c)=I(c;a,b)=0\) due to the deg-3 v-structure centered at \(d\). Since \(I(a;f\mid d)=0\), Corollary 4 tells us that \(\hat{I}(a;f\mid d)\leq C\cdot\varepsilon\). Thus, we will _not_ detect \(a\to d\to f\) erroneously as a strong deg-2 v-structure \(a\to d\gets f\).
1. _If_ \(I(\mathbf{X};\mathbf{Y}\mid Z)=0\)_, then_ \(\hat{I}(\mathbf{X};\mathbf{Y}\mid Z)<C\cdot\varepsilon^{\prime}\)_,_
2. _If_ \(\hat{I}(\mathbf{X};\mathbf{Y}\mid Z)\geq C\cdot\varepsilon^{\prime}\)_, then_ \(I(\mathbf{X};\mathbf{Y}\mid Z)>0\)_,_
3. _If_ \(\hat{I}(\mathbf{X};\mathbf{Y}\mid Z)\leq C\cdot\varepsilon^{\prime}\)_, then_ \(I(\mathbf{X};\mathbf{Y}\mid Z)<\varepsilon^{\prime}\)_._
Proof.: Use Corollary 4 and apply union bound over \(\mathcal{O}(n^{d})\) tests.
Recall that \(\pi(v)\) is the set of true parents of \(v\) in \(G^{*}\). Let \(H\) be the forest induced by the remaining unoriented edges after phase \(2\). Let \(\hat{G}\) be returned graph of the algorithm 1. Let us denote the final \(N^{in}(v)\) as \(\pi^{in}(v)\) at the end of Phase 2, just before freely orienting, i.e. the vertices pointing into \(v\) in \(\hat{G}\setminus H\). Let \(\pi^{un}(v)=\pi(v)\setminus\pi^{in}(v)\) be the set of ground truth parents that are not identified in Phase 1. Since the algorithm does not make mistakes for orientations in \(\hat{G}\setminus H\) (Lemma 6), all edges of in \(\pi^{un}(v)\) will be unoriented.
**Lemma 6**.: _Any oriented arc in \(\hat{G}\setminus H\) is a ground truth orientation. That is, any vertex parent set in \(\hat{G}\setminus H\) is a subset of \(\pi(v)\), i.e. \(\pi^{in}(v)\subseteq\pi(v)\), and \(N^{in}(v)\) at any time during the algorithm will have \(N^{in}(v)\subseteq\pi^{in}(v)\)._
Let \(\hat{\pi}(v)\) be the proposed parents of \(v\) output by Algorithm 1. The KL divergence between the true distribution and our output distribution is essentially \(\sum_{v\in V}I(v;\pi(v))-\sum_{v\in V}I(v;\hat{\pi}(v))\) as the structure independent terms will cancel out.
To get a bound on the KL divergence, we will upper bound \(\sum_{v\in V}I(v;\pi(v))\) and lower bound \(\sum_{v\in V}I(v;\hat{\pi}(v))\). To upper bound \(I(v;\pi(v))\) in terms of \(\pi^{in}(v)\subseteq\pi(v)\) and \(I(v;u)\) for \(u\in\pi^{un}(v)\), we use Lemma 8 which relies on repeated applications of Lemma 7. To lower bound \(\sum_{v\in V}I(v;\hat{\pi}(v))\), we use Lemma 9.
**Lemma 7**.: _Fix any vertex \(v\), any \(S\subseteq\pi^{un}(v)\), and any \(S^{\prime}\subseteq\pi^{in}(v)\). If \(S\neq\emptyset\), then there exists a vertex \(u\in S\cup S^{\prime}\) with_
\[I(v;S\cup S^{\prime})\leq I(v;S\cup S^{\prime}\setminus\{u\})+I(v;u)+ \varepsilon\;. \tag{2}\]
**Lemma 8**.: _For any vertex \(v\) with \(\pi^{in}(v)\), we can show that_
\[I(v;\pi(v))\leq\varepsilon\cdot|\pi(v)|+I(v;\pi^{in}(v))+\sum_{u\in\pi^{un}(v) }I(v;u)\;.\]
```
1:\(d\gets d^{*}\)
2:while\(d\geq 2\)do
3:for\(v\in V\)do\(\triangleright\) Arbitrary order
4: Let \(\mathcal{N}_{d}\subseteq 2^{N(v)}\) be the set of \(d\) neighbors of \(v\)\(\triangleright\)\(|\mathcal{N}_{d}|=\binom{|N(v)|}{d}\)
5:for\(S\in\mathcal{N}_{d}\) s.t. \(|S|=d\), \(|S\cup N^{in}(v)|\leq d^{*}\), and \(\hat{I}(u;S\setminus\{u\}\mid v)\geq C\cdot\varepsilon\), \(\forall u\in S\)do
6:for\(u\in S\)do\(\triangleright\) Strong deg-\(d\) v-structure
7: orient(\(u\), \(v\))\(\triangleright\) Decrement degree bound
8:\(d\gets d-1\)
```
**Algorithm 3** Phase 1: Orient strong v-structures
Figure 2: Suppose we have the following partially oriented graph in the execution of Algorithm 4 (after Phase 1). Since \(N^{in}(d)=\{a,b\}\), we will check the edge orientations of \(c-d\) and \(f-d\). Since \(I(f;\{a,b\}\mid d)=0\), we will have \(\hat{I}(f;\{a,b\}\mid d)\leq\varepsilon\), so we will _not_ erroneously orient \(f\to d\). Meanwhile, \(I(c;\{a,b\})=0\), we will have \(\hat{I}(c;\{a,b\})\leq\varepsilon\), so we will _not_ erroneously orient \(d\to c\).
In Phase 3, we increase the incoming edges to any vertex by at most one. The following lemma tells us that we lose at most4 an additive \(\varepsilon\) error per vertex.
Footnote 4: Orienting “freely” could also increase the mutual information score and this is considering the worst case.
**Lemma 9**.: _Consider an arbitrary vertex \(v\) with \(\pi^{in}(v)\) at the start of Phase 3. If Phase 3 orients \(u\to v\) for some \(u-v\in H\), then_
\[I(v;\pi^{in}(v)\cup\{u\})\geq I(v;\pi^{in}(v))+I(v;u)-\varepsilon.\]
By using Lemma 8 and Lemma 9, we can show our desired KL divergence bound (Lemma 10).
**Lemma 10**.: _Let \(\pi(v)\) be the true parents of \(v\). Let \(\hat{\pi}(v)\) be the proposed parents of \(v\) output by our algorithm. Then,_
\[\sum_{v\in V}I(v;\pi(v))-\sum_{v\in V}I(v;\hat{\pi}(v))\leq n\cdot(d^{*}+1) \cdot\varepsilon\;.\]
With these results in hand, we are ready to establish our main theorem:
Proof of Theorem 1.: We first combine Lemma 10 and Lemma 5 with \(\varepsilon^{\prime}=\frac{\varepsilon}{2n\cdot(d^{*}+1)}\) in order to obtain an orientation \(\hat{G}\) which is close to \(G^{*}\). Now, recall that there exist efficient algorithms for estimating the parameters of a Bayes net with in-degree-\(d\) (note that this includes \(d\)-polytrees) \(P\) once a close-enough graph \(\hat{G}\) is recovered (Dasgupta, 1997, Bhattacharyya et al., 2020), with sample complexity \(\tilde{\mathcal{O}}(|\Sigma|^{d}n/\varepsilon)\). Denote the final output \(\hat{P}_{\hat{G}}\), a distribution that is estimated using the conditional probabilities implied by \(\hat{G}\). One can bound the KL divergences as follows:
\[d_{\mathrm{KL}}(P\parallel P_{\hat{G}})-d_{\mathrm{KL}}(P\parallel P_{G^{*}}) \leq\varepsilon/2\quad\text{and}\quad d_{\mathrm{KL}}(P\parallel\hat{P}_{\hat {G}})-d_{\mathrm{KL}}(P\parallel P_{\hat{G}})\leq\varepsilon/2\;.\]
Thus, we see that
\[d_{\mathrm{KL}}(P\parallel\hat{P}_{\hat{G}})\leq\varepsilon+d_{\mathrm{KL}}(P \parallel P_{G^{*}})=\varepsilon\;.\]
## 4 Skeleton assumption
In this section, we present a set of _sufficient_ assumptions (Assumption 11) under which the Chow-Liu algorithm will recover the true skeleton even while with finite samples.
**Assumption 11**.: For any given distribution \(P\), there exists a constant \(\varepsilon_{P}>0\) such that:
(1) For every pair of nodes \(u\) and \(v\), if there exists a path \(u-\cdots-v\) of length greater than \(2\) in \(G^{*}\), then then \(I(u;v)+3\cdot\varepsilon_{P}\leq I(a;b)\) for every pair of adjacent vertices \(a-b\) in the path.
(2) For every pair of directly connected nodes \(a-b\) in \(G^{*}\), \(I(a;b)\geq 3\cdot\varepsilon_{P}\).
Suppose there is a large enough gap of \(\varepsilon_{P}\) between edges in \(G^{*}\) and edges outside of \(G^{*}\). Then, with \(\mathcal{O}(1/\varepsilon_{P}^{2})\) samples, each estimated mutual information \(\hat{I}(a;b)\) will be sufficiently close to the true mutual information \(I(a;b)\). Thus, running the Chow-Liu algorithm (which is essentially maximum spanning tree on the estimated mutual information on each pair of vertices) recovers \(\operatorname{skel}(G^{*})\).
**Lemma 12**.: _Under Assumption 11, running the Chow-Liu algorithm on the \(m\)-sample empirical estimates \(\{\hat{I}(u;v)\}_{u,v\in V}\) recovers a ground truth skeleton with high probability when \(m\geq\Omega(\frac{\log n}{\varepsilon_{P}^{2}})\)._
Combining Lemma 12 with our algorithm Algorithm 1, one can learn a polytree that is \(\varepsilon\)-close in KL with \(\tilde{\mathcal{O}}\left(\max\left\{\frac{\log(n)}{\varepsilon_{P}^{2}},\frac{ 2^{t}.n}{\varepsilon}\right\}\right)\) samples, where \(\varepsilon_{P}\) depends on the distribution \(P\).
## 5 Lower bound
In this section, we show that \(\Omega(n/\varepsilon)\) samples are necessary _even when a known skeleton is provided_. For constant in-degree \(d\), this shows that our proposed algorithm in Section 3 is sample-optimal up to logarithmic factors.
We first begin by showing a lower bound of \(\Omega(1/\varepsilon)\) on a graph with three vertices, even when the skeleton is given. Let \(G_{1}\) be \(X\to Z\to Y\) and \(G_{2}\) be \(X\to Z\gets Y\), such that \(\operatorname{skel}(G_{1})=\operatorname{skel}(G_{2})\) is \(X-Z-Y\). Now define \(P_{1}\) and \(P_{2}\) as follows:
Figure 4: The five different possible orientations of \(H\). Observe that the ground truth orientation of these edges is inconsistent with all five orientations shown here.
Figure 3: Consider the partially oriented graph before the final phase, where \(H\) is the edge-induced subgraph on the unoriented edges in red. Since \(d^{*}=3\) is known, we can conclude that \(g\to i\) was oriented due to a local search step and not due to Meek \(R1(3)\). We have the following sets before the final phase: \(\pi^{in}(c)=\{a,b\}\), \(\pi^{in}(g)=\{f,j\}\), \(\pi^{i}=\{g\}\), \(\pi^{un}(d)=\{c\}\), \(\pi^{un}(f)=\{d,e\}\), and \(\pi^{un}(e)=\{h\}\). With respect to the chosen orientation of \(H\) and the notation in Lemma 10, we have \(A=\{c,d,f,e,h\}\), \(a_{c}=d\), \(a_{d}=f\), \(a_{f}=e\), and \(a_{e}=h\). Observe that the \(\pi^{un}\)’s and \(a\)’s are two different ways to refer to the set of red edges of \(H\).
\[P_{1}:\begin{cases}X\sim\operatorname{Bern}(1/2)\\ Z=\begin{cases}X&\text{w.p. 1/2}\\ \operatorname{Bern}(1/2)&\text{w.p. 1/2}\\ Y=\begin{cases}Z&\text{w.p. }\sqrt{\varepsilon}\\ \operatorname{Bern}(1/2)&\text{w.p. }1-\sqrt{\varepsilon}\end{cases}\end{cases}P_{2}: \begin{cases}X\sim\operatorname{Bern}(1/2)\\ Y\sim\operatorname{Bern}(1/2)\\ \end{cases}P_{2}:\begin{cases}X\text{ w.p. 1/2}\\ Y\text{ w.p. }\sqrt{\varepsilon}\\ \operatorname{Bern}(1/2)&\text{w.p. }1/2-\sqrt{\varepsilon}\end{cases} \tag{3}\]
The intuition is that we keep the edge \(X\to Z\) "roughly the same" and tweak the edge \(Y-Z\) between the distributions. We define \(P_{i,G}\) as projecting \(P_{i}\) onto \(G\). One can check that the following holds (see Supplemental for the detailed calculations):
**Lemma 13**.: _Let \(G_{1}\) be \(X\to Z\to Y\) and \(G_{2}\) be \(X\to Z\gets Y\), such that \(\operatorname{skel}(G_{1})=\operatorname{skel}(G_{2})\) is \(X-Z-Y\). With respect to Eq. (3), we have the following:_
1. \(d_{\mathrm{H}}^{2}(P_{1},P_{2})\in\mathcal{O}(\varepsilon)\)__
2. \(d_{\mathrm{KL}}(P_{1}\parallel P_{1,G_{1}})=0\) _and_ \(d_{\mathrm{KL}}(P_{1}\parallel P_{1,G_{2}})\in\Omega(\varepsilon)\)__
3. \(d_{\mathrm{KL}}(P_{2}\parallel P_{2,G_{2}})=0\) _and_ \(d_{\mathrm{KL}}(P_{2}\parallel P_{2,G_{1}})\in\Omega(\varepsilon)\)__
Our hardness result (Lemma 14) is obtained by reducing the problem of finding an \(\varepsilon\)-close graph orientation of \(X-Z-Y\) to the problem of _testing_ whether the samples are drawn from \(P_{1}\) or \(P_{2}\): To ensure \(\varepsilon\)-closeness in the graph orientation, one has to correctly determine whether the samples come from \(P_{1}\) or \(P_{2}\) and then pick \(G_{1}\) or \(G_{2}\) respectively. However, it is well-known that distinguishing two distributions whose squared Hellinger distance is \(\varepsilon\) requires \(\Omega(1/\varepsilon)\) samples (see, e.g., [2, Theorem 4.7]).
**Lemma 14**.: _Even when given \(\operatorname{skel}(G^{*})\), it takes \(\Omega(1/\varepsilon)\) samples to learn an \(\varepsilon\)-close graph orientation of \(G^{*}\) for distributions on \(\{0,1\}^{3}\)._
Using the above construction as a gadget, we can obtain a dependency on \(n\) in our lower bound by constructing \(n/3\) independent copies of the above gadget, a la proof strategy of Bhattacharyya et al. [2021, Theorem 7.6]. For some constant \(c>0\), we know that a constant \(1/c\) fraction of the gadgets will incur an error or more than \(\varepsilon/n\) if less than \(cn/\varepsilon\) samples are used. The desired result then follows from the tensorization of KL divergence, i.e., \(d_{\mathrm{KL}}\left(\prod_{i}P_{i}\parallel\prod_{i}Q_{i}\right)=\sum_{i}d_{ \mathrm{KL}}(P_{i}\parallel Q_{i})\).
**Theorem 15**.: _Even when given \(\operatorname{skel}(G^{*})\), it takes \(\Omega(n/\varepsilon)\) samples to learn an \(\varepsilon\)-close graph orientation of \(G^{*}\) for distributions on \(\{0,1\}^{n}\)._
## 6 Conclusion
In this work, we studied the problem of estimating a distribution defined on a \(d\)-polytree \(P\) with graph structure \(G^{*}\) using finite observational samples. We designed and analyzed an efficient algorithm that produces an estimate \(\hat{P}\) such that \(d_{\mathrm{KL}}(P\parallel\hat{P})\leq\varepsilon\) assuming access to \(\operatorname{skel}(G^{*})\) and \(d\). The skeleton \(\operatorname{skel}(G^{*})\) is recoverable under Assumption 11 and we show that there is an inherent hardness in the learning problem even under the assumption that \(\operatorname{skel}(G^{*})\) is given. For constant \(d\), our hardness result shows that our proposed algorithm is sample-optimal up to logarithmic factors.
An interesting open question is whether one can extend the hardness result to arbitrary \(d\geq 1\), or design more efficient learning algorithms for \(d\)-polytrees.
## References
* Acharya et al. [2018] J. Acharya, A. Bhattacharyya, C. Daskalakis, and S. Kandasamy. Learning and testing causal models with interventions. In _NeurIPS_, pages 9469-9481, 2018.
* Aragam et al. [2019] B. Aragam, A. Amini, and Q. Zhou. Globally optimal score-based learning of directed acyclic graphs in high-dimensions. _Advances in Neural Information Processing Systems_, 32, 2019.
* Bar-Yossef [2002] Z. Bar-Yossef. _The Complexity of Massive Data Set Computations_. PhD thesis, UC Berkeley, 2002. Adviser: Christos Papadimitriou. Available at [http://webee.technion.ac.il/people/zivby/index_files/Page1489.html](http://webee.technion.ac.il/people/zivby/index_files/Page1489.html).
* Bhattacharya et al. (2020) A. Bhattacharya, S. Gayen, K. S. Meel, and N. V. Vinodchandran. Efficient distance approximation for structured high-dimensional distributions via learning. In _NeurIPS_, 2020.
* Bhattacharyya et al. (2021) A. Bhattacharyya, S. Gayen, E. Price, and N. V. Vinodchandran. Near-optimal learning of tree-structured distributions by Chow-Liu. In _STOC '21--Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing_, pages 147-160. ACM, New York, 2021. doi: 10.1145/3406325.3451066. URL [https://doi.org/10.1145/3406325.3451066](https://doi.org/10.1145/3406325.3451066).
* Canonne et al. (2020) C. L. Canonne, I. Diakonikolas, D. M. Kane, and A. Stewart. Testing bayesian networks. _IEEE Trans. Inf. Theory_, 66(5):3132-3170, 2020. Preprint available at arXiv:1612.03156.
* Chickering (2002) D. M. Chickering. Optimal structure identification with greedy search. _Journal of machine learning research_, 3(Nov):507-554, 2002.
* Chickering and Meek (2002) D. M. Chickering and C. Meek. Finding optimal bayesian networks. In _Proceedings of the Eighteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI 02)_, pages 94-102, 2002.
* Chickering et al. (2004) D. M. Chickering, D. Heckerman, and C. Meek. Large-sample learning of bayesian networks is np-hard. _J. Mach. Learn. Res._, 5:1287-1330, 2004.
* Chow and Liu (1968) C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. _IEEE Trans. Inf. Theory_, 14(3):462-467, 1968.
* Dasgupta (1997) S. Dasgupta. The sample complexity of learning fixed-structure bayesian networks. _Mach. Learn._, 29(2-3):165-180, 1997.
* Dasgupta (1999) S. Dasgupta. Learning polytrees. In _UAI_, pages 134-141. Morgan Kaufmann, 1999.
* Friedman and Yakhini (1996) N. Friedman and Z. Yakhini. On the sample complexity of learning bayesian networks. In _Proceedings of the Twelfth Annual Conference on Uncertainty in Artificial Intelligence (UAI \(\{\)96\(\}\)), San Francisco, CA_, pages 274-282, 1996.
* Friedman et al. (2013) N. Friedman, I. Nachman, and D. Pe'er. Learning bayesian network structure from massive datasets: The" sparse candidate" algorithm. _arXiv preprint arXiv:1301.6696_, 2013.
* Gao and Aragam (2021) M. Gao and B. Aragam. Efficient bayesian network structure learning via local markov boundary search. _Advances in Neural Information Processing Systems_, 34:4301-4313, 2021.
* Geiger et al. (1990) D. Geiger, A. Paz, and J. Pearl. Learning causal trees from dependence information. In _AAAI_, pages 770-776. AAAI Press / The MIT Press, 1990.
* Ghoshal and Honorio (2017) A. Ghoshal and J. Honorio. Learning identifiable gaussian bayesian networks in polynomial time and sample complexity. _Advances in Neural Information Processing Systems_, 30, 2017.
* Hoffgen (1993) K.-U. Hoffgen. Learning and robust learning of product distributions. In _Proceedings of the sixth annual conference on Computational learning theory_, pages 77-83, 1993.
* Hoyer et al. (2008) P. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Scholkopf. Nonlinear causal discovery with additive noise models. _Advances in Neural Information Processing Systems_, 21, 2008.
* Jensen and Nielsen (2007) F. V. Jensen and T. D. Nielsen. _Bayesian networks and decision graphs_, volume 2. Springer, 2007.
* Principles and Techniques_. MIT Press, 2009.
* Meek (1995) C. Meek. Causal Inference and Causal Explanation with Background Knowledge. In _Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence_, UAI'95, page 403-410, San Francisco, CA, USA, 1995. Morgan Kaufmann Publishers Inc. ISBN 1558603859.
* Park and Raskutti (2017) G. Park and G. Raskutti. Learning quadratic variance function (qvf) dag models via overdispersion scoring (ods). _J. Mach. Learn. Res._, 18:224-1, 2017.
* Peters and Buhlmann (2014) J. Peters and P. Buhlmann. Identifiability of gaussian structural equation models with equal error variances. _Biometrika_, 101(1):219-228, 2014.
* Park and Raskutti (2017)* [334] G. Rebane and J. Pearl. The recovery of causal poly-trees from statistical data. _Int. J. Approx. Reason._, 2(3):341, 1988.
* [335] S. Shimizu, P. O. Hoyer, A. Hyvarinen, A. Kerminen, and M. Jordan. A linear non-gaussian acyclic model for causal discovery. _Journal of Machine Learning Research_, 7(10), 2006.
* [336] P. Spirtes and C. Glymour. An algorithm for fast recovery of sparse causal graphs. _Social science computer review_, 9(1):62-72, 1991.
* [337] L. G. Valiant. A theory of the learnable. In _STOC_, pages 436-445. ACM, 1984. | ## Review
### Summary
The paper presents an efficient PAC-learning algorithm for learning bounded polytrees, a class of graphical models where the skeleton is a forest and each node has a bounded in-degree. Building on prior work, it assumes the skeleton is known, providing sample complexity guarantees and polynomial time performance. The results extend earlier findings on tree-structured distributions and include conditions for learning the skeleton itself. The study is relevant for understanding the learning dynamics of graphical models, though it raises questions regarding the practicality of the skeleton assumption in real-world applications.
### Strengths
- The paper addresses the important problem of efficient learning of graphical models with bounded polytrees.
- It extends previous results for tree-structured Bayesian networks, providing a novel algorithm for learning d-polytrees with finite-sample guarantees.
- The theoretical analysis is strong, demonstrating nearly tight sample complexity.
- The paper is generally well-written, with a clear outline of the algorithm and its correctness proof.
### Weaknesses
- The algorithm's reliance on the known skeleton limits its applicability, making the results less useful for a broader ML audience.
- More motivation for studying polytrees could enhance its relevance, particularly in the context of the NeurIPS community.
- The writing at times lacks clarity, particularly in sections where standard procedures are assumed to be understood.
### Questions
- What is the expected tightness of Assumption 11 regarding the skeleton, and what obstacles exist in recovering the skeleton in general cases?
- Is there further discussion available on the necessity of assuming the skeleton is known?
- Could the authors clarify the justification for the formulas presented without adequate context?
- What kinds of instances would the described algorithms be practical for, considering the algorithm's complexity?
### Soundness
**Score:** 3
**Description:** Good - The theoretical foundations are solid, but reliance on the known skeleton raises concerns about practical applicability.
### Presentation
**Score:** 3
**Description:** Good - Generally clear presentation, though some sections could benefit from improved clarity and proofreading.
### Contribution
**Score:** 3
**Description:** Good - Offers a significant contribution to the field, particularly in extending learning algorithms for graphical models, but some limitations reduce its impact.
### Rating
**Score:** 6
**Description:** Weak Accept - The paper is technically solid with moderate-to-high impact potential, but it requires clarification on practical applications.
### Paper Decision
**Decision:** Reject
**Reasons:** While the paper is theoretically sound and presents interesting results, the reliance on the known skeleton limits its usefulness for the ML community. The motivation for this assumption is not adequately addressed, and the paper could mislead readers with its title. Overall, the work is promising but lacks the necessary practicality and justification for a strong recommendation.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Why Does Sharpness-Aware Minimization Generalize Better Than SGD?
Zixiang Chen Junkai Zhang Yiwen Kou Xiangning Chen Cho-Jui Hsieh Quanquan Gu
Department of Computer Science
University of California, Los Angeles
Los Angeles, CA 90095
{chenzx19,zhang,evankou,xiangning,chohsieh,qgu}@cs.ucla.edu
Equal contribution.
###### Abstract
The challenge of overfitting, in which the model memorizes the training data and fails to generalize to test data, has become increasingly significant in the training of large neural networks. To tackle this challenge, Sharpness-Aware Minimization (SAM) has emerged as a promising training method, which can improve the generalization of neural networks even in the presence of label noise. However, a deep understanding of how SAM works, especially in the setting of nonlinear neural networks and classification tasks, remains largely missing. This paper fills this gap by demonstrating why SAM generalizes better than Stochastic Gradient Descent (SGD) for a certain data model and two-layer convolutional ReLU networks. The loss landscape of our studied problem is nonsmooth, thus current explanations for the success of SAM based on the Hessian information are insufficient. Our result explains the benefits of SAM, particularly its ability to prevent noise learning in the early stages, thereby facilitating more effective learning of features. Experiments on both synthetic and real data corroborate our theory.
## 1 Introduction
The remarkable performance of deep neural networks has sparked considerable interest in creating ever-larger deep learning models, while the training process continues to be a critical bottleneck affecting overall model performance. The training of large models is unstable and difficult due to the sharpness, non-convexity, and non-smoothness of its loss landscape. In addition, as the number of model parameters is much larger than the training sample size, the model has the ability to memorize even randomly labeled data (Zhang et al., 2021), which leads to overfitting. Therefore, although traditional gradient-based methods like gradient descent (GD) and stochastic gradient descent (SGD) can achieve generalizable models under certain conditions, these methods may suffer from unstable training and harmful overfitting in general.
To overcome the above challenge, _Sharpness-Aware Minimization_ (SAM) (Foret et al., 2020), an innovative training paradigm, has exhibited significant improvement in model generalization and has become widely adopted in many applications. In contrast to traditional gradient-based methods that primarily focus on finding a point in the parameter space with a minimal gradient norm, SAM also pursues a solution with reduced sharpness, characterized by how rapidly the loss function changes locally. Despite the empirical success of SAM across numerous tasks (Bahri et al., 2021; Behdin et al., 2022; Chen et al., 2021; Liu et al., 2022a), the theoretical understanding of this method remains limited.
Foret et al. (2020) provided a PAC-Bayes bound on the generalization error of SAM to show that it will generalize well, while the bound only holds for the infeasible average-direction perturbation instead ofpractically used ascend-direction perturbation. Andriushchenko and Flammarion (2022) investigated the implicit bias of SAM for diagonal linear networks under global convergence assumption. The oscillations in the trajectory of SAM were explored by Bartlett et al. (2022), leading to a convergence result for the convex quadratic loss. A concurrent work (Wen et al., 2022) demonstrated that SAM can locally regularize the eigenvalues of the Hessian of the loss. In the context of least-squares linear regression, Behdin and Mazumder (2023) found that SAM exhibits lower bias and higher variance compared to gradient descent. However, all the above analyses of SAM utilize the Hessian information of the loss and require the smoothness property of the loss implicitly. The study for non-smooth neural networks, particularly for the classification task, remains open.
In this paper, our goal is to provide a theoretical basis demonstrating when SAM outperforms SGD. In particular, we consider a data distribution mainly characterized by the signal \(\mathbf{\mu}\) and input data dimension \(d\), and prove the following separation in terms of test error between SGD and SAM.
**Theorem 1.1** (Informal statement of Theorems 3.2 and 4.1).: _Let \(p\) be the strength of the label flipping noise. For any \(\epsilon>0\), under certain regularity conditions, with high probability, there exists \(0\leq t\leq T\) such that the training loss converges, i.e., \(L_{S}(\mathbf{W}^{(t)})\leq\epsilon\). Besides,_
1. _For SGD, when the signal strength_ \(\|\mathbf{\mu}\|_{2}\geq\Omega(d^{1/4})\)_, we have_ \(L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})\leq p+\epsilon\)_. When the signal strength_ \(\|\mathbf{\mu}\|_{2}\leq O(d^{1/4})\)_, we have_ \(L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})\geq p+0.1\)_._
2. _For SAM, provided the signal strength_ \(\|\mathbf{\mu}\|_{2}\geq\widehat{\Omega}(1)\)_, we have_ \(L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})\leq p+\epsilon\)_._
Our contributions are summarized as follows:
* We discuss how the loss landscape of two-layer convolutional ReLU networks is different from the smooth loss landscape and thus the current explanation for the success of SAM based on the Hessian information is insufficient for neural networks.
* To understand the limit of SGD, we precisely characterize the conditions under which benign overfitting can occur in training two-layer convolutional ReLU networks with SGD. To the best of our knowledge, this is the first benign overfitting result for neural network trained with mini-batch SGD. We also prove a phase transition phenomenon for SGD, which is illustrated in Figure 1.
* Under the conditions when SGD leads to harmful overfitting, we formally prove that SAM can achieve benign overfitting. Consequently, we establish a rigorous theoretical distinction between SAM and SGD, demonstrating that SAM strictly outperforms SGD in terms of generalization error. Specifically, we show that SAM effectively mitigates noise learning in the early stages of training, enabling neural networks to learn features more efficiently.
**Notation.** We use lower case letters, lower case bold face letters, and upper case bold face letters to denote scalars, vectors, and matrices respectively. For a vector \(\mathbf{v}=(v_{1},\cdots,v_{d})^{\top}\), we denote
Figure 1: Illustration of the phase transition between benign overfitting and harmful overfitting. The yellow region represents the regime under which the overfitted CNN trained by SGD is guaranteed to have a small excess risk, and the blue region represents the regime under which the excess risk is guaranteed to be a constant order (e.g., greater than \(0.1\)). The gray region is the regime where the excess risk is not characterized.
by \(\|\mathbf{v}\|_{2}:=\left(\sum_{j=1}^{d}v_{j}^{2}\right)^{1/2}\) its \(2\)-norm. For two sequence \(\{a_{k}\}\) and \(\{b_{k}\}\), we denote \(a_{k}=O(b_{k})\) if \(|a_{k}|\leq C|b_{k}|\) for some absolute constant \(C\), denote \(a_{k}=\Omega(b_{k})\) if \(b_{k}=O(a_{k})\), and denote \(a_{k}=\Theta(b_{k})\) if \(a_{k}=O(b_{k})\) and \(a_{k}=\Omega(b_{k})\). We also denote \(a_{k}=o(b_{k})\) if \(\lim|a_{k}/b_{k}|=0\). Finally, we use \(\widetilde{O}(\cdot)\) and \(\widetilde{\Omega}(\cdot)\) to omit logarithmic terms in the notation. We denote the set \(\{1,\cdots,n\}\) by \([n]\), and the set \(\{0,\cdots,n-1\}\) by \(\overline{[n]}\), respectively. The carnality of a set \(S\) is denoted by \(|S|\).
## 2 Preliminaries
### Data distribution
Our focus is on binary classification with label \(y\in\{\pm 1\}\). We consider the following data model, which can be seen as a special case of sparse coding model (Olshausen and Field, 1997; Allen-Zhu and Li, 2022; Ahn et al., 2022).
**Definition 2.1**.: Let \(\mathbf{\mu}\in\mathbb{R}^{d}\) be a fixed vector representing the signal contained in each data point. Each data point \((\mathbf{x},y)\) with input \(\mathbf{x}=[\mathbf{x}^{(1)\top},\mathbf{x}^{(2)\top},\ldots,\mathbf{x}^{(P) \top}]^{\top}\in\mathbb{R}^{P\times d},\mathbf{x}^{(1)},\mathbf{x}^{(2)}, \ldots,\mathbf{x}^{(P)}\in\mathbb{R}^{d}\) and label \(y\in\{-1,1\}\) is generated from a distribution \(\mathcal{D}\) specified as follows:
1. The true label \(\widehat{y}\) is generated as a Rademacher random variable, i.e., \(\mathbb{P}[\widehat{y}=1]=\mathbb{P}[\widehat{y}=-1]=1/2\). The observed label \(y\) is then generated by flipping \(\widehat{y}\) with probability \(p\) where \(p<1/2\), i.e., \(\mathbb{P}[y=\widehat{y}]=1-p\) and \(\mathbb{P}[y=-\widehat{y}]=p\).
2. A noise vector \(\mathbf{\xi}\) is generated from the Gaussian distribution \(\mathcal{N}(\mathbf{0},\sigma_{p}^{2}\mathbf{I})\), where \(\sigma_{p}^{2}\) is the variance.
3. One of \(\mathbf{x}^{(1)},\mathbf{x}^{(2)},\ldots,\mathbf{x}^{(P)}\) is randomly selected and then assigned as \(y\cdot\mathbf{\mu}\), which represents the signal, while the others are given by \(\mathbf{\xi}\), which represents noises.
The data distribution in Definition 2.1 has also been extensively employed in several previous works (Allen-Zhu and Li, 2020; Jelassi and Li, 2022; Shen et al., 2022; Cao et al., 2022; Kou et al., 2023). When \(P=2\), this data distribution aligns with the one analyzed in Kou et al. (2023). This distribution is inspired by image data, where the input is composed of different patches, with only a few patches being relevant to the label. The model has two key vectors: the feature vector and the noise vector. For any input vector \(\mathbf{x}=[\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(P)}]\), there is exactly one \(\mathbf{x}^{(j)}=y\mathbf{\mu}\), and the others are random Gaussian vectors. For example, the input vector \(\mathbf{x}\) can be \([y\mathbf{\mu},\mathbf{\xi},\ldots,\mathbf{\xi}],\ldots,[\mathbf{\xi},\ldots,y\mathbf{\mu},\mathbf{\xi}]\) or \([\mathbf{\xi},\ldots,\mathbf{\xi},y\mathbf{\mu}]\): the signal patch \(y\mathbf{\mu}\) can appear at any position. To avoid harmful overfitting, the model must learn the feature vector rather than the noise vector.
### Neural Network and Training Loss
To effectively learn the distribution as per Definition 2.1, it is advantageous to utilize a shared weights structure, given that the specific signal patch is not known beforehand. When \(P>n\), shared weights become indispensable as the location of the signal patch in the test can differ from the location of the signal patch in the training data.
We consider a two-layer convolutional neural network whose filters are applied to the \(P\) patches \(\mathbf{x}_{1},\cdots,\mathbf{x}_{P}\) separately, and the second layer parameters of the network are fixed as \(+1/m\) and \(-1/m\) respectively, where \(m\) is the number of convolutional filters. Then the network can be written as \(f(\mathbf{W},\mathbf{x})=F_{+1}(\mathbf{W}_{+1},\mathbf{x})-F_{-1}(\mathbf{W}_ {-1},\mathbf{x})\), where \(F_{+1}(\mathbf{W}_{+1},\mathbf{x})\) and \(F_{-1}(\mathbf{W}_{-1},\mathbf{x})\) are defined as
\[F_{j}(\mathbf{W}_{j},\mathbf{x})=m^{-1}{\sum_{r=1}^{m}{\sum_{p=1}^{P}\sigma( \langle\mathbf{w}_{j,r},\mathbf{x}^{(p)}\rangle)}}. \tag{1}\]
Here we consider ReLU activation function \(\sigma(z)=\mathds{1}(z\geq 0)z\), \(\mathbf{w}_{j,r}\in\mathbb{R}^{d}\) denotes the weight for the \(r\)-th filter, and \(\mathbf{W}_{j}\) is the collection of model weights associated with \(F_{j}\) for \(j=\pm 1\). We use \(\mathbf{W}\) to denote the collection of all model weights. Denote the training data set by \(\mathcal{S}=\{(\mathbf{x}_{i},y_{i})\}_{i\in[n]}\). We train the above CNN model by minimizing the empirical cross-entropy loss function
\[L_{\mathcal{S}}(\mathbf{W})=n^{-1}{\sum_{i\in[n]}}\ell(y_{i}f(\mathbf{W}, \mathbf{x}_{i})),\]
where \(\ell(z)=\log(1+\exp(-z))\).
### Training Algorithm
**Minibatch Stochastic Gradient Descent.** For epoch \(t\), the training data set \(S\) is randomly divided into \(H:=n/B\) mini batches \(\mathcal{I}_{t,b}\) with batch size \(B\geq 2\). The empirical loss for batch \(\mathcal{I}_{t,b}\) is defined as \(L_{\mathcal{I}_{t,b}}(\mathbf{W})=(1/B)\sum_{i\in\mathcal{I}_{t,b}}\ell(y_{i}f( \mathbf{W},\mathbf{x}_{i}))\). Then the gradient descent update of the filters in the CNN can be written as
\[\mathbf{w}^{(t,b+1)}=\mathbf{w}^{(t,b)}-\eta\cdot\nabla_{\mathbf{W}}L_{ \mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)}), \tag{2}\]
where the gradient of the empirical loss \(\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}\) is the collection of \(\nabla_{\mathbf{w}_{j},r}L_{\mathcal{I}_{t,b}}\) as follows
\[\nabla_{\mathbf{w}_{j},r}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)}) =\frac{(P-1)}{Bm}\sum_{i\in\mathcal{I}_{t,b}}\ell_{i}^{\prime(t,b )}\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle) \cdot jy_{i}\mathbf{\xi}_{i}\] \[\qquad+\frac{1}{Bm}\sum_{i\in\mathcal{I}_{t,b}}\ell_{i}^{\prime(t,b)}\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\widehat{y}_{i}\mathbf{ \mu}\rangle)\cdot\widehat{y}_{i}y_{i}j\mathbf{\mu}, \tag{3}\]
for all \(j\in\{\pm 1\}\) and \(r\in[m]\). Here we introduce a shorthand notation \(\ell_{i}^{\prime(t,b)}=\ell^{\prime}[y_{i}\cdot f(\mathbf{W}^{(t,b)},\mathbf{x }_{i})]\) and assume the gradient of the ReLU activation function at \(0\) to be \(\sigma^{\prime}(0)=1\) without loss of generality. We use \((t,b)\) to denote epoch index \(t\) with mini-batch index \(b\) and use \((t)\) as the shorthand of \((t,0)\). We initialize SGD by random Gaussian, where all entries of \(\mathbf{W}^{(0)}\) are sampled from i.i.d. Gaussian distributions \(\mathcal{N}(0,\sigma_{0}^{2})\), with \(\sigma_{0}^{2}\) being the variance. From (3), we can infer that the loss landscape of the empirical loss is highly non-smooth because the ReLU function is not differentiable at zero. In particular, when \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}\rangle\) is close to zero, even a very small perturbation can greatly change the activation pattern \(\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}\rangle)\) and thus change the direction of \(\nabla_{\mathbf{w}_{j},r}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\). This observation prevents the analysis technique based on the Taylor expansion with the Hessian matrix, and calls for a more sophisticated activation pattern analysis.
**Sharpness Aware Minimization.** Given an empirical loss function \(L_{S}(\mathbf{W})\) with trainable parameter \(\mathbf{W}\), the idea of SAM is to minimize a perturbed empirical loss at the worst point in the neighborhood ball of \(\mathbf{W}\) to ensure a uniformly low training loss value. In particular, it aims to solve the following optimization problem
\[\min_{\mathbf{W}}L_{S}^{\text{SAM}}(\mathbf{W}),\quad\text{where} \quad L_{S}^{\text{SAM}}(\mathbf{W}):=\max_{\|\mathbf{\epsilon}\|_{2}\leq\tau}L_{S }(\mathbf{W}+\mathbf{\epsilon}), \tag{4}\]
where the hyperparameter \(\tau\) is called the perturbation radius. However, directly optimizing \(L_{S}^{\text{SAM}}(\mathbf{W})\) is computationally expensive. In practice, people use the following sharpness-aware minimization (SAM) algorithm (Foret et al., 2020; Zheng et al., 2021) to minimize \(L_{S}^{\text{SAM}}(\mathbf{W})\) efficiently,
\[\mathbf{W}^{(t+1)}=\mathbf{W}^{(t)}-\eta\nabla_{\mathbf{W}}L_{S}\big{(} \mathbf{W}+\widehat{\mathbf{\epsilon}}\big{)},\quad\text{where}\quad\widehat{ \mathbf{\epsilon}}=\tau\cdot\frac{\nabla_{\mathbf{W}}L_{S}(\mathbf{W})}{\|\nabla_ {\mathbf{W}}L_{S}(\mathbf{W})\|_{F}}. \tag{5}\]
When applied to SGD in (2), the gradient \(\nabla_{\mathbf{W}}L_{S}\) in (5) is further replaced by stochastic gradient \(\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}\)(Foret et al., 2020). The detailed algorithm description of SAM in shown in Algorithm 1.
```
Input: Training set \(\mathcal{S}=\cup_{i=1}^{n}\{(\mathbf{x}_{i},\mathbf{y}_{i})\}\), Batch size \(B\), step size \(\eta>0\), neighborhood size \(\tau>0\). Initialize weights \(\mathbf{W}^{(0)}\). for\(t=0,1,\dots,T-1\)do Randomly divide the training data set into \(H\) mini batches \(\{\mathcal{I}_{t,b}\}_{b=0}^{H-1}\). for\(b=0,1,\dots,H-1\)do We calculate the perturbation \(\widehat{\mathbf{\epsilon}}^{(t,b)}=\tau\frac{\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b }}(\mathbf{W}^{(t,b)})}{\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}(\mathbf{W} ^{(t,b)})\|_{F}}\). Update model parameters: \(\mathbf{W}^{(t,b+1)}=\mathbf{W}^{(t,b)}-\eta\nabla_{\mathbf{W}}L_{\mathcal{I} _{t,b}}(\mathbf{W})|_{\mathbf{W}=\mathbf{W}^{(t,b)}+\widehat{\mathbf{\epsilon}}^{(t,b)}}\). endfor Update model parameters: \(\mathbf{W}^{(t+1,0)}=\mathbf{W}^{(t,H)}\) endfor
```
**Algorithm 1** Minibatch Sharpness Aware Minimization
## 3 Result for SGD
In this section, we present our main theoretical results for the CNN trained with SGD. Our results are based on the following conditions on the dimension \(d\), sample size \(n\), neural network width \(m\), initialization scale \(\sigma_{0}\) and learning rate \(\eta\).
**Condition 3.1**.: Suppose there exists a sufficiently large constant \(C\), such that the following hold:
1. Dimension \(d\) is sufficiently large: \(d\geq\widetilde{\Omega}\Big{(}\max\{nP^{-2}\sigma_{p}^{-2}\|\mathbf{\mu}\|_{2}^{2},n^{ 2},P^{-2}\sigma_{p}^{-2}Bm\}\Big{)}\).
2. Training sample size \(n\) and neural network width satisfy \(m,n\geq\widetilde{\Omega}(1)\).
3. The \(2\)-norm of the signal satisfies \(\|\mathbf{\mu}\|_{2}\geq\widetilde{\Omega}(P\sigma_{p})\).
4. The noise rate \(p\) satisfies \(p\leq 1/C\).
5. The standard deviation of Gaussian initialization \(\sigma_{0}\) is appropriately chosen such that \(\sigma_{0}\leq\widetilde{O}\Big{(}\big{(}\max\big{\{}P\sigma_{p}d/\sqrt{n},\| \mathbf{\mu}\|_{2}\big{\}}\big{)}^{-1}\Big{)}\).
6. The learning rate \(\eta\) satisfies \(\eta\leq\widetilde{O}\Big{(}\big{(}\max\big{\{}P^{2}\sigma_{p}^{2}d^{3/2}/(Bm ),P^{2}\sigma_{p}^{2}d/B,n\|\mu\|_{2}/(\sigma_{0}B\sqrt{d}m),\)
\(nP\sigma_{p}\|\mathbf{\mu}\|_{2}/(B^{2}m\epsilon)\big{\}}\Big{)}^{-1}\Big{)}\).
The conditions imposed on the data dimensions \(d\), network width \(m\), and the number of samples \(n\) ensure adequate overparameterization of the network. Additionally, the condition on the learning rate \(\eta\) facilitates efficient learning by our model. By concentration inequality, with high probability, the \(\ell_{2}\) norm of the noise patch is of order \(\Theta(d\sigma_{p}^{2})\). Therefore, the quantity \(d\sigma_{p}^{2}\) can be viewed as the strength of the noise. Comparable conditions have been established in Chatterji and Long (2021); Cao et al. (2022); Frei et al. (2022); Kou et al. (2023). Based on the above condition, we first present a set of results on benign/harmful overfitting for SGD in the following theorem.
**Theorem 3.2** (Benign/harmful overfitting of SGD in training CNNs).: _For any \(\epsilon>0\), under Condition 3.1, with probability at least \(1-\delta\) there exists \(t=\widetilde{O}(\eta^{-1}\epsilon^{-1}mnd^{-1}P^{-2}\sigma_{p}^{-2})\) such that:_
1. _The training loss converges, i.e.,_ \(L_{S}(\mathbf{W}^{(t)})\leq\epsilon\)_._
2. _When_ \(n\|\mathbf{\mu}\|_{2}^{4}\geq C_{1}dP^{4}\sigma_{p}^{4}\)_, the test error_ \(L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})\leq p+\epsilon\)_._
3. _When_ \(n\|\mathbf{\mu}\|_{2}^{4}\leq C_{3}dP^{4}\sigma_{p}^{4}\)_, the test error_ \(L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})\geq p+0.1\)_._
Theorem 3.2 reveals a sharp phase transition between benign and harmful overfitting for CNN trained with SGD. This transition is determined by the relative scale of the signal strength and the data dimension. Specifically, if the signal is relatively large such that \(n\|\mathbf{\mu}\|_{2}^{4}\geq C_{1}d(P-1)^{4}\sigma_{p}^{4}\), the model can efficiently learn the signal. As a result, the test error decreases, approaching the Bayes risk \(p\), although the presence of label flipping noise prevents the test error from reaching zero. Conversely, when the condition \(n\|\mathbf{\mu}\|_{2}^{4}\leq C_{3}d(P-1)^{4}\sigma_{p}^{4}\) holds, the test error fails to approach the Bayes risk. This phase transition is empirically illustrated in Figure 2. In both scenarios, the model is capable of fitting the training data thoroughly, even for examples with flipped labels. This finding aligns with longstanding empirical observations.
The negative result of SGD, which encompasses the third point of Theorem 3.2 and the high test error observed in Figure 2, suggests that the signal strength needs to scale with the data dimension to enable benign overfitting. This constraint substantially undermines the efficiency of SGD, particularly when dealing with high-dimensional data. A significant part of this limitation stems from the fact that SGD does not inhibit the model from learning noise, leading to a comparable rate of signal and noise learning during iterative model parameter updates. This inherent limitation of SGD is effectively addressed by SAM, as we will discuss later in Section 4.
### Analysis of Mini-Batch SGD
In contrast to GD, SGD does not utilize all the training data at each iteration. Consequently, different samples may contribute to parameters differently, leading to possible unbalancing in parameters. To analyze SGD, we extend the signal-noise decomposition technique developed by Kou et al. (2023); Cao et al. (2022) for GD, which in our case is formally defined as:
**Lemma 3.3**.: _Let \(\mathbf{w}_{j,r}^{(t,b)}\) for \(j\in\{\pm 1\}\), \(r\in[m]\) be the convolution filters of the CNN at the \(b\)-th batch of \(t\)-th epoch of gradient descent. Then there exist unique coefficients \(\gamma_{j,r}^{(t,b)}\) and \(p_{j,r,i}^{(t,b)}\) such that_
\[\mathbf{w}_{j,r}^{(t,b)}=\mathbf{w}_{j,r}^{(0,0)}+j\cdot\gamma_{j,r}^{(t,b)} \cdot\|\mathbf{\mu}\|_{2}^{-2}\cdot\mathbf{\mu}+\frac{1}{P-1}\sum_{i=1}^{n}\rho_{j,r,i} ^{(t,b)}\cdot\|\mathbf{\xi}_{i}\|_{2}^{-2}\cdot\mathbf{\xi}_{i}. \tag{6}\]
_Further denote \(\overline{\rho}_{j,r,i}^{(t,b)}:=\rho_{j,r,i}^{(t,b)}\mathds{1}(\rho_{j,r,i}^{( t,b)}\geq 0)\), \(\rho_{j,r,i}^{(t,b)}:=\rho_{j,r,i}^{(t,b)}\mathds{1}(\rho_{j,r,i}^{(t,b)}\leq 0)\). Then_
\[\mathbf{w}_{j,r}^{(t,b)}=\mathbf{w}_{j,r}^{(0,0)}+j\gamma_{j,r}^{(t,b)}\|\mathbf{ \mu}\|_{2}^{-2}\mathbf{\mu}+\frac{1}{P-1}\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t,b)}\|\mathbf{\xi}_{i}\|_{2}^{-2}\mathbf{\xi}_{i}+\frac{1}{P-1}\sum_{i=1}^{n}\underline {\rho}_{j,r,i}^{(t,b)}\|\mathbf{\xi}_{i}\|_{2}^{-2}\mathbf{\xi}_{i}. \tag{7}\]
Note that (7) is a variant of (6): by decomposing the coefficient \(\rho_{j,r,i}^{(t,b)}\) into \(\overline{\rho}_{j,r,i}^{(t,b)}\) and \(\underline{\rho}_{j,r,i}^{(t,b)}\), we can streamline our proof process. The normalization terms \(\frac{1}{P-1}\), \(\|\mathbf{\mu}\|_{2}^{-2}\), and \(\|\mathbf{\xi}_{i}\|_{2}^{-2}\) ensure that \(\gamma_{j,r}^{(t,b)}\approx\langle\mathbf{w}_{j,r}^{(t,b)},\mu\rangle\) and \(\rho_{j,r}^{(t,b)}\approx(P-1)\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\). Through signal-noise decomposition, we characterize the learning progress of signal \(\mathbf{\mu}\) using \(\gamma_{j,r}^{(t,b)}\), and the learning progress of noise using \(\rho_{j,r}^{(t,b)}\). This decomposition turns the analysis of SGD updates into the analysis of signal noise coefficients. Kou et al. (2023) extend this technique to the ReLU activation function as well as in the presence of label flipping noise. However, mini-batch SGD updates amplify the complications introduced by label flipping noise, making it more difficult to ensure learning. We have developed advanced methods for coefficient balancing and activation pattern analysis. These techniques will be thoroughly discussed in the sequel. The progress of signal learning is characterized by \(\gamma_{j,r}^{(t,b)}\), whose update rule is as follows:
\[\gamma_{j,r}^{(t,b+1)}=\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot \left[\sum_{i\in\mathcal{I}_{t,b}\cap\mathcal{S}_{+}}\ell_{i}^{\prime(t,b)} \sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\widehat{y}_{i}\cdot\mathbf{\mu} \rangle)\right.\] \[\left.-\sum_{i\in\mathcal{I}_{t,b}\cap\mathcal{S}_{-}}\ell_{i}^{ \prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\widehat{y}_{i} \cdot\mathbf{\mu}\rangle)\right]\cdot\|\mathbf{\mu}\|_{2}^{2}. \tag{8}\]
Here, \(\mathcal{I}_{t,b}\) represents the indices of samples in batch \(b\) of epoch \(t\), \(S_{+}\) denotes the set of clean samples where \(y_{i}=\widehat{y}_{i}\), and \(S_{-}\) represents the set of noisy samples where \(y_{i}=-\widehat{y}_{i}\). The updates of \(\gamma_{j,r}^{(t,b)}\) comprise an increment arising from sample learning, counterbalanced by a decrement due to noisy sample learning. Both empirical and theoretical analyses have demonstrated that overparametrization allows the model to fit even random labels. This occurs when the negative term \(\sum_{i\in\mathcal{I}_{t,b}\cap\mathcal{S}_{-}}\ell_{i}^{\prime(t,b)}\sigma^{ \prime}(\langle\mathbf{w}_{j,r}^{(t,b)},y_{i}\cdot\mathbf{\mu}\rangle)\) primarily drives model learning. Such unfavorable scenarios can be attributed to two possible factors. Firstly, the gradient of the loss \(\ell_{i}^{\prime(t,b)}\) might be significantly larger for noisy samples compared to clean samples. Secondly, during certain epochs, the majority of samples may be noisy, meaning that \(\mathcal{I}_{t,b}\cap S_{-}\) significantly outnumbers \(\mathcal{I}_{t,b}\cap\mathcal{S}_{+}\).
To deal with the first factor, we have to control the ratio of the loss gradient with regard to different samples, as depicted in (9). Given that noisy samples may overwhelm a single batch, we impose an additional requirement: the ratio of the loss gradient must be controllable across different batches within a single epoch, i.e.,
\[\ell_{i}^{\prime(t,b_{1})}/\ell_{k}^{\prime(t,b_{2})}\leq C_{2}. \tag{9}\]
As \(\ell^{\prime}(z_{1})/\ell^{\prime}(z_{2})\approx\exp(z_{2}-z_{1})\), we can upper bound \(\ell_{i}^{\prime(t,b_{1})}/\ell_{k}^{\prime(t,b_{2})}\) by \(y_{i}\cdot f(\mathbf{W}^{(t,b_{1})},\mathbf{x}_{i})-y_{k}\cdot f(\mathbf{W}^{(t,b_{2})},\mathbf{x}_{k})\), which can be further upper bounded by \(\sum_{r}\overline{\rho}_{y_{i},r,i}^{(t,b_{1})}-\sum_{r}\overline{\rho}_{y_{i},r,k}^{(t,b_{2})}\) with a small error. Therefore, the proof of (9) can be reduced to proving a uniform bound on the difference among \(\overline{\rho}_{y_{i},r,i}^{(t,b_{1})}\), i.e., \(\sum_{r=1}^{m}\overline{\rho}_{y_{i},r,i}^{(t,b_{1})}-\sum_{r=1}^{m}\overline{ \rho}_{y_{k},r,k}^{(t,b_{2})}\leq\kappa,\;\forall\;i,k\).
However, achieving this uniform upper bound turns out to be challenging, since the updates of \(\overline{\rho}_{j,r,i}^{(t,b)}\)'s are not evenly distributed across different batches within an epoch. Each mini-batch update utilizes only a portion of the samples, meaning that some \(\overline{\rho}_{y_{i},r,i}^{(t,b)}\) can increase or decrease much more thanthe others. Therefore, the uniformly bounded difference can only be achieved after the entire epoch is processed. Consequently, we have to first bound the difference among \(\overline{p}^{(t,b)}_{y_{i},r,i}\)'s after each entire epoch, and then control the maximal difference within one epoch. The full batch (epoch) update rule is established as follows:
\[\sum_{r=1}^{m}\big{[}\overline{p}^{(t+1,0)}_{y_{i},r,i}-\overline{ p}^{(t+1,0)}_{y_{k},r,k}\big{]} =\sum_{r=1}^{m}\big{[}\overline{p}^{(t,0)}_{y_{i},r,i}-\overline{ p}^{(t,0)}_{y_{k},r,k}\big{]}-\frac{\eta(P-1)^{2}}{Bm}\cdot\Big{(}|\widetilde{S }^{(t,b^{(t)}_{i}}_{i}|\ell^{\prime(t,b^{(t)}_{i}}_{i})\cdot\|\mathbf{\xi}_{i}\|_{2 }^{2}\] \[\qquad\qquad\qquad\qquad-|\widetilde{S}^{(t,b^{(t)}_{k}}_{k})| \ell^{\prime(t,b^{(t)}_{k}}_{k})\cdot\|\mathbf{\xi}_{k}\|_{2}^{2}\Big{)}. \tag{10}\]
Here, \(b^{(t)}_{i}\) denotes the batch to which sample \(i\) belongs in epoch \(t\), and \(\widetilde{S}^{(t,b^{(t)}_{i}}_{i})\) represents the parameters that learn \(\mathbf{\xi}_{i}\) at epoch \(t\) defined as
\[\widetilde{S}^{(t,b)}_{i}:=\{r:\langle\mathbf{w}^{(t,b)}_{y_{i},r },\mathbf{\xi}_{i}\rangle>0\}. \tag{11}\]
Therefore, the update of \(\sum_{r=1}^{m}\big{[}\overline{p}^{(t,0)}_{y_{i},r,i}-\overline{p}^{(t,0)}_{y_ {k},r,k}\big{]}\) is indeed characterized by the activation pattern of parameters, which serves as the key technique for analyzing the full batch update of \(\sum_{r=1}^{m}\big{[}\overline{\rho}^{(t,0)}_{y_{i},r,i}-\overline{\rho}^{(t,0 )}_{y_{k},r,k}\big{]}\). However, analyzing the pattern of \(S^{(t,b)}_{i}\) directly is challenging since \(\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle\) fluctuates in mini-batches without sample \(i\). Therefore, we introduce the set series \(S^{(t,b)}_{i}\) as the activation pattern with certain threshold as follows:
\[S^{(t,b)}_{i}:=\{r:\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_ {i}\rangle>\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}\}. \tag{12}\]
The following lemma suggests that the set of activated parameters \(S^{(t,0)}_{i}\) is a non-decreasing sequence with regards to \(t\), and the set of plain activated parameters \(\widetilde{S}^{(t,b)}_{i}\) always include \(S^{(t,0)}_{i}\). Consequently, \(S^{(0,0)}_{i}\) is always included in \(\widetilde{S}^{(t,b)}_{i}\), guaranteeing that \(\mathbf{\xi}_{i}\) can always be learned by some parameter. And this further makes sure the difference among \(\overline{\rho}^{(t,b)}_{y_{i},r,i}\) is bounded, as well as \(\ell^{\prime(t,b_{1})}_{i}/\ell^{\prime(t,b_{2})}_{k}\leq C_{2}\). In the proof for SGD, we consider the learning period \(0\leq t\leq T^{*}\), where \(T^{*}=\eta^{-1}\mathrm{poly}(\epsilon^{-1},d,n,m)\) is the maximum number of admissible iterations.
**Lemma 3.4** (Informal Statement of Lemma C.8).: _For all \(t\in[0,T^{*}]\) and \(b\in\overline{[H]}\), we have_
\[S^{(t-1,0)}_{i}\subseteq S^{(t,0)}_{i}\subseteq\widetilde{S}^{ (t,b)}_{i}. \tag{13}\]
As we have mentioned above, if noisy samples outnumber clean samples, \(\gamma^{(t,b)}_{j,r}\) may also decrease. To deal with such a scenario, we establish a two-stage analysis of \(\gamma^{(t,b)}_{j,r}\) progress. In the first stage, when \(-\ell^{\prime}_{i}\) is lower bound by a positive constant, we prove that there are enough batches containing sufficient clear samples. This is characterized by the following high-probability event.
**Lemma 3.5** (Informal Statement of Lemma B.6).: _With high probability, for all \(T\in[\widetilde{O}(1),T^{*}]\), there exist at least \(c_{1}\cdot T\) epochs among \([0,T]\), such that at least \(c_{2}\cdot H\) batches in each of these epochs satisfying the following condition:_
\[|S_{+}\cap S_{y}\cap\mathcal{I}_{t,b}|\in[0.25B,0.75B]. \tag{14}\]
After the first stage of \(T=\Theta(\eta^{-1}m(P-1)^{-2}\sigma_{p}^{-2}d^{-1})\) epochs, we have \(\gamma^{(T,0)}_{j,r}=\Omega\Big{(}n\frac{\|\mathbf{\mu}\|_{2}^{2}}{(P-1)^{2} \sigma_{p}^{2}d}\Big{)}\). The scale of \(\gamma^{(T,0)}_{j,r}\) guarantees that \(\langle\mathbf{w}^{(t,b)}_{j,r},\mathbf{\mu}\rangle\) remains resistant to intra-epoch fluctuations. Consequently, this implies the sign of \(\langle\mathbf{w}^{(t,b)}_{j,r},\mathbf{\mu}\rangle\) will persist unchanged throughout the entire epoch. Without loss of generality, we suppose that \(\langle\mathbf{w}^{(t,b)}_{j,r},\mathbf{\mu}\rangle>0\), then the update of \(\gamma^{(t,b)}_{j,r}\) can be written as follows:
\[\gamma^{(t+1,0)}_{j,r}=\gamma^{(t,0)}_{j,r}+\frac{\eta}{Bm}\cdot\bigg{[}\min _{i\in\mathcal{I}_{t,b},b}|\ell^{\prime(t,b)}_{i}||S_{+}\cap S_{1}|-\max_{i \in\mathcal{I}_{t,b},b}|\ell^{\prime(t,b)}_{i}||S_{-}\cap S_{-1}|\bigg{]} \cdot\|\mathbf{\mu}\|_{2}^{2}. \tag{15}\]
As we have proved the balancing of logits \(\ell^{\prime(t,b)}_{i}\) across batches, the progress analysis of \(\gamma^{(t+1,0)}_{j,r}\) is established to characterize the signal learning of SGD.
## 4 Result for SAM
In this section, we present the positive results for SAM in the following theorem.
**Theorem 4.1**.: _For any \(\epsilon>0\), under Condition 3.1 with \(\sigma_{0}=\widetilde{\Theta}(P^{-1}\sigma_{p}^{-1}d^{-1/2})\), choose \(\tau=\Theta\Big{(}\frac{m\sqrt{B}}{P\sigma_{p}\sqrt{d}}\Big{)}\). With probability at least \(1-\delta\), neural networks first trained with SAM with \(O\Big{(}\eta^{-1}\epsilon^{-1}n^{-1}mB\|\mathbf{\mu}\|_{2}^{-2}\Big{)}\) iterations, then trained with SGD with \(\widetilde{O}\Big{(}\eta^{-1}\epsilon^{-1}mnd^{-1}P^{-2}\sigma_{p}^{-2}\Big{)}\) iterations can find \(\mathbf{W}^{(t)}\) such that,_
1. _The training loss satisfies_ \(L_{S}(\mathbf{W}^{(t)})\leq\epsilon\)_._
2. _The test error_ \(L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t)})\leq p+\epsilon\)_._
In contrast to Theorem 3.2, Theorem 4.1 demonstrates that CNNs trained by SAM exhibit benign overfitting under much milder conditions. This condition is almost dimension-free, as opposed to the threshold of \(\|\mathbf{\mu}\|_{2}^{4}\geq\widetilde{\Omega}((d/n)P^{4}\sigma_{p}^{4})\) for CNNs trained by SGD. The discrepancy in the thresholds can be observed in Figure 1. This difference is because SAM introduces a perturbation during the model parameter update process, which effectively prevents the early-stage memorization of noise by deactivating the corresponding neurons.
### Noise Memorization Prevention
In this subsection, we will show how SAM can prevent noise memorization by changing the activation pattern of the neurons. For SAM, we have the following update rule of decomposition coefficients \(\gamma_{j,r}^{(t,b)},\overline{\rho}_{j,r,i}^{(t,b)},\underline{\rho}_{j,r,i}^ {(t,b)}\).
**Lemma 4.2**.: _The coefficients \(\gamma_{j,r}^{(t,b)},\overline{\rho}_{j,r,i}^{(t,b)},\underline{\rho}_{j,r,i}^ {(t,b)}\) defined in Lemma 3.3 satisfy the following iterative equations for all \(r\in[m]\), \(j\in\{\pm 1\}\) and \(i\in[n]\):_
\[\gamma_{j,r}^{(0,0)},\overline{\rho}_{j,r,i}^{(0,0)},\underline{ \rho}_{j,r,i}^{(0,0)}=0,\] \[\gamma_{j,r}^{(t,b+1)}=\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm} \cdot\bigg{[}\sum_{i\in\mathcal{I}_{t,b}\cap S_{+}}\ell_{i}^{(t,b)} \sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^ {(t,b)},\widehat{y}_{i}\cdot\mathbf{\mu}\rangle)\] \[-\sum_{i\in\mathcal{I}_{t,b}\cap S_{-}}\ell_{i}^{(t,b)}\sigma^{ \prime}(\langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},\widehat{y}_{i}\cdot\mathbf{\mu}\rangle)\bigg{]}\cdot\|\mathbf{\mu}\|_{2}^{2},\] \[\overline{\rho}_{j,r,i}^{(t,b+1)}=\overline{\rho}_{j,r,i}^{(t,b)} -\frac{\eta(P-1)^{2}}{Bm}\cdot\ell_{i}^{\prime(t,b)}\cdot\sigma^{\prime}( \langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},\mathbf{ \xi}_{i}\rangle)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\cdot\mathds{1}(y_{i}=j)\, \mathds{1}(i\in\mathcal{I}_{t,b}),\] \[\underline{\rho}_{j,r,i}^{(t,b+1)}=\underline{\rho}_{j,r,i}^{(t,b )}+\frac{\eta(P-1)^{2}}{Bm}\cdot\ell_{i}^{\prime(t,b)}\cdot\sigma^{\prime}( \langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},\mathbf{\xi} _{i}\rangle)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\cdot\mathds{1}(y_{i}=-j)\,\mathds{1 }(i\in\mathcal{I}_{t,b}),\]
_where \(\mathcal{I}_{t,b}\) denotes the sample index set of the \(b\)-th batch in the \(t\)-th epoch._
The primary distinction between SGD and SAM is how neuron activation is determined. In SAM, the activation is based on the perturbed weight \(\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)}\), whereas in SGD, it is determined by the unperturbed weight \(\mathbf{w}_{j,r}^{(t,b)}\). This perturbation to the weight update process at each iteration gives SAM an intriguing denoising property. Specifically, if a neuron is activated by noise in the SGD update, it will subsequently become deactivated after the perturbation, as stated in the following lemma.
**Lemma 4.3** (Informal Statement of Lemma d.5).: _Suppose the Condition 3.1 holds with parameter choices in Theorem 4.1, if \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{k}\rangle\geq 0\), \(k\in\mathcal{I}_{t,b}\) and \(j=y_{k}\), then \(\langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},\mathbf{\xi}_ {k}\rangle<0\)._
By leveraging this intriguing property, we can derive a constant upper bound for the noise coefficients \(\overline{\rho}_{j,r,i}^{(t,b)}\) by considering the following cases:
1. If \(\mathbf{\xi}_{i}\) is not in the current batch, then \(\overline{\rho}_{j,r,i}^{(t,b)}\) will not be updated in the current iteration.
2. If \(\mathbf{\xi}_{i}\) is in the current batch, we discuss two cases: 1. If \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\geq 0\), then by Lemma 4.3, one can know that \(\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)}+\tilde{\mathbf{\varepsilon}}_{j,r}^ {(t,b)},\mathbf{\xi}_{i}\rangle)=0\) and thus \(\overline{\rho}_{j,r,i}^{(t,b)}\) will not be updated in the current iteration. 2. If \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\leq 0\), then given that \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\approx\overline{\rho}_{j, r,i}^{(t,b)}\) and \(\overline{\rho}_{j,r,i}^{(t,b+1)}\leq\overline{\rho}_{j,r,i}^{(t,b)}+\frac{\eta( P-1)^{2}\|\mathbf{\xi}_{i}\|_{2}^{2}}{Bm}\), we can assert that, provided \(\eta\) is sufficiently small, the term \(\overline{\rho}_{j,r,i}^{(t,b)}\) can be upper bounded by a small constant.
In contrast to the analysis of SGD, which provides an upper bound for \(\overline{\rho}_{j,r,i}^{(t,b)}\) of order \(O(\log d)\), the noise memorization prevention property described in Lemma 4.3 allows us to obtain an upper bound for \(\overline{\rho}_{j,r,i}^{(t,b)}\) of order \(O(1)\) throughout \([0,T_{1}]\). This indicates that SAM memorizes less noise compared to SGD. On the other hand, the signal coefficient \(\gamma_{j,r,i}^{(t)}\) also increases to \(\Omega(1)\) for SAM, following the same argument as in SGD. This property ensures that training with SAM does not exhibit harmful overfitting for the same signal-to-noise ratio at which training with SGD suffers from harmful overfitting.
## 5 Experiments
In this section, we conduct synthetic experiments to validate our theory. Additional experiments on real data sets can be found in Appendix A.
We set training data size \(n=20\) without label-flipping noise. Since the learning problem is rotation-invariant, without loss of generality, we set \(\mathbf{\mu}=\|\mathbf{\mu}\|_{2}\cdot[1,0,\ldots,0]^{\top}\). We then generate the noise vector \(\mathbf{\xi}\) from the Gaussian distribution \(\mathcal{N}(\mathbf{0},\sigma_{p}^{2}\mathbf{I})\) with fixed standard deviation \(\sigma_{p}=1\). We train a two-layer CNN model defined in Section 2 with the ReLU activation function. The number of filters is set as \(m=10\). We use the default initialization method in PyTorch to initialize the CNN parameters and train the CNN with full-batch gradient descent with a learning rate of \(0.01\) for \(100\) iterations. We consider different dimensions \(d\) ranging from \(1000\) to \(20000\), and different signal strengths \(\|\mathbf{\mu}\|_{2}\) ranging from \(0\) to \(10\). Based on our results, for any dimension \(d\) and signal strength \(\mu\) setting we consider, our training setup can guarantee a training loss smaller than \(0.05\). After training, we estimate the test error for each case using \(1000\) test data points. We report the test error heat map with average results over \(10\) runs in Figure 2.
## 6 Related Work
**Sharpness Aware Minimization.**Foret et al. (2020), and Zheng et al. (2021) concurrently introduced methods to enhance generalization by minimizing the loss in the worst direction, perturbed from the current parameter. Kwon et al. (2021) introduced ASAM, a variant of SAM, designed to address parameter re-scaling. Subsequently, Liu et al. (2022b) presented LookSAM, a more computationally
Figure 2: (a) is a heatmap illustrating test error on synthetic data for various dimensions \(d\) and signal strengths \(\mathbf{\mu}\) when trained using Vanilla Gradient Descent. High test errors are represented in blue, while low test errors are shown in yellow. (b) displays a heatmap of test errors on the synthetic data under the same conditions as in (a), but trained using SAM instead with \(\tau=0.03\). The y-axis represents a normal scale with a range of \(1000\sim 21000\).
efficient alternative. Zhuang et al. (2022) highlighted that SAM did not consistently favor the flat minima and proposed GSAM to improve generalization by minimizing the surrogate. Recently, Zhao et al. (2022) showed that the SAM algorithm is related to the gradient regularization (GR) method when the loss is smooth and proposed an algorithm that can be viewed as a generalization of the SAM algorithm. Meng et al. (2023) further studied the mechanism of Per-Example Gradient Regularization (PEGR) on the CNN training and revealed that PEGR penalizes the variance of pattern learning.
**Benign Overfitting in Neural Networks.** Since the pioneering work by Bartlett et al. (2020) on benign overfitting in linear regression, there has been a surge of research studying benign overfitting in linear models, kernel methods, and neural networks. Li et al. (2021); Montanari and Zhong (2022) examined benign overfitting in random feature or neural tangent kernel models defined in two-layer neural networks. Chatterji and Long (2022) studied the excess risk of interpolating deep linear networks trained by gradient flow. Understanding benign overfitting in neural networks beyond the linear/kernel regime is much more challenging because of the non-convexity of the problem. Recently, Frei et al. (2022) studied benign overfitting in fully-connected two-layer neural networks with smoothed leaky ReLU activation. Cao et al. (2022) provided an analysis for learning two-layer convolutional neural networks (CNNs) with polynomial ReLU activation function (ReLU\({}^{q}\), \(q>2\)). Kou et al. (2023) further investigates the phenomenon of benign overfitting in learning two-layer ReLU CNNs. Kou et al. (2023) is most related to our paper. However, our work studied SGD rather than GD, which requires advanced techniques to control the update of coefficients at both batch-level and epoch-level. We also provide a novel analysis for SAM, which differs from the analysis of GD/SGD.
## 7 Conclusion
In this work, we rigorously analyze the training behavior of two-layer convolutional ReLU networks for both SGD and SAM. In particular, we precisely outlined the conditions under which benign overfitting can occur during SGD training, marking the first such finding for neural networks trained with mini-batch SGD. We also proved that SAM can lead to benign overfitting under circumstances that prompt harmful overfitting via SGD, which demonstrates the clear theoretical superiority of SAM over SGD. Our results provide a deeper comprehension of SAM, particularly when it comes to its utilization with non-smooth neural networks. An interesting future work is to consider other modern deep learning techniques, such as weight normalization, momentum, and weight decay, in our analysis.
## Acknowledgements
We thank the anonymous reviewers for their helpful comments. ZC, JZ, YK, and QG are supported in part by the National Science Foundation CAREER Award 1906169 and IIS-2008981, and the Sloan Research Fellowship. XC and CJH are supported in part by NSF under IIS-2008173, IIS-2048280, CISCO and Sony. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
## References
* Ahn et al. (2022)Ahn, K., Bubeck, S., Chewi, S., Lee, Y. T., Suarez, F. and Zhang, Y. (2022). Learning threshold neurons via the" edge of stability". _arXiv preprint arXiv:2212.07469_.
* Allen-Zhu and Li (2020)Allen-Zhu, Z. and Li, Y. (2020). Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. _arXiv preprint arXiv:2012.09816_.
* Allen-Zhu and Li (2022)Allen-Zhu, Z. and Li, Y. (2022). Feature purification: How adversarial training performs robust deep learning. In _2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)_. IEEE.
* Andriushchenko and Flammarion (2022)Andriushchenko, M. and Flammarion, N. (2022). Towards understanding sharpness-aware minimization. In _International Conference on Machine Learning_. PMLR.
* Andriushchenko et al. (2023)Andriushchenko, M., Varre, A. V., Pillaud-Vivien, L. and Flammarion, N. (2023). Sgd with large step sizes learns sparse features. In _International Conference on Machine Learning_. PMLR.
* Andriushchenko et al. (2021)Bahri, D., Mobahi, H. and Tay, Y. (2021). Sharpness-aware minimization improves language model generalization. _arXiv preprint arXiv:2110.08529_.
* Bartlett et al. (2022)Bartlett, P. L., Long, P. M. and Bousquet, O. (2022). The dynamics of sharpness-aware minimization: Bouncing across ravines and drifting towards wide minima. _arXiv preprint arXiv:2210.01513_.
* Bartlett et al. (2020)Bartlett, P. L., Long, P. M., Lugosi, G. and Tsigler, A. (2020). Benign overfitting in linear regression. _Proceedings of the National Academy of Sciences_.
* Behdin and Mazumder (2023)Behdin, K. and Mazumder, R. (2023). Sharpness-aware minimization: An implicit regularization perspective. _arXiv preprint arXiv:2302.11836_.
* Behdin et al. (2022)Behdin, K., Song, Q., Gupta, A., Durfee, D., Acharya, A., Keerthi, S. and Mazumder, R. (2022). Improved deep neural network generalization using m-sharpness-aware minimization. _arXiv preprint arXiv:2212.04343_.
* Cao et al. (2022)Cao, Y., Chen, Z., Belkin, M. and Gu, Q. (2022). Benign overfitting in two-layer convolutional neural networks. _Advances in neural information processing systems_**35** 25237-25250.
* Chatterji and Long (2021)Chatterji, N. S. and Long, P. M. (2021). Finite-sample analysis of interpolating linear classifiers in the overparameterized regime. _Journal of Machine Learning Research_**22** 129-1.
* Chatterji and Long (2022)Chatterji, N. S. and Long, P. M. (2022). Deep linear networks can benignly overfit when shallow ones do. _arXiv preprint arXiv:2209.09315_.
* Chen et al. (2021)Chen, X., Hsieh, C.-J. and Gong, B. (2021). When vision transformers outperform resnets without pre-training or strong data augmentations. _arXiv preprint arXiv:2106.01548_.
* Devroye et al. (2018)Devroye, L., Mehrabian, A. and Reddad, T. (2018). The total variation distance between high-dimensional gaussians. _arXiv preprint arXiv:1810.08693_.
* Foret et al. (2020)Foret, P., Kleiner, A., Mobahi, H. and Neyshabur, B. (2020). Sharpness-aware minimization for efficiently improving generalization. _arXiv preprint arXiv:2010.01412_.
* Frei et al. (2022)Frei, S., Chatterji, N. S. and Bartlett, P. (2022). Benign overfitting without linearity: Neural network classifiers trained by gradient descent for noisy linear data. In _Conference on Learning Theory_. PMLR.
* Jelassi and Li (2022)Jelassi, S. and Li, Y. (2022). Towards understanding how momentum improves generalization in deep learning. In _International Conference on Machine Learning_. PMLR.
* Kou et al. (2023)Kou, Y., Chen, Z., Chen, Y. and Gu, Q. (2023). Benign overfitting for two-layer relu networks. _arXiv preprint arXiv:2303.04145_.
* Kwon et al. (2021)Kwon, J., Kim, J., Park, H. and Choi, I. K. (2021). Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks. In _International Conference on Machine Learning_. PMLR.
* Li et al. (2019)Li, Y., Wei, C. and Ma, T. (2019). Towards explaining the regularization effect of initial large learning rate in training neural networks. In _Advances in Neural Information Processing Systems_.
* Li et al. (2021a)Li, Z., Wang, T. and Arora, S. (2021a). What happens after sgd reaches zero loss?-a mathematical framework. _arXiv preprint arXiv:2110.06914_.
* Li et al. (2021b)Li, Z., Zhou, Z.-H. and Gretton, A. (2021b). Towards an understanding of benign overfitting in neural networks. _arXiv preprint arXiv:2106.03212_.
* Liu et al. (2022a)Liu, Y., Mai, S., Chen, X., Hsieh, C.-J. and You, Y. (2022a). Towards efficient and scalable sharpness-aware minimization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Liu et al. (2022b)Liu, Y., Mai, S., Chen, X., Hsieh, C.-J. and You, Y. (2022b). Towards efficient and scalable sharpness-aware minimization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_.
* Liu et al. (2022)Meng, X., Cao, Y. and Zou, D. (2023). Per-example gradient regularization improves learning signals from noisy data. _arXiv preprint arXiv:2303.17940_.
* Montanari and Zhong (2022)Montanari, A. and Zhong, Y. (2022). The interpolation phase transition in neural networks: Memorization and generalization under lazy training. _The Annals of Statistics_**50** 2816-2847.
* Olshausen and Field (1997)Olshausen, B. A. and Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategy employed by v1? _Vision research_**37** 3311-3325.
* Shen et al. (2022)Shen, R., Bubeck, S. and Gunasekar, S. (2022). Data augmentation as feature manipulation: a story of desert cows and grass cows. _arXiv preprint arXiv:2203.01572_.
* Vershynin (2018)Vershynin, R. (2018). _High-Dimensional Probability: An Introduction with Applications in Data Science_. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press.
* Wen et al. (2022)Wen, K., Ma, T. and Li, Z. (2022). How does sharpness-aware minimization minimize sharpness? _arXiv preprint arXiv:2211.05729_.
* Zhang et al. (2021)Zhang, C., Bengio, S., Hardt, M., Recht, B. and Vinyals, O. (2021). Understanding deep learning (still) requires rethinking generalization. _Communications of the ACM_**64** 107-115.
* Zhao et al. (2022)Zhao, Y., Zhang, H. and Hu, X. (2022). Penalizing gradient norm for efficiently improving generalization in deep learning. In _International Conference on Machine Learning_. PMLR.
* Zheng et al. (2021)Zheng, Y., Zhang, R. and Mao, Y. (2021). Regularizing neural networks via adversarial model perturbation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_.
* Zhuang et al. (2022)Zhuang, J., Gong, B., Yuan, L., Cui, Y., Adam, H., Dvornek, N., Tatikonda, S., Duncan, J. and Liu, T. (2022). Surrogate gap minimization improves sharpness-aware training. _arXiv preprint arXiv:2203.08065_.
## Appendix A Additional Experiments
### Real Experiments on CIFAR
In this section, we provide the experiments on real data sets.
**Varying different starting points for SAM.** In Section 4, we show that the SAM algorithm can effectively prevent noise memorization and thus improve feature learning in the early stage of training. Is SAM also effective if we add the algorithm in the middle of the training process? We conduct experiments on the ImageNet dataset with ResNet50. We choose the batch size as \(1024\), and the model is trained for \(90\) epochs with the best learning rate in grid search \(\{0.01,0.03,0.1,0.3\}\). The learning rate schedule is \(10\)k steps linear warmup, then cosine decay. As shown in Table 1, the earlier SAM is introduced, the more pronounced its effectiveness becomes.
**SAM with additive noises.** Here, we conduct experiments on the CIFAR dataset with WRN-16-8. We add Gaussian random noises to the image data with variance \(\{0.1,0.3,1\}\). We choose the batch size as \(128\) and train the model over \(200\) epochs using a learning rate of \(0.1\), a momentum of \(0.9\), and a weight decay of \(5e-4\). The SAM hyperparameter is chosen as \(\tau=2.0\). As we can see from Table 2, SAM can consistently prevent noise learning and get better performance, compared to the SGD, which varies from different additive noise levels.
### Discussion on the Stochastic Gradient Descent
Here, we include additional empirical results to show how the batch size and step size influence the transition phase of the benign/harmful regimes.
Our synthetic experiment is only performed with gradient descent instead of SGD in Figure 2. We add an experiment of SGD with a mini-batch size of 10 on the synthetic data. If we compare Figure 3 and Figure 2, the comparison results should remain the same, and both support our main Theorems 3.2 and 4.1.
In Figure 4, we also conducted an extended study on the learning rate to check whether tuning the learning rate can get better generalization performance and achieve SAM's performance. Specifically, we experimented with learning rates of \(0.001,0.01,0.1\), and \(1\) under the same conditions described
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \(\tau\) & **10\%** & **30\%** & **50\%** & **70\%** & **90\%** \\ \hline
0.01 & 76.9 & 76.9 & 76.9 & 76.7 & 76.7 \\
0.02 & 77.1 & 77.0 & 76.9 & 76.8 & 76.6 \\
0.05 & 76.2 & 76.4 & 76.3 & 76.2 & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Top-1 accuracy (%) of ResNet-50 on the ImageNet dataset when we vary the starting point of using the SAM update rule, baseline result is 76.4%.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Noise** & **Dataset** & **Optimizer** & **Accuracy** \\ \hline WRN-16-8 & - & CIFAR-10 & SGD & 96.69 \\ WRN-16-8 & - & CIFAR-10 & SAM & 97.19 \\ \hline WRN-16-8 & \(\mathcal{N}(0,0.1)\) & CIFAR-10 & SGD & 95.87 \\ WRN-16-8 & \(\mathcal{N}(0,0.1)\) & CIFAR-10 & SAM & 96.57 \\ \hline WRN-16-8 & \(\mathcal{N}(0,0.3)\) & CIFAR-10 & SGD & 92.40 \\ WRN-16-8 & \(\mathcal{N}(0,0.3)\) & CIFAR-10 & SAM & 93.37 \\ \hline WRN-16-8 & \(\mathcal{N}(0,1)\) & CIFAR-10 & SGD & 79.50 \\ WRN-16-8 & \(\mathcal{N}(0,1)\) & CIFAR-10 & SAM & 80.37 \\ \hline WRN-16-8 & - & CIFAR-100 & SGD & 81.93 \\ WRN-16-8 & - & CIFAR-100 & SAM & 83.68 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Top-1 accuracy (%) of wide ResNet on the CIFAR datasets when adding different levels of Gaussian noise.
in Section 5. The results show that for all learning rates, the harmful and benign overfitting patterns are quite similar and consistent with our Theorem 3.2.
We observed that larger learning rates, such as \(0.1\) and \(1\), can improve SGD's generalization performance. The benign overfitting region is enlarged for learning rates of \(0.1\) and \(1\) when contrasted with \(0.01\). This trend resonates with recent findings that large LR helps SGD find better minima (Li et al., 2019, 2021; Ahn et al., 2022; Andriushchenko et al., 2023). Importantly, even with this expansion, the benign overfitting region remains smaller than what is empirically observed with SAM. Our conclusion is that while SGD, with a more considerable learning rate, exhibits improved generalization, it still falls short of matching SAM's performance.
## Appendix B Preliminary Lemmas
**Lemma B.1** (Lemma B.4 in Kou et al. (2023)).: _Suppose that \(\delta>0\) and \(d=\Omega(\log(6n/\delta))\). Then with probability at least \(1-\delta\),_
\[\sigma_{p}^{2}d/2\leq\|\mathbf{\xi}_{i}\|_{2}^{2}\leq 3\sigma_{p}^{2}d /2,\] \[|\langle\mathbf{\xi}_{i},\mathbf{\xi}_{i^{\prime}}\rangle|\leq 2\sigma_{p} ^{2}\cdot\sqrt{d\log(6n^{2}/\delta)},\] \[|\langle\mathbf{\xi}_{i},\mathbf{\mu}\rangle|\leq\|\mathbf{\mu}\|_{2}\sigma_{ p}\cdot\sqrt{2\log(6n/\delta)}\]
_for all \(i,i^{\prime}\in[n]\)._
**Lemma B.2** (Lemma B.5 in Kou et al. (2023)).: _Suppose that \(d=\Omega(\log(mn/\delta))\), \(m=\Omega(\log(1/\delta))\). Then with probability at least \(1-\delta\),_
\[\sigma_{0}^{2}d/2\leq\|\mathbf{w}_{j,r}^{(0,0)}\|_{2}^{2}\leq 3 \sigma_{0}^{2}d/2,\] \[|\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\mu}\rangle|\leq\sqrt{2\log( 12m/\delta)}\cdot\sigma_{0}\|\mathbf{\mu}\|_{2},\] \[|\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle|\leq 2\sqrt{ \log(12mn/\delta)}\cdot\sigma_{0}\sigma_{p}\sqrt{d}\]
_for all \(r\in[m]\), \(j\in\{\pm 1\}\) and \(i\in[n]\). Moreover,_
\[\sigma_{0}\|\mathbf{\mu}\|_{2}/2\leq\max_{r\in[m]}j\cdot\langle\mathbf{ w}_{j,r}^{(0,0)},\mathbf{\mu}\rangle\leq\sqrt{2\log(12m/\delta)}\cdot\sigma_{0}\| \mathbf{\mu}\|_{2},\] \[\sigma_{0}\sigma_{p}\sqrt{d}/4\leq\max_{r\in[m]}j\cdot\langle \mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle\leq 2\sqrt{\log(12mn/\delta)}\cdot\sigma_{0} \sigma_{p}\sqrt{d}\]
_for all \(j\in\{\pm 1\}\) and \(i\in[n]\)._
**Lemma B.3**.: _Let \(S_{i}^{(t,b)}\) denote \(\{r:\langle\mathbf{w}_{y_{i},r}^{(t,b)},\mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{ p}\sqrt{d}/\sqrt{2}\rangle\}\). Suppose that \(\delta>0\) and \(m\geq 50\log(2n/\delta)\). Then with probability at least \(1-\delta\),_
\[|S_{i}^{(0,0)}|\geq 0.8\Phi(-1)m,\,\forall i\in[n].\]
Figure 3: (a) is a heatmap illustrating test error on synthetic data for various dimensions \(d\) and signal strengths \(\mathbf{\mu}\) when trained using stochastic gradient descent. High test errors are represented in blue, while low test errors are shown in yellow. (b) displays a heatmap of test errors on the synthetic data under the same conditions as in (a), but trained using SAM instead. The y-axis represents a normal scale with a range of 1000-21000.
Proof of Lemma b.3.: Since \(\langle\mathbf{w}_{y_{i},r}^{(0,0)},\mathbf{\xi}_{i}\rangle\sim\mathcal{N}(0,\sigma_{0 }^{2}\|\mathbf{\xi}_{i}\|_{2}^{2})\), we have
\[P(\langle\mathbf{w}_{y_{i},r}^{(0,0)},\mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{p} \sqrt{d}/\sqrt{2})\geq P(\langle\mathbf{w}_{y_{i},r}^{(0,0)},\mathbf{\xi}_{i} \rangle>\sigma_{0}\|\mathbf{\xi}_{i}\|_{2})=\Phi(-1),\]
where \(\Phi(\cdot)\) is CDF of the standard normal distribution. Note that \(|S_{i}^{(0,0)}|=\sum_{r=1}^{m}\mathds{1}[\langle\mathbf{w}_{y_{i},r}^{(0,0)}, \mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}]\) and \(P\big{(}\langle\mathbf{w}_{y_{i},r}^{(0,0)},\mathbf{\xi}_{i}\rangle>\sigma_{0} \sigma_{p}\sqrt{d}/\sqrt{2}\big{)}\geq\Phi(-1)\), then by Hoeffding's inequality, with probability at least \(1-\delta/n\), we have
\[\frac{|S_{i}^{(0,0)}|}{m}\geq\Phi(-1)-\sqrt{\frac{\log(2n/\delta)}{2m}}.\]
Therefore, as long as \(0.2\sqrt{m}\Phi(-1)\geq\sqrt{\frac{\log(2n/\delta)}{2}}\), by applying union bound, with probability at least \(1-\delta\), we have
\[|S_{i}^{(0)}|\geq 0.8\Phi(-1)m,\;\forall i\in[n].\]
**Lemma B.4**.: _Let \(S_{j,r}^{(t,b)}\) denote \(\{i\in[n]:y_{i}=j,\;\langle\mathbf{w}_{y_{i},r}^{(t,b)},\mathbf{\xi}_{i}\rangle> \sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}\}\). Suppose that \(\delta>0\) and \(n\geq 32\log(4m/\delta)\). Then with probability at least \(1-\delta\),_
\[|S_{j,r}^{(0)}|\geq n\Phi(-1)/4,\;\forall j\in\{\pm 1\},r\in[m].\]
Figure 4: (a) is a heatmap illustrating test error on synthetic data for various dimensions \(d\) and signal strengths \(\mathbf{\mu}\) when trained using learning rate \(0.001\). High test errors are represented in blue, while low test errors are shown in yellow. (b)(c)(d) displays a heatmap of test errors on the synthetic data under the same conditions as in (a), but trained using GD with different learning rates \(0.01,0.1,1\). The y-axis represents a normal scale with a range of 1000-21000.
Proof of Lemma b.4.: Since \(\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle\sim\mathcal{N}(0,\sigma_{0}^{2 }\|\mathbf{\xi}_{i}\|_{2}^{2})\), we have
\[P(\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{p} \sqrt{d}/\sqrt{2})\geq P(\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle> \sigma_{0}\|\mathbf{\xi}_{i}\|_{2})=\Phi(-1),\]
where \(\Phi(\cdot)\) is CDF of the standard normal distribution.
Note that \(|S_{j,r}^{(0,0)}|=\sum_{i=1}^{n}\mathds{1}[y_{i}=j]\mathds{1}[\langle\mathbf{w }_{j,r}^{(0)},\mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}]\) and \(\mathbb{P}(y_{i}=j,\langle\mathbf{w}_{j,r}^{(0)},\mathbf{\xi}_{i}\rangle>\sigma_{ 0}\sigma_{p}\sqrt{d}/\sqrt{2})\geq\Phi(-1)/2\), then by Hoeffding's inequality, with probability at least \(1-\delta/2m\), we have
\[\frac{|S_{j,r}^{(0)}|}{n}\geq\Phi(-1)/2+\sqrt{\frac{\log(4m/\delta)}{2n}}.\]
Therefore, as long as \(\Phi(-1)/4\geq\sqrt{\frac{\log(4m/\delta)}{2n}}\), by applying union bound, we have with probability at least \(1-\delta\),
\[|S_{j,r}^{(0)}|\geq n\Phi(-1)/4,\,\forall j\in\{\pm 1\},r\in[m].\]
**Lemma B.5** (Lemma B.3 in Kou et al. (2023)).: _For \(|S_{+}\cap S_{y}|\) and \(|S_{-}\cap S_{y}|\) where \(y\in\{\pm 1\}\), it holds with probability at least \(1-\delta(\delta>0)\) that_
\[\Big{|}|S_{+}\cap S_{y}|-\frac{(1-p)n}{2}\Big{|}\leq\sqrt{\frac{n}{2}\log\Big{(} \frac{8}{\delta}\Big{)}},\Big{|}|S_{-}\cap S_{y}|-\frac{pn}{2}\Big{|}\leq\sqrt {\frac{n}{2}\log\Big{(}\frac{8}{\delta}\Big{)}},\forall\,y\in\{\pm 1\}.\]
**Lemma B.6**.: _It holds with probability at least \(1-\delta\), for all \(T\in[\frac{\log(2T^{*}/\delta)}{c_{3}^{2}},T^{*}]\) and \(y\in\{\pm 1\}\), there exist at least \(c_{3}\cdot T\) epochs among \([0,T]\), such that at least \(c_{4}\cdot H\) batches in these epochs, satisfy_
\[|S_{+}\cap S_{y}\cap\mathcal{I}_{t,b}|\in\bigg{[}\frac{B}{4},\frac{3B}{4} \bigg{]}. \tag{16}\]
Proof.: Let
\[\mathcal{E}_{1,t} :=\{\text{In epoch $t$, there are at least $c_{2}\cdot\frac{n}{B}$ batches such that \ref{eq:t\[\mathbb{P}[\mathds{1}(\mathcal{E}_{1,t,c_{t}H-1})=l_{c_{1}H-1}|\, \mathds{1}(\mathcal{E}_{1,t,0})=l_{0},\cdots,\mathds{1}(\mathcal{E}_{1,t,c_{1}H-2 })=l_{c_{1}H-2}]\] \[\leq\sum_{i=0}^{c_{1}c_{2}H}C_{c_{1}H}^{i}(1-2c_{2})^{c_{1}H-i}\] \[\leq c_{1}c_{2}H\cdot(2c_{2})^{c_{1}c_{2}H}(1-2c_{2})^{c_{1}H-c_{1 }c_{2}H}.\]
Choose \(H_{0}\) such that \(c_{1}c_{2}H_{0}\cdot(2c_{2})^{c_{1}c_{2}H_{0}}(1-2c_{2})^{c_{1}H_{0}-c_{1}c_{2} H_{0}}=1-2c_{3}\), then as long as \(H\geq H_{0}\), with probability \(c_{3}\), there are at least \(c_{1}c_{2}H\) batches in first \(c_{1}H\) batches such that 16 holds. Then \(\mathbb{P}[\mathcal{E}_{1,t}]\geq 2c_{3}\). Therefore,
\[\mathbb{P}(\sum_{t}\mathds{1}(\mathcal{E}_{t})-2Tc_{3}\leq-t)\leq\exp(-\frac{2 t^{2}}{T}).\]
Let \(T\geq\frac{\log(2T^{*}/\delta)}{2c_{3}^{2}}\). Then, with probability at least \(1-\delta/(2T^{*})\),
\[\sum_{t}\mathds{1}(\mathcal{E}_{1,t})\geq c_{3}T.\]
Let \(c_{4}=c_{1}c_{2}\). Thus, there are at least \(c_{3}T^{*}\) epochs, such that they have at least \(c_{4}H\) batches satisfying EquationB.6. This also holds for \(y=-1\). Taking a union bound to get the result.
## Appendix C Proofs for SGD
In this section, we prove the results for SGD. We first define some notations. Define \(H=n/B\) as the number of batches within an epoch. For any \(t_{1},t_{2}\) and \(b_{1},b_{2}\in\overline{[H]}\), we write \((t_{1},b_{1})\leq(t,b)\leq(t_{2},b_{2})\) to denote all iterations from \(t_{1}\)-th epoch's \(b_{1}\)-th batch (included) to \(t_{2}\)-th epoch's \(b_{2}\)-th batch (included). And the meanings change accordingly if we replace \(\leq\) with \(<\).
### Signal-noise Decomposition Coefficient Analysis
This part is dedicated to analyzing the update rule of Signal-noise Decomposition Coefficients. It is worth noting that
\[F_{j}(\mathbf{W},\mathbf{X})=\frac{1}{m}\sum_{r=1}^{m}\sum_{p=1}^{P}\sigma( \langle\mathbf{w}_{j,r},\mathbf{x}_{p}\rangle)=\frac{1}{m}\sum_{r=1}^{m} \sigma(\langle\mathbf{w}_{j,r},\widehat{y}\mathbf{\mu}\rangle)+(P-1)\sigma( \langle\mathbf{w}_{j,r},\mathbf{\xi}\rangle).\]
Let \(\mathcal{I}_{t,b}\) denote the set of indices of randomly chosen samples at epoch \(t\) batch \(b\), and \(|\mathcal{I}_{t,b}|=B\), then the update rule is:
\[\text{for }b\in\overline{[H]}\qquad\mathbf{w}_{j,r}^{(t,b+1)} =\mathbf{w}_{j,r}^{(t,b)}-\eta\cdot\nabla_{\mathbf{w}_{j,r}}L_{ \mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\] \[=\mathbf{w}_{j,r}^{(t,b)}-\frac{\eta(P-1)}{Bm}\sum_{i\in \mathcal{I}_{t,b}}\ell_{i}^{\prime(t,b)}\cdot\sigma^{\prime}(\langle\mathbf{w }_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle)\cdot jy_{i}\mathbf{\xi}_{i}\] \[\qquad-\frac{\eta}{Bm}\sum_{i\in\mathcal{I}_{t,b}}\ell_{i}^{ \prime(t,b)}\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\widehat{y}_ {i}\mathbf{\mu}\rangle)\cdot jy_{i}\widehat{y}_{i}\mathbf{\mu},\] \[\text{and}\qquad\mathbf{w}_{j,r}^{(t+1,0)} =\mathbf{w}_{j,r}^{(t,H)}. \tag{17}\]
#### c.1.1 Iterative Expression for Decomposition Coefficient Analysis
**Lemma C.1**.: _The coefficients \(\gamma_{j,r}^{(t,b)},\overline{\rho}_{j,r,i}^{(t,b)},\rho_{j,r,i}^{(t,b)}\) defined in Lemma 3.3 satisfy the following iterative equations:_
\[\gamma_{j,r}^{(0,0)},\overline{\rho}_{j,r,i}^{(0,0)},\rho_{j,r,i} ^{(0,0)}=0, \tag{18}\] \[\gamma_{j,r}^{(t,b+1)}=\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot \left[\sum_{i\in\mathcal{I}_{t,b}\cap\mathcal{S}_{+}}\ell_{i}^{\prime(t,b)} \sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\widehat{y}_{i}\cdot\mathbf{\mu} \rangle)\right.\]\[\alpha :=4\log(T^{*}), \tag{22}\] \[\beta :=2\max_{i,j,r}\{|\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\mu}\rangle|,(P- 1)|\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle|\},\] (23) \[\mathrm{SNR} :=\frac{\|\mathbf{\mu}\|_{2}}{(P-1)\sigma_{p}\sqrt{d}}. \tag{24}\]
By Lemma B.2 and Condition 3.1, \(\beta\) can be bounded as
\[\beta=2\max_{i,j,r}\{|\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\mu}\rangle|,(P-1)| \langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle|\}\]\[\leq 2\max\{\sqrt{2\log(12m/\delta)}\cdot\sigma_{0}\|\mathbf{\mu}\|_{2},2 \sqrt{\log(12mn/\delta)}\cdot\sigma_{0}(P-1)\sigma_{p}\sqrt{d}\}\] \[=O\big{(}\sqrt{\log(mn/\delta)}\cdot\sigma_{0}(P-1)\sigma_{p}\sqrt {d}\big{)},\]
Then, by Condition 3.1, we have the following inequality:
\[\max\bigg{\{}\beta,\mathrm{SNR}\sqrt{\frac{32\log(6n/\delta)}{d}}n\alpha,5\sqrt {\frac{\log(6n^{2}/\delta)}{d}}n\alpha\bigg{\}}\leq\frac{1}{12}. \tag{25}\]
We first prove the following bounds for signal-noise decomposition coefficients.
**Proposition C.2**.: _Under Assumption 3.1, for \((0,0)\leq(t,b)\leq(T^{*},0)\), we have that_
\[\gamma_{j,r}^{(0,0)},\overline{\rho}_{j,r,i}^{(0,0)},\underline{ \rho}_{j,r,i}^{(0,0)}=0 \tag{26}\] \[0\leq\overline{\rho}_{j,r,i}^{(t,b)}\leq\alpha,\] (27) \[0\geq\underline{\rho}_{j,r,i}^{(t,b)}\geq-\beta-10\sqrt{\frac{ \log(6n^{2}/\delta)}{d}}n\alpha\geq-\alpha, \tag{28}\]
_and there exists a positive constant \(C^{\prime}\) such that_
\[-\frac{1}{12}\leq\gamma_{j,r}^{(t,b)}\leq C^{\prime}\widehat{\gamma}\alpha, \tag{29}\]
_for all \(r\in[m]\), \(j\in\{\pm 1\}\) and \(i\in[n]\), where \(\widehat{\gamma}:=n\cdot\mathrm{SNR}^{2}\)._
We will prove Proposition C.2 by induction. We first approximate the change of the inner product by corresponding decomposition coefficients when Proposition C.2 holds.
**Lemma C.3**.: _Under Assumption 3.1, suppose (27), (28) and (29) hold after \(b\)-th batch of \(t\)-th epoch. Then, for all \(r\in[m]\), \(j\in\{\pm 1\}\) and \(i\in[n]\),_
\[\big{|}\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)}, \mathbf{\mu}\rangle-j\cdot\gamma_{j,r}^{(t,b)}\big{|}\leq\mathrm{SNR}\sqrt{\frac{3 2\log(6n/\delta)}{d}}n\alpha, \tag{30}\] \[\big{|}\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)}, \mathbf{\xi}_{i}\rangle-\frac{1}{P-1}\underline{\rho}_{j,r,i}^{(t,b)}\big{|}\leq \frac{5}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\,j\neq y_{i},\] (31) \[\big{|}\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)}, \mathbf{\xi}_{i}\rangle-\frac{1}{P-1}\overline{\rho}_{j,r,i}^{(t,b)}\big{|}\leq \frac{5}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\,j=y_{i}. \tag{32}\]
Proof of Lemma c.3.: First, for any time \((t,b)\geq(0,0)\), we have from the following decomposition by definitions,
\[\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)},\mathbf{\mu }\rangle=j\cdot\gamma_{j,r}^{(t,b)}+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n} \overline{\rho}_{j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2} \cdot\langle\mathbf{\xi}_{i^{\prime}},\mathbf{\mu}\rangle\] \[\qquad\qquad+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}\underline{\rho }_{j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{ \xi}_{i^{\prime}},\mathbf{\mu}\rangle.\]
According to Lemma B.1, we have
\[\Bigg{|}\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}\overline{\rho}_{j, r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{\xi}_{i^{ \prime}},\mathbf{\mu}\rangle+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}\underline{\rho} _{j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{ \xi}_{i^{\prime}},\mathbf{\mu}\rangle\Bigg{|}\] \[\leq\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}|\overline{\rho}_{j,r,i^ {\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot|\langle\mathbf{\xi}_{i^{ \prime}},\mathbf{\mu}\rangle|+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}|\underline{\rho }_{j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot|\langle \mathbf{\xi}_{i^{\prime}},\mathbf{\mu}\rangle|\] \[\leq\frac{2\|\mathbf{\mu}\|_{2}\sqrt{2\log(6n/\delta)}}{(P-1)\sigma_{ p}d}\bigg{(}\sum_{i^{\prime}=1}^{n}|\overline{\rho}_{j,r,i^{\prime}}^{(t,b)}|+ \sum_{i^{\prime}=1}^{n}|\underline{\rho}_{j,r,i^{\prime}}^{(t,b)}|\bigg{)}\]\[=\mathrm{SNR}\sqrt{\frac{8\log(6n/\delta)}{d}}\bigg{(}\sum_{i^{\prime}= 1}^{n}|\overline{\rho}_{j,r,i^{\prime}}^{(t,b)}|+\sum_{i^{\prime}=1}^{n}|\underline {\rho}_{j,r,i^{\prime}}^{(t,b)}|\bigg{)}\] \[\leq\mathrm{SNR}\sqrt{\frac{32\log(6n/\delta)}{d}}n\alpha,\]
where the first inequality is by triangle inequality, the second inequality is by Lemma B.1, the equality is by \(\mathrm{SNR}=\|\mathbf{\mu}\|_{2}/((P-1)\sigma_{p}\sqrt{d})\), and the last inequality is by (27), (28). It follows that
\[\big{|}\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)},\mathbf{\mu} \rangle-j\cdot\gamma_{j,r}^{(t,b)}\big{|}\leq\mathrm{SNR}\sqrt{\frac{32\log(6n /\delta)}{d}}n\alpha.\]
Then, for \(j\neq y_{i}\) and any \(t\geq 0\), we have
\[\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi} _{i}\rangle\] \[=j\cdot\gamma_{j,r}^{(t,b)}\|\mathbf{\mu}\|_{2}^{-2}\cdot\langle\mathbf{ \mu},\mathbf{\xi}_{i}\rangle+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}\overline{\rho}_{ j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{\xi}_{i^{ \prime}},\mathbf{\xi}_{i}\rangle\] \[\qquad+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}\underline{\rho}_{j,r, i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{\xi}_{i^{ \prime}},\mathbf{\xi}_{i}\rangle\] \[=j\cdot\gamma_{j,r}^{(t,b)}\|\mathbf{\mu}\|_{2}^{-2}\cdot\langle\mathbf{ \mu},\mathbf{\xi}_{i}\rangle+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}\underline{\rho}_ {j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{\xi} _{i^{\prime}},\mathbf{\xi}_{i}\rangle\] \[=\frac{1}{P-1}\underline{\rho}_{j,r,i}^{(t,b)}+j\cdot\gamma_{j,r} ^{(t,b)}\|\mathbf{\mu}\|_{2}^{-2}\cdot\langle\mathbf{\mu},\mathbf{\xi}_{i}\rangle+\frac{1} {P-1}\sum_{i^{\prime}\neq i}\underline{\rho}_{j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi} _{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{\xi}_{i^{\prime}},\mathbf{\xi}_{i}\rangle,\]
where the second equality is due to \(\underline{\rho}_{j,r,i}^{(t,b)}=0\) for \(j\neq y_{i}\). Next, we have
\[\bigg{|}j\cdot\gamma_{j,r}^{(t,b)}\|\mathbf{\mu}\|_{2}^{-2}\cdot \langle\mathbf{\mu},\mathbf{\xi}_{i}\rangle+\frac{1}{P-1}\sum_{i^{\prime}\neq i}\underline {\rho}_{j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle \mathbf{\xi}_{i^{\prime}},\mathbf{\xi}_{i}\rangle\bigg{|}\] \[\leq|\gamma_{j,r}^{(t,b)}\|\mathbf{\mu}\|_{2}^{-2}\cdot|\langle\mathbf{ \mu},\mathbf{\xi}_{i}\rangle|+\frac{1}{P-1}\sum_{i^{\prime}\neq i}|\underline{ \rho}_{j,r,i^{\prime}}^{(t,b)}\|\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot|\langle \mathbf{\xi}_{i^{\prime}},\mathbf{\xi}_{i}\rangle|\] \[\leq|\gamma_{j,r}^{(t,b)}\|\|\mathbf{\mu}\|_{2}^{-1}\sigma_{p}\sqrt{2 \log(6n/\delta)}+\frac{4}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}\sum_{i^{ \prime}\neq i}|\underline{\rho}_{j,r,i^{\prime}}^{(t,b)}|\] \[=\frac{\mathrm{SNR}^{-1}}{P-1}\sqrt{\frac{2\log(6n/\delta)}{d}}| \gamma_{j,r}^{(t,b)}|+\frac{4}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}\sum_{i^ {\prime}\neq i}|\underline{\rho}_{j,r,i^{\prime}}^{(t,b)}|\] \[\leq\frac{\mathrm{SNR}}{P-1}\sqrt{\frac{8C^{2}\log(6n/\delta)}{d }}n\alpha+\frac{4}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha\] \[\leq\frac{5}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\]
where the first inequality is by triangle inequality; the second inequality is by Lemma B.1; the equality is by \(\mathrm{SNR}=\|\mathbf{\mu}\|_{2}/\sigma_{p}\sqrt{d}\); the third inequality is by (28) and (29); the forth inequality is by \(\mathrm{SNR}\leq 1/\sqrt{8C^{\prime 2}}\). Therefore, for \(j\neq y_{i}\), we have
\[\big{|}\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i} \rangle-\frac{1}{P-1}\underline{\rho}_{j,r,i}^{(t,b)}\big{|}\leq\frac{5}{P-1} \sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha.\]
Similarly, we have for \(y_{i}=j\) that
\[\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle\] \[=j\cdot\gamma_{j,r}^{(t,b)}\|\mathbf{\mu}\|_{2}^{-2}\cdot\langle\mathbf{ \mu},\mathbf{\xi}_{i}\rangle+\frac{1}{P-1}\sum_{i^{\prime}=1}^{n}\overline{\rho}_{ j,r,i^{\prime}}^{(t,b)}\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle\mathbf{\xi}_{i^{ \prime}},\mathbf{\xi}_{i}\rangle\]\[\leq 2\max\{\langle\mathbf{w}_{j,r}^{(t,b)},y_{i}\boldsymbol{\mu} \rangle,(P-1)\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle,0\}\] \[\leq 6\max\left\{\langle\mathbf{w}_{j,r}^{(0)},y_{i}\boldsymbol{ \mu}\rangle,(P-1)\langle\mathbf{w}_{j,r}^{(0)},\boldsymbol{\xi}_{i}\rangle, \mathrm{SNR}\sqrt{\frac{32\log(6n/\delta)}{d}}n\alpha,y_{i}j\gamma_{j,r}^{(t, b)},\right.\] \[\left.\hskip 56.905512pt5\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n \alpha+\underline{\rho}_{j,r,i}^{(t,b)}\right\}\] \[\leq 6\max\left\{\beta/2,\mathrm{SNR}\sqrt{\frac{32\log(6n/\delta) }{d}}n\alpha,-\gamma_{j,r}^{(t,b)},5\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n \alpha\right\}\] \[\leq 0.5,\]
where the second inequality is by (30), (31) and (32); the third inequality is due to the definition of \(\beta\) and \(\underline{\rho}_{j,r,i}^{(t,b)}<0\); the third inequality is by (25) and \(-\gamma_{j,r}^{(t,b)}\leq\frac{1}{12}\).
**Lemma C.5**.: _Under Condition 3.1, suppose (27), (28) and (29) hold at \(b\)-th batch of \(t\)-th epoch. Then, it holds that_
\[(P-1)\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle \geq-0.25,\] \[(P-1)\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle \leq(P-1)\sigma(\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i} \rangle)\leq(P-1)\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle+0.25,\]
_for any \(i\in[n]\)._Proof of Lemma c.5.: According to (32) in Lemma C.3, we have
\[(P-1)\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle \geq(P-1)\langle\mathbf{w}^{(0,0)}_{y_{i},r},\mathbf{\xi}_{i}\rangle+ \overline{\rho}^{(t,b)}_{y_{i},r,i}-5n\sqrt{\frac{\log(6n^{2}/\delta)}{d}}\alpha\] \[\geq-\beta-5n\sqrt{\frac{\log(6n^{2}/\delta)}{d}}\alpha\] \[\geq-0.25,\]
where the second inequality is due to \(\overline{\rho}^{(t,b)}_{y_{i},r,i}\geq 0\), the third inequality is due to \(\beta<1/8\) and \(5n\sqrt{\log(6n^{2}/\delta)/d}\cdot\alpha<1/8\) by Condition 3.1.
For the second equation, the first inequality holds naturally since \(z\leq\sigma(z)\). For the inequality, if \(\langle\mathbf{w}^{(t)}_{y_{i},r},\mathbf{\xi}_{i}\rangle\leq 0\), we have
\[(P-1)\sigma(\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle)=0\leq(P-1 )\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle+0.25.\]
And if \(\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle>0\), we have
\[(P-1)\sigma(\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle)=(P-1) \langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle<(P-1)\langle\mathbf{w}^ {(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle+0.25.\]
**Lemma C.6** (Lemma C.6 in Kou et al. (2023)).: _Let \(g(z)=\ell^{\prime}(z)=-1/(1+\exp(z))\), then for all \(z_{2}-c\geq z_{1}\geq-1\) where \(c\geq 0\) we have that_
\[\frac{\exp(c)}{4}\leq\frac{g(z_{1})}{g(z_{2})}\leq\exp(c).\]
**Lemma C.7**.: _For any iteration \(t\in[0,T^{*})\) and \(b,b_{1},b_{2}\in\overline{[H]}\), we have the following statements hold:_
1. \(\Big{|}\sum_{r=1}^{m}\big{[}\overline{\rho}^{(t,0)}_{y_{i},r,i}-\overline{\rho }^{(t,0)}_{y_{k},r,k}\big{]}-\sum_{r=1}^{m}\big{[}\overline{\rho}^{(t,b_{1})} _{y_{i},r,i}-\overline{\rho}^{(t,b_{2})}_{y_{k},r,k}\big{]}\Big{|}\leq 0.1\kappa.\)__
2. \(\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle\geq\langle\mathbf{w}^ {(t,0)}_{y_{i},r},\mathbf{\xi}_{i}\rangle-\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}\,.\)__
3. _Let_ \(\widetilde{S}^{(t,b)}_{i}=\{r\in[m]:\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{ \xi}_{i}\rangle>0\}\)_, then we have_ \[S^{(t,0)}_{i}\subseteq\widetilde{S}^{(t,b)}_{i}.\]
4. _Let_ \(\widetilde{S}^{(t,b)}_{j,r}=\{i\in[n]:y_{i}=j,\langle\mathbf{w}^{(t,b)}_{j,r}, \mathbf{\xi}_{i}\rangle>0\}\)_, then we have_ \[S^{(t,0)}_{j,r}\subseteq\widetilde{S}^{(t,b)}_{j,r}.\]
Proof.: For the first statement,
\[\Big{|}\sum_{r=1}^{m}\big{[}\overline{\rho}^{(t,0)}_{y_{i},r,i}- \overline{\rho}^{(t,0)}_{y_{k},r,k}\big{]}-\sum_{r=1}^{m}\big{[}\overline{\rho }^{(t,b_{1})}_{y_{i},r,i}-\overline{\rho}^{(t,b_{2})}_{y_{k},r,k}\big{]}\Big{|}\] \[\qquad\leq\frac{\eta(P-1)^{2}}{Bm}\max\Big{\{}|S^{(\widetilde{t} -1,b_{1})}_{i}||\ell^{(\widetilde{t}-1,b_{1})}_{i}|\cdot\|\mathbf{\xi}_{i}\|_{2}^{ 2},|S^{(\widetilde{t}-1,b_{2})}_{k}||\ell^{(\widetilde{t}-1,b_{2})}_{k}| \cdot\|\mathbf{\xi}_{k}\|_{2}^{2}\Big{\}}\] \[\qquad\leq\frac{\eta(P-1)^{2}}{B}\frac{3\sigma_{p}^{2}d}{2}\] \[\qquad\leq 0.1\kappa,\]
where the first inequality follows from the iterative update rule of \(\overline{\rho}^{(t,b)}_{j,r,i}\), the second inequality is due to Lemma B.2, and the last inequality is due to Condition 3.1.
For the second statement, recall that the stochastic gradient update rule is
\[\langle\mathbf{w}^{(t,b)}_{y_{i},r},\mathbf{\xi}_{i}\rangle=\langle\mathbf{w}^{(t, b-1)}_{y_{i},r},\mathbf{\xi}_{i}\rangle-\frac{\eta}{Bm}\cdot\sum_{i^{\prime}\in \mathcal{I}_{t,b-1}}\ell^{(t,b-1)}_{i^{\prime}}\cdot\sigma^{\prime}(\langle \mathbf{w}^{(t,b-1)}_{y_{i},r},\mathbf{\mu}\rangle)\cdot\langle y_{i^{\prime}}\mathbf{ \mu},\mathbf{\xi}_{i}\rangle y_{i^{\prime}}\]\[-\frac{\eta(P-1)}{Bm}\cdot\sum_{i^{\prime}\in\mathcal{I}_{i,k-1}/i}\ell_{i^{\prime}} ^{(t,b-1)}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i},r}^{(t,b-1)},\mathbf{\xi}_ {i^{\prime}}\rangle)\cdot\langle\mathbf{\xi}_{i^{\prime}},\mathbf{\xi}_{i}\rangle.\]
Therefore,
\[\langle\mathbf{w}_{y_{i},r}^{(t,b)},\mathbf{\xi}_{i}\rangle \geq\langle\mathbf{w}_{y_{i},r}^{(t,0)},\mathbf{\xi}_{i}\rangle-\frac {\eta}{Bm}\cdot n\cdot\|\mu\|_{2}\sigma_{p}\sqrt{2\log(6n/\delta)}-\frac{\eta( P-1)}{Bm}\cdot n\cdot 2\sigma_{p}^{2}\sqrt{d\log(6n^{2}/\delta)}\] \[\geq\langle\mathbf{w}_{y_{i},r}^{(t,0)},\mathbf{\xi}_{i}\rangle-\sigma _{0}\sigma_{p}\sqrt{d}/\sqrt{2},\]
where the first inequality is due to Lemma B.1, and the second inequality is due to Condition 3.1.
For the third statement. Let \(r^{*}\in S_{i}^{(t,0)}\), then
\[\langle\mathbf{w}_{y_{i},r^{*}}^{(t,b)},\mathbf{\xi}_{i}\rangle\geq\langle\mathbf{ w}_{y_{i},r^{*}}^{(t,0)},\mathbf{\xi}_{i}\rangle-\sigma_{0}\sigma_{p}\sqrt{d}/ \sqrt{2}>0,\]
where the first inequality is due to the second statement, and the second inequality is due to the definition of \(\widetilde{S}_{i}^{t,0}\). Therefore, \(r^{*}\in\widetilde{S}_{i}^{(t,b)}\) and \(S_{i}^{(t,b)}\subseteq\widetilde{S}_{i}^{(t,b)}\). The fourth statement can be obtained similarly.
**Lemma C.8**.: _Under Assumption 3.1, suppose (27), (28) and (29) hold for any iteration \((t^{\prime},b^{\prime})\leq(t,0)\). Then, the following conditions also hold for \(\forall t^{\prime}\leq t\) and \(\forall b^{\prime},b^{\prime}_{1},b^{\prime}_{2}\in[H]\):_
1. \(\sum_{r=1}^{m}\left[\overline{\rho}_{y_{i},r_{i}}^{(t^{\prime},0)}-\overline{ \rho}_{y_{k},r_{k}}^{(t^{\prime},0)}\right]\leq\kappa\) _for all_ \(i,k\in[n]\)_._
2. \(y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b^{\prime}_{1})},\mathbf{x}_{i})-y_{k} \cdot f(\mathbf{W}^{(t^{\prime},b^{\prime}_{2})},\mathbf{x}_{k})\leq C_{1}\) _for all_ \(i,k\in[n]\)_._
3. \(\ell_{i}^{(t^{\prime},b^{\prime}_{1})}/\ell_{k}^{(t^{\prime},b^{\prime}_{2})} \leq C_{2}=\exp(C_{1})\) _for all_ \(i,k\in[n]\)_._
4. \(S_{i}^{(0,0)}\subseteq S_{i}^{(t^{\prime},0)}\)_, where_ \(S_{i}^{(t^{\prime},0)}:=\{r\in[m]:\langle\mathbf{w}_{y_{i},r}^{(t^{\prime},0)},\mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}\}\)_, and hence_ \(|S_{i}^{(t^{\prime},0)}|\geq 0.8m\Phi(-1)\) _for all_ \(i\in[n]\)_._
5. \(S_{j,r}^{(0,0)}\subseteq S_{j,r}^{(t^{\prime},0)}\) _, where_ \(S_{j,r}^{(t^{\prime},0)}:=\{i\in[n]:y_{i}=j,\langle\mathbf{w}_{j,r}^{(t^{ \prime},0)},\mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}\}\)_, and hence_ \(|S_{j,r}^{(t^{\prime},0)}|\geq\Phi(-1)n/4\) _for all_ \(j\in\{\pm 1\},r\in[m]\)_._
_Here we take \(\kappa\) and \(C_{1}\) as \(10\) and \(5\) respectively._
Proof of Lemma c.8.: We prove Lemma C.8 by induction. When \(t^{\prime}=0\), the fourth and fifth conditions hold naturally by Lemma B.3 and B.4.
For the first condition, since we have \(\overline{\rho}_{j,r,i}^{(0,0)}=0\) for any \(j,r,i\) according to (26), it is straightforward that \(\sum_{r=1}^{m}\left[\overline{\rho}_{y_{i},r,i}^{(0,0)}-\overline{\rho}_{y_{k },r,k}^{(0,0)}\right]=0\) for all \(i,k\in[n]\). So the first condition holds for \(t^{\prime}=0\).
For the second condition, we have
\[y_{i}\cdot f(\mathbf{W}^{(0,0)},\mathbf{x}_{i})-y_{k}\cdot f( \mathbf{W}^{(0,0)},\mathbf{x}_{k})\] \[=F_{y_{i}}(\mathbf{W}_{y_{i}}^{(0,0)},\mathbf{x}_{i})-F_{-y_{i}} (\mathbf{W}_{-y_{i}}^{(0,0)},\mathbf{x}_{i})+F_{-y_{k}}(\mathbf{W}_{-y_{k}}^{(0,0)},\mathbf{x}_{i})-F_{y_{k}}(\mathbf{W}_{y_{k}}^{(0,0)},\mathbf{x}_{i})\] \[\leq F_{y_{i}}(\mathbf{W}_{y_{i}}^{(0,0)},\mathbf{x}_{i})+F_{-y_{k }}(\mathbf{W}_{-y_{k}}^{(0,0)},\mathbf{x}_{i})\] \[=\frac{1}{m}\sum_{r=1}^{m}[\sigma(\langle\mathbf{w}_{y_{i},r}^{(0,0)},y_{i}\mathbf{\mu}\rangle)+(P-1)\sigma(\langle\mathbf{w}_{y_{i},r}^{(0,0)}, \mathbf{\xi}_{i}\rangle)]\] \[\qquad+\frac{1}{m}\sum_{r=1}^{m}[\sigma(\langle\mathbf{w}_{-y_{k },r}^{(0,0)},y_{k}\mathbf{\mu}\rangle)+(P-1)\sigma(\langle\mathbf{w}_{-y_{k},r}^{(0,0)},\mathbf{\xi}_{i}\rangle)]\] \[\leq 4\beta\leq 1/3\leq C_{1},\]
where the first inequality is by \(F_{j}(\mathbf{W}_{j}^{(0,0)},\mathbf{x}_{i})>0\), the second inequality is due to (23), and the third inequality is due to (25).
By Lemma C.6 and the second condition, the third condition can be obtained directly as
\[\frac{\ell_{i}^{\prime(0,0)}}{\ell_{k}^{\prime(0,0)}}\leq\exp\big{(}y_{k}\cdot f( \mathbf{W}^{(0,0)},\mathbf{x}_{k})-y_{i}\cdot f(\mathbf{W}^{(0,0)},\mathbf{x}_{ i})\big{)}\leq\exp(C_{1}).\]
Now suppose there exists \((\widetilde{t},\widetilde{b})\leq(t,b)\) such that these five conditions hold for any \((0,0)\leq(t^{\prime},b^{\prime})<(\widetilde{t},\widetilde{b})\). We aim to prove that these conditions also hold for \((t^{\prime},b^{\prime})=(\widetilde{t},\widetilde{b})\).
We first show that, for any \(0\leq t^{\prime}\leq t\) and \(0\leq b^{\prime}_{1},b^{\prime}_{2}\leq b\), \(y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}},\mathbf{x}_{i})-y_{k} \cdot f(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}}),\mathbf{x}_{k})\) can be approximated by \(\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r_{i}}^{(t^{\prime},b^ {\prime}_{1})}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b^{\prime}_{2})}\big{]}\) with a small constant approximation error. We begin by writing out
\[y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}},\mathbf{x}_ {i})-y_{k}\cdot f(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}},\mathbf{x}_{k})\] \[=y_{i}\sum_{j\in\{\pm 1\}}j\cdot F_{j}(\mathbf{W}^{(t^{\prime},b^{ \prime}_{1}},\mathbf{x}_{i})-y_{k}\sum_{j\in\{\pm 1\}}j\cdot F_{j}(\mathbf{W}^{(t^{ \prime},b^{\prime}_{2}}_{j},\mathbf{x}_{k})\] \[=F_{-y_{k}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}}_{-y_{k}}, \mathbf{x}_{k})-F_{-y_{i}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}}_{-y_{i}}, \mathbf{x}_{i})+F_{y_{i}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}}_{y_{i}}, \mathbf{x}_{i})-F_{y_{k}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}}_{y_{k}}, \mathbf{x}_{k})\] \[=F_{-y_{k}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}}_{-y_{k}}, \mathbf{x}_{k})-F_{-y_{i}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}}_{-y_{i}}, \mathbf{x}_{i})\] \[\qquad+\frac{1}{m}\sum_{r=1}^{m}[\sigma(\langle\mathbf{w}^{(t^{ \prime},b^{\prime}_{1}}_{y_{i},r},y_{i}\cdot\boldsymbol{\mu}\rangle)+(P-1) \sigma(\langle\mathbf{w}^{(t^{\prime},b^{\prime}_{1}}_{y_{i},r},\boldsymbol{ \xi}_{i}\rangle)]\] \[\qquad-\frac{1}{m}\sum_{r=1}^{m}[\sigma(\langle\mathbf{w}^{(t^{ \prime},b^{\prime}_{2}}_{y_{k},r},y_{k}\cdot\boldsymbol{\mu}\rangle)+(P-1) \sigma(\langle\mathbf{w}^{(t^{\prime},b^{\prime}_{2}}_{y_{k},r},\boldsymbol{ \xi}_{k}\rangle)]\] \[=\underbrace{F_{-y_{k}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}}_{ -y_{k}},\mathbf{x}_{k})-F_{-y_{i}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}}_{-y _{i}},\mathbf{x}_{i})}_{\text{I}_{1}}\] \[\qquad+\underbrace{\frac{1}{m}\sum_{r=1}^{m}[\sigma(\langle \mathbf{w}^{(t^{\prime},b^{\prime}_{1}}_{y_{i},r},y_{i}\cdot\boldsymbol{\mu} \rangle)-\sigma(\langle\mathbf{w}^{(t^{\prime},b^{\prime}_{2}}_{y_{k},r},y_{k }\cdot\boldsymbol{\mu}\rangle)]}_{\text{I}_{2}}\] \[\qquad+\underbrace{\frac{1}{m}\sum_{r=1}^{m}[(P-1)\sigma( \langle\mathbf{w}^{(t^{\prime},b^{\prime}_{1}}_{y_{i},r},\boldsymbol{\xi}_{i} \rangle)-(P-1)\sigma(\langle\mathbf{w}^{(t^{\prime},b^{\prime}_{2}}_{y_{k},r}, \boldsymbol{\xi}_{k}\rangle)]}_{\text{I}_{3}}, \tag{33}\]
where all the equalities are due to the network definition. Then we bound \(\text{I}_{1}\), \(\text{I}_{2}\) and \(\text{I}_{3}\).
For \(|\text{I}_{1}|\), we have the following upper bound by Lemma C.4:
\[|\text{I}_{1}| \leq|F_{-y_{k}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}}_{-y_{k}},\mathbf{x}_{k})|+|F_{-y_{i}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}}_{-y_{i}}, \mathbf{x}_{i})|\] \[=F_{-y_{k}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{2}}_{-y_{k}}, \mathbf{x}_{k})+F_{-y_{i}}(\mathbf{W}^{(t^{\prime},b^{\prime}_{1}}_{-y_{i}}, \mathbf{x}_{i})\] \[\leq 1. \tag{34}\]
For \(|\text{I}_{2}|\), we have the following upper bound:
\[|\text{I}_{2}| \leq\max\left\{\frac{1}{m}\sum_{r=1}^{m}\sigma(\langle\mathbf{w }^{(t^{\prime},b^{\prime}_{1}}_{y_{i},r},y_{i}\cdot\boldsymbol{\mu}\rangle), \frac{1}{m}\sum_{r=1}^{m}\sigma(\langle\mathbf{w}^{(t^{\prime},b^{\prime}_{2}}_{ y_{k},r},y_{k}\cdot\boldsymbol{\mu}\rangle)\right\}\] \[\leq 3\max\left\{|\langle\mathbf{w}^{(0,0)}_{y_{i},r},y_{i}\cdot \boldsymbol{\mu}\rangle|,|\langle\mathbf{w}^{(0,0)}_{y_{k},r},y_{k}\cdot \boldsymbol{\mu}\rangle|,\gamma^{(t^{\prime},b^{\prime}_{1}}_{j,r},\gamma^{(t^{ \prime},b^{\prime}_{2}}_{j,r},\text{SNR}\sqrt{\frac{32\log(6n/\delta)}{d}}n \alpha\right\}\] \[\leq 3\max\left\{\beta,C^{\prime}\widehat{\gamma}\alpha,\text{SNR} \sqrt{\frac{32\log(6n/\delta)}{d}}n\alpha\right\}\] \[\leq 0.25, \tag{35}\]where the second inequality is due to (30), the second inequality is due to the definition of \(\beta\) and (29), the third inequality is due to Condition 3.1 and (25).
For \(\mathrm{I}_{3}\), we have the following upper bound
\[\mathrm{I}_{3} =\frac{1}{m}\sum_{r=1}^{m}\big{[}(P-1)\sigma(\langle\mathbf{w}_{y_ {i},r}^{(t^{\prime},b_{1}^{\prime})},\boldsymbol{\xi}_{i}\rangle)-(P-1)\sigma( \langle\mathbf{w}_{y_{k},r}^{(t^{\prime},b_{2}^{\prime})},\boldsymbol{\xi}_{k }\rangle)\big{]}\] \[\leq\frac{1}{m}\sum_{r=1}^{m}\big{[}(P-1)\langle\mathbf{w}_{y_{i},r}^{(t^{\prime},b_{1}^{\prime})},\boldsymbol{\xi}_{i}\rangle-(P-1)\langle \mathbf{w}_{y_{k},r}^{(t^{\prime},b_{2}^{\prime})},\boldsymbol{\xi}_{k}\rangle \big{]}+0.25\] \[\leq\frac{1}{m}\sum_{r=1}^{m}\Big{[}\overline{\rho}_{y_{i},r,i}^ {(t^{\prime},b_{1}^{\prime})}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b_{2}^ {\prime})}+10\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha\Big{]}+0.25\] \[\leq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^ {(t^{\prime},b_{1}^{\prime})}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b_{2}^ {\prime})}\big{]}+0.5, \tag{36}\]
where the first inequality is due to Lemma C.5, the second inequality is due to Lemma C.3, the third inequality is due to \(5\sqrt{\log(6n^{2}/\delta)/d}n\alpha\leq 1/8\) according to Condition 3.1.
Similarly, we have the following lower bound
\[\mathrm{I}_{3} =\frac{1}{m}\sum_{r=1}^{m}\big{[}(P-1)\sigma(\langle\mathbf{w}_{y _{i},r}^{(t^{\prime},b_{1}^{\prime})},\boldsymbol{\xi}_{i}\rangle)-(P-1)\sigma (\langle\mathbf{w}_{y_{k},r}^{(t^{\prime},b_{2}^{\prime})},\boldsymbol{\xi}_{k }\rangle)\big{]}\] \[\geq\frac{1}{m}\sum_{r=1}^{m}\big{[}(P-1)\langle\mathbf{w}_{y_{i},r}^{(t^{\prime},b_{1}^{\prime})},\boldsymbol{\xi}_{i}\rangle-(P-1)\langle \mathbf{w}_{y_{k},r}^{(t^{\prime},b_{2}^{\prime})},\boldsymbol{\xi}_{k}\rangle \big{]}-0.25\] \[\geq\frac{1}{m}\sum_{r=1}^{m}\Big{[}\overline{\rho}_{y_{i},r,i}^ {(t^{\prime},b_{1}^{\prime})}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b_{2}^ {\prime})}-10\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha\Big{]}-0.25\] \[\geq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^ {(t^{\prime},b_{1}^{\prime})}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b_{2}^ {\prime})}\big{]}-0.5, \tag{37}\]
where the first inequality is due to Lemma C.5, the second inequality is due to Lemma C.3, the third inequality is due to \(5\sqrt{\log(6n^{2}/\delta)/d}n\alpha\leq 1/8\) according to Condition 3.1.
By plugging (34)-(36) into (33), we have
\[y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b_{1}^{\prime})},\mathbf{x }_{i})-y_{k}\cdot f(\mathbf{W}^{(t^{\prime},b_{2}^{\prime})},\mathbf{x}_{k}) \leq|\mathrm{I}_{1}|+|\mathrm{I}_{2}|+\mathrm{I}_{3}\] \[\leq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^ {(t^{\prime},b_{1}^{\prime})}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b_{2}^ {\prime})}\big{]}+1.75,\] \[y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b_{1}^{\prime})},\mathbf{x }_{i})-y_{k}\cdot f(\mathbf{W}^{(t^{\prime},b_{2}^{\prime})},\mathbf{x}_{k}) \geq-|\mathrm{I}_{1}|-|\mathrm{I}_{2}|+\mathrm{I}_{3}\] \[\geq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^ {(t^{\prime},b_{1}^{\prime})}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b_{2}^ {\prime})}\big{]}-1.75,\]
which is equivalent to
\[\left|y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b_{1}^{\prime})},\mathbf{x}_{i})-y_ {k}\cdot f(\mathbf{W}^{(t^{\prime},b_{2}^{\prime})},\mathbf{x}_{k})-\frac{1}{m }\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(t^{\prime},b_{1}^{\prime})} -\overline{\rho}_{y_{k},r,k}^{(t^{\prime},b_{2}^{\prime})}\big{]}\right|\leq 1.75. \tag{38}\]
Therefore, the second condition immediately follows from the first condition.
Then, we prove the first condition holds for \((\widetilde{t},\widetilde{b})\). Recall from Lemma C.1 that
\[\overline{\rho}_{j,r,i}^{(t,b+1)}=\overline{\rho}_{j,r,i}^{(t,b)}-\frac{\eta(P- 1)^{2}}{Bm}\cdot\ell_{i}^{(t,b)}\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{ (t,b)},\boldsymbol{\xi}_{i}\rangle)\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{2}\cdot \mathds{1}(y_{i}=j)\,\mathds{1}(i\in\mathcal{I}_{t,b})\]
for all \(j\in\{\pm 1\},r\in[m],i\in[n],(0,0)\leq(t,b)<[T^{*},0]\). It follows that
\[\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(t,b+1)}-\overline{\rho}_{y_{k },r,k}^{(t,b+1)}\big{]}\]\[=\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(t,b)}-\overline{\rho}_{y_{k},r,k }^{(t,b)}\big{]}-\frac{\eta(P-1)^{2}}{Bm}\cdot\big{(}|\widetilde{S}_{i}^{(t,b)} |\ell_{i}^{\prime(t,b)}\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\,\mathds{1}(i\in\mathcal{I }_{t,b})\]
for all \(i,k\in[n]\) and \(0\leq t\leq T^{*}\), \(b<H\).
If \(\widetilde{b}\in\{1,2,\cdots,H-1\}\), then the first statement for \((t^{\prime},b^{\prime})=(\widetilde{t},\widetilde{b})\) and for the last \((t^{\prime},b^{\prime})<(\widetilde{t},\widetilde{b})\) are the same. Otherwise, if \(\widetilde{b}=0\), we consider two separate cases: \(\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde{t}-1,0)}- \overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}\leq 0.9\kappa\) and \(\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde{t}-1,0)}- \overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}>0.9\kappa\).
When \(\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde{t}-1,0)}- \overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}\leq 0.9\kappa\), we have
\[\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde{t} -1,0)}-\overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}\] \[=\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde{t} -1,0)}-\overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}-\frac{\eta(P-1) ^{2}}{Bm}\cdot\Big{(}|\widetilde{S}_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t }-1)})}|\ell_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}\cdot\|\mathbf{\xi}_ {i}\|_{2}^{2}\] \[-\big{|}\widetilde{S}_{k}^{(\widetilde{t}-1,b_{k}^{(\widetilde{t }-1)})}|\ell_{k}^{(\widetilde{t}-1,b_{k}^{(\widetilde{t}-1)})}\cdot\|\mathbf{\xi}_ {k}\|_{2}^{2}\Big{)}\] \[\leq\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde {t}-1,0)}-\overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}-\frac{\eta(P -1)^{2}}{Bm}\cdot|\mathbf{\tilde{S}}_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1 )})}|\ell_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}\cdot\|\mathbf{\xi}_{i} \|_{2}^{2}\] \[\leq\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde {t}-1,0)}-\overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}+\frac{\eta(P -1)^{2}}{B}\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\] \[\leq 0.9\kappa+0.1\kappa\] \[=\kappa,\]
where the first inequality is due to \(\ell_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}<0\); the second inequality is due to \(\big{|}S_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}\big{|}\leq m\) and \(-\ell_{i}^{\prime(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}<1\); the third inequality is due to Condition 3.1.
On the other hand, for when \(\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{(\widetilde{t}-1,0)}- \overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}>0.9\kappa\), we have from the (38) that
\[y_{i}\cdot f(\mathbf{W}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}- 1)})},\mathbf{x}_{i})-y_{k}\cdot f(\mathbf{W}^{(\widetilde{t}-1,b_{k}^{( \widetilde{t}-1)})},\mathbf{x}_{k})\] \[\geq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^{( \widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}-\overline{\rho}_{y_{k},r,k}^{( \widetilde{t}-1,b_{k}^{(\widetilde{t}-1)})}\big{]}-1.75\] \[\geq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^ {(\widetilde{t}-1,0)}-\overline{\rho}_{y_{k},r,k}^{(\widetilde{t}-1,0)}\big{]}- 0.1\kappa-1.75\] \[\geq 0.9\kappa-0.1\kappa-0.54\kappa\] \[=0.26\kappa, \tag{39}\]
where the second inequality is due to \(\kappa=10\). Thus, according to Lemma C.6, we have
\[\frac{\ell_{i}^{\prime(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}}{\ell_{k}^{ \prime(\widetilde{t}-1,b_{k}^{(\widetilde{t}-1)})}}\leq\exp\big{(}y_{k}\cdot f (\mathbf{W}^{(\widetilde{t}-1,b_{k}^{(\widetilde{t}-1)})},\mathbf{x}_{k})-y_{i} \cdot f\big{(}\mathbf{W}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})},\mathbf{x} _{i})\big{)}\leq\exp(-0.26\kappa).\]
Since \(S_{i}^{(\widetilde{t}-1,0)}\subseteq\widetilde{S}_{i}^{(\widetilde{t}-1,b_{i}^{( \widetilde{t}-1)})}\), we have \(\left|\widetilde{S}_{k}^{(\widetilde{t}-1,b_{k}^{(\widetilde{t}-1)})}\right| \geq 0.8\Phi(-1)m\) according to the fourth condition. Also we have that \(|S_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}|\leq m\). It follows that
\[\frac{\big{|}S_{i}^{(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}\big{|}\ell_{i}^ {(\widetilde{t}-1,b_{i}^{(\widetilde{t}-1)})}\big{|}\ell_{k}^{(\widetilde{t}-1,b_{k }^{(\widetilde{t}-1)})}\big{|}\ell_{k}^{(\widetilde{t}-1,b_{k}^{(\widetilde{t}-1)}) }\leq\frac{\exp(-0.26\kappa)}{0.8\Phi(-1)}<0.8.\]According to Lemma B.1, under event \(\mathcal{E}_{\mathrm{prelim}}\), we have
\[\big{|}\|\mathbf{\xi}_{i}\|_{2}^{2}-d\cdot\sigma_{p}^{2}|=O\big{(}\sigma_{p}^{2}\cdot \sqrt{d\log(6n/\delta)}\big{)},\,\forall i\in[n].\]
Note that \(d=\Omega(\log(6n/\delta))\) from Condition 3.1, it follows that
\[|S_{i}^{(\widetilde{t},b_{i}^{(\widetilde{t}-1)})}|(-\ell_{i}^{(\widetilde{t},b _{i}^{(\widetilde{t}-1)})})\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}<|S_{k}^{(\widetilde{t },b_{k}^{(\widetilde{t}-1)})}|(-\ell_{k}^{(\widetilde{t},b_{k}^{(\widetilde{t} -1)})})\cdot\|\mathbf{\xi}_{k}\|_{2}^{2}.\]
Then we have
\[\sum_{r=1}^{m}\big{[}\overline{p}_{y_{i},r,i}^{(\widetilde{t},0)}-\overline{p} _{y_{k},r,k}^{(\widetilde{t},0)}\big{]}\leq\sum_{r=1}^{m}\big{[}\overline{p}_ {y_{i},r,i}^{(\widetilde{t}-1,0)}-\overline{p}_{y_{k},r,k}^{(\widetilde{t}-1, 0)}\big{]}\leq\kappa,\]
which completes the proof of the first hypothesis at iteration \((t^{\prime},b^{\prime})=(\widetilde{t},\widetilde{b})\). Next, by applying the approximation in (38), we are ready to verify the second hypothesis at iteration \((\widetilde{t},\widetilde{b})\). In fact, for any \((t^{\prime},b_{1}^{\prime}),(t^{\prime},b_{2}^{\prime})\leq(\widetilde{t}, \widetilde{b})\), we have
\[y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b_{1}^{\prime})},\mathbf{x} _{i})-y_{k}\cdot f(\mathbf{W}^{(t^{\prime},b_{2}^{\prime})},\mathbf{x}_{k}) \leq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{p}_{y_{i},r,i}^{(t^ {\prime},b_{1}^{\prime})}-\overline{p}_{y_{k},r,k}^{(t^{\prime},b_{2}^{ \prime})}\big{]}+1.75\] \[\leq\frac{1}{m}\sum_{r=1}^{m}\big{[}\overline{\rho}_{y_{i},r,i}^ {(t^{\prime},0)}-\overline{\rho}_{y_{k},r,k}^{(t^{\prime},0)}\big{]}+0.1\kappa +1.75\] \[\leq C_{1},\]
where the first inequality is by (38); the last inequality is by induction hypothesis and taking \(\kappa\) as 10 and \(C_{1}\) as 5.
And the third hypothesis directly follows by noting that, for any \((t^{\prime},b_{1}^{\prime}),(t^{\prime},b_{2}^{\prime})\leq(\widetilde{t}, \widetilde{b})\),
\[\frac{\ell_{i}^{(t^{\prime},b_{1}^{\prime})}}{\ell_{k}^{(t^{\prime},b_{2}^{ \prime})}}\leq\exp\big{(}y_{k}\cdot f(\mathbf{W}^{(t^{\prime},b_{1}^{\prime})},\mathbf{x}_{k})-y_{i}\cdot f(\mathbf{W}^{(t^{\prime},b_{2}^{\prime})}, \mathbf{x}_{i})\big{)}\leq\exp(C_{1})=C_{2}.\]
For the fourth hypothesis, If \(\widetilde{b}\in\{1,2,\cdots,H-1\}\), then the first statement for \((t^{\prime},b^{\prime})=(\widetilde{t},\widetilde{b})\) and for the last \((t^{\prime},b^{\prime})<(\widetilde{t},\widetilde{b})\) are the same. Otherwise, if \(\widetilde{b}=0\), according to the gradient descent rule, we have
\[\langle\mathbf{w}_{y_{i},r}^{(\widetilde{t},0)},\mathbf{\xi}_{i}\rangle =\langle\mathbf{w}_{y_{i},r}^{(\widetilde{t}-1,0)},\mathbf{\xi}_{i} \rangle-\frac{\eta}{Bm}\cdot\sum_{b^{\prime}=0}^{H-1}\sum_{i^{\prime}\in \mathcal{I}_{\widetilde{t}-1,b^{\prime}}}\ell_{i^{\prime}}^{(\widetilde{t}-1, \widetilde{b})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i},r}^{(\widetilde{t }-1,b^{\prime})},y_{i^{\prime}}\mathbf{\mu}\rangle)\cdot\langle y_{i^{\prime}}\mathbf{ \mu},\mathbf{\xi}_{i}\rangle y_{i^{\prime}}\] \[\qquad-\frac{\eta(P-1)}{Bm}\cdot\sum_{b^{\prime}=0}^{H-1}\sum_{i^{ \prime}\in\mathcal{I}_{\widetilde{t}-1,b^{\prime}}}\ell_{i^{\prime}}^{( \widetilde{t}-1,b^{\prime})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i},r}^{ (\widetilde{t}-1,\widetilde{b})},\mathbf{\xi}_{i^{\prime}}\rangle)\cdot\langle\bm {\xi}_{i^{\prime}},\mathbf{\xi}_{i}\rangle\] \[=\langle\mathbf{w}_{y_{i},r}^{(\widetilde{t}-1,0)},\mathbf{\xi}_{i} \rangle-\frac{\eta}{Bm}\cdot\sum_{b^{\prime}=0}^{H-1}\sum_{i^{\prime}\in \mathcal{I}_{\widetilde{t}-1,b^{\prime}}}\ell_{i^{\prime}}^{(\widetilde{t}-1,b ^{\prime})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i},r}^{(\widetilde{t}-1, b^{\prime})},\widehat{y}_{i^{\prime}}\mathbf{\mu}\rangle)\cdot\langle y_{i^{\prime}}\mathbf{ \mu},\mathbf{\xi}_{i}\rangle y_{i^{\prime}}\] \[\qquad-\frac{\eta(P-1)}{Bm}\cdot\ell_{i}^{(\widetilde{t}-1,b_{i} ^{(\widetilde{t}-1)})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i},r}^{( \widetilde{t}-1,\widetilde{b})},\mathbf{\xi}_{i}\rangle)\cdot\|\mathbf{\xi}_{i}\|_{2}^ {2}\] \[\qquad-\frac{\eta(P-1)}{Bm}\cdot\sum_{b^{\prime}=0}^{H-1}\sum_{i^{ \prime}\in\mathcal{I}_{\widetilde{t}-1,b^{\prime}}}\ell_{i^{\prime}}^{( \widetilde{t},b^{\prime})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i},r}^{( \widetilde{t},b^{\prime})},\mathbf{\xi}_{i^{\prime}}\rangle)\cdot\langle\mathbf{\xi}_{i^ {\prime}},\mathbf{\xi}_{i}\rangle\,\mathds{1}(i^{\prime}\neq i)\] \[=\langle\mathbf{w}_{y_{i},r}^{(\widetilde{t},0)},\mathbf{\xi}_{i} \rangle-\frac{\eta(P-1)}{Bm}\cdot\underbrace{\ell_{i}^{(\widetilde{t}-1,b_{i} ^{(\widetilde{t}-1)})}\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}}_{\text{I}_{4}}\] \[\qquad-\frac{\eta(P-1)}{Bm}\cdot\underbrace{\sum_{b^{\prime}=0}^{H -1}\sum_{i^{\prime}\in\mathcal{I}_{\widetilde{t}-1,b^{\prime}}}\ell_{i^{\prime}}^{( \widetilde{t}-1,b^{\prime})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i},r}^{( \widetilde{t}-1,b^{\prime})},\mathbf{\xi}_{i^{\prime}}\rangle)\cdot\langle\mathbf{\xi}_{i^ {\prime}},\mathbf{\xi}_{i}\rangle\,\mathds{1}(i^{\prime}\neq i)}_{\text{I}_{5}}\]\[-\frac{\eta}{Bm}\cdot\underbrace{\sum_{\vec{b}=0}^{H-1}\sum_{i^{\prime}\in\mathcal{I} _{t-1,\nu}}\ell_{i^{\prime}}^{(\tilde{t}-1,b^{\prime})}\cdot\sigma^{\prime}( \langle\mathbf{w}_{y_{i,r}}^{(\tilde{t}-1,b^{\prime})},y_{i^{\prime}}\mathbf{\mu} \rangle)\cdot\langle y_{i^{\prime}}\mathbf{\mu},\mathbf{\xi}_{i}\rangle y_{i^{\prime}}} _{I_{6}},\]
for any \(r\in S_{i}^{(\tilde{t}-1,0)}\), where the last equality is by \(\langle\mathbf{w}_{y_{i,r}}^{(\tilde{t}-1,b_{i}^{(\tilde{t}-1)})},\mathbf{\xi}_{i} \rangle>0\). Then we respectively estimate \(\mathrm{I}_{4},\mathrm{I}_{5},\mathrm{I}_{6}\). For \(\mathrm{I}_{4}\), according to Lemma B.1, we have
\[-\mathrm{I}_{4}\geq|\ell_{i}^{(\tilde{t}-1,b_{i}^{(\tilde{t}-1)})}|\cdot \sigma_{p}^{2}d/2.\]
For \(\mathrm{I}_{5}\), we have following upper bound
\[|\mathrm{I}_{5}| \leq\sum_{b^{\prime}=0}^{H-1}\sum_{i^{\prime}\in\mathcal{I}_{ \tilde{t}-1,b^{\prime}}}|\ell_{i^{\prime}}^{(\tilde{t}-1,b^{\prime})}|\cdot \sigma^{\prime}(\langle\mathbf{w}_{y_{i,r}}^{(\tilde{t}-1,b^{\prime})},\mathbf{\xi }_{i^{\prime}}\rangle)\cdot|\langle\mathbf{\xi}_{i^{\prime}},\mathbf{\xi}_{i}\rangle| \operatorname{\mathds{1}}(i^{\prime}\neq i)\] \[\leq\sum_{b^{\prime}=0}^{H-1}\sum_{i^{\prime}\in\mathcal{I}_{ \tilde{t}-1,b^{\prime}}}|\ell_{i^{\prime}}^{(\tilde{t}-1,b^{\prime})}|\cdot| \langle\mathbf{\xi}_{i^{\prime}},\mathbf{\xi}_{i}\rangle|\operatorname{\mathds{1}}(i^{ \prime}\neq i)\] \[\leq\sum_{b^{\prime}=0}^{H-1}\sum_{i^{\prime}\in\mathcal{I}_{ \tilde{t}-1,b^{\prime}}}|\ell_{i^{\prime}}^{(\tilde{t}-1,b^{\prime})}|\cdot 2 \sigma_{p}^{2}\cdot\sqrt{d\log(6n^{2}/\delta)}\] \[\leq nC_{2}\big{|}\ell_{i}^{(\tilde{t}-1,b_{i}^{(\tilde{t}-1)})} \big{|}\cdot 2\sigma_{p}^{2}\cdot\sqrt{d\log(6n^{2}/\delta)},\]
where the first inequality is due to triangle inequality, the second inequality is due to \(\sigma^{\prime}(z)\in\{0,1\}\), the third inequality is due to Lemma B.1, the forth inequality is due to the third hypothesis at epoch \(\tilde{t}-1\).
For \(\mathrm{I}_{6}\), we have following upper bound
\[|\mathrm{I}_{6}| \leq\sum_{b^{\prime}=0}^{H-1}\sum_{i^{\prime}\in\mathcal{I}_{ \tilde{t}-1,b^{\prime}}}|\ell_{i^{\prime}}^{(\tilde{t}-1,b^{\prime})}|\cdot \sigma^{\prime}(\langle\mathbf{w}_{y_{i,r}}^{(\tilde{t}-1,b^{\prime})},y_{i^{ \prime}}\mathbf{\mu}\rangle)\cdot|\langle y_{i^{\prime}}\mathbf{\mu},\mathbf{\xi}_{i}\rangle|\] \[\leq\sum_{b^{\prime}=0}^{H-1}\sum_{i^{\prime}\in\mathcal{I}_{ \tilde{t}-1,b^{\prime}}}|\ell_{i^{\prime}}^{(\tilde{t}-1,b^{\prime})}||\langle y _{i^{\prime}}\mathbf{\mu},\mathbf{\xi}_{i}\rangle|\] \[\leq nC_{2}\big{|}\ell_{i}^{(\tilde{t}-1,b_{i}^{(\tilde{t}-1)})} \big{|}\cdot\|\mathbf{\mu}\|_{2}\sigma_{p}\sqrt{2\log(6n/\delta)},\]
where the first inequality is by triangle inequality; the second inequality is due to \(\sigma^{\prime}(z)\in\{0,1\}\); the third inequality is by Lemma B.1; the last inequality is due to the third hypothesis at epoch \(\tilde{t}-1\).
Since \(d\geq\max\{32C_{2}^{2}n^{2}\cdot\log(6n^{2}/\delta),4C_{2}n\|\mathbf{\mu}\|\sigma_ {p}^{-1}\sqrt{2\log(6n/\delta)}\}\), we have \(-(P-1)\mathrm{I}_{4}\geq\max\{(P-1)|\mathrm{I}_{5}|/2,|\mathrm{I}_{6}|/2\}\) and hence \(-(P-1)\mathrm{I}_{4}\geq(P-1)|\mathrm{I}_{5}|+|\mathrm{I}_{6}|\). It follows that
\[\langle\mathbf{w}_{y_{i,r}}^{(\tilde{t},0)},\mathbf{\xi}_{i}\rangle\geq\langle \mathbf{w}_{y_{i,r}}^{(\tilde{t}-1,0)},\mathbf{\xi}_{i}\rangle>\sigma_{0}\sigma_{p} \sqrt{d}/\sqrt{2},\]
for any \(r\in S_{i}^{(\tilde{t}-1,0)}\). Therefore, \(S_{i}^{(0,0)}\subseteq S_{i}^{(\tilde{t}-1,0)}\subseteq S_{i}^{(\tilde{t},0)}\). And it directly follows by Lemma B.3 that \(|S_{i}^{(\tilde{t},0)}|\geq 0.8m\Phi(-1),\,\forall i\in[n]\).
For the fifth hypothesis, similar to the proof of the fourth hypothesis, we also have
\[\langle\mathbf{w}_{y_{i},r}^{(\tilde{t},0)},\mathbf{\xi}_{i}\rangle =\langle\mathbf{w}_{y_{i},r}^{(\tilde{t}-1,0)},\mathbf{\xi}_{i} \rangle-\frac{\eta(P-1)}{Bm}\cdot\underbrace{\ell_{i}^{(\tilde{t}-1,b_{i}^{(t -1)})}\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}}_{\mathrm{I}_{4}}\] \[\qquad-\frac{\eta(P-1)}{Bm}\cdot\underbrace{\sum_{b^{\prime}=0}^ {H-1}\sum_{i^{\prime}\in\mathcal{I}_{\tilde{t},b^{\prime}}}\ell_{i^{\prime}}^{( \tilde{t}-1,b^{\prime})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{y_{i,r}}^{( \tilde{t}-1,b^{\prime})},\mathbf{\xi}_{i^{\prime}}\rangle)\cdot\langle\mathbf{\xi}_{i^{ \prime}},\mathbf{\xi}_{i}\rangle\operatorname{\mathds{1}}(i^{\prime}\neq i)}_{ \mathrm{I}_{5}}\]\[-\frac{\eta}{Bm}\cdot\underbrace{\sum_{\mathbf{\nu}=0}^{H-1}\sum_{i^{\prime}\in\mathcal{I }_{z-1,\mathbf{\nu}}}\ell_{i^{\prime}}^{(\bar{\ell}-1,b^{\prime})}\cdot\sigma^{\prime }(\langle\mathbf{w}_{y_{i},r}^{(\bar{\ell}-1,b^{\prime})},y_{i^{\prime}}\mathbf{ \mu}\rangle)\cdot\langle y_{i^{\prime}}\mathbf{\mu},\mathbf{\xi}_{i}\rangle y_{i^{ \prime}}}_{\mathbf{l}_{6}},\]
for any \(i\in S_{j,r}^{(\bar{\ell}-1,0)}\), where the equality holds due to \(\langle\mathbf{w}_{j,r}^{(\bar{\ell}-1,b_{i}^{(\bar{\ell}-1)})},\mathbf{\xi}_{i} \rangle>0\) and \(y_{i}=j\). By applying the same technique used in the proof of the fourth hypothesis, it follows that
\[\langle\mathbf{w}_{j,r}^{(\bar{\ell},0)},\mathbf{\xi}_{i}\rangle\geq\langle \mathbf{w}_{j,r}^{(\bar{\ell}-1,0)},\mathbf{\xi}_{i}\rangle>0,\]
for any \(i\in S_{j,r}^{(\bar{\ell}-1,0)}\). Thus, we have \(S_{j,r}^{(0,0)}\subseteq S_{j,r}^{(\bar{\ell}-1,0)}\subseteq S_{j,r}^{(\bar{ \ell},0)}\). And it directly follows by Lemma B.4 that \(|S_{j,r}^{(\bar{\ell},0)}|\geq n\Phi(-1)/4\).
Proof of Proposition c.2.: Our proof is based on induction. The results are obvious at iteration \((0,0)\) as all the coefficients are zero. Suppose that the results in Proposition C.2 hold for all iterations \((0,0)\leq(t,b)<(\widetilde{t},\widetilde{b})\). We aim to prove that they also hold for iteration \((\widetilde{t},\widetilde{b})\).
Firstly, We prove that (28) exists at iteration \((\widetilde{t},\widetilde{b})\), i.e., \(\underline{\rho}_{j,r,i}^{(\widetilde{t},\widetilde{b})}\geq-\beta-10\sqrt{ \log(6n^{2}/\delta)/d}\cdot n\alpha\) for any \(r\in[m]\), \(j\in\{\pm 1\}\) and \(i\in[n]\). Notice that \(\underline{\rho}_{j,r,i}^{(\widetilde{t},\widetilde{b})}=0\) for \(j=y_{i}\), therefore we only need to consider the case that \(j\neq y_{i}\). We also only need to consider the case of \(\widetilde{b}=b_{i}^{(\widetilde{t})}+1\) since \(\underline{\rho}_{j,r,i}^{(\widetilde{t},\widetilde{b})}\) doesn't change in other cases according to (21).
When \(\underline{\rho}_{j,r,t}^{(\widetilde{t},b_{i}^{(\widetilde{t})})}<-0.5\beta- 5\sqrt{\log(6n^{2}/\delta)/d}\cdot n\alpha\), by (32) in Lemma C.3 we have that
\[(P-1)\langle\mathbf{w}_{j,r}^{(\widetilde{t},b_{i}^{(\widetilde{t})})},\mathbf{ \xi}_{i}\rangle\leq\underline{\rho}_{j,r,i}^{(\widetilde{t},b_{i}^{(\widetilde {t})})}+(P-1)\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle+5\sqrt{\frac{ \log(6n^{2}/\delta)}{d}}n\alpha<0,\]
and thus
\[\underline{\rho}_{j,r,i}^{(\widetilde{t},\widetilde{b})} =\underline{\rho}_{j,r,i}^{(\widetilde{t},b_{i}^{(\widetilde{t}) })}+\frac{\eta(P-1)^{2}}{Bm}\cdot\ell_{i}^{(\widetilde{t},b_{i}^{(\widetilde {t})})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(\widetilde{t},b_{i}^{( \widetilde{t})})},\mathbf{\xi}_{i}\rangle)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\] \[=\underline{\rho}_{j,r,i}^{(\widetilde{t},b_{i}^{(\widetilde{t}) })}\geq-\beta-10\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\]
where the last inequality is by induction hypothesis.
When \(\underline{\rho}_{j,r,i}^{(\widetilde{t},b_{i}^{(\widetilde{t})})}\geq-0.5\beta -5\sqrt{\log(6n^{2}/\delta)/d}\cdot n\alpha\), we have
\[\underline{\rho}_{j,r,i}^{(\widetilde{t},\widetilde{b})} =\underline{\rho}_{j,r,i}^{(t,b_{i}^{(\widetilde{t})})}+\frac{ \eta(P-1)^{2}}{Bm}\cdot\ell_{i}^{(t,b_{i}^{(\widetilde{t})})}\cdot\sigma^{ \prime}(\langle\mathbf{w}_{j,r}^{(t,b_{i}^{(\widetilde{t})})},\mathbf{\xi}_{i} \rangle)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\] \[\geq-0.5\beta-5\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha-\frac {\eta(P-1)^{2}\cdot 3\sigma_{p}^{2}d}{2Bm}\] \[\geq-0.5\beta-10\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha\] \[\geq-\beta-10\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\]
where the first inequality is by \(\ell_{i}^{\prime(t,b_{i}^{(\widetilde{t})})}\in(-1,0)\) and \(\|\mathbf{\xi}_{i}\|_{2}^{2}\leq(3/2)\sigma_{p}^{2}d\) by Lemma B.1; the second inequality is due to \(5\sqrt{\log(6n^{2}/\delta)/d}\cdot n\alpha\geq 3\eta\sigma_{p}^{2}d/(2Bm)\) by Condition 3.1.
Next we prove (27) holds for \((\widetilde{t},\widetilde{b})\). We only need to consider the case of \(j=y_{i}\). Consider
\[|\ell_{i}^{(\widetilde{t},\widetilde{b})}| =\frac{1}{1+\exp\{y_{i}\cdot[F_{+1}(\mathbf{W}_{+1}^{(\widetilde {t},\widetilde{b})},\mathbf{x}_{i})-F_{-1}(\mathbf{W}_{-1}^{(\widetilde{t}, \widetilde{b})},\mathbf{x}_{i})]\}} \tag{40}\] \[\leq\exp(-y_{i}\cdot[F_{+1}(\mathbf{W}_{+1}^{(\widetilde{t}, \widetilde{b})},\mathbf{x}_{i})-F_{-1}(\mathbf{W}_{-1}^{(\widetilde{t}, \widetilde{b})},\mathbf{x}_{i})])\] \[\leq\exp(-F_{y_{i}}(\mathbf{W}_{y_{i}}^{(\widetilde{t},\widetilde {b})},\mathbf{x}_{i})+0.5),\]where the last inequality is by \(F_{j}(\mathbf{W}_{j}^{(t,b)},\mathbf{x}_{i})\leq 0.5\) for \(j\neq y_{i}\) according to Lemma C.4. Now recall the iterative update rule of \(\overline{\rho}_{j,r,i}^{(t,b)}\):
\[\overline{\rho}_{j,r,i}^{(t,b+1)}=\overline{\rho}_{j,r,i}^{(t,b)}-\frac{\eta(P -1)^{2}}{Bm}\cdot\ell_{i}^{\prime(t,b)}\cdot\sigma^{\prime}(\langle\mathbf{w}_{ j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\cdot\mathds{1}(i \in\mathcal{I}_{t,b}).\]
Let \((t_{j,r,i},b_{j,r,i})\) be the last time before \((\widetilde{t},\widetilde{b})\) that \(\overline{\rho}_{j,r,i}^{(t_{j,r,i},b_{j,r,i})}\leq 0.5\alpha\). Then by iterating the update rule from \((t_{j,r,i},b_{j,r,i})\) to \((\widetilde{t},\widetilde{b})\), we get
\[\begin{split}&\overline{\rho}_{j,r,i}^{(\widetilde{t},\widetilde{ b})}\\ &=\overline{\rho}_{j,r,i}^{(t_{j,r,i},b_{j,r,i})}-\underbrace{ \eta(P-1)^{2}}_{Bm}\cdot\ell_{i}^{\prime(t_{j,r,i},b_{j,r,i})}\cdot\mathds{1}( \langle\mathbf{w}_{j,r}^{(t_{j,r,i},b_{j,r,i})},\mathbf{\xi}_{i}\rangle\geq 0)\cdot \mathds{1}(i\in\mathcal{I}_{t,b})\|\mathbf{\xi}_{i}\|_{2}^{2}\\ &\qquad-\underbrace{\sum_{(t_{j,r,i},b_{j,r,i})<(t,b)<(\widetilde {t},\widetilde{b})}\frac{\eta(P-1)^{2}}{Bm}\cdot\ell_{i}^{\prime(t,b)}\cdot \mathds{1}(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\geq 0)\cdot \mathds{1}(i\in\mathcal{I}_{t,b})\|\mathbf{\xi}_{i}\|_{2}^{2}}_{1s}.\end{split} \tag{41}\]
We first bound \(\mathds{I}_{7}\) as follows:
\[|\mathds{I}_{7}|\leq(\eta(P-1)^{2}/Bm)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\leq(\eta(P -1)^{2}/Bm)\cdot 3\sigma_{p}^{2}d/2\leq 1\leq 0.25\alpha,\]
where the first inequality is by \(\ell_{i}^{\prime(t_{j,r,i},b_{j,r,i})}\in(-1,0)\); the second inequality is by Lemma B.1; the third inequality is by Condition 3.1; the last inequality is by our choice of \(\alpha=4\log(T^{*})\) and \(T^{*}\geq e\).
Second, we bound \(\mathds{I}_{8}\). For \((t_{j,r,i},b_{j,r,i})<(t,b)<(\widetilde{t},\widetilde{b})\) and \(y_{i}=j\), we can lower bound the inner product \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\) as follows
\[\begin{split}\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle& \geq\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle+\frac{1}{P -1}\overline{\rho}_{j,r,i}^{(t,b)}-\frac{5}{P-1}\sqrt{\frac{\log(6n^{2}/ \delta)}{d}}n\alpha\\ &\geq-\frac{0.5}{P-1}\beta+\frac{0.5}{P-1}\alpha-\frac{5}{P-1} \sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha\\ &\geq\frac{0.25}{P-1}\alpha,\end{split} \tag{42}\]
where the first inequality is by (31) in Lemma C.3; the second inequality is by \(\overline{\rho}_{j,r,i}^{(t,b)}>0.5\alpha\) and \(\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle\geq-0.5\beta/(P-1)\) due to the definition of \(t_{j,r,i}\) and \(\beta\); the last inequality is by \(\beta\leq 1/8\leq 0.1\alpha\) and \(5\sqrt{\log(6n^{2}/\delta)/d}\cdot n\alpha\leq 0.2\alpha\) by Condition 3.1.
Thus, plugging the lower bounds of \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\) into \(\mathds{I}_{8}\) gives
\[\begin{split}|\mathds{I}_{8}|&\leq\sum_{(t_{j,r,i},b_{j,r,i})<(t,b)}\frac{\eta(P-1)^{2}}{Bm}\cdot\exp\Big{(}-\frac{1}{m}\sum_{r= 1}^{m}(P-1)\sigma(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle)+0.5 \Big{)}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\cdot\mathds{1}( \langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle\geq 0)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\\ &\leq\frac{2\eta\Gamma^{*}n(P-1)^{2}}{Bm}\cdot\exp(-0.25\alpha) \exp(0.5)\cdot\frac{3\sigma_{p}^{2}d}{2}\\ &\leq\frac{2\eta\Gamma^{*}n(P-1)^{2}}{Bm}\cdot\exp(-\log(T^{*})) \exp(0.5)\cdot\frac{3\sigma_{p}^{2}d}{2}\\ &=\frac{2\eta n(P-1)^{2}}{Bm}\cdot\frac{3\sigma_{p}^{2}d}{2}\exp( 0.5)\leq 1\leq 0.25\alpha,\end{split}\]
where the first inequality is by (40); the second inequality is by (42); the third inequality is by \(\alpha=4\log(T^{*})\); the fourth inequality is by Condition 3.1; the last inequality is by \(\log(T^{*})\geq 1\) and \(\alpha=4\log(T^{*})\). Plugging the bound of \(\mathds{I}_{7},\mathds{I}_{8}\) into (41) completes the proof for \(\overline{\rho}\).
For the upper bound of (29), we prove a augmented hypothesis that there exists a \(i^{*}\in[n]\) with \(y_{i^{*}}=j\) such that for \(1\leq t\leq T^{*}\) we have that \(\gamma_{j,r}^{(t,0)}/\overline{\rho}_{j,r,i^{*}}\leq C^{\prime}\widehat{\gamma}\). Recall the iterative update rule of \(\gamma_{j,r}^{(t,b)}\) and \(\overline{\rho}_{j,r,i}^{(t,b)}\), we have
\[\overline{\rho}_{j,r,i}^{(t,b+1)} =\overline{\rho}_{j,r,i}^{(t,b)}-\frac{\eta(P-1)^{2}}{Bm}\cdot \ell_{i}^{\prime(t,b)}\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)}, \boldsymbol{\xi}_{i}\rangle)\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{2}\cdot\mathds{1 }(y_{i}=j)\,\mathds{1}(i\in\mathcal{I}_{t,b}),\] \[\gamma_{j,r}^{(t,b+1)} =\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot\bigg{[}\sum_{i\in \mathcal{I}_{t,b}\cap S_{+}}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle \mathbf{w}_{j,r}^{(t,b)},y_{i}\cdot\boldsymbol{\mu}\rangle)\] \[\qquad\qquad\qquad\qquad\qquad-\sum_{i\in\mathcal{I}_{t,b}\cap S_ {-}}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},y_{ i}\cdot\boldsymbol{\mu}\rangle)\bigg{]}\cdot\|\boldsymbol{\mu}\|_{2}^{2}.\]
According to the fifth statement of Lemma C.8, for any \(i^{*}\in S_{j,r}^{(0,0)}\) it holds that \(j=y_{i^{*}}\) and \(\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i^{*}}\rangle\geq 0\) for any \((t,b)\leq(\widetilde{t},\widetilde{b})\). Thus, we have
\[\overline{\rho}_{j,r,i^{*}}^{(\widetilde{t},0)}=\overline{\rho}_{j,r,i^{*}}^{( \widetilde{t}-1,0)}-\frac{\eta(P-1)^{2}}{Bm}\cdot\ell_{i^{*}}^{(\widetilde{t} -1,b_{i^{*}}^{(\widetilde{t}-1)})}\cdot\|\boldsymbol{\xi}_{i^{*}}\|_{2}^{2} \geq\overline{\rho}_{j,r,i^{*}}^{(\widetilde{t}-1,0)}-\frac{\eta(P-1)^{2}}{Bm }\cdot\ell_{i^{*}}^{(\widetilde{t}-1,b_{i^{*}}^{(\widetilde{t}-1)})}\cdot \sigma_{p}^{2}d/2.\]
For the update rule of \(\gamma_{j,r}^{(t,b)}\), according to Lemma C.8, we have
\[\sum_{b<H}\Big{|}\sum_{i\in\mathcal{I}_{t,b}\cap S_{+}}\ell_{i}^{ \prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},y_{i}\cdot \boldsymbol{\mu}\rangle) -\sum_{i\in\mathcal{I}_{t,b}\cap S_{-}}\ell_{i}^{\prime(t,b)} \sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},y_{i}\cdot\boldsymbol{\mu} \rangle)\Big{|}\] \[\leq C_{2}n\Big{|}\ell_{i^{*}}^{(\widetilde{T}-1,b_{i^{*}}^{( \widetilde{T}-1)})}\Big{|}.\]
Then, we have
\[\frac{\gamma_{j,r}^{(\widetilde{t},0)}}{\overline{\rho}_{j,r,i^{*}}^{( \widetilde{t},0)}} \leq\max\left\{\frac{\gamma_{j,r}^{(\widetilde{t}-1,0)}}{\overline {\rho}_{j,r,i^{*}}^{(\widetilde{t}-1,0)}},\frac{C_{2}n\ell_{i^{*}}^{\prime( \widetilde{t}-1,b_{i^{*}}^{(\widetilde{t}-1)})}\|\boldsymbol{\mu}\|_{2}^{2}}{ (P-1)^{2}\cdot\ell_{i^{*}}^{\prime(\widetilde{t}-1,b_{i^{*}}^{(\widetilde{t}- 1)})}\cdot\sigma_{p}^{2}d/2}\right\}\] \[=\max\left\{\frac{\gamma_{j,r}^{(\widetilde{t}-1,0)}}{\overline{ \rho}_{j,r,i^{*}}^{(\widetilde{t}-1,0)}},\frac{2C_{2}n\|\boldsymbol{\mu}\|_{2} ^{2}}{(P-1)^{2}\sigma_{p}^{2}d}\right\} \tag{43}\] \[\leq\frac{2C_{2}n\|\boldsymbol{\mu}\|_{2}^{2}}{(P-1)^{2}\sigma_{ p}^{2}d},\]
where the last inequality is by \(\gamma_{j,r}^{(\widetilde{t}-1,0)}/\overline{\rho}_{j,r,i^{*}}^{(\widetilde{t}-1,0)}\leq 2 C_{2}\widehat{\gamma}=2C_{2}n\|\boldsymbol{\mu}\|_{2}^{2}/(P-1)^{2}\sigma_{p}^ {2}d\). Therefore,
\[\frac{\gamma_{j,r}^{(\widetilde{t},0)}}{\overline{\rho}_{j,r,i^{*}}^{(\widetilde {t},0)}}\leq 2C_{2}\widehat{\gamma}.\]
For iterations other than the starting of en epoch, we have the following upper bound:
\[\frac{\gamma_{j,r}^{(\widetilde{t},b)}}{\overline{\rho}_{j,r,i^{*}}^{( \widetilde{t},b)}}\leq\frac{2\gamma_{j,r}^{(\widetilde{t},0)}}{\overline{\rho }_{j,r,i^{*}}^{(\widetilde{t},0)}}\leq 4C_{2}\widehat{\gamma}.\]
Thus, by taking \(C^{\prime}=4C_{2}\), we have \(\gamma_{j,r}^{(\widetilde{t},b)}/\overline{\rho}_{j,r,i^{*}}^{(\widetilde{t}, b)}\leq C^{\prime}\widehat{\gamma}\).
On the other hand, when \((t,b)<(\frac{\log(2T^{*}/\delta)}{2c_{3}^{3}},0)\), we have
\[\gamma_{j,r}^{(t,b)}\geq-\frac{\log(2T^{*}/\delta)}{2c_{3}^{3}}\cdot\frac{\eta} {Bm}\cdot n\cdot\|\boldsymbol{\mu}\|_{2}^{2}\geq-\frac{1}{12},\]
where the first inequality is due to update rule of \(\gamma_{j,r}^{t,b}\), and the second inequality is due to Condition 3.1.
When \((t,b)\geq\big{(}\frac{\log(2T^{7}/\delta)}{2c_{3}^{3}},0\big{)}\), According to Lemma B.6, we have
\[\gamma_{j,r}^{(t,b)} \geq\sum_{(t^{\prime},b^{\prime})<(t,b)}\frac{\eta}{Bm}\big{[}\min _{i,b^{\prime}}\ell_{i}^{\prime(t^{\prime},b^{\prime})}\min\{|\mathcal{I}_{t^{ \prime},b^{\prime}}\cap S_{+}\cap S_{-1}|,|\mathcal{I}_{t^{\prime},b^{\prime}} \cap S_{+}\cap S_{1}|\}\] \[\qquad\qquad\qquad\qquad\qquad-\max_{i,b^{\prime}}\ell_{i}^{(t^{ \prime},b^{\prime})}|\mathcal{I}_{t^{\prime},b^{\prime}}\cap S_{-}|\big{]} \cdot\|\mathbf{\mu}\|_{2}^{2}\] \[\geq\frac{\eta}{Bm}\big{(}\sum_{t^{\prime}=0}^{t-1}(c_{3}c_{4}H \frac{B}{4}\min_{i,b^{\prime}}\ell_{i}^{\prime(t^{\prime},b^{\prime})}-nq\max_ {i,b^{\prime}}\ell_{i}^{\prime(t^{\prime},b^{\prime})})-nq\max_{i,b^{\prime}} \ell_{i}^{(t,b^{\prime})}\big{)}\big{)}\|\mathbf{\mu}\|_{2}^{2}\] \[\geq 0,\]
where the first inequality is due to the update rule of \(\gamma_{j,r}^{(t,b)}\), the second inequality is due to Lemma B.6, and the third inequality is due to Condition 3.1.
### Decoupling with a Two-stage Analysis
#### c.2.1 First Stage
**Lemma C.9**.: _There exist_
\[T_{1}=C_{3}\eta^{-1}Bm(P-1)^{-2}\sigma_{p}^{-2}d^{-1},T_{2}=C_{4}\eta^{-1}Bm(P -1)^{-2}\sigma_{p}^{-2}d^{-1},\]
_where \(C_{3}=\Theta(1)\) is a large constant and \(C_{4}=\Theta(1)\) is a small constant, such that_
* \(\overline{\rho}_{j,r^{*},i}^{(T_{1},0)}\geq 2\) _for any_ \(r^{*}\in S_{i}^{(0,0)}=\{r\in[m]:\langle\mathbf{w}_{y_{i},r}^{(0)},\mathbf{\xi}_{i }\rangle>0\}\)_,_ \(j\in\{\pm 1\}\) _and_ \(i\in[n]\) _with_ \(y_{i}=j\)_._
* \(\max_{j,r}\gamma_{j,r}^{(t,b)}=O(\widehat{\gamma})\) _for all_ \((t,b)\leq(T_{1},0)\)_._
* \(\max_{j,r,i}|\varrho_{j,r,i}^{(t,b)}|=\max\{\beta,O\big{(}n\sqrt{\log(n/\delta )}\log(T^{*})/\sqrt{d}\big{)}\}\) _for all_ \((t,b)\leq(T_{1},0)\)_._
* \(\min_{j,r}\gamma_{j,r}^{(t,0)}=\Omega(\widehat{\gamma})\) _for all_ \(t\geq T_{2}\)_._
* \(\max_{j,r}\overline{\rho}_{j,r,i}^{(T_{1},0)}=O(1)\) _for all_ \(i\in[n]\)_._
Proof of Lemma c.9.: By Proposition C.2, we have that \(\underline{\rho}_{j,r,i}^{(t,b)}\geq-\beta-10n\sqrt{\frac{\log(6n^{2}/\delta )}{d}}\alpha\) for all \(j\in\{\pm 1\}\), \(r\in[m]\), \(i\in[n]\) and \((0,0)\leq(t,b)\leq(T^{*},0)\). According to Lemma B.2, for \(\beta\) we have
\[\beta =2\max_{i,j,r}\{|\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\mu}\rangle|,(P-1)|\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle|\}\] \[\leq 2\max\{\sqrt{2\log(12m/\delta)}\cdot\sigma_{0}\|\mathbf{\mu}\|_{2 },2\sqrt{\log(12mn/\delta)}\cdot\sigma_{0}(P-1)\sigma_{p}\sqrt{d}\}\] \[=O\big{(}\sqrt{\log(mn/\delta)}\cdot\sigma_{0}(P-1)\sigma_{p} \sqrt{d}\big{)},\]
where the last equality is by the first condition of Condition 3.1. Since \(\underline{\rho}_{j,r,i}^{(t,b)}\leq 0\), we have that
\[\max_{j,r,i}|\underline{\rho}_{j,r,i}^{(t,b)}| =\max_{j,r,i}-\underline{\rho}_{j,r,i}^{(t,b)}\] \[\leq\beta+10\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha\] \[=\max\bigg{\{}\beta,O\big{(}\sqrt{\log(n/\delta)}\log(T^{*})\cdot n /\sqrt{d}\big{)}\bigg{\}}.\]
Next, for the growth of \(\gamma_{j,r}^{(t)}\), we have following upper bound
\[\gamma_{j,r}^{(t,b+1)} =\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot\sum_{i\in\mathcal{I}_{ t,b}}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},y_{i} \cdot\mathbf{\mu}\rangle)\cdot\|\mathbf{\mu}\|_{2}^{2}\] \[\leq\gamma_{j,r}^{(t,b)}+\frac{\eta}{m}\cdot\|\mathbf{\mu}\|_{2}^{2},\]where the inequality is by \(|\ell^{\prime}|\leq 1\). Note that \(\gamma^{(0,0)}_{j,r}=0\) and recursively use the inequality \(tB+b\) times we have
\[\gamma^{(t,b)}_{j,r}\leq\frac{\eta(tH+b)}{m}\cdot\|\mathbf{\mu}\|_{2}^{2}. \tag{44}\]
Since \(n\cdot\mathrm{SNR}^{2}=n\|\mathbf{\mu}\|_{2}^{2}/\big{(}(P-1)^{2}\sigma_{p}^{2}d \big{)}=\widehat{\gamma}\), we have
\[T_{1}=C_{3}\eta^{-1}Bm(P-1)^{-2}\sigma_{p}^{-2}d^{-1}=C_{3}\eta^{-1}m\|\mathbf{\mu }\|_{2}^{-2}\widehat{\gamma}B/n.\]
And it follows that
\[\gamma^{(t)}_{j,r}\leq\frac{\eta(tH+b)}{m}\cdot\|\mathbf{\mu}\|_{2}^{2}\leq\frac{ \eta nT_{1}}{mB}\cdot\|\mathbf{\mu}\|_{2}^{2}\leq C_{3}\widehat{\gamma},\]
for all \((0,0)\leq(t,b)\leq(T_{1},0)\).
For \(\overline{\rho}^{(t)}_{j,r,i}\), recall from (20) that
\[\overline{\rho}^{(t+1,0)}_{y_{i},r,i}=\overline{\rho}^{(t,0)}_{y_{i},r,i}- \frac{\eta(P-1)^{2}}{Bm}\cdot\ell^{(t,b^{(t)}_{i})}_{i}\cdot\sigma^{\prime}( \langle\mathbf{w}^{(t,b^{(t)}_{i})}_{y_{i},r},\mathbf{\xi}_{i}\rangle)\cdot\|\mathbf{ \xi}_{i}\|_{2}^{2}.\]
According to Lemma C.8, for any \(r^{*}\in S^{(0,0)}_{i}=\{r\in[m]:\langle\mathbf{w}^{(0)}_{y_{i},r},\mathbf{\xi}_{i }\rangle>\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}\}\), we have \(\langle\mathbf{w}^{(t,b)}_{y_{i},r^{*}},\mathbf{\xi}_{i}\rangle>0\) for all \((0,0)\leq(t,b)\leq(T^{*},0)\) and hence
\[\overline{\rho}^{(t+1,0)}_{j,r^{*},i}=\overline{\rho}^{(t,0)}_{j,r^{*},i}- \frac{\eta(P-1)^{2}}{Bm}\cdot\ell^{(t,b^{(t)}_{i})}_{i}\|\mathbf{\xi}_{i}\|_{2}^{2}.\]
For each \(i\), we denote by \(T_{1}^{(i)}\) the last time in the period \([0,T_{1}]\) satisfying that \(\overline{\rho}^{(t,0)}_{y_{i},r^{*},i}\leq 2\). Then for \((0,0)\leq(t,b)<(T_{1}^{(i)},0)\), \(\max_{j,r}\{|\overline{\rho}^{(t,b)}_{j,r,i}|,|\varrho^{(t,b)}_{j,r,i}|\}=O(1)\) and \(\max_{j,r}\gamma^{(t,b)}_{j,r}=O(1)\). Therefore, we know that \(F_{-1}(\mathbf{W}^{(t,b)},\mathbf{x}_{i}),F_{+1}(\mathbf{W}^{(t,b)},\mathbf{x }_{i})=O(1)\). Thus there exists a positive constant \(C\) such that \(-\ell^{\prime(t,b)}_{i}\geq C\geq C_{2}\) for \(0\leq t\leq T_{1}^{(i)}\).
Then we have
\[\overline{\rho}^{(t,0)}_{y_{i},r^{*},i}\geq\frac{C\eta(P-1)^{2}\sigma_{p}^{2} dt}{2Bm}.\]
Therefore, \(\overline{\rho}^{(t,0)}_{y_{i},r^{*},i}\) will reach 2 within
\[T_{1}=C_{3}\eta^{-1}Bm(P-1)^{2}\sigma_{p}^{-2}d^{-1}\]
iterations for any \(r^{*}\in S^{(0,0)}_{i}\), where \(C_{3}\) can be taken as \(4/C\).
Next, we will discuss the lower bound of the growth of \(\gamma^{(t,b)}_{j,r}\). For \(\overline{\rho}^{(t,b)}_{j,r,i}\), we have
\[\overline{\rho}^{(t,b+1)}_{j,r,i} =\overline{\rho}^{(t,b)}_{j,r,i}-\frac{\eta(P-1)^{2}}{Bm}\cdot\ell ^{\prime(t,b)}_{i}\cdot\sigma^{\prime}(\langle\mathbf{w}^{(t,b)}_{j,r},\mathbf{ \xi}_{i}\rangle)\cdot\|\mathbf{\xi}_{i}\|_{2}^{2}\cdot\mathds{1}(y_{i}=j)\, \mathds{1}(i\in\mathcal{I}_{t,b})\] \[\leq\overline{\rho}^{(t,b)}_{j,r,i}+\frac{3\eta(P-1)^{2}\sigma_{p }^{2}d}{2Bm}.\]
According to (44) and \(\overline{\rho}^{(0,0)}_{j,r,i}=0\), it follows that
\[\overline{\rho}^{(t,b)}_{j,r,i}\leq\frac{3\eta(P-1)^{2}\sigma_{p}^{2}d(tH+b)} {2Bm},\gamma^{(t,b)}_{j,r}\leq\frac{\eta(tH+b)}{m}\cdot\|\mathbf{\mu}\|_{2}^{2}. \tag{45}\]
Therefore, \(\max_{j,r,i}\overline{\rho}^{(t,b)}_{j,r,i}\) will be smaller than \(1\) and \(\gamma^{(t,b)}_{j,r}\) smaller than \(\Theta(n\|\mathbf{\mu}\|_{2}^{2}/(P-1)^{2}\sigma_{p}^{2}d)=\Theta(n\cdot\mathrm{ SNR}^{2})=\Theta(\widehat{\gamma})=O(1)\) within
\[T_{2}=C_{4}\eta^{-1}Bm(P-1)^{-2}\sigma_{p}^{-2}d^{-1}\]
iterations, where \(C_{4}\) can be taken as \(2/3\). Therefore, we know that \(F_{-1}(\mathbf{W}^{(t,b)},\mathbf{x}_{i}),F_{+1}(\mathbf{W}^{(t,b)},\mathbf{x}_ {i})=O(1)\) in \((0,0)\leq(t,b)\leq(T_{2},0)\). Thus, there exists a positive constant \(C\) such that \(-\ell^{\prime(t,b)}_{i}\geq C\) for \(0\leq t\leq T_{2}\).
Recall that we denote \(\{i\in[n]|y_{i}=y\}\) as \(S_{y}\), and we have the update rule
\[\gamma_{j,r}^{(t,b+1)}=\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot \bigg{[}\sum_{i\in\mathcal{I}_{k,b}\cap S_{+}}\ell_{i}^{\prime(t,b)} \sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\widehat{y}_{i}\cdot\mathbf{\mu} \rangle)\] \[\qquad\qquad\qquad\qquad\qquad-\sum_{i\in\mathcal{I}_{k,b}\cap S_{- }}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)}, \widehat{y}_{i}\cdot\mathbf{\mu}\rangle)\bigg{]}\cdot\|\mathbf{\mu}\|_{2}^{2}.\]
For the growth of \(\gamma_{j,r}^{(t,b)}\), if \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\mu}\rangle\geq 0\), we have
\[\gamma_{j,r}^{(t,b+1)} =\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot\bigg{[}\sum_{i\in \mathcal{I}_{k,b}\cap S_{+}\cap S_{1}}\ell_{i}^{\prime(t)}-\sum_{i\in \mathcal{I}_{k,b}\cap S_{-}\cap S_{1}}\ell_{i}^{\prime(t)}\bigg{]}\|\mathbf{\mu}\|_ {2}^{2}\] \[\geq\gamma_{j,r}^{(t,b)}+\frac{\eta}{Bm}\cdot\big{[}C|\mathcal{I} _{t,b}\cap S_{+}\cap S_{1}|-|\mathcal{I}_{t,b}\cap S_{-}\cap S_{-1}|\big{]} \cdot\|\mathbf{\mu}\|_{2}^{2}.\]
Similarly, if \(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\mu}\rangle<0\),
\[\gamma_{j,r}^{(t,b+1)}\geq\gamma_{j,r}^{(t,b)}+\frac{\eta}{Bm}\cdot\big{[}C| \mathcal{I}_{t,b}\cap S_{+}\cap S_{-1}|-|\mathcal{I}_{t,b}\cap S_{-}\cap S_{1 }|\big{]}\cdot\|\mathbf{\mu}\|_{2}^{2}.\]
Therefore, for \(t\in[T_{2},T_{1}]\), we have
\[\gamma_{j,r}^{(t,0)} \geq\sum_{(t^{\prime},b^{\prime})<(t,0)}\frac{\eta}{Bm}\big{[}C \min\{|\mathcal{I}_{t^{\prime},b^{\prime}}\cap S_{+}\cap S_{-1}|,|\mathcal{I} _{t^{\prime},b^{\prime}}\cap S_{+}\cap S_{1}|\}-|\mathcal{I}_{t,b}\cap S_{-}| \big{]}\cdot\|\mathbf{\mu}\|_{2}^{2}\] \[\geq\frac{\eta}{Bm}(c_{3}tc_{4}HC\frac{B}{4}-T_{1}nq)\|\mathbf{\mu}\| _{2}^{2}\] \[=\frac{\eta}{Bm}(c_{3}c_{4}TC\frac{n}{4}-T_{1}nq)\|\mathbf{\mu}\|_{2} ^{2}\] \[\geq\frac{\eta c_{3}c_{4}Ctn\|\mu\|_{2}^{2}}{8Bm} \tag{46}\] \[\geq\frac{c_{3}c_{4}CC_{4}n\|\mu\|_{2}^{2}}{(P-1)^{2}\sigma_{p}^ {2}d}=\Theta(n\cdot\mathrm{SNR}^{2})=\Theta(\widehat{\gamma}),\]
where the second inequality is due to Lemma B.6, the third inequality is due to \(q<\frac{C_{4}C_{4}c_{4}}{8C_{3}}\) in Condition 3.1.
And it follows directly from (45) that
\[\overline{\rho}_{j,r,i}^{(T_{1},0)}\leq\frac{3\eta(P-1)^{2}\sigma_{p}^{2}dT_{1 }H}{2Bm}=\frac{3C_{3}}{2},\overline{\rho}_{j,r,i}^{(T_{1},0)}=O(1),\]
which completes the proof.
#### c.2.2 Second Stage
By the signal-noise decomposition, at the end of the first stage, we have
\[\mathbf{w}_{j,r}^{(t,b)}=\mathbf{w}_{j,r}^{(0,0)}+j\gamma_{j,r}^{(t,b)}\|\mathbf{\mu }\|_{2}^{-2}\mathbf{\mu}+\frac{1}{P-1}\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t,b )}\|\mathbf{\xi}_{i}\|_{2}^{-2}\mathbf{\xi}_{i}+\frac{1}{P-1}\sum_{i=1}^{n}\varrho_{j,r,i}^{(t,b)}\|\mathbf{\xi}_{i}\|_{2}^{-2}\mathbf{\xi}_{i}.\]
for \(j\in[\pm 1]\) and \(r\in[m]\). By the results we get in the first stage, we know that at the beginning of this stage, we have the following property holds:
* \(\overline{\rho}_{j,r^{*},i}^{(T_{1},0)}\geq 2\) for any \(r^{*}\in S_{i}^{(0,0)}=\{r\in[m]:\langle\mathbf{w}_{y_{i,r}}^{(0,0)},\mathbf{\xi}_ {i}\rangle>\sigma_{0}\sigma_{p}\sqrt{d}/\sqrt{2}\}\), \(j\in\{\pm 1\}\) and \(i\in[n]\) with \(y_{i}=j\).
* \(\max_{j,r,i}|\varrho_{j,r,i}^{(T_{1},0)}|=\max\{\beta,O\big{(}n\sqrt{\log(n/ \delta)}\log(T^{*})/\sqrt{d}\big{)}\}\).
* \(\gamma_{j,r}^{(T_{1},0)}=\Theta(\widehat{\gamma})\) for any \(j\in\{\pm 1\},r\in[m]\).
where \(\widehat{\gamma}=n\cdot\mathrm{SNR}^{2}\). Now we choose \(\mathbf{W}^{*}\) as follows
\[\mathbf{w}^{*}_{j,r}=\mathbf{w}^{(0,0)}_{j,r}+\frac{20\log(2/\epsilon)}{P-1} \Big{[}\sum_{i=1}^{n}\mathds{1}(j=y_{i})\cdot\frac{\mathbf{\xi}_{i}}{\|\mathbf{\xi}_{i} \|_{2}^{2}}\Big{]}.\]
**Lemma C.10**.: _Under the same conditions as Theorem 3.2, we have that \(\|\mathbf{W}^{(T_{1},0)}-\mathbf{W}^{*}\|_{F}\leq\widetilde{O}(m^{1/2}n^{1/2}(P -1)^{-1}\sigma_{p}^{-1}d^{-1/2}(1+\max\{\beta,n\sqrt{\log(n/\delta)}\log(T^{*} )/\sqrt{d}\}))\)._
Proof.: \[\|\mathbf{W}^{(T_{1},0)}-\mathbf{W}^{*}\|_{F}\] \[\leq\|\mathbf{W}^{(T_{1},0)}-\mathbf{W}^{(0,0)}\|_{F}+\|\mathbf{ W}^{*}-\mathbf{W}^{(0,0)}\|_{F}\] \[\leq O(\sqrt{m})\max_{j,r}\gamma_{j,r}^{(T_{1},0)}\|\mathbf{\mu}\|_{ 2}^{-1}+\frac{1}{P-1}O(\sqrt{m})\max_{j,r}\bigg{\|}\sum_{i=1}^{n}\overline{ \rho}_{j,r,i}^{(T_{1},0)}\cdot\frac{\mathbf{\xi}_{i}}{\|\mathbf{\xi}_{i}\|_{2}^{2}}+ \sum_{i=1}^{n}\underline{\rho}_{j,r,i}^{(T_{1},0)}\cdot\frac{\mathbf{\xi}_{i}}{\| \mathbf{\xi}_{i}\|_{2}^{2}}\bigg{\|}_{2}\] \[\qquad+O(m^{1/2}n^{1/2}\log(1/\epsilon)(P-1)^{-1}\sigma_{p}^{-1} d^{-1/2})\] \[=O(m^{1/2}\widehat{\gamma}\|\mathbf{\mu}\|_{2}^{-1})+\widetilde{O}(m ^{1/2}n^{1/2}(P-1)^{-1}\sigma_{p}^{-1}d^{-1/2}(1+\max\{\beta,n\sqrt{\log(n/ \delta)}\log(T^{*})/\sqrt{d}\}))\] \[\quad+O(m^{1/2}n^{1/2}\log(1/\epsilon)(P-1)^{-1}\sigma_{p}^{-1}d ^{-1/2})\] \[=O(m^{1/2}n\cdot\mathrm{SNR}\cdot(P-1)^{-1}\sigma_{p}^{-1}d^{-1/2 }(1+\max\{\beta,n\sqrt{\log(n/\delta)}\log(T^{*})/\sqrt{d}\}))\] \[\quad+\widetilde{O}(m^{1/2}n^{1/2}\log(1/\epsilon)(P-1)^{-1} \sigma_{p}^{-1}d^{-1/2})\] \[=\widetilde{O}(m^{1/2}n^{1/2}(P-1)^{-1}\sigma_{p}^{-1}d^{-1/2}(1+ \max\{\beta,n\sqrt{\log(n/\delta)}\log(T^{*})/\sqrt{d}\})),\]
where the first inequality is by triangle inequality, the second inequality and the first equality are by our decomposition of \(\mathbf{W}^{(T_{1},0)}\), \(\mathbf{W}^{*}\) and Lemma B.1; the second equality is by \(n\cdot\mathrm{SNR}^{2}=\Theta(\widehat{\gamma})\) and \(\mathrm{SNR}=\|\mathbf{\mu}\|/(P-1)\sigma_{p}d^{1/2}\); the third equality is by \(n^{1/2}\cdot\mathrm{SNR}=O(1)\).
**Lemma C.11**.: _Under the same conditions as Theorem 3.2, we have that_
\[y_{i}\langle\nabla f(\mathbf{W}^{(t,b)},\mathbf{x}_{i}),\mathbf{W}^{*}\rangle \geq\log(2/\epsilon)\]
_for all \((T_{1},0)\leq(t,b)\leq(T^{*},0)\)._
Proof of Lemma c.11.: Recall that \(f(\mathbf{W}^{(t,b)})=(1/m)\sum_{j,r}j\cdot[\sigma(\langle\mathbf{w}^{(t,b)}_{ j,r},\mathbf{\xi}_{i}\rangle)]\), thus we have
\[y_{i}\langle\nabla f(\mathbf{W}^{(t,b)},\mathbf{x}_{i}),\mathbf{W }^{*}\rangle\] \[=\frac{1}{m}\sum_{j,r}\sigma^{\prime}(\langle\mathbf{w}^{(t,b)}_{ j,r},\widehat{y}_{i}\mathbf{\mu}\rangle)\langle y_{i}\widehat{y}_{i}\mathbf{\mu},j \mathbf{w}^{*}_{j,r}\rangle+\frac{P-1}{m}\sum_{j,r}\sigma^{\prime}(\langle \mathbf{w}^{(t,b)}_{j,r},\mathbf{\xi}_{i}\rangle)\langle y_{i}\mathbf{\xi}_{i},j \mathbf{w}^{*}_{j,r}\rangle\] \[=\frac{1}{m}\sum_{j,r}\sum_{i^{\prime}=1}^{n}\sigma^{\prime}( \langle\mathbf{w}^{(t,b)}_{j,r},\mathbf{\xi}_{i}\rangle)20\log(2/\epsilon)\, \mathds{1}(j=y_{i^{\prime}})\cdot\frac{\langle\mathbf{\xi}_{i^{\prime}},\mathbf{\xi}_{ i}\rangle}{\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{2}}\] \[\qquad+\frac{1}{m}\sum_{j,r}\sum_{i^{\prime}=1}^{n}\sigma^{\prime }(\langle\mathbf{w}^{(t,b)}_{j,r},\widehat{y}_{i}\mathbf{\mu}\rangle)20\log(2/ \epsilon)\,\mathds{1}(j=y_{i^{\prime}})\cdot\frac{\langle\widehat{y}_{i}\mathbf{ \mu},\mathbf{\xi}_{i^{\prime}}\rangle}{\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{2}}\] \[\qquad+\frac{1}{m}\sum_{j,r}\sigma^{\prime}(\langle\mathbf{w}^{(t,b)}_{j,r},\widehat{y}_{i}\mathbf{\mu}\rangle)\langle y_{i}\widehat{y}_{i}\mathbf{\mu},j\mathbf{w}^{(0,0)}_{j,r}\rangle+\frac{P-1}{m}\sum_{j,r}\sigma^{\prime}( \langle\mathbf{w}^{(t,b)}_{j,r},\mathbf{\xi}_{i}\rangle)\langle y_{i}\mathbf{\xi}_{i}, j\mathbf{w}^{(0,0)}_{j,r}\rangle\] \[\geq\frac{1}{m}\sum_{j=y_{i},r}\sigma^{\prime}(\langle\mathbf{w} ^{(t,b)}_{j,r},\mathbf{\xi}_{i}\rangle)20\log(2/\epsilon)-\frac{1}{m}\sum_{j,r} \sum_{i^{\prime}\neq i}\sigma^{\prime}(\langle\mathbf{w}^{(t,b)}_{j,r},\mathbf{\xi} _{i}\rangle)20\log(2/\epsilon)\cdot\frac{|\langle\mathbf{\xi}_{i^{\prime}},\mathbf{\xi} _{i}\rangle|}{\|\mathbf{\xi}_{i^{\prime}}\|_{2}^{2}}\] \[\qquad-\frac{1}{m}\sum_{j,r}\sum_{i^{\prime}=1}^{n}\sigma^{\prime }(\langle\mathbf{w}^{(t,b)}_{j,r},\widehat{y}_{i}\mathbf{\mu}\rangle)20\log(2/ \epsilon)\cdot\frac{|\langle\widehat{y}_{i}\mathbf{\mu},\mathbf{\xi}_{i^{\prime}}\rangle|}{ \|\mathbf{\xi}_{i^{\prime}}\|_{2}^{2}}-\frac{1}{m}\sum_{j,r}\sigma^{\prime}( \langle\mathbf{w}^{(t,b)}_{j,r},\widehat{y}_{i}\mathbf{\mu}\rangle)\beta\]\[\geq\underbrace{\frac{1}{m}\sum_{j=y_{i},r}\sigma^{\prime}(\langle\mathbf{ w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle)20\log(2/\epsilon)}_{\mathbf{l}_{0}}- \underbrace{\frac{1}{m}\sum_{j,r}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b) },\mathbf{\xi}_{i}\rangle)20\log(2/\epsilon)O\big{(}n\sqrt{\log(n/\delta)}/\sqrt{d} \big{)}}_{\mathbf{l}_{1_{0}}}\] \[-\underbrace{\frac{1}{m}\sum_{j,r}\sigma^{\prime}(\langle\mathbf{ w}_{j,r}^{(t,b)},\widehat{y}_{i}\mathbf{\mu}\rangle)O\big{(}n\sqrt{\log(n/\delta)} \cdot\mathrm{SNR}\cdot d^{-1/2}\big{)}}_{\mathbf{l}_{11}}-\underbrace{\frac{1} {m}\sum_{j,r}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},y_{i}\mathbf{\mu} \rangle)\beta}_{\mathbf{l}_{12}}, \tag{47}\]
where the first inequality is by Lemma B.2 and the last inequality is by Lemma B.1. Then, we will bound each term in (47) respectively.
For \(\mathbf{l}_{10}\), \(\mathbf{l}_{11}\), \(\mathbf{l}_{12}\), \(\mathbf{l}_{14}\), we have that
\[|\mathbf{l}_{10}| \leq O\big{(}n\sqrt{\log(n/\delta)}/\sqrt{d}\big{)},\;|\mathbf{l }_{11}|\leq O\big{(}n\sqrt{\log(n/\delta)}\cdot\mathrm{SNR}\cdot d^{-1/2}\big{)}, \tag{48}\] \[|\mathbf{l}_{12}| \leq O\big{(}\beta\big{)},\]
For \(j=y_{i}\) and \(r\in S_{i}^{(0)}\), according to Lemma C.3, we have
\[(P-1)\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle \geq(P-1)\langle\mathbf{w}_{j,r}^{(0,0)},\mathbf{\xi}_{i}\rangle+ \overline{\rho}_{j,r,i}^{(t,b)}-5n\sqrt{\frac{\log(4n^{2}/\delta)}{d}}\alpha\] \[\geq 2-\beta-5n\sqrt{\frac{\log(4n^{2}/\delta)}{d}}\alpha\] \[\geq 1.5-\beta>0,\]
where the first inequality is by Lemma C.3; the second inequality is by \(5n\sqrt{\frac{\log(4n^{2}/\delta)}{d}}\leq 0.5\); and the last inequality is by \(\beta<1.5\). Therefore, for \(\mathbf{l}_{9}\), according to the fourth statement of Proposition C.8, we have
\[\mathbf{l}_{9}\geq\frac{1}{m}|\widetilde{S}_{i}^{(t,b)}|20\log(2/\epsilon)\geq 2 \log(2/\epsilon). \tag{49}\]
By plugging (48) and (49) into (47) and according to triangle inequality we have
\[y_{i}\langle\nabla f(\mathbf{W}^{(t,b)},\mathbf{x}_{i}),\mathbf{W}^{*}\rangle \geq\mathbf{l}_{9}-|\mathbf{l}_{10}|-|\mathbf{l}_{11}|-|\mathbf{l}_{12}|-| \mathbf{l}_{14}|\geq\log(2/\epsilon),\]
which completes the proof.
**Lemma C.12**.: _Under Assumption 3.1, for \((0,0)\leq(t,b)\leq(T^{*},0)\), the following result holds._
\[\|\nabla L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{F}^{2}\leq O(\max\{\|\mathbf{ \mu}\|_{2}^{2},(P-1)^{2}\sigma_{p}^{2}d\})L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)}).\]
Proof.: We first prove that
\[\|\nabla f(\mathbf{W}^{(t,b)},\mathbf{x}_{i})\|_{F}=O(\max\{\|\mathbf{\mu}\|_{2},( P-1)\sigma_{p}\sqrt{d}\}). \tag{50}\]
Without loss of generality, we suppose that \(\widehat{y}_{i}=1\). Then we have that
\[\|\nabla f(\mathbf{W}^{(t,b)},\mathbf{x}_{i})\|_{F} \leq\frac{1}{m}\sum_{j,r}\left\|\big{[}\sigma^{\prime}(\langle \mathbf{w}_{j,r}^{(t,b)},\mathbf{\mu}\rangle)\mathbf{\mu}+(P-1)\sigma^{\prime}(\langle \mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle)\mathbf{\xi}_{i}\big{]}\right\|_{2}\] \[\leq\frac{1}{m}\sum_{j,r}\sigma^{\prime}(\langle\mathbf{w}_{j,r} ^{(t,b)},\mathbf{\mu}\rangle)\|\mathbf{\mu}\|_{2}+\frac{P-1}{m}\sum_{j,r}\sigma^{ \prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi}_{i}\rangle)\|\mathbf{\xi}_{i}\|_ {2}\] \[\leq 4\max\{\|\mathbf{\mu}\|_{2},2(P-1)\sigma_{p}\sqrt{d}\},\]
where the first and second inequalities are by triangle inequality, the third inequality is by Lemma B.1. Then we upper bound the gradient norm \(\|\nabla L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{F}\) as:
\[\|\nabla L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{F}^{2}\leq\left[\frac{1} {B}\sum_{i\in\mathcal{I}_{t,b}}\ell^{\prime}\big{(}y_{i}f(\mathbf{W}^{(t,b)}, \mathbf{x}_{i})\big{)}\|\nabla f(\mathbf{W}^{(t,b)},\mathbf{x}_{i})\|_{F} \right]^{2}\]\[\leq\frac{1}{B}\sum_{i\in\mathcal{I}_{t,b}}\frac{1}{m}\sum_{j,r} \Big{(}\Big{|}\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(t,0)},\mathbf{\mu} \rangle\Big{|}+(P-1)\Big{|}\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{( t,0)},\mathbf{\xi}_{i}\rangle\Big{|}\Big{)}\] \[\leq\frac{H\eta(P-1)}{Bm}\|\mu\|_{2}\sigma_{p}\sqrt{2\log(6n/ \delta)}+\frac{H\eta(P-1)^{2}}{Bm}2\sigma_{p}^{2}\sqrt{d\log(6n^{2}/\delta)}\] \[\leq\epsilon,\]
where the first inequality is due to triangle inequality, the second inequality is due to \(|\ell_{i}^{\prime}|\leq 1\), the third inequality is due to triangle inequality and the definition of neural networks, the forth inequality is due to parameter update rule (17) and Lemma B.1, and the fifth inequality is due to Condition 3.1.
**Lemma C.15**.: _Under the same conditions as Theorem 3.2, for all \(T_{1}\leq t\leq T^{*}\), we have \(\max_{j,r,i}|\mathcal{E}_{j,r,i}^{(t,b)}|=\max\big{\{}O\big{(}\sqrt{\log(mn/ \delta)}\cdot\sigma_{0}(P-1)\sigma_{p}\sqrt{d}\big{)},O\big{(}n\sqrt{\log(n/ \delta)}\log(T^{*})/\sqrt{d}\big{)}\big{\}}.\) Besides,_
\[\frac{1}{(s-T_{1})H}\sum_{(T_{1},0)\leq(t,b)<(s,0)}L_{\mathcal{I}_{t,b}}( \mathbf{W}^{(t,b)})\leq\frac{\|\mathbf{W}^{(T_{1},0)}-\mathbf{W}^{*}\|_{F}^{2} }{\eta(s-T_{1})H}+\epsilon\]
_for all \(T_{1}\leq t\leq T^{*}\). Therefore, we can find an iterate with training loss smaller than \(2\epsilon\) within \(T=T_{1}+\left\lfloor\|\mathbf{W}^{(T_{1})}-\mathbf{W}^{*}\|_{F}^{2}/(\eta \epsilon)\right\rfloor=T_{1}+\widetilde{O}(\eta^{-1}\epsilon^{-1}mnd^{-1} \sigma_{p}^{-2})\) iterations._
Proof of Lemma c.15.: Note that \(\max_{j,r,i}|\mathcal{E}_{j,r,i}^{(t)}|=\max\big{\{}O\big{(}\sqrt{\log(mn/ \delta)}\ \cdot\ \sigma_{0}(P\ -1)\sigma_{p}\sqrt{d}\big{)},O\big{(}n\sqrt{\log(n/ \delta)}\log(T^{*})/\sqrt{d}\big{)}\big{\}}\) can be proved in the same way as Lemma C.9.
For any \(T_{1}\leq s\leq T^{*}\), by taking a summation of the inequality in Lemma C.13 and dividing \((s-T_{1})H\) on both sides, we obtain that
\[\frac{1}{(s-T_{1})H}\sum_{(T_{1},0)\leq(t,b)<(s,0)}L_{\mathcal{I}_{t,b}}( \mathbf{W}^{(t,b)})\leq\frac{\|\mathbf{W}^{(T_{1},0)}-\mathbf{W}^{*}\|_{F}^{2} }{\eta(s-T_{1})H}+\epsilon.\]
According to the definition of \(T\), we have
\[\frac{1}{(T-T_{1})H}\sum_{(T_{1},0)\leq(t,b)<(T,0)}L_{\mathcal{I}_{t,b}}( \mathbf{W}^{(t,b)})\leq 2\epsilon.\]
Then there exists an epoch \(T_{1}\leq t\leq T^{*}\) such that
\[\frac{1}{H}\sum_{b=0}^{H-1}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\leq 2\epsilon.\]
Thus, according to Lemma C.14, we have
\[L_{S}(\mathbf{W}^{(t,0)})\leq 3\epsilon.\]
**Lemma C.16**.: _Under the same conditions as Theorem 3.2, we have_
\[\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t,b)}/\gamma_{j^{\prime},r^{\prime}}^{ (t,b)}=\Theta(\mathrm{SNR}^{-2}) \tag{51}\]
_for all \(j,j^{\prime}\in\{\pm 1\}\), \(r,r^{\prime}\in[m]\) and \((T_{2},0)\leq(t,b)\leq(T^{*},0)\)._
Proof.: Now suppose that there exists \((0,0)<(\widetilde{T},0)\leq(T^{*},0)\) such that \(\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t,0)}/\gamma_{j^{\prime},r^{\prime}} ^{(t,b)}=\Theta(\mathrm{SNR}^{-2})\) for all \((0,0)<(t,0)<(\widetilde{T},0)\). Then for \(\overline{\rho}_{j,r,i}^{(t,b)}\), according to Lemma C.1, we have
\[\gamma_{j,r}^{(t+1,0)}=\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot \sum_{b<H}\bigg{[}\sum_{i\in S_{t}\cap\mathcal{I}_{t,b}}\ell_{i}^{\prime(t,b _{i}^{(t)})}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b_{i}^{(t)})},\widehat {y}_{i}\cdot\mathbf{\mu}\rangle)\] \[\qquad\qquad\qquad\qquad\qquad-\sum_{i\in S_{-}\cap\mathcal{I}_{ t,b}}\ell_{i}^{\prime(t,b_{i}^{(t)})}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b _{i}^{(t)})},\widehat{y}_{i}\cdot\mathbf{\mu}\rangle)\bigg{]}\cdot\|\mathbf{\mu}\|_{2}^{2},\] \[\overline{\rho}_{j,r,i}^{(t+1,0)}=\overline{\rho}_{j,r,i}^{(t,0) }-\frac{\eta(P-1)^{2}}{Bm}\cdot\ell_{i}^{\prime(t,b_{i}^{(t)})}\cdot\sigma^{ \prime}(\langle\mathbf{w}_{j,r}^{(t,b_{i}^{(t)})},\mathbf{\xi}_{i}\rangle)\cdot\| \mathbf{\xi}_{i}\|_{2}^{2}\cdot\mathds{1}(y_{i}=j),\]
It follows that
\[\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(\widetilde{T},0)}\]\[=\sum_{i:y_{i}=j}\overline{\rho}_{j,r,i}^{(\widetilde{T},0)}\] \[=\sum_{i:y_{i}=j}\overline{\rho}_{j,r,i}^{(\widetilde{T}-1,0)}- \frac{\eta(P-1)^{2}}{Bm}\cdot\sum_{i:y_{i}=j}\ell_{i}^{(\widetilde{T}-1,b_{i}^{ (\widetilde{T}-1)})}\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(\widetilde{ T}-1,b_{i}^{(\widetilde{T}-1)}},\boldsymbol{\xi}_{i}\rangle)\|\boldsymbol{\xi}_{i} \|_{2}^{2}\] \[=\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(\widetilde{T}-1,0)}-\frac {\eta(P-1)^{2}}{Bm}\cdot\sum_{i\in\widetilde{S}_{j,r}^{(\widetilde{T}-1,\xi_{i} ^{(\widetilde{T}-1)})}}\ell_{i}^{(\widetilde{T}-1,b_{i}^{(\widetilde{T}-1)})} \|\boldsymbol{\xi}_{i}\|_{2}^{2}\] \[\geq\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(\widetilde{T}-1)}+ \frac{\eta(P-1)^{2}\sigma_{p}^{2}dH\Phi(-1)}{8m}\cdot\min_{i\in\widetilde{S}_{ j,r}^{(\widetilde{t},\widetilde{b}-1)}\cap\mathcal{I}_{\widetilde{t},\widetilde{b}-1}}| \ell_{i}^{(\widetilde{T}-1,b_{i}^{(\widetilde{T}-1)})}|, \tag{52}\]
where the last equality is by the definition of \(S_{j,r}^{(\widetilde{T}-1)}\) as \(\{i\in[n]:y_{i}=j,\langle\mathbf{w}_{j,r}^{(\widetilde{T}-1)},\boldsymbol{\xi }_{i}\rangle>0\}\); the last inequality is by Lemma B.1 and the fifth statement of Lemma C.8. And
\[\gamma_{j^{\prime},r^{\prime}}^{(\widetilde{T},0)} \leq\gamma_{j^{\prime},r^{\prime}}^{(\widetilde{T}-1,0)}-\frac{ \eta}{Bm}\cdot\sum_{i\in S_{+}}\ell_{i}^{\prime(\widetilde{T}-1,b_{i}^{( \widetilde{T}-1)})}\sigma^{\prime}(\langle\mathbf{w}_{j^{\prime},r^{\prime}}^{ (\widetilde{T}-1,b_{i}^{\widetilde{T}-1}},\widehat{y}_{i}\cdot\boldsymbol{ \mu}\rangle)\cdot\|\boldsymbol{\mu}\|_{2}^{2}\] \[\leq\gamma_{j^{\prime},r^{\prime}}^{(\widetilde{T}-1,0)}+\frac{H \eta\|\boldsymbol{\mu}\|_{2}^{2}}{m}\cdot\max_{i\in S_{+}}|\ell_{i}^{( \widetilde{T}-1)}|. \tag{53}\]
According to the third statement of Lemma C.8, we have \(\max_{i\in S_{+}}|\ell_{i}^{\prime(\widetilde{T}-1,b_{i}^{(\widetilde{T}-1)}) }|\leq C_{2}\min_{i\in S_{j,r}^{(\widetilde{T}-1,b_{i}^{(\widetilde{T}-1)})}} |\ell_{i}^{\prime(\widetilde{T}-1)}|\). Then by combining (52) and (53), we have
\[\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(\widetilde{T},0)}}{\gamma_{j^{ \prime},r^{\prime}}^{(\widetilde{T},0)}}\geq\min\left\{\frac{\sum_{i=1}^{n} \overline{\rho}_{j,r,i}^{(\widetilde{T}-1,0)}}{\gamma_{j^{\prime},r^{\prime}}^ {(\widetilde{T}-1,0)}},\frac{(P-1)^{2}\sigma_{p}^{2}d}{16C_{2}\|\boldsymbol{ \mu}\|_{2}^{2}}\right\}=\Theta(\mathrm{SNR}^{-2}). \tag{54}\]
On the other hand, we will now show \(\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(\widetilde{t},0)}}{\gamma_{j^{ \prime},r^{\prime}}^{(\widetilde{t},0)}}\leq\Theta(\mathrm{SNR}^{-2})\) for \(t\geq T_{2}\) by induction. By Lemma B.1 and (52), we have
\[\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T_{2},0)} \leq\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T_{2}-1,0)}+\frac{3 \eta(P-1)^{2}\sigma_{p}^{2}dn}{2Bm}\] \[\leq\frac{3\eta(P-1)^{2}\sigma_{p}^{2}dnT_{2}}{2Bm}.\]
And, by (46), we know that at \(t=T_{2}\), we have
\[\gamma_{j^{\prime},r^{\prime}}^{(T_{2},0)}\geq\frac{\eta c_{3}c_{4}CT_{2}n\| \mu\|_{2}^{2}}{8Bm}.\]
Thus,
\[\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T_{2},0)}}{\gamma_{j^{\prime},r ^{\prime}}^{(T_{2},0)}}\leq\Theta(\mathrm{SNR}^{-2}).\]
Suppose \(\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(\widetilde{T},0)}}{\gamma_{j^{ \prime},r^{\prime}}^{(\widetilde{T},0)}}\leq\Theta(\mathrm{SNR}^{-2})\). According to the decomposition, we have:
\[\langle\mathbf{w}_{j,r}^{(T,b)},\widehat{y}_{i}\boldsymbol{\mu}\rangle =\langle\mathbf{w}_{j,r}^{(0,0)},\widehat{y}_{i}\boldsymbol{\mu} \rangle+j\cdot\gamma_{j,r}^{(T,b)}\cdot\widehat{y}_{i}\] \[+\frac{1}{P-1}\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T,b)}\cdot \|\boldsymbol{\xi}_{i}\|_{2}^{-2}\langle\boldsymbol{\xi}_{i},\widehat{y}_{i} \boldsymbol{\mu}\rangle+\frac{1}{P-1}\sum_{i=1}^{n}\underline{\rho}_{j,r,i}^{(T,b)}\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\langle\boldsymbol{\xi}_{i},\widehat{y }_{i}\boldsymbol{\mu}\rangle. \tag{55}\]And we have that
\[|\langle\mathbf{w}_{j,r}^{(0,0)},\widehat{y}_{i}\mathbf{\mu}\rangle+ \frac{1}{P-1}\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T,b)}\cdot\|\mathbf{\xi}_{i}\|_ {2}^{-2}\langle\mathbf{\xi}_{i},\widehat{y}\mathbf{\mu}\rangle+\frac{1}{P-1}\sum_{i=1} ^{n}\rho_{j,r,i}^{(T,b)}\cdot\|\mathbf{\xi}_{i}\|_{2}^{-2}\langle\mathbf{\xi}_{i}, \widehat{y}\mathbf{\mu}\rangle|\] \[\leq\beta/2+|\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T,b)}|\frac{ 4\|\mu\|_{2}\sqrt{2\log(6n/\delta)}}{\sigma_{p}d(P-1)}\] \[\leq\beta/2+\frac{\Theta(\mathrm{SNR}^{-1})\gamma_{j,r}^{(T,b)}}{ \sqrt{d}}\] \[\leq\gamma_{j,r}^{(T,0)},\]
where the first inequality is due to triangle inequality and Lemma B.1, the second inequality is due to induction hypothesis, and the last inequality is due to Condition 3.1.
Thus, the sign of \(\langle\mathbf{w}_{j,r}^{(T,b)},\widehat{y}_{i}\mathbf{\mu}\rangle\) is persistent through out the epoch. Then, without loss of generality, we suppose \(\langle\mathbf{w}_{j,r}^{(T,b)},\mathbf{\mu}\rangle>0\). Thus, the update rule of \(\gamma\) is:
\[\gamma_{j,r}^{(t,b+1)}\] \[=\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot\bigg{[}\sum_{i\in \mathcal{I}_{T,b}\cap S_{+}\cap S_{1}}\ell_{i}^{\prime(T,b)}-\sum_{i\in \mathcal{I}_{T,b}\cap S_{-}\cap S_{1}}\ell_{i}^{\prime(T,b)}\bigg{]}\|\mathbf{\mu} \|_{2}^{2}\] \[\geq\gamma_{j,r}^{(T,b)}+\frac{\eta}{Bm}\cdot\big{[}\min_{i\in \mathcal{I}_{T,b}}\ell_{i}^{(T,b)}|\mathcal{I}_{T,b}\cap S_{+}\cap S_{1}|-\max _{i\in\mathcal{I}_{T,b}}|\mathcal{I}_{T,b}\cap S_{-}\cap S_{-1}|\big{]}\cdot \|\mathbf{\mu}\|_{2}^{2}.\]
Therefore,
\[\gamma_{j,r}^{(T+1,0)}\geq\gamma_{j,r}^{(T,b)}+\frac{\eta}{Bm}\cdot\big{[}\min \ell_{i}^{(T,b_{i}^{(T)})}|S_{+}\cap S_{1}|-\max\ell_{i}^{(T,b_{i}^{(T)})}|S_ {-}\cap S_{-1}|\big{]}\cdot\|\mathbf{\mu}\|_{2}^{2}. \tag{56}\]
And, by (52), we have
\[\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T+1,0)}\leq\sum_{i=1}^{n}\overline{ \rho}_{j,r,i}^{(T,0)}+\frac{\eta(P-1)^{2}\sigma_{p}^{2}dH\Phi(-1)}{8m}\cdot \max\Big{|}\ell_{i}^{(T,b_{i}^{(T)})}\Big{|}. \tag{57}\]
Thus, combining (56) and (57), we have
\[\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T+1,0)}}{\gamma_{j,r }^{(T+1,0)}}\] \[\leq\max\left\{\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(T,0 )}}{\gamma_{j,r}^{(T,0)}},\frac{(P-1)^{2}\sigma_{p}^{2}dn\Phi(-1)\cdot\max| \ell_{i}^{\prime(T,b_{i}^{(T)})}|}{8\big{[}\min\ell_{i}^{(T,b_{i}^{(T)})}|S_{ +}\cap S_{1}|-\max\ell_{i}^{(T,b_{i}^{(T)})}|S_{-}\cap S_{-1}|\big{]}\cdot\| \mathbf{\mu}\|_{2}^{2}}\right\}\] \[\leq\Theta(\mathrm{SNR}^{-2}), \tag{58}\]
where the last inequality is due to the induction hypothesis, the third statement of Lemma C.8, and Lemma B.5. Thus, by induction, we have for all \(T_{1}\leq t\leq T^{*}\) that
\[\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t,0)}}{\gamma_{j^{\prime},r^{ \prime}}^{(t,0)}}\leq\Theta(\mathrm{SNR}^{-2}).\]
And for \((T_{1},0)\leq(t,b)\leq(T^{*},0)\), we can bound the ratio as follows:
\[\frac{\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t,b)}}{\gamma_{j^{\prime},r^{ \prime}}^{(t,b)}}\leq\frac{4\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t,0)}}{ \gamma_{j^{\prime},r^{\prime}}^{(t,0)}}\leq\Theta(\mathrm{SNR}^{-2}),\]
where the first inequality is due to the update rule of \(\overline{\rho}_{j,r,i}^{(t,b)}\) and \(\overline{\rho}_{j,r,i}^{(t,b)}\). Thus, we have completed the proof.
### Test Error Analysis
In this section, we present and prove the exact upper bound and lower bound of test error in Theorem 3.2. Since we have resolved the challenges brought by stochastic mini-batch parameter update, the remaining proof for test error is similar to the counterpart in Kou et al. (2023).
#### c.3.1 Test Error Upper Bound
First, we prove the upper bound of test error in Theorem 3.2 when the training loss converges.
**Theorem C.17** (Second part of Theorem 3.2).: _Under the same conditions as Theorem 3.2, then there exists a large constant \(C_{1}\) such that when \(n\|\mathbf{\mu}\|_{2}^{2}\geq C_{1}(P-1)^{4}\sigma_{p}^{4}d\), for time \(t\) defined in Lemma C.15, we have the test error_
\[\mathbb{P}_{(\mathbf{x},y)\sim\mathcal{D}}\big{(}y\neq\operatorname{sign}(f( \mathbf{W}^{(t,0)},\mathbf{x}))\big{)}\leq p+\exp\bigg{(}-n\|\mathbf{\mu}\|_{2}^{4} /(C_{2}(P-1)^{4}\sigma_{p}^{4}d)\bigg{)},\]
_where \(C_{2}=O(1)\)._
Proof.: The proof is similar to the proof of Theorem E.1 in Kou et al. (2023). The only difference is substituting \(\mathbf{\xi}\) in their proof with \((P-1)\mathbf{\xi}\).
#### c.3.2 Test Error Lower Bound
In this part, we prove the lower bound of the test error in Theorem 3.2. We give two key Lemmas.
**Lemma C.18**.: _For \((T_{1},0)\leq(t,b)<(T^{*},0)\), denote \(g(\mathbf{\xi})=\sum_{j,r}j(P-1)\sigma(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{\xi} \rangle)\). There exists a fixed vector \(\mathbf{v}\) with \(\|\mathbf{v}\|_{2}\leq 0.06\sigma_{p}\) such that_
\[\sum_{j^{\prime}\in\{\pm 1\}}[g(j^{\prime}\mathbf{\xi}+\mathbf{v})-g(j^{\prime}\mathbf{ \xi})]\geq 4C_{6}\max_{j\in\{\pm 1\}}\Big{\{}\sum_{r}\gamma_{j,r}^{(t,b)} \Big{\}}, \tag{59}\]
_for all \(\mathbf{\xi}\in\mathbb{R}^{d}\)._
Proof of Lemma c.18.: The proof is similar to the proof of Lemma 5.8 in Kou et al. (2023). The only difference is substituting \(\mathbf{\xi}\) in their proof with \((P-1)\mathbf{\xi}\).
**Lemma C.19** (Proposition 2.1 in Devroye et al. (2018)).: _The TV distance between \(\mathcal{N}(0,\sigma_{p}^{2}\mathbf{I}_{d})\) and \(\mathcal{N}(\mathbf{v},\sigma_{p}^{2}\mathbf{I}_{d})\) is smaller than \(\|\mathbf{v}\|_{2}/2\sigma_{p}\)._
Then, we can prove the lower bound of the test error.
**Theorem C.20** (Third part of Theorem 3.2).: _Suppose that \(n\|\mathbf{\mu}\|_{2}^{4}\leq C_{3}d(P-1)^{4}\sigma_{p}^{4}\), then we have that \(L_{\mathcal{D}}^{0-1}(\mathbf{W}^{(t,0)})\geq p+0.1\), where \(C_{3}\) is an sufficiently large absolute constant._
Proof.: The proof is similar to the proof of Theorem 4.3 in Kou et al. (2023). The only difference is substituting \(\mathbf{\xi}\) in their proof with \((P-1)\mathbf{\xi}\).
## Appendix D Proofs for SAM
### Noise Memorization Prevention
The following lemma shows the update rule of the neural network
**Lemma D.1**.: _We denote \(\ell_{i}^{\prime(t,b)}=\ell^{\prime}[y_{i}\cdot f(\mathbf{W}^{(t,b)},\mathbf{x }_{i})]\), then the adversarial point of \(\mathbf{W}^{(t,b)}\) is \(\mathbf{W}^{(t,b)}+\mathbf{\widehat{e}}^{(t,b)}\), where_
\[\mathbf{\widehat{e}}^{(t,b)}_{j,r}=\frac{\tau}{m}\frac{\sum_{i\in\mathcal{I}_{t,b }}\sum_{p\in[P]}\ell_{i}^{\prime(t,b)}j\cdot y_{i}\sigma^{\prime}(\langle \mathbf{w}_{j,r}^{(t,b)},\mathbf{x}_{i,p}\rangle)\mathbf{x}_{i,p}}{\|\nabla_{ \mathbf{W}}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{F}}.\]
_Then the training update rule of the parameter is_
\[\mathbf{w}_{j,r}^{(t+1,b)}=\mathbf{w}_{j,r}^{(t,b)}-\frac{\eta}{Bm}\sum_{i\in \mathcal{I}_{t,b}}\sum_{p\in[P]}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle \mathbf{w}_{j,r}^{(t,b)}+\mathbf{\widehat{e}}_{j,r}^{(t,b)},\mathbf{x}_{i,p}\rangle )j\cdot\mathbf{x}_{i,p}\]\[=\mathbf{w}_{j,r}^{(t,b)}-\frac{\eta}{Bm}\sum_{i\in\mathcal{I}_{t,b}} \sum_{p\in[P]}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{x}_{i,p}\rangle+\langle\widehat{\mathbf{\epsilon}}_{t,j,r},\mathbf{x}_ {i,p}\rangle)j\cdot\mathbf{x}_{i,p}\] \[=\mathbf{w}_{j,r}^{(t,b)}-\frac{\eta}{Bm}\sum_{i\in\mathcal{I}_{t,b}}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},y \mathbf{\mu}\rangle+\langle\widehat{\mathbf{\epsilon}}_{t,j,r},y\mathbf{\mu}\rangle)j\mathbf{\mu}\] \[\qquad\underbrace{-\frac{\eta(P-1)}{Bm}\sum_{i\in\mathcal{I}_{t,b }}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\mathbf{ \xi}_{i}\rangle+\langle\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},\mathbf{\xi}_{i} \rangle)jy_{i}\mathbf{\xi}_{i}}_{\text{NoiseTerm}}.\]
We will show that the noise term will be small if we train with the SAM algorithm. We consider the first stage where \(t\leq T_{1}\) where \(T_{1}=mB/(12n\eta||\mathbf{\mu}||_{2}^{2})\). Then, the following property holds.
**Proposition D.2**.: _Under Assumption 3.1, for \(0\leq t\leq T_{1}\), we have that_
\[\gamma_{j,r}^{(0,0)},\overline{\rho}_{j,r,i}^{(0,0)},\rho_{j,r,i} ^{(0,0)}=0, \tag{60}\] \[0\leq\gamma_{j,r}^{(t,b)}\leq 1/12,\] (61) \[0\leq\overline{\rho}_{j,r,i}^{(t,b)}\leq 1/12,\] (62) \[0\geq\underline{\rho}_{j,r,i}^{(t,b)}\geq-\beta-10\sqrt{\frac{ \log(6n^{2}/\delta)}{d}}n. \tag{63}\]
_Besides, \(\gamma_{j,r}^{(T_{1},0)}=\Omega(1)\)._
**Lemma D.3**.: _Under Assumption 3.1, suppose (27), (28) and (29) hold at iteration \(t\). Then, for all \(r\in[m]\), \(j\in\{\pm 1\}\) and \(i\in[n]\),_
\[\left|\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)}, \mathbf{\mu}\rangle-j\cdot\gamma_{j,r}^{(t,b)}\right|\leq\mathrm{SNR}\sqrt{\frac{3 2\log(6n/\delta)}{d}}n\alpha, \tag{64}\] \[\left|\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)}, \mathbf{\xi}_{i}\rangle-\frac{1}{P-1}\underline{\rho}_{j,r,i}^{(t,b)}\right|\leq \frac{5}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\,j\neq y_{i},\] (65) \[\left|\langle\mathbf{w}_{j,r}^{(t,b)}-\mathbf{w}_{j,r}^{(0,0)}, \mathbf{\xi}_{i}\rangle-\frac{1}{P-1}\overline{\rho}_{j,r,i}^{(t,b)}\right|\leq \frac{5}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\,j=y_{i}. \tag{66}\]
Proof of Lemma d.3.: Notice that \(1/12<\alpha\), if the condition (61), (62), (63) holds, (27), (28) and (29) also holds. Therefore, by Lemma C.3, we know that Lemma D.3 also holds.
**Lemma D.4**.: _Under Assumption 3.1, suppose (61), (62), (63) hold at iteration \(t,b\). Then, for all \(j\in\{\pm 1\}\) and \(i\in[n]\), \(F_{j}(\mathbf{W}_{j}^{(t,b)},\mathbf{x}_{i})\leq 0.5\). Therefore \(-0.3\geq\ell_{i}^{\prime}\geq-0.7\)._
Proof.: Notice that \(1/12<\alpha\), if the condition (61), (62), (63) holds, (27), (28) and (29) also holds. Therefore, by Lemma C.4, we know that for all \(j\neq y_{i}\) and \(i\in[n]\), \(F_{j}(\mathbf{W}_{j}^{(t,b)},\mathbf{x}_{i})\leq 0.5\). Next we will show that for \(j=y_{i}\), \(F_{j}(\mathbf{W}_{j}^{(t,b)},\mathbf{x}_{i})\leq 0.5\) also holds.
According to Lemma D.3, we have
\[F_{j}(\mathbf{W}_{j}^{(t,b)},\mathbf{x}_{i}) =\frac{1}{m}\sum_{r=1}^{m}[\sigma(\langle\mathbf{w}_{j,r}^{(t,b)},y_{i}\mathbf{\mu}\rangle)+(P-1)\sigma(\langle\mathbf{w}_{j,r}^{(t)},\mathbf{\xi}_{i }\rangle)]\] \[\leq 2\max\{|\langle\mathbf{w}_{j,r}^{(t,b)},y_{i}\mathbf{\mu}\rangle |,(P-1)|\langle\mathbf{w}_{j,r}^{(t)},\mathbf{\xi}_{i}\rangle|\}\] \[\leq 6\max\left\{|\langle\mathbf{w}_{j,r}^{(0)},\widehat{y}_{i}\bm {\mu}\rangle|,(P-1)|\langle\mathbf{w}_{j,r}^{(0)},\mathbf{\xi}_{i}\rangle|, \mathrm{SNR}\sqrt{\frac{32\log(6n/\delta)}{d}}n\alpha,\right.\] \[\qquad\left.5\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,|\gamma_{ j,r}^{(t,b)}|,|\underline{\rho}_{j,r,i}^{(t,b)}\right\}\] \[\leq 6\max\left\{\beta,\mathrm{SNR}\sqrt{\frac{32\log(6n/\delta)}{ d}}n\alpha,5\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,|\gamma_{j,r}^{(t,b)}|,| \overline{\rho}_{j,r,i}^{(t,b)}|\right\}\]\[\leq 0.5,\]
where the second inequality is by (64), (65) and (66); the third inequality is due to the definition of \(\beta\); the last inequality is by (25), (61), (62).
Since \(F_{j}(\mathbf{W}_{j}^{(t,b)},\mathbf{x}_{i})\in[0,0.5]\) we know that
\[-0.3\geq-\frac{1}{1+\exp(0.5)}\geq\ell_{i}^{\prime}\geq-\frac{1}{1+\exp(-0.5)} \geq-0.7.\]
Based on the previous foundation lemmas, we can provide the key lemma of SAM which is different from the dynamic of SGD.
**Lemma D.5**.: _Under Assumption 3.1, suppose (61), (62) and (63) hold at iteration \(t,b\). We have that if \(\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{k}\rangle\geq 0\), \(k\in\mathcal{I}_{t,b}\) and \(j=y_{k}\), then \(\langle\mathbf{w}_{j,r}^{(t,b)}+\boldsymbol{\hat{\epsilon}}_{j,r}^{(t,b)}, \boldsymbol{\xi}_{k}\rangle<0\)._
Proof.: We first prove that there for \(t\leq T_{1}\), there exists a constant \(C_{2}\) such that
\[\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{F}\leq C_{2 }P\sigma_{p}\sqrt{d/B}.\]
Recall that
\[L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})=\frac{1}{B}\sum_{i\in\mathcal{I}_{t,b}}\ell(y_{i}f(\mathbf{W}^{(t,b)},x_{i})),\]
we have
\[\nabla_{\mathbf{w}_{j,r}}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)}) =\frac{1}{B}\sum_{i\in\mathcal{I}_{t,b}}\nabla_{\mathbf{w}_{j,r}} \ell(y_{i}f(\mathbf{W}^{(t,b)},\mathbf{x}_{i}))\] \[=\frac{1}{B}\sum_{i\in\mathcal{I}_{t,b}}y_{i}\ell^{\prime}(y_{i}f( \mathbf{W}^{(t,b)},\mathbf{x}_{i}))\nabla_{\mathbf{w}_{j,r}}f(\mathbf{W}^{(t, b)},\mathbf{x}_{i})\] \[=\frac{1}{Bm}\sum_{i\in\mathcal{I}_{t,b}}y_{i}\ell_{i}^{\prime(t, b)}[\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\mu}\rangle) \cdot\boldsymbol{\mu}+\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)}, \boldsymbol{\xi}_{i}\rangle)\cdot(P-1)\boldsymbol{\xi}_{i}].\]
We have
\[\|\nabla_{\mathbf{w}_{j,r}}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b )})\|_{2}\] \[\leq\frac{1}{Bm}\bigg{\|}\sum_{i\in\mathcal{I}_{t,b}}|\ell_{i}^{ \prime(t,b)}|\cdot\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{ \mu}\rangle)\cdot\boldsymbol{\mu}\bigg{\|}_{2}+\frac{1}{Bm}\bigg{\|}\sum_{i \in\mathcal{I}_{t,b}}|\ell_{i}^{\prime(t,b)}|\cdot\sigma^{\prime}(\langle \mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{i}\rangle)\cdot(P-1)\boldsymbol{\xi }_{i}\bigg{\|}_{2}\] \[\leq 0.7m^{-1}\|\boldsymbol{\mu}\|_{2}+1.4(P-1)m^{-1}\sigma_{p} \sqrt{d/B}\] \[\leq 2Pm^{-1}\sigma_{p}\sqrt{d/B},\]
and
\[\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{F}^{2}=\sum _{j,r}\|\nabla_{\mathbf{w}_{j,r}}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{ 2}^{2}\leq 2m(2Pm^{-1}\sigma_{p}\sqrt{d/B})^{2},\]
leading to
\[\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}(\mathbf{W}^{(t,b)})\|_{F}\leq 2 \sqrt{2}P\sigma_{p}\sqrt{d/Bm}.\]
From Lemma D.1, we have
\[\langle\boldsymbol{\hat{\epsilon}}_{j,r}^{(t,b)},\boldsymbol{\xi} _{k}\rangle =\frac{\tau}{mB}\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}( \mathbf{W}^{(t,b)})\|_{F}^{-1}\sum_{i\in\mathcal{I}_{t,b}}\sum_{p\in[P]}\ell _{i}^{\prime(t)}j\cdot y_{i}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t)}, \mathbf{x}_{i,p}\rangle)\langle\mathbf{x}_{i,p},\boldsymbol{\xi}_{k}\rangle\] \[=\frac{\tau}{mB}\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{t,b}}( \mathbf{W}^{(t,b)})\|_{F}^{-1}\cdot\bigg{(}\sum_{i\in\mathcal{I}_{t,b},i\neq k }\ell_{i}^{\prime(t,b)}jy_{i}\cdot(P-1)\sigma^{\prime}(\langle\mathbf{w}_{j, r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle)\langle\boldsymbol{\xi}_{i},\boldsymbol{\xi}_{k}\rangle\] \[\qquad+\ell_{k}^{\prime(t)}jy_{k}\cdot(P-1)\sigma^{\prime}(\langle \mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{k}\rangle)\langle\boldsymbol{\xi}_{k},\boldsymbol{\xi}_{k}\rangle\]\[\gamma_{j,r}^{(T,\widetilde{b})}\leq T_{1}\cdot(n/B)\cdot\frac{\eta}{m}\| \boldsymbol{\mu}\|_{2}^{2}\leq 1/12.\]Second, by Lemma D.6, we know that (62) holds for \((\widetilde{T}-1,\widetilde{b})\).
Last, we need to prove that (63) holds \((\widetilde{T}-1,\widetilde{b})\). The proof is similar to previous proof without SAM.
When \(\rho_{j,r,k}^{(\widetilde{T}-1,\widetilde{b}-1)}<-0.5(P-1)\beta-6\sqrt{\frac{ \log(6n^{2}/\delta)}{d}}n\alpha\), by (31), we have
\[\langle\mathbf{w}_{j,r}^{(\widetilde{T}-1,\widetilde{b}-1)}, \boldsymbol{\xi}_{k}\rangle <\langle\mathbf{w}_{j,r}^{(0,0)},\boldsymbol{\xi}_{k}\rangle+ \frac{1}{P-1}\rho_{j,r,k}^{(\widetilde{T}-1,\widetilde{b}-1)}+\frac{5}{P-1} \sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha\] \[\leq-\frac{1}{P-1}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\]
and we have
\[\langle\boldsymbol{\widehat{\epsilon}}_{j,r}^{(\widetilde{T}-1, \widetilde{b}-1)},\boldsymbol{\xi}_{i}\rangle =\frac{\tau}{mB}\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{\widetilde{T }-1,\widetilde{b}-1}}(\mathbf{W}^{(\widetilde{T}-1,\widetilde{b}-1)})\|_{F}^{ -1}\sum_{i\in\mathcal{I}_{\widetilde{T}-1,\widetilde{b}-1}}\sum_{p\in[P]}\ell _{i}^{\prime(\widetilde{T}-1,\widetilde{b}-1)}j\cdot y_{i}\] \[\qquad\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(\widetilde{T}-1, \widetilde{b}-1)},\boldsymbol{x}_{i,p}\rangle)\langle\mathbf{x}_{i,p}, \boldsymbol{\xi}_{k}\rangle\] \[=\frac{\tau}{mB}\|\nabla_{\mathbf{W}}L_{\mathcal{I}_{\widetilde{T }-1,\widetilde{b}-1}}(\mathbf{W}^{(\widetilde{T}-1,\widetilde{b}-1)})\|_{F}^{ -1}\cdot\bigg{(}\sum_{i\in\mathcal{I}_{\widetilde{T}-1,\widetilde{b}-1},i\neq k }\ell_{i}^{\prime(\widetilde{T}-1,\widetilde{b}-1)}j\cdot y_{i}\] \[\qquad(P-1)\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(\widetilde{ T}-1,\widetilde{b}-1)},\boldsymbol{\xi}_{i}\rangle)\langle\boldsymbol{\xi}_{i}, \boldsymbol{\xi}_{k}\rangle+\ell_{k}^{\prime(t)}jy_{k}\cdot(P-1)\sigma^{\prime }(\langle\mathbf{w}_{j,r}^{(t)},\boldsymbol{\xi}_{k}\rangle)\langle\boldsymbol {\xi}_{k},\boldsymbol{\xi}_{k}\rangle\] \[\qquad+\sum_{i\in\mathcal{I}_{\widetilde{T}-1,\widetilde{b}-1}} \ell_{i}^{\prime(\widetilde{T}-1,\widetilde{b}-1)}j\cdot\sigma^{\prime}( \langle\mathbf{w}_{j,r}^{(\widetilde{T}-1,\widetilde{b}-1)},y_{i}\boldsymbol{ \mu}\rangle\langle\boldsymbol{\mu},\boldsymbol{\xi}_{k}\rangle\bigg{)}\] \[\leq\frac{\tau}{mC_{2}P\sigma_{p}\sqrt{Bd}}\bigg{[}0.8B(P-1) \sigma_{P}^{2}\sqrt{d\log(6n^{2}/\delta)}+0.4B\sigma_{P}\|\boldsymbol{\mu}\|_{ 2}\sqrt{2\log(6n^{2}/\delta)}\bigg{]}\] \[\leq C_{4}\frac{\tau\sqrt{B}\sigma_{p}\sqrt{\log(6n^{2}/\delta)} }{m}\] \[=C_{4}\frac{B\sqrt{\log(6n^{2}/\delta)}}{C_{3}P\sqrt{d}}\] \[\leq\frac{1}{P}\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha,\]
and thus \(\langle\mathbf{w}_{j,r}^{(\widetilde{T}-1,\widetilde{b}-1)}+\boldsymbol{\widehat {\epsilon}}_{j,r}^{(\widetilde{T}-1,\widetilde{b}-1)},\boldsymbol{\xi}_{i} \rangle<0\) which leads to
\[\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b})} =\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b}-1)}+\frac{\eta(P- 1)^{2}}{Bm}\cdot\ell_{i}^{\prime(\widetilde{T}-1,\widetilde{b}-1)}\cdot\sigma^ {\prime}(\langle\mathbf{w}_{j,r}^{(\widetilde{T}-1,\widetilde{b}-1)}, \boldsymbol{\xi}_{i}\rangle)\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{2}.\] \[\quad\mathds{1}(y_{i}=-j)\,\mathds{1}(i\in\mathcal{I}_{\widetilde {T}-1,\widetilde{b}-1})\] \[=\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b}-1)}.\]
Therefore, we have
\[\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b})}=\rho_{j,r,i}^{(\widetilde{T}-1, \widetilde{b}-1)}\geq-(P-1)\beta-5P\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha.\]
When \(\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b}-1)}\geq-0.5(P-1)\beta-5\sqrt{\frac{ \log(6n^{2}/\delta)}{d}}n\alpha\), we have that
\[\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b})} \geq\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b}-1)}+\frac{\eta(P- 1)^{2}}{Bm}\cdot\ell_{i}^{\prime(\widetilde{T}-1,\widetilde{b}-1)}\cdot\| \boldsymbol{\xi}_{i}\|_{2}^{2}\] \[\geq\rho_{j,r,i}^{(\widetilde{T}-1,\widetilde{b}-1)}-\frac{0.4\eta (P-1)^{2}}{Bm}\cdot 2d\sigma_{p}^{2}\] \[\geq-(P-1)\beta-5P\sqrt{\frac{\log(6n^{2}/\delta)}{d}}n\alpha.\]Therefore, the induction is completed, and thus Proposition D.2 holds.
Next, we will prove that \(\gamma_{j,r}^{(t)}\) can achieve \(\Omega(1)\) after \(T_{1}=mB/(12n\eta\|\mathbf{\mu}\|_{2}^{2})\) iterations. By Lemma B.6, we know that there exists \(c_{3}\cdot T_{1}\) epochs such that at least \(c_{4}\cdot H\) batches in these epochs, satisfy
\[|S_{+}\cap S_{y}\cap\mathcal{I}_{t,b}|\in\left[\frac{B}{4},\frac{3B}{4}\right]\]
for both \(y=+1\) and \(y=-1\). For SAM, we have the following update rule for \(\gamma_{j,r}^{(t,b)}\):
\[\gamma_{j,r}^{(t,b+1)} =\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\sum_{i\in\mathcal{I}_{t,b} \cap S_{+}}\ell_{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t, b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},y_{i}\cdot\mathbf{\mu}\rangle)\cdot\|\mathbf{ \mu}\|_{2}^{2}\] \[\qquad+\frac{\eta}{Bm}\sum_{i\in\mathcal{I}_{t,b}\cap S_{-}}\ell _{i}^{\prime(t,b)}\sigma^{\prime}(\langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\bm {\epsilon}}_{j,r}^{(t,b)},y_{i}\cdot\mathbf{\mu}\rangle)\cdot\|\mathbf{\mu}\|_{2}^{2}.\]
If \(\langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},\mathbf{\mu} \rangle\geq 0\), we have
\[\gamma_{j,r}^{(t,b+1)} =\gamma_{j,r}^{(t,b)}-\frac{\eta}{Bm}\cdot\bigg{[}\sum_{i\in \mathcal{I}_{t,b}\cap S_{+}\cap S_{1}}\ell_{i}^{\prime(t)}-\sum_{i\in\mathcal{ I}_{t,b}\cap S_{+}\cap S_{-1}}\ell_{i}^{\prime(t)}\bigg{]}\|\mathbf{\mu}\|_{2}^{2}\] \[\geq\gamma_{j,r}^{(t,b)}+\frac{\eta}{Bm}\cdot\big{(}0.3|\mathcal{ I}_{t,b}\cap S_{+}\cap S_{1}|-0.7|\mathcal{I}_{t,b}\cap S_{+}\cap S_{-1}| \big{)}\cdot\|\mathbf{\mu}\|_{2}^{2}.\]
If \(\langle\mathbf{w}_{j,r}^{(t,b)}+\widehat{\mathbf{\epsilon}}_{j,r}^{(t,b)},\mathbf{\mu} \rangle<0\), we have
\[\gamma_{j,r}^{(t,b+1)} =\gamma_{j,r}^{(t,b)}+\frac{\eta}{Bm}\cdot\bigg{[}\sum_{i\in \mathcal{I}_{t,b}\cap S_{+}\cap S_{-1}}\ell_{i}^{(t)}-\sum_{i\in\mathcal{I}_{t,b}\cap S_{+}\cap S_{1}}\ell_{i}^{\prime(t)}\bigg{]}\|\mathbf{\mu}\|_{2}^{2}\] \[\geq\gamma_{j,r}^{(t,b)}+\frac{\eta}{Bm}\cdot\big{(}0.3|\mathcal{ I}_{t,b}\cap S_{+}\cap S_{-1}|-0.7|\mathcal{I}_{t,b}\cap S_{+}\cap S_{1}| \big{)}\cdot\|\mathbf{\mu}\|_{2}^{2}.\]
Therefore, we have
\[\gamma_{j,r}^{(T_{1},0)} \geq\frac{\eta}{Bm}(0.3\cdot c_{3}T_{1}\cdot c_{4}H\cdot 0.25B-0.7T_ {1}nq)\|\mathbf{\mu}\|_{2}^{2}\] \[=\frac{\eta}{Bm}(0.075c_{3}c_{4}T_{1}n-0.7T_{1}nq)\|\mathbf{\mu}\|_{2} ^{2}\] \[\geq\frac{\eta}{16Bm}c_{3}c_{4}T_{1}n\|\mathbf{\mu}\|_{2}^{2}\] \[=\frac{c_{3}c_{4}}{192}=\Omega(1).\]
**Lemma D.7**.: _Suppose Condition 3.1 holds. Then we have that \(\big{\|}\mathbf{w}_{j,r}^{(T_{1},0)}\big{\|}_{2}=\Theta(\sigma_{0}\sqrt{d})\) and_
\[\langle\mathbf{w}_{j,r}^{(T_{1},0)},j\mathbf{\mu}\rangle=\Omega(1),\] \[\langle\mathbf{w}_{-j,r}^{(T_{1},0)},j\mathbf{\mu}\rangle=-\Omega(1),\] \[\widehat{\beta}:=2\max_{i,j,r}\{|\langle\mathbf{w}_{j,r}^{(T_{1 },0)},\mathbf{\mu}\rangle|,(P-1)|\langle\mathbf{w}_{j,r}^{(T_{1},0)},\mathbf{\xi}_{i} \rangle|\}=O(1).\]
_Besides, for \(S_{i}^{(t,b)}\) and \(S_{j,r}^{(t,b)}\) defined in Lemma B.3 and B.4, we have that_
\[|S_{i}^{(T_{1},0)}|=\Omega(m),\,\forall i\in[n]\] \[|S_{j,r}^{(T_{1})}|=\Omega(n),\,\forall j\in\{\pm 1\},r\in[m].\]
### Test Error Analysis
Proof of Theorem 4.1.: Recall that
\[\mathbf{w}_{j,r}^{(t,b)}=\mathbf{w}_{j,r}^{(0,0)}+j\cdot\gamma_{j,r}^{(t,b)}\cdot \|\boldsymbol{\mu}\|_{2}^{-2}\cdot\boldsymbol{\mu}+\frac{1}{P-1}\sum_{i=1}^{n} \rho_{j,r,i}^{(t,b)}\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot\boldsymbol{\xi }_{i},\]
by triangle inequality we have
\[\left|\left\|\mathbf{w}_{j,r}^{(T_{1},0)}\right\|_{2}-\left\| \mathbf{w}_{j,r}^{(0,0)}\right\|_{2}\right| \leq|\gamma_{j,r}^{(t,b)}|\cdot\|\boldsymbol{\mu}\|_{2}^{-1}+ \frac{1}{P-1}\bigg{\|}\sum_{i=1}^{n}|\rho_{j,r,i}^{(t,b)}|\cdot\|\boldsymbol{ \xi}_{i}\|_{2}^{-2}\cdot\boldsymbol{\xi}_{i}\bigg{\|}_{2}\] \[\leq\frac{1}{12}\|\boldsymbol{\mu}\|_{2}^{-1}+\frac{\sqrt{n}}{12 (P-1)}(\sigma_{p}^{2}d/2)^{-1/2}\] \[\leq\frac{1}{6}\|\boldsymbol{\mu}\|_{2}^{-1}.\]
By the condition on \(\sigma_{0}\) and Lemma B.2, we have
\[\left\|\mathbf{w}_{j,r}^{(T_{1},0)}\right\|_{2}=\Theta\big{(}\big{\|}\mathbf{w }_{j,r}^{(0,0)}\big{\|}_{2}\big{)}=\Theta(\sigma_{0}\sqrt{d}).\]
By taking the inner product with \(\boldsymbol{\mu}\) and \(\boldsymbol{\xi}_{i}\), we can get
\[\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\mu}\rangle=\langle\mathbf{w}_{j,r }^{(0,0)},\boldsymbol{\mu}\rangle+j\cdot\gamma_{j,r}^{(t,b)}+\frac{1}{P-1}\sum_ {i=1}^{n}\rho_{j,r,i}^{(t,b)}\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot \langle\boldsymbol{\xi}_{i},\boldsymbol{\mu}\rangle,\]
and
\[\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle =\langle\mathbf{w}_{j,r}^{(0,0)},\boldsymbol{\xi}_{i}\rangle+j \cdot\gamma_{j,r}^{(t,b)}\cdot\|\boldsymbol{\mu}\|_{2}^{-2}\cdot\langle \boldsymbol{\mu},\boldsymbol{\xi}_{i}\rangle+\frac{1}{P-1}\sum_{i^{\prime}=1} ^{n}\rho_{j,r,i^{\prime}}^{(t,b)}\cdot\|\boldsymbol{\xi}_{i^{\prime}}\|_{2}^ {-2}\cdot\langle\boldsymbol{\xi}_{i^{\prime}},\boldsymbol{\xi}_{i}\rangle\] \[=\langle\mathbf{w}_{j,r}^{(0,0)},\boldsymbol{\xi}_{i}\rangle+j \cdot\gamma_{j,r}^{(t,b)}\cdot\|\boldsymbol{\mu}\|_{2}^{-2}\cdot\langle \boldsymbol{\mu},\boldsymbol{\xi}_{i}\rangle+\frac{1}{P-1}\rho_{j,r,i}^{(t,b)}\] \[\qquad+\frac{1}{P-1}\sum_{i\neq i^{\prime}}\rho_{j,r,i^{\prime}} ^{(t,b)}\cdot\|\boldsymbol{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot\langle \boldsymbol{\xi}_{i^{\prime}},\boldsymbol{\xi}_{i}\rangle.\]
Then, we have
\[\langle\mathbf{w}_{j,r}^{(T_{1},0)},j\boldsymbol{\mu}\rangle =\langle\mathbf{w}_{j,r}^{(0,0)},j\boldsymbol{\mu}\rangle+\gamma _{j,r}^{(T_{1},0)}+\frac{1}{P-1}\sum_{i=1}^{n}\rho_{j,r,i}^{(T_{1},0)}\cdot \|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot\langle\boldsymbol{\xi}_{i},j\boldsymbol {\mu}\rangle\] \[\geq\gamma_{j,r}^{(T_{1},0)}-|\langle\mathbf{w}_{j,r}^{(0,0)}, \boldsymbol{\mu}\rangle|-\frac{1}{P-1}\sum_{i=1}^{n}|\rho_{j,r,i}^{(T_{1},0)}| \cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot|\langle\boldsymbol{\xi}_{i}, \boldsymbol{\mu}\rangle|\] \[\geq\gamma_{j,r}^{(T_{1},0)}-\sqrt{2\log(12m/\delta)}\cdot\sigma_ {0}\|\boldsymbol{\mu}\|_{2}-\frac{n}{12(P-1)}(\sigma_{0}^{2}d/2)^{-1}\| \boldsymbol{\mu}\|_{2}\sigma_{p}\cdot\sqrt{2\log(6n/\delta)}\] \[\geq\frac{1}{2}\gamma_{j,r}^{(T_{1},0)},\]
and
\[\langle\mathbf{w}_{-j,r}^{(T_{1},0)},j\boldsymbol{\mu}\rangle =\langle\mathbf{w}_{-j,r}^{(0,0)},j\boldsymbol{\mu}\rangle-\gamma _{-j,r}^{(T_{1},0)}-\frac{1}{P-1}\sum_{i=1}^{n}\rho_{-j,r,i}^{(T_{1},0)}\cdot \|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot\langle\boldsymbol{\xi}_{i},j\boldsymbol {\mu}\rangle\] \[\leq-\gamma_{-j,r}^{(T_{1},0)}+|\langle\mathbf{w}_{-j,r}^{(0,0)}, \boldsymbol{\mu}\rangle|+\frac{1}{P-1}\sum_{i=1}^{n}|\rho_{-j,r,i}^{(T_{1},0)}| \cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot|\langle\boldsymbol{\xi}_{i}, \boldsymbol{\mu}\rangle|\] \[\leq-\gamma_{-j,r}^{(T_{1},0)}+\sqrt{2\log(12m/\delta)}\cdot \sigma_{0}\|\boldsymbol{\mu}\|_{2}+\frac{n}{12(P-1)}(\sigma_{0}^{2}d/2)^{-1}\| \boldsymbol{\mu}\|_{2}\sigma_{p}\cdot\sqrt{2\log(6n/\delta)}\] \[\leq-\frac{1}{2}\gamma_{j,r}^{(T_{1},0)},\]where the last inequality is by the condition on \(\sigma_{0}\) and \(\gamma_{j,r}^{(T_{1},0)}=\Omega(1)\). Thus, it follows that
\[\langle\mathbf{w}_{j,r}^{(T_{1},0)},j\boldsymbol{\mu}\rangle=\Omega(1),\;\langle \mathbf{w}_{-j,r}^{(T_{1},0)},j\boldsymbol{\mu}\rangle=-\Omega(1).\]
By triangle inequality, we have
\[|\langle\mathbf{w}_{j,r}^{(T_{1},0)},\boldsymbol{\mu}\rangle| \leq|\langle\mathbf{w}_{j,r}^{(0,0)},\boldsymbol{\mu}\rangle|+| \gamma_{j,r}^{(T_{1},0)}|+\frac{1}{P-1}\sum_{i=1}^{n}|\rho_{j,r,i}^{(t,b)}|\cdot \|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot|\langle\boldsymbol{\xi}_{i},\boldsymbol{ \mu}\rangle|\] \[\leq\frac{1}{2}\beta+\frac{1}{12}+\frac{n}{P-1}\cdot\frac{1}{12} (\sigma_{p}^{2}d/2)^{-1}\cdot\|\boldsymbol{\mu}\|_{2}\sigma_{p}\cdot\sqrt{2 \log(6n/\delta)}\] \[=\frac{1}{2}\beta+\frac{1}{12}+\frac{n}{6(P-1)}\|\boldsymbol{\mu }\|_{2}\sqrt{2\log(6n/\delta)}/(\sigma_{p}d)\] \[\leq\frac{1}{6},\]
and
\[|\langle\mathbf{w}_{j,r}^{(T_{1},0)},\boldsymbol{\xi}_{i}\rangle| \leq|\langle\mathbf{w}_{j,r}^{(0,0)},\boldsymbol{\xi}_{i}\rangle| +|\gamma_{j,r}^{(T_{1},0)}|\cdot\|\boldsymbol{\mu}\|_{2}^{-2}\cdot|\langle \boldsymbol{\mu},\boldsymbol{\xi}_{i}\rangle|+\frac{1}{P-1}|\rho_{j,r,i}^{(T_ {1},0)}|\] \[\qquad+\frac{1}{P-1}\sum_{i\neq i^{\prime}}|\rho_{j,r,i^{\prime}}^ {(T_{1},0)}|\cdot\|\boldsymbol{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot|\langle \boldsymbol{\xi}_{i^{\prime}},\boldsymbol{\xi}_{i}\rangle|\] \[\leq\frac{1}{2}\beta+\frac{1}{12}\|\boldsymbol{\mu}\|_{2}^{-1} \sigma_{p}\cdot\sqrt{2\log(6n/\delta)}+\frac{1}{12(P-1)}\] \[\qquad+\frac{n}{12(P-1)}(\sigma_{p}^{2}d/2)^{-1}2\sigma_{p}^{2} \cdot\sqrt{d\log(6n^{2}/\delta)}\] \[\leq\frac{1}{2}\beta+\frac{1}{12(P-1)}+\frac{1}{6}\|\boldsymbol{ \mu}\|_{2}^{-1}\sigma_{p}\cdot\sqrt{\log(6n/\delta)}\] \[\leq\frac{1}{6}.\]
This leads to
\[\widehat{\beta}:=2\max_{i,j,r}\{|\langle\mathbf{w}_{j,r}^{(T_{1},0)}, \boldsymbol{\mu}\rangle|,(P-1)|\langle\mathbf{w}_{j,r}^{(T_{1},0)},\boldsymbol {\xi}_{i}\rangle|\}=O(1).\]
In addition, we also have for \(t\leq T_{1}\) and \(j=y_{i}\) that
\[\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle- \langle\mathbf{w}_{j,r}^{(0,0)},\boldsymbol{\xi}_{i}\rangle\] \[\geq\frac{1}{P-1}\rho_{j,r,i}^{(t,b)}-\gamma_{j,r}^{(t,b)}\cdot\| \boldsymbol{\mu}\|_{2}^{-2}\cdot|\langle\boldsymbol{\mu},\boldsymbol{\xi}_{i} \rangle|-\frac{1}{P-1}\sum_{i\neq i^{\prime}}|\rho_{j,r,i^{\prime}}^{(t,b)}| \cdot\|\boldsymbol{\xi}_{i^{\prime}}\|_{2}^{-2}\cdot|\langle\boldsymbol{\xi}_{i ^{\prime}},\boldsymbol{\xi}_{i}\rangle|\] \[\geq-\gamma_{j,r}^{(t,b)}\cdot\|\boldsymbol{\mu}\|_{2}^{-2}\cdot| \langle\boldsymbol{\mu},\boldsymbol{\xi}_{i}\rangle|-\frac{1}{P-1}\sum_{i\neq i ^{\prime}}|\rho_{j,r,i^{\prime}}^{(t,b)}|\cdot\|\boldsymbol{\xi}_{i^{\prime}} \|_{2}^{-2}\cdot|\langle\boldsymbol{\xi}_{i^{\prime}},\boldsymbol{\xi}_{i}\rangle|\] \[\geq-\frac{1}{12}\|\boldsymbol{\mu}\|_{2}^{-2}\cdot|\langle \boldsymbol{\mu},\boldsymbol{\xi}_{i}\rangle|-\frac{n}{12(P-1)}\|\boldsymbol{ \xi}_{i^{\prime}}\|_{2}^{-2}\cdot|\langle\boldsymbol{\xi}_{i^{\prime}}, \boldsymbol{\xi}_{i}\rangle|\] \[\geq-\frac{1}{12}\|\boldsymbol{\mu}\|_{2}^{-1}\sigma_{p}\cdot \sqrt{2\log(6n/\delta)}-\frac{n}{12(P-1)}(\sigma_{p}^{2}d/2)^{-1}2\sigma_{p}^{ 2}\cdot\sqrt{d\log(6n^{2}/\delta)}\] \[=-\frac{1}{12}\|\boldsymbol{\mu}\|_{2}^{-1}\sigma_{p}\cdot\sqrt{2 \log(6n/\delta)}-\frac{n}{3(P-1)}\sqrt{\log(6n^{2}/\delta)/d}\] \[\geq-\frac{1}{6}\|\boldsymbol{\mu}\|_{2}^{-1}\sigma_{p}\cdot\sqrt{ \log(6n/\delta)}.\]
Now let \(\bar{S}_{i}^{(0,0)}\) denote \(\{r:\langle\mathbf{w}_{i,r}^{(0,0)},\boldsymbol{\xi}_{i}\rangle>\sigma_{0} \sigma_{p}\sqrt{d}\}\) and let \(\bar{S}_{j,r}^{(0,0)}\) denote \(\{i\in[n]:y_{i}=j,\;\langle\mathbf{w}_{yi,r}^{(t,b)},\boldsymbol{\xi}_{i} \rangle>\sigma_{0}\sigma_{p}\sqrt{d}\}\). By the condition on \(\sigma_{0}\), we have for \(t\leq T_{1}\) that
\[\langle\mathbf{w}_{j,r}^{(t,b)},\boldsymbol{\xi}_{i}\rangle\geq\frac{1}{\sqrt{ 2}}\langle\mathbf{w}_{j,r}^{(0,0)},\boldsymbol{\xi}_{i}\rangle,\]for any \(r\in\bar{S}_{i}^{(0,0)}\) or \(i\in\bar{S}_{j,r}^{(0,0)}\). Therefore, we have \(\bar{S}_{i}^{(0,0)}\subseteq S_{i}^{(T_{1},0)}\) and \(\bar{S}_{j,r}^{(0,0)}\subseteq S_{j,r}^{(T_{1},0)}\) and hence
\[0.8\Phi(-\sqrt{2})m\leq|\bar{S}_{i}^{(0,0)}|\leq|S_{i}^{(T_{1},0 )}|=\Omega(m),\] \[0.25\Phi(-\sqrt{2})n\leq|\bar{S}_{j,r}^{(0,0)}|\leq|S_{j,r}^{(T_{ 1},0)}|=\Omega(n),\]
where \(\Phi(\cdot)\) is the CDF of the standard normal distribution.
Now we can give proof of Theorem 4.
Proof of Theorem 4.: After the training process of SAM after \(T_{1}\), we get \(\mathbf{W}^{(T_{1},0)}\). To differentiate the SAM process and SGD process. We use \(\widetilde{\mathbf{W}}\) to denote the trajectory obtained by SAM in the proof, i.e., \(\widetilde{\mathbf{W}}^{(T_{1},0)}\). By Proposition D.2, we have that
\[\widetilde{\mathbf{w}}_{j,r}^{(T_{1},0)}=\widetilde{\mathbf{w}}_{j,r}^{(0,0)}+ j\cdot\widetilde{\gamma}_{j,r}^{(T_{1},0)}\cdot\frac{\mathbf{\mu}}{\|\mathbf{\mu}\|_{2}^{2} }+\frac{1}{P-1}\sum_{i=1}^{n}\widetilde{\rho}_{j,r,i}^{(T_{1},0)}\cdot\frac{\bm {\xi}_{i}}{\|\mathbf{\xi}_{i}\|_{2}^{2}}+\frac{1}{P-1}\sum_{i=1}^{n}\widetilde{\rho }_{j,r,i}^{(T_{1},0)}\cdot\frac{\mathbf{\xi}_{i}}{\|\mathbf{\xi}_{i}\|_{2}^{2}}, \tag{69}\]
where \(\widetilde{\gamma}_{j,r}^{(T_{1},0)}=\Theta(1)\), \(\widetilde{\rho}_{j,r,i}^{(T_{1},0)}\in[0,1/12]\), \(\widetilde{\rho}_{j,r,i}^{(T_{1},0)}\in[-\beta-10\sqrt{\log(6n^{2}/\delta)/d}n,0]\). Then the SGD start at \(\mathbf{W}^{(0,0)}:=\widetilde{\mathbf{W}}^{(T_{1},0)}\). Notice that by Lemma D.7, we know that the initial weights of SGD (i.e., the end weight of SAM) \(\mathbf{W}^{(0,0)}\) still satisfies the conditions for Subsection C.1 and C.2. Therefore, following the same analysis in Subsection C.1 and C.2, we have that there exist \(t=\widetilde{O}(\eta^{-1}\epsilon^{-1}mnd^{-1}P^{-2}\sigma_{p}^{-2})\) such that \(L_{S}(\mathbf{W}^{(t,0)})\leq\epsilon\). Besides,
\[\mathbf{w}_{j,r}^{(t,0)}=\mathbf{w}_{j,r}^{(0,0)}+j\cdot\gamma_{j,r}^{(t,0)} \cdot\frac{\mathbf{\mu}}{\|\mathbf{\mu}\|_{2}^{2}}+\frac{1}{P-1}\sum_{i=1}^{n}\widetilde {\rho}_{j,r,i}^{(t,0)}\cdot\frac{\mathbf{\xi}_{i}}{\|\mathbf{\xi}_{i}\|_{2}^{2}}+\frac{ 1}{P-1}\sum_{i=1}^{n}\rho_{j,r,i}^{(t,0)}\cdot\frac{\mathbf{\xi}_{i}}{\|\mathbf{\xi}_{ i}\|_{2}^{2}}\]
for \(j\in[\pm 1]\) and \(r\in[m]\) where
\[\gamma_{j,r}^{(t,0)}=\Theta(\mathrm{SNR}^{2})\sum_{i\in[n]}\overline{\rho}_{j,r,i}^{(t,0)},\quad\overline{\rho}_{j,r,i}^{(t,0)}\in[0,\alpha],\quad\underline {\rho}_{j,r,i}^{(t,0)}\in[-\alpha,0]. \tag{70}\]
Next, we will evaluate the test error for \(\mathbf{W}^{(t,0)}\). Notice that we use \((t)\) as the shorthand notation of \((t,0)\). For the sake of convenience, we use \((\mathbf{x},\widehat{y},y)\sim\mathcal{D}\) to denote the following: data point \((\mathbf{x},y)\) follows distribution \(\mathcal{D}\) defined in Definition 2.1, and \(\widehat{y}\) is its true label. We can write out the test error as
\[\mathbb{P}_{(\mathbf{x},y)\sim\mathcal{D}}\big{(}y\neq\operatorname {sign}(f(\mathbf{W}^{(t)},\mathbf{x}))\big{)}\] \[=\mathbb{P}_{(\mathbf{x},y)\sim\mathcal{D}}\big{(}yf(\mathbf{W}^{ (t)},\mathbf{x})\leq 0\big{)}\] \[=\mathbb{P}_{(\mathbf{x},y)\sim\mathcal{D}}\big{(}yf(\mathbf{W}^{ (t)},\mathbf{x})\leq 0,y\neq\widehat{y}\big{)}+\mathbb{P}_{(\mathbf{x},\widehat{y},y )\sim\mathcal{D}}\big{(}yf(\mathbf{W}^{(t)},\mathbf{x})\leq 0,y=\widehat{y}\big{)} \tag{71}\] \[=p\cdot\mathbb{P}_{(\mathbf{x},\widehat{y},y)\sim\mathcal{D}} \big{(}\widehat{y}f(\mathbf{W}^{(t)},\mathbf{x})\geq 0\big{)}+(1-p)\cdot \mathbb{P}_{(\mathbf{x},\widehat{y},y)\sim\mathcal{D}}\big{(}\widehat{y}f( \mathbf{W}^{(t)},\mathbf{x})\leq 0\big{)}\] \[\leq p+\mathbb{P}_{(\mathbf{x},\widehat{y},y)\sim\mathcal{D}} \big{(}\widehat{y}f(\mathbf{W}^{(t)},\mathbf{x})\leq 0\big{)},\]
where in the second equation we used the definition of \(\mathcal{D}\) in Definition 2.1. It therefore suffices to provide an upper bound for \(\mathbb{P}_{(\mathbf{x},\widehat{y})\sim\mathcal{D}}\big{(}\widehat{y}f( \mathbf{W}^{(t)},\mathbf{x})\leq 0\big{)}\). To achieve this, we write \(\mathbf{x}=(\widehat{y}\mathbf{\mu},\mathbf{\xi})\), and get
\[\widehat{y}f(\mathbf{W}^{(t)},\mathbf{x}) =\frac{1}{m}\sum_{j,r}\widehat{y}j[\sigma(\langle\mathbf{w}_{j, r}^{(t)},\widehat{y}\mathbf{\mu}\rangle)+\sigma(\langle\mathbf{w}_{j,r}^{(t)},\mathbf{\xi} \rangle)]\] \[=\frac{1}{m}\sum_{r}[\sigma(\langle\mathbf{w}_{\widehat{y},r}^{(t )},\widehat{y}\mathbf{\mu}\rangle)+(P-1)\sigma(\langle\mathbf{w}_{\widehat{y},r}^{(t )},\mathbf{\xi}\rangle)]\] \[\qquad-\frac{1}{m}\sum_{r}[\sigma(\langle\mathbf{w}_{-\widehat{y},r}^{(t)},\widehat{y}\mathbf{\mu}\rangle)+(P-1)\sigma(\langle\mathbf{w}_{-\widehat{y},r}^{(t)},\mathbf{\xi}\rangle)]. \tag{72}\]The inner product with \(j=\widehat{y}\) can be bounded as
\[\langle\mathbf{w}^{(t)}_{\widehat{y},r},\widehat{y}\boldsymbol{\mu}\rangle =\langle\mathbf{w}^{(0)}_{\widehat{y},r},\widehat{y}\boldsymbol{ \mu}\rangle+\gamma^{(t)}_{\widehat{y},r}+\frac{1}{(P-1)}\sum_{i=1}^{n}\overline {\rho}^{(t)}_{\widehat{y},r,i}\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot \langle\boldsymbol{\xi}_{i},\widehat{y}\boldsymbol{\mu}\rangle \tag{73}\] \[\qquad+\frac{1}{(P-1)}\sum_{i=1}^{n}\underline{\rho}^{(t)}_{ \widehat{y},r,i}\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot\langle\boldsymbol{ \xi}_{i},\widehat{y}\boldsymbol{\mu}\rangle\] \[\geq\langle\mathbf{w}^{(0)}_{\widehat{y},r},\widehat{y} \boldsymbol{\mu}\rangle+\gamma^{(t)}_{\widehat{y},r}-\frac{\sqrt{2\log(6n/ \delta)}}{P-1}\cdot\sigma_{p}\|\boldsymbol{\mu}\|_{2}\cdot(\sigma_{p}^{2}d/2 )^{-1}\bigg{[}\sum_{i=1}^{n}\overline{\rho}^{(t)}_{\widehat{y},r,i}+\sum_{i=1 }^{n}|\underline{\rho}^{(t)}_{\widehat{y},r,i}|\bigg{]}\] \[=\langle\mathbf{w}^{(0)}_{\widehat{y},r},\widehat{y}\boldsymbol{ \mu}\rangle+\gamma^{(t)}_{\widehat{y},r}-\Theta\big{(}\sqrt{\log(n/\delta)} \cdot(P\sigma_{p}d)^{-1}\|\boldsymbol{\mu}\|_{2}\big{)}\cdot\Theta(\mathrm{SNR }^{-2})\cdot\gamma^{(t)}_{\widehat{y},r}\] \[=\langle\mathbf{w}^{(0)}_{\widehat{y},r},\widehat{y}\boldsymbol{ \mu}\rangle+\big{[}1-\Theta\big{(}\sqrt{\log(n/\delta)}\cdot P\sigma_{p}/\| \boldsymbol{\mu}\|_{2}\big{)}\big{]}\gamma^{(t)}_{\widehat{y},r}\] \[=\langle\mathbf{w}^{(0)}_{\widehat{y},r},\widehat{y}\boldsymbol{ \mu}\rangle+\Theta(\gamma^{(t)}_{\widehat{y},r})\] \[=\Omega(1),\]
where the inequality is by Lemma B.1; the second equality is obtained by plugging in the coefficient orders we summarized at (70); the third equality is by the condition \(\mathrm{SNR}=\|\boldsymbol{\mu}\|_{2}/P\sigma_{p}\sqrt{d}\); the fourth equality is due to \(\|\boldsymbol{\mu}\|_{2}^{2}\geq C\cdot P^{2}\sigma_{p}^{2}\log(n/\delta)\) in Condition 3.1, so for sufficiently large constant \(C\) the equality holds; the last equality is by Lemma D.7. Moreover, we can deduce in a similar manner that
\[\langle\mathbf{w}^{(t)}_{-\widehat{y},r},\widehat{y}\boldsymbol{ \mu}\rangle =\langle\mathbf{w}^{(0)}_{-\widehat{y},r},\widehat{y} \boldsymbol{\mu}\rangle-\gamma^{(t)}_{-\widehat{y},r}+\sum_{i=1}^{n}\overline {\rho}^{(t)}_{-\widehat{y},r,i}\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot \langle\boldsymbol{\xi}_{i},-\widehat{y}\boldsymbol{\mu}\rangle \tag{74}\] \[\qquad+\sum_{i=1}^{n}\underline{\rho}^{(t)}_{-\widehat{y},r,i} \cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot\langle\boldsymbol{\xi}_{i}, \widehat{y}\boldsymbol{\mu}\rangle\] \[\leq\langle\mathbf{w}^{(0)}_{-\widehat{y},r},\widehat{y} \boldsymbol{\mu}\rangle-\gamma^{(t)}_{-\widehat{y},r}\] \[\qquad+\sqrt{2\log(6n/\delta)}\cdot\sigma_{p}\|\boldsymbol{\mu} \|_{2}\cdot(\sigma_{p}^{2}d/2)^{-1}\bigg{[}\sum_{i=1}^{n}\overline{\rho}^{(t )}_{-\widehat{y},r,i}+\sum_{i=1}^{n}|\underline{\rho}^{(t)}_{-\widehat{y},r, i}|\bigg{]}\] \[=\langle\mathbf{w}^{(0)}_{-\widehat{y},r},\widehat{y}\boldsymbol {\mu}\rangle-\Theta(\gamma^{(t)}_{-\widehat{y},r})\] \[=-\Omega(1)<0,\]
where the second equality holds based on similar analyses as in (73).
Denote \(g(\boldsymbol{\xi})\) as \(\sum_{r}\sigma(\langle\mathbf{w}^{(t)}_{-\widehat{y},r},\boldsymbol{\xi} \rangle)\). According to Theorem 5.2.2 in Vershynin (2018), we know that for any \(x\geq 0\) it holds that
\[\mathbb{P}(g(\boldsymbol{\xi})-\mathbb{E}g(\boldsymbol{\xi})\geq x)\leq\exp \Big{(}-\frac{cx^{2}}{\sigma_{p}^{2}\|g\|_{\mathrm{Lip}}^{2}}\Big{)}, \tag{75}\]
where \(c\) is a constant. To calculate the Lipschitz norm, we have
\[|g(\boldsymbol{\xi})-g(\boldsymbol{\xi}^{\prime})| =\bigg{|}\sum_{r=1}^{m}\sigma(\langle\mathbf{w}^{(t)}_{-\widehat {y},r},\boldsymbol{\xi}\rangle)-\sum_{r=1}^{m}\sigma(\langle\mathbf{w}^{(t)} _{-\widehat{y},r},\boldsymbol{\xi}^{\prime}\rangle)\bigg{|}\] \[\leq\sum_{r=1}^{m}\big{|}\sigma(\langle\mathbf{w}^{(t)}_{- \widehat{y},r},\boldsymbol{\xi}\rangle)-\sigma(\langle\mathbf{w}^{(t)}_{- \widehat{y},r},\boldsymbol{\xi}^{\prime}\rangle)\big{|}\] \[\leq\sum_{r=1}^{m}|\langle\mathbf{w}^{(t)}_{-\widehat{y},r}, \boldsymbol{\xi}-\boldsymbol{\xi}^{\prime}\rangle|\] \[\leq\sum_{r=1}^{m}\big{\|}\mathbf{w}^{(t)}_{-\widehat{y},r}\|_{2 }\cdot\|\boldsymbol{\xi}-\boldsymbol{\xi}^{\prime}\|_{2},\]where the first inequality is by triangle inequality, the second inequality is by the property of ReLU; and the last inequality is by Cauchy-Schwartz inequality. Therefore, we have
\[\|g\|_{\mathrm{Lip}}\leq\sum_{r=1}^{m}\big{\|}\mathbf{w}_{-\widehat{y},r}^{(t)} \big{\|}_{2}, \tag{76}\]
and since \(\langle\mathbf{w}_{-\widehat{y},r}^{(t)},\boldsymbol{\xi}\rangle\sim\mathcal{N }\big{(}0,\|\mathbf{w}_{-\widehat{y},r}^{(t)}\|_{2}^{2}\sigma_{p}^{2}\big{)}\), we can get
\[\mathbb{E}g(\boldsymbol{\xi})=\sum_{r=1}^{m}\mathbb{E}\sigma(\langle\mathbf{w}_{ -\widehat{y},r}^{(t)},\boldsymbol{\xi}\rangle)=\sum_{r=1}^{m}\frac{\|\mathbf{ w}_{-\widehat{y},r}^{(t)}\|_{2}\sigma_{p}}{\sqrt{2\pi}}=\frac{\sigma_{p}}{\sqrt{2 \pi}}\sum_{r=1}^{m}\|\mathbf{w}_{-\widehat{y},r}^{(t)}\|_{2}.\]
Next we seek to upper bound the \(2\)-norm of \(\mathbf{w}_{j,r}^{(t)}\). First, we tackle the noise section in the decomposition, namely:
\[\bigg{\|}\sum_{i=1}^{n}\rho_{j,r,i}^{(t)}\cdot\|\boldsymbol{\xi} \|_{2}^{-2}\cdot\boldsymbol{\xi}_{i}\bigg{\|}_{2}^{2}\] \[=\sum_{i=1}^{n}{\rho_{j,r,i}^{(t)}}^{2}\cdot\|\boldsymbol{\xi}_{i }\|_{2}^{-2}+2\sum_{1\leq i_{1}<i_{2}\leq n}\rho_{j,r,i_{1}}^{(t)}\rho_{j,r,i_ {2}}^{(t)}\cdot\|\boldsymbol{\xi}_{i_{1}}\|_{2}^{-2}\cdot\|\boldsymbol{\xi}_{i _{2}}\|_{2}^{-2}\cdot\langle\boldsymbol{\xi}_{i_{1}},\boldsymbol{\xi}_{i_{2}}\rangle\] \[\leq 4\sigma_{p}^{-2}d^{-1}\sum_{i=1}^{n}{\rho_{j,r,i}^{(t)}}^{2 }+2\sum_{1\leq i_{1}<i_{2}\leq n}\big{|}\rho_{j,r,i_{1}}^{(t)}\rho_{j,r,i_{2}} ^{(t)}\big{|}\cdot(16\sigma_{p}^{-4}d^{-2})\cdot(2\sigma_{p}^{2}\sqrt{d\log(6 n^{2}/\delta)})\] \[=4\sigma_{p}^{-2}d^{-1}\sum_{i=1}^{n}{\rho_{j,r,i}^{(t)}}^{2}+32 \sigma_{p}^{-2}d^{-3/2}\sqrt{\log(6n^{2}/\delta)}\bigg{[}\bigg{(}\sum_{i=1}^{n }\big{|}\rho_{j,r,i}^{(t)}\big{|}\bigg{)}^{2}-\sum_{i=1}^{n}{\rho_{j,r,i}^{(t )}}^{2}\bigg{]}\] \[=\Theta(\sigma_{p}^{-2}d^{-1})\sum_{i=1}^{n}{\rho_{j,r,i}^{(t)}}^ {2}+\widetilde{\Theta}(\sigma_{p}^{-2}d^{-3/2})\bigg{(}\sum_{i=1}^{n}\big{|} \rho_{j,r,i}^{(t)}\bigg{)}^{2}\] \[\leq\Theta(\sigma_{p}^{-2}d^{-1}n^{-1})\bigg{(}\sum_{i=1}^{n} \overline{\rho}_{j,r,i}^{(t)}\bigg{)}^{2},\]
where for the first inequality, we used Lemma B.1; for the second inequality, we used the definition of \(\overline{\rho}\), \(\underline{\rho}\); for the second to last equation, we plugged in coefficient orders. We can thus upper bound the \(2\)-norm of \(\mathbf{w}_{j,r}^{(t)}\) as:
\[\|\mathbf{w}_{j,r}^{(t)}\|_{2} \leq\|\mathbf{w}_{j,r}^{(0)}\|_{2}+\gamma_{j,r}^{(t)}\cdot\| \boldsymbol{\mu}\|_{2}^{-1}+\frac{1}{P-1}\bigg{\|}\sum_{i=1}^{n}\rho_{j,r,i}^ {(t)}\cdot\|\boldsymbol{\xi}_{i}\|_{2}^{-2}\cdot\boldsymbol{\xi}_{i}\bigg{\|} _{2}\] \[\leq\|\mathbf{w}_{j,r}^{(0)}\|_{2}+\gamma_{j,r}^{(t)}\cdot\| \boldsymbol{\mu}\|_{2}^{-1}+\Theta(P^{-1}\sigma_{p}^{-1}d^{-1/2}n^{-1/2})\cdot \sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t)}\] \[=\Theta(\sigma_{0}\sqrt{d})+\Theta(P^{-1}\sigma_{p}^{-1}d^{-1/2} n^{-1/2})\cdot\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t)}, \tag{77}\]
where the first inequality is due to the triangle inequality, and the equality is due to the following comparisons:
\[\frac{\gamma_{j,r}^{(t)}\cdot\|\boldsymbol{\mu}\|_{2}^{-1}}{ \Theta(P^{-1}\sigma_{p}^{-1}d^{-1/2}n^{-1/2})\cdot\sum_{i=1}^{n}\overline{\rho }_{j,r,i}^{(t)}} =\Theta(P^{-1}\sigma_{p}d^{1/2}n^{1/2}\|\boldsymbol{\mu}\|_{2}^{ -1}\mathrm{SNR}^{2})\] \[=\Theta(P^{-1}\sigma_{p}^{-1}d^{-1/2}n^{1/2}\|\boldsymbol{\mu}\|_ {2})\] \[=O(1)\]based on the coefficient order \(\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{(t)}/\gamma_{j,r}^{(t)}=\Theta(\mathrm{SNR}^ {-2})\), the definition \(\mathrm{SNR}=\|\mathbf{\mu}\|_{2}/(\sigma_{p}\sqrt{d})\), and the condition for \(d\) in Condition 3.1; and also \(\|\mathbf{w}_{j,r}^{(0)}\|_{2}=\Theta(\sigma_{0}\sqrt{d})\) based on Lemma D.7. With this and (73), we analyze the key component in (80):
\[\frac{\sum_{r}\sigma(\langle\mathbf{w}_{\widehat{y},r}^{(t)}, \widehat{y}\mathbf{\mu}\rangle)}{(P-1)\sigma_{p}\sum_{r=1}^{m}\left\|\mathbf{w}_{- \widehat{y},r}^{(t)}\right\|_{2}} \geq\frac{\Theta(1)}{\Theta(\sigma_{0}\sqrt{d})+\Theta(P^{-1} \sigma_{p}^{-1}d^{-1/2}n^{-1/2})\cdot\sum_{i=1}^{n}\overline{\rho}_{j,r,i}^{( t)}}\] \[\geq\frac{\Theta(1)}{\Theta(\sigma_{0}\sqrt{d})+O(P^{-1}\sigma_{p }^{-1}d^{-1/2}n^{1/2}\alpha)}\] \[\geq\min\{\Omega(\sigma_{0}^{-1}d^{-1/2}),\Omega(P\sigma_{p}d^{1 /2}n^{-1/2}\alpha^{-1})\}\] \[\geq 1.\]
It directly follows that
\[\sum_{r}\sigma(\langle\mathbf{w}_{\widehat{y},r}^{(t)},\widehat{y}\mathbf{\mu} \rangle)-\frac{(P-1)\sigma_{p}}{\sqrt{2\pi}}\sum_{r=1}^{m}\|\mathbf{w}_{- \widehat{y},r}^{(t)}\|_{2}>0. \tag{79}\]
Now using the method in (75) with the results above, we plug (74) into (72) and then (71), to obtain
\[\mathbb{P}_{(\mathbf{x},\widehat{y},y)\sim\mathcal{D}}\Big{(} \widehat{y}f(\mathbf{W}^{(t)},\mathbf{x})\leq 0\Big{)}\] \[=\mathbb{P}_{(\mathbf{x},\widehat{y},y)\sim\mathcal{D}}\Bigg{(}g( \mathbf{\xi})-\mathbb{E}g(\mathbf{\xi})\geq(1/(P-1))\sum_{r}\sigma(\langle\mathbf{w}_{ \widehat{y},r}^{(t)},\widehat{y}\mathbf{\mu}\rangle)-\frac{\sigma_{p}}{\sqrt{2\pi }}\sum_{r=1}^{m}\|\mathbf{w}_{-\widehat{y},r}^{(t)}\|_{2}\Bigg{)}\] \[\leq\exp\Bigg{[}-\frac{c\Big{(}(1/(P-1))\sum_{r}\sigma(\langle \mathbf{w}_{\widehat{y},r}^{(t)},\widehat{y}\mathbf{\mu}\rangle)-(\sigma_{p}/ \sqrt{2\pi})\sum_{r=1}^{m}\left\|\mathbf{w}_{-\widehat{y},r}^{(t)}\right\|_{2 }\Big{)}^{2}}{\sigma_{p}^{2}\Big{(}\sum_{r=1}^{m}\left\|\mathbf{w}_{-\widehat{ y},r}^{(t)}\right\|_{2}\Big{)}^{2}}\Bigg{]}\] \[=\exp\bigg{[}-c\bigg{(}\frac{\sum_{r}\sigma(\langle\mathbf{w}_{ \widehat{y},r}^{(t)},\widehat{y}\mathbf{\mu}\rangle)}{(P-1)\sigma_{p}\sum_{r=1}^{ m}\left\|\mathbf{w}_{-\widehat{y},r}^{(t)}\right\|_{2}}-1/\sqrt{2\pi}\bigg{)}^{2} \bigg{]}\] \[\leq\exp(c/2\pi)\exp\bigg{(}-0.5c\bigg{(}\frac{\sum_{r}\sigma( \langle\mathbf{w}_{\widehat{y},r}^{(t)},\widehat{y}\mathbf{\mu}\rangle)}{(P-1) \sigma_{p}\sum_{r=1}^{m}\left\|\mathbf{w}_{-\widehat{y},r}^{(t)}\right\|_{2} }\bigg{)}^{2}\bigg{)}, \tag{80}\]
where the second inequality is by (79) and plugging (76) into (75), the third inequality is due to the fact that \((s-t)^{2}\geq s^{2}/2-t^{2},\forall s,t\geq 0\).
And we can get from (78) and (80) that
\[\mathbb{P}_{(\mathbf{x},\widehat{y},y)\sim\mathcal{D}}\big{(} \widehat{y}f(\mathbf{W}^{(t)},\mathbf{x})\leq 0\big{)} \leq\exp(c/2\pi)\exp\bigg{(}-0.5c\bigg{(}\frac{\sum_{r}\sigma( \langle\mathbf{w}_{\widehat{y},r}^{(t)},\widehat{y}\mathbf{\mu}\rangle)}{(P-1) \sigma_{p}\sum_{r=1}^{m}\left\|\mathbf{w}_{-\widehat{y},r}^{(t)}\right\|_{2}} \bigg{)}^{2}\bigg{)}\] \[\leq\exp\Big{(}\frac{c}{2\pi}-C\min\{\sigma_{0}^{-2}d^{-1},P \sigma_{p}^{2}dn^{-1}\alpha^{-2}\}\Big{)}\] \[\leq\exp\Big{(}-0.5C\min\{\sigma_{0}^{-2}d^{-1},P\sigma_{p}^{2}dn ^{-1}\alpha^{-2}\}\Big{)}\] \[\leq\epsilon,\]
where \(C=O(1)\), the last inequality holds since \(\sigma_{0}^{2}\leq 0.5Cd^{-1}\log(1/\epsilon)\) and \(d\geq 2C^{-1}P^{-1}\sigma_{p}^{-2}n\alpha^{2}\log(1/\epsilon)\). | ## Review
### Summary
This paper presents a theoretical examination of Sharpness-Aware Minimization (SAM) in the context of two-layer convolutional ReLU networks, contrasting its effectiveness against Stochastic Gradient Descent (SGD). It provides conditions under which benign overfitting occurs with both methods, demonstrating that SAM can achieve better generalization, particularly in scenarios where SGD leads to harmful overfitting. The authors substantiate their claims with numerical experiments, revealing SAM's advantages in mitigating noise learning and enhancing the learning of weak features. Though the contributions are significant and well-structured, the presentation requires improvement to enhance clarity and definition consistency.
### Strengths
- The paper addresses the significant issue of overfitting in large neural networks, making it timely and relevant.
- It provides a strong theoretical analysis explaining why SAM outperforms SGD, enhancing understanding of generalization in neural networks.
- The authors conduct a comprehensive study comparing SAM and SGD using both synthetic and real data, adding validity to their findings.
- The work claims to present the first benign overfitting results for mini-batch SGD, which could signify a novel contribution to the field.
- The paper is well-structured and clearly written, providing a good overview of background literature and detailed contributions.
### Weaknesses
- The presentation is disordered and lacks clear definitions for several notations, making the results difficult to understand.
- Certain key theoretical aspects, such as the harmful overfitting regime for SAM, are not sufficiently covered.
- The experiments may lack reliability due to potentially high learning rates, which could misrepresent the findings.
- The paper's focus on a specific architecture limits the generalizability of its findings to other neural network types.
- The related work section is more of an enumeration rather than a discussion of technical differences with prior works.
### Questions
- Why are equations in Line 175 and 176 presented as definitions when they seem to stem from the data distribution and network architecture?
- Could the authors clarify why the learning rate was set to 0.1 in the experiments, given concerns over its potential to cause overfitting?
- Is the theoretical result for SGD also applicable to Gradient Flow dynamics?
- Will switching from SAM to SGD after a few epochs yield similar performance to SAM?
- How does the perturbation radius (τ) requirement in theory reconcile with practical applications?
- Could the authors elaborate on the connection between SAM and recent work regarding large learning rates and their implications for generalization?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical analysis is solid, but some parts lack completeness and robustness.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper is well-structured, clarity could be improved and some notations need definition.
### Contribution
**Score:** 4
**Description:** 4 = excellent; the paper makes significant contributions to understanding SAM's advantages over SGD, particularly in benign overfitting.
### Rating
**Score:** 7
**Description:** 7 = accept; the paper is technically solid with a high impact on the field, but requires minor improvements in presentation and evaluation.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents important theoretical contributions to understanding SAM's advantages over SGD in mitigating overfitting, supported by numerical experiments. While the results are significant, the presentation requires clarity and definition consistency. Given the solid theoretical grounding and the potential impact on the field, the decision is to accept the paper with suggestions for improving presentation and addressing the raised questions.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Learning via Wasserstein-Based High Probability Generalisation Bounds
Paul Viallard
Inria, CNRS, Ecole Normale Superieure, PSL Research University, Paris, France
[email protected]
&Maxime Haddouche
Inria, University College London and
Universite de Lille, France
[email protected]
&Umut Simsekli
Inria, CNRS, Ecole Normale Superieure
PSL Research University, Paris, France
[email protected]
The authors contributed equally to this work
Benjamin Guedj
Inria and University College London, France and UK
[email protected]
###### Abstract
Minimising upper bounds on the population risk or the generalisation gap has been widely used in structural risk minimisation (SRM) - this is in particular at the core of PAC-Bayesian learning. Despite its successes and unfailing surge of interest in recent years, a limitation of the PAC-Bayesian framework is that most bounds involve a Kullback-Leibler (KL) divergence term (or its variations), which might exhibit erratic behavior and fail to capture the underlying geometric structure of the learning problem - hence restricting its use in practical applications. As a remedy, recent studies have attempted to replace the KL divergence in the PAC-Bayesian bounds with the Wasserstein distance. Even though these bounds alleviated the aforementioned issues to a certain extent, they either hold in expectation, are for bounded losses, or are nontrivial to minimize in an SRM framework. In this work, we contribute to this line of research and prove novel Wasserstein distance-based PAC-Bayesian generalisation bounds for both batch learning with independent and identically distributed (_i.i.d._) data, and online learning with potentially non-_i.i.d._ data. Contrary to previous art, our bounds are stronger in the sense that _(i)_ they hold with high probability, _(ii)_ they apply to unbounded (potentially heavy-tailed) losses, and _(iii)_ they lead to optimizable training objectives that can be used in SRM. As a result we derive novel Wasserstein-based PAC-Bayesian learning algorithms and we illustrate their empirical advantage on a variety of experiments.
## 1 Introduction
Understanding generalisation is one of the main challenges in statistical learning theory, and even more so in modern machine learning applications. Typically, a _learning problem_ is described by a tuple \((\mathcal{H},\mathcal{Z},\ell)\) consisting of a hypothesis (or predictor) space \(\mathcal{H}\), a data space \(\mathcal{Z}\), and a loss function \(\ell:\mathcal{H}\times\mathcal{Z}\rightarrow\mathbb{R}\). The goal is to estimate the _population risk_ of a given hypothesis \(h\), defined as \(\mathsf{R}_{\mu}(h)=\mathbb{E}_{\mathbf{z}\sim\mu}[\ell(h,\mathbf{z})]\), where \(\mu\) denotes the unknown _data distribution_ over \(\mathcal{Z}\).
As \(\mu\) is not known, in practice, a hypothesis \(h\) is usually built by (approximately) minimising the _empirical risk_, given by \(\hat{\mathsf{R}}_{\mathcal{S}}(h)=\frac{1}{m}\sum_{i=1}^{m}\ell(h,\mathbf{z}_{ i})\), where \(\mathcal{S}=\{\mathbf{z}_{i}\in\mathcal{Z}\}_{i=1}^{m}\) is a dataset of \(m\) data points, independent and identically distributed (_i.i.d._) from \(\mu\). We define the generalisation gap of a hypothesis \(h\) as \(\hat{\mathsf{R}}_{\mathcal{S}}(h)-\mathsf{R}_{\mu}(h)\).
Developing upper bounds on the generalisation gap, _i.e._, _generalisation bounds_ has been a long-standing topic in statistical learning. While a plethora of techniques have been introduced, the PAC-Bayesian framework has gained significant traction over the past two decades to provide non-vacuous generalisation guarantees for complex structures such as neural networks during the training phase (see DR17, PORPH\({}^{+}\)21, PRSS21, among others). In these works, the bounds are also used to derive learning algorithms by minimising the right-hand side of a given bound. Beyond neural networks, the flexibility of PAC-Bayes learning makes it a useful toolbox to derive both theoretical results and practical algorithms in various learning fields such as reinforcement learning [15], online learning [16], multi-armed bandits [17, 18, 19], meta-learning [20, 21, 22, 13] to name but a few. The PAC-Bayesian bounds focus on a _randomised_ setting where the hypothesis is drawn from a _data-dependent_ distribution \(\rho\in\mathcal{M}(\mathcal{H})\), where \(\mathcal{M}(\mathcal{H})\) denotes the set of probability distributions defined on \(\mathcal{H}\). A classical PAC-Bayesian result is [20, Theorem 5] (the so-called McAllester bound), which states that, with probability at least \(1-\delta\), for any posterior distribution \(\rho\in\mathcal{M}(\mathcal{H})\),
\[\operatorname*{\mathbb{E}}_{h\sim\rho}\left[\operatorname*{\mathbb{R}}_{\mu}(h )-\hat{\operatorname*{\mathbb{R}}}_{\mathcal{S}}(h)\right]\leq\sqrt{\frac{ \operatorname*{KL}(\rho\|\pi)+\ln\frac{2\sqrt{m}}{\delta}}{2m}},\]
where \(\pi\in\mathcal{M}(\mathcal{H})\) is any data-free distribution and \(\operatorname*{KL}\) denotes the Kullback-Leibler divergence. In analogy with Bayesian statistics, \(\pi\) is often called the _prior_, and \(\rho\) is called the _posterior_ - we refer to [14] for a discussion on these terms.
While PAC-Bayesian bounds remain nowadays of the utmost interest to explain generalisation in various learning problems [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 111, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 140, 132, 133, 134, 135, 136, 137, 138, 139, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 199, 199, 198, 199, 199, 190, 191, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 199, 1999, 199, 1999, 199, 1999, 199, 1999, 199, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 19999, 19999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 1999, 19999, 1999, 19999, 19999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 19999, 1999, 19999, 19999, 1999, 1999, 1999, 1999, 19999, 1999, 1999, 1999, 19999, 1999, 19999, 1999, 19999, 1999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 1999, 19999, 1999, 19999, 19999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 1999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 19999, 199999, 11. Using the supermartingale toolbox introduced in [13, 14], we prove in Section 3.1, novel PAC-Bayesian bounds based on the Wasserstein distance for _i.i.d._ data. While [1] proposed a McAllester-like bound for bounded losses, we propose a Catoni-like bound (see _e.g._, [16, Theorem 4.1] valid for heavy-tailed losses with bounded order 2 moments. This assumption is less restrictive than assuming subgaussian or bounded losses, which are at the core of many PAC-Bayes results. This assumption also covers distributions beyond subgaussian or subexponential ones (_e.g._, gamma distributions with a scale smaller than 1, which have an infinite exponential moment).
2. We provide in Section 3.2 the first generalisation bounds based on Wasserstein distances for the online PAC-Bayes framework of [13]. Our results are, again, Catoni-like bounds and hold for heavy-tailed losses with bounded order 2 moments. Previous work [1] already provided online strategies mixing PAC-Bayes and Wasserstein distances. However, their contributions focus on the best deterministic strategy, regularised by a Wasserstein distance, with respect to the deterministic notion of regret. Our results differ significantly as we provide the best-regularised strategy (still in the sense of a Wasserstein term) with respect to the notion of generalisation, which is new.
3. As our bounds are linear with respect to Wasserstein terms (contrary to those of [1]), they are well suited for optimisation procedures. Thus, we propose the first PAC-Bayesian learning algorithms based on Wasserstein distances instead of KL divergences. For the first time, we design PAC-Bayes algorithms able to output deterministic predictors (instead of distributions over all \(\mathcal{H}\)) designed from deterministic priors. This is due to the ability of the Wasserstein distance to measure the discrepancy between Dirac distributions. We then instantiate those algorithms in Section 4 on various datasets, paving the way to promising practical developments of PAC-Bayes learning.
To sum up, we highlight two benefits of PAC-Bayes learning with Wasserstein distance. First, it ships with sound theoretical results exploiting the geometry of the predictor space, holding for heavy-tailed losses. Such a weak assumption on the loss extends the usefulness of PAC-Bayes with Wasserstein distances to a wide range of learning problems, encompassing bounded losses. Second, it allows us to consider deterministic algorithms (_i.e._, sampling from Dirac measures) designed with respect to the notion of generalisation: we showcase their performance in our experiments.
**Outline.** Section 2 describes our framework and background, Section 3 contains our new theoretical results and Section 4 gathers our experiments. Appendix A gathers supplementary discussion, Appendix B contains all proofs of our claims, and Appendix C provides insights into our practical results as well as additional experiments.
## 2 Our framework
**Framework.** We consider a Polish predictor space \(\mathcal{H}\) equipped with a distance \(d\) and a \(\sigma\)-algebra \(\Sigma_{\mathcal{H}}\), a data space \(\mathcal{Z}\), and a loss function \(\ell:\mathcal{H}\times\mathcal{Z}\to\mathbb{R}\). In this work, we consider Lipschitz functions with respect to \(d\). We also associate a filtration \((\mathcal{F}_{i})_{i\geq 1}\) adapted to our data \((\mathbf{z}_{i})_{i=1,\dots,m}\), and we assume that the dataset \(\mathcal{S}\) follows the distribution \(\mathcal{D}\). In PAC-Bayes learning, we construct a data-driven posterior distribution \(\rho\in\mathcal{M}(\mathcal{H})\) with respect to a prior distribution \(\pi\).
**Definitions.** For all \(i\), we denote by \(\mathbb{E}_{i}[\cdot]\) the conditional expectation \(\mathbb{E}[\,\cdot\mid\mathcal{F}_{i}]\). In this work, we consider data-dependent priors. A stochastic kernel is a mapping \(\pi:\cup_{m=1}^{\infty}\mathcal{Z}^{m}\times\Sigma_{\mathcal{H}}\to[0,1]\) where _(i)_ for any \(B\in\Sigma_{\mathcal{H}}\), the function \(\mathcal{S}\mapsto\pi(\mathcal{S},B)\) is measurable, _(ii)_ for any dataset \(\mathcal{S}\), the function \(B\mapsto\pi(\mathcal{S},B)\) is a probability measure over \(\mathcal{H}\).
In what follows, we consider two different learning paradigms: _batch learning_, where the dataset is directly available, and _online learning_, where data streams arrive sequentially.
**Batch setting.** We assume the dataset \(\mathcal{S}\) to be _i.i.d._, so there exists a distribution \(\mu\) over \(\mathcal{Z}\) such that \(\mathcal{D}=\mu^{m}\). We then define, for a given \(h\in\mathcal{H}\), the _risk_ to be \(\mathsf{R}_{\mu}:=\mathbb{E}_{\mathbf{\kappa}\sim\mu}[\ell(h,\mathbf{z})]\) and its empirical counterpart \(\hat{\mathsf{R}}_{\mathcal{S}}:=\frac{1}{m}\sum_{i=1}^{m}\ell(h,\mathbf{z}_{i})\). Our results aim to bound the _expected generalisation gap_ defined by \(\mathbb{E}_{h\sim\rho}[\mathsf{R}_{\mu}(h)-\hat{\mathsf{R}}_{\mathcal{S}}(h)]\). We assume that the dataset \(\mathcal{S}\) is split into \(K\) disjoint sets \(\mathcal{S}_{1},\dots,\mathcal{S}_{K}\). We consider \(K\) stochastic kernels \(\pi_{1},\dots,\pi_{K}\) such that for any \(\mathcal{S}\), the distribution \(\pi_{i}(\mathcal{S},\cdot)\)_does not_ depend on \(\mathcal{S}_{i}\).
**Online setting.** We adapt the online PAC-Bayes framework of [13]. We assume that we have access to a stream of data \(\mathcal{S}=(\mathbf{z}_{i})_{i=1,\dots,m}\), arriving sequentially, with no assumption on \(\mathcal{D}\). In online PAC-Bayes, the goal is to define a posterior sequence \((\rho_{i})_{i\geq 1}\) from a prior sequence \((\pi_{i})_{i\geq 1}\), which can be data-dependent. We define an _online predictive sequence_\((\pi_{i})_{i=1\dots m}\) satisfying: _(i)_ for all \(i\) and dataset \(\mathcal{S}\), the distribution \(\pi_{i}(S,.)\) is \(\mathcal{F}_{i-1}\) measurable and _(ii)_ there exists \(\pi_{0}\) such that for all \(i\geq 1\), we have \(\pi_{i}(S,.)\gg\pi_{0}\). This last condition covers, in particular, the case where \(\mathcal{H}\) is an Euclidean space and for any \(i\), the distribution \(\pi_{i,\mathcal{S}}\) is a Dirac mass. All of those measures are uniformly continuous with respect to any Gaussian distribution.
**Wasserstein distance.** In this paper, we focus on the Wasserstein distance of order 1 (_a.k.a._, Earth Mover's distance) introduced by [14] in the optimal transport literature. Given a distance \(d:\mathcal{A}\times\mathcal{A}\to\mathbb{R}\) and a Polish space \((\mathcal{A},d)\), for any probability measures \(\alpha\) and \(\beta\) on \(\mathcal{A}\), the Wasserstein distance is defined by
\[\mathrm{W}(\alpha,\beta):=\inf_{\gamma\in\Gamma(\alpha,\beta)}\left\{\mathop{ \mathbb{E}}_{(a,b)\sim\gamma}d(a,b)\right\}, \tag{1}\]
where \(\Gamma(\alpha,\beta)\) is the set of joint probability measures \(\gamma\in\mathcal{M}(\mathcal{A}^{2})\) such that the marginals are \(\alpha\) and \(\beta\). The Wasserstein distance aims to find the probability measure \(\gamma\in\mathcal{M}(\mathcal{A}^{2})\) minimising the expected cost \(\mathop{\mathbb{E}}_{(a,b)\sim\gamma}d(a,b)\). We refer the reader to [20, 13] for an introduction to optimal transport.
## 3 Wasserstein-based PAC-Bayesian generalisation bounds
We present novel high-probability PAC-Bayesian bounds involving Wasserstein distances instead of the classical Kullback-Leibler divergence. Our bounds hold for heavy-tailed losses (instead of classical subgaussian and subexponential assumptions), extending the remits of [1, Theorem 11]. We exploit the supermartingale toolbox, recently introduced in PAC-Bayes framework by [13, 14], to derive bounds for both batch learning (Theorems 1 and 2) and online learning (Theorems 3 and 4).
### PAC-Bayes for batch learning with _i.i.d._ data
In this section, we use the batch setting described in Section 2. We state our first result, holding for heavy-tailed losses admitting order 2 moments. Such an assumption is in line, for instance, with reinforcement learning with heavy-tailed reward (see, _e.g._, [1, 1, 10]).
**Theorem 1**.: _We assume the loss \(\ell\) to be \(L\)-Lipschitz. Then, for any \(\delta\in(0,1]\), for any sequence of positive scalar \((\lambda_{i})_{i\in\{1,\dots,K\}}\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), the following holds for the distributions \(\pi_{i,\mathcal{S}}:=\pi_{i}(\mathcal{S},.)\) and for any \(\rho\in\mathcal{M}(\mathcal{H})\):_
\[\mathop{\mathbb{E}}_{h\sim\rho}\Big{[}\mathcal{R}_{\mu}(h)-\hat{ \mathcal{R}}_{\mathcal{S}}(h)\Big{]}\\ \leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho, \pi_{i,\mathcal{S}})+\frac{1}{m}\sum_{i=1}^{K}\frac{\ln\left(\frac{K}{\delta} \right)}{\lambda_{i}}+\frac{\lambda_{i}}{2}\left(\mathop{\mathbb{E}}_{h\sim \pi_{i,\mathcal{S}}}\Big{[}\hat{V}_{|\mathcal{S}_{i}|}(h)+V_{|\mathcal{S}_{i} |}(h)\Big{]}\right),\]
_where \(\pi_{i,\mathcal{S}}\) does not depend on \(\mathcal{S}_{i}\). Also, for any \(i,|S_{i}|\), we have \(\hat{V}_{|\mathcal{S}_{i}|}(h)=\sum_{\mathbf{z}\in\mathcal{S}_{i}}\left(\ell(h,\mathbf{z})-R_{\mu}(h)\right)^{2}\) and \(V_{|\mathcal{S}_{i}|}(h)=\mathop{\mathbb{E}}_{\mathcal{S}_{i}}\Big{[}\hat{V}_{ |\mathcal{S}_{i}|}(h)\Big{]}\)._
The proof is deferred to Appendix B.1. While Theorem 1 holds for losses taking values in \(\mathbb{R}\), many learning problems rely in practice on more constrained losses. This loss can be bounded as in the case of, _e.g._, supervised learning or the multi-armed bandit problem [15], or simply non-negative as in regression problems involving the quadratic loss (studied, for instance, in [13, 14]). Using again the supermartingale toolbox, we prove in Theorem 2 a tighter bound holding for heavy-tailed non-negative losses.
**Theorem 2**.: _We assume our loss \(\ell\) to be non-negative and \(L\)-Lipschitz. We also assume that, for any \(1\leq i\leq K\), for any dataset\(\mathcal{S}\), we have \(\mathop{\mathbb{E}}_{h\sim\pi_{i}(.,\mathcal{S}),z\sim\mu}\left[\ell(h,z)^{2} \right]\leq 1\) (bounded order 2 moments forpriors). Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), the following holds for the distributions \(\pi_{i,\mathcal{S}}:=\pi_{i}(\mathcal{S},.)\) and for any \(\rho\in\mathcal{M}(\mathcal{H})\):_
\[\operatorname*{\mathbb{E}}_{h\sim\rho}\left[\mathcal{R}_{\mu}(h)-\hat{\mathcal{ R}}_{\mathcal{S}}(h)\right]\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}( \rho,\pi_{i,\mathcal{S}})+\sum_{i=1}^{K}\sqrt{\frac{2|\mathcal{S}_{i}|\ln\frac {K}{\delta}}{m^{2}}},\]
_where \(\pi_{i,\mathcal{S}}\) does not depend on \(\mathcal{S}_{i}\)._
Note that when the loss function takes values in \([0,1]\), an alternative strategy allows tightening the last term of the bound by a factor \(\nicefrac{{1}}{{2}}\). This result is rigorously stated in Theorem 6 of Appendix B.3.
**High-level ideas of the proofs.** Theorems 1 and 2 are structured around two tools. First, we exploit the Kantorovich-Rubinstein duality [22, Remark 6.5] to replace the change of measure inequality [14, 23]; this allows us to consider a Wasserstein distance instead of a KL term. Then, we exploit the supermartingales used in [11, 1] alongside Ville's inequality (instead of Markov's one) to obtain a high probability bound holding for heavy-tailed losses. Combining those techniques provides our PAC-Bayesian bounds.
**Analysis of our bounds.** Our results hold for Lipschitz losses and allow us to consider heavy-tailed losses with bounded order 2 moments. While such an assumption on the loss is more restrictive than in classical PAC-Bayes, allowing heavy-tailed losses is strictly less restrictive. While Theorem 1 is our most general statement, Theorem 2 allows recovering a tighter result (without empirical variance terms) for non-negative heavy-tailed losses. An important point is that the variance terms are considered with respect to the prior distributions \(\pi_{i,\mathcal{S}}\) and not \(\rho\) as in [11, 1]. This is crucial as these papers rely on the implicit assumption of order 2 moments, holding uniformly for all \(\rho\in\mathcal{M}(\mathcal{H})\), while we only require this assumption for the prior distributions \((\pi_{i,\mathcal{S}})_{i=1,\ldots,K}\). Such an assumption is in line with the PAC-Bayesian literature, which often relies on bounding an averaged quantity with respect to the prior. This strength is a consequence of the Kantorovich-Rubinstein duality. To illustrate this, consider _i.i.d._ data with distribution \(\mu\) admitting a finite variance bounded by \(V\) and the loss \(\ell(h,z)=|h-z|\) where both \(h\) and \(z\) lie in the real axis. Notice that in this particular case, we can imagine that \(z\) is a data point and \(h\) is a hypothesis outputting the same scalar for all data. To satisfy the assumption of Theorem 2, it is enough, by Cauchy Schwarz, to satisfy \(\operatorname*{\mathbb{E}}_{h\sim\pi_{i,\mathcal{S}},z\sim\mathcal{S}}[\ell(h, z)^{2}]\leq\operatorname*{\mathbb{E}}[h^{2}]+2V\operatorname*{\mathbb{E}}[|h|]+V^{2} \leq 1\) for all \(\pi_{i,\mathcal{S}}\). On the contrary, [11, 1] would require this condition to hold for all \(\rho\), which is more restrictive. Finally, an important point is that our bound allows us to consider Dirac distributions with disjoint support as priors and posteriors. On the contrary, KL divergence forces us to consider a non-Dirac prior for our bound to be non-vacuous. This allows us to retrieve a uniform-convergence bound described in Corollary 7.
**Role of data-dependent priors.** Theorems 1 and 2 allow the use of prior distributions depending possibly on a fraction of data. Such a dependency is crucial to control our sum of Wasserstein terms as we do not have an explicit convergence rate. For instance, for a fixed \(K\), consider a compact predictor space \(\mathcal{H}\), a bounded loss and the _Gibbs posterior_ defined as \(d\rho(h)\propto\exp\left(-\lambda\hat{\mathcal{R}}_{\mathcal{S}}(h)\right)dh\) where \(\lambda>0\). Also define for any \(i\) and \(\mathcal{S}\), the distribution \(d\pi_{i,\mathcal{S}}(h)\propto\exp\left(-\lambda\mathcal{R}_{\mathcal{S}/ \mathcal{S}_{i}}(h)\right)dh\). Then, by the law of large numbers, when \(m\) goes to infinity, for any \(h\), both \(\mathcal{R}_{\mathcal{S}}(h)\) and \((\mathcal{R}_{\mathcal{S}/\mathcal{S}_{i}}(h))_{i=1,\ldots,m}\) converge to \(\operatorname*{\mathbb{R}}_{\mu}(h)\). This ensures, alongside with the dominated convergence theorem, that for any \(i\), the Wasserstein distance \(\mathrm{W}(\rho,\pi_{i,\mathcal{S}})\) goes to zero as \(m\) goes to infinity.
**Comparison with the literature.**[1, Theorem 11] establishes a PAC-Bayes bound with Wasserstein distance valid for bounded losses being Lipschitz with high probability. While we circumvent the first assumption, the second one is less restrictive than actual Lipschitzness and can also be used in our setting. Also [1, Theorem 12] proposes an explicit convergence for finite predictor classes. We show in Appendix A that we are also able to recover such a convergence.
**Towards new PAC-Bayesian algorithms.** From Theorem 2, we derive a new PAC-Bayesian algorithm for Lipschitz non-negative losses:
\[\operatorname*{argmin}_{\rho\in\mathcal{M}(\mathcal{H})}\operatorname*{ \mathbb{E}}_{h\sim\rho}\left[\hat{\mathcal{R}}_{\mathcal{S}}(h)\right]+\sum_{ i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho,\pi_{i,\mathcal{S}}). \tag{2}\]
Equation (2) uses Wasserstein distances as regularisers and allows the use of multiple priors. We compare ourselves to the classical PAC-Bayes algorithm derived from [1, Theorem 1.2.6] (whichleads to Gibbs posteriors):
\[\operatorname*{argmin}_{\rho\in\mathcal{M}(\mathcal{H})}\operatorname*{\mathbb{E}}_ {h\sim\rho}\big{[}\hat{\mathsf{R}}_{\mathcal{S}}(h)\big{]}+\frac{\operatorname {KL}(\rho,\pi)}{\lambda}. \tag{3}\]
Considering a Wasserstein distance in Equation (2) makes our algorithm more flexible than in Equation (3), the KL divergence implies absolute continuity _w.r.t._ the prior \(\pi\). Such an assumption is not required to use Equation (2) and covers the case of prior Dirac distributions. Finally, Equation (2) relies on a fixed value \(K\) whose value is discussed below.
**Role of \(K\).** We study the cases \(K=1\), \(\sqrt{m}\), and \(m\) in Theorem 2. We refer to Appendix A for a detailed treatment. First of all, when \(K=1\), we recover a classical batch learning setting where all data are collected at once. In this case, we have a single Wasserstein with no convergence rate coupled with a statistical srsatz of \(\sqrt{\nicefrac{{\ln(1/\delta)}}{{m}}}\). However, similarly to [1, Theorem 12], in the case of a finite predictor class, we are able to recover an explicit convergence rate. The case \(K=\sqrt{m}\) provides a tradeoff between the number of points required to have good data-dependent priors (which may lead to a small \(\sum_{i=1}^{\sqrt{m}}\operatorname{W}(\rho,\pi_{i})\)) and the number of sets required to have an explicit convergence rate. Finally, the case \(K=m\) leads to a vacuous bound as we have the incompressible term \(\sqrt{\ln\nicefrac{{(m/\delta)}}{{\delta}}}\), which makes the bound vacuous for large values of \(m\). This means that the batch setting is not fitted to deal with a data stream arriving sequentially. To mitigate that weakness, we propose in Section 3.2 the first online PAC-Bayes bounds with Wasserstein distances.
### Wasserstein-based generalisation bounds for online learning
Here, we use the online setting described in Section 2 and derive the first online PAC-Bayes bounds involving Wasserstein distances in Theorems 3 and 4. Online PAC-Bayes bounds are meant to derive online counterparts of classical PAC-Bayesian algorithms [1], where the KL-divergence acts as a regulariser. We show in Theorems 3 and 4 that it is possible to consider online PAC-Bayesian algorithms where the regulariser is a Wasserstein distance, which allows us to optimise on measure spaces without a restriction of absolute continuity.
**Theorem 3**.: _We assume our loss \(\ell\) to be \(L\)-Lipschitz. Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), the following holds for the distributions \(\pi_{i,\mathcal{S}}:=\pi_{i}(\mathcal{S},.)\) and for any sequence \((\rho_{i})_{i=1\cdots m}\in\mathcal{M}(\mathcal{H})^{m}\):_
\[\sum_{i=1}^{m}\operatorname*{\mathbb{E}}_{h_{i}\sim\rho_{i}}\Big{[} \operatorname*{\mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})\mid\mathcal{F}_{i-1}]- \ell(h_{i},\mathbf{z}_{i})\Big{]} \leq 2L\sum_{i=1}^{m}\operatorname{W}(\rho_{i},\pi_{i,\mathcal{S}})\] \[+\frac{1}{2}\sum_{i=1}^{m}\operatorname*{\mathbb{E}}_{h_{i}\sim \pi_{i,\mathcal{S}}}\Big{[}\hat{V}_{i}(h_{i},\mathbf{z}_{i})+V_{i}(h_{i})\Big{]} +\frac{\ln(1/\delta)}{\lambda},\]
_where for all \(i\), \(\hat{V}_{i}(h_{i},\mathbf{z}_{i})=(\ell(h_{i},\mathbf{z}_{i})-\operatorname*{ \mathbb{E}}_{i-1}[\ell(h_{i},\mathbf{z}_{i})])^{2}\) is the conditional empirical variance at time \(i\) and \(V_{i}(h_{i})=\operatorname*{\mathbb{E}}_{i-1}[\hat{V}(h_{i},\mathbf{z}_{i})]\) is the true conditional variance._
The proof is deferred to Appendix B.4. We also provide the following bound, being an online analogous of Theorem 2, valid for non-negative heavy-tailed losses.
**Theorem 4**.: _We assume our loss \(\ell\) to be non-negative and \(L\)-Lipschitz. We also assume that, for any \(i\), \(\mathcal{S}\), \(\operatorname*{\mathbb{E}}_{h\sim\pi_{i},\mathcal{S}}\big{[}\operatorname*{ \mathbb{E}}_{i-1}[\ell(h,\mathbf{z}_{i})^{2}]\big{]}\leq 1\) (bounded conditional order \(2\) moments for priors). Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), any online predictive sequence (used as priors) \((\pi_{i})_{i\geq 1}\), we have with probability at least \(1-\delta\) over the sample \(S\sim\mu\), the following, holding for the data-dependent measures \(\pi_{i,\mathcal{S}}:=\pi_{i}(S,.)\) and any posterior sequence \((\rho_{i})_{i\geq 1}\):_
\[\frac{1}{m}\sum_{i=1}^{m}\operatorname*{\mathbb{E}}_{h_{i}\sim \rho_{i}}\Big{[}\operatorname*{\mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})\mid \mathcal{F}_{i-1}]-\ell(h_{i},\mathbf{z}_{i})\Big{]}\leq\frac{2L}{m}\sum_{i=1} ^{m}\operatorname{W}(\rho_{i},\pi_{i,\mathcal{S}})+\sqrt{\frac{2\ln\big{(} \frac{1}{\delta}\big{)}}{m}}.\]
The proof is deferred to Appendix B.5.
**Analysis of our bounds.** Theorems 3 and 4 are, to our knowledge, the first results involving Wasserstein distances for online PAC-Bayes learning. They are the online counterpart of Theorems 1 and 2, and the discussion of Section 3.1 about the involved assumptions also apply here. The sum of Wasserstein distances involved here is a consequence of the online setting and must grow sublinearly for the bound to be tight. For instance, when \((\rho_{i}=\delta_{h_{i}})_{i\geq 1}\) is the output of an online algorithm outputting Dirac measures and \(\pi_{i,\mathcal{S}}=\rho_{i-1}\), the sum of Wasserstein is exactly \(\sum_{i=1}^{m}d(h_{i},h_{i-1})\). This sum has to be sublinear for the bound to be non-vacuous, and the tightness depends on the considered learning problem. An analogous of this sum can be found in dynamic online learning (Zin2013) where similar sums appear as _path lengths_ to evaluate the complexity of the problem.
**Comparison with literature.** We compare our results to existing PAC-Bayes bounds for martingales of [SLCB\({}^{+}\)12]. [SLCB\({}^{+}\)12, Theorem 4] is a PAC-Bayes bound for martingales, which controls an average of martingales, similar to our Theorem 1. Under a boundedness assumption, they recover a McAllester-typed bound, while Theorem 1 is more of a Catoni-typed result. Also, [SLCB\({}^{+}\)12, Theorem 7] is a Catoni-typed bound involving a conditional variance, similar to our Theorem 4. They require to bound uniformly the variance on all the predictor sets, while we only assume averaged variance with respect to priors, which is what we required to perform Theorem 4.
**A new online algorithm.**[HG22] derived from their main theorem, an online counterpart of Equation (3), proving it comes with guarantees. Similarly, we exploit Theorem 4 to derive the online counterpart of Equation (2), from the data-free initialisation \(\rho_{1}\)
\[\forall i\geq 1,\ \ \rho_{i}\in\operatorname*{argmin}_{\rho\in\mathcal{M}( \mathcal{H})}\operatorname*{\mathbb{E}}_{h\sim\rho}[\ell(h_{i},\mathbf{z}_{i}) ]+2L\mathrm{W}(\rho,\pi_{i,\mathcal{S}}). \tag{4}\]
We highlight the merits of the algorithm defined by Equation (4), alongside with the one from Equation (2), in Section 4.
## 4 Learning via Wasserstein regularisation
Theorems 2 and 4 are designed to be informative on the generalisation ability of a single hypothesis even when Dirac distributions are considered. In particular, our results involve Wasserstein distances acting as regularisers on \(\mathcal{H}\). In this section, we show that a Wasserstein regularisation of the learning objective, which comes from our theoretical bounds, helps to better generalise in practice. Inspired by Equations (2) and (4), we derive new PAC-Bayesian algorithms for both batch and online learning involving a Wasserstein distance (see Section 4.1), we describe our experimental framework in Section 4.2 and we present some of the results in Section 4.3. Additional details, experiments, and discussions are gathered in Appendix C due to space constraints. All the experiments are reproducible with the source code provided on GitHub at [https://github.com/paulviallard/NeurIPS23-PB-Wasserstein](https://github.com/paulviallard/NeurIPS23-PB-Wasserstein).
### Learning algorithms
**Classification.** In the classification setting, we assume that the data space \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) is composed of a \(d\)-dimensional _input space_\(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{d}\mid\|\mathbf{x}\|_{2}\leq 1\}\) and a finite _label space_\(\mathcal{Y}=\{1,\ldots,|\mathcal{Y}|\}\) with \(|\mathcal{Y}|\) labels. We aim to learn models \(h_{\mathbf{w}}:\mathbb{R}^{d}\to\mathbb{R}^{|\mathcal{Y}|}\) parameterised by a weight vector \(\mathbf{w}\) that outputs, given an input \(\mathbf{x}\in\mathcal{X}\), a score \(h_{\mathbf{w}}(\mathbf{x})[y^{\prime}]\in\mathbb{R}\) for each label \(y^{\prime}\). This score allows us to assign a label to \(\mathbf{x}\in\mathcal{X}\); to check if \(h_{\mathbf{w}}\) classifies correctly the example \((\mathbf{x},y)\), we use the _classification loss_ defined by \(\ell^{c}(h_{\mathbf{w}},(\mathbf{x},y)):=\mathds{1}\left[h_{\mathbf{w}}( \mathbf{x})[y]-\max_{y^{\prime}\neq y}h_{\mathbf{w}}(\mathbf{x})[y^{\prime}] \leq 0\right]\), where \(\mathds{1}\) denotes the indicator function.
**Batch algorithm.** In the batch setting, we aim to learn a parametrised hypothesis \(h_{\mathbf{w}}\in\mathcal{H}\) that minimises the population classification risk \(\mathfrak{R}_{\mu}(h_{\mathbf{w}})=\operatorname*{\mathbb{E}}_{(\mathbf{x},y) \sim\mu}\ell^{c}(h_{\mathbf{w}},(\mathbf{x},y))\) that we can only estimate through the empirical classification risk \(\mathfrak{R}_{\mathcal{S}}(h_{\mathbf{w}})=\frac{1}{m}\sum_{i=1}^{m}\ell^{c}( h_{\mathbf{w}},(\mathbf{x}_{i},y_{i}))\). To learn the hypothesis, we start from Equation (2), when the distributions \(\rho\) and \(\pi_{1},\ldots,\pi_{K}\) are Dirac masses, localised at \(h_{\mathbf{w}},h_{\mathbf{w}_{1}},\ldots h_{\mathbf{w}_{K}}\in\mathcal{H}\) respectively. Indeed, in this case, \(\mathrm{W}(\rho,\pi_{i,\mathcal{S}})=d(h_{\mathbf{w}},h_{\mathbf{w}_{i}})\) for any \(i\). However, the loss \(\ell^{c}(.,\mathbf{z})\) is not Lipschitz and the derivatives are zero for all examples \(\mathbf{z}\in\mathcal{X}\times\mathcal{Y}\), which prevents its use in practice to obtain such a hypothesis \(h_{\mathbf{w}}\). Instead, for the population risk \(\mathds{R}_{\mu}(h)\) and the empirical risk \(\mathds{R}_{\mathcal{S}}(h)\) (in Theorem 2 and Equation (2)), we consider the loss \(\ell(h,(\mathbf{x},y))=\frac{1}{|\mathcal{Y}|}\sum_{y^{\prime}\neq y}\max(0,1 -\eta(h[y]-h[y^{\prime}]))\), which is \(\eta\)-Lipschitz _w.r.t._ the outputs \(h[1],\ldots,h[|\mathcal{Y}|]\). This loss has subgradients everywhere, which is convenient in practice. We go a step further by _(a)_ setting \(L=\frac{1}{2}\) and _(b)_ adding a parameter \(\varepsilon>0\) to obtain the objective
\[\operatorname*{argmin}_{h_{\mathbf{w}}\in\mathcal{H}}\left\{\hat{\mathbf{R}}_{ \mathcal{S}}(h_{\mathbf{w}})+\varepsilon\left[\sum_{i=1}^{K}\frac{|\mathcal{S} _{i}|}{m}d\left(h_{\mathbf{w}},h_{\mathbf{w}_{i}}\right)\right]\right\}. \tag{5}\]
To (approximately) solve Equation (5), we propose a two-step algorithm. First, Priors Learning learns \(K\) hypotheses \(h_{\mathbf{w}_{1}},\ldots,h_{\mathbf{w}_{K}}\in\mathcal{H}\) by minimising the empirical risk via stochastic gradient descent. Second, Posterior Learning learns the hypothesis \(h_{\mathbf{w}}\in\mathcal{H}\) by minimising the objective associated with Equation (5). More precisely, Priors Learning outputs the hypotheses \(h_{\mathbf{w}_{1}},\cdots,h_{\mathbf{w}_{K}}\), obtained by minimising the empirical risk through mini-batches. Those batches are designed such that for any \(i\), the hypothesis \(h_{\mathbf{w}_{i}}\) does not depend on \(\mathcal{S}_{i}\). Then, given \(h_{\mathbf{w}_{1}},\ldots,h_{\mathbf{w}_{K}}\in\mathcal{H}\), Posterior Learning minimises the objective in Equation (5) with mini-batches. Those algorithms are presented in Algorithm 1 of Appendix C. While \(\varepsilon\) is not suggested by Equation (2), it helps to control the impact of the regularisation in practice. Equation (5) then optimises a tradeoff between the empirical risk and the regularisation term \(\varepsilon\sum_{i=1}^{K}\frac{|\mathcal{S}_{i}|}{m}d(h_{\mathbf{w}},h_{ \mathbf{w}_{i}})\).
**Online algorithm.** Online algorithms output, at each time step \(i\in\{1,\ldots,m\}\), a new hypothesis \(h_{\mathbf{w}_{i}}\). From Equation (4), particularised to a sequence of Dirac distributions (localised in \(h_{\mathbf{w}_{1}},\cdots,h_{\mathbf{w}_{K}}\)), we design a novel online PAC-Bayesian algorithm with a Wasserstein regulariser:
\[\forall i\geq 1,\;\;h_{i}\in\operatorname*{argmin}_{h_{\mathbf{w}}\in\mathcal{H}} \ell(h_{\mathbf{w}},\mathbf{z}_{i})+d\left(h_{\mathbf{w}},h_{\mathbf{w}_{i-1}} \right)\;\;s.t.\;\;d\left(h_{\mathbf{w}},h_{\mathbf{w}_{i-1}}\right)\leq 1. \tag{6}\]
According to Theorem 4, such an algorithm aims to bound the _population cumulative classification loss_\(\mathfrak{C}_{\mu}=\sum_{i=1}^{m}\mathbb{E}[\ell^{c}(h_{\mathbf{w}_{i}}, \mathbf{z}_{i})\mid\mathcal{F}_{i-1}]\). Note that we added the constraint \(d\left(h_{\mathbf{w}},h_{\mathbf{w}_{i-1}}\right)\leq 1\) compared to Equation (4). This constraint ensures that the new hypothesis \(h_{\mathbf{w}_{i}}\) is not too far from \(h_{\mathbf{w}_{i-1}}\) (in the sense of the distance \(\|\cdot\|_{2}\)). Note that the constrained optimisation problem in Equation (6) can be rewritten in an unconstrained form (see [1]) thanks to a barrier \(B(\cdot)\) defined by \(B(a)=0\) if \(a\leq 0\) and \(B(a)=+\infty\) otherwise; we have
\[\forall i\geq 1,\;\;h_{i}\in\operatorname*{argmin}_{h_{\mathbf{w}}\in\mathcal{H}} \ell(h_{\mathbf{w}},\mathbf{z}_{i})+d\left(h_{\mathbf{w}},h_{\mathbf{w}_{i-1} }\right)+B(d\left(h_{\mathbf{w}},h_{\mathbf{w}_{i-1}}\right)-1). \tag{7}\]
When solving the problem in Equation (7) is not feasible, we approximate it with a log barrier of [1] (suitable in a stochastic gradient setting); given a parameter \(t>0\), the log barrier extension is defined by \(\hat{B}(a)=-\frac{1}{t}\ln(-a)\) if \(a\leq-\frac{1}{t^{2}}\) and \(\hat{B}(a)=ta-\frac{1}{t}\ln(\frac{1}{t^{2}})+\frac{1}{t}\) otherwise. We present in Appendix C Algorithm 2 that aims to (approximately) solve Equation (7). To do so, for each new example \((\mathbf{x}_{i},y_{i})\), the algorithm runs several gradient descent steps to optimise Equation (7).
### Experimental framework
In this part, we assimilate the predictor space \(\mathcal{H}\) to the parameter space \(\mathbb{R}^{d}\). Thus, the distance \(d\) is the Euclidean distance between two parameters: \(d\left(h_{\mathbf{w}},h_{\mathbf{w}^{\prime}}\right)=\|\mathbf{w}-\mathbf{w}^ {\prime}\|_{2}\). This implies that the Lipschitzness of \(\ell\) has to be taken _w.r.t._\(\mathbf{w}\) instead of \(h_{\mathbf{w}}\).
**Models.** We consider that the models are either linear or neural networks (NN). Linear models are defined by \(h_{\mathbf{w}}(\mathbf{x})=W\mathbf{x}+b\), where \(W\in\mathbb{R}^{|\mathcal{Y}|\times d}\) is the weight matrix, \(b\in\mathbb{R}^{|\mathcal{Y}|}\) is the bias, and \(\mathbf{w}=\operatorname{vec}(\{W,b\})\) its vectorisation; the vector \(\mathbf{w}\) with the zero vector. Thanks to the definition of \(\mathcal{X}\), we know from Lemma 8 (and the composition of Lipschitz functions) that the loss is \(\sqrt{2}\eta\)-Lipschitz _w.r.t._\(\mathbf{w}\). For neural networks, we consider fully connected ReLU neural networks with \(L\) hidden layers and \(D\) nodes, where the leaky ReLU activation function \(\operatorname{ReLU}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) applies elementwise \(x\mapsto\max(x,0.01x)\). More precisely, the network is defined by \(h_{\mathbf{w}}(\mathbf{x})=Wh^{L}(\cdots h^{1}(\mathbf{x}))+b\) where \(W\in\mathbb{R}^{|\mathcal{Y}|\times D}\), \(b\in\mathbb{R}^{|\mathcal{Y}|}\). Each layer \(h^{i}(\mathbf{x})=\operatorname{ReLU}(W_{i}\mathbf{x}+b_{i})\) has a weight matrix \(W_{i}\in\mathbb{R}^{D\times D}\) and bias \(b_{i}\in\mathbb{R}^{D}\) except for \(i=1\) where we have \(W_{1}\in\mathbb{R}^{D\times d}\). The weights \(\mathbf{w}\) are also the vectorisation \(\mathbf{w}=\operatorname{vec}(\{W,W_{L},\ldots,W_{1},b,b_{L},\ldots,b_{1}\})\). We have precised in Lemma 9 that our loss is Lipschitz _w.r.t._ the weights \(\mathbf{w}\). We initialise the network similarly to [1] by sampling the weights from a Gaussian distribution with zero mean and a standard deviation of \(\sigma=0.04\); the weights are further clipped between \(-2\sigma\) and \(+2\sigma\). Moreover, the values in the biases \(b_{1},\ldots,b_{L}\) are set to 0.1, while the values for \(b\) are set to 0. In the following, we consider \(D=600\) and \(L=2\); more experiments are considered in the appendix.
**Optimisation.** To perform the gradient steps, we use the COCOB-Backprop optimiser [1] (with parameter \(\alpha=10000\)).2 This optimiser is flexible as the learning rate is adaptative and, thus, does not require hyperparameter tuning. For Algorithm 1, which solves Equation (5), we fix a batch size of \(100\), _i.e._, \(|\mathcal{U}|=100\), and the number of epochs \(T\) and \(T^{\prime}\) are fixed to perform at least \(20000\) iterations. Regarding Algorithm 2, which solves Equation (7), we set \(t=100\) for the log barrier, which is enough to constrain the weights and the number of iterations to \(T=10\).
**Datasets.** We study the performance of Algorithms 1 and 2 on UCI datasets [1] along with MNIST [14] and FashionMNIST [15]. We also split all the data (from the original training/test set) in two halves; the first part of the data serves in the algorithm (and is considered as a training set), while the second part is used to approximate the population risks \(\mathfrak{R}_{\mu}(h)\) and \(\mathfrak{C}_{\mu}\) (and considered as a testing set).
### Results
We present in Table 1 the performance of Algorithms 1 and 2 compared to the Empirical Risk Minimisation (ERM) and the Online Gradient Descent (OGD) with the COCOB-Backprop optimiser. Tables 0(a) and 0(c) present the results of Algorithm 1 for the _i.i.d._ setting on linear and neural networks respectively, while Tables 0(b) and 0(d) present the results of Algorithm 2 for the online case.
**Analysis of the results.** In batch learning, we note that the regularisation term brings generalisation improvements compared to the empirical risk minimisation. Indeed, our batch algorithm (Algorithm 1) has a lower population risk \(\mathfrak{R}_{\mu}(h)\) on 11 datasets for the linear models and 9 datasets for the neural networks. In particular, notice that NNs obtained from Algorithm 1 are more efficient than the ones obtained from ERM on MNIST and FashionMNIST, which are the more challenging datasets. This suggests that the regularisation term helps to generalise well. For the online case, the performance of the linear models obtained from our algorithm (Algorithm 2) and by OGD are comparable: we have a tighter population classification risk \(\mathfrak{R}_{\mu}(h)\) on \(5\) datasets over \(13\). However, notice that the risk difference is less than \(0.05\) on \(6\) datasets. The advantage of Algorithm 2 is more pronounced for neural networks: we improve the performance in all datasets except adult and sensorless. Hence, this confirms that optimising the regularised loss \(\ell(h_{\mathbf{w}},\mathbf{z}_{i})+\|\mathbf{w}-\mathbf{w}_{i-1}\|\) brings a good advantage compared to the loss \(\ell(h_{\mathbf{w}},\mathbf{z}_{i})\) only. A possible explanation would be that OGD suffers from underfitting (with a high empirical risk \(\mathfrak{C}_{\mu}\)) while we are able to control overfitting through a regularisation term. Indeed, only one gradient descent step is done for each new datum \((\mathbf{x}_{i},y_{i})\), which might not be sufficient to decrease the loss. Instead, our method solves the problem associated with Equation (7) and constrains the descent with the norm \(\|\mathbf{w}-\mathbf{w}_{i-1}\|\).
## 5 Conclusion and Perspectives
We derived novel generalisation bounds based on the Wasserstein distance, both for batch and online learning, allowing for the use of deterministic hypotheses through PAC-Bayes. We derived new learning algorithms which are inspired by the bounds, with remarkable empirical performance on a number of datasets: we hope our work can pave the way to promising future developments (both theoretical and practical) of generalisation bounds based on the Wasserstein distance. Given the mostly theoretical nature of our work, we do not foresee an immediate negative societal impact, although we hope a better theoretical understanding of generalisation will ultimately benefit practitioners of machine learning algorithms and encourage virtuous initiatives.
## 6 Acknowledgements
We warmly thank the reviewers who provided insightful comments and suggestions which greatly helped us improve our manuscript. P.V. and U.S. are partially supported by the French government under the management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). U.S. is also partially supported by the European Research Council Starting Grant DYNASTY - 101039676. B.G. acknowledges partial support by the U.S. Army Research Laboratory and the U.S. Army Research Office, and by the U.K. Ministry of Defence and the U.K. Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/R013616/1. B.G. acknowledges partial support from the French National Agency for Research, grants ANR-18-CE40-0016-01 and ANR-18-CE23-0015-02.
## References
* [ACB17] Martin Arjovsky, Soumith Chintala, and Leon Bottou. Wasserstein Generative Adversarial Networks. In _International Conference on Machine Learning (ICML)_, 2017.
* [AEMM22] Ron Amit, Baruch Epstein, Shay Moran, and Ron Meir. Integral Probability Metrics PAC-Bayes Bounds. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022.
* [AG18] Pierre Alquier and Benjamin Guedj. Simpler PAC-Bayesian bounds for hostile data. _Machine Learning_, 107(5), 2018.
* [AM18] Ron Amit and Ron Meir. Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory. In _International Conference on Machine Learning (ICML)_, 2018.
\begin{table}
\end{table}
Table 1: Performance of Algorithms 1 and 2 compared respectively to ERM and OGD on different datasets on linear and neural network models. For the _i.i.d._ setting, we consider \(\varepsilon=1/_{m}\) and \(\varepsilon=1/\sqrt{m}\) and with \(K=0.2\sqrt{m}\). For each method, we plot the empirical risk \(\mathfrak{R}_{\mathcal{S}}(h)\) or \(\mathcal{E}_{\mathcal{S}}\) with its associated test risk \(\mathfrak{R}_{\mu}(h)\) or \(\mathfrak{e}_{\mu}\). The risk in **bold** corresponds to the lowest one among the ones considered. For the online case, the two population risks are underlined when the absolute difference is lower than 0.05.
* [ARC16] Pierre Alquier, James Ridgway, and Nicolas Chopin. On the properties of variational approximations of Gibbs posteriors. _Journal of Machine Learning Research_, 2016.
* [BG21] Felix Biggs and Benjamin Guedj. Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks. _Entropy_, 23(10), 2021.
* [BG22a] Felix Biggs and Benjamin Guedj. Non-Vacuous Generalisation Bounds for Shallow Neural Networks. In _International Conference on Machine Learning (ICML)_, 2022.
* [BG22b] Felix Biggs and Benjamin Guedj. On Margins and Derandomisation in PAC-Bayes. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2022.
* [BG23] Felix Biggs and Benjamin Guedj. Tighter PAC-Bayes Generalisation Bounds by Leveraging Example Difficulty. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2023.
* [BM01] Peter Bartlett and Shahar Mendelson. Rademacher and Gaussian Complexities: Risk Bounds and Structural Results. In _Conference on Computational Learning Theory (COLT)_, 2001.
* [BM02] Peter Bartlett and Shahar Mendelson. Rademacher and Gaussian Complexities: Risk Bounds and Structural Results. _Journal of Machine Learning Research_, 2002.
* [BT08] Bernard Bercu and Abderrahmen Touati. Exponential inequalities for self-normalized martingales with applications. _The Annals of Applied Probability_, 2008.
* [BV04] Stephen Boyd and Lieven Vandenberghe. _Convex optimization_. Cambridge University Press, 2004.
* [BZG22] Felix Biggs, Valentina Zantedeschi, and Benjamin Guedj. On Margins and Generalisation for Voting Classifiers. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022.
* [Cat07] Olivier Catoni. _PAC-Bayesian supervised classification: the thermodynamics of statistical learning_. Institute of Mathematical Statistics, 2007.
* [Cat16] Olivier Catoni. PAC-Bayesian bounds for the Gram matrix and least squares regression with a random design. _arXiv_, abs/1603.05229, 2016.
* [CDE\({}^{+}\)21] Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Simsekli, and Lingjiong Zhu. Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [CG17] Olivier Catoni and Ilaria Giulini. Dimension-free PAC-Bayesian bounds for matrices, vectors, and linear least squares regression. _arXiv_, abs/1712.02747, 2017.
* Bregman and Optimal Transport divergences. 2021.
* [CSDG22] Badr-Eddine Cherief-Abdellatif, Yuyang Shi, Arnaud Doucet, and Benjamin Guedj. On PAC-Bayesian reconstruction guarantees for VAEs. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2022.
* [Csi75] Imre Csiszar. \(I\)-Divergence Geometry of Probability Distributions and Minimization Problems. _The Annals of Probability_, 3(1), 1975.
* [CWR23] Ben Chugg, Hongjian Wang, and Aaditya Ramdas. A unified recipe for deriving (time-uniform) PAC-Bayes bounds. _arXiv_, abs/2302.03421, 2023.
* [DCL\({}^{+}\)21] Nan Ding, Xi Chen, Tomer Levinboim, Sebastian Goodman, and Radu Soricut. Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [DG17] Dheeru Dua and Casey Graff. UCI Machine Learning Repository, 2017.
* [DR17] Gintare Karolina Dziugaite and Daniel Roy. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. In _Conference on Uncertainty in Artificial Intelligence (UAI)_, 2017.
* [DV76] Monroe David Donsker and Srinivasa Varadhan. Asymptotic evaluation of certain Markov process expectations for large time--III. _Communications on Pure and Applied Mathematics_, 29(4), 1976.
* [FM21] Alec Farid and Anirudha Majumdar. Generalization Bounds for Meta-Learning via PAC-Bayes and Uniform Stability. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* [FP10] Mahdi Milani Fard and Joelle Pineau. PAC-Bayesian Model Selection for Reinforcement Learning. In _Advances in Neural Information Processing Systems (NIPS)_, 2010.
* [FRKP22] Hamish Flynn, David Reeb, Melih Kandemir, and Jan Peters. PAC-Bayesian lifelong learning for multi-armed bandits. _Data Mining and Knowledge Discovery_, 2022.
* [GBTS21] Borja Rodriguez Galvez, German Bassi, Ragnar Thobaben, and Mikael Skoglund. Tighter Expected Generalization Error Bounds via Wasserstein Distance. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
* European Conference (ECML-PKDD)_, 2017.
* [Gue19] Benjamin Guedj. A Primer on PAC-Bayesian Learning. _arXiv_, abs/1901.05353, 2019.
* [HG22] Maxime Haddouche and Benjamin Guedj. Online PAC-Bayes Learning. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022.
* [HG23a] Maxime Haddouche and Benjamin Guedj. PAC-Bayes Generalisation Bounds for Heavy-Tailed Losses through Supermartingales. _Transactions on Machine Learning Research_, 2023.
* [HG23b] Maxime Haddouche and Benjamin Guedj. Wasserstein PAC-Bayes Learning: A Bridge Between Generalisation and Optimisation. _arXiv_, abs/2304.07048, 2023.
* [HGRS21] Maxime Haddouche, Benjamin Guedj, Omar Rivasplata, and John Shawe-Taylor. PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses. _Entropy_, 23(10), 2021.
* [JJKO23] Kyoungseok Jang, Kwang-Sung Jun, Ilja Kuzborskij, and Francesco Orabona. Tighter PAC-Bayes Bounds Through Coin-Betting. In _Conference on Learning Theory (COLT)_, 2023.
* [Kan60] Leonid Vitalievitch Kantorovitch. Mathematical Methods of Organizing and Planning Production. _Management Science_, 1960.
* [KDY\({}^{+}\)22] Hoel Kervadec, Jose Dolz, Jing Yuan, Christian Desrosiers, Eric Granger, and Ismail Ben Ayed. Constrained deep networks: Lagrangian optimization via log-barrier extensions. In _European Signal Processing Conference (EUSIPCO)_, 2022.
* [KP00] Vladimir Koltchinskii and Dmitriy Panchenko. Rademacher processes and bounding the risk of function learning. In _High dimensional probability II_, 2000.
* [LeC98] Yann LeCun. The MNIST database of handwritten digits, 1998.
* [LFK\({}^{+}\)22] Sanae Lotfi, Marc Finzi, Sanyam Kapoor, Andres Potapczynski, Micah Goldblum, and Andrew Wilson. PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022.
* [LGGL19] Gael Letarte, Pascal Germain, Benjamin Guedj, and Francois Laviolette. Dichotomize and Generalize: PAC-Bayesian Binary Activated Deep Neural Networks. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* [LN22] Gabor Lugosi and Gergely Neu. Generalization Bounds via Convex Analysis. In _Conference on Learning Theory (COLT)_, 2022.
* [LWHZ19] Shiyin Lu, Guanghui Wang, Yao Hu, and Lijun Zhang. Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards. In _International Conference on Machine Learning (ICML)_, 2019.
* [LZ11] Keqin Liu and Qing Zhao. Multi-Armed Bandit Problems with Heavy Tail Reward Distributions. _Allerton Conference on Communication, Control, and Computing_, 2011.
* [Mau04] Andreas Maurer. A note on the PAC-Bayesian theorem. _arXiv_, cs/0411099, 2004.
* [MGG19] Zakaria Mhammedi, Peter Grunwald, and Benjamin Guedj. PAC-Bayes Un-Expected Bernstein Inequality. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2019.
* [MGW20] Zakaria Mhammedi, Benjamin Guedj, and Robert Williamson. PAC-Bayesian Bound for the Conditional Value at Risk. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2020.
* [Mon81] Gaspard Monge. Memoire sur la theorie des deblais et des remblais. _Histoire de l'Academie Royale des Sciences de Paris_, 1781.
* [Mu97] Alfred Muller. Integral Probability Metrics and Their Generating Classes of Functions. _Advances in Applied Probability_, 29(2), 1997.
* [NGG20] Kento Nozawa, Pascal Germain, and Benjamin Guedj. PAC-Bayesian Contrastive Unsupervised Representation Learning. In _Conference on Uncertainty in Artificial Intelligence (UAI)_, 2020.
* [OH21] Yuki Ohnishi and Jean Honorio. Novel Change of Measure Inequalities with Applications to PAC-Bayesian Bounds and Monte Carlo Estimation. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2021.
* [OT17] Francesco Orabona and Tatiana Tommasi. Training Deep Networks without Learning Rates Through Coin Betting. In _Advances in Neural Information Processing Systems (NIPS)_, 2017.
* [PC19] Gabriel Peyre and Marco Cuturi. Computational Optimal Transport. _Foundations and Trends in Machine Learning_, 11(5-6), 2019.
* [PORPH\({}^{+}\)21] Maria Perez-Ortiz, Omar Rivasplata, Emilio Parrado-Hernandez, Benjamin Guedj, and John Shawe-Taylor. Progress in Self-Certified Neural Networks. In _NeurIPS 2021 Workshop on Bayesian Deep Learning_, 2021.
* [PRSS21] Maria Perez-Ortiz, Omar Rivasplata, John Shawe-Taylor, and Csaba Szepesvari. Tighter risk certificates for neural networks. _Journal of Machine Learning Research_, 22, 2021.
* [PWG22] Antoine Picard-Weibel and Benjamin Guedj. On change of measure inequalities for \(f\)-divergences. _arXiv_, abs/2202.05568, 2022.
* [RACA23] Charles Riou, Pierre Alquier, and Badr-Eddine Cherief-Abdellatif. Bayes meets Bernstein at the Meta Level: an Analysis of Fast Rates in Meta-Learning with PAC-Bayes. _arXiv_, abs/2302.11709, 2023.
* [RFJK21] Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, and Andreas Krause. PACOH: Bayes-optimal meta-learning with PAC-guarantees. In _International Conference on Machine Learning (ICML)_, 2021.
* [RJFK22] Jonas Rothfuss, Martin Josifoski, Vincent Fortuin, and Andreas Krause. PAC-Bayesian Meta-Learning: From Theory to Practice. _arXiv_, abs/2211.07206, 2022.
* [RZ20] Daniel Russo and James Zou. How Much Does Your Data Exploration Overfit? Controlling Bias via Information Usage. _IEEE Transactions on Information Theory_, 66(1), 2020.
* [SAC23] Otmane Sakhi, Pierre Alquier, and Nicolas Chopin. PAC-Bayesian Offline Contextual Bandits With Guarantees. In _International Conference on Machine Learning (ICML)_, 2023.
* [SLCB\({}^{+}\)12] Yevgeny Seldin, Francois Laviolette, Nicolo Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian Inequalities for Martingales. _IEEE Transactions on Information Theory_, 58(12), 2012.
* [Sli19] Aleksandrs Slivkins. Introduction to Multi-Armed Bandits. _Foundations and Trends in Machine Learning_, 2019.
* [SLST\({}^{+}\)11] Yevgeny Seldin, Francois Laviolette, John Shawe-Taylor, Jan Peters, and Peter Auer. PAC-Bayesian Analysis of Martingales and Multiarmed Bandits. _arXiv_, abs/1105.2416, 2011.
* [VC68] Vladimir Vapnik and Alexey Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. In _Doklady Akademii Nauk USSR_, 1968.
* [VC74] Vladimir Vapnik and Alexey Chervonenkis. Theory of pattern recognition, 1974.
* [Vil09] Cedric Villani. _Optimal transport: old and new_. Number 338 in Grundlehren der mathematischen Wissenschaften. Springer, 2009.
* [WDFC19] Hao Wang, Mario Diaz, Jose Candido Silveira Santos Filho, and Flavio P. Calmon. An Information-Theoretic View of Generalization via Wasserstein Distance. In _IEEE International Symposium on Information Theory (ISIT)_, 2019.
* [XR17] Aolin Xu and Maxim Raginsky. Information-theoretic analysis of generalization capability of learning algorithms. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2017.
* [XRV17] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017.
* [Zin03] Martin Zinkevich. Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In _International Conference on Machine Learning (ICML)_, 2003.
* [ZLT18] Jingwei Zhang, Tongliang Liu, and Dacheng Tao. An Optimal Transport View on Generalization. _arXiv_, abs/1811.03270, 2018.
* [ZS21] Vincent Zhuang and Yanan Sui. No-Regret Reinforcement Learning with Heavy-Tailed Rewards. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2021.
* [ZVM\({}^{+}\)21] Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Remi Emonet, Amaury Habrard, Pascal Germain, and Benjamin Guedj. Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2021.
The supplementary material is organized as follows:
1. We provide more discussion about Theorems 1 and 2 in Appendix A;
2. The proofs of Theorems 1 to 4 are presented in Appendix B;
3. We present in Appendix C additional information about the experiments.
## Appendix A Additional insights on Section 3.1
In Appendix A.1, we provide additional discussion about Theorem 1 while Appendix A.2 discuss about the convergence rates for Theorem 2.
### Supplementary discussion about Theorem 1
[16, Corollary 10] proposed PAC-Bayes bounds with Wasserstein distances on a Euclidean predictor space with Gaussian prior and posteriors. The bounds have an explicit convergence rate of \(\mathcal{O}(\sqrt{\nicefrac{{dW_{1}}}{{m}}(\rho,\pi)/m})\) where the predictor space is Euclidean with dimension \(d\). While our bound does not propose such an explicit convergence rate, it allows us to derive learning algorithms as described in Section 4. A broader discussion about the role of \(K\) is detailed in Theorem 2. Furthermore, our bound holds for any Polish predictor space and does not require Gaussian distributions. Furthermore, our result exploits data-dependent priors and deals with the dimension only through the Wasserstein distance, which can attenuate the impact of the dimension.
### Convergence rates for Theorem 2
In this section, we discuss more deeply the values of \(K\) in Theorem 2. This implies a tradeoff between the number of sets \(K\) and the cardinal of each \(\mathcal{S}_{i}\). The tightness of the bound depends highly on the sets \(\mathcal{S}_{1},\ldots,\mathcal{S}_{K}\).
**Full batch setting K=1.** When \(\mathcal{S}_{1}=\mathcal{S}\) with \(K=1\), the bound of Theorem 2 becomes, with probability \(1-\delta\), for any \(\rho\in\mathcal{M}(\mathcal{H})\)
\[\operatorname*{\mathbb{E}}_{h\sim\rho}\left[\operatorname{\mathbf{R}}_{\mu}(h )-\hat{\operatorname{\mathbf{R}}}_{\mathcal{S}}(h)\right]\leq 2L\mathrm{W}(\rho, \pi)+2\sqrt{\frac{\ln\frac{1}{\delta}}{m}}\;,\]
where \(\pi=\pi_{1}\) is data-free. This bound can be seen as the high-probability (PAC-Bayesian) version of the expected bound of [17]. Furthermore, in this setting, we are able, through our proof technique, to recover an explicit convergence rate similar to the one of [1, Theorem 12]. It is stated below.
**Corollary 5**.: _For any distribution \(\mu\) on \(\mathcal{Z}\), for any finite hypothesis space \(\mathcal{H}\) equipped with a distance \(d\), for any \(L\)-Lipschitz loss function \(\ell:\mathcal{H}\times\mathcal{Z}\to[0,1]\), for any \(\delta\in(0,1]\), we have, with probability \(1-\delta\) over the sample \(\mathcal{S}\), for any \(\rho\in\mathcal{M}(\mathcal{H})\):_
\[\operatorname*{\mathbb{E}}_{h\sim\rho}\left[\operatorname{\mathbf{R}}_{\mu}(h) -\hat{\operatorname{\mathbf{R}}}_{\mathcal{S}}(h)\right]\leq L\sqrt{\frac{2\ln \left(\nicefrac{{4|\mathcal{H}|^{2}}}{{\delta}}\right)}{m}}\mathrm{W}(\rho,\pi) +2\sqrt{\frac{\ln\left(\nicefrac{{2}}{{\delta}}\right)}{m}}\]
_where \(\pi\) is a data-free prior._
Proof.: We exploit [1, Equation 35] to state that with probability at least \(1-\nicefrac{{\delta}}{{2}}\), for any \((h,h^{\prime})\in\mathcal{H}^{2}\):
\[\left|\frac{1}{m}\sum_{i=1}^{m}\left[\ell\left(h^{\prime},\mathbf{z}_{i} \right)-\ell\left(h,\mathbf{z}_{i}\right)\right]-\operatorname*{\mathbb{E}}_{ \mathbf{z}\sim\mu}\left[\ell\left(h^{\prime},\mathbf{z}\right)-\ell(h, \mathbf{z})\right]\right|\leq L\sqrt{\frac{2\ln\left(\frac{4|\mathcal{H}|^{2} }{\delta}\right)}{m}}d\left(h,h^{\prime}\right).\]
So, with high probability, we can exploit the Kantorovich-Rubinstein duality with this new Lipschitz constant: with probability at least \(1-\delta/2\):
\[\operatorname*{\mathbb{E}}_{h\sim\rho}\left[\operatorname{\mathbf{R}}_{\mu}(h )-\hat{\operatorname{\mathbf{R}}}_{\mathcal{S}}(h)\right]\leq L\sqrt{\frac{2 \ln\left(\frac{4|\mathcal{H}|^{2}}{\delta}\right)}{m}}\mathrm{W}(\rho,\pi)+ \operatorname*{\mathbb{E}}_{h\sim\pi}\frac{1}{m}\left[\sum_{i=1}^{m} \operatorname{\mathbf{R}}_{\mu}(h)-\ell(h,\mathbf{z}_{i})\right],\]To conclude, we control the quantity on the right-hand side the same way as in Theorem 1 and Theorem 2. We then have, with probability at least \(1-\delta/2\), for a loss function in \([0,1]\):
\[\frac{1}{m}\sum_{i=1}^{m}\mathsf{R}_{\mu}(h)-\ell(h,\mathbf{z}_{i})\leq 2\sqrt{ \frac{\ln\frac{K}{\delta}}{m}}.\]
Taking the union bound concludes the proof.
**Mini-batch setting \(K=\sqrt{m}\).** When a tradeoff is desired between the quantity of data we want to infuse in our priors and an explicit convergence rate, a meaningful candidate is when \(K=\sqrt{m}\). Theorem 2's bound becomes, in this particular case:
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathsf{R}_{\mu}(h)-\hat{\mathsf{R}}_{ \mathcal{S}}(h)\right]\leq\frac{2L}{\sqrt{m}}\sum_{i=1}^{\sqrt{m}}\mathrm{W}( \rho,\pi_{i})+2\sqrt{\frac{\ln\frac{\sqrt{m}}{\delta}}{\sqrt{m}}}. \tag{8}\]
**Towards online learning:**\(K=m\). When \(K=m\), the sets \(\mathcal{S}_{i}\) contain only one example. More precisely, we have for all \(i\in\{1,\ldots,m\}\) the set \(\mathcal{S}_{i}=\{\mathbf{z}_{i}\}\). In this case, the bound becomes:
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathsf{R}_{\mu}(h)-\hat{\mathsf{R}}_{ \mathcal{S}}(h)\right]\leq\frac{2L}{m}\sum_{i=1}^{m}\mathrm{W}(\rho,\pi_{i})+ 2\sqrt{\ln\frac{m}{\delta}}.\]
This bound is vacuous since the last term is incompressible, hence the need for a new technique detailed in Section 3.2 to deal with it.
## Appendix B Proofs
The proof of Theorem 1 is presented in Appendix B.1. Appendices B.2 and B.3 introduce two proofs of Theorem 2. Theorem 3's proof is presented in Appendix B.4. Appendix B.5 provides the proof of Theorem 3.
### Proof of Theorem 1
**Theorem 1**.: _We assume the loss \(\ell\) to be \(L\)-Lipschitz. Then, for any \(\delta\in(0,1]\), for any sequence of positive scalar \((\lambda_{i})_{i\in\{1,\ldots,K\}}\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), the following holds for the distributions \(\pi_{i,\mathcal{S}}:=\pi_{i}(\mathcal{S},.)\) and for any \(\rho\in\mathcal{M}(\mathcal{H})\):_
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[R_{\mu}(h)-\hat{\mathsf{R}}_{ \mathcal{S}}(h)\right]\\ \leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho, \pi_{i,\mathcal{S}})+\frac{1}{m}\sum_{i=1}^{K}\frac{\ln\left(\frac{K}{\delta} \right)}{\lambda_{i}}+\frac{\lambda_{i}}{2}\left(\mathop{\mathbb{E}}_{h\sim \pi_{i,\mathcal{S}}}\left[\hat{V}_{|\mathcal{S}_{i}|}(h)+V_{|\mathcal{S}_{i}| }(h)\right]\right),\]
_where \(\pi_{i,\mathcal{S}}\) does not depend on \(\mathcal{S}_{i}\). Also, for any \(i,|S_{i}|\), we have \(\hat{V}_{|\mathcal{S}_{i}|}(h)=\sum_{\mathbf{z}\in\mathcal{S}_{i}}\left(\ell( h,\mathbf{z})-R_{\mu}(h)\right)^{2}\) and \(V_{|\mathcal{S}_{i}|}(h)=\mathop{\mathbb{E}}_{\mathcal{S}_{i}}\left[\hat{V}_{| \mathcal{S}_{i}|}(h)\right]\)._
Proof.: For the sake of readability, we identify, for any \(i\), \(\pi_{i}\) and \(\pi_{i,\mathcal{S}}\).
**Step 1: Exploit the Kantorovich duality [20, Remark 6.5].** First of all, note that for a \(L\)-Lipschitz loss function \(\ell:\mathcal{H}\times\mathcal{Z}\rightarrow[0,1]\), we have
\[\left|\left(|\mathcal{S}_{i}|\mathsf{R}_{\mu}(h_{1})-\sum_{\mathbf{z}\in \mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left(|\mathcal{S}_{i}|\mathsf{ R}_{\mu}(h_{2})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{2},\mathbf{z}) \right)\right|\leq 2|\mathcal{S}_{i}|Ld(h_{1},h_{2}). \tag{9}\]Indeed, we can deduce Equation (9) from Jensen inequality, the triangle inequality, and by definition that we have
\[\left|\left(|\mathcal{S}_{i}|\text{R}_{\mu}(h_{1})-\sum_{\mathbf{z}\in \mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left(|\mathcal{S}_{i}|\text{R}_{ \mu}(h_{2})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{2},\mathbf{z})\right)\right|\] \[=\left|\left(\sum_{\mathbf{z}\in\mathcal{S}_{i}}\text{R}_{\mu}(h_{1})- \sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left(\sum_{\mathbf{z }\in\mathcal{S}_{i}}\text{R}_{\mu}(h_{2})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell( h_{2},\mathbf{z})\right)\right|\] \[\leq\sum_{\mathbf{z}\in\mathcal{S}_{i}}\mathop{\mathbb{E}}_{\mathbf{z}^{ \prime}\sim\mu}\left[|\ell(h_{1},\mathbf{z}^{\prime})-\ell(h_{2},\mathbf{z}^{ \prime})|+|\ell(h_{2},\mathbf{z})-\ell(h_{1},\mathbf{z})|\right]\] \[\leq\mathop{\mathbb{E}}_{\mathbf{z}^{\prime}\sim\mu}\sum_{\mathbf{z}\in \mathcal{S}_{i}}2Ld(h_{1},h_{2})\] \[=2|\mathcal{S}_{i}|Ld(h_{1},h_{2}).\]
We are now able to upper-bound \(\mathop{\mathbb{E}}_{h\sim\rho}[\text{R}_{\mu}(h)-\hat{\text{R}}_{\mathcal{S}} (h)]\). Indeed, we have
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\text{R}_{\mu}(h)-\hat{\text {R}}_{\mathcal{S}}(h)\right] =\frac{1}{m}\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\rho}\left[| \mathcal{S}_{i}|\text{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h, \mathbf{z})\right]\] \[\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\text{W}(\rho,\pi_ {i})+\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[| \mathcal{S}_{i}|\text{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h, \mathbf{z})\right], \tag{10}\]
where the inequality comes from the Kantorovich-Rubinstein duality theorem.
Step 2: Define an adapted supermartingale.For any \(1\leq i\leq K\), we fix \(\lambda_{i}>0\) and we provide an arbitrary order to the elements of \(\mathcal{S}_{i}:=\{\mathbf{z}_{i,1},\cdots,\mathbf{z}_{i,|\mathcal{S}_{i}|}\}\). Then we define for any \(h\):
\[M_{|\mathcal{S}_{i}|}(h):=|\mathcal{S}_{i}|\text{R}_{\mu}(h)-\sum_{\mathbf{z}\in \mathcal{S}_{i}}\ell(h,\mathbf{z})=\sum_{j=1}^{|\mathcal{S}_{i}|}R_{\mu}(h)- \ell(h,\mathbf{z}_{i,j}).\]
Remark that, because our data are _i.i.d._, \((M_{|\mathcal{S}_{i}|})_{|\mathcal{S}_{i}|\geq 1}\) is a martingale. We then exploit the technique [1] to define a supermartingale. More precisely, we exploit a result from [1] cited in Lemma 1.3 of [1] coupled with Lemma 2.2 of [1] to ensure that the process
\[SM_{|\mathcal{S}_{i}|}:=\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[\exp\left( \lambda_{i}M_{|\mathcal{S}_{i}|}(h)-\frac{\lambda_{i}^{2}}{2}\left(\hat{V}_{| \mathcal{S}_{i}|}(h)+V_{|\mathcal{S}_{i}|}(h)\right)\right)\right],\]
is a supermartingale, where \(\hat{V}_{|\mathcal{S}_{i}|}(h)=\sum_{j=1}^{|\mathcal{S}_{i}|}\left(\ell(h, \mathbf{z}_{i,j})-R_{\mu}(h)\right)^{2}\) and \(V_{|\mathcal{S}_{i}|}(h)=\mathop{\mathbb{E}}_{\mathcal{S}_{i}}\left[\hat{V}_{| \mathcal{S}_{i}|}(h)\right]\).
Step 3. Combine steps 1 and 2.We restart from Equation (10) to exploit again the Kantorovich-Rubinstein duality.
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\text{R}_{\mu}(h)-\hat{\text {R}}_{\mathcal{S}}(h)\right] =\frac{1}{m}\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\rho}\left[| \mathcal{S}_{i}|\text{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h, \mathbf{z})\right]\] \[\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\text{W}(\rho,\pi_ {i})+\sum_{i=1}^{K}\frac{1}{m\lambda_{i}}\lambda_{i}\mathop{\mathbb{E}}_{h\sim \pi_{i}}\left[|\mathcal{S}_{i}|\text{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i }}\ell(h,\mathbf{z})\right],\] \[=\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\text{W}(\rho,\pi_{i}) +\sum_{i=1}^{K}\frac{1}{m\lambda_{i}}\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[ \lambda_{i}M_{|\mathcal{S}_{i}|}|\right],\] \[\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\text{W}(\rho,\pi_ {i})+\sum_{i=1}^{K}\frac{1}{m\lambda_{i}}\ln\left(SM_{|\mathcal{S}_{i}|}\right)\] \[+\frac{1}{m}\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\pi_{i}} \left[\frac{\lambda_{i}}{2}\left(\hat{V}_{|\mathcal{S}_{i}|}(h)+V_{|\mathcal{ S}_{i}|}(h)\right)\right].\]The last line holds thanks to Jensen's inequality. We now apply Ville's inequality (see _e.g._, Section 1.2 of [13]). We have for any \(i\):
\[\mathbb{P}_{\mathcal{S}_{i}\sim\mu^{|\mathcal{S}_{i}|}}\left(\forall|S_{i}|\geq 1,SM_{|S_{i}|}\leq\frac{1}{\delta}\right)\geq 1-\delta.\]
Applying an union bound and authorising \(\lambda_{i}\) to be a function of \(|S_{i}|\) (thus the inequality does not hold for all \(|\mathcal{S}_{i}|\) simultaneously) finally gives with probability at least \(1-\delta\), for all \(\rho\in\mathcal{M}(\mathcal{H})\) :
\[\mathbb{E}_{h\sim\rho}\left[\mathbf{R}_{\mu}(h)-\hat{\mathbf{R}}_{\mathcal{S}}( h)\right]\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho,\pi_{i})+ \sum_{i=1}^{K}\frac{\ln\left(\frac{K}{\delta}\right)}{\lambda_{i}m}+\frac{ \lambda_{i}}{2m}\mathbb{E}_{h\sim\pi_{i}}\left[\hat{V}_{|\mathcal{S}_{i}|}(h)+ V_{|\mathcal{S}_{i}|}(h)\right].\]
### Proof of Theorem 2
**Theorem 2**.: _We assume our loss \(\ell\) to be non-negative and \(L\)-Lipschitz. We also assume that, for any \(1\leq i\leq K\), for any dataset\(\mathcal{S}\), we have \(\mathbb{E}_{h\sim\pi_{i},(\cdot,\mathcal{S}),z\sim\mu}\left[\ell(h,z)^{2} \right]\leq 1\) (bounded order 2 moments for priors). Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), the following holds for the distributions \(\pi_{i,\mathcal{S}}:=\pi_{i}(\mathcal{S},.)\) and for any \(\rho\in\mathcal{M}(\mathcal{H})\):_
\[\mathbb{E}_{h\sim\rho}\left[\mathbf{R}_{\mu}(h)-\hat{\mathbf{R}}_{\mathcal{S}} (h)\right]\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho,\pi_{ i,\mathcal{S}})+\sum_{i=1}^{K}\sqrt{\frac{2|\mathcal{S}_{i}|\ln\frac{K}{\delta} }{m^{2}}},\]
_where \(\pi_{i,\mathcal{S}}\) does not depend on \(\mathcal{S}_{i}\)._
Proof.: For the sake of readability, we identify, for any \(i\), \(\pi_{i}\) and \(\pi_{i,\mathcal{S}}\).
Step 1: Exploit the Kantorovich duality [14, Remark 6.5].First of all, note that for a \(L\)-Lipschitz loss function \(\ell:\mathcal{H}\times\mathcal{Z}\to[0,1]\), we have
\[\left|\left(|\mathcal{S}_{i}|\mathbf{R}_{\mu}(h_{1})-\sum_{\mathbf{z}\in \mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left(|\mathcal{S}_{i}|\mathbf{ R}_{\mu}(h_{2})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{2},\mathbf{z}) \right)\right|\leq 2|\mathcal{S}_{i}|Ld(h_{1},h_{2}). \tag{11}\]
Indeed, we can deduce Equation (11) from Jensen inequality, the triangle inequality, and by definition that we have
\[\left|\left(|\mathcal{S}_{i}|\mathbf{R}_{\mu}(h_{1})-\sum_{\mathbf{ z}\in\mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left(|\mathcal{S}_{i}|\mathbf{R} _{\mu}(h_{2})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{2},\mathbf{z})\right)\right|\] \[=\left|\left(\sum_{\mathbf{z}\in\mathcal{S}_{i}}\mathbf{R}_{\mu}( h_{1})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left( \sum_{\mathbf{z}\in\mathcal{S}_{i}}\mathbf{R}_{\mu}(h_{2})-\sum_{\mathbf{z} \in\mathcal{S}_{i}}\ell(h_{2},\mathbf{z})\right)\right|\] \[\leq\sum_{\mathbf{z}\in\mathcal{S}_{i}}\mathbb{E}_{\mathbf{z}^{ \prime}\sim\mu}\left[\ell(h_{1},\mathbf{z}^{\prime})-\ell(h_{2},\mathbf{z}^{ \prime})|+|\ell(h_{2},\mathbf{z})-\ell(h_{1},\mathbf{z})|\right]\] \[\leq\mathbb{E}_{\mathbf{z}^{\prime}\sim\mu}\sum_{\mathbf{z}\in \mathcal{S}_{i}}2Ld(h_{1},h_{2})\] \[=2|\mathcal{S}_{i}|Ld(h_{1},h_{2}).\]
We are now able to upper-bound \(\mathbb{E}_{h\sim\rho}[\mathbf{R}_{\mu}(h)-\hat{\mathbf{R}}_{\mathcal{S}}(h)]\). Indeed, we have
\[\mathbb{E}_{h\sim\rho}\left[\mathbf{R}_{\mu}(h)-\hat{\mathbf{R }}_{\mathcal{S}}(h)\right] =\frac{1}{m}\sum_{i=1}^{K}\mathbb{E}_{h\sim\rho}\left[|\mathcal{ S}_{i}|\mathbf{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h, \mathbf{z})\right]\] \[\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho, \pi_{i})+\sum_{i=1}^{K}\mathbb{E}_{h\sim\pi_{i}}\frac{1}{m}\left[|\mathcal{S}_ {i}|\mathbf{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h,\mathbf{z}) \right], \tag{12}\]
where the inequality comes from the Kantorovich-Rubinstein duality theorem.
Step 2: Define an adapted supermartingale. For any \(1\leq i\leq K\), we fix \(\lambda_{i}>0\) and we provide an arbitrary order to the elements of \(\mathcal{S}_{i}:=\{\mathbf{z}_{i,1},\cdots,\mathbf{z}_{i,|\mathcal{S}_{i}|}\}\). Then we define for any \(h\):
\[M_{|\mathcal{S}_{i}|}(h):=|\mathcal{S}_{i}|\mathbf{R}_{\mu}(h)-\sum_{\mathbf{z }\in\mathcal{S}_{i}}\ell(h,\mathbf{z})=\sum_{j=1}^{|\mathcal{S}_{i}|}R_{\mu}(h )-\ell(h,\mathbf{z}_{i,j}).\]
Remark that, because our data are _i.i.d._, \((M_{|\mathcal{S}_{i}|})_{|\mathcal{S}_{i}|\geq 1}\) is a martingale. We then exploit the technique [12] to define a supermartingale. More precisely, we exploit [12, Lemma A.2 and Lemma B.1] to ensure that the process
\[SM_{|\mathcal{S}_{i}|}:=\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[\exp\left( \lambda_{i}M_{|\mathcal{S}_{i}|}(h)-\frac{\lambda_{i}^{2}}{2}L_{|\mathcal{S}_ {i}|}(h)\right)\right],\]
is a supermartingale, where, because our data are _i.i.d._, \(L_{|\mathcal{S}_{i}|}(h)=\mathop{\mathbb{E}}_{\mathcal{S}}\left[\sum_{j=1}^{| \mathcal{S}_{i}|}\ell(h,\mathbf{z}_{i,j})^{2}\right]=|\mathcal{S}_{i}|\mathop{ \mathbb{E}}_{z\sim\mu}[\ell(h,z)^{2}]\).
Step 3. Combine steps 1 and 2.We restart from Equation (12) to exploit the Kantorovich-Rubinstein duality again.
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathbf{R}_{\mu}(h)-\hat{ \mathbf{R}}_{\mathcal{S}}(h)\right] =\frac{1}{m}\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\rho}\left[| \mathcal{S}_{i}|\mathbf{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h, \mathbf{z})\right]\] \[\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho, \pi_{i})+\sum_{i=1}^{K}\frac{1}{m\lambda_{i}}\lambda_{i}\mathop{\mathbb{E}}_{h \sim\pi_{i}}\left[|\mathcal{S}_{i}|\mathbf{R}_{\mu}(h)-\sum_{\mathbf{z}\in \mathcal{S}_{i}}\ell(h,\mathbf{z})\right],\] \[=\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho,\pi_{ i})+\sum_{i=1}^{K}\frac{1}{m\lambda_{i}}\mathop{\mathbb{E}}_{h\sim\pi_{i}} \left[\lambda_{i}M_{|\mathcal{S}_{i}|}\right],\] \[\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho, \pi_{i})+\sum_{i=1}^{K}\frac{1}{m\lambda_{i}}\ln\left(SM_{|\mathcal{S}_{i}|}\right)\] \[+\frac{1}{m}\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[ \frac{\lambda_{i}}{2}L_{|\mathcal{S}_{i}|}(h)\right].\]
The last line holds thanks to Jensen's inequality. We now apply Ville's inequality (see _e.g._, section 1.2 of [10]). We have for any \(i\):
\[\mathop{\mathbb{P}}_{\mathcal{S}_{i}\sim\mu|\mathcal{S}_{i}|}\left(\forall|S_ {i}|\geq 1,SM_{|\mathcal{S}_{i}|}\leq\frac{1}{\delta}\right)\geq 1-\delta.\]
Applying an union bound and authorising \(\lambda_{i}\) to be a function of \(|S_{i}|\) (thus the inequality does not hold for all \(|\mathcal{S}_{i}|\) simultaneously) finally gives with probability at least \(1-\delta\), for all \(\rho\in\mathcal{M}(\mathcal{H})\) :
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathbf{R}_{\mu}(h)-\hat{\mathbf{R}}_{ \mathcal{S}}(h)\right]\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{ W}(\rho,\pi_{i})+\sum_{i=1}^{K}\frac{\ln\left(\frac{K}{\delta}\right)}{ \lambda_{i}m}+\frac{\lambda_{i}}{2m}\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[L |\mathcal{S}_{i}|(h)\right].\]
Finally, using the assumption \(\mathop{\mathbb{E}}_{h\sim\pi_{i}}\mathop{\mathbb{E}}_{z\sim\mu}[\ell(h,z)^{2 }]\leq 1\) gives, with probability at least \(1-\delta\), for all \(\rho\in\mathcal{M}(\mathcal{H})\):
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathbf{R}_{\mu}(h)-\hat{\mathbf{R}}_{ \mathcal{S}}(h)\right]\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{ W}(\rho,\pi_{i})+\sum_{i=1}^{K}\frac{\ln\left(\frac{K}{\delta}\right)}{ \lambda_{i}m}+\frac{\lambda_{i}[\mathcal{S}_{i}|]}{2m}.\]
Taking for each \(i\), \(\lambda_{i}=\sqrt{\frac{2\ln(K/\delta)}{|\mathcal{S}_{i}|}}\) concludes the proof.
### Alternative proof of Theorem 2
We state here a slightly tighter version of Theorem 2 for bounded losses, which relies on an application of McDiarmid's inequality instead of supermartingale techniques. This is useful for the numerical evaluations of our bound.
**Theorem 6**.: _We assume our loss \(\ell\) to be in \([0,1]\) and \(L\)-Lipschitz. Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), the following holds for the distributions \(\pi_{i,\mathcal{S}}:=\pi_{i}(\mathcal{S},.)\) and for any \(\rho\in\mathcal{M}(\mathcal{H})\):_
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathsf{R}_{\mu}(h)-\hat{\mathsf{R}}_{ \mathcal{S}}(h)\right]\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W} (\rho,\pi_{i,\mathcal{S}})+\sum_{i=1}^{K}\sqrt{\frac{|\mathcal{S}_{i}|\ln\frac{ K}{\delta}}{2m^{2}}}\]
_where \(\pi_{i}\) does not depend on \(\mathcal{S}_{i}\)._
Proof.: For the sake of readability, we identify, for any \(i\), \(\pi_{i}\) and \(\pi_{i,\mathcal{S}}\).
First of all, note that for a \(L\)-Lipschitz loss function \(\ell:\mathcal{H}\times\mathcal{Z}\to[0,1]\), we have
\[\left|\left(|\mathcal{S}_{i}|\mathsf{R}_{\mu}(h_{1})-\sum_{\mathbf{z}\in \mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left(|\mathcal{S}_{i}|\mathsf{ R}_{\mu}(h_{2})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{2},\mathbf{z})\right) \right|\leq 2|\mathcal{S}_{i}|Ld(h_{1},h_{2}). \tag{13}\]
Indeed, we can deduce Equation (13) from Jensen's inequality, the triangle inequality, and by definition that we have
\[\left|\left(|\mathcal{S}_{i}|\mathsf{R}_{\mu}(h_{1})-\sum_{\mathbf{ z}\in\mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left(|\mathcal{S}_{i}| \mathsf{R}_{\mu}(h_{2})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{2},\mathbf{ z})\right)\right|\] \[=\left|\left(\sum_{\mathbf{z}\in\mathcal{S}_{i}}\mathsf{R}_{\mu}(h _{1})-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h_{1},\mathbf{z})\right)-\left( \sum_{\mathbf{z}\in\mathcal{S}_{i}}\mathsf{R}_{\mu}(h_{2})-\sum_{\mathbf{z}\in \mathcal{S}_{i}}\ell(h_{2},\mathbf{z})\right)\right|\] \[\leq\sum_{\mathbf{z}\in\mathcal{S}_{i}}\mathop{\mathbb{E}}_{ \mathbf{z}^{\prime}\sim\mu}\left[|\ell(h_{1},\mathbf{z}^{\prime})-\ell(h_{2}, \mathbf{z}^{\prime})|+|\ell(h_{2},\mathbf{z})-\ell(h_{1},\mathbf{z})|\right]\] \[\leq\mathop{\mathbb{E}}_{\mathbf{z}^{\prime}\sim\mu}\sum_{ \mathbf{z}\in\mathcal{S}_{i}}2Ld(h_{1},h_{2})\] \[=2|\mathcal{S}_{i}|Ld(h_{1},h_{2}).\]
We are now able to upper-bound \(\mathop{\mathbb{E}}_{h\sim\rho}[\mathsf{R}_{\mu}(h)-\hat{\mathsf{R}}_{ \mathcal{S}}(h)]\). Indeed, we have
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathsf{R}_{\mu}(h)-\hat{ \mathsf{R}}_{\mathcal{S}}(h)\right] =\frac{1}{m}\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\rho}\left[ |\mathcal{S}_{i}|\mathsf{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell( h,\mathbf{z})\right]\] \[\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L}{m}\mathrm{W}(\rho, \pi_{i})+\sum_{i=1}^{K}\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[| \mathcal{S}_{i}|\mathsf{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell (h,\mathbf{z})\right], \tag{14}\]
where the inequality comes from the Kantorovich-Rubinstein duality theorem. Let \(f(\mathcal{S}_{i})=\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[| \mathcal{S}_{i}|\mathsf{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h,\mathbf{z}_{i})\right]\), the function has the bounded difference inequality, _i.e._, for two datasets \(\mathcal{S}_{i}\) and \(\mathcal{S}_{i}^{\prime}\) that differs from one example (the \(k\)-th example, without loss of generality), we have
\[|f(\mathcal{S}_{i})-f(\mathcal{S}_{i}^{\prime})| =\left|\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[| \mathcal{S}_{i}|\mathsf{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h,\mathbf{z})\right]-\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[| \mathcal{S}_{i}|\mathsf{R}_{\mu}(h)-\sum_{\mathbf{z}^{\prime}\in\mathcal{S}_{i }^{\prime}}\ell(h,\mathbf{z}^{\prime})\right]\right|\] \[=\left|\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[\frac{1}{m}| \mathcal{S}_{i}|\mathsf{R}_{\mu}(h)-\frac{1}{m}\sum_{\mathbf{z}\in\mathcal{S}_{i }}\ell(h,\mathbf{z})-\frac{1}{m}|\mathcal{S}_{i}|\mathsf{R}_{\mu}(h)+\frac{1}{ m}\sum_{\mathbf{z}^{\prime}\in\mathcal{S}_{i}^{\prime}}\ell(h,\mathbf{z}^{ \prime})\right]\right|\] \[=\left|\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[\frac{1}{m}\sum_ {\mathbf{z}^{\prime}\in\mathcal{S}_{i}^{\prime}}\ell(h,\mathbf{z}^{\prime})- \frac{1}{m}\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h,\mathbf{z})\right]\right|\] \[=\left|\mathop{\mathbb{E}}_{h\sim\pi_{i}}\left[\frac{1}{m}\ell(h,\mathbf{z}^{\prime}_{k})-\frac{1}{m}\ell(h,\mathbf{z}_{k})\right]\right|\] \[\leq\frac{1}{m}.\]Hence, from Mediarmid's inequality, we have with probability at least \(1-\frac{\delta}{K}\) over \(\mathcal{S}\sim\mu^{m}\)
\[\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[|\mathcal{S}_{i} |\mathbb{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h,\mathbf{z})\right]\] \[\leq\mathop{\mathbb{E}}_{\mathcal{S}\sim\mu^{m}}\mathop{\mathbb{E} }_{h\sim\pi_{i}}\frac{1}{m}\left[|\mathcal{S}_{i}|\mathbb{R}_{\mu}(h)-\sum_{ \mathbf{z}\in\mathcal{S}_{i}}\ell(h,\mathbf{z})\right]+\sqrt{\frac{|\mathcal{S} _{i}|\ln\frac{K}{\delta}}{2m^{2}}}\] \[=\mathop{\mathbb{E}}_{\mathcal{S}_{i}^{c}\sim\mu^{m-|\mathcal{S}_ {i}|}}\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[|\mathcal{S}_{i}| \mathbb{R}_{\mu}(h)-\sum_{\mathbf{z}\in\mathcal{S}_{i}}\ell(h,\mathbf{z}) \right]+\sqrt{\frac{|\mathcal{S}_{i}|\ln\frac{K}{\delta}}{2m^{2}}}\] \[=\mathop{\mathbb{E}}_{\mathcal{S}_{i}^{c}\sim\mu^{m-|\mathcal{S}_ {i}|}}\mathop{\mathbb{E}}_{h\sim\pi_{i}}\frac{1}{m}\left[|\mathcal{S}_{i}| \mathbb{R}_{\mu}(h)-|\mathcal{S}_{i}|\mathbb{R}_{\mu}(h)\right]+\sqrt{\frac{| \mathcal{S}_{i}|\ln\frac{K}{\delta}}{2m^{2}}}\] \[=\sqrt{\frac{|\mathcal{S}_{i}|\ln\frac{K}{\delta}}{2m^{2}}}.\]
From the union bound, we have with probability at least \(1-\delta\) over \(\mathcal{S}\sim\mu^{m}\), for any \(\rho\in\mathcal{M}(\mathcal{H})\),
\[\mathop{\mathbb{E}}_{h\sim\rho}\left[\mathbb{R}_{\mu}(h)-\hat{ \mathbb{R}}_{\mathcal{S}}(h)\right]\leq\sum_{i=1}^{K}\frac{2|\mathcal{S}_{i}|L }{m}\mathrm{W}(\rho,\pi_{i})+\sum_{i=1}^{K}\sqrt{\frac{|\mathcal{S}_{i}|\ln \frac{K}{\delta}}{2m^{2}}},\]
which is the claimed result.
We are now able to give a corollary of Theorem 6.
**Corollary 7**.: _We assume our loss \(\ell\) to be in \([0,1]\) and \(L\)-Lipschitz. Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\hat{\mathcal{S}}\), the following holds for the hypotheses \(h_{i,\mathcal{S}}\in\mathcal{H}\) associated with the Dirac distributions \(\pi_{i,\mathcal{S}}\) and for any \(h\in\mathcal{H}\):_
\[R_{\mu}(h)\leq\hat{R}_{\mathcal{S}}(h)+\sum_{i=1}^{K}\frac{2| \mathcal{S}_{i}|L}{m}d(h,h_{i,\mathcal{S}})+\sum_{i=1}^{K}\sqrt{\frac{| \mathcal{S}_{i}|\ln\frac{K}{\delta}}{2m^{2}}}.\]
Such a bound was impossible to obtain from the PAC-Bayesian bounds based on a KL divergence. Indeed, the KL divergence is infinite for two distributions with disjoint supports. Hence, the PAC-Bayesian framework based on the Wasserstein distance allows us to provide uniform-convergence bounds from a proof technique different from the ones based on the Rademacher complexity [20, 1, 1] or the VC-dimension [21, 22]. In Section 4, we provide an algorithm minimising such a bound.
### Proof of Theorem 3
**Theorem 3**.: _We assume our loss \(\ell\) to be \(L\)-Lipschitz. Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), the following holds for the distributions \(\pi_{i,\mathcal{S}}:=\pi_{i}(\mathcal{S},.)\) and for any sequence \((\rho_{i})_{i=1\cdots m}\in\mathcal{M}(\mathcal{H})^{m}\):_
\[\sum_{i=1}^{m}\mathop{\mathbb{E}}_{h_{i}\sim\rho_{i}}\Big{[} \mathop{\mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})\mid\mathcal{F}_{i-1}]-\ell(h_{ i},\mathbf{z}_{i})\Big{]} \leq 2L\sum_{i=1}^{m}\mathrm{W}(\rho_{i},\pi_{i,\mathcal{S}})\] \[+\frac{\lambda}{2}\sum_{i=1}^{m}\mathop{\mathbb{E}}_{h_{i}\sim \pi_{i,\mathcal{S}}}\Big{[}\hat{V}_{i}(h_{i},\mathbf{z}_{i})+V_{i}(h_{i})\Big{]} +\frac{\ln(1/\delta)}{\lambda},\]
_where for all \(i\), \(\hat{V}_{i}(h_{i},\mathbf{z}_{i})=(\ell(h_{i},\mathbf{z}_{i})-\mathop{\mathbb{ E}}_{i-1}[\ell(h_{i},\mathbf{z}_{i})])^{2}\) is the conditional empirical variance at time \(i\) and \(V_{i}(h_{i})=\mathop{\mathbb{E}}_{i-1}[\hat{V}(h_{i},\mathbf{z}_{i})]\) is the true conditional variance._Proof.: First of all, note that for a \(L\)-Lipschitz loss function \(\ell:\mathcal{H}\times\mathcal{Z}\to\mathbb{R}\), we have
\[\left|\left(\mathop{\mathbb{E}}_{i-1}[\ell(h_{i},\mathbf{z}_{i})]-\ell(h_{i}, \mathbf{z}_{i})\right)-\left(\mathop{\mathbb{E}}_{i-1}[\ell(h_{i}^{\prime}, \mathbf{z}_{i})]-\ell(h_{i}^{\prime},\mathbf{z}_{i})\right)\right|\leq 2Ld(h_{i},h_{i} ^{\prime}). \tag{15}\]
Indeed, we can deduce Equation (15) from Jensen inequality, the triangle inequality, and by definition that we have
\[\left|\left(\mathop{\mathbb{E}}_{i-1}[\ell(h_{i},\mathbf{z}_{i})]- \ell(h_{i},\mathbf{z}_{i})\right)-\left(\mathop{\mathbb{E}}_{i-1}[\ell(h_{i}^ {\prime},\mathbf{z}_{i})]-\ell(h_{i}^{\prime},\mathbf{z}_{i})\right)\right|\\ \leq\mathop{\mathbb{E}}_{i-1}\left[|\ell(h_{i},\mathbf{z}_{i}^{ \prime})-\ell(h_{i}^{\prime},\mathbf{z}_{i}^{\prime})|+|\ell(h_{i},\mathbf{z}_ {i})-\ell(h_{i}^{\prime},\mathbf{z}_{i})|\right]\\ \leq\mathop{\mathbb{E}}_{i-1}2Ld(h_{i},h_{i}^{\prime})=2Ld(h_{i},h_{i}^{\prime}).\]
From the Kantorovich-Rubinstein duality theorem [22, Remark 6.5], we have
\[\sum_{i=1}^{m}\mathop{\mathbb{E}}_{h_{i}\sim\rho_{i}}\left[\mathop{\mathbb{E}} _{i-1}[\ell(h_{i},\mathbf{z}_{i})]-\ell(h_{i},\mathbf{z}_{i})\right]\leq 2L \sum_{i=1}^{m}W_{1}(\rho_{i},\pi_{i,\mathcal{S}})+\sum_{i=1}^{m}\mathop{\mathbb{ E}}_{h\sim\pi_{i,\mathcal{S}}}\left[\mathbb{R}_{\mu}(h_{i})-\ell(h_{i},\mathbf{z}_{i}) \right].\]
Now, we define \(X_{i}(h_{i},\mathbf{z}_{i}):=\mathbb{E}_{i-1}[\ell(h_{i},\mathbf{z}_{i})]-\ell (h_{i},\mathbf{z}_{i})\). We also recall that for any \(i\), we have \(\hat{V}_{i}(h_{i},\mathbf{z}_{i})=(\ell(h_{i},\mathbf{z}_{i})-\mathbb{E}_{i-1 }[\ell(h_{i},\mathbf{z}_{i})])^{2}\) and \(V_{i}(h_{i})=\mathbb{E}_{i-1}[\hat{V}(h_{i},\mathbf{z}_{i})]\). To apply the supermartingales techniques of [1], we define the following function:
\[f_{m}(S,h_{1},...,h_{m}):=\sum_{i=1}^{m}\lambda X_{i}(h_{i},\mathbf{z}_{i})- \frac{\lambda^{2}}{2}\sum_{i=1}^{m}(\hat{V}_{i}(h_{i},\mathbf{z}_{i})+V_{i}(h _{i})).\]
Now, Lemma 3.2 of [1] state that the sequence \((SM_{m})_{m\geq 1}\) defined for any \(m\) as:
\[SM_{m}:=\mathop{\mathbb{E}}_{(h_{1},\cdots,h_{m})\sim\pi_{1,\mathcal{S}} \otimes\cdots\otimes\pi_{m,\mathcal{S}}}\left[\exp\left(f_{m}(\mathcal{S},h_{ 1},...,h_{m})\right)\right],\]
is a supermartingale. We exploit this fact as follows:
\[\sum_{i=1}^{m}\mathop{\mathbb{E}}_{h\sim\rho_{i-1}}\left[\mathop {\mathbb{E}}_{i-1}[\ell(h_{i},\mathbf{z}_{i})]-\ell(h_{i},\mathbf{z}_{i}) \right] =\mathop{\mathbb{E}}_{(h_{1},\cdots,h_{m})\sim\pi_{1,\mathcal{S}} \otimes\cdots\otimes\pi_{m,\mathcal{S}}}\left[\sum_{i=1}^{m}X_{i}(h_{i}, \mathbf{z}_{i})\right]\] \[=\frac{1}{\lambda}\mathop{\mathbb{E}}_{(h_{1},\cdots,h_{m})\sim \pi_{1,\mathcal{S}}\otimes\cdots\otimes\pi_{m,\mathcal{S}}}\left[f_{m}( \mathcal{S},h_{1},\cdots,h_{m})\right]\] \[+\frac{\lambda}{2}\sum_{i=1}^{m}\mathop{\mathbb{E}}_{h_{i}\sim\pi _{i,\mathcal{S}}}\left[\hat{V}_{i}(h_{i},\mathbf{z}_{i})+V_{i}(h_{i})\right]\] \[\leq\frac{\ln\left(SM_{m}\right)}{\lambda}+\frac{\lambda}{2}\sum _{i=1}^{m}\mathop{\mathbb{E}}_{h_{i}\sim\pi_{i,\mathcal{S}}}\left[\hat{V}_{i}(h _{i},\mathbf{z}_{i})+V_{i}(h_{i})\right]\]
The last line holds thanks to Jensen's inequality. Now using Ville's inequality ensures us that:
\[\mathop{\mathbb{P}}_{\mathcal{S}}\left(\forall m,SM_{m}\leq\frac{1}{\delta} \right)\geq\frac{1}{\delta}.\]
Thus, with probability \(1-\delta\), for any \(m\) we have \(\ln\left(SM_{m}\right)\leq\ln\left(\frac{1}{\delta}\right)\). This concludes the proof.
### Proof of Theorem 4
**Theorem 4**.: _We assume our loss \(\ell\) to be non-negative and \(L\)-Lipschitz. We also assume that, for any \(i,\mathcal{S}\), \(\mathbb{E}_{h\sim\pi_{i}(\cdot,\mathcal{S})}\left[\mathbb{E}_{i-1}[\ell(h, \mathbf{z}_{i})^{2}]\right]\leq 1\) (bounded conditional order 2 moments for priors). Then, for any \(\delta\in(0,1]\), with probability at least \(1-\delta\) over the sample \(\mathcal{S}\), any online predictive sequence (used as priors) \((\pi_{i})_{i\geq 1}\), we have with probability at least \(1-\delta\) over the sample \(\mathcal{S}\sim\mu\), the following, holding for the data-dependent measures \(\pi_{i,\mathcal{S}}:=\pi_{i}(S,.)\) and any posterior sequence \((\rho_{i})_{i\geq 1}\):_
\[\frac{1}{m}\sum_{i=1}^{m}\mathop{\mathbb{E}}_{h_{i}\sim\rho_{i}}\left[\mathbb{E} [\ell(h_{i},\mathbf{z}_{i})\mid\mathcal{F}_{i-1}]-\ell(h_{i},\mathbf{z}_{i}) \right]\leq\frac{2L}{m}\sum_{i=1}^{m}\mathrm{W}(\rho_{i},\pi_{i,\mathcal{S}})+ \sqrt{\frac{2\ln\left(\frac{1}{\delta}\right)}{m}}.\]Proof.: The proof starts similarly to the one of Theorem 3. Indeed, note that for a \(L\)-Lipschitz loss function \(\ell:\mathcal{H}\times\mathcal{Z}\to\mathbb{R}\), we have
\[\left|\left(\underset{i-1}{\mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})]-\ell(h_{i}, \mathbf{z}_{i})\right)-\left(\underset{i-1}{\mathbb{E}}[\ell(h_{i}^{\prime}, \mathbf{z}_{i})]-\ell(h_{i}^{\prime},\mathbf{z}_{i})\right)\right|\leq 2Ld(h_{i},h_{i} ^{\prime}). \tag{16}\]
Indeed, we can deduce Equation (16) from Jensen inequality, the triangle inequality, and by definition that we have
\[\left|\left(\underset{i-1}{\mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})]-\ell(h_{i},\mathbf{z}_{i})\right)-\left(\underset{i-1}{\mathbb{E}}[\ell(h_{i}^{\prime}, \mathbf{z}_{i})]-\ell(h_{i}^{\prime},\mathbf{z}_{i})\right)\right|\\ \leq\underset{i-1}{\mathbb{E}}\left[|\ell(h_{i},\mathbf{z}_{i}^{ \prime})-\ell(h_{i}^{\prime},\mathbf{z}_{i}^{\prime})|+|\ell(h_{i},\mathbf{z}_ {i})-\ell(h_{i}^{\prime},\mathbf{z}_{i})|\right]\\ \leq\underset{i-1}{\mathbb{E}}2Ld(h_{i},h_{i}^{\prime})=2Ld(h_{ i},h_{i}^{\prime}).\]
From the Kantorovich-Rubinstein duality theorem [11, Remark 6.5], we have
\[\sum_{i=1}^{m}\underset{h_{i}\sim\rho_{i}}{\mathbb{E}}\left[\underset{i-1}{ \mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})]-\ell(h_{i},\mathbf{z}_{i})\right]\leq 2 L\sum_{i=1}^{m}W_{1}(\rho_{i},\pi_{i,\mathcal{S}})+\sum_{i=1}^{m}\underset{h\sim \pi_{i,\mathcal{S}}}{\mathbb{E}}\left[\mathbb{R}_{\mu}(h_{i})-\ell(h_{i}, \mathbf{z}_{i})\right].\]
Now, we define \(X_{i}(h_{i},\mathbf{z}_{i}):=\mathbb{E}_{i-1}[\ell(h_{i},\mathbf{z}_{i})]-\ell (h_{i},\mathbf{z}_{i})\). To apply the supermartingales techniques of [10], we define the following function:
\[f_{m}(S,h_{1},...,h_{m}):=\sum_{i=1}^{m}\lambda X_{i}(h_{i},\mathbf{z}_{i})- \frac{\lambda^{2}}{2}\sum_{i=1}^{m}\underset{i-1}{\mathbb{E}}[\ell(h_{i}, \mathbf{z}_{i})^{2}].\]
Now, because our loss is nonnegative, [10, Lemma A.2 and Lemma B.1] state that the sequence \((SM_{m})_{m\geq 1}\) defined for any \(m\) as:
\[SM_{m}:=\underset{(h_{1},\cdots,h_{m})\sim\pi_{1,\mathcal{S}}\otimes\cdots \otimes\pi_{m,\mathcal{S}}}{\mathbb{E}}\left[\exp\left(f_{m}(\mathcal{S},h_{1},...,h_{m})\right)\right],\]
is a supermartingale. We exploit this fact as follows:
\[\sum_{i=1}^{m}\underset{h\sim\rho_{i-1}}{\mathbb{E}}\left[\underset {i-1}{\mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})]-\ell(h_{i},\mathbf{z}_{i})\right] =\underset{(h_{1},\cdots,h_{m})\sim\pi_{1,\mathcal{S}}\otimes \cdots\otimes\pi_{m,\mathcal{S}}}{\mathbb{E}}\left[\sum_{i=1}^{m}X_{i}(h_{i}, \mathbf{z}_{i})\right]\] \[=\frac{1}{\lambda}\left(\underset{h_{1},\cdots,h_{m})\sim\pi_{1,\mathcal{S}}\otimes\cdots\otimes\pi_{m,\mathcal{S}}}{\mathbb{E}}\left[f_{m}( \mathcal{S},h_{1},\cdots,h_{m})\right]\right.\] \[+\frac{1}{2}\sum_{i=1}^{m}\underset{h_{i}\sim\pi_{i,\mathcal{S}}} {\mathbb{E}}\left[\underset{i-1}{\mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})^{2}]\right]\] \[\leq\frac{\ln(SM_{m})}{\lambda}+\frac{\lambda}{2}\sum_{i=1}^{m} \underset{h_{i}\sim\pi_{i,\mathcal{S}}}{\mathbb{E}}\left[\underset{i-1}{ \mathbb{E}}[\ell(h_{i},\mathbf{z}_{i})^{2}]\right]\]
The last line holds thanks to Jensen's inequality. Now using Ville's inequality ensures us that:
\[\underset{\mathcal{S}}{\mathbb{P}}\left(\forall m,SM_{m}\leq\frac{1}{\delta} \right)\geq\frac{1}{\delta}\]
Thus, with probability \(1-\delta\), for any \(m\) we have \(\ln(SM_{m})\leq\ln\frac{1}{\delta}\). We conclude the proof by exploiting the boundedness assumption on conditional order 2 moments and optimising the bound in \(\lambda\).
## Appendix C Supplementary insights on experiments
In this section, Appendix C.1 presents the learning algorithm for the _i.i.d._ setting. We also introduce the online algorithm in Appendix C.2. We prove the Lipschitz constant of the loss for the linear models in Appendix C.3. Finally, we provide more experiments in Appendix C.5.
### Batch algorithm for the _i.i.d._ setting
The pseudocode of our batch algorithm is presented in Algorithm 1.
```
1:procedurePriors Learning
2:\(h_{1},\ldots,h_{K}\leftarrow\) initialize the hypotheses
3:for\(t\gets 1,\ldots,T\)do
4:for each mini-batch \(\mathcal{U}\subseteq\mathcal{S}\)do
5:for\(i\gets 1,\ldots,K\)do
6:\(\mathcal{U}_{i}\leftarrow\mathcal{U}\setminus\mathcal{S}_{i}\)
7:\(h_{i}\leftarrow\) perform a gradient descent step with \(\nabla\mathsf{R}_{\mathcal{U}_{i}}(h_{i})\)
8:return hypotheses \(h_{1},\ldots,h_{K}\)
9:
10:procedurePosterior Learning
11:\(h\leftarrow\) initialize the hypothesis
12:for\(t\gets 1,\ldots,T^{\prime}\)do
13:for each mini-batch \(\mathcal{U}\subseteq\mathcal{S}\)do
14:\(h\leftarrow\) perform a gradient descent step with \(\nabla[\mathsf{R}_{\mathcal{U}}(h)+\varepsilon\sum_{i=1}^{K}\frac{|\mathcal{S} _{i}|}{m}d(h,h_{i})]\)
15:return hypothesis \(h\)
```
**Algorithm 1** (Mini-)Batch Learning Algorithm with Wasserstein distances
Priors Learning minimises the empirical risk through mini-batches \(\mathcal{U}\subseteq\mathcal{S}\) for \(T\) epochs. More precisely, for each epoch, we _(a)_ sample a mini-batch \(\mathcal{U}\) (line 4) by excluding the set \(\mathcal{S}_{i}\) from \(\mathcal{U}\) for each \(h_{i}\in\mathcal{H}\) (line 5-6), then _(b)_ the hypotheses \(h_{1},\ldots,h_{K}\in\mathcal{H}\) are updated (line 7). In Posterior Learning, we perform a gradient descent step (line 14) on the objective function associated with Equation (5) for \(T^{\prime}\) epochs in a mini-batch fashion.
### Learning algorithm for the online setting
Algorithm 2 presents the pseudocode of our online algorithm.
```
1:Initialize the hypothesis \(h_{0}\in\mathcal{H}\)
2:for\(i\gets 1,\ldots,m\)do
3:for\(t\gets 1,\ldots,T\)do
4:\(h_{i}\leftarrow\) perform a gradient step with \(\nabla[\ell(h_{i},\mathbf{z}_{i})+\hat{B}(d(h_{i},h_{i-1}){-}1)]\) (Eq. (7) with \(\hat{B}\))
5:return hypotheses \(h_{1},\ldots,h_{m}\)
```
**Algorithm 2** Online Learning Algorithm with Wasserstein distances
For each time step \(i\), we perform \(T\) gradient descent steps on the objective associated with Equation (6) (line 4). Note that we can retrieve OGD from Algorithm 2 by _(a)_ setting \(T=1\) and _(b)_ removing the regularisation term \(\hat{B}(d(h_{i},h_{i-1}){-}1)\).
### Lipschitzness for the linear model
Recall that we use, in our experiments, the multi-margin loss function from the Pytorch module defined for any linear model with weights \(W\in\mathbb{R}^{|\mathcal{Y}|\times d}\) and biases \(b\in\mathbb{R}^{|\mathcal{Y}|}\), any data point \(\mathbf{z}\in\mathcal{X}\times\mathcal{Y}\)
\[\ell(W,b,\mathbf{z})=\frac{1}{|\mathcal{Y}|-1}\sum_{y^{\prime}\neq y}\max \left(0,f(W,b,\mathbf{z},y^{\prime})\right),\]
where \(f(W,b,\mathbf{z},y^{\prime})=1+\langle W[y^{\prime}]-W[y],\mathbf{x}\rangle+b[ y^{\prime}]-b[y]\), and \(W[y]\in\mathbb{R}^{d}\) and \(b[y]\in\mathbb{R}\) are respectively the vector and the scalar for the \(y\)-th output.
To apply our theorems, we must ensure that our loss function is Lipschitz with respect to the linear model, hence the following lemma.
**Lemma 8**.: _For any \(\mathbf{z}=(\mathbf{x},y)\in\mathcal{X}\times\mathcal{Y}\) with the norm of \(\mathbf{x}\) bounded by \(1\), the function \(W,b\mapsto\ell(W,b,\mathbf{z})\) is \(2\)-Lipschitz._
Proof.: Let \((W,b),(W^{\prime},b^{\prime})\) both in \(\mathbb{R}^{|\mathcal{Y}|\times d}\times\mathbb{R}^{|\mathcal{Y}|}\), we have
\[|\ell(W,b,\mathbf{z})-\ell(W^{\prime},b^{\prime},\mathbf{z})|\leq\frac{1}{| \mathcal{Y}|-1}\sum_{y^{\prime}\neq y}|\max\left(0,f(W,b,\mathbf{z},y^{\prime} )\right)-\max\left(0,f(W^{\prime},b^{\prime},\mathbf{z},y^{\prime})\right)|.\]
Note that because \(\alpha\mapsto\max(0,\alpha)\) is 1-Lipschitz, we have:
\[|\ell(W,b,\mathbf{z})-\ell(W^{\prime},b^{\prime},\mathbf{z})|\leq\frac{1}{| \mathcal{Y}|-1}\sum_{y^{\prime}\neq y}|f(W,b,\mathbf{z},y^{\prime})-f(W^{ \prime},b^{\prime},\mathbf{z},y^{\prime})|.\]
Finally, notice that:
\[\frac{1}{|\mathcal{Y}|-1}\sum_{y^{\prime}\neq y}|f(W,b,\mathbf{z},y^{\prime})-f(W^{\prime},b^{\prime},\mathbf{z},y^{\prime})| \leq\frac{1}{|\mathcal{Y}|-1}\sum_{y^{\prime}\neq y}|\langle(W-W ^{\prime})[y^{\prime}]-(W-W^{\prime})[y],\mathbf{x}\rangle|\] \[+\frac{1}{|\mathcal{Y}|-1}\sum_{y^{\prime}\neq y}|(b-b^{\prime})[ y^{\prime}]-(b-b^{\prime})[y]|\] \[\leq\frac{1}{|\mathcal{Y}|-1}\sum_{y^{\prime}\neq y}\|(W-W^{ \prime})[y^{\prime}]-(W-W^{\prime})[y]\|\,\|\mathbf{x}\|\] \[+\frac{1}{|\mathcal{Y}|-1}\sum_{y^{\prime}\neq y}|(b-b^{\prime})[ y^{\prime}]-(b-b^{\prime})[y]|.\]
Because we consider the Euclidean norm, we have for any \(y^{\prime}\in\mathcal{Y}\):
\[\|(W-W^{\prime})[y^{\prime}]-(W-W^{\prime})[y]\| =\sqrt{\|(W-W^{\prime})[y^{\prime}]-(W-W^{\prime})[y]\|^{2}}\] \[\leq\sqrt{2\left(\|(W-W^{\prime})[y^{\prime}]\|^{2}+\|(W-W^{ \prime})[y]\|^{2}\right)}\] \[\leq\sqrt{2}\|W-W^{\prime}\|.\]
The second line holding because for any scalars \(a,b\), we have \((a-b)^{2}\leq 2(a^{2}+b^{2})\) and the last line holding because \(\|W-W^{\prime}\|^{2}=\sum_{y\in\mathcal{Y}}\|(W-W^{\prime})[y]\|^{2}\). A similar argument gives
\[\frac{1}{|\mathcal{Y}|-1}\sum_{y^{\prime}\neq y}|(b-b^{\prime})[y^{\prime}]-( b-b^{\prime})[y]|\leq\sqrt{2}\|b-b^{\prime}\|.\]
Then, using that \(\|x\|\leq 1\) and summing on all \(y^{\prime}\) gives:
\[|\ell(W,b,\mathbf{z})-\ell(W^{\prime},b^{\prime},\mathbf{z})|\leq\sqrt{2} \left(\|W-W^{\prime}\|+\|b-b^{\prime}\|\right).\]
Finally, notice that \((\|W-W^{\prime}\|+\|b-b^{\prime}\|)^{2}\leq 2(\|W-W^{\prime}\|^{2}+\|b-b^{ \prime}\|^{2})=2\|(W,b)-(W^{\prime},b^{\prime})\|^{2}\). Thus \(\|W-W^{\prime}\|+\|b-b^{\prime}\|\leq\sqrt{2}\|(W,b)-(W^{\prime},b^{\prime})\|\). This concludes the proof.
### Lipschitzness for neural networks
Recall that we use, in our experiments, the multi-margin loss function from the Pytorch module defined we consider the loss \(\ell(h,(\mathbf{x},y))=\frac{1}{|\mathcal{Y}|}\sum_{y^{\prime}\neq y}\max(0,1- \eta(h[y]-h[y^{\prime}]))\), which is \(\eta\)-Lipschitz _w.r.t._ the outputs \(h[1],\ldots,h[|\mathcal{Y}|]\). For neural networks, \(h\) is the output of the neural network with input \(\mathbf{x}\). Note that this loss is \(\eta\)-lipschitz with respect to the outputs. To apply our theorems, we must ensure that our loss function is Lipschitz with respect to the weights of the neural networks, hence the following lemma with associated background.
We define a FCN recursively as follows: for a vector \(\mathbf{W}_{1}=\operatorname{vec}(\{W_{1},b\})\), (_i.e._, the vectorisation of a weight matrix \(W_{1}\) and a bias \(b\)) and an input datum \(\mathbf{x}\), \(\operatorname{FCN}_{1}(\mathbf{W}_{1},\mathbf{x})=\sigma_{1}\left(W_{1}\mathbf{ x}+b_{1}\right)\), where \(\sigma_{1}\) is the activation function. Also, for any \(i\geq 2\) we define for a vector \(\mathbf{W}_{i}=(W_{i},b_{i},\mathbf{W}_{i-1})\) (defined recursively as well), \(\operatorname{FCN}_{i}(\mathbf{W}_{i},\mathbf{x})=\sigma_{i}\left(W_{i} \text{FCN}_{i-1}(\mathbf{W}_{i-1},\mathbf{x})+b_{i}\right)\). Then, setting \(\mathbf{z}=(\mathbf{x},y)\) a datum and \(h_{i}(\mathbf{x}):=\operatorname{FCN}_{i}(\mathbf{W}_{i},\mathbf{x})\) we can rewrite our loss as a function of \((\mathbf{W}_{i},\mathbf{z})\).
**Lemma 9**.: _Assume that all the weight matrices of \(\mathbf{W}_{i}\) are bounded and that the activation functions are Lipschitz continuous with constant bounded by \(K_{\sigma}\). Then for any datum \(\mathbf{z}=(\mathbf{x},y)\), any \(i\), \(\mathbf{W}_{i}\to\ell(\mathbf{W}_{i},\mathbf{z})\) is Lipschitz continuous._
Proof.: We consider the Frobenius norm on matrices as \(\mathbf{W}_{2}\) is a vector as we consider the L2-norm on the vector. We prove the result for \(i=2\), assuming it is true for \(i=1\). We then explain how this proof generalises the case \(i=1\) and works recursively. Let \(\mathbf{z},\mathbf{W}_{2},\mathbf{W}_{2}^{\prime}\), for clarity we write \(\operatorname{FCN}_{2}(\mathbf{x}):=\operatorname{FCN}(\mathbf{W}_{2},\mathbf{ x})\) and \(\operatorname{FCN}_{2}^{\prime}(\mathbf{x}):=\operatorname{FCN}(\mathbf{W}_{2}^{ \prime},\mathbf{x})\). As \(\ell\) is Lipschitz on the outputs \(\operatorname{FCN}_{2}(\mathbf{x}),\operatorname{FCN}_{2}^{\prime}(\mathbf{x})\). We have
\[|\ell(\mathbf{W}_{2},\mathbf{z})-\ell(\mathbf{W}_{2}^{\prime}, \mathbf{z})|\leq\eta\left\|\operatorname{FCN}_{2}(\mathbf{x})-\operatorname{ FCN}_{2}^{\prime}(\mathbf{x})\right\|\\ \leq\eta\left\|\sigma_{2}\left(W_{2}\text{FCN}_{1}(\mathbf{x})+b_ {2}\right)-\sigma_{2}\left(W_{2}^{\prime}\text{FCN}_{1}^{\prime}(\mathbf{x})+b _{2}^{\prime}\right)\right\|\\ \leq\eta K_{\sigma}\|W_{2}\text{FCN}_{1}(\mathbf{x})+b_{2}-W_{2}^ {\prime}\text{FCN}_{1}^{\prime}(\mathbf{x})-b_{2}^{\prime}\|\\ \leq\eta K_{\sigma}\left(||(W_{2}-W_{2}^{\prime})\text{FCN}_{1}( \mathbf{x})||+||W_{2}^{\prime}(\text{FCN}_{1}(\mathbf{x})-\text{FCN}_{1}^{ \prime}(\mathbf{x}))||+\|b_{2}-b_{2}^{\prime}\|\right).\]
Then, we have \(||(W_{2}-W_{2}^{\prime})\text{FCN}_{1}(\mathbf{x})||\leq||(W_{2}-W_{2}^{\prime })||_{F}||\text{FCN}_{1}(\mathbf{x})||\leq K_{\mathbf{x}}||(W_{2}-W_{2}^{\prime })||_{F}\). The second inequality holding as \(\text{FCN}_{1}(\mathbf{x})\) is a continuous function of the weights. Indeed, as on a compact space, a continuous function reaches its maximum, then its norm is bounded by a certain \(K_{\mathbf{x}}\). Also, as the weights are bounded, any weight matrix has its norm bounded by a certain \(K_{W}\) thus \(\|W_{2}^{\prime}(\text{FCN}_{1}(\mathbf{x})-\text{FCN}_{1}^{\prime}(\mathbf{x} )\|\leq\|W_{2}^{\prime}\|_{F}\|(\text{FCN}_{1}(\mathbf{x})-\text{FCN}_{1}^{ \prime}(\mathbf{x})\|\leq K_{W}\|\text{FCN}_{1}(\mathbf{x})-\text{FCN}_{1}^{ \prime}(\mathbf{x})\|\). Finally, taking \(K_{\text{temp}}=\eta K_{\sigma}\max(K_{\mathbf{x}},K_{W},1)\) gives:
\[|\ell(\mathbf{W}_{2},\mathbf{z})-\ell(\mathbf{W}_{2}^{\prime}, \mathbf{z})|\leq K_{\text{temp}}\left(\|(W_{2}-W_{2}^{\prime})\|_{F}+\|b_{2} -b_{2}^{\prime}\|+\|\text{FCN}_{1}(\mathbf{x})-\text{FCN}_{1}^{\prime}(\mathbf{ x})\|\right).\]
Exploiting the recursive assumption that \(\text{FCN}_{1}\) is Lipschitz with respect to its weights \(\mathbf{W}_{1}\) gives \(\|\text{FCN}_{1}(\mathbf{x})-\text{FCN}_{1}^{\prime}(\mathbf{x})\|\leq K_{1} ||\mathbf{W}_{1}-\mathbf{W}_{1}^{\prime}||\).
If we denote by \((W_{2},b_{2})\) the vector of all concatenated weights, notice that \(\|(W_{2}-W_{2}^{\prime})\|_{F}+\|b_{2}-b_{2}^{\prime}\|=\sqrt{(\|(W_{2}-W_{2}^ {\prime})\|_{F}+\|b_{2}-b_{2}^{\prime}\|)^{2}}\leq\sqrt{2(\|(W_{2}-W_{2}^{ \prime})\|^{2}_{F}+\|b_{2}-b_{2}^{\prime}\|^{2})}=\sqrt{2}\|(W_{2},b_{2})-(W_{2}^ {\prime},b_{2}^{\prime})\|\) (we used that for any real numbers \(a,b,(a+b)^{2}\leq 2(a^{2}+b^{2})\)). We then have:
\[|\ell(\mathbf{W}_{2},\mathbf{z})-\ell(\mathbf{W}_{2}^{\prime}, \mathbf{z})| \leq K_{\text{temp}}\max(\sqrt{2},K_{1})\left(\|(W_{2},b_{2})-(W_{ 2}^{\prime},b_{2}^{\prime})\|+\|\mathbf{W}_{1}-\mathbf{W}_{1}^{\prime}\|\right)\] \[\leq\sqrt{2}K_{\text{temp}}\max(\sqrt{2},K_{1})||\mathbf{W}_{2}- \mathbf{W}_{2}^{\prime}||.\]
The last line holds by reusing the same calculation trick. This concludes the proof for \(i=2\). Then for \(i=1\) the same proof holds by replacing \(W_{2},b_{2},\text{FCN}_{2}\) by \(W_{1},b_{1},\text{FCN}_{1}\) and replacing \(\text{FCN}_{1}(\mathbf{x}),\text{FCN}_{1}^{\prime}(\mathbf{x})\) by \(\mathbf{x}\) (we then do not need to assume a recursive Lipschitz behaviour). Therefore the result holds for \(i=1\).
We then properly apply a recursive argument by assuming the result at rank \(i-1\) reusing the same proof at any rank \(i\) by replacing \(W_{2},b_{2},\text{FCN}_{2}\) by \(W_{i},b_{i},\text{FCN}_{i}\) and \(\text{FCN}_{1}(\mathbf{x}),\text{FCN}_{1}^{\prime}(\mathbf{x})\) by \(\text{FCN}_{i-1}(\mathbf{x}),\text{FCN}_{i-1}^{\prime}(\mathbf{x})\). This concludes the proof.
### Experiments with varying number of priors
The experiments of Section 4 rely on data-dependent priors constructed through the procedure Priors Learning. We fixed a number of priors \(K\) equal to \(0.2\sqrt{m}\). This number is an empirical tradeoff between the informativeness of our priors and time-efficient computation. However, there is no theoretical intuition for the value of this parameter (the discussion of Section 3.1 considered \(K=\sqrt{m}\) as a potential tradeoff; see Appendix A). Thus, we gather below the performance of our learning procedures for \(K=\alpha\sqrt{m}\), where \(\alpha\in\{0,0.4,0.6,0.8,1\}\) (the case \(\alpha=0\) being a convention to denote \(K=1\)). The experiments are gathered below, and all remaining hyperparameters (except \(K\)) are identical to those described in Section 4.
**Analysis of our results.** First, when considering neural networks, note that for any dataset except segmentation, letter, the performances of our methods are similar or better when considering data-dependent priors (_i.e._, when \(\alpha>0\)). A similar remark holds for the linear models for all datasets except for satimage, segmentation, and tictactoe. This illustrates the relevance of data-dependent priors. We also remark that there is no value of \(\alpha\), which provides a better performance on all datasets. For instance, considering neural networks, note that \(\alpha=1\) gives the better performance (_i.e._, the smallest \(\mathfrak{R}_{\mu}(h)\)) for Algorithm 1 (\(/\sqrt{m}\)) for the satimage dataset while, for the same algorithm, the better performance on the segmentation dataset is attained for \(\alpha=0.8\). Sometimes, the number \(K\) does not have a clear influence: on mnist with NNs, for Algorithm 1 (\(/\sqrt{m}\)), our performances are similar, whatever the value of \(K\), but still significantly better than ERM. In any case, note that for every dataset, there exists a value of \(K\) and such that our algorithm attains either similar or significantly better performances than ERM on every dataset, which shows the relevance of our learning algorithm to ensure a good generalisation ability. Moreover, there is no obvious choice for the parameters \(\varepsilon\). For instance, in Tables 2 and 3, for the segmentation dataset, the parameters \(K=1,\varepsilon=\frac{1}{m}\) are optimal (in terms of test risks) for both models. As \(K=1\) means that our single prior is data-free, this shows that the intrinsic structure of segmentation makes it less sensitive to both the information contained in the prior (\(K=1\) meaning data-free prior) and the place of the prior itself (\(\varepsilon=1/m\) meaning that we give less weight to the regularisation within our optimisation procedure). On the contrary, in Table Table 1, the yeast dataset performs significantly better when \(\varepsilon=1/\sqrt{m}(K=0.2\sqrt{m})\), exhibiting a positive impact of our data-dependent priors.
### Experiments on classical regularisation methods
We perform additional experiments to see the performance of the weight decay, _i.e._, the L2 regularisation on the weights; the results are presented in Table 4. Moreover, notice that the 'distance to initialisation' \(\|\mathbf{w}-\mathbf{w}_{0}\|\) (where \(\mathbf{w}_{0}\) is the weights initialized randomly) is a particular case of Algorithm 1 when \(K=1\) (_i.e._, we treat the data as a single batch, and the prior is the data-free initialisation); the results are in Tables 2 and 3.
**Analysis of our results.** This experiment on the weight decay demonstrates that on a few datasets (namely sensorless and yeast), when our predictors are neural nets, the weight decay regularisation fails to learn while ours succeeds, as shown in Table 1. In general, this table shows that, on most of the datasets, considering data-dependent priors leads to sharper results. This shows the efficiency of our method compared to the 'distance to initialisation' regularisation.
\begin{table}
\end{table}
Table 2: Performance of Algorithm 1 compared to ERM on different datasets for neural network models. We consider \(\varepsilon=1/m\) and \(\varepsilon=1/\sqrt{m}\), with \(K=\alpha\sqrt{m}\) and \(\alpha\in\{0,0.4,0.6,0.8,1\}\). We plot the empirical risk \(\mathfrak{R}_{\mathcal{S}}(h)\) with its associated test risk \(\mathfrak{R}_{\mu}(h)\).
\begin{table}
\end{table}
Table 3: Performance of Algorithm 1 compared to ERM on different datasets for linear models. We consider \(\varepsilon=\nicefrac{{1}}{{m}}\) and \(\varepsilon=\nicefrac{{1}}{{\sqrt{m}}}\), with \(K=\alpha\sqrt{m}\) and \(\alpha\in\{0,0.4,0.6,0.8,1\}\). We plot the empirical risk \(\mathfrak{R}_{\mathcal{S}}(h)\) with its associated test risk \(\mathfrak{R}_{\mu}(h)\).
\begin{table}
\end{table}
Table 4: Performance of ERM with weight decay (with the L2 regularisation) for linear and neural network models. | ## Review
### Summary
This paper presents novel PAC-Bayesian learning methods utilizing Wasserstein distance instead of KL divergence, contributing to both batch and online learning frameworks. It derives high-probability generalization bounds applicable to heavy-tailed loss functions and proposes new learning algorithms based on these bounds. The results are validated with experiments demonstrating improved generalization performance over traditional methods like ERM and OGD. Notably, the work accommodates data-dependent priors, enhancing the applicability of the approach. The theoretical foundations are sound, and the empirical results are promising, though some aspects require further clarification.
### Strengths
- The introduction and results are well-contextualized within recent literature.
- The theorems presented have weak assumptions, making them broadly applicable.
- The proofs are clearly explained, even for complex results.
- Empirical validation demonstrates improved performance over existing methods.
### Weaknesses
- Variability in model performance based on batch size needs clarification.
- The role of the Lipschitz assumption and its implications should be better explained.
- Some sections, especially regarding algorithm comparisons and implications, require clearer discussion.
- Experimental results should be compared with standard regularization methods for a more comprehensive evaluation.
### Questions
- What justifies the use of the ℓ2 norm on weights if the loss function is only Lipschitz with respect to outputs?
- Can the authors clarify the performance disparities between algorithms based on their empirical results?
- How does the proposed approach compare with standard KL-based PAC-Bayes bounds, especially regarding variance terms?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical contributions are solid, but some assumptions and implications lack full clarity.
### Presentation
**Score:** 3
**Description:** 3 = good; while the writing is mostly clear, some sections could benefit from improved clarity and organization.
### Contribution
**Score:** 4
**Description:** 4 = excellent; the paper introduces significant advancements in PAC-Bayesian learning with practical implications.
### Rating
**Score:** 7
**Description:** 7 = accept, but needs minor improvements; the paper is technically solid with important contributions, but certain aspects require refinement.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates originality in applying Wasserstein distance to PAC-Bayesian learning, presenting sound theoretical foundations and promising empirical results. While some weaknesses exist, particularly in clarity and depth of discussion, the overall significance and contribution of the work warrant acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# A General Framework for Equivariant Neural Networks on Reductive Lie Groups
Ilyes Batatia
Engineering Laboratory,
University of Cambridge
Cambridge, CB2 1PZ UK
Department of Chemistry,
ENS Paris-Saclay, Universite Paris-Saclay
91190 Gif-sur-Yvette, France
[email protected]
&Mario Geiger
Department of Electrical Engineering
and Computer Science,
Massachusetts Institute of Technology
Cambridge, MA, USA
&Jose Munoz
EIA University, FTA Group
Anitoquia, Colombia
&Tess Smidt
Department of Electrical Engineering
and Computer Science,
Massachusetts Institute of Technology
Cambridge, MA, USA
&Lior Silberman
Department of Mathematics
University of British Columbia
Vancouver, BC, Canada V6T 1Z2
&Christoph Ortner
Department of Mathematics
University of British Columbia
Vancouver, BC, Canada V6T 1Z2
###### Abstract
Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or the unitary groups, play essential roles across scientific fields as diverse as high energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging. In this paper, we present a general Equivariant Neural Network architecture capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie Group \(G\). Our approach generalizes the successful ACE and MACE architectures for atomistic point clouds to any data equivariant to a reductive Lie group action. We also introduce the lie-nn software library, which provides all the necessary tools to develop and implement such general \(G\)-equivariant neural networks. It implements routines for the reduction of generic tensor products of representations into irreducible representations, making it easy to apply our architecture to a wide range of problems and groups. The generality and performance of our approach are demonstrated by applying it to the tasks of top quark decay tagging (Lorentz group) and shape recognition (orthogonal group).
## 1 Introduction
Convolutional Neural Networks (CNNs) [14] have become a widely used and powerful tool for computer vision tasks, in large part due to their ability to achieve translation equivariance. This property led to improved generalization and a significant reduction in the number of parameters. Translation equivariance is one of many possible symmetries occurring in machine learning tasks.
A wide range of symmetries described by reductive Lie Groups is present in physics, such as \(O(3)\) in molecular mechanics, \(\mathrm{SO}(1,3)\) in High-Energy Physics, \(\mathrm{SU}(2^{N})\) in quantum mechanics, and \(\mathrm{SU}(3)\) in quantum chromodynamics. Machine learning architectures that respect these symmetries often lead to significantly improved predictions while requiring far less training data. This has been demonstrated in many applications including 2D imaging with \(\mathrm{O}(2)\) symmetry (Cohen and Welling, 2016a; Esteves _et al._, 2017), machine learning force fields with \(\mathrm{O}(3)\) symmetry (Anderson _et al._, 2019; Bartok _et al._, 2013; Batzner _et al._, 2022; Batatia _et al._, 2022a) or jet tagging with \(\mathrm{SO}^{+}(1,3)\) symmetry (Bogatskiy _et al._, 2022; Li _et al._, 2022).
One way to extend CNNs to other groups (Finzi _et al._, 2020; Kondor and Trivedi, 2018) is through harmonic analysis on homogeneous spaces, where the convolution becomes an integral over the group. Other architectures work directly with finite-dimensional representations. We follow the demonstration of Bogatskiy _et al._ (2020a) who constructed a universal approximation of any equivariant map with a feed-forward neural network with vector activations belonging to finite-dimensional representations of a wide class of Lie groups. In this way, one can avoid computational challenges created by infinite-dimensional representations.
Alternatively, our current work can be thought of as a generalization of the Atomic Cluster Expansion (ACE) formalism of Drautz (2019) to general Lie groups. The ACE formalism provides a complete body-ordered basis of \(\mathrm{O}(3)\)-invariant features. By combining the concepts of ACE and \(\mathrm{E}(3)\)-equivariant neural networks, Batatia _et al._ (2022a) proposed the MACE architecture, which achieves state-of-the-art performance on learning tasks in molecular modelling. The present work generalizes the ACE and MACE architectures to arbitrary Lie groups in order to propose a generic architecture for creating representations of geometric point clouds in interaction.
Concretely, our work makes the following contributions:
* We develop the \(G\)-Equivariant Cluster Expansion. This framework generalizes the ACE (Drautz, 2019) and MACE (Batatia _et al._, 2022b) architectures to parameterize properties of point clouds, equivariant under a general reductive Lie group \(G\).
* We prove that our architecture is universal, even for a single layer.
* We introduce lie-nn, a new library providing all the essential tools to apply our framework to a variety of essential Lie Groups in physics and computer visions, including the Lorentz group, \(\mathrm{SU}(N)\), \(\mathrm{SL}_{2}(\mathbb{C})\) and product groups.
* We illustrate the generality and efficiency of our general-purpose approach by demonstrating excellent accuracy on two prototype applications, jet tagging, and 3D point cloud recognition.
## 2 Background
We briefly review a few important group-theoretic concepts: A real (complex) **Lie group** is a group that is also a finite-dimensional smooth (complex) manifold in which the product and inversion of the group are also smooth (holomorphic) maps. Among the most important Lie groups are Matrix Lie groups, which are closed subgroups of \(\mathrm{GL}(n,\mathbb{C})\) the group of invertible \(n\times n\) matrices with complex entries. This includes well-known groups such as \(\mathrm{Sp}(2n,\mathbb{R})\) consisting of matrices of determinant one, that is relevant in Hamiltonian dynamics A finite-dimensional **representation** of the Lie group
Figure 1: Examples of natural science problems and associated reductive Lie groups. For high energy physics, the Lorentz group \(\mathrm{SO}(1,3)\); for chemistry, the Euclidean group \(\mathrm{E}(3)\); for quantum-chromodynamics, the \(\mathrm{SU}(3)\) group.
\(G\) is a finite-dimensional vector space \(V\) endowed with a smooth homomorphism \(\rho\colon G\to\operatorname{GL}(V)\). Features in the equivariant neural networks live in these vector spaces. An **irreducible** representation \(V\) is a representation that has no subspaces which are invariant under the action of the group (other than \(\{0\}\) and \(V\) itself). This means that \(V\) can not be decomposed non-trivially as the direct sum of representations. A **reductive group** over a field \(F\) is a (Zariski-) closed subgroup of the group of matrices \(\operatorname{GL}(n,F)\) such that every finite-dimensional representation of \(G\) on an \(F\)-vectorspace can be decomposed as a sum of irreducible representations.
## 3 Related Work
Lie group convolutionsConvolutional neural networks (CNNs), which are translation equivariant, have also been generalized to other symmetries. For example, G-convolutions (Cohen and Welling, 2016) generalized CNNs to discrete groups. Steerable CNNs (Cohen and Welling, 2016) generalized CNNs to \(O(2)\) equivariance and Spherical CNNs (Cohen _et al._, 2018)\(O(3)\) equivariance. A general theory of convolution on any compact group and symmetric space was given by Kondor and Trivedi (2018). This work was further extended to equivariant convolutions on Riemannian manifolds by Weiler _et al._ (2021).
AccThe Atomic Cluster Expansion (ACE) (Drautz, 2019) introduced a systematic framework for constructing complete \(O(3)\)-invariant high body order basis sets with constant cost per basis function, independent of body order (Dusson _et al._, 2022).
e3nn + Equivariant MLPsThe e3nn library (Geiger and Smidt, 2022) provides a complete solution to build \(E(3)-\)equivariant neural networks based on irreducible representations. The Equivariant MLPs (Finzi _et al._, 2021) include more groups, such as \(SO(1,3)\), \(Z_{n}\), but are restricted to reducible representations making them much less computationally efficient than irreducible representations.
Equivariant MPNNs and MACEEquivariant MPNNs (Kondor _et al._, 2018; Anderson _et al._, 2019; Bogatskiy _et al._, 2020; Satorras _et al._, 2021; Brandstetter _et al._, 2022; Batzner _et al._, 2022) have emerged as a powerful architecture to learn on geometric point clouds. They construct permutation invariants and group equivariant representations of point clouds. Successful applications include simulations in chemistry, particle physics, and 3D vision. MACE (Batatia _et al._, 2022) generalized the \(O(3)\)-Equivariant MPNNs to build messages of arbitrary body order, outperforming other approaches on molecular tasks. (Batatia _et al._, 2022) showed that the MACE design space is large enough to include most of the previously published equivariant architectures.
## 4 The \(G\)-Equivariant Cluster Expansion
We are concerned with the representation of properties of point clouds. Point clouds are described as multi-sets (unordered tuples) \(X=[x_{i}]_{i}\) where each particle \(x_{i}\) belongs to a configuration domain \(\Omega\). We denote the set of all such multi-sets by \(\operatorname{msets}(\Omega)\). For example, in molecular modeling, \(x_{i}\) might describe the position and species of an atom and therefore \(x_{i}=(\mathbf{r}_{i},Z_{i})\in\mathbb{R}^{3}\times\mathbb{Z}\), while in high energy physics, one commonly uses the four-momentum \(x_{i}=(E_{i},\mathbf{p}_{i})\in\mathbb{R}^{4}\), but one could also include additional features such as charge, spin, and so forth. A property of the point cloud is a map
\[\Phi\colon\operatorname{msets}(\Omega)\to Z \tag{1}\]
i.e., \(X\mapsto\Phi(X)\in Z\), usually a scalar or tensor. The range space \(Z\) is application dependent and left abstract throughout this paper. Expressing the input as a multi-set implicitly entails two important facts: (1) it can have varying lengths; (2) it is invariant under the permutations of the particles. The methods developed in this article are also applicable to fixed-length multi-sets, in which case \(\Phi\) is simply a permutation-invariant function defined on some \(\Omega^{N}\). Mappings that are not permutation-invariant are special cases with several simplifications.
In many applications, especially in the natural sciences, particle properties satisfy additional symmetries. When a group \(G\) acts on \(\Omega\) as well as on \(Z\) we say that \(\Phi\) is \(G\)-**equivariant** if
\[\Phi\circ g=\rho_{Z}(g)\Phi,\qquad g\in G \tag{2}\]
where \(\rho_{Z}(g)\) is the action of the group element \(g\) on the range space \(Z\). In order to effectively incorporate exact group symmetry into properties \(\Phi\), we consider model architectures of the form
\[\Phi\colon\operatorname{msets}(\Omega)\underset{\text{embedding}}{ \longrightarrow}V\underset{\text{parameterization}}{\longrightarrow}V \underset{\text{readout}}{\longrightarrow}Z, \tag{3}\]where the space \(V\) into which we "embed" the parameterization is a possibly infinite-dimensional vector space in which a convenient representation of the group is available. For simplicity we will sometimes assume that \(Z=V\).
The Atomic Cluster Expansion (ACE) framework (Drautz, 2019; Dusson _et al._, 2022; Drautz, 2020) produces a complete linear basis for the space of all "smooth" \(G\)-equivariant properties \(\Phi\) for the specific case when \(G=\operatorname{O}(3)\) and \(x_{i}\) are vectorial interatomic distances. Aspects of the ACE framework were incorporated into \(\operatorname{E}(3)\)-equivariant message passing architectures, with significant improvements in accuracy (Batatia _et al._, 2022a). In the following paragraphs we demonstrate that these ideas readily generalize to arbitrary reductive Lie groups.
### Efficient many-body expansion
The first step is to expand \(\Phi\) in terms of body orders, and truncate the expansion at a finite order \(N\):
\[\Phi^{(N)}(X)=\varphi_{0}+\sum_{i}\varphi_{1}(x_{i})+\sum_{i_{1},i_{2}}\varphi _{2}(x_{i_{1}},x_{i_{2}})+\cdots+\sum_{i_{1},\ldots,i_{N}}\varphi_{N}(x_{i_{1} },\ldots,x_{i_{N}}), \tag{4}\]
where \(\varphi_{n}\) defines the \(n\)-body interaction. Formally, the expansion becomes systematic in the limit as \(N\to\infty\). The second step is the expansion of the \(n\)-particle functions \(\varphi_{n}\) in terms of a symmetrized tensor product basis. To define this we first need to specify the embedding of particles \(x\): A countable family \((\phi_{k})_{k}\) is a 1-particle basis if they are linearly independent on \(\Omega\) and any smooth 1-particle function \(\varphi_{1}\) (not necessarily equivariant) can be expanded in terms of \((\phi_{k})_{k}\), i.e,
\[\varphi_{1}(x)=\sum_{k}w_{k}\phi_{k}(x). \tag{5}\]
For the sake of concreteness, we assume that \(\phi_{k}:\Omega\to\mathbb{C}\), but the range can in principle be any field.We provide concrete examples of 1-particle bases in Appendix A.2. Let a complex vector space \(V\) be given, into which the particle embedding maps, i.e.,
\[(\phi_{k}(x))_{k}\in V\qquad\forall x\in\Omega.\]
As a consequence of (5) any smooth scalar \(n\)-particle function \(\varphi_{n}\) can be expanded in terms of the corresponding tensor product basis,
\[\varphi_{n}(x_{1},\ldots,x_{n})=\sum_{k_{1},\ldots,k_{n}}w_{k_{1}\ldots k_{n} }\prod_{s=1}^{n}\phi_{k_{s}}(x_{s}). \tag{6}\]
Inserting these expansions into (4) and interchanging summation (see appendix for the details) we arrive at a model for scalar permutation-symmetric properties,
\[A_{k}=\sum_{x\in X}\phi_{k}(x),\qquad\boldsymbol{A_{k}}=\prod_{s=1}^{n}A_{k}, \qquad\Phi^{(N)}=\sum_{\boldsymbol{k}\in\mathcal{K}}w_{\boldsymbol{k}} \boldsymbol{A_{k}}, \tag{7}\]
where \(\mathcal{K}\) is the set of all \(\boldsymbol{k}\) tuples indexing the features \(\boldsymbol{A_{k}}\). Since \(\boldsymbol{A_{k}}\) is invariant under permuting \(\boldsymbol{k}\), only ordered \(\boldsymbol{k}\) tuples are retained. The features \(A_{k}\) are an embedding of \(\operatorname{msets}(\Omega)\) into the space \(V\). The tensorial product features (basis functions) \(\boldsymbol{A_{k}}\) form a complete linear basis of multi-set functions on \(\Omega\) and the weights \(w_{\boldsymbol{k}}\) can be understood as a symmetric tensor. We will extend this linear cluster expansion model \(\Phi^{(N)}\) to a message-passing type neural network model in SS 4.4.
While the standard tensor product embeds \((\otimes_{s=1}^{n}\phi_{k_{s}})_{\boldsymbol{k}}\colon\Omega^{n}\to V^{n}\), the \(n\)-correlations \(\boldsymbol{A_{k}}\) are _symmetric tensors_ and embed \((\boldsymbol{A_{k}})_{\boldsymbol{k}}\colon\operatorname{msets}(\Omega)\to \operatorname{Sym}^{n}V\).
The evaluation of the symmetric tensor features \(\boldsymbol{A_{k}}\) is the computational bottleneck in most scenarios, but efficient recursive evaluation algorithms (Batatia _et al._, 2022a; Kaliuzhnyi and Ortner, 2022) are available. See Appendix A.13.2 for further discussion of model computational costs.
### Symmetrisation
With (7) we obtained a systematic linear model for (smooth) multi-set functions. It remains to incorporate \(G\)-equivariance. We assume that \(G\) is a reductive Lie group with a locally finite representation in \(V\). In other words we choose a representation \(\rho=(\rho_{kk^{\prime}})\colon G\to\operatorname{GL}(V)\) such that
\[\phi_{k}\circ g=\sum_{k^{\prime}}\rho_{kk^{\prime}}(g)\phi_{k^{\prime}}, \tag{8}\]where for each \(k\) the sum over \(k^{\prime}\) is over a finite index-set depending only on \(k\). Most Lie groups one encounters in physical applications belong to this class, the affine groups being notable exceptions. However, those can usually be treated in an _ad hoc_ fashion, which is done in all \(E(3)\)-equivariant architectures we are aware of. In practice, these requirements restrict how we can choose the embedding \((\phi_{k})_{k}\). If the point clouds \(X=[x_{i}]_{i}\) are already given in terms of a representation of the group, then one may simply construct \(V\) to be iterative tensor products of \(\Omega\); see e.g. the MTP (Shapeev, 2016) and PELICAN (Bogatskiy _et al._, 2022) models. To construct an equivariant two-particle basis we need to first construct the set of all intertwining operators from \(V\otimes V\to V\). Concretely, we seek all solutions \(C^{\mathbf{\alpha},K}_{k_{1}k_{2}}\) to the equation
\[\sum_{k^{\prime}_{1}k^{\prime}_{2}}C^{\mathbf{\alpha},K}_{k^{\prime}_{1}k^{\prime}_ {2}}\rho_{k^{\prime}_{1}k_{1}}(g)\rho_{k^{\prime}_{2}k_{2}}(g)=\sum_{K^{\prime }}\rho_{KK^{\prime}}(g)C^{\mathbf{\alpha},K^{\prime}}_{k_{1}k_{2}}; \tag{9}\]
or, written in operator notation, \(C^{\mathbf{\alpha}}\rho\otimes\rho=\rho C^{\mathbf{\alpha}}\). We will call the \(C^{\mathbf{\alpha},K}_{\mathbf{k}}\)_generalized Clebsch-Gordan coefficients_ since in the case \(G=\mathrm{SO}(3)\) acting on the spherical harmonics embedding \(\phi_{lm}=Y_{l}^{m}\) those coefficients are exactly the classical Clebsch-Gordan coefficients. The index \(\mathbf{\alpha}\) enumerates a basis of the space of all solutions to this equation. For the most common groups, one normally identifies a canonical basis \(C^{\mathbf{\alpha}}\) and assigns a natural meaning to this index (cf. SS A.5). Our abstract notation is chosen because of its generality and convenience for designing computational schemes. The generalization of the Clebsch-Gordan equation (9) to \(n\) products of representations acting on the symmetric tensor space \(\mathrm{Sym}^{n}(V)\) becomes (cf. SS A.9)
\[\begin{split}&\sum_{\mathbf{k}^{\prime}}C^{\mathbf{\alpha},K}_{\mathbf{k}^{ \prime}}\overline{\mathbf{\rho}}_{\mathbf{k}^{\prime}\mathbf{k}}=\sum_{K^{\prime}}\rho_{KK^ {\prime}}C^{\mathbf{\alpha},K^{\prime}}_{\mathbf{k}}\qquad\forall K,\quad\mathbf{k}=(k_{1},\dots,k_{N}),\quad g\in G,\\ &\text{where}\qquad\overline{\mathbf{\rho}}_{\mathbf{k}^{\prime}\mathbf{k}}= \sum_{\begin{subarray}{c}\mathbf{k}^{\prime\prime}=\mathbf{k}^{\prime}\\ \mathbf{\pi}\in S_{n}\end{subarray}}\mathbf{\rho}_{\mathbf{k}^{\prime\prime}\mathbf{k}}\qquad \text{and}\qquad\mathbf{\rho}_{\mathbf{k}^{\prime}\mathbf{k}}=\prod_{t=1}^{n}\rho_{k^{ \prime}_{t}k_{t}}.\end{split} \tag{10}\]
Due to the symmetry of the \((\mathbf{A}_{\mathbf{k}})_{\mathbf{k}}\) tensors \(C^{\mathbf{\alpha},K}_{\mathbf{k}}\) need only be computed for ordered \(\mathbf{k}\) tuples and the sum \(\sum_{\mathbf{k}^{\prime}}\) also runs only over ordered \(\mathbf{k}\) tuples. Again, the index \(\mathbf{\alpha}\) enumerates a basis of the space of solutions. Equivalently, (10) can be written in compact notation as \(\mathcal{C}^{\mathbf{\alpha}}\overline{\mathbf{\rho}}=\rho\mathcal{C}^{\mathbf{\alpha}}\). These coupling operators for \(N\) products can often (but not always) be constructed recursively from couplings of pairs (9). We can now define the symmetrized basis
\[\mathbf{B}^{K}_{\mathbf{\alpha}}=\sum_{\mathbf{k}^{\prime}}C^{\mathbf{\alpha},K}_{\mathbf{k}^{ \prime}}\mathbf{A}_{\mathbf{k}^{\prime}}. \tag{11}\]
The equivariance of (11) is easily verified by applying a transformation \(g\in G\) to the input (cf SS A.6).
**Universality:** In the limit as the correlation order \(N\to\infty\), the features \((\mathbf{B}^{K}_{\mathbf{\alpha}})_{K,\mathbf{\alpha}}\) form a complete basis of smooth equivariant multi-set functions, in a sense that we make precise in Appendix A.7. Any equivariant property \(\Phi_{V}:\Omega\to V\) can be approximated by a linear model
\[\Phi_{V}^{K}=\sum_{\mathbf{\alpha}}c^{K}_{\mathbf{\alpha}}B^{K}_{\mathbf{\alpha}}, \tag{12}\]
to within arbitrary accuracy by taking the number of terms in the linear combination to infinity.
### Dimension Reduction
The tensor product of the cluster expansion in (7) is taken on all the indices of the one-particle basis. Unless the embedding \((\phi_{k})_{k}\) is very low-dimensional it is often preferable to "sketch" this tensor product. For example, consider the canonical embedding of an atom \(x_{i}=(\mathbf{r}_{i},Z_{i})\),
\[\phi_{k}(x_{i})=\phi_{znlm}(x_{i})=\delta_{zZ_{i}}R_{nl}(r_{i})Y_{l}^{m}(\hat{ \mathbf{r}}_{i}).\]
Only the \((lm)\) channels are involved in the representation of \(\mathrm{O}(3)\) hence there is considerable freedom in "compressing" the \((z,n)\) channels.
Following Darby _et al._ (2022) we construct a sketched \(G\)-equivariant cluster expansion: We endow the one-particle basis with an additional index \(c\), referred to as the sketched channel, replacing the index \(k\) with the index pair \((c,k)\), and renaming the embedding \((\phi_{ck})_{c,k}\). In the case of three-dimensional particles one may, for example, choose \(c=(z,n)\). In general, it is crucial that the representation remains in terms of \(\rho_{k,k^{\prime}}\), that is, (8) becomes \(\phi_{ck}\circ g=\sum_{k^{\prime}}\rho_{kk^{\prime}}(g)\phi_{ck^{\prime}}\). Therefore, manipulating only the \(c\) channel does not change any symmetry properties of the architecture. Generalizing Darby _et al._ (2022), the \(G\)-TRACE (tensor-reduced ACE) basis then becomes
\[\mathbf{B}_{\mathbf{c}\mathbf{\alpha}}^{K}=\sum_{\mathbf{k}^{\prime}}C_{\mathbf{k}^{\prime}}^{\mathbf{ \alpha},K}\tilde{\mathbf{A}}_{\mathbf{c}\mathbf{k}^{\prime}},\qquad\text{where} \tag{13}\]
\[\tilde{\mathbf{A}}_{\mathbf{c}\mathbf{k}}=\prod_{t=1}^{n}\Bigg{(}\sum_{c^{\prime}}w_{cc^{ \prime}}\sum_{x\in X}\phi_{c^{\prime}k_{t}}(x)\Bigg{)}. \tag{14}\]
This construction is best understood as an equivariance-preserving canonical tensor decomposition (Darby _et al._, 2022). There are numerous variations, but for the sake of simplicity, we restrict our presentation to this one case.
**Universality:** Following the proof of Darby _et al._ (2022) one can readily see that the \(G\)-TRACE architecture inherits the universality of the cluster expansion, in the limit of decoupled channels \(\#c\to\infty\). A smooth equivariant property \(\Phi\) may be approximated to within arbitrary accuracy by an expansion \(\Phi^{K}(X)\approx\sum_{c,\mathbf{\alpha}}c_{\mathbf{\alpha}}^{K}\mathbf{B}_{c,\mathbf{\alpha} }^{K}(X)\). Since the embedding \(\tilde{A}_{ck}\) is learnable, this is a _nonlinear model_. We refer to SS A.7 for the details.
### G-Mace, Multi-layer cluster expansion
The \(G\)-equivariant cluster expansion is readily generalized to a multi-layer architecture by re-expanding previous features in a new cluster expansion (Batatia _et al._, 2022b). The multi-set \(X\) is endowed with extra features, \(\mathbf{h}_{i}^{t}=(h_{i,ck}^{t})_{c,K}\), that are updated for \(t\in\{1,...,T\}\) iterations. These features themselves are chosen to be a field of representations such that they have a well-defined transformation under the action of the group. This results in
\[x_{i}^{t} =(x_{i},\mathbf{h}_{i}^{t}) \tag{15}\] \[\phi_{ck}^{t}(x_{i},\mathbf{h}_{i}^{t}) =\sum_{\mathbf{\alpha}}w_{\mathbf{\alpha}}^{t,ck}\sum_{k^{\prime},k^{ \prime\prime}}C_{k^{\prime}k^{\prime\prime}}^{\mathbf{\alpha},k}h_{i,ck^{\prime}} ^{t}\phi_{ck^{\prime\prime}}(x_{i}) \tag{16}\]
The recursive update of the features proceeds as in a standard message-passing framework but with the unique aspect that messages are formed via the \(G\)-TRACE and in particular can contain arbitrary high correlation order..
\[m_{i,cK}^{t}=\sum_{\mathbf{\alpha}}W_{\mathbf{\alpha}}^{t,cK}\mathbf{B}_{\mathbf{c}\mathbf{\alpha }}^{t,K}. \tag{17}\]
The gathered message \(\mathbf{m}_{i}^{t}=(m_{i,cK}^{t})_{c,k}\) is then used to update the particle states,
\[x_{i}^{t+1}=(x_{i},\mathbf{h}_{i}^{t+1}),\qquad\mathbf{h}_{i}^{t+1}=U_{t}\big{(}\mathbf{m} _{i}^{t}\big{)}, \tag{18}\]
where \(U_{t}\) can be an arbitary fixed or learnable transformation (even the identity). Lastly, a readout function maps the state of a particle to a target quantity of interest, which could be _local_ to each particle or _global_ to the mset \(X\),
\[y_{i}=\sum_{t=1}^{T}\mathcal{R}_{t}^{\mathrm{loc}}(x_{i}^{t}),\qquad\text{ respectively,}\qquad y=\sum_{t=1}^{T}\mathcal{R}_{t}^{\mathrm{glob}}(\{x_{i}^{t}\}_{i}). \tag{19}\]
This multi-layer architecture corresponds to a general message-passing neural network with arbitrary body order of the message at each layer. We will refer to this architecture as \(G\)-MACE. The \(G\)-MACE architecture directly inherits universality from the \(G\)-ACE and \(G\)-TRACE architectures:
**Theorem 4.1** (Universality of \(G\)-Mace).: _Assume that the one-particle embedding \((\phi_{k})_{k}\) is a complete basis. Then, the set of \(G\)-MACE models, with a fixed finite number of layers \(T\), is dense in the set of continuous and equivariant properties of point clouds \(X\in\mathrm{msets}(\Omega)\), in the topology of pointwise convergence. It is dense in the uniform topology on compact and size-bounded subsets._
## 5 lie-nn : Generating Irreducible Representations for Reductive Lie Groups
In order to construct the G-cluster expansion for arbitrary Lie groups, one needs to compute the generalized Clebsch-Gordan coefficients (10) for a given tuple of representations (see 11). To facilitate this task, we have implemented an open source software library, lie-nn1. In this section we review the key techniques employed in this library.
### Lie Algebras of Reductive Lie Groups
Formally, the Lie algebra of a Lie group is its tangent space at the origin and carries an additional structure, the Lie bracket. Informally the Lie algebra can be thought of as a linear approximation to the Lie group but, due to the group structure, this linear approximation carries (almost) full information about the group. In particular the representation theory of the Group is almost entirely determined by the Lie algebra, which is a simpler object to work with instead of the fully nonlinear Lie group.
Lie algebraThe Lie groups we study can be realized as closed subgroups \(G\subset\operatorname{GL}_{n}(\mathbb{R})\) of the general linear group. In that case their Lie algebras can be concretely realized as \(\mathfrak{g}=\operatorname{Lie}(G)=\{X\in M_{n}(\mathbb{R})\mid\forall t\in \mathbb{R}:\exp(tX)\in G\}\) where \(\exp(X)=1+X+\frac{1}{2}X^{2}...\) is the standard matrix exponential. It turns out that \(\mathfrak{g}\subset M_{n}(\mathbb{R})\) is a linear subspace closed under the commutator bracket \([X,Y]=XY-YX\).
Structure theoryWe fix a linear basis \(\{X_{i}\}\subset\mathfrak{g}\), called a set of generators for the group. The Lie algebra structure is determined by the _structure constants_\(A_{ijk}\) defined by \([X_{i},X_{j}]=\sum_{k}A_{ijk}X_{k}\), in that if \(X=\sum_{i}a_{i}X_{i}\) and \(Y=\sum_{j}b_{j}X_{j}\) then \([X,Y]=\sum_{k}\left(\sum_{i,j}A_{ijk}a_{i}b_{j}\right)X_{k}\). The classification of reductive groups provides convenient generating sets for their Lie algebras (or their complexifications). One identifies a large commutative subalgebra \(\mathfrak{h}\subset\mathfrak{g}\) (sometimes of \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}\)) with basis \(\{H_{i}\}\) so that most (or all) of the other generators \(E_{\alpha}\) can be chosen so that \([H_{i},E_{\alpha}]=\alpha(H_{i})E_{\alpha}\) for a linear function \(\alpha\) on \(\mathfrak{h}\). These functions are the so-called _roots_ of \(\mathfrak{g}\). Structural information about \(\mathfrak{g}\) is commonly encoded pictorially via the _Dynkin diagram_ of \(\mathfrak{g}\), a finite graph the nodes of which are a certain subset of the roots. There are four infinite families of simple complex Lie algebras \(A_{n}=\mathfrak{su}(n+1),B_{n}=\mathfrak{so}(2n+1),C_{n}=\mathfrak{sp}(2n),D_{ n}=\mathfrak{so}(2n)\) and further five exceptional simple complex Lie algebras (a general reductive Lie algebra is the direct sum of several simple ones and its centre). The Lie algebra only depends on the connected component of \(G\). thus when the group \(G\) is disconnected in addition to the infinitesimal generators \(\{X_{i}\}\) one also needs to fix so-called "discrete generators", a subset \(\mathbf{H}\subset G\) containing a representative from each connected component.
Representation theoryThe representation theory of complex reductive Lie algebras is completely understood. Every finite-dimensional representation is (isomorphic to) the direct sum of irreducible representations ("irreps"), with the latter parametrized by appropriate linear functional on \(\mathfrak{h}\) ("highest weight"). Further given a highest weight \(\lambda\) there is a construction of the associated irrep with an explicit action of the infinitesimal generators chosen above. The **Weyl Dimension Formula** gives the dimension of an irrep in terms of its highest weight.
### Numerical Computations in lie-nn
The most basic class of the lie-nn library encodes a group \(G\) and infinitesimal representation \(d\rho\) of \(\mathfrak{g}\) using the tuple
\[\rho:=(A,n,\{d\rho(X_{i})\}_{i},\{\rho(h)\}_{h\in\mathbf{H}})\, \tag{20}\]
with \(A\) the structure constants of the group, \(n\) the dimension of the representation, and \(d\rho(X_{i})\) and \(\rho(h)\) being \(n\times n\) matrices encoding the action of the infinitesimal and the discrete generators respectively. The action of infinitesimal generators is related to the action of group generators by the exponential, \(\forall X\in\mathfrak{g}\), \(\rho(e^{X})=e^{d\rho(X)}\). For finite groups, we assume that \(d\rho(X)=\mathbf{0}\) as they have only discrete generators.
As the building blocks of the theory irreps are treated specially; the package implements functionality for the following operations for each supported Lie group:
* Constructing the irrep with a given highest weight.
Figure 2: Examples of Dynkin diagrams and their associated group class.
* Determining the dimension of an irrep.
* Decomposing the tensor product of several irreps into irreps up to isomorphism (the **selection rule**, giving the list of irreducible components and their multiplicities).
* Decomposing the tensor product of several irreps into irreps explicitly via a change of basis ("generalized **Clebsch-Gordan** coefficients").
* Computating the symmetrized tensor product of the group (see. 5.3 and A.9 for details).
To construct an irrep explicitly as in (20) one needs to choose a basis in the abstract representation space (including a labeling scheme for the basis) so that we can give matrix representations for the action of generators. For this purpose, we use in lie-nn the Gelfand-Tsetlin (GT) basis (Gelfand and Tsetlin, 1950) and associated labeling of the basis by GT patterns (this formalism was initially introduced for algebras of type \(A_{n}\) but later generalized to all classical groups). Enumerating the GT patterns for a given algebra gives the dimension of a given irrep, the selection rules can be determined combinatorially, and it is also possible to give explicit algorithms to compute Clebsch-Gordan coefficients (the case of \(A_{n}\) is treated by Alex _et al._ (2011)). For some specific groups, simplifications to this procedure are possible and GT patterns are not required.
In some cases, one wants to compute coefficients for reducible representations or for representations where the analytical computation with GT patterns is too complex. In these cases, a numerical algorithm to compute the coefficients is required. Let \(d\rho_{1},d\rho_{2}\) be two Lie algebra representations of interest. The tensor product on the Lie algebra \(d\rho_{1}\otimes d\rho_{2}(X)\) can be computed as,
\[d\rho_{1}\otimes d\rho_{2}\;(X)=d\rho_{1}(X)\otimes 1+1\otimes d\rho_{2}(X) \tag{21}\]
Therefore, given sets of generators of three representations \(d\rho_{1},d\rho_{2},d\rho_{3}\), the Clebsch-Gordan coefficients are the change of basis between \((d\rho_{1}(X)\otimes 1+1\otimes d\rho_{2}(X))\) and \(d\rho_{3}(X)\). One can compute this change of basis numerically via a null space algorithm. For some groups, one can apply an iterative algorithm that generates all irreps starting with a single representation, using the above-mentioned procedure (see A.10).
### Symmetric Powers
Let \(V\) be a vector space and \(\{e_{i}\}\) be a basis of \(V\).The symmetric power of \(V\), \(\operatorname{Sym}^{n}V\), can be regarded as the space of homogeneous polynomials of degree \(n\) in the variables \(e_{i}\). The product basis in Equation 10 spans exactly this space. A basis of \(\operatorname{Sym}^{n}V\) can be constructed as,
\[\{e_{i_{1}}\cdot e_{i_{2}}\cdot...\cdot e_{i_{n}}|i_{1}\leq...\leq i_{n}\} \tag{22}\]
If \(V_{\lambda}\) if an irreducible representation of a reductive Lie group \(G\) with highest weight \(\lambda\), \(\operatorname{Sym}^{n}V_{\lambda}\) admits a decomposition into irreducible representations,
\[\operatorname{Sym}^{n}V_{\lambda}=\bigoplus c_{\lambda,\mu}V_{\mu} \tag{23}\]
The generalized Clebsch Gordan coefficients in (11) represent the change of basis between \(\operatorname{Sym}^{n}V_{\lambda}\) and one of the \(V_{\mu}\). The following steps are taken to obtain these coefficients:
* Construct the symmetric power basis as in (22)
* Compute the coefficients \(c_{\lambda,\mu}\), using Freudenthal's Formula or GT patterns.
* For any \(\mu\) with \(c_{\lambda,\mu}\) non-zero, find a basis of \(V_{\mu}\), and compute the change of basis between the basis of \(\operatorname{Sym}^{n}V_{\lambda}\) and \(V_{\mu}\)
Alternatively, if one simply has the Clebsch Gordan coefficients, the change of basis from \(V_{\lambda}\) to some \(V_{\mu}\), a new algorithm outlined in Appendix A.9 and implemented in lie-nn can construct the change of basis from \(\operatorname{Sym}^{n}V_{\lambda}\) to \(V_{\mu}\).
## 6 Applications
### Lie groups and their applications
In Table 6.1 we give a non-exhaustive overview of Lie groups and their typical application domains, to which our methodology naturally applies.
Benchmarking our method on all of these applications is beyond the scope of the present work, in particular, because most of these fields do not have standardized benchmarks and baselines to compare against. The MACE architecture has proven to be state of the art for a large range of atomistic modeling benchmarks (Batatia _et al._, 2022a). In the next section, we choose two new prototypical applications and their respective groups to further assess the performance of our general approach.
### Particle physics with the \(So(1,3)\)
Jet tagging consists in identifying the process that generated a collimated spray of particles called a _jet_ after a high-energy collision occurs at particle colliders. Each jet can be defined as a multiset of four-momenta \([(E_{i},\mathbf{p}_{i})]_{i=1}^{N}\), where \(E_{i}\in\mathbb{R}^{+}\) and \(\mathbf{p}_{i}\in\mathbb{R}^{3}\).
Current state-of-the-art models incorporate the natural symmetry arising from relativistic objects, e.g, the Lorentz symmetry, as model invariance. To showcase the performance and generality of the \(G\)-MACE framework we use the Top-Tagging dataset (Butter _et al._, 2019), where the task is to differentiate boosted top quarks from the background composed of gluons and light quark jets. In Table 2, we can see that \(G\)-MACE achieves excellent accuracy, being the only arbitrary equivariant model to reach similar accuracy as PELICAN, which is an invariant model. We refer to Appendix A.11.1 for the details of the architecture.
### 3D Shape recognition
3D shape recognition from point clouds is of central importance for computer vision. We use the ModelNet10 dataset (Wu _et al._, 2015) to test our proposed architecture in this setting. As rotated objects need to map to the same class, we use a MACE model with \(O(3)\) symmetry. To create an encoder version of \(G\)-MACE, we augment a PointNet++ implementation (Yan, 2019) with \(G\)-MACE layers. See the appendix A.11.2 for more details on the architecture. We see in Table 3 that MACE outperforms the non-equivariant baseline.
\begin{table}
\begin{tabular}{l r r} \hline \hline Group & Application & Reference \\ \hline \(\mathrm{U}(1)\) & Electromagnetism & (Lagrave _et al._, 2021) \\ \(\mathrm{SU}(3)\) & Quantum Chromodynamics & (Favoni _et al._, 2022) \\ \(\mathrm{SO}(3)\) & 3D point clouds & (Batatia _et al._, 2022a) \\ \(\mathrm{SO}^{+}(1,3)\) & Particle Physics & (Bogatskiy _et al._, 2020b) \\ \(\mathrm{SL}(3,\mathbb{R})\) & Point cloud classification & - \\ \(\mathrm{SU}(2^{N})\) & Entangled QP & - \\ \hline \(\mathrm{Sp}(N)\) & Hamiltonian dynamics & - \\ \(\mathrm{SO}(2N+1)\) & Projective geometry & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Lie groups of interests covered by the present methods and their potential applications to equivariant neural networks. The groups above the horizontal line are already available in lie-nn. The ones below the line fall within our framework and can be added.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Architecture & \#Params & Accuracy & AUC & \(\mathrm{Rej}_{30\%}\) \\ \hline \hline
**PELICAN** & 45k & **0.942** & **0.987** & \(\mathbf{2289\pm 204}\) \\
**partT** & 2.14M & 0.940 & 0.986 & \(1602\pm 81\) \\
**ParticleNet** & 498k & 0.938 & 0.985 & \(1298\pm 46\) \\
**LorentzNet** & 224k & **0.942** & **0.987** & \(2195\pm 173\) \\
**BIP** & 4k & 0.931 & \(0.981\) & \(853\pm 68\) \\
**LGN** & 4.5k & 0.929 & \(0.964\) & \(435\pm 95\) \\
**EFN** & 82k & 0.927 & 0.979 & \(888\pm 17\) \\
**TopoDNN** & 59k & 0.916 & 0.972 & \(295\pm 5\) \\
**LorentzMACE** & 228k & **0.942** & **0.987** & \(1935\pm 85\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between state-of-the-art metrics on the Top-Tagging dataset. Scores were taken from (Bogatskiy _et al._, 2022; Qu _et al._, 2022; Qu and Gouskos, 2020; Munoz _et al._, 2022; Bogatskiy _et al._, 2020a; Komiske _et al._, 2019; Pearkes _et al._, 2017).
## 7 Conclusion
We introduced the \(G\)-Equivariant Cluster Expansion, which generalizes the successful ACE and MACE architectures to symmetries under arbitrary reductive Lie groups. We provide an open-source Python library lie-nn that provides all the essential tools to construct such general Lie-group equivariant neural networks. We demonstrated that the general \(G\)-MACE architecture simultaneously achieves excellent accuracy in Chemistry, Particle Physics, and Computer Vision. Future development will implement additional groups and generalize to new application domains.
IB's work was supported by the ENS Paris Saclay. CO's work was supported by NSERC Discovery Grant IDGR019381 and NFRF Exploration Grant GR022937. This work was also performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3).IB would like to thank Gabor Csanyi for his support.
## References
* LeCun _et al._ (1989)Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, Neural Computation **1**, 541 (1989).
* Cohen and Welling (2016)T. S. Cohen and M. Welling, ICLR 2017 (2016a), 10.48550/ARXIV.1612.08498.
* Esteves _et al._ (2017)C. Esteves, C. Allen-Blanchette, X. Zhou, and K. Daniilidis, "Polar transformer networks," (2017).
* Anderson _et al._ (2019)B. Anderson, T. S. Hy, and R. Kondor, in _Advances in Neural Information Processing Systems_, Vol. 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett (Curran Associates, Inc., 2019).
* Bartok _et al._ (2013)A. P. Bartok, R. Kondor, and G. Csanyi, Physical Review B **87**, 184115 (2013).
* Batzner _et al._ (2022)S. Batzner, A. Musaelian, L. Sun, M. Geiger, J. P. Mailoa, M. Kornbluth, N. Molinari, T. E. Smidt, and B. Kozinsky, Nature Communications **13**, 2453 (2022).
* Batatia _et al._ (2022a)I. Batatia, D. P. Kovacs, G. N. C. Simm, C. Ortner, and G. Csanyi, "Mace: Higher order equivariant message passing neural networks for fast and accurate force fields," (2022a).
* Bogatskiy _et al._ (2022)A. Bogatskiy, T. Hoffman, D. W. Miller, and J. T. Offermann, Machine Learning and the Physical Sciences workshop, NeurIPS 2022 (2022), arXiv:2211.00454 [hep-ph].
* Li _et al._ (2022)C. Li, H. Qu, S. Qian, Q. Meng, S. Gong, J. Zhang, T.-Y. Liu, and Q. Li, (2022), arXiv:2208.07814 [hep-ph].
* Finzi _et al._ (2020)M. Finzi, S. Stanton, P. Izmailov, and A. G. Wilson, in _Proceedings of the 37th International Conference on Machine Learning_, Proceedings of Machine Learning Research, Vol. 119, edited by H. D. III and A. Singh (PMLR, 2020) pp. 3165-3176.
* Kondor and Trivedi (2018)R. Kondor and S. Trivedi, in _Proceedings of the 35th International Conference on Machine Learning_, Proceedings of Machine Learning Research, Vol. 80, edited by J. Dy and A. Krause (PMLR, 2018) pp. 2747-2755.
* Bogatskiy _et al._ (2020a)A. Bogatskiy, B. Anderson, J. T. Offermann, M. Roussi, D. W. Miller, and R. Kondor, "Lorentz Group Equivariant Neural Network for Particle Physics," (2020a), arXiv:2006.04780 [hep-ph].
* Drautz (2019)R. Drautz, Phys. Rev. B **99**, 014104 (2019).
*
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Architecture & **PointMACE** (ours) & **PointNet** & **PointNet ++** & **KCN** & **SO-Net** & **LP-3DCNN** \\ \hline Accuracy & **96.1** & 94.2 & 95.0 & 94.4 & 95.5 & 94.4 \\ Representation & Point Cloud & Point cloud & Point cloud & Point Cloud & Point Cloud & Voxel grid \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy in shape recognition. Scores were taken from (Qi _et al._, 2016), (Qi _et al._, 2017), Shen _et al._ (2018), Li _et al._ (2018), Kumawat and Raman (2019)I. Batatia, S. Batzner, D. P. Kovacs, A. Musaelian, G. N. C. Simm, R. Drautz, C. Ortner, B. Kozinsky, and G. Csanyi, "The design space of e(3)-equivariant atom-centered interatomic potentials," (2022b).
* Cohen and Welling (2016)T. Cohen and M. Welling, in _Proceedings of The 33rd International Conference on Machine Learning_, Proceedings of Machine Learning Research, Vol. 48, edited by M. F. Balcan and K. Q. Weinberger (PMLR, New York, New York, USA, 2016) pp. 2990-2999.
* Cohen _et al._ (2018)T. S. Cohen, M. Geiger, J. Kohler, and M. Welling, in _International Conference on Learning Representations_ (2018).
* isometry and gauge equivariant convolutions on riemannian manifolds," (2021).
* Dusson _et al._ (2022)G. Dusson, M. Bachmayr, G. Csanyi, R. Drautz, S. Etter, C. van der Oord, and C. Ortner, Journal of Computational Physics **454**, 110946 (2022).
* Geiger and Smidt (2022)M. Geiger and T. Smidt, "e3nn: Euclidean neural networks," (2022).
* Finzi _et al._ (2021)M. Finzi, M. Welling, and A. G. Wilson, "A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups," (2021).
* Kondor _et al._ (2018)R. Kondor, Z. Lin, and S. Trivedi, in _Advances in Neural Information Processing Systems_, Vol. 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Curran Associates, Inc., 2018).
* Satorras _et al._ (2021)V. G. Satorras, E. Hoogeboom, and M. Welling, "E(n) equivariant graph neural networks," (2021).
* Brandstetter _et al._ (2022)J. Brandstetter, R. Hesselink, E. van der Pol, E. J. Bekkers, and M. Welling, "Geometric and physical quantities improve e(3) equivariant message passing," (2022), arXiv:2110.02905 [cs.LG].
* Drautz (2020)R. Drautz, Phys. Rev. B **102**, 024104 (2020).
* Kaliuzhnyi and Ortner (2022)I. Kaliuzhnyi and C. Ortner, ArXiv e-prints **2202.04140** (2022).
* Shapeev (2016)A. Shapeev, Multiscale Model. Simul. **14**, 1153 (2016).
* Darby _et al._ (2022)J. P. Darby, D. P. Kovacs, I. Batatia, M. A. Caro, G. L. W. Hart, C. Ortner, and G. Csanyi, "Tensor-reduced atomic density representations," (2022).
* Gelfand and Tsetlin (1950)I. M. Gelfand and M. L. Tsetlin, **825828**, 71 (1950).
* Alex _et al._ (2011)A. Alex, M. Kalus, A. Huckleberry, and J. von Delft, Journal of Mathematical Physics **52**, 023507 (2011).
* Lagrave _et al._ (2021)P.-Y. Lagrave, Y. Cabanes, and F. Barbaresco, in _Geometric Science of Information_, edited by F. Nielsen and F. Barbaresco (Springer International Publishing, Cham, 2021) pp. 577-584.
* Favoni _et al._ (2022)M. Favoni, A. Ipp, D. I. Muller, and D. Schuh, Phys. Rev. Lett. **128**, 032003 (2022).
* Bogatskiy _et al._ (2020)A. Bogatskiy, B. Anderson, J. Offermann, M. Roussi, D. Miller, and R. Kondor, in _Proceedings of the 37th International Conference on Machine Learning_, Proceedings of Machine Learning Research, Vol. 119, edited by H. D. III and A. Singh (PMLR, 2020) pp. 992-1002.
* Butter _et al._ (2019)A. Butter _et al._, SciPost Phys. **7**, 014 (2019), arXiv:1902.09914 [hep-ph].
* Qu _et al._ (2022)H. Qu, C. Li, and S. Qian, "Particle transformer for jet tagging," (2022).
* Qu and Gouskos (2020)H. Qu and L. Gouskos, Phys. Rev. D **101**, 056019 (2020), arXiv:1902.08570 [hep-ph].
* Munoz _et al._ (2022)J. M. Munoz, I. Batatia, and C. Ortner, "Bip: Boost invariant polynomials for efficient jet tagging," (2022).
* Komiske _et al._ (2019)P. T. Komiske, E. M. Metodiev, and J. Thaler, JHEP **01**, 121 (2019), arXiv:1810.05165 [hep-ph].
* Pearkes _et al._ (2017)J. Pearkes, W. Fedorko, A. Lister, and C. Gay, (2017), arXiv:1704.02124 [hep-ex].
* Gouskos _et al._ (2020)Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, in _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ (IEEE Computer Society, Los Alamitos, CA, USA, 2015) pp. 1912-1920.
* Yan (2019)X. Yan, github.com/yanx27 (2019).
* Qi _et al._ (2016)C. R. Qi, H. Su, K. Mo, and L. J. Guibas, arXiv preprint arXiv:1612.00593 (2016).
* Qi _et al._ (2017)C. R. Qi, L. Yi, H. Su, and L. J. Guibas, arXiv preprint arXiv:1706.02413 (2017).
* Shen _et al._ (2018)Y. Shen, C. Feng, Y. Yang, and D. Tian, in _CVPR_ (2018).
* Li _et al._ (2018)J. Li, B. M. Chen, and G. H. Lee, CoRR (2018).
* Kumawat and Raman (2019)S. Kumawat and S. Raman, in _CVPR_ (2019).
* Molev (1999)A. I. Molev, "A weight basis for representations of even orthogonal lie algebras," (1999), arXiv:math/9902060 [math.RT].
* Steinberg (1961)R. Steinberg, Bull. Amer. Math. Soc. **67**, 406 (1961).
* Bachmayr _et al._ (2021)M. Bachmayr, G. Dusson, C. Ortner, and J. Thomas, ArXiv e-prints **2109.14771** (2021).
* Thomas _et al._ (2022)J. Thomas, H. Chen, and C. Ortner, Arch. Ration. Mech. Anal. **246** (2022).
* Broecker (1985)T. Broecker, _Representation of Compact Lie Groups_ (Springer Berlin Heidelberg, 1985).
* Bourbaki (1989)N. Bourbaki, _Lie Groups and Lie Algebras_ (Springer Berlin, Heidelberg, 1989).
Appendix
### Background on Lie groups and Tensor Products of Representations
#### a.1.1 Lie groups
Lie groups generalize the idea of continuous symmetry such as the rotation symmetry of the round sphere \(S^{n}\) (given by the action of the orthogonal group \(\mathrm{O}(n)\)), the symmetry of a vector space under changes of basis (given by the action of the general linear group \(\mathrm{GL}_{n}(\mathbb{R})\)), or the invariance of Minkowski spacetime under boosts (given by the Lorentz group). The symmetry groups of specific geometries were studied in the \(19\)th century, leading to Sophus Lie's study of general continuous symmetry. Other major contributors to our understanding include Klein, Riemann, Hilbert, Poincarre, Noether, Cartan, among many others. In particular the solution of Hilbert's \(5\)th problem by Gleason, Montgomery, and Zippin shows that under very general conditions a continuous symmetry group must be a Lie group. Symmetry being central to mathematics Lie groups appeal in many areas of mathematics, from differential geometry to number theory.
Let us give the general formal definition, as well as a more practical concrete one:
**Definition A.1**.: A real (complex) **Lie group** is a group that is also a finite-dimensional smooth (complex) manifold in which the product and inversion of the group are also smooth (holomorphic) maps.
**A linear Lie group** is a subgroup of the group \(\mathrm{GL}_{n}(\mathbb{R})\) which is also a closed subset (in the sense of real analysis).
It is a fact that every closed subgroup of a Lie group is a Lie group, so every group of the second type is indeed also a group of the first type. Conversely while in general a Lie group is only _locally_ isomorphic to a linear group (by a surjective group homomorphism), every compact Lie group is isomorphic to a linear group, and most Lie groups that arise in practice are linear.
When the Lie group \(G\) acts on a space \(X\) (e.g. as a symmetry group) it naturally also acts on spaces of functions on \(X\) (by translation), and this action is _linear_. In general. an action of \(G\) on a (usually complex) vector space by linear map is called a **linear representation** (of \(G\)). In the context of our paper the space of functions on \(X\) is the space of features, and we achieve efficient computation in it by decomposing this representation into simpler constituents - in other words by understanding the **representation theory** of \(G\).
Describing the representation theory of arbitrary Lie groups is a very difficult problem and an active area of research in pure mathematics. However, for certain classes of groups it is possible to say quite a bit (for example classical harmonic analysis can be viewed from the lens of the representation theory of _commutative_ groups). An important class of groups whose representation theory can be completely understood is the class of _reductive_ Lie groups, on which we focus in this paper.
For now let us give a concrete definition; an abstract will follow later once we introduce the necessary language.
**Definition A.2**.: A linear Lie group \(G\subset\mathrm{GL}_{n}(\mathbb{R})\) is **reductive** if it is also closed under transpose: for every \(g\in G\) we have that \(g^{T}\in G\).
Most of the groups that are of interest in natural sciences are reductive Lie groups. This includes well-known groups such as the symplectic group \(\mathrm{Sp}_{2n}(\mathbb{R})\) relevant in Hamiltonian dynamics, and orthogonal group \(\mathrm{O}(n)\) which parametrizes rotations in \(\mathbb{R}^{n}\).
#### a.1.2 Linear representations
Fix a Lie group \(G\). A **linear representation** of \(G\) is a pair \((\pi,V)\) where \(V\) is a complex vector space and \(\pi\) is an action of \(G\) on \(V\) by linear maps. In other words for each \(g\in G\) we have a complex-linear map \(\pi(g)\colon V\to V\) such that \(\pi(g_{1}g_{2})=\pi(g_{1})\pi(g_{2})\) and such that \(\pi(1_{G})=\mathrm{Id}_{V}\) (in other words, \(\pi\colon G\to\mathrm{GL}(V)\) is a group homomorphism).
For an example let the familiar rotation group \(\mathrm{O}(3)\) act on the usual round sphere \(S^{2}\). Then \(\mathrm{O}(3)\) acts on the space of functions on \(S^{2}\). This space is infinite-dimensional, but it turns out we can approximate it by a sum of finite-dimensional pieces. Indeed for each degree \(d\geq 0\) let \(V_{d}\) be the space of polynomials in the variables \(x,y,z\) which are homogeneous of degree \(d\) (for example \(V_{0}=\operatorname{span}\{1\}\), \(V_{1}=\operatorname{span}\{x,y,z\}\), \(V_{2}=\operatorname{span}\{x^{2},xy,y^{2},\ldots\}\). Letting \(\operatorname{O}(3)\) act on \(\mathbb{R}^{3}\) by linear change of variable, it acts on each \(V_{d}\) since such change of variable does not change the degree of homogeneity. Now let \(W_{d}\subset V_{d}\) be the subspace of **harmonic** polynomials, those polynomials \(p\) that satisfy \(\frac{\partial^{2}p}{\partial x^{2}}+\frac{\partial^{2}p}{\partial y^{2}}+\frac {\partial^{2}p}{\partial z^{2}}=0\). Then \(W_{d}\) is itself \(\operatorname{O}(3)\) invariant since the Laplace operator is rotation-invariant, making \(W_{d}\) a **subrepresentation** of \(V_{d}\). It is a (not immediately obvious) fact that each \(W_{d}\) is in fact an **irreducible** representation: it has no subrepresentations of its own other than itself and the zero subspace. Further, if we interpret the harmonic polynomials in \(W_{d}\) as functions on the unit sphere \(S^{2}\subset\mathbb{R}^{3}\) we can view \(W_{d}\) as a space of functions on the sphere. It then turns out that the spaces \(W_{d}\) are linearly independent of each other, and that their sum \(\bigoplus_{d\geq 0}W_{d}\) is dense in any reasonable space of functions on the sphere (the space of continuous functions, the space of smooth functions, or the space of square-integrable functions, each with its respective notion of convergence). In other words, we have morally decomposed the space of functions on \(S^{2}\) as the sum of \(\operatorname{O}(3)\)-invariant subrepresentations, each of which is irreducible.
Decomposing a representation into its irreducible constituents is a major goal of representation theory (as well as a computational goal of this paper). When this is possible we can understand a general representations \(V\) of a group \(G\) by first enumerating its irreducible representations and then counting how many times each irreducible occurs as summand in \(V\), known as the (this is very much analogous to understanding general positive integers via their prime factorization - here the irreducible representations play the role of the prime numbers).
Reductive groups are particularly amenable to this approach because their representations indeed do decompose as direct sums in this fashion. It is fact that a linear Lie group \(G\) is reductive if and only if each of its finite-dimensional representations is isomorphic to a direct sum of irreducible representations.
Before we delve further into representation theory we need a brief detour into the structure theory of Lie groups.
#### a.1.3 Lie algebras
As Lie groups are manifolds, understanding their representations theory usually leads to advanced geometrical and topological results. We will now describe how these difficult questions can be transformed into simpler linear algebra questions using the Lie algebras associated with Lie groups.
**A Lie algebra** is a vector space \(\mathfrak{g}\) over a field \(F\) together with a bilinear map \([\cdot,\cdot]:\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}\) called the Lie bracket, satisfying the following axioms:
* The Lie bracket is bilinear, that is for all \(x,y,z\in\mathfrak{g}\) and \(\alpha,\beta\in F\), \([\alpha x+\beta y,z]=\alpha[x,z]+\beta[y,z]\) and \([x,\alpha y+\beta z]=\alpha[x,y]+\beta[x,z]\).
* The Lie bracket is antisymmetric, that is for all \(x,y\in\mathfrak{g}\), \([x,y]=-[y,x]\).
* The Lie bracket satisfies the Jacobi identity, that is for all \(x,y,z\in\mathfrak{g}\), \([x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0\).
Lie algebras can alternatively be seen as the tangent space of a Lie group to the identity:
**Theorem A.3**.: _Let \(G\) be a Lie group viewed as a smooth manifold, the tangent space \(T_{I}G\) is a Lie algebra called the Lie algebra of \(G\) noted \(\mathfrak{g}\). Moreover in the case of connected complex Lie groups, the category of representation of \(G\), \(Rep(G)\), is equivalent to the category of representation of \(\mathfrak{g}\), \(Rep(\mathfrak{g})\)._
This theorem is at the heart of the representation theory of Lie groups. As Lie algebra are vector spaces, the functorial equivalence between the two categories of representation translates geometrical and topological questions into simpler linear algebra questions.
A classification of Lie algebras was carried out by Cartan and later Gelfand work. In the case of complex Lie algebras, this classification is complete. Two major classes of Lie algebras emerged from Cartan's work, semi-simple Lie algebras and nilpotent Lie algebras.
**Definition A.4** (Central series).: A central series of a Lie algebra \(\mathfrak{g}\) is a sequence of ideals \((\mathfrak{g}_{i})_{i\in\mathbb{N}}\) such that \(\mathfrak{g}_{0}=\mathfrak{g}\) and \(\mathfrak{g}_{i+1}=[\mathfrak{g}_{i},\mathfrak{g}]\).
**Definition A.5**.: A Lie algebra is called semi-simple if it does not contain any non-trivial ideals. A Lie algebra is called nilpotent if its central series terminates to zero.
An important result of this classification is that any Lie algebras over an algebraically closed field can be decomposed into semisimple parts and a nilpotent part. We call this the Jordan decomposition. Therefore, understanding arbitrary representations of a Lie algebra boils down to understanding representation of both semi-simple and nilpotent algebras.
Most reductive Lie groups of interest have a semi-simple Lie algebra as associated Lie algebra. These algebras are also the one for which the representation theory is the best understood.
The study of Lie algebras is decomposed in two subfields. The structure theory studies the classification of Lie algebras in terms of their eigenspaces (as vector spaces). The representation theory builds on the structure theory to classify representations of semi-simple Lie algebras. We will give a very short introduction to both of them.
#### a.1.4 Structure theory of semi-simple Lie algebras
Let \(\mathfrak{g}\) be a semi-simple complex Lie algebras. Let \(\mathfrak{h}\) be one maximal set of mutually commutative element of \(\mathfrak{g}\). Maximal means here that no other such set contains strictly more element than \(\mathfrak{h}\). We call \(\mathfrak{h}\) the Cartan algebra of \(\mathfrak{g}\). As elements of \(\mathfrak{h}\) are mutually commutative, they share the same eigenspaces.
Therefore one can decompose \(\mathfrak{g}\) in terms of eigenspaces of elements of \(\mathfrak{h}\). Let \(\alpha\in\mathfrak{h}^{*}\) be a function in the dual of the Cartan algebra such that \(\forall h_{i}\in\mathfrak{h},\alpha(h_{i})=\alpha_{i}\) with \(\alpha_{i}\) the eigenvalue associated to \(h_{i}\). We call this linear operator the roots of \(\mathfrak{g}\). Let \(R\) be the space of all this functions, called the root space. One therefore has the the following decomposition of the Lie algebra;
\[\mathfrak{g}=\mathfrak{h}+\bigoplus_{\alpha\in R}g_{\alpha} \tag{24}\]
The structure theory of semi-simple complex Lie algebras is the study of the root space \(R\) and the associated eigenspaces \(g_{\alpha}\). One can classify complex semi-simple Lie algebras based on their type of root spaces. For real semi-simple Lie algebras, the structure theory essential reduces down to the complex case with a few extra steps.
#### a.1.5 Representation theory of complex semi-simple Lie algebras and Lie groups
The representation theory of complex reductive Lie algebras is completely understood. Every finite-dimensional representation is the direct sum of irreducible representations. Every irreducible representation is uniquely determined by a highest weight \(\lambda\), which is an element of the dual of the Cartan algebra \(\mathfrak{h}\). The dimension of each irreducible representation can be computed using the Weyl dimension formula.
\[\text{dim}V_{\lambda}=\frac{\prod_{\alpha\in R^{+}}(\lambda+\rho,\alpha)}{\prod _{\alpha\in R^{+}}(\rho,\alpha)} \tag{25}\]
where \(R^{+}\) is a subset of the root space \(R\), called the positive roots, and \(\rho\) is the half sum of the positive roots. Obtaining explicitly a basis for each representation is much harder. Gelfand and Tselin proposed a basis for the general case of \(SU(N)\) groups. This basis was recently generalized to other complex semi-simple algebras Molev (1999).
#### a.1.6 Representations of real semi-simple Lie groups
The representation theory of real semi-simple Lie groups can be elucidated by examining the complex semi-simple case through the application of the so-called Weyl unitary trick. This technique establishes a connection between the representations of real Lie groups and those of the complexification of their universal covering groups.
**Definition A.6**.: Given a Lie algebra \(\mathfrak{g}\), there exists a unique simply connected group \(\hat{G}\) possessing \(\mathfrak{g}\) as its Lie algebra. A group homomorphism \(\psi\) exists from this unique simply connected Lie group to any group \(G\) sharing the same Lie algebra. We refer to \(\hat{G}\) as the universal covering Lie group, and it is unique up to isomorphism.
An essential observation is that the irreducible representations of \(G\) form a subset of the irreducible representations of \(\hat{G}\). Understanding the irreducible representations of \(\hat{G}\) suffices. Take the case of the \(\mathfrak{su}(2)\) Lie algebra, for instance. The universal covering group is \(\mathrm{SU}(2)\), corresponding to the special unitary two-by-two matrices. The group \(\mathrm{SO}(3)\) of special orthogonal matrices also has \(\mathfrak{su}(2)\) as its Lie algebra. Consequently, the irreducible representations of \(\mathrm{SO}(3)\) are a subset of those of \(\mathfrak{su}(2)\). Representations of \(\mathfrak{su}(2)\) can be indexed by a half-integer, known as the quantum number \(j\). The integer representations are also irreducible representations of \(\mathrm{SO}(3)\), with non-integer ones being referred to as the spin representations.
Since the representation theory of complex Lie groups is well understood, an optimal approach to understanding the representation theory of a real Lie group involves studying the complexification of its universal cover.
**Definition A.7**.: For a universal covering Lie group \(G\) with Lie algebra \(\mathfrak{g}\), the universal covering Lie group \(\hat{G}_{\mathbb{C}}\), having the associated Lie algebra \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\otimes\mathbb{C}\), is termed the complexification of \(\hat{G}\).
The group of interest \(G\), its universal cover \(\hat{G}\) and the complexification of the universal cover \(\hat{G}_{\mathbb{C}}\), all share the similar irreducible representations. Therefore one obtains the following chain of inclusion of the respective representations of the groups,
\[Rep(G)\subset Rep(\hat{G})\subset Rep(\hat{G}_{\mathbb{C}}) \tag{26}\]
Let \(K\) be the maximal compact subgroup of \(\hat{G}_{\mathbb{C}}\) and \(\rho_{\lambda}\) a finite dimensional irreducible representation of \(K\), with \(\lambda\) as its highest weight. It is possible to analytically continue \(\rho_{\lambda}\) into an irreducible representation of \(\hat{G}_{\mathbb{C}}\), denoted as \(\rho_{\lambda}^{\hat{G}_{\mathbb{C}}}\), and also into the analytical conjugate \(\bar{\rho}_{\lambda}^{\hat{G}_{\mathbb{C}}}\). Every finite-dimensional representation of \(\hat{G}_{\mathbb{C}}\) can be constructed as the tensor product form \(\rho_{\lambda}^{\hat{G}_{\mathbb{C}}}\otimes\bar{\rho}_{\lambda^{\prime}}^{ \hat{G}_{\mathbb{C}}}\). Due to the fact that finite-dimensional representations of the universal covers and complexification of a semi-simple Lie group are isomorphic, one can express:
\[\rho_{\lambda}^{\hat{G}_{\mathbb{C}}}\otimes\bar{\rho}_{\lambda^{\prime}}^{ \hat{G}_{\mathbb{C}}}:=\rho_{(\lambda,\lambda^{\prime})}^{\hat{G}}:=\rho_{( \lambda,\lambda^{\prime})}^{G} \tag{27}\]
As universal covering groups contain more representations than the original cover, this mapping is not an isomorphism for all lambda. To illustrate this, consider the example of the Lorentz group \(SO(1,3)\).
The universal cover of \(\mathrm{SO}(1,3)\) is \(\mathrm{SL}(2,C)\), and the maximal compact subgroup of \(\mathrm{SL}(2,C)\) is \(\mathrm{SU}(2)\). The irreducible representations of \(\mathrm{SU}(2)\) are indexed by \(l\in\mathbb{N}/2\) and correspond to the Wigner D matrices \(D^{l}(g),g\in\mathrm{SU}(2)\). The \(\mathrm{SU}(2)\) group elements are parameterized by Euler angles that can be analytically continued to \(\mathrm{SL}(2,C)\) as follows:
\[\begin{split}&\alpha=\phi+i\kappa,\beta=\theta+i\epsilon,\gamma= \phi+i\xi\\ &\phi\in[0,2\pi),\theta\in[0,\pi],\phi\in[0,2\pi),\kappa,\epsilon,\xi\in\mathbb{R}\end{split} \tag{28}\]
This enables the construction of the fundamental representation of \(\mathrm{SL}(2,\mathbb{C})\), resulting in representations of the Lorentz group corresponding to the product of Wigner D matrices:
\[D^{k/2}(\alpha,\beta,\gamma)\otimes\bar{D}^{l/2}(\alpha,\beta,\gamma) \tag{29}\]
The irreducible representations of \(\mathrm{SO}(1,3)\) are indexed by a pair of integers \((l,k)\) corresponding to the associated \(SU(2)\) representations in the tensor product.
#### a.1.7 Gelfand-Tselin Basis
The enumeration of finite-dimensional representations of complex Lie groups has been well understood. However, the derivation of explicit matrix representations in specific bases and the tensor product decomposition of representations within these bases remain challenging and subject to ongoing research. The Gelfand-Tselin basis provides an explicit structure for irreducible representations, initially designed for the \(SU(N)\) group Gelfand and Tsetlin (1950); Alex _et al._ (2011), and subsequently generalized to encompass other classical Lie groups Molev (1999).
Central to the concept of GT patterns is the application of branching rules, whereby representations of a larger group are constructed by enumerating induced representations on its subgroups. Consider, for instance, the \(SU(N)\) case where irreducible representations are characterized by the highest weight vector \(\lambda\), a vector of length \(N\) terminated by a zero, \(\lambda=(\lambda_{1},\ldots,\lambda_{N-1},0)\). This highest weight vector may spawn several induced representations of the subgroup \(SU(N-1)\), each described by \(N-1\) vectors. This process can be iteratively executed in a descending chain:
\[SU(N)\to SU(N-1)\rightarrow\cdots\to SU(2)\to U(1) \tag{30}\]
This chain commences with the group of interest and culminates with the smallest subgroup. From this branching chain, triangular arrays known as GT-patterns can be constructed, with each row corresponding to a vector of a representation ranging from \(SU(N)\) to \(U(1)\):
\[\left(\begin{array}{ccccc}\lambda_{N,1}&\lambda_{N,2}&\ldots&\lambda_{N,N-1} &0\\ \lambda_{N-1,1}&\lambda_{N-1,2}&\ldots&\lambda_{N-1,N-1}\\ \vdots&\vdots&&\iddots\\ \lambda_{21}&\lambda_{22}&&\\ \lambda_{11}&&&\end{array}\right) \tag{31}\]
Adjacent rows within these patterns must adhere to a so-called "snake rule" to form an admissible GT pattern:
\[\lambda_{k,1}>\lambda_{k-1,1}>\lambda_{k,2}>\lambda_{k-1,2}>\ldots>\lambda_{k,k-1}>\lambda_{k-1,k-1}>\lambda_{k,k} \tag{32}\]
The dimension of a specific irreducible representation corresponds to the count of such admissible arrays, with the top row representing the highest weight vector. Representations sharing the same highest weight vectors interrelate through ladder operators. One key attribute of GT patterns is that the action of ladder operators on a GT pattern is analytically known, enabling the construction of the matrix representation for all representations using only the highest weight representation. Furthermore, constructing the highest weight matrix representation equals solving a linear system when the ladder operators are known analytically. Extending this methodology to other classical Lie groups (such as \((n),\mathrm{Sp}(n),\mathrm{SL}(n)\)) presents challenges as the ladder operators' entries become increasingly complex and recursive.
### Example of one particle basis for O(3) and SO(1,3) groups
In the paper, the one-particle basis \(\phi\) is left abstract, as it depends on the specific type of input point cloud and the group under consideration. We now describe two concrete examples, one involving isometry the other Lorentz equivariance.
#### a.2.1 Isometry equivariance
In the first example we consider a point cloud comprised of atoms, for example representing a molecule. Here, the point states are denoted by \(x_{i}=(\mathbf{r}_{i},Z_{i})\in\mathbb{R}^{3}\times\mathbb{Z}\), where \(\mathbf{r}_{i}\) corresponds to the position in 3D and \(Z_{i}\), its atomic number, is a categorical variable. The one-particle basis must form a complete basis for \(\mathbb{R}^{3}\times\mathbb{Z}\), which is both rotationally equivariant and translationally invariant. The translation is typically handled by referring to the atom's state in relation to a central value or another point. For simplicity, we can assume \(\mathbf{r}_{i}\) the particle's position relative to the point cloud's centre of mass, ensuring translational invariance.
The space \(\mathbb{R}^{3}\) can be represented as the product \(\mathbb{R}\times S^{2}\), with \(S^{2}\) being the sphere. As such, we choose a basis for \(\mathbb{R}\) such as Bessel functions, Chebyshev polynomials or a Fourier basis, denoted as \(R_{n}\). For \(S^{2}\), we utilise the spherical harmonics \(Y_{lm}\), which form a basis of functions in the sphere. Consequently, the one-particle basis can be expanded as:
\[\phi_{nlm}(x_{j})=R_{n}(r_{i})Y_{lm}(\hat{r}_{i})f(Z_{i}) \tag{33}\]
where \(r_{i}\) corresponds to the norm of \(\mathbf{r}_{i}\) and \(\hat{r}_{i}=\frac{r_{i}}{r_{i}}\).
The case of the Lorentz group is more complex than the \((3)\) case. Consider a point cloud made up of elementary particles, represented by their 4-momenta \(x_{i}=(E_{i},\mathbf{p}_{i})\in\mathbb{R}^{4}\). Although the 4-momenta is a vector, the natural metric in this space is not the Euclidean metric but rather a pseudo metric known as the Minkowski metric. This pseudo-metric encapsulates the unique role that time plays in special relativity. In natural units, the pseudo norm \(\eta\) of a 4-vector \(u=(u_{0},u_{1},u_{2},u_{3})\) is defined as
\[\eta(u,u)=u_{0}^{2}-u_{1}^{2}-u_{2}^{2}-u_{3}^{2} \tag{34}\]
To construct the one-particle basis, it is necessary to expand a four-vector in terms of the basis of representations of the Lorentz groups. As outlined in Appendix A.1.6, irreducible representationsof the Lorentz group are characterised by a tuple of integers \((l,k)\). The 4-vectors correspond to the fundamental, irreducible representation, namely the \((1,1)\) representation. Analogous structures to the spherical harmonics can be obtained by examining the symmetric powers of the irreducible representations. These are analogous to harmonic polynomials on Minkowski space, much as spherical harmonics are harmonic polynomials on the sphere \(S^{2}\). We thus define the Minkowski spherical harmonics as the following symmetric tensor product,
\[Y_{lm}(\textbf{p}_{i})=\mathrm{Sym}^{l}\,\textbf{p}_{i} \tag{35}\]
These spherical harmonics form a harmonic polynomial basis on Minkowski space. Therefore, one can create the one-particle basis in a manner akin to the \(O(3)\) case, as follows:
\[\phi_{nlm}(\textbf{p}_{j})=R_{n}(p_{i})Y_{lm}(\hat{p}_{i}) \tag{36}\]
Where \(p_{i}=\eta(\textbf{p}_{i},\textbf{p}_{i})\), and \(\hat{p}_{i}=\frac{\textbf{p}_{i}}{p_{i}+\epsilon}\), with \(\epsilon\) being a small constant that prevents divergence when \(p_{i}\) is zero due to the non-positiveness of the Minkowski norm.
### Extended Related Work
Convolutional Neural Networks (CNNs), which are translation equivariance, initiated the utilization of data symmetry in machine learning architectures. Throughout time, CNNs have been extended to include other symmetries as well. Central to all these generalizations is the group averaging operation,
\[\text{Avg}(f)(x)=\int_{g\in G}f(g\cdot x)dg, \tag{37}\]
Where \(x\) denotes the input signal or feature, \(f\) is the convolution kernel, \(G\) represents the group of interest, and \(dg\) is an invariant measure on \(G\). This transformation is essential, as it converts any convolution into a group invariant convolution. The feasibility of this approach largely depends on the computational simplicity of the integral. The most straightforward instance occurs for finite groups, as explained by G-convolutions (Cohen and Welling, 2016b), where the integral simplifies to a sum,
\[\text{Avg}(f)(x)=\sum_{g\in G}f(g\cdot x). \tag{38}\]
In the context of \(G\) being a compact group, the invariant measure \(dg\) is unique, referred to as the Haar measure of the group, allowing the integral to be computed numerically, e.g., on a grid. Steerable CNNs (Cohen and Welling, 2016a), LieConv Finzi _et al._ (2020) and Spherical CNNs (Cohen _et al._, 2018) extended CNNs to \(O(2)\) and \(O(3)\) equivariance, respectively. A general theory for convolution on any compact group and symmetric space was developed by Kondor and Trivedi (2018), and further extended to equivariant convolutions on Riemannian manifolds by Weiler _et al._ (2021). However, this approach has several limitations,
* The direct computation of the integral can be numerically unstable and inefficient, even for relatively small groups like \(O(3)\).
* In the case of non-compact groups, a unique invariant measure is absent, and the integral diverges.
* Across these methods, the convolution kernel \(f\) is usually constrained to a two-body operator.
In the case of compact groups, the integral over the group may be calculated via an alternative means. There exists a linear operator, called the Clebsch Gordan operator \(\mathcal{C}\), such that,
\[\text{Avg}(f)(x)=\mathcal{C}(f)(x) \tag{39}\]
Therefore, the complex integral over the group becomes a linear operation. The form of this operator depends on the basis in which \(f\) is expended. In the case of \(G=O(3)\) and if this basis is carefully chosen, the entries of this operator are known analytically. This approach is numerically stable and more efficient and was taken by numerous works, including ACE Dusson _et al._ (2022), Cormorant Anderson _et al._ (2019), NequIP Batzner _et al._ (2022), or MACE Batatia _et al._ (2022b). The central aim of our work is to show that this approach can also be generalized to all reductive Lie groups, even non-compact ones, and provide tools to do so.
### Proof of (7)
This statement follows closely the arguments by Dusson _et al._ (2022); Drautz (2020) and others.
\[\sum_{j_{1},\dots,j_{n}}\sum_{\mathbf{k}}w_{\mathbf{k}}\prod_{s}\phi_{k_{s}} (x_{j_{s}}) =\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{j_{1},\dots,j_{n}}\prod_{s}\phi_{k_{s }}(x_{j_{s}})\] \[=\sum_{\mathbf{k}}w_{\mathbf{k}}\prod_{s=1}^{n}\sum_{j}\phi_{k_{s}}(x_{j})\] \[=\sum_{\mathbf{k}}w_{\mathbf{k}}\prod_{s=1}^{n}A_{k}\] \[=\sum_{\mathbf{k}}w_{\mathbf{k}}\mathbf{A_{k}}.\]
### Custom notation and indexing
We briefly contrast our notation for Clebsch-Gordan coefficients (10) with the standard notation. By means of example, consider the group \(SO(3)\) in which case the Clebsch-Gordan equations are written as
\[\sum_{m_{1}^{\prime}m_{2}^{\prime}}C_{l_{1}m_{1}l_{2}m_{2}^{\prime}}^{LM}\rho_{ m_{1}^{\prime}m_{1}}^{l_{1}}(g)\rho_{m_{2}^{\prime}m_{2}}^{l_{2}}(g)=\sum_{M^{ \prime}}\rho_{MM^{\prime}}^{L}(g)C_{l_{1}m_{1}l_{2}m_{2}}^{LM^{\prime}}. \tag{40}\]
In this setting, our index \(\mathbf{\alpha}\) simply enumerates all possible such coefficients. One can often assign a natural meaning to this index, e.g., for the group \(SO(3)\) it is given by the pair of angular quantum numbers \((l_{1},l_{2})\). Specifically, in this case, we obtain
\[C_{l_{1}m_{1}l_{2}m_{2}}^{\mathbf{\alpha},LM}=\begin{cases}C_{l_{1}m_{1}l_{2}m_{2}} ^{LM},&\text{if }\mathbf{\alpha}=(l_{1},l_{2}),\\ 0,&\text{otherwise},\end{cases} \tag{41}\]
where \(C_{l_{1}m_{1}l_{2}m_{2}}^{LM}\) are the Clebsch-Gordan coefficients in the classical notation. Thus, the additional index \(\mathbf{\alpha}\) is not really required in the case of \(SO(3)\), nor our other main example, \(SO(1,3)\). Our notation is still useful to organize the computations of equivariant models, especially when additional channels are present, which is usually the case. Moreover, it allows for easy generalization to other groups where such a simple identification is not possible (Steinberg, 1961).
### Equivariance of G-cluster expansion
The equivariance of the G-cluster expansion is easily verified by applying a transformation \(g\) to the input,
\[\mathbf{B}_{\mathbf{\alpha}}^{K}\circ g =\sum_{\mathbf{k}}C_{\mathbf{k}}^{\mathbf{\alpha},K}\mathbf{A_{k}}\circ g\] \[=\sum_{\mathbf{k}}C_{\mathbf{k}}^{\mathbf{\alpha},K}\left(\sum_{\mathbf{k}^{ \prime}}\prod_{t}\rho_{k_{t},k_{t}^{\prime}}(g)\mathbf{A_{k^{\prime}}}\right)\] \[=\sum_{\mathbf{k}^{\prime}}\left(\sum_{\mathbf{k}}C_{\mathbf{k}}^{\mathbf{\alpha},K}\prod_{t}\rho_{k_{t},k_{t}^{\prime}}(g)\right)\mathbf{A_{k^{\prime}}} \tag{42}\] \[=\sum_{\mathbf{k}^{\prime}}\left(\sum_{K^{\prime}}\rho_{KK^{\prime}}( g)C_{\mathbf{k}^{\prime}}^{\mathbf{\alpha},K^{\prime}}\right)\mathbf{A_{k^{\prime}}}\] \[=\sum_{K^{\prime}}\rho_{KK^{\prime}}(g)\mathbf{B}_{\mathbf{\alpha}}^{K^{ \prime}}.\]
### Completeness of the basis and Universality of G-MACE
We explain in which sense the basis \(\mathbf{B}^{K}_{\mathbf{\alpha}}\) is a complete basis, and briefly sketch how to prove this claim. The argument is contained almost entirely in (Dusson _et al._, 2022) and only requires a single modification, namely Step 3 below, using a classical argument from representation theory. We will therefore give only a very brief summary and explain that necessary change.
We start with an arbitrary equivariant property \(\Phi^{V}\) embedded in \(V\) where we have a representation, i.e. the actual target property is \(\Phi\) is then given as a linear mapping from \(V\) to \(Z\). For technical reasons, we require that only finitely many entries \(\Phi^{V}_{K}\) may be non-zero, but this is consistent with common usage. For example, if \(G=O(3)\) and if \(\Phi\) is a scalar, then \(\Phi^{V}_{0}=\Phi\), while all other \(\Phi^{V}_{LM}\equiv 0\). If \(\Phi\) is a covariant vector, then \(\Phi^{V}_{LM}\) is non-zero if and only if \(L=1\); and so forth. For other groups, the labeling may differ but the principle remains the same.
_1. Convergence of the cluster expansion._ The first step in our parameterisation is to approximate \(\Phi^{V}\) in terms of a truncated many-body expansion (4). It is highly application-dependent on how fast this expansion converges. Rigorous results in this direction in the context of learning interatomic potentials can be found in (Bachmayr _et al._, 2021; Thomas _et al._, 2022). A generic statement can be made if the number of input particles is limited by an upper bound, in which case the expansion becomes exact for a finite \(N\). This case leads to the uniform density result stated in Theorem 4.1. We adopt this setting for the time being and return to the pointwise convergence setting below.
In the uniform convergence setting we also require that the domain \(\Omega\) is compact.
Throughout the remainder of this section we may therefore assume that an \(N\) can be chosen as well as smooth components \(\varphi^{(n)}\) such that the resulting model \(\Phi^{V,N}\) approximates \(\Phi^{V}\) to within a target accuracy \(\epsilon\),
\[|\Phi^{V,N}_{K}(\mathbf{x})-\Phi^{V}_{K}(\mathbf{x})|\leq\epsilon\qquad\forall\mathbf{x} \in\mathrm{msets}(\Omega).\]
_2. The density of the embedding._ As already stated in the main text, if the components \(\varphi^{(n)}_{K}\) are smooth, and the embedding \(\{\phi_{k}\}_{k}\) is dense in the space of one-particle functions (5) then it follows that the \(\varphi^{(n)}_{K}\) can be expanded in terms of the tensor product basis \(\phi_{\mathbf{k}}:=\otimes_{s=1}^{n}\phi_{k_{s}}\) to within arbitrary accuracy. The precise statement is the following standard result of approximation theory: if \(\mathrm{span}\{\phi_{k}\}_{k}\) are dense in \(C(\Omega)\), then \(\mathrm{span}\{\phi_{\mathbf{k}}\}_{\mathbf{k}}\) are dense in \(C(\Omega^{n})\). That is, for any \(\epsilon>0\), there exist approximants \(p^{(n)}_{K}\) such that
\[\|\varphi^{(n)}_{K}-p^{(n)}_{K}\|_{\infty}\leq\epsilon.\]
_3. The density of the symmetrized basis._ The next and crucial step is to show that, if the \(\varphi^{(n)}_{K}\) are equivariant, then the \(p^{(n)}_{K}\) may be chosen equivariant as well without loss of accuracy. If the group \(G\) is compact then the representations \(\rho\) can be chosen unitary (Broecker, 1985). In that case, the argument from (Dusson _et al._, 2022) can be used almost verbatim: let
\[\bar{p}^{(n)}(\mathbf{x}):=\int_{G}\rho(g)^{-1}p^{(n)}(g\mathbf{x})\,H(dg),\]
where \(H\) is the normalized Haar measure then \(\bar{p}^{(n)}\) is equivariant by construction and
\[\left|\varphi^{(n)}(\mathbf{x})-\bar{p}^{(n)}(\mathbf{x})\right|\] \[=\left|\int_{G}\rho(g)^{-1}\Big{(}\varphi^{(n)}(g\mathbf{x})-p^{(n)} (g\mathbf{x})\Big{)}\,H(dg)\right|\] \[\leq\int_{G}\left|\varphi^{(n)}(g\mathbf{x})-p^{(n)}(g\mathbf{x})\right| H(dg)\] \[\leq\int_{G}\|\varphi^{(n)}-p^{(n)}\|_{\infty}\,H(dg)\leq\epsilon.\]
If the group is not compact, then one can apply "Weyl's Unitary Trick" (see (Bourbaki, 1989), Ch. 3): first, one complexifies the group (if it is real) and then constructs a maximal compact subgroup \(K_{\mathbb{C}}\) of the complexification. This new group \(K\) will have the same representation as \(G\) and because it is compact, that representation may again be chosen as unitary. Therefore, symmetrizing \(p^{(n)}\) withrespect to \(K_{\mathbb{C}}\) results in an approximant that is not only equivariant w.r.t. \(K_{\mathbb{C}}\) but also equivariant w.r.t. \(G\).
_4. The density of the basis \(\mathbf{B}_{\mathbf{\alpha}}^{K}\)._ As the last step, one can readily observe that the symmetrization and cluster expansion steps can be exchanged. I.e. first symmetrizing and then employing the steps (7) result in the same model. Letting \(\epsilon\to 0\) in the foregoing argument while fixing the number of particles \(\#\mathbf{x}\) results in all errors vanishing. Note that this will in particular require taking \(N\to\infty\).
_5. Pointwise convergence._ To obtain density in the sense of pointwise convergence we first introduce the _canonical cluster expansion_ without self-interacting terms
\[\Phi_{K}(\mathbf{x})=\sum_{n=0}^{\infty}\sum_{j_{1}<\cdots<j_{n}}v_{K}^{(n)}(x_{j_{ 1}},\ldots x_{j_{n}}).\]
The difference here is that the summation is only over genuine sub-clusters. Because of this restriction, the series is finite for all multi-set inputs \(\mathbf{x}\). In other words, it converges in the pointwise sense.
One can easily see that \(v_{n}\) can be chosen (explicitly) to make this expansion exact. After truncating the expansion at finite \(n\leq N\) and then expanding the potentials \(v_{K}^{(n)}\) one can exactly transform the canonical cluster expansion into the self-interacting cluster expansion. This procedure is detailed in (Dusson _et al._, 2022; Drautz, 2020).
The arguments up to this point establish the claimed universality for the linear ACE model. The corresponding universality of the TRACE model follows immediately from (Darby _et al._, 2022). Since a single layer of the MACE model is a TRACE model, this completes the proof of Theorem 4.1.
### Product of groups
Let \(G_{1}\) and \(G_{2}\) be two reductive Lie groups (or finite groups). Let \(A_{1}\) and \(A_{2}\) be the two structure constants of \(G_{1}\) and \(G_{2}\), (\(d\rho_{1}\), \(\rho_{1}\)) and (\(d\rho_{2}\), \(\rho_{2}\)) their continuous and discrete generators. One can define a representation of the direct product group \(G_{1}\times G_{2}\) as
\[\rho:=\left(A_{1}|A_{2},n_{1}n_{2},\{d\rho_{1}(X_{i})\otimes I_{2}+I_{1} \otimes d\rho_{2}(\tilde{X}_{j})\}_{i,j},\{\rho(h_{1})\otimes I_{2}\,+I_{1} \otimes\rho(h_{2})\}_{h_{1},h_{2}\in\mathbf{H}_{1},\mathbf{H}_{2}}\right) \tag{43}\]
The following essential property holds: if \(\rho_{1}\) and \(\rho_{2}\) are irreducible representations of \(G_{1}\) and \(G_{2}\) then \(\rho_{1}\otimes\rho_{2}\) is an irreducible representation of \(G_{1}\times G_{2}\). Moreover, for reducible Lie groups, all the irreps of \(G_{1}\times G_{2}\) are of this form. Therefore, one can construct all the irreps of \(G_{1}\times G_{2}\) this way. It is of particular interest in the case of equivariant message passing networks on points clouds, where the group of interest is \(G\times S_{n}\).
Let us give a non-trivial application of lie-nn for computing invariants for the product group of \(O(3)\times S_{3}\). We consider a set of three vectors,
\[a=\begin{bmatrix}a_{x}\\ a_{y}\\ a_{z}\end{bmatrix}\quad b=\begin{bmatrix}a_{x}\\ a_{y}\\ a_{z}\end{bmatrix}\quad c=\begin{bmatrix}a_{x}\\ a_{y}\\ a_{z}\end{bmatrix}\]
A given vector belong to the product representation of the \(l=1\) vector of the \(O(3)\) group and the natural representation of \(S_{3}\). In lie-nn, we can define the product representation as follows,
rep = lie.group_product(lie.finite.Sn_natural(3), lie.irreps.O3(l=1, p=-1)) If one wants to know the unique permutation invariant vector that one can construct from the three vectors \(a\), \(b\) and \(c\), one can use the following code,
qs = lie.infer_change_of_basis( lie.group_product(lie.finite.Sn_trivial(3), lie.irreps.O3(l=1, p=-1)), rep, ) As expected, the output is the sum of the three vectors, \(a+b+c\), corresponding to the only permutation invariant vector that can be constructed from the three vectors.
Consider now the case where we want to extract a permutation invariant scalar from the set of products of two vectors. In this case, we can use the following code,qs = lie.infer_change_of_basis( lie.group_product(lie.finite.Sn_trivial(3),lie.irreps.O3(l=0,p=1)), lie.tensor_product(rep,rep), ) In this code, we first construct the tensor product representation and then seek the permutation invariant scalar in it. The result is two distinct scalars,
\[e_{1}=a_{x}^{2}+a_{y}^{2}+a_{z}^{2}+b_{x}^{2}+b_{y}^{2}+b_{z}^{2}+c_{x}^{2}+c_{y }^{2}+c_{z}^{2}\]
\[e_{2}=a_{x}b_{x}+a_{x}c_{x}+a_{y}b_{y}+a_{y}c_{y}+a_{z}b_{z}+a_{z}c_{z}+b_{x}c_{x }+b_{y}c_{y}+b_{z}c_{z}\]
Finally we consider a case that would be very hard to do by hand. It is the case of extracting a permutation invariant \(l=2\) vector for this tensor product of the original set of three vectors.
qs = lie.infer_change_of_basis( lie.group_product(lie.finite.Sn_trivial(3),lie.irreps.O3(l=2,p=1)), lie.tensor_product(rep,rep), ) The computation tells us that there exists two such vectors,
\[v_{1}=\begin{bmatrix}-\sqrt{2}a_{x}a_{z}-\sqrt{2}b_{x}b_{z}-\sqrt{2}c_{x}c_{z} \\ -\sqrt{2}a_{x}a_{y}-\sqrt{2}b_{x}b_{y}-\sqrt{2}c_{x}c_{y}\\ \frac{\sqrt{6}a_{x}^{2}}{6}-\frac{\sqrt{6}a_{y}^{2}}{3}+\frac{\sqrt{6}a_{z}^{2 }}{6}+\frac{\sqrt{6}b_{z}^{2}}{6}-\frac{\sqrt{6}b_{z}^{2}}{3}+\frac{\sqrt{6}b _{z}^{2}}{6}+\frac{\sqrt{6}c_{x}^{2}}{6}-\frac{\sqrt{6}c_{y}^{2}}{3}+\frac{ \sqrt{6}c_{z}^{2}}{6}\\ \frac{\sqrt{2}a_{x}^{2}}{2}-\frac{\sqrt{2}a_{y}^{2}}{2}+\frac{\sqrt{2}b_{z}^{2 }}{2}-\frac{\sqrt{2}b_{z}^{2}}{2}+\frac{\sqrt{2}c_{z}^{2}}{2}-\frac{\sqrt{2}c_ {z}^{2}}{2}\end{bmatrix}\]
\[v_{2}=\begin{bmatrix}-a_{x}b_{z}-a_{x}c_{z}-a_{z}b_{x}-a_{z}c_{x}-b_{x}c_{z}-b_ {z}c_{x}\\ -a_{x}b_{y}-a_{x}c_{y}-a_{y}b_{x}-a_{y}c_{x}-b_{x}c_{y}-b_{y}c_{x}\\ \frac{\sqrt{3}a_{x}b_{x}}{3}+\frac{\sqrt{3}a_{x}c_{x}}{3}-\frac{2\sqrt{3}a_{y} b_{y}}{3}-\frac{2\sqrt{3}a_{y}c_{x}}{3}+\frac{\sqrt{3}a_{x}b_{x}}{3}+\frac{ \sqrt{3}a_{x}c_{y}}{3}\\ -a_{y}b_{z}-a_{y}c_{z}-a_{z}b_{y}-a_{z}c_{y}-\frac{\sqrt{3}b_{z}c_{y}}{3}-\frac {2\sqrt{3}b_{x}c_{y}}{3}-\frac{2\sqrt{3}b_{x}c_{y}}{3}\\ a_{x}b_{x}+a_{x}c_{x}-a_{z}b_{z}-a_{z}c_{z}+b_{x}c_{x}-b_{z}c_{z}\end{bmatrix}\]
### Symmetric Tensor products
The permutation group is an important concept in the context of tensor products. It can be useful to focus on a subset of the full tensor product space that exhibits certain permutation equivariance. For example, the spherical harmonics are defined as the permutation-invariant part of a tensor product.
The symmetric tensor product can be thought of as a change of basis, or projector, from the tensor product to the symmetric part of the tensor product. In the case of a tensor product of correlation order four we have,
\[S_{\nu}=B_{\nu;ijkl}x_{i}y_{j}z_{k}w_{l} \tag{44}\]
where \(B\) is the change of basis that satisfies:
\[B_{\nu;ijkl}=B_{\nu;\sigma(ijkl)}\forall\sigma\in S_{4} \tag{45}\]
We propose in lie-nn a new algorithm used to calculate \(B\). The Symmetric Tensor Product is calculated using a tree structure, starting at the leaves and progressing towards the trunk. The leaves are the basis of the individual indices, and they are combined and constrained at each step to impose symmetry.
### Computing the irreps from input representations
For some groups, the computation of the generators \(X\) can become a very involved task. However in most applications, the data itself is already given in a form of a representation. One approach proposed by (Finzi _et al._, 2021) is to not work in the space of irreps but the space of polynomials of the input representation. This approach has the advantage of requiring little previous knowledge of the group. However it is also much less efficient than using irreps. One alternative way is to consider polynomials of the input representation, that are reducible and then compute the block diagonalisation to project down to irreps subspace. One can then work directly as polynomials in this subspace and compute Clebsch-Gordan coefficients numerically. We provide routines in lie-nn to carry out these operations from any given input representation.
### Details of numerical experiments
#### a.11.1 Jet Tagging
DatasetThe dataset (Butter _et al._, 2019) was generated using a Pythia, Delphes, and FastJet (using cuts for the jet's kinematics on \(\Delta\eta=2\), \(R=0.8\)) to simulate the response of the ATLAS detector at the Large Hadron Collider (LHC). The dataset is released under the "Creative Commons Attribution 4.0" license. The entire dataset contains 2 millions jets with a 60/20/20 for training, validation, and testing balanced splits.
ModelThe model uses **3 layers** of the \(G\)-MACE architecture to generate the Lorentz group equivariant representation of each jet. For the 1 particle basis, we use a product of radial features on the Minkowski distances, and \(SO(1,3)\) spherical harmonics. The radial features are computing by passing a logarithmic radial basis as in (Bogatskiy _et al._, 2022) into a \([64,64,64,512]\) MLP using SiLU nonlinearities on the outputs of the hidden layers. The internal representations used are \((0,0)\) and \((1,1)\). We use 72 channels for each representation. For the embedding, and readout out, we use similar achitectures to LorentzNet.
TrainingModels were trained on an NVIDIA A100 GPU in single GPU training. Typical training time for the dataset is up to 72 hours. Models were trained with AMSGrad variant of Adam, with default parameters of \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-8}\). We used a learning rate of \(0.0035\) and a batch size of 64. The model was trained for 80 epochs with 2 epochs of linear learning rate warmup and followed by a phase of cosine annealing LR scheduling.
#### a.11.2 3D shape recognition
DatasetModelNet10 (Wu _et al._, 2015) is a synthetic 3D object point clouds dataset containing 4,899 pre-aligned shapes from 10 categories. The dataset is split into 3,991 \((80\%)\) shapes for training and 908 \((20\%)\) shapes for testing. We were unable to find a license.
ModelThe model uses a three-layer encoder architecture following the PointNet++ one. We use an encoder of the full point cloud into sub-point clouds of sizes \([1024,256,128]\). Each PointNet layer maps a point cloud of size \(N^{t}\) to one of size \(N^{t+1}\). We compute the node features as the sum of the PointNet output and the MACE output,
\[h^{(t+1)}=\text{PointNet}(xyz^{(t)},h^{(t)})+\text{MACE}(xyz^{(t)},h^{(t)}) \tag{46}\]
TrainingModels were trained on an NVIDIA A100 GPU in single GPU training. The typical training time for the dataset is up to 12 hours. Models were trained with the AMSGrad variant of Adam, with default parameters of \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-8}\).
### Limitations and Future Work
The spectrum of potential applications of the present method is very large. In this paper, we focus on a subset of applications that have known benchmarks and baselines. A broader range of groups is implemented in the lie-nn library. Future work should focus on applying this architecture to tasks with domain-specific knowledge.
### Computational cost
#### a.13.1 Clebsch Gordan generation
Generation timeThe Clebsch Gordan generation time depends on the size of the representations. In Table 3, we plot the generation time as a function of the size of the representations for different groups. The generation time is found to range from a matter of milliseconds for smaller representations to a few seconds for larger ones. The generation of CGs constitutes a preprocessing step, separate from the model inference computations. Therfore, the results can be stored for subsequent use, ensuring that this phase does not affect the model's overall performance.
SparsityOne distinguishing feature of the Clebsch-Gordan coefficient is their sparsity. This sparsity comes from the selection rule, which is unique for each group. These selection rules also give rise to a sparsity structure that can be exploited to achieve the best efficiency. In Figure 4 we compare the sparsity percentage as a function of the size of the representations for different groups.
#### a.13.2 G-MACE computational cost
The computational bottleneck of traditional equivariant MPNNs is the equivariant tensor product on the edges of the graph used to form the message. In G-MACE the edge operation is the pooling operation to compute the features \(A_{k}\). Correlations \(\mathbf{A_{k}}\) are computed through the tensor product of the product basis, an operation that is carried out on nodes. In typical models, the correlations are the bottleneck. Since the number of nodes is orders of magnitudes smaller than the number of edges in most applications we are envisioning, it is a significant computational advantage to organize the computation in this way. We return to reviewing the cost of the product basis below.
Figure 4: Sparsity in the percentage of non-zero entries of Clebsch-Gordan coefficients as a function of the size of the representations for different groups. The representations size is computed as the product of the three representation sizes involved in the Clebsch-Gordan.
Figure 3: Generation time in seconds of Clebsch-Gordan coefficients as a function of the size of the representations for different groups. The representation size is computed as the product of the three representation sizes involved in the Clebsch-Gordan.
Obtaining the equivariant basis \(\mathbf{B}^{K}\) involves simply linear operation. The Clebsch-Gordan coefficients are very sparse, with a well defined structural sparsity, which can be easily exploited for constructing highly efficient low level code on both CPUs and GPUs.
Thus, we return to the product basis \(\mathbf{A_{k}}\), which is normally the computational bottleneck. It has structural sparsity due to its symmetry under permuting the \(\mathbf{k}\) tuples and due to the structural sparsity in the Clebsch-Gordan coefficients. Despite that sparsity it was shown in Kaliuzhnyi and Ortner (2022) that _in theory_ the product basis can be evaluated recursively with \(O(1)\) operations per feature, however, this requires effective use of sparse tensor formats which is challenging to optimize for GPU architectures since a naive implemention relies on random memory access patterns. Instead, the _current_ MACE code employs a similar recursive evaluation algorithm described in Batatia _et al._ (2022a), which employs full tensor formats, but avoids the random memory access. For the most common case of 3-correlation models we estimate that this code reaches within a factor 3-5 of the optimal performance of hypothetical sparse tensor implementation.
Figure 5: Inference time for a single jet for a two layer LorentzMACE model as a function of the correlation order in the product basis and the maximal angular resolution. Timing made on a Nvidia A100 and averaged over a 100 runs. | ## Review
### Summary
This paper introduces a novel framework for constructing equivariant neural networks specifically designed for arbitrary reductive Lie groups, building upon the established ACE and MACE methods. The proposed approach includes a linear model for multi-set functions that is symmetrized to produce a comprehensive basis of equivariant functions, and it is developed into a multi-layer architecture using a message-passing scheme. The authors validate their claims through experimental results on tasks such as top quark decay tagging and 3D shape recognition, while also providing a software library to facilitate the implementation of their method.
### Strengths
- The idea of generalizing ACE and MACE frameworks to arbitrary reductive Lie groups is novel.
- The paper is well-organized and clearly written, making it mostly easy to read.
- Strong experimental results that support the claimed advantages of the proposed method.
- A library is provided for developing G-equivariant neural networks.
- The proposed method could serve as a useful tool for researchers dealing with specific symmetry problems.
### Weaknesses
- Minor writing errors exist, such as incorrect table captions and the need for better referencing of experimental results in the text.
- Lack of robust comparison against state-of-the-art methods, particularly in the context of 3D shape recognition.
- Some definitions and key concepts are not clearly articulated, potentially hindering reader comprehension.
- In-depth discussion of computational aspects and numerical errors related to the implementation is lacking.
- The novelty of the work is not adequately delineated from the previous ACE and MACE methods.
### Questions
- How can the computations in Eqs. (7) and (16) be efficiently executed in practice?
- What is the impact of higher-order messages on the performance of G-MACE?
- Could the authors clarify the definition of B used in equation (20)?
- A comparison with methods based on Lie Groups, such as LieConv, would be beneficial.
### Soundness
**Score:** 3
**Description:** 3 = good: The theoretical framework is sound, with appropriate proofs provided, though some areas require additional clarity and justification.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is generally well-written and organized, but contains minor errors and could benefit from additional explanations and clarity in certain sections.
### Contribution
**Score:** 3
**Description:** 3 = good: The work presents a valuable contribution to the field of equivariant neural networks, though it would benefit from clearer delineation of its novelty.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept: The paper is technically sound with moderate-to-high impact, but it requires improvements in clarity and evaluation against other methods.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates originality in generalizing existing methods and tackling significant challenges in the field of equivariant neural networks. While there are minor weaknesses in presentation and comparisons, the overall soundness and contribution justify acceptance.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# ReTR: Modeling Rendering Via Transformer for
Generalizable Neural Surface Reconstruction
Yixun Liang\({}^{1}\) Hao He\({}^{1,2}\) Ying-Cong Chen\({}^{1,\,2}\)
\({}^{1}\) The Hong Kong University of Science and Technology (Guangzhou).
\({}^{2}\) The Hong Kong University of Science and Technology.
[email protected], [email protected], [email protected]
Equal contribution.Corresponding author
###### Abstract
Generalizable neural surface reconstruction techniques have attracted great attention in recent years. However, they encounter limitations of low confidence depth distribution and inaccurate surface reasoning due to the oversimplified volume rendering process employed. In this paper, we present Reconstruction TRansformer (ReTR), a novel framework that leverages the transformer architecture to redesign the rendering process, enabling complex render interaction modeling. It introduces a learnable _meta-ray token_ and utilizes the cross-attention mechanism to simulate the interaction of rendering process with sampled points and render the observed color. Meanwhile, by operating within a high-dimensional feature space rather than the color space, ReTR mitigates sensitivity to projected colors in source views. Such improvements result in accurate surface assessment with high confidence. We demonstrate the effectiveness of our approach on various datasets, showcasing how our method outperforms the current state-of-the-art approaches in terms of reconstruction quality and generalization ability. _Our code is available at_[https://github.com/YixunLiang/ReTR](https://github.com/YixunLiang/ReTR).
## 1 Introduction
In the realm of computer vision and graphics, extracting geometric information from multi-view images poses a significant challenge with far-reaching implications for various fields, including robotics, augmented reality, and virtual reality. As a popular approach to this problem, neural implicit reconstruction techniques [1, 2, 3, 4, 5, 6] are frequently employed, generating accurate and plausible geometry from multi-view images by utilizing volume rendering and neural implicit representations based on the Sign Distance Function (SDF) [7] and its variant. Despite their efficacy, these methods possess inherent limitations such as the lack of cross-scene generalization capabilities and the necessity for extensive computational resources for training them from scratch for each scene. Furthermore, these techniques heavily rely on a large number of input views.
Recent studies such as SparseNeuS [9] and VolRecon [8] have attempted to overcome these challenges by integrating prior image information with volume rendering methods, thereby achieving impressive cross-scene generalization capabilities while only requiring sparse views as input. Nevertheless, these solutions are still based on volume rendering, which poses some intrinsic drawbacks in surface reconstruction. Specifically, volume rendering is a simplification of the physical world and might not capture the full extent of its complexity. It models the interactions of incident photons and particles into density, which is predicted solely based on sampled point features. This oversimplified modeling fails to disentangle the contribution of the light transport effect and surface properties to the observed color, resulting in an inaccurate assessment of the actual surface. Moreover, the prediction of colorblending heavily relies on the projected color of the source views, thereby overlooking intricate physical effects. These shortcomings can lead to less confident surface predictions, producing a depth distribution with low kurtosis, and may be accompanied by high levels of noise, thereby compromising the overall quality of the reconstruction as illustrated in Fig. 1 (b).
In this paper, we first propose a more generalized formulation for volume rendering. Then based on this formulation, we introduce the Reconstruction TRansformer (ReTR), a novel approach for generalizable neural surface reconstruction. Our method utilizes the transformer to redesign the rendering process, allowing for accurate modeling of the complex render interaction while retaining the fundamental properties that make volume rendering effective. Particularly, we propose a learnable token, the _meta-ray token_, that encapsulates the complex light transport effect. By leveraging this token and the "cross-attention" mechanism, we simulate the interaction of rendering with the sampled points and aggregate the features of each point to render the observed color in an end-to-end manner. Additionally, we introduce a unidirectional transformer and continuous positional encoding to simulate photon-medium interaction, effectively considering occlusion and the interval of sample points. Moreover, as our model operates in the feature space rather than the color space, to further enhance the accuracy of semantic information and enable seamless adaptation, we propose a novel hybrid extractor designed to efficiently extract 3D-aware features.
Our method provides an excellent solution for generalizable neural surface reconstruction, offering several advantages over traditional volume rendering techniques. First, the ability to generalize complicated physical effects in a data-driven way provides an efficient approach to decoupling the contribution of light transport and surface to the observed color. Second, the entire process takes place in a high-dimensional feature space, rather than the color space, reducing the model's sensitivity to projected color in source views. Moreover, the transformer employs a re-weighted _softmax_ function of the attention map, driving the learning of a depth distribution with positive kurtosis. As a result, our method achieves a more confident distribution, leading to reduced noise and improved quality, As shown in Fig. 1 (c).
In summary, our contribution can be summarized as:
* We identify the limitation and derive a general form of volume rendering. By leveraging this form, we can effectively tailor the rendering process for task-specific requirements.
* Through the derived general form, we propose ReTR, a learning-based rendering framework utilizing transformer architecture to model light transport. ReTR incorporates continuous positional encoding and leverages the hybrid feature extractor to enhance performance in generalizable neural surface reconstruction.
* Extensive experiments conducted on DTU, BlendedMVS, ETH3D, and Tanks & Temples datasets [10; 11; 12; 13] validate the efficacy and generalization ability of ReTR.
Figure 1: Generalizable neural surface reconstructions from three input views in (a). VolRecon [8] produces depth distribution with low kurtosis and a noisy surface as shown in (b). In contrast, our proposed **ReTR** successfully extracts plausible surfaces with sharper depth distribution that has high kurtosis as shown in (c).
Related Works
Multi-View Stereo (MVS).Multi-view stereo methods is another branch for 3D reconstruction, which can be broadly categorized into three main branches: depth maps-based [14; 15; 16; 17; 18; 19], voxel grids-based [20; 21; 22], and point clouds-based [23; 24; 25]. Among these, depth maps-based methods are more flexible and hence more popular than the others. Depth maps-based methods typically decouple the problem into depth estimation and fusion, which has shown impressive performance with densely captured images. However, these methods exhibit limited robustness in situations with a shortage of images, thereby highlighting the problem we aim to address.
**Neural Surface Reconstruction.** With the advent of NeRF [26], there has been a paradigm shift towards using similar techniques for shape modeling, novel view synthesis, and multi-view 3D reconstruction. IDR [27] uses surface rendering to learn geometry from multi-view images, but it requires extra object masks. Several methods [2; 3; 1; 28; 29] have attempted to rewrite the density function in NeRF using SDF and its variants, successfully regressed plausible geometry. Among them, NeuS [2] first uses SDF value to model density in the volume rendering to learn neural implicit surface, offering a robust method for multi-view 3D reconstruction from 2D images. However, these methods require lengthy optimization to train each scene independently. Inspired by the recent success of generalizable novel view synthesis [30; 31; 32; 33; 34]. SparseNeuS [9] and VolRecon [8] achieve generalizable neural surface reconstruction using the information from the source images as the prior to neural surface reconstruction. However, these methods suffer from oversimplified modeling of light transport in traditional volume rendering and color blending, resulting in extracted geometries that fail to make a confident prediction of the surface, leading to distortion in the reconstructions. However, there are also some attempts [35; 36; 37] that adopt other rendering methods in implicit neural representation, such as ray tracing. However, these models are still based on enhanced explicit formulations; thus, their capacity to model real-world interaction is still limited. In contrast, our method introduces a learning-based rendering, providing an efficient way to overcome such limitations.
**Learning Based Rendering.** Unlike volume rendering, another line of work [38; 39; 40] explores deep learning techniques to simulate the rendering process. Especially Recurrent neural networks, which naturally fit the rendering process. Specifically, DeepVoxels [38] employs GRU to process voxel features along a ray, and its successor SRN [41], leverages LSTM for ray-marching. However, such methods recursively process features and demand large computation resources. The computation constraints inhibit such methods' ability to produce high-quality renderings. Unlike RNN-based methods, ReTR leverages the transformer to parallel compute each point's hitting probability. Greatly improve the efficiency of the rendering process and achieve high-quality renderings.
**Transformers With Radiance Field.** The attention mechanism in transformers [42] has also been widely used in the area of radiance field. In image-based rendering, IBRNet [30] proposes a ray transformer to process sampled point features and predict density. NeRFormer [43] utilizes a transformer to aggregate source views and construct feature volumes. NeuRays [44] leverages neural networks to model and address occlusions, enhancing the quality and accuracy of image-based rendering. GPBR [45] employs neural networks to transform and composite patches from source images, enabling versatile and realistic image synthesis across various scenes. However, these methods only use the transformer to enhance feature aggregation, and the sampled point features are still decoded into colors and densities and aggregated using traditional volume rendering, leading to unconfident surface prediction. Recently, GNT [46] naively replaces classical volume rendering with transformers in image-based rendering, which overlooks the absence of occlusion and positional awareness within the transformer architecture. In contrast to GNT, we improved the traditional transformer architecture in those two limitations based on an in-depth analysis of the fundamental components to make volume rendering work.
## 3 Methodology
In this section, we present an analysis of the limitations of existing generalizable neural surface reconstruction approaches that adopt volume rendering from NeRF [26] and propose ReTR, a novel architecture that leverages transformer to achieve learning-based rendering. We introduce the formulations of volume rendering and revisit its limitations in generalizable neural surface reconstruction in Sec 3.1. We then depict the general form of volume rendering in Sec. 3 and present our learning-based rendering in Sec. 3.3. To effectively extract features for our proposed rendering, we further introduce the hybrid extractor in Sec. 3.4. Our loss functions are explained in Sec. 3.5.
### Preliminary
Generalizable neural surface reconstruction aims to recover the geometry of a scene \(\mathbf{S}\), which is represented by a set of \(M\) posed input views \(\mathbf{S}=\{\mathbf{I}_{j},\mathbf{P}_{j}\}_{j=1}^{M}\), where \(\mathbf{I}_{j}\in\mathbb{R}^{H\times W\times 3}\) and \(\mathbf{P}_{j}\in\mathbb{R}^{3\times 4}\) are the \(j\)-th view's image and camera parameters, respectively. Existing approaches [9; 8] generate the features of the radiance field from \(\mathbf{S}\) using a neural network \(\mathcal{F}_{enc.}\), and we formalize the process as:
\[\mathbf{f}^{v},\{\mathbf{f}_{1}^{img},\dots,\mathbf{f}_{M}^{img}\}=\mathcal{F} _{enc.}(\mathbf{S}), \tag{1}\]
where \(\mathbf{f}^{v}\in\mathbb{R}^{R\times R\times R\times D}\) is the volume feature with resolution \(R\) and \(\mathbf{f}_{j}^{img}\in\mathbb{R}^{h\times w\times D}\) is the image feature with dimension \(D\). To decode the color of a sampled ray \(\mathbf{r}=(\mathbf{o},\mathbf{d})\) passing through the scene, where \(\mathbf{o}\) and \(\mathbf{d}\) denotes as ray original and direction, the existing approaches [9; 8] sample \(N\) points along the ray from coarse to fine sampling between near and far planes and obtain the location \(\mathbf{x}\in\mathbb{R}^{3}\) of each sample point:
\[\mathbf{x}_{i}=\mathbf{o}+t_{i}\mathbf{d},\quad i=1,\dots,N. \tag{2}\]
The predicted SDF from features first converts to weights using the conversion function [2], denoted as \(\sigma_{i}\). The weights are then used to accumulate the re-weighted projected colors along the ray using volume rendering. Specifically:
\[C(\mathbf{r})=\sum_{i=1}^{N}T_{i}\left(1-\exp\left(-\sigma_{i}\right)\right) \mathbf{c}_{i},\quad\text{where}\quad T_{i}=\exp\left(-\sum_{j=1}^{i-1}\sigma_ {j}\right), \tag{3}\]
\[\mathbf{c}_{i}=\sum_{j=1}^{M}\mathcal{F}_{weight.}(\mathbf{f}_{i}^{v},\{\Pi( \mathbf{f}_{k}^{img},\mathbf{x}_{i})\}_{k=1}^{M})\Pi(\mathbf{I}_{j},\mathbf{x}_{i}). \tag{4}\]
Here the \(\mathcal{F}_{weight.}(\cdot)\) denotes the module that predicts the weight of each projected color. The \(\mathbf{f}_{i}^{v}\) represents the volume feature at \(\mathbf{x}_{i}\) obtained using trilinear interpolation in \(\mathbf{f}^{v}\). \(\Pi(\mathbf{I},\mathbf{x})\) denotes the operation that projects \(\mathbf{x}\) onto the corresponding input grid \(\mathbf{I}\) and then extracts the feature at the projected location through bilinear interpolation.
**Limitation.** The volume rendering, denoted by Eq. (3), greatly simplifies light transport processes, thus introducing key limitations. A complete physical model for light transport categorizes the process into absorption, emission, out-scattering, and in-scattering. Each represents a nonlinear photon-particle interaction, with incident photon properties being influenced by both their inherent nature and medium characteristics. However, Eq. (3) condenses these complexities into a single density value, predicted merely based on sampled point features, leading to an _oversimplification_
Figure 2: Our ReTR pipeline comprises several steps: (1). Extracting features through the proposed hybrid extractor model from source views, (2). Processing features in each sample point using the feature fusion block, and (3). Using the occlusion transformer and render transformer to aggregate features along the ray and predict colors and depths.
of incident photon modeling_. Moreover, the color determination method, influenced by a weighted blend of projected colors akin to Eq. (4), _over-relies on input view projected colors, overlooking intricate physical effects_. As a result, the model requires a wider accumulation of projected colors from points near the exact surface, resulting in a "murky" surface appearance, as depicted in Fig. 1.
### Generalized Rendering Function
To overcome the limitations we mentioned in Sec. 3.1, we propose an improved rendering equation that takes into consideration of incident photon modeling and feature-based color decoding. As we revisit Eq. (3) and determine its key components to facilitate our redesign. This differentiable equation consists of three parts: The \(T_{i}\) term accounts for the accumulated transmittance and gives the first surface a bias to contribute more to the observed color. The \((1-\exp\left(-\sigma_{i}\right))\) term denotes the alpha value of traditional alpha compositing, which represents the transparency of \(\mathbf{x}_{i}\) and is constrained to be non-negative. The color part of Eq. (3) denotes the color of \(\mathbf{x}_{i}\). Based on these analyses, we summarize three key rendering properties that the system must hold:
1. _Differentiable_. To enable effective learning, the weight function needs to be differentiable to the training network with observed color through back-propagation.
2. _Occlusion-aware_. In line with the bias towards the first surface in the original equation, the weight function needs to be aware that the points close to the first _exact surface_ should have a larger contribution to the final output color than other points.
3. _Non-negative_. Echoing the non-negativity constraint in the alpha value, the weight of each point also needs to be positive.
Having identified these key properties, we can now reformulate our approach to meet these requirements. Specifically, we propose a more general form of Eq. (3), which can be formalized as:
\[C(\mathbf{r})=\sum_{i=1}^{N}\mathcal{W}\left(\mathbf{F}_{1},\ldots,\mathbf{F}_{i} \right)\mathcal{C}\left(\mathbf{F}_{i}\right), \tag{5}\]
where \(\mathbf{F}_{i}\) represents the set comprising image feature \(f^{img}\) and volume feature \(f^{v}\) in the \(\mathbf{x}_{i}\in\mathbb{R}^{3}\), \(\mathcal{W}\left(\cdot\right)\) is the weight function that satisfies those three key properties we mentioned above, and and \(\mathcal{C}\left(\cdot\right)\) denotes the color function. Specifically, color \(c\) can then be interpreted as characteristic of each feature point in 5. Therefore, feature at each point can be aggregated in a manner analogous to RGB value, and enabling us to deduce the primary feature points. This can be mathematically expressed as:
\[C(\mathbf{r})=C(\sum_{i=1}^{N}\mathcal{W}\left(\mathbf{F}_{1},\ldots,\mathbf{F}_{i} \right)\mathbf{F}_{i}), \tag{6}\]
where the \(C(\cdot)\) represents the color function that maps the feature into RGB space.
### Reconstruction Transformer
Note that Eq. (3) can be considered as a special form of Eq. (5). With this generalized form, we can reformulate the rendering function to overcome the oversimplification weakness of Eq. (3). Based on Eq. (5), we introduce the Reconstruction Transformer (ReTR) that preserves essential rendering properties while incorporating sufficient complexity to implicitly learn and model intricate physical processes. ReTR is composed of a Render Transformer and an Occlusion Transformer. The Render Transformer leverages a learnable "meta-ray token" to encapsulate complex render properties, enhancing surface modeling. The Occlusion Transformer utilizes an attention mask to enable the occlusion-aware property. Also, ReTR works in the high-dimensional feature space instead of the color space, and thus allows for more complex and physically accurate light interactions. Consequently, ReTR not only overcomes the restrictions of Eq. (3) but also preserves its fundamental rendering characteristics. We elaborate on the design of these components as follows.
**Render Transformer.** Here, we discuss the design of the render transformer. Specifically, we introduce a global learnable token, refer to as "meta-ray token" and denote as \(\mathbf{f}^{tok}\in\mathbb{R}^{D}\), to capture and store the complex render properties. For each sample ray, we first use the FeatureFusion block to combine all features associated with each sample point of the ray, resulting in FeatureFusion\((\mathbf{F}_{i})\in\mathbb{R}^{D}\). We then employ the cross-attention mechanism within the Render Transformer to simulate the interaction of sample points along the ray. It can be formalized as follows:
\[C(\mathbf{r})=\mathcal{C}\left(\sum_{i=1}^{N}softmax\left(\frac{q(\mathbf{f}^{tok })k(\mathbf{f}_{i}^{f})^{\top}}{\sqrt{D}}\right)v(\mathbf{f}_{i}^{f})\right), \tag{7}\]
where \(q(\cdot),k(\cdot),v(\cdot)\) denotes three linear layers and \(\mathcal{C}(\cdot)\) in this formulation is an MLP structure to directly regress observed color from the aggregated feature, and \(W(\cdot)\) translates to \(softmax\left(\frac{q(\mathbf{f}^{tok})k(\mathbf{f}_{i}^{f})^{\top}}{\sqrt{D}}\right)\). Furthermore, \(C(\cdot)\) is operationalized as MLP, which serves to decode the integrated feature into its corresponding RGB value. Then, the hitting probability will be normalized by \(softmax\) function, which is _Non-negative_ and encourages the network to learn a weight distribution with positive kurtosis. And we can extract the attention map from the Render Transformer and derive the rendered depth as:
\[D\left(\mathbf{r}\right)=\sum_{i=1}^{N}\alpha_{i}t_{i},\quad\text{where}\quad \alpha_{i}=softmax\left(\frac{q(\mathbf{f}^{tok})k(\mathbf{f}_{i}^{f})^{\top}}{ \sqrt{D}}\right), \tag{8}\]
where \(\alpha_{1},\dots,\alpha_{N}\) denotes the attention map extracted from the Eq. (7). The rendered depth map can be further used to generate mesh [47] and point cloud [16].
**Occlusion Transformer.** To further make our system _Occlusion-aware_ and enable of simulation of photon-medium interaction. We introduce Occlusion Transformer. Similar to previous works [46; 48], we introduce an attention mask to achieve that the sample point interacts only with the points in front of it and the meta-ray token. Such unidirectional processes encourage the later points to respond to the preceding surface. This process can be formalized as:
\[\mathbf{R}^{f}=\{\mathbf{f}^{tok},\mathbf{f}_{1}^{f},\mathbf{f}_{2}^{f},\dots, \mathbf{f}_{N}^{f}\}, \tag{9}\]
\[\mathbf{R}^{occ}= \text{OccTrans}(Q,K,V=\mathbf{R}_{f}), \tag{10}\]
Where MHA denotes the multi-head self-attention operation [42] and \(\mathbf{f}_{i}^{occ}\) denotes the refine feature of \(\mathbf{x}_{i}\) which obtained from the render transformer. In addition, \(R^{f}\) is the collective set of tokens, and \(R^{occ}\) signifies the occlusion transformer that employs \(R^{f}\) for cross attention. Then, our Eq. (7) can be rewritten as:
\[C(\mathbf{r})=\mathcal{C}\left(\sum_{i=1}^{N}softmax\left(\frac{q(\mathbf{f}^{ tok})k(\mathbf{f}_{i}^{occ})^{\top}}{\sqrt{D}}\right)v(\mathbf{f}_{i}^{f})\right). \tag{11}\]
**Continuous Positional Encoding.** Following the traditional transformer design, we need to introduce a positional encoding to make the whole structure positional-aware. However, positional encoding proposed in [42] ignores the _actual distance_ between each token, which is unsuitable for our situation. Furthermore, weighted-based resampling [26] would lead to misalignment of positional encoding when an increased number of sample points are used.
To solve this problem, we extend the traditional positional encoding formula to continuous scenarios. Specifically, it can be formulated as follows:
\[PE_{(\mathbf{x}_{i},2i)} =sin(\beta t_{i}/10000^{2i/D}), \tag{12}\] \[PE_{(\mathbf{x}_{i},2i+1)} =cos(\beta t_{i}/10000^{2i/D}).\]
Here, \(i\) represents the positional encoding in the \(i_{th}\) dimension and \(\beta\) is a scale hyperparameter we empirically set to 100. The updated formula successfully solves the misalignment of traditional positional encoding, results are shown in Tab. 4. The specific proofs will be included in the Appendix section.
### Hybrid Extractor
In our learning-based rendering, a finer level of visual feature is necessary, which is not achieved by traditional feature extractors that rely on high-level features obtained through FPN. These featuresare highly semantic abstract and not suitable for low-level visual feature matching [49]. To overcome this limitation, inspired by NeuralRecon [50], we further propose Hybrid Extractor. Rather than relying on FPN to generate one feature map from high-level features of CNN, and using a 3D U-Net to process projected features as shown in Fig. 3, we leverage all level features from various layers to construct multi-level volume features. Then, we adopt a 3D CNN decoder to fuse and decode the multi-level volume features, producing the final global volume features.
Our approach enables us to perceive both low and high-level features, which is crucial for generalizable neural surface reconstructions that require detailed surface processing. Second, by avoiding the use of the encoder part of the 3D U-Net, we reduce the computational complexity and allow us to build a higher resolution volume feature within the same computational budget.
### Loss Functions
Our overall loss function is defined as the weighted sum of two loss terms:
\[\mathcal{L}=\mathcal{L}_{\text{rendering}}+\alpha\mathcal{L}_{\text{depth}}, \tag{13}\]
where \(\mathcal{L}_{\text{rendering}}\) constrains the observed colors to match the ground truth colors and is formulated as:
\[\mathcal{L}_{\text{rendering}}=\frac{1}{S}\sum_{s=1}^{S}\left\|C\left(\mathbf{r }\right)-C_{g}\left(\mathbf{r}\right)\right\|_{2}, \tag{14}\]
Here, \(S\) is the number of sampled rays for training, and \(C_{g}\left(\mathbf{r}\right)\) represents the ground truth color of the sample ray \(r\). The depth loss \(\mathcal{L}_{\text{depth}}\) is defined as
\[\mathcal{L}_{\text{depth}}=\frac{1}{S_{1}}\sum_{s=1}^{S_{1}}\left|D\left( \mathbf{r}\right)-D_{g}\left(\mathbf{r}\right)\right|, \tag{15}\]
where \(S_{1}\) is the number of pixels with valid depth and \(D_{g}\left(\mathbf{r}\right)\) is the ground truth depth. In our experiments, we set \(\alpha=1.0\).
## 4 Experiments
**Datasets.** The DTU dataset [10] is a large-scale indoor multi-view stereo dataset consisting of 124 different scenes captured under 7 different lighting conditions. To train our frameworks, we adopted the same approach as in previous works [9; 8]. Furthermore, we evaluated our models' generalization capabilities by testing them on three additional datasets: Tanks & Templates [12], ETH3D [13], and BlendedMVS [11], where no additional training was performed on the testing datasets.
Figure 3: Comparision of the original extractor used in [8; 9] (top) and our **hybrid extractor** (bottom). The original extractor primarily discerns high-level features, demonstrating less efficacy. In contrast, our hybrid extractor excels in integrating multi-level features, demonstrating superior efficacy.
**Baselines.** To demonstrate the effectiveness of our method from various perspectives, we compared it with (1) SparseNeus [9] and VolRecon [8], the state-of-the-art generalizable neural surface reconstruction method; (2) Generalizable neural rendering methods (3) Neural implicit reconstruction [27; 2; 1; 3] which require individual training for each scene from scratch. (4) Popular multi-view stereo (MVS) [16; 51] methods. Further details on the baselines are provided in the appendix.
### Sparse View Reconstruction
For comparison, we performed sparse reconstruction using only three views, following the same approach as [8; 9]. We adopted the same evaluation process and testing split as used in previous works [8; 9] to ensure a fair comparison. We use a similar approach as VolRecon [8] to generate mesh, more details can be found in Appendix. As shown in Tab. 1, our method outperforms VolRecon [8] and SparseNeuS [9] by a significant margin. Moreover, our method also outperforms popular MVS methods such as MVSNet [51]. Furthermore, we present the qualitative results of sparse view reconstruction in Fig. 4. Our reconstructed geometry exhibits smoother surfaces and less noise compared to the current SoTA methods.
### Depth Map Evaluation & Full View Reconstruction
We compare our rendered depth with those generated by SparseNeuS [9], MVSNet [51] and VolRecon [8]. Following the experiment settings introduced in VolRecon [8], we also use four source views as input for depth rendering. Additionally, we evaluated the performance by fusing all depth maps into a global point cloud. As shown in Tab. 2, our method outperforms existing methods in both evaluations. Moreover, as demonstrated in Fig. 5, our method achieves a sharper boundary with less noise and fewer holes compared to the current SoTA method [8].
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline SCAN & Mean\(\mid\) & 24 & 37 & 40 & 55 & 63 & 65 & 69 & 83 & 97 & 105 & 106 & 110 & 114 & 118 & 122 \\ \hline COLMAP [16] & 1.52 & **0.90** & 2.89 & 1.63 & 1.08 & 2.18 & 1.94 & 1.61 & 1.30 & 2.34 & 1.28 & 1.10 & 1.42 & 0.76 & 1.17 & 1.14 \\ MVSNet [51] & 1.22 & 1.05 & 2.52 & 1.71 & 1.94 & 1.45 & **1.52** & **0.88** & **1.29** & 1.38 & 1.05 & **0.91** & **0.66** & 0.61 & 1.08 & 1.16 \\ \hline DRR [27] & 3.99 & 4.01 & 6.40 & 3.52 & 1.91 & 3.96 & 2.36 & 4.85 & 1.62 & 6.37 & 5.97 & 1.23 & 4.73 & 0.91 & 1.72 & 1.26 \\ VolSDF [1] & 3.41 & 4.03 & 4.21 & 6.12 & **0.91** & 8.24 & 1.73 & 2.74 & 1.82 & 5.14 & 3.09 & 2.08 & 4.81 & 0.60 & 3.51 & 2.18 \\ UNISNet [3] & 4.39 & 5.08 & 7.18 & 3.96 & 5.30 & 4.61 & 2.44 & 3.94 & 3.14 & 5.63 & 3.40 & 5.09 & 6.38 & 2.98 & 4.05 & 2.81 \\ NusS [2] & 4.00 & 4.57 & 4.49 & 3.97 & 3.32 & 4.63 & 1.95 & 4.68 & 3.83 & 4.15 & 2.50 & 1.52 & 6.47 & 1.26 & 5.57 & 6.11 \\ \hline PixelNeRF [32] & 6.18 & 5.13 & 6.07 & 5.85 & 4.40 & 7.11 & 4.64 & 5.68 & 6.76 & 9.05 & 6.11 & 3.95 & 5.92 & 6.26 & 6.89 & 6.93 \\ IRRNet [30] & 2.32 & 2.39 & 3.70 & 2.66 & 1.83 & 3.02 & 2.83 & 1.77 & 2.28 & 2.73 & 1.96 & 1.87 & 2.13 & 1.58 & 2.05 & 2.09 \\ MVSNet [31] & 2.09 & 19.6 & 3.27 & 2.54 & 1.93 & 2.57 & 2.71 & 1.82 & 1.72 & 1.72 & 1.27 & 1.47 & 1.29 & 2.06 & 2.26 \\ \hline SparseNeuS [9] & 1.96 & 2.17 & 3.29 & 2.74 & 1.67 & 2.69 & 2.42 & 1.58 & 1.86 & 1.94 & 1.35 & 1.50 & 1.45 & 0.98 & 1.86 & 1.87 \\ VolRecon [8] & 1.38 & 1.20 & 2.59 & 1.56 & 1.08 & 1.43 & 1.92 & 1.11 & 1.48 & 1.42 & 1.05 & 1.19 & 1.38 & 0.74 & 1.23 & 1.27 \\
**ReTR (Ours)** & **1.17** & 1.05 & **2.31** & **1.44** & 0.98 & **1.18** & **1.52** & **0.88** & 1.35 & **1.30** & **0.87** & 1.07 & 0.77 & **0.59** & **1.05** & **1.12** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results of **sparse view** reconstruction on 15 testing scenes of DTU dataset [10]. We report the chamfer distance, the lower the better, Methods are split into four categories from top to bottom: a) MVS-based methods, b) Per-scene optimization methods, c) Generalizable rendering methods, and d) Generalizable reconstruction methods. The best scores are in **bold** and the second best are in underlined.
Figure 4: Sparse view reconstruction on testing scenes in the DTU [10]. Comparison with VolRecon [8] (left), our proposed ReTR (right) renders a more accurate surface and preserves finer details, _e.g._ the window of the house (scan 24), the nose of smurf, ReTR produces much sharper details. Best viewed on a screen when zoomed in.
### Generalization
To evaluate the generalization ability of our model without retraining, we use three datasets, namely Tank & Temples, BlendedMVS, and ETH3D [12; 11; 13]. The high-quality reconstruction of large-scale scenes and small objects in different domains, as shown in Fig. 6, demonstrates the effectiveness of our method in terms of generalization capability.
## 5 Ablation Study
We conduct ablation studies to examine the effectiveness of each module in our design. The ablation study results are reported on sparse view reconstruction in test split following SparseNeuS [9] and VolRecon [8].
**Effectiveness of Modules.** We evaluate key components of our approach to generalizable neural surface reconstruction, as shown in Tab. 3. For the evaluation of the occlusion transformer, we keep the original transformer architecture while removing the special design we proposed in Sec. 3.3, to ensure the training parameter would not affect the evaluation. For the hybrid extractor part, we replace this module with the original extractor that has been used in [8]. Our results demonstrate that our approach can better aggregate features from different levels and use them more effectively. These evaluations highlight the importance of these components in our approach.
**Robustness of Different Sampling.** Tab. 4 displays the effects of altering the number of sample points on the reconstruction quality of VolRecon[8] and ReTR. Our method surpasses the current SoTA, even when the number of sampling points decreases. These results suggest that existing methods that rely on sampling points of the ray struggle to provide confident predictions of the surface due to the nature of volume rendering. Our approach, which uses learning-based rendering, is more resilient to sampling strategies and can provide reliable depth estimations even with fewer samples. Meanwhile, the effectiveness of continuous P.E. in Sec. 3.3 is proved through the result.
**Unsupervised Neural Surface Reconstruction.** Our approach is still applicable to unsupervised neural surface reconstruction using only colors for training, which remove \(\mathcal{L}_{\text{depth}}\). Meanwhile, we find
\begin{table}
\begin{tabular}{c c c c|c c c} \hline \hline Method & Acc. \(\downarrow\) & Comp. \(\downarrow\) & Chamfer \(\downarrow\) & \(<\)1mm \(\uparrow\) & \(<\)2mm \(\uparrow\) & \(<\)4mm \(\uparrow\) \\ \hline MVSNet [51] & 0.55 & 0.59 & 0.57 & 29.95 & 52.82 & 72.33 \\ SparseNeuS [9] & 0.75 & 0.76 & 0.76 & 38.60 & 56.28 & 68.63 \\ VolRecon [8] & 0.55 & 0.66 & 0.60 & 44.22 & 65.62 & 80.19 \\
**ReTR (Ours)** & **0.54** & **0.51** & **0.52** & **45.00** & **66.43** & **81.52** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative results of **full view** reconstruction on 15 testing scenes of DTU dataset [10]. For the Accuracy (ACC), Completeness (COMP), and Chamfer Distance, the lower is the better. For depth map evaluation, threshold percentages (\(<\)1mm, \(<\)2mm, \(<\)4mm) are reported in percentage (%). The best scores are in **bold**.
Figure 5: Full view reconstruction visualization in test set of DTU [10], comparison with VolRecon [8] (left), our proposed ReTR (right) reconstructs better point clouds, _e.g._ fewer holes, the skull head top, and the house roof gives a much complete representation, _e.g._ finer details, the house window, and the skull cheek, provides much finer details. Best viewed on a screen when zoomed in.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \begin{tabular}{c} Reder \\ Trans. \\ \end{tabular} & \begin{tabular}{c} Occ. \\ Trans. \\ \end{tabular} &
\begin{tabular}{c} Hybrid \\ Ext. \\ \end{tabular} & Chamfer\(\downarrow\) \\ \hline ✓ & \(\times\) & \(\times\) & 1.31 \\ ✓ & \(\times\) & ✓ & 1.29 \\ ✓ & ✓ & \(\times\) & 1.28 \\ ✓ & ✓ & ✓ & **1.17** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Model component ablation. All of these parts are described in Sec. 3.3.
that our method significantly outperforms the current SoTA method under unsupervised situations and is even comparable to COLMAP [16] the popular MVS technique, As shown in Tab. 5, This is further evidence that the improvement of complex rendering systems for implicit reconstruction is huge.
## 6 Conclusion
We have proposed Reconstruction Transformer (ReTR), a novel framework for generalizable neural surface reconstruction that uses transformers to model complex rendering processes. ReTR represents a significant advancement in the field of surface reconstruction, offering a powerful solution to the challenges faced by neural implicit reconstruction methods. Additionally, we delve into the design procedure of learning-based rendering. This exploration broadens our understanding of enhancing complex rendering systems and sets the stage for future research endeavors not only in surface reconstruction but also in other tasks relative to differentiable rendering.
## 7 Acknowledgements
We thank Shuai Yang and Wenhang Ge for the thoughtful review of our manuscript and valuable discussions throughout this project. Thank you to Yukang Chen, Shuhan Zhong and Jierun Chen for ideas and feedbacks on our manuscript. We would also like to thank the Turing AI Computing Cloud (TACC) [52] and HKUST iSING Lab for providing us with computation resources on their platform.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Sample & \multirow{2}{*}{VolRecon [8]} & ReTR & \multirow{2}{*}{\begin{tabular}{c} **ReTR** \\ (Ours*) \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{c} **(Ours)** \\ \end{tabular} } \\ \cline{1-1} \cline{4-4}
## References
* [1] L. Yariv, J. Gu, Y. Kasten, and Y. Lipman, "Volume rendering of neural implicit surfaces," _Advances in Neural Information Processing Systems_, vol. 34, pp. 4805-4815, 2021.
* [2] P. Wang, L. Liu, Y. Liu, C. Theobalt, T. Komura, and W. Wang, "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction," _arXiv preprint arXiv:2106.10689_, 2021.
* [3] M. Oechsle, S. Peng, and A. Geiger, "Unisurf: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 5589-5599.
* [4] M. Niemeyer, L. Mescheder, M. Oechsle, and A. Geiger, "Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 3504-3515.
* [5] F. Darmon, B. Bascle, J.-C. Devaux, P. Monasse, and M. Aubry, "Improving neural implicit surfaces geometry with patch warping," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 6260-6269.
* [6] Y. Wang, I. Skorokhodov, and P. Wonka, "Hf-neus: Improved surface reconstruction using high-frequency details," _Advances in Neural Information Processing Systems_, vol. 35, pp. 1966-1978, 2022.
* [7] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove, "Deepsdf: Learning continuous signed distance functions for shape representation," in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2019, pp. 165-174.
* [8] Y. Ren, F. Wang, T. Zhang, M. Pollefeys, and S. Susstrunk, "Volrecon: Volume rendering of signed ray distance functions for generalizable multi-view reconstruction," _arXiv preprint arXiv:2212.08067_, 2022.
* [9] X. Long, C. Lin, P. Wang, T. Komura, and W. Wang, "Sparseneux: Fast generalizable neural surface reconstruction from sparse views," in _Proceedings of the European conference on computer vision (ECCV)_, 2022, pp. 210-227.
* [10] H. Aanaes, R. R. Jensen, G. Vogiatzis, E. Tola, and A. B. Dahl, "Large-scale data for multiple-view stereopsis," _International Journal of Computer Vision_, vol. 120, pp. 153-168, 2016.
* [11] Y. Yao, Z. Luo, S. Li, J. Zhang, Y. Ren, L. Zhou, T. Fang, and L. Quan, "Blendedmvs: A large-scale dataset for generalized multi-view stereo networks," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 1790-1799.
* [12] A. Knapitsch, J. Park, Q.-Y. Zhou, and V. Koltun, "Tanks and temples: Benchmarking large-scale scene reconstruction," _ACM Transactions on Graphics_, vol. 36, no. 4, 2017.
* [13] T. Schops, J. L. Schonberger, S. Galliani, T. Sattler, K. Schindler, M. Pollefeys, and A. Geiger, "A multi-view stereo benchmark with high-resolution images and multi-camera videos," in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2017, pp. 3260-3269.
* [14] N. D. Campbell, G. Vogiatzis, C. Hernandez, and R. Cipolla, "Using multiple hypotheses to improve depth-maps for multi-view stereo," in _Proceedings of the European conference on computer vision (ECCV)_, 2008, pp. 766-779.
* [15] D. Chang, A. Bozic, T. Zhang, Q. Yan, Y. Chen, S. Susstrunk, and M. Niessner, "Rc-mvsnet: unsupervised multi-view stereo with neural rendering," in _Proceedings of the European conference on computer vision (ECCV)_, 2022, pp. 665-680.
* [16] J. L. Schonberger, E. Zheng, J.-M. Frahm, and M. Pollefeys, "Pixelwise view selection for unstructured multi-view stereo," in _Proceedings of the European conference on computer vision (ECCV)_, 2016, pp. 501-518.
* [17] X. Gu, Z. Fan, S. Zhu, Z. Dai, F. Tan, and P. Tan, "Cascade cost volume for high-resolution multi-view stereo and stereo matching," in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 2495-2504.
* [18] J. Yang, W. Mao, J. M. Alvarez, and M. Liu, "Cost volume pyramid based depth inference for multi-view stereo," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 4877-4886.
* [19] K. Luo, T. Guan, L. Ju, H. Huang, and Y. Luo, "P-mvsnet: Learning patch-wise matching confidence aggregation for multi-view stereo," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 10 452-10 461.
* [20] K. N. Kutulakos and S. M. Seitz, "A theory of shape by space carving," _International journal of computer vision_, vol. 38, pp. 199-218, 2000.
* [21] S. M. Seitz and C. R. Dyer, "Photorealistic scene reconstruction by voxel coloring," _International journal of computer vision_, vol. 35, pp. 151-173, 1999.
* [22] Y. Yao, Z. Luo, S. Li, T. Shen, T. Fang, and L. Quan, "Recurrent mvsnet for high-resolution multi-view stereo depth inference," in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2019, pp. 5525-5534.
* [23] Y. Furukawa and J. Ponce, "Accurate, dense, and robust multiview stereopsis," _IEEE transactions on pattern analysis and machine intelligence_, vol. 32, no. 8, pp. 1362-1376, 2009.
* [24] M. Lhuillier and L. Quan, "A quasi-dense approach to surface reconstruction from uncalibrated images," _IEEE transactions on pattern analysis and machine intelligence_, vol. 27, no. 3, pp. 418-433, 2005.
* [25] R. Chen, S. Han, J. Xu, and H. Su, "Point-based multi-view stereo network," in _Proceedings of the IEEE/CVF international conference on computer vision_, 2019, pp. 1538-1547.
* [26] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, "Nerf: Representing scenes as neural radiance fields for view synthesis," _Communications of the ACM_, vol. 65, no. 1, pp. 99-106, 2021.
* [27] L. Yariv, Y. Kasten, D. Moran, M. Galun, M. Atzmon, B. Ronen, and Y. Lipman, "Multiview neural surface reconstruction by disentangling geometry and appearance," _Advances in Neural Information Processing Systems_, vol. 33, pp. 2492-2502, 2020.
* [28] J. Zhang, Y. Yao, and L. Quan, "Learning signed distance field for multi-view surface reconstruction," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 6525-6534.
* [29] W. Ge, T. Hu, H. Zhao, S. Liu, and Y.-C. Chen, "Ref-neus: Ambiguity-reduced neural implicit surface learning for multi-view reconstruction with reflection," _arXiv preprint arXiv:2303.10840_, 2023.
* [30] Q. Wang, Z. Wang, K. Genova, P. P. Srinivasan, H. Zhou, J. T. Barron, R. Martin-Brualla, N. Snavely, and T. Funkhouser, "Ibrnet: Learning multi-view image-based rendering," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 4690-4699.
* [31] A. Chen, Z. Xu, F. Zhao, X. Zhang, F. Xiang, J. Yu, and H. Su, "Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 14 124-14 133.
* [32] A. Yu, V. Ye, M. Tancik, and A. Kanazawa, "pixelnerf: Neural radiance fields from one or few images," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 4578-4587.
* [33] A. Trevithick and B. Yang, "Grf: Learning a general radiance field for 3d representation and rendering," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 15 182-15 192.
* [34] S. Peng, M. Niemeyer, L. Mescheder, M. Pollefeys, and A. Geiger, "Convolutional occupancy networks," in _Proceedings of the European conference on computer vision (ECCV)_, 2020, pp. 523-540.
* [35] K. Zhang, F. Luan, Q. Wang, K. Bala, and N. Snavely, "Physg: Inverse rendering with spherical gaussians for physics-based material editing and relighting," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 5453-5462.
* [36] X. Zhang, P. P. Srinivasan, B. Deng, P. Debevec, W. T. Freeman, and J. T. Barron, "Ner-factor: Neural factorization of shape and reflectance under an unknown illumination," _ACM Transactions on Graphics (TOG)_, vol. 40, no. 6, pp. 1-18, 2021.
* [37] J. Knodt, J. Bartusek, S.-H. Baek, and F. Heide, "Neural ray-tracing: Learning surfaces and reflectance for relighting and view synthesis," _arXiv preprint arXiv:2104.13562_, 2021.
* [38] V. Sitzmann, J. Thies, F. Heide, M. Niessner, G. Wetzstein, and M. Zollhofer, "Deepvoxels: Learning persistent 3d feature embeddings," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 2437-2446.
* [39] V. Sitzmann, S. Rezchikov, B. Freeman, J. Tenenbaum, and F. Durand, "Light field networks: Neural scene representations with single-evaluation rendering," _Advances in Neural Information Processing Systems_, vol. 34, pp. 19 313-19 325, 2021.
* [40] M. S. Sajjadi, H. Meyer, E. Pot, U. Bergmann, K. Greff, N. Radwan, S. Vora, M. Lucic, D. Duckworth, A. Dosovitskiy _et al._, "Scene representation transformer: Geometry-free novel view synthesis through set-latent scene representations," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 6229-6238.
* [41] V. Sitzmann, M. Zollhofer, and G. Wetzstein, "Scene representation networks: Continuous 3d-structure-aware neural scene representations," _Advances in Neural Information Processing Systems_, vol. 32, 2019.
* [42] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," _Advances in neural information processing systems_, vol. 30, 2017.
* [43] J. Reizenstein, R. Shapovalov, P. Henzler, L. Sbordone, P. Labatut, and D. Novotny, "Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction," in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 10 901-10 911.
* [44] Y. Liu, S. Peng, L. Liu, Q. Wang, P. Wang, C. Theobalt, X. Zhou, and W. Wang, "Neural rays for occlusion-aware image-based rendering," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 7824-7833.
* [45] M. Suhail, C. Esteves, L. Sigal, and A. Makadia, "Generalizable patch-based neural rendering," in _Proceedings of the European conference on computer vision (ECCV)_, 2022, pp. 156-174.
* [46] P. Wang, X. Chen, T. Chen, S. Venugopalan, Z. Wang _et al._, "Is attention all nerf needs?" _arXiv preprint arXiv:2207.13298_, 2022.
* [47] B. Curless and M. Levoy, "A volumetric method for building complex models from range images," in _Proceedings of the 23rd annual conference on Computer graphics and interactive techniques_, 1996, pp. 303-312.
* [48] Z. J. Tang, T.-J. Cham, and H. Zhao, "Able-nerf: Attention-based rendering with learnable embeddings for neural radiance field," _arXiv preprint arXiv:2303.13817_, 2023.
* [49] C. Cao, X. Ren, and Y. Fu, "Mvsformer: Learning robust image representations via transformers and temperature-based depth for multi-view stereo," _arXiv preprint arXiv:2208.02541_, 2022.
* [50] J. Sun, Y. Xie, L. Chen, X. Zhou, and H. Bao, "Neuralrecon: Real-time coherent 3d reconstruction from monocular video," in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 15 598-15 607.
* [51] Y. Yao, Z. Luo, S. Li, T. Fang, and L. Quan, "Mvssnet: Depth inference for unstructured multi-view stereo," in _Proceedings of the European conference on computer vision (ECCV)_, 2018, pp. 767-783.
* [52] K. Xu, X. Wan, H. Wang, Z. Ren, X. Liao, D. Sun, C. Zeng, and K. Chen, "Tacc: A full-stack cloud computing infrastructure for machine learning tasks," _arXiv preprint arXiv:2110.01556_, 2021. [Online]. Available: [https://arxiv.org/abs/2110.01556](https://arxiv.org/abs/2110.01556)
## Appendix A Implementation Details
### Experimental Environment
Software and hardware environment:
* CUDA version: 11.1
* cuDNN version: 8.0.5
* PyTorch version: 1.10.1
* GPU: Nvidia RTX 3090
* CPU: Intel Xeon Platinum 8180 @ 2.50 GHz \(\times\) 2
### Training Detail
Our model is implemented in PyTorch using the PyTorch Lightning framework. During the training stage, we resize the input image to 640 x 512 and the source views to \(N=4\). To train our model, we employ the Adam optimizer on a single Nvidia 3090 GPU. Initially, the learning rate is set to \(10^{-4}\) and gradually decays to \(10^{-6}\) using a cosine learning rate scheduler. Throughout the training, we use a batch size of 2 and set the number of rays to 1024. To enhance the sampling strategy, we apply a coarse-to-fine approach with both \(N_{coarse}\) and \(N_{fine}\) set to 64. The \(N_{coarse}\) points are uniformly sampled between the near and far plane, while the \(N_{fine}\) points are sampled using importance sampling based on the coarse probability estimation. Regarding the global feature volume \(f^{v}\), we set its resolution to K=128. For inference on DTU, the image resolution is set to 800 x 600. For datasets such as BlendedMVS [11], ETH3D [13], and Tanks & Temples [12], we maintain the original image resolution. Training our model requires approximately 3 days on a single Nvidia 3090 GPU. Moreover, when constructing larger models such as the large and xlarge models by stacking more layers, the training time will naturally increase due to the increased model size.
### Mesh and Point Cloud Generation
Following the settings employed in VolRecon [8], we generate depth maps from virtual viewpoints by shifting the original camera along its \(x\)-axis by \(d=25\) mm. Subsequently, we perform TSDF fusion and applied the Marching Cubes algorithm to merge all the rendered depths into a voxel grid with a resolution of 1.5 mm and extract a mesh representation. For point cloud generation, we initially generate 49 depth maps by leveraging the four nearest source views. These 49 depth maps are then fused together to form a unified point cloud.
## Appendix B Technical Details and Discussion
### Discussion of Hitting Probability
The attention score in ReTR can be interpreted as the probability of a ray being _hit_. However, when using _softmax_, the attention scores for each ray are forced to sum up to 1, implying that every ray should be considered a _hit_. To gain further insights, we examine the distribution of attention scores for rays that are _not hitting_. Figure 7 illustrates the results, demonstrating that the transformer intelligently employs a wider distribution to model rays that do _not hit_. The underlying rationale is that the transformer treats the surrounding air as a medium that contributes to the color.
Figure 7: Hitting probability comparison.
When a ray does _not hit_, the transformer aggregates information from the surrounding air to obtain the color from these mediums.
### Hierarchical Volume Sampling through Attention Map
Given that our framework does not incorporate weights as seen in traditional frameworks like NeRF or NueS, we refine the original hierarchical sampling strategy by substituting the weights with the attention scores of each point. This approach, as discussed in the main text, is both straightforward and impactful. Additionally, we highlight that our method exhibits greater robustness in terms of the number of sampling points compared to the current state-of-the-art techniques, thereby offering an additional advantage within our framework.
### Continous Positional Encoding Proof
To imbue our system with positional awareness of actual distance. we initially derive the formula for the attention score, denotes as \(s\) of features in \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\):
\[s=(\mathbf{f}_{i}^{f}+\mathbf{p}_{i})W_{q}W_{k}^{\top}(\mathbf{f}_{j}^{f}+ \mathbf{p}_{j})^{\top}, \tag{16}\]
where \(\mathbf{p}\) represents the positional encoding in \(\mathbf{x}\). We subsequently expand Eq. (16) as follows:
\[s=(\mathbf{f}_{i}^{f})W_{q}W_{k}^{\top}(\mathbf{f}_{j}^{f})^{\top}+(\mathbf{f} _{i}^{f})W_{q}W_{k}^{\top}(\mathbf{p}_{j})^{\top}+(\mathbf{p}_{i})W_{q}W_{k}^ {\top}(\mathbf{f}_{j}^{f})^{\top}+(\mathbf{p}_{i})W_{q}W_{k}^{\top}(\mathbf{p} _{j})^{\top}, \tag{17}\]
where the fourth component of Eq.(17) denotes the interaction between two locations, and \(W_{q}W_{k}^{\top}\) represents the trainable parameters. To ensure our MLP actual positional awareness, we need to make the function satisfy the following conditions:
\[(\mathbf{p}_{i})(\mathbf{p}_{j})^{\top}=f(t_{j}-t_{i}), \tag{18}\]
where \(t_{j}-t_{i}\) denotes the distance between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). When we apply our positional encoding (PE), the fourth component of Eq.(17) can be simplified as:
\[(\mathbf{p}_{i})(\mathbf{p}_{j})^{\top}= [sin(\beta t_{i}/10000^{2i/D}),cos(\beta t_{i}/10000^{2i/D})] \tag{19}\] \[\times[sin(\beta t_{j}/10000^{2i/D}),cos(\beta t_{j}/10000^{2i/D} )]^{\top},\] \[(\mathbf{p}_{i})(\mathbf{p}_{j})^{\top}= \ sin(\beta t_{i}/10000^{2i/D})sin(\beta t_{i}/10000^{2i/D})\] (20) \[+cos(\beta t_{j}/10000^{2i/D})cos(\beta t_{j}/10000^{2i/D}).\]
By applying the sum-to-product identities, we obtain:
\[(\mathbf{p}_{i})(\mathbf{p}_{j})^{\top}=cos(\beta(t_{j}-t_{i})/10000^{2i/D}), \tag{21}\]
where \((t_{j}-t_{i})\) represents the actual distance between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). Thus, continuous positional encoding enables the attainment of actual positional awareness.
## Appendix C Additional Experimental Results
Here we show additional experiment results:
### Visualization Supplementary
Due to space limitations, we provide additional visual results for the experiments in this section. Specifically, we present the results for **sparse view reconstruction** in Fig. 9 and **full view reconstruction** of the point cloud in Fig. 11. Furthermore, we include the per-scene results for the number of sampling points in Tab. 6.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} Sample Points & scan 24 & scan 37 & scan 40 & scan 55 & scan 65 & scan 65 & scan 69 & scan 83 & scan 97 & scan 105 & scan 106 & scan 110 & scan 114 & scan 118 & scan 122 \\ \hline
16-16 & 1.09 & 2.49 & 1.51 & 1.09 & 1.62 & 1.84 & 0.97 & 1.35 & 1.43 & 1.05 & 1.21 & 0.77 & 0.72 & 1.27 & 1.28 \\
23-22 & 1.06 & 2.30 & 1.46 & 0.97 & 1.38 & 1.53 & 0.89 & 1.38 & 1.34 & 0.92 & 1.10 & 0.34 & 0.60 & 1.10 & 1.17 \\
64-0 & 1.11 & 2.39 & 1.43 & 1.06 & 1.36 & 1.62 & 0.94 & 1.28 & 1.31 & 0.91 & 1.12 & 0.78 & 0.64 & 1.18 & 1.20 \\
12-040 & 1.39 & 2.36 & 1.54 & 1.01 & 1.18 & 1.65 & 0.97 & 1.26 & 1.26 & 0.83 & 1.10 & 0.84 & 0.62 & 1.09 & 1.15 \\ \hline \end{tabular}
\end{table}
Table 6: Chamfer distance of a number of different sampling points, results are shown for each scan under different settings.
### Error Bar of ReTR
In order to assess the reproducibility and robustness of our model, we conduct three separate training runs using different random seeds. The corresponding results are presented in Figure 8. These results demonstrate that our model exhibits consistent performance across multiple training runs, indicating good reproducibility. Moreover, the minimal variance observes in the results further underscores the robustness of our model.
### Effectiveness of Stacking Transformer Blocks
To explore the potential of simulating more complex light transport effects, we extend our learnable rendering approach by stacking multiple layers of transformer blocks. Specifically, we introduce two variations: **ReTR-L**, where we stack two transformer blocks, and **ReTR-XL**, where we stack three transformer blocks. This allows us to experimentally evaluate the effectiveness of a more complex rendering system. The results of these experiments are summarized in Tab. 7. The findings indicate that by overlaying multiple layers of transformers, we can simulate complex lighting effects and achieve more powerful results. This demonstrates the potential of our approach to capture intricate light transport phenomena and enhance the overall rendering capabilities.
### Novel View Synthesis
In order to assess ReTR's performance in Novel View Synthesis, a task where many multi-view stereo techniques struggle, we conduct a quantitative comparison with VolRecon [8]. The novel views are generated during the full reconstruction for fusing the point clouds as we discussed in the main paper. Our results demonstrate a significant improvement over VolRecon in terms of novel view synthesis, as shown in Tab. 8. Additionally, we provide visualizations of novel view synthesis and depth synthesis in Figure 10. It has been challenging to achieve high-quality results simultaneously in rendering-based studies and reconstruction-based studies, with few methods excelling in both aspects. However, the results achieved by our proposed framework, ReTR, are highly promising, which suggests that a learnable rendering approach based on transformers can effectively integrate both tasks, yielding impressive results on both fronts within a unified framework.
## Appendix D Limitations
Our method requires approximately 30 seconds to render a depth map and image with a resolution of \(600\times 800\). Similar to other rendering-based methods such as IBRNet [30], VolRecon [8], and MVSNeRF [31], our approach has limitations in terms of efficiency. While learning-based rendering offers enhanced capabilities, it does introduce additional training parameters compared to traditional volume rendering techniques. Stacking multiple layers of our model can improve performance; however, it also increases training time due to the larger model size. It is important to strike a balance between achieving higher rendering quality and maintaining reasonable computational efficiency. Further research and optimization efforts can be explored to enhance the efficiency of our method,
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline Models & Mean & xen.24 & scin.37 & scan.40 & scan.55 & scan.63 & scan.65 & scan.69 & scan.83 & scan.97 & scan.105 & scan.106 & scan.114 & scan.118 & scan.122 \\ \hline ReTR-B & 1.17 & 1.05 & 2.31 & 1.44 & 0.98 & 1.18 & 1.52 & 0.88 & 1.35 & 1.30 & 0.87 & 1.07 & 0.77 & 0.99 & 1.05 & 1.12 \\ ReTR-XL & 1.16 & 0.98 & 2.26 & 1.59 & 1.00 & 1.41 & 1.56 & 0.90 & 1.35 & 1.26 & 0.86 & 1.06 & 0.78 & 0.57 & 1.01 & 1.07 \\ ReTR-XL & 1.15 & 0.96 & 2.26 & 1.64 & 0.94 & 1.19 & 1.59 & 0.86 & 1.32 & 1.25 & 0.85 & 1.02 & 0.75 & 0.55 & 1.02 & 1.11 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Result of ReTR-Base, ReTR-Large and ReTR-XLarge evaluated on DTU under 3 views setting. We report chamfer distance, the lower the better.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & PSNR\(\uparrow\) & MSE\(\downarrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline MVSNeRF* & 25.92 & 0.003 & **0.89** & **0.19** \\ VolRecon* & 23.37 & 0.004 & 0.80 & 0.30 \\ ReTR-B & 25.88 & 0.004 & 0.83 & 0.28 \\ ReTR-L & 26.03 & 0.003 & 0.84 & 0.27 \\ ReTR-XL & **26.33** & **0.003** & 0.84 & 0.27 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Novel View synthesis result on DTU, * denotes our reproduced result.
potentially through techniques such as model compression, parallelization, or hardware acceleration. Acknowledging these limitations, we aim to provide a comprehensive understanding of the trade-offs between rendering quality, efficiency, and model complexity within our proposed framework.
## Appendix E Broader Impacts
The proposed ReTR framework not only enables accurate surface reconstruction through learnable rendering but also generates high-quality novel views. These capabilities open up possibilities for various downstream applications in fields such as virtual reality (VR), robotics, and view synthesis with geometry. While these applications offer numerous benefits, it is important to acknowledge that they also come with ethical considerations. As authors of the ReTR framework, we are committed to promoting ethical practices and responsible development. We recognize the potential for misuse, such as generating content without consent, and we prioritize fair representation and responsible usage of the technology. We strive to adhere to ethical guidelines and contribute to the development of responsible AI practices. It is crucial to ensure that technological advancements are leveraged for the betterment of society while minimizing potential negative impacts. By maintaining a focus on ethics, fairness, and responsible development, we aim to ensure that ReTR and its applications are aligned with the principles of responsible AI and contribute positively to the broader scientific community and society as a whole.
Figure 8: Error bar on 3 runs of Ours (ReTR).
Figure 10: View synthesis and depth synthesis visualization of our proposed ReTR.
Figure 9: Comparison of VolRecon and ReTR in sparse view reconstruction with 3 input views.
Figure 11: Visualization of full view generalization of a point cloud of our proposed ReTR. | ## Review
### Summary
The paper presents a novel framework called ReTR for generalizable neural surface reconstruction, leveraging transformers to model complex photon-particle interactions in rendering. It identifies limitations in traditional volume rendering and proposes a solution that replaces it with a learned rendering approach. Extensive experiments across multiple datasets demonstrate its superior performance compared to existing state-of-the-art methods. The paper is generally well-structured and provides valuable insights, although some aspects of the methodology could be clarified further.
### Strengths
- The framework presents a novel approach to modeling light transport and enhances feature representation through transformers.
- Extensive experiments validate the effectiveness of the proposed method, showing superior performance compared to state-of-the-art baselines.
- The paper is well-written and flows smoothly, effectively conveying the motivation and methodology.
### Weaknesses
- The contribution seems somewhat incremental compared to existing works, particularly regarding the hybrid feature extraction method and the use of transformers.
- Certain sections lack clarity, such as the definitions of key terms and equations, which may confuse readers.
- The qualitative comparisons are limited, primarily focusing on VolRecon without sufficient visual comparisons to other relevant methods.
- The paper does not adequately address timing evaluations for training and inference, which are crucial for assessing the practical usability of the proposed methods.
### Questions
- How does the performance of the proposed transformer-based method compare with traditional optimization-based methods?
- What are the specific metrics (e.g., PSNR) compared to other baselines?
- Could the authors clarify the conditions under which their method can generalize to unseen regions of objects?
- How does the framework account for intricate global physical effects in light transport modeling?
### Soundness
**Score:** 3
**Description:** Good. The methodology is generally sound, though some explanations and definitions need clarification to ensure comprehensive understanding.
### Presentation
**Score:** 3
**Description:** Good. The paper is well-structured, but some sections require more clarity and detail to enhance readability.
### Contribution
**Score:** 3
**Description:** Good. The proposed approach provides valuable insights and improvements over existing methods, although it may be seen as somewhat incremental.
### Rating
**Score:** 6
**Description:** Weak Accept: The paper is technically solid and presents a moderate-to-high impact contribution, but it has some weaknesses that need addressing.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper introduces a novel approach to a well-studied problem in neural surface reconstruction. Despite some concerns regarding clarity and the incremental nature of certain contributions, the overall soundness, presentation, and experimental validation support an acceptance decision. The proposed method's effective performance across multiple datasets indicates its potential significance in the field.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Deep Learning with Kernels through RKHM and the Perron-Frobenius Operator
Yuka Hashimoto
NTT Network Service Systems Laboratories /
RIKEN AIP,
Tokyo, Japan
[email protected]
&Masahiro Ikeda
RIKEN AIP / Keio University,
Tokyo, Japan
[email protected]
Hachem Kadri
Aix-Marseille University, CNRS, LIS,
Marseille, France
[email protected]
###### Abstract
Reproducing kernel Hilbert \(C^{*}\)-module (RKHM) is a generalization of reproducing kernel Hilbert space (RKHS) by means of \(C^{*}\)-algebra, and the Perron-Frobenius operator is a linear operator related to the composition of functions. Combining these two concepts, we present deep RKHM, a deep learning framework for kernel methods. We derive a new Rademacher generalization bound in this setting and provide a theoretical interpretation of benign overfitting by means of Perron-Frobenius operators. By virtue of \(C^{*}\)-algebra, the dependency of the bound on output dimension is milder than existing bounds. We show that \(C^{*}\)-algebra is a suitable tool for deep learning with kernels, enabling us to take advantage of the product structure of operators and to provide a clear connection with convolutional neural networks. Our theoretical analysis provides a new lens through which one can design and analyze deep kernel methods.
## 1 Introduction
Kernel methods and deep neural networks are two major topics in machine learning. Originally, they had been investigated independently. However, their interactions have been researched recently. One important perspective is deep kernel learning [1, 2, 3]. In this framework, we construct a function with the composition of functions in RKHSs, which is learned by given training data. Representer theorems were shown for deep kernel methods, which guarantee the representation of solutions of minimization problems only with given training data [4, 5]. We can combine the flexibility of deep neural networks with the representation power and solid theoretical understanding of kernel methods. Other important perspectives are neural tangent kernel [6, 7] and convolutional kernel [8], which enable us to understand neural networks using the theory of kernel methods. In addition, Bietti et al. [9] proposed a regularization of deep neural network through a kernel perspective.
The generalization properties of kernel methods and deep neural networks have been investigated. One typical technique for bounding generalization errors is to use the Rademacher complexity [10, 11]. For kernel methods, generalization bounds based on the Rademacher complexity can be derived by the reproducing property. Bounds for deep kernel methods and vector-valued RKHSs (vvRKHSs) were also derived [5, 12, 13]. Table 1 shows existing Rademacher generalization bounds for kernel methods. Generalization bounds for deep neural networks have also been actively studied [14, 15, 16, 17, 18, 19]. Recently, analyzing the generalization property using the concept of benign overfittinghas emerged [20; 21]. Unlike the classical interpretation of overfitting, which is called catastrophic overfitting, it explains the phenomenon that the model fits both training and test data. For kernel regression, it has been shown that the type of overfitting can be described by an integral operator associated with the kernel [22].
In this paper, we propose deep RKHM to make the deep kernel methods more powerful. RKHM is a generalization of RKHS by means of \(C^{*}\)-algebra [23; 24; 25], where \(C^{*}\)-algebra is a generalization of the space of complex numbers, regarded as space of operators. We focus on the \(C^{*}\)-algebra of matrices in this paper. Applications of RKHMs to kernel methods have been investigated recently [26; 27]. We generalize the concept of deep kernel learning to RKHM, which constructs a map from an operator to an operator as the composition of functions in RKHMs. The product structure of operators induces interactions among elements of matrices, which enables us to capture relations between data components. Then, we derive a generalization bound of the proposed deep RKHM. By virtue of \(C^{*}\)-algebras, we can use the operator norm, which alleviates the dependency of the generalization bound on the output dimension.
We also use Perron-Frobenius operators, which are linear operators describing the composition of functions and have been applied to analyzing dynamical systems [28; 29; 30; 31], to derive the bound. The compositions in the deep RKHM are effectively analyzed by the Perron-Frobenius operators.
\(C^{*}\)-algebra and Perron-Frobenius operator are powerful tools that provide connections of the proposed deep RKHM with existing studies. Since the norm of the Perron-Frobenius operator is described by the Gram matrix associated with the kernel, our bound shows a connection of the deep RKHM with benign overfitting. In addition, the product structure of a \(C^{*}\)-algebra enables us to provide a connection between the deep RKHMs and convolutional neural networks (CNNs).
Our main contributions are as follows.
* We propose deep RKHM, which is a generalization of deep kernel method by means of \(C^{*}\)-algebra. We can make use of the products in \(C^{*}\)-algebra to induce interactions among data components. We also show a representer theorem to guarantee the representation of solutions only with given data.
* We derive a generalization bound for deep RKHM. The dependency of the bound on the output dimension is milder than existing bounds by virtue of \(C^{*}\)-algebras. In addition, the Perron-Frobenius operators provide a connection of our bound with benign overfitting.
* We show connections of our study with existing studies such as CNNs and neural tangent kernel. We emphasize that our theoretical analysis using \(C^{*}\)-algebra and Perron-Frobenius operators gives a new and powerful lens through which one can design and analyze kernel methods.
## 2 Preliminaries
### \(C^{*}\)-algebra and reproducing kernel \(C^{*}\)-module
\(C^{*}\)-algebra, which is denoted by \(\mathcal{A}\) in the following, is a Banach space equipped with a product and an involution satisfying the \(C^{*}\) identity (condition 3 below).
**Definition 2.1** (\(C^{*}\)-algebra): _A set \(\mathcal{A}\) is called a \(C^{*}\)-algebra if it satisfies the following conditions:_
1. \(\mathcal{A}\) _is an algebra over_ \(\mathbb{C}\) _and equipped with a bijection_ \((\cdot)^{*}:\mathcal{A}\rightarrow\mathcal{A}\) _that satisfies the following conditions for_ \(\alpha,\beta\in\mathbb{C}\) _and_ \(a,b\in\mathcal{A}\)_:_ \[\bullet\;(\alpha a+\beta b)^{*}=\overline{\alpha}a^{*}+\overline{\beta}b^{*}, \qquad\bullet(ab)^{*}=b^{*}a^{*},\qquad\bullet(a^{*})^{*}=a.\]
\begin{table}
\begin{tabular}{c|c|c|c} Reproducing space & Output dimension & Shallow & Deep \\ \hline RKHS & 1 & \(O(\sqrt{1/n})\)[10] & \(O(A^{L}\sqrt{1/n})\) \\ vvRKHS & \(d\) & \(O(\sqrt{d/n})\)[12; 13] & \(O(A^{L}\sqrt{d/n})\)[5] \\ RKHM (existing) & \(d\) & \(O(\sqrt{d/n})\)[27] & \(-\) \\ RKHM (ours) & \(d\) & \(O(d^{1/4}/\sqrt{n})\) & \(O(B^{L}d^{1/4}\sqrt{n})\) \\ \end{tabular}
\end{table}
Table 1: Existing generalization bounds for kernel methods based on the Rademacher complexity and our bound (\(n\): sample size, \(A\): Lipschitz constant regarding the positive definite kernel, \(B\): The norm of the Perron–Frobenius operator)2. \(\mathcal{A}\) _is a normed space endowed with_ \(\|\cdot\|_{\mathcal{A}}\)_, and for_ \(a,b\in\mathcal{A}\)_,_ \(\|ab\|_{\mathcal{A}}\leq\|a\|_{\mathcal{A}}\|b\|_{\mathcal{A}}\) _holds. In addition,_ \(\mathcal{A}\) _is complete with respect to_ \(\|\cdot\|_{\mathcal{A}}\)_._
3. _For_ \(a\in\mathcal{A}\)_, the_ \(C^{*}\) _identity_ \(\|a^{*}a\|_{\mathcal{A}}=\|a\|_{\mathcal{A}}^{2}\) _holds._
**Example 2.2**: _A typical example of \(C^{*}\)-algebras is the \(C^{*}\)-algebra of \(d\) by \(d\) circulant matrices. Another example is the \(C^{*}\)-algebra of \(d\) by \(d\) block matrices with \(M\) blocks and their block sizes are \(\mathbf{m}=(m_{1},\ldots,m_{M})\). We denote them by \(Circ(d)\) and \(Block(\mathbf{m},d)\), respectively. See [27, 32] for more details about these examples._
We now define RKHM. Let \(\mathcal{X}\) be a non-empty set for data.
**Definition 2.3** (\(\mathcal{A}\)-valued positive definite kernel): _An \(\mathcal{A}\)-valued map \(k:\mathcal{X}\times\mathcal{X}\rightarrow\mathcal{A}\) is called a positive definite kernel if it satisfies the following conditions: \(\bullet\)\(k(x,y)=k(y,x)^{*}\) for \(x,y\in\mathcal{X}\), \(\bullet\)\(\sum_{i,j=1}^{n}c_{i}^{*}k(x_{i},x_{j})c_{j}\) is positive semi-definite for \(n\in\mathbb{N}\), \(c_{i}\in\mathcal{A}\), \(x_{i}\in\mathcal{X}\)._
Let \(\phi:\mathcal{X}\rightarrow\mathcal{A}^{\mathcal{X}}\) be the _feature map_ associated with \(k\), defined as \(\phi(x)=k(\cdot,x)\) for \(x\in\mathcal{X}\) and let \(\mathcal{M}_{k,0}=\{\sum_{i=1}^{n}\phi(x_{i})c_{i}|\ n\in\mathbb{N},\ c_{i}\in \mathcal{A},\ x_{i}\in\mathcal{X}\ (i=1,\ldots,n)\}\). We can define an \(\mathcal{A}\)-valued map \(\langle\cdot,\cdot\rangle_{\mathcal{M}_{k}}:\mathcal{M}_{k,0}\times\mathcal{M} _{k,0}\rightarrow\mathcal{A}\) as
\[\big{\langle}\sum_{i=1}^{n}\phi(x_{i})c_{i},\sum_{j=1}^{l}\phi(y_{j})b_{j} \big{\rangle}_{\mathcal{M}_{k}}:=\sum_{i=1}^{n}\sum_{j=1}^{l}c_{i}^{*}k(x_{i},y_{j})b_{j},\]
which enjoys the reproducing property \(\langle\phi(x),f\rangle_{\mathcal{M}_{k}}=f(x)\) for \(f\in\mathcal{M}_{k,0}\) and \(x\in\mathcal{X}\). The _reproducing kernel Hilbert \(\mathcal{A}\)-module (RKHM)_\(\mathcal{M}_{k}\) associated with \(k\) is defined as the completion of \(\mathcal{M}_{k,0}\). See, for example, the references [33, 26, 34] for more details about \(C^{*}\)-algebra and RKHM.
### Perron-Frobenius operator on RKHM
We introduce Perron-Frobenius operator on RKHM [35]. Let \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) be nonempty sets and let \(k_{1}\) and \(k_{2}\) be \(\mathcal{A}\)-valued positive definite kernels. Let \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) be RKHMs associated with \(k_{1}\) and \(k_{2}\), respectively. Let \(\phi_{1}\) and \(\phi_{2}\) be the feature maps of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), respectively. We begin with the standard notion of linearity in RKHMs.
**Definition 2.4** (\(\mathcal{A}\)-linear): _A linear map \(A:\mathcal{M}_{1}\rightarrow\mathcal{M}_{2}\) is called \(\mathcal{A}\)-linear if for any \(a\in\mathcal{A}\) and \(w\in\mathcal{M}_{1}\), we have \(A(wa)=(Aw)a\)._
**Definition 2.5** (Perron-Frobenius operator): _Let \(f:\mathcal{X}_{1}\rightarrow\mathcal{X}_{2}\) be a map. The Perron-Frobenius operator with respect to \(f\) is an \(\mathcal{A}\)-linear map \(P_{f}:\mathcal{M}_{1}\rightarrow\mathcal{M}_{2}\) satisfying_
\[P_{f}\phi_{1}(x)=\phi_{2}(f(x)).\]
Note that a Perron-Frobenius operator is not always well-defined since \(ac=bc\) for \(a,b\in\mathcal{A}\) and nonzero \(c\in\mathcal{A}\) does not always mean \(a=b\).
**Definition 2.6** (\(\mathcal{A}\)-linearly independent): _The set \(\{\phi_{1}(x)\ |\ x\in\mathcal{X}\}\) is called \(\mathcal{A}\)-linearly independent if it satisfies the following condition: For any \(n\in\mathbb{N}\), \(x_{1},\ldots,x_{n}\in\mathcal{X}\), and \(c_{1},\ldots,c_{n}\in\mathcal{A}\), "\(\sum_{i=1}^{n}\phi_{1}(x_{i})c_{i}=0\)" is equivalent to "\(c_{i}=0\) for all \(i=1,\ldots,n\)"._
**Lemma 2.7**: _If \(\{\phi_{1}(x)\ |\ x\in\mathcal{X}\}\) is \(\mathcal{A}\)-linearly independent, then \(P_{f}\) is well-defined._
The following lemma gives a sufficient condition for \(\mathcal{A}\)-linearly independence.
**Lemma 2.8**: _Let \(k_{1}=\tilde{k}a\), i.e., \(k\) is separable, for an invertible operator \(a\) and a complex-valued kernel \(\tilde{k}\). Assume \(\{\tilde{\phi}(x)\ |\ x\in\mathcal{X}\}\) is linearly independent (e.g. \(\tilde{k}\) is Gaussian or Laplacian), where \(\tilde{\phi}\) is the feature map associated with \(\tilde{k}\). Then, \(\{\phi_{1}(x)\ |\ x\in\mathcal{X}\}\) is \(\mathcal{A}\)-linearly independent._
Note that separable kernels are widely used in existing literature of vvRKHS (see e.g. [36]). Lemma 2.8 guarantees the validity of the analysis with Perron-Frobenius operators, at least in the separable case. This provides a condition for "good kernels" by means of the well-definedness of the Perron-Frobenius operator.
NotationWe denote the Euclidean inner product and norm by \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\) without the subscript. For \(a\in\mathcal{A}\), let \(|a|_{\mathcal{A}}\) be the unique element in \(\mathcal{A}\) satisfying \(|a|_{\mathcal{A}}^{2}=a^{*}a\). If \(a\) is a matrix, \(\|a\|_{\mathrm{HS}}\) is the Hilbert-Schmidt norm of \(a\). The operator norm of a linear operator \(A\) on an RKHM is denoted by \(\|A\|_{\mathrm{op}}\). All the technical proofs are documented in the supplementary.
## 3 Deep RKHM
We now construct an \(L\)-layer deep RKHM. Let \(\mathcal{A}=\mathbb{C}^{d\times d}\) be the \(C^{*}\)-algebra of \(d\) by \(d\) matrices. Let \(\mathcal{A}_{0},\ldots,\mathcal{A}_{L}\) be \(C^{*}\)-subalgebras of \(\mathcal{A}\) and for \(j=1,\ldots,L\) and \(k_{j}:\mathcal{A}_{j-1}\times\mathcal{A}_{j-1}\rightarrow\mathcal{A}_{j}\) be an \(\mathcal{A}_{j}\)-valued positive definite kernel for the \(j\)th layer. For \(j=1,\ldots,L\), we denote by \(\mathcal{M}_{j}\) the RKHM over \(\mathcal{A}_{j}\) associated with \(k_{j}\). In addition, we denote by \(\tilde{\mathcal{M}}_{j}\) the RKHM over \(\mathcal{A}\) associated with \(k_{j}\). Note that \(\mathcal{M}_{j}\) is a subspace of \(\tilde{\mathcal{M}}_{j}\) and for \(u,v\in\mathcal{M}_{j}\), we have \(\langle u,v\rangle_{\mathcal{M}_{j}}=\langle u,v\rangle_{\mathcal{M}_{j}}\). We set the function space corresponding to each layer as \(\mathcal{F}_{L}=\{f\in\mathcal{M}_{L}\ \mid\ f(x)\in\mathbb{R}^{d\times d}\) for any \(x\in\mathcal{A}_{L-1},\ \|f\|_{\mathcal{M}_{L}}\leq B_{L}\}\) and \(\mathcal{F}_{j}=\{f\in\mathcal{M}_{j}\ |\ f(x)\in\mathbb{R}^{d\times d}\) for any \(x\in\mathcal{A}_{j-1},\ \|P_{f}\|_{\mathrm{op}}\leq B_{j}\}\) for \(j=1,\ldots,L-1\). Here, for \(f\in\mathcal{M}_{j}\) with \(j=1,\ldots,L-1\), \(P_{f}\) is the Perron-Frobenius operator with respect to \(f\) from \(\tilde{\mathcal{M}}_{j}\) to \(\tilde{\mathcal{M}}_{j+1}\). We assume the well-definedness of these operators. Then, we set the class of deep RKHM as
\[\mathcal{F}_{L}^{\mathrm{deep}}=\{f_{L}\circ\cdots\circ f_{1}\ \mid\ f_{j} \in\mathcal{F}_{j}\ (j=1,\ldots,L)\}.\]
Figure 1 schematically shows the structure of the deep RKHM.
**Example 3.1**: _We can set \(\mathcal{A}_{j}=Block((m_{1,j},\ldots,m_{M_{l},j}),d)\), the \(C^{*}\)-algebra of block diagonal matrices. By setting \(M_{1}\leq\cdots\leq M_{l}\) and \(M_{l}\geq\cdots\geq M_{L}\) for \(l<L\), the number of nonzero elements in \(\mathcal{A}_{j}\) decreases during \(1\leq j\leq l\) and increases during \(l\leq j\leq L\). This construction is regarded as an autoencoder, where the \(1\sim l\)th layer corresponds to the encoder and the \(l+1\sim L\)th layer corresponds to the decoder._
Advantage over existing deep kernel methodsNote that deep kernel methods with RKHSs and vvRKHSs have been proposed [1; 4; 2]. Autoencoders using RKHSs and vvRKHSs were also proposed [3; 5]. We have at least three advantages of deep RKHM over deep vvRKHSs or RKHSs: 1) useful structures of matrices stated in Remark 5.2, 2) availability of the operator norm in the generalization bound stated in Section 4, 3) connection with CNNs stated in Subsection 6.1.
## 4 Generalization Bound
We derive a generalization error for deep RKHM. To derive the bound, we bound the Rademacher complexity, which is one of the typical methods on deriving generalization bounds [11]. Let \(\Omega\) be a probability space equipped with a probability measure \(P\). We denote the integral \(\int_{\Omega}s(\omega)\mathrm{d}P(\omega)\) of a measurable function \(s\) on \(\Omega\) by \(\mathrm{E}[s]\). Let \(x_{1},\ldots,x_{n}\) and \(y_{1},\ldots,y_{n}\) be input and output samples from the distributions of \(\mathcal{A}_{0}\) and \(\mathcal{A}_{L}\)-valued random variables \(x\) and \(y\), respectively. Let \(\sigma_{i,j}\ (i=1,\ldots,d,\ j=1,\ldots,n)\) be i.i.d. Rademacher variables. Let \(\sigma_{j}=(\sigma_{1,j},\ldots,\sigma_{d,j})\). For an \(\mathbb{R}^{d}\)-valued function class \(\mathcal{F}\) and \(\mathbf{x}=(x_{1},\ldots,x_{n})\), the empirical Rademacher complexity \(\hat{R}_{n}(\mathbf{x},\mathcal{F})\) is defined by \(\hat{R}_{n}(\mathbf{x},\mathcal{F}):=\mathrm{E}[\sup_{f\in\mathcal{F}}\sum_{ i=1}^{n}\langle\sigma_{i},f(x_{i})\rangle]/n\).
### Bound for shallow RKHMs
We use the operator norm to derive a bound, whose dependency on output dimension is milder than existing bounds. Availability of the operator norm is one of the advantages of considering
Figure 1: Overview of the proposed deep RKHM. The small blue squares represent matrix elements. In the case of the autoencoder (see Example 3.1), \(f_{1}\circ f_{2}\) is the encoder, and \(f_{3}\circ f_{4}\) is the decoder.
\(C^{*}\)-algebras (RKHMs) instead of vectors (vvRKHSs). Note that although we can also use the Hilbert-Schmidt norm for matrices, it tends to be large as the dimension \(d\) becomes large. On the other hand, the operator norm is defined independently of the dimension \(d\). Indeed, the Hilbert-Schmidt norm of a matrix \(a\in\mathbb{C}^{d\times d}\) is calculated as \((\sum_{i=1}^{d}s_{i}^{2})^{1/2}\), where \(s_{i}\) is the \(i\)th singular value of \(a\). The operator norm of \(a\) is the largest singular value of \(a\).
To see the derivation of the bound, we first focus on the case of \(L=1\), i.e., the network is shallow. Let \(E>0\). For a space \(\mathcal{F}\) of \(\mathcal{A}\)-valued functions on \(\mathcal{A}_{0}\), let \(\mathcal{G}(\mathcal{F})=\{(x,y)\mapsto f(x)-y\;\mid\;f\in\mathcal{F},\|y\|_{ \mathcal{A}}\leq E\}\). The following theorem shows a bound for RKHMs with the operator norm.
**Theorem 4.1**: _Assume there exists \(D>0\) such that \(\|k_{1}(x,x)\|_{\mathcal{A}}\leq D\) for any \(x\in\mathcal{A}_{0}\). Let \(\tilde{K}=4\sqrt{2}(\sqrt{D}B_{1}+E)B_{1}\) and \(\tilde{M}=6(\sqrt{D}B_{1}+E)^{2}\). Let \(\delta\in(0,1)\). Then, for any \(g\in\mathcal{G}(\mathcal{F}_{1})\), where \(\mathcal{F}_{1}\) is defined in Section 3, with probability at least \(1-\delta\), we have_
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]\|_{\mathcal{A}}\leq\left\|\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right\|_{\mathcal{A}}+\frac{ \tilde{K}}{n}\bigg{(}\sum_{i=1}^{n}\mathrm{tr}\,k_{1}(x_{i},x_{i})\bigg{)}^{1/ 2}+\tilde{M}\sqrt{\frac{\log(2/\delta)}{2n}}.\]
Theorem 4.1 is derived by the following lemmas. We first fix a vector \(p\in\mathbb{R}^{d}\) and consider the operator-valued loss function acting on \(p\). We first show a relation between the generalization error and the Rademacher complexity of vector-valued functions. Then, we bound the Rademacher complexity. Since the bound does not depend on \(p\), we can finally remove the dependency on \(p\).
**Lemma 4.2**: _Let \(\mathcal{F}\) be a function class of \(\mathbb{R}^{d\times d}\)-valued functions on \(\mathcal{A}_{0}\) bounded by \(C\) (i.e., \(\|f(x)\|_{\mathcal{A}}\leq C\) for any \(x\in\mathcal{A}_{0}\)). Let \(\tilde{\mathcal{G}}(\mathcal{F},p)=\{(x,y)\mapsto\|(f(x)-y)p\|^{2}\;\mid\;f\in \mathcal{F},\|y\|_{\mathcal{A}}\leq E\}\) and \(M=2(C+E)^{2}\). Let \(p\in\mathbb{R}^{d}\) satisfy \(\|p\|=1\) and let \(\delta\in(0,1)\). Then, for any \(g\in\tilde{\mathcal{G}}(\mathcal{F},p)\), with probability at least \(1-\delta\), we have_
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]^{1/2}p\|^{2}\leq\left\|\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right\|_{\mathcal{A}}+2\hat{ R}_{n}(\mathbf{x},\tilde{\mathcal{G}}(\mathcal{F},p))+3M\sqrt{\frac{\log(2/\delta)}{2 n}}.\]
**Lemma 4.3**: _With the same notations in Lemma 4.2, let \(K=2\sqrt{2}(C+E)\). Then, we have \(\hat{R}_{n}(\mathbf{x},\mathcal{G}(\mathcal{F},p))\leq K\hat{R}_{n}(\mathbf{x},\mathcal{F}p)\), where \(\mathcal{F}p=\{x\mapsto f(x)p\;\mid\;f\in\mathcal{F}\}\)._
**Lemma 4.4**: _Let \(p\in\mathbb{R}^{d}\) satisfy \(\|p\|=1\). For \(\mathcal{F}_{1}\) defined in Section 3, we have_
\[\hat{R}_{n}(\mathbf{x},\mathcal{F}_{1}p)\leq\frac{B_{1}}{n}\Big{(}\sum_{i=1}^{ n}\mathrm{tr}(k(x_{i},x_{i}))\Big{)}^{1/2}.\]
### Bound for deep RKHMs
We now generalize Theorem 4.1 to the deep setting (\(L\geq 2\)) using the Perron-Frobenius operators.
**Theorem 4.5**: _Assume there exists \(D>0\) such that \(\|k_{L}(x,x)\|_{\mathcal{A}}\leq D\) for any \(x\in\mathcal{A}_{0}\). Let \(\tilde{K}=4\sqrt{2}(\sqrt{D}B_{L}+E)B_{1}\cdots B_{L}\) and \(\tilde{M}=6(\sqrt{D}B_{L}+E)^{2}\). Let \(\delta\in(0,1)\). Then, for any \(g\in\mathcal{G}(\mathcal{F}_{L}^{\mathrm{deep}})\), with probability at least \(1-\delta\), we have_
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]\|_{\mathcal{A}}\leq\left\|\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right\|_{\mathcal{A}}+\frac{ \tilde{K}}{n}\Big{(}\sum_{i=1}^{n}\mathrm{tr}\,k_{1}(x_{i},x_{i})\Big{)}^{1/2} +\tilde{M}\sqrt{\frac{\log(2/\delta)}{2n}}.\]
We use the following proposition and Lemmas 4.2 and 4.3 to show Theorem 4.5. The key idea of the proof is that by the reproducing property and the definition of the Perron-Frobenius operator, we get \(f_{L}\circ\cdots\circ f_{1}(x)=\langle\phi_{L}(f_{L-1}\circ\cdots\circ f_{1}(x) ),f_{L}\rangle_{\tilde{\mathcal{M}}_{L}}=\big{\langle}P_{f_{L-1}}\cdots P_{f_ {1}}\phi(x),f_{L}\big{\rangle}_{\tilde{\mathcal{M}}_{L}}\).
**Proposition 4.6**: _Let \(p\in\mathbb{R}^{d}\) satisfy \(\|p\|=1\). Then, we have_
\[\hat{R}_{n}(\mathbf{x},\mathcal{F}_{L}^{\mathrm{deep}}p)\leq\frac{1}{n}\sup_{(f _{j}\in\mathcal{F}_{j})_{j}}\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\tilde{\mathcal{V}} (\mathbf{x})}\|_{\mathrm{op}}\;\|f_{L}\|_{\mathcal{M}_{L}}\;\Big{(}\sum_{i=1 }^{n}\mathrm{tr}(k_{1}(x_{i},x_{i}))\Big{)}^{1/2}.\]
_Here, \(\tilde{\mathcal{V}}(\mathbf{x})\) is the submodule of \(\tilde{\mathcal{M}}_{1}\) generated by \(\phi_{1}(x_{1}),\ldots\phi_{1}(x_{n})\)._
**Corollary 4.7**: _Let \(p\in\mathbb{R}^{d}\) satisfy \(\|p\|=1\). Then, we have_
\[\hat{R}_{n}(\mathbf{x},\mathcal{F}_{L}^{\mathrm{deep}}p)\leq\frac{1}{n}B_{1} \cdots B_{L}\Bigm{(}\sum_{i=1}^{n}\mathrm{tr}(k_{1}(x_{i},x_{i}))\Big{)}^{1/2}.\]
Comparison to deep vvRKHSWe can also regard \(\mathbb{C}^{d\times d}\) as the Hilbert space equipped with the Hilbert-Schmidt inner product, i.e., we can flatten matrices and get \(d^{2}\)-dimensional Hilbert space. In this case, the corresponding operator-valued kernel is the multiplication operator of \(k(x,y)\), which we denote by \(M_{k(x,y)}\). Then, we can apply existing results for vvRKHSs [5; 12], which involve the term \((\sum_{i=1}^{n}\mathrm{tr}(M_{k(x_{i},x_{i})}))^{1/2}\). It is calculated as
\[\sum_{i=1}^{n}\mathrm{tr}(M_{k(x_{i},x_{i})})=\sum_{i=1}^{n}\sum_{j,l=1}^{d} \left\langle e_{jl},k(x_{i},x_{i})e_{jl}\right\rangle_{\mathrm{HS}}=\sum_{i=1 }^{n}\sum_{j,l=1}^{d}k(x_{i},x_{i})_{l,l}=d\sum_{i=1}^{n}\mathrm{tr}\,k(x_{i}, x_{i}).\]
Thus, using the existing approaches, we have the factor \((d\sum_{i=1}^{n}\mathrm{tr}\,k(x_{i},x_{i}))^{1/2}\). On the other hand, we have the smaller factor \((\sum_{i=1}^{n}\mathrm{tr}\,k(x_{i},x_{i}))^{1/2}\) in Theorems 4.1 and 4.5. Using the operator norm, we can reduce the dependency on the dimension \(d\).
## 5 Learning Deep RKHMs
We focus on the practical learning problem. To learn deep RKHMs, we consider the following minimization problem based on the generalization bound derived in Section 4:
\[\min_{(f_{j}\in\mathcal{M}_{j})_{j}}\left\|\frac{1}{n}\sum_{i=1}^{n}|f_{L} \circ\cdots\circ f_{1}(x_{i})-y_{i}|_{\mathcal{A}}^{2}\right\|_{\mathcal{A}}+ \lambda_{1}\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\nu}(\mathbf{x})}\|_{\mathrm{ op}}+\lambda_{2}\|f_{L}\|_{\mathcal{M}_{L}}. \tag{1}\]
The second term regarding the Perron-Frobenius operators comes from the bound in Proposition 4.6. We try to reduce the generalization error by reducing the magnitude of the norm of the Perron-Frobenius operators.
### Representer theorem
We first show a representer theorem to guarantee that a solution of the minimization problem (1) is represented only with given samples.
**Proposition 5.1**: _Let \(h:\mathcal{A}^{n}\times\mathcal{A}^{n}\rightarrow\mathbb{R}_{+}\) be an error function, let \(g_{1}\) be an \(\mathbb{R}_{+}\)-valued function on the space of bounded linear operators on \(\tilde{\mathcal{M}}_{1}\), and let \(g_{2}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) satisfy \(g_{2}(a)\leq g_{2}(b)\) for \(a\leq b\). Assume the following minimization problem has a solution:_
\[\min_{(f_{j}\in\mathcal{M}_{j})_{j}}h(f_{L}\circ\cdots\circ f_{1}(x_{1}), \ldots,f_{L}\circ\cdots\circ f_{1}(x_{n}))+g_{1}(P_{f_{L-1}}\cdots P_{f_{L}}|_ {\hat{\nu}(\mathbf{x})})+g_{2}(\|f_{L}\|_{\mathcal{M}_{L}}).\]
_Then, there exists a solution admitting a representation of the form \(f_{j}=\sum_{i=1}^{n}\phi_{j}(x_{i}^{j-1})c_{i,j}\) for some \(c_{1,j},\ldots,c_{n,j}\in\mathcal{A}\) and for \(j=1,\ldots,L\). Here, \(x_{i}^{j}=f_{j}\circ\cdots\circ f_{1}(x_{i})\) for \(j=1,\ldots,L\) and \(x_{i}^{0}=x_{i}\)._
**Remark 5.2**: _An advantage of deep RKHM compared to deep vvRKHS is that we can make use of the structure of matrices. For example, the product of two diagonal matrices is calculated by the element-wise product of diagonal elements. Thus, when \(\mathcal{A}_{1}=\cdots=\mathcal{A}_{L}=Block((1,\ldots,1),d)\), interactions among elements in an input are induced only by the kernels, not by the product \(k_{j}(x,x_{i}^{j-1})\cdot c_{i,j}\). That is, the form of interactions does not depend on the learning parameter \(c_{i,j}\). On the other hand, if we set \(\mathcal{A}_{j}=Block(\mathbf{m}_{j},d)\) with \(\mathbf{m}_{j}=(m_{1,j},\ldots,m_{M_{j},j})\neq(1,\ldots,1)\), then at the \(j\)th layer, we get interactions among elements in the same block through the product \(k_{j}(x,x_{i}^{j-1})\cdot c_{i,j}\). In this case, the form of interactions is learned through \(c_{i,j}\). For example, the part \(M_{1}\leq\cdots\leq M_{l}\) (the encoder) in Example 3.1 tries to gradually reduce the dependency among elements in the output of each layer and describe the input with a small number of variables. The part \(M_{l}\geq\cdots\geq M_{L}\) (the decoder) tries to increase the dependency._
### Computing the norm of the Perron-Frobenius operator
We discuss the practical computation and the role of the factor \(\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\mathcal{V}}({\mathbf{x}})}\|_{\mathrm{op}}\) in Eq. (1). Let \(G_{j}\in\mathcal{A}^{n\times n}\) be the Gram matrix whose \((i,l)\)-entry is \(k_{j}(x_{i}^{j-1},x_{l}^{j-1})\in\mathcal{A}\).
**Proposition 5.3**: _For \(j=1,L\), let \([\phi_{j}(x_{1}^{j-1}),\ldots,\phi_{j}(x_{n}^{j-1})]R_{j}=Q_{j}\) be the QR decomposition of \([\phi_{j}(x_{1}^{j-1}),\ldots,\phi_{j}(x_{n}^{j-1})]\). Then, we have \(\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\mathcal{V}}({\mathbf{x}})}\|_{\mathrm{op} }=\|R_{L}^{*}G_{L}R_{1}\|_{\mathrm{op}}\)._
The computational cost of \(\|R_{L}^{*}G_{L}R_{1}\|_{\mathrm{op}}\) is expensive if \(n\) is large since computing \(R_{1}\) and \(R_{L}\) is expensive. Thus, we consider upper bounding \(\|R_{L}^{*}G_{L}R_{1}\|_{\mathrm{op}}\) by a computationally efficient value.
**Proposition 5.4**: _Assume \(G_{L}\) is invertible. Then, we have_
\[\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\mathcal{V}}({\mathbf{x}})}\|_{\mathrm{op} }\leq\|G_{L}^{-1}\|_{\mathrm{op}}^{1/2}\|G_{L}\|_{\mathrm{op}}\|G_{1}^{-1}\|_{ \mathrm{op}}^{1/2}. \tag{2}\]
Since \(G_{1}\) is independent of \(f_{1},\ldots,f_{L}\), to make the value \(\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\mathcal{V}}({\mathbf{x}})}\|_{\mathrm{op}}\) small, we try to make the norm of \(G_{L}\) and \(G_{L}^{-1}\) small according to Proposition 5.4. For example, instead of the second term regarding the Perron-Frobenius operators in Eq. (1), we can consider the following term with \(\eta>0\):
\[\lambda_{1}(\|(\eta I+G_{L})^{-1}\|_{\mathrm{op}}+\|G_{L}\|_{\mathrm{op}}).\]
Note that the term depends on the training samples \(x_{1},\ldots,x_{n}\). The situation is different from the third term in Eq. (1) since the third term does not depend on the training samples before applying the representer theorem. If we try to minimize a value depending on the training samples, the model seems to be more specific for the training samples, and it may cause overfitting. Thus, the connection between the minimization of the second term in Eq. (1) and generalization cannot be explained by the classical argument about generalization and regularization. However, as we will see in Subsection 6.2, it is related to benign overfitting and has a good effect on generalization.
**Remark 5.5**: _The inequality (2) implies that as \(\{\phi_{L}(x_{1}^{L-1}),\ldots,\phi_{L}(x_{n}^{L-1})\}\) becomes nearly linearly dependent, the Rademacher complexity becomes large. By the term \(\|(\eta I+G_{L})^{-1}\|\) in Eq. (1), the function \(f_{L-1}\circ\cdots\circ f_{1}\) is learned so that it separates \(x_{1},\ldots,x_{n}\) well._
**Remark 5.6**: _To evaluate \(f_{1}\circ\cdots\circ f_{L}(x_{i})\) for \(i=1,\ldots,n\), we have to construct a Gram matrix \(G_{j}\in\mathcal{A}_{j}^{n\times n}\), and compute the product of the Gram matrix and a vector \(c_{j}\in\mathcal{A}_{j}^{n}\) for \(j=1,\ldots,L\). The computational cost of the construction of the Gram matrices does not depend on \(d\). The cost of computing \(G_{j}c_{j}\) is \(O(n^{2}\tilde{d}_{j})\), where \(\tilde{d}_{j}\) is the number of nonzero elements in the matrices in \(\mathcal{A}_{j}\). Note that if \(\mathcal{A}_{j}\) is the \(C^{*}\)-algebra of block diagonal matrices, then we have \(\tilde{d}_{j}\ll d^{2}\). Regarding the cost with respect to \(n\), if the positive definite kernel is separable, we can use the random Fourier features [37] to replace the factor \(n^{2}\) with \(mn\) for a small integer \(m\ll n\)._
## 6 Connection and Comparison with Existing Studies
The proposed deep RKHM is deeply related to existing studies by virtue of \(C^{*}\)-algebra and the Perron-Frobenius operators. We discuss the connection below.
### Connection with CNN
The proposed deep RKHM has a duality with CNNs. Let \(\mathcal{A}_{0}=\cdots=\mathcal{A}_{L}=Circ(d)\), the \(C^{*}\)-algebra of \(d\) by \(d\) circulant matrices. For \(j=1,\ldots,L\), let \(a_{j}\in\mathcal{A}_{j}\). Let \(k_{j}\) be an \(\mathcal{A}_{j}\)-valued positive definite kernel defined as \(k_{j}(x,y)=\tilde{k}_{j}(a_{j}x,a_{j}y)\), where \(\tilde{k}_{j}\) is an \(\mathcal{A}_{j}\)-valued function. The \(\mathcal{A}_{j}\)-valued positive definite kernel makes the output of each layer become a circulant matrix, which enables us to apply the convolution as the product of the output and a parameter. Then, \(f_{j}\) is represented as \(f_{j}(x)=\sum_{i=1}^{n}\tilde{k}_{j}(a_{j}x,a_{j}x_{i}^{j-1})c_{i,j}\) for some \(c_{i,j}\in\mathcal{A}_{j}\). Thus, at the \(j\)th layer, the input \(x\) is multiplied by \(a_{j}\) and is transformed nonlinearly by \(\sigma_{j}(x)=\sum_{i=1}^{n}k_{j}(x,a_{j}x_{i}^{j-1})c_{i,j}\). Since the product of two circulant matrices corresponds to the convolution, \(a_{j}\) corresponds to a filter. Inaddition, \(\sigma_{j}\) corresponds to the activation function at the \(j\)th layer. Thus, the deep RKHM with the above setting corresponds to a CNN. The difference between the deep RKHM and the CNN is the parameters that we learn. Whereas for the deep RKHM, we learn the coefficients \(c_{1,j},\ldots,c_{n,j}\), for the CNN, we learn the parameter \(a_{j}\). In other words, whereas for the deep RKHM, we learn the activation function \(\sigma_{j}\), for the CNN, we learn the filter \(a_{j}\). It seems reasonable to interpret this difference as a consequence of solving the problem in the primal or in the dual. In the primal, the number of the learning parameters depends on the dimension of the data (or the filter for convolution), while in the dual, it depends on the size of the data.
**Remark 6.1**: _The connection between CNNs and shallow RKHMs has already been studied [27]. However, the existing study does not provide the connection between the filter and activation function. The above investigation shows a more clear layer-wise connection of deep RKHMs with CNNs._
### Connection with benign overfitting
Benign overfitting is a phenomenon that the model fits any amount of data yet generalizes well [20; 21]. For kernel regression, Mallinar et al. [22] showed that if the eigenvalues of the integral operator associated with the kernel function over the data distribution decay slower than any powerlaw decay, than the model exhibits benign overfitting. The Gram matrix is obtained by replacing the integral with the sum over the finite samples. The inequality (2) suggests that the generalization error becomes smaller as the smallest and the largest eigenvalues of the Gram matrix get closer, which means the eigenvalue decay is slower. Combining the observation in Remark 5.5, we can interpret that as the right-hand side of the inequality (2) becomes smaller, the outputs of noisy training data at the \(L-1\)th layer tend to be more separated from the other outputs. In other words, for the random variable \(x\) following the data distribution, \(f_{L-1}\circ\cdots\circ f_{1}\) is learned so that the distribution of \(f_{L-1}\circ\cdots\circ f_{1}(x)\) generates the integral operator with more separated eigenvalues, which appreciates benign overfitting. We will also observe this phenomenon numerically in Section 7 and Appendix C.3.2. Since the generalization bound for deep vrRKHSs [5] is described by the Lipschitz constants of the feature maps and the norm of \(f_{j}\) for \(j=1,\ldots,L\), this type of theoretical interpretation regarding benign overfitting is not available for the existing bound for deep vvRKHSs.
**Remark 6.2**: _The above arguments about benign overfitting are valid only for deep RKHMs, i.e., the case of \(L\geq 2\). If \(L=1\) (shallow RKHM), then the Gram matrix \(G_{L}=[k(x_{i},x_{j})]_{i,j}\) is fixed and determined only by the training data and the kernel. On the other hand, if \(L\geq 2\) (deep RKHM), then \(G_{L}=[k_{L}(f_{L-1}\circ\cdots\circ f_{1}(x_{i}),f_{L-1}\circ\cdots\circ f_{1 }(x_{j}))]_{i,j}\) depends also on \(f_{1},\ldots,f_{L-1}\). As a result, by adding the term using \(G_{L}\) to the loss function, we can learn proper \(f_{1},\ldots,f_{L-1}\) so that they make the right-hand side of Eq. (2) small, and the whole network overfits benignly. As \(L\) becomes large, the function \(f_{L-1}\circ\cdots\circ f_{1}\) changes more flexibly to attain a smaller value of the term. This is an advantage of considering a large \(L\)._
### Connection with neural tangent kernel
Neural tangent kernel has been investigated to understand neural networks using the theory of kernel methods [6; 7]. Generalizing neural networks to \(C^{*}\)-algebra, which is called the \(C^{*}\)-algebra network, is also investigated [32; 38]. We define a neural tangent kernel for the \(C^{*}\)-algebra network and develop a theory for combining neural networks and deep RKHMs as an analogy of the existing studies. Consider the \(C^{*}\)-algebra network \(f:\mathcal{A}^{N_{0}}\rightarrow\mathcal{A}\) over \(\mathcal{A}\) with \(\mathcal{A}\)-valued weight matrices \(W_{j}\in\mathcal{A}^{N_{j}\times N_{j-1}}\) and element-wise activation functions \(\sigma_{j}\): \(f(x)=W_{L}\sigma_{L-1}(W_{L-1}\cdots\sigma_{1}(W_{1}x)\cdots)\). The \((i,j)\)-entry of \(f(x)\) is \(f_{i}(\mathbf{x}_{j})=\mathbf{W}_{L,i}\sigma_{L-1}(W_{L-1}\cdots\sigma_{1}(W_{ 1}\mathbf{x}_{j})\cdots)\), where \(\mathbf{x}_{j}\) is the \(i\)th column of \(x\) regarded as \(x\in\mathbb{C}^{M_{0}\times d}\) and \(\mathbf{W}_{L,i}\) is the \(i\)th row of \(W_{L}\) regarded as \(W_{L}\in\mathbb{C}^{d\times dN_{L-1}}\). Thus, the \((i,j)\)-entry of \(f(x)\) corresponds to the output of the network \(f_{i}(\mathbf{x}_{j})\). We can consider the neural tangent kernel for each \(f_{i}\). Chen and Xu [7] showed that the RKHS associated with the neural tangent kernel restricted to the sphere is the same set as that associated with the Laplacian kernel. Therefore, the \(i\)th row of \(f(x)\) is described by the shallow RKHS associated with the neural tangent kernel \(k_{i}^{\rm NT}\). Let \(k^{\rm NN}\) be the \(\mathcal{A}\)-valued positive definite kernel whose \((i,j)\)-entry is \(k_{i}^{\rm NT}\) for \(i=j\) and \(0\) for \(i\neq j\). Then, for any function \(g\in\mathcal{M}_{k^{\rm NN}}\), the elements in the \(i\)th row of \(g\) are in the RKHS associated with \(k_{i}^{\rm NT}\), i.e., associated with the Laplacian kernel. Thus, \(f\) is described by this shallow RKHM.
**Remark 6.3**: _We can combine the deep RKHM and existing neural networks by replacing some \(f_{j}\) in our model with an existing neural network. The above observation enables us to apply our results in Section 4 to the combined network._
### Comparison to bounds for classical neural networks
Existing bounds for classical neural networks typically depend on the product or sum of matrix \((p,q)\) norm of all the weight matrices \(W_{j}\)[14, 15, 16, 17]. Typical bound is \(O(\sqrt{1/n}\prod_{j=1}^{L}\|W_{j}\|_{\mathrm{HS}})\). Note that the Hilbert-Schmidt norm is the matrix \((2,2)\) norm. Unlike the operator norm, the matrix \((p,q)\) norm tends to be large as the width of the layers becomes large. On the other hand, the dependency of our bound on the width of the layer is not affected by the number of layers in the case where the kernels are separable. Indeed, assume we set \(k_{1}=\tilde{k}_{1}a_{1}\) and \(k_{L}=\tilde{k}_{L}a_{L}\) for some complex-valued kernels \(\tilde{k}_{1}\) and \(\tilde{k}_{2}\), \(a_{1}\in\mathcal{A}_{1}\), and \(a_{L}\in\mathcal{A}_{L}\). Then by Proposition 5.3, the factor \(\alpha(L):=\|P_{f_{L-1}}\cdots P_{f_{1}}\|_{\mathcal{V}(\mathbf{x})}\|_{\mathrm{ op}}\) is written as \(\|\tilde{R}_{L}^{*}\tilde{G}_{L}\tilde{R}_{1}\otimes a_{L}^{2}a_{1}\|_{\mathrm{op }}=\|\tilde{R}_{L}^{*}\tilde{G}_{L}\tilde{R}_{1}\|_{\mathrm{op}}\ \|a_{L}^{2}a_{1}\|_{ \mathcal{A}}\) for some \(\tilde{R}_{L},\tilde{G}_{L},\tilde{R}_{1}\in\mathbb{C}^{n\times n}\). Thus, it is independent of \(d\). The only part depending on \(d\) is \(\mathrm{tr}\,k_{1}(x_{i},x_{i})\), which results in the bound \(O(\alpha(L)\sqrt{d/n})\). Note that the width of the \(j\)th layer corresponds to the number of nonzero elements in a matrix in \(\mathcal{A}_{j}\). We also discuss in Appendix B the connection of our bound with the bound by Koopman operators, the adjoints of the Perron-Frobenius operators [18].
## 7 Numerical Results
We numerically confirm our theory and the validity of the proposed deep RKHM.
Comparison to vvRKHSWe compared the generalization property of the deep RKHM to the deep vvRKHS with the same positive definite kernel. For \(d=10\) and \(n=10\), we set \(x_{i}=(az_{i})^{2}+\epsilon_{i}\) as input samples, where \(a\in\mathbb{R}^{100\times 10}\) and \(z_{i}\in\mathbb{R}^{10}\) are randomly generated by \(\mathcal{N}(0,0.1)\), the normal distribution of mean 0 and standard deviation 0.1, \((\cdot)^{2}\) is the elementwise product, and \(\epsilon_{i}\) is the random noise drawn from \(\mathcal{N}(0,1e-3)\). We reshaped \(x_{i}\) to a \(10\) by \(10\) matrix. We set \(L=3\) and \(k_{j}=\tilde{k}I\) for \(j=1,2,3\), where \(\tilde{k}\) is the Laplacian kernel. For RKHMs, we set \(\mathcal{A}_{1}=Block((1,\ldots,1),d)\), \(\mathcal{A}_{2}=Block((2,\ldots,2),d)\), and \(\mathcal{A}_{3}=\mathbb{C}^{d\times d}\). This is the autoencoder mentioned in Example 3.1. For vvRKHSs, we set the corresponding Hilbert spaces with the Hilbert-Schmidt inner product. We set the loss function as \(1/n\|\sum_{i=1}^{n}|f(x_{i})-x_{i}|_{\mathcal{A}}^{2}\|_{\mathcal{A}}\) for the deep RKHM and \(s1/n\sum_{i=1}^{n}\|f(x_{i})-x_{i}\|_{\mathrm{HS}}^{2}\) for the deep vvRKHS. Here, \(f=f_{3}\circ f_{2}\circ f_{1}\). To be did not add any terms to the loss function to see how the loss function with the operator norm affects the generalization performance. We computed the same value \(\|\mathrm{E}[|f(x)-x|_{\mathcal{A}}^{2}|]\|_{\mathcal{A}}-1/n\|\sum_{i=1}^{n}|f (x_{i})-x_{i}|_{\mathcal{A}}^{2}\|_{\mathcal{A}}\) for both RKHM and vvRKHS. Figure 2 (a) shows the results. We can see that the deep RKHM generalizes better than the deep vvRKHS, only with the loss function and without any additional terms.
Observation about benign overfittingWe analyzed the overfitting numerically. For \(d=10\) and \(n=1000\), we randomly sampled \(d\) by \(d\) diagonal matrices \(x_{1},\ldots,x_{n}\in\mathcal{A}_{0}\) from the normal distribution \(\mathcal{N}(0,0.1)\). We set \(y_{i}=x_{i}^{2}+\epsilon_{i}\) for \(i=1,\ldots,n\), where \(\epsilon_{i}\) is a noise drawn from the normal distribution \(\mathcal{N}(0,0.001)\). The magnitude of the noise is \(10\)% of \(x_{i}^{2}\). In addition, we set \(L=2\), \(\mathcal{A}_{1}=\mathbb{C}^{d\times d}\), \(\mathcal{A}_{2}=Block((1,\ldots,1),d)\), and \(k_{j}\) as the same kernel as the above experiment. The additional term to the loss function is set as \(\lambda_{1}(\|(\eta I+G_{L})^{-1}\|_{\mathrm{op}}+\|G_{L}\|_{\mathrm{op}})+ \lambda_{2}\|f_{L}\|_{\mathcal{M}_{L}}^{2}\), where \(\eta=0.01\) and \(\lambda_{2}=0.01\) according to Subsection 5.2. We computed the generalization error for the cases of \(\lambda_{1}=0\) and \(\lambda_{1}=10^{2}\). Figure 2 (b) shows the result. We can see that the generalization error saturates without the additional term motivated by the Perron-Frobenius operator. On the other hand, with the additional term, the generalization error becomes small, which is the effect of benign overfitting.
Comparison to CNNWe compared the deep RKHM to a CNN on the classification task with MNIST [39]. We set \(d=28\) and \(n=20\). We constructed a deep RKHM combined with a neural network with 2 dense layers. For the deep RKHM, we set \(L=2\), \(\mathcal{A}_{0}=\mathbb{C}^{d\times d}\), \(\mathcal{A}_{1}=Block((7,7,7,7),d)\), and \(\mathcal{A}_{2}=Block((4,\ldots,4),d)\). Then, two dense layers are added. See Subsection 6.3 about combining the deep RKHM with neural networks. Regarding the additional term to the loss function, we set the same term with the previous experiment with \(\lambda_{2}=0.001\) andset \(\lambda_{1}=1\) or \(\lambda_{1}=0\). To compare the deep RKHM to CNNs, we also constructed a network by replacing the deep RKHM with a CNN. The CNN is composed of 2 layers with \(7\times 7\) and \(4\times 4\) filters. Figure 2 (c) shows the test accuracy of these networks. We can see that the deep RKHM outperforms the CNN. In addition, we can see that the test accuracy is higher if we add the term regarding the Perron-Frobenius operators. We discuss the memory consumption and the computational cost of each of the deep RKHM and the CNN in Appendix C.3, and we empirically show that the deep RKHM outperforms the CNN that has the same size of learning parameters as the deep RKHM. We also show additional results about benign overfitting in Appendix C.3.
## 8 Conclusion and Limitations
In this paper, we proposed deep RKHM and analyzed it through \(C^{*}\)-algebra and the Perron-Frobenius operators. We derived a generalization bound, whose dependency on the output dimension is alleviated by the operator norm, and which is related to benign overfitting. We showed a representer theorem about the proposed deep RKHM, and connections with existing studies such as CNNs and neural tangent kernel. Our theoretical analysis shows that \(C^{*}\)-algebra and Perron-Frobenius operators are effective tools for analyzing deep kernel methods. The main contributions of this paper are our theoretical results with \(C^{*}\)-algebra and the Perron-Frobenius operators. More practical investigations are required for further progress. For example, although we numerically showed the validity of our method for the case where the number of samples is limited (the last experiment in Section 7), more experimental results for the case where the number of samples is large are useful for further analysis. Also, although we can apply random Fourier features (Remark 5.6) to reduce the computational costs, studying more efficient methods specific to deep RKHM remains to be investigated in future work. As for the theoretical topic, we assumed the well-definedness of the Perron-Frobenius operators. Though separable kernels with invertible matrices, which are typical examples of kernels, satisfy the assumption, generalization of our analysis to other kernels should also be studied in future work.
## Acknowledgements
Hachem Kadri is partially supported by grant ANR-19-CE23-0011 from the French National Research Agency. Masahiro Ikeda is partially supported by grant JPMJCR1913 from JST CREST.
## References
* [1] Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. In _Proceedings of Advances in Neural Information Processing Systems (NIPS) 22_, 2009.
* [2] Sebastian W. Ober, Carl E. Rasmussen, and Mark van der Wilk. The promises and pitfalls of deep kernel learning. In _Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI)_, 2021.
Figure 2: (a) The box plot of the generalization error of the deep RKHM and vvRKHS at the point that the training error reaches 0.05. (b) Behavior of the generalization error during the learning process with and without the additional term regarding the Perron–Frobenius operators. (c) Test accuracy of the classification task with MNIST for a deep RKHM and a CNN.
* [3] Behnam Gholami and Abolfazl Hajisami. Kernel auto-encoder for semi-supervised hashing. In _Proceedings of 2016 IEEE Winter Conference on Applications of Computer Vision (WACV)_, 2016.
* [4] Bastian Bohn, Michael Griebel, and Christian Rieger. A representer theorem for deep kernel learning. _Journal of Machine Learning Research_, 20(64):1-32, 2019.
* [5] Pierre Laforgue, Stephan Clemencon, and Florence d'Alche Buc. Autoencoding any data through kernel autoencoders. In _Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2019.
* [6] Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In _In proceedings of Advances in Neural Information Processing Systems (NeurIPS) 31_, 2018.
* [7] Lin Chen and Sheng Xu. Deep neural tangent kernel and laplace kernel have the same RKHS. In _Proceedings of the 9th International Conference on Learning Representations (ICLR)_, 2021.
* [8] Julien Mairal, Piotr Koniusz, Zaid Harchaoui, and Cordelia Schmid. Convolutional kernel networks. In _Proceedings of the Advances in Neural Information Processing Systems (NIPS) 27_, 2014.
* [9] Alberto Bietti, Gregoire Mialon, Dexiong Chen, and Julien Mairal. A kernel perspective for regularizing deep neural networks. In _Proceedings of the 36th International Conference on Machine Learning (ICML)_, 2019.
* [10] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. _Journal of Machine Learning Research_, 3:463-482, 2002.
* [11] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. _Foundations of Machine Learning_. MIT press, 2018.
* [12] Vikas Sindhwani, Ha Quang Minh, and Aurelie C. Lozano. Scalable matrix-valued kernel learning for high-dimensional nonlinear multivariate regression and granger causality. In _Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI)_, 2013.
* beyond separability. _Journal of Machine Learning Research_, 22(24):1-40, 2021.
* [14] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In _Proceedings of the 2015 Conference on Learning Theory (COLT)_, 2015.
* [15] Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. In _Proceedings of the 2018 Conference On Learning Theory (COLT)_, 2018.
* [16] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. In _Proceedings of Advances in Neural Information Processing Systems (NIPS) 31_, 2017.
* [17] Haotian Ju, Dongyue Li, and Hongyang R Zhang. Robust fine-tuning of deep neural networks with Hessian-based generalization guarantees. In _Proceedings of the 39th International Conference on Machine Learning (ICML)_, 2022.
* [18] Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, and Taiji Suzuki. Koopman-based bound for generalization: New aspect of neural networks regarding nonlinear noise filtering. arXiv: 2302.05825, 2023.
* [19] Taiji Suzuki, Hiroshi Abe, and Tomoaki Nishimura. Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network. In _Proceedings of the 8th International Conference on Learning Representations (ICLR)_, 2020.
* [20] Mikhail Belkin, Alexander Rakhlin, and Alexandre B. Tsybakov. Does data interpolation contradict statistical optimality? In _Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2019.
* [21] Peter L. Bartlett, Philip M. Long, Gabor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. _Proceedings of the National Academy of Sciences_, 117(48):30063-30070, 2020.
* [22] Neil Rohit Mallinar, James B Simon, Amirhesam Abedsoltan, Parthe Pandit, Misha Belkin, and Preetum Nakkiran. Benign, tempered, or catastrophic: Toward a refined taxonomy of overfitting. In _Proceedings of Advances in Neural Information Processing Systems (NeurIPS) 37_, 2022.
* [23] Jaeseong Heo. Reproducing kernel Hilbert \(C^{*}\)-modules and kernels associated with cocycles. _Journal of Mathematical Physics_, 49(10):103507, 2008.
* [24] Shigeru Itoh. Reproducing kernels in modules over \(C^{*}\)-algebras and their applications. _Journal of Mathematics in Nature Science_, pages 1-20, 1990.
* [25] Mohammad S. Moslehian. Vector-valued reproducing kernel Hilbert \(C^{*}\)-modules. _Complex Analysis and Operator Theory_, 16(1):Paper No. 2, 2022.
* [26] Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, and Yoshi-nobu Kawahara. Reproducing kernel Hilbert \(C^{*}\)-module and kernel mean embeddings. _Journal of Machine Learning Research_, 22(267):1-56, 2021.
* [27] Yuka Hashimoto, Masahiro Ikeda, and Hachem Kadri. Learning in RKHM: a \(C^{*}\)-algebraic twist for kernel machines. In _Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS)_, 2023.
* [28] Yoshinobu Kawahara. Dynamic mode decomposition with reproducing kernels for Koopman spectral analysis. In _Advances in Neural Information Processing Systems (NIPS) 29_, pages 911-919, 2016.
* [29] Isao Ishikawa, Keisuke Fujii, Masahiro Ikeda, Yuka Hashimoto, and Yoshinobu Kawahara. Metric on nonlinear dynamical systems with Perron-Frobenius operators. In _Advances in Neural Information Processing Systems (NeurIPS) 31_, pages 2856-2866, 2018.
* [30] Dimitrios Giannakis and Suddhasattwa Das. Extraction and prediction of coherent patterns in incompressible flows through space-time Koopman analysis. _Physica D: Nonlinear Phenomena_, 402:132211, 2020.
* [31] Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Yoichi Matsuo, and Yoshinobu Kawahara. Krylov subspace method for nonlinear dynamical systems with random noise. _Journal of Machine Learning Research_, 21(172):1-29, 2020.
* [32] Ryuichiro Hataya and Yuka Hashimoto. Noncommutative \(C^{*}\)-algebra net: Learning neural networks with powerful product structure in \(C^{*}\)-algebra. arXiv: 2302.01191, 2023.
* a Toolkit for Operator Algebraists_. London Mathematical Society Lecture Note Series, vol. 210. Cambridge University Press, 1995.
* [34] Gerard J. Murphy. \(C^{*}\)_-Algebras and Operator Theory_. Academic Press, 1990.
* [35] Yuka Hashimoto, Isao Ishikawa, Masahiro Ikeda, Fuyuta Komura, Takeshi Katsura, and Yoshi-nobu Kawahara. Analysis via orthonormal systems in reproducing kernel hilbert \(c^{*}\)-modules and applications. arXiv: 2003.00738, 2020.
* [36] Mauricio A Alvarez, Lorenzo Rosasco, Neil D Lawrence, et al. Kernels for vector-valued functions: A review. _Foundations and Trends(r) in Machine Learning_, 4(3):195-266, 2012.
* [37] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In _Proceedings of the Advances in Neural Information Processing Systems 20 (NIPS)_, 2007.
* [38] Yuka Hashimoto, Zhao Wang, and Tomoko Matsui. \(C^{*}\)-algebra net: a new approach generalizing neural network parameters to \(C^{*}\)-algebra. In _Proceedings of the 39th International Conference on Machine Learning (ICML)_, 2022.
* [39] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_, 86(11):2278-2324, 1998.
* [40] Andreas Maurer. A vector-contraction inequality for rademacher complexities. In _Proceedings of the 27th International Conference on Algorithmic Learning Theory (ALT)_, 2016.
Proofs
We provide the proofs of the theorems, propositions, and lemmas in the main paper.
**Lemma 2.7**: _If \(\{\phi_{1}(x)\,\mid\,x\in\mathcal{X}\}\) is \(\mathcal{A}\)-linearly independent, then \(P_{f}\) is well-defined._
**Proof** Assume \(\sum_{i=1}^{n}\phi_{1}(x_{i})c_{i}=\sum_{i=1}^{n}\phi_{1}(x_{i})d_{i}\) for \(n\in\mathbb{N}\), \(c_{i},d_{i}\in\mathcal{A}\). Since \(\{\phi_{1}(x)\,\mid\,x\in\mathcal{X}\}\) is \(\mathcal{A}\)-linearly independent, we have \(c_{i}=d_{i}\) for \(i=1,\ldots,n\). Thus, \(P_{f}\sum_{i=1}^{n}\phi_{1}(x_{i})c_{i}=\sum_{i=1}^{n}\phi_{1}(f(x_{i}))c_{i}= \sum_{i=1}^{n}\phi_{1}(f(x_{i}))d_{i}=P_{f}\sum_{i=1}^{n}\phi_{1}(x_{i})d_{i}\). \(\Box\)
**Lemma 2.8**: _Let \(k_{1}=\tilde{k}a\), i.e., \(k\) is separable, for an invertible operator \(a\) and a complex-valued kernel \(\tilde{k}\). Assume \(\{\tilde{\phi}(x)\,\mid\,x\in\mathcal{X}\}\) is linearly independent (e.g. \(\tilde{k}\) is Gaussian or Laplacian), where \(\tilde{\phi}\) is the feature map associated with \(\tilde{k}\). Then, \(\{\phi_{1}(x)\,\mid\,x\in\mathcal{X}\}\) is \(\mathcal{A}\)-linearly independent._
**Proof** If \(\sum_{i=1}^{n}\phi_{1}(x_{i})c_{i}=0\), we have
\[0=\left\langle\sum_{i=1}^{n}\phi_{1}(x_{i})c_{i},\sum_{i=1}^{n}\phi_{1}(x_{i}) c_{i}\right\rangle_{\mathcal{M}_{k}}=c^{*}Gc=(G^{1/2}c)^{*}(G^{1/2}c),\]
where \(G\) is the \(\mathcal{A}^{n\times n}\)-valued Gram matrix whose \((i,j)\)-entry is \(k(x_{i},x_{j})\in\mathcal{A}\) and \(c=(c_{1},\ldots,c_{n})\). Thus, we obtain \(G^{1/2}c=0\). Let \(\tilde{G}\) be the standard Gram matrix whose \((i,j)\)-entry is \(\tilde{k}(x_{i},x_{j})\). Since \(G=\tilde{G}\otimes a\) and \(\tilde{G}\) is invertible, the inverse of \(G\) is \(\tilde{G}^{-1}\otimes a^{-1}\). Thus, \(G^{1/2}\) is invertible and we have \(c=0\). \(\Box\)
**Lemma 4.2**: _Let \(\mathcal{F}\) be a function class of \(\mathbb{R}^{d\times d}\)-valued functions on \(\mathcal{A}_{0}\) bounded by \(C\) (i.e., \(\|f(x)\|_{\mathcal{A}}\leq C\) for any \(x\in\mathcal{A}_{0}\)). Let \(\tilde{\mathcal{G}}(\mathcal{F},p)=\{(x,y)\mapsto\|(f(x)-y)p\|^{2}\,\mid\,f\in \mathcal{F},\|y\|_{\mathcal{A}}\leq E\}\) and \(M=2(C+E)^{2}\). Let \(p\in\mathbb{R}^{d}\) satisfy \(\|p\|=1\) and let \(\delta\in(0,1)\). Then, for any \(g\in\tilde{\mathcal{G}}(\mathcal{F},p)\), with probability at least \(1-\delta\), we have_
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]^{1/2}p\|^{2}\leq\left\|\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right\|_{\mathcal{A}}+2\hat{R }_{n}(\mathbf{x},\tilde{\mathcal{G}}(\mathcal{F},p))+3M\sqrt{\frac{\log(2/ \delta)}{2n}}.\]
**Proof** For \((x,y),(x^{\prime},y^{\prime})\in\mathcal{A}\times\mathbb{R}^{d\times d}\), we have
\[\|(f(x)-y)p\|^{2}-\|(f(x^{\prime})-y^{\prime})p\|^{2}\] \[\qquad=\|f(x)p\|^{2}-2\left\langle f(x)p,yp\right\rangle+\|yp\|^{ 2}-\|f(x^{\prime})p\|^{2}+2\left\langle f(x^{\prime})p,y^{\prime}p\right\rangle -\|y^{\prime}p\|^{2}\leq 2(C+E)^{2}.\]
In addition, we have
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]^{1/2}p\|^{2}-\left\|\left(\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right)^{1/2}p\right\|^{2}= \left\langle p,\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]p\right\rangle-\left\langle p,\frac{1}{n}\sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}p\right\rangle\]
Since \((x,y)\mapsto\|g(x,y)p\|^{2}\) is a real-valued map, by Theorem 3.3 by Mohri et al. [11], we have
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]^{1/2}p\|^{2}-\left\|\left(\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right)^{1/2}p\right\|^{2} \leq 2\hat{R}_{n}(\mathbf{x},\tilde{\mathcal{G}}(\mathcal{F},p))+3M\sqrt{ \frac{\log(2/\delta)}{2n}}.\]
The inequality \(\|(\sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2})^{1/2}p\|^{2}\leq\|\sum_{i=1 }^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\|_{\mathcal{A}}\) completes the proof. \(\Box\)
**Lemma 4.3**: _With the same notations in Lemma 4.2, let \(K=2\sqrt{2}(C+E)\). Then, we have \(\hat{R}_{n}(\mathbf{x},\tilde{\mathcal{G}}(\mathcal{F},p))\leq K\hat{R}_{n}( \mathbf{x},\mathcal{F}p)\), where \(\mathcal{F}p=\{x\mapsto f(x)p\,\mid\,f\in\mathcal{F}\}\)._
**Proof** Let \(h_{i}(z)=\|z-y_{i}p\|^{2}\) for \(z\in\{f(x)p\,\mid\,f\in\mathcal{F}\}\subseteq\mathbb{R}^{d}\) and \(i=1,\ldots,n\). The Lipschitz constant of \(h_{i}\) is calculated as follows: We have
\[h_{i}(z)-h_{i}(z^{\prime}) =\|z\|^{2}-2\left\langle z-z^{\prime},y_{i}p\right\rangle-\|z^{ \prime}\|^{2}\] \[\leq(\|z\|+\|z^{\prime}\|)(\|z\|-\|z^{\prime}\|)+2\|z-z^{\prime} \|\|y_{i}p\|\] \[\leq(\|z\|+\|z^{\prime}\|)(\|z-z^{\prime}\|+\|z^{\prime}\|-\|z^{ \prime}\|)+2\|z-z^{\prime}\|\|y_{i}p\|\] \[\leq(2C+2E)\|z-z^{\prime}\|.\]
Thus, we have \(|h_{i}(z)-h_{i}(z^{\prime})|\leq 2(C+E)\|z-z^{\prime}\|\). By Corollary 4 by Maurer [40], the statement is proved. \(\square\)
**Lemma 4.4**: _Let \(p\in\mathbb{R}^{d}\) satisfy \(\|p\|=1\). For \(\mathcal{F}_{1}\) defined in Section 3, we have_
\[\hat{R}_{n}(\mathbf{x},\mathcal{F}_{1}p)\leq\frac{B_{1}}{n}\Big{(}\sum_{i=1}^{n }\mathrm{tr}(k(x_{i},x_{i}))\Big{)}^{1/2}.\]
**Proof** We have
\[\mathrm{E}\bigg{[}\sup_{f\in\mathcal{F}_{1}}\frac{1}{n}\sum_{i=1} ^{n}\left\langle\sigma_{i},f(x_{i})p\right\rangle\bigg{]}=\mathrm{E}\bigg{[} \sup_{f\in\mathcal{F}_{1}}\frac{1}{n}\sum_{i=1}^{n}\left\langle(\sigma_{i}p^{ *})p,\left\langle\phi_{1}(x_{i}),f\right\rangle_{\mathcal{M}_{1}}p\right\rangle \bigg{]}\] \[=\mathrm{E}\bigg{[}\sup_{f\in\mathcal{F}_{1}}\frac{1}{n}\bigg{\langle} p,\,\,\sum_{i=1}^{n}(p\sigma_{i}^{*})\left\langle\phi_{1}(x_{i}),f\right\rangle_{ \mathcal{M}_{1}}p\bigg{\rangle}\bigg{]}\] \[=\mathrm{E}\bigg{[}\sup_{f\in\mathcal{F}_{1}}\frac{1}{n}\bigg{(} \sum_{i=1}^{n}\phi_{1}(x_{i})(\sigma_{i}p^{*}),f\bigg{)}_{\tilde{\mathcal{M}}_ {1},p}\bigg{]}\] \[\leq\mathrm{E}\bigg{[}\sup_{f\in\mathcal{F}_{1}}\frac{1}{n}\bigg{ }\bigg{\|}\sum_{i=1}^{n}\phi_{1}(x_{i})(\sigma_{i}p^{*})\bigg{\|}_{\tilde{ \mathcal{M}}_{1},p}\,\|f\|_{\tilde{\mathcal{M}}_{1},p}\bigg{]} \tag{3}\] \[\leq\mathrm{E}\bigg{[}\sup_{f\in\mathcal{F}_{1}}\frac{1}{n}\bigg{ }\bigg{\langle}p,\,\,\bigg{|}\sum_{i=1}^{n}\phi_{1}(x_{i})(\sigma_{i}p^{*}) \bigg{|}_{\tilde{\mathcal{M}}_{1}}^{2}p\bigg{\rangle}^{1/2}\,\,\|f\|_{\tilde{ \mathcal{M}}_{1}}\bigg{]}\] (4) \[\leq\frac{B_{1}}{n}\mathrm{E}\bigg{[}\bigg{(}\sum_{i,j=1}^{n}p^{* }\rho_{i}^{*}k_{1}(x_{i},x_{j})\sigma_{j}p^{*}p\bigg{)}^{1/2}\bigg{]}\] \[\leq\frac{B_{1}}{n}\mathrm{E}\bigg{[}\sum_{i,j=1}^{n}\sigma_{i}^{ *}k_{1}(x_{i},x_{j})\sigma_{j}\bigg{]}^{1/2}\] (5) \[=\frac{B_{1}}{n}\mathrm{E}\bigg{[}\sum_{i=1}^{n}\sigma_{i}^{*}k_{1 }(x_{i},x_{i})\sigma_{i}\bigg{]}^{1/2}=\frac{B_{1}}{n}\bigg{(}\sum_{i=1}^{n} \mathrm{tr}\,k_{1}(x_{i},x_{i})\bigg{)}^{1/2},\]
where for \(p\in\mathbb{C}^{d}\) and \(f,g\in\tilde{\mathcal{M}}_{1}\), \((f,g)_{\tilde{\mathcal{M}}_{1},p}\) is the semi-inner product defined by \((f,g)_{\tilde{\mathcal{M}}_{1},p}=\left\langle p,\left\langle f,g\right\rangle_{ \tilde{\mathcal{M}}_{1}}p\right\rangle\) and \(|f|_{\tilde{\mathcal{M}}_{1}}^{2}=\left\langle f,f\right\rangle_{\tilde{ \mathcal{M}}_{1}}\). In addition, the inequality (3) is by the Cauchy-Schwartz inequality and the inequality (5) is by the Jensen's inequality. Note that the Cauchy-Schwartz inequality is still valid for semi-inner products. The inequality (4) is derived by the inequality
\[\|f_{L}\|_{\tilde{\mathcal{M}}_{L},p}^{2}=\left\langle p,\left\langle f_{L},f_{ L}\right\rangle_{\mathcal{M}_{L}}p\right\rangle\leq\|p\|^{2}\left\|\left\langle f _{L},f_{L}\right\rangle_{\mathcal{M}_{L}}\right\|_{\mathcal{A}}\leq\|f_{L}\|_{ \tilde{\mathcal{M}}_{L}}^{2}.\]
**Theorem 4.1**: _Assume there exists \(D>0\) such that \(\|k_{1}(x,x)\|_{\mathcal{A}}\leq D\) for any \(x\in\mathcal{A}_{0}\). Let \(\tilde{K}=4\sqrt{2}(\sqrt{D}B_{1}+E)B_{1}\) and \(\tilde{M}=6(\sqrt{D}B_{1}+E)^{2}\). Let \(\delta\in(0,1)\). Then, for any \(g\in\mathcal{G}(\mathcal{F}_{1})\), where \(\mathcal{F}_{1}\) is defined in Section 3, with probability at least \(1-\delta\), we have_
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]\|_{\mathcal{A}}\leq\left\|\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right\|_{\mathcal{A}}+\frac{ \tilde{K}}{n}\bigg{(}\sum_{i=1}^{n}\mathrm{tr}\,k_{1}(x_{i},x_{i})\bigg{)}^{1/ 2}+\tilde{M}\sqrt{\frac{\log(2/\delta)}{2n}}.\]
**Proof** For \(f\in\mathcal{M}_{1}\) and \(x\in\mathcal{A}_{0}\), we have
\[\|f(x)\|_{\mathcal{A}}=\|\left\langle\phi_{1}(x),f\right\rangle_{\mathcal{M}_{ 1}}\|_{\mathcal{A}}\leq\|k_{1}(x,x)\|_{\mathcal{A}}^{1/2}\|f\|_{\mathcal{M}_{ 1}}\leq\sqrt{D}B_{1}.\]
Thus, we set \(C\) as \(\sqrt{D}B_{1}\) and apply Lemmas 4.2, 4.3, and 4.4. Then, for \(p\in\mathbb{R}^{d}\) satisfy \(\|p\|=1\), with probability at least \(1-\delta\), we have
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]^{1/2}p\|^{2}\leq\left\|\frac{1}{n} \sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A}}^{2}\right\|_{\mathcal{A}}+\frac{ \tilde{K}}{n}\bigg{(}\sum_{i=1}^{n}\mathrm{tr}\,k_{1}(x_{i},x_{i})\bigg{)}^{1/ 2}+\tilde{M}\sqrt{\frac{\log(2/\delta)}{2n}}.\]
Therefore, we obtain
\[\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]\|_{\mathcal{A}} =\|\mathrm{E}[|g(x,y)|_{\mathcal{A}}^{2}]^{1/2}\|_{\mathcal{A}}^{2}\] \[\leq\left\|\frac{1}{n}\sum_{i=1}^{n}|g(x_{i},y_{i})|_{\mathcal{A} }^{2}\right\|_{\mathcal{A}}+\frac{\tilde{K}}{n}\bigg{(}\sum_{i=1}^{n}\mathrm{ tr}\,k_{1}(x_{i},x_{i})\bigg{)}^{1/2}+\tilde{M}\sqrt{\frac{\log(2/\delta)}{2n}},\]
which completes the proof. \(\Box\)
**Proposition 4.6**: _We have_
\[\hat{R}_{n}(\mathbf{x},\mathcal{F}_{L}^{\mathrm{deep}}p)\leq\frac{1}{n}\sup_{ (f_{j}\in\mathcal{F}_{j})_{j}}\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\tilde{\mathcal{ V}}(\mathbf{x})}\|_{\mathrm{op}}\ \|f_{L}\|_{\mathcal{M}_{L}}\ \Big{(}\sum_{i=1}^{n}\mathrm{tr}(k_{1}(x_{i},x_{i}))\Big{)}^{1/2}.\]
_Here, \(\tilde{\mathcal{V}}(\mathbf{x})\) is the submodule of \(\tilde{\mathcal{M}}_{1}\) generated by \(\phi_{1}(x_{1}),\ldots\phi_{1}(x_{n})\)._
The following lemma by Lance [33, Proposition 1.2] is used in proving Proposition 4.6. Here, for \(a,b\in\mathcal{A}\), \(a\leq b\) means \(b-a\) is Hermitian positive definite.
**Lemma A.1**: _Let \(\mathcal{M}\) and \(\mathcal{N}\) be Hilbert \(\mathcal{A}\)-modules and let \(A\) be an \(\mathcal{A}\)-linear operator from \(\mathcal{M}\) to \(\mathcal{N}\). Then, we have \(|Aw|_{\mathcal{N}}^{2}\leq\|A\|_{\mathrm{op}}^{2}|w|_{\mathcal{M}}^{2}\)._
**Proof**
\[\mathrm{E}\bigg{[}\sup_{f\in\mathcal{F}_{L}^{\mathrm{deep}}}\frac{1}{n}\sum_{i= 1}^{n}\left\langle\sigma_{i},f(x_{i})p\right\rangle\bigg{]}=\mathrm{E}\bigg{[} \sup_{(f_{j}\in\mathcal{F}_{j})_{j}}\frac{1}{n}\sum_{i=1}^{n}\left\langle( \sigma_{i}p^{*})p,f_{L}(f_{L-1}(\cdots f_{1}(x_{i})\cdots))p\right\rangle\bigg{]}\]
\[=\mathrm{E}\bigg{[}\sup_{(f_{j}\in\mathcal{F}_{j})_{j}}\frac{1}{n}\sum_{i=1}^ {n}\left\langle(\sigma_{i}p^{*})p,\left\langle\phi_{L}(f_{L-1}(\cdots f_{1}(x_ {i})\cdots)),f_{L}\right\rangle_{\mathcal{M}_{L}}p\right\rangle\bigg{]}\]
\[=\mathrm{E}\bigg{[}\sup_{(f_{j}\in\mathcal{F}_{j})_{j}}\frac{1}{n}\bigg{\langle} p,\left\langle\sum_{i=1}^{n}\phi_{L}(f_{L-1}(\cdots f_{1}(x_{i})\cdots))(\sigma_{i}p^{*}),f_{L} \right\rangle_{\tilde{\mathcal{M}}_{L}}p\bigg{)}\bigg{]}\]\[=\mathrm{E}\biggl{[}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\frac{1}{n} \biggl{(}\sum_{i=1}^{n}\phi_{L}(f_{L-1}(\cdots f_{1}(x_{i})\cdots))(\sigma_{i}p^ {*}),f_{L}\biggr{)}_{\tilde{\mathcal{M}}_{L},p}\biggr{]}\] \[\leq\mathrm{E}\biggl{[}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\frac {1}{n}\biggl{\|}\sum_{i=1}^{n}\phi_{L}(f_{L-1}(\cdots f_{1}(x_{i})\cdots))( \sigma_{i}p^{*})\biggr{\|}_{\tilde{\mathcal{M}}_{L},p}\|f_{L}\|_{\tilde{ \mathcal{M}}_{L},p}\biggr{]} \tag{6}\] \[\leq\mathrm{E}\biggl{[}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\frac {1}{n}\biggl{\|}P_{f_{L-1}}\cdots P_{f_{1}}\sum_{i=1}^{n}\phi_{1}(x_{i})( \sigma_{i}p^{*})\biggr{\|}_{\tilde{\mathcal{M}}_{L},p}\|f_{L}\|_{\tilde{ \mathcal{M}}_{L}}\biggr{]}\] \[=\mathrm{E}\biggl{[}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\frac{ 1}{n}\biggl{\langle}p,\,\biggl{|}P_{f_{L-1}}\cdots P_{f_{1}}\sum_{i=1}^{n}\phi _{1}(x_{i})(\sigma_{i}p^{*})\biggr{|}_{\tilde{\mathcal{M}}_{L}}^{2}p\biggr{\rangle} ^{1/2}\|f_{L}\|_{\tilde{\mathcal{M}}_{L}}\biggr{]}\] \[\leq\mathrm{E}\biggl{[}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\frac {1}{n}\biggl{\langle}p,\,\|P_{f_{L-1}}\cdots P_{f_{1}}\|_{\tilde{\mathcal{V}}( \mathbf{x})}\|_{\mathrm{op}}^{2}\,\biggl{|}\sum_{i=1}^{n}\phi_{1}(x_{i})(\sigma _{i}p^{*})\biggr{|}_{\tilde{\mathcal{M}}_{L}}^{2}p\biggr{\rangle}^{1/2}\|f_{L} \|_{\tilde{\mathcal{M}}_{L}}\biggr{]}\] (7) \[\leq\frac{1}{n}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\|P_{f_{L-1} }\cdots P_{f_{1}}\|_{\tilde{\mathcal{V}}(\mathbf{x})}\|_{\mathrm{op}}\,\|f_{L} \|_{\mathcal{M}_{L}}\,\mathrm{E}\biggl{[}\biggl{\langle}p,\,\biggl{|}\sum_{i=1 }^{n}\phi_{1}(x_{i})(\sigma_{i}p^{*})\biggr{|}_{\tilde{\mathcal{M}}_{L}}^{2}p \biggr{\rangle}^{1/2}\biggr{]}\] \[\leq\frac{1}{n}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\|P_{f_{L-1} }\cdots P_{f_{1}}\|_{\tilde{\mathcal{V}}(\mathbf{x})}\|_{\mathrm{op}}\,\|f_{L} \|_{\mathcal{M}_{L}}\,\mathrm{E}\biggl{[}\biggl{(}\sum_{i,j=1}^{n}p^{*}p\sigma _{i}^{*}k_{1}(x_{i},x_{j})\sigma_{j}p^{*}p\biggr{)}^{1/2}\biggr{]}\] \[\leq\frac{1}{n}\sup_{\{f_{j}\in\mathcal{F}_{j}\}_{j}}\|P_{f_{1}} \cdots P_{f_{L-1}}\|_{\tilde{\mathcal{V}}(\mathbf{x})}\|_{\mathrm{op}}\,\|f_{L }\|_{\mathcal{M}_{L}}\,\biggl{(}\sum_{i=1}^{n}\mathrm{tr}(k_{1}(x_{i},x_{i})) \biggr{)}^{1/2},\]
where the inequality (6) is by the Cauchy-Schwartz inequality and the inequality (7) is by Lemma A.1. \(\Box\)
**Proposition 5.1**: _Let \(h:\mathcal{A}^{n}\times\mathcal{A}^{n}\to\mathbb{R}_{+}\) be an error function, let \(g_{1}\) be an \(\mathbb{R}_{+}\)-valued function on the space of bounded linear operators on \(\tilde{\mathcal{M}}_{1}\), and let \(g_{2}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) satisfy \(g_{2}(a)\leq g_{2}(b)\) for \(a\leq b\). Assume the following minimization problem has a solution:_
\[\min_{(f_{j}\in\mathcal{M}_{j})_{j}}h(f_{L}\circ\cdots\circ f_{1}(x_{1}),\ldots,f_{L}\circ\cdots\circ f_{1}(x_{n}))+g_{1}(P_{f_{L-1}}\cdots P_{f_{L}}|_{\tilde {\mathcal{V}}(\mathbf{x})})+g_{2}(\|f_{L}\|_{\mathcal{M}_{L}}).\]
_Then, there exists a solution admitting a representation of the form \(f_{j}=\sum_{i=1}^{n}\phi_{j}(x_{i}^{j-1})c_{i,j}\) for some \(c_{1,j},\ldots,c_{n,j}\in\mathcal{A}\) and for \(j=1,\ldots,L\). Here, \(x_{i}^{j}=f_{j}\circ\cdots\circ f_{1}(x_{i})\) for \(j=1,\ldots,L\) and \(x_{i}^{0}=x_{i}\)._
**Proof** Let \(\mathcal{V}_{j}(\mathbf{x})\) be the submodule of \(\mathcal{M}_{j}\) generated by \(\phi_{j}(x_{1}^{j-1}),\ldots,\phi_{j}(x_{n}^{j-1})\). Let \(f_{j}=f_{j}^{\parallel}+f_{j}^{\perp}\), where \(f_{j}^{\parallel}\in\mathcal{V}_{j}(\mathbf{x})\) and \(f_{j}^{\perp}\in\mathcal{V}_{j}(\mathbf{x})^{\perp}\). Then, for \(j=1,\ldots,L-1\), we have
\[f_{j}(x_{i}^{j-1})=\bigl{\langle}\phi_{j}(x_{i}^{j-1}),f_{j}\bigr{\rangle}_{ \mathcal{M}_{j}}=\bigl{\langle}\phi_{j}(x_{i}^{j-1}),f_{j}^{\parallel}\bigr{\rangle} _{\mathcal{M}_{j}}=f_{j}^{\parallel}(x_{i}^{j-1}). \tag{8}\]
In addition, we have
\[P_{f_{L-1}}\cdots P_{f_{1}}|_{\tilde{\mathcal{V}}(\mathbf{x})}=\prod_{j=1}^{L-1 }P_{f_{j}}|_{\tilde{\mathcal{V}}_{j}(\mathbf{x})},\]
where \(\tilde{\mathcal{V}}_{j}(\mathbf{x})\) is the submodule of \(\tilde{\mathcal{M}}_{j}\) generated by \(\phi_{j}(x_{1}^{j-1}),\ldots\phi_{j}(x_{n}^{j-1})\). In the same manner as Eq. (8), we obtain
\[P_{f_{j}}\sum_{i=1}^{n}\phi_{j}(x_{i}^{j-1})c_{i,j} =\sum_{i=1}^{n}\phi_{j+1}(f_{j}(x_{i}^{j-1}))c_{i,j}=\sum_{i=1}^{n} \phi_{j+1}\bigl{(}f_{j}^{\parallel}(x_{i}^{j-1})\bigr{)}c_{i,j}\] \[=\sum_{i=1}^{n}P_{f_{j}^{\parallel}}\phi_{j}(x_{i}^{j-1})c_{i,j}.\]Thus, we have \(P_{f_{j}}|_{\hat{\wp}_{j}(\mathbf{x})}=P_{f_{j}^{\parallel}}|_{\hat{\wp}_{j}( \mathbf{x})}\). Furthermore, we have
\[\big{\|}f_{L}\|_{\mathcal{M}_{L}}=\|\langle f_{L}^{\parallel}+f_{L}^{\perp},f_{L }^{\parallel}+f_{L}^{\perp}\rangle_{\mathcal{M}_{L}}\big{\|}_{\mathcal{A}}= \big{\|}\langle f_{L}^{\parallel},f_{L}^{\parallel}\rangle_{\mathcal{M}_{L}}+ \big{\langle}f_{L}^{\perp},f_{L}^{\perp}\rangle_{\mathcal{M}_{L}}\big{\|}_{ \mathcal{A}}\geq\big{\|}\langle f_{L}^{\parallel},f_{L}^{\parallel}\rangle_{ \mathcal{M}_{L}}\big{\|}_{\mathcal{A}},\]
where the last inequality is derived from the fact that \(\big{\langle}f_{L}^{\parallel},f_{L}^{\parallel}\big{\rangle}_{\mathcal{M}_{L} }+\big{\langle}f_{L}^{\perp},f_{L}^{\perp}\big{\rangle}_{\mathcal{M}_{L}}- \big{\langle}f_{L}^{\perp},f_{L}^{\perp}\big{\rangle}_{\mathcal{M}_{L}}\) is positive and Theorem 2.2.5 (3) by Murphy [34]. As a result, the statement is proved. \(\Box\)
**Proposition 5.3**: _For \(j=1,L\), let \([\phi_{j}(x_{1}^{j-1}),\ldots,\phi_{j}(x_{n}^{j-1})]R_{j}=Q_{j}\) be the QR decomposition of \([\phi_{j}(x_{1}^{j-1}),\ldots,\phi_{j}(x_{n}^{j-1})]\). Then, we have \(\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\wp}(\mathbf{x})}\|_{\mathrm{op}}=\|R_{L} ^{*}G_{L}R_{1}\|_{\mathrm{op}}\)._
* The result is derived by the identities \(\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\wp}(\mathbf{x})}\|_{\mathrm{op}}=\|Q_{L} ^{*}P_{f_{L-1}}\cdots P_{f_{1}}Q_{1}\|_{\mathrm{op}}=\|Q_{L}^{*}P_{f_{L-1}} \cdots P_{f_{1}}[\phi_{1}(x_{1}),\ldots,\phi_{1}(x_{n})]R_{1}\|_{\mathrm{op}}\) \[=\|R_{L}^{*}[\phi_{L}(x_{1}^{L-1}),\ldots,\phi_{L}(x_{n}^{L-1})]^{ *}[\phi_{L}(x_{1}^{L-1}),\ldots,\phi_{L}(x_{n}^{L-1})]R_{1}\|_{\mathrm{op}}\] \[=\|R_{L}^{*}G_{L}R_{1}\|_{\mathrm{op}}.\]
\(\Box\)
**Proposition 5.4**: _Assume \(G_{L}\) is invertible. Then, we have_
\[\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\wp}(\mathbf{x})}\|_{\mathrm{op}}\leq\|G_ {L}^{-1}\|_{\mathrm{op}}^{1/2}\|G_{L}\|_{\mathrm{op}}\|G_{1}^{-1}\|_{\mathrm{op }}^{1/2}.\]
* Since \(G_{L}\) is invertible, by Proposition 5.3, \(\|R_{L}^{*}G_{L}R_{1}\|_{\mathrm{op}}\) is bounded as \(\|P_{f_{L-1}}\cdots P_{f_{1}}|_{\hat{\wp}(\mathbf{x})}\|_{\mathrm{op}}=\|R_{L} ^{*}G_{L}R_{1}\|_{\mathrm{op}}\leq\|R_{L}\|_{\mathrm{op}}\|G_{L}\|_{\mathrm{op }}\|R_{1}\|_{\mathrm{op}}=\|G_{L}^{-1}\|_{\mathrm{op}}^{1/2}\|G_{L}\|_{\mathrm{ op}}\|G_{1}^{-1}\|_{\mathrm{op}}^{1/2}\), where the last equality is derived by the identity \(\|R_{j}\|_{\mathrm{op}}^{2}=\|R_{j}R_{j}^{*}\|_{\mathrm{op}}=\|{G_{j}}^{-1}\|_{ \mathrm{op}}\). \(\Box\)
## Appendix B Connection with Generalization Bound with Koopman Operators
Hashimoto et al. [18] derived a generalization bound for the classical neural network composed of linear transformations and activation functions using Koopman operators. The Koopman operator is the adjoint of the Perron-Frobenius operator. In their analysis, they set the final transformation \(f_{L}\) in an RKHS and observed that if the transformation of each layer is injective, the noise is separated through the linear transformations and cut off by \(f_{L}\). Since the final transformation \(f_{L}\) in our case is in an RKHM, the same interpretation about noise is valid for deep RKHM, too. Indeed, if \(f_{L}\circ\cdots\circ f_{1}\) is not injective, then \(G_{L}\) is not invertible, which results in the right-hand side of the inequality (2) going to infinity. The bound derived by Hashimoto et al. also goes to infinity if the network is not injective.
## Appendix C Experimental Details and Additional Results
We provide details for the experiments in Section 7. All the experiments are executed with Python 3.9 and TensorFlow 2.6 on Intel(R) Core(TM) i9 CPU and NVIDIA Quadro RTX 5000 GPU with CUDA 11.7.
### Comparison with vvRKHS
We set \(k_{j}(x,y)=\tilde{k}(x,y)I\) for \(j=1,\ldots,3\) with \(\tilde{k}(x,y)=\mathrm{e}^{-c\sum_{i,j=1}^{d}|x_{i,j}-y_{i,j}|}\) and \(c=0.001\) for positive definite kernels. For the optimizer, we used SGD. The learning rate is set as \(1e^{-4}\) both for the deep RKHM and deep vvRKHS. The initial value of \(c_{i,j}\) is set as \(a_{i,j}+\epsilon_{i,j}\), where \(a_{i,j}\in\mathcal{A}_{j}\) is the block matrix all of whose elements are \(0.1\) and \(\epsilon_{i,j}\) is randomly drawn from \(\mathcal{N}(0,0.05)\). The result in Figure 2 is obtained by 5 independent runs.
### Observation about benign overfitting
We set the same positive definite kernel as Subsection C.1 for \(j=1,2\). For the optimizer, we used SGD. The learning rate is set as \(3\times 1e^{-4}\). The initial value of \(c_{i,j}\) is the same as Subsection C.1. The result in Figure 2 is obtained by 3 independent runs.
### Comparison to CNN
We set \(k_{j}(x,y)=\tilde{k}(xa_{j},ya_{j})xy^{*}\) for \(j=1,2\), where \(a_{1}\) and \(a_{2}\) are block matrices whose block sizes are \(2\) and \(4\), for positive definite kernels. All the nonzero elements of \(a_{j}\) are set as \(1\). We set \(a_{1}\) and \(a_{2}\) to induce interactions in the block elements (see Remark 5.2). For the optimizer, we used Adam with learning rate \(1e^{-3}\) for both the deep RKHM and the CNN. The initial value of \(c_{i,j}\) is set as \(\epsilon_{i,j}\), where \(\epsilon_{i,j}\) is randomly drawn from \(\mathcal{N}(0,0.1)\).
We combined the deep RKHM and CNN with \(2\) dense layers. Their activation functions are sigmoid and softmax, respectively. For the CNN layers, we also used the sigmoid activation functions. The loss function is set as the categorical cross-entropy for the CNN. The result in Figure 2 is obtained by 5 independent runs.
#### c.3.1 Memory consumption and computational cost
Memory consumptionWe used a CNN with \((7\times 7)\) and \((4\times 4)\)-filters. On the other hand, for the deep RKHM, we learned the coefficients \(c_{i,j}\) in Proposition 5.1 for \(j=1,2\) and \(i=1,\ldots,n\). That is, we learned the following coefficients:
* \(n(=20)\) block diagonal matrices each of whom has four \((7\times 7)\)-blocks (for the first layer)
* \(n\) block diagonal matrices each of whom has seven \((4\times 4)\)-blocks (for the second layer)
Thus, the size of the parameters we have to learn for the deep RKHM is larger than the CNN. Since the memory consumption depends on the size of learning parameters, the memory consumption is larger for the deep RKHM than for the CNN.
Computational costIn each iteration, we compute \(f(x_{i})\) for \(i=1,\ldots,n\) and the derivative of \(f\) with respect to the learning parameters. Here, \(f=f_{1}\circ f_{2}\) is the network. For the deep RKHM, we compute the product of a Gram matrix (composed of \(n\times n\) block diagonal matrices) and a vector (composed of \(n\) block diagonal matrices) for computing \(f_{j}(x_{i})\) for all \(i=1,\ldots,n\). Thus, the computational cost for computing \(f(x_{i})\) for all \(i=1,\ldots,n\) is \(O(n^{2}d(m_{1}+m_{2}))\), where \(m_{1}=7\) and \(m_{2}=4\) are the sizes of the block diagonal matrices. For the CNN, the computational cost for computing \(f(x_{i})\) for all \(i=1,\ldots,n\) is \(O(nd^{2}(l_{1}+l_{2}))\), where \(l_{1}=7\times 7\) and \(l_{2}=4\times 4\) are the number of elements in the filters. Since we set \(n=20\) and \(d=28\), the computational cost of the deep RKHM for computing \(f(x_{i})\) for \(i=1,\ldots,n\) is smaller than that of the CNN. However, since the size of the learning parameters of the deep RKHM is large, the computational cost for computing the derivative of \(f(x_{i})\) is larger than that of the CNN.
Additional resultsTo compare the deep RKHM to a CNN with the same size of learning parameters (the same memory consumption), we conducted the same experiment as the last experiment in the main text, excepting for the structure of the CNN. We constructed a CNN with \((28\times 7\cdot 20)\) and \((28\times 4\cdot 20)\)-filters (The size of learning parameters is the same as the deep RKHM) and replaced the CNN used in the experiment in the main text. Figure 3 shows the result. The result is similar to that in Figure 2 (c), and the deep RKHM also outperforms the CNN with the same learning parameter size. Since the size of the learning parameter is the same in this case, the computational cost for one iteration for learning the deep RKHM is the same or smaller than that for learning the CNN.
#### c.3.2 Additional results for benign overfitting
We also observed benign overfitting for the deep RKHM. We compared the train and test losses for the deep RKHM with \(\lambda_{1}=1\) to those with \(\lambda_{1}=0\). The results are illustrated in Figure 4. If we set \(\lambda_{1}=0\) in Eq. (1), i.e., do not minimize the term in Eq. (2), then whereas the training loss becomes small as the learning process proceeds, the test loss becomes large after sufficient iterations. On the other hand, if we set \(\lambda_{1}=1\), i.e., minimize the term in Eq. (2), then the training loss becomessmaller than the case of \(\lambda_{1}=0\), and the test loss does not become large even the learning process proceeds. The result implies that the term regarding the Perron-Frobenius operators in Eq. (1) causes overfitting, but it is benign overfitting.
Figure 4: Train loss and test loss for the MNIST classification task with (\(\lambda=1\)) and without (\(\lambda=0\)) the minimization of the second term in Eq. (1) regarding the Perron–Frobenius operators.
Figure 3: Comparison of deep RKHM to a CNN with the same size of learning parameter. | ## Review
### Summary
This paper introduces deep Reproducing kernel Hilbert-module (RKHM) as a novel deep learning framework for kernel methods, generalizing RKHS by means of C*-algebra. The authors compute the Rademacher complexity for this function class and establish a representer theorem, demonstrating improved generalization bounds. They present connections to existing studies, including CNNs and benign overfitting, and provide numerical experiments that support their theoretical findings. Overall, the paper contributes significant insights into the analysis and understanding of deep kernel methods, particularly with respect to generalization bounds that are less dependent on output dimensions.
### Strengths
- The paper presents a new approach for analyzing deep kernel methods.
- It derives a generalization bound for deep RKHMs, which relaxes dependence on output dimensions using the Perron-Frobenius norm.
- The writing is clear and the paper is well-organized.
- The results are technically solid and relevant to the NeurIPS community.
- Experiments conducted support the theoretical claims.
- Novel tools and insights are provided that can inspire further research.
### Weaknesses
- The derivation of results, while correct, follows standard approaches that do not significantly innovate beyond existing literature.
- The motivation for using deep kernels over traditional methods lacks depth and clarity.
- Some discussions, particularly concerning the relationship between deep RKHMs and CNNs, are unconvincing.
- The paper contains some unclear notations and could benefit from improved explanations.
- There are concerns about the applicability of the methods to larger datasets like ImageNet, as efficient methods specific to deep RKHM are yet to be developed.
### Questions
- What exactly is being done when writing 'Thus' in the proof of Lemma 2.7? What does 'well defined' mean?
- How is Corollary 4.7 derived? Can you elaborate on controlling the operator norm of the Perron-Frobenius operator?
- Can you explain how the generalization bounds in Theorem 4.1 and Theorem 4.5 depend on the input dimension?
- In the connection with the neural tangent kernel, is the aim to define a neural tangent kernel for deep RKHM?
- Any theoretical understanding of why deep RKHMs might be better than non-deep ones?
### Soundness
**Score:** 3
**Description:** Good: The theoretical foundations are solid, and the results are sound, though some derivations are standard.
### Presentation
**Score:** 3
**Description:** Good: The paper is generally well-written but could benefit from clarifying certain notations and improving explanations.
### Contribution
**Score:** 3
**Description:** Good: The paper contributes valuable insights and a new framework, but the practical implications and motivations could be expanded.
### Rating
**Score:** 7
**Description:** Accept: The paper is technically solid with high impact potential, but some areas need further clarification and depth.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The decision to accept the paper is based on its originality in presenting a deep learning framework for kernel methods, soundness in its theoretical contributions, significance in the context of deep kernel theory, and generally clear presentation. Despite some weaknesses in motivation and clarity, the strengths and potential impact outweigh the concerns.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Volume Feature Rendering for Fast Neural Radiance Field Reconstruction
Kang Han Wei Xiang1 Lu Yu
School of Computing, Engineering and Mathematical Sciences
La Trobe University
{k.han, w.xiang, l.yu}@latrobe.edu.au
Corresponding author.
Footnote 1: footnotemark:
###### Abstract
Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives. In NeRF's rendering pipeline, neural networks are used to represent a scene independently or transform queried learnable feature vector of a point to the expected color or density. With the aid of geometry guides either in the form of occupancy grids or proposal networks, the number of color neural network evaluations can be reduced from hundreds to dozens in the standard volume rendering framework. However, many evaluations of the color neural network are still a bottleneck for fast NeRF reconstruction. This paper revisits volume feature rendering (VFR) for the purpose of fast NeRF reconstruction. The VFR integrates the queried feature vectors of a ray into one feature vector, which is then transformed to the final pixel color by a color neural network. This fundamental change to the standard volume rendering framework requires only one single color neural network evaluation to render a pixel, which substantially lowers the high computational complexity of the rendering framework attributed to a large number of color neural network evaluations. Consequently, we can use a comparably larger color neural network to achieve a better rendering quality while maintaining the same training and rendering time costs. This approach achieves the state-of-the-art rendering quality on both synthetic and real-world datasets while requiring less training time compared with existing methods.
## 1 Introduction
For the task of view synthesis, unobserved views of a scene are synthesized from captured multi-view images. This is a long-standing problem that has been studied for several decades in computer vision and computer graphics. The most popular solution to this problem at present is the neural radiance field (NeRF) [18] since it can render views with high fidelity. NeRF represents the density and color of a spatial point in a given direction using a neural network (NN), typically the multilayer perceptron (MLP). With the predicted densities and colors of sampled points along a ray, the NeRF and its variants [2, 3, 28] use the volume rendering technique to aggregate colors to obtain the final rendered pixel color. As many samples are required to render one pixel, the underlying MLP needs to run many times, leading to high computational complexities for both training and rendering. Alternatively, researchers introduced extra learnable features in the form of 3D grids [25, 17], hash tables [19] and decomposed tensors [6, 12, 7, 23] in addition to MLP's parameters to represent a scene's density and color fields. As querying the feature vector of a sampled point by interpolation is much faster than one MLP evaluation, only a small MLP is used to transform the queried feature vector to the density or color, leading to substantial speed acceleration compared with pure MLP representations.
Furthermore, modeling the density field independently by computational efficient representations can greatly reduce the number of samples in the color field. The density representations could be occupancy grids [19], hash grids with less parameters [16], and small MLPs [3]. These coarse density fields enable importance sampling to make the model focus on non-empty points near the surface. As a result, only dozens of samples in the color field (typically represented by both learnable features and NNs) are needed to render a pixel. For example, Zip-NeRF [4] uses 32 samples after importance sampling through efficient density representations.
Nevertheless, standard volume rendering techniques still need to run a color NN dozens of times to render a single pixel. This is why Instant-NGP [19] uses a small color MLP for fast training and rendering, and why the latest Zip-NeRF [4] is comparably slow when using large color MLPs to achieve a better rendering quality. Although a radiance field can be implemented even without NNs [9], the importance of large color NNs for high-quality rendering is undoubted, considering that the state-of-the-art rendering quality can be achieved by pure MLP representations. The authors of Zip-NeRF [4] and NRFF [12] also highlight the importance of large color NNs in modeling view-dependent effects. Unfortunately, the standard volume rendering technique limits the size of color NNs for the sake of fast training and rendering.
One the other hand, research in generative NeRF models [5; 11; 21] and fast NeRF rendering [13] shows that the feature vectors of samples along a ray can be rendered or integrated first to enable one single evaluation of following NNs. However, it is unclear whether this volume feature rendering (VFR) technique can also be used for fast NeRF reconstruction. Towards this end, this paper revisits the VFR method for the purpose of fast NeRF reconstruction. Our investigation reveals that the VFR can be integrated into the training phase of NeRF models, thereby enabling the utilization of large color neural networks to either enhance rendering quality while maintaining the same training time, or to achieve a similar rendering quality with much faster training.
## 2 Related work
The volume rendering technique integrates the colors of samples along a ray according to their densities [15]. A global density field that is differentiable is very effective in finding the underlying geometry of a scene from multi-view 2D color observations using volume rendering. This capability is one of the essential reasons to NeRF's success. As every point in a scene has a density value, the density field is capable of modeling complex geometry. However, standard volume rendering employed by NeRF and its successors [18; 2; 3; 26; 35; 27; 6; 34] suffer from high computational complexity caused by many NN calls even with importance sampling guided by coarse density fields, as discussed in the preceding section.
Representing the underlying geometry by the signed distance function (SDF) leads to a well-defined surface to to enable one sample-based fast rendering [33; 8; 29; 20; 10; 32]. However, the SDF struggles with modeling complex geometry in the context of neural inverse rendering, e.g., trees and leaves, and this is why the best rendering quality is still achieved by volume representations. In this paper, we revisit the VFR framework that uses density as the geometry representation but shares a similar behavior as the SDF that processes a feature vector by color NNs only once. Thus, the VFR inherits the advantage of volume representation in modeling complex geometry and also has the strength of the SDF in single NN evaluation.
Generative NeRF models such as EG3D [5], StyleNeRF [11] and StyleSDF [21] focus on 3D consistency and fidelity of generated content using generalizable neural networks with NeRF's structure. In comparison, 3D reconstruction from multi-view images using NeRF is a different task from 3D generation. As per-scene optimization is required for NeRF reconstruction, the reconstruction speed is a critical issue in this field, but this problem has not been investigated in the existing generative NeRF works. This paper revisits feature integration in volume rendering for fast reconstruction and demonstrate its effectiveness in the NeRF reconstruction task.
It is noted that feature accumulation is also discussed in the Sparse Neural Radiance Grid (SNeRG) [13] mainly for the purpose of modeling specular effects. However, the diffuse color is still obtained using the standard volume rendering method in [13], which requires many times of MLP evaluations. Besides, the SNeRG needs to train a NeRF first (require days to train) and then bakes the trained NeRF to a sparse grid with specular features. In this work, we demonstrate that this pre-training is not necessary, and the accumulated feature vector can be used to predict the final view-dependentcolor instead of only the specular color. Our VFR method can be directly optimized from scratch instead of baking the optimized features in the SNeRG. As a result, our method requires significantly less training time but achieves a much better rendering quality compared with the SNeRG.
Integrating feature vectors of samples along the ray is natural in transformer models [24; 30]. For example, Suhail _et al._ proposed an epipolar transformer to aggregate features extracted from reference views on epipolar lines [24]. However, the computational cost of transformer is much higher than MLP and that transformer-based method is significantly (8 times) slower than MLP-based Mip-NeRF [2] on the same hardware. In this work, we demonstrate that the VFR's aggregation only involves a weighted combination of feature vectors based on densities such that the VFR is significantly faster than transformer-based feature aggregation.
## 3 Model architecture
### Standard volume rendering
Standard volume rendering integrates color along a cast ray according to density [15]. A high density of a point on the ray indicates a high probability of hitting the surface. The NeRF and its various variants adopt this standard volume rendering technique, where the color and density of a point are typically predicted by a neural network. For a cast ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\), where \(\mathbf{o}\) is the camera origin and \(\mathbf{d}\) is the view direction, the rendered color \(C(\mathbf{r})\) using standard volume rendering is:
\[C(\mathbf{r})=\int T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t),\mathbf{ d})\,dt,\,\text{where}\,T(t)=\exp\left(-\int_{0}^{t}\sigma(\mathbf{r}(s))\,ds\right) \tag{1}\]
where \(\sigma(\mathbf{x})\) and \(\mathbf{c}(\mathbf{x},\mathbf{d})\) are the density and color at position \(\mathbf{x}\), repsectively. In practice, the above integral is solved by sampling and integration along the cast ray. After transforming the densities to weights by \(w(t)=T(t)\sigma(\mathbf{r}(t))\), the color will be rendered as follows:
\[C(\mathbf{r})=\sum_{i=1}^{N}w_{i}\text{NeuralNet}\left(F(\mathbf{x}_{i}), \mathbf{d}\right) \tag{2}\]
where \(\mathbf{x}\) is the position of a sample point on the ray and \(F\) is a function to query the corresponding feature vector of \(\mathbf{x}\). Pure MLP-based methods including the NeRF [18], Mip-NeRF [2], Ref-NeRF [28] and Mip-NeRF 360 [3] represent \(F\) by using MLPs. However, this representation imposes a computational burden as \(N\) MLP evaluations are required to render a ray. Alternatively, recent research [19; 25; 6] shows that modeling \(F\) by extra learnable features is significantly faster than pure-MLP based representations, because linear interpolation of learnable features is much more efficient than MLP evaluations. These learnable features are typically organized in the form of the 3D grids, hash tables, and decomposed tensors. With the learnable features, a small NN can achieve the state-of-the-art rendering quality but accelerate both the training and rendering time significantly.
Furthermore, hierarchical importance sampling guided by coarse geometries is employed to greatly reduce the number of samples in (2). This importance sampling can be achieved by modeling the density fields independently [25; 6; 12], occupancy grid [19], and proposal networks [3; 4]. As shown in Fig. 1, importance sampling makes the model focus on valuable samples on the surface. For instance, by using two levels of importance sampling, Zip-NeRF [4] reduces the final number of valuable samples to 32. However, the color NN still needs to run 32 times to deliver the best rendering quality. Although integrating colors predicted by the color NN is in line with the concept in classical volume rendering, the many color NN calls are the main obstacle for fast training and rendering. On the other hand, the importance of large color NNs for realistic rendering especially in modeling the view-dependent effect is highlighted in recent works [3; 12]. Considering the fact that valuable samples on the surface are close in terms of spatial position (see red sampling points in Fig. 1), the queried feature vectors are also very similar as they are interpolated according to their position. As such, evaluating the color NN with similar input feature vectors is not necessary. In the next subsection, we will introduce the volume feature rendering method that integrates these feature vectors to enable a single evaluation of the color NN.
### Volume feature rendering
As shown in Fig. 1, the volume feature rendering method integrates queried feature vectors instead of predicted colors to form a rendered feature vector. A subsequent color NN is then applied to transform the feature vector to the final rendered color. By doing this, the color NN only needs to run once. Specifically, we take the NN out of the integral as follows:
\[C(\mathbf{r})=\text{NeuralNet}\left(\left(\sum_{i=1}^{N}w_{i}F(\mathbf{x}_{i}) \right),\mathbf{d}\right). \tag{3}\]
This fundamental change to the standard volume rendering framework relieves the rendering framework from the high computational cost caused by many NN evaluations. Consequently, we can use a larger network to achieve a better rendering quality while maintaining a similar training time.
### Pilot network
It is reasonable to integrate feature vectors of non-empty samples in the VFR. However, at the beginning of training, the model does not have a coarse geometry to focus on samples on the surface. The VFR will integrate feature vectors of all samples during the early training stage. We found that
Figure 1: Standard volume rendering needs to run NeuralNet many times for all the samples on the surface, while the volume feature rendering only needs to evaluate NeuralNet once.
Figure 2: The VFR fails to converge on the scene of _ship_ from the dataset in [18]. A small pilot network functioning as standard volume rendering by a few training steps in the early training stage resolves this problem.
this feature integration works well for most scenes but fails to converge for some scenes. Such failure is more likely to occur when employing a comparably large NN. The VFR normally converges using the NN with two hidden layers each of 256 neurons, but it does not converge when using four hidden layers. Fig. 2 illustrates a failure case on the scene of _ship_ from the NeRF synthetic dataset [18].
As increasing the size of the NN only has a slight implication to reconstruction and rendering speeds but can lead to a better rendering quality thanks to the VFR framework, we are motivated to use a large NN. We hypothesize that the reason for the aforementioned convergence failure is that the model cannot find the correct geometry due to the feature integration of all samples in the early training stage. Motivated by this insight, we design a pilot network that functions as standard volume rendering to aid the model in finding a coarse geometry in the early training stage. A small pilot network and a few training steps are sufficient to achieve this goal. In this work, we use the pilot network with two hidden layers, each with 64 neurons. The pilot network is only used in the first 300 training steps. After this training, the pilot network will be discarded, and we can use the VFR to train the model normally. The number of pilot training steps is small compared with the total training steps, e.g., 20K steps. Thus, it has a negligible effect on the overall training time. As shown in Fig. 2, the model converges normally with the aid of the pilot network.
### View-dependent effect
We use the spherical harmonics (SH) feature encoding method to encode view direction for modeling view-dependent effects. This approach can be seen as a variant of the rendering equation encoding in [12]. Different from the vanilla SH encoding [19] employed in the literature, we multiply each encoded SH coefficient by a feature vector as shown in Fig. 3. Specifically, the rendered feature vector by the VFR is fed into a spatial MLP to produce two feature vectors: SH feature vector and bottleneck feature vector. The SH feature vector will be split into small feature vectors \(\mathbf{f}\) for SH coefficients, i.e., \(\{\mathbf{f}_{0}^{0}|l=0\}\), \(\{\mathbf{f}_{1}^{-1},\mathbf{f}_{1}^{0},\mathbf{f}_{1}^{1}|l=1\}\), etc. In the SH feature encoding block, the SH feature vector \(\mathbf{f}_{m}^{l}\) will be multiplied with SH coefficient \(Y_{l}^{m}(\mathbf{d})\) with view direction \(\mathbf{d}\) to form one SH feature encoding vector \(\mathbf{e}_{m}^{l}\), written as
\[\mathbf{e}_{l}^{m}=\mathbf{f}_{l}^{m}Y_{l}^{m}(\mathbf{d}) \tag{4}\]
A comprehensive SH encoding is derived by concatenating all encoding vectors as \(\mathbf{E}\) = \(\{\mathbf{e}_{0}^{0},\mathbf{e}_{1}^{-1},\mathbf{e}_{1}^{0}...\}\), which will be further concatenated with the bottleneck feature vector (view direction independent). This concatenated vector is used to predict the final rendered color by a directional MLP. The NN in (3) thus includes the spatial and directional MLPs.
## 4 Implemention details
We implement the proposed VFR using the NerfAcc library [16], which is based on the deep learning framework PyTorch [22]. The learnable features are organized by a multiresolution hash grid (MHG) [19]. This feature representation models the feature function \(F\) in (3) that accepts the input of the position and outputs a feature vector. The color NN contains a spatial MLP and a directional MLP. The spatial MLP has two layers and the directional MLP has four layers, all with 256 hidden neurons. The size of the bottleneck feature is set to 256. We use the GELU [14] instead of the commonly used ReLU [1] activation function, as we found the GELU results in a slightly better quality.
Figure 3: Architecture of the designed neural network for modeling view-dependent effects. Each spherical harmonics basis function is multiplied by a feature vector to encode the input view direction.
The density of a sample \(\mathbf{x}_{i}\) is derived from the queried feature vector \(F(\mathbf{x}_{i})\) by a tiny density mapping layer. On the real-world 360 dataset [3], the mapping layer is a linear transform from the feature vector to the density value. On the NeRF synthetic [18] dataset, we found a tiny network with one hidden layer and 64 neurons yields a slightly better rendering quality. The density mapping layers have negligible impact on the overall training time for both datasets.
We employ a hierarchical importance sampling strategy on both the NeRF synthetic [18] and real-world 360 datasets [3]. The occupancy grid is used for efficient sampling on the NeRF synthetic dataset. For the 360 dataset, we follow the most recent Zip-NeRF [4] that uses two levels of importance sampling. These two density fields are implemented by two small MHGs, where the number of hashing levels is ten and the feature channel in each level is one. The final number of samplings on the radiance field is 32. The models are trained using the Adam optimizer with a learning rate of 0.01 and the default learning rate scheduler in NerfAcc [16]. We use PSNR, SSIM [31] and LPIPS [36] to evaluate the rendered image quality. The running times are measured on one RTX 3090 GPU. Table 1 summarizes other experimental settings in this work.
## 5 Experimental results
### NeRF synthetic dataset
The VFR achieves the best rendering quality but uses the minimum training time compared with state-of-the-art fast methods on the NeRF synthetic dataset. Since the original Instant-NGP [19] uses the alpha channel in the training images to perform background augmentation, we report the results of the Instant-NGP from NerfAcc [16], where such augmentation is not employed for fair comparison. As shown in Table 2, the VFR outperforms Instant-NGP by 2.27 dB and NerfAcc by 1.51 dB, but only uses 3.3 minutes for training. Compared with TensoRF [6], our method only uses 33% of its training time while surpassing TensoRF by 1.48 dB in PSNR. In addition, the VFR achieves the best SSIM and LPIPS on this dataset, which is consistent with the PSNR metric.
This significant improvement in rendering quality stems from the increased size of the NN. The MLP in Instant-NGP and NerfAcc has 4 layers and 64 neurons in each layer. By comparison, we use a six-layer MLP with 256 neurons. The computational cost of our MLP is roughly 24 times higher than MLPs in Instant-NGP and NerfAcc. However, thanks to the VFR framework, our MLP only needs to run one time instead of many times as in the standard volume rendering framework employed in the
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Dataset & Model & \# MHG & HGH & \#Batch & SH degree & \#SH feature \\ \hline NeRF synthetic[18] & VFR-small & 13M & 16\(\times 2^{19}\times 2\) & \(2^{18}\) points & 4 & 4 \\ Real-world 360[3] & VFR-base & 34M & 32\(\times 2^{19}\times 2\) & \(2^{13}\) rays & 7 & 8 \\ Real-world 360[3] & VFR-large & 134M & 32\(\times 2^{21}\times 2\) & \(2^{13}\) rays & 7 & 8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental settings on the NeRF synthetic and real-world 360 datasets. #Features represents the number of learnable features in the underlying hash grids, which is estimated by hash grid hyperparameters (HGH), i.e., the number of levels\(\times\) the number of voxels each level\(\times\)the number of feature channels.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & Training time\(\downarrow\) \\ \hline NeRF [18] & 31.69 & 0.953 & 0.050 & hours \\ Mip-NeRF [2] & 33.09 & 0.961 & 0.043 & hours \\ \hline TensoRF [6] & 33.14 & 0.963 & 0.047 & 10 mins \\ Instant-NGP [19] & 32.35 & 0.960 & 0.042 & 4.2 mins \\ NerfAcc [16] & 33.11 & 0.961 & 0.053 & 4.3 mins \\ \hline VFR-small: 6K & 33.02 & 0.960 & 0.055 & 0.97 mins \\ VFR-small: 20K & **34.62** & **0.971** & **0.038** & **3.3 mins** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Rendering quality and training time on the NeRF synthetic dataset. The VFR trained with 6K steps in 0.97 minutes achieves a similar quality compared with existing methods.
compared methods. As can be observed from Table 2, the single color NN evaluation property in the VFR leads to a minimum training time even when we use a much larger color NN. Noticeably, the VFR only needs 6K training steps to reach a similar rendering quality compared with the existing algorithms. Such 6K training steps can be completed within 1 minute on one RTX 3090 GPU, which is less than the 25% of training times of Instant-NGP and NerfAcc, and only 10% of the training time in TensoRF.
Fig. 4 provides a visual comparison of the rendered novel views by NerfAcc (based on Instant-NGP) and the VFR. On the compared scenes, both NerfAcc and the VFR can recover good geometries. However, for the scene of _drums_, obvious visual artefacts appear in the rendered views from NerfAcc while our results do not have such visual artefacts. Besides, the VFR produces more realistic visual effects such as shadow and reflection, as observed on the scenes of _hotdog_. The presented error maps and objective PSNRs in Fig. 4 also demonstrate the advantage of the VFR in terms of rendering quality. While NerfAcc and our method use the same feature representation, i.e., hash grids, we believe the quality advantage of our method stems from the large color NN enabled by the VFR.
### Real-world 360 dataset
We also experiment on seven real-world unbounded scenes from the 360 dataset [3]. The scenes of _flowers_ and _treehill_ in this dataset are not included as they are not publicly available. We use the unbounded ray parameterization method proposed in Mip-NeRF 360 [3], where points are mapped to a sphere if their radius to the scene center is larger than one.
The visual comparison in Fig. 5 shows that the VFR is able to render views in similar quality with Zip-NeRF but render more realistic views than NerfAcc (based on Instant-NGP). The VFR provides a more accurate geometry reconstruction than NerfAcc on the scene of _bicycle_. As can be observed from Fig. 5, the rendering results of NerfAcc contain obvious visual artefacts on the scenes of _bonsai_ and _kitchen_, while the VFR's results are more realistic and closer to the ground truth. The reflective effect in the highlighted patches on the scene of _kitchen_ is also well modeled by our model thanks to our new SH feature encoding method.
As shown in Table 3, the VFR is able to achieve similar rendering quality to the state-of-the-art Zip-NeRF [4] but using the minimum training time. Although Mip-NeRF 360 is able to synthesize novel high-fidelity views, it requires days to train on high-end GPUs because it is based on a pure MLP representation. Using learned features organized in hash grids, Instant-NGP achieves a comparable rendering quality but reduces the training time from days to hours. The most recent Zip-NeRF uses multisampling for high-quality anti-aliasing view rendering at the expense of training time relative to Instant-NGP. The VFR achieves a slightly better rendering quality than Zip-NeRF in terms of the PSNR. However, we achieve this state-of-the-art rendering quality with a significantly less training
Figure 4: Visual comparison of the synthesized novel views on the scenes of (from top to bottom) _drums_ and _hotdog_ in the NeRF synthetic dataset.
time. As a comparison, the VFR-large model only uses 22 minutes to train on one RTX 3090 GPU, while Zip-NeRF's training time is 5.2 hours on the same GPU.
It should be noted that we achieve this high-quality view rendering using an entirely different approach than Zip-NeRF. As Zip-NeRF focuses on anti-aliasing, multisampling is a reasonable solution but leads to a higher computational cost. Besides, the authors of Zip-NeRF found a larger color NN helps in improving the rendering quality. Since Zip-NeRF still employs the standard volume rendering method, a larger color NN will undoubtedly increase the training time. The VFR, however, relieves the rendering framework from the high computational cost caused by many color network evaluations. As only one color network evaluation is required in the VFR, NN evaluation is no longer the computing bottleneck in the rendering framework. As a result, the VFR enables one to achieve a similar rendering quality as Zip-NeRF but with a significantly less training time.
### Ablation study
Table 4 shows the ablation study of the proposed methods. Compared with the commonly used ReLU, we found that GELU activation improves the rendering quality for both the standard volume rendering and the VFR. Larger MLP does increase the quality using the standard volume rendering but at the expense of more training time. For the VFR, when increasing the size of the NN by using more hidden neurons, the rendering quality constantly improves but the training time only slightly
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & \#Features\(\downarrow\) & Training time\(\downarrow\) \\ \hline NeRF [18] & 24.85 & 0.659 & 0.426 & N/A & days \\ Mip-NeRF [2] & 25.12 & 0.672 & 0.414 & N/A & days \\ NeRF++ [35] & 26.21 & 0.729 & 0.348 & N/A & days \\ Mip-NeRF 360 [3] & 29.11 & 0.846 & 0.203 & N/A & days \\ \hline Instant-NGP [19] & 27.06 & 0.796 & 0.265 & 84M & 0.81 hrs \\ Zip-NeRF [4] & 29.82 & **0.874** & **0.170** & 84M & 5.20 hrs \\ NerfAcc [16] & 28.69 & 0.834 & 0.221 & 34M & 11 mins \\ \hline VFR-base: 20K & 29.48 & 0.830 & 0.233 & 34M & 5.7 mins \\ VFR-base: 40K & 29.92 & 0.850 & 0.208 & 34M & 11 mins \\ VFR-large: 20K & 29.51 & 0.846 & 0.252 & 134M & 11 mins \\ VFR-large: 40K & **29.90** & 0.862 & 0.231 & 134M & 22 mins \\ \hline \hline \end{tabular}
\end{table}
Table 3: Rendering quality and training time on the real-world 360 dataset. The scale factor for NerfAcc and the VFR-base models remains constant at four for both outdoor and indoor scenes. For the VFR-large model, the scale factor is adjusted to four for outdoor scenes (_bicycle, garden, stump_) and two for indoor scenes (_bonsai, counter, kitchen, room_) in order to maintain conformity with Zip-NeRF [4].
Figure 5: Visual comparison of synthesized novel views on the scenes of (from top to bottom) _bicycle_, _bonsai_, and _kitchen_ on the real-world 360 dataset [3].
increases. Replacing the SH encoding with the SH feature encoding (W/O SHFE vs Full model) also leads to a better quality with negligible extra training time.
## 6 Limitations
Similar to many other neural rendering methods, the VFR requires per-scene optimization to deliver high-fidelity view rendering. Although the VFR can be optimized in several minutes for one scene, we are still far from realizing real-time reconstruction for 3D video applications. As for the rendering time, our method can render 4\(\sim\)5 frames per second at the resolution of 800\(\times\)800 on the NeRF synthetic dataset, similar to the rendering speed of NerfAcc. On the real-world 360 dataset, the rendering speed is about 1\(\sim\)2 frames per second at the resolution of 1300\(\times\)840.
Elaborate implementation optimization can help in achieving high-quality real-time rendering using this VFR. However, compared with the recent work that specifically focuses on real-time rendering such as BakedSDF [33] and MobileNeRF [8], the rendering speed of the VFR needs to be further improved. BakedSDF and MobileNeRF represent a scene's surface to enable one feature querying and one neural network evaluation, at the expense of some rendering quality decline compared with volume representation. The VFR uses the volume representation with one color NN evaluation to deliver the state-of-the-art rendering quality, but the feature querying still needs to be conducted many times for a cast ray. Considering the fact that the queried feature vectors of non-empty samples are finally integrated into one feature vector using the VFR, we believe the number of feature querying can also be reduced to a small value, e.g., from 32 to 5, to achieve high-quality and real-time rendering simultaneously. An additional prospective constraint of the VFR lies in its potential inefficacy when confronted with semitransparent objects as it integrates feature vectors first and then predicts a single final color. This property is not well reflected in the NeRF synthetic and real-world 360 datasets and requires further investigation in future work.
## 7 Conclusion
We revisited the volume feature rendering method in the context of fast NeRF reconstruction. The VFR integrates the queried feature vectors of non-empty samples on a ray and then transforms the integrated feature vector to the final rendered color using one single color neural network evaluation. We found the VFR cannot guarantee the convergence on some scenes and thus introduced the pilot network to guide the early-stage training to tackle this problem. The VFR is able to achieve the state-of-the-art rendering quality while using significantly less training time. It has a great potential to replace the standard volume rendering in many neural rendering methods based on volume representation, with the benefits of either a better rendering quality by employing a larger color neural network or a significantly reduced training time thanks to one single color neural network evaluation.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & & & & \multicolumn{4}{c}{NeRF synthetic (VFR-small)} & \multicolumn{4}{c}{Real-world 360 (VFR-base)} \\ \cline{4-11} & Activation & MLP & VDE & Time\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) & Time\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\(\downarrow\) \\ \hline Baseline & ReLU & 4\(\times\)64 & SH & 4.4 & 33.11 & 0.9614 & 0.053 & 5.6 & 28.39 & 0.8147 & 0.245 \\ +GELU & GELU & 4\(\times\)64 & SH & 4.4 & 33.43 & 0.9634 & 0.049 & 5.6 & 28.43 & 0.8152 & 0.245 \\ +Medium MLP & GELU & 4\(\times\)128 & SH & 5.6 & 33.92 & 0.9663 & 0.045 & 6.8 & 28.70 & 0.8205 & 0.239 \\ \hline VFR & ReLU & 4\(\times\)64 & SH & 2.9 & 33.17 & 0.9619 & 0.053 & 5.0 & 28.20 & 0.8013 & 0.262 \\ +GELU & GELU & 4\(\times\)64 & SH & 3.0 & 33.61 & 0.9641 & 0.048 & 5.0 & 28.26 & 0.8064 & 0.254 \\ +Medium MLP & GELU & 6\(\times\)64 & SHFE & 3.0 & 34.05 & 0.9678 & 0.043 & 5.2 & 29.06 & 0.8215 & 0.240 \\ +Large MLP & GELU & 6\(\times\)128 & SHFE & 3.0 & 34.33 & 0.9697 & 0.041 & 5.4 & 29.28 & 0.8266 & 0.236 \\ W/O SHFE & GELU & 6\(\times\)256 & SH & 3.2 & 34.46 & 0.9701 & 0.040 & 5.5 & 28.91 & 0.8192 & 0.245 \\ Full model & GELU & 6\(\times\)256 & SHFE & 3.3 & **34.62** & **0.9713** & **0.038** & 5.7 & **29.48** & **0.8301** & **0.233** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study. SH means spherical harmonics for view direction encoding (VDE) while SHFE stands for SH feature encoding. We use the NerfAcc’s implementation of Instant-NGP as a baseline. MLP’s size is represented by layers\(\times\)neurons. Times are measured in minutes by 20K training steps.
## References
* [1]A. Fred Agarap (2018) Deep learning using rectified linear units (relu). arXiv preprint arXiv:1803.08375. Cited by: SS1.
* [2]J. T. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, and P. P. Srinivasan (2021) MIP-NeRF: a multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5855-5864. Cited by: SS1.
* [3]J. T. Barron, B. Mildenhall, D. Verbin, P. P. Srinivasan, and P. Hedman (2022) MIP-NeRF 360: unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5470-5479. Cited by: SS1.
* [4]J. T. Barron, B. Mildenhall, D. Verbin, P. P. Srinivasan, and P. Hedman (2023) ZIP-NeRF: anti-aliased grid-based neural radiance fields. arXiv preprint arXiv:2304.06706. Cited by: SS1.
* [5]E. R. Chan, C. Z. Lin, M. A. Chan, K. Nagano, B. Pan, S. De Mello, O. Gallo, L. J. Guibas, J. Tremblay, S. Khamis, et al. (2022) Efficient geometry-aware 3D generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16123-16133. Cited by: SS1.
* [6]A. Chen, Z. Xu, A. Geiger, J. Yu, and H. Su (2022) TensoRF: tensorial radiance fields. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXII, pp. 333-350. Cited by: SS1.
* [7]A. Chen, Z. Xu, X. Wei, S. Tang, H. Su, and A. Geiger (2023) Factor fields: a unified framework for neural fields and beyond. arXiv preprint arXiv:2302.01226. Cited by: SS1.
* [8]Z. Chen, T. Funkhouser, P. Hedman, and A. Tagliasacchi (2022) MobileNeRF: exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. arXiv preprint arXiv:2208.00277. Cited by: SS1.
* [9]S. Fridovich-Keil, A. Yu, M. Tancik, Q. Chen, B. Recht, and A. Kanazawa (2022) Plenoxels: radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5501-5510. Cited by: SS1.
* [10]Q. Fu, Q. Xu, Y. S. Ong, and W. Tao (2022) Geo-NeuS: geometry-consistent neural implicit surfaces learning for multi-view reconstruction. Advances in Neural Information Processing Systems35, pp. 3403-3416. Cited by: SS1.
* [11]J. Gu, L. Liu, P. Wang, and C. Theobalt (2021) StyleNeRF: a style-based 3D aware generator for high-resolution image synthesis. In International Conference on Learning Representations, Cited by: SS1.
* [12]K. Han and W. Xiang (2021) Multiscale tensor decomposition and rendering equation encoding for view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4232-4241. Cited by: SS1.
* [13]P. Hedman, P. P. Srinivasan, B. Mildenhall, J. T. Barron, and P. Debevec (2021) Baking neural radiance fields for real-time view synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5875-5884. Cited by: SS1.
* [14]D. Hendrycks and K. Gimpel (2016) Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415. Cited by: SS1.
* [15]J. T. Kajiya and B. P. Von Herzen (1984) Ray tracing volume densities. ACM SIGGRAPH18 (3), pp. 165-174. Cited by: SS1.
* [16]R. Li, M. Tancik, and A. Kanazawa (2022) Nerfacc: a general nerf acceleration toolbox. arXiv preprint arXiv:2210.04847. Cited by: SS1.
* [17]L. Liu, J. Gu, K. Z. Lin, T. Chua, and C. Theobalt (2020) Neural sparse voxel fields. Advances in Neural Information Processing Systems33, pp. 15651-15663. Cited by: SS1.
* [18] B Mildenhall, PP Srinivasan, M Tancik, JT Barron, R Ramamoorthi, and R Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In _European Conference on Computer Vision_, 2020.
* [19] Thomas Muller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. _ACM Transactions on Graphics (ToG)_, 41(4):1-15, 2022.
* [20] Michael Oechsle, Songyou Peng, and Andreas Geiger. Unisurf: Unifying neural implicit surfaces and radiance fields for multi-view reconstruction. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5589-5599, 2021.
* [21] Roy Or-El, Xuan Luo, Mengyi Shan, Eli Shechtman, Jeong Joon Park, and Ira Kemelmacher-Shlizerman. StyleSDF: High-resolution 3d-consistent image and geometry generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 13503-13513, 2022.
* [22] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In _Proceedings of Thirty-first Conference on Neural Information Processing Systems Workshop_, 2017.
* [23] Ruizhi Shao, Zerong Zheng, Hanzhang Tu, Boning Liu, Hongwen Zhang, and Yebin Liu. Tensor4D: Efficient neural 4D decomposition for high-fidelity dynamic reconstruction and rendering. _arXiv preprint arXiv:2211.11610_, 2022.
* [24] Mohammed Suhail, Carlos Esteves, Leonid Sigal, and Ameesh Makadia. Light field neural rendering. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8269-8279, 2022.
* [25] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5459-5469, 2022.
* [26] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. _Advances in Neural Information Processing Systems_, 33:7537-7547, 2020.
* [27] Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P Srinivasan, Jonathan T Barron, and Henrik Kretzschmar. Block-NeRF: Scalable large scene neural view synthesis. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8248-8258, 2022.
* [28] Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T Barron, and Pratul P Srinivasan. Ref-NeRF: Structured view-dependent appearance for neural radiance fields. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 5481-5490. IEEE, 2022.
* [29] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In _Advances in Neural Information Processing Systems_, 2021.
* [30] Yunxiao Wang, Yanjie Li, Peidong Liu, Tao Dai, and Shu-Tao Xia. NeXT: Towards high quality neural radiance fields via multi-skip transformer. In _Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXII_, pages 69-86. Springer, 2022.
* [31] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE transactions on image processing_, 13(4):600-612, 2004.
* [32] Lior Yariv, Jiatao Gu, Yoni Kasten, and Yaron Lipman. Volume rendering of neural implicit surfaces. _Advances in Neural Information Processing Systems_, 34:4805-4815, 2021.
* [33] Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P Srinivasan, Richard Szeliski, Jonathan T Barron, and Ben Mildenhall. BakedSDF: Meshing neural SDFs for real-time view synthesis. _arXiv preprint arXiv:2302.14859_, 2023.
* [34] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. Plenoctrees for real-time rendering of neural radiance fields. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 5752-5761, 2021.
* [35] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. NeRF++: Analyzing and improving neural radiance fields. _arXiv preprint arXiv:2010.07492_, 2020.
* [36] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 586-595, 2018. | ## Review
### Summary
The paper presents a novel approach to view synthesis by introducing a method that leverages multi-resolution hash grid features, optimizing them without the need for multiple MLP evaluations on individual points. Instead, it utilizes a single MLP to predict pixel colors from these accumulated features, leading to improved rendering speed and quality. The proposed technique demonstrates superior performance over existing state-of-the-art methods in terms of training time and view synthesis quality on both synthetic and real-world datasets. However, the authors are encouraged to refine claims regarding the novelty and better position their work within the existing literature.
### Strengths
- The proposed system outperforms state-of-the-art methods like Zip-NeRF and Instant-NGP in both training times and view synthesis quality.
- An extensive ablation study is included, clarifying the contributions of various components to overall performance.
- The manuscript is well-organized, making it easy to follow.
- The introduction of a pilot network aids in stabilizing training and improving robustness.
- The paper is generally easy to reproduce and follow, with solid presentation.
### Weaknesses
- Time comparisons are not conducted on the same hardware, which raises concerns about the validity of performance claims.
- The consistency of rendered frames (3D smoothness) is not adequately demonstrated, potentially affecting qualitative results.
- The method's dependence on a pilot network and how it integrates with the rendering process needs clearer explanation.
- Key figures and equations lack clarity, which may confuse readers.
- The paper does not provide video results to demonstrate improvements in view-dependent effects.
### Questions
- Are the time comparisons performed on the same hardware?
- How does the proposed method maintain 3D consistency and smoothness in rendered animations compared to traditional methods?
- What is the exact role and operation of the pilot network in the proposed method?
- Can the authors clarify how the density is predicted and integrated into the rendering process?
- Could the authors explain the details of the spherical harmonic-based deferred rendering technique more precisely?
### Soundness
**Score:** 3
**Description:** 3 = good: The methodology is solid and the results are generally convincing, though some claims require more rigorous justification.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is well-structured and clear, but some sections need refinement for better clarity and precision.
### Contribution
**Score:** 3
**Description:** 3 = good: While the paper presents a valuable improvement over existing methods, its novelty is somewhat diminished by prior works that have explored similar concepts.
### Rating
**Score:** 6
**Description:** 6 = accept, but needs major improvements: The paper is technically sound and the contributions are beneficial, but significant revisions are necessary to address clarity and novelty concerns.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper demonstrates solid technical contributions with improvements in speed and quality over existing methods. However, it requires revisions to clarify claims about its novelty and better position it in the context of related works. Incorporating reviewer feedback in the final version will enhance its overall quality.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# On Formal Feature Attribution and Its Approximation
Anonymous Author(s)
Affiliation
Address
email
###### Abstract
Recent years have witnessed the widespread use of artificial intelligence (AI) algorithms and machine learning (ML) models. Despite their tremendous success, a number of vital problems like ML model brittleness, their fairness, and the lack of interpretability warrant the need for the active developments in explainable artificial intelligence (XAI) and formal ML model verification. The two major lines of work in XAI include _feature selection_ methods, e.g. Anchors, and _feature attribution_ techniques, e.g. LIME and SHAP. Despite their promise, most of the existing feature selection and attribution approaches are susceptible to a range of critical issues, including explanation unsoundness and _out-of-distribution_ sampling. A recent formal approach to XAI (FXAI) although serving as an alternative to the above and free of these issues suffers from a few other limitations. For instance and besides the scalability limitation, the formal approach is unable to tackle the feature attribution problem. Additionally, a formal explanation despite being formally sound is typically quite large, which hampers its applicability in practical settings. Motivated by the above, this paper proposes a way to apply the apparatus of formal XAI to the case of feature attribution based on formal explanation enumeration. Formal feature attribution (FFA) is argued to be advantageous over the existing methods, both formal and non-formal. Given the practical complexity of the problem, the paper then proposes an efficient technique for approximating exact FFA. Finally, it offers experimental evidence of the effectiveness of the proposed approximate FFA in comparison to the existing feature attribution algorithms not only in terms of feature importance and but also in terms of their relative order.
## 1 Introduction
Thanks to the unprecedented fast growth and the tremendous success, Artificial Intelligence (AI) and Machine Learning (ML) have become a universally acclaimed standard in automated decision making causing a major disruption in computing and the use of technology in general [1; 29; 35; 47]. An ever growing range of practical applications of AI and ML, on the one hand, and a number of critical issues observed in modern AI systems (e.g. decision bias [3] and brittleness [64]), on the other hand, gave rise to the quickly advancing area of theory and practice of Explainable AI (XAI).
Numerous methods exist to explain decisions made by what is called black-box ML models [46; 48]. Here, _model-agnostic_ approaches based on random sampling prevail [46], with the most popular being _feature selection_[56] and _feature attribution_[40; 56] approaches. Despite their promise, model-agnostic approaches are susceptible to a range of critical issues, like unsoundness of explanations [21; 24] and _out-of-distribution sampling_[34; 62], which exacerbates the problem of trust in AI.
An alternative to model-agnostic explainers is represented by the methods building on the success of formal reasoning applied to the logical representations of ML models [42; 61]. Aiming to address the limitations of model-agnostic approaches, formal XAI (FXAI) methods themselves suffer from a few downsides, including the lack of scalability and the requirement to build a complete logicalrepresentation of the ML model. Formal explanations also tend to be larger than their model-agnostic counterparts because they do not reason about (unknown) data distribution [65]. Finally and most importantly, FXAI methods have not been applied so far to answer feature attribution questions.
Motivated by the above, we define a novel formal approach to feature attribution, which builds on the success of existing FXAI methods [42]. By exhaustively enumerating all formal explanations, we can give a crisp definition of _formal feature attribution_ (FFA) as the proportion of explanations in which a given feature occurs. We argue that formal feature attribution is hard for the second level of the polynomial hierarchy. Although it can be challenging to compute exact FFA in practice, we show that existing anytime formal explanation enumeration methods can be applied to efficiently approximate FFA. Our experimental results demonstrate the effectiveness of the proposed approach in practice and its advantage over SHAP and LIME given publicly available tabular and image datasets, as well as on a real application of XAI in the domain of Software Engineering [45; 52].
## 2 Background
This section briefly overviews the status quo in XAI and background knowledge the paper builds on.
### Classification Problems
Classification problems consider a set of classes \(\mathcal{K}=\{1,2,\ldots,k\}\)1, and a set of features \(\mathcal{F}=\{1,\ldots,m\}\). The value of each feature \(i\in\mathcal{F}\) is taken from a domain \(\mathbb{D}_{i}\), which can be categorical or ordinal, i.e. integer, real-valued or Boolean. Therefore, the complete feature space is defined as \(\mathbb{F}\triangleq\prod_{i=1}^{m}\mathbb{D}_{i}\). A concrete point in feature space is represented by \(\mathbf{v}=(v_{1},\ldots,v_{m})\in\mathbb{F}\), where each component \(v_{i}\in\mathbb{D}_{i}\) is a constant taken by feature \(i\in\mathcal{F}\). An _instance_ or _example_ is denoted by a specific point \(\mathbf{v}\in\mathbb{F}\) in feature space and its corresponding class \(c\in\mathcal{K}\), i.e. a pair \((\mathbf{v},c)\) represents an instance. Additionally, the notation \(\mathbf{x}=(x_{1},\ldots,x_{m})\) denotes an arbitrary point in feature space, where each component \(x_{i}\) is a variable taking values from its corresponding domain \(\mathbb{D}_{i}\) and representing feature \(i\in\mathcal{F}\). A classifier defines a non-constant classification function \(\kappa:\mathbb{F}\rightarrow\mathcal{K}\).
Footnote 1: Any set of classes \(\{c_{1},\ldots,c_{k}\}\) can always be mapped into the set of the corresponding indices \(\{1,\ldots,k\}\).
Many ways exist to learn classifiers \(\kappa\) given training data, i.e. a collection of labeled instances \((\mathbf{v},c)\), including decision trees [23] and their ensembles [11; 12], decision lists [57], neural networks [35], etc. Hereinafter, this paper considers boosted tree (BT) models trained with the use of XGBoost [12].
**Example 1**.: _Figure 1 shows a BT model trained for a simplified version of the adult dataset [33]. For a data instance \(\mathbf{v}=\{\textit{Education}=\textit{Bachelors},\textit{Status}=\textit{ Separated, Occupation}=\textit{Sales, Relation
Figure 1: Example boosted tree model [12] trained on the well-known _adult_ classification dataset.
Figure 2: Examples of feature attribution reported by LIME and SHAP, as well as both AXp’s (no more AXp’s exist) followed by FFA for the instance \(\mathbf{v}\) shown in Example 1.
ship \(=\) Not-in-family_, \(\text{Sex}=\) Male, _Hours/w \(\leq\) 40_), _the model predicts \(<\)50\(k\) because the sum of the weights in the 3 trees for this instance equals \(-0.4073=(-0.1089-0.2404-0.0580)<0\).
### ML Model Interpretability and Post-Hoc Explanations
Interpretability is generally accepted to be a subjective concept, without a formal definition [39]. One way to measure interpretability is in terms of the succinctness of information provided by an ML model to justify a given prediction. Recent years have witnessed an upsurge in the interest in devising and applying interpretable models in safety-critical applications [48, 58]. An alternative to interpretable models is post-hoc explanation of _black-box_ models, which this paper focuses on.
Numerous methods to compute explanations have been proposed recently [46, 48]. The lion's share of these comprise what is called _model-agnostic_ approaches to explainability [40, 55, 56] of heuristic nature that resort to extensive sampling in the vicinity of an instance being explained in order to "estimate" the behavior of the classifier in this local vicinity of the instance. In this regard, they rely on estimating input data distribution by building on the information about the training data [34]. Depending on the form of explanations model-agnostic approaches offer, they are conventionally classified as _feature selection_ or _feature attribution_ approaches briefly discussed below.
Feature Selection.A feature selection approach identifies subsets of features that are deemed _sufficient_ for a given prediction \(c=\kappa(\mathbf{v})\). As mentioned above, the majority of feature selection approaches are model-agnostic with one prominent example being Anchors [56]. As such, the sufficiency of the selected set of features for a given prediction is determined statistically based on extensive sampling around the instance of interest, by assessing a few measures like _fidelity, precision_, among others. As a result, feature selection explanations given as a set of features \(\omega\subseteq\mathcal{F}\) should be interpreted as the conjunction \(\bigwedge_{i\in\omega}\left(x_{i}=v_{i}\right)\) deemed responsible for prediction \(c=\kappa(\mathbf{v})\), \(\mathbf{v}\in\mathbb{F}\), \(c\in\mathcal{K}\). Due to the statistical nature of these explainers, they are known to suffer from various explanation quality issues [24, 34, 63]. An additional line of work on _formal_ explainability [25, 61] also tackles feature selection while offering guarantees of soundness; these are discussed below.
Feature Attribution.A different view on post-hoc explanations is provided by feature attribution approaches, e.g. LIME [55] and SHAP [40]. Based on random sampling in the neighborhood of the target instance, these approaches attribute responsibility to all model's features by assigning a numeric value \(w_{i}\in\mathbb{R}\) of importance to each feature \(i\in\mathcal{F}\). Given these importance values, the features can then be ranked from most important to least important. As a result, a feature attribution explanation is conventionally provided as a linear form \(\sum_{i\in\mathcal{F}}w_{i}\cdot x_{i}\), which can be also seen as approximating the original black-box explainer \(\kappa\) in the _local_ neighborhood of instance \(\mathbf{v}\in\mathbb{F}\). Among other feature attribution approaches, SHAP [5, 6, 40] is often claimed to stand out as it aims at approximating Shapley values, a powerful concept originating from cooperative games in game theory [60].
Formal Explainability.In this work, we build on formal explainability proposed in earlier work [8, 13, 25, 42, 61]. where explanations are equated with _abductive explanations_ (AXp's). Abductive explanations are _subset-minimal_ sets of features formally proved to suffice to explain an ML prediction given a formal representation of the classifier of interest. Concretely, given an instance \(\mathbf{v}\in\mathbb{F}\) and a prediction \(c=\kappa(\mathbf{v})\), an AXp is a subset-minimal set of features \(\mathcal{X}\subseteq\mathcal{F}\), such that
\[\forall(\mathbf{x}\in\mathbb{F})\cdot\bigwedge_{i\in\mathcal{X}}(x_{i}=v_{i}) \rightarrow(\kappa(\mathbf{x})=c) \tag{1}\]
Abductive explanations are guaranteed to be subset-minimal sets of features proved to satisfy (1). As other feature selection explanations, they answer _why_ a certain prediction was made. An alternate way to explain a model's behavior is to seek an answer _why not_ another prediction was made, or, in other words, _how_ to change the prediction. Explanations answering _why not_ questions are referred to as _contrastive explanations_ (CXp's) [26, 42, 46]. As in prior work, we define a CXp as a subset-minimal set of features that, if allowed to change their values, are _necessary_ to change the prediction of the model. Formally, a CXp for prediction \(c=\kappa(\mathbf{v})\) is a subset-minimal set of features \(\mathcal{Y}\subseteq\mathcal{F}\), such that
\[\exists(\mathbf{x}\in\mathbb{F})\cdot\bigwedge_{i\not\in\mathcal{Y}}(x_{i}=v_{ i})\wedge(\kappa(\mathbf{x})\neq c) \tag{2}\]
Finally, recent work has shown that AXp's and CXp's for a given instance \(\mathbf{v}\in\mathbb{F}\) are related through the _minimal hitting set duality_[26, 54]. The duality implies that each AXp for a prediction \(c=\kappa(\mathbf{v})\)is a _minimal hitting set2_ (MHS) of the set of all CXp's for that prediction, and the other way around: each CXp is an MHS of the set of all AXp's. The explanation enumeration algorithm [26] applied in this paper heavily relies on this duality relation and is inspired by the MARCO algorithm originating from the area of over-constrained systems [36, 37, 53]. A growing body of recent work on formal explanations is represented (but not limited) by [2, 4, 7, 9, 10, 14, 18, 20, 27, 41, 42, 43, 65].
Footnote 2: Given a set of sets \(\mathbb{S}\), a _hitting set_ of \(\mathbb{S}\) is a set \(H\) such that \(\forall S\in\mathbb{S},S\cup H\neq\emptyset\), i.e. \(H\) “hits” every set in \(\mathbb{S}\). A hitting set \(H\) for \(\mathbb{S}\) is _minimal_ if none of its strict subsets is also a hitting set.
**Example 2**.: _In the context of Example 1, feature attribution computed by LIME and SHAP as well as all 2 AXp's are shown in Figure 2. AXp \(\mathcal{X}_{1}\) indicates that specifying Education \(=\) Bachelors and Hours/\(\kappa\leq\) 40 guarantees that any compatible instance is classified as \(<\) 50k independent of the values of other features, e.g. Status and Relationship, since the maximal sum of weights is \(0.0770-0.0200-0.0580=-0.0010<0\) as long as the feature values above are used. Observe that another AXp \(\mathcal{X}_{2}\) for \(\mathbf{v}\) is [Education, Status]. Since both of the two AXp's for \(\mathbf{v}\) consist of two features, it is difficult to judge which one is better without a formal feature importance assessment._
## 3 Why Formal Feature Attribution?
On the one hand, abductive explanations serve as a viable alternative to non-formal feature selection approaches because they (i) guarantee subset-minimality of the selected sets of features and (ii) are computed via formal reasoning over the behavior of the corresponding ML model. Having said that, they suffer from a few issues. First, observe that deciding the validity of (1) requires a formal reasoner to take into account the complete feature space \(\mathbb{F}\), assuming that the features are independent and uniformly distributed [65]. In other words, the reasoner has to check all the combinations of feature values, including those that _never appear in practice_. This makes AXp's being unnecessarily _conservative_ (long), i.e. they may be hard for a human decision maker to interpret. Second, AXp's are not aimed at providing feature attribution. The abundance of various AXp's for a single data instance [25], e.g. see Example 2, exacerbates this issue as it becomes unclear for a user which of the AXp's to use to make an informed decision in a particular situation.
On the other hand, non-formal feature attribution in general is known to be susceptible to out-of-distribution sampling [62, 34] while SHAP is shown to fail to effectively approximate Shapley values [21]. Moreover and quite surprisingly, [21] argued that even the use of exact Shapley values is inadequate as a measure of feature importance. Our results below confirm that both LIME and SHAP often fail to grasp the real feature attribution in a number of practical scenarios.
To address the above limitations, we propose the concept of _formal feature attribution_ (FFA) as defined next. Let us denote the set of all formal abductive explanations for a prediction \(c=\kappa(\mathbf{v})\) by \(\mathbb{A}_{\kappa}(\mathbf{v},c)\). Then formal feature attribution of a feature \(i\in\mathcal{F}\) can be defined as the proportion of abductive explanations where it occurs. More formally,
**Definition 1**:: **(FFA).** The _formal feature attribution_\(\text{ffa}_{\kappa}(i,(\mathbf{v},c))\) of a feature \(i\in\mathcal{F}\) to an instance \((\mathbf{v},c)\) for machine learning model \(\kappa\) is
\[\text{ffa}_{\kappa}(i,(\mathbf{v},c))=\frac{|\{\mathcal{X}\mid\mathcal{X}\in \mathbb{A}_{\kappa}(\mathbf{v},c),i\in\mathcal{X})|}}{|\mathbb{A}_{\kappa}( \mathbf{v},c)|} \tag{3}\]
Formal feature attribution has some nice properties. First, it has a strict and formal definition, i.e. we can, assuming we are able to compute the complete set of AXp's for an instance, exactly define it for all features \(i\in\mathcal{F}\). Second, it is fairly easy to explain to a user of the classification system, even if they are non-expert. Namely, it is the percentage of (formal abductive) explanations that make use of a particular feature \(i\). Third, as we shall see later, even though we may not be able to compute all AXp's exhaustively, we can still get good approximations fast.
**Example 3**.: _Recall Example 2. As there are 2 AXp's for instance \(\mathbf{v}\), the prediction can be attributed to the 3 features with non-zero FFA shown in Figure 1(d). Also, observe how both LIME and SHAP (see Figure 1(a) and Figure 1(b)) assign non-zero attribution to the feature Relationship, which is in fact irrelevant for the prediction, but overlook the highest importance of feature Education._
One criticism of the above definition is that it does not take into account the length of explanations where the feature arises. Arguably if a feature arises in many AXp's of size 2, it should be considered more important than a feature which arises in the same number of AXp's but where each is of size 10. An alternate definition, which tries to take this into account, is the weighted formal feature attribution (WFFA), i.e. the _average_ proportion of AXp's that include feature \(i\in\mathcal{F}\). Formally,
**Definition 2**: **(WFFA).** The _weighted formal feature attribution_\(\text{wfffa}_{\kappa}(i,(\mathbf{v},c))\) of a feature \(i\in\mathcal{F}\) to an instance \((\mathbf{v},c)\) for machine learning model \(\kappa\) is
\[\text{wfffa}_{\kappa}(i,(\mathbf{v},c))=\frac{\sum_{\mathcal{X}\in\mathbb{A}_{ \kappa}(\mathbf{v},c),i\in\mathcal{X}}|\mathcal{X}|^{-1}}{|\mathbb{A}_{\kappa} (\mathbf{v},c)|} \tag{4}\]
Note that these attribution values are not on the same scale although they are convertible:
\[\sum_{i\in\mathcal{F}}\text{fffa}_{\kappa}(i,(\mathbf{v},c))=\frac{\sum_{ \mathcal{X}\in\mathbb{A}_{\kappa}(\mathbf{v},c)}|\mathcal{X}|}{|\mathbb{A}_{ \kappa}(\mathbf{v},c)|}\times\sum_{i\in\mathcal{F}}\text{wffa}_{\kappa}(i,( \mathbf{v},c)).\]
FFA can be related to the problem of _feature relevancy_[22], where a feature is said to be _relevant_ if it belongs to at least one AXp. Indeed, feature \(i\in\mathcal{F}\) is relevant for prediction \(c=\kappa(\mathbf{v})\) if and only if \(\text{ffa}_{\kappa}(i,(\mathbf{v},c))>0\). As a result, the following claim can be made.
**Proposition 1**: _Given a feature \(i\in\mathcal{F}\) and a prediction \(c=\kappa(\mathbf{v})\), deciding whether \(\text{fffa}_{\kappa}(i,(\mathbf{v},c))>\omega\), \(\omega\in(0,1]\), is at least as hard as deciding whether feature \(i\) is relevant for the prediction._
The above result indicates that computing exact FFA values may be expensive in practice. For example and in light of [22], one can conclude that the decision version of the problem is \(\Sigma_{2}^{\text{p}}\)-hard in the case of DNF classifiers.
Similarly and using the relation between FFA and feature relevancy above, we can note that the decision version of the problem is in \(\Sigma_{2}^{\text{p}}\) as long as deciding the validity of (1) is in NP, which in general is the case (unless the problem is simpler, e.g. for decision trees [28]). Namely, the following result is a simple consequence of the membership result for the feature relevance problem [22].
**Proposition 2**: _Deciding whether \(\text{fffa}_{\kappa}(i,(\mathbf{v},c))>\omega\), \(\omega\in(0,1]\), is in \(\Sigma_{2}^{\text{p}}\) if deciding (1) is in NP._
## 4 Approximating Formal Feature Attribution
As the previous section argues and as our experimental results confirm, it may be challenging in practice to compute exact FFA values due to the general complexity of the problem. Although some ML models admit efficient formal encodings and reasoning procedures, effective principal methods for FFA approximation seem necessary. This section proposes one such method.
Normally, formal explanation enumeration is done by exploiting the MHS duality between AXp's and CXp's and the use of MARCO-like [37] algorithms aiming at efficient exploration of minimal hitting sets of either AXp's or CXp's [26, 36, 37, 53]. Depending on the target type of formal explanation, MARCO exhaustively enumerates all such explanations one by one, each time extracting a candidate minimal hitting set and checking if it is a desired explanation. If it is then it is recorded and blocked such that this candidate is never repeated again. Otherwise, a dual explanation is extracted from the subset of features complementary to the candidate [25], gets recorded and blocked so that it is hit by each future candidate. The procedure proceeds until no more hitting sets of the set of dual explanations can be extracted, which signifies that all target explanations are enumerated. Observe that while doing so, MARCO also enumerates all the dual explanations as a kind of "side effect".
One of the properties of MARCO used in our approximation approach is that it is an _anytime_ algorithm, i.e. we can run it for as long as we need to get a sufficient number of explanations. This means we can stop it by using a timeout or upon collecting a certain number of explanations.
The main insight of FFA approximation is as follows. Recall that to compute FFA, we are interested in AXp enumeration. Although intuitively this suggests the use of MARCO targeting AXp's, for the sake of fast and high-quality FFA approximation, we propose to target CXp enumeration with AXp's as dual explanations computed "unintentionally". The reason for this is twofold: (i) we need to get a good FFA approximation as fast as we can and (ii) according to our practical observations, MARCO needs to amass a large number of dual explanations before it can start producing target explanations. This is because the hitting set enumerator is initially "blind" and knows nothing about the featuresit should pay attention to -- it uncovers this information gradually by collecting dual explanations to hit. This way a large number of dual explanations can quickly be enumerated during this initial phase of grasping the search space, essentially "for free". Our experimental results demonstrate the effectiveness of this strategy in terms of monotone convergence of approximate FFA to the exact FFA with the increase of the time limit. A high-level view of the version of MARCO used in our approach targeting CXp enumeration and amassing AXp's as dual explanations is shown in Algorithm 1.
```
1:procedureXpEnum(\(\kappa\), \(\mathbf{v}\), \(c\))
2:\((\mathbb{A},\mathbb{C})\leftarrow(\emptyset,\emptyset)\)\(\triangleright\)Sets of AXp's and CXp's to collect.
3:while true do
4:\(\mathcal{Y}\leftarrow\textsc{MinimalHS}(\mathbb{A},\mathbb{C})\)\(\triangleright\)Get a new MHS of \(\mathbb{A}\) subject to \(\mathbb{C}\).
5:if\(\mathcal{Y}=\bot\)thenbreak\(\triangleright\)Stop if none is computed.
6:if\(\exists(\mathbf{x}\in\mathbb{F}).\bigwedge_{i\not\in\mathcal{Y}}(x_{i}=v_{i}) \land(\kappa(\mathbf{x})\neq c)\)then\(\triangleright\)Check CXp condition (2) for \(\mathcal{Y}\).
7:\(\mathbb{C}\leftarrow\mathbb{C}\cup\{\mathcal{Y}\}\)\(\triangleright\)\(\mathcal{Y}\) appears to be a CXp.
8:else\(\triangleright\)There must be a missing AXp\(\mathcal{X}\subseteq\mathcal{F}\setminus\mathcal{Y}\).
9:\(\mathcal{X}\leftarrow\textsc{ExtractAXp}(\mathcal{F}\setminus\mathcal{Y}, \kappa,\mathbf{v},c)\)\(\triangleright\)Get AXp\(\mathcal{X}\) by iteratively checking (1) [25].
10:\(\mathbb{A}\leftarrow\mathbb{A}\cup\{\mathcal{X}\}\)\(\triangleright\)Collect new AXp\(\mathcal{X}\). return\(\mathbb{A}\), \(\mathbb{C}\)
```
**Algorithm 1** MARCO-like Anytime Explanation Enumeration
## 5 Experimental Evidence
This section assesses the formal feature attribution for gradient boosted trees (BT) [12] on multiple widely used images and tabular datasets, and compares FFA with LIME and SHAP. In addition, it also demonstrates the use of FFA in a real-world scenario of Just-in-Time (JIT) defect prediction, which assists teams in prioritizing their limited resources on high-risk commits or pull requests [52].
Setup and Prototype Implementation.All experiments were performed on an Intel Xeon 8260 CPU running Ubuntu 20.04.2 LTS, with the memory limit of 8 GByte. A prototype of the approach implementing Algorithm 1 and thus producing FFA was developed as a set of Python scripts and builds on [27]. As the FFA and WFFA values turn out to be almost identical (subject to normalization) in our experiments, here we report only FFA. WFFA results can be found in supplementary material.
Datasets and Machine Learning Models.The well-known MNIST dataset [15; 50] of handwritten digits 0-9 is considered, with two concrete binary classification tasks created: 1 vs. 3 and 1 vs. 7. We also consider PneumoniaMNIST [67], a binary classification dataset to distinguish X-ray images of pneumonia from normal cases. To demonstrate extraction of _exact_ FFA values for the above datasets, we also examine their downscaled versions, i.e. reduced from 28 \(\times\) 28 \(\times\) 1 to 10 \(\times\) 1. We also consider 11 tabular datasets often applied in the area of ML explainability and fairness [3; 16; 17; 19; 49; 59]. All the considered datasets are randomly split into 80% training and and 20% test data. For images, 15 test instances are randomly selected in each test set for explanation while all tabular test instances are explained. For all datasets, gradient boosted trees (BTs) are trained by XGBoost [12], where each BT consists of 25 trees of depth 3 per class.3 Finally, we show the use of FFA on 2 JIT defect prediction datasets [52], with 500 instances per dataset chosen for analysis.
Footnote 3: Test accuracy for MNIST digits is 0.99, while it is 0.83 for PneumoniaMNIST. This holds both for the 28 \(\times\) 28 and 10 \(\times\) 10 versions of the datasets. The average accuracy across the 11 selected tabular datasets is 0.80.
### Formal Feature Attribution
In this section, we restrict ourselves to examples where we can compute the _exact_ FFA values for explanations by computing all AXp's. To compare with LIME and SHAP, we take their solutions, replace negative attributions by the positive counterpart (in a sense taking the absolute value) and then normalize the values into \([0,1]\). We then compare these approaches with the computed FFA values, which are also in \([0,1]\). The _error_ is measured as Manhattan distance, i.e. the sum of absolute differences across all features. We also compare feature rankings according to the competitors (again using absolute values for LIME and SHAP) using Kendall's Tau [31] and rank-biased overlap (RBO) [66]metrics.4 Kendall's Tau and RBO are measured on a scale \([-1,1]\) and \([0,1]\), respectively. A higher value in both metrics indicates better agreement or closeness between a ranking and FFA.
Footnote 4: Kendall’s Tau is a correlation coefficient assessing the ordinal association between two ranked lists, offering a measure of similarity in the order of values; on the other hand, RBO is a metric that measures the similarity between two ranked lists, taking into account both the order and the depth of the overlap.
Tabular Data.Figure 3 exemplifies a comparison of FFA, LIME and SHAP on an instance of the Compas dataset [3]. While FFA and LIME agree on the most important feature, "Asian", SHAP gives it very little weight. Neither LIME nor SHAP agree with FFA, though there is clearly some similarity.
Table 1 details the comparison conducted on 11 tabular datasets, including _adult_, _compas_, and _recidivism_ datasets commonly used in XAI. For each dataset, we calculate the metric for each individual instance and then average the outcomes to obtain the final result for that dataset. As can be observed, the errors of LIME's feature attribution across these datasets span from 1.39 to 5.13. SHAP demonstrates similar errors within a range \([1.40,4.76]\). LIME and SHAP also exhibit comparable performance in relation to the two ranking comparison metrics. The values of Kendall's Tau for LIME (resp. SHAP) are between \(-0.36\) and \(0.22\) (resp. \(-0.39\) and \(0.27\)). Regarding the RBO values, LIME exhibits values between 0.39 and 0.68, whereas SHAP demonstrates values ranging from 0.44 to 0.67. Overall, as Table 1 indicates, both LIME and SHAP fail to get close enough to FFA.
10 \(\times\) 10 Digits.We now compare the results on 10 \(\times\) 10 downscaled MNIST digits and PneumoniaMNIST images, where it is feasible to compute all AXp's. Table 2 compares LIME's, SHAP's feature attribution and approximate FFA. Here, we run AXp enumeration for a number of seconds, which is denoted as FFA\({}_{*}\), \(*\in\mathbb{R}^{+}\). The runtime required for each image by LIME and SHAP is less than one second. The results show that the errors of our approximation are small, even after 10 seconds it beats both LIME and SHAP, and decreases as we generate more AXp's. The results for the orderings show again that after 10 seconds, FFA\({}_{*}\) ordering gets closer to the exact FFA than both LIME and SHAP. Observe how LIME is particularly far away from the _exact_ FFA ordering.
Summary._These results make us confident that we can get useful approximations to the exact FFA without exhaustively computing all AXp's while feature attribution determined by LIME and SHAP is quite erroneous and fails to provide a human-decision maker with useful insights, despite being fast.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline
**Dataset** & **adult** & **appendicitis australian** & **cars** & **compas** & **heart-statilog** & **hungarian** & **lending** & **liver-disorder** & **pima** & **redictivism** \\ (\(|\mathcal{F}|\)) & (12) & (7) & (14) & (8) & (11) & (13) & (13) & (9) & (6) & (8) & (15) \\ \hline
**Approach** & & & & & & & & & & & & \\ \hline \hline \multirow{2}{*}{LIME} & 4.48 & 2.25 & 5.13 & 1.53 & 3.28 & 4.48 & 4.56 & 1.39 & 2.39 & 2.72 & 4.73 \\ & 4.47 & 2.01 & 4.49 & 1.40 & 2.67 & 3.71 & 4.14 & 1.44 & 2.28 & 3.00 & 4.76 \\ \hline \multicolumn{10}{c}{**Kendall’s Tau**} \\ \hline LIME & 0.07 & 0.11 & 0.22 & -0.11 & -0.11 & 0.17 & 0.04 & -0.36 & -0.22 & 0.17 & 0.05 \\ & SHAP & 0.03 & 0.12 & 0.27 & -0.10 & -0.10 & 0.17 & 0.20 & -0.39 & -0.21 & 0.07 & 0.12 \\ \multicolumn{10}{c}{**RBO**} \\ \hline LIME & 0.54 & 0.66 & 0.49 & 0.63 & 0.55 & 0.56 & 0.41 & 0.59 & 0.66 & 0.68 & 0.39 \\ SHAP & 0.49 & 0.67 & 0.55 & 0.66 & 0.59 & 0.52 & 0.49 & 0.61 & 0.67 & 0.63 & 0.44 \\ \hline \hline \end{tabular}
\end{table}
Table 1: LIME and SHAP versus FFA on tabular data.
Figure 3: Explanations for an instance of Compas \(\mathbf{v}=\{\text{\#Priors}=3,\text{Score\_factor}=1,\text{Age\_ Above\_ FourtyFive}=0,\text{Age\_ Below\_ TwentyFive}=1,\text{African\_ American}=1,\text{Asian}=0,\text{Hispanic}=0,\text{Native\_ American}=0,\text{Other}=0,\text{Female}=0,\text{Misdemeanor}=1\}\) predicted as Two\(\_\)yr\(\_\)Recidivism \(=\) true.
### Approximating Formal Feature Attribution
Since the problem of formal feature attribution "lives" in \(\Sigma_{2}^{\mathrm{P}}\), it is not surprising that computing FFA may be challenging in practice. Table 2 suggests that our approach gets good FFA approximations even if we only collect AXp's for a short time. Here we compare the fidelity of our approach versus the approximate FFA computed after 2 hours (7200s). Figure 4, 5, and 6 depict feature attributions generated by LIME, SHAP and FFA\({}_{*}\) for the three selected 28 \(\times\) 28 images. The comparison between LIME, SHAP, and the approximate FFA computation is detailed in Table 3. The LIME and SHAP processing time for each image is less than one second. The average findings detailed in Table 3 are consistent with those shown in Table 2. Namely, FFA approximation yields better errors, Kendall's Tau and RBO values, outperforming both LIME, and SHAP after 10 seconds. Furthermore, the results demonstrate that after 10 seconds our approach places feature attributions closer to FFA\({}_{7200}\) compared to both LIME and SHAP hinting on the features that are truly relevant for the prediction.
defect prediction has often been considered a black-box, lacking explainability for practitioners. To tackle this challenge, our proposed approach to generating FFA can be employed, as model-agnostic approaches cannot guarantee to provide accurate feature attribution (see above). We use logistic regression models of [52] based on large-scale open-source Openstack and Qt datasets provided by [45] commonly used for JIT defect prediction [52]. Monotonicity of logistic regression enables us to enumerate explanations using the approach of [44] and so to extract _exact FFA_ for each instance _within a second_. Table 4 details the comparison of FFA, LIME and SHAP in terms of the three considered metrics. As with the outcomes presented in Table 1, Table 2, and Table 3, neither LIME nor SHAP align with formal feature attribution, though there are some similarities between them.
## 6 Limitations
Despite the rigorous guarantees provided by formal feature attribution and high-quality of the result explanations, the following limitations can be identified. First, our approach relies on formal reasoning and thus requires an ML model of interest to admit a representation in some fragments of first-order logic, and the corresponding reasoner to deal with it [42]. Second, the problem complexity impedes immediate and widespread use of FFA and signifies the need to develop effective methods of FFA approximation. Finally, though our experimental evidence suggests that FFA approximations quickly converge to the exact values of FFA, whether or not this holds in general remains an open question.
## 7 Conclusions
Most approaches to XAI are heuristic methods that are susceptible to unsoundness and out-of-distribution sampling. Formal approaches to XAI have so far concentrated on the problem of feature selection, detecting which features are important for justifying a classification decision, and not on feature attribution, where we can understand the weight of a feature in making such a decision. In this paper we define the first formal approach to feature attribution (FFA) we are aware of, using the proportion of abductive explanations in which a feature occurs to weight its importance. We show that we can compute FFA exactly for many classification problems, and when we cannot we can compute effective approximations. Existing heuristic approaches to feature attribution do not agree with FFA. Sometimes they markedly differ, for example, assigning no weight to a feature that appears in (a large number of) explanations, or assigning (large) non-zero weight to a feature that is irrelevant for the prediction. Overall, the paper argues that if we agree that FFA is a correct measure of feature attribution then we need to investigate methods that compute good FFA approximations quickly.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Approach**} & \multicolumn{3}{c}{**openstack \((|\mathcal{F}|=13)\)**} & \multicolumn{3}{c}{**qt \((|\mathcal{F}|=16)\)**} \\ \cline{2-7} & **Error** & **Kendall’s Tau** & **RBO** & **Error** & **Kendall’s Tau** & **RBO** \\ \hline LIME & 4.84 & 0.05 & 0.55 & 5.63 & -0.08 & 0.45 \\ SHAP & 5.08 & 0.00 & 0.53 & 5.22 & -0.13 & 0.44 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Just-in-Time Defect Prediction comparison of FFA versus LIME and SHAP.
Figure 5: 28 \(\times\) 28 MNIST 1 vs. 7. The prediction is digit 7.
Figure 6: 28 \(\times\) 28 PneumoniaMNIST. The prediction is normal.
## References
* [1] ACM. Fathers of the deep learning revolution receive ACM A.M. Turing award. [http://tiny.cc/9plzpz](http://tiny.cc/9plzpz), 2018.
* [2] L. Amgoud and J. Ben-Naim. Axiomatic foundations of explainability. In L. D. Raedt, editor, _IJCAI_, pages 636-642, 2022.
* [3] J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine bias. [http://tiny.cc/dd7mjz](http://tiny.cc/dd7mjz), 2016.
* [4] M. Arenas, D. Baez, P. Barcelo, J. Perez, and B. Subercaseaux. Foundations of symbolic languages for model interpretability. In _NeurIPS_, 2021.
* [5] M. Arenas, P. Barcelo, L. E. Bertossi, and M. Monet. The tractability of SHAP-score-based explanations for classification over deterministic and decomposable Boolean circuits. In _AAAI_, pages 6670-6678. AAAI Press, 2021.
* [6] M. Arenas, P. Barcelo, L. E. Bertossi, and M. Monet. On the complexity of SHAP-score-based explanations: Tractability via knowledge compilation and non-approximability results. _CoRR_, abs/2104.08015, 2021.
* [7] M. Arenas, P. Barcelo, M. A. R. Orth, and B. Subercaseaux. On computing probabilistic explanations for decision trees. In _NeurIPS_, 2022.
* [8] G. Audemard, F. Koriche, and P. Marquis. On tractable XAI queries based on compiled representations. In _KR_, pages 838-849, 2020.
* [9] G. Blanc, J. Lange, and L. Tan. Provably efficient, succinct, and precise explanations. In _NeurIPS_, 2021.
* [10] R. Boumazouza, F. C. Alili, B. Mazure, and K. Tabia. ASTERYX: A model-Agnostic SaT-basEd appRoach for SYmbolic and score-based eXplanations. In _CIKM_, pages 120-129, 2021.
* [11] L. Breiman. Random forests. _Mach. Learn._, 45(1):5-32, 2001.
* [12] T. Chen and C. Guestrin. XGBoost: A scalable tree boosting system. In _KDD_, pages 785-794, 2016.
* [13] A. Darwiche and A. Hirth. On the reasons behind decisions. In _ECAI_, pages 712-720, 2020.
* [14] A. Darwiche and P. Marquis. On quantifying literals in Boolean logic and its applications to explainable AI. _J. Artif. Intell. Res._, 72:285-328, 2021.
* [15] L. Deng. The MNIST database of handwritten digit images for machine learning research. _IEEE Signal Processing Magazine_, 29(6):141-142, 2012.
* [16] D. Dua and C. Graff. UCI machine learning repository, 2017. [http://archive.ics.uci.edu/ml](http://archive.ics.uci.edu/ml).
* [17] FairML. Auditing black-box predictive models. [http://tiny.cc/6e7mjz](http://tiny.cc/6e7mjz), 2016.
* [18] J. Ferreira, M. de Sousa Ribeiro, R. Goncalves, and J. Leite. Looking inside the black-box: Logic-based explanations for neural networks. In _KR_, page 432-442, 2022.
* [19] S. Friedler, C. Scheidegger, and S. Venkatasubramanian. On algorithmic fairness, discrimination and disparate impact. [http://fairness.haverford.edu/](http://fairness.haverford.edu/), 2015.
* [20] N. Gorji and S. Rubin. Sufficient reasons for classifier decisions in the presence of domain constraints. In _AAAI_, pages 5660-5667, 2022.
* [21] X. Huang and J. Marques-Silva. The inadequacy of Shapley values for explainability. _CoRR_, abs/2302.08160, 2023.
* [22] X. Huang, M. C. Cooper, A. Morgado, J. Planes, and J. Marques-Silva. Feature necessity & relevancy in ML classifier explanations. In _TACAS (1)_, pages 167-186, 2023.
* Hyafil and Rivest [1976] L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. _Inf. Process. Lett._, 5(1):15-17, 1976. URL [https://doi.org/10.1016/0020-0190](https://doi.org/10.1016/0020-0190)(76)90095-8.
* Ignatiev [2020] A. Ignatiev. Towards trustable explainable AI. In _IJCAI_, pages 5154-5158, 2020.
* Ignatiev et al. [2019] A. Ignatiev, N. Narodytska, and J. Marques-Silva. Abduction-based explanations for machine learning models. In _AAAI_, pages 1511-1519, 2019.
* Ignatiev et al. [2020] A. Ignatiev, N. Narodytska, N. Asher, and J. Marques-Silva. From contrastive to abductive explanations and back again. In _AI*IA_, pages 335-355, 2020.
* Ignatiev et al. [2022] A. Ignatiev, Y. Izza, P. J. Stuckey, and J. Marques-Silva. Using MaxSAT for efficient explanations of tree ensembles. In _AAAI_, pages 3776-3785, 2022.
* Izza et al. [2022] Y. Izza, A. Ignatiev, and J. Marques-Silva. On tackling explanation redundancy in decision trees. _J. Artif. Intell. Res._, 75:261-321, 2022. URL [https://doi.org/10.1613/jair.1.13575](https://doi.org/10.1613/jair.1.13575).
* Jordan and Mitchell [2015] M. I. Jordan and T. M. Mitchell. Machine learning: Trends, perspectives, and prospects. _Science_, 349(6245):255-260, 2015.
* Kamei et al. [2013] Y. Kamei, E. Shihab, B. Adams, A. E. Hassan, A. Mockus, A. Sinha, and N. Ubayashi. A Large-Scale Empirical Study of Just-In-Time Quality Assurance. _IEEE Transactions on Software Engineering (TSE)_, 39(6):757-773, 2013.
* Kendall [1938] M. G. Kendall. A new measure of rank correlation. _Biometrika_, 30(1/2):81-93, 1938.
* Kim et al. [2007] S. Kim, T. Zimmermann, E. J. Whitehead Jr, and A. Zeller. Predicting Faults from Cached History. In _ICSE_, pages 489-498, 2007.
* Kohavi [1996] R. Kohavi. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In _KDD_, pages 202-207, 1996.
* Lakkaraju and Bastani [2020] H. Lakkaraju and O. Bastani. "How do I fool you?": Manipulating user trust via misleading black box explanations. In _AIES_, pages 79-85, 2020.
* LeCun et al. [2015] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. _Nature_, 521(7553):436, 2015.
* Liffiton and Malik [2013] M. H. Liffiton and A. Malik. Enumerating infeasibility: Finding multiple MUSes quickly. In _CPAIOR_, pages 160-175, 2013.
* Liffiton et al. [2016] M. H. Liffiton, A. Previti, A. Malik, and J. Marques-Silva. Fast, flexible MUS enumeration. _Constraints An Int. J._, 21(2):223-250, 2016.
* Lin et al. [2021] D. Lin, C. Tantithamthavorn, and A. E. Hassan. The impact of data merging on the interpretation of cross-project just-in-time defect models. _IEEE Transactions on Software Engineering_, 2021.
* Lipton [2018] Z. C. Lipton. The mythos of model interpretability. _Commun. ACM_, 61(10):36-43, 2018.
* Lundberg and Lee [2017] S. M. Lundberg and S. Lee. A unified approach to interpreting model predictions. In _NeurIPS_, pages 4765-4774, 2017.
* Malfa et al. [2021] E. L. Malfa, R. Michelmore, A. M. Zbrzezny, N. Paoletti, and M. Kwiatkowska. On guaranteed optimal robust explanations for NLP models. In _IJCAI_, pages 2658-2665, 2021.
* Marques-Silva and Ignatiev [2022] J. Marques-Silva and A. Ignatiev. Delivering trustworthy AI through formal XAI. In _AAAI_, pages 12342-12350. AAAI Press, 2022.
* Marques-Silva et al. [2020] J. Marques-Silva, T. Gerspacher, M. C. Cooper, A. Ignatiev, and N. Narodytska. Explaining naive Bayes and other linear classifiers with polynomial time and delay. In _NeurIPS_, 2020.
* Marques-Silva et al. [2021] J. Marques-Silva, T. Gerspacher, M. C. Cooper, A. Ignatiev, and N. Narodytska. Explanations for monotonic classifiers. In _ICML_, pages 7469-7479, 2021.
* McIntosh and Kamei [2017] S. McIntosh and Y. Kamei. Are fix-inducing changes a moving target? A longitudinal case study of Just-in-Time defect prediction. _IEEE Transactions on Software Engineering (TSE)_, pages 412-428, 2017.
* Miller [2019] T. Miller. Explanation in artificial intelligence: Insights from the social sciences. _Artif. Intell._, 267:1-38, 2019.
* Mnih et al. [2015] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. _Nature_, 518(7540):529, 2015.
* Molnar [2020] C. Molnar. _Interpretable Machine Learning_. Leanpub, 2020. [http://tiny.cc/6c76tz](http://tiny.cc/6c76tz).
* Olson et al. [2017] R. S. Olson, W. G. L. Cava, P. Orzechowski, R. J. Urbanowicz, and J. H. Moore. PMLB: a large benchmark suite for machine learning evaluation and comparison. _BioData Min._, 10(1):36:1-36:13, 2017.
* Paszke et al. [2019] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Z. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In _NeurIPS_, pages 8024-8035, 2019.
* Pornprasit and Tantithamthavorn [2021] C. Pornprasit and C. Tantithamthavorn. JITLine: A Simpler, Better, Faster, Finer-grained Just-In-Time Defect Prediction. In _MSR_, pages 369-379, 2021.
* Pornprasit et al. [2021] C. Pornprasit, C. Tantithamthavorn, J. Jiarpakdee, M. Fu, and P. Thongtanunam. PyExplainer: Explaining the predictions of Just-In-Time defect models. In _ASE_, pages 407-418, 2021.
* Previti and Marques-Silva [2013] A. Previti and J. Marques-Silva. Partial MUS enumeration. In _AAAI_. AAAI Press, 2013.
* Reiter [1987] R. Reiter. A theory of diagnosis from first principles. _Artif. Intell._, 32(1):57-95, 1987.
* Ribeiro et al. [2016] M. T. Ribeiro, S. Singh, and C. Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In _KDD_, pages 1135-1144, 2016.
* Ribeiro et al. [2018] M. T. Ribeiro, S. Singh, and C. Guestrin. Anchors: High-precision model-agnostic explanations. In _AAAI_, pages 1527-1535, 2018.
* Rivest [1987] R. L. Rivest. Learning decision lists. _Mach. Learn._, 2(3):229-246, 1987.
* Rudin [2019] C. Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. _Nat. Mach. Intell._, 1(5):206-215, 2019.
* Schmidt and Witte [1988] P. Schmidt and A. D. Witte. Predicting recidivism in North Carolina, 1978 and 1980. _InterUniversity Consortium for Political and Social Research_, 1988.
* Shapley [1953] L. S. Shapley. A value of \(n\)-person games. _Contributions to the Theory of Games_, 2(28):307-317, 1953.
* Shih et al. [2018] A. Shih, A. Choi, and A. Darwiche. A symbolic approach to explaining Bayesian network classifiers. In _IJCAI_, pages 5103-5111, 2018.
* Slack et al. [2020] D. Slack, S. Hilgard, E. Jia, S. Singh, and H. Lakkaraju. Fooling LIME and SHAP: adversarial attacks on post hoc explanation methods. In _AIES_, pages 180-186, 2020.
* Slack et al. [2021] D. Slack, A. Hilgard, S. Singh, and H. Lakkaraju. Reliable post hoc explanations: Modeling uncertainty in explainability. In _NeurIPS_, pages 9391-9404, 2021.
* Szegedy et al. [2014] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In _ICLR (Poster)_, 2014.
* Waldchen et al. [2021] S. Waldchen, J. MacDonald, S. Hauch, and G. Kutyniok. The computational complexity of understanding binary classifier decisions. _J. Artif. Intell. Res._, 70:351-387, 2021.
* Webber et al. [2010] W. Webber, A. Moffat, and J. Zobel. A similarity measure for indefinite rankings. _ACM Transactions on Information Systems (TOIS)_, 28(4):1-38, 2010.
* Yang et al. [2023] J. Yang, R. Shi, D. Wei, Z. Liu, L. Zhao, B. Ke, H. Pfister, and B. Ni. MedMNIST v2-a large-scale lightweight benchmark for 2D and 3D biomedical image classification. _Scientific Data_, 10(1):41, 2023. | ## Review
### Summary
This paper introduces a novel approach called Formal Feature Attribution (FFA) for explaining predictions of black box models, addressing limitations of existing model-agnostic methods like SHAP and LIME. FFA defines feature attribution as the proportion of minimal explanations in which a feature occurs, leveraging abductive explanations. Although the authors propose an approximation algorithm to compute FFA efficiently, they acknowledge the computational challenges of exact calculations. The empirical validation of FFA is presented through experiments on various datasets, showcasing its potential in the field of Explainable AI.
### Strengths
- The paper is well-written and easy to understand, making it a pleasure to read.
- FFA is rooted in a simple concept, with clear motivations and sensible algorithm design.
- Good empirical performance was demonstrated on various datasets.
- The authors clearly discuss essential limitations and provide a well-structured limitations section.
- The related work is well-referenced and provides a good context for the proposed method.
### Weaknesses
- The rationale for claiming FFA as the 'real' feature attribution is not well-articulated.
- Important theoretical aspects, such as axioms governing FFA, are missing or unclear.
- The experimental results are limited, particularly in the context of comparing FFA with state-of-the-art methods.
- The paper lacks a formal proof for some propositions and does not provide guarantees for the approximation method.
- Some definitions and annotations in the paper are not adequately clarified, making it hard to follow.
### Questions
- What is the motivation behind claiming that FFA provides a better feature attribution than LIME and SHAP?
- How does FFA address the issues of out-of-distribution sampling effectively?
- Could the averaging of AXps lead to loss of important information regarding feature interactions?
- What axioms do FFA explanations adhere to, and how do they compare with those of LIME and SHAP?
### Soundness
**Score:** 2
**Description:** 2 = fair. The theoretical foundations are somewhat lacking, with key aspects such as approximation guarantees and axiomatic analysis not fully developed.
### Presentation
**Score:** 3
**Description:** 3 = good. While the writing is generally clear and organized, some definitions and explanations require better clarity for full comprehension.
### Contribution
**Score:** 2
**Description:** 2 = fair. The paper presents a novel approach, but significant gaps exist in theoretical grounding and empirical evaluation, limiting its impact.
### Rating
**Score:** 5
**Description:** 5 = borderline accept: The paper has technical merit, but due to limited evaluation and clarity issues, it does not meet the standards for acceptance at this time.
### Paper Decision
**Decision:** Reject
**Reasons:** While the paper presents an interesting approach to feature attribution with potential contributions to Explainable AI, the lack of rigorous theoretical justification, insufficient empirical validation, and unclear motivations for its claims hinder its acceptance. The reviewers unanimously felt that the ideas need further development to achieve publication standards.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning
for Vision Tasks
Sara Babakniya
Computer Science
University of Southern California
Los Angeles, CA
[email protected]
&Zalan Fabian
Electrical and Computer Engineering
University of Southern California
Los Angeles, CA
[email protected]
Chaoyang He
FedML
Sunnyvale, CA
[email protected]
&Mahdi Soltanolkotabi
Electrical and Computer Engineering
University of Southern California
Los Angeles, CA
[email protected]
&Salman Avestimehr
Electrical and Computer Engineering
University of Southern California
Los Angeles, CA
[email protected]
###### Abstract
Deep learning models often suffer from forgetting previously learned information when trained on new data. This problem is exacerbated in federated learning (FL), where the data is distributed and can change independently for each user. Many solutions are proposed to resolve this catastrophic forgetting in a centralized setting. However, they do not apply directly to FL because of its unique complexities, such as privacy concerns and resource limitations. To overcome these challenges, this paper presents a framework for **federated class incremental learning** that utilizes a generative model to synthesize samples from past distributions. This data can be later exploited alongside the training data to mitigate catastrophic forgetting. To preserve privacy, the generative model is trained on the server using data-free methods at the end of each task without requesting data from clients. Moreover, our solution does not demand the users to store old data or models, which gives them the freedom to join/leave the training at any time. Additionally, we introduce SuperImageNet, a new regrouping of the ImageNet dataset specifically tailored for federated continual learning. We demonstrate significant improvements compared to existing baselines through extensive experiments on multiple datasets.
## 1 Introduction
Federated learning (FL) [40, 29] is a decentralized machine learning technique that enables privacy-preserving collaborative learning. In FL, multiple users (clients) train a common (global) model in coordination with a server without sharing personal data. In recent years, FL has attracted tremendous attention in both research and industry and has been successfully employed in various fields, such as autonomous driving [17], next-word prediction [21], health care [13], and many more.
Despite its popularity, deploying FL in practice requires addressing critical challenges, such as resource limitation and statistical and system heterogeneity [27, 33]. While tackling these challenges is an essential step towards practical and efficient FL, there are still common assumptions in most FL frameworks that are too restrictive in realistic scenarios.
In particular, one of the most common assumptions is that clients' local data distribution is fixed and does not change over time. However, in real-world applications [49], clients' data constantly evolve due to changes in the environment, trends, or new interests. For example, [6] presents the real-world data of an online shop, suggesting interest in items shifts through seasons. Another example arises in healthcare, where a model trained on old diseases should be able to generalize to new diseases [58]. In such scenarios (Figure 1), the model must rapidly adapt to the incoming data while preserving performance on past data distributions to avoid catastrophic forgetting [28; 39].
In the centralized setting, such problems have been explored in continual learning [48; 34] (also called lifelong learning [3] or incremental learning [9; 7] based on the initial settings and assumptions). In recent years, various algorithms have been proposed in Continual Learning (CL) to tackle catastrophic forgetting from different angles and can achieve promising performance in different scenarios.
Despite all the significant progress, most CL methods are not directly applicable to the federated setting due to inherent differences (Table 1) between the two settings. For instance, experience replay [47] is a popular approach, where a portion of past data points is saved to maintain some representation of previous distributions throughout the training. However, deploying experience replay in FL has resource and privacy limitations. It requires clients to store and keep their data, which may increase the memory usage of already resource-limited clients. Furthermore, users may not be able to store data for more than a specific time due to privacy concerns. Finally, depending solely on the clients to preserve the past is not reliable, as clients leaving means losing their data.
To address the aforementioned problems, we propose MFCL, _Mimicking Federated Continual Learning_: a privacy-preserving federated continual learning approach without episodic memory. In particular, MFCL is based on training a generative model in the server and sharing it with clients to sample synthetic examples of past data instead of storing the actual data on the client side. The generative model training is data-free in the sense that no form of training data is required from the clients, and only the global model is used in this step. It is specifically crucial because this step does not require powerful clients and does not cause any extra data leakage. Finally, this algorithm has competitive performance; our numerical experiments demonstrate improvement by \(10\%-20\%\) in average accuracy while reducing the training overhead of the clients.
Moreover, benchmarking federated continual learning in practical scenarios requires a large dataset to split among tasks and clients. However, existing datasets are not sufficiently large, causing most of the existing works in federated continual learning evaluating on a few clients (\(5\) to \(20\)) [45; 24; 52]. To enable more practical evaluations, we release a new regrouping of the ImageNet dataset, _SuperImageNet_. SuperImageNet enables evaluation with many clients and ensures all clients are assigned sufficient training samples regardless of the total number of tasks and active clients.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Challenge** & **Limitation** \\ \hline Low memory & Clients cannot store many examples \\ \hline Clients drop out & Causes loss of information stored in memory \\ \hline New clients join & New clients only have access to new classes \\ \hline Privacy & Limits data saving and sharing of the clients \\ \hline \end{tabular}
\end{table}
Table 1: Challenges that limit the direct use of continual learning methods in federated settings.
Figure 1: In the real world, users constantly change their interests, observe new data, or lose some of the old ones. As a result, the training dataset is divided into different tasks. For example, here, at \(Task=1\), the clients’ datasets dominantly include pictures of animals, and by the end of the training (\(Task=T\)), the trend shifts towards landscapes.
We summarize our contributions below:
* We propose a novel framework to tackle the federated class incremental learning problem more efficiently for many users. Our framework specifically targets applications where past data samples on clients are unavailable.
* We point out potential issues with relying on client-side memory for FCL. Furthermore, we propose using a generative model trained by the server in a _data-free manner_ to help overcome catastrophic forgetting while preserving privacy.
* We modify the client-side training of traditional FL techniques in order to mitigate catastrophic forgetting using a generative model.
* We propose a new regrouping of the ImageNet dataset, SuperImageNet, tailored to federated continual learning settings that can be scaled to a large number of clients and tasks.
* We demonstrate the efficacy of our method in more realistic scenarios with a larger number of clients and more challenging datasets such as CIFAR-100 and TinyImageNet.
## 2 Related Work
**Continual Learning.** Catastrophic forgetting [39] is a fundamental problem in machine learning: when we train a model on new examples, its performance degrades when evaluated on past data. This problem is investigated in continual learning (CL) [59], and the goal is for the model to learn new information while preserving its knowledge of old data. A large body of research has attempted to tackle this problem from different angles, such as adding regularization terms [31, 1, 41], experience replay by storing data in memory [2, 10, 4, 35], training a generative model [56, 53, 32], or architecture parameter isolation [16, 38, 19, 51].
In CL settings, the training data is presented to the learner as a sequence of datasets - commonly known as **tasks**. In each timestamp, only one dataset (task) is available, and the learner's goal is to perform well on all the current and previous tasks.
Recent work focuses on three main scenarios, namely task-, domain- and class-incremental learning (IL) [54]. In _Task-IL_, tasks are disjoint, and the output spaces are separated by task IDs provided during training and test time. For _Domain-IL_, the output space does not change for different tasks, but the task IDs are no longer provided. Finally, in _Class-IL_, new tasks introduce new classes to the output space, and the number of classes increases incrementally. Here, we work on **Class-IL**, which is the more challenging and realistic, especially in FL. In most of the FL applications, there is no task ID available, and it is preferred to learn a _single_ model for all the observed data.
**Class Incremental Learning.** In standard centralized Class-IL, the model is trained on a sequence of non-overlapping \(T\) tasks \(\{\mathcal{T}^{(1)},\mathcal{T}^{(2)},...,\mathcal{T}^{(T)}\}\) where the data distribution of task \(t\), \(D^{t}\), is fixed but unknown in advance, while all the tasks share the same output space (\(\mathcal{Y}\)). For task \(t\), \(D^{t}\) consists of \(N^{t}\) pairs of samples and their labels \(\{(x_{i}^{t},y_{i}^{t})\}_{i=1}^{N^{t}}\}\), where all the newly introduced classes (\(y_{i}^{t}\)) belong to \(\mathcal{Y}^{t}\) (\(y_{i}^{t}\in\{\mathcal{Y}^{t}\}\) and \(\bigcup\limits_{j=1}^{t-1}\{\mathcal{Y}^{j}\}\bigcap\{\mathcal{Y}^{t}\}=\emptyset\)). Moreover, a shared output space among all tasks means that at the end of task \(t\), the total number of available classes equals \(q=\sum_{i=1}^{t}|\mathcal{Y}^{i}|\).
**Federated Continual Learning.** In real-life scenarios, users' local data is not static and may evolve. For instance, users' interests may change over time due to seasonal variations, resulting in more examples for a given class. On the other hand, reliability issues or privacy concerns may lead to users losing part of their old data as well. In Federated Continual Learning (FCL), the main focus is to adapt the global model to new data while maintaining the knowledge of the past.
Even though FCL is an important problem, it has only gained attention very recently, and [58] is the first paper on this topic. It focuses on Task-IL, which requires a unique task ID per task during inference. Furthermore, it adapts separate masks per task to improve personalized performance without preserving a common global model. This setting is considerably different than ours as we target Class-IL with a single global model to classify all the classes seen so far. [37] employs server and client-side knowledge distillation using a surrogate dataset. [15] relaxes the problem as clients have access to large memory to save the old examples and share their data, which is different from the standard FL setting. Some works, such as [26, 44, 52], explore the FCL problem in domains other than image classification. [42] has proposed using variational embedding to send data to the server securely and then server-side training to rehearse the previous task for Domain-IL.
This work focuses on Class-IL for supervised image classification without memory replay, similar to [45; 24]. However, [24] allows overlapping classes between tasks and focuses on few-shot learning, which is different from the standard Class-IL. The most related work to ours is [45], where authors propose FedCIL. This work also benefits from generative replay to compensate for the absence of old data and overcome forgetting. In FedCIL, clients train the discriminator and generator locally. Then, the server takes a consolidation step after aggregating the updates. In this step, the server generates synthetic data using all the generative models trained by the clients to consolidate the global model and improve the performance. The main difference between this work and ours is that in our work, the generative model is trained by the server in a data-free manner, which can reduce clients' training time and computation and does not require their private data (detailed comparison in Appendix H).
**Data-Free Knowledge Distillation.** Knowledge distillation (KD) [25] is a popular method to transfer knowledge from a well-trained teacher model to a (usually) smaller student model. Common KD methods are data-driven, and at least a small portion of training data is required. However, in some cases, training data may not be available during knowledge distillation due to privacy concerns. To tackle this problem, a new line of work [12; 22] proposes _data-free knowledge distillation_. In such methods, a generative model is used as a training data substitute. This generative model is trained to generate synthetic images such that the teacher model predicts them as their assigned label (Figure 2). This method has recently become popular in CL [57; 50] as well, mainly due to the fact that it can eliminate the need for memory in preserving knowledge. Data-free KD has been previously used in FL [60] to reduce the effect of data heterogeneity. However, to the best of our knowledge, this is the first work that adapted such a technique in the context of federated continual learning.
## 3 Federated Class Incremental Learning with MFCL
In federated Class-IL, a shared model is trained on \(T\) different tasks. However, the distributed and private nature of FL makes it distinct from the centralized version. In FL, users may join, drop out, or change their data independently. Besides, required data or computation power for some centralized algorithms may not be available in FL due to privacy and resource constraints.
To address the aforementioned problems, we propose MFCL, which is less reliant on the client-side memory and computational power. This algorithm includes two essential parts: _first_, at the end of each task, the server trains a generative model with data-free knowledge distillation methods to learn the representation of the seen classes. _Second_, clients can reduce catastrophic forgetting by generating synthetic images from the trained generative model obtained from the server side. This way, clients are not required to use their memory for storing old data. Moreover, this technique can address the problem of newly connected clients without past data. Furthermore, since the server trains the generative model training without additional information, this step does not introduce **new** privacy issues. Finally, MFCL can help mitigate the data heterogeneity problem, as clients can synthesize samples from classes they do not own [60] in memory. Next, we explain the two key parts of MFCL: server-side generative model (Figure 3 Left) and client-side continual learning (Figure 3 Right).
### Server-Side: Generative Model
The motivation for deploying a generative model is to synthesize images that mimic the old tasks and to avoid storing past data. However, training these generative models on the client's side, where the training data exists, is _computationally expensive_, _requires a large amount of training data_ and can be potentially _privacy concerning_. On the other hand, the server has only access to the global model and aggregated weights and no data. We propose training a generative model on the server, but in a data-free manner, i.e., utilizing model-inversion image synthesis [57; 50]. In such approaches, the goal is to synthesize images optimized with respect to the discriminator (global model). Then, the
Figure 2: Data-Free Knowledge Distillation. The generator receives random noise as input labels and synthesizes images that are labeled correctly by the trained teacher model.
generative model is shared with the clients to generate images during local training. To this aim, we utilize a generative model with ConvNet architecture, \(\mathcal{G}\), that takes noise \(z\sim\mathcal{N}(0,1)\) as input and produces a synthetic sample \(\tilde{x}\), resembling the original training input with the same dimensions. In order to train this model, we must balance the various training objectives we detail next.
**Cross Entropy Loss.** First, the synthetic data should be labeled correctly by the current discriminator model (global model or \(\mathcal{F}\)). To this end, we employ cross entropy classification loss between its assigned label \(z\) and the prediction of \(\mathcal{F}\) on synthetic data \(\tilde{x}\). Note that noise dimension can be arbitrary and greater than the current discovered classes of task \(t\); therefore, we only consider the first \(q\) dimension here, where \(q=\sum_{i=1}^{t}|\mathcal{Y}^{i}|\) (which is equal to the total number of classes seen in the previous tasks). Then, we can define the cross-entropy loss as
\[\mathcal{L}_{CE}=CE(argmax(z[:q]),\mathcal{F}(\tilde{x})). \tag{1}\]
**Diversity Loss.** Synthetic images can suffer from a lack of class diversity. To solve this problem, we utilize the information entropy (IE) loss [12]. For a probability vector \(\text{p}=(p_{1},p_{2},...,p_{q})\), information entropy is evaluated as \(\mathcal{H}_{info}(\text{p})=-\frac{1}{q}\sum_{i}p_{i}\log(p_{i})\). Based on the definition, inputs with uniform data distributions have the maximum IE. Hence, to encourage \(\mathcal{G}\) to produce diverse samples, we deploy the diversity loss defined as
\[\mathcal{L}_{div}=-\mathcal{H}_{info}(\frac{1}{bs}\sum_{i=1}^{bs}\mathcal{F}( \tilde{x}_{i})). \tag{2}\]
This loss measures the IE for samples of a batch (\(bs\): batch size). Maximizing this term encourages the output distribution of the generator to be more uniform and balanced for all the available classes.
**Batch Statistics Loss.** Prior works [22, 57, 50] in the centralized setting have recognized that the distribution of synthetic images generated by model inversion methods can drift from real data. Therefore, in order to avoid such problems, we add batch statistics loss \(\mathcal{L}_{BN}\) to our generator training objective. Specifically, the server has access to the statistics (mean and standard deviation) of the global model's BatchNorm layers obtained from training on real data. We want to enforce the same statistics in all BatchNorm layers on the generated synthetic images as well. To this end, we minimize the layer-wise distances between the two statistics written as
\[\mathcal{L}_{BN}=\frac{1}{L}\sum_{i=1}^{L}KL(\mathcal{N}(\mu_{i},\sigma_{i}^{2 }),\mathcal{N}(\tilde{\mu}_{i},\tilde{\sigma}_{i}^{2}))=\log\frac{\hat{\sigma }}{\sigma}-\frac{1}{2}(1-\frac{\sigma^{2}+(\mu-\hat{\mu})^{2}}{\hat{\sigma}^{2 }}). \tag{3}\]
Here, \(L\) denotes the total number of BatchNorm layers, \(\mu_{i}\) and \(\sigma_{i}\) are the mean and standard deviation stored in BatchNorm layer \(i\) of the global model, \(\tilde{\mu}_{i},~{}\tilde{\sigma}_{i}\) are measured statistics of BatchNorm layer \(i\) for the synthetic images. Finally, \(KL\) stands for the Kullback-Leibler (KL) divergence.
We want to note that this loss does not rely on the BatchNorm layer itself but rather on their stored statistics (\(\tilde{\mu}_{i},\tilde{\sigma}_{i}\) ). \(\mathcal{G}\) aims to generate synthetic images similar to the real ones such that the global model would not be able to classify them purely based on these statistics. One way to achieve this is to ensure that synthetic and real images have similar statistics in the intermediate layers, and this is
Figure 3: Overview of MFCL. **Left.** The server aggregates the updates every round and trains a generator using data-free methods at the end of each task. **Right.** Clients train their models locally using their local data and synthetic images of past tasks from the generator.
the role of \(\mathcal{L}_{BN}\). In our experiments, we employed the most common baseline model in CL, which already contains BatchNorm layers and measures those statistics. However, these layers are not a necessity and can be substituted by similar ones, such as GroupNorm. In general, if no normalization layer is used in the model, clients can still compute the running statistics of specific layers and share them with the server, and later, the server can use them in the training of the \(\mathcal{G}\).
**Image Prior Loss.** In natural images, adjacent pixels usually have values close to each other. Adding prior loss is a common technique to encourage a similar trend in the synthetic images [22]. In particular, we can create the smoothed (blurred) version of an image by applying a Gaussian kernel and minimizing the distance of the original and \(Smooth(\tilde{x})\) using the image prior loss
\[\mathcal{L}_{pr}=||\tilde{x}-Smooth(\tilde{x})||_{2}^{2}. \tag{4}\]
In summary, we can write the training objective of \(\mathcal{G}\) as Equation 5 where \(w_{div}\), \(w_{BN}\) and \(w_{pr}\) control weight of each term.
\[\min_{\mathcal{G}}\mathcal{L}_{CE}+w_{div}\mathcal{L}_{div}+w_{BN}\mathcal{L} _{BN}+w_{pr}\mathcal{L}_{pr}, \tag{5}\]
### Client-side: Continual Learning
For client-side training, our solution is inspired by the algorithm proposed in [50]. In particular, the authors distill the _stability-plasticity_ dilemma into three critical requirements of continual learning and aim to address them one by one.
**Current Task.** To have plasticity, the model needs to learn the new features in a way that is least biased towards the old tasks. Therefore, instead of including all the output space in the loss, the CE loss can be computed _for the new classes only_ by splitting the linear heads and excluding the old ones, which we can write as
\[\mathcal{L}_{CE}^{t}=\begin{cases}CE(\mathcal{F}_{t}(x),y),&if\ y\in\mathcal{Y }^{t}\\ 0,&O.W.\end{cases} \tag{6}\]
**Previous Tasks.** To overcome forgetting, after the first task, we train the model using synthetic and real data simultaneously. However, the distribution of the synthetic data might differ from the real one, and it becomes important to prevent the model from distinguishing old and new data only based on the distribution difference. To address this problem, we only use the extracted features of the data. To this aim, clients freeze the feature extraction part and only update the classification head (represented by \(\mathcal{F}_{t}^{*}\)) for both real (\(x\)) and synthetic (\(\tilde{x}\)) images. This fine-tuning loss is formulated as
\[\mathcal{L}_{FT}^{t}=CE(\mathcal{F}_{t}^{*}([x,\tilde{x}]),y). \tag{7}\]
Finally, to minimize feature drift and forgetting of the previous tasks, the common method is knowledge distillation over the prediction layer. However, [50] proposed _importance-weighted feature distillation_: instead of using the knowledge in the decision layer, they use the output of the feature extraction part of the model (penultimate layer). This way, only the more significant features of the old model are transferred, enabling the model to learn the new features from the new tasks. This loss can be written as
\[\mathcal{L}_{KD}^{t}=||\mathcal{W}(\mathcal{F}_{t}^{1:L-1}([x,\tilde{x}]))- \mathcal{W}(\mathcal{F}_{t-1}^{1:L-1}([x,\tilde{x}]))||_{2}^{2}, \tag{8}\]
where \(\mathcal{W}\) is the frozen linear head of the model trained on the last task (\(\mathcal{W}=\mathcal{F}_{t-1}^{L}\)).
In summary, the final objective on the client side as
\[\min_{\mathcal{F}_{t}}\mathcal{L}_{CE}^{t}+w_{FT}\mathcal{L}_{FT}^{t}+w_{KD} \mathcal{L}_{KD}^{t}, \tag{9}\]
where \(w_{FT}\) and \(w_{KD}\) are hyper-parameters determining the importance of each loss term.
### Summary of MFCL Algorithm
In summary, during the first task, clients train the model using only the \(\mathcal{L}_{CE}\) part of (9) and send their updates to the server where the global model gets updated (FedAvg) for \(R\) rounds. At the end of training task \(t=1\), the server trains the generative model by optimizing (5), using the latest global model. Finally, the server freezes and saves \(\mathcal{G}\) and the global model (\(\mathcal{F}_{t-1}\)). This procedure repeats for all future tasks, with the only difference being that for \(t>1\), the server needs to send the current global model (\(\mathcal{F}_{t}\)), precious task's final model (\(\mathcal{F}_{t-1}\)) and \(\mathcal{G}\) to clients. Since \(\mathcal{F}_{t-1}\) and \(\mathcal{G}\) are fixed during training \(\mathcal{F}_{t}\), the server can send them to each client once per task to reduce the communication cost. To further decrease this overhead, we can employ communication-efficient methods in federated learning, such as [5], that can highly compress the model with minor performance degradation, which we leave for future work. Algorithm 1 in the Appendix A shows different steps of MFCL.
## 4 SuperImageNet
In centralized Class-IL, the tasks are disjoint, and each task reveals a new set of classes; therefore, the total number of classes strongly limits the number of tasks. Moreover, we must ensure that each task has sufficient training data for learning. Thus, the number of examples per class is essential in creating CL datasets. However, the dataset needs to be split along the task dimension and clients in a Federated Class-IL setup. For instance, CIFAR-100, a popular dataset for benchmarking FL algorithms, consists of \(100\) classes, each with \(500\) examples, which must be partitioned into \(T\) tasks, and each task's data is split among \(N\) clients. In other words, for a single task, a client has access to only \(\frac{1}{T\times N}\) of that dataset; in a common scenario where \(N=100\) and \(T=10\), we can assign only \(50\) samples to each client (about \(5\) example per class in i.i.d data distribution), which is hardly enough.
To resolve this problem, prior works have used a small number of clients [45; 24; 52], combined multiple datasets [58], employed a surrogate dataset [37] or allowed data sharing among the clients [15]. However, these solutions may not be possible, applicable, or may violate the FL's assumptions. This demonstrates the importance of introducing new benchmark datasets for federated continual settings.
We introduce **SuperImageNet**, a dataset created by superclassing the _ImageNet_[14] dataset, thus greatly increasing the number of available samples for each class. There are \(3\) versions of the dataset, each offering a different trade-off between the number of classes (for Class-IL) and the number of examples per class (for FL) as shown in Table 4. For example, _SuperImageNet-M_ has _10x_ more samples per class compared to CIFAR-100, which allows for an order of magnitude increase in the number of federated clients in while maintaining the same amount of training data per client. As shown in Figure 4, we have merged classes of similar concepts to increase the sample size per class.
## 5 Experiments
**Setting.** We demonstrate the efficacy of our method on three challenging datasets: CIFAR-100 [30], TinyImageNet [43] and SuperImageNet-L 1. For all datasets, we use the baseline ResNet18 [23] as the global model and ConvNet architecture for \(\mathcal{G}\), which we explain in detail in the Appendix C.
Footnote 1: The image size of the CIFAR-100, TinyImageNet, and SuperImageNet datasets is \(32\times 32\), \(64\times 64\) and \(224\times 224\), respectively
\begin{table}
\begin{tabular}{l c c}
**Dataset** & **\# examples/class** & **\# classes** \\ \hline \hline _SuperImageNet-S_ & \(2500\) & \(100\) \\ _SuperImageNet-M_ & \(5000\) & \(75\) \\ _SuperImageNet-L_ & \(7500\) & \(50\) \\ \end{tabular}
\end{table}
Table 2: Versions of SuperImageNet
Figure 4: Building SuperImageNet by regrouping ImageNet dataset. Labels in Blue are the original labels, and in Red are the labels in SuperImageNet.
Table 3 summarizes the setting for each dataset. For each dataset, there are 10 non-overlapping tasks (\(T=10\)), and we use Latent Dirichlet Allocation (LDA) [46] with \(\alpha=10\) to distribute the data of each task among the clients. Clients train the local model using an SGD optimizer, and all the results were reported after averaging over 3 different random initializations (seeds). We refer to Appendix F for other hyperparameters.
**Metric.** We use three metrics -Average Accuracy, Average Forgetting, and Wallclock time.
_Average Accuracy (\(\tilde{\mathcal{A}}\)):_ Let us define Accuracy (\(\mathcal{A}^{t}\)) as the accuracy of the model at the end of task \(t\), over _all_ the classes observed so far. Then, \(\tilde{\mathcal{A}}\) is average of all \(\mathcal{A}^{t}\) for all the \(T\) available tasks.
_Average Forgetting (\(\tilde{f}\)):_ Forgetting (\(f^{t}\)) of task \(t\) is defined as the difference between the highest accuracy of the model on task \(t\) and its performance at the end of the training. Therefore, we can evaluate the average forgetting by averaging all the \(f^{t}\) for task \(1\) to \(T-1\) at the end of task \(T\).
_Wallclock time._ This is the time the server or clients take to perform one FL round in seconds. The time is measured rounds on our local GPU NVIDIA-A100 and averaged between different clients.
**Baseline.** We compare our method with **FedAvg**[40], **FedProx**[33], **FedProx\({}^{+}\)**, **FedCLL**[45], **FedLwF-2T**[52] and **Oracle. FedAvg** and **FedProx** are the two most common aggregation methods; specifically, FedProx is designed for non-i.i.d data distributions and tries to minimize the distance of the client's update from the global model. Inspired by FedProx, we also explore adding a loss term to minimize the change of the current global model from one from the previous task, which we name **FedProx\({}^{+}\)**. **FedCIL** is a GAN-based method where clients train the discriminator and generator locally to generate synthetic samples from the old tasks. **FedLwF-2T** is another method designed for federated continual learning. In this method, clients have two additional knowledge distillation loss terms: their local model trained on the previous task and the current global model. Finally, **Oracle** is an upper bound on the performance: during the training of the \(i_{th}\) task, clients have access to all of their training data from \(t=1\) to \(t=i\).
### Results
Figure 5 shows the accuracy of the model on all the observed classes so far. In all three datasets, MFCL consistently outperforms the baselines by a large margin (up to \(25\%\) absolute improvement in test accuracy). In the CIFAR-100 dataset, the only baseline that can also correctly classify some examples from past data is **FedCIL**. Both MFCL and FedCIL benefit from a generative model (roughly the same size) to remember the past. Here, a similar generative model to the one in the [45] for the CIFAR-10 dataset is used. Since, in FedCIL, the clients train the generative and global models simultaneously, they require more training iteration. We repeat the same process and adapt similar architectures for the other two datasets. 2 But, given that GANs are not straightforward to fine-tune, this method does not perform well or converge. We explain more in the Appendix H.
Footnote 2: This result might improve by allocating relatively more resources to the clients.
We have further compared the performance and overhead of the methods in Table 4. The first two metrics, Average Accuracy and Average Forgetting reveal how much the model is learning new tasks while preserving its performance on the old task. As expected, FedAvg and FedProx have the highest forgetting values because they are not designed for such a scenario. Also, high forgetting for FedLwF-2T indicates that including teachers in the absence of old data cannot be effective. Notably, FedProx\({}^{+}\) has a lower forgetting value, mainly due to the fact that it also has lower performance for each task. Finally, FedCIL and MFCL have experienced the least forgetting with knowledge transferred from the old task to the new ones. Particularly, MFCL has the smallest forgetting, which means it is the most successful in preserving the learned knowledge.
We also compare the methods based on their computational costs. It is notable that some methods change after learning the first task; therefore, we distinguish between the cost of the first task and the other ones. As depicted, for \(T>1\), MFCL slightly increases the training time caused by employing the generative model. But, as a trade-off, it can significantly improve performance and forgetting.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & \#Client & \begin{tabular}{c} \#Client \\ per round \\ \end{tabular} &
\begin{tabular}{c} \#classes \\ per task \\ \end{tabular} \\ \hline \hline CIFAR-100 & 50 & 5 & 10 \\ \hline TinyImageNet & 100 & 10 & 20 \\ \hline SuperImageNet-L & 300 & 30 & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Training parameters of each dataset.
The server cost in MFCL is similar to FedAvg except at the end of each task, where it needs to train the generative model. This extra computation cost should not be a bottleneck because it occurs once per task, and servers usually have access to better computing power compared to clients.
### Ablation Studies
Here, we demonstrate the importance of each component in our proposed algorithm, both on the server and client side, by ablating their effects one by one. Table 5 shows our results, where each row removes a single loss component, and each column represents the corresponding test accuracy (\(\mathcal{A}^{t}\)), average accuracy (\(\tilde{\mathcal{A}}\)), average forgetting (\(\tilde{f}\)) and their difference from our proposed method. The first three rows are the losses for training the generative model. Our experiments show that Batch Statistics Loss (\(\mathcal{L}_{BN}\)) and Diversity loss (\(\mathcal{L}_{div}\)) play an essential role in the final performance. The next three rows reflect the importance of client-side training. In particular, the fourth row (_Ours-w/o \(\mathcal{L}_{CE}^{t}\)_) represents the case where clients use all the linear heads of the model for cross-entropy instead of splitting the heads and using the part related to the current task only. The following two rows show the impact of removing \(\mathcal{L}_{FT}^{t}\) and \(\mathcal{L}_{KD}^{t}\) from the client loss. In all three cases, the loss considerably drops, demonstrating the importance of all components. Finally, FedAvg + Gen shows the performance of the case where the server trains the generative model, and clients use its synthetic data the same way as the real ones without further modifications. In the Appendix G, we perform additional ablations on hyperparameters, such as weights of each loss term, generator model size, and noise dimension.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} & Average Accuracy & Average forgetting & Training time (s) & Training time (s) & Server Runtime (s) \\ & \(\tilde{\mathcal{A}}\)(\%) & \(\tilde{f}\)(\%) & \((T=1)\) & \((T>1)\) & \(\approx 1.8\) \\ \hline \hline
**FedAvg** & \(22.27\pm 0.22\) & \(78.77\pm 0.83\) & \(\approx 1.2\) & \(\approx 1.2\) & \(\approx 1.8\) \\
**FedProx** & \(22.00\pm 0.31\) & \(78.17\pm 0.33\) & \(\approx 1.98\) & \(\approx 1.98\) & \(\approx 1.8\) \\
**FedClL** & \(26.8\pm 0.44\) & \(38.19\pm 0.31\) & \(\approx 17.8\) & \(\approx 24.5\) & \(\approx 2.5\) for \(T=1\), \(\approx 4.55\) for \(T>1\) \\
**FedLwF-2T** & \(22.17\pm 0.13\) & \(75.08\pm 0.72\) & \(\approx 1.2\) & \(\approx 3.4\) & \(\approx 1.8\) \\
**MFCL (Ours)** & \(\mathbf{44.98\pm 0.12}\) & \(\mathbf{28.3\pm 0.78}\) & \(\approx 1.2\) & \(\approx 3.7\) & \(\approx 330\) (once per task), \(\approx 1.8\) O.W. \\
**Oracle** & \(67.12\pm 0.4\) & \(--\) & \(\approx 1.2\) & \(\approx 1.2\times T\) & \(\approx 1.8\) \\ \end{tabular}
\end{table}
Table 4: Performance of the different baselines in terms of Average Accuracy. Average Forgetting and Wallclock time for CIFAR-100 dataset.
Figure 5: Test Accuracy vs. \(\#\) observed tasks for (a) CIFAR-100, (b) TinyImageNet, (C) SuperImageNet-L datasets. After each task, the model is evaluated on all the seen tasks so far.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c|c|c|c} \hline Method & \(\mathcal{A}^{1}\) & \(\mathcal{A}^{2}\) & \(\mathcal{A}^{3}\) & \(\mathcal{A}^{4}\) & \(\mathcal{A}^{5}\) & \(\mathcal{A}^{6}\) & \(\mathcal{A}^{7}\) & \(\mathcal{A}^{8}\) & \(\mathcal{A}^{9}\) & \(\mathcal{A}^{10}\) & \(\tilde{\mathcal{A}}\) & \(\Delta\) & \(\tilde{F}\) & \(\Delta\) \\ \hline \hline Ours-w/o \(\mathcal{L}_{BN}\) & \(70.00\) & \(47.02\) & \(43.93\) & \(38.98\) & \(35.98\) & \(34.14\) & \(32.60\) & \(30.17\) & \(27.93\) & \(24.36\) & \(38.51\) & \(-6.47\) & \(45.95\) & \(+17.65\) \\ \hline Ours-w/o \(\mathcal{L}_{div}\) & \(70.47\) & \(52.33\) & \(49.90\) & \(44.87\) & \(42.09\) & \(39.56\) & \(38.18\) & \(35.21\) & \(33.74\) & \(32.40\) & \(43.87\) & \(-1.11\) & \(29.47\) & \(+11.17\) \\ \hline Ours-w/o \(\mathcal{L}_{div}\) & \(69.87\) & \(53.48\) & \(47.60\) & \(39.60\) & \(35.43\) & \(32.95\) & \(30.81\) & \(27.15\) & \(25.14\) & \(22.94\) & \(38.44\) & \(-6.54\) & \(44.80\) & \(+16.5\) \\ \hline Ours-w/o \(\mathcal{L}_{ER}\) & \(70.10\) & \(40.10\) & \(33.40\) & \(26.70\) & \(21.33\) & \(19.24\) & \(17.96\) & \(14.00\) & \(13.69\) & \(11.28\) & \(26.78\) & \(-18.20\) & \(72.24\) & \(+43.94\) \\ \hline Ours-w/o \(\mathcal{L}_{ER}\) & \(70.32\) & \(46.17\) & \(42.16\) & \(37.57\) & \(33.91\) & \(32.29\) & \(30.94\) & \(28.25\) & \(27.00\) & \(24.64\) & \(37.33\) & \(-7.65\) & \(42.85\) & \(+14.55\) \\ \hline Ours-w/o \(\mathcal{L}_{ER}\) & \(70.10\) & \(45.92\) & \(38.60\) & \(31.01\) & \(26.45\) & \(24.07\) & \(21.32\) & \(18.02\) & \(16.85\) & \(16.29\) & \(30.86\) & \(-14.12\) & \(53.64\) & \(+25.34\) \\ \hline FedAvg + Gen & \(70.57\) & \(40.07\) & \(30.91\) & \(23.75\) & \(20.38\) & \(17.56\) & \(16.02\) & \(12.90\) & \(13.18\) & \(11.57\) & \(25.69\) & \(-19.29\) & \(60.46\) & \(+32.16\) \\ \hline Ours & \(71.50\) & \(55.00\) & \(50.73\) & \(45.73\) & \(42.38\) & \(40.62\) & \(38.97\) & \(36.18\) & \(35.47\) & \(33.25\) & \(44.98\) & \(-\) & \(28.3\) & \(-\) \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study for MFCL on CIFAR-100
## 6 Discussion
**Privacy of MFCL.** Federated Learning, specifically FedAvg, is vulnerable to different attacks, such as data poisoning, model poisoning, backdoor attacks, and gradient inversion attacks [27; 36; 18; 20; 11; 33]. We believe, MFCL generally does not introduce any additional privacy issues and still it is prone to the same set of attacks as FedAvg. MFCL trains the generative model based on the weights of the _global model_, which is already available to all clients in the case of FedAvg. On the contrary, in some prior work in federated continual learning, the clients need to share a locally trained generative model or perturbed private data, potentially causing more privacy problems.
Furthermore, for FedAvg, various solutions and defenses, such as differential privacy or secure aggregation [55; 8], are proposed to mitigate the effect of such privacy attacks. One can employ these solutions in the case of MFCL as well. Notably, in MFCL, the server **does not** require access to the individual client's updates and uses the aggregated model for training. Therefore, training a generative model is still viable after incorporating these mechanisms.
In MFCL, the server trains the generator using only client updates. Figure 6 presents random samples of real and synthetic images from the CIFAR-100 dataset. Images of the same column correspond to real and synthetic samples from the same class. Synthetic samples do not resemble any specific training examples of the clients and thus preserve privacy. However, they consist of some common knowledge about the class and effectively represent the whole class. Therefore, they can significantly reduce catastrophic forgetting.
**Limitations.** In our method, clients need the generative model, the final global model of the last task, and the current global model, which adds overheads such as communication between the server and clients and storage. However, there are fundamental differences between storing the generative model and actual data. First, the memory cost is independent of the task size: as the number of tasks increases, clients either have to delete some of the existing examples of the memory to be able to add new ones or need to increase the memory size. In contrast, the generative model size is constant. Finally, clients can delete the generative model while not participating in the FL process and retrieve it later if they join. On the other hand, deleting data samples from memory results in a permanent loss of information. We have delved into this in Appendix D.
## 7 Conclusion
This work presents a federated Class-IL framework while addressing resource limitations and privacy challenges. We exploit generative models trained by the server in a data-free fashion, obviating the need for expensive on-device memory on clients. Our experiments demonstrate that our method can effectively alleviate catastrophic forgetting and outperform the existing state-of-the-art solutions.
## 8 Acknowledgment
This material is based upon work supported by ONR grant N00014-23-1-2191, ARO grant W911NF-22-1-0165, Defense Advanced Research Projects Agency (DARPA) under Contract No. FASTNICS HR001120C0088 and HR001120C0160, and gifts from Intel and Qualcomm. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
Figure 6: Real vs synthetic data generated by the generative model for CIFAR-100 dataset.
## References
* [1]R. Aljundi, F. Babiloni, M. Elhoseiny, M. Rohrbach, and T. Tuytelaars (2018) Memory aware synapses: learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 139-154. Cited by: SS1.
* [2]R. Aljundi, E. Belilovsky, T. Tuytelaars, L. Charlin, M. Caccia, M. Lin, and L. Page-Caccia (2019) Online continual learning with maximal interfered retrieval. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlche-Buc, E. Fox, and R. Garnett (Eds.), pp. 11849-11860. Cited by: SS1.
* [3]R. Aljundi, P. Chakravarty, and T. Tuytelaars (2017) Expert gate: lifelong learning with a network of experts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3366-3375. Cited by: SS1.
* [4]R. Aljundi, M. Lin, B. Goujaud, and Y. Bengio (2019) Gradient based sample selection for online continual learning. Advances in neural information processing systems32. Cited by: SS1.
* [5]S. Babakniya, S. Kundu, S. Prakash, Y. Niu, and S. Avestimehr (2022) Federated sparse training: lottery aware model compression for resource constrained edge. arXiv preprint arXiv:2208.13092. Cited by: SS1.
* [6]J. Bang, H. Kim, Y. Yoo, J. Ha, and J. Choi (2021) Rainbow memory: continual learning with a memory of diverse samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8218-8227. Cited by: SS1.
* [7]E. Belouadah, A. Popescu, and I. Kanellos (2021) A comprehensive study of class incremental learning algorithms for visual tasks. Neural Networks135, pp. 38-54. Cited by: SS1.
* [8]K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. Brendan McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth (2016) Practical secure aggregation for federated learning on user-held data. arXiv preprint arXiv:1611.04482. Cited by: SS1.
* [9]F. M. Castro, M. J. Marin-Jimenez, N. Guil, C. Schmid, and K. Alahari (2018) End-to-end incremental learning. In Proceedings of the European conference on computer vision (ECCV), pp. 233-248. Cited by: SS1.
* [10]A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato (2019) On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486. Cited by: SS1.
* [11]C. Chen, S. Babakniya, M. Paolieri, and L. Golubchik (2022) Defending against poisoning backdoor attacks on federated meta-learning. ACM Transactions on Intelligent Systems and Technology (TIST)13 (5), pp. 1-25. Cited by: SS1.
* [12]H. Chen, Y. Wang, C. Xu, Z. Yang, C. Liu, B. Shi, C. Xu, C. Xu, and Q. Tian (2019) Data-free learning of student networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3514-3522. Cited by: SS1.
* [13]Y. Chen, X. Qin, J. Wang, C. Yu, and W. Gao (2020) Fedhealth: a federated transfer learning framework for wearable healthcare. IEEE Intelligent Systems35 (4), pp. 83-93. Cited by: SS1.
* [14]J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. Cited by: SS1.
* [15]J. Dong, L. Wang, Z. Fang, G. Sun, S. Xu, X. Wang, and Q. Zhu (2022) Federated class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10164-10173. Cited by: SS1.
* [16]S. Ebrahimi, F. Meier, R. Calandra, T. Darrell, and M. Rohrbach (2020) Adversarial continual learning. In European Conference on Computer Vision, pp. 386-402. Cited by: SS1.
* [17]A. M. Elbir, B. Soner, and S. Coleri (2020) Federated learning in vehicular networks. arXiv preprint arXiv:2006.01412. Cited by: SS1.
* [18]M. Fang, X. Cao, J. Jia, and N. Gong (2020) Local model poisoning attacks to {Byzantine-Robust} federated learning. In 29th USENIX security symposium (USENIX Security 20), pp. 1605-1622. Cited by: SS1.
* [19] Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. _arXiv preprint arXiv:1701.08734_, 2017.
* [20] Jonas Geiping, Hartmut Bauermeister, Hannah Droge, and Michael Moeller. Inverting gradients-how easy is it to break privacy in federated learning? _Advances in Neural Information Processing Systems_, 33:16937-16947, 2020.
* [21] Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Francoise Beaufays, Sean Augenstein, Hubert Eichner, Chloe Kiddon, and Daniel Ramage. Federated learning for mobile keyboard prediction. _arXiv preprint arXiv:1811.03604_, 2018.
* [22] Matan Haroush, Itay Hubara, Elad Hoffer, and Daniel Soudry. The knowledge within: Methods for data-free model compression. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8494-8502, 2020.
* [23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [24] Sean M Hendryx, Dharma Raj KC, Bradley Walls, and Clayton T Morrison. Federated reconnaissance: Efficient, distributed, class-incremental learning. _arXiv preprint arXiv:2109.00150_, 2021.
* [25] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_, 2(7), 2015.
* [26] Ziyue Jiang, Yi Ren, Ming Lei, and Zhou Zhao. Fedspeech: Federated text-to-speech with continual learning. _arXiv preprint arXiv:2110.07216_, 2021.
* [27] Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurelien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. _Foundations and Trends(r) in Machine Learning_, 14(1-2):1-210, 2021.
* [28] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. _Proceedings of the national academy of sciences_, 114(13):3521-3526, 2017.
* [29] Jakub Konecny, H Brendan McMahan, Felix X Yu, Peter Richtarik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. _arXiv preprint arXiv:1610.05492_, 2016.
* [30] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [31] Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. _Advances in neural information processing systems_, 30, 2017.
* [32] Timothee Lesort, Hugo Caselles-Dupre, Michael Garcia-Ortiz, Andrei Stoian, and David Filliat. Generative models from the perspective of continual learning. In _2019 International Joint Conference on Neural Networks (IJCNN)_, pages 1-8. IEEE, 2019.
* [33] Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. _IEEE Signal Processing Magazine_, 37(3):50-60, 2020.
* [34] Zhizhong Li and Derek Hoiem. Learning without forgetting. _IEEE transactions on pattern analysis and machine intelligence_, 40(12):2935-2947, 2017.
* [35] David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. _Advances in neural information processing systems_, 30, 2017.
* [36] Lingjuan Lyu, Han Yu, and Qiang Yang. Threats to federated learning: A survey. _arXiv preprint arXiv:2003.02133_, 2020.
* [37] Yuhang Ma, Zhongle Xie, Jue Wang, Ke Chen, and Lidan Shou. Continual federated learning based on knowledge distillation.
* [38] Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_, pages 7765-7773, 2018.
* [39] Michael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In _Psychology of learning and motivation_, volume 24, pages 109-165. Elsevier, 1989.
* [40] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In _Artificial intelligence and statistics_, pages 1273-1282. PMLR, 2017.
* [41] Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard Turner, and Mohammad Emtiyaz E Khan. Continual deep learning by functional regularisation of memorable past. _Advances in Neural Information Processing Systems_, 33:4453-4464, 2020.
* [42] Tae Jin Park, Kenichi Kumatani, and Dimitrios Dimitriadis. Tackling dynamics in federated incremental learning with variational embedding rehearsal. _arXiv preprint arXiv:2110.09695_, 2021.
* [43] Hadi Pouransari and Saman Ghili. Tiny imagenet visual recognition challenge. _CS231N course, Stanford Univ., Stanford, CA, USA_, 5, 2014.
* [44] Aman Priyanshu, Mudit Sinha, and Shreyans Mehta. Continual distributed learning for crisis management. _arXiv preprint arXiv:2104.12876_, 2021.
* [45] Daiqing Qi, Handong Zhao, and Sheng Li. Better generative replay for continual federated learning. _arXiv preprint arXiv:2302.13001_, 2023.
* [46] Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konecny, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. _arXiv preprint arXiv:2003.00295_, 2020.
* [47] David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. _Advances in Neural Information Processing Systems_, 32, 2019.
* [48] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. _Advances in neural information processing systems_, 30, 2017.
* [49] Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditiks, Liron Mor-Yosef, and Itai Zeitak. Overcoming forgetting in federated learning on non-iid data. _arXiv preprint arXiv:1910.07796_, 2019.
* [50] James Smith, Yen-Chang Hsu, Jonathan Balloch, Yilin Shen, Hongxia Jin, and Zsolt Kira. Always be dreaming: A new approach for data-free class-incremental learning. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 9374-9384, 2021.
* [51] James Smith, Cameron Taylor, Seth Baer, and Constantine Dovrolis. Unsupervised progressive learning and the stam architecture. _arXiv preprint arXiv:1904.02021_, 2019.
* [52] Anastasia Usmanova, Francois Portet, Philippe Lalanda, and German Vega. A distillation-based approach integrating continual learning and federated learning for pervasive services. _arXiv preprint arXiv:2109.04197_, 2021.
* [53] Gido M Van de Ven and Andreas S Tolias. Generative replay with feedback connections as a general strategy for continual learning. _arXiv preprint arXiv:1809.10635_, 2018.
* [54] Gido M Van de Ven and Andreas S Tolias. Three scenarios for continual learning. _arXiv preprint arXiv:1904.07734_, 2019.
* [55] Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, and H. Vincent Poor. Federated learning with differential privacy: Algorithms and performance analysis. _IEEE Transactions on Information Forensics and Security_, 15:3454-3469, 2020.
* [56] Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, Zhengyou Zhang, and Yun Fu. Incremental classifier learning with generative adversarial networks. _arXiv preprint arXiv:1802.00853_, 2018.
* [57] Hongxu Yin, Pavlo Molchanov, Jose M Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz. Dreaming to distill: Data-free knowledge transfer via deepinversion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8715-8724, 2020.
* [58] Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, and Sung Ju Hwang. Federated continual learning with weighted inter-client transfer. In _International Conference on Machine Learning_, pages 12073-12086. PMLR, 2021.
* [59] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In _International Conference on Machine Learning_, pages 3987-3995. PMLR, 2017.
* [60] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. In _International Conference on Machine Learning_, pages 12878-12889. PMLR, 2021.
MFCL Algorithm
Algorithm 1 summarizes our method. Here, for every task, clients train the local model using the shared generative model. At the end of each task, the server updates the generative model using data-free methods.
```
1:\(N\): #Clients, \([\mathcal{C}_{N}]\): Client Set, \(K\): #Clients per Round, \(u_{i}\): client i Update, \(E\): Local Epoch
2:\(R\): FL Rounds per Task, \(T\): #Tasks, \(t\): current task, \(|\mathcal{Y}|^{t}\): Task \(t\) Size, \(q\): #Discovered Classes
3:\(\mathcal{F}_{t}\): Global Model for task t, \(\mathcal{G}_{t}\): Generative Model, \(E_{\mathcal{G}}\): Generator Training Epoch
4:\(q\gets 0\)
5:\(\mathcal{G},\mathcal{F}_{1}\leftarrow\textbf{initialize}()\)
6:for\(t=1\)to\(T\)do
7:\(q\gets q+|\mathcal{Y}^{t}|\)
8:\(\mathcal{F}_{t}\leftarrow\textbf{updateArchitecture}(\mathcal{F}_{t},q)\) # Add new observed classes in the classification layer.
9:for\(r=1\)to\(R\)do
10:\(C_{K}\leftarrow\textbf{RandomSelect}([\mathcal{C}_{N}],K)\)
11:for\(c\in C_{K}\) in parallel do
12:\(u_{c}\leftarrow\textbf{localUpdate}(\mathcal{F}_{t},\mathcal{G},\mathcal{F}_{t -1},E)\) # For \(t=1\) we do not need \(\mathcal{F}_{0}\) and \(\mathcal{G}\).
13:endfor
14:\(\mathcal{F}_{t}\leftarrow\textbf{globalAggregation}(\mathcal{F}_{t},[u_{c}])\)
15:endfor
16:\(\mathcal{F}_{t}\leftarrow\textbf{freezeModel}(\mathcal{F}_{t})\) # Fix Global model.
17:\(\mathcal{G}\leftarrow\textbf{trainDFGenerator}(\mathcal{F}_{t},E_{\mathcal{G}},q)\) # Train the generative model.
18:\(\mathcal{G}\leftarrow\textbf{freezeModel}(\mathcal{G})\) # Fix generator weights.
19:endfor
```
**Algorithm 1** MFCL
## Appendix B Code for Reproduction
The codebase for this work and regrouping the ImageNet dataset is available at [https://github.com/SaraBabakN/MFCL-NeurIPS23](https://github.com/SaraBabakN/MFCL-NeurIPS23).
## Appendix C Details of the Generative Model
**Architectures.** In Table 6, we show the generative model architectures used for CIFAR-100, TinyImageNet, and SuperImageNet datasets. In all experiments, the global model has ResNet18 architecture. For the CIFAR-100 and TinyImageNet datasets, we change the first CONV layer kernel size to \(3\times 3\) from \(7\times 7\). In this table, CONV layers are reported as \(\texttt{CONV}K\times K(C_{in},C_{out})\), where \(K\), \(C_{in}\) and \(C_{out}\) are the size of the kernel, input channel and output channel of the layer, respectively.
**Weight Initialization.** The generative model is randomly initialized for the first task and trained from scratch. For all the future tasks (t > 1), the server uses the previous generative model (t - 1) as the initialization.
**Synthetic Samples Generation.** To generate the synthetic data, clients sample i.i.d noise, which later would determine the classes via the argmax function applied to the first q elements (considering q is the total number of seen classes). Given the noise is sampled i.i.d, the probability of generating samples from class \(i\) equals \(\frac{1}{q}\). Although this might not lead to the same number of synthetic samples from each class in every batch, the generated class distribution is uniform over all classes. Thus, in expectation, we have class balance in generated samples.
**Catastrophic Forgetting in the Generative Model.** The effectiveness of the \(\mathcal{G}\) is closely linked to the performance of the global model. If the global model forgets old classes after completing a task, the quality of corresponding synthetic data will decline. Hence, it is crucial to select a reliable generative model and a robust global model. A good generative model can assist the global model in preventing forgetting when learning new tasks. This model can then serve as a teacher for the next round of the \(\mathcal{G}\) model.
**Global Aggregation Method.** In this work, we have employed FedAvg to aggregate the client updates. Since the generator is always trained after the aggregation, its training is not impacted by changing the aggregation method. However, the generative model uses the aggregated model as its discriminator, and it is directly affected by the quality of the final global model. Therefore, any aggregation mechanism that improves the global model's performance would also help the generative model and vice versa.
## Appendix D Overheads of generative model
**Client-side.** Using \(\mathcal{G}\) on the client side would increase the computational costs compared to vanilla FedAvg. However, existing methods in CL often need to impose additional costs such as memory, computing, or both to mitigate catastrophic forgetting. Nevertheless, there are ways to reduce costs for MFCL. For example, clients can perform inference once, generate and store synthetic images only for training, and then delete them all. They can further reduce costs by requesting that the server generate synthetic images and send them the data instead of \(\mathcal{G}\). Here, we raise two crucial points about the synthesized data. Firstly, there is an intrinsic distinction between storing synthetic (or \(\mathcal{G}\)) and actual data; the former is solely required during training, and clients can delete them right after the training. Conversely, the data in episodic memory should always be saved on the client's side because once deleted, it becomes unavailable. Secondly, synthetic data is shared knowledge that can assist anyone with unbalanced data or no memory in enhancing their model's performance. In contrast, episodic memory can only be used by one client.
**Server-side.** The server needs to train the \(\mathcal{G}\)**once per task**. It is commonly assumed that the server has access to more powerful computing power and can compute more information in a faster time compared to clients. This training step does not have overhead on the client side and might slow down the whole process. However, tasks do not change rapidly in real life, giving the server ample time to train the generative model before any trends or client data shifts occur.
**Communication Cost.** Transmitting the generative model can be a potential overhead for MFCL, as it is a cost that clients must bear **once per task** to prevent or reduce catastrophic forgetting. However, several possible methods, such as compression, can significantly reduce this cost while maintaining excellent performance. This could be an interesting direction for future research.
## Appendix E More on the Privacy of MFCL
**MFCL with Differential Privacy.** We want to highlight that the generator can only be as good as the discriminator in data-free generative model training. If the global model can learn the decision boundaries and individual classes with a DP guarantee, the generator can learn this knowledge and present it through the synthetic example. Otherwise, if the global model fails to learn the current tasks, there is not much knowledge to preserve for the future. With the DP guarantee, the main challenge is training a reasonable global model; improving this performance can also help the generative model.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**CIFAR-100** & **TinyImageNet** & **SuperImageNet** \\ \hline \hline \(\texttt{FC}(200,128\times 8\times 8)\) & \(\texttt{FC}(400,128\times 8\times 8)\) & \(\texttt{FC}(200,64\times 7\times 7)\) \\ \hline \(\texttt{reshape}(-,128,8,8)\) & \(\texttt{reshape}(-,128,8,8)\) & \(\texttt{reshape}(-,64,7,7)\) \\ \hline BatchNorm(128) & BatchNorm(128) & BatchNorm(64) \\ \hline Interpolate(2) & Interpolate(2) & Interpolate(2) \\ \hline \(\texttt{CONV}3\times 3(128,128)\) & \(\texttt{CONV}3\times 3(128,128)\) & \(\texttt{CONV}3\times 3(64,64)\) \\ \hline BatchNorm(128) & BatchNorm(128) & BatchNorm(64) \\ \hline LeakyReLU & LeakyReLU & LeakyReLU \\ \hline Interpolate(2) & Interpolate(2) & Interpolate(2) \\ \hline \(\texttt{CONV}3\times 3(128,64)\) & \(\texttt{CONV}3\times 3(128,128)\) & \(\texttt{CONV}3\times 3(64,64)\) \\ \hline BatchNorm(64) & BatchNorm(128) & BatchNorm(64) \\ \hline LeakyReLU & LeakyReLU & LeakyReLU \\ \hline \(\texttt{CONV}3\times 3(64,3)\) & Interpolate(2) \\ \hline Tanh & \(\texttt{CONV}3\times 3(64,64)\) \\ \hline BatchNorm(3) & BatchNorm(3) & BatchNorm(64) \\ \hline LeakyReLU & LeakyReLU \\ \hline \(\texttt{CONV}3\times 3(64,3)\) & Interpolate(2) \\ \hline Tanh & \(\texttt{CONV}3\times 3(64,64)\) \\ \hline BatchNorm(3) & BatchNorm(3) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Generative model Architecture
**MFCL with Secure Aggregation.** If the clients do not trust the server with their updates, a potential solution is _Secure Aggregation_. In a nutshell, secure aggregation is a defense mechanism that ensures update privacy, especially when the server is potentially malicious. More importantly, since MFCL also does not require individual updates, it is compatible with secure aggregation and can be employed to align with Secure Aggregation.
**Privacy Concerns Associated with Data Storage.** Currently, some different regulations and rules limit the storage time of users' data. Usually, the service providers do not own the data forever and are obligated to erase it after a specific duration. Sometimes, the data is available only in the form of a stream, and it never gets stored. But most of the time, data is available for a short period of enough to perform a few rounds of training. In this way, if multiple service providers participate in federated learning, their data would dynamically change as they delete old data and acquire new ones.
**MFCL and Batch Statistics.** MFCL benefits from Batch Statistics Loss (\(\mathcal{L}_{BN}\)) in training the generative model. However, some defense mechanisms suggest not sharing local Batch Statistics with the server. While training the generative model without the \(\mathcal{L}_{BN}\) is still possible, it can reduce the accuracy. Addressing this is an interesting future direction.
## Appendix F Hyperparameters
Table 7 presents some of the more important parameters and settings for each experiment.
## Appendix G Hyperparameter tuning for MFCL
Hyperparameters can play an essential role in the final performance of algorithms. In our experiments, we have adapted the commonly used parameters, and here, we show how sensitive the final performance is regarding each hyperparameter. This is particularly important because hyperparameter tuning is very expensive in federated learning and can be unfeasible in continual learning. To this aim, we change one parameter at a time while fixing the rest. In Table 8, we report the final \(\tilde{\mathcal{A}}\) of each hyperparameter on CIFAR-100 datasets with 10 tasks.
\(w_{div}\)**:** Weight of diversity loss (\(\mathcal{L}_{div}\)).
\(w_{BN}\)**:** Weight of Batch Statistics loss (\(\mathcal{L}_{BN}\)).
\(w_{pr}\)**:** Weight of Image Prior loss (\(\mathcal{L}_{PT}\)).
\(Z\_dim\)**:** Input noise dimension for training the \(\mathcal{G}\) model.
\(gen\_epoch\)**:** Number of iteration to train the \(\mathcal{G}\) model.
This is the setting that we used \(w_{div}=1,w_{BN}=75,w_{pr}=0.001,Z\_dim=200,gen\_epoch=5000\) and the average accuracy equals \(45.1\%\). (There may be a minor difference between this value and the result in the main manuscript. This discrepancy arises because we only ran the ablation for a single seed, whereas the results reported in the primary manuscript are the average of three different seeds.)
\begin{table}
\begin{tabular}{c|c c c}
**Dataset** & **CIFAR-100** & **TinyImageNet** & **SuperImageNet-L** \\ \hline
**Data Size** & \(32\times 32\) & \(64\times 64\) & \(224\times 224\) \\ \hline \(\#\) **Tasks** & \(10\) & \(10\) & \(10\) \\ \hline \(\#\) **Classes per task** & \(10\) & \(20\) & \(5\) \\ \hline
**\# Samples per class** & \(500\) & \(500\) & \(7500\) \\ \hline
**LR** & All task start with 0.1 and exponentially decay to 0.01 \\ \hline
**Batch Size** & 32 & 32 & 32 \\ \hline
**Synthetic Batch Size** & 32 & 32 & 32 \\ \hline
**FL round per task** & 100 & 100 & 100 \\ \hline
**Local epoch** & 10 & 10 & 1 \\ \end{tabular}
\end{table}
Table 7: Parameter Settings in different datasets
\begin{table}
\begin{tabular}{c c|c c|c c|c c|c c} \(\mathbf{w_{div}}\) & \(\tilde{\mathcal{A}}\) & \(\mathbf{w_{BN}}\) & \(\tilde{\mathcal{A}}\) & \(\mathbf{w_{pr}}\) & \(\tilde{\mathcal{A}}\) & \(\mathbf{Z\_dim}\) & \(\tilde{\mathcal{A}}\) & \(\mathbf{gen\_epoch}\) & \(\tilde{\mathcal{A}}\) \\ \hline
0.1 & \(44.35\) & 0.1 & \(40.12\) & 0.0001 & \(43.10\) & \(110\) & \(42.39\) & \(100\) & \(40.77\) \\
0.5 & \(44.37\) & 1 & \(43.90\) & 0.001 & \(45.1\) & \(200\) & \(45.1\) & \(5000\) & \(45.1\) \\
1 & \(45.1\) & 10 & \(44.77\) & 0.01 & \(43.56\) & \(1000\) & \(45.01\) & \(10000\) & \(43.35\) \\
2 & \(44.08\) & 75 & \(45.1\) & 0.1 & \(44.73\) & & & & \\
5 & \(44.57\) & 100 & \(45.02\) & 1 & \(44.37\) & & & & \\ \end{tabular}
\end{table}
Table 8: Effect of different hyperparameters on the final \(\tilde{\mathcal{A}}\) (\(\%\)) for CIFAR-100 dataset.
This table shows how robust the final performance is with respect to each parameter, which is preferred both in federated and continual learning problems.
## Appendix H Comparison between MFCL and FedCIL
Here, we would like to highlight some distinctions between our algorithm and FedCIL, both of which aim to alleviate catastrophic forgetting using generative models.
* In FedCIL, **clients** train the local generative model **every round**, which adds great computational overhead. On the other hand, in our approach, the generative model is trained on the server and only **once per task**.
* Training models in GANs usually require a large amount of data that is not commonly available, especially on edge devices. Our data-free generative models address this issue.
* Training the generative model directly from the training dataset may pose a risk of exposing sensitive training data, which contradicts the goal of FL. On the other hand, MFCL uses only the information from the global model.
* FedCIL is limited to simpler datasets and FL settings, such as MNIST and CIFAR10, with fewer clients and less complex architectures. In contrast, our approach can handle more complex datasets, such as CIFAR100, TinyImageNet, and SuperImagenet, with a much larger number of clients.
* Training GAN models usually require more careful hyperparameter tuning. To train FedCIL for TinyImageNet and SuperImageNet, we tried SGD and Adam optimizers with learning rates \(\in\{0.1,0.05,0.01\}\) and local epoch \(\in\{1,2\}\). Furthermore, we adopt a generative model architecture with a similar input dimension and a total number of parameters in MFCL. However, the model did not converge to a good performance. While a more extensive hyperparameter search might improve the results, it can indicate the difficulty of the hyperparameter tuning of this algorithm. It is worth mentioning that in order to train the CIFAR-10 dataset, we used a local epoch \(8\times\) larger than the other baselines; otherwise, the performance on this dataset would also degrade.
In conclusion, FedCIL can be a good fit for a cross-silo federated learning setting with only a few clients, each possessing a large amount of data and computing resources. Meanwhile, while still applicable in the above setting, our method is also suitable for edge devices with limited data and power. | ## Review
### Summary
This paper introduces a novel approach to federated continual learning (FLCL) by employing a server-trained generative model to produce synthetic data for mitigating catastrophic forgetting. The method allows clients to benefit from previous task distributions without requiring them to store real data, thus preserving privacy. The empirical results demonstrate that the proposed method outperforms existing techniques across several benchmarks, including the newly introduced SuperImageNet protocol. The paper effectively articulates the challenges of class-incremental learning in FL settings and provides a solid evaluation of its contributions.
### Strengths
- Addresses a realistic setting in federated learning where clients come and go and data changes over time.
- Introduces a data-free generative model that saves client computational resources while preserving privacy.
- Demonstrates significant performance improvements over existing methods across multiple datasets.
- Provides transparency regarding training costs and server overhead.
- Proposes a new benchmark dataset (SuperImageNet) for evaluating federated continual learning methods.
### Weaknesses
- The concept of 'task' may not be clear to those unfamiliar with continual learning terminology.
- Main results in some figures lack clarity in interpretation, raising questions about their presentation.
- The novelty may be perceived as a combination of existing ideas rather than a groundbreaking approach.
- The evaluation lacks a detailed theoretical analysis and may benefit from discussing potential privacy concerns related to the shared generative model.
### Questions
- Could the authors clarify the specific contributions of their method compared to existing federated learning approaches?
- How does the generative model manage privacy concerns when it is shared among clients?
- Is it practical for all clients to share the same task transitions simultaneously?
- What implications would adding differential privacy to the discriminator have on performance and privacy?
- Can the authors provide more details on the experimental setup and the rationale behind the design choices?
### Soundness
**Score:** 3
**Description:** 3 = good; the methodology is sound and results are reliable, though some aspects require further clarity and discussion.
### Presentation
**Score:** 3
**Description:** 3 = good; while the paper presents its ideas clearly, there are areas where figures and explanations could be improved for better comprehension.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper makes a solid contribution to the field of federated continual learning, though its novelty is somewhat debatable.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact potential but requires minor improvements in clarity and depth.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper addresses a significant and timely problem in federated continual learning, presenting a practical solution with empirical validation. While there are some weaknesses in clarity and theoretical depth, the overall contributions, including the introduction of SuperImageNet, and the favorable evaluation results support acceptance. The approach is innovative and fits well within current research trends, warranting its presentation at the conference.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# SoTTA: Robust Test-Time Adaptation on Noisy Data Streams
Taesik Gong\({}^{\dagger}\)1 Yewon Kim\({}^{\ddagger}\)1 Taeckyung Lee\({}^{\ddagger*}\) Sorn Chottananurak\({}^{\ddagger}\) Sung-Ju Lee\({}^{\ddagger}\)
\({}^{\dagger}\)Nokia Bell Labs \({}^{\ddagger}\)KAIST
[email protected]
{yewon.e.kim,taeckyung,sorn111930,profsj}@kaist.ac.kr
Equal contribution.
Footnote 1: footnotemark:
###### Abstract
Test-time adaptation (TTA) aims to address distributional shifts between training and testing data using only unlabeled test data streams for continual model adaptation. However, most TTA methods assume benign test streams, while test samples could be unexpectedly diverse in the wild. For instance, an unseen object or noise could appear in autonomous driving. This leads to a new threat to existing TTA algorithms; we found that prior TTA algorithms suffer from those noisy test samples as they blindly adapt to incoming samples. To address this problem, we present Screening-out Test-Time Adaptation (SoTTA), a novel TTA algorithm that is robust to noisy samples. The key enabler of SoTTA is two-fold: (i) input-wise robustness via high-confidence uniform-class sampling that effectively filters out the impact of noisy samples and (ii) parameter-wise robustness via entropy-sharpness minimization that improves the robustness of model parameters against large gradients from noisy samples. Our evaluation with standard TTA benchmarks with various noisy scenarios shows that our method outperforms state-of-the-art TTA methods under the presence of noisy samples and achieves comparable accuracy to those methods without noisy samples. The source code is available at [https://github.com/taeckyung/SoTTA](https://github.com/taeckyung/SoTTA).
## 1 Introduction
Deep learning has achieved remarkable performance in various domains [6; 8; 33], but its effectiveness is often limited when the test and training data distributions are misaligned. This phenomenon, known as domain shift [31], is prevalent in real-world scenarios where unexpected environmental changes and noises result in poor model performance. For instance, in autonomous driving, weather conditions can change rapidly. To address this challenge, Test-Time Adaptation (TTA) [1; 5; 29; 38; 39; 44] has emerged as a promising paradigm that aims to improve the generalization ability of deep learning models by adapting them to test samples, without requiring further data collection or labeling costs.
While TTA has been acknowledged as a promising method for enhancing the robustness of machine learning models against domain shifts, the evaluation of TTA frequently relies on the assumption that the test stream contains only benign test samples of interest. However, test data can be unexpectedly diverse in real-world settings, containing not only relevant data but also extraneous elements that are outside the model's scope, which we refer to _noisy_ samples (Figure 1). For instance, unexpected noises can be introduced in autonomous driving scenarios, such as dust on the camera or adversarial samples by malicious users. As shown in Figure 2, we found that most of the prior TTA algorithms showed significantly degraded accuracy with the presence of noisy samples (e.g., 81.0% \(\rightarrow\) 52.1% in TENT [38] and 82.2% \(\rightarrow\) 54.8% in CoTTA [39]).
To ensure the robustness of TTA against noisy samples, an intuitive solution might be screening out noisy samples from the test stream. Out-of-distribution (OOD) detection [10; 11; 18; 19; 20; 21; 24; 25; 43] is a representative method for this, as it tries to detect whether a sample is drawn from the same distribution as the training data or not. Similarly, open-set domain adaptation (OSDA) [30; 35] and universal domain adaptation (UDA) [34; 42] generalize the adaptation scenario by assuming that unknown classes are present in test data that are not in training data. However, these methods require access to a whole batch of training data and unlabeled target data, which do not often comply with TTA settings where the model has no access to train data at test time due to privacy issues [38] and storing a large batch of data is often infeasible due to resource constraints [12]. Therefore, how to make online TTA robust under practical noisy settings is still an open question.
In this paper, we propose Screening-out Test-Time Adaptation (SoTTA) that is robust to noisy samples. SoTTA achieves robustness to noisy samples in two perspectives: (i) _input-wise_ robustness and (ii) _parameter-wise_ robustness aims to filter out noisy samples so that the model will be trained only with benign samples. We achieve this goal via High-confidence Uniform-class Sampling (HUS) that avoids selecting noisy samples when updating the model (Section 3.1). Parameter-wise robustness pursues updating the model weights in a way that prevents model drifting due to large gradients caused by noisy samples. We achieve this via entropy-sharpness minimization (ESM) that makes the loss landscape smoother and parameters resilient to weight perturbation caused by noisy samples (Section 3.2).
We evaluate SoTTA with three common TTA benchmarks (CIFAR10-C, CIFAR100-C, and ImageNet-C [9]) under four noisy scenarios with different levels of distributional shifts: Near, Far, Attack, and Noise (Section 2). We compare SoTTA with eight state-of-the-art TTA algorithms [1; 17; 27; 28; 29; 38; 39; 44], including the latest studies that address temporal distribution changes in TTA [1; 28; 29; 39; 44]. SoTTA showed its effectiveness with the presence of noisy samples. For instance, in CIFAR10-C, SoTTA achieved 80.0% accuracy under the strongest shift case (Noise), which is a 22.3%p improvement via TTA and 6.4%p better than the best baseline [44]. In addition, SoTTA achieves comparable performance to state-of-the-art TTA algorithms without noisy samples, e.g., showing 82.2% accuracy when the best baseline's accuracy is 82.4% in CIFAR10-C.
**Contributions.** (i) We highlight that test sample diversity in real-world scenarios is an important problem but has not been investigated yet in the literature. We found that most existing TTA algorithms undergo significant performance degradation with sample diversity. (ii) As a solution, we propose SoTTA that is robust to noisy samples by achieving input-wise and parameter-wise robustness. (iii) Our evaluation with three TTA benchmarks (CIFAR10-C, CIFAR100-C, and ImageNet-C) show that SoTTA outperforms the existing baselines.
Figure 1: Unlike prior assumptions (Clean TTA), real-world test streams could include unexpected noisy samples out of the model’s scope (Noisy TTA), such as glare, fallen leaf covering the lens, unseen objects (e.g., a flamingo), and noise in autonomous driving scenarios. The accuracy of existing TTA methods degrades in such cases.
## 2 Preliminaries
**Test-time adaptation.** Let \(\mathcal{D}_{\mathcal{S}}=\{\mathcal{X}^{\mathcal{S}},\mathcal{Y}\}\) be source data and \((\mathbf{x}_{i},y_{i})\in\mathcal{X}^{\mathcal{S}}\times\mathcal{Y}\) be each instance and the label pair that follows a probability distribution of the source data \(P_{\mathcal{S}}(\mathbf{x},y)\). Similarly, let \(\mathcal{D}_{\mathcal{T}}=\{\mathcal{X}^{\mathcal{T}},\mathcal{Y}\}\) be target data and \((\mathbf{x}_{j},y_{j})\in\mathcal{X}^{\mathcal{T}}\times\mathcal{Y}\) be each target instance and the label pair following a target probability distribution \(P_{\mathcal{T}}(\mathbf{x},y)\), where \(y_{j}\) is usually unknown to the learning algorithm. The covariate shift assumption [31] is given between source and target data distributions, which is defined as \(P_{\mathcal{S}}(\mathbf{x})\neq P_{\mathcal{T}}(\mathbf{x})\) and \(P_{\mathcal{S}}(y|\mathbf{x})=P_{\mathcal{T}}(y|\mathbf{x})\). Given an off-the-shelf model \(f(\cdot;\Theta)\) pre-trained from \(\mathcal{D}_{\mathcal{S}}\), the (fully) test-time adaptation (TTA) [38] aims to adapt \(f(\cdot;\Theta)\) to the target distribution \(P_{\mathcal{T}}\) utilizing only \(\mathbf{x}_{j}\) given test time.
**Noisy test samples.** We define _noisy_ test samples to represent any samples that are not included in the target data distribution, i.e., \(\mathbf{\tilde{x}}\notin\mathcal{X}^{\mathcal{T}}\). We use the term noisy to distinguish it from out-of-distribution (OOD), as TTA typically aims to adapt to OOD samples, such as corrupted ones. Theoretically, there could be numerous categories of noisy samples. In this study, we consider five scenarios: Benign, Near, Far2, Attack, and Noise. Figure 3 shows examples of these scenarios. Benign is the typical setting of TTA studies without noisy samples, Near represents a semantic shift [41] from target distribution, Far is a severe shift where covariate shift is evident [41], Attack refers to intelligently generated adversarial attack with perturbation [40], and Noise refers to random noise. We focus on these scenarios to understand the impact of noisy samples in TTA as well as the simplicity of analysis. Detailed settings are described in Section 4.
Footnote 2: We borrowed the term Near and Far from the OOD detection benchmark [41].
## 3 Methodology
**Problem and challenges.** Prior TTA methods assume test samples are benign and blindly adapt to incoming batches of test samples. The presence of noisy samples during test time can significantly degrade their performance for those TTA algorithms, which has not been explored in the literature yet. Dealing with noisy samples in TTA scenarios is particularly challenging as (i) TTA has no access to the source data, (ii) no labels are given for target test data, and (iii) the model is continually adapted and thus a desirable solution should apply to varying models. This makes it difficult to apply existing solutions that deal with a similar problem. For instance, out-of-distribution (OOD) detection studies [11; 19; 20; 43] are built on the assumption that a model is fixed at test time, and open-set domain adaptation (OSDA) methods [30; 35; 42] require labeled source and unlabeled target data for training.
Figure 4: Overview of SoTTA. SoTTA achieves _input-wise robustness_ via high-confidence uniform-class sampling (HUS) and _parameter-wise robustness_ via entropy-sharpness minimization (ESM).
Figure 3: Five test sample scenarios considered in this work: Benign, Near, Far, Attack, and Noise.
**Methodology overview.** To address the problem, we propose Screening-out Test-Time Adaptation (SoTTA), whose overview is described in Figure 4. SoTTA achieves robustness to noisy samples in two perspectives: (i) _input-wise_ robustness via high-confidence uniform-class sampling that avoids selecting noisy samples when updating the model (Section 3.1), and (ii) _parameter-wise_ robustness via entropy-sharpness minimization that makes parameters resilient to weight perturbation caused by noisy samples (Section 3.2).
### Input-wise robustness via high-confidence uniform-class sampling
Our first approach is to ensure input-wise robustness to noisy samples by filtering out them when selecting samples for adaptation. As locating noisy samples without their labels is challenging, our idea is based on the empirical observation of the model predictions with respect to noisy samples.
**Observation.** Our hypothesis is that noisy samples have distinguished properties from benign samples due to the distributional shifts, and this could be observable via the models' prediction outputs. We investigate two types of features that work as proxies for identifying benign samples: (i) confidence of the samples and (ii) predicted class distributions. Specifically, we compared the distribution of the softmax confidence (Figure 4(a)) and the predicted class distribution (Figure 4(b)) of benign samples with noisy samples. First, the confidence of the samples is relatively lower than that of benign samples. The more severe the distribution shift, the lower the confidence (e.g., Far is less confidence than Near), which is also in line with findings of previous studies that pre-trained models show higher confidence on target distribution than out-of-distribution data [10; 21]. Second, we found that noisy samples are often skewed in terms of predictions, and this phenomenon is prominent in more severe shifts (e.g., Noise), except for Attack, whose objective is to make the model fail to correctly classify. These skewed distributions could lead to an undesirable bias in \(p(y)\) and thus might negatively impact the TTA objective, such as entropy minimization [38].
```
0: test data stream \(\mathbf{x}_{t}\), memory \(M\) with capacity \(N\) for test time \(t\in\{1,\cdots,T\}\)do \(\hat{y}_{t}\gets f(\mathbf{x};\Theta)\) if\(C(\mathbf{x};\Theta)>C_{0}\)then\(\triangleright\) Sampling confident data if\(|M|<N\)then Add \((\mathbf{x}_{t},\hat{y}_{t})\) to \(M\) else \(\mathcal{Y}^{*}\leftarrow\) the most prevalent class(es) in \(M\) if\(\hat{y}_{t}\notin\mathcal{Y}^{*}\)then Randomly discard \((\mathbf{x}_{i},\hat{y}_{i})\) from \(M\) where \(\hat{y}_{i}\in\mathcal{Y}^{*}\) else Randomly discard \((\mathbf{x}_{i},\hat{y}_{i})\) from \(M\) where \(\hat{y}_{i}=\hat{y}_{t}\) Add \((\mathbf{x}_{t},\hat{y}_{t})\) to \(M\)
```
**Algorithm 1** High-confidence Uniform-class Sampling (HUS)
**Solution.** Based on the aforementioned empirical analysis, we propose High-confidence Uniform-class Sampling (HUS) that avoids using noisy samples for adaptation by utilizing a small memory. We maintain confident samples while balancing their predicted classes in the memory. The selected samples in the memory are then used for adaptation. We describe the procedure as a pseudo-code code in Algorithm 1. Given a target test sample \(\mathbf{x}\), HUS measures its confidence. Specifically, we define the confidence \(C(\mathbf{x};\Theta)\) of each test sample \(\mathbf{x}\) as:
\[C(\mathbf{x};\Theta)=\max_{i=1\cdots n}\left(\frac{e^{\hat{y}_{i}}}{\sum_{j=1 }^{n}e^{\hat{y}_{j}}}\right)\text{ where }\hat{y}=f(\mathbf{x};\Theta). \tag{1}\]
Figure 5: Model predictions on benign (CIFAR10-C) and noisy samples.
We store the sample if its confidence is higher than the predefined threshold \(C_{0}\). In this way, we maintain only high-confidence samples in the memory used for adaptation and thus reduce the impact of low-confidence noisy samples.
Furthermore, while storing data in the memory, we balance classes among them. Specifically, if the predicted class of the current test sample is not in the most prevalent class in the memory, then HUS randomly replaces one random sample in the most prevalent class with the new sample. Otherwise, if the current sample belongs to the most prevalent class in the memory, HUS replaces one random sample in the same class with the current one. We can maintain classes uniformly with this strategy, which is effective for not only filtering out noisy samples but also removing class biases among samples when used for adaptation, which we found is beneficial for TTA. We found these two memory management strategies not only effectively reduce the impact of noisy samples for adaptation but also improve the model performance in benign-only cases by avoiding model drifting due to biased and low-confidence samples (Section 4).
With the stored samples in the memory, we update the normalization statistics and affine parameters in batch normalization (BN) [13] layers, following the prior TTA methods [29, 38, 44]. This is known as not only computationally efficient but also showed comparable performance improvement to updating the whole layers. While we aim to avoid using noisy samples for adaptation, a few noisy samples could still be stored in the memory, e.g., when they are similar to benign samples or outliers. To be robust to temporal variances of the samples in the memory, we take the exponential moving average to update the BN statistics (means \(\mu\) and variances \(\sigma^{2}\)) instead of directly using the statistics from the samples in the memory. Specifically, we update the means and variances of BN layers as: (i) \(\hat{\mu}_{t}=(1-m)\hat{\mu}_{t-1}+m\mu_{t}\) and (ii) \(\hat{\sigma}_{t}^{2}=(1-m)\hat{\sigma}_{t-1}^{2}+m\sigma_{t}^{2}\), where \(m\in[0,1]\) is a momentum hyperparameter. We describe updating the affine parameters in Section 3.2.
### Parameter-wise robustness via entropy-sharpness minimization
Our second approach is to secure parameter-wise robustness to noisy samples by training the model in a way that is robust to noisy samples. Our idea is based on the observation that the parameter update is often corrupted with noisy samples, and this could be mitigated by smoothing the loss landscape with respect to parameters.
**Observation.** While most existing TTA algorithms utilize the entropy minimization loss [5, 38, 44], it could drift the model with high gradient samples [29]. We observed that adaptation with noisy samples often hinders the model from adapting to benign samples. Figure 6 shows an example of this phenomenon. Specifically, test samples consist of 10k benign samples and 10k noisy samples (Noise), which are randomly shuffled. We computed the cumulative accuracy for the benign test data and the gradient norm of noisy samples at each step. As shown in Figure 5(a), the gradient norm of noisy samples dropped rapidly, indicating that the model is gradually adapting to these noise data. This leads to a significant accuracy drop for benign samples. The key question here is how to prevent the model from overfitting to noisy samples. Figure 5(b) shows the result with entropy-sharpness minimization (ESM) that we explain in the following paragraph. With ESM, the gradient norm of noise data remains high, and the accuracy for benign samples improved after adaptation as intended.
**Solution.** To make the model parameters robust to adaptation with noisy samples, the entropy loss landscape should be smoother so that the model becomes resilient to unexpected model drift due to noisy samples. To that end, we jointly minimize the naive entropy loss and the sharpness of the entropy loss and thus make the loss landscape robust to model weight perturbations by large gradients from noisy samples. Specifically, we replace the naive entropy minimization \(E\) with the
Figure 6: Cumulative accuracy (%) of benign samples (CIFAR10-C) and gradient norm of noisy samples (Noise) as the adaptation proceeds. The dotted line refers to the source model accuracy.
entropy-sharpness minimization (ESM) as:
\[\min_{\Theta}E_{S}(\mathbf{x},\Theta)=\min_{\Theta}\max_{||t||\leq\rho}E(\mathbf{x };\Theta+\epsilon), \tag{2}\]
where the entropy-sharpness \(E_{S}(\mathbf{x},\Theta)\) is defined as the maximum objective around the weight perturbation with L2-norm constraint \(\rho\). To tackle this joint optimization problem, we follow sharpness-aware minimization [4] similar to [29], which originally aims to improve the generalizability of models over standard optimization algorithms such as stochastic gradient descent (SGD). We repurpose this optimization to make the model robust to noisy samples in TTA scenarios.
Specifically, assuming \(\rho\ll 1\), the optimization can be approximated via a first-order Taylor expansion:
\[\epsilon^{*}(\Theta)\triangleq\operatorname*{arg\,max}_{||\epsilon||2\leq \rho}E(\mathbf{x};\Theta+\epsilon)\approx\operatorname*{arg\,max}_{||\epsilon|| 2\leq\rho}\epsilon^{T}\nabla_{\Theta}E(\mathbf{x};\Theta). \tag{3}\]
The solution for this approximation problem is given by the classical dual norm problem:
\[\hat{\epsilon}(\Theta)=\rho\ \text{sign}(\nabla_{\Theta}E(\mathbf{x};\Theta)) \ |\nabla_{\Theta}E(\mathbf{x};\Theta)|^{q-1}\big{/}\left(||\nabla_{\Theta}E( \mathbf{x};\Theta)||_{q}^{q}\right)^{1/p}, \tag{4}\]
where \(1/p+1/q=1\). We set \(p=2\) for further implementation, following the suggestion [4]. By substituting \(\hat{\epsilon}(\Theta)\) to the original entropy-sharpness minimization problem, the final gradient approximation is:
\[\nabla_{\Theta}E_{S}(\mathbf{x},\Theta)\approx\nabla_{\Theta}E(\mathbf{x}, \Theta)|_{\Theta+\hat{\epsilon}(\Theta)}. \tag{5}\]
In summary, we calculate the entropy-sharpness minimization objective via two steps. First, at time step \(t\), it calculates the \(\hat{\epsilon}(\Theta_{t})\) with previous parameters and entropy loss. It generates the temporal model with the new parameters: \(\Theta_{t}+\hat{\epsilon}(\Theta_{t})\). Second, based on the temporary model, the second step updates the original model's parameters with the approximation in Equation 5. By putting it together with HUS (Section 3.1), parameters are updated by:
\[\Theta_{t}=\Theta_{t-t_{0}}-\eta\nabla_{\Theta}E(\mathbf{x},\Theta)|_{\mathbf{ x}=\text{HUS}_{C_{0}}(t),\ \Theta=\Theta_{t-t_{0}}+\hat{\epsilon}(\Theta_{t-t_{0}})}, \tag{6}\]
where \(\eta\) is the step size and \(t_{0}\) is the model adaptation interval. For simplicity, we set the model adaptation interval the same as the memory capacity, i.e., \(t_{0}=N\). In the experiments, we use \(N=64\), which is one of the most widely-used batch sizes in TTA methods [36; 38; 44].
## 4 Experiments
This section describes our experimental setup and demonstrates the results in various settings. Please refer to Appendix A and B for further details.
**Scenario.** To mimic the presence of noisy samples in addition to the original target test samples, we injected various noisy datasets into target datasets and randomly shuffled them, which we detail in the following paragraphs. We report the classification accuracy of the original target samples to measure the performance of the model in the presence of noisy samples. We ran all experiments with three random seeds (0, 1, 2) and reported the average accuracy and standard deviations in Appendix B.
**Target datasets.** We used three standard TTA benchmarks: **CIFAR10-C**, **CIFAR100-C**, and **ImageNet-C**[9] as our target datasets. All datasets contain 15 different types and five levels of corruption, where we use the most severe corruption level of five. For each corruption type, the CIFAR10-C/CIFAR100-C dataset has 10,000 test data with 10/100 classes, and the ImageNet-C dataset has 50,000 test data with 1000 classes. We use ResNet18 [8] as the backbone network. We pre-trained the model for CIFAR10 with training data and used the TorchVision [23] pre-trained model for ImageNet.
**Noisy datasets.** Besides the target datasets (**Benign**) mentioned above, we consider four noisy scenarios (Figure 3): CIFAR100 [15]/ImageNet [3] (**Near**), MNIST [26] (**Far**), adversarial attack (**Attack**), and uniform random noise (**Noise**). As both CIFAR10-C and ImageNet-C have real-world images for object recognition tasks, CIFAR100 would be a near dataset to them. For CIFAR100-C, we select ImageNet [3] for a near dataset. We select MNIST as a far dataset because they are not real-world images. We referred to the OOD benchmark [41] for choosing the term Near and Far and the datasets. For the adversarial attack, we adopt the Distribution Invading Attack (DIA), which is an adversarial attack algorithm designed for TTA [40]. We inject the small perturbations to malicious samples to increase the overall error rate on benign data in the same batch. For the uniform random noise, we generate pixel-wise uniform-random images with the same size as the target images. We set the number of noisy samples equal to each target dataset as default, and we also investigated the impact of the number of noisy samples in the following ablation study (Figure 7). In cases where the number of noisy samples is different from the target datasets, we randomly resampled them. To ensure that the learning algorithm does not know the information of noisy samples beforehand, the pixel values of all noisy images are normalized with respect to the target dataset.
**Baselines.** We consider various state-of-the-art TTA methods as baselines. **Source** evaluates the pre-trained model directly on the target data without adaptation. Test-time batch normalization (**BN stats**) [27] updates the BN statistics from the test batch. Pseudo-Label (**PL**) [17] optimizes the trainable BN parameters via pseudo labels. Test entropy minimization (**TENT**) [38] updates the BN parameters via entropy minimization.
We also consider the latest TTA algorithms that improve robustness to temporal distribution changes in test streams to understand their performance to noisy samples. Laplacian adjusted maximum-likelihood estimation (**LAME**) [1] modifies the classifier output probability without modifying model internal parameters. Continual test-time adaptation (**CoTTA**) [39] uses weight-averaged and
augmentation-averaged predictions and avoids catastrophic forgetting by stochastically restoring a part of the model. Efficient anti-forgetting test-time adaptation (**EATA**) [28] uses entropy and diversity weight with Fisher regularization to prevent forgetting. Sharpness-aware and reliable optimization (**SAR**) [29] removes high-entropy samples and optimizes entropy with sharpness minimization [4]. Robust test-time adaptation (**RoTTA**) [44] utilizes the teacher-student model to stabilize while selecting the data with category-balanced sampling with timeliness and uncertainty.
**Hyperparameters.** We adopt the hyperparameters of the baselines from the original paper or official codes. We use the test batch size of 64 in all methods for a fair comparison. Accordingly, we set the memory size to 64 and adapted the model for every 64 samples for our method and RoTTA [44]. We conduct TTA in an online manner. We used a fixed hyperparameter of BN momentum \(m=0.2\) and updated the BN affine parameters via the Adam optimizer [14] with a fixed learning rate of \(l=0.001\) and a single adaptation epoch. The confidence threshold \(C_{0}\) is set to 0.99 for CIFAR10-C, 0.66 for CIFAR100-C, and 0.33 for ImageNet-C. We set the sharpness threshold \(\rho=0.05\) as previous works [4, 29]. We specify further details of the hyperparameters in Appendix A.
**Overall result.** Table 1 shows the result on CIFAR10-C for 15 types of corruptions under five noisy scenarios described in Figure 3. We observed significant performance degradation in most of the baselines under the noisy settings. In addition, the extent of accuracy degradation from the Benign case is more prominent in more severe shift cases (e.g., the degree of degradation is generally Near < Far < Attack < Noise). Popular TTA baselines (BN stats, PL, and TENT) might fail due to updating with noisy samples. State-of-the-art TTA baselines that address temporal distribution shifts in TTA (LAME, CoTTA, EATA, SAR, and RoTTA) still suffered from noisy scenarios, as they are not designed to deal with unexpected samples. On the other hand, SoTTA showed its robustness across different scenarios. This validates the effectiveness of our approaches to ensure both input-wise and parameter-wise robustness. Interestingly, we found that SoTTA showed comparable performance to state-of-the-art baselines in the Benign case as well. Our interpretation of this result is two-fold. First, our high-confidence uniform-class sampling strategy filters not only noisy samples but also benign samples that would negatively impact the algorithm's objective. This implies that there exist samples that are more beneficial for adaptation, which aligns with the findings that high-entropy samples
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & Benign & Near & Far & Attack & Noise & Avg. \\ \hline Source & \(33.2\pm 0.4\) & \(33.2\pm 0.4\) & \(33.2\pm 0.4\) & \(33.2\pm 0.4\) & \(33.2\pm 0.4\) & \(33.2\pm 0.4\) \\ BN Stats [27] & \(53.7\pm 0.2\) & \(50.8\pm 0.1\) & \(46.8\pm 0.1\) & \(29.2\pm 0.4\) & \(28.3\pm 0.3\) & \(41.8\pm 0.1\) \\ PL [17] & \(56.6\pm 0.2\) & \(48.0\pm 0.3\) & \(42.8\pm 0.7\) & \(39.0\pm 0.4\) & \(23.8\pm 0.6\) & \(42.1\pm 0.3\) \\ TENT [38] & \(59.5\pm 0.0\) & \(46.4\pm 1.4\) & \(40.0\pm 1.3\) & \(31.9\pm 0.7\) & \(20.0\pm 0.9\) & \(39.5\pm 0.7\) \\ LAME [1] & \(31.0\pm 0.5\) & \(31.5\pm 0.5\) & \(30.8\pm 0.7\) & \(31.0\pm 0.6\) & \(31.1\pm 0.7\) & \(31.1\pm 0.6\) \\ CoTTA [39] & \(55.8\pm 0.4\) & \(50.0\pm 0.3\) & \(42.4\pm 0.4\) & \(37.2\pm 0.2\) & \(27.3\pm 0.3\) & \(42.6\pm 0.2\) \\ EATA [28] & \(23.5\pm 1.9\) & \(6.1\pm 0.3\) & \(4.8\pm 0.5\) & \(3.7\pm 0.6\) & \(2.4\pm 0.2\) & \(8.1\pm 0.3\) \\ SAR [29] & \(57.3\pm 0.3\) & \(55.4\pm 0.1\) & \(51.2\pm 0.1\) & \(34.4\pm 0.3\) & \(38.1\pm 1.2\) & \(47.3\pm 0.3\) \\ RoTTA [44] & \(48.7\pm 0.6\) & \(49.4\pm 0.5\) & \(49.8\pm 0.9\) & \(51.5\pm 0.4\) & \(48.3\pm 0.5\) & \(49.6\pm 0.6\) \\ SoTTA & \(\mathbf{60.5\pm 0.0}\) & \(\mathbf{57.1\pm 0.2}\) & \(\mathbf{59.0\pm 0.4}\) & \(\mathbf{61.9\pm 0.8}\) & \(\mathbf{58.6\pm 1.0}\) & \(\mathbf{59.4\pm 0.3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification accuracy (%) on CIFAR100-C. **Bold** numbers are the highest accuracy. Averaged over three different random seeds for 15 types of corruption.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & Benign & Near & Far & Attack & Noise & Avg. \\ \hline Source & \(14.6\pm 0.0\) & \(14.6\pm 0.0\) & \(14.6\pm 0.0\) & \(14.6\pm 0.0\) & \(14.6\pm 0.0\) & \(14.6\pm 0.0\) \\ BN Stats [27] & \(27.1\pm 0.0\) & \(18.9\pm 0.1\) & \(14.8\pm 0.0\) & \(17.4\pm 0.8\) & \(12.8\pm 0.0\) & \(18.2\pm 0.1\) \\ PL [17] & \(30.5\pm 0.1\) & \(6.9\pm 0.0\) & \(5.1\pm 0.2\) & \(18.1\pm 1.3\) & \(3.4\pm 0.6\) & \(12.8\pm 0.2\) \\ TENT [38] & \(27.1\pm 0.0\) & \(18.9\pm 0.1\) & \(14.8\pm 0.0\) & \(17.4\pm 0.8\) & \(12.8\pm 0.0\) & \(18.2\pm 0.1\) \\ LAME [1] & \(14.4\pm 0.0\) & \(14.4\pm 0.1\) & \(14.4\pm 0.0\) & \(14.0\pm 0.6\) & \(14.3\pm 0.0\) & \(14.3\pm 0.1\) \\ CoTTA [39] & \(32.2\pm 0.1\) & \(23.3\pm 0.2\) & \(17.6\pm 0.2\) & \(28.3\pm 1.3\) & \(16.0\pm 0.9\) & \(23.4\pm 0.2\) \\ EATA [28] & \(38.0\pm 0.1\) & \(25.6\pm 0.4\) & \(23.1\pm 0.1\) & \(26.1\pm 0.1\) & \(20.7\pm 0.2\) & \(26.7\pm 0.0\) \\ SAR [29] & \(36.1\pm 0.1\) & \(27.6\pm 0.3\) & \(23.5\pm 0.4\) & \(26.8\pm 1.0\) & \(22.0\pm 0.4\) & \(27.2\pm 0.2\) \\ RoTTA [44] & \(29.7\pm 0.0\) & \(25.6\pm 0.4\) & \(29.2\pm 0.2\) & \(32.0\pm 1.2\) & \(31.2\pm 0.2\) & \(29.5\pm 0.3\) \\ SoTTA & \(\mathbf{39.8\pm 0.0}\) & \(\mathbf{27.9\pm 0.3}\) & \(\mathbf{36.1\pm 0.1}\) & \(\mathbf{41.1\pm 0.1}\) & \(\mathbf{39.0\pm 0.1}\) & \(\mathbf{36.8\pm 0.0}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification accuracy (%) on ImageNet-C. **Bold** numbers are the highest accuracy. Averaged over three different random seeds for 15 types of corruption.
harm adaptation performance [29]. Second, entropy-sharpness minimization helps ensure both the robustness to noisy samples and the generalizability of the model by preventing model drifts from large gradients, leading to performance improvement with benign samples. We found similar patterns for the CIFAR100-C (Table 2) and ImageNet-C (Table 3). More details are in Appendix B.
**Impact of individual components of SoTTA.** We conducted an ablative study to further investigate the effectiveness of SoTTA's individual components. Table 4 shows the result of the ablation study for CIFAR10-C. For input-wise robustness, **HC** refers to high-confidence sampling, and **UC** refers to uniform-class sampling. The two strategies are integrated into our high-confidence uniform-class sampling (**HUS**). **ESM** is our entropy-sharpness minimization for parameter-wise robustness. Note that we utilized FIFO memory with the same size for the UC case without HC and the native entropy minimization where we did not utilize ESM. Overall, the accuracy is improved as we sequentially added each approach of SoTTA. This validates our claim that ensuring both input-wise and parameter-wise robustness via HUS and ESM is a synergetic strategy to combat noisy samples in TTA.
**Impact of the number of noisy samples.** We also investigate the effect of the size of noisy samples on TTA algorithms. Specifically, for the CIFAR10-C dataset with the Noise case, we varied the noise samples from 5,000 (5k) to 20,000 (20k) while fixing the size of benign samples as 10,000. Figure 7 shows the result. We observe that the accuracy of most baselines tends to deteriorate with a larger number of noisy samples and is sometimes even worse than without adaptation (Source). SoTTA showed its resilience to the size of noisy samples with 1.9%p degradation from 5k to 20k samples. RoTTA also showed its robustness to noisy samples to some extent, but the performance gain is around 6.4%p lower than SoTTA.
## 5 Related work
**Test-time adaptation.** While test-time adaptation (TTA) attempts to optimize model parameters with unlabeled test data streams, it lacks consideration of spoiling of the sample itself; i.e., it inevitably adapts to potential outlier samples mixed in the stream, such as adversarial data or mere noise. Most existing TTA algorithms directly optimize the model with incoming sample data. Test-time normalization [27, 36] updates the batch norm statistics in test-time to minimize the expected loss. TENT [38] minimizes the entropy of the model's predictions on a batch of test data.On the one hand, recent studies promote the robustness of the model, yet the consideration is limited to temporal distribution shifts of test data [1, 4, 5, 28, 29, 39, 44]. For instance, CoTTA [39] tries to adapt the model in a continually changing target environment via self-training and stochastical restoring. NOTE [5] proposes instance-aware batch normalization combined with prediction-balanced reservoir sampling to ensure model robustness towards temporally correlated data stream, which requires retraining with source data. We provide a method-wise comparison with EATA [28], SAR [29], and RoTTA [44] in Appendix D.2. To conclude, while existing TTA methods seek the robustness of the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & Benign & Near & Far & Attack & Noise & Avg. \\ \hline Source & 57.7 ± 1.0 & 57.7 ± 1.0 & 57.7 ± 1.0 & 57.7 ± 1.0 & 57.7 ± 1.0 & 57.7 ± 1.0 & 57.7 ± 1.0 \\ HC & 34.9 ± 4.8 & 13.6 ± 0.3 & 17.6 ± 3.8 & 16.9 ± 1.6 & 16.8 ± 0.2 & 20.0 ± 2.0 \\ UC & 66.4 ± 3.0 & 62.1 ± 0.8 & 56.5 ± 2.0 & 70.0 ± 3.9 & 59.5 ± 3.0 & 62.9 ± 0.7 \\ HC + UC (HUS) & 69.8 ± 1.1 & 61.7 ± 1.3 & 58.4 ± 0.5 & 40.9 ± 5.5 & 58.9 ± 2.6 & 57.9 ± 0.8 \\ ESM & **82.6 ± 0.2** & 77.9 ± 0.4 & 72.8 ± 0.7 & 83.4 ± 0.2 & 60.5 ± 1.8 & 75.4 ± 0.5 \\ HC + ESM & 82.3 ± 0.2 & 80.9 ± 0.6 & 74.9 ± 2.4 & 83.5 ± 0.2 & 68.7 ± 7.0 & 78.0 ± 2.0 \\ UC + ESM & 82.2 ± 0.2 & 78.0 ± 0.4 & 75.9 ± 0.5 & 84.3 ± 0.1 & 77.7 ± 0.7 & 79.6 ± 0.2 \\ HUS + ESM (SoTTA) & 82.2 ± 0.3 & **81.4 ± 0.5** & **81.6 ± 0.6** & **84.5 ± 0.2** & **80.0 ± 1.4** & **81.9 ± 0.5** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Classification accuracy (%) and corresponding standard deviation of varying ablative settings in SoTTA on CIFAR10-C. **Bold** numbers are the highest accuracy. Averaged over three different random seeds for 15 types of corruption.
Figure 7: Classification accuracy (%) varying the size of noisy samples on CIFAR10-C under Noise.
model to temporal distribution shifts, they do not consider scenarios where noisy samples appear in test streams.
**Out-of-distribution detection.** Out-of-distribution (OOD) detection [10; 11; 18; 19; 20; 21; 24; 25; 43] aims to ensure the robustness of the model by identifying when a given data falls outside the training distribution. A representative method is a thresholding approach [10; 19; 20; 21], that defines a scoring function given an input and pretrained classifier. The sample is detected as OOD data if the output of the scoring function is higher than a threshold. Importantly, OOD detection studies are built on the condition that training and test domains are the same, which differs from TIA's scenario. Furthermore, OOD detection assumes that a model is fixed during test time, while a model changes continually in TTA. These collectively make it difficult to apply OOD detection studies directly to TTA scenarios.
**Open-set domain adaptation.** Open-set Domain Adaptation (OSDA) assumes that a target domain contains unknown classes not discovered in a source domain [30; 35] in domain adaptation scenarios. These methods aim to learn a mapping function between the source and target domains. While the target scenario that a model could encounter unknown classes of data in the test time is similar to our objective, these methods do not fit into TTA as it assumes both labeled source and unlabeled target data are available in the training time. Universal domain adaptation (UDA) [34; 42] further generalizes the assumption of OSDA by allowing unknown classes to present in both the source and the target domains. However, the same problem still remains as it requires labeled source and unlabeled target data in training time, which do not often comply with TTA settings where the model has no access to train data at test time due to privacy issues [38].
## 6 Discussion and conclusion
We investigate the problem of having noisy samples and the performance degradations caused by those samples in existing TTA methods. To address this issue, we propose SoTTA that is robust to noisy samples by high-confidence uniform-class sampling and entropy-sharpness minimization. Our evaluation with four noisy scenarios reveals that SoTTA outperforms state-of-the-art TTA methods in those scenarios. We believe that the takeaways from this study are a meaningful stride towards practical advances in overcoming domain shifts in test time.
**Limitations and future work.** While we focus on four noisy test stream scenarios, real-world test streams might have other types of sample diversities that are not considered in this work. Furthermore, recent TTA algorithms consider various temporal distribution shifts, such as temporally-correlated streams [5] and domain changes [39]. Towards developing a TTA algorithm robust to any test streams in the wild, more comprehensive and realistic considerations should be taken into account, which we believe is a meaningful future direction.
**Potential negative societal impacts.** As TTA requires continual computations for every test sample for adaptation, environmental concerns might be raised such as carbon emissions [37]. Recent studies such as memory-economic TTA [12] might be an effective way to mitigate this problem.
## Acknowledgments and Disclosure of Funding
This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00495, On-Device Voice Phishing Call Detection).
## References
* [1] Malik Boudiaf, Romain Mueller, Ismail Ben Ayed, and Luca Bertinetto. Parameter-free online test-time adaptation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 8344-8353, June 2022.
* [2] Dhanajit Brahma and Piyush Rai. A probabilistic framework for lifelong test-time adaptation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 3582-3591, June 2023.
* [3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_, pages 248-255, 2009.
* [4] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In _International Conference on Learning Representations_, 2021.
* [5] Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, and Sung-Ju Lee. NOTE: Robust continual test-time adaptation against temporal correlation. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, _Advances in Neural Information Processing Systems_, 2022.
* [6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. _Commun. ACM_, 63(11):139-144, oct 2020.
* [7] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In L. Saul, Y. Weiss, and L. Bottou, editors, _Advances in Neural Information Processing Systems_, volume 17. MIT Press, 2004.
* [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2016.
* [9] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In _International Conference on Learning Representations_, 2019.
* [10] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In _International Conference on Learning Representations_, 2017.
* [11] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. _Proceedings of the International Conference on Learning Representations_, 2019.
* [12] Junyuan Hong, Lingjuan Lyu, Jiayu Zhou, and Michael Spranger. MECTA: Memory-economic continual test-time model adaptation. In _The Eleventh International Conference on Learning Representations_, 2023.
* [13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis Bach and David Blei, editors, _Proceedings of the 32nd International Conference on Machine Learning_, volume 37 of _Proceedings of Machine Learning Research_, pages 448-456, Lille, France, 07-09 Jul 2015. PMLR.
* [14] Diederick P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _International Conference on Learning Representations (ICLR)_, 2015.
* [15] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. _Master's thesis, Department of Computer Science, University of Toronto_, 2009.
* [16] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_, 86(11):2278-2324, 1998.
* [17] Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In _Workshop on challenges in representation learning, ICML_, page 896, 2013.
* [18] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In _International Conference on Learning Representations_, 2018.
* [19] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 31. Curran Associates, Inc., 2018.
* [20] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In _International Conference on Learning Representations_, 2018.
* [21] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 21464-21475. Curran Associates, Inc., 2020.
* [22] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In _International Conference on Learning Representations (ICLR)_, 2017.
* [23] TorchVision maintainers and contributors. Torchvision: Pytorch's computer vision library. [https://github.com/pytorch/vision](https://github.com/pytorch/vision), 2016.
* [24] Yifei Ming, Ying Fan, and Yixuan Li. POEM: Out-of-distribution detection with posterior sampling. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 15650-15665. PMLR, 17-23 Jul 2022.
* [25] Sina Mohseni, Mandar Pitale, JBS Yadawa, and Zhangyang Wang. Self-supervised learning for generalizable out-of-distribution detection. _Proceedings of the AAAI Conference on Artificial Intelligence_, 34(04):5216-5223, Apr. 2020.
* [26] Norman Mu and Justin Gilmer. Mnist-c: A robustness benchmark for computer vision. _arXiv preprint arXiv:1906.02337_, 2019.
* [27] Zachary Nado, Shreyas Padhy, D Sculley, Alexander D'Amour, Balaji Lakshminarayanan, and Jasper Snoek. Evaluating prediction-time batch normalization for robustness under covariate shift. _arXiv preprint arXiv:2006.10963_, 2020.
* [28] Shuaiecheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, and Mingkui Tan. Efficient test-time model adaptation without forgetting. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_, volume 162 of _Proceedings of Machine Learning Research_, pages 16888-16905. PMLR, 17-23 Jul 2022.
* [29] Shuaiecheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, and Mingkui Tan. Towards stable test-time adaptation in dynamic wild world. In _The Eleventh International Conference on Learning Representations_, 2023.
* [30] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_, Oct 2017.
* [31] Joaquin Quinonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. _Dataset shift in machine learning_. Mit Press, 2008.
* [32] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2016.
* MICCAI 2015_, pages 234-241, Cham, 2015. Springer International Publishing.
* [34] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, and Kate Saenko. Universal domain adaptation through self supervision. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 16282-16292. Curran Associates, Inc., 2020.
* [35] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In _Proceedings of the European Conference on Computer Vision (ECCV)_, September 2018.
* [36] Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 11539-11551. Curran Associates, Inc., 2020.
* [37] Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green ai. _Commun. ACM_, 63(12):54-63, nov 2020.
* [38] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In _International Conference on Learning Representations_, 2021.
* [39] Qin Wang, Olga Fink, Luc Van Gool, and Dengxin Dai. Continual test-time domain adaptation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 7201-7211, June 2022.
* [40] Tong Wu, Feiran Jia, Xiangyu Qi, Jiachen T. Wang, Vikash Sehwag, Saeed Mahloujifar, and Prateek Mittal. Uncovering adversarial risks of test-time adaptation. _arXiv preprint arXiv:2301.12576_, 2023.
* [41] Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, WENXUAN PENG, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, and Ziwei Liu. Openood: Benchmarking generalized out-of-distribution detection. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, _Advances in Neural Information Processing Systems_, volume 35, pages 32598-32611. Curran Associates, Inc., 2022.
* [42] Kaichao You, Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Universal domain adaptation. In _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 2715-2724, 2019.
* [43] Qing Yu and Kiyoharu Aizawa. Unsupervised out-of-distribution detection by maximum classifier discrepancy. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, October 2019.
* [44] Longhui Yuan, Binhui Xie, and Shuang Li. Robust test-time adaptation in dynamic scenarios. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 15922-15932, June 2023.
Experiment details
We conducted all experiments in the paper using three random seeds (0, 1, 2) and reported the average accuracies and their corresponding standard deviations. The experiments were performed on NVIDIA GeForce RTX 3090 and NVIDIA TITAN RTX GPUs. For a single execution of SoTTA, the test-time adaptation phase consumed 1 minutes for CIFAR10-C/CIFAR100-C and 10 minutes for ImageNet-C.
### Baseline details
In this study, we utilized the official implementations of the baseline methods. To ensure consistency, we adopted the reported best hyperparameters documented in the respective papers or source code repositories. Furthermore, we present supplementary information regarding the implementation specifics of the baseline methods and provide a comprehensive overview of our experimental setup, including detailed descriptions of the employed hyperparameters.
SoTTA (Ours).We used ADAM optimizer [14], with a BN momentum of \(m=0.2\), and learning rate of \(l=0.001\) with a single adaptation epoch. We set the HUS size to 64 and the confidence threshold \(C_{0}\) to 0.99 for CIFAR10-C (10 classes), 0.66 for CIFAR100-C (100 classes), and 0.33 for ImageNet-C (1,000 classes). We set entropy-sharpness L2-norm constraint \(\rho=0.5\) following the suggestion [4].
Pl.For PL [17], we only updated the BN layers following the previous studies [38, 39]. We set the learning rate as \(LR=0.001\) as the same as TENT [38].
Tent.For TENT [38], we set the learning rate as \(LR=0.001\) for CIFAR10-C and \(LR=0.00025\) for ImageNet-C, following the guidance provided in the original paper. We referred to the official code3 for implementations.
Footnote 3: [https://github.com/DequanWang/tent](https://github.com/DequanWang/tent)
Lame.LAME [1] relies on an affinity matrix and incorporates hyperparameters associated with it. We followed the hyperparameter selection specified by the authors in their paper and referred to their official code4 for implementation details. Specifically, we employed the kNN affinity matrix with a value of k set to 5.
Footnote 4: [https://github.com/fiveai/LAME](https://github.com/fiveai/LAME)
CoTTA [39] incorporates three hyperparameters: the augmentation confidence threshold \(p_{th}\), restoration factor \(p\), and exponential moving average (EMA) factor \(m\). To ensure consistency, we adopted the hyperparameter values recommended by the authors. Specifically, we set the restoration factor to \(p=0.01\) and the EMA factor to \(\alpha=0.999\). For the augmentation confidence threshold, the authors provide a guideline for its selection, suggesting using the 5% quantile of the softmax predictions' confidence on the source domains. We followed this guideline, which results in \(p_{th}=0.92\) for CIFAR10-C, \(p_{th}=0.72\) for CIFAR100-C, and \(p_{th}=0.1\) for ImageNet-C. We referred to the official code5 for implementing CoTTA.
Footnote 5: [https://github.com/qinenergy/cotta](https://github.com/qinenergy/cotta)
EATA.For EATA [28], we followed the settings from the original paper. We set \(LR=0.005/0.005/0.00025\) for CIFAR10-C/CIFAR100-C/ImageNet-C, entropy constant \(E_{0}=0.4\times\ln\lvert\mathcal{Y}\rvert\) where \(\lvert\mathcal{Y}\rvert\) is number of classes. We set cosine sample similarity threshold \(\epsilon=0.4/0.4/0.05\), trade-off parameter \(\beta=1/1/2,000\), the moving average factor \(\alpha=0.1\). We utilized 2,000 samples for calculating Fisher importance as suggested. We referred to the official code6 for implementing EATA.
Footnote 6: [https://github.com/mr-eggplant/EATA](https://github.com/mr-eggplant/EATA)
Sar.SAR [29] aims to adapt to diverse batch sizes, and we chose a typical batch size of 64 for a fair comparison. We followed the learning rate as \(LR=0.00025\), sharpness threshold \(\rho=0.5\), and entropy threshold \(E_{0}=0.4\times\ln\lvert\mathcal{Y}\rvert\) where \(\lvert\mathcal{Y}\rvert\) is the total number of classes, as suggested in the original paper. Finally, we froze the top layer (layer4 for ResNet18) as the original paper, and SoTTA also follows this implementation. We referred to the original code7 for implementing SAR.
Footnote 7: [https://github.com/mr-eggplant/SAR](https://github.com/mr-eggplant/SAR)
RoTTA.RoTTA [44] uses Adam Optimizer by setting learning rate as \(LR=0.001\) and \(\beta=0.9\). We followed the authors' hyperparameters selection from the paper, including BN-statistic exponential moving average updating rate as \(\alpha=0.05\), the Teacher model's exponential moving average updating rate as \(\nu=0.001\), timeliness parameter as \(\lambda_{t}=1.0\), and uncertainty parameter as \(\lambda_{u}=1.0\). We referred to the original code8 for implementing RoTTA.
Footnote 8: [https://github.com/BIT-DA/RoTTA](https://github.com/BIT-DA/RoTTA)
### Target dataset details
CIFAR10-C/CIFAR100-C.CIFAR10-C/CIFAR100-C [9] serves as a widely adopted benchmark for evaluating the robustness of models against corruptions [27, 36, 38, 39]. Both datasets consist of 50,000 training samples and 10,000 test samples, categorized into 10/100 classes. To assess the robustness of models, datasets introduce 15 types of corruptions to the test data, including Gaussian Noise, Shot Noise, Impulse Noise, Defocus Blur, Frosted Glass Blur, Motion Blur, Zoom Blur, Snow, Frost, Fog, Brightness, Contrast, Elastic Transformation, Pixelate, and JPEG Compression. For our experiments, we adopt the highest severity level of corruption, level 5, in line with previous studies [27, 36, 38, 39]. Consequently, the datasets consist of 150,000 corrupted test samples. To train our models, we employ the ResNet18 [8] architecture as the backbone network. The model is trained on the clean training data to generate the source models. We utilize stochastic gradient descent with a momentum of 0.9 and cosine annealing learning rate scheduling [22] for 200 epochs. The initial learning rate is set to 0.1, and a batch size 128 is used during training.
ImageNet-C.ImageNet-C is another widely adopted benchmark for evaluating the robustness of models against corruptions [1, 27, 36, 38, 39]. The ImageNet dataset [3] consists of 1,281,167 training samples and 50,000 test samples. Similar to CIFAR10-C, ImageNet-C applies the same 15 types of corruptions, resulting in 750,000 corrupted test samples. We utilize the highest severity level of corruption, equivalent to CIFAR10-C. For our experiments, we employ a pre-trained ResNet18 [8] model from the TorchVision library [23], which is pre-trained on the ImageNet dataset [3] and is widely used as a backbone for various computer vision tasks.
### Noisy dataset details
CIFAR100 (Near).CIFAR100 [15] consists of 50,000/10,000 training/test data with 100 classes. We utilized training data without any corruption. We undersampled the dataset to 10,000 for the CIFAR10-C and CIFAR100-C target cases by randomly removing samples and used the entire training set (50,000) for the ImageNet-C target case.
ImageNet (Near).ImageNet [3] consists of 1,281,167/50,000 training/test data with 1,000 classes. We utilized test data without any corruption. We undersampled the dataset to 10,000 for the CIFAR100-C target case by randomly removing samples.
MNIST (Far).MNIST [16] contains 60,000/10,000 training/test data with 10 classes. We utilized test data without any corruption. We used the entire test set for the CIFAR10-C/CIFAR100-C target case, and oversampled the dataset by randomly resampling, which results in 50,000 samples that is equivalent to the size of each ImageNet-C target data.
Attack.We implemented the modified indiscriminate distribution invading attack (DIA) [40]. First, we duplicated the entire set of target samples and treated them as malicious samples. Subsequently, we randomly shuffled these duplicated samples within the original target sample set. During the adaptation phase, we injected perturbations into the malicious samples to increase the overall error rate on benign samples within the same batch. As a result, we perturbed 10,000 samples (CIFAR10-C/CIFAR100-C) and 50,000 samples (ImageNet-C) to serve as attack samples. For CIFAR10-C/CIFAR100-C, we used hyperparameters of maximum perturbation constraint \(\epsilon=0.1\), attack learning rate \(\alpha=1/255\), and attacking steps \(N=10\). For ImageNet-C, we used hyperparameters of maximum perturbation constraint \(\epsilon=0.2\), attack learning rate \(\alpha=1/255\), and attacking steps \(N=1\).
Uniform random noise (Noise).We generated a uniform random valued image in the scaled RGB range \([0,1]\), with the same height and width as the corresponding target dataset. We generated the same amount of noise samples as each target dataset.
Result details
Additional ablative studies
We conducted experiments to understand the sensitivity of our two hyperparameters: confidence threshold (\(C_{0}\)) and BN momentum (\(m\)). We varied \(C_{0}\) and \(m\) and reported the corresponding accuracy.
Confidence threshold.Our result shows that the selection of \(C_{0}\) shows similar patterns across different scenarios (Benign \(\sim\) Noise). The result illustrates a tradeoff; a low \(C_{0}\) value does not effectively reject noisy samples, while a high \(C_{0}\) value filters benign data. We found a proper value of \(C_{0}\) (0.99) that generally works well across the scenarios. Also, we found that the optimal \(C_{0}\) depends primarily on in-distribution data. Our interpretation is that setting different \(C_{0}\) values for CIFAR10-C, CIFAR100-C, and ImageNet-C is straightforward as they have a different number of classes (10, 100, and 1,000), which leads to different ranges of the model's confidence.
BN momentum.Across the tested range, the variations in performance were found to be negligible. This finding indicates that choosing a low momentum value from within the specified range ([0.05, 0.3]) is adequate to maintain a favorable performance. Please note that setting a high momentum would corrupt the result, which is implicated by the algorithms directly utilizing test-time statistics (e.g., TENT) suffering from accuracy degradation with noisy data streams (e.g., TENT: 81.0% \(\rightarrow\) 52.1% for Noise at Table 1).
## Appendix D Further discussions
### Theoretical explanation of the impact of noisy data streams
We provide a theoretical explanation of the impact of noisy data streams with a common entropy minimization as an example. With the Bayesian-learning-based frameworks [2; 7], we can express the posterior distribution \(p\) of the model in terms of training data \(D\) and benign test data \(B\) in test-time adaptation:
\[\log p(\theta|D,B)=\log q(\theta)-\frac{\lambda_{B}}{|B|}\sum_{b=1}^{|B|}H(y_{ b}|x_{b}). \tag{7}\]
The posterior distribution of model parameters depends on the prior distribution \(q\) and the average of entropy \(H\) of benign samples with a multiplier \(\lambda\). Here, we incorporate the additional noisy data stream \(N\) into Equation 7 and introduce a new posterior distribution considering noisy streams:
\[\log p(\theta|D,B,N)=\log q(\theta)-\frac{\lambda_{B}}{|B|}\sum_{b=1}^{|B|}H(y_ {b}|x_{b})-\frac{\lambda_{N}}{|N|}\sum_{n=1}^{|N|}H(y_{n}|x_{n}). \tag{8}\]
Figure 8: Effect of hyperparameters on the model accuracy on CIFAR10-C for 15 types of corruptions under five scenarios: Benign, Near, Far, Attack, and Noise. Averaged over three different random seeds.
With Equation 7 and Equation 8, we can now derive model parameter variations caused by noisy test samples:
\[\log p(\theta|D,B)-\log p(\theta|D,B,N)=\frac{\lambda_{N}}{|N|}\sum_{n=1}^{M}H(y_ {n}|x_{n}). \tag{9}\]
Equation 9 implies that the (1) model adapted only from benign data and (2) model adapted with both benign and noisy data differ by the amount of the average entropy of noisy samples. This also suggests that a high entropy from severe noisy samples would result in a significant model drift in adaptation (i.e., model corruption).
### Comparison with previous TTA methods
#### d.2.1 EATA and SAR
While SoTTA, EATA [28], and SAR [29] all leverage sample filtering strategy, the key distinction of input-wise robustness of SoTTA and EATA/SAR lies in three aspects: (1) Our high-confidence sampling strategy in SoTTA aims to filter noisy samples by utilizing only the samples with high confidence, while both EATA and SAR use a different approach that excludes a few high-entropy samples, particularly during the early adaptation stage. In our preliminary study, we found that our method excludes 99.98% of the noisy samples, whereas EATA and SAR exclude 33.55% of such samples. (2) While EATA and SAR adapt to every incoming low-entropy sample, SoTTA leverages a uniform-class memory management approach to prevent overfitting. As shown in Figure 5b, noisy samples often lead to imbalanced class predictions, and these skewed distributions could lead to an undesirable bias in \(p(y)\) and thus might negatively impact TTA objectives, such as entropy minimization. The ablation study in Table 10 shows the effectiveness of uniform sampling with a 3.9%p accuracy improvement. (3) EATA and SAR reset the memory buffer and restart the sample collection process for each adaptation. This strategy is susceptible to overfitting due to a smaller number of samples used for adaptation and the temporal distribution drift of the samples. In contrast, our continual memory management approach effectively mitigates this issue by retaining high-confidence uniform-class samples in the memory, as shown in Table 10.
We acknowledge that both SoTTA and SAR utilize sharpness-aware minimization proposed by Foret et al. [4]. However, we clarify that the motivation behind using SAM is different. While SAR intends to avoid model collapse when exposed to samples with large gradients, we aim to enhance the model's robustness to noisy samples with high confidence scores. As illustrated in Figure 6, we observed that entropy-sharpness minimization effectively prevents the model from overfitting to noisy samples. As a result, while our algorithm led to marginal performance degradation in noisy settings (82.2% \(\rightarrow\) 80.0% for Noise), EATA and SAR showed significant degradation (EATA 82.4% \(\rightarrow\) 36.0% for Noise; SAR 78.3% \(\rightarrow\) 58.3% for Noise).
#### d.2.2 RoTTA
Regarding our high-confidence uniform sampling technique, RoTTA [44] could be compared. First of all, RoTTA's objective is different from ours; RoTTA focused on temporal distribution changes of test streams without considering noisy samples. Similar to SoTTA, RoTTA's memory bank maintains recent high-confidence samples. However, RoTTA has no filtering mechanism for low-confidence samples, which makes RoTTA fail to avoid noisy samples, especially in the early stage of TTA. In contrast, our confidence-based memory management scheme effectively rejects noisy samples, and
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Benign & Near & Far & Attack & Noise & Avg \\ \hline SoTTA (w/o High-confidence) & 82.2 ± 0.2 & 78.0 ± 0.4 & 75.9 ± 0.5 & 84.3 ± 0.1 & 77.7 ± 0.7 & 79.6 ± 0.2 \\ SoTTA (w/o Uniform-class) & **82.3 ± 0.2** & 80.9 ± 0.6 & 74.9 ± 2.4 & 83.5 ± 0.2 & 68.7 ± 7.0 & 78.0 ± 2.0 \\ SoTTA (w/o Continual) & 81.0 ± 0.5 & 79.5 ± 0.3 & 75.5 ± 1.8 & 84.4 ± 0.2 & 65.7 ± 7.0 & 77.2 ± 1.8 \\ SoTTA & 82.2 ± 0.3 & **81.4 ± 0.5** & **81.6 ± 0.6** & **84.5 ± 0.2** & **80.0 ± 1.4** & **81.9 ± 0.5** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Average classification accuracy (%) and their corresponding standard deviations on ablation study of the effect of high-confidence uniform-class continual memory of SoTTA on CIFAR10-C. **Bold** numbers are the highest accuracy. Averaged over three different random seeds.
thus it prevents potential model drift from the beginning of TTA scenarios. As a result, our approach outperforms RoTTA in noisy test streams (e.g., 5.4%p better than RoTTA on CIFAR10-C).
### Comparison with out-of-distribution detection algorithms
We discussed the limitation of applying out-of-distribution detection to TTA in Section 5. Still, we are curious about the effect of applying out-of-distribution algorithms to our scenario. To this end, we conduct experiments using one of the out-of-distribution algorithms, ODIN [20], in our noisy data streams. Specifically, we filtered OOD samples detected by ODIN and performed TTA algorithms on the samples left.
Note that similar to prior studies on OOD, ODIN uses a thresholding approach to predict whether a sample is OOD. It thus requires validation data with binary labels indicating whether it is in-distribution or OOD to decide the best threshold \(\delta\). However, in TTA scenarios, validation data is not provided, which makes it difficult to apply OOD algorithms directly in our scenario. We circumvented this problem using the labeled test batches to get the best threshold. Following the original paper, we searched for the best threshold from 0.1 to 0.12 with a step size of 0.000001, which took over 20,000 times longer than the original TTA algorithm.
Table 11 shows that the impact of discarding OOD samples with ODIN is negligible, yielding only a 0.3%p improvement in the average accuracy despite a huge computation cost. Also, Figure 9 shows the high sensitivity of ODIN with respect to threshold hyperparameter \(\delta\), which implies that applying OOD in TTA is impractical.
We conclude the practical limitations of OOD detection algorithms for TTA as follows: (1) OOD methods assume that a model is fixed during test time, while a model changes continually in TTA. (2) As previously noted, most OOD algorithms require labels for validation data unavailable in TTA scenarios. Even using the same test dataset for selecting the threshold, the performance improvement was marginal. (3) Low performance possibly results from the fact that OOD detection studies are
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Benign} & \multicolumn{3}{c}{Near} & \multicolumn{3}{c}{Far} & \multicolumn{3}{c}{Attack} & \multicolumn{3}{c}{Noise} \\ \cline{2-13} Method & w/o ODIN & w/ ODIN & w/o ODIN & w/ ODIN & w/ ODIN & w/ ODIN & w/ ODIN & w/ ODIN & w/ ODIN & w/ ODIN \\ \hline Source & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 & 57.7 ± 0.1 \\ BN stats [27] & 78.2 ± 0.3 & 78.2 ± 0.3 & 76.5 ± 0.4 & 76.5 ± 0.4 & 75.4 ± 0.3 & **75.9 ± 0.4** & 55.8 ± 1.4 & 55.8 ± 1.4 & 55.9 ± 0.8 & 56.7 ± 0.9 \\ PL [17] & 78.4 ± 0.3 & **78.8** ± 0.5 & 73.1 ± 0.3 & **74.3 ± 0.6** & 71.3 ± 0.8 & 66.5 ± 1.1 & 66.5 ± 1.1 & 52.1 ± 0.4 & 52.1 ± 0.4 \\ TENT [38] & 81.5 ± 1.0 & 81.5 ± 10.8 & 76.1 ± 0.6 & 76.1 ± 0.6 & 73.5 ± 1.4 & 77.3 ± 0.9 & **69.1 ± 0.1** & 54.4 ± 0.3 & **56.2 ± 0.6** \\ LAME [1] & 56.1 ± 0.3 & 56.1 ± 0.3 & 56.7 ± 0.5 & 56.7 ± 0.8 & 55.7 ± 0.4 & 55.7 ± 0.4 & 56.2 ± 0.8 & 56.2 ± 0.8 & 54.9 ± 0.8 & **55.2 ± 0.7** \\ CoTTA [39] & 82.2 ± 0.3 & 82.2 ± 0.3 & 78.2 ± 0.3 & 78.2 ± 0.4 & 73.6 ± 0.9 & 69.6 ± 1.3 & 69.6 ± 1.3 & 57.8 ± 0.8 & **62.0 ± 1.3** \\ ETA [28] & 82.4 ± 0.3 & 82.4 ± 0.3 & 63.9 ± 0.4 & **69.2 ± 0.4** & 56.3 ± 0.3 & **59.9 ± 0.6 & 70.9 ± 0.7 & 70.9 ± 0.7 & 36.0 ± 0.8 & 58.0 ± 0.1 \\ SAR [29] & 78.4 ± 0.7 & 78.4 ± 0.7 & 72.8 ± 0.2 & 72.8 ± 0.2 & 75.7 ± 0.3 & **76.0 ± 0.1** & 56.2 ± 1.8 & 56.2 ± 1.4 & 58.7 ± 0.3 & 58.7 ± 0.3 \\ RoTTA [44] & 75.3 ± 0.7 & 75.3 ± 0.7 & 77.5 ± 0.5 & 77.5 ± 0.7 & 77.0 ± 0.9 & 77.4 ± 0.8 & 78.4 ± 0.8 & 73.5 ± 0.5 & 73.5 ± 0.5 \\ SoTTA & 82.1 ± 0.4 & 82.1 ± 0.4 & 81.6 ± 0.4 & 81.6 ± 0.4 & 81.7 ± 0.8 & **82.0 ± 0.8** & **84.5 ± 0.3** & **84.5 ± 0.3** & 81.5 ± 1.2 & 81.5 ± 1.2 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Average classification accuracy (%) of ODIN+TTA on CIFAR10-C. **Bold** numbers are the accuracy with improvement from normal TTAs. Averaged over three different random seeds.
Figure 9: Effect of OOD threshold \(\delta\) on classification accuracy (%) of ODIN+TENT on CIFAR10-C. Averaged over three different random seeds.
built on the condition that training and test domains are the same, which differs from TTA's scenario. These collectively make it difficult to apply OOD detection studies directly to TTA scenarios.
### Applying to other domains
While this study primarily focuses on classification tasks, there are other tasks where test-time adaptation would be useful. Here we discuss the applicability of SoTTA to (1) image segmentation and (2) object detection, which are crucial in autonomous driving scenarios.
For image segmentation, when noisy objects are present in the input, the model might produce noisy predictions on those pixels, leading to detrimental results. Extending SoTTA to operate at the pixel level would allow it to be compatible with the segmentation task while minimizing the negative influences of those noisy pixels on model predictions in test-time adaptation scenarios.
Similarly, SoTTA could be tailored to object detection's classification (recognition) task. For example, in the context of the YOLO framework [32], SoTTA could filter and store grids with high confidence for test-time adaptation, enhancing detection accuracy. However, our current approach must address the localization task (bounding box regression) during test-time adaptation. Implementing this feature is non-trivial and would require careful consideration and potential redesign of certain aspects of our methodology. Accurately localizing bounding boxes during test-time adaptation presents an exciting avenue for future research.
## Appendix E License of assets
DatasetsCIFAR10/CIFAR100 (MIT License), CIFAR10-C/CIFAR100-C (Creative Commons Attribution 4.0 International), ImageNet-C (Apache 2.0), and MNIST (CC-BY-NC-SA 3.0).
CodesTorchvision for ResNet18 (Apache 2.0), the official repository of CoTTA (MIT License), the official repository of TENT (MIT License), the official repository of LAME (CC BY-NC-SA 4.0), the official repository of EATA (MIT License), the official repository of SAR (BSD 3-Clause License), and the official repository of RoTTA (MIT License). | ## Review
### Summary
This paper addresses the challenge of test-time adaptation (TTA) in the presence of non-interest samples, proposing a method called Screening-out Test-Time Adaptation (SoTTA). The approach focuses on both input-wise and parameter-wise robustness to enhance performance under noisy conditions. The authors validate their method through comprehensive experiments on standard datasets, demonstrating significant improvements compared to existing TTA methods. Although the problem setting is novel and relevant, reviewers note concerns regarding the technical originality of the proposed methods, as they are closely related to existing techniques. Overall, the work is recognized for its practical implications and experimental results.
### Strengths
- The problem of non-interest samples in TTA is novel and practically significant.
- The proposed method, SoTTA, includes reasonable adaptations for robustness.
- The paper presents clear experiments demonstrating state-of-the-art performance.
- The presentation of the paper is generally good, with clear figures.
### Weaknesses
- The proposed methods may lack originality, drawing heavily from existing techniques like RoTTA and SAR.
- Key results are not fully presented in the main paper, particularly those related to ImageNet-C.
- There is no discussion on the selection and sensitivity of key hyperparameters, which is critical for practical applications.
- Several typographical errors are present in the manuscript.
### Questions
- Can the authors discuss the potential performance if CSTU and ESM methods were combined?
- What is the outcome when the proposed method is compared to EATA?
- Can the authors provide sensitivity analysis regarding the threshold C0?
- How do the proposed methods differ from existing techniques like RoTTA and SAR?
### Soundness
**Score:** 2
**Description:** 2 = fair: The methodology is reasonable but lacks originality and thoroughness in addressing important aspects.
### Presentation
**Score:** 3
**Description:** 3 = good: The paper is mostly well-presented with clear figures and layouts, but it contains some typographical errors.
### Contribution
**Score:** 2
**Description:** 2 = fair: The contribution is relevant, but the originality of the methods used is questioned due to similarities with existing approaches.
### Rating
**Score:** 5
**Description:** 5 = borderline accept: The paper is technically solid with significant contributions, though it requires some improvements and clarification on methodological originality.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents a relevant and novel problem in the field of test-time adaptation and proposes a method with promising experimental results. However, concerns regarding technical originality and thoroughness in presenting results and methodology exist. The decision to accept aligns with the majority of reviewer feedback, recognizing the potential impact of the work while suggesting areas for improvement.
|
You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections:
1. Summary: A summary of the paper in 100-150 words.
2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.
3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are:
1 poor
2 fair
3 good
4 excellent
4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are:
1 strong reject
2 reject, significant issues present
3 reject, not good enough
4 possibly reject, but has redeeming facets
5 marginally below the acceptance threshold
6 marginally above the acceptance threshold
7 accept, but needs minor improvements
8 accept, good paper
9 strong accept, excellent work
10 strong accept, should be highlighted at the conference
5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.
Here is the template for a review format, you must follow this format to output your review result:
**Summary:**
Summary content
**Strengths:**
- Strength 1
- Strength 2
- ...
**Weaknesses:**
- Weakness 1
- Weakness 2
- ...
**Questions:**
- Question 1
- Question 2
- ...
**Soundness:**
Soundness result
**Presentation:**
Presentation result
**Contribution:**
Contribution result
**Rating:**
Rating result
**Paper Decision:**
- Decision: Accept/Reject
- Reasons: reasons content
Please ensure your feedback is objective and constructive. The paper is as follows:
# Mitigating Over-smoothing in Transformers via Regularized Nonlocal Functionals
Tam Nguyen
Department of Electrical & Computer Engineering
Rice University
Houston, USA
[email protected]
&Tan M. Nguyen
Department of Mathematics
National University of Singapore
Singapore
[email protected]
&Richard G. Baraniuk
Department of Electrical & Computer Engineering
Rice University
Houston, USA
[email protected]
###### Abstract
Transformers have achieved remarkable success in a wide range of natural language processing and computer vision applications. However, the representation capacity of a deep transformer model is degraded due to the over-smoothing issue in which the token representations become identical when the model's depth grows. In this work, we show that self-attention layers in transformers minimize a functional which promotes smoothness, thereby causing token uniformity. We then propose a novel regularizer that penalizes the norm of the difference between the smooth output tokens from self-attention and the input tokens to preserve the fidelity of the tokens. Minimizing the resulting regularized energy functional, we derive the Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO), a novel class of transformer models that can mitigate the over-smoothing issue. We empirically demonstrate the advantages of NeuTRENO over the baseline transformers and state-of-the-art methods in reducing the over-smoothing of token representations on various practical tasks, including object classification, image segmentation, and language modeling.
## 1 Introduction
Transformer models [62] have achieved substantial success in natural language processing [16, 2, 13, 10, 47, 4, 6, 14], reinforcement learning [9, 32], computer vision [19, 40, 59, 49, 44, 3, 41, 71, 27], and other practical applications [50, 33, 70, 26, 66]. Transformers also excel at transferring knowledge from pre-trained models to new tasks, even when limited supervision is available [45, 46, 16, 69, 39]. At the heart of transformers lies the self-attention mechanism, which computes a weighted average of token representations within a sequence. These weights are determined based on the similarity scores between pairs of tokens, determining their relative importance in the sequence [11, 43, 38]. This flexibility in capturing diverse syntactic and semantic relationships has been identified as a crucial factor contributing to the success of transformers [57, 63, 12, 64, 31].
### Background: Self-Attention
For a given input sequence \(\mathbf{X}:=[\mathbf{x}(1),\cdots,\mathbf{x}(N)]^{\top}\in\mathbb{R}^{N\times D_{x}}\) of \(N\) feature vectors, self-attention transforms \(\mathbf{X}\) into the output sequence \(\mathbf{H}\) in the following two steps:
**Step 1.** The input sequence \(\mathbf{X}\) is projected into the query matrix \(\mathbf{Q}\), the key matrix \(\mathbf{K}\), and the value matrix \(\mathbf{V}\) via three linear transformations
\[\mathbf{Q}=\mathbf{X}\mathbf{W}_{Q}^{\top};\mathbf{K}=\mathbf{X}\mathbf{W}_{K}^{ \top};\mathbf{V}=\mathbf{X}\mathbf{W}_{V}^{\top}, \tag{1}\]
where \(\mathbf{W}_{Q},\mathbf{W}_{K}\in\mathbb{R}^{D_{qk}\times D_{x}}\), and \(\mathbf{W}_{V}\in\mathbb{R}^{D\times D_{x}}\) are the weight matrices. We denote \(\mathbf{Q}:=[\mathbf{q}(1),\ldots,\mathbf{q}(N)]^{\top},\mathbf{K}:=[\mathbf{k}(1),\ldots,\mathbf{ k}(N)]^{\top}\), and \(\mathbf{V}:=[\mathbf{v}(1),\ldots,\mathbf{v}(N)]^{\top}\), where the vectors \(\mathbf{q}(i),\mathbf{k}(i)\), and \(\mathbf{v}(i)\), for \(i=1,\ldots,N\) are the query, key, and value vectors, respectively.
**Step 2.** The output sequence \(\mathbf{U}:=[\mathbf{u}(1),\ldots,\mathbf{u}(N)]^{\top}\in\mathbb{R}^{N\times D_{qk}}\) is then computed as follows
\[\mathbf{U}=\mathrm{softmax}\Big{(}\mathbf{Q}\mathbf{K}^{\top}/\sqrt{D_{qk}} \Big{)}\mathbf{V}:=\mathbf{A}\mathbf{V}, \tag{2}\]
where the softmax function is applied to each row of the matrix \(\mathbf{Q}\mathbf{K}^{\top}/\sqrt{D_{qk}}\). The matrix \(\mathbf{A}:=\mathrm{softmax}\Big{(}\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{D_{ qk}}}\Big{)}\in\mathbb{R}^{N\times N}\) and its component \(a_{ij}\) for \(i\), \(j=1,\cdots,N\) are called the attention matrix and attention scores, respectively. For each query vector \(\mathbf{q}(i)\) for \(i=1,\cdots,N\), an equivalent form of Eqn. (2) to compute the output vector \(\mathbf{u}(i)\) is given by
\[\mathbf{u}(i)=\sum_{j=1}^{N}\mathrm{softmax}\Big{(}\mathbf{q}(i)^{\top}\mathbf{k}(j)/\sqrt {D_{qk}}\Big{)}\mathbf{v}(j). \tag{3}\]
The self-attention computed by Eqn. (2) and (3) is refered as softmax attention. In our work, we refer to a transformer that uses softmax attention as a softmax transformer.
### Over-smoothing in Transformers
Despite their remarkable success, deep transformer-based models have been observed to suffer from the over-smoothing issue, in which all token representations become identical when more layers are added to the models [55, 65, 18]. This over-smoothing phenomenon, also known as the "token uniformity" problem, significantly limits the representation capacity of transformers. To illustrate this phenomenon, we examine the average cosine similarity between pairs of token representations across different layers in a softmax transformer trained for the Imagenet object classification and ADK20 image segmentation tasks [73]. As depicted in Fig. 1, in both tasks, this cosine similarity between tokens increases as the models become deeper. Particularly, in the last two layers, the cosine similarity scores are approximately 0.9, indicating a high degree of similarity among tokens.
### Contribution
We develop a nonlocal variational denoising framework for self-attention, providing insights into the over-smoothing phenomenon in transformers. In particular, by viewing self-attention as a gradient descent step toward minimizing a nonlocal functional that penalizes high-frequency noise in the signal, we uncover the diffusive nature of self-attention, which explains the over-smoothing issue of transformers. Motivated by this understanding, we propose the Neural Transformer with a Regularized
Figure 1: The cosine similarity between tokens representations across layers of NeuTRENO DeiT vs. the baseline DeiT models on the Imagenet classification and ADE20K image segmentation tasks. In both tasks, the DeiT baseline suffers from over-smoothing as tokens become similar to identical when the model gets deeper. In contrast, tokens in NeuTRENO models are significantly more diverse, suggesting a reduction in over-smoothing. Further details regarding this analysis can be found in Appendix E.
Nonlocal Functional (NeuTRENO), a novel class of transformers designed to mitigate over-smoothing. NeuTRENO is derived by optimizing a regularized nonlocal functional, which includes an additional convex fidelity term. This fidelity term penalizes the norm of the difference between the smooth output tokens from self-attention and the input tokens, thereby reducing the over-smoothing effect. Our contribution is three-fold.
1. We develop a nonlocal variational denoising framework for self-attention and shed light on the over-smoothing issue that hampers the representation capacity of transformers.
2. We develop NeuTRENO, a novel class of transformers that are capable of alleviating the over-smoothing issue.
3. We theoretically prove that transformers with softmax self-attention are prone to over-smoothing while NeuTRENO can avoid this issue.
We empirically demonstrate the benefits of NeuTRENO on various large-scale applications, including the ImageNet object classification, ADE20K image segmentation, and WikiText-103 language modeling tasks.
**Organization**: We organize our paper as follows: in Section 2, we develop a nonlocal variational denoising framework for self-attention and provide an explanation for the over-smoothing issue in transformer-based models. In section 3, we propose NeuTRENO, and present a theoretical result that guarantees NeuTRENO's capability of mitigating over-smoothing. In Section 4, we empirically validate the benefits of NeuTRENO. We discuss the related work in Section 6. Finally, we conclude our main contributions and remarks. Further results, details, and proofs are provided in the Appendix.
## 2 A Nonlocal Variational Denoising Framework for Self-attention
We first consider the output matrix \(\mathbf{U}:=[\mathbf{u}(1),\cdots,\mathbf{u}(N)]^{\top}\in\mathbb{R}^{N\times D}\) in self-attention as given by Eqn. 2 in Section 1.1. Let \(\Omega\subset\mathbb{R}\), \(x\in\Omega\), and \(\mathbf{u}(x):=[u_{1}(x),\dots,u_{D}(x)]^{T}\) be a real vector-valued function, \(\mathbf{u}:\Omega\rightarrow\mathbb{R}^{D}\), \(\mathbf{u}\in L^{2}(\Omega)\). The output matrix \(\mathbf{U}\) in self-attention discretizes the function \(\mathbf{u}(x)\) on a 1-D grid. In the context of signal/image denoising, \(\mathbf{U}\) can be considered as the _desired clean signal_, and \(\mathbf{u}(x)\) is its corresponding intensity function denoting the signal values at the position \(x\in\Omega\). We further let the observed intensity function \(\mathbf{f}(x)\) denote the values of the _observed noisy signal_ at \(x\in\Omega\), \(\mathbf{f}:\Omega\rightarrow\mathbb{R}^{D}\), \(\mathbf{f}\in L^{2}(\Omega)\). For example, \(\mathbf{f}(x)\) can be given as
\[\mathbf{f}(x)=\mathbf{u}(x)+\mathbf{n}(x), \tag{4}\]
where \(\mathbf{n}\) is the additive noise. We wish to reconstruct \(\mathbf{u}(x)\) from \(\mathbf{f}(x)\). Following the variational denoising method proposed in [23] and [24], the denoised image \(\mathbf{u}(x)\) can be obtained by minimizing the following regularized functional with respect to \(\mathbf{u}\):
\[E(\mathbf{u},\mathbf{f}) =J(\mathbf{u})+G(\mathbf{u},\mathbf{f}) \tag{5}\] \[=\frac{1}{2}\int_{\Omega\times\Omega}\|\mathbf{u}(x)-\mathbf{u}(y)\|_{2}^ {2}k(x,y)dxdy+\frac{\lambda}{2}\int_{\Omega}\|\mathbf{u}(x)-\mathbf{f}(x)\|_{2}^{2}dx.\]
Here, \(J(\mathbf{u})=\frac{1}{2}\int_{\Omega\times\Omega}\|\mathbf{u}(x)-\mathbf{u}(y)\|_{2}^{2}k (x,y)dxdy\) is a nonlocal functional of weighted differences. The weights \(k(x,y)\) represent the affinity between signal values at positions \(x\) and \(y\). For example, for images, \(k(x,y)\) captures the proximity between pixels \(x\) and \(y\) in the image. \(J(\mathbf{u})\) works as a regularizer. Minimizing \(J(\mathbf{u})\) promotes the smoothness of \(\mathbf{u}\) and penalizes high-frequency noise in the signal. Adding the convex fidelity term \(G(\mathbf{u},\mathbf{f})=\frac{\lambda}{2}\int_{\Omega}\|\mathbf{u}(x)-\mathbf{f}(x)\|_{2}^{2}dx\) to the functional \(J(\mathbf{u})\) allows the denoised signal \(\mathbf{u}(x)\) to preserve relevant information in the observed noisy signal \(\mathbf{f}(x)\). The regularized functional \(E(\mathbf{u},\mathbf{f})\) can be considered as an energy functional.
### Self-attention as a Gradient Descent Step to Minimize the Nonlocal Functional \(J\)
We show that self-attention is equivalent to taking a gradient descent step toward minimizing the functional \(J(\mathbf{u})\) in the energy functional \(E(\mathbf{u},\mathbf{f})\). We expand \(J(\mathbf{u})\) as follows
\[J(\mathbf{u})=\frac{1}{2}\int_{\Omega\times\Omega}\sum_{j=1}^{D}(u_{j}(x)-u_{j}(y ))^{2}k(x,y)dxdy \tag{6}\]
The gradient of \(J\) with respect to \(\mathbf{u}\) is then given by
\[\nabla_{\mathbf{u}}J(\mathbf{u})=\left[\frac{\partial J}{\partial u_{1}},\frac{ \partial J}{\partial u_{2}},\dots,\frac{\partial J}{\partial u_{D}}\right]^{T}. \tag{7}\]The partial derivative \(\partial J/\partial u_{j}\), \(j=1,2,\ldots,D\), is defined through its dot product with an arbitrary function \(h_{j}\in L^{2}(\Omega)\) as follows
\[\frac{\partial J}{\partial u_{j}}\cdot h_{j}(x) =\frac{d}{d\tau}J(u_{j}+\tau h_{j})\big{|}_{\tau=0}\] \[=\frac{1}{2}\left(\frac{d}{d\tau}\int_{\Omega\times\Omega}(u_{j}( x)-u_{j}(y)+\tau h_{j}(x)-\tau h_{j}(y))^{2}k(x,y)dxdy\right)\bigg{|}_{\tau=0}\] \[=\left(\int_{\Omega\times\Omega}(u_{j}(x)-u_{j}(y)+\tau h_{j}(x)- \tau h_{j}(y))(h_{j}(x)-h_{j}(y))k(x,y)dxdy\right)\bigg{|}_{\tau=0}\] \[=\int_{\Omega\times\Omega}(u_{j}(x)-u_{j}(y))(h_{j}(x)-h_{j}(y))k (x,y)dxdy\] \[=\int_{\Omega\times\Omega}(u_{j}(x)-u_{j}(y))h_{j}(x)k(x,y)dxdy- \int_{\Omega\times\Omega}(u_{j}(x)-u_{j}(y))h_{j}(y)k(x,y)dxdy\]
Applying a change of variables \((x,y)\rightarrow(y,x)\) to the second term of the above integral, we have
\[\frac{\partial J}{\partial u_{j}}\cdot h_{j}(x) =\int_{\Omega\times\Omega}(u_{j}(x)-u_{j}(y))h_{j}(x)k(x,y)dxdy- \int_{\Omega\times\Omega}(u_{j}(y)-u_{j}(x))h_{j}(x)k(y,x)dxdy\] \[=\int_{\Omega\times\Omega}(u_{j}(x)-u_{j}(y)(k(x,y)+k(y,x))d{y}h_ {j}(x)dx\]
Thus, the Frechet derivative of J with respect to \(u_{j}\) is given by
\[\frac{\partial J}{\partial u_{j}}=\int_{\Omega}(u_{j}(x)-u_{j}(y)(k(x,y)+k(y,x ))dy. \tag{8}\]
Substituting the formula for \(\partial J/\partial u_{j}\) in Eqn. 8 into Eqn. 7 for \(\nabla_{\mathbf{u}}J(\mathbf{u})(x)\), we obtain the following gradient flow
\[\frac{d\mathbf{u}(x,t)}{dt}=-\nabla_{\mathbf{u}}J(\mathbf{u})=\int_{\Omega} \bigl{(}\mathbf{u}(y,t)-\mathbf{u}(x,t)\bigr{)}\bigl{(}k(x,y)+k(y,x)\bigr{)}dy, \tag{9}\]
where \(t\) is the time variable we introduce to capture the dynamics of \(\mathbf{u}\) when gradient descent is applied to minimize \(J(\mathbf{u})\). Let \(\mathbf{v}(x):=[v_{1}(x),\ldots,v_{D}(x)]^{T}\) be a real vector-valued function, \(\mathbf{v}:\Omega\rightarrow\mathbb{R}^{D}\), \(\mathbf{v}\in L^{2}(\Omega)\). We discretize \(\mathbf{v}(x)\) on a 1-D grid to attain the value vectors \(\mathbf{v}(1),\ldots,\mathbf{v}(N)\in\mathbb{R}^{D}\), which form the value matrix \(\mathbf{V}:=[\mathbf{v}(1),\cdots,\mathbf{v}(N)]^{\top}\in\mathbb{R}^{N\times D}\) in self-attention as defined in Eqn. 2. We initialize \(\mathbf{u}\) at \(t=0\) with \(\mathbf{v}(x)\), i.e., \(\mathbf{u}(x,0)=\mathbf{v}(x)\).
**Self-attention is an Euler Discretization of the Gradient Flow Given in 9.** We discretize the gradient flow in Eqn. 9 using the Euler method [21] with step size \(\Delta t(x)=1/\int_{\Omega}\bigl{(}k(x,y)+k(y,x)\bigr{)}dy\) and obtain the following update
\[\mathbf{u}(x,\Delta t(x)) =\mathbf{u}(x,0)+\Delta t(x)\int_{\Omega}\bigl{(}\mathbf{u}(y,0)-\mathbf{u}(x,0)\bigr{)}\bigl{(}k(x,y)+k(y,x)\bigr{)}dy\] \[=\int_{\Omega}\frac{\bigl{(}k(x,y)+k(y,x)\bigr{)}\mathbf{u}(y,0)}{ \int_{\Omega}\bigl{(}k(x,y^{\prime})+k(y^{\prime},x)\bigr{)}dy^{\prime}}dy= \int_{\Omega}\frac{K(x,y)\mathbf{v}(y)}{\int_{\Omega}K(x,y^{\prime})dy^{\prime}}dy. \tag{10}\]
Here, \(K(x,y):=k(x,y)+k(y,x)\) is a symmetric kernel and \(\mathbf{u}(y,0)=\mathbf{v}(y)\) since \(\mathbf{u}\) is initialized at \(t=0\) with \(\mathbf{v}\) as aforementioned. Let \(\mathbf{k}(x):=[k_{1}(x),\ldots,k_{D_{\phi k}}(x)]^{T}\) be a real vector-valued function, \(\mathbf{k}:\Omega\rightarrow\mathbb{R}^{D_{\phi k}}\), \(\mathbf{k}\in L^{2}(\Omega)\). Similar to \(\mathbf{u}(x)\) and \(\mathbf{v}(x)\), we can discretize \(\mathbf{k}(x)\) on a 1-D grid to attain the key vectors \(\mathbf{k}(1),\ldots,\mathbf{k}(N)\in\mathbb{R}^{D_{\phi k}}\), which form the key matrix \(\mathbf{K}:=[\mathbf{k}(1),\cdots,\mathbf{k}(N)]^{\top}\in\mathbb{R}^{N\times D_{\phi k}}\) in self-attention as defined in Eqn. 2. We choose \(K(x,y)=\exp\bigl{(}\mathbf{k}(x)^{T}\mathbf{k}(y)/\sqrt{D_{\phi k}}\bigr{)}\) and rewrite Eqn. 10 as follows
\[\mathbf{u}(x,\Delta t(x))=\int_{\Omega}\frac{\exp\bigl{(}\mathbf{k}(x)^{T}\mathbf{k}(y)/ \sqrt{D_{\phi k}}\bigr{)}}{\int_{\Omega}\exp\bigl{(}\mathbf{k}(x)^{T}\mathbf{k}(y^{ \prime})/\sqrt{D_{\phi k}}\bigr{)}dy^{\prime}}\mathbf{v}(y)dy. \tag{11}\]Estimating the integrals in Eqn. 11 via Monte-Carlo approximation using the key vectors \(\mathbf{k}(1),\ldots,\mathbf{k}(N)\in\mathbb{R}^{D_{qk}}\) and and value vectors \(\mathbf{v}(1),\ldots,\mathbf{v}(N)\in\mathbb{R}^{D}\), we obtain
\[\mathbf{u}(x,\Delta t(x))\approx\sum_{j=1}^{N}\frac{\exp\bigl{(}\mathbf{k}(x)^{T}\mathbf{k} (j)/\sqrt{D_{qk}}\bigr{)}}{\sum_{j^{\prime}=1}^{N}\exp\bigl{(}\mathbf{k}(x)^{T}\bm {k}(j^{\prime})/\sqrt{D_{qk}}\bigr{)}}\mathbf{v}(j). \tag{12}\]
Discretizing \(\mathbf{u}(x,\Delta t(x))\) on another 1-D grid, we attain
\[\mathbf{u}(i) \approx\sum_{j=1}^{N}\frac{\exp\bigl{(}\mathbf{k}(i)^{T}\mathbf{k}(j)/ \sqrt{D_{qk}}\bigr{)}}{\sum_{j^{\prime}=1}^{N}\exp\bigl{(}\mathbf{k}(i)^{T}\mathbf{k}( j^{\prime})/\sqrt{D_{qk}}\bigr{)}}\mathbf{v}(j)\] \[=\sum_{j=1}^{N}\mathrm{softmax}\Bigl{(}\mathbf{k}(i)^{\top}\mathbf{k}(j)/ \sqrt{D_{qk}}\Bigr{)}\mathbf{v}(j),\;\;i=1,\ldots,N. \tag{13}\]
Comparing Eqn. 13 and Eqn. 3, we observe that Eqn. 13 implement a symmetric self-attention, in which the query matrix \(\mathbf{Q}\) and the key matrix \(\mathbf{K}\) are the same, i.e. \(\mathbf{W}_{Q}=\mathbf{W}_{K}\) where \(\mathbf{W}_{Q}\) and \(\mathbf{W}_{K}\) are the linear projections that map the input sequence \(\mathbf{X}\) into \(\mathbf{Q}\) and \(\mathbf{K}\) as given in Eqn. 1. This symmetry of the attention scores is desirable in some image processing tasks due to the symmetric similarities between pixels, but can be relaxed for other tasks. To break the symmetry of attention scores in Eqn. 13, we replace the key vectors \(\mathbf{k}(i)\) by the query vectors \(\mathbf{q}(i)\), \(i=1,\ldots,N\), to obtain the exact formula of self-attention given by Eqn. 3. The following theorem summarizes our results:
**Theorem 1** (Self-attention as a Gradient Descent Step to Minimize a Nonlocal Functional).: _Given the nonlocal functional \(J(\mathbf{u})=\frac{1}{2}\int_{\Omega\times\Omega}\|\mathbf{u}(x)-\mathbf{u}(y)\|_{2}^{2}k (x,y)dxdy\) of a vector-valued function \(\mathbf{u}:\Omega\to\mathbb{R}^{D}\), \(\mathbf{u}\in L^{2}(\Omega)\), and let \(K(x,y):=k(x,y)+k(y,x)=\exp\bigl{(}\mathbf{k}(x)^{T}\mathbf{k}(y)/\sqrt{D_{qk}}\bigr{)}\), where \(\mathbf{k}:\Omega\to\mathbb{R}^{D_{qk}}\), \(\mathbf{k}\in L^{2}(\Omega)\). Then, taking a gradient descent step on \(\mathbf{u}\) at time \(t=0\), where \(\mathbf{u}(x,0)=\mathbf{v}(x)\), with an adaptive step size \(\Delta t(x):=\frac{1}{\int_{\Omega}\bigl{(}k(x,y)+k(y,x)\bigr{)}dy}\) to minimize \(J\) is equivalent to updating \(\mathbf{u}\) via a symmetric self-attention_
\[\mathbf{u}(x,\Delta t(x))=\sum_{j=1}^{N}\mathrm{softmax}\Bigl{(}\mathbf{k}(x)^{\top} \mathbf{k}(j)/\sqrt{D_{qk}}\Bigr{)}\mathbf{v}(j),\]
_which results in_
\[\mathbf{u}(i)=\sum_{j=1}^{N}\mathrm{softmax}\Bigl{(}\mathbf{k}(i)^{\top}\mathbf{k}(j)/ \sqrt{D_{qk}}\Bigr{)}\mathbf{v}(j),\;\;i=1,\ldots,N. \tag{14}\]
_Here, \(\mathbf{u}(n)\), \(\mathbf{v}(n)\), and \(\mathbf{u}(n)\), \(n=1,\ldots,N\), are the key, value, and output vectors in self-attention, respectively. Breaking the symmetry of the attention scores by replacing \(\mathbf{k}(i)\) with \(\mathbf{q}(i)\), \(i=1,\ldots,N\), in Eqn. 14, we obtain the exact formula of self-attention_
\[\mathbf{u}(i)=\sum_{j=1}^{N}\mathrm{softmax}\Bigl{(}\mathbf{q}(i)^{\top}\mathbf{k}(j)/ \sqrt{D_{qk}}\Bigr{)}\mathbf{v}(j),\;\;i=1,\ldots,N.\]
**Remark 1**.: _In Eqn. 9, the change in \(\mathbf{u}\) at position \(x\) is proportional to the sum of differences between \(\mathbf{u}(x)\) and \(\mathbf{u}\) at other position in the domain \(\Omega\). In particular, when \(\mathbf{u}(x)\) is smaller or larger than the values at other positions, it will increase or decrease, respectively. This is analogous to a diffusion process in which particles or substances move from high-concentration to low-concentration regions. It has been proved that a diffusion process converges to a saturating state in which the concentrations at all positions are the same. This suggests that \(\mathbf{u}(x)\) tends to suffer from the over-smoothing issue._
### Random Walk Analysis of Over-smoothing
The diffusion process and random walk are closely related concepts, as diffusion can be seen as a collective behavior of numerous random walks performed by individual particles or molecules. Inspired by the analogy between the dynamics of \(\mathbf{u}\) in Eqn 9 and a diffusion process, as well as the relationship between diffusion process and random walk, in this section, we show the connection between the evolution of \(\mathbf{u}\) and a random walk. By adopting a random walk perspective on graph neural network [58], we demonstrate that \(\mathbf{u}(x)\) under the dynamics given in Eqn 9 suffers from over-smoothing.
Recall from the gradient flow in Eqn 9, by using Euler method discretization, after \(k\) update steps starting from the initial \(\mathbf{u}(x,0)=\mathbf{v}(x)\), with adaptive stepsize \(\Delta t=1/\!\int_{\Omega}\!\big{(}k(x,y)+k(y,x)\big{)}dy\), we obtain the following
\[\mathbf{u}(x,k\Delta t(x))=\int_{\Omega}\frac{K(x,y)\mathbf{u}(y,(k-1)\Delta t(x))}{ \int_{\Omega}K(x,y^{\prime})dy^{\prime}}dy. \tag{15}\]
Discretizing \(\mathbf{u}(x,k\Delta t(x))\) and using Monte-Carlo approximation for the integrals in 15, we attain
\[\mathbf{u}^{(k)}(i)=\sum_{j=1}^{N}\mathbf{A}_{ij}\mathbf{u}^{(k-1)}(j) \tag{16}\]
where \(\mathbf{A}_{ij}\) is computed using the keys and queries as either \(\mathrm{softmax}\!\left(\mathbf{k}(i)^{\top}\mathbf{k}(j)/\sqrt{D_{qk}}\right)\) or \(\mathrm{softmax}\!\left(\mathbf{q}(i)^{\top}\mathbf{k}(j)/\sqrt{D_{qk}}\right)\). Let \(\{\mathbf{B}^{(k)}(i)\}_{k\in K}\) be a random walk on \(\{\mathbf{v}(i)\}_{i=1}^{N}\) as defined:
\[\mathbf{B}^{(0)}(i)=\mathbf{v}(i) \tag{17}\] \[\mathbb{P}(\mathbf{B}^{(k+1)}(l)=\mathbf{v}(j)|\mathbf{B}^{(k)}(l)= \mathbf{v}(i))=\mathbf{A}_{ij}\]
where \(\mathbf{B}^{(k)}(n)\) is the random value of a \(k\)-step walk, starts at node \(n\), and \(\mathbf{v}(n)\) is the initial value at node \(n\), respectively, for \(n=1,2,\ldots,N\). The transition probability \(\mathbf{A}\) is defined as above. To investigate the connection between the update process of \(\mathbf{u}\) and the random walk defined in 17, we show that, for \(i=1,2,\ldots,N\), after \(k\) update steps as in 16, with initial value \(\mathbf{u}^{(0)}(i)=\mathbf{v}(i)\), \(\mathbf{u}(i)^{(k)}\) equals to the expected value of the \(k\)-step walk, starting at node \(i\):
**Lemma 1**.: _Let \(\mathbf{u}^{(k)}(i)\) defined in 16 and \(\{\mathbf{B}^{(k)}(i)\}_{k\in K}\) is the random walk defined by 17. Then_
\[\mathbf{u}^{(k)}(i)=\mathbb{E}[\mathbf{B}^{(k)}(i)]. \tag{18}\]
We next present the Lemma 2 which is necessary to show the convergence of \(\mathbf{u}^{(k)}(i)\).
**Lemma 2**.: _The random walk \(\mathbf{B}^{(k)}(i)\) in 17 with the transition matrix \(\mathbf{A}\) either be \(\mathbf{A}_{ij}=\mathrm{softmax}\!\left(\mathbf{k}(i)^{\top}\mathbf{k}(j)/\sqrt{D_{qk }}\right)\) or \(\mathbf{A}_{ij}=\mathrm{softmax}\!\left(\mathbf{q}(i)^{\top}\mathbf{k}(j)/\sqrt{D_{qk }}\right)\), has a unique stationary distribution \(\mathbf{\pi}=[\pi_{1},\pi_{2},\ldots,\pi_{N}]\) such that \(\pi_{i}:=P(\mathbf{B}^{(k)}(j)=\mathbf{v}(i))\), for \(i,j=1,2,\ldots,N\), \(\sum_{i=1}^{N}\pi_{i}=1\), and \(\mathbf{\pi}^{T}=\mathbf{\pi}^{T}\mathbf{A}\)._
_If \(\mathbf{A}_{ij}=\mathrm{softmax}\!\left(\mathbf{k}(i)^{\top}\mathbf{k}(j)/\sqrt{D_{qk }}\right)\), the stationary distribution is:_
\[\mathbf{\pi}=\Bigg{(}\frac{d_{1}}{\sum_{j=1}^{N}d_{j}},\frac{d_{2}}{\sum_{j=1}^{N }d_{j}},\ldots,\frac{d_{n}}{\sum_{j=1}^{N}d_{j}}\Bigg{)}, \tag{19}\]
_where \(d_{i}=\sum_{j=1}^{N}\exp\!\left(\mathbf{k}(i)^{\top}\mathbf{k}(j)/\sqrt{D_{qk}}\right)\), \(\mathbf{k}(1),\mathbf{k}(2),\ldots,\mathbf{k}(N)\) are the key vectos._
_In general, \(\pi_{i}\) can be found by finding the left eigenvector of \(\mathbf{A}\) corresponding to the dominant eigenvalue 1._
From the Lemma 1 and Lemma 2, we see that, for all \(i=1,2,\ldots,N\),
\[\mathbf{u}^{(k)}(i)=\mathbb{E}[\mathbf{B}^{(k)}(i)]=\sum_{j=1}^{N}\mathbf{v}(j) \mathbb{P}(\mathbf{B}^{(k-1)}(i)=\mathbf{v}(j))\to\sum_{j=1}^{N}\pi_{j}\mathbf{v}(j)=: \bar{\mathbf{v}}. \tag{20}\]
as \(k\to\infty\). This shows that when \(k\) increases, \(\mathbf{u}(i)^{(k)}\) converges to a constant vector, indicating that \(\mathbf{u}(x)\), under the dynamic in 9, suffers from over-smoothing.
## 3 NeuTRENO: Mitigating the Over-smoothing in Transformers via Minimizing a Regularized Functional
In Section 2.1, we have shown that self-attention implicitly performs a gradient descent step to minimize the nonlocal functional \(J(\mathbf{u})\) in Eqn. 5, which results in the diffusive characteristics of \(\mathbf{u}\) and causes the over-smoothing phenomenon in transformers, as proved in Section 2.2. Fortunately, our objective is not to minimize \(J(\mathbf{u})\) but the energy/regularized functional \(E(\mathbf{u},\mathbf{f})\) defined by Eqn. 5. This regularized functional consists of not only \(J(\mathbf{u})\) but also the convex fidelity term \(G(\mathbf{u},\mathbf{f})=\frac{\lambda}{2}\int_{\Omega}\|\mathbf{u}(x)-\mathbf{f}(x)\|_{2}^{2}dx\). This fidelity term aims to preserve the relevant information in the observed noisy signal \(\mathbf{f}(x)\) by penalizing solution \(\mathbf{u}(x)\) that deviates significantly from \(\mathbf{f}(x)\), thereby mitigating the effects of over-smoothing caused by minimizing \(J(\mathbf{u})\).
In this section, we will derive our Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO) by minimizing the regularized functional \(E(\mathbf{u},\mathbf{f})\). We then provide a theoretical result to prove that NeuTRENO does not suffer from over-smoothing. Recall from Eqn. 5 that \(E(\mathbf{u},\mathbf{f})\) is given by
\[E(\mathbf{u},\mathbf{f})=J(\mathbf{u})+G(\mathbf{u},\mathbf{f})=J(\mathbf{u})+\frac{ \lambda}{2}\int_{\Omega}\sum_{j=1}^{D}(u_{j}(x)-f_{j}(x))^{2}dx\]
Following a similar derivation as in Section 2.1 (see Appendix C for the detailed derivation), we obtain the following gradient flow when minimizing \(E(\mathbf{u},\mathbf{f})\) using gradient descent
\[\frac{d\mathbf{u}(x,t)}{dt}=-\nabla_{\mathbf{u}}E(\mathbf{u},\mathbf{f})=-\nabla_{ \mathbf{u}}J(\mathbf{u})-\lambda\big{(}\mathbf{u}(x)-\mathbf{f}(x)\big{)}, \tag{21}\]
**NeuTRENO-attention is an Euler Discretization of the Gradient Flow Given in 21.** Following the similar derivation in Section 2.1, we discretize the gradient flow in Eqn. 21 using the Euler method [21] with step size \(\Delta t(x)=1/\!\int_{\Omega}\!\big{(}k(x,y)+k(y,x)\big{)}dy\) and initializing \(\mathbf{u}\) at \(t=0\) with \(\mathbf{v}(x)\), i.e., \(\mathbf{u}(x,0)=\mathbf{v}(x)\). Choosing \(\lambda=\tilde{\lambda}/\Delta t(x)\), we obtain the following update
\[\mathbf{u}(x,\Delta t(x)) =\mathbf{u}(x,0)-\Delta t(x)\nabla_{\mathbf{u}}J-\lambda\Delta t(x)\big{(} \mathbf{u}(x,0)-\mathbf{f}(x)\big{)}\] \[=\int_{\Omega}\frac{K(x,y)\mathbf{v}(y)}{\int_{\Omega}K(x,y^{\prime}) dy^{\prime}}dy+\tilde{\lambda}\big{(}\mathbf{f}(x)-\mathbf{v}(x)\big{)}. \tag{22}\]
We choose the observed noisy signal \(\mathbf{f}(x)=\mathbf{v}^{0}(x)\) where \(\mathbf{v}^{0}(x)\) is \(\mathbf{v}(x)\) at the first layer in the transformer model. The update in Eqn. 22 becomes
\[\mathbf{u}(x,\Delta t(x))=\int_{\Omega}\frac{K(x,y)\mathbf{v}(y)}{\int_{ \Omega}K(x,y^{\prime})dy^{\prime}}dy+\tilde{\lambda}\big{(}\mathbf{v}^{0}(x)-\mathbf{v }(x)\big{)}. \tag{23}\]
Applying the Monte-Carlo method to approximate the integrals in Eqn. 23 and discretizing \(\mathbf{u}(x,\Delta t(x))\), \(\mathbf{v}(x)\), and \(\mathbf{v}^{0}(x)\) on a 1-D grid, we attain the following new formula for calculating symmetric self-attention:
\[\mathbf{u}(i)=\sum_{j=1}^{N}\mathrm{softmax}\Big{(}\mathbf{k}(i)^{\top} \mathbf{k}(j)/\sqrt{D_{qk}}\Big{)}\mathbf{v}(j)+\tilde{\lambda}(\mathbf{v}^{0}(i)-\mathbf{v}(i )),\ \ i=1,\ldots,N. \tag{24}\]
Figure 2: Our proposed NeuTRENO model adds a proportion of the difference between the values of the first and that of the current layer to the self-attention’s output at each layer.
Its corresponding asymmetric self-attention is obtained by replacing the key vectors \(\mathbf{k}(i)\) with the query vectors \(\mathbf{q}(i)\), \(i=1,\ldots,N\), and given by
\[\mathbf{u}(i)=\sum_{j=1}^{N}\mathrm{softmax}\Big{(}\mathbf{q}(i)^{\top}\mathbf{k}(j)/\sqrt{D _{qk}}\Big{)}\mathbf{v}(j)+\tilde{\lambda}(\mathbf{v}^{0}(i)-\mathbf{v}(i)),\ \ i=1,\ldots,N. \tag{25}\]
Leveraging Eqn. 25, we define the Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO) as follows.
**Definition 1** (Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO)).: _Given a set of key and value vectors \(\{\mathbf{k}^{\ell}(j),\mathbf{v}^{\ell}(j)\}_{j=1}^{S}\) in each layer \(\ell\), \(\ell=1,\ldots,L\), for each query vector \(\mathbf{q}^{\ell}(i)\), \(i=1,\ldots,N\), in the same layer, the self-attention unit at layer \(\ell\) in a Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO) computes the corresponding output vector \(\mathbf{u}^{\ell}(i)\) of the query \(\mathbf{q}^{\ell}(i)\) by the following attention formula:_
\[\mathbf{u}^{\ell}(i)=\sum_{j=1}^{N}\mathrm{softmax}\Big{(}\mathbf{q}^{\ell}(i)^{\top} \mathbf{k}^{\ell}(j)/\sqrt{D_{qk}}\Big{)}\mathbf{v}^{\ell}(j)+\tilde{\lambda}(\mathbf{v}^ {0}(i)-\mathbf{v}^{\ell}(i)),\ \ i=1,\ldots,N. \tag{26}\]
_where \(\mathbf{v}^{0}(1),\ldots\mathbf{v}^{0}(N)\in\mathbb{R}^{D}\) are the value vectors in the first layer of NeuTRENO._
Fig. 2 illustrates the architecture of NeuTRENO.
**Proposition 1**.: _The evolution of \(\mathbf{u}(x)\) under the dynamic in 21 does not converge to a constant vector._
Proposition 1 indicates that our NeuTRENO mitigates the over-smoothing issue, suggesting the benefit of our method. The proof for Proposition 1 is given in Appendix B.3.
## 4 Experimental Results
In this section, we empirically demonstrate the advantages of our proposed NeuTRENO approach across various tasks, including ImageNet classification [15], ADE20K image segmentation [73], and language modeling on the WikiText-103 [42]. Our aim to show: (i) NeuTRENO significantly outperforms the transformer baseline with softmax-attention defined in 2 across various tasks; moreover, NeuTRENO surpass FeatScale, a vision transformer that addresses over-smoothing, combining NeuTRENO with FeatScale is beneficial; (ii) the advantages of incorporating our proposed method with pre-trained models. We also demonstrate the benefits of our NeuTRENO in the symmetry setting and we point to Appendix D for the results. Throughout our experiments, we compare the performance of our proposed models with baselines of the same configuration. For additional details regarding datasets, models, and training procedures, please refer to Appendix A.
**Object classification on ImageNet.** To demonstrate the advantage of our NeuTRENO method, we compare it with the DeiT baseline [59] on the ImageNet image classification task. Our NeuTRENO DeiT surpasses the DeiT baseline, as shown in Table 1. Notably, our NeuTRENO DeiT achieves significantly higher performance in terms of both Top-1 Accuracy and Top-5 Accuracy. We also compare our method with FeatScale [65], a vision transformer model addressing over-smoothing (see Table 1). Our NeuTRENO significantly outperforms FeatScale, and combining NeuTRENO with FeatScale leads to substantial improvements. These results confirm the benefits of our model.
**Image Segmentation on ADE20K dataset.** To further validate the advantages of our proposed methods, we compare the performance of the Segmenter models [56] using the NeuTRENO DeiT
\begin{table}
\begin{tabular}{l|c c} \hline \hline Model/Metric & Top-1 Acc (\%) & Top-5 Acc (\%) \\ \hline _Softmax DeiT_ & 72.17 & 91.02 \\ NeuTRENO-DeiT & **73.01** & **91.56** \\ NeuTRENO Adaptation & 72.63 & 91.38 \\ \hline _DeiT + FeatScale_ & 72.346 & 91.22 \\ NeuTRENO DeiT + FeatScale & **73.23** & **91.73** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Top-1 and Top-5 accuracy (%) of NeuTRENO DeiT vs. DeiT on the ImageNet benchmark. We also present the performance of adapting NeuTRENO to the pre-trained DeiT baseline, NeuTRENO Adaptation. In addition, we compare NeuTRENO with FeatScale [65] and incorporate our method with FeatScale model.
**Language Model on WikiText-103.** In addition to computer vision tasks, we also evaluate the effectiveness of our model on a large-scale natural language processing application, specifically language modeling on WikiText-103. Our NeuTRENO language model demonstrates better performance in terms of both test perplexity and valid perplexity when compared to the softmax transformer language model [68]. These findings, combined with the results obtained across various tasks, empirically confirm the significant benefits of our NeuTRENO models.
**Combine with pre-trained models.** Furthermore, our proposed method is also beneficial to combine with pre-trained models. To empirically demonstrate that we incorporate NeuTRENO with pre-trained DeiT and fine-tune on the ImageNet dataset with one-third number of epochs that are used in training. The result is presented in Table 1, showing that combined with our method improves both the Top-1 and Top-5 accuracies of the pre-trained models.
## 5 Empirical Analysis
**Applying Softmax-Attention Reduces the functional \(J(\mathbf{u})\).** We present evidence supporting that the employment of softmax attention minimizes the functional \(J(\mathbf{u})\). Initially, we observe that the average cosine similarity between the numerical approximation of \(\nabla_{\mathbf{u}}J(\mathbf{u})\) using symmetric or asymmetric kernel \(K(x,y)\) for both the trained Sym-DeiT (using symmetric self-attention 14) and DeiT models, closed 1, as shown in Table 4. This suggests that reversing the direction of the asymmetric approximation effectively decreases \(J(\mathbf{u})\). Considering that softmax attention takes steps in this reversed direction numerically, its application leads to a reduction in \(J(\mathbf{u})\). This is further substantiated by Fig. 3, which demonstrates a decrease in \(J(\mathbf{u})\) as the depth of the trained DeiT increases when softmax attention is employed. More details of this analysis are in Appendix E
**Over-smoothing Analysis.** We empirically illustrate the effectiveness of NeuTRENOs in mitigating the over-smoothing problem in transformers. Fig. 1 compares the cosine similarity between token representations across layers for both NeuTRENO and softmax baseline models, specifically focusing on the Imagenet classification task (Left) and ADE20K image segmentation (Right). The token
\begin{table}
\begin{tabular}{l|c c} \hline \hline Model/Metric & SS MIoU & MS MIoU (\%) \\ \hline _Softmax DeiT_ & 35.72 & 36.68 \\ NeuTRENO DeiT & **37.24** & **38.06** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Single-scale (SS) MIoU and multi-scale MIoU (MS) of the NeuTRENO DeiT vs. the DeiT on the ADE20K image segmentation.
Figure 3: The average value of functional \(J(\mathbf{u})\) over 1000 training (Left) samples and test (Right) samples. When softmax attention is applied, the functional decreases as the depth of the trained DeiT increases.
and DeiT backbones the on ADE20K image segmentation task [72], as shown in Table 2. The results demonstrate the substantial performance improvements achieved by utilizing the NeuTRENO DeiT backbone over the DeiT backbone, in terms of both single-scale (SS) MIoU and multi-scale (MS) MIoU metrics. These results strongly emphasize the effectiveness of our NeuTRENO approach in enhancing image segmentation performance.
features extracted by NeuTRENOs exhibit significantly lower similarity, particularly in the final layers. This finding highlights the ability of NeuTRENOs to address the over-smoothing issue and improve the diversity of token representations. We provide more details of this analysis in Appendix E.
## 6 Related Work
**Over-smoothing in Transformers.** Over-smoothing in deep transformers has been observed in various domains and applications from natural language processing [55] to computer vision [65, 18]. In vision tasks, [74] observes that the performance of the vision transformer (ViT [20]) quickly saturates as more layers are added to the model. Moreover, experiments in [74] show that the 32-layer ViT underperforms the 24-layer ViT, indicating the difficulty of ViTs in gaining benefits from deeper architectures. The authors point out that over-smoothing results in this phenomenon by causing the token representations to become identical when the model grows deeper. Based on this observation, the authors propose a cross-head communication method that helps enhance the diversity of both token representations and attention matrices. Furthermore, it has been shown in [60] that the training of ViT models encounters instability with greater depths. [25] proposes that this instability arises from the over-smoothing, where token representation for patches within an image becomes progressively alike as the model's depth increases. To explain this issue, [65] finds out that self-attention acts as a low-pass filter and smoothens the token representations in ViTs. This leads to the proposal of the FeatScale method [65], which regulates feature frequencies, whether low or high, to counteract the consequences of over-smoothing.
In addition, [55] observes the phenomenon in BERT [16], a deep language model, and explores over-smoothing through the graph perspective. The work utilizes hierarchical fusion strategies by preserving the output of self-attention through all layers, which is memory-costly. On the other hand, [65, 18] investigate over-smoothing in the image domain through the lens of Fourier spectrum, showing that self-attentions are low-pass filters, retaining only low-frequency, causing over-smoothed outputs. Our work is an orthogonal explanation of the previous work. We focus on developing a variational denoising framework to understand the self-attention of transformers as a gradient descent approximation of a functional. Our new finding explains the over-smoothing issue of transformers due to self-attention minimizing a functional and inspires us to derive the novel NeuTRENO method to overcome over-smoothing.
**Nonlocal Functionals for Image Processing.** Total variation [51] is well-known as an image-denoising technique. It denoises a noisy image by solving a constraint optimization problem. The method is also related to PDE-flow-based image-denoising techniques [24], namely isotropic and anisotropic diffusion [67] models. The method is edge preserving, meaning to avoid over-blurring edges' information [7]. Nonlocal functionals [35, 24] is considered as an extension of total variation to a nonlocal scale. Nonlocal functional and edge preservation properties are the motivation of our work to explain and overcome over-smoothing in transformers.
## 7 Concluding Remarks
In this paper, we establish a nonlocal variational denoising framework for self-attention. From this variational perspective, we explain over-smoothing in self-attention, which hinders the representation capacity of transformer models. We also derive the novel Neural Transformer with a Regularized Nonlocal Functional (NeuTRENO) to alleviate the over-smoothing. We empirically verify the benefits of NeuTRENO with a wide range of large-scale applications including ImageNet object classification, ADE20K object segmentation, and WikiText-103 language modeling. A limitation of our paper is that the privacy-preserving of NeuTRENO has not been addressed. It is interesting to explore if regularized nonlocal functional can also help improve the privacy-preserving of transformer models. We leave this exciting research idea as future work.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Model & Training data & Test data \\ \hline Sym-DeiT & 0.982 & 0.976 \\ Softmax DeiT & 0.973 & 0.964 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The average cosine similarity between the numerical approximation of \(\nabla J(\mathbf{u})(x)\) using symmetric or asymmetric kernel \(K(x,y)\), for the trained Sym-DeiT and softmax DeiT models. The metric is evaluated on 1000 training and 1000 test data samples. The average score close to 1 shows a strong alignment between symmetric and asymmetric gradient approximations, suggesting that reversing the direction of the asymmetric approximation effectively reduces the functional \(J(\mathbf{u})\).
### Acknowledgments and Disclosure of Funding
RGB acknowledges support from the NSF grants CCF-1911094, IIS-1838177, and IIS-1730574;ONR grants N00014-18-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047. TMN acknowledges support from his start-up grant at the National University of Singapore (Grant Number: A-0009807-00-00).
## References
* Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)_, pages 385-393, Montreal, Canada, 7-8 June 2012. Association for Computational Linguistics.
* [2] Rami Al-Rfou, D. Choe, Noah Constant, Mandy Guo, and Llion Jones (2019) Character-level language modeling with deeper self-attention. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 3159-3166. Cited by: SS1.
* [3]A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lucic, and C. Schmid (2021) Vivit: a video vision transformer. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 6816-6826. Cited by: SS1.
* [4]A. Baevski and M. Auli (2019) Adaptive input representations for neural language modeling. In International Conference on Learning Representations, Cited by: SS1.
* [5]A. Baevski and M. Auli (2019) Adaptive input representations for neural language modeling. In International Conference on Learning Representations, Cited by: SS1.
* [6]T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. (2020) Language models are few-shot learners. arXiv preprint arXiv:2005.14165. Cited by: SS1.
* [7]A. Buades, B. Coll, and J. M. Morel (2005) A review of image denoising algorithms with a new one. Multiscale Modeling & Simulation4 (2), pp. 490-530. Cited by: SS1.
* [8]K. Chang, K. Pearson, and T. Zhang (2008) Perron-frobenius theorem for nonnegative tensors. Communications in Mathematical Sciences6 (2), pp. 507-520. Cited by: SS1.
* [9]L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch (2021) Decision transformer: reinforcement learning via sequence modeling. Advances in neural information processing systems34, pp. 15084-15097. Cited by: SS1.
* [10]R. Child, S. Gray, A. Radford, and I. Sutskever (2019) Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509. Cited by: SS1.
* [11]K. Clark, U. Khandelwal, O. Levy, and C. D. Manning (2019-06) What does BERT look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Florence, Italy, August 2019, pp. 276-286. Cited by: SS1.
* [12]Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. Le, and R. Salakhutdinov (2019-06) Transformer-XL: attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019, pp. 2978-2988.
* [14] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. _arXiv preprint arXiv:1807.03819_, 2018.
* [15] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_, pages 248-255. Ieee, 2009.
* [16] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [17] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
* [18] Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. In _International Conference on Machine Learning_, pages 2793-2803. PMLR, 2021.
* [19] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _International Conference on Learning Representations_, 2021.
* [20] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021.
* [21] Leonhard Euler. _Institutiones calculi integrals_, volume 1. impensis Academiae imperialis scientiarum, 1792.
* [22] Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 6894-6910, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
* [23] Guy Gilboa and S. Osher. Nonlocal linear image regularization and supervised segmentation. _Multiscale Model. Simul._, 6:595-630, 2007.
* [24] Guy Gilboa and S. Osher. Nonlocal operators with applications to image processing. _Multiscale Model. Simul._, 7:1005-1028, 2008.
* [25] Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, and Qiang Liu. Vision transformers with patch diversification. _arXiv preprint arXiv:2104.12753_, 2021.
* [26] Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. Conformer: Convolution-augmented transformer for speech recognition. _arXiv preprint arXiv:2005.08100_, 2020.
* [27] Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R Martin, and Shi-Min Hu. Pct: Point cloud transformer. _Computational Visual Media_, 7(2):187-199, 2021.
* [28] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 8340-8349, 2021.
* [29] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. _arXiv preprint arXiv:1903.12261_, 2019.
* [30] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 15262-15271, 2021.
* [31] John Hewitt and Percy Liang. Designing and interpreting probes with control tasks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 2733-2743, Hong Kong, China, November 2019. Association for Computational Linguistics.
* [32] Michael Janner, Qiyang Li, and Sergey Levine. Offline reinforcement learning as one big sequence modeling problem. _Advances in neural information processing systems_, 34:1273-1286, 2021.
* [33] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Zidek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. _Nature_, 596(7873):583-589, 2021.
* [34] Patrick Kahardipraja, Brielen Madureira, and David Schlangen. Towards incremental transformers: An empirical analysis of transformer models for incremental NLU. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, pages 1178-1189, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
* [35] Stefan Kindermann, S. Osher, and Peter W. Jones. Deblurring and denoising of images by nonlocal functionals. _Multiscale Model. Simul._, 4:1091-1115, 2005.
* [36] Dimitrios Kotzias, Misha Denil, Nando de Freitas, and Padhraic Smyth. From group to individual labels using deep features. _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, 2015.
* [37] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [38] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. _CoRR_, abs/1703.03130, 2017.
* [39] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_, 2019.
* [40] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 10012-10022, 2021.
* [41] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2022.
* [42] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net, 2017.
* [43] Ankur Parikh, Oscar Tackstrom, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_, pages 2249-2255, Austin, Texas, November 2016. Association for Computational Linguistics.
* [44] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_, pages 8748-8763. PMLR, 2021.
* [45] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. _OpenAI report_, 2018.
* [46] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. _OpenAI blog_, 1(8):9, 2019.
* [47] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_, 21(140):1-67, 2020.
* [48] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_, pages 2383-2392, Austin, Texas, November 2016. Association for Computational Linguistics.
* [49] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In _International Conference on Machine Learning_, pages 8821-8831. PMLR, 2021.
* [50] Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. _Proceedings of the National Academy of Sciences_, 118(15), 2021.
* [51] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. _Physica D: Nonlinear Phenomena_, 60(1):259-268, 1992.
* [52] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. _International Journal of Computer Vision_, 115(3):211-252, 2015.
* [53] Imanol Schlag, Kazuki Irie, and Jurgen Schmidhuber. Linear transformers are secretly fast weight programmers. In _International Conference on Machine Learning_, pages 9355-9366. PMLR, 2021.
* [54] Matthias Seeger. Gaussian processes for machine learning. _International journal of neural systems_, 14(02):69-106, 2004.
* [55] Han Shi, JIAHUI GAO, Hang Xu, Xiaodan Liang, Zhenguo Li, Lingpeng Kong, Stephen M. S. Lee, and James Kwok. Revisiting over-smoothing in BERT from the perspective of graph. In _International Conference on Learning Representations_, 2022.
* [56] Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In _Proceedings of the IEEE/CVF international conference on computer vision_, pages 7262-7272, 2021.
* [57] Ian Tenney, Dipanjan Das, and Ellie Pavlick. BERT rediscovers the classical NLP pipeline. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 4593-4601, Florence, Italy, July 2019. Association for Computational Linguistics.
* [58] Matthew Thorpe, Tan Minh Nguyen, Hedi Xia, Thomas Strohmer, A. Bertozzi, Stanley J. Osher, and Bao Wang. Grand++: Graph neural diffusion with a source term. In _International Conference on Learning Representations_, 2022.
* [59] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herve Jegou. Training data-efficient image transformers distillation through attention. In Marina Meila and Tong Zhang, editors, _Proceedings of the 38th International Conference on Machine Learning_, volume 139 of _Proceedings of Machine Learning Research_, pages 10347-10357. PMLR, 18-24 Jul 2021.
* [60] Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Herve Jegou. Going deeper with image transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 32-42, October 2021.
* [61] Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov. Transformer dissection: An unified understanding for transformer's attention via the lens of kernel. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 4344-4353, Hong Kong, China, November 2019. Association for Computational Linguistics.
* [62] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in neural information processing systems_, pages 5998-6008, 2017.
* [63] Jesse Vig and Yonatan Belinkov. Analyzing the structure of attention in a transformer language model. In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_, pages 63-76, Florence, Italy, August 2019. Association for Computational Linguistics.
* [64] Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 5797-5808, Florence, Italy, July 2019. Association for Computational Linguistics.
* [65] Peihao Wang, Wenqing Zheng, Tianlong Chen, and Zhangyang Wang. Anti-oversmoothing in deep vision transformers via the fourier domain analysis: From theory to practice. In _International Conference on Learning Representations_, 2022.
* [66] Zifeng Wang and Jimeng Sun. TransTab: Learning Transferable Tabular Transformers Across Tables. In _Advances in Neural Information Processing Systems (NeurIPS 2022)_, 2022.
* [67] Joachim Weickert, Wissenschaftlicher Werdegang, Steven Zucker, Allan Dobbins, Lee Iverson, Benjamin Kimia, and Allen Tannenbaum. Anisotropic diffusion in image processing. 01 1996.
* [68] Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. Nystromformer: A Nystrom-based Algorithm for Approximating Self-Attention. 2021.
* [69] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. _arXiv preprint arXiv:1906.08237_, 2019.
* [70] Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. Deep learning based recommender system: A survey and new perspectives. _ACM Computing Surveys (CSUR)_, 52(1):1-38, 2019.
* [71] Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 16259-16268, 2021.
* [72] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 633-641, 2017.
* [73] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset, 2018.
* [74] Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, and Jiashi Feng. Deepvit: Towards deeper vision transformer, 2021.
## Supplement to "Mitigating Over-smoothing in Transformers via Regularized Nonlocal Functionals"
**Table of Contents**
* A Additional Details on the Experiments in Section 4
* A.1 Image Classification on Imagenet
* A.2 Image Segmentation on ADK20 dataset
* A.3 Language Modeling on WikiText-103
* B Technical Proofs
* B.1 Proof of Lemma 1
* B.2 Proof of Lemma 2
* B.3 Proof of Proposition 1
* C Derivation of Gradient of E as Given in Eqn. 21
* D Results of Symmetric Setting
* E Additional Details on the Empirical Analysis in Section 5
* E.1 Average Cosine Similarity between Gradient Approximations
* E.2 Average Value of Function
* E.3 Over-smoothing Analysis
* F Additional Experimental Results
* F.1 Object classification on Imagenet with DeiT-small baseline
* F.2 Beyond Softmax-Attention
* G Additional Empirical Analysis Results
* G.1 Visualizing Attention Matrices
* G.2 Head Redundancy between Layers
* G.3 NeuTRENO Inherently Mitigates Over-smoothing, even without Training the Models
* G.4 Efficiency Analysis
* G.5 Stability and Significance of NeuTRENO
* G.6 Robustness of NeuTRENO
* G.7 NeuTRENO in Incremental Learning
* G.8 Ablation study on the choice of \(\tilde{\lambda}\)
* G.9 Scalability of NeuTRENO
## Appendix A Additional Details on the Experiments in Section 4
This section provides datasets, models, and training details for experiments in Section 4. The code to reproduce our experimental results is included in our Supplementary Material submission.
### Image Classification on Imagenet
**Datasets and Metrics.** The ImageNet dataset [15, 52] comprises \(1.28\) million training images and \(50,000\) validation images, encompassing the classification of 1000 categories. The evaluation metrics used for performance assessment are the top-1 and top-5 accuracies.
**Models and Baselines.** Our baseline model is the DeiT-tiny model [59], which consists of 12 transformer layers, 3 attention heads per layer, and a model dimension of 192. For model setting and setting and configuration, we follow [59]. Their implementation is available at [https://github.com/facebookresearch/deit](https://github.com/facebookresearch/deit). The \(\tilde{\lambda}\) used for our NeuTRENO method is \(0.6\).
### Image Segmentation on ADK20 dataset
**Datasets and Metrics.** The ADE20K dataset is recognized for its inclusion of challenging scenes with fine-grained labels, making it one of the most demanding semantic segmentation datasets. The training set consists of 20,210 images encompassing 150 semantic classes. Additionally, there are 2,000 images in the validation set and 3,352 images in the test set. This in task the Single-scale mean Intersection over Union (SS mIoU) and the Multi-scale (MS mIoU).
**Models and baselines.** The training configuration and setting for our models are followed by [56]. The baseline model is finetuned with the pretrained DeiT-tiny backbone while our segmenter model used the pretrained NeuTRENO DeiT-tiny, with \(\tilde{\lambda}=0.6\).
### Language Modeling on WikiText-103
**Datasets and Metrics.** The WikiText-103 dataset consists of articles extracted from Wikipedia and is specifically designed to capture long contextual dependencies. The training set comprises approximately \(28,000\) articles, totaling \(103\) million running words. Each article contains text blocks consisting of approximately \(3,600\) words. The validation and test sets contain \(218,000\) and \(246,000\) running words, respectively, with each set consisting of \(60\) articles and approximately \(268,000\) words. Our experiment follows the standard setting [42, 53], which involves dividing the training data into independent long segments of \(L\) words. For evaluation, we employ a batch size of 1 and process the text sequence using a sliding window of size \(L\). When computing perplexity (PPL), we consider only the last position, except for the first segment where all positions are evaluated, following the approach in [2, 53].
**Models and baselines.** For our language modeling implementation, we rely on the publicly available code [https://github.com/IDSIA/lmtool-fwp](https://github.com/IDSIA/lmtool-fwp) developed by [53]. In our experiments, we set the dimensions of keys, values, and queries to 128, while the training and evaluation context length is set to 256. In this experiment, \(\tilde{\lambda}=0.4\) yields the best performance of NeuTRENO language model.
## Appendix B Technical Proofs
### Proof of Lemma 1
For all \(i=1,\dots,N\), we have \(\mathbb{E}[\mathbf{B}^{(0)}(i)]=\mathbf{v}(i)\). Assume that \(\mathbb{E}[\mathbf{B}^{(k)}(i)]=\mathbf{u}^{(k)}(i)\), then
\[\mathbb{E}[\mathbf{B}^{(k+1)}(i)] =\sum_{j=1}^{N}\mathbf{v}(j)\mathbb{P}(\mathbf{B}^{(k+1)}(i)=\mathbf{v}(j ))\] \[=\sum_{j=1}^{N}\mathbf{v}(j)\sum_{l=1}^{N}\mathbb{P}(\mathbf{B}^{(k+1) }(i)=\mathbf{v}(j)|\mathbf{B}^{(1)}(i)=\mathbf{v}(l))\mathbb{P}(\mathbf{B}^{(1)}(i)= \mathbf{v}(l))\] \[=\sum_{j=1}^{N}\mathbf{v}_{j}\sum_{l=1}^{N}\mathbb{P}(\mathbf{B}^{(k) }(l)=\mathbf{v}(j))\mathbb{P}(\mathbf{B}^{(1)}(i)=\mathbf{v}(l)|\mathbf{B}^{(0)}(i)= \mathbf{v}(i))\] \[=\sum_{j=1}^{N}\mathbf{v}(j)\sum_{l=1}^{N}\mathbf{A}_{il}\mathbb{P}( \mathbf{B}^{(k)}(l)=\mathbf{v}(j))\] \[=\sum_{l=1}^{N}\mathbf{A}_{il}\mathbb{E}[\mathbf{B}^{(k)}(l)]= \sum_{l=1}^{N}\mathbf{A}_{il}\mathbf{u}^{(k)}(l)\] \[=\mathbf{u}^{(k+1)}(i).\]Thus, by induction, we obtain the conclusion of the lemma.
### Proof of Lemma 2
Since the transition matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is right-stochastic, its largest eigenvalue is 1 (see Theorem 4.1 in [5]). Also, \(\mathbf{A}\) is a regular positive matrix since its elements are positive. Thus, the Perron-Frebenius theorem [8] implies the existence of a unique probability distribution \(\boldsymbol{\pi}\), which is a positive left eigenvector of the transition matrix \(\mathbf{A}\) associated with its largest eigenvalue 1. In particular, in the case of symmetricity constraint, \(\boldsymbol{\pi}\) can be chosen as follows
\[\boldsymbol{\pi}=\Bigg{(}\frac{d_{1}}{\sum_{j=1}^{N}d_{j}},\frac{d_{2}}{\sum_{ j=1}^{N}d_{j}},\ldots,\frac{d_{n}}{\sum_{j=1}^{N}d_{j}}\Bigg{)},\]
where \(d_{i}=\sum_{j=1}^{N}\exp\Bigl{(}\boldsymbol{k}(i)^{\top}\boldsymbol{k}(j)/ \sqrt{D_{qk}}\Bigr{)}\). It is easy to see that
\[\sum_{i=1}^{N}\pi_{i}\mathbf{A}_{ij} =\sum_{i=1}^{N}\frac{d_{i}}{\sum_{l=1}^{N}d_{l}}\frac{\exp\Bigl{(} \boldsymbol{k}(i)^{\top}\boldsymbol{k}(j)/\sqrt{D_{qk}}\Bigr{)}}{d_{i}}\] \[=\frac{\sum_{i=1}^{N}\Bigg{(}\exp\Bigl{(}\boldsymbol{k}(i)^{\top} \boldsymbol{k}(j)/\sqrt{D_{qk}}\Bigr{)}\Bigg{)}}{\sum_{l=1}^{N}d_{l}}\] \[=\frac{d_{j}}{\sum_{l=1}^{N}d_{l}}=\pi_{j}.\]
As a consequence, \(\boldsymbol{\pi}\) must be the unique stationary distribution of the random walk \(\{\mathbf{B}^{(k)}(i)\}_{k\in K}\). This concludes the proof.
### Proof of Proposition 1
Recall from the gradient flow in Eqn 21, by using the method of Euler discretization, after \(k\) update steps starting from the initial \(\boldsymbol{u}(x,0)=\boldsymbol{v}(x)\) with adaptive stepsize \(\Delta t=1/\!\int_{\Omega}\bigl{(}k(x,y)+k(y,x)\bigr{)}dy\) and by choosing \(\lambda=\tilde{\lambda}/\Delta t(x)\), we obtain the following
\[\boldsymbol{u}(x,k\Delta t(x)) =\boldsymbol{u}(x,(k-1)\Delta t(x))-\Delta t(x)\nabla_{ \boldsymbol{u}}J-\lambda\Delta t(x)\bigl{(}\boldsymbol{u}(x,(k-1)\Delta t(x)) -\boldsymbol{f}(x)\bigr{)}\] \[=\int_{\Omega}\frac{K(x,y)\boldsymbol{u}(y,(k-1)\Delta t(x))}{ \int_{\Omega}K(x,y^{\prime})dy^{\prime}}dy+\tilde{\lambda}\bigl{(}\boldsymbol {f}(x)-\boldsymbol{u}(x,(k-1)\Delta t(x))\bigr{)}. \tag{27}\]
Discretizing \(\boldsymbol{u}(x,k\Delta t(x))\) and using Monte-Carlo approximation for the integrals in 27, we obtain
\[\boldsymbol{u}^{(k)}(i)=\sum_{j=1}^{N}\mathbf{A}_{ij}\boldsymbol{u}^{(k-1)}(j) +\tilde{\lambda}\bigl{(}\boldsymbol{f}(i)-\boldsymbol{u}^{(k-1)}(i)\bigr{)}, \tag{28}\]
where \(\mathbf{A}_{ij}\) is computed using the keys and queries as either \(\operatorname{softmax}\Bigl{(}\boldsymbol{k}(i)^{\top}\boldsymbol{k}(j)/\sqrt {D_{qk}}\Bigr{)}\) or \(\operatorname{softmax}\Bigl{(}\boldsymbol{q}(i)^{\top}\boldsymbol{k}(j)/\sqrt {D_{qk}}\Bigr{)}\).
Suppose that \(\boldsymbol{u}^{(k)}(i)\), defined as Eqn. 28, converges to a constant vector \(\bar{\boldsymbol{u}}\) as \(k\to\infty\). We have \[\begin{split}&\mathbf{u}^{(k+1)}(i)-\mathbf{u}^{(k+1)}(j)\\ &=\sum_{l=1}^{N}\mathbf{A}_{il}\mathbf{u}^{(k)}(l)-\sum_{l=1}^{N} \mathbf{A}_{jl}\mathbf{u}^{(k)}(l)+\tilde{\lambda}(\mathbf{u}^{(k)}(j)-\mathbf{u}^{(k)}(i)) +\tilde{\lambda}(\mathbf{f}(i)-\mathbf{f}(j))\\ &=(\sum_{l=1}^{N}\mathbf{A}_{il}\mathbf{u}^{(k)}(l)-\mathbf{u}^{(k)}(i) \sum_{l=1}^{N}\mathbf{A}_{il})-(\sum_{l=1}^{N}\mathbf{A}_{jl}\mathbf{u}^{(k)}(l)- \mathbf{u}^{(k)}(j)\sum_{l=1}^{N}\mathbf{A}_{jl})\\ &\quad+(\tilde{\lambda}-1)(\mathbf{u}^{(k)}(j)-\mathbf{u}^{(k)}(i))+ \tilde{\lambda}(\mathbf{f}(i)-\mathbf{f}(j))\\ &=\sum_{l=1}^{N}\mathbf{A}_{il}(\mathbf{u}^{(k)}(l)-\mathbf{u}^{(k)}(i)) -\sum_{l=1}^{N}\mathbf{A}_{jl}(\mathbf{u}^{(k)}(l)-\mathbf{u}^{(k)}(j))+(\tilde{ \lambda}-1)(\mathbf{u}^{(k)}(j)-\mathbf{u}^{(k)}(i))\\ &\quad+\tilde{\lambda}(\mathbf{f}(i)-\mathbf{f}(j))\end{split} \tag{29}\]
Since \(\mathbf{u}^{(k)}(i)\rightarrow\bar{\mathbf{u}}\), for \(i=1,2,\ldots,N\), as \(k\rightarrow\infty\), we have as \(k\rightarrow\infty\). This is a contradiction since while the LHS of 29 approaches \(\mathbf{0}\), its RHS approaches \(\tilde{\lambda}(\mathbf{f}(i)-\mathbf{f}(j))\), which is not \(\mathbf{0}\) in general. Thus, we obtain the conclusion of Proposition 1.
## Appendix C Derivation of Gradient of E as Given in Eqn. 21
Taking the gradient of \(E(\mathbf{u},\mathbf{f})\) with respect to \(\mathbf{u}\), we obtain
\[\nabla_{\mathbf{u}}E=\nabla_{\mathbf{u}}J+\left[\frac{\partial G}{\partial u_{1}},\frac {\partial G}{\partial u_{2}},\ldots,\frac{\partial G}{\partial u_{D}}\right]^{T}. \tag{30}\]
The partial derivative \(\partial G/\partial u_{j}\), \(j=1,2,\ldots,D\), is defined through its dot product with an arbitrary function \(h_{j}\in L^{2}(\Omega)\) as follows
\[\begin{split}\frac{\partial G}{\partial u_{j}}\cdot h_{j}(x)& =\frac{d}{d\tau}G(u_{j}+\tau h_{j})\big{|}_{\tau=0}\\ &=\frac{\lambda}{2}\left(\frac{d}{d\tau}\int_{\Omega}(u_{j}(x)-f_ {j}(x)+\tau h_{j}(x))^{2}dx\right)\bigg{|}_{\tau=0}\\ &=\lambda\int_{\Omega}(u_{j}(x)-f_{j}(x))h_{j}(x)dx.\end{split}\]
Thus, the Frechet derivative of F with respect to \(u_{j}\) is given by
\[\frac{\partial G}{\partial u_{j}}=\lambda(u_{j}(x)-f_{j}(x)) \tag{31}\]
Substituting the formula for \(\partial G/\partial u_{j}\) in Eqn. 31 into Eqn. 30 for \(\nabla_{\mathbf{u}}E(\mathbf{u},\mathbf{f})\), we obtain the following gradient flow
\[\frac{d\mathbf{u}(x,t)}{dt}=-\nabla_{\mathbf{v}}E(\mathbf{u},\mathbf{f})=-\nabla_{\mathbf{u}}J(\bm {u})(x)+\lambda\big{(}\mathbf{f}(x)-\mathbf{u}(x)\big{)}, \tag{32}\]
where \(t\) is a dummy time variable and \(-\nabla_{\mathbf{u}}J(\mathbf{u})\) is defined as in 9.
## Appendix D Results of Symmetric Setting
In this section, we show that NeuTRENO significantly improves the performance of a symmetric transformer baseline, which utilizes symmetric self-attention. We refer to the DeiT with symmetric attention, defined in 14, as Sym-DeiT and the Sym-DeiT combined with our NeuTRENO method as Sym-NeuTRENO DeiT.
**Object classification on Imagenet** To further illustrate the advantage of our NeuTRENO method, we compare Sym-NeuTRENO DeiT with the Sym-DeiT baseline on the ImageNet image classification task. Our Sym-NeuTRENO DeiT outperforms the Sym-DeiT baseline, as shown in Table 5. Notably, the Sym-NeuTRENO DeiT achieves higher performance in terms of both top-1 accuracy and top-5 accuracy than Sym-DeiT baseline. These results further confirm the benefits of our proposed NeuTRENO model.
**Image Segmentation on ADE20K dataset** We also compare the performance of the Segmenter models [56] using the Sym-NeuTRENO DeiT backbone with models using the Sym-DeiT backbone on ADE20K image segmentation [72], as shown in Table 6. The results demonstrate the substantial performance improvements achieved by utilizing the Sym-NeuTRENO DeiT backbone compared to the Sym-DeiT backbone in terms of both single-scale (SS) MIoU and multi-scale (MS) MIoU metrics. This result further validates the advantages of our NeuTRENO models in enhancing image segmentation performance in the symmetric setting.
## Appendix E Additional Details on the Empirical Analysis in Section 5
In this section, we provide the details for the empirical analysis in Section 5.
### Average Cosine Similarity between Gradient Approximations
To produce the results in Table 4, we derive the approximation for the gradient \(\nabla_{\mathbf{u}}J(\mathbf{u})\), from Eqn 9, at time \(t=0\):
\[\nabla_{\mathbf{u}}J(\mathbf{u})=\int_{\Omega}\bigl{(}\mathbf{u}(x,0)-\mathbf{u}(y,0)\bigr{)}K (x,y)dy=\int_{\Omega}\bigl{(}\mathbf{v}(x)-\mathbf{v}(y)\bigr{)}K(x,y)dy,\]
where \(K(x,y):=k(x,y)+k(y,x)\). Using Monte-Carlo approximation for the integral and choosing \(K(x,y)=\exp\bigl{(}\mathbf{k}(x)^{T}\mathbf{k}(y)/\sqrt{D_{qk}}\bigr{)}\), the symmetric approximation of the gradient is derived as \(\sum_{j=1}^{N}\bigl{(}\mathbf{v}(i)-\mathbf{v}(j)\bigr{)}\exp\bigl{(}\mathbf{k}(i)^{T}\mathbf{ k}(j)/\sqrt{D_{qk}}\bigr{)}\). Otherwise, by choosing \(K(x,y)=\exp\bigl{(}\mathbf{q}(x)^{T}\mathbf{k}(y)/\sqrt{D_{qk}}\bigr{)}\), the assymmetric approximation of the gradient is derived as \(\sum_{j=1}^{N}\bigl{(}\mathbf{v}(i)-\mathbf{v}(j)\bigr{)}\exp\bigl{(}\mathbf{q}(i)^{T}\mathbf{ k}(j)/\sqrt{D_{qk}}\bigr{)}\). In this analysis, we take the dot product between the symmetric and asymmetric approximation of the gradient \(\nabla_{\mathbf{u}}J(\mathbf{u})\) and average these dot products over positions. We finally report the average cosine similarity over 1000 training data and 1000 test data, as shown in Table 4.
### Average Value of Function
In order to report the average value of function \(J(\mathbf{u})\) in Fig. 3, we follow the process of computing \(J(\mathbf{u})\) for 1000 data points for each transformer block. Subsequently, the average value is reported for each layer. This procedure is carried out for both the training and test datasets.
### Over-smoothing Analysis
The average cosine similarity between all pairs of token's representations \((\mathbf{x}_{i},\mathbf{x}_{j})\) in a sequence is computed as
\[\frac{1}{N(N-1)}\sum_{i\neq j}\frac{\mathbf{x}_{i}^{T}\mathbf{x}_{j}}{\|\mathbf{x}_{i}\|_ {2}\|\mathbf{x}_{j}\|_{2}}.\]
The result is then averaged over 1000 randomly chosen test data in ImageNet and ADE20K. The result is then reported for each layer, as in Fig. 1.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Model/Metric & _Top-1 Acc (\%)_ & Top-5 Acc (\%) \\ \hline _Sym-DeiT_ & 71.14 & 90.54 \\ Sym-NeuTRENO DeiT & **72.07** & **91.22** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Single-scale (SS) MIoU and multi-scale (MS) MIoU of the Sym-NeuTRENO DeiT vs. Sym-DeiT. The Sym-NeuTRENO DeiT model is beneficial since they significantly outperform the Sym-DeiT.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Model/Metric & _Top-1 Acc (\%)_ & Top-5 Acc (\%) \\ \hline _Sym-DeiT_ & 71.14 & 90.54 \\ Sym-NeuTRENO DeiT & **72.07** & **91.22** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Top-1 and Top-5 accuracy (%) of Sym-NeuTRENO DeiT vs. Sym-DeiT on the ImageNet classification task. The Sym-NeuTRENO DeiT models significantly outperform the Sym-DeiT in terms of accuracy, indicating the benefit of NeuTRENO method.
## Appendix F Additional Experimental Results
### Object classification on Imagenet with DeiT-small baseline
In this section, we show the advantages of our method when we further scale up the model by doubling the model dimension and the number of heads compared to that of the DeiT-tiny. In particular, the NeuTRENO DeiT-small achieves better results in both Top-1 Accuracy and Top-5 Accuracy, as shown in Table 7. Our method also outperforms DeiT plus FeatScale. Here, we did our best to reproduce the results of DeiT-small plus FeatScale [65]. In Table 7, we include our reproduced results and the results reported in [59] for DeiT-small and [65] for DeiT-small plus FeatScale, respectively.
### Beyond Softmax-Attention
We show that NeuTRENO can be combined with other baseline attention mechanisms other than softmax attention. In particular, our NeuTRENO significantly improves transformer-based models with kernel attention [54, 61], on the CIFAR-10 image classification task [37], as shown in Table 8. This further confirms the benefits of our model. Here, both models share the same configuration regarding training, the model's size, and the model's depth (12 layers).
## Appendix G Additional Empirical Analysis Results
This section provides extra empirical analysis to further demonstrate the benefits of NeuTRENO models in mitigating over-smoothing.
### Visualizing Attention Matrices
Fig. 4 displays the 3-head attention matrices obtained from layer \([1,6,12]\) of both the pre-trained NeuTRENO DeiT-tiny and the DeiT-tiny baseline models, using a random sample from the ImageNet dataset.
### Head Redundancy between Layers
NeuTRENO mitigates head redundancy between layers, particularly in the final transformer layers where over-smoothing is most pronounced. Fig. 5 shows the average cosine similarity of attention matrices between two successive layers, over 1000 randomly sampled data. The trained NeuTRENO DeiT obtains lower cosine similarity than that of the trained DeiT as the model depth increases.
### NeuTRENO Inherently Mitigates Over-smoothing, even without Training the Models
Randomly-initialized NeuTRENO DeiT-tiny significantly reduces the average cosine similarity between token representations of 12-layer randomly-initialized DeiT-tiny model, as shown in Fig. 6, on the Imagenet classification task. This observation highlights the ability of our NeuTRENO models in mitigating over-smoothing.
### Efficiency Analysis
We report the ratios of the floating-point operations per second (FLOPs), the inference memory, and the inference real-time running of NeuTRENO DeiT vs. DeiT per sample on the ImageNet dataset, which are \(1.00005,1.000002,1.00013\), respectively. This indicates that the significant gain in the performance of NeuTRENO does not come with the cost of efficiency.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Model/Metric & _Top-1 Acc (\%)_ & Top-5 Acc (\%) \\ \hline _DeiT-small_ & 79.97 (79.9) & 95.05 (95.0) \\ DeiT-small + FeatScale & 79.96 (80.9) & 95.06 \\ NeuTRENO DeiT-small & **80.68** & **95.30** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Top-1 and Top-5 accuracy (%) of NeuTRENO DeiT-small vs. DeiT-small on the ImageNet benchmark. The NeuTRENO DeiT-small significantly outperform the DeiT-small in terms of accuracy. We also compare NeuTRENO DeiT-small with DeiT plus FeatScale, a vision transformer model that addresses over-smoothing, showing the advantage of NeuTRENO. The accuracies reported in [59] for DeiT-small and [65] for DeiT-small plus FeatScale, respectively, are in parentheses.
\begin{table}
\begin{tabular}{l|c} \hline \hline Model/Metric & _Accuracy_ (\%) \\ \hline _Kernel Transformer_ & 75.89 \\ NeuTRENO & **76.75** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Accuracy of NeuTRENO vs.Kernel Transformer on the CIFAR-10 dataset [37]. The NeuTRENO model significantly outperforms the in terms of accuracy.
### Stability and Significance of NeuTRENO
To further confirm the stability and significance of NeuTRENO's performance, we provide the standard deviations from five runs for both the NeuTRENO and baseline models for each experiment (in the main text) in Tables 9, 10, 11.
Figure 4: Plot of attention matrices attained from layer \([1,6,12]\) of both the pretrained DeiT-tiny baseline (Left) and the NeuTRENO DeiT-tiny (Right) models, for each head, using a random sample from the Imagenet dataset.
Figure 5: The average cosine similarity of attention matrices between two successive layers, over 1000 randomly sampled data, of the trained NeuTRENO DeiT and trained DeiT models on the Imagenet classification task.
Figure 6: The average cosine similarity between token representations of 12-layer randomly-initialized NeuTRENO DeiT and DeiT models, on the Imagenet classification task. Here, 1000 data are randomly sampled for the analysis.
### Robustness of NeuTRENO
In addition to the standard metrics, we evaluate the robustness of our NeuTRENO model compared to the baseline transformer model, particularly under adversarial examples and for out-of-distribution generalization. Table 12 demonstrates that NeuTRENO DeiT-Tiny is consistently more robust than the DeiT-Tiny baseline on the Imagenet-C (common data corruption and perturbations, such as adding noise and blurring the images) [29], Imagenet-A (adversarial examples) [30], and Imagenet-R (out of distribution generalization) [28] datasets, which are widely used to test the model's robustness.
### NeuTRENO in Incremental Learning
In an incremental learning setting [34], our 8-layer NeuTRENO achieves \(1.97\)% higher accuracy on the sentiment classification task [36] than the 8-layer baseline transformer.
### Ablation study on the choice of \(\tilde{\lambda}\)
We also conduct an ablation study on the impact of the hyperparameter. In particular, on the ADE20K image segmentation task, we train NeuTRENO with different values. We summarize our results in Table 13. Our findings reveal that within the range of \([0.2,1]\), NeuTRENO consistently outperforms the softmax baseline. However, when values become small or big (below \(0.2\) or above \(1\), respectively), NeuTRENO's performance declines.
### Scalability of NeuTRENO
To demonstrate the scalability of our proposed model, we conduct additional experiments to show that our NeuTRENO method can effectively mitigate the oversmoothing issue in the BERT-base model. In particular, in Figure 7 (Left), we plot the cosine similarity between token representations across layers of a pre-trained BERT-base model [17] on the SQuAD v1.1 question answering task [48] and observe
\begin{table}
\begin{tabular}{l|c c} \hline \hline Metric/Model & _Pretrained Softmax Deit-Tiny_ & Pretrained NeuTRENO DeiT-Tiny \\ \hline SS MIoU & 35.72 \(\pm\) 0.57 & **37.24 \(\pm\) 0.62** \\ MS MIoU & 36.68 \(\pm\) 0.42 & **38.06 \(\pm\) 0.54** \\ \hline \hline \end{tabular}
\end{table}
Table 11: Means and standard deviations over five runs with different random seeds of models trained on the WikiText-103 language model task.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline Metric/Model & _Baseline_ & \(\tilde{\lambda}=0.1\) & \(\tilde{\lambda}=0.2\) & \(\tilde{\lambda}=0.4\) & \(\tilde{\lambda}=0.5\) & \(\tilde{\lambda}=0.6\) & \(\tilde{\lambda}=0.8\) & \(\tilde{\lambda}=1.0\) & \(\tilde{\lambda}=2.0\) \\ \hline SS MIoU & 35.72 & 34.94 & 35.60 & **37.54** & 37.38 & 37.24 & 36.71 & 36.37 & 26.25 \\ MS MIoU & 36.68 & 35.53 & 36.45 & **38.62** & 38.22 & 38.06 & 37.82 & 37.26 & 27.2 \\ \hline \hline \end{tabular}
\end{table}
Table 13: Ablation study of different values hyperparameter \(\tilde{\lambda}\) of NeuTRENO DeiT-Tiny on the ADE20K Image Segmentation task.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Model/Metric & Top-1 Acc (\%) & Top-5 Acc (\%) \\ \hline _Softmax DeiT-Tiny_ & 72.17 \(\pm\) 0.07 & 91.02 \(\pm\) 0.04 \\ NeuTRENO DeiT-Tiny & 73.01 \(\pm\) 0.09 & 91.56 \(\pm\) 0.05 \\ NeuTRENO Adaptation & 72.63 \(\pm\) 0.07 & 91.38 \(\pm\) 0.03 \\ DeiT-Tiny + FeatScale & 72.346 \(\pm\) 0.06 & 91.22 \(\pm\) 0.04 \\ NeuTRENO DeiT-Tiny + FeatScale & **73.23 \(\pm\) 0.08** & **91.73 \(\pm\) 0.05** \\ \hline \hline \end{tabular}
\end{table}
Table 9: Means and standard deviations over five runs with different random seeds of models trained on the Imagenet Classification task.
\begin{table}
\begin{tabular}{l|c c} \hline \hline Metric/Model & _Softmax Transformer_ & NeuTRENO \\ \hline Valid PPL & 33.15 \(\pm\) 0.07 & **32.60 \(\pm\) 0.08** \\ Test PPL & 34.29 \(\pm\) 0.09 & **33.70 \(\pm\) 0.07** \\ \hline \hline \end{tabular}
\end{table}
Table 12: Evaluation of NeuTRENO DeiT-Tiny vs. Softmax DeiT-Tiny on the ImageNet-C (mean corruption error mCE), Imagenet-A (Accuracy), and Imagenet-R (Accuracy) datasets.
the presence of the oversmoothing issue as the model gets deeper, causing tokens to become identical. We then apply NeuTRENO on the same pre-trained BERT model, and without any fine-tuning, we observe a significant reduction in the cosine similarity between token embeddings in each layer (see Figure 7 (Left)), indicating that NeuTRENO effectively mitigates the oversmoothing problem in BERT. Additionally, our NeuTRENO BERT finetuned on the task yields better accuracy than the finetuned BERT (\(81.39\) exact match score and \(88.62\) F1-score vs. \(80.77\) exact match score and \(88.12\) F1-score). Moreover, we have conducted the same analysis for a randomized BERT-base model and a randomized NeuTRENO BERT-base model and obtained the same encouraging results (see Figure 7 (Middle)). These results further suggest that NeuTRENO helps alleviate the over-smoothing issue in large-scale transformer models.
We also obtain additional results and show that our NeuTRENO SimCSE, after fine-tuned on the STS-12 semantic textual similarity task [1], gains a significant improvement over the baseline SimCSE [22], which is also fine-tuned on the same task (\(77.32\)% vs. \(75.29\)% Spearman's correlation. Here, the higher correlation, the better). This additional result further verifies that decreasing the cosine dissimilarity between tokens within trained transformer-based models leads to improved empirical performance.
Figure 7: The average cosine similarity between token representations of 12-layer trained (Left) and randomly-initialized (Middle) BERT-base and NeuTRENO BERT-base on the SQuAD question answering task. We also plot the same cosine similarity scores for the trained SimCSE and NeuTRENO SimCSE models (Right) on the STS-12 semantic textual similarity task. Here, 1000 and 500 data are randomly sampled for the analysis on the SQuAD and STS-12 datasets, respectively. | ## Review
### Summary
This paper investigates the over-smoothing problem in transformers, demonstrating that self-attention layers minimize a functional that leads to token uniformity. To address this issue, the authors propose a novel regularizer, NeuTRENO, which penalizes the difference between input and output tokens. Their experimental results show that NeuTRENO improves performance compared to standard transformer models across various tasks, including image classification and language modeling. The paper provides a theoretical foundation for the proposed method, linking it to gradient descent steps and offering insights into the dynamics of self-attention in deep networks.
### Strengths
- The issue of over-smoothing in transformers is analyzed in depth, providing a unique perspective on self-attention as a gradient descent step.
- The theoretical proofs are detailed and clear, contributing to a deeper understanding of the over-smoothing problem.
- The proposed regularizer, NeuTRENO, effectively mitigates over-smoothing and shows improved performance on benchmark datasets.
- The paper's flow is commendable, diving directly into key issues without unnecessary background information.
### Weaknesses
- Empirical evaluations lack detail and thorough analysis, particularly regarding the configuration of NeuTRENO adaptation.
- The performance gains in certain tasks are small and not statistically significant, raising questions about the robustness of the results.
- The paper does not sufficiently compare with prior works addressing uniformity in transformer models, missing relevant citations.
- There are concerns about the clarity of mathematical definitions and the potential over-reliance on certain theoretical assumptions.
### Questions
- How stable are the results across multiple runs with different random seeds? Are the gains statistically significant?
- What is the relationship between the proposed methods and existing solutions for the uniformity and alignment issues in BERT?
- Can the authors clarify the role and definitions of k(x, y) and K(x, y) in their theoretical framework?
- What are the implications of the proposed approach for larger transformer architectures and pre-training/fine-tuning paradigms?
### Soundness
**Score:** 3
**Description:** 3 = good; the theoretical foundations are solid, but some assumptions and empirical evaluations need clearer justification.
### Presentation
**Score:** 2
**Description:** 2 = fair; while the paper is structured well, several mathematical terms and symbols lack clear definitions, affecting clarity.
### Contribution
**Score:** 3
**Description:** 3 = good; the paper contributes valuable insights into over-smoothing in transformers but needs stronger empirical evidence.
### Rating
**Score:** 6
**Description:** 6 = Weak Accept; the paper is technically solid with moderate-to-high impact, though it requires some improvements in evaluation and clarity.
### Paper Decision
**Decision:** Accept (poster)
**Reasons:** The paper presents an original approach to addressing the over-smoothing problem in transformers, backed by both theoretical and empirical evidence. While there are some limitations regarding the clarity of the presentation and the depth of empirical evaluations, the contributions are significant enough to warrant acceptance. The rebuttal improved clarity and addressed several concerns, reinforcing its suitability for NeurIPS.
|